\section{Introduction}
These days, most equities exchanges in the U.S.\ are discussing maker-taker fees, under which an exchange pays rebates to makers, i.e., traders who place orders in the order book, and charges fees to takers, i.e., traders who take orders from the order book \cite{Do12,BCJ16}.
The merits of maker-taker fees are that maker orders supply liquidity to the market, improved liquidity may increase the exchange's trading share, maker orders can narrow the bid-ask spread, and the market can be expected to become more efficient.
There have been some previous reports on maker-taker fees \cite{FKK13,CVV19,Ro19};
however, it remains unclear how maker-taker fees affect the total cost of a taking order, including all the charged fees and the market impact.
In this research, we investigated the effect of maker-taker fees on the market with our artificial market model, which is an agent-based simulation model for financial markets \cite{CIP09,CCD12,MIYY14}.
An artificial market is one method for dealing with situations that defy analysis by conventional empirical research methods.
In this case, it is difficult to investigate empirically how the market is affected by fee levels that are not actually in use, but with an artificial market, the effect of such fees on the market can be examined easily.
Each agent is assigned a specific trading (i.e., buying or selling) rule and is then set to trade financial assets as an investor. The market can be observed in order to determine how the agents behave. Then, by modeling and incorporating certain restrictions on the market side (e.g., limitations to ensure market stability and efficiency such as a short selling regulation), it is possible to examine how investors behave, as well as what kinds of effect the restrictions have on the market.
We constructed an artificial market with the maker-taker fee structure and analyzed the effect of maker-taker fees, i.e., the total cost of a taker's trade, including the charged fee, on the market from the viewpoints of market impact and volatility.
Furthermore, we checked the relationship between the total cost of a taker's trade and
market efficiency.
\section{Artificial market model}\label{Artificial market model}
In this research, we constructed an artificial market model on the basis of the artificial market model of Chiarella et al.\ \cite{CIP09} and Yagi et al.\ \cite{YMM19}, because
they built simple agents and a pricing mechanism which could reproduce the statistical characteristics of the kinds of long-term price fluctuations observed in empirical analyses.
Moreover, algorithm agents \cite{MKKMI15} and a position-based market maker \cite{KMHI14} were added to our model.
The normal agents used in the base models \cite{CIP09, YMM19} correspond to both takers and makers in real financial markets, whereas the algorithm agents correspond to takers and the position-based market makers correspond to makers.
Only one risk asset is available for trading. Hereinafter, we refer to risk assets as simply assets, whereas we refer to non-risk assets as cash, because non-risk assets are equivalent to cash in the proposed artificial market.
The numbers of normal agents and algorithm agents are $n$ and $m$ (where $1 \leq m \leq n$), respectively. Each of the normal agents $j=1, \ldots, n$ places an order in sequence. After the final agent, agent $n$, has placed an order, the first agent, agent $1$, places the next order. Each time $\lfloor n/m \rfloor$ normal agents place orders, the algorithm agent places one order. Each of the algorithm agents $k=1, \ldots, m$ also places an order in sequence. There is one position-based market maker, which places a sell order and a buy order before either the normal agents or algorithm agents place orders.
The time $t$ increases by 1 each time a normal agent or an algorithm agent places an order.
Thus, the process moves forward one step even when a trade does not occur and the new order simply rests in the order book. However, time $t$ does not advance when the position-based market maker places its orders.
The price in this model is determined by a continuous double auction (continuous trading session): if the order book contains sell (buy) orders priced lower (higher) than the agent's buy (sell) order price, then the agent's order is immediately matched to the lowest sell order (highest buy order) in the order book. We refer to such an order as a market order.
If there are no such orders in the order book, then the order does not match any other order and remains in the order book. We refer to this order as a limit order.
Orders remaining in the order book are canceled once the time $t_c$ (the order effective period) has elapsed since the order was placed.
The tick size, which is the minimum unit for price, is $\Delta{P}$, and when orders are sell orders, fractional values smaller than $\Delta{P}$ are rounded up. On the other hand, when orders are buy orders, they are rounded down.
Each agent sends one order each time. An agent can possess an unlimited amount of assets as the quantity of cash held by the agent is set to infinity. Agents can also short sell.
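To make the matching and tick-rounding rules above concrete, the following Python sketch processes a single incoming one-share order against the resting orders in the book. The identifiers are hypothetical and the cancellation of orders older than $t_c$ is omitted; this is only an illustration of the mechanism described in this section, not the simulator actually used in this study.
\begin{verbatim}
import math

TICK = 1.0  # tick size (Delta P)

def round_to_tick(price, is_sell):
    # sell order prices are rounded up to the tick, buy order prices down
    ticks = price / TICK
    return TICK * (math.ceil(ticks) if is_sell else math.floor(ticks))

def submit(order_book, side, price, t):
    """order_book = {'buy': [...], 'sell': [...]}; each entry is (price, time placed)."""
    price = round_to_tick(price, is_sell=(side == 'sell'))
    opposite = 'sell' if side == 'buy' else 'buy'
    resting = order_book[opposite]
    if resting:
        best = min(resting) if opposite == 'sell' else max(resting)
        crosses = (side == 'buy' and price >= best[0]) or \
                  (side == 'sell' and price <= best[0])
        if crosses:                      # market order: executes immediately
            resting.remove(best)
            return best[0]               # trade at the resting order's price (assumed)
    order_book[side].append((price, t))  # limit order: rests in the book
    return None
\end{verbatim}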
\subsection{Maker-taker fee structure}\label{Maker-taker fee structure}
One source of an exchange's revenue is the fee it charges traders for processing their trades. Maker-taker fees benefit exchanges in that an exchange can profit from the difference between the taker fee and the market maker rebate as follows:
\begin{equation}
R_{EX}=C_T-R_M.
\label{eq1}
\end{equation}
Let $R_{EX}$, $C_T$, and $R_M$ be the fee that the exchange receives, the taker fee, and the market maker rebate, respectively. In this study, $R_{EX}$ is fixed ($R_{EX} = 0.100 \%$), since $R_{EX}$ has essentially no effect on the relationship between the total cost of a taker's trade, including the taker fee, and market efficiency.
When $R_M$ is positive, it means that a market maker receives the market maker rebate. When $R_M$ is negative, it means that a market maker pays the market maker rebate.
When $C_T$ is positive, it means that a taker pays the taker fee.
When $C_T$ is negative, it means that a taker receives the fee. The values of $R_{EX}$, $R_M$, and $C_T$ are expressed as ratios to the fundamental price $P_f$, which is described later.
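For example, with the taker fee $C_T=0.150\%$ and the market maker rebate $R_M=0.050\%$ (one of the combinations listed later in Table~\ref{table1}), the exchange retains $R_{EX}=C_T-R_M=0.150\%-0.050\%=0.100\%$ of the fundamental price per executed taker order.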
\subsection{Normal agent}\label{Normal agent}
Normal agents, who are assumed to be general investors in the real world, are designed to be as simple as possible while still reproducing the desired stylized facts.
Normal agents have three trading strategies: fundamental strategy, technical strategy, and noise trading. The first two strategies are implemented in our model because many empirical studies have found that the fundamental strategy, the technical strategy, or both are generally used in any market and at any time (e.g., Menkhoff et al.\ \cite{MT07}). We also implement noise trading to model investors' search for a better strategy through trial and error.
\subsubsection{Order process}\label{Order process}
The order price of normal agent $j$ for each transaction is determined as follows.
The rate of change of the price expected by agent $j$ at time $t$ (the expected return) ${r_e}_j^t$ is given by
\begin{equation}
{r_e}^t_j=\frac{1}{{w_1}^t_j+{w_2}^t_j+u_j}\left({w_1}^t_j{r_1}^t_j+{w_2}^t_j{r_2}^t_j+u_j\epsilon_j^t\right),
\label{exp_return}
\end{equation}
where ${w_i}^t_j$ is the weight of the $i$-th term for agent $j$ at time $t$ and is set according to the uniform distribution between $0$ and $w_{i,max}$ at the start of the simulation and then varied using the learning process described later herein.
Furthermore, $u_j$, the weight of the third term in parentheses, is set according to the uniform distribution between 0 and $u_{max}$ at the start of the simulation and is kept constant thereafter.
The first term in parentheses on the right-hand side of Eq.~(\ref{exp_return}) represents the fundamental strategy. The form of the term indicates that an agent expects a positive (negative) return when the market price is lower (higher) than the fundamental price.
The term ${r_1}^t_j$ is the expected return of the fundamental strategy for agent $j$ at time $t$ and is given by ${r_1}^t_j = \log(P_f / P^{t-n})$, where
$P_f$ is the fundamental price, which is constant over time, and $P^t$ is the market price at time $t$. The market price is set to the most recent price at the time if no trading is occurring. The initial market price is set to the fundamental price, i.e., $P^0=P_f$.
The second term represents the technical strategy. The form of this term indicates that an agent expects a positive (negative) return when the historical return is positive (negative).
Here, ${r_2}^t_j$ is the expected return of the technical strategy for agent $j$ at time $t$ and is given by ${r_2}^t_j = \log (P^{t-n} / P^{t-n-\tau_j})$, where $\tau_j$ is set according to the uniform distribution between $1$ and $\tau_{max}$ at the start of the simulation.
The third term represents the noise strategy.
Here, $\epsilon_j^t$ is a normally distributed random error with mean zero and standard deviation $\sigma_{\epsilon}$.
Based on the expected return ${r_e}_j^t$, the expected price ${P_e}_j^t$ is found using the following equation:
\begin{equation}
{P_e}_j^t = P^{t-1}\exp({r_e}_j^t).
\label{Pejt}
\end{equation}
The order price ${P_o}_j^t$ is a normally distributed random number with mean ${P_e}_j^t$ and standard deviation $P_{\sigma}^t$, given by
\begin{equation}
P_{\sigma}^t = {P_e}_j^t \cdot Est,
\end{equation}
where $Est(0<Est\le1)$ is the variation coefficient of the order price.
The choice between buying and selling is determined by the relative sizes of the expected price ${P_e}_j^t$ and the order price ${P_o}_j^t$. An agent places a buy order for one share if ${P_e}_j^t~>~{P_o}_j^t$, while an agent places a sell order for one share if ${P_e}_j^t~<~{P_o}_j^t$.
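A minimal sketch of this order process is given below; the function and field names are hypothetical, and the price history is assumed to be indexable by time.
\begin{verbatim}
import math
import random

def normal_agent_order(agent, P, t, n, P_f, Est):
    """Return (side, order price) for a normal agent.

    agent: dict with weights 'w1', 'w2', 'u', technical horizon 'tau',
           and noise standard deviation 'sigma_eps'.
    P:     price history; P[t] is the market price at time t.
    """
    r1 = math.log(P_f / P[t - n])                        # fundamental strategy
    r2 = math.log(P[t - n] / P[t - n - agent['tau']])    # technical strategy
    eps = random.gauss(0.0, agent['sigma_eps'])          # noise strategy
    w1, w2, u = agent['w1'], agent['w2'], agent['u']
    r_e = (w1 * r1 + w2 * r2 + u * eps) / (w1 + w2 + u)  # expected return
    P_e = P[t - 1] * math.exp(r_e)                       # expected price
    P_o = random.gauss(P_e, P_e * Est)                   # order price
    side = 'buy' if P_e > P_o else 'sell'                # one share per order
    return side, P_o
\end{verbatim}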
\subsection{Learning process}\label{Learning process}
Previous studies using an artificial market have implemented various kinds of learning processes. For example, agents switch strategies and/or tune their strategy parameters based on their performance, market price, etc.\ \cite{AHLPT96,LM99,NT13}.
The learning process in the present study is implemented to switch the strategy between the fundamental and technical strategies.
We modeled the learning process as follows based on Yagi et al.\ \cite{YMM19}.
For ${r_i}^t_j$, learning is performed by each agent immediately before the agent places an order. That is, when ${r_i}^t_j$ and ${r_l} ^t = \log(P^t/P^{t-t_l})$ are of the same sign, ${w_i}^t_j$ is updated as follows:
\begin{equation}
{w_i}^t_j{\leftarrow} {w_i}^t_j + k_l|{r_l}^t|q_j^t(w_{i,max} - {w_i}^t_j),
\end{equation}
where $k_l$ is a constant, and $q^t_j$ is set according to the uniform distribution between 0 and 1.
When ${r_i}^t_j$ and ${r_l}^t$ have opposite signs, ${w_i}^t_j$ is updated as follows:
\begin{equation}
{w_i}^t_j{\leftarrow} {w_i}^t_j - k_l|{r_l}^t|q_j^t{w_i}^t_j.
\end{equation}
Separately from the process for learning based on past performance, ${w_i}^t_j$ is reset with a small probability $\delta_l$, according to the uniform distribution between $0$ and $w_{i,max}$.
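A minimal sketch of this learning step follows; the names are hypothetical, and the reset probability is taken to be the parameter $\delta_l$ that appears in the parameter list in Section~\ref{Simulation and results}.
\begin{verbatim}
import math
import random

def update_weights(agent, r_strategies, P, t, t_l, k_l, w_max, delta_l):
    """Weight learning performed just before the agent places an order.

    r_strategies: [r1, r2], the agent's strategy-specific expected returns.
    w_max:        [w1_max, w2_max]; delta_l: small reset probability (assumed).
    """
    r_l = math.log(P[t] / P[t - t_l])            # realized historical return
    for i in (0, 1):
        w, q = agent['w'][i], random.random()
        if r_strategies[i] * r_l >= 0:           # same sign: strengthen the weight
            w += k_l * abs(r_l) * q * (w_max[i] - w)
        else:                                    # opposite sign: weaken the weight
            w -= k_l * abs(r_l) * q * w
        if random.random() < delta_l:            # occasional random reset
            w = random.uniform(0.0, w_max[i])
        agent['w'][i] = w
\end{verbatim}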
\subsection{Algorithm agent}\label{Algorithm agent}
Algorithm agents are assumed to be institutional investors who use an algorithmic trading strategy. Algorithmic trading is a process in which a big order is divided into small orders and automatically executed little by little. All algorithm agents always place buy market orders for one share. The purpose of incorporating algorithm agents into our artificial market is to use them to measure the market impact ({\it MI}) described later.
When there is an order with the best ask price in the order book, an algorithm agent places a buy order at the best ask price plus tick size $\Delta P$. Otherwise, an algorithm agent does not place an order.
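In code, the algorithm agent's rule amounts to only a few lines (again a hypothetical sketch):
\begin{verbatim}
def algorithm_agent_order(order_book, tick):
    """Buy market order for one share: priced one tick above the best ask so
    that it crosses the book immediately; no order if no sell order rests."""
    asks = order_book['sell']
    if not asks:
        return None
    best_ask = min(price for price, _ in asks)
    return ('buy', best_ask + tick)
\end{verbatim}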
\subsection{Position-based market maker}\label{Position-based market maker}
A position-based market maker is assumed to be an institutional investor who adopts a market maker strategy, i.e., the strategy of simultaneously placing a buy order and a sell order whose price exceeds the buy order price, in order to make a profit. Hereinafter, a position-based market maker is simply called a market maker.
If the previous sell, buy, or both orders of the market maker remain in the order book, the market maker cancels them and places new buy and sell limit orders.
Generally, a market maker decides its order prices based on the best bid, the best ask, and the spread, which equals its expected return per transaction. However, the market maker's order prices also depend on its position, i.e., the amount of the asset it holds, because it acts to keep its position neutral \cite{NS04,KMHI14}. That is, when the market maker has a long position (it has bought and holds some amount of the asset), it sets its buy and sell order prices lower so that its sell order matches an order from normal agents and algorithm agents more easily than its buy order does. Conversely, when the market maker has a short position (it has short-sold the asset), it sets its buy and sell order prices higher so that its buy order matches an order from normal agents more easily than its sell order does.
Let $\theta_M$ and $w_M$ denote the base spread of the market maker and the coefficient of its position, respectively (the initial value of $w_M$ is set to $5.0\times10^{-8}$ based on Kusaka et al.\ \cite{KMHI14}). Let ${P_{bb}}^t$, ${P_{ba}}^t$, ${P_{bv,M}}^t$, ${P_{bo,M}}^t$, ${P_{so,M}}^t$, and ${s_M}^t$ denote the best bid, the best ask, and the basic order price, buy order price, and sell order price of the market maker at time $t$, and its position between times $t$ and $t+1$, respectively. Then, ${P_{bo,M}}^t$, ${P_{so,M}}^t$, and ${P_{bv,M}}^t$ are as follows:
\begin{align}
{P_{bo,M}}^t & = {P_{bv,M}}^t - \frac{1}{2}P_f\cdot \theta_M, \label{PHt} \\
{P_{so,M}}^t & = {P_{bv,M}}^t + \frac{1}{2}P_f\cdot \theta_M, \\
{P_{bv,M}}^t & = (1-w_M(s_M^t)^3)\cdot \frac{1}{2}({P_{bb}}^t + {P_{ba}}^t) .
\end{align}
When the sell (buy) order price of the market maker is lower (higher) than the best-bid (best-ask), the market maker's order becomes a market order. Therefore, if the following conditions are satisfied, the buy and sell order prices of the market maker are changed \cite{KMHI15}.
That is, if ${P_{bo,M}}^t \geq {P_{ba}}^t$, then
\begin{align}
{P_{bo,M}}^t & = {P_{ba}}^t - \Delta P, \label{PHt1} \\
{P_{so,M}}^t & = ({P_{ba}}^t - \Delta P) + P_f\cdot\theta_M.
\end{align}
If ${P_{so,M}}^t \leq {P_{bb}}^t$, then
\begin{align}
{P_{bo,M}}^t & = ({P_{bb}}^t + \Delta P) - P_f\cdot\theta_M, \label{PHt2} \\
{P_{so,M}}^t & = {P_{bb}}^t + \Delta P.
\end{align}
The base spread $\theta_M$ is set in consideration of the market maker rebate. That is,
\begin{equation}
\theta_M=Re_M-2R_M,
\label{basespread}
\end{equation}
where $Re_M$ denotes the expected return of the market maker per transaction. Note that $Re_M$ is shown as a ratio to the fundamental price $P_f$ and is set to $0.300\%$. The factor of 2 on $R_M$ in the equation reflects the fact that the market maker may receive rebates on both its buy limit order and its sell limit order when both are executed.
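The quoting rule in Eqs.~(\ref{PHt})--(\ref{PHt2}) together with the base spread in Eq.~(\ref{basespread}) can be sketched as follows (hypothetical names; one-share quotes):
\begin{verbatim}
def base_spread(Re_M, R_M):
    # base spread chosen in consideration of the rebate: theta_M = Re_M - 2 R_M
    return Re_M - 2.0 * R_M

def market_maker_quotes(best_bid, best_ask, position, P_f, theta_M, w_M, tick):
    """Buy and sell order prices of the position-based market maker."""
    base = (1.0 - w_M * position ** 3) * 0.5 * (best_bid + best_ask)
    buy = base - 0.5 * P_f * theta_M
    sell = base + 0.5 * P_f * theta_M
    if buy >= best_ask:                 # buy quote would cross: pull it back one tick
        buy = best_ask - tick
        sell = buy + P_f * theta_M
    elif sell <= best_bid:              # sell quote would cross: push it up one tick
        sell = best_bid + tick
        buy = sell - P_f * theta_M
    return buy, sell
\end{verbatim}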
Table \ref{table1} shows the relationship between $\theta_M$ and $C_T$ by listing their values for various market maker rebates $R_M$ under the above conditions.
\begin{table}[t]
\caption{Relationship between base spread $\theta_M$ and taker fee $C_T$ with respect to market maker rebate $R_M$ when $Re_M=0.300\%$ and $R_{EX}=0.100\%$}
\begin{tabular}{c|c|c}
$R_M$ & $\theta_M$ & $C_T$\\ \hline\hline
-0.100\% & 0.500\% & 0.000\% \\
-0.075\% & 0.450\% & 0.025\% \\
-0.050\% & 0.400\% & 0.050\% \\
-0.025\% & 0.350\% & 0.075\% \\
-0.000\% & 0.300\% & 0.100\% \\
0.025\% & 0.250\% & 0.125\% \\
0.050\% & 0.200\% & 0.150\% \\
0.075\% & 0.150\% & 0.175\% \\
0.100\% & 0.100\% & 0.200\% \\
0.125\% & 0.050\% & 0.225\% \\
0.140\% & 0.020\% & 0.240\% \\
0.145\% & 0.010\% & 0.245\% \\
\end{tabular}
\label{table1}
\end{table}
\section{Simulation and results}\label{Simulation and results}
In this study, we checked the market impact and volatility by changing the market maker rebate as shown in Table \ref{table1}. We set the initial values of the model parameters as follows: $n=990$, $m=10$, $w_{1,max}=1$, $w_{2,max}=10$, $u_{max}=1$, $\tau_{max}=10{,}000$, $\sigma_{\epsilon}=0.06$, $Est=0.003$, $\Delta P=1.0$, $P_f=10{,}000$,
$t_l=10{,}000$, $t_c=20{,}000$, $k_l=4.0$, $\delta _l=0.01$, and $w_M=5.0\times 10^{-8}$.
We ran simulations from $t = 0$ to $t = 1{,}000{,}000$ and defined the end time of simulations as $t_e$.
We now introduce the market impact $MI$, defined as how much higher
the price at which algorithm agents bought an asset is than the fundamental price $P_f$:
\begin{equation}
MI=\frac{1}{n_{{\it buy}}}\sum_{j=1}^{n_{{\it buy}}}\frac{P^{t^j_{{\it buy}}}-P_f}{P_f},
\label{mi}
\end{equation}
where $n_{{\it buy}}$ is the number of assets that algorithm agents bought and $P^{t^j_{{\it buy}}}$ is the price at which an algorithm agent bought the asset at time $t^j_{{\it buy}}$.
The market impact thus measures how much more than the fundamental price the algorithm agents paid for the asset on average.
The market price $P^t$ becomes almost the same as the fundamental price $P_f$ in the case without algorithm agents. Therefore, when orders of the algorithm agents do not have an impact on the price formation, $MI = 0$ \cite{MKKMIYY16}. A larger $MI$ indicates that the orders have more impact on price formation.
Volatility is defined as the standard deviation of the return $\ln(P^t/P^{t-1})$.
We define the market inefficiency $M_{\it ie}$ to measure the market efficiency as follows \cite{VDV13}:
\begin{equation}
M_{\it ie}=\frac{1}{t_e}\sum_{t=0}^{t_e}\frac{\left\vert P^t-P_f\right\vert }{P_f}.
\label{mie}
\end{equation}
The market inefficiency is the average absolute deviation of the market price from the fundamental price. If the market were perfectly efficient, the market price would always equal the fundamental price, i.e., $M_{\it ie} = 0$. A larger $M_{\it ie}$ indicates a more inefficient market.
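As a small illustration, assuming the price series and the algorithm agents' execution prices have been recorded, the three indicators can be computed as follows (hypothetical names):
\begin{verbatim}
import numpy as np

def evaluate_run(prices, P_f, algo_buy_prices):
    """Volatility, market impact MI, and market inefficiency M_ie for one run."""
    p = np.asarray(prices, dtype=float)
    returns = np.diff(np.log(p))                  # log returns
    volatility = returns.std()
    buy = np.asarray(algo_buy_prices, dtype=float)
    MI = np.mean((buy - P_f) / P_f)               # average premium paid by takers
    M_ie = np.mean(np.abs(p - P_f) / P_f)         # average absolute deviation
    return volatility, MI, M_ie
\end{verbatim}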
\subsection{Validation of artificial market model}\label{Validation of artificial market model}
\begin{table}[t]
\begin{center}
\caption{Stylized facts ($\theta _{M}=0.300\%$)\label{table2}}
\begin{tabular}{cc | c} \hline
Kurtosis & & 17.54 \\ \hline
& Lag & \\
& 1 & 0.045 \\
Autocorrelation & 2 & 0.045 \\
coefficients & 3 & 0.044 \\
for squared returns & 4 & 0.042 \\
& 5 &0.040 \\ \hline
\end{tabular}
\end{center}
\end{table}
As many empirical studies have mentioned \cite{Co01,Se06}, a fat tail and volatility clustering appear in actual markets, which are two stylized facts of financial markets. Therefore, we set the artificial market parameters so as to replicate these features.
Table \ref{table2} shows the statistics for the stylized facts in the case in which the initial cash coefficient is 10; we calculated the price returns, i.e., $\ln({P^t/P^{t-1}})$, at intervals of 100 time units. As shown, both the kurtosis and the autocorrelation coefficients for squared returns at several lags are positive, which means that the runs for all five patterns replicate a fat tail and volatility clustering. This indicates that the model replicates the long-term statistical characteristics observed in real financial markets. Since similar results were obtained for the other values of the initial cash coefficient, those results are omitted here due to space limitations.
A previous empirical study showed that the kurtosis of returns and the autocorrelation coefficients for squared returns at several lags are positive when a fat tail and volatility clustering appear in actual markets \cite{Ts05}. For example, the kurtoses of monthly log returns of U.S. bonds, the S\&P composite index of 500 stocks, and Microsoft are 4.86, 7.77, and 1.19, respectively. Note that the kurtosis of returns depends on the kind of financial asset. Likewise, the autocorrelation coefficients for squared returns of the S\&P composite index of 500 stocks are 0.0536, 0.0537, 0.0537, 0.0538, and 0.0538 for lags 1 to 5, respectively. The results of our model do not differ significantly from these empirical values. Therefore, we regard the proposed model as valid.
\subsection{Results and discussion}\label{Results and discussion}
As a result of the simulations, we found that volatility, market impact, and market inefficiency decrease as the market maker rebate increases.
\subsubsection{Volatility}\label{volatility}
Fig. \ref{vola} shows volatility as a function of the market maker rebate. Volatility decreases as the market maker rebate increases. The reason for this result is as follows. As the market maker rebate increases, the market maker can offer orders with a narrower spread, because the expected return of the market maker depends on both the market maker rebate and the spread (refer to Eq.~(\ref{basespread})). If the market maker narrows the spread between its orders, then normal agents' orders execute against the market maker's orders more easily. Therefore, the market price tends to stay within the market maker's order prices and volatility decreases.
\begin{figure}[t]
\includegraphics[width=\linewidth]{volatility.png}
\caption{Volatility as a function of the market maker rebate\label{vola}}
\end{figure}
\subsubsection{Market impact}\label{Market impact}
Fig. \ref{market_impact} shows the market impact as a function of the market maker rebate. The market impact decreases as the market maker rebate increases. For the reasons mentioned above, the market maker can offer orders with a narrower spread. Therefore, as the buy market orders of algorithm agents match the sell limit orders of the market maker, the market price does not rise much and the market impact decreases.
\begin{figure}[t]
\includegraphics[width=\linewidth]{market_impact.png}
\caption{Market impact as a function of the market maker rebate\label{market_impact}}
\end{figure}
\subsubsection{Market inefficiency}\label{Market inefficiency}
Fig. \ref{market_inefficiency} shows the market inefficiency as a function of the market maker rebate. In the market with the maker-taker fee structure, the market maker offers orders with a narrower spread around the fundamental price. As the taker's orders match the orders of the market maker, the market price converges to the fundamental price. Therefore, market inefficiency decreases, that is, market efficiency increases, when the market maker rebate increases.
\begin{figure}[t]
\includegraphics[width=\linewidth]{market_inefficiency.png}
\caption{Market inefficiency as a function of the market maker rebate\label{market_inefficiency}}
\end{figure}
\subsubsection{Total cost of a taking order}\label{Total cost of a taking order}
Fig. \ref{MIcases} shows some cases of the total cost of a taking order.
Generally, if the decrement of the market impact due to the maker-taker fees is larger than the increment of the taker fee due to the maker-taker fees, then the total cost of a taking order in the market with the maker-taker fee structure is lower than that of a taking order in the market without it (refer to case 2 in Fig. \ref{MIcases}).
This means that the algorithm agents in the market with the maker-taker fee structure can trade at lower transaction costs than those in the market without it.
Note that $\Delta C_T$ is the increment of the taker fee due to the maker-taker fees and $MI'$ is the market impact under the maker-taker fee structure.
On the other hand, the algorithm agents in the market with the maker-taker fee structure have to trade at higher transaction costs than those in the market without it, if the decrement of the market impact due to the maker-taker fees is less than the amount of the increment of the taker fee (refer to case 3 in Fig. \ref{MIcases}).
\begin{figure}[t]
\includegraphics[width=\linewidth]{MIcases.png}
\caption{Relationship between the market impact and the increment of the taker fee\label{MIcases}}
\end{figure}
Fig. \ref{total_cost} shows a comparison of the total cost of a taking order between with and without the maker-taker fee structure in the market.
Note that Fig. \ref{total_cost} shows the total cost of a taking order expressed as a ratio to the sum of that cost and the trading price.
The total cost of a taking order in the market without the maker-taker fee structure is 0.327\% (orange line in Fig. \ref{total_cost}). On the other hand, the total cost of a taking order in the market with the maker-taker fee structure is generally higher than that in the market without the maker-taker fee structure.
Therefore, we find that the maker-taker fee structure may create a disadvantageous environment for algorithm agents to trade in.
\begin{figure}[t]
\includegraphics[width=\linewidth]{total_cost.png}
\caption{Total cost of the taking orders as a function of the market maker rebate\label{total_cost}}
\end{figure}
\section{Conclusion}\label{Conclusion}
In this study, we investigated the effect of maker-taker fees on the market from the viewpoints of volatility, market impact, and market efficiency by using an artificial market model. Furthermore, we also checked the relationship between the total cost of the taker's trade and market efficiency. As a result, we found that maker-taker fees contributed to decreases in volatility and market impact and an increase in market efficiency. However, we also confirmed that the maker-taker fee structure increases the total cost of taking orders.
Thus, if an exchange charges a high maker-taker fee, market makers are likely to make a profit, while other investors may leave the market due to higher transaction costs.
Therefore, regulators should make regulations regarding the maker-taker fee structure in consideration of the above points, or at least avoid regulations that encourage exchanges to rebate market makers.
In our future work, we plan to confirm the effect of maker-taker fees on a market in which the participating algorithm agents always place a market sell order for one share instead of a market buy order.
\section*{DISCLAIMER}
It should be noted that the opinions contained herein are solely those
of the authors and do not necessarily reflect those of SPARX Asset
Management Co., Ltd.
\begin{acks}
This work was supported by JSPS KAKENHI Grant Number 20K04977.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
\label{introduction}
Consider a seller who would like to sell one object to a group of buyers.
The classical optimal auction due to \cite{Myerson1981} assumes that each
buyer knows his own valuation; moreover, each valuation follows a
distribution which is common knowledge. In reality, however, the buyers may
not know their own valuations for the good and have to assess how well the
product suits their need via various information sources. Their assessment
might be based on advertisements, recommendations from their friends, or the
product description on an internet platform. In this paper, we study an
information design problem in optimal auctions where each buyer learns his
private valuation independently via a signal.\footnote{
We assume that both the buyers' valuations and signals are independently
distributed.} The seller earns the expected highest nonnegative virtual
value, whereas the buyers earn the expected total surplus minus the seller's
revenue. We derive the \emph{buyer-optimal information structure} which
maximizes the buyers' total surplus as well as \emph{the seller-worst
information structure} which minimizes the seller's optimal revenue.
As the seller runs an optimal auction, providing more information to the
buyers can lead to not only a higher surplus but also a higher payment for
them. Hence, the effect of a new information source on the buyers' welfare
is not a priori clear. The buyer-optimal information structure contributes
to our understanding of this issue by identifying an information structure
which maximizes the buyers' surplus. In this regard, our study builds upon
the prior work by \cite{roesler2017buyer} with a single buyer but extends to
an auction setup with multiple buyers. The problem is particularly relevant,
when the information designer is a regulator who aims to promote the
consumers' welfare by requiring the seller to disclose certain information
about the product.\footnote{\cite{terstiege2020buyer} study buyer-optimal
information design under monopoly pricing in which the information designer
may be a regulator of product information. They focus on a situation where
the buyer cannot commit to ignore any additional information released by the
seller, whereas we follow \cite{roesler2017buyer} in setting aside the issue
of the seller's disclosure.}
The seller-worst information design provides a \textquotedblleft
minmax\textquotedblright\ upper bound for the revenue of an informationally
robust optimal auction which achieves the highest \emph{revenue guarantee}
(the so-called \textquotedblleft maxmin\textquotedblright\ revenue)
regardless of the equilibrium and the information structure. Indeed, a
similar \textquotedblleft minmax\textquotedblright\ upper bound is critical
in establishing the strong duality results in \cite{du2018robust} for the
one-buyer case and also in \cite{bergemann2016informationally} and \cite{brooks2019optimal} for the multi-buyer common-value case. Specifically,
these strong duality results show, in different contexts, that this
\textquotedblleft minmax\textquotedblright\ upper bound can indeed be
achieved via an optimal informationally robust\ auction. In this vein, the
seller-worst information provides a first step toward establishing such a
strong duality result---or lack thereof---in an independent private-value
setting.
We assume that the seller has no value for the good and the buyers are ex
ante symmetric. In particular, all of the buyers have an ex post valuation
of the good equal to either $0$ or $1$ with the same mean $p$. We follow
\cite{roesler2017buyer} in assuming that each signal provides an unbiased
estimator about the buyer's valuation. Hence, by \cit
{blackwell1953equivalent}, an information structure is feasible if and only
if it consists of a profile of independent signal distribution, all with
mean $p$.\footnote
Alternatively, our analysis also applies and produces the same result if the
information designer only knows that each buyer's prior distribution of
values has mean $p$ and support $\left[ 0,1\right] $. In a similar vein,
\cite{carrasco2018optimal} study a revenue-maximizing seller with a single
buyer and the seller has only partial information about the buyer's
valuation distribution. The partial information is formulated as finitely
many moment constraints, with bounded or unbounded support.} We begin with
solving an optimal symmetric signal distribution in the two information
design problems. As with deriving symmetric equilibria in symmetric
auctions, it is also more tractable to derive a symmetric information
structure in our ex ante symmetric information design problems.\footnote{
Even with symmetric information structures, we still allow for irregular
signal distributions for which the optimal auction need not be a
second-price auction with reserve. Hence, our information design problem is
not equivalent to the corresponding information design problem where the
seller is committed to adopting a second-price auction with reserve; see
Section \ref{rsn2} for more discussions and Appendix \ref{irexample} for an
illustrative example.}
We show that as long as there are two or more buyers, the buyer-optimal
information structure need not be equal to the seller-worst information
structure. The result sharply contrasts with the results of \cite{roesler2017buyer} and \cite{du2018robust}, which show that the two
information structures are equivalent when there is only one buyer. More
precisely, \cite{roesler2017buyer} show that with one buyer, the
buyer-optimal information structure is equal to a truncated Pareto
distribution with virtual value $0$ for any signal less than $1$, and with
virtual value $1$ for any signal equal to $1$. Consequently, the good is
always sold. \cite{du2018robust} constructs an informationally robust
mechanism which guarantees the revenue under the buyer-optimal information
structure of \cite{roesler2017buyer}. Hence, the buyer-optimal information
structure also minimizes the seller's revenue.
When there are two or more buyers, we pin down a cutoff $p^{s}$ which is
decreasing with the number of buyers. If $p$ is no more than $p^{s}$, then
the (symmetric) seller-worst information structure for each buyer remains
the same as in the one-buyer case. If $p$ is higher than $p^{s}$, then the
seller-worst signal distribution remains equal to a truncated Pareto
distribution but now with virtual value $k^{s}>0$ for any signal less than $1$, and with virtual value $1$ for any signal equal to $1$. Since all virtual
values are nonnegative at any signal profile, the good is always sold.
Indeed, raising the virtual value from $0$ to $k^{s}$ has two countervailing
effects. First, the seller's revenue increases as the low virtual value
increases. Second, to satisfy the mean constraint, increasing the low
virtual value must be compensated by decreasing the probability of having
the high virtual value 1. As either the prior mean or the number of buyers
grows, competition among the buyers becomes more severe and the second
effect dominates the first one.
The (symmetric) buyer-optimal information structure differs in a number of
important ways. First, we pin down two cutoffs $r^{b}$ and $p^{b}$ which are
also decreasing in the number of buyers. If $p$ lies between $r^{b}$ and $p^{b}$, then the buyer-optimal information structure remains the same as in
the one-buyer case. When $p$ is less than $r^{b}$, the buyer-optimal signal
distribution puts positive mass on signal $0$ and the remaining mass on a
truncated Pareto distribution with virtual values $0$ and $1$.\footnote{
We provide a definition of virtual value with arbitrary signal distribution
in Section \ref{virtual_def}.} Since signal $0$ induces a negative virtual
value, with positive probability the seller withholds the good. This is
because to maximize the buyers' surplus, the information designer needs to
consider not only the seller's revenue but also the total surplus. The total
surplus is convex in the buyers' signal and hence favors a wider spread.
When $p$ is above $p^{b}$, the buyer-optimal signal distribution is a
truncated Pareto distribution. However, also due to convexity of total
surplus, the distribution induces a low virtual value $k^{b}<k^{s}$ for any
signal less than 1 and high virtual value $1$ otherwise. We discuss the
trade-off in more detail in Section \ref{Outline of the solution}.\footnote{
Different from our information design problems, \cite{yang2019buyer} studies
the buyers' strategic information acquisition and also shows that the
buyers' equilibrium signal distributions are within the Pareto class. Due to
the buyers' competition in information acquisition, however, a tie occurs with zero probability in the symmetric equilibrium signal distribution in \cite[Proposition 4]{yang2019buyer}, whereas a tie occurs with positive probability in our buyer-optimal information structure.}
We also show that when the number of buyers goes to infinity, the cutoffs $p^{s}$, $r^{b}$, and $p^{b}$ all tend to zero, both $k^{b}$ and $k^{s}$
monotonically increase to $p$, whereas the corresponding probabilities
assigned to virtual value $1$ monotonically decrease. As a result, both the
buyer-optimal and the seller-worst signal distributions converge to a
degenerate distribution which puts all mass on the prior mean $p$, i.e., no
disclosure.\footnote{
However, in a common-value model, \cite{brooks2019optimal} construct a
minmax/seller-worst information structure with a graded value function and
the aggregate signals following an Erlang distribution. When the prior has
binary support on $\left\{ 0,1\right\} $, their information structure
induces virtual values $0$ and $1$, regardless of the number of buyers.} We
summarize the results on optimal symmetric information in Table \ref{table}.
\begin{table}[tbp]
\caption{Main Results and Comparisons}
\label{table}
\begin{tabular}{|c|lc|ll|l|c|}
\hline
\begin{tabular}{@{}c}
Number \\
of Buyers
\end{tabular}
& \multicolumn{5}{c|}{Optimal distribution with different prior mean $p$} &
\begin{tabular}{@{}c}
Always \\
sell
\end{tabular}
\\ \hline
\multicolumn{1}{|l|}{\multirow{2}{*}{$n=1$}} & Seller-worst &
\multicolumn{4}{|l|}{
\begin{tabular}{@{}l}
Virtual value $\{0,1\}$ for any prior mean; \\
coincides with \cite{roesler2017buyer}
\end{tabular}
} & Yes \\ \cline{2-7}
\multicolumn{1}{|l|}{} & Buyer-optimal & \multicolumn{4}{|l|}{Virtual value
$\{0,1\}$ for any prior mean} & Yes \\ \hline
\multicolumn{1}{|l|}{\multirow{4}{*}{$n\geq2$}} &
\multirow{2}{*}{Seller-worst} & \multicolumn{2}{|c|}{$p<p^{s}$} &
\multicolumn{2}{|c|}{$p^{s}<p\leq 1$} & Yes \\ \cline{3-6}
\multicolumn{1}{|l|}{} & & \multicolumn{2}{|l|}{Virtual value $\{0,1\}$} &
\multicolumn{2}{l|}{Virtual value $\{k^{s},1\}$} & \\ \cline{2-7}
\multicolumn{1}{|l|}{} & \multirow{2}{*}{Buyer-optimal} &
\multicolumn{1}{|c|}{$p<r^{b}$} & $r^{b}\leq p\leq p^{b}$ & & $p^{b}<p\leq
1 $ & No \\ \cline{3-6}
\multicolumn{1}{|l|}{} & & \multicolumn{1}{|l|}{
\begin{tabular}{@{}l}
Positive mass on $0$, \\
virtual value $\{0,1\}$
\end{tabular}
} & \multicolumn{2}{l|}{
\begin{tabular}{@{}l}
Virtual value \\
$\{0,1\}$
\end{tabular}
} &
\begin{tabular}{@{}l}
Virtual value \\
$\{k^{b},1\}$
\end{tabular}
& \\ \hline
\multicolumn{1}{|l|}{\multirow{2}{*}{$n\rightarrow\infty$}} & Seller-worst &
\multicolumn{4}{l|}{Degenerate distribution (revealing nothing) for any mean}
& Yes \\ \cline{2-7}
\multicolumn{1}{|l|}{} & Buyer-optimal & \multicolumn{4}{l|}{Degenerate
distribution (revealing nothing) for any mean} & Yes \\ \hline
\end{tabular}
\end{table}
We also investigate asymmetric signal distributions in both information
design problems. For the seller-worst problem, we show that the optimal
symmetric information structure remains the unique optimal solution, even if
the information designer can choose different signal distributions for
different buyers. Intuitively, averaging a profile of asymmetric virtual
value distributions grants the seller less option value in selecting the highest virtual value and results in less revenue. This means restricting
attention to symmetric signal distributions entails no loss in minimizing
the seller's revenue.
Although averaging a profile of asymmetric signal distributions entails no
loss in minimizing the seller's revenue, it may entail loss in the expected
total surplus. Indeed, we demonstrate one case with two buyers and another
case with the number of buyers approaching infinity where an asymmetric
information structure generates strictly higher surplus for the buyers than
the optimal symmetric information structure. As our setup is ex ante
symmetric, the result shows that an asymmetric information structure can emerge
endogenously as the choice of a buyer-optimal information designer.
We explain how our argument differs from previous papers. In either
one-buyer or common-value models, the optimal information structures are
constructed to make sure that under \emph{any} signal realizations each
buyer has the same nonnegative virtual value; see \cite{roesler2017buyer},
\cite{bergemann2016informationally} and \cite{brooks2019optimal}. This is
possible either because there is only one buyer or because each buyer's
common value is set to depend on \emph{all} buyers' signal realizations. In
our independent private-value model, each buyer's interim value depends only
on his signal and hence his virtual value, being also independently
distributed, must differ from other buyers' virtual values for some signal
realizations.\footnote{
Since we assume independence of both valuations and signals among buyers,
feasibility of a signal distribution is characterized by the mean
constraint. While allowing the interdependence of interim valuations and
signals may help us address the issue with distinct virtual values, the
induced ex post valuations of different buyers from such interim valuations
may end up being correlated and inconsistent with the independent prior; see
Section \ref{discussion}.} More importantly, when there are multiple buyers,
both \cite{bergemann2016informationally} and \cite{brooks2019optimal} prove
that their conjectured information structure is indeed the
seller-worst/minmax information structure from their strong duality result.\footnote{
Since the good is always sold in their information structure (i.e., the
expected total surplus is also maximized), their result also establishes the
equivalence of the buyer-optimal and the seller-worst information structures
in their common-value model.} To the best of our knowledge, however, such a
strong duality result remains unknown in the independent private-value
setting which we study here.
To address this difficulty, we transform the control variables of the
information design problems. More precisely, instead of working with
signal/interim value distributions, we work with the interim \emph{virtual}
value distribution. After change of control variables, the information
design problem becomes an isoperimetric problem in optimal control theory.
The Euler-Lagrange equation in this problem can then be invoked to argue
that the virtual value distribution function is a step function with at most
two steps; see Section \ref{Outline of the solution}. This effectively
reduces the infinite-dimensional information design problem into a tractable
finite-dimensional constrained optimization problem. Moreover, with the few
control variables such as the two-step virtual value distribution functions,
we are able to understand the trade-off in pinning down their optimal
choices as we elaborate above.
The rest of this paper proceeds as follows. In Section 2, we describe our
model and formulate the information design problem. Section 3 presents our
main results. Section 4 demonstrates how we simplify the control variables
of the information design problems. Section 5 studies the information design
problem with asymmetric signal distributions. Section 6 discusses some
issues with the extensions of our results. The appendix contains all proofs
which are omitted from the main text.
\section{Model}
\label{model}
There is a seller who has one object to sell to a finite set $N=\left\{
1,2,...,n\right\} $ of potential buyers. The seller has no value for the
object. Each buyer's prior valuation, $v_{i}$, is identically and
independently drawn from a Bernoulli distribution $H$ on $\{0,1\}$. Let $p=\mathbb{E}[v_{i}]=\Pr \left( v_{i}=1\right) $ denote the mean of $H$. To
rule out trivial cases, we assume that $p\in (0,1)$. Suppose that each buyer
can observe an independently and identically distributed signal $x_{i}$
about $v_{i}$ from an information designer and the distribution of $v_{i}$
and $x_{i}$ is common knowledge among the seller as well as the buyers.
\subsection{Information structure}
Following \cite{roesler2017buyer}, we say a signal distribution is feasible
if each signal of a buyer provides him with an unbiased estimate about his
valuation. Then, according to the characterization of \cit
{blackwell1953equivalent}, the prior valuation distribution $H$ is a
mean-preserving spread of any feasible distribution of signals. Since $H$ is
a Bernoulli distribution on $\{0,1\}$, the mean-preserving spread condition
can be reduced to a mean constraint. Hence, a feasible symmetric information
structure is a signal distribution $G$ with $G\in \mathcal{G}_{H}$ where
\begin{equation*}
\mathcal{G}_{H}=\left\{ G:[0,1]\mapsto \lbrack 0,1]\left\vert {}\right.
\int_{0}^{1}x\,\mathrm{d}G(x)=p\text{ and }G\text{ is a CDF}\right\} .
\end{equation*}
\subsection{Information design problem}
\label{virtual_def}
Given a feasible signal distribution $G$, a revenue-maximizing mechanism is
an optimal auction due to \cite{Myerson1981}. In the optimal auction, the
seller's revenue is equal to the expected highest, nonnegative, ironed
virtual value $\max_{i}\{\hat{\varphi}_{i}(x_{i}|G),0\}$. Formally, for any
CDF $G$ with $supp(G)\subset \lbrack 0,1]$, let $a=\inf \{x\in \lbrack
0,1]|G(x)>0\}$, and define
\begin{equation*}
\Psi (x|G)=
\begin{cases}
0, & \mbox{if }x\in \lbrack 0,a); \\
a-x(1-G(x)), & \mbox{if }x\in \lbrack a,1]\text{.}
\end{cases}
\end{equation*}
Let $\Phi (x|G)$ be the convexification of $\Psi $ under measure $G$.\footnote{
That is, $\Phi (x|G)$ is the largest convex function that is everywhere
weakly lower than $\Psi (x|G)$.} By definition, for any $x\in \lbrack 0,1]$,
the (\emph{ironed}) \emph{virtual valuation} at $x$, denoted as $\hat{\varphi}(x|G)$, is a $G$-sub-gradient of $\Phi (x|G)$; see also \cite{yang2019buyer}. If $\Phi (x|G)=\Psi (x|G)$ for any $x$, then we say that $G$ is a \emph{regular distribution}. We denote by $\hat{\varphi}$ an ironed virtual
value and use $\varphi $ to denote a virtual value induced from a regular
distribution. If $G$ is regular,
then the virtual value has the well-known expression
\begin{equation*}
\varphi (x|G)=x-\frac{1-G\left( x\right) }{G^{\prime }\left( x\right) }\text{.}
\end{equation*}
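For example, if $G$ is the uniform distribution on $[0,1]$, then $\varphi (x|G)=x-(1-x)=2x-1$.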
We allow the information designer to choose any distribution function $G$,
whether it is regular or irregular and whether it admits a density function
or not.
Let $M(x)=\{i\in N|\hat{\varphi}(x_{i}|G)\geq \max_{j}\{\hat{\varphi}(x_{j}|G),0\}\}$ be the set of buyers who have the largest nonnegative
virtual value and $M^{\prime }(x)=\{i\in N|x_{i}\geq {x_{j}},\forall j\in
M(x)\}$ be the set of buyers who not only have the highest nonnegative
virtual value but also the largest signal among those with the highest
virtual value. Define an allocation rule
\begin{equation*}
q_{i}(x_{i},x_{-i})=
\begin{cases}
\frac{1}{|M^{\prime }(x)|}, & \mbox{if }i\in M^{\prime }(x); \\
0, & \mbox{if }i\not\in M^{\prime }(x)\text{.}
\end{cases}
\end{equation*}
That is, $q_{i}(x_{i},x_{-i})$ is an optimal auction allocation rule which
breaks a tie in favor of a surplus-maximizing information designer.
We study the following information design problem parameterized by $\alpha
=0 $ or $1$:
\begin{align}
\max_{G(\cdot )}\int_{[0,1]^{n}}& \sum_{i=1}^{n}\left( \alpha x_{i}-\hat{\varphi}(x_{i}|G)\right) q_{i}(x_{i},x_{-i})\prod_{i=1}^{n}\left( \mathrm{d}G(x_{i})\right) \label{info} \\
\text{s.t. }& \int_{0}^{1}\left( 1-G(x)\right) \mathrm{d}x=p. \label{mean}
\end{align}
The term $\sum_{i=1}^{n}x_{i}q_{i}(x_{i},x_{-i})$ is the total surplus generated under the optimal auction allocation rule $q_{i}$.
Moreover, the term $\sum_{i=1}^{n}\hat{\varphi}(x_{i}|G)q_{i}(x_{i},x_{-i})$
is the seller's revenue under the allocation rule $q_{i}$, namely the
expected highest nonnegative virtual value. Hence, if $\alpha =0$, the
information designer aims to minimize the seller's revenue, and it
corresponds to the seller-worst information design problem. If $\alpha =1$,
the information designer aims to maximize the buyers' surplus, and it
corresponds to the buyer-optimal information design problem. Hereafter, we
call (\ref{mean}) the \emph{mean constraint}.
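For intuition, both objectives can be evaluated by simulation: for a given ironed virtual value function, the allocation rule $q_{i}$, the seller's revenue term, and the buyers' surplus term (the objective with $\alpha =1$) at a signal profile can be computed as in the following sketch (hypothetical names). Averaging over signal profiles drawn from $G$ approximates the objective in \eqref{info}.
\begin{verbatim}
def allocate(x, virtual_value):
    """q_i: highest nonnegative (ironed) virtual value wins; ties go to the
    highest signal and any remaining ties are split uniformly."""
    phi = [virtual_value(xi) for xi in x]
    top = max(phi)
    if top < 0:
        return [0.0] * len(x)                 # the good is withheld
    M = [i for i, v in enumerate(phi) if v == top]
    x_top = max(x[i] for i in M)
    M_prime = [i for i in M if x[i] == x_top]
    return [1.0 / len(M_prime) if i in M_prime else 0.0 for i in range(len(x))]

def revenue_and_surplus(x, virtual_value):
    q = allocate(x, virtual_value)
    revenue = sum(virtual_value(xi) * qi for xi, qi in zip(x, q))
    surplus = sum(xi * qi for xi, qi in zip(x, q)) - revenue
    return revenue, surplus
\end{verbatim}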
Endow the space of Borel probability measures on $\left[ 0,1\right] $ with
the weak$^{\ast }$ topology. We say a signal distribution $G$ induces \emph{almost nonnegative virtual values} if the virtual values induced by $G$ are nonnegative almost everywhere on $\left( 0,1\right] $. We denote by $\mathcal{G}_{H}^{+}\subset \mathcal{G}_{H}$ the set of feasible signal distributions
with almost nonnegative virtual values. Also we say a signal distribution $G$
is \emph{almost regular} if $G$ is regular almost everywhere on $\left(
0,1\right] $. We will argue later in Lemma \ref{almost_nonneg} and in Lemma \ref{regularity}, the optimal
signal distribution must induce almost nonnegative virtual values and be almost regular. We
present two lemmas here. The first lemma documents the existence of the
solution to the problem in \eqref{info}. The second lemma highlights the
additional trade-off which a buyer-optimal information designer is facing,
on top of minimizing the seller's revenue.
\begin{lemma}
\label{existence}For the problem in \eqref{info}, an optimal solution exists.
\end{lemma}
\begin{proof}
We first establish the seller-worst case. By Theorem 2 of \cite{monteiro2015note}, the expected revenue is a lower semicontinuous function
in $G$. Hence, the objective function of the problem in \eqref{info} is an upper semicontinuous function of $G$. Moreover, since $\mathcal{G}_{H}$ is a closed subset of the set of Borel probability measures on $[0,1]$, $\mathcal{G}_{H}$ is compact. Thus, by the extreme value theorem, an optimal solution
exists.
For the buyer-optimal problem, the existence is more involved and we provide
a formal proof in Appendix \ref{proofexist}.
\end{proof}
\begin{lemma}
\label{convex}For the problem in \eqref{info} and signal distribution $\hat{G}\in \mathcal{G}_{H}^{+}$, if $\hat{G}$ is a (resp. strict) mean-preserving
spread of signal distribution $G$, then $\hat{G}$ will generate (resp.
strictly) more total surplus than $G$.\footnote{
We say $\hat{G}$ is a (resp. strict) mean-preserving spread of signal
distribution $G$ if $\int_{0}^{x}\left( \hat{G}(t)-G(t)\right) \mathrm{d}t\geq 0$ for all $x$ with equality at $x=1$ (resp. and with strict inequality at some $x$
with $G$-positive probability).}
\end{lemma}
\begin{proof}
Observe that $\sum_{i=1}^{n}x_{i}q_{i}(x_{i},x_{-i})=\max \{x_{1},\cdots
,x_{n}\}$ if the good is allocated. Therefore,
\begin{equation*}
\int_{[0,1]^{n}}\sum_{i=1}^{n}x_{i}q_{i}(x_{i},x_{-i})\prod_{i=1}^{n}\left( \mathrm{d}G(x_{i})\right) \leq \int_{0}^{1}x\,\mathrm{d}G^{n}\leq \int_{0}^{1}x\,\mathrm{d}\hat{G}^{n}=\int_{[0,1]^{n}}\sum_{i=1}^{n}x_{i}q_{i}(x_{i},x_{-i})\prod_{i=1}^{n}\left( \mathrm{d}\hat{G}(x_{i})\right) \text{.}
\end{equation*}
The first inequality follows because the good may not be allocated under $q_{i}$. The second inequality follows because $\hat{G}$ is a mean-preserving
spread of $G$ and $\max \{x_{1},\cdots ,x_{n}\}$ is convex in $x$. Moreover,
the second inequality is strict if $\hat{G}$ is a strict mean-preserving
spread of $G$. The equality follows because $\hat{G}\in \mathcal{G}_{H}^{+}$: the good is always sold under $q_{i}$ except when $x_{i}=0$ for every $i$
(and in this case, the total surplus remains the same, whether the good is
sold or not).
\end{proof}
\section{Main results}
\label{main results}In this section, we present our results on both
information design problems. We will also sketch their proofs in Section \ref{Outline of the solution}.
First, the following result summarizes the seller-worst information
structure:
\begin{theorem}
\label{result_seller}The unique symmetric seller-worst information structure
is a truncated Pareto distribution $G^{s}$ with
\begin{equation*}
G^{s}(x)=
\begin{cases}
1-\frac{x^{s}}{x-k^{s}} & \text{ if }x\in \lbrack x^{s}+k^{s},1); \\
1 & \text{ if }x=1
\end{cases}
\end{equation*}
where $x^{s}$ satisfies the mean constraint $x^{s}+k^{s}+x^{s}\left( \log
(1-k^{s})-\log (x^{s})\right) =p$. Moreover, there exists a threshold $p^{s}$
such that
\begin{enumerate}
\item if $p\in \lbrack 0,p^{s}]$, then the optimal $k^{s}=0$;
\item if $p\in (p^{s},1]$, then the optimal $k^{s}>0$.
\end{enumerate}
\end{theorem}
\begin{proof}
See Section \ref{Outline of the solution}. The threshold $p^{s}$ is strictly
decreasing in $n$ and $k^{s}$ is strictly increasing in $n$. Their expressions can be
found in Appendix \ref{fsw}.
\end{proof}
Theorem \ref{result_seller} states that the (symmetric) seller-worst signal
distribution for each buyer is equal to a truncated Pareto distribution with
virtual value $k^{s}$ for any signal less than 1, and with virtual value 1
for any signal equal to 1. Indeed, the virtual value is $x-\frac{1-G^{s}(x)}{g^{s}(x)}=x-\frac{x^{s}/(x-k^{s})}{x^{s}/(x-k^{s})^{2}}=k^{s}$ when $x$ is
less than $1$. Also, there exists a cut-off $p^{s}$ such that if $p$ is no
more than $p^{s}$, then the generated virtual value $k^{s}=0$ remains the
same as in the one-buyer case. If $p>p^{s}$, the generated virtual value $k^{s}$ will be strictly larger than $0$.
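Given the lower virtual value $k^{s}$ and the prior mean $p$, the parameter $x^{s}$ can be recovered numerically from the mean constraint, whose left-hand side is increasing in $x^{s}$ on $(0,1-k^{s}]$; a bisection sketch with hypothetical names is shown below. With $k^{s}=0$ the constraint reduces to $x^{s}\left( 1-\log x^{s}\right) =p$.
\begin{verbatim}
import math

def mean_of_Gs(x_s, k_s):
    # left-hand side of the mean constraint for the truncated Pareto G^s
    return x_s + k_s + x_s * (math.log(1.0 - k_s) - math.log(x_s))

def solve_x_s(p, k_s, tol=1e-12):
    """Solve the mean constraint for x^s by bisection (valid for k_s < p <= 1)."""
    lo, hi = 1e-15, 1.0 - k_s
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_of_Gs(mid, k_s) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
\end{verbatim}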
Since all virtual values are nonnegative at any signal profile, the good
will always be sold. Indeed, since the seller gets no revenue so long as the
virtual value is zero, it is never optimal for a seller-worst information
designer to induce a negative virtual value. More precisely, by raising a
negative virtual value to zero, the information designer can also decrease
the probability assigned to a positive virtual value to balance the mean and
thereby lower the seller's revenue. The exact trade-off will become more
clear after we transform the control variable from a signal distribution to
a virtual value distribution in Section \ref{Outline of the solution}.
In contrast, raising the lower virtual value from zero to a positive number
has two countervailing effects on the seller's revenue. First, by increasing
the lower virtual value, the seller's revenue increases. Second, to obey the
mean constraint, the probability of having the high virtual value $1$ must
be reduced to compensate the increase in the lower virtual value.
Intuitively, if either $p$ or $n$ is large, competition among the buyers
becomes more severe and the second effect dominates the first one.
Second, we summarize the buyer-optimal information structure:
\begin{theorem}
\label{result_buyer}The unique symmetric buyer-optimal information structure
is a truncated Pareto distribution $G^{b}$ with
\begin{equation*}
G^{b}(x)=
\begin{cases}
\theta _{0} & \text{ if }x\in \lbrack 0,x^{b}); \\
1-\frac{x^{b}(1-\theta _{0})}{x-k^{b}} & \text{ if }x\in \lbrack x^{b}+k^{b},1); \\
1 & \text{ if }x=1
\end{cases}
\end{equation*}
where $x^{b}$ satisfies the mean constraint $(1-\theta _{0})\left(
x^{b}+k^{b}+x^{b}\left( \log (1-k^{b})-\log (x^{b})\right) \right) =p$.
Moreover, there exist two thresholds $r^{b}<p^{b}$ such that
\begin{enumerate}
\item if $p\in \lbrack 0,r^{b})$, then $\theta _{0}>0$ and $k^{b}=0$;
\item if $p\in \lbrack r^{b},p^{b})$, then $\theta _{0}=0$ and $k^{b}=0$;
\item if $p\in \lbrack p^{b},1]$, then $\theta _{0}=0$ and $k^{b}>0$.
\end{enumerate}
\end{theorem}
\begin{proof}
See Section \ref{Outline of the solution}. The thresholds $r^{b}$ and $p^{b}$
are both strictly decreasing in $n$, whereas $k^{b}$ is strictly increasing in $n$. Their
expressions can be found in Appendix \ref{fbo}.
\end{proof}
The buyer-optimal information structure looks similar to the seller-worst
information structure but differs in several ways. First, when the prior $p<r^{b}$, the buyer-optimal signal distribution puts mass $\theta _{0}$ on
signal $0$. That is, with probability $\left(\theta _{0}\right)^{n}$ the
good is not sold. In fact, to maximize the buyers' surplus, the information
designer needs to consider not only the seller's revenue but also the
expected total surplus. Although putting positive mass on signal $0$ will
increase the seller's revenue, it also follows from Lemma \ref{convex} that
a mean-preserving spread of the seller-worst distribution generates more
expected total surplus. Theorem \ref{result_buyer} shows that in this case,
the benefit of increasing the expected total surplus dominates the cost of
increasing the seller's revenue when the mean $p$ is lower than $r^{b}$.
This sharply contrasts the seller-worst information structure in which the
virtual value is always nonnegative and the good is always sold. The
comparison also helps us understand why the two information structures are
equivalent and the good is always sold when there is only one buyer. With
one buyer the expected total surplus is linear in the buyer's signal and
hence, there is no benefit in having a negative virtual value at zero.
When $p>p^{b}$, the lower virtual value becomes positive. As we anticipate, $p^{b}$ is larger than $p^{s}$ and $k^{b}<k^{s}$.\footnote{
Indeed, suppose that $p^{b}<p^{s}$. When $p\in \lbrack p^{b},p^{s}]$, the
buyer-optimal signal distribution generates positive virtual value $k^{b}$,
while the seller-worst distribution generates zero virtual value. The
buyer-optimal information designer can replace the buyer-optimal signal
distribution by the seller-worst distribution; however, since this will
generate a higher total surplus (by convexity) and strictly lower seller
revenue (by the definition of the seller-worst distribution), it contradicts
the optimality of the buyer-optimal distribution.} That is, the
buyer-optimal information designer does not raise the lower virtual value by
the same amount as in the seller-worst case. Again, this is because
raising the lower virtual value reduces the spread of the distribution and
thereby, by Lemma \ref{convex}, also the expected total surplus.
\begin{corollary}
\label{equivalence}For $n\rightarrow \infty $, the buyer-optimal information
structure coincides with the seller-worst information structure in the
limit. Both are given by the degenerate distribution $G=\delta _{p}$ where
\begin{equation*}
G(x)=
\begin{cases}
0 & \text{ if }x\in \lbrack 0,p); \\
1 & \text{ if }x\in \lbrack p,1]
\end{cases}
\end{equation*}
\end{corollary}
\begin{proof}
See Appendix \ref{proofequi}.
\end{proof}
Corollary \ref{equivalence} implies that when $n$ is large, both the
buyer-optimal and the seller-worst information structures are close to
\textquotedblleft no disclosure\textquotedblright . Namely, the
information designer chooses the degenerate distribution function that
concentrates on $p$. Moreover, the seller extracts the ex ante expectation
of a single buyer's value (i.e., $p$) and leaves no surplus to the buyers.
This limiting result is obtained under our assumption that the information
designer has full control of the information structure among the buyers.
Corollary \ref{equivalence} also contrasts with the result of \cite{yang2019buyer}.
Specifically, \cite{yang2019buyer} shows that when the buyers' information
structure is a result of their strategic information acquisition, the unique
symmetric equilibrium information structure converges to full information,
as the number of buyers goes to infinity. Hence, the buyers retain zero
surplus in the limit, whether in our buyer-optimal information structure or
in the equilibrium information structure of \cite{yang2019buyer}. As our
symmetric buyer-optimal information structure also provides an upper bound
of the buyers' surplus under the symmetric equilibrium in \cite{yang2019buyer},
it follows that the gap vanishes as the number of buyers
goes to infinity.
In \cite{yang2019buyer}, the buyers' surplus goes to zero because of the
increasing competition in information acquisition with more buyers. In our
case, the limiting zero surplus is driven by the buyer-optimal information
designer's purposeful choice to increase the low virtual value $k^{b}$ in
order to reduce the probability of virtual value $1$. The gain from such
reduction (in the seller's revenue) eventually dominates the opposite
consideration to increase the spread for higher total surplus. Moreover, the
total surplus is driven down to the ex ante expectation of a single buyer's
value (i.e., $p$), which the seller can fully extract. However, we will show
in Section \ref{asyboc} that if we allow for the asymmetric information
structure, the buyer-optimal surplus will remain strictly positive even if
$n\rightarrow \infty $.
\section{Outline of the solution}
\label{Outline of the solution}
Here we outline the steps to solve the information design problems. The key
idea, as we mention in the introduction, is to reduce the problems into
tractable finite-dimensional constrained optimization problems.
\begin{enumerate}
\item We first present two preliminary lemmas to restrict the class of
distributions of interest to the information designer:
\begin{enumerate}
\item In solving the seller-worst information structure, we can assume
without loss of generality that the virtual values are nonnegative. In
solving the buyer-optimal information structure, similarly, we can also show
that the virtual values are \textquotedblleft almost
nonnegative\textquotedblright\ in the sense that they are nonnegative except
that some mass may be placed at signal $x=0$ (Lemma \ref{almost_nonneg}).
\item We can show that the buyer-optimal signal distribution must be almost
regular and the seller-worst signal distribution must be regular (Lemma \ref{regularity}).
\end{enumerate}
\item We change our choice variable from the distribution of signals to the
distribution of virtual values (Lemma \ref{change_variables}).\footnote{The
reason we use a change of variable is that it is not easy to choose the
distribution while preserving its almost regularity at the same time.}
\item We show that the reformulation after change of variable leads to a
tractable reduction to a finite-dimensional problem. When $n=1$, the
reduction yields the solution in the one-buyer case derived by \cite{roesler2017buyer}.
Moreover, this approach can still be applied to derive an
explicit solution even when $n\geq 2$, which is our novel case here.
\end{enumerate}
\subsection{Preliminary Lemmas}
We first establish the following two lemmas.
\begin{lemma}
\label{almost_nonneg}Any optimal signal distribution $G$ which solves the
information design problem in (\ref{info}) must induce almost nonnegative
virtual values (i.e., $G\in \mathcal{G}_{H}^{+}$); moreover, if $G$ is a
solution to (\ref{info}) for $\alpha =0$, it must induce nonnegative virtual
values almost everywhere on $\left[ 0,1\right] $.
\end{lemma}
\begin{proof}
See Appendix \ref{proofalmost}.
\end{proof}
\begin{figure}[tbp]
\centering
\begin{minipage}[c]{0.5\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
tick label style={font=\scriptsize},
xlabel={$x$},
ylabel={Distribution: $G(x)$},
ytick={0, 0.180328, 1},
yticklabels={0,$\theta_0$, 1},
xtick={ 0.305,0.5, 1},
xticklabels={ $x_1$,$x_2$,1},
no markers,
line width=0.3pt,
cycle list={{red,solid}},
samples=200,
smooth,
domain=0:1.3,
xmin=0, xmax=1,
ymin=0, ymax=1,
width=6cm, height=5.5cm,
legend cell align=left,
legend pos= north west,
legend style={draw=none,fill=none,name=legend},
]
\addplot[blue,thick,domain=0:1]{x};
\addplot[red,thick,domain=0.305:0.5]{1-(0.25/x)};
\legend{Original , Modified};
\addplot[red,thick,domain=0.5:1]{x-0.005};
\addplot[red,dashed] coordinates {
(0,0.180328)
(0.305,0.180328)
};
\addplot[dashed] coordinates {
(0.305,0)
(0.305,0.180328)
};
\addplot[dashed] coordinates {
(0.5,-1)
(0.5,0)
};
\addplot[dashed] coordinates {
(0.5,0)
(0.5,0.5)
};
\end{axis}
\end{tikzpicture}
\end{minipage}
\begin{minipage}[c]{0.5\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
tick label style={font=\scriptsize},
xlabel={$x$},
ylabel={Virtual value: $\hat{\varphi}(x)$},
ytick={-0.5,0, 1},
yticklabels={$\hat{\varphi}(0)$,0, 1},
xtick={ 0.305,0.5, 1},
xticklabels={ $x_1$,$x_2$,1},
no markers,
line width=0.3pt,
cycle list={{red,solid}},
samples=200,
smooth,
domain=0:1.3,
xmin=0, xmax=1,
ymin=-1, ymax=1,
width=6cm, height=5.5cm,
legend cell align=left,
legend pos= north west,
legend style={draw=none,fill=none,name=legend},
]
\addplot[blue,thick,domain=0:1]{2*x-0.999};
\addplot[red,thick,domain=0.305:0.5]{0};
\legend{Maybe negative, Almost nonnegative};
\addplot[red,thick,domain=0.5:1]{2*x-1};
\addplot[red,dashed] coordinates {
(0.305,-0.5)
(0.305,0)
};
\addplot[red,thick] coordinates {
(0,-0.5)
(0.305,-0.5)
};
\addplot[dashed] coordinates {
(0.305,-1)
(0.305,-0.5)
};
\addplot[dashed] coordinates {
(0,0)
(0.5,0)
};
\addplot[dashed] coordinates {
(0.5,-1)
(0.5,0)
};
\end{axis}
\end{tikzpicture}
\end{minipage}
\caption{Total surplus is larger, while the seller's revenue is the same.}
\label{nonneg_figure}
\end{figure}
Figure \ref{nonneg_figure} illustrates how to improve the information
designer's objective (for both the buyer-optimal case and the seller-worst
case) by modifying a signal distribution into one which admits almost
nonnegative virtual values. First, the red curve is a strict mean-preserving
spread of the blue curve in the sub-figure on the left side; hence, by Lemma
\ref{convex}, the red curve can generate more expected total surplus.
Second, the nonnegative ironed virtual value of the red curve coincides with
that of the blue curve in the sub-figure on the right side. As the seller
will only allocate the good to a buyer with the highest nonnegative virtual
value, the seller's revenue remains the same. Therefore, the red curve
generates strictly higher surplus for the buyers than the blue curve.
\begin{figure}[tbp]
\centering
\begin{minipage}[c]{0.5\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
tick label style={font=\scriptsize},
xlabel={$x$},
ylabel={Distribution: $G(x)$},
ytick={0, 0.180328, 1},
yticklabels={0, $\theta_0$, 1},
xtick={0.22, 0.305,0.5,0.69, 1},
xticklabels={$x_0$, $x_1$,$x_2$,$x_3$,1},
no markers,
line width=0.3pt,
cycle list={{red,solid}},
samples=200,
smooth,
domain=0:1.3,
xmin=0, xmax=1,
ymin=0, ymax=1,
width=6cm, height=5.5cm,
legend cell align=left,
legend pos= north west,
legend style={draw=none,fill=none,name=legend},
]
\addplot[red,thick,domain=0.305:0.5]{1-(0.25/x)};
\addplot[green,thick,domain=0.22:0.69]{1-(0.22/x)};
\legend{Modified, Further modified};
\addplot[red,thick,domain=0.5:1]{x};
\addplot[red,dashed] coordinates {
(0,0.180328)
(0.305,0.180328)
};
\addplot[dashed] coordinates {
(0.69,0)
(0.69,0.6811)
};
\addplot[dashed] coordinates {
(0.305,0)
(0.305,0.180328)
};
\addplot[dashed] coordinates {
(0.5,0)
(0.5,0.5)
};
\end{axis}
\end{tikzpicture}
\end{minipage}
\begin{minipage}[c]{0.5\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
tick label style={font=\scriptsize},
xlabel={$x$},
ylabel={Virtual value: $\hat{\varphi}(x)$},
ytick={-0.5,0, 1},
yticklabels={$\hat{\varphi}(0)$,0, 1},
xtick={0.22, 0.305,0.5,0.69, 1},
xticklabels={$x_0$, $x_1$,$x_2$,$x_3$,1},
no markers,
line width=0.3pt,
cycle list={{red,solid}},
samples=200,
smooth,
domain=0:1.3,
xmin=0, xmax=1,
ymin=-1, ymax=1,
width=6cm, height=5.5cm,
legend cell align=left,
legend pos= north west,
legend style={draw=none,fill=none,name=legend},
]
\addplot[red,thick,domain=0.305:0.5]{0};
\addplot[green,thick,domain=0.22:0.69]{0};
\legend{Almost nonnegative,Nonnegative};
\addplot[red,thick,domain=0.5:1]{2*x-1};
\addplot[red,dashed] coordinates {
(0.305,-0.5)
(0.305,0)
};
\addplot[red,thick] coordinates {
(0,-0.5)
(0.305,-0.5)
};
\addplot[red,thick] coordinates {
(0.305,0)
(0.5001,0)
};
\addplot[green,thick] coordinates {
(0.22,0)
(0.305,0)
};
\addplot[green,dashed] coordinates {
(0.69,0)
(0.69,0.38)
};
\addplot[dashed] coordinates {
(0.69,-1)
(0.69,0)
};
\addplot[dashed] coordinates {
(0.5,-1)
(0.5,0)
};
\addplot[dashed] coordinates {
(0.305,-1)
(0.305,-0.5)
};
\addplot[dashed] coordinates {
(0.22,-1)
(0.22,0)
};
\end{axis}
\end{tikzpicture}
\end{minipage}
\caption{A nonnegative virtual value distribution generates less revenue for
the seller.}
\label{diff_figure}
\end{figure}
Figure \ref{diff_figure} illustrates how to further decrease the seller's
revenue by further modifying a distribution into one which admits only
nonnegative virtual values. First, the green curve generates the same mean
as the red one in the sub-figure on the left side; hence the green curve is
a feasible distribution. Second, the virtual values of the red curve are
strictly higher than those of the green curve over the interval $(x_{2},x_{3})$;
hence the green curve generates strictly less revenue than the red one. As
we mentioned, by raising a negative virtual value to zero, the information
designer can decrease the probability assigned to a positive virtual value
and thereby lower the seller's revenue.
\begin{lemma}
\label{regularity}Any optimal signal distribution $G$ which solves the
information design problem in (\ref{info}) must be almost regular; moreover,
if $G$ is a solution to (\ref{info}) for $\alpha =0$, it must be regular.
\end{lemma}
\begin{proof}
See Appendix \ref{proofregular}.
\end{proof}
By Lemma \ref{regularity}, we will use $\varphi $ instead of $\hat{\varphi}$
to denote the virtual value hereafter. Figure \ref{regular_figure}
illustrates how to improve the buyer-optimal information designer's
objective with almost regular distributions. First,
the red curve is a strict mean-preserving spread of the blue curve in the
first sub-figure.\footnote{We draw the blue curve above the red curve on the
ironed interval $[x_{1},x_{2}]$ because the Pareto signal distribution with
virtual value $k$
first-order stochastically dominates the original signal distribution; see
Lemma \ref{pre} in Appendix \ref{proofpre}.} Hence, by Lemma \ref{convex}
and Lemma \ref{almost_nonneg}, the red curve can generate more expected
total surplus. Second, the ironed virtual values of the blue curve will be
weakly higher than those of the red curve in the second sub-figure; hence,
the seller's revenue is weakly less. Overall, the buyer-optimal information
designer's objective value is higher under the red curve than under the blue
curve. For the seller-worst case, it directly follows from the nonnegativity
of Lemma \ref{almost_nonneg} that an optimal signal distribution must be
regular.
\begin{figure}[tbp]
\centering
\begin{minipage}[c]{0.5\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
tick label style={font=\scriptsize},
xlabel={$x$},
ylabel={Distribution: $G(x)$},
ytick={0, 0.09366, 1},
yticklabels={0, $\theta_0$, 1},
xtick={0.1287, 0.25,0.5, 1},
xticklabels={$x_0$, $x_1$,$x_2$, 1},
no markers,
line width=0.3pt,
cycle list={{red,solid}},
samples=200,
smooth,
domain=0:1.3,
xmin=0, xmax=1,
ymin=0, ymax=1,
width=6cm, height=5.5cm,
legend cell align=left,
legend pos= north west,
legend style={draw=none,fill=none,name=legend},
]
\addplot[blue,thick,domain=0.1287:0.25]{1-(0.1287/(x))};
\addplot[red,thick,domain=0.142:0.5]{1-(0.1287/(x))};
\legend{Irregular, Almost Regular};
\addplot[blue,thick,domain=0.25:1/3]{2*x};
\addplot[blue,thick,domain=1/3:1] {(1/2)*x+0.5};
\addplot[red,thick,domain=0.5:1]{(1/2)*x+0.495};
\addplot[red,dashed] coordinates {
(0,0.09366)
(0.142,0.09366)
};
\addplot[dashed] coordinates {
(0.25,0)
(0.25,0.5)
};
\addplot[dashed] coordinates {
(0.5,0)
(0.5,0.75)
};
\node [coordinate,pin=right:{$1-\frac{\zeta}{x-k}$}]
at (axis cs:0.35,0.632286) {};
\end{axis}
\end{tikzpicture}
\end{minipage}
\begin{minipage}[c]{0.5\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
tick label style={font=\scriptsize},
xlabel={$x$},
ylabel={Virtual value: $\hat{\varphi}(x)$},
ytick={-0.4, 0, 1},
yticklabels={$\hat{\varphi}(0)$, $k$, 1},
xtick={0.124, 0.25,0.5, 1},
xticklabels={$x_0$, $x_1$, $x_2$, 1},
no markers,
line width=0.3pt,
cycle list={{red,solid}},
samples=200,
smooth,
domain=0:1.3,
xmin=0, xmax=1,
ymin=-0.5, ymax=1,
width=6cm, height=5.5cm,
legend cell align=left,
legend pos= north west,
legend style={draw=none,fill=none,name=legend},
]
\addplot[blue,thick,domain=0.25:1/3]{2*x-0.49};
\addplot[red,thick,domain=0.142:0.5]{-0.01};
\legend{Irregular, Almost Regular};
\addplot[blue,thick,domain=0.124:0.25]{0};
\addplot[blue,thick,domain=1/3:1] {2*x-0.99};
\addplot[red,thick,domain=0.5:1]{2*x-1};
\addplot[blue,thick,domain=0.25:0.5] coordinates {
(0.25,0)
(0.5,0)
};
\addplot[red,thick] coordinates {
(0,-0.4)
(0.142,-0.4)
};
\addplot[blue,dashed] coordinates {
(1/3,2/3-1)
(1/3,2/3-0.5)
};
\addplot[dashed] coordinates {
(0.25,-0.5)
(0.25,0)
};
\addplot[blue,dashed] coordinates {
(0.124,-0.5)
(0.124,0)
};
\addplot[dashed] coordinates {
(0.5,-0.5)
(0.5,0)
};
\addplot[red,dashed] coordinates {
(0.142,-0.4)
(0.142,0)
};
\node [coordinate,pin=90:{$x-\frac{1-G}{g}=k$}]
at (axis cs:0.3,0.1) {};
\node [coordinate,pin=310:{Ironing}]
at (axis cs:0.4,0) {};
\end{axis}
\end{tikzpicture}
\end{minipage}
\caption{An improvement by almost regular distributions}
\label{regular_figure}
\end{figure}
Here we are able to argue that (almost) regularity entails no loss of
generality, because we assume that the seller chooses Myerson's optimal
auction. As we will argue in Section \ref{continuous prior}, this is no
longer the case when the seller is committed to using a second-price auction
(with reserve).
\subsection{Change of variable}
We now introduce the key step of changing our control variables in the
information design problems. Let $F(k)$ be the distribution of virtual
values given a feasible signal distribution $G$. Since $G$ is almost
regular, $F\left( k\right) =Prob_{G}\left\{ x\vert \varphi(x)\leq k\right\} $. Then, except for $x=0$, the virtual value of $G$ at signal $x$ is
\begin{equation}
\varphi (x)=k=x\left( k\right) -\frac{1-G(x)}{G^{\prime }(x)}=x\left(
k\right) -\frac{1-F(k)}{F^{\prime }(k)}x^{\prime }\left( k\right) .
\label{varphi}
\end{equation}
The change of variable enables us to express the true value $x(k)$ in terms
of the virtual value $k$, when $\varphi $ is strictly increasing in $x$:
\begin{lemma}
\label{changeofvariable}Suppose that $\varphi $ is strictly increasing in $x$.
For each $k$ with $\varphi (x)=k$, $x(k)$ is the buyer's expected virtual
value conditional on his virtual value being greater than or equal to $k$,
i.e.,
\begin{equation}
x\left( k\right) =\mathbb{E}[\varphi |\varphi \geq k]=k+\frac{\int_{k}^{1}(1-F(s))\,\mathrm{d}s}{1-F(k)}. \label{cv}
\end{equation}
\end{lemma}
Since $\varphi (x)$ is strictly increasing in $x$, we have
\begin{equation*}
x^{\prime }\left( k\right) -\frac{F^{\prime }(k)}{1-F(k)}\cdot x\left(
k\right) =\frac{-kF^{\prime }(k)}{1-F(k)}.
\end{equation*}
Solving the first-order ordinary differential equation, we obtain $x(k)$ as
in Lemma \ref{changeofvariable}.
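For completeness, one may verify directly that the expression in (\ref{cv})
solves this differential equation. Writing $I(k)=\int_{k}^{1}(1-F(s))\,\mathrm{d}s$,
so that $x(k)=k+I(k)/(1-F(k))$, we have
\begin{equation*}
x^{\prime }(k)=1+\frac{-(1-F(k))^{2}+I(k)F^{\prime }(k)}{(1-F(k))^{2}}
=\frac{I(k)F^{\prime }(k)}{(1-F(k))^{2}},
\end{equation*}
and hence
\begin{equation*}
x^{\prime }(k)-\frac{F^{\prime }(k)}{1-F(k)}x(k)
=\frac{I(k)F^{\prime }(k)}{(1-F(k))^{2}}
-\frac{F^{\prime }(k)}{1-F(k)}\left( k+\frac{I(k)}{1-F(k)}\right)
=\frac{-kF^{\prime }(k)}{1-F(k)},
\end{equation*}
as required.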
In fact, we can still make use of the expression of $x\left( k\right) $ in
(\ref{cv}), even when $\varphi (x)$ is only weakly increasing (i.e., $\varphi
$ is almost regular). To see this, note that any weakly increasing function
can be uniformly approximated by a strictly increasing function. Let
$\{\varphi _{m}\}_{m=1}^{\infty }$ be a sequence of strictly increasing
functions converging uniformly to $\varphi $. For each $m$, let $G_{m}$ and
$F_{m}$ be sequences of signal distributions and virtual value distributions
corresponding to $\varphi _{m}$. Specifically, by solving Equation (\ref{varphi}),
$G_{m}(x)=1-\exp \left( \int_{0}^{x}\left( \varphi
_{m}(t)-t\right) ^{-1}\mathrm{d}t\right) $ and $F_{m}$ is the virtual value
distribution induced by $G_{m}$. Since $\left\{ \varphi _{m}\right\} $
converges uniformly to $\varphi $, we also have $\left\{ G_{m}\right\} $ and
$\left\{ F_{m}\right\} $ uniformly converge to $G$ and $F$, respectively. In
Appendix \ref{proofchange2}, we will use the expression in (\ref{cv}) to
establish the following two equations:
\begin{align}
& \int_{0}^{1}x\mathrm{d}G(x)=1-\int_{0}^{1}G(x)\mathrm{d}x \notag \\
=& \int_{0}^{1}(1-F(k))(1-\log (1-F(k))-\log (1-F(0^{-})))\mathrm{d}k.
\label{m1} \\
& \int_{0}^{1}x\mathrm{d}G^{n}(x)=1-\int_{0}^{1}G^{n}(x)\mathrm{d}x \notag
\\
=& \int_{0}^{1}n(1-F(k))\left( \sum_{i=1}^{n-1}\frac{-F^{i}(k)}{i}-\log
(1-F(k))+\left( \sum_{i=1}^{n-1}\frac{F^{i}(0^{-})}{i}+\log
(1-F(0^{-}))\right) \right) -F^{n}(k)\mathrm{d}k+1. \label{m2}
\end{align}
where $G\left( 0\right) =F\left( 0^{-}\right) $. Indeed, we show in Appendix
\ref{proofchange2} that the equations in \eqref{m1} and \eqref{m2} hold
for each $G_{m}$ and $F_{m}$ and then apply the bounded convergence theorem.
Equation \eqref{m1} expresses the mean and Equation \eqref{m2} expresses
the total surplus under the distribution $G$. Hence, we have the following
lemma:
\begin{lemma}
\label{change_variables}After the change of variable, the information
designer's problem in \eqref{info} can be written as follows:
\begin{align}
\max_{F(k)}& \int_{0}^{1}\alpha n(1-F(k))\left( \sum_{i=1}^{n-1}\frac{-F^{i}(k)}{i}-\log (1-F(k))+\left( \sum_{i=1}^{n-1}\frac{F^{i}(0^-)}{i}+\log
(1-F(0^-))\right) \right) \,\mathrm{d}k \label{info_object} \\
& +(1-\alpha )\int_{0}^{1}F^{n}(k)\mathrm{d}k+(\alpha -1) \notag \\
\text{s.t. }& \int_{0}^{1}(1-F(k))(1-\log (1-F(k))-\log (1-F(0^-)))\mathrm{d}k=p. \notag
\end{align}
\end{lemma}
\begin{proof}
It directly follows from Equations \eqref{m1} and \eqref{m2}.
\end{proof}
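Equation \eqref{m2} can also be checked numerically for a concrete pair
$(G,F)$. The Python sketch below (with illustrative parameter values of our
own choosing) compares the direct quadrature of
$1-\int_{0}^{1}G^{n}(x)\mathrm{d}x$ for a truncated Pareto $G$ with the
right-hand side of \eqref{m2} evaluated at the induced step virtual value
distribution $F$.
\begin{verbatim}
import math
import numpy as np

n = 2
a = 0.2            # lower endpoint of the truncated Pareto signal distribution
theta = 1.0 - a    # F(k) = theta on [0, 1): virtual value 0 with prob. theta

x = np.linspace(0.0, 1.0, 200001)
G = np.where(x < a, 0.0, 1.0 - a / np.maximum(x, a))
lhs = 1.0 - np.trapz(G ** n, x)  # direct computation of the total surplus

# Right-hand side of (m2) for the step distribution F with F(0^-) = 0:
rhs = (n * (1.0 - theta)
       * (-sum(theta ** i / i for i in range(1, n)) - math.log(1.0 - theta))
       - theta ** n + 1.0)
print(lhs, rhs)    # the two values agree up to quadrature error
\end{verbatim}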
\subsection{The case with $n=1$: \protect\cite{roesler2017buyer} revisited}
\label{rs17}
We are now ready to solve the information design problem for the case with
$n=1$, which is analyzed in \cite{roesler2017buyer}. For $n=1$, we can
rewrite the information designer's problem as
\begin{align*}
& \max_{F(k)}\quad \alpha \underbrace{\int_{0}^{1}x\mathrm{d}G}_{\text{total
surplus}}-\underbrace{\int_{0}^{1}k\mathrm{d}F(k)}_{\text{seller's revenue}}
\\
\text{s.t. }& \int_{0}^{1}(1-F(k))(1-\log (1-F(k))-\log (1-F(0^-)))\mathrm{d}k=p.
\end{align*}
When $n=1$, the total surplus $\int_{0}^{1}x\mathrm{d}G$ is linear in $G$;
moreover, $\int_{0}^{1}x\mathrm{d}G=p$ by the mean constraint.\footnote{Here,
we do not use the virtual value distribution to represent the total
surplus when $n=1$, since even if we change the variables, the total surplus
is still $p$ by the mean constraint.} Therefore, the value of $\alpha $ has
no effect on the optimization. This implies that the buyer-optimal
information structure is equivalent to the seller-worst information
structure. Since for the seller-worst case, the virtual value is always
nonnegative, we have $G(0)=F(0^-)=0$, i.e., there is no mass on $x=0$.
The information design problem is an isoperimetric problem in optimal
control theory; see Theorem 4.2.1 of \cite{van2004isoperimetric}. To solve
the optimal control problem, we can write the Lagrangian formula as
\begin{equation*}
\mathcal{L}(F,\lambda )=\alpha p+\int_{0}^{1}\left( F-\lambda ((1-F)(1-\log
(1-F))-p)\right) \mathrm{d}k-1.
\end{equation*}
Let $\theta =F(k)$. Then, for each $k$, the Euler-Lagrange equation implies
that
\begin{equation*}
\partial \mathcal{L}/\partial \theta =1-\lambda \log (1-\theta )=0.
\end{equation*}
Since $\lambda $ is constant for any $k$, there exists a unique $\theta
^{\ast }\in (0,1)$ such that $F(k)$ must be constant and equal to $\theta
^{\ast }$ except at $k=1$. That is, the support of $F(k)$ has at most two
points, $k$ and $1$.
Now, the information designer only needs to choose $\{k,F(k)=\theta \}$ to
maximize
\begin{align*}
& \max_{k\geq 0,\theta }\alpha p-\left( \theta \times k+(1-\theta )\times
1\right) \\
\text{s.t. }& k+(1-k)(1-\theta )(1-\log (1-\theta ))=p.
\end{align*}
The Lagrangian is
\begin{equation*}
\mathcal{L}(k,\theta ,\lambda ,\mu )=\alpha p-k\theta -(1-\theta )+\lambda
\left( p-(k+(1-k)(1-\theta )(1-\log (1-\theta )))\right) +\mu k.
\end{equation*}
with the Euler-Lagrange equation for $\theta $:
\begin{equation*}
\dfrac{\partial \mathcal{L}}{\partial \theta }=(1-k)\left( 1-\lambda (\log
(1-\theta ))\right) =0\Leftrightarrow \lambda =1/\log (1-\theta )
\end{equation*}
and the Euler-Lagrange equation for $k$:
\begin{align*}
\dfrac{\partial \mathcal{L}}{\partial k}& =-\theta -\lambda (\theta
+(1-\theta )\log (1-\theta ))+\mu \\
& =\frac{\theta +\log (1-\theta )}{-\log (1-\theta )}+\mu =0.
\end{align*}
Since $\frac{\theta +\log (1-\theta )}{-\log (1-\theta )}<0$ for $\theta \in
(0,1)$, we have $\mu >0$, and therefore the optimal $k=0$. In summary, we have
reproduced the optimal signal distribution derived in \cite{roesler2017buyer},
namely,
\begin{equation*}
G(x)=
\begin{cases}
1-\frac{1-\theta }{x} & \text{ if }x\in \lbrack 1-\theta ,1); \\
1 & \text{ if }x=1
\end{cases}
\end{equation*}
Under the optimal signal distribution, for $x\in \lbrack 1-\theta ,1)$, the
virtual value is $0$ with probability $\theta $. For $x=1$, the virtual
value is $1$ with probability $1-\theta $.
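As a numerical companion to this one-buyer solution (a sketch with helper
names of our own; it is not part of the proof), the snippet below recovers the
two-point distribution for a given prior mean $p$: it solves the mean
constraint $(1-\theta )(1-\log (1-\theta ))=p$ for $\theta $ and reports the
seller's revenue $1-\theta $ and the buyer's surplus $p-(1-\theta )$.
\begin{verbatim}
import math

def solve_theta(p, tol=1e-12):
    # Solve (1 - theta) * (1 - log(1 - theta)) = p for theta in [0, 1).
    # With a = 1 - theta, the map a -> a * (1 - log(a)) is increasing on
    # (0, 1], so we bisect on a and return theta = 1 - a.
    lo, hi = 1e-15, 1.0
    while hi - lo > tol:
        a = 0.5 * (lo + hi)
        if a * (1.0 - math.log(a)) < p:
            lo = a
        else:
            hi = a
    return 1.0 - 0.5 * (lo + hi)

if __name__ == "__main__":
    p = 0.5
    theta = solve_theta(p)
    revenue = 1.0 - theta               # virtual value 1 with prob. 1 - theta
    print(theta, revenue, p - revenue)  # buyer surplus = p - (1 - theta)
\end{verbatim}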
\subsection{The case with $n \geq 2$}
\label{rsn2}
Similarly to the case with $n=1$, we can reduce the infinite-dimensional
information design problem into a finite-dimensional problem by the
following lemma.
\begin{lemma}
The support of any optimal virtual value distribution $F(k)$ has at most two
points, say $\{k,1\}$.
\end{lemma}
\begin{proof}
Again, the information design problem is also an isoperimetric problem in
optimal control theory. Define the following Lagrangian formula,
\begin{align*}
\mathcal{L}(F,\lambda )=& \int_{0}^{1}\alpha n(1-F(k))\left( \sum_{i=1}^{n-1}\frac{-F^{i}(k)}{i}-\log (1-F(k))+\left( \sum_{i=1}^{n-1}\frac{F^{i}(0^-)}{i}+\log (1-F(0^-))\right) \right) +(1-\alpha )F^{n}(k)\mathrm{d}k \\
& -\int_{0}^{1}\lambda (1-F(k))(1-\log (1-F(k))-\log
(1-F(0^-)))\mathrm{d}k+p\lambda +(\alpha -1)\text{.}
\end{align*}
Let $\theta _{0}=F(0^-)=G\left( 0\right) $ and $\theta =F\left( k\right) $.
By Theorem 4.2.1 of \cite{van2004isoperimetric}, the Euler-Lagrange equation
for $\mathcal{L}$ and each state $k$ should be satisfied as follows:
\begin{gather*}
I_{\alpha }(\theta )\equiv n\alpha \sum_{i=1}^{n-1}\theta ^{i}/i+n(2-\alpha
)\theta ^{n-1}+n\alpha \log (1-\theta )-\lambda \log (1-\theta ) \\
+\left( -n\alpha \sum_{i=1}^{n-1}\theta _{0}^{i}/i-(n\alpha -\lambda )\log
(1-\theta _{0})\right) =0.
\end{gather*}
Taking the derivative of $I_{\alpha }(\theta )$ with respect to $\theta $, we
have:
\begin{equation*}
I_{\alpha }^{\prime }(\theta )=\frac{\lambda +n\theta ^{n-2}\left( (2-\alpha
)(n-1)+(2+\alpha (n-2)-2n)\theta \right) }{1-\theta }.
\end{equation*}
We prove the following lemma in Appendix \ref{proofstablesoln}:
\begin{lemma}
\label{stablesoln}There is at most one $\theta $ with $I_{\alpha }(\theta
)=0 $ which also satisfies the second-order condition.\footnote{We also draw
the curves of $I_{\alpha }(\theta )$ and $I_{\alpha }^{\prime
}(\theta )$ in Figure \ref{local_figure} to illustrate this lemma. In Figure
\ref{local_figure}, we choose the parameters to be $n=3$, $\alpha =1$,
$\lambda =-0.5$, and $\theta _{0}=0$.}
\end{lemma}
By Lemma \ref{stablesoln}, for any state $k$, $F(k)$ is constant except at $k=1$.
Moreover, $F(1)=1$. Thus, the support of virtual values has two points,
$\{k,1\}$, with $F(k)=\theta $ and $F(1)=1$.
\end{proof}
\begin{figure}[tbp]
\centering
\begin{minipage}[c]{0.5\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
tick label style={font=\scriptsize},
xlabel={$\theta$},
ylabel={ $I'(\theta)$},
ytick={-0.7,0, 0.7},
yticklabels={-0.7,0, 0.7},
xtick={0, 0.099,0.566},
xticklabels={0, $\theta_1$, $\theta_2$},
no markers,
line width=0.3pt,
cycle list={{red,solid}},
samples=200,
smooth,
domain=-2:1.3,
xmin=0, xmax=0.8,
ymin=-0.7, ymax=0.7,
width=6cm, height=6cm,
legend cell align=left,
legend pos= north west,
legend style={draw=none,fill=none,name=legend},
]
\addplot[blue,thick,domain=0:1]{-0.5+3*x*(2-3*x)};
\addplot[thick] coordinates {
(0,0)
(1,0)
};
\addplot[dashed] coordinates {
(0.099,0)
(0.099,-0.7)
};
\addplot[dashed] coordinates {
(0.566,0)
(0.566,-0.7)
};
\node at (axis cs:0.33,0.2) {$I_1(\theta)\nearrow$ };
\node at (axis cs:0.7,-0.3) {$I_1(\theta)\searrow$};
\node at (axis cs:0.1,-0.3) {$I_1(\theta)\searrow$};
\end{axis}
\end{tikzpicture}
\end{minipage}
\begin{minipage}[c]{0.5\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
tick label style={font=\scriptsize},
xlabel={$\theta$},
ylabel={ $I_1(\theta)$},
ytick={-0.5,0, 0.5},
yticklabels={-0.5,0, 0.5},
xtick={0, 0.099,0.566},
xticklabels={0, $\theta_1$, $\theta_2$},
no markers,
line width=0.3pt,
cycle list={{red,solid}},
samples=200,
smooth,
domain=-2:1.3,
xmin=0, xmax=0.8,
ymin=-0.5, ymax=0.5,
width=6cm, height=6cm,
legend cell align=left,
legend pos= north west,
legend style={draw=none,fill=none,name=legend},
]
\addplot[red,thick,domain=0:1]{1.5*x*(2+3*x)+3.5*ln(1-x)};
\addplot[thick] coordinates {
(0,0)
(1,0)
};
\addplot[dashed] coordinates {
(0.099,-0.03)
(0.099,-0.7)
};
\addplot[dashed] coordinates {
(0.566,0.22)
(0.566,-0.7)
};
\node [coordinate,pin=90:{local min}]
at (axis cs:0.2,0) {};
\node [coordinate,pin=245:{local max}]
at (axis cs:0.735,0) {};
\end{axis}
\end{tikzpicture}
\end{minipage}
\caption{The curves of $I_1^{\prime }(\protect\theta)$ and $I_1(\protect\theta)$.}
\label{local_figure}
\end{figure}
Therefore, the information designer will choose the optimal $k,\theta $, and
$\theta _{0}$ (with $\theta =F(k)$ and $\theta _{0}=G(0)=F(0^-)$) such that
\begin{align}
\max_{\{\theta _{0},k,\theta \}}& \left( \alpha n(1-k)(1-\theta )\left(
\sum_{i=1}^{n-1}\frac{-\theta ^{i}}{i}-\log (1-\theta )+\left(
\sum_{i=1}^{n-1}\frac{\theta _{0}^{i}}{i}+\log (1-\theta _{0})\right)
\right) \right) \notag \\
& +(\alpha -1)+(1-\alpha )(1-k)\theta ^{n}+(1-\alpha )k\theta _{0}^{n}
\label{finiteinfo} \\
\text{s.t. }& (1-k)\left( (1-\theta )(1-\log (1-\theta )+\log (1-\theta
_{0}))\right) +k(1-\theta _{0})=p, \notag \\
& k\geq 0,\quad \theta _{0}\geq 0,\quad \text{and}\quad \theta _{0}\leq
\theta \leq 1. \notag
\end{align}
The solution to this finite-dimensional optimization problem is standard and
we present the details in Appendices \ref{fsw} and \ref{fbo}. In fact, the
solution to this finite-dimensional problem is unique. Since the information
design problem has at least one solution by Lemma \ref{existence}, this unique
solution is globally optimal, and this concludes the proof of Theorems
\ref{result_seller} and \ref{result_buyer}.
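To illustrate how the reduced problem can be handled numerically, the
following Python sketch (the function names are ours) performs a brute-force
search over $(k,\theta )$ for the seller-worst case ($\alpha =0$, so that
$\theta _{0}=0$ by Lemma \ref{almost_nonneg}) with $n=2$ and $p=0.5$; it is
only a sanity check of Theorem \ref{result_seller}, not a substitute for the
analytical solution.
\begin{verbatim}
import math

def theta_from_constraint(k, p, tol=1e-12):
    # With theta_0 = 0 the mean constraint reads
    # (1 - k) * (1 - theta) * (1 - log(1 - theta)) + k = p.
    # Solve for theta by bisection on a = 1 - theta.
    target = (p - k) / (1.0 - k)
    lo, hi = 1e-15, 1.0
    while hi - lo > tol:
        a = 0.5 * (lo + hi)
        if a * (1.0 - math.log(a)) < target:
            lo = a
        else:
            hi = a
    return 1.0 - 0.5 * (lo + hi)

def seller_worst_search(p, n, steps=2001):
    best = None
    for i in range(steps):
        k = p * i / (steps - 1)         # the lower virtual value, k in [0, p]
        theta = theta_from_constraint(k, p)
        revenue = 1.0 - (1.0 - k) * theta ** n
        if best is None or revenue < best[0]:
            best = (revenue, k, theta)
    return best

if __name__ == "__main__":
    print(seller_worst_search(0.5, 2))  # ~ (0.3385, 0.0, 0.813)
\end{verbatim}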
We briefly comment on how our approach differs from that of \cite{roesler2017buyer}.
When there is only one buyer, a posted price mechanism
is optimal; see Proposition 2.5 of \cite{borgers2015introduction}. Hence, a
signal distribution matters only in determining the optimal posted price,
i.e., $\varphi ^{-1}\left( 0\right) $. This is how \cite{roesler2017buyer}
are able to argue that it entails no loss of generality to focus on a class
of Pareto distributions and that the two-point Pareto distribution with virtual
values $0$ and $1$ is the buyer-optimal information structure. When there
are multiple buyers, we may take the second-price auction with an optimal
reserve price as an extension of the posted price mechanism. Unlike \cite{roesler2017buyer},
however, a second-price auction with a reserve price
need not be an optimal auction against an irregular signal distribution.
To wit, while an irregular signal distribution can be ironed into a regular
signal distribution, the optimal expected revenue under the irregular
distribution is the same as the revenue of a second-price auction with
reserve under the regular distribution (obtained from ironing) rather than
the irregular distribution.\footnote{For the sake of completeness, we provide
an example in Appendix \ref{irexample} to illustrate this point; see also the
working paper version of
\cite{monteiro2010optimal} for a similar example. In particular, our example
admits a density and nonnegative (ironed) virtual values for every signal.}
We show in Section \ref{continuous prior} that when the seller is committed to
using a second-price auction with no reserve price and $n=2$, the
seller-worst information structure is fully revealing. Full revelation
results in the binary prior, which is an irregular distribution.
\subsection{Connection to second-price auction with an optimal reserve price}
\label{connection}
A recent paper by \cite{suzdaltsev2020distributionally} studies an optimal
reserve price of a second-price auction to maximize the expected revenue
guarantee when the seller knows only the mean of the buyers' value distribution
and an upper bound on values. Under the assumption that the buyers' values
are independently and identically distributed across the
bidders, \cite{suzdaltsev2020distributionally} shows that it is optimal to
set the reserve price to the seller's own valuation, which is zero in our setup.
In the information design problem which we analyze, since the seller moves
after the information designer, our seller-worst revenue is an upper bound
of the revenue guarantee in \cite{suzdaltsev2020distributionally}. Moreover,
it follows from Theorem \ref{result_seller} that the seller-worst signal
distribution is regular and has nonnegative virtual values. As a result, our
seller will also set the reserve price equal to zero in response to the
seller-worst information structure. However, our seller-worst signal
distribution is derived with respect to Myerson's optimal auction and may no
longer be revenue-minimizing when the seller is committed to using a
second-price auction with an optimal reserve price. Hence, we have
\begin{equation*}
\text{the seller-worst revenue}\geq \min_{\text{signal}}\max_{\text{reserve}}\text{Revenue}\geq \max_{\text{reserve}}\min_{\text{signal}}\text{Revenue}=\text{Suzdaltsev's revenue}.
\end{equation*}
In fact, our result shows that
\begin{equation*}
\text{the seller-worst revenue}>\text{Suzdaltsev's revenue.}
\end{equation*}
For instance, when $n=2$ and $p=0.5$, the seller-worst revenue is
$2a-a^{2}=0.3385$, where $a$ satisfies $a-a\log (a)=p$, which is obtained
from the truncated Pareto distribution, whereas Suzdaltsev's revenue is
$p^{2}=0.25$, which is obtained under full revelation; see \cite{suzdaltsev2020distributionally}.
To see the intuition, consider the case
with $n=2$. First, full revelation is not optimal in our model. This is
because under full revelation, the seller is better off changing the reserve
price from $0$ to $1$. Second, our seller-worst signal distribution is no
longer revenue-minimizing if the seller is committed to using a second-price
auction with no reserve price. Since the seller obtains the minimum of the
two buyers' values, which is a concave function, the information designer
would like to maximize the spread by fully revealing; see Proposition \ref{cps} in
Section \ref{continuous prior}.\footnote{\cite{che2019distributionally} also
studies the optimal reserve price which maximizes the expected revenue
guarantee of a second-price auction under mean constraints. However, \cite{che2019distributionally}
allows correlated signal distributions, and hence
the difference between his optimal revenue guarantee and our seller-worst revenue
is not purely driven by the order of moves between the information designer
and the mechanism designer.}
\section{Asymmetric information structures}
\label{Asymmetric information structures}
So far we assume that the information designer chooses the same signal
distribution across all buyers. A natural question is whether the
information designer can do better by choosing different signal
distributions for different buyers. The short answer is \textquotedblleft
No\textquotedblright\ for the seller-worst information design problem and
\textquotedblleft Yes\textquotedblright\ for the buyer-optimal information
design problem. We explain further below.
To allow for asymmetric signal distribution, we first need to reformulate
the information design problems. Let $M(x)=\{i\in N|\hat{\varphi}(x_{i}|G_{i})\geq \max_{j}\{\hat{\varphi}(x_{j}|G_{j}),0\}\}$ be the set of
buyers who have the largest nonnegative virtual value and $M^{\prime
}(x)=\{i\in M(x)|x_{i}\geq x_{j},\forall j\in M(x)\}$ be the set of
buyers who not only have the largest virtual value but also the largest
signal among those with the highest virtual value. Then, the optimal auction
allocation rule for buyer $i$ when all buyers report their signals is given
by,
\begin{equation*}
q_{i}(x_{i},x_{-i})=
\begin{cases}
\frac{1}{|M^{\prime }(x)|}, & \mbox{if }i\in M^{\prime }(x); \\
0, & \mbox{if }i\not\in M^{\prime }(x)
\end{cases}
\end{equation*}
We now study the following information design problem:
\begin{align}
\max_{\{G_{i}(x_{i})\}_{i=1}^{n}}\int_{[0,1]^{n}}& \sum_{i=1}^{n}\left(
\alpha x_{i}-\hat{\varphi}(x_{i}|G_{i})\right)
q_{i}(x_{i},x_{-i})\prod_{i=1}^{n}\left( \mathrm{d}G_{i}(x_{i})\right)
\label{info2} \\
\text{s.t. }& \int_{0}^{1}1-G_{i}(x_{i})\mathrm{d}x_{i}=p,\forall i=1,\cdots
,n. \notag
\end{align}
\subsection{The seller-worst case}
In this section, we show that the optimal symmetric seller-worst information
structure in Theorem \ref{result_seller} remains the unique optimal
seller-worst information structure, even if the information designer can
choose an asymmetric information structure. We first document the existence
of the solution to problem \eqref{info2} when $\alpha =0$. The proof is
similar to the proof of Lemma \ref{existence} and omitted.
\begin{lemma}
\label{existence2}For the problem in \eqref{info2} with $\alpha =0$, an
optimal solution exists.
\end{lemma}
The following lemma corresponds to Lemmas \ref{almost_nonneg} and \ref{regularity}.
The proof is similar and we only provide a sketch in Appendix
\ref{proofnonneg}.
\begin{lemma}
\label{nonneg}Any optimal signal distribution $G$ which solves the
information design problem in \eqref{info2} must be regular and induce
nonnegative virtual values almost everywhere on $\left[ 0,1\right] $.
\end{lemma}
Then, similarly to Lemma \ref{change_variables}, it follows from Lemma \ref{nonneg}
that the asymmetric information design problem can be reformulated
as
\begin{align}
\max_{\{F_{i}(k)\}_{i=1}^{n}}& \int_{0}^{1}\prod_{i=1}^{n}F_{i}(k)\mathrm{d}k-1 \label{adp} \\
\text{s.t. }& \int_{0}^{1}(1-F_{i}(k))(1-\log (1-F_{i}(k)))\mathrm{d}k=p,\quad \forall i=1,\cdots ,n. \notag
\end{align}
We state and prove the following theorem.
\begin{theorem}
\label{asy_seller}The unique seller-worst information structure in Theorem
\ref{result_seller} remains the unique seller-worst information structure
which solves the problem in (\ref{info2}) with $\alpha =0$.
\end{theorem}
\begin{proof}
For any profile of virtual value distributions $\{F_{i}\}_{i=1}^{n}$, let
$F(k)\equiv \frac{1}{n}\sum_{i=1}^{n}F_{i}(k)$ and denote by $F$ a symmetric
signal distribution profile where each buyer receives his signal according
to $F$.
First, the symmetric signal distribution $F$ yields weakly less revenue. For
any $k$, by the inequality of arithmetic and geometric means, we have
\begin{equation}
F^{n}(k)=\left( \frac{1}{n}\sum_{i=1}^{n}F_{i}(k)\right) ^{n}\geq
\prod_{i=1}^{n}F_{i}(k). \label{ag}
\end{equation}
Moreover, equality in (\ref{ag}) holds if and only if $F_{i}(k)=F(k)$ for all
$i$. Integrating both sides yields that $\int_{0}^{1}F^{n}(k)\mathrm{d}k\geq
\int_{0}^{1}\prod_{i=1}^{n}F_{i}(k)\mathrm{d}k$. Hence, $F$ yields weakly
less revenue than $\{F_{i}\}_{i=1}^{n}$ and strictly less revenue when
$F_{i}\neq F$ for some $i$.
Second, $F$ generates a weakly higher mean than $p$. Indeed, the integral term
in the mean constraint is strictly concave with respect to $F(k)$. That is,
$I^{\prime \prime }(\theta )=-1/(1-\theta )<0$ where $I(\theta )=(1-\theta
)(1-\log (1-\theta ))$. Moreover, since $F_{i}$ admits only nonnegative
virtual values in $\left[ 0,1\right] $, $F$ also admits only nonnegative
virtual values in $\left[ 0,1\right] $. Then, define
\begin{equation*}
\hat{F}\left( k\right) =
\begin{cases}
F(k_{1}^{-}), & \mbox{if }k\in \lbrack 0,k_{1}) \\
F(k), & \mbox{if }k\in \lbrack k_{1},1]
\end{cases}
\end{equation*}
for some $k_{1}$ so that $\hat{F}$ satisfies the constraint in (\ref{adp}).
Since $\hat{F}(k)\geq F(k)$ for any $k$, $\int_{0}^{1}\hat{F}^{n}(k)\mathrm{d}k\geq \int_{0}^{1}F^{n}(k)\mathrm{d}k$ and $\hat{F}$ yields weakly less revenue
than $F$. Therefore, the improvement from $\{F_{i}\}_{i=1}^{n}$ to $\hat{F}$
is strict when $F_{i}\neq F$ for some $i$. It follows that the symmetric
seller-worst information structure in Theorem \ref{result_seller} remains
the unique seller-worst information structure for the problem in (\ref{adp}).
\end{proof}
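The role of the arithmetic--geometric mean inequality in this argument can be
illustrated with a small numerical example (ours; the two distributions below
are arbitrary and ignore the mean constraint, which plays no role in the
pointwise inequality (\ref{ag})).
\begin{verbatim}
import numpy as np

k = np.linspace(0.0, 1.0, 1001)
F1 = k ** 0.5            # two arbitrary virtual value distributions on [0, 1]
F2 = k ** 2.0
F_avg = 0.5 * (F1 + F2)  # the symmetrized distribution F

lhs = np.trapz(F_avg ** 2, k)  # integral of F^n for n = 2
rhs = np.trapz(F1 * F2, k)     # integral of the product, asymmetric profile
# Revenue equals 1 minus these integrals, so lhs >= rhs means the symmetric
# profile yields weakly lower revenue, as in the proof above.
print(lhs, rhs, lhs >= rhs)
\end{verbatim}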
\subsection{The buyer-optimal case}
\label{asyboc}
For the buyer-optimal information design problem, we demonstrate that an
asymmetric signal distribution can strictly improve upon the optimal
symmetric signal distribution in Theorem \ref{result_buyer}. We demonstrate
such an improvement for $n=2$ with $p<r^{b}$ and $n\rightarrow \infty $.
\begin{proposition}
\label{asy_buyer}For $n=2$ with $p<r^{b}$ or $n\rightarrow \infty $, there
exist asymmetric information structures which can strictly improve upon the
optimal symmetric signal distribution in Theorem \ref{result_buyer} and
Corollary \ref{equivalence}, respectively.
\end{proposition}
\begin{proof}
See Appendix \ref{proofasybuyer}. To see the main idea, for $n=2$ with
$p<r^{b}$, we focus our search for an improvement on the signal distributions
which put positive mass only on signal $0$, signals with virtual value $0$,
and signals with virtual value $1$ for each buyer. Since each buyer's
virtual value distribution needs to satisfy the mean constraint, it is
uniquely determined by its probability assigned to signal $0$. We then
optimize within the specific class of information structures with these two
variables. For $n\rightarrow \infty $, we also consider a specific class of
information structures in which (i) buyer $i$'s signal distribution puts
positive mass only on signal $0$, signals with virtual value $p$, and
signals with virtual value $1$ and (ii) all the other buyers' signal
distributions are a degenerate distribution which puts the entire mass on
signal $p$.
\end{proof}
Proposition \ref{asy_buyer} shows that the buyer-optimal signal
distributions (if one exists) are asymmetric in general. Indeed, while
averaging the signal distributions ($F=\frac{1}{n}\sum_{i=1}^{n}F_{i}$) can reduce the
revenue, it might also reduce the total surplus. The issue is reminiscent of \cite{bergemann2007information},
which shows that a
seller-optimal/revenue-maximizing information structure is asymmetric across
the buyers. The existence and exact shape of an asymmetric buyer-optimal
information structure remains unknown to us.
\section{Discussion}
\label{discussion}
In this section, we discuss the issues of generalizing our results beyond
the ex ante symmetric binary-value setting. In particular, we will discuss
the issues with asymmetric priors, continuous priors, and an information
structure with value interdependence. In discussing each of these three
aspects, we will keep the rest as is in order to focus on the specific issue
at hand.
\subsection{Asymmetric prior mean}
\label{apm}
Suppose that each buyer $i$ has his own prior mean $p_{i}$. We only discuss
the seller-worst problem for which a solution exists. The seller-worst
information problem is to maximize the same objective function in \eqref{adp}
with each buyer $i$'s individual mean constraint now being:
\begin{equation*}
\int_{0}^{1}(1-F_{i}(k))(1-\log (1-F_{i}(k))-\log (1-F_{i}(0^-)))\mathrm{d}k=p_{i}.
\end{equation*}
In this case, our arguments for regularity and almost nonnegative virtual
values are still valid. For $n=2$, we can still obtain the seller-worst
information structure in the following proposition.
\begin{proposition}
\label{ap}Suppose that $n=2$ and each buyer $i$ has prior mean $p_{i}$.
Then, the optimal seller-worst signal distribution for buyer $i$ is a
truncated Pareto distribution $G_{i}(x)$ such that
\begin{equation*}
G_{i}(x)=
\begin{cases}
1-\frac{x_{i}}{x} & \text{ if }x\in \lbrack x_{i},1); \\
1 & \text{ if }x=1
\end{cases}
\end{equation*}
where $x_{i}$ is determined by buyer $i$'s mean constraint.
\end{proposition}
\begin{proof}
See Appendix \ref{proofap}.
\end{proof}
Thus, both buyers have the virtual values $\{0,1\}$ and the degree of
asymmetry between $p_{1}$ and $p_{2}$ is only reflected by $x_{1}$ and
$x_{2}$. For $n\geq 3$, the seller-worst problem remains an isoperimetric
problem and we can similarly reduce it to a finite-dimensional constrained
optimization problem, as we do in Theorem \ref{result_seller}. The difficulty is
that the finite-dimensional problem becomes intractable and its closed-form
solution(s) remain unknown to us.
\subsection{Continuous prior distributions}
\label{continuous prior}
We now consider continuous prior distributions. Our analysis in Sections \ref{main results}--\ref{Asymmetric information structures} remains applicable
if the information designer only knows the mean of the prior and imposes
the mean constraint. However, if the information designer has full
information about the prior, then a signal distribution is feasible if and
only if the prior is a mean-preserving spread of the distribution; see
\citep{blackwell1953equivalent}. That is, the set of feasible signal
distributions becomes
\begin{equation*}
\mathcal{G}_{H}=\left\{ G:[0,1]\mapsto \lbrack 0,1]\bigg \vert\int_{0}^{1}x\
\mathrm{d}G(x)=p,\int_{0}^{x}G(t)\mathrm{d}t\leq \int_{0}^{x}H(t)\mathrm{d}t,\forall x\in \lbrack 0,1]\right\} .
\end{equation*}
The main issue here is to handle the mean-preserving spread constraint on
the signal distributions. The change of variable is no longer useful and we need
a different tool akin to the method developed in \cite{dworczak2019simple}.
Moreover, our objective function here is also different from that in \cite{dworczak2019simple}.
Consider, for instance, the case of a second-price
auction (with no reserve). The seller-worst problem can be expressed as
\begin{align}
\max_{G}& \int_{0}^{1}\left( nG^{n-1}\left( x\right) -(n-1)G^{n}\left(
x\right) -1\right) \mathrm{d}x \label{2nd} \\
\text{s.t. }& H\text{ is mean preserving spread of }G. \notag
\end{align}
We obtain the following result for the case with two buyers:
\begin{proposition}
\label{cps}For $n=2$, fully revealing (i.e., $G=H$) solves the problem in
(\ref{2nd}).
\end{proposition}
\begin{proof}
The objective is
\begin{equation*}
\int_{0}^{1}(2G-G^{2}-1)\,\mathrm{d}x=-\int_{0}^{1}x\,\mathrm{d}\left(
2G-G^{2}\right) =\int_{0}^{1}x\,\mathrm{d}G^{2}(x)-2p.
\end{equation*}
Moreover, $\int_{0}^{1}x\,\mathrm{d}G^{2}(x)$ is maximized if $G=H$, since
$G^{2}$ is the CDF of $\max \left\{ x_{1},x_{2}\right\} $, which is a convex function of the signals,
when $x_{1}$ and $x_{2}$ are independently distributed according to $G$.
Hence, fully revealing minimizes the seller's revenue.
\end{proof}
Proposition \ref{cps} has implications for both the seller-worst problem
and the buyer-optimal problem with two buyers. Specifically, we can still
argue that the seller-worst signal distribution must be regular, symmetric,
and admit only nonnegative virtual values. Since a second-price auction with
reserve is optimal for such signal distributions, identifying a seller-worst
information structure amounts to solving the problem in (\ref{2nd}). If the
prior distribution is regular and admits nonnegative virtual values, it is
also a feasible choice in this problem. Hence, Proposition \ref{cps} applies
and fully revealing is seller-worst. Moreover, since fully revealing
maximizes the total surplus, it is also buyer-optimal. In summary, we have
the following corollary:
\begin{corollary}
\label{coro-continuous}For $n=2$, if the prior distribution is regular and
admits nonnegative virtual values, then fully revealing is both the unique
symmetric buyer-optimal and seller-worst information structure.
\end{corollary}
Note that Corollary \ref{coro-continuous} requires that the prior be regular
and admit nonnegative virtual values; hence it rules out the binary prior
which we analyze in Theorem \ref{result_seller}. If $n\geq 3$ or the prior
is irregular, we can still solve the problem in (\ref{2nd}) and we report
the solution in \cite{chen2020quantile}. However, the optimal signal
distributions which we obtain in these situations are no longer regular.
Hence, as we demonstrate in Example \ref{irexample}, the optimal auction may
not be a second-price auction with reserve.
\subsection{Interdependent values}
\label{sec-inter}
In studying a two-buyer common-value problem, \cite{bergemann2016informationally}
constructs a seller-worst information
structure in which the buyers share a common interim value function
depending on the entire signal profile and each signal is independently
drawn from a uniform distribution. As they also assume that the ex post common
value is either $0$ or $1$, we may interpret the interim valuation as the
probability that the buyers have ex post common value $1$. A natural
question is whether we can mimic the construction of \cite{bergemann2016informationally}
by setting the interim value function as:
\begin{equation}
v_{i}(s_{1},s_{2})=\min \left\{ \frac{a}{(1-s_{1})(1-s_{2})},1\right\},\quad
\forall i=1,2, \label{interdependent}
\end{equation}
where $a$ is determined by $\mathbb{E}_{s_{1},s_{2}}[v_{i}]=a\left( 1-\log
(a)+\frac{1}{2}\log ^{2}(a)\right) =p$. However, with the interdependent
values, we must also replace the mean constraint with the consistency
condition, namely that $v_{i}\left( \cdot ,\cdot \right) $ together with the
uniform distribution on $S_{1}\times S_{2}$ induces the same distribution as
the prior $H^{2}$ on $\left\{ 0,1\right\} \times \left\{ 0,1\right\} $.
In Appendix \ref{proofcorr}, we show that the interim value function defined
in (\ref{interdependent}) violates the consistency condition and hence is
not a feasible choice. The main reason is that interim value interdependence
in (\ref{interdependent}) induces a correlated rather than independent
distribution on $\left\{ 0,1\right\} \times \left\{ 0,1\right\} $. Of
course, there may be other candidate interim value functions $v_{i}(\cdot
,\cdot )$ but their explicit form remains elusive to us. In particular, \cite{brooks2020strong}
give a simulation result with interdependent values in
which the seller obtains strictly lower revenue than our seller-worst
revenue.
\bibliographystyle{econometrica}
\section{Introduction}
\goal{Introduce RMA \& RDMA; convince readers they're getting common}
Partitioned Global Address Space (PGAS), and the wider class of Remote
Memory Access (RMA) programming models enable high-performance
communications that often outperform Message
Passing~\cite{fompi-paper,Petrovic:2012:HRB:2312005.2312029}.
RMA utilizes remote direct memory access (RDMA) hardware features to
access memories at remote processes without involving the OS or the
remote CPU.
RDMA is offered by most modern HPC networks (InfiniBand, Myrinet, Cray's Gemini
and Aries, IBM's Blue Gene, and PERCS) and many Ethernet interconnects
that use the RoCE or iWARP protocols. RMA languages and
libraries include Unified Parallel C (UPC), Fortran 2008 (formerly
known as CAF), MPI-3 One Sided, Cray's SHMEM
interface, and Open Fabrics (OFED). Thus,
we observe that RMA
is quickly emerging to be the programming model of choice for cluster systems, HPC computers, and
large datacenters.
\goal{Introduce basic concepts (CR, ML) in FT and motivate our work}
\sloppy
Fault tolerance of such systems is important because hardware
and software faults are ubiquitous~\cite{Sato:2012:DMN:2388996.2389022}.
Two popular resilience schemes used in today's computing environments
are coordinated checkpointing (CC) and uncoordinated checkpointing augmented
with message logging (UC)~\cite{Elnozahy:2002:SRP:568522.568525}.
In CC, applications regularly synchronize to save
their state to memory, local disks, or a parallel file system (PFS)~\cite{Sato:2012:DMN:2388996.2389022};
this data is used to restart after a crash. In UC, processes take checkpoints
independently and use message logging to avoid rollbacks caused by the \emph{domino effect}~\cite{Riesen:2012:ASI:2388996.2389021}. There has been considerable
research on CC and UC for the message passing (MP)
model~\cite{Elnozahy:2002:SRP:568522.568525,Alvisi:1998:MLP:630821.631222}.
Still, no work addresses the exact design of these schemes for
RMA-based systems.
\begin{table*}
\centering
\scriptsize \begin{tabular}{lllll}
\toprule
\parbox{0.1cm}{} & \textbf{MPI-3 one sided operation} & \textbf{UPC operation}
& \textbf{Fortran 2008 operation} & \textbf{Cat.} \\ \midrule
\multirowbt{2}{*}{\parbox{0.1cm}{\begin{turn}{90}comm.\end{turn}}} &
\parbox{6.2cm}{\textsf{MPI\_Put}, \textsf{MPI\_Accumulate},
\textsf{MPI\_Get\_accumulate},\\ \textsf{MPI\_Fetch\_and\_op},
\textsf{MPI\_Compare\_and\_swap}} & \parbox{4.9cm}{\textsf{upc\_memput},
\textsf{upc\_memcpy}, \textsf{upc\_memset},\\assignment (\textsf{=}), all UPC
collectives} & \parbox{3.8cm}{assignment (\textsf{=})} &
\parbox{0.7cm}{\textsc{put}} \\ \cmidrule{2-5}
\parbox{0.1cm}{} & \parbox{6.2cm}{\textsf{MPI\_Get}, \textsf{MPI\_Compare\_and\_swap},\\
\textsf{MPI\_Get\_accumulate}, \textsf{MPI\_Fetch\_and\_op}} & \parbox{4.9cm}{\textsf{upc\_memget},
\textsf{upc\_memcpy}, \textsf{upc\_memset},\\assignment (\textsf{=}), all UPC
collectives} & \parbox{3.8cm}{assignment (\textsf{=})} &
\parbox{0.7cm}{\textsc{get}} \\ \midrule
\multirowbt{4}{*}{\parbox{0.1cm}{\begin{turn}{90}sync.\end{turn}}} &
\parbox{6.2cm}{\textsf{MPI\_Win\_lock}, \textsf{MPI\_Win\_lock\_all}} & \parbox{4.9cm}{\textsf{upc\_lock}} & \parbox{3.8cm}{\textsf{lock}} & \parbox{0.7cm}{\textsc{lock}}\\
\cmidrule{2-5}
\parbox{0.1cm}{} & \parbox{6.2cm}{\textsf{MPI\_Win\_unlock},
\textsf{MPI\_Win\_unlock\_all}} & \parbox{4.9cm}{\textsf{upc\_unlock}} &
\parbox{3.8cm}{\textsf{unlock}} & \parbox{0.7cm}{\textsc{unlock}}\\
\cmidrule{2-5}
\parbox{0.1cm}{} & \parbox{6.2cm}{\textsf{MPI\_Win\_fence}} &
\parbox{4.9cm}{\textsf{upc\_barrier}} & \parbox{3.8cm}{\textsf{sync\_all},
\textsf{sync\_team}, \textsf{sync\_images}} &
\parbox{0.7cm}{\textsc{gsync}}\\ \cmidrule{2-5}
\parbox{0.1cm}{} & \parbox{6.2cm}{\textsf{MPI\_Win\_flush},
\textsf{MPI\_Win\_flush\_all}, \textsf{MPI\_Win\_sync}} &
\parbox{4.9cm}{\textsf{upc\_fence}} & \parbox{3.8cm}{\textsf{sync\_memory}} &
\parbox{0.7cm}{\textsc{flush}}\\ \bottomrule
\end{tabular}
\caption{Categorization of MPI One Sided/UPC/Fortran 2008 operations in our
model. Some atomic functions are considered as both \textsc{put}s and
\textsc{get}s.
In UPC, the collectives, assignments and
\textsf{upc\_memset}/\textsf{upc\_memcpy} behave similarly depending on
the values of pointers to shared objects; the same applies to Fortran
2008. We omit MPI's post-start-complete-wait synchronization
and request-based RMA operations for simplicity.}
\label{tab:mpiCategories}
\end{table*}
\goal{state that we explore FT for RMA (novel)}
In this work we develop a generic model for reasoning
about resilience in RMA. Then, using this model, we show that CC and UC for RMA fundamentally differ
from analogous schemes for MP. We also construct protocols that
enable simple checkpointing and logging of remote memory accesses. We \emph{only} use
\emph{in-memory} mechanisms to avoid costly I/O flushes and frequent
disk and PFS failures~\cite{Sato:2012:DMN:2388996.2389022, disk_fails}.
We then
extend our model to cover two features of today's petascale and future
exascale machines: (1) the growing complexity of hardware components and (2)
decreasing amounts of memory per core. \emph{With this, our study fills an
important knowledge gap between fault-tolerance and emerging
RMA programming in large-scale computing systems.}
\goal{itemize and describe our concrete constributions}
In detail, we provide the following major contributions:
\begin{itemize}[leftmargin=1em]
\item We design a model for reasoning about the reliability of RMA systems
running on flat and hierarchical hardware with limited memory per
core. To our knowledge, this is the first work that addresses these
issues.
\item We construct schemes for in-memory checkpointing, logging, and recovering RMA-based
applications.
\item We unify these concepts in a topology-aware diskless protocol and
we use real data and an analytic model to show that the protocol
can endure concurrent hardware failures.
\item We present the implementation of our protocol, analyze its performance,
show it entails negligible overheads, and compare it to other schemes.
\end{itemize}
\section{RMA Programming} \label{sec:formalModel}
\goal{Introduce our model and describe memory sharing}
We now discuss concepts of RMA programming and present a formalization
that covers existing RMA/PGAS models with strict or relaxed
memory consistency (e.g., UPC or MPI-3 One Sided).
In RMA, each process explicitly
exposes an area of its local memory as shared. Memory can be shared in different
ways (e.g., MPI windows, UPC shared arrays, or Co-Arrays in Fortran 2008); details are
outside the scope of this work. Once shared,
memory can be accessed with various language-specific operations.
\subsection{RMA Operations}
\label{sec:rma_ops}
\goal{+ Distinguish between communication/synchronization calls and active/passive procs}
\sloppy
We identify two fundamental types of RMA operations:
\emph{communication} actions (often called \emph{accesses}; they transfer data between processes), and
\emph{synchronization} actions (synchronize processes and guarantee memory
consistency). A process $p$ that issues an RMA action targeted at
$q$ is called the \emph{active} \emph{source}, and
$q$ is called the \emph{passive} \emph{target}.
We assume $p$ is active and
$q$ is passive (unless stated otherwise).
\subsubsection{Communication Actions}
\goal{++ Introduce puts/gets and a formalism for communication functions}
We denote an action that transfers data from $p$ to $q$ and from $q$ to $p$ as \textsc{put}\Dptoq\
and \textsc{get}\Dpfromq\@, respectively.
We use double-arrows to emphasize the asymmetry of the two
operations: the upper arrow indicates the direction of data flow and the lower arrow
indicates the direction of control flow.
The upper part of Table~\ref{tab:mpiCategories}
categorizes communication operations in various RMA languages. Some
actions (e.g., atomic compare and swap) transfer data in \emph{both} directions and thus
fall into the family of \textsc{put}s \emph{and} \textsc{get}s.
\goal{++ Model atomics that combine remote and local data}
We also distinguish between \textsc{put}s that ``blindly'' replace a targeted memory region at $q$ with a new value
(e.g., UPC assignment), and \textsc{put}s that combine the data moved to
$q$ with the data that already resides at $q$ (e.g.,
\textsf{MPI\_Accumulate}). When necessary, we refer to the former type as the \emph{replacing} \textsc{put}, and to the latter as the \emph{combining} \textsc{put}.
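To make the distinction concrete, the hedged MPI-3 sketch below contrasts a replacing \textsc{put} (\textsf{MPI\_Put}) with a combining \textsc{put} (\textsf{MPI\_Accumulate}); the window setup, the target rank, and the flush placement are illustrative choices of ours and not part of any later protocol.
\begin{verbatim}
#include <mpi.h>

/* Minimal sketch: a replacing put (MPI_Put) vs. a combining put
 * (MPI_Accumulate).  Run with at least two processes; rank 1 is the
 * (illustrative) passive target q. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *base;  MPI_Win win;
    MPI_Win_allocate(sizeof(double), sizeof(double), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &base, &win);
    *base = 0.0;
    MPI_Barrier(MPI_COMM_WORLD);     /* target memory is initialized   */

    int q = 1;
    double val = 42.0;
    if (rank == 0) {
        MPI_Win_lock(MPI_LOCK_SHARED, q, 0, win);
        /* Replacing put: blindly overwrites the double at q.          */
        MPI_Put(&val, 1, MPI_DOUBLE, q, 0, 1, MPI_DOUBLE, win);
        MPI_Win_flush(q, win);       /* order the two accesses         */
        /* Combining put: adds val to the value already stored at q.   */
        MPI_Accumulate(&val, 1, MPI_DOUBLE, q, 0, 1, MPI_DOUBLE,
                       MPI_SUM, win);
        MPI_Win_unlock(q, win);      /* closes the access epoch        */
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
\end{verbatim}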
\subsubsection{Memory Synchronization Actions}
\goal{++ Describe and formalize synchronization actions}
We identify four major categories of memory synchronization actions:
\textsc{lock}($p \rightarrow q$$,str)$ (locks a structure $str$ in $q$'s memory to
provide exclusive access),
\textsc{unlock}($p \rightarrow q$$,str)$ (unlocks $str$ in $q$'s memory and
enforces consistency of $str$),
\textsc{flush}($p \rightarrow q$$,str)$ (enforces consistency of $str$ in $p$'s and
$q$'s memories)\@, and \textsc{gsync}$(p \to \diamond, str)$ (enforces
consistency of $str$); $\diamond$ indicates that a call targets all processes.
Arrows indicate the flow of control (synchronization). When we refer to
the whole process memory (and not a single structure), we omit $str$
(e.g., \textsc{lock}($p \rightarrow q$)). The lower part of
Table~\ref{tab:mpiCategories} categorizes synchronization calls in
various RMA languages.
\subsection{Epochs and Consistency Order}
\label{sec:epochs_consistency}
\goal{+ Explain and formalize epochs}
RMA's relaxed memory consistency enables non-blocking
\textsc{put}s and \textsc{get}s. Issued operations are completed by
memory consistency actions (\textsc{flush}, \textsc{unlock},
\textsc{gsync}). The period between any two such actions issued by $p$ and targeting the same
process $q$ is called an
\emph{epoch}. Every \textsc{unlock}$($\scriptsize $p \rightarrow q$\normalsize$)$\ or \textsc{flush}$($\scriptsize $p \rightarrow q$\normalsize$)$\
\emph{closes} $p$'s current epoch and
\emph{opens} a new one (i.e., increments $p$'s epoch number denoted as
$E$$($\scriptsize $p \rightarrow q$\normalsize$)$\@).
$p$ can be in several
independent epochs related to each process that it communicates with.
As \textsc{gsync} is a collective call, it increases
epochs at every process.
\goal{+ Introduce and explain consistency order}
An important concept related to epochs is the \emph{consistency order} (denoted as $\xrightarrow{co}$). $\xrightarrow{co}$ orders
the visibility of actions: $x \xrightarrow{co} y$ means that memory effects of action $x$ are globally visible before action $y$.
Actions issued in different epochs by process $p$ targeting the same process $q$ are always ordered with $\xrightarrow{co}$. Epochs and $\xrightarrow{co}$ are illustrated in Figure~\ref{fig:epochs}. $x\ ||_{co}\ y$ means that actions $x$ and $y$ are \emph{not} ordered with $\xrightarrow{co}$.
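To illustrate, the following hedged MPI-3 fragment (window creation as in the earlier sketch; \textsf{win} and the target rank \textsf{q} are assumed to exist) shows how each \textsf{MPI\_Win\_flush} closes the current epoch toward $q$ and opens the next one, ordering the two \textsc{put}s with $\xrightarrow{co}$.
\begin{verbatim}
/* Sketch: two epochs toward q, separated by a flush.  The put issued
 * in the first epoch becomes globally visible before the put issued
 * in the second one (consistency order co), so the second value wins. */
double a = 1.0, b = 2.0;
MPI_Win_lock_all(0, win);

MPI_Put(&a, 1, MPI_DOUBLE, q, 0, 1, MPI_DOUBLE, win);  /* epoch E     */
MPI_Win_flush(q, win);      /* closes epoch E, opens epoch E+1        */

MPI_Put(&b, 1, MPI_DOUBLE, q, 0, 1, MPI_DOUBLE, win);  /* epoch E+1   */
MPI_Win_flush(q, win);      /* closes epoch E+1                       */

MPI_Win_unlock_all(win);
\end{verbatim}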
\begin{figure}[h!] \centering
\includegraphics[width=0.44\textwidth]{epochs_3_co_big-eps-converted-to.pdf}
\caption{Epochs and the consistency order
$\xrightarrow{co}$ (\cref{sec:epochs_consistency}). White circles
symbolize synchronization calls (in this case \textsc{flush}).
Grey squares show when calls' results become
globally visible in $q$'s or $p$'s memory.
} \label{fig:epochs}
\end{figure}
\subsection{Program, Synchronization, and Happened Before Orders}
\label{sec:orders}
\goal{+ Introduce and explain PO, SO, HB orders}
In addition to $\xrightarrow{co}$ we require three more orders to
specify an RMA execution~\cite{hoefler2013remote}:
The \emph{program order} ($\xrightarrow{po}$) specifies the order of
actions of a single thread, similarly to the program order in
Java~\cite{Manson:2005:JMM:1040305.1040336} ($x \xrightarrow{po} y$
means that $x$ is called before $y$ by some thread).
The \emph{synchronization order} ($\xrightarrow{so}$) orders
\textsc{lock} and \textsc{unlock} and other synchronizing operations.
\emph{Happened-before} (HB, $\xrightarrow{hb}$), a relation well-known in message passing~\cite{Lamport:1978:TCO:359545.359563}, is the transitive closure of the union of $\xrightarrow{po}$ and $\xrightarrow{so}$.
We abbreviate \emph{consistent happened-before} as $\xrightarrow{cohb}$: $a \xrightarrow{cohb} b \equiv a \xrightarrow{co} b \wedge a \xrightarrow{hb} b$.
To state that actions are \emph{parallel} in an order, we use the symbols $||_{po},\ ||_{so},\ ||_{hb}$.
We show the orders in
Figure~\ref{fig:orders}; more details can be found
in~\cite{hoefler2013remote}.
\begin{figure}[h!] \centering
\includegraphics[width=0.44\textwidth]{po_so_hb_2_big-eps-converted-to.pdf}
\caption{Example RMA orderings $\xrightarrow{po}, \xrightarrow{so}$, $\xrightarrow{hb}$ (\cref{sec:orders}).} \label{fig:orders}
\end{figure}
\subsection{Formal Model}
\label{sec:formal_model}
\goal{+ Describe the model assumptions}
We now combine the various RMA concepts and fault tolerance into a
single formal model.
We assume fail-stop faults (processes can disappear
nondeterministically but behave correctly while being a part of the
program). The data
communication may happen out of order as specified for most RMA models.
Communication channels between
non-failed processes are asynchronous, reliable, and error-free.
The user code can only communicate and synchronize using RMA functions specified in Section~\ref{sec:rma_ops}.
Finally, checkpoints and logs are stored
in \emph{volatile} memories.
\goal{+ Formalize communication actions}
We define a communication action $a$ as a tuple
\small
\begin{alignat}{1}
a = \langle type, src, trg, combine, EC, GC, SC, GNC, data \rangle
\end{alignat}
\normalsize
\noindent
where $type$ is either a put or a get, $src$ and $trg$ specify the
source and the target, and $data$ is the data carried by $a$. $Combine$
determines if $a$ is a replacing \textsc{put} ($combine = false$) or a
combining \textsc{put} ($combine = true$). $EC$ (\emph{Epoch Counter})
is the epoch number in which $a$ was issued. $GC$, $SC$, and $GNC$ are
counters required for correct recovery; we discuss them in more detail
in Section~\ref{sec:complicated}. We combine the notation from
Section~\ref{sec:rma_ops} with this definition and write
\textsc{put}\Dptoq$.EC$ to refer to the epoch in which the put happens.
We also define a {\emph{determinant}} of $a$ (denoted as $\#a$, cf.~\cite{Alvisi:1998:MLP:630821.631222}) to be tuple $a$ without $data$:
\small
\begin{alignat}{1}
\#a = \langle type, src, trg, combine, EC, GC, SC, GNC \rangle.
\end{alignat}
\normalsize
\goal{+ Formalize synchronization actions}
\noindent
Similarly, a synchronization action $b$ is defined as
\small
\begin{alignat}{1}
b = \langle type, src, trg, EC, GC, SC, GNC, str \rangle.
\end{alignat}
\normalsize
\goal{+ Formalize a distributed system}
\noindent
Finally, a trace of an RMA program running on a distributed system can
be written as the
tuple
\small
\begin{alignat}{2}
\mathcal{D} = \langle \mathcal{P}, \mathcal{E},
\mathcal{S}, \xrightarrow{po}, \xrightarrow{so}, \xrightarrow{hb},
\xrightarrow{co} \rangle,
\end{alignat}
\normalsize
\noindent
where
\begin{description}[leftmargin=1.4em]
\itemsep-1pt
\item $\mathcal{P}$ is the set of all $\mathcal{P}$rocesses in
$\mathcal{D}$ ($|\mathcal{P}| = N$),
\item $\mathcal{E} = \mathcal{A} \cup \mathcal{I}$ is
the set of all $\mathcal{E}$vents:
\item $\mathcal{A} $ is the set of RMA $\mathcal{A}$ctions,
\item $\mathcal{I}$ is the set of $\mathcal{I}$nternal
actions (reads, writes, checkpoint actions). $\textsc{read}(x,p)$ loads local variable $x$ and
$\textsc{write}(x := val,p)$ assigns $val$ to $x$ (in $p$'s memory).
$C_{p}^{i}$ is the $i$th checkpoint action taken by $p$. Internal
events are partially ordered with actions using $\xrightarrow{po}$,
$\xrightarrow{co}$, and $\xrightarrow{hb}$.
\item $\mathcal{S}$ is the set of all data $\mathcal{S}$tructures used by
the program.
\end{description}
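For illustration only, a C rendering of these tuples might look as follows; the struct layout and field widths are our assumptions and do not prescribe any implementation.
\begin{verbatim}
#include <stddef.h>

/* Illustrative encoding of a communication action a and its
 * determinant #a; not part of any specific library interface. */
typedef enum { ACT_PUT, ACT_GET } act_type_t;

typedef struct {
    act_type_t type;             /* put or get                         */
    int        src, trg;         /* active source p, passive target q  */
    int        combine;          /* 1: combining put, 0: replacing put */
    long       EC, GC, SC, GNC;  /* epoch/get/sync/gsync counters      */
} determinant_t;                 /* #a: the action without its payload */

typedef struct {
    determinant_t det;           /* the determinant #a                 */
    void         *data;          /* payload carried by the action      */
    size_t        len;           /* payload size (an assumed field)    */
} action_t;                      /* a = <#a, data>                     */
\end{verbatim}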
\section{Fault-tolerance for RMA}
\label{sec:basicFTforRMA}
\goal{Introduce and summarize the section}
We now present schemes that make RMA codes fault tolerant. We start with
the simpler CC and then present RMA protocols for UC.
\subsection{Coordinated Checkpointing (CC)}
\label{sec:taking_coordinated_ckp}
In many CC schemes, the user explicitly calls a function to take a
checkpoint. Such protocols may leverage RMA's features (e.g., direct
memory access) to improve the performance. However, these schemes have
several drawbacks: they complicate the code because they can only be
called when the network is quiet~\cite{632814} and they do not
always fit the optimality criteria such as Daly's checkpointing
interval~\cite{Daly:2006:HOE:1134241.1134248}. In this section, we
first identify how CC in RMA differs from CC in MP and then describe a
scheme for RMA codes that performs CC \emph{transparently} to the
application.
We model a coordinated checkpoint as a set $C = \{C_{p_1}^{i_1},
C_{p_2}^{i_2}, ..., C_{p_N}^{i_N}\} \subseteq \mathcal{I}, p_m \neq p_n$
for any $m,n$.
\subsubsection{RMA vs. MP: Coordinated Checkpointing}
\label{sec:rma_vs_mp_cc}
\goal{+ Explain why CC differ in MP and RMA}
In MP, every $C$ has to satisfy a \emph{consistency
condition}~\cite{632814}: $\forall C_{p}^{i}, C_{q}^{j} \in C:\
C_{p}^{i}\ ||_{hb}\ C_{q}^{j}$. This condition ensures that $C$ does not
reflect a system state in which one process received a message that was
\emph{not} sent by any other process. We adopt this condition and
extend it to cover all RMA semantics:
\begin{defi}
$C$ is RMA-consistent iff $\forall C_{p}^{i}, C_{q}^{j} \in C:\
C_{p}^{i}\ ||_{cohb}\ C_{q}^{j}$.
\end{defi}
We extend $||_{hb}$ to $||_{cohb}$ to
guarantee that the system state saved in $C$ does not contain a process
affected by a memory access that was \emph{not} issued by any other
process. In RMA, unlike in MP, this condition can be easily satisfied
because each process can drain the network with a local \textsc{flush}
(enforcing consistency at any point
is legal~\cite{hoefler2013remote}).
\subsubsection{Taking a Coordinated Checkpoint}
\label{sec:cc_for_rma}
\goal{+ Describe our CC schemes}
We now propose two diskless schemes that obey the
RMA-consistency condition and target MPI-3 RMA codes. The first
(``Gsync'') scheme can be used in programs that \emph{only} synchronize with
\textsc{gsync}s. The other (``Locks'') scheme targets codes that
\emph{only} synchronize with \textsc{lock}s and \textsc{unlock}s. Note that in
correct MPI-3 RMA programs \textsc{gsync}s and
\textsc{lock}s/\textsc{unlock}s cannot be mixed~\cite{mpi3}. All our schemes
assume that a \textsc{gsync} may also introduce an additional
$\xrightarrow{hb}$ order, which is true in some
implementations~\cite{mpi3}.
\goal{+ Describe the ``gsyncs'' coordinated scheme}
\textbf{The ``Gsync'' Scheme }
Every process may take a coordinated checkpoint right after the user
calls a \textsc{gsync} and before any further RMA calls by: (1)
optionally enforcing the global $\xrightarrow{hb}$ order with an
operation such as \textsf{MPI\_Barrier} (denoted as \textsc{bar}), and (2)
taking the checkpoint.
Depending on the application needs, not every \textsc{gsync} has to be
followed by a checkpoint. We use Daly's
formula~\cite{Daly:2006:HOE:1134241.1134248} to compute the best
interval between such checkpoints and we take checkpoints after the right
\textsc{gsync} calls.
\goal{+ Prove that ``gsyncs'' satisfies the consistency condition}
\begin{theorem}
The Gsync scheme satisfies the RMA-consistency condition and does not deadlock.
\end{theorem}
\begin{proof}
We assume correct MPI-3 RMA programs represented by their trace $\mathcal{D}$~\cite{hoefler2013remote,mpi3}. For
all $p,q \in \mathcal{P}$, each $\textsc{gsync}(p \to \diamond)$ has a
matching $\textsc{gsync}(q \to \diamond)$ such that $[\textsc{gsync}(p
\to \diamond)\ ||_{hb}\ \textsc{gsync}(q \to \diamond)]$. Thus, if
every process calls \textsc{bar} right after \textsc{gsync}
then \textsc{bar} matching is guaranteed and the program cannot deadlock. In addition, the \textsc{gsync} calls
introduce a global consistency order $\xrightarrow{co}$ such that the
checkpoint is coordinated and consistent.
\end{proof}
\goal{+ Describe the ``Locks'' coordinated scheme}
\textbf{The ``Locks'' Scheme }
Every process $p$ maintains a local \emph{Lock Counter} $LC_p$
that starts with zero and is incremented after each \textsc{lock} and
decremented after each \textsc{unlock}. When $LC_p = 0$, process $p$
can perform a checkpoint in three phases: (1) enforce consistency with a
\textsc{flush}$(p \to \diamond)$, (2) call a \textsc{bar} to provide
the global $\xrightarrow{hb}$ order, and (3) take a
checkpoint $C_{p}^{i}$.
The last phase, the actual checkpoint stage, is performed collectively; thus,
all processes take the checkpoint $C$ in coordination.
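As an illustration, the hedged wrappers below maintain $LC_p$ and trigger the three phases once the counter drops to zero; \textsf{ft\_flush\_all}, \textsf{checkpoint\_due}, and \textsf{ft\_take\_checkpoint} are hypothetical helpers of the runtime, not functions defined by MPI or by our protocol.
\begin{verbatim}
#include <mpi.h>

/* Hypothetical helpers (declarations only). */
int  checkpoint_due(void);          /* e.g., Daly's interval elapsed   */
void ft_flush_all(MPI_Win win);     /* flush(p -> all processes)       */
void ft_take_checkpoint(void);      /* collective checkpoint C_p^i     */

static int LC_p = 0;                /* lock counter of process p       */

void ft_lock(int q, MPI_Win win) {
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, q, 0, win);
    LC_p++;
}

void ft_unlock(int q, MPI_Win win) {
    MPI_Win_unlock(q, win);
    if (--LC_p == 0 && checkpoint_due()) {
        ft_flush_all(win);            /* phase (1): enforce consistency */
        MPI_Barrier(MPI_COMM_WORLD);  /* phase (2): global hb order     */
        ft_take_checkpoint();         /* phase (3): take C_p^i          */
    }
}
\end{verbatim}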
\goal{+ Prove that in ``Locks'' a checkpoint is always eventually taken}
\begin{theorem}
The Locks scheme satisfies the RMA-consistency condition and does not deadlock.
\end{theorem}
\begin{proof}
The call to \textsc{flush}$(p \to \diamond)$ in phase 1 guarantees global
consistency at each process. The \textsc{bar} in phase 2 guarantees that
all processes are globally consistent before the checkpoint taken in phase~3.
It remains to prove deadlock-freedom.
We assume correct MPI-3 RMA programs~\cite{hoefler2013remote,mpi3}.
A $\textsc{lock}(p \to q)$ can only block waiting for an active lock
$\textsc{lock}(z \to q)$ and no \textsc{bar} can be started at $z$ while
the lock is held.
In addition, for
every $\textsc{lock}(z \to q)$, there is a matching $\textsc{unlock}(z
\to q)$ in the execution such that $\textsc{lock}(z \to q)
\xrightarrow{po} \textsc{unlock}(z \to q)$ (for any $z,p,q \in
\mathcal{P}$).
Thus, all locks must be released eventually, i.e., $\exists a \in
\mathcal{E}:\ a \xrightarrow{po} \textsc{write}(LC_p := 0,p)$ for any $p
\in \mathcal{P}$.
\end{proof}
The above schemes show that transparent CC can be achieved much more
simply in RMA than in MP. In MP, such protocols usually have to
analyze inter-process dependencies due to sent/received messages, and
add protocol-specific data to
messages~\cite{Elnozahy:2002:SRP:568522.568525,Chandy:1985:DSD:214451.214456},
which reduces the bandwidth. In RMA this is not necessary.
\subsection{Uncoordinated Checkpointing (UC)}
\label{sec:uncoordinated_ckp}
Uncoordinated checkpointing augmented with message logging
reduces energy consumption and synchronization costs
because a single process crash does not force all other processes
to revert to the previous checkpoint and recompute~\cite{Riesen:2012:ASI:2388996.2389021,Elnozahy:2002:SRP:568522.568525}.
Instead, a failed process fetches its last checkpoint and replays messages
logged beyond this checkpoint. However, UC schemes are usually more
complex than CC~\cite{Elnozahy:2002:SRP:568522.568525}.
We now analyze how UC in RMA differs from UC in MP, followed by a
discussion of our UC protocols.
\subsubsection{RMA vs. MP: Uncoordinated Checkpointing}
\label{sec:rma_vs_mp_ucc}
\goal{+ Explain why UC differs in MP and RMA}
The first and obvious difference is that we now log not \emph{messages}
but \emph{accesses}. Other differences are as follows:
\textbf{Storing Access Logs}
In MP, processes exchange messages
that \emph{always} flow \emph{from} the sender (process $p$) \emph{to} the
receiver (process $q$). Messages can be recorded at the sender's side~\cite{Riesen:2012:ASI:2388996.2389021,Elnozahy:2002:SRP:568522.568525}. During a
recovery, the restored process interacts with other processes to fetch and replay
the logged messages (see Figure~\ref{fig:mp_rma_simple}, part (1)).
\begin{figure}[h!]
\centering
\includegraphics[width=0.48\textwidth]{mp_rma_simple_2-eps-converted-to.pdf}
\caption{The logging of messages vs. RMA {puts} and {gets} (\cref{sec:rma_vs_mp_cc}).}
\label{fig:mp_rma_simple}
\end{figure}
\goal{+ Describe logging puts/gets and why it differs from ML}
In RMA, a \textsc{put}\Dptoq\ changes the state of $q$, but a \textsc{get}\Dpfromq\
modifies the state of $p$. Thus, \textsc{put}\Dptoq\ can be logged in $p$'s
memory, but \textsc{get}\Dpfromq\ cannot because a failure of $p$ would prevent a successful
recovery (see Figure~\ref{fig:mp_rma_simple}, parts (2) and (3)).
\goal{+ Say why in MP resilience schemes obstruct more processes}
\textbf{Transparency of Schemes }
In MP, both $p$ and $q$ actively participate in communication. In RMA,
$q$ is oblivious to accesses to its
memory and thus any recovery or logging
performed by $p$ can be \emph{transparent} to (i.e., does not obstruct) $q$ (which is usually \emph{not} the case
in MP, cf.~\cite{Riesen:2012:ASI:2388996.2389021}).
\goal{+ Explain piggybacking and why it can't be used in RMA}
\textbf{No Piggybacking }
Adding some protocol-specific data to messages (e.g., \emph{piggybacking})
is a popular concept in MP~\cite{Elnozahy:2002:SRP:568522.568525}. Still, it cannot be used in RMA because
\textsc{put}s and \textsc{get}s are {accesses}, not {messages}. Yet,
issuing additional accesses is cheap in RMA.
\goal{+ Compare and explain send determinism and access determinism}
\textbf{Access Determinism }
Recent works in MP (e.g.,~\cite{6012907}) explore \emph{send determinism}: the output of an application run is oblivious to the order of
received messages. In our work we identify a similar concept in RMA that
we call \emph{access determinism}. For example, in race-free MPI-3
programs the application output does not depend on the order in which
two accesses $a$ and $b$ committed to memory if $a\ ||_{co}\ b$.
\goal{+ Say why causal recovery in RMA is more complex}
\goal{+ Explain orphan processes in MP}
\textbf{Orphan Processes }
In some MP schemes (called \emph{optimistic}), senders postpone logging messages for
performance reasons~\cite{Elnozahy:2002:SRP:568522.568525}.
Assume $q$ received a message $m$ from $p$ and then sent a message $m'$ to $r$. If $q$ crashes
and $m$ is not logged by $p$ at that time, then $q$ may follow a run
in which it
\emph{does not} send $m'$. Thus, $r$ becomes an \emph{orphan}: its state
depends on a message $m'$ that was \emph{not} sent~\cite{Elnozahy:2002:SRP:568522.568525} (see Figure~\ref{fig:orphan_locks}, part 1).
\goal{+ Describe orphans in RMA}
In RMA, a process may also become an orphan. Consider
Figure~\ref{fig:orphan_locks} (part 2). First, $p$ modifies a variable $x$
at $q$. Then, $q$ reads $x$ and conditionally issues a \textsc{put}\Dqtor. If $q$ crashes and $p$ postponed logging \textsc{put}\Dptoq\@, then $q$ (while recovering) may follow a run in which it does not issue \textsc{put}\Dqtor; thus $r$ becomes an orphan.
\begin{figure}[h!] \centering
\includegraphics[width=0.44\textwidth]{orphan_locks-eps-converted-to.pdf}
\vspace{-0.5em}
\caption{Illustration of orphans in MP and RMA (\cref{sec:rma_vs_mp_cc}).}
\label{fig:orphan_locks}
\vspace{-1.0em}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{logging_orders_1_no_po_2-eps-converted-to.pdf}
\caption{Logging orders $\xrightarrow{so}$, $\xrightarrow{co}$, and $\xrightarrow{hb}$ (\cref{sec:logging_order_info}). In each figure we illustrate example orderings.}
\label{fig:logging_orders_1}
\end{figure*}
\subsubsection{Taking an Uncoordinated Checkpoint}
\label{sec:taking_uncoordinated_ckp}
\goal{+ Describe why and how we do uncoordinated checkpoints}
We denote the $i$th uncoordinated checkpoint taken by process $p$ as
$C_{p}^{i}$.
Taking $C_{p}^{i}$ is simple and entails: (1) locking local application data,
(2) sending a copy of the data to some remote volatile storage, and (3) unlocking the application data
(we defer the discussion on the implementation details until Section~\ref{sec:libImplementation}).
After $p$ takes $C_{p}^{i}$, any process $q$ can delete the logs of every
\textsc{put}\Dqtop\ (from $LP_q[p]$) and \textsc{get}\Dpfromq\ (from $LG_q[p]$)
that committed in $p$'s memory before $C_{p}^{i}$
(i.e., \textsc{put}\Dqtop\ $\xrightarrow{co} C_{p}^{i}$ and
\textsc{get}\Dpfromq\ $\xrightarrow{co} C_{p}^{i}$).
\goal{+ Describe a correctness condition for uncoordinated checkpoints}
We demand that every $C_{p}^{i}$ is taken \emph{immediately after}
closing/opening an epoch and \emph{before} issuing any new communication
operations (we call this the \emph{epoch condition}).
This condition is required because, if $p$ issues a \textsc{get}\Dpfromq\@,
the application data is guaranteed to be consistent only after closing the epoch.
\subsubsection{Transparent Logging of RMA Accesses}
\label{sec:transparentLogging}
\goal{+ Say why we want logging in RMA}
We now describe the logging of \textsc{put}s and \textsc{get}s; all necessary data structures are shown in Table~\ref{tab:str}.
\begin{table}
{\centering
\small
\begin{tabular}{>{\centering\arraybackslash}m{1.5cm} p{6.3cm}} \toprule
\textbf{Structure}&\textbf{Description}\\ \midrule
$LP_{p}[q] \in \mathcal{S}$&\makecell[l]{Logs of \textsc{put}s issued by $p$ and targeted at
$q$.}\\ \midrule
$LG_{q}[p] \in \mathcal{S}$&\makecell[l]{Logs of \textsc{get}s targeted at $q$ and issued by
$p$.}\\ \midrule
$LP_{p} \in \mathcal{S}$&\makecell[l]{Logs of \textsc{put}s issued and stored by $p$ and targeted\\at any
other process; $LP_{p} \equiv \bigcup_{r \in \mathcal{P} \wedge r \neq p}
LP_{p}[r]$.}\\ \midrule
$LG_{q} \in \mathcal{S}$&\makecell[l]{Logs of \textsc{get}s targeted at and stored at $q$, issued by\\any
other process; $LG_{q} \equiv \bigcup_{r \in \mathcal{P} \wedge r \neq q}
LG_{q}[r]$.}\\
\midrule
$Q_p \in \mathcal{S}$&\makecell[l]{A helper container stored at $p$, used to\\temporarily log \#\textsc{get}s issued by $p$.}\\
\midrule
$N_q[p] \in \mathcal{S}$&\makecell[l]{A structure (stored at $q$) that determines\\whether or not $p$ issued a non-blocking\\ \textsc{get}\Dpfromq\ ($N_q[p] = true$ or $false$, respectively).}\\
\bottomrule
\end{tabular} \normalsize}
\caption{Data structures used in RMA logging (\cref{sec:transparentLogging}). $LP_{p}[q]$ and $LP_p$ are stored at $p$. $LG_{q}[p]$ and $LG_q$ are stored at $q$.}
\label{tab:str}
\end{table}
\goal{++ Describe logging of puts}
\textbf{Logging Puts }
To log a \textsc{put}\Dptoq\@, $p$ first calls \textsc{lock}$(p \rightarrow p, LP_{p})$.
Self-locking is necessary because there may be other processes being
recovered that may try to read $LP_p$. Then, the \textsc{put} is logged ($LP_{p}[q] := LP_{p}[q]\ \cup\ \{$\textsc{put}\Dptoq\}; ``:='' denotes the assignment of
a new value to a variable or a structure).
Finally, $p$ unlocks $LP_p$.
Atomicity between logging and putting is not required because, in the weak
consistency memory model, the source memory of the put operation may not be
modified until the current epoch ends. If the program modifies it nevertheless,
RMA implementations are allowed to return any value, thus the logged value is
irrelevant. We log \textsc{put}\Dptoq\ before closing
the epoch \textsc{put}\Dptoq$.EC$. If the \textsc{put} is blocking then we
log it before issuing, analogously to the \emph{pessimistic} message logging~\cite{Elnozahy:2002:SRP:568522.568525}.
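As a hedged illustration of this step, the fragment below logs a \textsc{put} into the self-locked local log; \textsf{LP\_win} (a window exposing $LP_p$) and \textsf{log\_append} are hypothetical names, not part of any existing interface.
\begin{verbatim}
#include <mpi.h>
#include <stddef.h>

/* Hypothetical helper: append one entry to LP_p[q]. */
void log_append(int q, const void *buf, size_t len, long EC, int combine);

/* Sketch: log a put targeted at q before its epoch put.EC is closed.
 * LP_win is a (hypothetical) window exposing the local log LP_p. */
void ft_log_put(int q, const void *buf, size_t len, long EC,
                int combine, MPI_Win LP_win) {
    int p;
    MPI_Comm_rank(MPI_COMM_WORLD, &p);

    /* Self-lock LP_p: a concurrently recovering process may read it. */
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, p, 0, LP_win);
    log_append(q, buf, len, EC, combine);  /* LP_p[q] := LP_p[q] u {put} */
    MPI_Win_unlock(p, LP_win);
}
\end{verbatim}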
\goal{++ Describe logging of gets}
\textbf{Logging Gets }
We log a \textsc{get}\Dpfromq\ in two phases to retain its asynchronous behavior (see Algorithm~\ref{alg:log_gets}). First, we record the determinant \#\textsc{get}\Dpfromq\ in
$Q_p$ (lines 2-3). We cannot access \textsc{get}\Dpfromq$.data$ as the
local memory will only be valid after the epoch ends. We avoid issuing an
additional blocking \textsc{flush}$($\scriptsize $p \rightarrow q$\normalsize$)$\@; instead, we rely on the user's
call to end the epoch.
Second, when the user ends the epoch, we lock the remote log $LG_{q}$, record \textsc{get}\Dpfromq\@, and
unlock $LG_q$ (lines 4-7).
\goal{++ Describe how we manage faults happening before the epoch ends}
Note that if $p$ fails between issuing
\textsc{get}\Dpfromq\ and closing the epoch, it will not be able to replay it
consistently. To address this problem, $p$ sets $N_q[p]$ at process $q$ to \emph{true} right before issuing the first
\textsc{get}\Dpfromq\ (line 1), and to \emph{false} after closing the epoch \textsc{get}\Dpfromq$.EC$ (line 8). During the recovery,
if $p$ notices that any $N_q[p] = true$, it falls back to another resilience mechanism (i.e., the last
coordinated checkpoint). If the \textsc{get} is blocking then we set $N_q[p] = false$ after returning from the call.
\begin{algorithm}
\scriptsize
\DontPrintSemicolon
\KwIn{$get := \textsc{get}(p \mathrel{\substack{\textstyle\Leftarrow\\[-0.5ex]
\textstyle\rightarrow}} q)$}
\tcc{\scriptsize Phase 1: starts right before issuing the $get$}
$N_q[p] := true$\;
\tcc{\scriptsize Now we issue the $get$ and log the $\#get$}
issue $\textsc{get}(p \mathrel{\substack{\textstyle\Leftarrow\\[-0.5ex]\textstyle\rightarrow}} q)$\;
$Q_{p} \gets Q_{p} \cup \#get$\;
\tcc{\scriptsize Phase 2: begins after ending the epoch $get.EC$}
\textsc{lock}$(p \rightarrow q, LG_{q})$\;
$LG_{q}[p] := LG_{q}[p] \cup get$\;
$Q_{p} := Q_{p}\ \textbackslash\ \#get$\;
\textsc{unlock}$(p \rightarrow q, LG_{q})$\;
$N_q[p] := false$\;
\caption{Logging \emph{gets} (\cref{sec:transparentLogging})}
\label{alg:log_gets}
\end{algorithm}
\vspace{-1em}
\section{Causal Recovery for UC}
\label{sec:complicated}
\goal{Describe in general a causal recovery and summarize the section}
We now show how to causally recover a failed process (\emph{causally} means preserving
$\xrightarrow{co}$, $\xrightarrow{so}$, and $\xrightarrow{hb}$).
This section describes technical details on how to guarantee all orders to
ensure a correct access replay. If the reader is not interested in all
details, she may proceed to Section~\ref{sec:divisionIntoGroups} without
disrupting the flow.
A {causal} process recovery has three phases: (1) fetching uncoordinated checkpoint data,
(2) replaying accesses from remote logs, and (3) in case of a problem during the replay, falling
back to the last coordinated checkpoint.
We first show how we log the respective orderings between accesses (Section~\ref{sec:logging_order_info}) and
how we prevent replaying some accesses twice (Section~\ref{sec:managing_unc}). We finish with
our recovery scheme (Section~\ref{sec:recovery_scheme}) and a discussion (Section~\ref{sec:discussion_rec}).
\subsection{Logging Order Information}
\label{sec:logging_order_info}
\goal{+ Describe our explanation methodology in this section}
We now show how to record
$\xrightarrow{so}$, $\xrightarrow{hb}$, and $\xrightarrow{co}$.
For clarity, but without loss of generality, we
separately present several scenarios that exhaust possible
communication/synchronization patterns in our model. We
consider three processes ($p$, $q$, $r$) and we
analyze what data is required to replay $q$. We show each
pattern in Figure~\ref{fig:logging_orders_1}.
\goal{+ Describe logging of consistency order when using puts/flushes}
\textbf{A. Puts and Flushes }
First, $p$ and $r$ issue \textsc{put}s and \textsc{flush}es
at $q$. At both $p$ and $r$, \textsc{put}s separated by \textsc{flush}es are ordered
with $\xrightarrow{co}$. This order is preserved by recording epoch counters ($.EC$) with
every logged \textsc{put}\Dptoq\@. Note, however, that RMA semantics
\emph{do not} order calls issued by $p$ and $r$: $[\textsc{put}$\Dptoq\
$||_{co}\ \textsc{put}$\Drtoq] without additional process
synchronization.
Here, we assume \emph{access determinism}: the recovery output does not depend on the order in which such \textsc{put}s committed in $q$'s memory.
\goal{+ Describe logging of consistency order when using gets/flushes}
\textbf{B. Gets and Flushes }
Next, $q$ issues \textsc{get}s and \textsc{flush}es
targeted at $p$ and $r$. Again, $\xrightarrow{co}$ has to be logged.
However, this time \textsc{get}s targeted at \emph{different}
processes \emph{are} ordered (because they are issued by the same process). To log this ordering, $q$ maintains a local \emph{Get Counter}
$GC_q$ that is incremented each time $q$ issues a \textsc{flush}$(q \to
\diamond)$ to any other process.
The value of this counter is logged with each \textsc{get} using the field $.GC$ (cf. Section~\ref{sec:formal_model}).
\goal{+ Describe logging of synchronization order when using puts/locks}
\textbf{C. Puts and Locks }
In this scenario $p$ and $r$ issue \textsc{put}s at $q$ and synchronize their accesses
with \textsc{lock}s and \textsc{unlock}s. This pattern requires logging the $\xrightarrow{so}$ order.
We achieve this with a \emph{Synchronization Counter} $SC_q$ stored at $q$. After issuing a \textsc{lock}$($\scriptsize $p \rightarrow q$\normalsize$)$\@, $p$ (the same refers to $r$) fetches the value of $SC_q$, increments it, updates remote $SC_q$, and records it with every \textsc{put} using the field
$.SC$ (cf. Section~\ref{sec:formal_model}). In addition, this scenario requires recording $\xrightarrow{co}$ that we solve with $.EC$, analogously as in the ``Puts and Flushes'' pattern.
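One hedged way to realize this fetch-increment-update in MPI-3 is an atomic fetch-and-op; in the fragment below, \textsf{data\_win}, \textsf{SC\_win} (a window exposing $SC_q$ at displacement 0, with an access epoch assumed to be open, e.g., via \textsf{MPI\_Win\_lock\_all} at startup), and \textsf{q} are assumptions.
\begin{verbatim}
/* Sketch: after acquiring the user lock on q, atomically fetch and
 * increment the remote counter SC_q. */
long one = 1, old_sc;
MPI_Win_lock(MPI_LOCK_EXCLUSIVE, q, 0, data_win);   /* the user's lock */

MPI_Fetch_and_op(&one, &old_sc, MPI_LONG, q, 0, MPI_SUM, SC_win);
MPI_Win_flush(q, SC_win);        /* old_sc is now valid locally        */
long my_sc = old_sc + 1;         /* value recorded in each put's .SC   */
\end{verbatim}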
\goal{+ Describe logging of synchronization order when using gets/locks}
\textbf{D. Gets and Locks }
Next, $q$ issues \textsc{get}s and uses \textsc{lock}s
targeted at $p$ and $r$. This pattern is solved analogously to
the ``Gets and Flushes'' pattern.
\goal{+ Describe logging of consistency and HB order when using gsyncs}
\textbf{E. Gsyncs }
The final pattern is \textsc{gsync}s (which may again introduce $\xrightarrow{hb}$) combined with any communication
action. Upon a \textsc{gsync}, each process $q$ increments its
\emph{GsyNc Counter} $GNC_q$ that is logged in an action's $.GNC$ field
(cf.\ Section~\ref{sec:formal_model}).
\begin{algorithm}[h!]
\scriptsize
\SetAlgoLined\DontPrintSemicolon
\SetKwFunction{recovery}{recovery}
\SetKwFunction{logsWithMinCnt}{logsWithMinCnt}
\SetKwFunction{replayEachAction}{replayEachAction}
\SetKwFunction{fetchCheckpointData}{fetchCheckpointData}
\SetKwProg{myalg}{Function}{}{}
\myalg{\recovery{}}{
fetch\_checkpoint\_data()\;
put\_logs := \{\}; get\_logs := \{\}\;
\ForAll{$q \in \mathcal{P}:\ q \neq p_{new}$}{
\textsc{lock}$(p_{new} \rightarrow q)$\;
\If{$N_q[p_f] = true \lor M_q[p_f] = true$}{
\tcc{\scriptsize Stop the recovery and fall back to the last coordinated checkpoint}
}
put\_logs := put\_logs\ $\cup LP_q[p_f]$\;
get\_logs := get\_logs\ $\cup LG_q[p_f]$\;
\textsc{unlock}$(p_{new} \rightarrow q)$\;
}
\While{|put\_logs| > 0 $\lor$ |get\_logs| > 0}{
gnc\_logs := logsWithMinCnt(GNC, put\_logs $\cup$ get\_logs)\;
\While{|gnc\_logs| > 0}{
gnc\_put\_logs := gnc\_logs $\cap$ put\_logs\;
gnc\_get\_logs := gnc\_logs $\cap$ get\_logs\;
ec\_logs := logsWithMinCnt(EC, gnc\_put\_logs)\;
gc\_logs := logsWithMinCnt(GC, gnc\_get\_logs)\;
replayEachAction(ec\_logs)\;
replayEachAction(gc\_logs)\;
gnc\_logs := gnc\_logs \textbackslash\ (ec\_logs $\cup$ gc\_logs)\;
}
put\_logs := put\_logs \textbackslash\ gnc\_logs\;
get\_logs := get\_logs \textbackslash\ gnc\_logs\;
}
\KwRet\;
}
\SetKwProg{myproc}{Function}{}{}
\myalg{\logsWithMinCnt{Counter, Logs}}{
\scriptsize
\tcc{Return a set with logs from $Logs$ that have the smallest value of the specified counter (one of: $GNC$, $EC$, $GC$, $SC$).}
}
\myalg{\replayEachAction{Logs}}{
\scriptsize
\tcc{Replay each log from set $Logs$ in any order.}
}
\myalg{\fetchCheckpointData{}}{
\scriptsize
\tcc{Fetch the last checkpoint and load into the memory.}
}
\caption{The causal recovery scheme for codes that synchronize with \textsc{gsync}s (\cref{sec:recovery_scheme}, \cref{sec:discussion_rec}).}
\label{alg:recovery}
\end{algorithm}
\subsection{Preventing Replaying Accesses Twice}
\label{sec:managing_unc}
\goal{+ Describe the problem of puts that modify memory twice}
Assume that process $p$ issues a \textsc{put}\Dptoq\ (immediately logged
by $p$ in $LP_p[q]$) such that \textsc{put}\Dptoq\ $\xrightarrow{co}
C_{q}^{j}$. It means that the state of $q$ recorded in checkpoint
$C_{q}^{j}$ is affected by \textsc{put}\Dptoq\@. Now assume that $q$
fails and begins to replay the logs. If $p$ did not delete the log of \textsc{put}\Dptoq\ from $LP_p[q]$ (it was allowed to do it after $q$ took $C_{q}^{j}$), then $q$ replays \textsc{put}\Dptoq\ and this \textsc{put} affects its memory \emph{for the second time}. This is not a problem if \textsc{put}\Dptoq$.combine = false$, because such a \textsc{put}
always overwrites the memory region with the same value. However, if \textsc{put}\Dptoq$.combine = true$, then $q$ ends up
in an inconsistent state (e.g., if this \textsc{put} increments a memory cell, this cell will be incremented twice).
\goal{+ Describe the solution to the above problem}
To solve this problem, every process $p$ maintains a local structure $M_p[q] \in \mathcal{S}$. When $p$ issues and logs a \textsc{put}\Dptoq\ such that \textsc{put}\Dptoq$.combine = true$, it sets $M_p[q] := true$. When $p$ deletes \textsc{put}\Dptoq\ from its logs, it sets $M_p[q] := false$. If $q$ fails, starts to recover, and sees that any $M_p[q] = true$, it stops the recovery and falls back to the coordinated checkpoint. This scheme is valid if access determinism is assumed; otherwise, we set $M_p[q] := true$ regardless of the value of \textsc{put}\Dptoq$.combine$. We use the same approach if $q$ can issue \textsc{write}s that are parallel in $||_{co}$ to remote \textsc{put}s accessing the same memory regions.
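A hedged sketch of maintaining these flags at $p$ follows; the array bound and the function names are hypothetical.
\begin{verbatim}
#define NPROCS 64                /* hypothetical upper bound on |P|      */
static int M[NPROCS] = {0};      /* M_p[q], kept at process p            */

/* Called when p logs a put targeted at q.  Without access determinism,
 * the flag would be set regardless of 'combine'. */
void on_put_logged(int q, int combine) {
    if (combine)
        M[q] = 1;                /* replaying this put is not idempotent */
}

/* Called when p deletes its put logs for q (after q's checkpoint). */
void on_put_logs_deleted(int q) {
    M[q] = 0;                    /* nothing left that could apply twice  */
}
\end{verbatim}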
\vspace{+2.2em}
\subsection{Recovering a Failed Process}
\label{sec:recovery_scheme}
\goal{+ Describe the first part of the recovery}
We now describe a protocol for codes that synchronize with \textsc{gsync}s.
Let us denote the failed process as $p_{f}$. We assume an underlying batch system that provides a new process $p_{new}$ in the place of $p_{f}$, and that other processes resume their communication with $p_{new}$ after it fully recovers. We illustrate the scheme in Algorithm~\ref{alg:recovery}. First, $p_{new}$ fetches the checkpointed data.
Second, $p_{new}$ gets the logs of \textsc{put}s (put\_logs) and
\textsc{get}s (get\_logs) related to $p_f$ (lines 3-11). It also checks
if any $N_q[p_f] = true$ (see~\cref{sec:transparentLogging}) or
$M_q[p_f] = true$ (see~\cref{sec:managing_unc}), if yes it instructs all
processes to roll back to the last coordinated checkpoint. The protocol uses \textsc{lock}s (lines 5, 10) to prevent data races due to, e.g., concurrent recoveries and log cleanups by $q$.
\goal{+ Describe the main part of the recovery}
The main part (lines 12-27) replays accesses causally. The recovery ends when there are no logs left (line 12; $|logs|$ is the size of the set ``logs''). We first get the logs with the smallest $.GNC$ (line 13) to maintain $\xrightarrow{cohb}$ introduced by \textsc{gsync}s (see~\cref{sec:logging_order_info} E). Then, within this step, we find the logs with minimum $.EC$ and $.GC$ to preserve $\xrightarrow{co}$ in issued \textsc{put}s and \textsc{get}s, respectively (lines 18-19, see~\cref{sec:logging_order_info} A, B). We replay them in lines 20-21.
\begin{theorem}
The recovery scheme presented in Algorithm~\ref{alg:recovery} replays each fetched action exactly once.
\end{theorem}
\begin{proof}
Consider the gnc\_logs set obtained in line 13. The definition of function logsWithMinCnt ensures that, after executing the action in line 13 and before entering the loop that starts in line 15, every $a \in $ gnc\_logs has identical $a.GNC$. Then,
the condition in line 15 together with the actions in line 22 and the definition of logsWithMinCnt ensure that gnc\_logs is empty when the loop in lines 15-23 exits (all logs in gnc\_logs are replayed). This result, together with the actions in lines 24-25, guarantees that each action $a$ obtained in lines 4-11 is extracted from put\_logs $\cup$ get\_logs in line 13 and replayed exactly once.
\end{proof}
\begin{theorem}
The recovery scheme presented in Algorithm~\ref{alg:recovery} preserves the $\xrightarrow{cohb}$ order introduced by \textsc{gsync}s (referred to as the \emph{gsync order}).
\end{theorem}
\begin{proof}
Let us denote the action that replays communication action $a$ at process $p_{new}$ as $\mathcal{R}(a,p_{new}) \in \mathcal{I}$ (as $\mathcal{R}(a,p_{new})$ affects only the memory of the calling process $p_{new}$, it is an internal action).
Assume by contradiction that the gsync order is not preserved while recovering. Thus, $\exists a_1, a_2 \in \text{put\_logs} \cup \text{get\_logs}:\ (a_1.GNC > a_2.GNC) \wedge (\mathcal{R}(a_1,p_{new}) \xrightarrow{po} \mathcal{R}(a_2,p_{new}))$.
It means that the action of including $a_1$ into gnc\_logs (line 13) took place before the analogous action for $a_2$ (in the $\xrightarrow{po}$ order). But this contradicts the definition of function logsWithMinCnt($GNC$, set) that returns all the actions from set that have the minimum value of the $GNC$ counter.
\end{proof}
\begin{algorithm}[h!]
\scriptsize
\SetAlgoLined\DontPrintSemicolon
\SetKwFunction{recovery}{recovery}
\SetKwFunction{logsWithMinCnt}{logsWithMinCnt}
\SetKwFunction{replayEachAction}{replayEachAction}
\SetKwFunction{fetchCheckpointData}{fetchCheckpointData}
\SetKwProg{myalg}{Function}{}{}
\myalg{\recovery{}}{
fetch\_checkpoint\_data()\;
put\_logs := \{\}\;
\ForAll{$q \in \mathcal{P}:\ q \neq p_{new}$}{
\textsc{lock}$(p_{new} \rightarrow q)$\;
\If{$M_q[p_f] = true$}{
\tcc{\scriptsize Stop the recovery and fall back to the last coordinated checkpoint}
}
put\_logs := put\_logs\ $\cup LP_q[p_f]$\;
\textsc{unlock}$(p_{new} \rightarrow q)$\;
}
\While{|put\_logs| > 0}{
sc\_put\_logs := logsWithMinCnt(SC, put\_logs)\;
\While{|sc\_put\_logs| > 0}{
ec\_logs := logsWithMinCnt(EC, sc\_put\_logs)\;
replayEachAction(ec\_logs)\;
sc\_put\_logs := sc\_put\_logs \textbackslash\ ec\_logs\;
}
put\_logs := put\_logs \textbackslash\ sc\_put\_logs\;
}
\KwRet\;
}
\SetKwProg{myproc}{Function}{}{}
\myalg{\logsWithMinCnt{Counter, Logs}}{
\scriptsize
\tcc{Return a set with logs from $Logs$ that have the smallest value of the specified counter (one of: $GNC$, $EC$, $GC$, $SC$).}
}
\myalg{\replayEachAction{Logs}}{
\scriptsize
\tcc{Replay each log from set $Logs$ in any order.}
}
\myalg{\fetchCheckpointData{}}{
\scriptsize
\tcc{Fetch the last checkpoint and load into the memory.}
}
\caption{The causal recovery scheme for codes that synchronize with \textsc{lock}s and communicate with \textsc{put}s (\cref{sec:recovery_scheme}, \cref{sec:discussion_rec}).}
\label{alg:recovery_locks}
\end{algorithm}
We now present a recovery scheme for codes that synchronize with \textsc{lock}s and communicate with \textsc{put}s.
The first part of the scheme is identical to the one that targets \textsc{gsync}s; the difference is that we do not have to check the values of $N_q[p_f]$.
\goal{+ Describe the main part of the recovery}
In the main part (lines 11-20), actions are replayed causally. We first get the logs with the smallest $.SC$ (line 12) to maintain $\xrightarrow{so}$ introduced by \textsc{lock}s (see~\cref{sec:logging_order_info} C). Then, within this step, we find the logs with minimum $.EC$ to preserve the $\xrightarrow{co}$ order (line 14, see~\cref{sec:logging_order_info} A). The \textsc{put}s are replayed in line 15.
\subsection{Discussion}
\label{sec:discussion_rec}
\goal{+ Describe the memory-performance tradeoffs}
Our recovery schemes present a trade-off between memory efficiency and time to recover.
Process $p_{new}$ fetches all related logs and only then begins to replay accesses. Thus,
we assume that its memory has capacity to contain put\_logs and get\_logs; a reasonable
assumption if the user program has regular communication patterns (true for most of today's
RMA applications~\cite{fompi-paper}). A more memory-efficient scheme fetches logs
while recovering. This incurs performance issues as $p_{new}$ has to access remote logs multiple times.
\section{Extending the Model for more Resilience}
\label{sec:divisionIntoGroups}
\goal{Motivate and explain our model extensions}
Our model and in-memory resilience schemes
are oblivious to the underlying hardware.
However, virtually all of today's systems have
a hierarchical hardware layout (e.g., cores
reside on a single chip, chips reside in a single node, nodes form a rack, and racks form a cabinet).
Multiple elements may be affected by a single
failure at a higher level, jeopardizing the safety
of our protocols.
We now extend our model to
cover arbitrary hierarchies
and propose \emph{topology-aware} mechanisms
to make our schemes handle concurrent hardware failures.
Specifically, we propose the following three extensions:
\goal{+ Describe failure domain hierarchies and how we model them}
\textbf{The Hierarchy of Failure Domains }
A \emph{failure domain} (FD) is an element of a hardware hierarchy that
can fail (e.g., a node or a cabinet). FDs constitute an FD hierarchy
(FDH) with $h$ levels. An example FDH is shown in
Figure~\ref{fig:failureHierarchy} ($h = 4$). We skip the level of single
cores because in practice the smallest FD is a node (e.g., in the TSUBAME2.0 system failure history, there are no core failures~\cite{tsubame2}).
Then, we define $\mathcal{H} = \bigcup_{1 \le j \le h} \left( \bigcup_{1
\le i \le H_{j}} H_{i,j} \right)$ to be the set of all the FD elements
in an FDH. $H_{i,j}$ and $H_j$ are element $i$ of hierarchy level $j$
and the number of such elements at level $j$, respectively. For example,
in Figure~\ref{fig:failureHierarchy} $H_{3,2}$ is the third blade (level 2) and $H_2 = 96$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.44\textwidth]{hardware_hierarchy_7-eps-converted-to.pdf}
\caption{An example hardware layout (Cray XT/XE) and the corresponding FDH (\cref{sec:divisionIntoGroups}). In this example, $h = 4$.}
\label{fig:failureHierarchy}
\end{figure}
\goal{+ Describe why and how we split processes and add checksums}
\textbf{Groups of Processes }
To improve resilience, we split the process set $\mathcal{P}$ into $g$
equally-sized groups $G_i$ and
add $m$ \emph{checksum}
processes to each group to store checksums of checkpoints taken in each
group (using, e.g., the Reed-Solomon~\cite{reed1960polynomial} coding scheme).
Thus, every group can resist $m$ concurrent process crashes.
The group size is $|G| = \frac{|\mathcal{P}|}{g} + m$.
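For example (an illustrative configuration, not one used in our evaluation), with $|\mathcal{P}| = 4{,}000$ processes, $g = 100$ groups, and $m = 1$, each group contains $|G| = 4{,}000/100 + 1 = 41$ processes and tolerates one concurrent process crash.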
\goal{+ Show and describe the extensions to the previous system definition}
\textbf{New System Definition}
We now extend the definition of a distributed system $\mathcal{D}$ to
cover the additional concepts:
\begin{alignat}{2}
\langle \mathcal{P}, \mathcal{E}, \mathcal{S}, \mathcal{H}, \mathcal{G},
\xrightarrow{po}, \xrightarrow{so}, \xrightarrow{hb}, \xrightarrow{co},
\mathscr{M}\rangle.
\end{alignat}
\noindent
$\mathcal{G} = \{G_{1}, ..., G_{g}\}$ is a set of $\mathcal{G}$roups of processes and $\mathscr{M}: \mathcal{P} \times \mathbb{N} \to \mathcal{H}$ is a function that $\mathscr{M}$aps process $p$ to the FD at hierarchy level $k$ where $p$ runs:
$\mathscr{M}(p,k) = H_{j,k}$. $\mathscr{M}$ defines how processes are distributed over FDH. For example, if $p$ runs on blade $H_{1,2}$ from Figure~\ref{fig:failureHierarchy}, then $\mathscr{M}(p,2) = H_{1,2}$.
\vspace{+1.3em}
\subsection{Handling Multiple Hardware Failures}
\label{sec:handling_multiple_hf}
\goal{+ Describe handling multiple failures with topology-awareness}
More than $m$ process crashes in any group $G_i$ result in a \emph{catastrophic failure} (CF; we use the name from \cite{Bautista-Gomez:2011:FHP:2063384.2063427}) that requires restarting the whole computation. Depending on how $\mathscr{M}$ distributes processes,
such a CF may be caused by several (or even one) crashed FDs. To minimize the risk of CFs, $\mathscr{M}$ has to be \emph{topology-aware} (t-aware): for a given level $n$ (called a \emph{t-awareness level}), no more than $m$ processes from the same group can run on the same $H_{i,k}$ at any level $k, k\le n$:
\small
\begin{alignat}{2}
&\forall p_1,p_2,...,p_m \in \mathcal{P}\quad \forall G \in \mathcal{G}\quad \forall 1 \le k \le n:\nonumber\\
&(p_1 \in G \wedge ... \wedge p_m \in G) \Rightarrow (\mathscr{M}(p_1,k) \neq ... \neq \mathscr{M}(p_m,k))
\end{alignat}
\normalsize
\noindent
Figure~\ref{fig:distribution} shows an example t-aware process distribution.
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{distribution_2-eps-converted-to.pdf}
\caption{T-aware distribution at the node \emph{and} rack level (\cref{sec:handling_multiple_hf}).}
\label{fig:distribution}
\end{figure}
\subsection{Calculating Probability of a CF}
\label{sec:probabilityStudy}
\goal{+ Describe goals and assumptions in calculating CF probability}
We now calculate
the probability of a catastrophic failure ($P_{cf}$) in our model. We
later (\cref{sec:reliabilityStudy}) use $P_{cf}$ to show that our
protocols are resilient on a concrete machine (the TSUBAME2.0
supercomputer~\cite{tsubame2}). If a reader is not interested in the
derivation details, she may proceed to
Section~\ref{sec:libImplementation} where we present the results. We set $m = 1$ and thus use the XOR erasure code, similar to an additional disk
in a RAID5~\cite{Chen:1994:RHR:176979.176981}.
We assume that failures at different hierarchy levels are independent
and that any number $x_j$ of elements from any hierarchy level $j$ ($1
\le x_j \le H_j$, $1 \le j \le h$) can fail. Thus,
\scriptsize
\begin{alignat}{2}
P_{cf} &= \sum_{j=1}^{h} \sum_{x_j=1}^{H_j} P(x_j \cap x_{j,cf}) =
\sum_{j=1}^{h} \sum_{x_j=1}^{H_j} P_{j}(x_j) P_{j}(x_{j,cf} | x_j).
\end{alignat}
\normalsize
\goal{+ Describe the first, general formula (7)}
$P(x_j \cap x_{j,cf})$ is the probability that $x_j$ elements of hierarchy level $j$ fail \emph{and} result in a catastrophic failure. $P_{j}(x_j)$ is the probability that $x_j$ elements from level $j$ of the hierarchy fail. $P_{j}(x_{j,cf} | x_j)$ is the probability that $x_j$ given concurrent failures at hierarchy level $j$ are catastrophic to the system. It is difficult to derive $P_{j}(x_j)$ analytically as it is specific to each machine. For our example study (see Section~\ref{sec:reliabilityStudy}) we use the failure rates from the TSUBAME2.0 failure history~\cite{tsubame2}.
\goal{+ Describe the conditional formula (8) and show the final Equation (9)}
In contrast, $P_{j}(x_{j,cf} | x_j)$ can be calculated using combinatorial theory. Assume that $\mathscr{M}$ distributes processes in a t-aware way at levels $1$ to $n$ of the FDH ($1 \le n \le h$). First, we derive $P_{j}(x_{j,cf} | x_j)$ for any level $j$ such that $1 \le j \le n$:
\scriptsize
\begin{alignat}{2}
P_{j}(x_{j,cf} | x_j) &= \frac{D_j \cdot \binom{|G|}{2}\cdot
\binom{H_j-2}{x_j-2} }{ \binom{H_j}{x_j} }.
\end{alignat}
\normalsize
\noindent
$\binom{|G|}{2}$ is the number of the possible catastrophic failure
scenarios \emph{in a single group} ($m=1$ thus any two process crashes in
one group are catastrophic). $D_j$ is the number of such single-group
scenarios \emph{at the whole level $j$} and is equal to $\left\lceil
\frac{H_j}{|G|} \right\rceil$ (see Figure~\ref{fig:dist_expl} for an intuitive
explanation). $\binom{H_j-2}{x_j-2}$ is the number of the remaining
possible failure scenarios and $\binom{H_j}{x_j}$ is the total number of
the possible failure scenarios. Second, for remaining levels $j$ ($n+1
\le j \le h$) $\mathscr{M}$ is \emph{not} t-aware and thus in the
worst-case scenario any element crash is catastrophic: $P_{j}(x_{j,cf} |
x_j) = 1$. The final formula for $P_{cf}$ is thus
\scriptsize
\begin{alignat}{2}
P_{cf} &= \sum_{j=1}^{n} \sum_{x_j=1}^{H_j} P_{j}(x_j) \frac{D_j \cdot
\binom{|G|}{2} \cdot \binom{H_j-2}{x_j-2} }{ \binom{H_j}{x_j} }
+\sum_{j=n+1}^{h} \sum_{x_j=1}^{H_j} P_{j}(x_j).
\end{alignat}
\normalsize
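For concreteness, the hedged helper below evaluates the conditional probability $P_{j}(x_{j,cf}\,|\,x_j)$ above for $m = 1$; it is an illustration of ours (not part of \textsc{ftRMA}{}) and uses \textsf{lgamma} to keep the binomial coefficients numerically stable for large $H_j$.
\begin{verbatim}
#include <math.h>

/* log C(n, k) via lgamma, to avoid overflow for large H_j. */
static double log_binom(double n, double k) {
    return lgamma(n + 1.0) - lgamma(k + 1.0) - lgamma(n - k + 1.0);
}

/* P_j(x_{j,cf} | x_j) for m = 1.  H_j: #elements at level j,
 * G: group size |G|, x_j: #concurrent failures at level j. */
double p_cf_given_xj(int H_j, int G, int x_j) {
    if (x_j < 2 || x_j > H_j) return 0.0;  /* a CF needs >= 2 crashes   */
    double D_j  = ceil((double)H_j / G);   /* single-group CF scenarios */
    double logp = log(D_j) + log_binom(G, 2)
                + log_binom(H_j - 2, x_j - 2) - log_binom(H_j, x_j);
    return exp(logp);
}
\end{verbatim}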
\begin{figure}[h!]
\centering
\includegraphics[scale=1.7]{failure_model_3-eps-converted-to.pdf}
\caption{(\cref{sec:probabilityStudy}) Consider three process distribution scenarios by $\mathscr{M}$ (\emph{each} is t-aware). Optimistically, processes can be distributed contiguously (scenario A) or partially fragmented (scenario B). To get the upper bound for $P_{cf}$ we use the worst-case pattern (scenario C). Now, to get the number of single-group CF scenarios at the whole level $j$ ($D_j$), we need to obtain the number of the groups of \emph{hardware elements} at $j$ that
hold process groups: $\lceil H_j/|G| \rceil$.}
\label{fig:dist_expl}
\end{figure}
\section{Holistic Resilience Protocol}
\label{sec:libImplementation}
\goal{Describe the goal and the implementation details of the protocol}
We now describe an example conceptual implementation of holistic fault tolerance for RMA
that we developed to understand the tradeoffs between resilience and performance in RMA-based systems.
We implement it as a portable library (based on C
and MPI) called \textsc{ftRMA}{}. We utilize MPI-3's one sided
interface, but any other RMA model enabling relaxed memory
consistency could be used instead (e.g., UPC or Fortran 2008). We use
the publicly available \textsc{foMPI}{} implementation of MPI-3 one sided as MPI
library~\cite{fompi} but any other MPI-3 compliant
library would be suitable.
For simplicity we assume that the user application uses one
contiguous region of shared memory of the same size at each process.
Still, all the conclusions drawn are valid for any other
application pattern based on RMA. Following the MPI-3 specification, we call this shared region of memory at every process a $window$.
Finally, we divide user processes (referred to as CoMputing processes, $CMs$) into groups (as described in Section~\ref{sec:divisionIntoGroups})
and add one CHecksum process (denoted as $CH$) per group ($m=1$). For any computing process $p$, we denote the $CH$ in its group as $CH(p)$. $CHs$ store and update XOR checksums of their $CMs$.
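As a hedged illustration of the checksum update, a $CH$ could fold a checkpoint chunk received from one of its $CMs$ into an XOR parity buffer as follows; the buffer names are ours.
\begin{verbatim}
#include <stddef.h>

/* Sketch: fold one checkpoint chunk into the group's XOR parity
 * (m = 1, RAID5-like).  Recovery XORs the parity with the surviving
 * CMs' checkpoints to rebuild the lost one. */
void xor_fold(unsigned char *parity, const unsigned char *chunk, size_t n) {
    for (size_t i = 0; i < n; i++)
        parity[i] ^= chunk[i];
}
\end{verbatim}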
\subsection{Protocol Overview}
\label{sec:prot_over_small}
\goal{+ Describe protocol layers and modules}
In this section we provide a general overview of the layered protocol implementation
(see Figure~\ref{fig:generalOverview}). The first part (layer 1) logs accesses.
The second layer takes uncoordinated checkpoints (called \emph{demand} checkpoints) to trim the logs.
Layer 3 performs regular coordinated checkpoints. All layers are diskless.
Causal recovery replays memory accesses. Finally, our FDH increases resilience of the whole protocol.
\begin{figure}[h!]
\centering
\includegraphics[width=0.4\textwidth]{general_model_2_w_2-eps-converted-to.pdf}
\caption{Overview of the protocol (\cref{sec:prot_over_small}). Layers 1 and 2 constitute the
uncoordinated part of the protocol that falls back to the
coordinated checkpointing if logging fails or if its overhead is too high.}
\label{fig:generalOverview}
\end{figure}
\begin{figure*}
\centering
\vspace{-1.2em}
\subfloat[Distribution of node crashes (samples and the fit) (\cref{sec:reliabilityStudy}).]{
\includegraphics[width=0.23\textwidth]{nodes-barplot-eps-converted-to.pdf}
\label{fig:pdfRacks}
}\hfill
\subfloat[Distribution of PSU crashes (samples and the fit) (\cref{sec:reliabilityStudy}).]{
\includegraphics[width=0.23\textwidth]{psus-barplot-eps-converted-to.pdf}
\label{fig:pdfNodes}
}\hfill
\subfloat[Probability of a catastrophic failure (\cref{sec:compar_resil}).]{
\includegraphics[width=0.23\textwidth]{catas-eps-converted-to.pdf}
\label{fig:probabilityPict}
}\hfill
\subfloat[NAS FFT (class C) fault-free runs: checkpointing (\cref{sec:nas_eval}).]{
\includegraphics[width=0.23\textwidth]{nas-chpt-eps-converted-to.pdf}
\label{fig:fftFFrun}
}\hfill
\caption{Distribution of PSU \& node failures, $P_{cf}$ in TSUBAME2.0
running 4,000 processes, and the performance of NAS 3D FFT.}
\label{fig:probability}
\vspace{-1.0em}
\end{figure*}
\goal{+ Say how and why we use Daly's formula}
\textbf{Daly's Interval }
Layer 3 uses Daly's formula~\cite{Daly:2006:HOE:1134241.1134248} as the optimum interval between coordinated checkpoints: \small$\sqrt{2 \delta M} \cdot [1 + 1/3 \sqrt{\delta/(2 M)} + (1/9) (\delta/(2 M))] - \delta$ \normalsize(for $\delta < 2 M$), or $M$ (for $\delta \ge 2 M$). $M$ is the MTBF (mean time between failures that \textsc{ftRMA}{} handles with coordinated checkpointing) for the target machine and $\delta$ is the time to take a checkpoint. The user provides $M$ while $\delta$ is estimated by our protocol.
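A hedged C rendering of this interval computation (our illustration; $M$ and $\delta$ must be given in the same time unit):
\begin{verbatim}
#include <math.h>

/* Daly's higher-order estimate of the optimum checkpoint interval.
 * M: MTBF handled by coordinated checkpointing, delta: checkpoint cost. */
double daly_interval(double M, double delta) {
    if (delta >= 2.0 * M) return M;
    double r = sqrt(delta / (2.0 * M));
    return sqrt(2.0 * delta * M) * (1.0 + r / 3.0 + r * r / 9.0) - delta;
}
\end{verbatim}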
\goal{+ Say how we interface with user/runtime}
\textbf{Interfacing with User Programs and Runtime }
\textsc{ftRMA}{} routines are called after each RMA action.
This would entail runtime system calls in compiled
languages and we use the PMPI profiling interface~\cite{mpi3} in our
implementation.
During window creation
the user can specify: (1) the number of $CHs$,
(2) MTBF, (3) whether to use topology-awareness.
After window creation, the protocol divides processes into $CMs$ and
$CHs$. If the user enables t-awareness, groups of processes running on the same
FDs are also created. In the current version \textsc{ftRMA}{} takes into account computing nodes
when applying t-awareness.
\subsection{Demand Checkpointing}
\goal{+ Motivate and describe coordinated checkpoints}
\emph{Demand checkpoints} address the problem of diminishing amounts of
memory per core in today's and future computing centers.
If free memory at $CM$ process $p$ is scarce, $p$ selects the process $q$ with
the largest $LP_p[q]$ or $LG_p[q]$ and requests a demand checkpoint.
First, $p$ sends a \emph{checkpoint request} to $CH(q)$ which, in turn, forces $q$ to
checkpoint. This can be done by closing all the epochs, locking all the relevant data
structures, calculating the XOR checksum, and then either (1) streaming the result
to $CH(q)$ piece by piece or (2) sending the
result in a single bulk transfer. $CH(q)$ integrates the received checkpoint data into the existing XOR checksum. Variant (1) is memory-efficient, while variant (2) is less
time-consuming. Next, $q$ unlocks all the data
structures. Finally, $CH(q)$ sends a confirmation with the epoch number $E$$($\scriptsize $p \rightarrow q$\normalsize$)$\ and respective counters ($GNC_q$, $GC_q$, $SC_q$) to $p$.
Process $p$ can delete logs of actions $a$ where $a.EC < E$$($\scriptsize $p \rightarrow q$\normalsize$)$\@, $a.GNC < GNC_q$, $a.GC < GC_q$, $a.SC < SC_q$.
\section{Testing and Evaluation}
\label{sec:testing}
\goal{Summarize the section, explain the notation used}
In this section we first analyze the resilience of our protocol using real data from the TSUBAME2.0~\cite{tsubame2} failure history. Then, we test the performance of \textsc{ftRMA}{} with a NAS benchmark~\cite{Bailey91thenas} that computes a 3D Fast Fourier Transform (FFT) and with a distributed key-value store.
We denote the number of $CHs$ and $CMs$ as $|CH|$ and $|CM|$, respectively.
\subsection{Analysis of Protocol Resilience}
\label{sec:reliabilityStudy}
\goal{+ Explain why we do resilience analysis}
Our protocol stores all data in volatile memories to avoid I/O performance penalties
and frequent disk and parallel file system failures~\cite{Sato:2012:DMN:2388996.2389022,disk_fails}.
This raises the question of whether the scheme is resilient in practical environments.
To answer it, we calculate the probability of a catastrophic failure
$P_{cf}$ of our protocol (using Equations~(7) and~(9)), applying t-awareness at different levels of the FDH.
\goal{+ Describe how we use our model with data from TSUBAME}
We first fix model parameters ($H_j$, $h$) to reflect the hierarchy of TSUBAME2.0.
The TSUBAME2.0 FDH has four levels ($h=4$)~\cite{Sato:2012:DMN:2388996.2389022}: nodes,
power supply units (PSUs), edge switches, and racks.
Then, to get $P_{cf}$, we calculate distributions $P_j(x_j)$ that determine the probability
of $x_j$ concurrent crashes at level $j$ of the TSUBAME FDH.
To obtain $P_j(x_j)$ we analyzed 1962
crashes in the history of
TSUBAME2.0 failures~\cite{tsubame2}. Following
\cite{Bautista-Gomez:2011:FHP:2063384.2063427} we decided to use
exponential probability distributions, where the argument is the number
of concurrent failures $x_j$.
We derived four probability density functions (PDFs) that approximate the failure distributions of
nodes (\small$0.30142 \cdot 10^{-2} e^{-1.3567 x_1}$\normalsize),
PSUs (\small$1.1836 \cdot 10^{-4} e^{-1.4831 x_2}$\normalsize),
switches (\small$3.9249 \cdot 10^{-5} e^{-1.5902 x_3}$\normalsize),
and racks (\small$3.2257 \cdot 10^{-5} e^{-1.5488 x_4}$\normalsize).
The unit is failures per day.
Figures~\ref{fig:pdfRacks} and~\ref{fig:pdfNodes}
illustrate two PDF plots with histograms.
The
distributions for PSUs, switches, and racks are based on real data
only. For nodes it was not always possible to determine the exact
correlation of failures. Thus, we pessimistically assumed
(based on~\cite{Bautista-Gomez:2011:FHP:2063384.2063427}) that single
crashes constitute 75\% of all node failures, two
concurrent crashes constitute 20\%, and other values decrease
exponentially.
\subsubsection{Comparison of Resilience}
\label{sec:compar_resil}
\goal{+ Describe the results of the resilience analysis}
Figure~\ref{fig:probabilityPict} shows the resilience of our protocol when using
five t-awareness strategies. The number of processes $N$ is 4,000.
$P_{cf}$ is normalized to a one-day period. Without t-awareness (\smalltt{no-topo}),
a single crash of any FD of TSUBAME2.0 is catastrophic; thus, $P_{cf}$ does
not depend on $|CH|$.
In other scenarios every process from
every group runs on a different node (\smalltt{nodes}), PSU ({\smalltt{PSUs}}),
switch enclosure ({\smalltt{switches}}) and rack (\smalltt{racks}).
In all cases $P_{cf}$ decreases proportionally to the increasing $|CH|$, however at some point the
exponential distributions ($P_j(x_j)$) begin to dominate the results.
Topology-awareness at higher hierarchy levels
significantly improves the resilience of our protocol.
For example, if $|CH| = 5\% N$, $P_{cf}$ in the \smalltt{switches}
scenario is $\approx$4 times lower than in \smalltt{nodes}.
Furthermore, all t-aware schemes are 1-3 orders of
magnitude more resilient than \smalltt{no-topo}.
\goal{+ State our schemes are safe and we don't want I/O and disks}
The results show that even a simple scheme (\smalltt{nodes}) significantly
improves the resilience of our protocol, which performs
only in-memory checkpointing and logging. We conclude that
costly I/O flushes to the parallel file system (PFS) are not required for
obtaining a high level of resilience.
On the contrary, such flushes may
even \emph{increase} the risk of failures. They usually entail stressing the I/O system for significant amounts of time
\cite{Sato:2012:DMN:2388996.2389022}, and stable storage is often the
element most susceptible to crashes. For example, a Blue Gene/P
supercomputer had 4,164 disk failure events in 2011 (for 10,400 total
disks)~\cite{disk_fails}, and its PFS failed 77 times, almost twice as often
as other hardware~\cite{disk_fails}.
\begin{figure*}
\centering
\subfloat[NAS FFT (class A) fault-free runs: demand checkpointing.]{
\includegraphics[width=0.30\textwidth]{d-checks-b-eps-converted-to.pdf}
\label{fig:dchecks}
}\hfill
\subfloat[NAS FFT (class A) fault-free runs: logging.]{
\includegraphics[width=0.3\textwidth]{nas-logs-b-eps-converted-to.pdf}
\label{fig:nas-logs}
}\hfill
\subfloat[Key-value store fault-free runs.]{
\includegraphics[width=0.3\textwidth]{kv-log-b-eps-converted-to.pdf}
\label{fig:hashFFrun}
}
\caption{Performance of the NAS FFT code (\cref{sec:nas_eval}) and the key-value store (\cref{sec:kv_eval}).}
\label{fig:res-fft-kv}
\end{figure*}
\subsection{Analysis of Protocol Performance}
\goal{+ Describe the evaluation section, state we're doing worst-case benchmarks}
We now discuss the performance of our fault tolerance protocol after
integrating it with two applications: NAS 3D FFT and a distributed
key-value store. Both of these
applications are characterized by intensive communication patterns
and thus represent worst-case scenarios for our protocol. Integrating \textsc{ftRMA}{}
with the application code was trivial and required minimal code changes,
leaving the code complexity unchanged.
\goal{+ Describe SCR and how we configure it}
\textbf{Comparison to Scalable Checkpoint/Restart }
We compare \textsc{ftRMA}{} to Scalable Checkpoint/Restart (SCR)~\cite{scr}, a popular open-source library that provides
checkpoint and restart capability for MPI codes but does not provide logging.
We turn on the XOR scheme in SCR
and we fix the size of SCR groups~\cite{scr} so that they match the analogous parameter
in \textsc{ftRMA}{} ($|G|$).
To make the comparison fair,
we configure SCR to save checkpoints both to in-memory tmpfs (\smalltt{SCR-RAM})
and to the PFS
(\smalltt{SCR-PFS}).
\goal{+ Describe a simple ML protocol that we compare to }
\textbf{Comparison to Message Logging }
To compare the logging overheads in MP and RMA we also developed a simple message logging (ML) scheme (based on the protocol from~\cite{Riesen:2012:ASI:2388996.2389021}) that
records accesses.
As in~\cite{Riesen:2012:ASI:2388996.2389021},
we use additional processes to store protocol-specific access logs;
the data is stored at the sender's or the receiver's side,
depending on the type of operation.
\goal{+ Describe hardware we used for benchmarks}
We execute all benchmarks on the
Monte Rosa system using Cray XE6 compute
nodes. Each node contains four 8-core 2.3~GHz AMD Opteron 6276
(Interlagos) processors and is connected to a 3D-torus
Gemini network. We use the Cray Programming Environment 4.1.46 to
compile the code.
\subsubsection{NAS 3D Fast Fourier Transformation}
\label{sec:nas_eval}
\goal{+ Describe NAS 3D FFT}
Our version of the NAS 3D FFT~\cite{Bailey91thenas} benchmark is based on MPI-3
nonblocking \textsc{put}s (we exploit the overlap of computation and
communication). The benchmark calculates a 3D FFT using a 2D decomposition.
\goal{+ Describe the performance of fault-free runs when checkpointing}
\textbf{Performance of Coordinated Checkpointing }
We begin by evaluating our ``Gsync'' checkpointing scheme.
Figure~\ref{fig:fftFFrun} illustrates the performance of NAS FFT fault-free
runs. We compare: the original application code without any fault tolerance
(\smalltt{no-FT}), \textsc{ftRMA}{}, \smalltt{SCR-RAM}, and \smalltt{SCR-PFS}. We fix $|CH| = 12.5\% |CM|$.
We include two \textsc{ftRMA}{} scenarios: \smalltt{f-daly} (coordinated checkpoints taken at the interval given by Daly's formula), and
\smalltt{f-no-daly} (a fixed checkpoint frequency without Daly's formula, $\approx$2.7s for 1024 processes).
We use the same t-awareness policy in all codes (\smalltt{nodes}).
The tested schemes incur the following fault-tolerance
overheads over the baseline \smalltt{no-FT}: 1-5\% (\smalltt{f-daly}), 1-15\% (\smalltt{f-no-daly}), 21-37\% (\smalltt{SCR-RAM}), and
46-67\% (\smalltt{SCR-PFS}). The performance
of \smalltt{SCR-RAM} is lower than that of \smalltt{f-daly} and \smalltt{f-no-daly} because \textsc{ftRMA}{} is based on the Gsync scheme, which incurs less synchronization. \smalltt{SCR-PFS} entails the highest overheads
due to costly I/O flushes.
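For intuition on how \smalltt{f-daly} selects its checkpoint interval, the first-order Young/Daly estimate can be computed as below; this is only an illustrative sketch with placeholder numbers, not the exact formula or the parameters used in our runs (Daly's full result~\cite{Daly:2006:HOE:1134241.1134248} includes higher-order corrections):
\begin{verbatim}
import math

def young_daly_interval(ckpt_cost_s: float, mtbf_s: float) -> float:
    """First-order Young/Daly estimate of the checkpoint interval:
    sqrt(2 * delta * M), with delta the checkpoint cost and M the MTBF."""
    return math.sqrt(2.0 * ckpt_cost_s * mtbf_s)

# Placeholder numbers for illustration only (not measured on our system):
delta = 0.05          # 50 ms in-memory checkpoint
mtbf = 6 * 3600.0     # 6 hours mean time between failures
print(f"checkpoint every ~{young_daly_interval(delta, mtbf):.1f} s")
\end{verbatim}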
\textbf{Performance of Demand Checkpointing }
We now analyze how the size of the log
impacts the number of demand checkpoints and the performance of fault-free runs (see Figure~\ref{fig:dchecks}).
Dedicating less than 44 MiB of memory per process for storing logs triggers demand checkpoint requests to
clear the log. This results in performance penalties but leaves more memory available to the user.
Next, we illustrate how $|CH|$ impacts the performance of recovering a process from its last demand
checkpoint (see Figure~\ref{fig:nas-rec}).
We run the NAS benchmark 10 times, and after every such iteration we communicate the checksum
necessary to recover the process. We use the \smalltt{nodes} t-awareness and compare
\smalltt{no-FT}, \smalltt{f-12.5-nodes} ($|CH|=12.5\% |CM|$), and \smalltt{f-6.25-nodes} ($|CH|=6.25\% |CM|$).
RMA's direct memory accesses ensure transparency and
relatively small overheads: when $|CH|=12.5\% |CM|$,
the 10 checksum transfers performed during the 10 iterations make the run only 60\% slower than \smalltt{no-FT}.
\begin{figure}[h!] \centering
\includegraphics[width=0.3\textwidth]{nas-rec-b-eps-converted-to.pdf}
\vspace{-0.5em}
\caption{NAS FFT (class C) recovery from a demand checkpoint. (\cref{sec:nas_eval}).}
\label{fig:nas-rec}
\end{figure}
\goal{+ Describe the performance of fault-free runs when logging}
\textbf{Performance of Access Logging }
As the next step we evaluate our logging scheme.
Figure~\ref{fig:nas-logs} illustrates the performance of fault-free
runs. We compare
\smalltt{no-FT}, \textsc{ftRMA}{}, and our ML protocol (\smalltt{ML}).
\textsc{ftRMA}{} adds only $\approx$8-9\% overhead over the baseline (\smalltt{no-FT})
and consistently outperforms \smalltt{ML} by $\approx$9\%,
owing to the smaller amount of protocol-specific interaction between processes.
\goal{+ Describe how topo-awareness and |CH| influences results}
\textbf{Varying |CH| and T-Awareness Policies }
Here, we analyze how $|CH|$ and t-awareness impact the performance of NAS FFT fault-free runs. We set $|CH|=12.5\% |CM|$ and $|CH|=6.25\% |CM|$, and we use the \smalltt{no-topo} and \smalltt{nodes} t-awareness policies. The results show that all these schemes differ only negligibly from \smalltt{no-FT}, by 1-5\%.
\begin{figure*}[ht]
\centering
\scalebox{0.55}{\input{protocols_4.tex}}
\caption{An overview of existing checkpointing and logging schemes (\cref{sec:relatedWork}). A dashed rectangle illustrates a new sub-hierarchy introduced in the paper: dividing the logging protocols with respect to the \emph{communication model} that they address.}
\label{fig:overview}
\end{figure*}
\subsubsection{Key-Value Store}
\label{sec:kv_eval}
\goal{+ Describe our DHT}
Our key-value store is based on a simple distributed hashtable (DHT)
that stores 8-byte integers.
The DHT consists of parts called \emph{local volumes}, constructed as
fixed-size arrays. Every local volume is managed
by a different process. Inserts are
based on the MPI-3 atomic Compare-And-Swap and Fetch-And-Op functions.
Elements that collide are inserted into an overflow heap that is part of each local volume.
To insert an element, a thread atomically updates the pointers to the next free cell and to the last element
in the local volume.
Memory consistency is ensured with flushes. One \textsc{get} and one \textsc{put} are logged if there is no hash collision; otherwise, 6 \textsc{put}s and 4 \textsc{get}s are recorded.
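The insert path of a local volume can be sketched in plain, non-distributed Python as follows; the RMA atomics (CAS, Fetch-And-Op) and flushes are omitted and all names are hypothetical:
\begin{verbatim}
class LocalVolume:
    """Sketch of one local volume: a fixed-size bucket array plus an overflow
    heap with a pointer to the next free cell (hypothetical names)."""

    def __init__(self, n_buckets: int, heap_size: int):
        self.buckets = [None] * n_buckets        # direct cells: (key, value) or None
        self.heap = [None] * heap_size           # overflow cells: (key, value, next)
        self.next_free = 0                       # pointer to the next free overflow cell
        self.overflow_head = [None] * n_buckets  # per-bucket head of the overflow list

    def insert(self, key: int, value: int) -> None:
        b = hash(key) % len(self.buckets)
        if self.buckets[b] is None:              # no collision: one direct store
            self.buckets[b] = (key, value)
            return
        cell = self.next_free                    # collision: claim an overflow cell
        self.next_free += 1                      # (atomic CAS/Fetch-And-Op in the RMA code)
        self.heap[cell] = (key, value, self.overflow_head[b])
        self.overflow_head[b] = cell             # update the pointer to the last element

vol = LocalVolume(n_buckets=4, heap_size=16)
for k in range(8):
    vol.insert(k, k * k)
\end{verbatim}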
\goal{+ Describe the DHT fault-free-runs benchmark}
\textbf{Performance of Access Logging }
We now measure the relative performance
penalty of logging \textsc{put}s and \textsc{get}s. During the
benchmark, processes insert random elements with random keys. We focus
on inserts only, as they are fully representative of the logging
evaluation. To simulate realistic requests, every process waits
for a random time after every insert. The function that we use to
calculate this interval is based on the exponential probability
distribution: $f \delta e^{-\delta x}$, where $f$ is a scaling factor,
$\delta$ is a rate parameter, and $x \in [0;b)$ is a random number.
The selected parameter values ensure that
every process spends
$\approx$5-10\% of the total runtime on inserting elements. For many
computation-intensive applications this is already a high amount of
communication. We again compare
\smalltt{no-FT}, \smalltt{ML}, and two \textsc{ftRMA}{} scenarios: \smalltt{f-puts} (logging
only \textsc{put}s) and \smalltt{f-puts-gets} (logging \textsc{put}s
and \textsc{get}s). We fix $|CH| = 12.5\% |CM|$ and use the
\smalltt{nodes} t-awareness. We skip SCR as it
does not provide logging.
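The inter-insert pause described above can be sketched as follows (illustrative parameter values only; the values used in our runs are not reproduced here):
\begin{verbatim}
import math
import random

def wait_time(f: float, delta: float, b: float) -> float:
    """Inter-insert pause: f * delta * exp(-delta * x) with x uniform in [0, b)."""
    x = random.uniform(0.0, b)
    return f * delta * math.exp(-delta * x)

# Illustrative parameter values only; in the benchmark they are chosen so that
# every process spends roughly 5-10% of the total runtime on inserts.
pauses = [wait_time(f=1.0, delta=2.0, b=1.0) for _ in range(5)]
print(pauses)
\end{verbatim}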
\goal{+ Describe the benchmark results}
We present the results in Figure~\ref{fig:hashFFrun}. For $N=256$, the logging overhead over the baseline (\smalltt{no-FT}) is: $\approx$12\% (\smalltt{f-puts}), 33\% (\smalltt{f-puts-gets}), and 40\% (\smalltt{ML}). The overhead of logging \textsc{put}s in \smalltt{f-puts} is due to the fact that every operation is recorded directly after it is issued. Traditional message passing protocols suffer from a similar effect~\cite{Elnozahy:2002:SRP:568522.568525}. The overhead generated by logging \textsc{get}s in \smalltt{f-puts-gets} and \smalltt{ML} is more significant because, due to RMA's one-sided semantics, every \textsc{get} has to be recorded \emph{remotely}.
In addition, \smalltt{f-puts-gets} suffers from synchronization overheads (caused by concurrent accesses to $LG$), while \smalltt{ML} suffers from protocol-specific inter-process communication.
The discussed overheads depend heavily on the application type. Our key-value store constitutes a worst-case scenario
because it does not allow for long epochs that could enable, e.g., sending the logs of multiple \textsc{get}s in bulk.
The performance penalties would be smaller in applications that
overlap computation with communication and use nonblocking \textsc{get}s.
\section{Related Work}
\label{sec:relatedWork}
\goal{Introduce the section and imply there's no research on resilience in RMA}
In this section we discuss existing checkpointing and
logging schemes (see Figure~\ref{fig:overview}). For excellent surveys, see
\cite{Elnozahy:2002:SRP:568522.568525,
Alvisi:1998:MLP:630821.631222,
vasavada2011comparing}. Existing work on fault tolerance in RMA/PGAS is scarce;
an example scheme that uses PGAS for data replication can be found in~\cite{5738978}.
\subsection{Checkpointing Protocols}
\goal{+ Say how we divide checkpointing schemes}
These schemes are traditionally divided into \emph{uncoordinated}, \emph{coordinated}, and \emph{communication-induced}, depending on the scale of process coordination~\cite{Elnozahy:2002:SRP:568522.568525}. There are also \emph{complete} and \emph{incremental} protocols, which differ in checkpoint sizes~\cite{vasavada2011comparing}.
\textbf{Uncoordinated Schemes }
Uncoordinated schemes do not synchronize while checkpointing, but may suffer from the \emph{domino effect} or from complex recoveries~\cite{Elnozahy:2002:SRP:568522.568525}. Example protocols are based on \emph{dependency}~\cite{Bhargava25775} or \emph{checkpoint graphs}~\cite{Elnozahy:2002:SRP:568522.568525}. A recent scheme targeting large-scale systems is Ken~\cite{Yoo:2012:CRA:2342821.2342824}.
\textbf{Coordinated Schemes }
Here, processes synchronize to produce consistent global checkpoints. There is no domino effect and recovery is simple but synchronization may incur severe overheads. Coordinated schemes can be \emph{blocking}~\cite{Elnozahy:2002:SRP:568522.568525} or \emph{non-blocking}~\cite{Chandy:1985:DSD:214451.214456}. There are also schemes based on \emph{loosely synchronized clocks}~\cite{Tong:1992:RRD:628900.629082} and \emph{minimal coordination}~\cite{1702129}.
\textbf{Communication Induced Schemes }
Here, senders add scheme-specific data to application messages that receivers use to, e.g., avoid taking useless checkpoints. These schemes can be \emph{index-based}~\cite{632814} or \emph{model-based}~\cite{Elnozahy:2002:SRP:568522.568525,342127}.
\textbf{Incremental Checkpointing }
An incremental checkpoint updates only the data that changed since the previous checkpoint. These protocols are divided into page-based~\cite{vasavada2011comparing} and hash-based~\cite{Agarwal:2004:AIC:1006209.1006248}. They can reside at the level of an \emph{application}, a \emph{library}, an \emph{OS}, or \emph{hardware}~\cite{vasavada2011comparing}. Other schemes can be \emph{compiler-enhanced}~\cite{Bronevetsky:2008:CIC:1345206.1345253} or \emph{adaptive}~\cite{Agarwal:2004:AIC:1006209.1006248}.
\textbf{Others }
Recently, \emph{multi-level} checkpointing was introduced~\cite{Moody:2010:DME:1884643.1884666,Bautista-Gomez:2011:FHP:2063384.2063427,Sato:2012:DMN:2388996.2389022}. \emph{Adaptive} checkpointing based on failure prediction is discussed in~\cite{li2007using}. A study on checkpointing targeted specifically at GPU-based computations can be found in~\cite{6012895}. Diskless checkpointing is presented in~\cite{730527}. Other interesting schemes are based on Reed-Solomon coding~\cite{Bautista-Gomez:2011:FHP:2063384.2063427}, cutoff and compression to reduce checkpoint sizes~\cite{6264674}, checkpointing on clouds~\cite{Nicolae:2011:BEC:2063384.2063429}, reducing I/O bottlenecks~\cite{scc}, and performant checkpoints to the PFS~\cite{Arteaga:2011:TSA:2060102.2060540}.
\subsection{Logging Protocols}
\label{sec:loggingProtocols}
\goal{+ Say how we divide logging schemes}
Logging enables restored processes to replay their execution beyond the most recent checkpoint. Log-based protocols are traditionally categorized as \emph{pessimistic}, \emph{optimistic}, or \emph{causal}~\cite{Elnozahy:2002:SRP:568522.568525}; they can also be \emph{sender-based}~\cite{Riesen:2012:ASI:2388996.2389021,6012907} or \emph{receiver-based}~\cite{Elnozahy:2002:SRP:568522.568525}, depending on which side logs messages.
\textbf{Pessimistic Schemes }
Such protocols log events before they influence the system. This ensures no orphan processes and simpler recovery, but may incur severe overheads during fault-free runs. An example protocol is V-MPICH~\cite{1592865}.
\textbf{Optimistic Schemes }
Here, processes postpone logging messages to achieve, e.g., better computation-communication overlap. However, the algorithms for recovery are usually more complicated and crashed processes may become orphans~\cite{Elnozahy:2002:SRP:568522.568525}. A recent scheme can be found in~\cite{Riesen:2012:ASI:2388996.2389021}.
\textbf{Causal Schemes }
In such schemes processes log and exchange (by piggybacking to messages) dependencies needed for recovery. This ensures no orphans but may reduce bandwidth~\cite{Elnozahy:2002:SRP:568522.568525}. An example protocol is discussed in~\cite{142678}.
\subsection{Other Important Studies \& Discussion}
\goal{+ Describe other related research}
The derivation of an optimum checkpointing interval is presented in~\cite{Daly:2006:HOE:1134241.1134248}. Formalizations targeting resilience can be found in~\cite{342127,Elnozahy:2002:SRP:568522.568525}. Power consumption was addressed in~\cite{Sardashti:2012:ULN:2304576.2304587,Chung:2012:CDS:2388996.2389075}. \emph{Containment domains} for encapsulating failures within a hierarchical scope are discussed in~\cite{Chung:2012:CDS:2388996.2389075}. Modeling and prediction of failures is addressed in~\cite{Bautista-Gomez:2011:FHP:2063384.2063427, Chung:2012:CDS:2388996.2389075}. Work on send determinism in MP can be found in~\cite{6012907}.
\goal{+ Say why we differ \& are better than the above}
Our study goes beyond the existing research scope presented in this
section. First, we develop a fault tolerance model that covers virtually all of the rich RMA
semantics. Other existing formalizations (e.g.,~\cite{342127,Elnozahy:2002:SRP:568522.568525,
Alvisi:1998:MLP:630821.631222}) target MP only.
We then use the model to formally analyze why resilience for RMA
differs from MP and to design checkpointing, logging, and recovery
protocols for RMA. We identify and propose solutions to several
challenges in resilience for RMA that \emph{do not} exist in MP, e.g.:
consistency problems
caused by the relaxed RMA memory model (\cref{sec:taking_coordinated_ckp}, \cref{sec:taking_uncoordinated_ckp}, \cref{sec:transparentLogging}), access non-determinism (\cref{sec:managing_unc}), issues due to
one-sided RMA communication (\cref{sec:rma_vs_mp_ucc}), logging multiple RMA-specific orders (\cref{sec:logging_order_info}), etc.
Our model enables proving the correctness of the proposed schemes.
Extending our model to arbitrary hardware hierarchies generalizes the approach from~\cite{Bautista-Gomez:2011:FHP:2063384.2063427}
and enables formal reasoning about crashes of hardware elements and process distribution.
Finally, our protocol leverages and combines several important concepts and mechanisms (Daly's interval~\cite{Daly:2006:HOE:1134241.1134248}, multi-level design~\cite{Moody:2010:DME:1884643.1884666}, etc.) to improve the resilience of RMA systems even further and is the first implementation of holistic fault tolerance for RMA.
\section{Conclusion}
\goal{State RMA is becoming popular but there's no fault-tolerance for it}
RMA programming models are growing in popularity and importance as they allow for the best utilization of hardware features such as OS-bypass or zero-copy data transfer. Still, little work addresses fault tolerance for RMA.
\goal{Advertise our formal model and describe its broader applications}
We established, described, and explored a
complete formal model of fault tolerance for RMA and illustrated how to use it to design and reason about resilience protocols running on flat and hierarchical machines. It will play an important role in making emerging RMA programming fault tolerant and can be easily extended to cover, e.g., stable storage.
\goal{Describe our protocol/implementation and suggest its broader applications}
Our study does not resort to traditional, less scalable mechanisms that often rely on costly I/O flushes.
The implementation of our holistic protocol adds negligible overheads to the application runtime, for example 1-5\% for in-memory checkpointing and 8\% for fully transparent logging of remote memory accesses in the NAS 3D FFT code. Our probability study shows that the protocol offers high resilience. The idea of demand checkpoints will help alleviate the problem of limited memory in today's petascale and future exascale computing centers.
\goal{Describe broader potential behind the whole paper}
Finally, our work provides the basis for
further reasoning about fault-tolerance not only for RMA,
but also for all the other models that can be constructed upon it, such
as task-based programming models. This will play an important role in complex heterogeneous large-scale systems.
{
\vspace{0em}\section*{Acknowledgements}
We thank the CSCS team for granting access to the Monte Rosa machine and for their
excellent technical support. We thank Franck Cappello for inspiring remarks.
We thank Timo Schneider
for his immense help with computing infrastructure at SPCL.}
\bibliographystyle{abbrv}
\section{#1} \setcounter{equation}{0}}
\def\textup{div}{\textup{div}}
\def\mathcal{D}{\mathcal{D}}
\def\mathcal{C}{\mathcal{C}}
\def\mathbb{R}{\mathbb{R}}
\def\mathbb{I}{\mathbb{I}}
\def\mathcal{L}{\mathcal{L}}
\def\mathcal{M}{\mathcal{M}}
\def\mathcal{N}{\mathcal{N}}
\def\mathcal{F}{\mathcal{F}}
\def\mathcal{K}{\mathcal{K}}
\def\mathcal{A}{\mathcal{A}}
\def\mathcal{H}{\mathcal{H}}
\newcommand{\caps}[1]{\textup{\textsc{#1}}}
\providecommand{\bysame}{\makebox[3em]{\hrulefill}\thinspace}
\newcommand{\mathbb}{\mathbb}
\newcommand{\ov}[1]{\mbox{$\overline{#1}$}}
\newcommand{\upshape}{\upshape}
\newcommand{\upvec}[2]{\mbox{$#1^{1},\ldots,#1^{#2}$}}
\newcommand{\lovec}[2]{\mbox{$#1_{1},\ldots,#1_{#2}$}}
\providecommand{\xrighto}[1]{\mbox{$\;\xrightarrow{#1}\;$}}
\providecommand{\xleftto}{\mbox{$\xleftarrow$}}
\newcommand{\mbox{$\;\leftarrow\!\mapstochar\;$}}{\mbox{$\;\leftarrow\!\mapstochar\;$}}
\newcommand{\mbox{$\;\longleftarrow\!\mapstochar\;\,$}}{\mbox{$\;\longleftarrow\!\mapstochar\;\,$}}
\newcommand{\longrightarrow}{\longrightarrow}
\newcommand{\hookrightarrow}{\hookrightarrow}
\newcommand{\twoheadrightarrow}{\twoheadrightarrow}
\def\vv<#1>{\langle#1\rangle}
\def\ww<#1>{\langle\langle#1\rangle\rangle}
\newcommand{\mbox{$\textup{Tr}$}}{\mbox{$\textup{Tr}$}}
\newcommand{\diag}[1]{\mbox{$\textup{diag}(#1)$}}
\newcommand{\mbox{$\text{\up{mult}}$}}{\mbox{$\text{\upshape{mult}}$}}
\newcommand{\mbox{$\text{\up{inv}}$}}{\mbox{$\text{\upshape{inv}}$}}
\newcommand{\mbox{$\text{\up{im}}\,$}}{\mbox{$\text{\upshape{im}}\,$}}
\newcommand{\mbox{$\text{\up{id}}\,$}}{\mbox{$\text{\upshape{id}}\,$}}
\newcommand{\mbox{$\text{\up{pr}}$}}{\mbox{$\text{\upshape{pr}}$}}
\newcommand{\mbox{$\text{\up{coker}}\,$}}{\mbox{$\text{\upshape{coker}}\,$}}
\newcommand{\mbox{$\text{\up{ev}}$}}{\mbox{$\text{\upshape{ev}}$}}
\newcommand{\mbox{$\text{\up{rank}}\,$}}{\mbox{$\text{\upshape{rank}}\,$}}
\newcommand{\mbox{$\dim_{\bb{R}}$}}{\mbox{$\dim_{\mathbb{R}}$}}
\newcommand{\mbox{$\dim_{\bb{C}}$}}{\mbox{$\dim_{\mathbb{C}}$}}
\newcommand{\mbox{$\text{ddim}\,$}}{\mbox{$\text{ddim}\,$}}
\providecommand{\det}{\mbox{$\text{\upshape{det}}\,$}}
\providecommand{\codim}{\mbox{$\text{\upshape{codim}}\,$}}
\providecommand{\sign}{\mbox{$\text{\upshape{sign}}\,$}}
\providecommand{\vol}{\mbox{$\text{\upshape{vol}}$}}
\newcommand{\mbox{$\text{\up{Span}}$}}{\mbox{$\text{\upshape{Span}}$}}
\newcommand{\overline{\partial}}{\overline{\partial}}
\newcommand{\dd}[2]{\mbox{$\frac{\partial #2}{\partial #1}$}}
\newcommand{\mbox{$\dd{t}{}|_{0}$}}{\mbox{$\dd{t}{}|_{0}$}}
\providecommand{\del}{\partial}
\newcommand{\omega}{\omega}
\newcommand{\Omega}{\Omega}
\newcommand{\varphi}{\varphi}
\newcommand{\Varphi}{\Varphi}
\newcommand{\varepsilon}{\varepsilon}
\newcommand{\lambda}{\lambda}
\newcommand{\Lambda}{\Lambda}
\newcommand{\wt}[1]{\mbox{$\widetilde{#1}$}}
\newcommand{\dwt}[1]{\mbox{$\widetilde{\widetilde{#1}}$}}
\newcommand{\mbox{$\text{Alg}$}}{\mbox{$\text{Alg}$}}
\newcommand{\mbox{$\text{\up{Fred}}$}}{\mbox{$\text{\upshape{Fred}}$}}
\newcommand{\mbox{$\text{\up{Poly}}$}}{\mbox{$\text{\upshape{Poly}}$}}
\providecommand{\Hom}{\mbox{$\text{\upshape{Hom}}$}}
\newcommand{\mbox{$\bigsqcup$}}{\mbox{$\bigsqcup$}}
\newcommand{\by}[2]{\mbox{$\frac{#1}{#2}$}}
\newcommand{\mbox{$C^{\infty}$}}{\mbox{$C^{\infty}$}}
\providecommand{\set}[1]{\mbox{$\{#1\}$}}
\newcommand{\subseteq}{\subseteq}
\newcommand{\supseteq}{\supseteq}
\newcommand{\mbox{$\sum$}}{\mbox{$\sum$}}
\newcommand{\hcm}{\mbox{$H_{\textup{CM}}$}}
\newcommand{\hcml}{\mbox{$H_{\textup{CM}}^{(L)}$}}
\newcommand{\mbox{$H_{\textup{free}}$}}{\mbox{$H_{\textup{free}}$}}
\newcommand{\bra}[1]{\mbox{$\{#1\}$}}
\newcommand{\ham}[1]{\mbox{$\smash{\nabla^{\omega}_{#1}}$}}
\newcommand{\hamo}[1]{\mbox{$\smash{\nabla^{\omega_{0}}_{#1}}$}}
\newcommand{Cartan subalgebra\xspace}{Cartan subalgebra\xspace}
\newcommand{Cartan subgroup\xspace}{Cartan subgroup\xspace}
\newcommand{\lie}[1]{\mbox{$\text{\upshape{Lie}}(#1)$}}
\newcommand{\mathfrak{a}}{\mathfrak{a}}
\newcommand{\mathfrak{b}}{\mathfrak{b}}
\newcommand{\mathfrak{c}}{\mathfrak{c}}
\newcommand{\doo}{\mathfrak{d}}
\newcommand{\mathfrak{e}}{\mathfrak{e}}
\newcommand{\mathfrak{f}}{\mathfrak{f}}
\newcommand{\mathfrak{g}}{\mathfrak{g}}
\newcommand{\mathfrak{h}}{\mathfrak{h}}
\newcommand{\mathfrak{i}}{\mathfrak{i}}
\newcommand{\mathfrak{j}}{\mathfrak{j}}
\newcommand{\mathfrak{k}}{\mathfrak{k}}
\newcommand{\mathfrak{l}}{\mathfrak{l}}
\newcommand{\mathfrak{m}}{\mathfrak{m}}
\newcommand{\mathfrak{n}}{\mathfrak{n}}
\newcommand{\mathfrak{o}}{\mathfrak{o}}
\newcommand{\mathfrak{p}}{\mathfrak{p}}
\newcommand{\mathfrak{q}}{\mathfrak{q}}
\newcommand{\mathfrak{r}}{\mathfrak{r}}
\newcommand{\mathfrak{s}}{\mathfrak{s}}
\newcommand{\mathfrak{t}}{\mathfrak{t}}
\newcommand{\mathfrak{u}}{\mathfrak{u}}
\newcommand{\mathfrak{w}}{\mathfrak{w}}
\newcommand{\mbox{$\text{\up{conj}}$}}{\mbox{$\text{\upshape{conj}}$}}
\newcommand{\mbox{$\text{\upshape{Ad}}$}}{\mbox{$\text{\upshape{Ad}}$}}
\newcommand{\mbox{$\text{\upshape{ad}}$}}{\mbox{$\text{\upshape{ad}}$}}
\newcommand{\mbox{$\text{\upshape{Ad}}^{*}$}}{\mbox{$\text{\upshape{Ad}}^{*}$}}
\newcommand{\mbox{$\text{\upshape{ad}}^{*}$}}{\mbox{$\text{\upshape{ad}}^{*}$}}
\newcommand{\mbox{$\mathcal{O}$}}{\mbox{$\mathcal{O}$}}
\newcommand{\mbox{$\orb^{\bot}$}}{\mbox{$\mbox{$\mathcal{O}$}^{\bot}$}}
\newcommand{\mbox{$C^{\circ}$}}{\mbox{$C^{\circ}$}}
\newcommand{\mbox{$\Delta_{+}$}}{\mbox{$\Delta_{+}$}}
\newcommand{\mbox{$\textup{SL}$}}{\mbox{$\textup{SL}$}}
\newcommand{\mbox{$\mathfrak{sl}$}}{\mbox{$\mathfrak{sl}$}}
\newcommand{\mbox{$\textup{GL}$}}{\mbox{$\textup{GL}$}}
\newcommand{\mbox{$\mathfrak{gl}$}}{\mbox{$\mathfrak{gl}$}}
\newcommand{\mbox{$\textup{SU}$}}{\mbox{$\textup{SU}$}}
\newcommand{\mbox{$\mathfrak{su}$}}{\mbox{$\mathfrak{su}$}}
\newcommand{\mbox{$\textup{SO}$}}{\mbox{$\textup{SO}$}}
\newcommand{\mbox{$\mathfrak{so}$}}{\mbox{$\mathfrak{so}$}}
\newcommand{\mbox{$\textup{U}$}}{\mbox{$\textup{U}$}}
\newcommand{\mbox{$\mathcal{X}$}}{\mbox{$\mathcal{X}$}}
\newcommand{\mbox{$\,\delta_t$}}{\mbox{$\,\delta_t$}}
\newcommand{\mbox{$\,d_t$}}{\mbox{$\,d_t$}}
\newcommand{\cl}[1]{\mbox{$[#1]_{\mathfrak{g}^*}$}}
\newcommand{\bigcl}[1]{\mbox{$\Big[#1\Big]_{\mathfrak{g}_0^*}$}}
\newcommand{\mbox{$\textup{SVect}$}}{\mbox{$\textup{SVect}$}}
\newcommand{\mbox{$\textup{Vect}$}}{\mbox{$\textup{Vect}$}}
\newcommand{\todo}[1]{\vspace{5 mm}\par \noindent
\marginpar{\textsc{ToDo}}
\framebox{\begin{minipage}[c]{0.95\textwidth}\raggedright \tt #1 \end{minipage}}\vspace{5 mm}\par}
\newcommand{\revise}[1]{{\color{red} #1}}
\newcommand{\retwo}[1]{#1}
\title[Probabilistic representation of helicity in viscous fluids]{%
Probabilistic representation of helicity \\
in viscous fluids
}
\usepackage[foot]{amsaddr}
\author{Simon Hochgerner}
\address{\"Osterreichische Finanzmarktaufsicht (FMA),
Otto-Wagner Platz 5, A-1090 Vienna
}
\email{simon.hochgerner@fma.gv.at}
\begin{document}
\begin{abstract}
It is shown that the helicity of three-dimensional viscous incompressible flow can be
identified with the overall linking of the fluid's initial vorticity to the expectation of a stochastic mean field limit.
The relevant mean field limit is obtained by following the Lagrangian paths in the stochastic Hamiltonian interacting particle system of [S.\ Hochgerner, Proc.\ R.\ Soc.\ A
\textbf{474}:20180178].
\end{abstract}
\maketitle
\section{Introduction}
The evolution of the velocity field, $u=u(t,x)=u_t(x)$, of a three-dimensional incompressible fluid with constant mass density, $\rho=1$, is given by
\begin{equation}
\label{1e:NS}
\by{\del}{\del t}u
=
-\nabla_u u - \nabla p + \nu\Delta u,
\qquad
\textup{div}\,u = 0,
\qquad
u(0,.) = u_0
\end{equation}
where $t\in[0,T]$, $x\in \mathbb{R}^3$, $\nabla_u u = \vv<u,\nabla>u = \sum_{j=1}^3 u^j\del_j u$ is the covariant derivative in $\mathbb{R}^3$, $p = p(t,x)$ is the pressure determined by $\textup{div}\,u=0$, and the smooth divergence free vector field $u_0$ is an initial condition which decays sufficiently rapidly at infinity.
If $\nu>0$ then \eqref{1e:NS} is the incompressible Navier-Stokes equation, and if $\nu=0$ it is the incompressible Euler equation in $\mathbb{R}^3$.
The helicity of the fluid is defined by
\begin{equation}
\label{1e:hel}
\mathcal{H}_t
= \int_{\mathbb{R}^3} \vv<u_t,\textup{curl}\,u_t>\,dx
\end{equation}
where $dx$ is the Euclidean volume element in $\mathbb{R}^3$. Helicity is a topological quantity measuring the overall degree of linking and knotting of vortex lines (Moffatt et al.~\cite{M69,MR92,MT92}).
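As a numerical illustration (not used in the analysis below), the helicity integral can be approximated on a uniform grid; the following Python sketch uses finite differences and the ABC flow, for which $\textup{curl}\,u=u$ (all names are hypothetical):
\begin{verbatim}
import numpy as np

def helicity(u, dx):
    """Approximate H = integral of <u, curl u> dx for u sampled on a uniform
    grid; u has shape (3, n, n, n), dx is the grid spacing."""
    du = [np.gradient(u[i], dx, dx, dx) for i in range(3)]  # du[i][j] = d u_i / d x_j
    curl = np.stack([
        du[2][1] - du[1][2],   # d u_z/dy - d u_y/dz
        du[0][2] - du[2][0],   # d u_x/dz - d u_z/dx
        du[1][0] - du[0][1],   # d u_y/dx - d u_x/dy
    ])
    return np.sum(u * curl) * dx**3

# Example: the ABC flow with A = B = C = 1 satisfies curl u = u, so its helicity
# over a (2 pi)^3 box is 3 * (2 pi)^3, up to discretization error.
n = 32
dx = 2 * np.pi / n
g = np.arange(n) * dx
x, y, z = np.meshgrid(g, g, g, indexing="ij")
u = np.stack([np.sin(z) + np.cos(y),
              np.sin(x) + np.cos(z),
              np.sin(y) + np.cos(x)])
print(helicity(u, dx))
\end{verbatim}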
Let $u$ be a solution to the Euler equation ($\nu=0$) and consider the Lagrangian flow $g=g_t(x)$ generated by
\[
\by{\del}{\del t}g_t = u_t\circ g_t,
\qquad
g_0 = e
\]
where $e$ is the identity map in $\mathbb{R}^3$.
Then $g_t$ is a curve in the group of volume preserving diffeomorphisms $\textup{SDiff}(\mathbb{R}^3)$. Arnold~\cite{A66} has shown that the Euler equation has the structure of an infinite dimensional Hamiltonian system with configuration space $\textup{SDiff}(\mathbb{R}^3)$.
Moreover, the Hamiltonian function, $\int_{\mathbb{R}^3}\vv<u,u>\,dx\,/2$, is invariant under the relabeling symmetry, which is given by composition from the right in $\textup{SDiff}(\mathbb{R}^3)$.
Thus Noether's theorem applies, and yields
\begin{equation}\label{1e:cons}
u_t
= \mbox{$\text{\upshape{Ad}}$}(g_t^{-1})^{\top}u_0
\end{equation}
where the transpose adjoint action, $\mbox{$\text{\upshape{Ad}}$}(\cdot)^{\top}$, is defined as follows: For a vector field, $X$, let $PX = X - \nabla\Delta^{-1}\textup{div}\,X$ be the Leray-Hodge projection onto the divergence free part. Then $\mbox{$\text{\upshape{Ad}}$}(h)^{\top}v = P\cdot(Th)^{\top}\cdot(v\circ h)$ where $h\in\textup{SDiff}(\mathbb{R}^3)$, $(Th)^{\top}$ is the transpose matrix, and $v$ is a divergence free vector field.
In fact, $\mbox{$\text{\upshape{Ad}}$}(h)^{\top}$ is the transpose with respect to the $L^2$ inner product to the adjoint action (inverse vector field pullback) $\mbox{$\text{\upshape{Ad}}$}(h): v\mapsto Th\cdot(v\circ h^{-1})$.
We remark that $Th = (\del_i h^j)_{j,i}$ will throughout refer to differentiation in the space variable, while time differentiation will be denoted by $\frac{\del}{\del t}$ (ordinary), $\mbox{$\,\delta_t$}$ (Stratonovich), or $\mbox{$\,d_t$}$ (Ito calculus).
The transport equation~\eqref{1e:cons} and the identity $\textup{curl}\,\mbox{$\text{\upshape{Ad}}$}(g^{-1})^{\top} = \mbox{$\text{\upshape{Ad}}$}(g)\,\textup{curl}$ yield
\[
\mathcal{H}_t
= \int_{\mathbb{R}^3} \Big\langle
\mbox{$\text{\upshape{Ad}}$}(g_t^{-1})^{\top}u_0 , \textup{curl}\,\mbox{$\text{\upshape{Ad}}$}(g_t^{-1})^{\top}u_0
\Big\rangle \,dx
= \int_{\mathbb{R}^3} \Big\langle
u_0 , \mbox{$\text{\upshape{Ad}}$}(g_t^{-1})\,\textup{curl}\,\mbox{$\text{\upshape{Ad}}$}(g_t^{-1})^{\top}u_0
\Big\rangle \,dx
=
\mathcal{H}_0.
\]
Hence Euler flow conserves helicity (\cite{M61,M69}).
In the viscous case, $\nu>0$, helicity is generally not conserved. The precise mechanism whereby helicity changes under Navier-Stokes flow is subject to ongoing investigation (\cite{Scheeler,LRS15,Kerr15,Kerr17}).
This note studies helicity in the viscous case from the point of view of stochastic Hamiltonian interacting particle systems (SHIPS) as in \cite{H17,H18,H20}. These systems can be viewed as a stochastic perturbation (along Hamiltonian vector fields) of ideal fluid mechanics. Ideal fluid mechanics (Euler flow) preserves energy and helicity. Both are quadratic invariants, but they have different geometric origins. Energy conservation follows because the Hamiltonian coincides with the energy functional, but such a quantity is generally not preserved under stochastic Hamiltonian perturbations. On the other hand, helicity is a Casimir function (constant on coadjoint orbits), and Casimirs are preserved by stochastic Hamiltonian perturbations. Hence the SHIPS approach can be expected to possess a helicity type invariant.
In \cite{H18} it is shown that solutions of the incompressible Navier-Stokes equation can be obtained as the mean field limit of these interacting particle systems, and one may wonder how the helicity-preserving stochastic Hamiltonian construction gives rise to a flow with non-constant helicity.
The observation of this paper is that helicity in viscous fluids may be regarded as an average over cross-helicities of stochastically perturbed ideal flows. Moreover, the group structure in $\textup{SDiff}(\mathbb{R}^{3})$ allows one to identify this as the cross-helicity of the initial vorticity and an average over backward-forward transports of the initial velocity.
Concretely, and to describe the stochastic Hamiltonian equations in question, fix a (large) integer $N$ and consider, for $\alpha = 1,\dots,N$, the IPS
\begin{align}
\label{1e:syst}
(\mbox{$\,\delta_t$} g_t^{\alpha})\circ (g_t^{\alpha})^{-1}
&= \frac{1}{N}\sum_{\beta=1}^N u_t^{\beta} \mbox{$\,\delta_t$} t + \sqrt{2\nu}\mbox{$\,\delta_t$} W_t^{\alpha}
,\qquad g_0^{\alpha} = e
, \qquad
u_t^{\alpha}
= \mbox{$\text{\upshape{Ad}}$}\Big( (g_t^{\alpha})^{-1} \Big)^{\top} u_0
\end{align}
where $\mbox{$\,\delta_t$}$ denotes Stratonovich differentiation and $(W^{\alpha})$ is a sequence of $N$ mutually independent Brownian motions in $\mathbb{R}^3$.
The process $g_t^{\alpha}$ takes values in the group of volume preserving diffeomorphisms, $\textup{SDiff}(\mathbb{R}^3)$, and $e$ is the identity diffeomorphism.
The system~\eqref{1e:syst} is presented in Section~\ref{sec:ships} from the Lie-Poisson point of view.
The process $u_t^{\alpha}$ takes values (by construction) in the space of divergence free vector fields, $\mbox{$\textup{SVect}$}(\mathbb{R}^3)$. Further, it depends on $N$ and we can consider the mean field limit, $u_t^{\infty} = \lim_{N\to\infty}u_t^{\alpha}$, which is a limit in probability (\cite{Oel84,DV95,JW17}). In fact, since all particles are identical, it suffices to consider the limit for $\alpha=1$.
Theorem~\ref{thm:prel}, which is a summary of \cite{H18}, shows that a given solution, $u_t$, to the Navier-Stokes equation can be represented as $u_t = E[u_t^{\infty}] = \lim_{N\to\infty}\sum_{\alpha=1}^N u_t^{\alpha}/N$.
Therefore, and also by analogy to the ideal fluid case \eqref{1e:cons}, it makes sense to call $\mbox{$\text{\upshape{Ad}}$}( (g_t^{\alpha})^{-1} )^{\top} u_0$ the forward transport of $u_0$. That is, the initial condition is transported forward in time along the stochastic Lagrangian path $g_t^{\alpha}$.
The IPS~\eqref{1e:syst} arises from decomposing each infinitesimally small blob of fluid (at each $x\in \mathbb{R}^3$) into $N$ identical sub-blobs and insisting that the sub-blobs follow their common center of mass while each undergoes its own stochastic process. See Section~\ref{sec:2phys}.
Fix an index $\alpha$ and consider the fluid collection that is made up of all $\alpha$-sub-blobs. In Section~\ref{sec:hel-rep-ships} it is observed that helicity is indeed preserved along the corresponding stochastic flow~\eqref{1e:syst}, that is
\[
\int_{\mathbb{R}^3} \vv<u_t^{\alpha},\textup{curl}\,u_t^{\alpha}>\,dx
= \mathcal{H}_0
\]
for all $\alpha=1,\dots,N$.
This yields the representation
\begin{equation}
\tag{Theorem~\ref{thm:hel}}
\mathcal{H}_t
=
\lim_{N\to\infty}\sum_{1\le\alpha\neq\beta\le N}\int_{\mathbb{R}^3}
\Big\langle
\mbox{$\text{\upshape{Ad}}$}(g_t^{\alpha})^{\top} \mbox{$\text{\upshape{Ad}}$}((g_t^{\beta})^{-1})^{\top} \,u_0 ,
\textup{curl}\,u_0
\Big\rangle\,dx\, / N^2
\end{equation}
for the helicity~\eqref{1e:hel} of Navier-Stokes flow (see also Remark~\ref{rem:hel}).
Hence the initial velocity, $u_0$, is transported forward along a stochastic Lagrangian path, $g_t^{\beta}$, and then backwards along another path, $g_t^{\alpha}$.
The result, $\mathcal{H}_t$, is obtained by averaging over the $L^2$ inner products of the initial vorticity and all such backward-forward transports with $1\le\alpha\neq\beta\le N$, and letting $N$ tend to infinity.
Helicity at time $t$ is thus the average over all possible linkings of integral curves, corresponding to $\textup{curl}\,\mbox{$\text{\upshape{Ad}}$}(g_t^{\alpha})^{\top} \mbox{$\text{\upshape{Ad}}$}((g_t^{\beta})^{-1})^{\top} \,u_0$, and initial vortex lines, corresponding to $\textup{curl}\,u_0$. See also Section~\ref{sec:3phys}.
If we set $\nu=0$, the Lagrangian paths $g_t^{\alpha}$ are deterministic and coincide with each other, whence it follows that
$\mbox{$\text{\upshape{Ad}}$}(g_t^{\alpha})^{\top} \mbox{$\text{\upshape{Ad}}$}((g_t^{\beta})^{-1})^{\top} \,u_0 = u_0$ and
Theorem~\ref{thm:hel}
reduces to $\mathcal{H}_t = \int_{\mathbb{R}^3}\vv<u_0,\textup{curl}\,u_0>\,dx = \mathcal{H}_0$,
which is the conservation of helicity in the inviscid case.
Consider the SDE that follows from \eqref{1e:syst} in the mean field limit as $N\to\infty$. To obtain this limit we may fix $\alpha=1$ since all interacting particles (i.e., sub-blobs) are identical. The result is the stochastic mean field system
\begin{equation}
\label{1e:syst_mf}
(\mbox{$\,\delta_t$} g_t^{\infty})\circ(g_t^{\infty})^{-1}
= E[u_t^{\infty}]\mbox{$\,\delta_t$} t + \sqrt{2\nu}\mbox{$\,\delta_t$} W_t,
\qquad
g_0^{\infty} = e,
\qquad
u_t^{\infty} = \mbox{$\text{\upshape{Ad}}$}\Big( (g_t^{\infty})^{-1} \Big)^{\top}u_0
\end{equation}
for processes $g_t^{\infty}$ in $\textup{SDiff}(\mathbb{R}^3)$ and $u_t^{\infty}$ in $\mbox{$\textup{SVect}$}(\mathbb{R}^3)$, and where $W$ is Brownian motion in $\mathbb{R}^3$. It follows that $u_t = E[u_t^{\infty}]$ satisfies the Navier-Stokes equation (Theorem~\ref{thm:prel}).
The IPS approach in \cite{H17,H18,H20} is a Hamiltonian analogue of the Constantin and Iyer~\cite{CI05} representation of solutions to the Navier-Stokes equation via the stochastic Weber formula. In fact, \eqref{1e:syst_mf} is equivalent to \cite{H18} via the stochastic Noether theorem (cf.\ Theorem~\ref{thm:prel}), and coincides, up to notation, with the stochastic Weber formula of \cite[Theorem~2.2]{CI05}. In the present context the $\mbox{$\text{\upshape{Ad}}$}(\cdot)^{\top}$ notation is kept because the group theoretic formulation is helpful in the helicity calculations.
The system~\eqref{1e:syst_mf} leads to an expression for the mean field limit in
Theorem~\ref{thm:hel},
which is
\begin{equation}
\tag{Theorem~\ref{thm:hel-mf}}
\mathcal{H}_t
=
E\Big[
\int_{\mathbb{R}^3}\vv< \mbox{$\text{\upshape{Ad}}$}(h_t)^{\top}u_0, \,\textup{curl}\,u_0 >\, dx
\Big]
\end{equation}
where the process $h_t$ in $\textup{SDiff}(\mathbb{R}^3)$ is the solution to the SDE with random coefficients
\[
(\mbox{$\,\delta_t$} h_t)\circ h_t^{-1}
= \sqrt{2\nu}\, (Tg_t^{\infty})^{-1}\cdot \mbox{$\,\delta_t$}(B_t - W_t),
\qquad
h_0 = e
\]
and where $B$ is a Brownian motion in $\mathbb{R}^3$ which is independent of $W$.
Section~\ref{sec:3phys} contains a physical interpretation of these equations.
These results depend on the existence of the mean field limits under consideration. The existence of these limits is assumed, at least on a short time interval $[0,T]$, but is not proven in this paper. Constantin and Iyer~\cite{CI05} have shown short-time existence for \eqref{1e:syst_mf}.
\begin{comment}
\section{Notation and preliminaries}\label{sec:nota}
\subsection{Brownian motion in $\mathfrak{g}_0$}\label{sec:BMgu}
Let $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\in[0,T]},\mathbb{P})$ be a filtered probability space satisfying the usual assumptions as specified in \cite{Pro}. In the following, all stochastic processes shall be understood to be adapted to this filtration.
Let
\[
\mathbb{Z}_3^+
:= \set{k\in\mathbb{Z}_3: k_1>0
\textup{ or, for } i=2, 3,
k_1=\ldots=k_{i-1}=0, k_i>0
}.
\]
For $k\in\mathbb{Z}^+_3$ let $k_1^{\bot}, k_{2}^{\bot}$ denote a choice of pairwise orthogonal vectors in $\mathbb{R}^3$ such that $|k_i^{\bot}|=|k|$ and $\vv<k_i^{\bot},k>=0$ for $i=1,2$.
Consider the following system of vectors in $\mathfrak{g}_0$ (\cite{CM08,CS09}):
\begin{equation*}
A_{(k,i)} = \frac{1}{|k|^{s+1}}\cos\vv<k,x>k^{\bot}_i,\;
B_{(k,i)} = \frac{1}{|k|^{s+1}}\sin\vv<k,x>k^{\bot}_i,\;
A_{(0,j)} = e_j
\end{equation*}
where $e_j\in\mathbb{R}^3$ is the standard basis and $s$ is the Sobolev index from Section~\ref{sec:diffgps}.
By slight abuse of notation we identify these vectors with their corresponding right invariant vector fields on $\textup{Diff}({\mathbb{R}^3})_0$.
Further, in the context of the $X_r$ vectors we shall make use of the multi-index notation $r = (k,i,a)$ where $k\in\mathbb{Z}_3^+$ and $a=0,1,2$ such that
\begin{align*}
X_r &= A_{(0,i)}
\textup{ with } i=1,\ldots,3
\textup{ if } a=0\\
X_r &= A_{(k,i)}
\textup{ with } i=1,2
\textup{ if } a=1\\
X_r &= B_{(k,i)}
\textup{ with } i=1,2
\textup{ if } a=2
\end{align*}
Thus by a sum over $X_r$ we shall mean a sum over these multi-indices, and this notation will be used throughout the rest of the paper.
It can be shown (see \cite[Appendix]{CS09} for details) that the $X_r$ form an orthogonal system of basis vectors in $\mathfrak{g}_0$, such that
\begin{equation}\label{e:nablaXX}
\nabla_{X_r}X_r = 0
\end{equation}
and, for $X\in\mbox{$\textup{Vect}$}({\mathbb{R}^3})$,
\begin{equation}
\label{e:Delta}
\sum \nabla_{X_r}\nabla_{X_r}X = c^s\Delta X
\end{equation}
where $c^s = 1+\frac{2}{3}\sum_{k\in\mathbb{Z}_3^+}\frac{1}{|k|^{2s}}$ is a constant and $\Delta$ is the vector Laplacian.
\begin{proposition}[\cite{CM08,DZ}]\label{prop:bm
Let $W_t = \sum X_r W_t^p$, where $W_t^r$ are independent copies of Brownian motion in $\mathbb{R}$. Then $W$ defines (a version of) Brownian motion (i.e., cylindrical Wiener process) in $\mathfrak{g}_0$.
\end{proposition}
\end{comment}
\section{Stochastic Hamiltonian interacting particle system (SHIPS) and mean-field limit}\label{sec:ships}
The stochastic Hamiltonian approach presented in this section has been developed in \cite{H17,H18,H20}.
This approach is a Hamiltonian analogue of the mean field Weber formula theory of Constantin and Iyer~\cite{CI05}, and it is also related to Holm's variational principle for stochastic fluid mechanics (\cite{Holm15}).
Section~\ref{sec:vor} contains the mean field evolution equation for stochastic vorticity, which is a straightforward consequence of \cite{H18} but has not been presented in this form elsewhere. A brief explanation of the physical picture underlying the mean field approach is given in Section~\ref{sec:2phys}.
\subsection{Diffeomorphism groups}\label{sec:diffgps}
We fix $s>5/2$ and let $\textup{SDiff}({\mathbb{R}^3})$ denote the infinite dimensional $\mbox{$C^{\infty}$}$-manifold of volume preserving $H^s$-diffeomorphisms on ${\mathbb{R}^3}$.
This space is a topological group, but not a Lie group since left composition is only continuous but not smooth. Right composition is smooth.
Let
\[
\mathfrak{g} = \mbox{$\textup{SVect}$}({\mathbb{R}^3})
\]
be the space of divergence free vector fields on ${\mathbb{R}^3}$ of class $H^s$.
The tangent space of $\textup{SDiff}({\mathbb{R}^3})$ at the identity $e$ consists of divergence free and compactly supported vector fields, denoted by
\[
T_e \textup{SDiff}({\mathbb{R}^3})
= \mathfrak{g}_0
= \mbox{$\textup{SVect}$}({\mathbb{R}^3})_{\textup{cp}}.
\]
We use right multiplication $R^g: \textup{SDiff}({\mathbb{R}^3})\to \textup{SDiff}({\mathbb{R}^3})$, $k\mapsto k\circ g = kg$ to trivialize the tangent bundle $T\textup{SDiff}({\mathbb{R}^3})\cong \textup{SDiff}({\mathbb{R}^3})\times\mathfrak{g}_0$, $v_g\mapsto(g,(TR^g)^{-1} v_g)$.
The $L^2$ scalar product $\ww<.,.>$ on $\mathfrak{g}_0$ is defined by
\[
\ww<v,w>
= \int_{\mathbb{R}^3}\vv<v(x),w(x)>\, dx
\]
for $v,w\in\mathfrak{g}_0$, where $dx$ is the standard volume element in ${\mathbb{R}^3}$, and $\vv<.,.>$ is the Euclidean inner product.
Via $R^g$ this can be extended to a right invariant Riemannian metric on $\textup{SDiff}({\mathbb{R}^3})$.
See \cite{AK98,EM70,MEF,Michor06}.
\begin{comment}
\subsection{Derivatives}
The adjoint with respect to $\ww<.,.>$ to the Lie derivative $L$, given by $L_X Y = \nabla_X Y - \nabla_Y X$, is
\begin{equation}
L^{\top}_X Y
=
-\nabla_X Y - \textup{div}(X)Y - (\nabla^{\top}X)Y
\end{equation}
with
$
\nabla_X Y
= \vv<X,\nabla>Y
= \sum X^i\del_i Y^j e_j
$
and
$
(\nabla^{\top}X)Y
= \sum (\del_i X^j)Y^j e_i
$
with respect to the standard basis $e_i$, $i = 1,\dots,n$.
The notation $\mbox{$\text{\upshape{ad}}$}(X)Y = [X,Y] = -L_X Y$ and $\mbox{$\text{\upshape{ad}}$}(X)^{\bot} = -L^{\top}_X$ will be used.
\end{comment}
\subsection{Phase space}\label{sec:PS}
The configuration space of incompressible fluid mechanics on ${\mathbb{R}^3}$ is $\textup{SDiff}({\mathbb{R}^3})$.
The corresponding phase space is trivialized via right multiplication as
\[
T^*\textup{SDiff}({\mathbb{R}^3}) \cong \textup{SDiff}({\mathbb{R}^3}) \times \mathfrak{g}^*
\]
where $\mathfrak{g}^*$ is defined as $\mathfrak{g}^* = \Omega^1(\mathbb{R}^3)/d\mathcal{F}(\mathbb{R}^3)$.
Here $\Omega^k(\mathbb{R}^3)$ are $k$-forms (of class $H^s$), $\mathcal{F}(\mathbb{R}^3)$ are functions (of class $H^{s+1}$), and $\Omega^1(\mathbb{R}^3)/d\mathcal{F}(\mathbb{R}^3)$ is the space of equivalence classes modulo exact one-forms.
Elements in $\mathfrak{g}^*$ will thus be denoted by $\cl{\xi}$ where $\xi\in\Omega^1(\mathbb{R}^3)$ is a representative of the class in $\mathfrak{g}^* = \Omega^1(\mathbb{R}^3)/d\mathcal{F}(\mathbb{R}^3)$.
Let $\flat: \mbox{$\textup{Vect}$}(\mathbb{R}^3)\to\Omega^1(\mathbb{R}^3)$, $X\mapsto X^{\flat}$ be the metric (musical) isomorphism with inverse $\flat^{-1}=\sharp$. Let $P: \mbox{$\textup{Vect}$}(\mathbb{R}^3)\to\mbox{$\textup{SVect}$}(\mathbb{R}^3)$, $X\mapsto X-\nabla\Delta^{-1}\textup{div}(X)$ be the Hodge projection onto divergence free vector fields. Then we obtain an isomorphism $\mu: \mathfrak{g}\to\mathfrak{g}^*$, $X\mapsto\cl{X^{\flat}}$ with inverse $\mu^{-1}: \mathfrak{g}^*\to\mathfrak{g}$, $\cl{\xi}\mapsto P \xi^\sharp$.
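As a numerical aside, the projection $P$ can be realized spectrally on a periodic box; the following Python sketch (illustrative only, all names hypothetical) subtracts the gradient part in Fourier space:
\begin{verbatim}
import numpy as np

def leray_projection(u, L=2 * np.pi):
    """Spectral Leray-Hodge projection P u = u - grad Delta^{-1} div u on a
    periodic box of side L; u has shape (3, n, n, n)."""
    n = u.shape[1]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)       # wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kvec = np.stack([kx, ky, kz])
    k2 = (kvec ** 2).sum(axis=0)
    k2[0, 0, 0] = 1.0                                # avoid 0/0 at the mean mode
    uh = np.fft.fftn(u, axes=(1, 2, 3))
    div_h = (kvec * uh).sum(axis=0)                  # k . u_hat (the i's cancel below)
    uh -= kvec * (div_h / k2)                        # subtract the gradient part
    return np.real(np.fft.ifftn(uh, axes=(1, 2, 3)))
\end{verbatim}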
\begin{remark}\label{rem:no-iso}
The restriction of $\mu$ to $\mathfrak{g}_0$ does not induce an isomorphism of $T\textup{SDiff}(\mathbb{R}^3)\cong\textup{SDiff}(\mathbb{R}^3)\times\mathfrak{g}_0$ and $T^*\textup{SDiff}(\mathbb{R}^3)\cong \textup{SDiff}(\mathbb{R}^3) \times \mathfrak{g}^*$ since $\mu^{-1}(\cl{\xi})$ need not be compactly supported.
\end{remark}
\subsection{SHIPS}
In \cite{H18} a stochastic Hamiltonian interacting particle system is constructed which yields, in the mean field limit, the solution to the incompressible Navier-Stokes equation. Let $N$ be the number of interacting particles (or interacting blobs of fluid). The approach is
Hamiltonian and the relevant phase space is
\[
\mathcal{P}^N
=
\Big( \textup{SDiff}(\mathbb{R}^3)\times\mathfrak{g}^*\Big)^N.
\]
This space is equipped with the direct product symplectic structure obtained from the canonical symplectic form on each copy $\textup{SDiff}(\mathbb{R}^3)\times\mathfrak{g}^*$.
The direct product group $\textup{SDiff}(\mathbb{R}^3)^N$ acts on $\mathcal{P}^N$ through the product action and there is a corresponding momentum map
\begin{equation}
\label{e:momap}
J: \mathcal{P}^N\to(\mathfrak{g}^*)^N,\qquad
\Big(g^{\alpha},\cl{\xi^{\alpha}}\Big)_{\alpha=1}^N
\mapsto
\Big(
\mbox{$\text{\upshape{Ad}}$}(g^{\alpha})^*\cl{\xi^{\alpha}} \Big)_{\alpha=1}^N
\end{equation}
where the coadjoint representation is determined by
$\vv<\mbox{$\text{\upshape{Ad}}$}(g)^*\cl{\xi}, X> = \int_{\mathbb{R}^3}\vv<\xi,\mbox{$\text{\upshape{Ad}}$}(g)X>\,dx$
and $\mbox{$\text{\upshape{Ad}}$}(g)X = Tg\cdot(X\circ g^{-1}) = (g^{-1})^*X$
for $g\in\textup{SDiff}(\mathbb{R}^3)$, $\cl{\xi}\in\mathfrak{g}^*$ and $X\in\mathfrak{g}_0$.
That is,
\[
\mbox{$\text{\upshape{Ad}}$}\Big(g^{\alpha}\Big)^*\bigcl{\xi^{\alpha}}
= \bigcl{(g^{\alpha})^*\xi^{\alpha}}
= \bigcl{ (\xi^{\alpha}\circ g^{\alpha})\cdot Tg^{\alpha} }
.
\]
The infinitesimal adjoint representation is given by $\mbox{$\text{\upshape{ad}}$}(X).Y = [X,Y] = -L_X Y$ where $L_X Y = \nabla_X Y - \nabla_Y X$ is the Lie derivative. The corresponding coadjoint representation $\mbox{$\text{\upshape{ad}}$}(X)^*: \mathfrak{g}^*\to\mathfrak{g}^*$ is characterized by $\ww<\mbox{$\text{\upshape{ad}}$}(X)^*\cl{\xi},Y> = \ww<P\xi^{\sharp}, [X,Y]>$. Thus, $\mbox{$\text{\upshape{ad}}$}(X)^*\cl{\xi} = \cl{L_X\xi}$ where $L_X\xi$ is the Lie derivative of a one-form.
Let $e_1$, $e_2$, $e_3$ be the standard basis vectors in $\mathbb{R}^3$. In the following, the vectors $e_j$ will be viewed as constant vector fields on $\mathbb{R}^3$.
Let $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\in[0,T]},\mathbb{P})$ be a filtered probability space satisfying the usual assumptions as specified in \cite{Pro}. All stochastic processes shall be understood to be adapted to this filtration. For $\alpha=1,\dots,N$ consider a sequence of mutually independent Brownian motions $W^{\alpha} = \sum W^{j,\alpha}e_j$ in $\mathbb{R}^3$.
The Stratonovich differential will be denoted by $\mbox{$\,\delta_t$}$.
The equations of motion for a path $(g_t^{\alpha},\cl{\xi_t^{\alpha}})_{\alpha=1}^N$ in $\mathcal{P}^N$ are given by the system of Stratonovich SDEs (see \cite[Equ.~(2.11)-(2.12)]{H18}):
\begin{align}
\label{e:eom1}
\mbox{$\,\delta_t$} g_t^{\alpha}
&= TR^{g_t^{\alpha}}\Big(\frac{1}{N}\sum_{\beta=1}^N P(\xi_t^{\beta})^{\sharp} \mbox{$\,\delta_t$} t
+ \varepsilon\sum_{j=1}^3 e_j\mbox{$\,\delta_t$} W^{j,\alpha} \Big),
\qquad
g_0^{\alpha} = e \\
\label{e:eom2}
\mbox{$\,\delta_t$}\bigcl{\xi_t^{\alpha}}
&=
-\mbox{$\text{\upshape{ad}}$}\Big(\frac{1}{N}\sum_{\beta=1}^N P(\xi_t^{\beta})^{\sharp} \Big)^*\bigcl{\xi_t^{\alpha}} \mbox{$\,\delta_t$} t
- \varepsilon\sum_{j=1}^3\mbox{$\text{\upshape{ad}}$}\Big(e_j \Big)^*\bigcl{\xi_t^{\alpha}} \mbox{$\,\delta_t$} W^{j,\alpha},
\qquad
\xi_0^{\alpha} = u_0^{\flat}
\end{align}
where $\mbox{$\,\delta_t$}$ indicates Stratonovich differentiation, $\varepsilon>0$ is a constant, $e$ is the identity diffeomorphism and $u_0\in\mathfrak{g}$ is a smooth deterministic and divergence free vector field. This system depends on the empirical average $\frac{1}{N}\sum_{\beta=1}^N P(\xi_t^{\beta})^{\sharp}$ and is therefore an interacting particle system. In the following we assume that the mean field limit of this IPS exists such that the limit in probability,
$
\lim_{N\to\infty} \frac{1}{N}\sum_{\beta=1}^N P(\xi_t^{\beta})^{\sharp}
$,
is a deterministic time-dependent vector field and satisfies the desired initial condition.
Moreover, for each $\alpha$, the process $\cl{\xi_t^{\alpha}}$ converges, as $N\to\infty$, to a stochastic process $\cl{\xi_t}$. Since the particles (i.e., fluid blobs) are identical, it suffices to consider $\cl{\xi_t^1}$, that is $\cl{\xi_t} = \lim_{N\to\infty}\cl{\xi^{1}_t}$. It follows that
\begin{equation}
\lim_{N\to\infty} \frac{1}{N}\sum_{\beta=1}^N P(\xi_t^{\beta})^{\sharp}
= P\, E[\xi_t]^{\sharp}
=: u_t.
\end{equation}
\begin{remark}\label{rem:N}
The process $(g_t^{\alpha},\cl{\xi_t^{\alpha}})$ depends on $N$. It would thus be more precise to write
\[
(g_t^{\alpha,N},\cl{\xi_t^{\alpha,N}})
\]
such that $\cl{\xi_t} = \lim_{N\to\infty}\cl{\xi^{1,N}_t}$.
However, to make the notation more readable the superscript $N$ is omitted, but it is always tacitly implied. A solution to \eqref{e:eom1}-\eqref{e:eom2} will mean a strong solution on an interval $[0,T]$ independent of $N$ and such that the mean field limit exists.
See \cite{Oel84,AD95,DV95,JW17} for background on mean field SDEs.
\end{remark}
\begin{remark}\label{rem:weak-sp}
The canonical symplectic form on $\textup{SDiff}(\mathbb{R}^3)\times\mathfrak{g}^*$ is only weakly symplectic. This follows from Remark~\ref{rem:no-iso} and implies that the induced homomorphism $T(\textup{SDiff}(\mathbb{R}^3)\times\mathfrak{g}^*)\to T^*(\textup{SDiff}(\mathbb{R}^3)\times\mathfrak{g}^*)$ is only injective but not surjective. Hence the Hamiltonian vector field does not exist for all functions on $\textup{SDiff}(\mathbb{R}^3)\times\mathfrak{g}^*$.
In fact, equations~\eqref{e:eom1}-\eqref{e:eom2} do not arise from a Hamiltonian vector field since neither $\frac{1}{N}\sum_{\beta=1}^N P(\xi_t^{\beta})^{\sharp}$ nor $e_j$ are compactly supported.
Not even the initial condition, $u_0$, is assumed to have compact support.
In \cite{H18} this problem was circumvented by taking the torus (which is compact) as the fluid's domain.
Therefore, in the present context, the Hamiltonian approach can be used only as a guiding principle. This means that, if $(g_t^{\alpha},\cl{\xi_t^{\alpha}})$ is a solution to \eqref{e:eom1}-\eqref{e:eom2}, the (Hamiltonian) conclusion $J(g_t^{\alpha},\cl{\xi_t^{\alpha}}) = J(g_0^{\alpha},\cl{\xi_0^{\alpha}})$ has to be proved directly.
Equation~\eqref{e:eom2} is a stochastic Euler equation, and so
restricting $P(\xi_t^{\alpha})^{\sharp}$ to be of compact support does not seem to be reasonable since solutions of the (deterministic) Euler equation are generally not expected to be compactly supported (\cite{CLV19}).
\end{remark}
\begin{theorem}[\cite{H18}]\label{thm:prel}
Consider a solution $(g_t^{\alpha},\cl{\xi_t^{\alpha}})$ of \eqref{e:eom1}-\eqref{e:eom2}.
Then:
\begin{enumerate}
\item
The equations for the stochastic mean field limit of the interacting particle system \eqref{e:eom1}-\eqref{e:eom2} are
\begin{align}
\label{e:mf-eom1}
\mbox{$\,\delta_t$} g_t
&= TR^{g_t}\Big( u_t \mbox{$\,\delta_t$} t
+ \varepsilon\sum_{j=1}^3 e_j\mbox{$\,\delta_t$} W^j \Big),
\qquad
g_0 = e \\
\label{e:mf-eom2}
\mbox{$\,\delta_t$}\bigcl{\xi_t}
&=
-\mbox{$\text{\upshape{ad}}$}\Big( u_t\mbox{$\,\delta_t$} t + \varepsilon\sum_{j=1}^3 e_j \mbox{$\,\delta_t$} W_t^j \Big)^*\bigcl{\xi_t}
\qquad
\xi_0 = u_0^{\flat}
\end{align}
where $(g_t,\xi_t) = \lim_{N\to\infty}(g_t^1,\xi_t^1)$ and $u_t = E[P\xi_t^{\sharp}]$.
\item
$u_t
= \lim_{N\to\infty} \frac{1}{N}\sum_{\beta=1}^N P(\xi_t^{\beta})^{\sharp}
= E[P\xi_t^{\sharp}]$ solves the incompressible Navier-Stokes equation
\begin{equation}
\label{e:NS}
\by{\del}{\del t}u = -\nabla_u u - \nabla p + \nu\Delta u, \qquad \textup{div}\,u = 0
\end{equation}
where $p$ is the pressure and $\nu = \varepsilon^2/2$.
Conversely, if $u_t$ satisfies \eqref{e:NS} and $g_t$ is defined by \eqref{e:mf-eom1}, then $\cl{\xi_t} = \mbox{$\text{\upshape{Ad}}$}(g_t^{-1})^*\cl{u_0^{\flat}}$ is a solution to \eqref{e:mf-eom2} and $E[P\xi_t^{\sharp}] = u_t$.
\item
Assume $(g_t^{\alpha})$ satisfies \eqref{e:eom1}.
Then \eqref{e:eom2} holds if, and only if,
$\mbox{$\text{\upshape{Ad}}$}(g_t^{\alpha})^*\cl{\xi_t^{\alpha}}
= \cl{(g_t^{\alpha})^*\xi_t^{\alpha}} = \cl{u_0^{\flat}}$ for all $\alpha=1,\dots,N$.
In the limit, $N\to\infty$, this implies $\mbox{$\text{\upshape{Ad}}$}(g_t)^*\cl{\xi_t} = \cl{u_0^{\flat}}$.
\end{enumerate}
\end{theorem}
\begin{proof}
These assertions are shown in \cite{H18} where item~(3) follows because \eqref{e:eom1}-\eqref{e:eom2} constitute a right invariant Hamiltonian system whence the momentum map \eqref{e:momap} is constant along solutions. In the present context (Remark~\ref{rem:weak-sp}) item~(3) is shown directly:\\
For an arbitrary $k$-form $\sigma$ we have the identity
$\mbox{$\,\delta_t$} (g_t^{\alpha})^*\sigma = (g_t^{\alpha})^* L_{(TR^{g_t^{\alpha}})^{-1}\delta_t g_t^{\alpha}} \sigma$.
Now,
$\mbox{$\text{\upshape{Ad}}$}(g_t^{\alpha})^*\cl{\xi_t^{\alpha}}
= \cl{(g_t^{\alpha})^*\xi_t^{\alpha}} = \cl{u_0^{\flat}}$ for all $\alpha=1,\dots,N$ holds if, and only if,
\begin{align*}
\mbox{$\,\delta_t$} \int_{\mathbb{R}^3}
\vv< (g_t^{\alpha})^*\xi_t^{\alpha} , X >\, dx
&=
\int_{\mathbb{R}^3}
\vv<
(g_t^{\alpha})^* L_{(TR^{g_t^{\alpha}})^{-1}\delta_t g_t^{\alpha}} \xi_t^{\alpha} , X >\, dx
+
\int_{\mathbb{R}^3}
\vv< (g_t^{\alpha})^*\mbox{$\,\delta_t$}\xi_t^{\alpha} , X >\, dx
= 0
\end{align*}
for all $X\in\mathfrak{g}_0$.
Because of \eqref{e:eom1} the assertion follows.
\end{proof}
\subsection{Vorticity formulation}\label{sec:vor}
The system \eqref{e:eom1}-\eqref{e:eom2} is a stochastic version of ideal incompressible flow. Consequently, the corresponding vorticity may be expected to be transported along the stochastic flow. In this section it is shown that this is indeed the case. The vorticity, $\omega=\omega(\cl{\xi})$, associated to an element $\cl{\xi}\in\mathfrak{g}^*$ is defined as
\[
\omega = d\xi \in\mathcal{C}^2\subset\Omega^2(\mathbb{R}^3)
\]
where $\mathcal{C}^2$ denotes the space of closed two-forms.
If $X = \mu^{-1}\cl{\xi} = P\xi^{\sharp}$ and $*$ is the Hodge star operator, then we have $(*\omega)^{\sharp} = \nabla\times X$, which is the expression of the vorticity when considered as a vector field. We thus obtain an isomorphism, $\cl{\xi}\mapsto\omega(\cl{\xi})$, from $\mathfrak{g}^*$ to $\mathcal{C}^2$. The induced coadjoint action on $\mathcal{C}^2$ is given by pullback, that is $\mbox{$\text{\upshape{Ad}}$}(g)^*\omega = g^*\omega = (\omega\circ g)\cdot\Lambda^2 Tg$. The infinitesimal coadjoint action is given by the Lie derivative, $\mbox{$\text{\upshape{ad}}$}(X)^*\omega = L_X\omega$.
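For concreteness, and purely as a reminder of standard vector calculus (no additional assumptions are involved), the isomorphism can be spelled out in Cartesian coordinates:
\[
\xi = \xi_1\,dx^1+\xi_2\,dx^2+\xi_3\,dx^3
\quad\Longrightarrow\quad
\omega = d\xi = \sum_{i<j}\big(\del_i\xi_j-\del_j\xi_i\big)\,dx^i\wedge dx^j,
\qquad
(*\omega)^{\sharp} = \nabla\times\xi^{\sharp} .
\]
Since the gradient part of $\xi^{\sharp}$ does not contribute to the curl, this agrees with $\nabla\times X$ for $X = P\xi^{\sharp}$ as stated above.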
Let $\cl{\xi_t^{\alpha}}$, for $\alpha=1,\dots,N$, be a solution to \eqref{e:eom2}.
The Maurer-Cartan formula, $L_X = di_X+i_Xd$, then implies that the vorticity, $\omega^{\alpha}_t = d\xi^{\alpha}_t$, satisfies
\begin{align}
\label{e:vor1}
\mbox{$\,\delta_t$}\omega^{\alpha}
=
-\mbox{$\text{\upshape{ad}}$}\Big(u^{(N)}\,\delta_t t +\varepsilon\sum_{j=1}^3 e_j \mbox{$\,\delta_t$} W^{j,\alpha} \Big)^*\omega^{\alpha}
=
-L_{u^{(N)}}\omega^{\alpha}\mbox{$\,\delta_t$} t
-\varepsilon \sum_{j=1}^3 L_{e_j}\omega^{\alpha}\mbox{$\,\delta_t$} W^{j,\alpha}
\end{align}
where
\[
u^{(N)}
= \sum_{\beta=1}^N P(\xi^{\beta})^{\sharp}/N
= \sum_{\beta=1}^N BS \Big(( *\omega^{\beta} )^{\sharp}\Big)/N ,
\]
and $( *\omega^{\alpha} )^{\sharp}$ is the divergence free vector field associated to $\omega^{\alpha}$ and $BS$ is the Biot-Savart operator.
The latter is defined as
\begin{equation}
\label{e:BS}
BS(w)(x)
= \frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{w(y)\times(x-y)}{|x-y|^3}\,dy
\end{equation}
for $w\in\mathfrak{g}=\mbox{$\textup{SVect}$}(\mathbb{R}^3)$.
Closed two-forms and divergence free vector fields are in one-to-one correspondence via $\omega\mapsto(*\omega)^{\sharp}$.
This can be used to define $BS^*: \mathcal{C}^2\to\Omega^1(\mathbb{R}^3)$, $\omega\mapsto (BS((*\omega)^{\sharp}) )^{\flat}$, which satisfies $BS^* d \xi = \xi$.
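In vector calculus terms, the identity $BS^* d\xi = \xi$ reflects the classical properties of the Biot-Savart field; for smooth, divergence free $w$ with sufficient decay one has
\[
\textup{div}\, BS(w) = 0,
\qquad
\nabla\times BS(w) = w ,
\]
which we recall here only for the reader's convenience.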
Applying $BS^*$ to \eqref{e:vor1} yields equation~\eqref{e:eom2}. Hence these equations are equivalent, and the former is the vorticity formulation of the latter. Furthermore, the constancy of the momentum map~\eqref{e:momap}, i.e.\ Theorem~\ref{thm:prel}(3), along solutions implies that the vorticity is transported along the stochastic flow:
\begin{equation}
\label{e:vor-trn}
\mbox{$\text{\upshape{Ad}}$}(g_t^{\alpha})^*\omega_t^{\alpha} = \omega_0
\end{equation}
for all $\alpha=1,\dots,N$. Note that the initial conditions are assumed to be independent of $\alpha$, $\omega_0^{\alpha} = \omega_0 = du_0^{\flat}$ and $g_0^{\alpha}=e$.
Under the assumption that the mean field limit exists, we consider $\omega_t = \lim_{N\to\infty}\omega_t^1$. It follows that
\begin{align}
\label{e:vor-mf}
\mbox{$\,\delta_t$} \omega_t
&= -\mbox{$\text{\upshape{ad}}$}\Big(u_t \mbox{$\,\delta_t$} t + \varepsilon\sum_{j=1}^3e_j\mbox{$\,\delta_t$} W^j\Big)^*\omega_t\\
\notag
u_t
&=
\Big(BS^*\Big(E[ \omega_t]\Big)\Big)^{\sharp}
=
\lim_{N\to\infty} \Big(BS^*\Big(\sum_{\alpha=1}^N \omega_t^{\alpha}/N \Big)\Big)^{\sharp}
\end{align}
which is a mean field SDE because the drift depends on the expectation.
If $g_t = \lim g_t^1$ with $g_0=e$, this can be restated as $\omega_t = \mbox{$\text{\upshape{Ad}}$}(g_t^{-1})^*\omega_0$.
Moreover, since $\cl{BS^*\omega_t}$ satisfies \eqref{e:mf-eom2},
Theorem~\ref{thm:prel} implies that $u$ is a solution to the incompressible Navier-Stokes equation~\eqref{e:NS}.
\subsection{Physical interpretation of SHIPS}\label{sec:2phys}
The picture underlying the IPS \eqref{e:eom1}-\eqref{e:eom2} is that each infinitesimal blob of fluid is divided into $N$ identical sub-blobs. These sub-blobs interact to follow their common center of mass and, at the same time, each undergo their own Brownian motion. The barycentric component of the motion is due to the $\sum_{\beta=1}^NP(\xi_t^{\beta})^{\sharp}/N$ part of the equation, while the stochastic perturbation is encoded in $\varepsilon\sum e_j\mbox{$\,\delta_t$} W_t^{j,\alpha}$.
In \cite{H18} the equations \eqref{e:eom1}-\eqref{e:eom2} are given the structure of a stochastic Hamiltonian system. In fact, \cite{H18} treats the case where the fluid's domain is a torus and the perturbation vectors are given by a certain infinite sequence of divergence free vector fields. These are then interpreted as the velocities of molecules which impart their momenta on the sub-blobs. In the present context, because helicity is usually defined on simply-connected domains, the domain is chosen to be $\mathbb{R}^3$. Thus the stochastic perturbation, $\varepsilon\sum e_j\mbox{$\,\delta_t$} W_t^{j,\alpha}$, is interpreted as a model for the combined effect of molecules of different momenta hitting the sub-blob indexed by $\alpha$. Due to the non-compactness of $\mathbb{R}^3$, the Hamiltonian interpretation encounters structural difficulties (Remark~\ref{rem:weak-sp}). Nevertheless, the central (and only) conclusion from the Hamiltonian approach, namely that the momentum map \eqref{e:momap} is constant along solutions, still holds. In fact, Theorem~\ref{thm:prel}(3) shows that \eqref{e:eom2} is equivalent to the preservation of the momentum map.
Each infinitesimal element, $dx$, is subdivided into a partition of $N$ identical $dx^{\alpha}$, and the initial conditions in each $dx^{\alpha}$ are given by $(g_0^{\alpha}, \cl{\xi_0^{\alpha}}) = (e, \cl{u_0^{\flat}})$, independently of $\alpha$.
As this subdivision is carried out simultaneously for all infinitesimal elements $dx$ in the domain, this may also be seen as $N$ copies of the domain at each point in time. Thus for each $t\in[0,T]$ we have a system of $N$ interacting stochastic fluid states $(g_t^{\alpha},\cl{\xi_t^{\alpha}})$, and, as $N$ becomes large, their average velocity converges to the macroscopically observed state $u_t$, which is the solution to the Navier-Stokes equation~\eqref{e:NS}.
The construction is Hamiltonian (with the caveat mentioned in Remark~\ref{rem:weak-sp}) and therefore the state of each copy, $(g_t^{\alpha},\xi_t^{\alpha})$, evolves according to a stochastic Hamiltonian system. This does, in general, not mean that energy is conserved (and it is not for the case at hand as discussed in \cite{H18}). However, coadjoint orbits are conserved by stochastic Hamiltonian mechanics when the phase space is the dual of a Lie algebra. Via $d: \mathfrak{g}^*\to\mathcal{C}^2$ the coadjoint orbits in $\mathfrak{g}^*$ are isomorphic to sets of the form $\{\mbox{$\text{\upshape{Ad}}$}(g)^*\omega: g\in\textup{SDiff}(\mathbb{R}^3)\}$. Hence, for each $\alpha$, vorticity is transported along flow lines, i.e.\ \eqref{e:vor-trn} holds.
\section{Helicity representation}\label{sec:hel-rep}
Let $u = u(t,x)$, with $t\in[0,T]$ and $x\in\mathbb{R}^3$, be a solution to the incompressible Navier-Stokes equation.
The helicity at time $t\in[0,T]$ is
\begin{equation}
\label{e:hel}
\mathcal{H}_t
= \int_{\mathbb{R}^3}\vv<u_t,\textup{curl}\,u_t>\,dx
= \int_{\mathbb{R}^3} u_t^{\flat}\wedge\omega_t
= \int_{\mathbb{R}^3} BS^*(\omega_t)\wedge\omega_t
\end{equation}
where the vorticity, $\omega_t = du_t^{\flat}$, and Biot-Savart operator $BS^*$ have been defined in Section~\ref{sec:vor}.
The physical meaning of the following calculations is discussed in Section~\ref{sec:3phys}.
\subsection{SHIPS representation}\label{sec:hel-rep-ships}
Note that $d: \mathfrak{g}^*\to\mathcal{C}^2$ has the equivariance property $d\circ\mbox{$\text{\upshape{Ad}}$}(g)^*=\mbox{$\text{\upshape{Ad}}$}(g)^*\circ d$, for all $g\in\textup{SDiff}(\mathbb{R}^3)$.
Define the map
\[
I: \mathcal{C}^2\to\mathbb{R},
\qquad
\omega \mapsto \int_{\mathbb{R}^3} BS^*(\omega)\wedge\omega.
\]
It follows that $I(\mbox{$\text{\upshape{Ad}}$}(g)^*\omega) = I(g^*\omega) = I(\omega)$ for all $g\in\textup{SDiff}(\mathbb{R}^3)$.
Let $(g_t^{\alpha},\omega_t^{\alpha})_{\alpha=1}^N$ be a solution to the system \eqref{e:eom1}, \eqref{e:vor1}. Equation~\eqref{e:vor-trn}, which expresses the observation that the vorticity $\omega_t^{\alpha}$ is transported along the stochastic flow $g_t^{\alpha}$, implies that
\begin{align}
\sum_{\alpha=1}^N I(\mbox{$\text{\upshape{Ad}}$}(g_t^{\alpha})^*\omega_t^{\alpha})
=
\sum_{\alpha=1}^N I(\omega_0)
=
N\mathcal{H}_0.
\end{align}
Consider the mean field limit \eqref{e:vor-mf} of the system \eqref{e:eom1}, \eqref{e:vor1}. Hence the (time-dependent and deterministic) vector field $u$, defined in \eqref{e:vor-mf}, satisfies the incompressible Navier-Stokes equation.
Conversely, every solution $u$ can be represented in this manner (Theorem~\ref{thm:prel}).
Define
\begin{equation}
\label{e:lambda}
\lambda_t
= \lim_{N\to\infty}\sum_{1\le\alpha\neq\beta\le N}
\mbox{$\text{\upshape{Ad}}$}\Big(
(g_t^{\alpha})^{-1}g_t^{\beta}
\Big)^* \omega_0 / N^2
\end{equation}
where the limit is taken in probability.
\begin{theorem}\label{thm:hel}
The helicity \eqref{e:hel} of $u$ satisfies
\[
\mathcal{H}_t
=
\int_{\mathbb{R}^3} BS^*(\lambda_t)\wedge\omega_0.
\]
\end{theorem}
\begin{proof}
Indeed, the mean field vorticity formulation \eqref{e:vor-mf} yields
\begin{align*}
\mathcal{H}_t
&=
\int_{\mathbb{R}^3} BS^*(du_t^{\flat})\wedge du_t^{\flat}
=
\int_{\mathbb{R}^3} BS^*\Big(\lim_{N\to\infty}\sum_{\alpha}\omega_t^{\alpha}/N\Big)\wedge \lim_{N\to\infty}\sum_{\beta}\omega_t^{\beta}/N \\
&=
\lim_{N\to\infty}\Big(
\underbrace{\sum_{\alpha}\int_{\mathbb{R}^3} BS^*(\omega_t^{\alpha})\wedge \omega_t^{\alpha}/N^2}_{\mathcal{H}_0/N\longto0}
+
\sum_{\alpha\neq\beta}
\int_{\mathbb{R}^3} BS^*(\omega_t^{\alpha})\wedge \omega_t^{\beta}/N^2
\Big)\\
&=
\lim_{N\to\infty}
\sum_{\alpha\neq\beta}\int_{\mathbb{R}^3}
(g_t^{\beta})^*\,(BS^*(\omega_t^{\alpha})\wedge \omega_t^{\beta})/N^2\\
&=
\int_{\mathbb{R}^3} BS^*(\lambda_t)\wedge\omega_0
\end{align*}
where $\alpha$ and $\beta$ range from $1$ to $N$, and we have used the invariance property of $I$ together with \eqref{e:vor-trn}, which implies
\[
\Big(g_t^{\beta}\Big)^*\Big(BS^*(\omega_t^{\alpha})\wedge \omega_t^{\beta}\Big)
=
\Big((g_t^{\beta})^*BS^*(\omega_t^{\alpha})\Big)\wedge \Big(g_t^{\beta}\Big)^*\omega_t^{\beta}
=
\Big(\mbox{$\text{\upshape{Ad}}$}(g_t^{\beta})^*BS^*(\mbox{$\text{\upshape{Ad}}$}((g_t^{\alpha})^{-1})^*\omega_0)\Big)\wedge \omega_0
\]
and
$\int_{\mathbb{R}^3}(\mbox{$\text{\upshape{Ad}}$}(g_t^{\beta})^*BS^*(\mbox{$\text{\upshape{Ad}}$}((g_t^{\alpha})^{-1})^*\omega_0))\wedge \omega_0 = \int_{\mathbb{R}^3}(BS^*(\mbox{$\text{\upshape{Ad}}$}(g_t^{\beta})^*\mbox{$\text{\upshape{Ad}}$}((g_t^{\alpha})^{-1})^*\omega_0))\wedge \omega_0$;
the last equality holds because
$d( \mbox{$\text{\upshape{Ad}}$}(g_t^{\beta})^*BS^*\omega_t^{\alpha} - BS^*\mbox{$\text{\upshape{Ad}}$}(g_t^{\beta})^*\omega_t^{\alpha}) = 0$.
\end{proof}
\begin{remark}\label{rem:hel}
The initial vorticity, $\omega_0$, is related to the curl of the initial velocity, $u_0$, as $(*\omega_0)^{\sharp} = (*du_0^{\flat})^{\sharp} = \textup{curl}\,u_0$.
Further, we have
$\int_{\mathbb{R}^3}\vv<(*\omega_0)^{\sharp},u_0>\,dx
= \int_{\mathbb{R}^3} \omega_0\wedge u_0^{\flat}
= \int_{\mathbb{R}^3} \vv< \textup{curl}\,u_0, u_0>\,dx
$.
If $X$ is a divergence free vector field its pullback,
$h^*X = Th^{-1}\cdot(X\circ h) = \mbox{$\text{\upshape{Ad}}$}(h^{-1})X$, by $h\in\textup{SDiff}(\mathbb{R}^3)$ is again divergence free.
Hence Theorem~\ref{thm:hel} can be reformulated as
\begin{align*}
\mathcal{H}_t
&=
\int_{\mathbb{R}^3}\Big(BS(*\lambda_t)^{\sharp}\Big)^{\flat}\wedge du_0^{\flat}
=
\int_{\mathbb{R}^3}\Big\langle BS(*\lambda_t)^{\sharp}, \textup{curl}\,u_0\Big\rangle\,dx
=
\int_{\mathbb{R}^3}(**\lambda_t)\wedge u_0^{\flat}
\\
&=
\lim_{N\to\infty}\sum_{1\le\alpha\neq\beta\le N}\int_{\mathbb{R}^3}
\mbox{$\text{\upshape{Ad}}$}(g_t^{\beta})^* \mbox{$\text{\upshape{Ad}}$}((g_t^{\alpha})^{-1})^*\omega_0\wedge u_0^{\flat}
/ N^2 \\
&=
\lim_{N\to\infty}\sum_{1\le\alpha\neq\beta\le N}\int_{\mathbb{R}^3}
\Big\langle
\mbox{$\text{\upshape{Ad}}$}((g_t^{\beta})^{-1}) \mbox{$\text{\upshape{Ad}}$}(g_t^{\alpha})\,\textup{curl}\, u_0,
u_0
\Big\rangle\,dx
/ N^2 \\
&=
\lim_{N\to\infty}\sum_{1\le\alpha\neq\beta\le N}\int_{\mathbb{R}^3}
\Big\langle
\mbox{$\text{\upshape{Ad}}$}(g_t^{\alpha})^{\top} \mbox{$\text{\upshape{Ad}}$}((g_t^{\beta})^{-1})^{\top} \,u_0 ,
\textup{curl}\,u_0
\Big\rangle\,dx / N^2 .
\end{align*}
\end{remark}
\subsection{Mean field limit}\label{sec:hel-rep-mf}
To find the evolution equation characterizing the limit $\lambda_t$ defined in \eqref{e:lambda}, set $h_t^{\beta,\alpha} = (g_t^{\beta})^{-1} g_t^{\alpha}$ and
\begin{equation}
\eta_t^{\beta,\alpha}
= \mbox{$\text{\upshape{Ad}}$}\Big( (g_t^{\alpha})^{-1}g_t^{\beta} \Big)^*\omega_0
= \mbox{$\text{\upshape{Ad}}$}\Big( (h_t^{\beta,\alpha})^{-1}\Big)^*\omega_0.
\end{equation}
The product formula for Stratonovich equations implies that the process $h_t^{\beta,\alpha}$ in $\textup{SDiff}(\mathbb{R}^3)$ is the solution to
\begin{align}
\notag
\mbox{$\,\delta_t$} h_t^{\beta,\alpha}
&=
TR^{h_t^{\beta,\alpha}}\cdot
\Big(
-(Tg_t^{\beta})^{-1}
\underbrace{(\mbox{$\,\delta_t$} g_t^{\beta})(g_t^{\beta})^{-1}}_{\eqref{e:eom1}}
\underbrace{g_t^{\alpha}(h_t^{\beta,\alpha})^{-1}}_{g_t^{\beta}}
+
(Tg_t^{\beta})^{-1}
\underbrace{(\mbox{$\,\delta_t$} g_t^{\alpha})(g_t^{\alpha})^{-1}}_{\eqref{e:eom1}}
g_t^{\beta}
\Big) \\
\label{e:h^ab}
&=
\varepsilon \, TR^{h_t^{\beta,\alpha}}\cdot
\Big( \mbox{$\text{\upshape{Ad}}$}(g_t^{\beta})^{-1} \mbox{$\,\delta_t$} \hat{W}^{\alpha,\beta}
\Big) \\
\notag
h_0^{\beta,\alpha}
&= e
\end{align}
where $\hat{W}^{\alpha,\beta} = \sum e_j(W^{j,\alpha}-W^{j,\beta})$ is a difference of two independent Brownian motions. It is therefore, up to a multiplicative factor, again a Brownian motion with quadratic variation $[\hat{W}^{\alpha,\beta},\hat{W}^{\alpha,\beta}]_t = [W^{\alpha},W^{\alpha}]_t + [W^{\beta},W^{\beta}]_t = 2t$.
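For the reader's convenience, the quadratic variation stated above follows componentwise from the bilinearity of the bracket and the independence of $W^{\alpha}$ and $W^{\beta}$:
\begin{align*}
[W^{j,\alpha}-W^{j,\beta},\,W^{j,\alpha}-W^{j,\beta}]_t
&= [W^{j,\alpha},W^{j,\alpha}]_t - 2[W^{j,\alpha},W^{j,\beta}]_t + [W^{j,\beta},W^{j,\beta}]_t \\
&= t - 0 + t = 2t ,
\end{align*}
since the cross variation of independent Brownian motions vanishes.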
Equation~\eqref{e:h^ab} implies that $\eta_t^{\beta,\alpha}$ satisfies
\begin{align}
\label{e:etaSDE}
\mbox{$\,\delta_t$}\eta_t^{\beta,\alpha}
=
-\varepsilon\,
\mbox{$\text{\upshape{ad}}$}\Big( \mbox{$\text{\upshape{Ad}}$}(g_t^{\beta})^{-1} \mbox{$\,\delta_t$} \hat{W}^{\alpha,\beta}
\Big)^* \eta_t^{\beta,\alpha},
\qquad
\eta_0^{\beta,\alpha} = \omega_0.
\end{align}
Equations~\eqref{e:h^ab} and \eqref{e:etaSDE} are SDEs with random coefficients, since there is a dependence on realizations of $g_t^{\beta}$. But neither depend on $g_t^{\alpha}$ for $\alpha\neq\beta$.
Therefore, for all $N$ and all $\beta\le N$, the sequence $\eta_t^{\beta,\alpha}$ with $\alpha=1,\ldots,\hat{\beta},\ldots,N$ ($\beta$ omitted) is i.i.d., and we have that
\begin{equation}\label{e:iid}
\sum_{\alpha\neq\beta, \alpha=1}^N\eta_t^{\beta,\alpha}
\sim
(N-1)\eta_t^{\beta,\beta+1}
\end{equation}
where $\sim$ means equivalence in distribution. (For $\beta= N$ the expression $\eta^{\beta,\beta+1}$ does not make sense, thus one should write, e.g., $\sum_{\alpha=1}^{N-1}\eta_t^{N,\alpha} \sim (N-1)\eta_t^{N,N-1}$ for this case. However, below we will only need the case $\beta=1$ and so the inconsistency at $\beta=N$ will be ignored from now on.)
Recall from Remark~\ref{rem:N} that $g_t^{\beta}$ depends on $N$.
Let $N$ go to infinity and assume $g_t = \lim_{N\to\infty} g_t^{\beta}$ is the stochastic mean field limit (for an arbitrarily fixed $\beta$, e.g.\ $\beta=1$) of the IPS \eqref{e:eom1}, given by the mean field SDE~\eqref{e:mf-eom1}
and where the driving Brownian motion is denoted by $W$.
Equations~\eqref{e:h^ab} and \eqref{e:etaSDE} imply, respectively,
that $h_t = \lim_{N\to\infty}h_t^{\beta,\beta+1}$ satisfies
\begin{equation}
\label{e:h}
\mbox{$\,\delta_t$} h_t
=
\varepsilon \, TR^{h_t}\cdot
\Big( \mbox{$\text{\upshape{Ad}}$}(g_t)^{-1} \mbox{$\,\delta_t$} \hat{W}
\Big) ,
\qquad
h_0
= e
\end{equation}
and
that $\eta_t = \mbox{$\text{\upshape{Ad}}$}(h_t^{-1})^*\omega_0 = \lim_{N\to\infty}\eta_t^{\beta,\beta+1}$ satisfies
\begin{equation}
\label{e:etaSDE2}
\mbox{$\,\delta_t$} \eta_t
= -\varepsilon\, \mbox{$\text{\upshape{ad}}$}\Big(\mbox{$\text{\upshape{Ad}}$}(g_t)^{-1}\mbox{$\,\delta_t$} \hat{W}\Big)^*\eta_t,
\quad
\eta_0 = \omega_0
\end{equation}
where $\hat{W} = B-W$ and $B$ is a Brownian motion in $\mathbb{R}^3$ independent of $W$.
Now, to find the evolution equation for \eqref{e:lambda}, note that \eqref{e:iid} yields
\begin{align}
\label{e:lam}
\lambda_t
=
\lim_{N\to\infty}\sum_{\beta=1}^N\sum_{\alpha\neq\beta, \alpha=1}^N\eta_t^{\beta,\alpha}/N^2
=
\lim_{N\to\infty}\sum_{\beta=1}^N(N-1)\eta_t^{\beta,\beta+1}/N^2
=
\lim_{N\to\infty}\sum_{\beta=1}^N \eta_t^{\beta,\beta+1}/N
= E\Big[\eta_t\Big].
\end{align}
Here we use the propagation of chaos property of mean field limits (\cite{Oel84,JW17}) which implies independence of $g_t^{\beta}$ in the limit as $N\to\infty$, such that the $\eta_t^{\beta,\beta+1}$ are asymptotically i.i.d.
However, because \eqref{e:etaSDE2} is an SDE with random coefficients, it does not have the Markov property and one cannot expect the evolution of $\lambda_t = E[\eta_t]$ to be given by a deterministic PDE.
\begin{theorem}
\label{thm:hel-mf}
The helicity \eqref{e:hel} satisfies
\begin{align*}
\mathcal{H}_t
=
\int_{\mathbb{R}^3} BS^*(E[\eta_t])\wedge\omega_0
=
E\Big[\int_{\mathbb{R}^3} \Big(\mbox{$\text{\upshape{Ad}}$}(h_t^{-1})^*\omega_0\Big) \wedge u_0^{\flat}\Big]
=
E\Big[\int_{\mathbb{R}^3}
\vv< \mbox{$\text{\upshape{Ad}}$}(h_t)^{\top}u_0, \,\textup{curl}\,u_0>\,dx \Big].
\end{align*}
Moreover,
\begin{equation}
\by{\del}{\del t} E [\eta_t]
= \varepsilon^2 E\Big[
\mbox{$\text{\upshape{Ad}}$}(g_t)^*\Delta \mbox{$\text{\upshape{Ad}}$}(g_t^{-1})^*\eta_t
\Big]
\end{equation}
where $\Delta = (d+*d*)^2$ is the Laplacian.
\end{theorem}
\begin{proof}
The first part follows from \eqref{e:lam} and Theorem~\ref{thm:hel}, and because
$\int_{\mathbb{R}^3} (\mbox{$\text{\upshape{Ad}}$}(h_t^{-1})^*\omega_0) \wedge u_0^{\flat}
=
\int_{\mathbb{R}^3} *\,(\textup{curl}\,u_0)^{\flat} \wedge \mbox{$\text{\upshape{Ad}}$}(h_t)^* u_0^{\flat}
=
\int_{\mathbb{R}^3} \vv<\mbox{$\text{\upshape{Ad}}$}(h_t)\,\textup{curl}\,u_0 , u_0>\,dx
$.
For the second statement, it remains to transform \eqref{e:etaSDE2} into Ito form.
By definition of the Stratonovich integral (see \cite{Pro}) it follows that
\begin{align*}
\eta_t-\eta_0
&=
-\varepsilon\sum_{j=1}^3\int_0^t\mbox{$\text{\upshape{ad}}$}\Big(\mbox{$\text{\upshape{Ad}}$}(g_s)^{-1}e_j\Big)^*\eta_s\mbox{$\,\delta_t$}\hat{W}_s^{j}\\
&=
-\varepsilon\sum_{j=1}^3\int_0^t\mbox{$\text{\upshape{ad}}$}\Big(\mbox{$\text{\upshape{Ad}}$}(g_s)^{-1}e_j\Big)^*\eta_s\mbox{$\,d_t$}\hat{W}_s^{j}
-\frac{\varepsilon}{2}\sum_{j=1}^3\Big[
\mbox{$\text{\upshape{ad}}$}\Big(\mbox{$\text{\upshape{Ad}}$}(g_.)^{-1}e_j\Big)^*\eta_., \hat{W}_.^j
\Big]_t
\end{align*}
where $\mbox{$\,d_t$}$ indicates Ito differentiation and $[.,.]_t$ is the quadratic variation process.
The product formula for Stratonovich SDEs applied to equations~\eqref{e:eom1} and \eqref{e:etaSDE2} implies
\begin{equation}
\label{e:pf3}
\mbox{$\,\delta_t$}\Big(\mbox{$\text{\upshape{Ad}}$}(g_t^{-1})^*\eta_t\Big)
= -\mbox{$\text{\upshape{ad}}$}\Big(u\mbox{$\,\delta_t$} t + \varepsilon\mbox{$\,\delta_t$} B_t\Big)^*\mbox{$\text{\upshape{Ad}}$}(g_t^{-1})^*\eta_t
\end{equation}
whence
\begin{align*}
\mbox{$\,\delta_t$}\Big( \mbox{$\text{\upshape{ad}}$}(\mbox{$\text{\upshape{Ad}}$}(g_t^{-1})e_j)^*\eta_t \Big)
&=
\mbox{$\,\delta_t$}\Big( \mbox{$\text{\upshape{Ad}}$}(g_t)^*\mbox{$\text{\upshape{ad}}$}(e_j)^*\mbox{$\text{\upshape{Ad}}$}(g_t^{-1})^*\eta_t \Big)
\\
&=
\Big(\ldots\Big)\mbox{$\,\delta_t$} t
+
\varepsilon\sum_{k=1}^3\mbox{$\text{\upshape{Ad}}$}(g_t)^*\mbox{$\text{\upshape{ad}}$}(e_k)^*\mbox{$\text{\upshape{ad}}$}(e_j)^*\mbox{$\text{\upshape{Ad}}$}(g_t^{-1})^*\eta_t\mbox{$\,\delta_t$} W^k\\
&\phantom{==}
-
\varepsilon\sum_{l=1}^3\mbox{$\text{\upshape{Ad}}$}(g_t)^*\mbox{$\text{\upshape{ad}}$}(e_j)^*\mbox{$\text{\upshape{ad}}$}(e_l)^*\mbox{$\text{\upshape{Ad}}$}(g_t^{-1})^*\eta_t\mbox{$\,\delta_t$} B^l.
\end{align*}
Using \cite[Ch.~2,~Thm.~29]{Pro},
\begin{align*}
\Big[
\mbox{$\text{\upshape{ad}}$}\Big(\mbox{$\text{\upshape{Ad}}$}(g_.^{-1})e_j\Big)^*\eta_., \hat{W}_.^j
\Big]_t
&=
\varepsilon\sum_{k=1}^3\Big[
\int_0^.\mbox{$\text{\upshape{Ad}}$}(g_s)^*\mbox{$\text{\upshape{ad}}$}(e_k)^*\mbox{$\text{\upshape{ad}}$}(e_j)^*\mbox{$\text{\upshape{Ad}}$}(g_s^{-1})^*\eta_s\mbox{$\,d_t$} W_s^k, -W^j_.
\Big]_t\\
&\phantom{==}
-
\varepsilon\sum_{l=1}^3\Big[
\int_0^.\mbox{$\text{\upshape{Ad}}$}(g_s)^*\mbox{$\text{\upshape{ad}}$}(e_j)^*\mbox{$\text{\upshape{ad}}$}(e_l)^*\mbox{$\text{\upshape{Ad}}$}(g_s^{-1})^*\eta_s\mbox{$\,d_t$} B_s^l, B^j_.
\Big]_t \\
&=
-2\varepsilon \int_0^t
\mbox{$\text{\upshape{Ad}}$}(g_s)^*\mbox{$\text{\upshape{ad}}$}(e_j)^*\mbox{$\text{\upshape{ad}}$}(e_j)^*\mbox{$\text{\upshape{Ad}}$}(g_s^{-1})^*\eta_s\,ds.
\end{align*}
Therefore,
\begin{equation}
\label{e:eta-ito}
\mbox{$\,d_t$}\eta_t
=
\varepsilon^2\mbox{$\text{\upshape{Ad}}$}(g_t)^*\Delta\mbox{$\text{\upshape{Ad}}$}(g_t^{-1})^*\eta_t\mbox{$\,d_t$} t
-
\varepsilon\, \mbox{$\text{\upshape{ad}}$}\Big(\mbox{$\text{\upshape{Ad}}$}(g_t)^{-1}\mbox{$\,d_t$} \hat{W}_t\Big)^*\eta_t
\end{equation}
and the claim follows since
$E[\int_0^t \mbox{$\text{\upshape{ad}}$}(\mbox{$\text{\upshape{Ad}}$}(g_s)^{-1}\mbox{$\,d_t$} \hat{W}_s)^*\eta_s] = 0$.
\end{proof}
\begin{remark}
Equation~\eqref{e:pf3} has the same structure and initial condition as \eqref{e:vor-mf}. But the driving Brownian motions are different, thus these equations do not imply path-wise equality of $\mbox{$\text{\upshape{Ad}}$}(g_t^{-1})^*\eta_t$ and $\omega_t$.
\end{remark}
The transpose of the adjoint operator, $\mbox{$\text{\upshape{ad}}$}(\cdot)^{\top}: \mathfrak{g}\to\mathfrak{g}$, is characterized by $\ww<\mbox{$\text{\upshape{ad}}$}(X)^{\top}Y, Z> = \ww<Y, \mbox{$\text{\upshape{ad}}$}(X)Z>$ for $X,Y\in\mathfrak{g}$ and $Z\in\mathfrak{g}_0$, and given by
\[
\mbox{$\text{\upshape{ad}}$}(X)^{\top}Y
= P\Big(\nabla_X Y + (\nabla^{\top} X)Y \Big)
\]
where $(\nabla^{\top} X)Y = \sum (\del_i X^j)Y^j e_i$.
Consider now a solution, $\cl{\xi_t}$, to the mean field equation \eqref{e:mf-eom2} and let the corresponding vector field valued process be defined by $u_t^W = P\xi_t^{\sharp}$. Then $u_t^W$ satisfies
\[
\mbox{$\,\delta_t$} u_t^W
= -\mbox{$\text{\upshape{ad}}$}\Big( u_t \mbox{$\,\delta_t$} t + \varepsilon\mbox{$\,\delta_t$} W_t \Big)^{\top}u_t^W
= -P\Big(\nabla_{u_t}u_t^W + (\nabla^{\top}u_t)u_t^W\Big)\mbox{$\,\delta_t$} t
- \varepsilon\sum_{k=1}^3P \nabla_{e_k}u_t^W \mbox{$\,\delta_t$} W_t^k
\]
with $u_t = E[u_t^W]$ and where $W$ is the same Brownian motion as in \eqref{e:mf-eom1}. Consider furthermore $u_t^B$ defined by
\[
\mbox{$\,\delta_t$} u_t^B
= -\mbox{$\text{\upshape{ad}}$}\Big( u_t \mbox{$\,\delta_t$} t + \varepsilon\mbox{$\,\delta_t$} B_t \Big)^{\top}u_t^B
= -P\Big(\nabla_{u_t}u_t^B + (\nabla^{\top}u_t)u_t^B\Big)\mbox{$\,\delta_t$} t
- \varepsilon\sum_{k=1}^3P \nabla_{e_k}u_t^B \mbox{$\,\delta_t$} B_t^k
\]
with $u_t = E[u_t^B] = E[u_t^W]$ and where $B$ is the same as in \eqref{e:etaSDE2}.
Comparing equations~\eqref{e:vor-mf} and \eqref{e:pf3} then implies $\textup{curl}\,u_t^B = (*\mbox{$\text{\upshape{Ad}}$}(g_t^{-1})^*\eta_t)^{\sharp}$.
Since $\varepsilon^2 = 2\nu$ Theorem~\ref{thm:hel-mf} implies in particular that
\begin{align}
\label{e:inf}
\by{\del}{\del t}\mathcal{H}_t
&=
\varepsilon^2 E\Big[ \int_{\mathbb{R}^3}
\Big(\mbox{$\text{\upshape{Ad}}$}(g_t)^*\Delta \mbox{$\text{\upshape{Ad}}$}(g_t^{-1})^*\eta_t\Big)\wedge BS^* \omega_0
\Big]
=
\varepsilon^2 E\Big[ \int_{\mathbb{R}^3}
(\Delta \underbrace{\mbox{$\text{\upshape{Ad}}$}(g_t^{-1})^*\eta_t}_{*(\textup{curl}\,u_t^B)^{\flat}} )
\wedge
\underbrace{\mbox{$\text{\upshape{Ad}}$}(g_t^{-1})^*\xi_0}_{ (u_t^W)^{\flat} }
\Big] \\
\notag
&=
\varepsilon^2 E\Big[
\int_{\mathbb{R}^3}\vv<\textup{curl}\,\Delta\,u_t^B, u_t^W>\,dx
\Big]
=
\varepsilon^2
\int_{\mathbb{R}^3}\vv<\textup{curl}\,\Delta\, E[u_t^B], E[u_t^W]>\,dx
=
2\nu
\int_{\mathbb{R}^3}\vv<\textup{curl}\,\Delta\,u_t, u_t>\,dx
\end{align}
where we use $*\Delta = \Delta *$ and that $u^W$ and $u^B$ are independent.
This coincides, of course, with the result of using the Navier-Stokes equation to obtain $\by{\del}{\del t}\mathcal{H}_t$ from the definition~\eqref{e:hel}.
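For completeness, and under the usual assumptions that $u_t$ is smooth, divergence free and decays sufficiently fast at infinity, \eqref{e:inf} can be integrated by parts into the familiar form of the helicity dissipation law, using $\Delta u = -\textup{curl}\,\textup{curl}\,u$ for divergence free fields:
\[
\by{\del}{\del t}\mathcal{H}_t
= 2\nu\int_{\mathbb{R}^3}\vv<\Delta u_t,\, \textup{curl}\,u_t>\,dx
= -2\nu\int_{\mathbb{R}^3}\vv<\textup{curl}\,(\textup{curl}\,u_t),\, \textup{curl}\,u_t>\,dx .
\]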
\subsection{Physical interpretation}\label{sec:3phys}
The proof of Theorem~\ref{thm:hel} offers two representations for the helicity $\mathcal{H}_t$:
The first is
\begin{equation}
\label{e:3p1}
\mathcal{H}_t
= \lim_{N\to\infty}\sum_{1\le\alpha\neq\beta\le N}\int_{\mathbb{R}^3} BS^*(\omega_t^{\alpha})\wedge\omega_t^{\beta} / N^2 .
\end{equation}
Each integral of the form $\int_{\mathbb{R}^3} BS^*(\omega_t^{\alpha})\wedge\omega_t^{\beta}$ represents the overall linking of $(*\omega_t^{\alpha})^{\sharp}$-lines to $(*\omega_t^{\beta})^{\sharp}$-lines (\cite{M69}). Let us refer to this quantity as simply the linking of $\omega_t^{\alpha}$ and $\omega_t^{\beta}$.
Then \eqref{e:3p1} says that helicity can be interpreted as the average linking of $\omega_t^{\alpha}$ to $\omega_t^{\beta}$
(over all pairs $\alpha, \beta$ with $\alpha\neq\beta$).
Because each vorticity, $\omega_t^{\alpha}$, is transported along its own stochastic Lagrangian path, $g_t^{\alpha}$, the self-linking of $\omega_t^{\alpha}$ remains constant. This is a consequence of \eqref{e:vor-trn}, which implies that $\int_{\mathbb{R}^3} BS^*(\omega_t^{\alpha})\wedge\omega_t^{\alpha} = \mathcal{H}_0$ for all $\alpha$. However, as $\omega_t^{\alpha}$ and $\omega_t^{\beta}$ are driven by different Brownian motions for $\alpha\neq\beta$, the linking of their respective vortex lines can change as time progresses.
The second representation of helicity, compare also with Remark~\ref{rem:hel}, is
\begin{equation}
\label{e:3p2}
\mathcal{H}_t
= \int_{\mathbb{R}^3} BS^*(\lambda_t)\wedge\omega_0
=
\lim_{N\to\infty}\sum_{1\le\alpha\neq\beta\le N}\int_{\mathbb{R}^3}
\Big\langle
\mbox{$\text{\upshape{Ad}}$}(g_t^{\alpha})^{\top}\mbox{$\text{\upshape{Ad}}$}((g_t^{\beta})^{-1})^{\top}\,u_0 ,
\,\textup{curl}\,u_0
\Big\rangle\,dx \, / N^2.
\end{equation}
This arises because of the transport property~\eqref{e:vor-trn}. Indeed, the stochastic flow, $g_t^{\beta}$, lies in the group of volume preserving diffeomorphisms, and this allows one to pull back $\omega_t^{\alpha}$ along $g_t^{\beta}$. The result is \eqref{e:3p2}, which is now the average overall linking of the initial vorticity, $\omega_0$, to the vorticities associated to backward-forward transports of $u_0$.
The second interpretation leads to the mean field limit $\lambda_t = E[\eta_t]$.
This shows that $\mathcal{H}_t$ equals the expectation of the cross-helicity of the stochastically transported initial condition, $\mbox{$\text{\upshape{Ad}}$}(h_t)^{\top}u_0$, and $\textup{curl}\,u_0$.
\section{Introduction}
Hindi is the fourth most-spoken first language in the world\footnote{https://en.wikipedia.org/wiki/List\_of\_languages\newline\_by\_number\_of\_native\_speakers}. According to one estimate, nearly 0.615 billion people speak Hindi as their first language\footnote{https://blog.busuu.com/most-spoken-languages\-in\-the\-world/}. Of these, most of the speakers are in India. The second most spoken language in India is English\footnote{https://en.wikipedia.org/wiki/List\_of\_languages\newline\_by\_number\_of\_native\_speakers\_in\_India}. Hindi and English are the official languages of the Indian Commonwealth\footnote{https://en.wikipedia.org/wiki/Hindi}. A large number of these people have joined the Internet recently. As a matter of fact, Next Billion Users (NBU) is a term commonly used in tech and business circles to refer to the large number of people from India, Brazil, China and South-East Asia who joined the Internet in the last decade\footnote{https://www.blog.google/technology/next\-billion\-users/next\-billion\-users\-are\-future\-internet/}. This phenomenon is primarily attributed to ubiquitous, highly affordable phone and internet plans\footnote{https://www.hup.harvard.edu/catalog.php?isbn=9780674983786}. A large fraction of NBU users come from India and speak Hindi as either their first or second language. A large number of these people use a blend of Hindi and English in their daily informal communication. This hybrid language is also known as Hinglish\footnote{https://en.wikipedia.org/wiki/Hinglish}.
These users extensively use Internet platforms for User Generated Content (UGC) - social media platforms such as Facebook or Twitter; messaging platforms such as WhatsApp or Facebook Messenger; user review aggregators such as the Google Play Store or Amazon. A key characteristic of their behaviour on such platforms is their use of Hinglish. Thus, building any Natural Language Processing (NLP) based Internet applications for these users necessitates the ability to process this `new' language. Further, these UGC platforms are notoriously noisy, which means there is an additional challenge of non-canonical text. Therefore, a key step in building applications for such text data is \textit{text normalization}. Intuitively, it is the task of transforming text into a form where the written text is aligned with its normalized spoken form\cite{sproat2016rnn}. More formally, \textit{it is the task of mapping non-canonical language, typical of speech transcription and computer-mediated communication, to standardized writing}\cite{lusetti2018encoder}.
While there has been a lot of work separately in the two areas of normalization and of building corpora of Hindi-English code-mixed text data, not much has been done at the intersection of the two (refer to Section 2). To the best of our knowledge, there does not exist a corpus of Hindi-English code-mixed sentences for normalization where the normalizations are human annotated. This work is an effort to release such a corpus.
This work is motivated by our business use case, where we are building a conversational system over WhatsApp to screen candidates for blue-collar jobs. Our candidate user base often comes from tier-2 and tier-3 cities of India. Their responses to our conversational bot are mostly a code mix of Hindi and English coupled with non-canonical text (e.g., typos, non-standard syntactic constructions, spelling variations, phonetic substitutions, foreign language words in non-native script, grammatically incorrect text, colloquialisms, abbreviations, etc.). The raw text our system gets is far from clean, well-formatted text, and text normalization becomes a necessity before any further processing.
The main contributions of this work are two-fold, viz. (i) creating a human annotated corpus for text normalization of Hindi-English code mix sentences; and (ii) reporting baseline metrics on the corpus. Further, we release the corpus and annotations under a Creative Commons Attribution-NonCommercial-ShareAlike License\footnote{http://creativecommons.org/licenses/by-nc-sa/4.0/}.
\section{Related Work}
In this section, we present relevant work in the following areas viz.(1) Text Normalization (2) Normalization and UGC Datasets (3) Code-mixed Datasets, (4) Hindi-English Datasets.
\noindent \textbf{Text Normalization}: Text normalization, sometimes also called lexical normalization, is the task of translating/transforming a non-standard text to a standard format. Using text normalization on noisy data, one can provide cleaner text data to downstream NLP tasks and improve the overall system performance\cite{liu2012broad}\cite{satapathy2017phonetic}. Some of the early work used a rule-based spell-checker approach to generate a list of corrections for any misspelled word, ranked by corresponding posterior probabilities\cite{church1991probability}\cite{mays1991context}\cite{brill2000improved}. However, this approach did not factor in any context while normalizing words. \cite{choudhury2007investigation} used a Hidden Markov Model (HMM), where they modeled each standard English word as a HMM and calculated the probability of observing the noisy token. “Moses”, a well known Statistical Machine Translation (SMT) tool, provided significant improvements in comparison to previous solutions\cite{koehn2007moses}. \cite{aw2006phrase} adapted a phrase-based Machine Translation (MT) model for normalizing SMS and achieved significant gain in performance. In the past few years, Neural network based approaches for text normalization have become increasingly popular and have shown competitive performance in shared tasks\cite{chrupala2014normalizing}\cite{min2015ncsu_sas_wookhee}. \cite{lusetti2018encoder}, \cite{liu2012broad} and \cite{satapathy2017phonetic} provide excellent literature covering the landscape on this topic.
\noindent \textbf{Normalization and UGC Datasets}: \cite{han2011lexical} introduced a text normalization approach for twitter data using a variety of supervised \& unsupervised learning techniques. This study resulted in `lexNorm'\footnote{http://people.eng.unimelb.edu.au/tbaldwin/etc/lexnorm\_v1.2.tgz}, an open-source dataset containing 549 tweets. \cite{baldwin2015shared} subsequently released lexNorm15\footnote{https://github.com/noisy-text/noisy-text.github.io/blob/master/2015/files/lexnorm2015.tgz}. This new dataset contained 2950/1967 annotated tweets in train/test sets. \cite{michel2018mtnt} created the MTNT dataset\footnote{https://www.cs.cmu.edu/~pmichel1/mtnt/} containing translations of Reddit comments from the English language to French/ Japanese and vice versa, containing 7k$\sim$37K data points per language pair. This dataset contains user-generated text with different kinds of noise, e.g., typos, grammatical errors, emojis, spoken languages, etc. for two language pairs. \cite{van2017monoise} introduced ‘MoNoise’, a general purpose model for normalizing UGC text data. This model utilizes Aspell spell checker, an n-gram based language model and word embeddings trained on a few million tweets. It gave significant improvement in State-Of-The-Art (SOTA) normalization performance on the lexNorm15 dataset. \cite{muller2019enhancing} focused on enhancing BERT model on UGC by applying lexical normalization.
\noindent \textbf{Code-Mixed Datasets}: Since the launch of EMNLP Shared Tasks of Language identification in Code-Switched Data\footnote{http://emnlp2014.org/workshops/CodeSwitch/call.html}, there has been an increased focus on analyzing the nature of code-mixed data, language identification approaches and how to carry out NLP tasks like POS tagging and Text normalization on such text data. For the first shared task, code-switched data was collected for language pairs such as Spanish-English (ES-EN), Mandarin-English (MAN-EN), Nepali-English(NEP-EN) and Modern Standard Arabic - Dialectal Arabic(MSA-DA)\cite{solorio2014overview}. Subsequently more language pairs were added with primary focus on language identification task\cite{molina2019overview}. \cite{aguilar2019named} introduced Named Entity Recognition on Code-Switched Data. \cite{mandal2018preparing} introduced Bengali-English code-mixed corpus for sentiment analysis. More recently, normalization of code mixed data has been receiving a lot of attention. \cite{barik2019normalization} worked on normalizing Indonesian-English code-mixed noisy social media data. Further, they released 825 annotated tweets from this corpus\footnote{https://github.com/seelenbrecher/code-mixed-normalization/tree/master/data }. \cite{phadte2017towards} focused on normalization of Konkani-English code-mixed text data from social media.
\noindent \textbf{Hindi-English Datasets}: \cite{vyas2014pos} was one of the earliest works to focus on creating a Hindi-English code-mixed corpus from social media content for POS tagging. The same year \cite{bali2014borrowing} analyzed Facebook English-Hindi posts to show a significant amount of code-mixing. \cite{bhat2018universal} worked with similar English-Hindi code-mixed tweets in Roman script for dependency parsing. \cite{patra2018sentiment} worked on sentiment analysis of code-mixed Hindi-English \& Bengali-English language pairs. \cite{singh2018automatic} focused on normalization of code-mixed text using pipeline processing to improve the performance on the POS tagging task. indicnlp\_catalog\footnote{https://github.com/anoopkunchukuttan/indic\_nlp\_library} is an effort to consolidate resources on Indian languages. Table~\ref{table:hiEndatasets} presents the most relevant Hindi-English datasets from this effort.
\begin{table}
\centering
\begin{tabular}{p{0.55\columnwidth}|p{0.2\columnwidth}|p{0.2\columnwidth}}
\hline
\textbf{} \textbf{Dataset} & \textbf{Task} & \textbf{Size} \\
\hline
IITB English-Hindi Parallel Corpus \cite{anoop2018iit} & Machine Translation
& Train - 1,561,840 \newline Dev - 520 \newline Test - 2,507 \\
& & \\
HindiEnCorp 0.5 \cite{dhariya2017hybrid} & Machine Translation & 132,300 sentences \\
& & \\
Xlit-Crowd: Hindi-English Transliteration Corpus \cite{khapra2014transliteration} & Machine Translation & 14,919 words \\
& & \\
IIITH Codemixed Sentiment Dataset \cite{prabhu2016towards} & Sentiment Analysis & 4,981 sentences \\
\hline
\end{tabular}
\caption{indicnlp\_catalog Hindi-English Datasets}
\label{table:hiEndatasets}
\end{table}
While extensive work has been done in each of these areas, normalization of Hindi-English (which lies at the intersection of these areas) has not received its due attention. This may be partly due to the unavailability of a comprehensive dataset and baseline. We believe our work will address some of this gap.
\section{Corpus Preparation}
While preparing this corpus, we carry out the following steps.
\begin{enumerate}
\itemsep0em
\item \textbf{Data Collection}: collecting Hindi-English sentences.
\item \textbf{Data Filtering \& Cleaning}: standard pre-processing of raw sentences.
\item \textbf{Data Annotation}: sentence-level text normalization by human annotators.
\end{enumerate}
\subsection{Data Collection}
We collected data in two phases: In the first phase we built and deployed general chit-chat bots on social media platforms. User responses were randomly sampled and pooled to create the dataset.
In the second phase, we collected data from our platform. Here too the responses were chosen randomly to be added to the dataset.
\subsection{Data Filtering \& Cleaning}
The raw text data we collected was then preprocessed and cleaned. Following were the key steps:
\begin{enumerate}
\itemsep0em
\item Drop all messages that were forwarded messages or consisted of only emojis.
\item Hindi words were written in both scripts - Devanagari and Roman. All words in Devanagari were converted into roman script.
\item Removed all characters other than alpha-numeric characters.
\item All sentences containing profane words or phrases were dropped.
\item All sentences containing any Personal Identification Information (PII) were dropped.
\end{enumerate}
Steps (4) and (5) were done manually.
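To make the automated part of this pipeline concrete, a minimal illustrative sketch of steps (1)--(3) is given below. It is not the exact code used in production: \texttt{transliterate\_devanagari} is a hypothetical placeholder for whichever Devanagari-to-Roman transliteration tool is used, the forwarded-message check (which relies on platform metadata) is omitted, and the regular expressions are simplified.
\begin{verbatim}
import re

DEVANAGARI = re.compile(r'[\u0900-\u097F]')
NON_ALNUM  = re.compile(r'[^a-zA-Z0-9\s]')

def transliterate_devanagari(text):
    # Hypothetical placeholder: plug in the Devanagari-to-Roman
    # transliterator of choice; left as the identity in this sketch.
    return text

def is_emoji_only(text):
    # A message is treated as emoji-only if it contains no letters,
    # digits or Devanagari characters.
    return re.search(r'[0-9a-zA-Z\u0900-\u097F]', text) is None

def clean_message(text):
    if is_emoji_only(text):
        return None                            # step (1): drop emoji-only messages
    if DEVANAGARI.search(text):
        text = transliterate_devanagari(text)  # step (2): romanize Devanagari words
    text = NON_ALNUM.sub(' ', text)            # step (3): keep alpha-numeric characters
    return ' '.join(text.split())
\end{verbatim}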
\subsection{Data Annotation}
The preprocessed data was sent to human annotators for text normalization annotation. Each word in the input sentence was tagged for the type of non-canonical variation \& its phonetically standard transcription. The annotators chosen were native speakers of Hindi and had bilingual proficiency in English. The dataset was annotated by three annotators while maintaining an average inter-annotator agreement of 95\% on the dataset.
Based on the context of the words in the input sentence, annotators provide the corresponding normalized sentence. Further, to better capture the process used by the annotators to arrive at the normalized text, the annotators provide a unique \textit{tag} for each word. This tag describes the transformation applied by annotators to arrive at the corresponding normalized word. The corpus along with normalized text also contains these tags. Below we describe various tags used in the corpus, the scenario in which a given tag is used and explain the transformation applied with example(s):
\begin{enumerate}
\itemsep0em
\item \textbf{Looks Good}: The word under consideration is already an English word with proper spelling e.g. \textit{“yes”}, \textit{“hello”}, \textit{“friend”}
\item \textbf{Hindi}: The word is a Hindi word in Roman script. In case the spelling is incorrect, replace the word with the corresponding phonetically correct transcription. e.g. \textit{“haaan”} $\rightarrow$ \textit{“haan”}\footnote{Hindi word corresponding to \textit{“yes”} in English }, \textit{“namskar”} $\rightarrow$ \textit{“namaskaar”}\footnote{Hindi greeting corresponding to \textit{“hi”} in English}
\item \textbf{Merge}: A word is mistakenly split into two or more consecutive words by uncautious white spaces. e.g. \textit{“ye s”} $\rightarrow$ \textit{“yes”}, \textit{“hell oo”} $\rightarrow$ \textit{“hello”}, \textit{“fri en dd”} $\rightarrow$ \textit{“friend”}
\item \textbf{Split}: Two words get conjoined or when a user uses a contraction of two words. Split the words with correct spelling e.g. \textit{“yeshellofriend”} $\rightarrow$ \textit{“yes hello friend”}, \textit{“isn’t”} $\rightarrow$ \textit{“is not”}, \textit{“should’ve”} $\rightarrow$ \textit{“should have”}
\item \textbf{Short Form}: Word is a short form (phonetically or colloquially). Replace the word with the corresponding full word e.g. \textit{“u”} $\rightarrow$ \textit{“you”}, \textit{“y”} $\rightarrow$ \textit{“why”}, \textit{“doc”} $\rightarrow$ \textit{“doctor”}
\item \textbf{Acronym}: Word is an Acronym or Abbreviation. Replace with their full form e.g. \textit{“fb”} $\rightarrow$ \textit{“facebook”}, \textit{“brb”} $\rightarrow$ \textit{“be right back”}
\item \textbf{Typo}: The word is a typo (due to haste, fat-finger error\footnote{https://en.wikipedia.org/wiki/Fat-finger\_error} or low attention to details) while typing e.g \textit{“yass”} $\rightarrow$ \textit{“yes”}, \textit{“helllo00o”} $\rightarrow$ \textit{“hello”}, \textit{“frieendd”} $\rightarrow$ \textit{“friend”}
\item \textbf{Wordplay}: User has deliberately modified the word for creative purposes. Undo the creativity. e.g. \textit{“hiiiii”} $\rightarrow$ \textit{“hi”}, \textit{“you are punny”} $\rightarrow$ \textit{“you are funny”}, \textit{“pun in ten dead”} $\rightarrow$ \textit{“pun intended”}
\item \textbf{Profanity}: Word is profane e,g, \textit{“f**k”}, \textit{“ba*tard”}
\item \textbf{Unrecognizable or other language}: Default tag to catch any non-English/Hindi words or any other type of error/variation not covered by any of the previous tags, like gibberish. e.g.\textit{“\underline{asgjhsad} is my name”}, \textit{“\underline{n233s} is cool”}, \textit{“call \underline{maadi}”}\footnote{Slang that means \textit{“call me”}}
\end{enumerate}
\subsection{Corpus Examples}
To facilitate better understanding, we list out two examples from our dataset to illustrate the input text and corresponding normalized text along with annotations:
\begin{enumerate}
\itemsep0em
\item
\begin{itemize}
\itemsep0em
\item[] \textbf{id:} 5089
\item[] \textbf{inputText:} \textit{``pakka lisenc nahin hai but ll hai”}
\item[] \textbf{tags}: ['Hindi', 'Typo’, 'Hindi', 'Hindi', 'Looks Good', 'Acronym', 'Hindi']
\item[] \textbf{normalizedText:} \textit{“pakka license nahi hai but learning licence hai”}\footnote{Corresponding English translation: ``don't have a permanent license, but I have learning licence”}
\\
Here, the word `pakka' is a Hindi word that means `confirmed'. `lisenc' is a typo for `license' and refers to a driving license. `nahin' in Hindi means `no', and so on.
\\
\end{itemize}
\item
\begin{itemize}
\itemsep0em
\item[] \textbf{id}: 13427
\item[] \textbf{inputText}: \textit{“hiii mjhe jab chaiye”}
\item[] \textbf{tags}: ['Wordplay', 'Hindi', 'Typo', 'Hindi']
\item[] \textbf{normalizedText}: \textit{“hi mujhe job chaahie”}\footnote{Corresponding English translation: \textit{“hi, I want a job”}}
\\
Here, the word `hiii' is a wordplay for `hi', `mjhe' is a typo for the Hindi word `mujhe' which means `I', `jab' is a typo for `job', and `chaiye' is a typo for the Hindi word `chaahie' which means `want'.
\end{itemize}
\end{enumerate}
\section{Corpus Analysis}
\begin{table}[]
\centering
\begin{tabular}{l|c}
\hline
\textbf{} {\textbf{Attribute}} & \textbf{Value} \\
\hline
\# Datapoints & 13494 \\
\# Train & 10795 \\
\# Test & 2699 \\
\% Sentences Modified after Annotation & 80.08\% \\
\% Hindi-English Code-Mixing Sentences & 52.69\% \\
\% Non-English/Hindi words & 5.41\% \\
\% Hindi Words in Corpus & 41.48\% \\
Code-Mixing Index (CMI) \cite{das2014identifying} & 88.40 \\
\hline
\end{tabular}
\caption{Basic Statistics \textit{hinglishNorm} Corpus }
\label{table:basicStats}
\end{table}
After the preprocessing and manual annotation as described in Section 3, we refer to the data set obtained as \textit{hinglishNorm}. It contains 13494 sentence pairs. Table~\ref{table:basicStats} presents some basic statistics of \textit{hinglishNorm} corpus. Each data point in the corpus is a sentence pair consisting of an \textit{inputText} and \textit{normalizedText}. \textit{inputText} is the text as given by the user after preprocessing and \textit{normalizedText} is the corresponding human annotated text. Table~\ref{table:inputTtextVsnormalizedTextStats} gives corpus level statistics of \textit{inputText} and \textit{normalizedText}.
\begin{table}[]
\centering
\begin{tabular}{l|c|c}
\hline
\textbf{Features} & \textbf{\textit{inputText}} & \textbf{\textit{normalizedText}} \\
\hline
\# Sentence & 13494 & 13494 \\
\# Unique Sentences & 13066 & 12547 \\
\# Unique Words & 9326 & 7465 \\
\# Unique Characters & 37 & 37 \\
Most Common Sentence & “whats ur name” & “what is your name” \\
\# Most Common Sentence & 12 & 38 \\
Mean Character Length & 22.06 & 25.25 \\
Std Var of Character Length & 16.97 & 19.00 \\
Median Character Length & 18 & 21 \\
Mean Word Length & 4.96 & 5.13 \\
Std Var of Word Length & 3.53 & 3.66 \\
Median Word Length & 4 & 4 \\
\hline
\end{tabular}
\caption{Statistics for \textit{inputText} vs \textit{normalizedText}}
\label{table:inputTtextVsnormalizedTextStats}
\end{table}
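For reference, the Code-Mixing Index (CMI) reported in Table~\ref{table:basicStats} follows \cite{das2014identifying}. One common utterance-level formulation of this index (stated here up to normalization details, which may differ slightly from the exact variant used) is
\[
\mathrm{CMI} = 100\times\Big(1-\frac{\max_i w_i}{n-u}\Big), \qquad n>u,
\]
where $w_i$ is the number of tokens belonging to language $i$, $n$ is the total number of tokens, $u$ is the number of language-independent tokens, and $\mathrm{CMI}=0$ when all tokens are language independent.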
An important aspect of this corpus is that the \textit{normalizedText} can vary depending on the context of the sentence: based on the context, the same misspelled word might require different corrections. For example:
\begin{itemize}
\itemsep0em
\item \textit{“\underline{hii}, I have a bike” (inputText) $\rightarrow$ “\underline{hi}, I have a bike” (normalizedText)}
\begin{itemize}
\itemsep0em
\item Input text provided by the user is an English language sentence with misspelled \textit{“hi”}. Annotators understand that the word belongs to English language and correct spelling, in this case, should be \textit{“hi”}
\end{itemize}
\item \textit{“mere pass bike \underline{hii}” (inputText) $\rightarrow$ “mere pass bike \underline{hai}” (normalizedText)}
\begin{itemize}
\itemsep0em
\item Input text is a romanized version of a Hindi sentence that means \textit{“I have a bike”}. Annotators understand that the word belongs to Hindi language and correct spelling, in this case, should be \textit{“hai”}
\end{itemize}
\end{itemize}
\section{Benchmark Baseline}
It is common to model the text normalization problem as a Machine Translation problem\cite{mansfield2019neural}\cite{lusetti2018encoder}\cite{filip2006text}\cite{zhang2019neural}. We built a text normalization model using Bidirectional LSTM with attention on the lines of work by \cite{bahdanau2014neural}. We evaluated our system using well established metrics - Word-Error Rate (WER)\cite{niessen2000evaluation}, BiLingual Evaluation Understudy (BLEU)\cite{papineni2002bleu} and Metric for Evaluation of Translation with Explicit ORdering (METEOR)\cite{banerjee2005meteor}. Table~\ref{table:baselinePerformance} shows the results of our experiments over \textit{hinglishNorm}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|}
\hline
\textbf{} \textbf{Evaluation Metric} & \textbf{Baseline} \\
\hline
WER & 15.55 \\
BLEU & 71.21 \\
METEOR & 0.50 \\
\hline
\end{tabular}
\caption{Baseline Performance on \textit{hinglishNorm}}
\label{table:baselinePerformance}
\end{center}
\end{table}
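For reproducibility, a minimal sketch of a word-level WER computation of the kind reported in Table~\ref{table:baselinePerformance} is given below; it assumes whitespace tokenization and reports a percentage, while BLEU and METEOR are assumed to be computed with standard reference implementations.
\begin{verbatim}
def word_error_rate(reference, hypothesis):
    # WER = word-level Levenshtein distance / number of reference words (in %)
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(ref)][len(hyp)] / max(len(ref), 1)

# Illustration on the most frequent sentence pair of the corpus:
# word_error_rate("what is your name", "whats ur name") -> 75.0
\end{verbatim}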
\section{Conclusion \& Future Work}
We presented \textit{hinglishNorm} version 1.0, a corpus of Hindi-English code-mixed sentences for the text normalization task, thereby filling a much-needed gap. We also provided benchmark baseline results on this corpus.
In future, we plan to build stronger baselines like BERT\cite{devlin2018bert} and its variants such as DistilBERT\cite{sanh2019distilbert}, RoBERTa\cite{liu2019roberta}, etc.
\bibliographystyle{unsrt}
\section{Introduction}
First and second order methods have been widely used in industrial computational fluid dynamics (CFD) applications due to their simplicity and robustness. With the same computational cost, high-order methods are capable of generating solutions with higher accuracy and thus efficiency\cite{ZJWang2013}. In particular, high-order methods on unstructured meshes are more flexible for complex geometries than their counterparts on structured meshes, and the corresponding workload for mesh generation can be significantly reduced.
As a result, various high-order methods have been developed in recent decades, which roughly fall into two categories. The first one is the finite volume (FV) method which has good robustness and resolution of discontinuities in high-speed flows \cite{TJBarth1990,CHu1999}. However, the compactness is always a bottleneck. To overcome this problem, many compact high-order FV methods have been developed, \textcolor{black}{in which only face-neighboring cells are involved in reconstruction}, such as the variational reconstruction method \cite{QWang2017}, the explicit multi-step reconstruction method \cite{YZhang2019} and the subcell finite volume method (SCFV) \cite{JPan2017}. In the SCFV method, each element is partitioned into subcells. The reconstruction on each subcell can involve subcells of face-neighbor elements. The residual at each subcell only depends on the current element and its nearest face neighbor elements, thus achieving good compactness while preserving the advantages of a finite volume framework such as the strong robustness and good resolution of discontinuities. The accuracy and efficiency can also be improved in smooth flow regions due to the packed continuous reconstruction for sub cells, \textcolor{black}{in which the solution polynomial is continuous across sub cells.}
The second one includes discontinuous methods with internal degrees-of-freedom (IDOFs) inside each element, such as the Discontinuous Galerkin (DG) method \cite{BCockburn1998,HLuo2008,XZhang2010}. These methods are compact and can preserve high accuracy in smooth flow regions.
In 2007, Huynh developed a high-order method named flux reconstruction (FR) with the advantage of simplicity and efficiency. It solves the 1D conservation laws in differential form \cite{Huynh2007, Huynh2009}. Based on a set of solution points (SPs) as IDOFs, the method evaluates the derivative of the flux by constructing a discontinuous piece-wise flux polynomial. Then a flux correction is added to the discontinuous flux by removing the flux jumps at cell interfaces. The approach provides a unified framework for many existing high-order methods with appropriate choices of correction function, such as the DG, the spectral difference (SD) method \cite{YLiu2006,CLiang2009} and the spectral volume (SV) method \cite{ZJWang2002,ZJWang2004}. And it has been shown to be simpler and more efficient than the original versions of these methods. Compared to the modal DG method, the cost of the numerical integration can be avoided, and by choosing the SPs properly, the reconstruction can be very efficient. The FR method can be extended to quadrilateral mesh via tensor product directly. And it was first extended to triangular and mixed meshes by Wang and Gao, named lifting collocation penalty (LCP) \cite{ZJWang2009,HGao2009}. Extension to the 3D Euler
and Navier-Stokes (N-S) equations on mixed meshes can be found in Refs.\cite{THaga2010,THaga2011}. Due to the tight connection between FR and LCP, they have been renamed as the correction procedure via reconstruction (CPR). Detailed reviews of the CPR method can be found in Ref.\cite{HTHuynh2014}.
A challenging problem for high-order methods based on IDOFs is shock capturing, especially for strong shock waves, as the solution variables are always represented by continuous polynomials within each element. To enhance the resolution of flow discontinuities, a subcell finite volume (SCFV) limiting procedure has been proposed in the DG method and has shown good robustness and accuracy \cite{MDumbser2014,MDumbser2016}. This limiter is applied only to ``troubled cells'', with the regular DG method applied elsewhere. Thus the high-order accuracy of DG is preserved and the resolution of discontinuities in high-speed flow is enhanced.
The aforementioned methods mainly focus on the high-order discretization of space. However, high-order time evolution is also very important. Traditional high-order methods usually adopt Riemann solvers to compute the inviscid flux. For viscous flows, the viscous flux needs to be treated additionally. To achieve high-order time accuracy, multi-stage Runge-Kutta (R-K) methods are adopted for time integration.
Based on the mesoscopic Bhatnagar-Gross-Krook (BGK) model, the gas-kinetic scheme (GKS) offers an alternative way to recover the N-S solutions \cite{KXu2001,QBLi2005}, which is achieved through the first-order Chapman-Enskog expansion. With the help of the local integral solution of the BGK equation, a time-dependent flux function can be constructed by a Taylor expansion of the gas distribution function in space and time. Different from traditional Riemann solvers, the inviscid and viscous fluxes in GKS are coupled and obtained simultaneously. It is genuinely multi-dimensional by involving both normal and tangential derivatives in the gas distribution function \cite{QBLi2006}. The multiscale evolution process from a kinetic scale to a hydrodynamic scale keeps a good balance between accuracy and robustness \cite{KXu2005}.
Through a second-order Taylor expansion of the gas distribution function, a third-order multi-dimensional GKS (HGKS or HBGK) approach has been developed successfully \cite{QBLi2008,QBLi2010,QBLi2012}. Based on this high-order gas evolution model, a series of third-order gas-kinetic schemes have been developed within a single stage, such as the compact GKS \cite{LPan2015,LPan2016}, SV-GKS \cite{NLiu2017}, DG-GKS \cite{XDRen2015} CLS-GKS \cite{JL2019}, SCFV-GKS \cite{CZhang2019}, etc. Recently, a single-stage third-order gas-kinetic CPR method on triangular meshes has also been developed \cite{CZhang2018}. By combining the efficient CPR framework with the third-order gas-kinetic flux, it shows high accuracy and efficiency in many benchmark flow problems. Therefore, it is worth developing a fourth-order gas-kinetic CPR scheme on triangular meshes to achieve higher accuracy and efficiency.
A straightforward way to develop a fourth-order gas-kinetic CPR is to use a third-order Taylor expansion in space and time to construct a single stage time-evolution flux. Through this way a one-dimensional finite volume scheme has been developed \cite{NLiu2014}. Fortunately, the two-stage fourth-order time-stepping method has been developed for Lax-Wendroff type flow solvers \cite{JLi2016, ZDu2018, JQLi201902} by using the flux and its first-order time derivative which seems simpler. Recently, the two-stage temporal discretization has also been extended to DG based on the flux solver for the generalized Riemann problem (GRP) for the Euler equations \cite{JQLi2019}. As only one middle stage is used and little additional computational cost for the time derivative is required, the scheme shows higher efficiency than the traditional RKDG. In addition, the reduction of temporal stage also means the reduction of effective stencil size, or better locality, which benefits parallel computations. In fact, the inherent space-time coupling in this two-stage technique, as well as the locality is important for Euler and compressible N-S equations. Since the gas-kinetic flux also provides a time-evolving flux function, an efficient two-stage fourth-order GKS has also been constructed successfully through the combination of the second-order gas-kinetic flux solver and the two-stage temporal discretization method \cite{LPan2016II, FXZhao2019}.
In the meanwhile, the multi-stage multi-derivative method was developed for hyperbolic conservation laws \cite{DCSeal2014}, which has also been used to develop a family of HGKS \cite{XJi2018}.
The objective of the current study is to construct a highly efficient fourth-order scheme on triangular meshes for the compressible N-S equations. To achieve high efficiency, the new scheme fully combines the efficient CPR framework with the two-stage fourth-order time stepping method, as well as the robust time-evolving second-order gas-kinetic flux solver. Furthermore, a robust SCFV limiting approach is extended to CPR to enhance the resolution of discontinuities. Compared with existing two-stage fourth-order finite volume GKS, the present method is suitable for triangular meshes, and it is compact and efficient. On the other hand, compared with existing CPR methods, it can achieve higher efficiency due to the coupling of inviscid and viscous effects in the gas-kinetic flux, and the efficient time stepping technique with only one middle stage instead of the multi-stage R-K method. In addition,
with the help of SCFV, the limiting procedure is simple and the subcell resolution of discontinuities can be automatically achieved in high-speed flows.
The paper is organized as follows. In Section \ref{section2}, the construction of the current method is presented, including the CPR framework, the gas-kinetic flux solver, the two-stage 4th order temporal discretization and the subcell finite volume limiting procedure. Numerical tests are presented in Section \ref{section3} to verify and demonstrate the performance of the current scheme. The last section draws the conclusions.
\section{Numerical method} \label{section2}
\subsection{CPR framework}
The CPR framework has many competitive features compared to other high-order methods.
Compact reconstruction under the CPR framework is straightforward. It provides a unified framework for DG, SD, SV, etc., and is more efficient and simpler to implement.
Here the CPR framework is briefly reviewed. Consider the conservation law,
\begin{equation}\label{eq_ConLaw}
\frac{\partial\bm{\mathrm Q}}{\partial{t}}+\nabla\cdot\bm{\mathrm F} =0,
\end{equation}
where $\bm{\mathrm Q}=(\rho,\rho\bm{\mathrm U},\rho E)^T$ are the conservative variables, in which $\bm{\mathrm U}=(U,V)$ are the macroscopic velocities and $\bm{\mathrm F}=(\bm{\mathrm F}_x,\bm{\mathrm F}_y)$ is the flux vector. The computational domain is divided into $N$ non-overlapping triangular cells $\{\Omega_i\}$. In each cell ${\Omega}_i$, $\bm{\mathrm Q}$ is approximated by a solution polynomial $\bm{\mathrm Q}_i$ with degree $k$, which belongs to the polynomial space $P^k$. $\bm{\mathrm Q}_i$ can be constructed by the Lagrange interpolation,
\begin{equation}\label{eq_Q(x,y)}
\bm{\mathrm Q}_i(\bm{\mathrm x})=\sum_j L_j(\bm{\mathrm x})\bm{\mathrm Q}_{i,j},
\end{equation}
where ${L}_j(\bm{\mathrm x})$ is the Lagrange basis, $\bm{\mathrm Q}_{i,j}$ are the conservative variables at the solution point (SP) with the position $\bm{\mathrm x}_{i,j}=(x_{i,j},y_{i,j})$. For the fourth-order CPR, $k=3$, the number of SPs is equal to $(k+1)(k+2)/2=10$. To involve the interaction between cells, we need to define $(k+1)$ flux points (FPs) at each cell interface to compute the common flux. The distribution of SPs and FPs is shown in Fig.\ref{SPsFPs}. For efficiency, the SPs are chosen to coincide with the FPs.
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{Figure/SPsFPs.eps}\\
\captionsetup{labelsep=period}
\caption{Solution points (circles) and flux points (squares) with $k$=3.}
\label{SPsFPs}
\end{figure}
The semi-discrete CPR formulation can be expressed as
\begin{equation}\label{eq_CPR}
\begin{split}
\frac{\partial\bm{\mathrm Q}_{i,j}}{\partial{t}}&= \mathcal{R}_{i,j}({\bm{\mathrm F}}), \\
\mathcal{R}_{i,j}(\bm{\mathrm F})=-\Pi_j\left(\nabla\cdot{\bm{\mathrm F}_i}\right)&-\frac{1}{|{\Omega}_i|}\sum_{s\in\partial{\Omega}_i}\sum_{l}\alpha_{j,s,l}[{\bm{\mathrm F}}_{i,i+}]_{s,l}|\Gamma|_s,
\end{split}
\end{equation}
where the second term denotes the flux divergence at SPs, $\Pi$ is the projection operator which projects the flux divergence term onto $P^k$. Through the Lagrange interpolation, a flux polynomial ${\bm{\mathrm F}}_i={\bm{\mathrm F}}(\bm{\mathrm Q}_i)$ with degree $k$ can also be constructed. Then the projection can be expressed as
\begin{equation}\label{eq4}
\Pi{\left(\nabla\cdot{\bm{\mathrm F}}_i\right)}=\nabla\cdot\left(\sum_{j}{L}_j(\bm{\mathrm x}){\bm{\mathrm F}}(\bm{\mathrm Q}_{i,j})\right),
\end{equation}
where ${\bm{\mathrm F}}(\bm{\mathrm Q}_{i,j})$ is the flux at SPs. The third term in Eq.(\ref{eq_CPR}), named the correction field, accounts for the interaction between cell ${\Omega}_i$ and its face neighbors ${\Omega}_{i+}$, in which $[{\bm{\mathrm F}}_{i,i+}]_{s,l}=[{\bm{\mathrm F}}_{com}(\bm{\mathrm Q}_i,\bm{\mathrm Q}_{i+},\bm{n})-{\bm{\mathrm F}}(\bm{\mathrm Q}_i)\cdot\bm{n}]_{s,l}$ is the normal flux difference at FPs and ${\bm{\mathrm F}}_{com}(\bm{\mathrm Q}_i,\bm{\mathrm Q}_{i+},\bm{n})$ is the common flux. $\alpha_{j,s,l}$ is the lifting coefficient, which is independent of the solution and geometry. $|{\Omega}_i|$ is the area of cell $i$ and $|\Gamma|_s$ is the length of triangular edge $s$. More details of the CPR framework can be found in Refs.\cite{Huynh2007, ZJWang2009}.
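
To make the assembly of Eq.(\ref{eq_CPR}) and Eq.(\ref{eq4}) concrete, the following sketch (not part of the scheme itself) evaluates the residual of one cell, assuming that the Lagrange-basis derivative matrices at the SPs, the normal flux differences at the FPs, the lifting coefficients and the cell geometry are already available; all array names and the random data are purely illustrative.
\begin{verbatim}
import numpy as np

def cpr_residual(F_sp, dLdx_sp, dLdy_sp, F_jump_fp, alpha, area, edge_len):
    """Semi-discrete CPR residual of one triangular cell (k = 3).

    F_sp      : (10, 2, nvar) fluxes (Fx, Fy) at the solution points
    dLdx_sp, dLdy_sp : (10, 10) x/y-derivatives of the Lagrange basis at SPs
    F_jump_fp : (3, 4, nvar) normal flux differences [F_com - F.n] at the FPs
    alpha     : (10, 3, 4) lifting coefficients alpha_{j,s,l}
    area      : cell area; edge_len : (3,) edge lengths
    """
    # projected flux divergence evaluated at the SPs
    div_F = (np.einsum('ij,jv->iv', dLdx_sp, F_sp[:, 0, :])
             + np.einsum('ij,jv->iv', dLdy_sp, F_sp[:, 1, :]))
    # correction field: lifted normal flux differences from the 3 edges
    corr = np.einsum('jsl,slv,s->jv', alpha, F_jump_fp, edge_len) / area
    return -div_F - corr

rng = np.random.default_rng(0)
R = cpr_residual(rng.normal(size=(10, 2, 4)),
                 rng.normal(size=(10, 10)), rng.normal(size=(10, 10)),
                 rng.normal(size=(3, 4, 4)), rng.normal(size=(10, 3, 4)),
                 area=0.5, edge_len=np.ones(3))
print(R.shape)   # (10, 4): one residual per SP and per variable
\end{verbatim}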
\subsection{Gas-kinetic flux solver}\label{section2.3}
Traditional Riemann solvers are usually used to compute the fluxes at the flux points. Then the governing equation is discretized in time using a multi-stage R-K scheme. In order to achieve fourth-order accuracy in time, a minimum of four stages is needed in the R-K scheme. In the present study, we improve the efficiency of the CPR method by combining the gas-kinetic flux solver with a two-stage fourth-order time marching scheme. In particular, the second-order gas-kinetic flux solver is the foundation of the overall algorithm. Its basic principle is briefly introduced here.
In the mesoscopic gas-kinetic theory, the flow is described by the gas distribution function $f=f(\bm{\mathrm x},t,\bm{\mathrm u},\bm{\mathrm \xi})$ which is a function of physical space $\bm{\mathrm x}$, time $t$, particle velocity $\bm{\mathrm u}=(u,v)$ and internal degrees of freedom $\bm{\mathrm \xi}$. In the present study, only two-dimensional (2D) flow is considered.
The macroscopic conservative variables $\bm{\mathrm Q}$ and the flux vector $\bm{\mathrm F}$ can be obtained by taking moments of $f$ in the phase space, i.e.,
\begin{equation}\label{eq_f_Q}
\bm{\mathrm Q}=\int f\bm{\mathrm \psi}\mathrm{d}\Xi,~~~
\end{equation}
\begin{equation}\label{eq_f_F}
{\bm{\mathrm F}}_\sigma=\int u_\sigma f\bm{\mathrm \psi}\mathrm{d}\Xi,~ \sigma=1,2,
\end{equation}
where $\bm{\mathrm \psi}={\left(1,\bm{\mathrm u},({\bm{\mathrm u}}^2+{\bm{\mathrm \xi}}^2)/2\right)}^T$ is the vector of moments, $\mathrm{d}\Xi=\mathrm{d}\bm{\mathrm u}\mathrm{d}\bm{\mathrm \xi}$ is the element of the phase space. The governing equation of $f$ is the 2D BGK equation \cite{Bhatnagar1954}
\begin{equation}\label{eq_bgk}
\frac{\partial f}{\partial t}+\bm{\mathrm u}\cdot\nabla{f}= \frac{g-f}{\tau},
\end{equation}
where $\tau=\mu/p$ is the collision time dependent on the dynamic viscous coefficient $\mu$ and pressure $p$. The local equilibrium state $g$ approached by $f$ is the Maxwellian distribution,
\begin{equation}
g=\rho (2\pi RT)^{-(K+2)/2} \mathrm{e}^{-[ (\bm{\mathrm u}-\bm{\mathrm U})^2+\bm{\mathrm \xi}^2]/(2RT)},
\end{equation}
where $\rho$ is the density, $R$ is the gas constant, $T$ is the temperature, and $K$ is the number of internal degrees of freedom $\bm{\mathrm \xi}$, which is equal to $(5-3\gamma)/(\gamma-1)+1$ for two-dimensional flow, in which $\gamma$ is the specific heat ratio. By taking moments of Eq.(\ref{eq_bgk}), the macroscopic conservation law Eq.(\ref{eq_ConLaw}) can be recovered, in which the collision term $(g-f)/\tau$ vanishes automatically due to the conservation of mass, momentum and total energy during collisions, i.e., the compatibility condition. Particularly, the Navier-Stokes equations can be recovered through the first-order Chapman-Enskog expansion \cite{KXu2015}
\begin{equation}\label{eq_C_E}
\begin{split}
f_{NS}=g-\tau \left(\frac{\partial g}{\partial t}+\bm{\mathrm u}\cdot\nabla{g}\right).
\end{split}
\end{equation}
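
As a simple consistency check of Eqs.(\ref{eq_f_Q}) and (\ref{eq_f_F}) together with the Maxwellian above, the sketch below numerically integrates the moments of a 2D equilibrium state (the $\bm{\mathrm \xi}$-contribution is handled analytically) and recovers $\rho$, $\rho U$, $\rho V$ and $\rho E=p/(\gamma-1)+\rho(U^2+V^2)/2$. The macroscopic values and the quadrature are our own choices for illustration only.
\begin{verbatim}
import numpy as np

# assumed macroscopic state for the check
rho, U, V, T, R_gas, gamma = 1.2, 0.3, -0.1, 0.9, 1.0, 1.4
K = (5.0 - 3.0 * gamma) / (gamma - 1.0) + 1.0   # internal dof in 2D
lam = 1.0 / (2.0 * R_gas * T)                   # 1/(2RT)

# particle-velocity grid wide enough to cover the Gaussian tails
u = np.linspace(U - 8.0, U + 8.0, 801)
v = np.linspace(V - 8.0, V + 8.0, 801)
du, dv = u[1] - u[0], v[1] - v[0]
uu, vv = np.meshgrid(u, v, indexing='ij')

# Maxwellian with xi already integrated out:
# g2d = rho * (lam/pi) * exp(-lam * ((u-U)^2 + (v-V)^2))
g2d = rho * (lam / np.pi) * np.exp(-lam * ((uu - U)**2 + (vv - V)**2))

m0 = np.sum(g2d) * du * dv                      # rho
m1 = np.sum(uu * g2d) * du * dv                 # rho U
m2 = np.sum(vv * g2d) * du * dv                 # rho V
# (u^2+v^2)/2 numerically; xi^2/2 contributes K*R*T/2 per unit mass
m3 = np.sum(0.5 * (uu**2 + vv**2) * g2d) * du * dv + 0.5 * K * R_gas * T * rho

p = rho * R_gas * T
rhoE_exact = p / (gamma - 1.0) + 0.5 * rho * (U**2 + V**2)
print(m0, m1, m2, m3)   # ~ (1.2, 0.36, -0.12, 2.76)
print(rhoE_exact)       # 2.76, matching the energy moment
\end{verbatim}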
To update the numerical solution under the CPR framework, the fluxes at FP and SP need to be determined. Since the solution is discontinuous across cell interfaces, common fluxes are computed with the solutions from both sides of the cell interface. In the present study, the time-dependent gas distribution function is constructed based on the local analytical solution of the BGK equation, i.e.,
\begin{equation}\label{eq_BGK_exact}
\begin{split}
f(\bm{\mathrm x},t,\bm{\mathrm u},\bm{\mathrm \xi})=\frac{1}{\tau}\int_0^t g(\bm{\mathrm x} -\bm{\mathrm u} (t-t'),t',\bm{\mathrm u} ,\bm{\mathrm \xi} )\mathrm{e}^{-(t-t')/\tau}\mathrm{d} t'
+\mathrm{e}^{-t/\tau}f_0(\bm{\mathrm x} -\bm{\mathrm u} t,\bm{\mathrm u} ,\bm{\mathrm \xi}),
\end{split}
\end{equation}
where $f_0$ is the piecewise continuous initial distribution function at the start of each time step and $g$ is the local equilibrium state. For simplicity, the cell interface is assumed to be perpendicular to the $x$-axis. By constructing $f_0$ and $g$ in the neighborhood of a FP through first-order Taylor expansions of $f_{NS}$ and $g$, the time-dependent gas distribution function can be obtained from Eq.(\ref{eq_BGK_exact}), i.e.,
\begin{equation}\label{eq_f_dis}
\begin{split}
f_{_\mathrm{FP}}(t,\bm{\mathrm u},\bm{\mathrm \xi})=&g_0\left(1-\mathrm{e}^{-t/\tau}
+((t+\tau)\mathrm{e}^{-t/\tau}-\tau)(a_1u+a_2v)
+(t-\tau+\tau\mathrm{e}^{-t/\tau})A\right) \\
&+\mathrm{e}^{-t/\tau}g_R\left(1-(\tau+t)(a_1^Ru+a_2^Rv)-\tau A^R\right) \mathrm{H}(u) \\
&+\mathrm{e}^{-t/\tau}g_L\left(1-(\tau+t)(a_1^Lu+a_2^Lv)-\tau A^L\right) \left(1-\mathrm{H}(u)\right),
\end{split}
\end{equation}
where $\mathrm{H}(u)$ is the Heaviside function. The coefficients $a_1,a_2$ and $A$ are related to the Taylor expansion of the corresponding Maxwellian functions, i.e., the derivatives of $g_0$. Other coefficients with superscript $L, R$ correspond to $g_L$ and $g_R$ respectively. More details can be found in Refs.\cite{KXu2001,QBLi2005}. Both normal and tangential spatial derivatives are involved in the construction of the gas distribution function, which depicts a multidimensional transport process across the interface, contributing to an intrinsically multidimensional flux solver \cite{QBLi2006}.
Since the solution is continuous inside each cell, the gas distribution function at a SP can be simplified as,
\begin{equation}\label{eq_f_con}
\begin{split}
f_{_\mathrm{SP}}(t,\bm{\mathrm u},\bm{\mathrm \xi})=&g_0\left(1-\tau(a_1u+a_2v)+(t-\tau)A\right).
\end{split}
\end{equation}
Thus the computational cost can be significantly reduced and the accuracy is improved in smooth flow regions as well.
Based on the above gas distribution functions Eq.(\ref{eq_f_dis}) and Eq.(\ref{eq_f_con}), a time-evolving flux function can be obtained according to Eq.(\ref{eq_f_F}). If the flux function is integrated within a time step directly, second-order time accuracy can be obtained with only one stage \textcolor{black}{since both the flux and its time derivative are involved}. However, if combined with the two-stage temporal discretization \cite{LPan2016II}, fourth-order time accuracy can be achieved in a highly efficient way, with the help of the time derivative of the flux, which can be computed simply from the gas-kinetic flux. This is introduced in the following.
\subsection{Two-stage fourth-order time integration}
Based on the unsteady gas-kinetic flux, an efficient two-stage time marching scheme can be constructed for the CPR framework to achieve fourth-order time accuracy by adopting the two-stage temporal discretization \cite{JLi2016, ZDu2018}.
For the semi-discrete CPR method, Eq.(\ref{eq_CPR}), the solution ${\bm{\mathrm Q}}$ can be updated by,
\begin{equation}\label{eq_S2O4}
\begin{split}
&{\bm{\mathrm Q}}^*={\bm{\mathrm Q}}^n+\frac12 \Delta t \mathcal{R}({\bm{\mathrm F}}^n)+\frac18 \Delta t^2 \frac{\partial \mathcal{R}({\bm{\mathrm F}}^n)}{\partial t},\\
&{\bm{\mathrm Q}}^{n+1}={\bm{\mathrm Q}}^n+\Delta t \mathcal{R}({\bm{\mathrm F}}^n)+\frac16 \Delta t^2
\left(\frac{\partial \mathcal{R}({\bm{\mathrm F}}^n)}{\partial t}+2\frac{\partial \mathcal{R}({\bm{\mathrm F}}^*)}{\partial t}\right),
\end{split}
\end{equation}
where ${\bm{\mathrm F}}^*={\bm{\mathrm F}}({\bm{\mathrm Q}}^*,t)$ is the flux at the middle stage $t^*=t^n+\Delta{t}/2$. It has been proved that the above two-stage temporal discretization achieves fourth-order time accuracy for hyperbolic conservation laws. The success lies in the use of both the flux and its first-order time derivative.
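
To make the fourth-order property of Eq.(\ref{eq_S2O4}) concrete, the following sketch (an illustration on a scalar ODE, not the PDE scheme itself) applies the two-stage update to the model problem $\mathrm{d}q/\mathrm{d}t=q$, for which the ``flux'' and its time derivative are both equal to $q$; the update then reproduces the fourth-order Taylor polynomial of $\mathrm{e}^{\Delta t}$, and the error decreases by roughly a factor of 16 when the time step is halved.
\begin{verbatim}
import numpy as np

def step_s2o4(q, dt, f, dfdt):
    """One two-stage fourth-order step for dq/dt = f(q),
    with dfdt(q) = f'(q) * f(q) the time derivative of the 'flux'."""
    q_star = q + 0.5 * dt * f(q) + 0.125 * dt**2 * dfdt(q)
    return q + dt * f(q) + dt**2 / 6.0 * (dfdt(q) + 2.0 * dfdt(q_star))

f = lambda q: q        # model flux
dfdt = lambda q: q     # f'(q) * f(q) = q for this model problem

for n_steps in (10, 20, 40, 80):
    dt, q = 1.0 / n_steps, 1.0
    for _ in range(n_steps):
        q = step_s2o4(q, dt, f, dfdt)
    print(n_steps, abs(q - np.e))   # error drops ~16x per halving of dt
\end{verbatim}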
If the computational mesh does not change with time, $\mathcal{R}$ is a linear function of ${\bm{\mathrm F}}$, thus $\partial \mathcal{R}({\bm{\mathrm F}})/\partial t=\mathcal{R}(\partial{\bm{\mathrm F}}/\partial t)$. So the key point is to compute the flux and its time derivative for both stages. First of all, the time-dependent flux can be obtained through the integration of the distribution function in the phase space according to Eq.(\ref{eq_f_F}), denoted as $\bm{\mathrm F}(\bm{\mathrm Q}^n,t)$. As it is not a linear function of $t$, a simple fitting method can be adopted to obtain the approximated flux and its first-order derivative. It is based on the time integration of $\bm{\mathrm F}(\bm{\mathrm Q}^n,t)$ within $[t^n,t^n+\delta]$,
\begin{equation}
\label{eq_S2O4_F(t)_int}
\widehat{\bm{\mathrm F}}(\bm{\mathrm Q}^n,\delta)=\int_{t^n}^{t^n+\delta}\bm{\mathrm F}(\bm{\mathrm Q}^n,t)\mathrm{d}{t}.
\end{equation}
As $\bm{\mathrm F}(\bm{\mathrm Q}^n,t)$ is approximated by a linear function $\tilde{\bm{\mathrm F}}(\bm{\mathrm Q}^n,t)=\bm{\mathrm F}^n+(t-t^n)\partial_t\bm{\mathrm F}^n$ within the time interval $[t^n,t^n+\Delta t]$, the time integration gives,
\begin{equation}
\label{equ:S2O4_linear_F(t)_int_II}
\begin{split}
&\frac12\Delta{t}\bm{\mathrm F}^n+\frac18\Delta{t^2}\partial_t\bm{\mathrm F}^n=\widehat{\bm{\mathrm F}}(\bm{\mathrm Q}^n,\Delta t/2),\\
&\Delta{t}\bm{\mathrm F}^n+\frac12\Delta{t^2}\partial_t\bm{\mathrm F}^n=\widehat{\bm{\mathrm F}}(\bm{\mathrm Q}^n,\Delta t),
\end{split}
\end{equation}
where the left hand side is the time integration of $\tilde{\bm{\mathrm F}}(\bm{\mathrm Q}^n,t)$ within $[t^n,t^n+\Delta t/2]$ and $[t^n,t^n+\Delta t]$ respectively, while the right hand side is obtained according to Eq.(\ref{eq_S2O4_F(t)_int}). By solving the equation set Eq.(\ref{equ:S2O4_linear_F(t)_int_II}), we have
\begin{equation}
\label{equ:S2O4_linear_F(t)_int_III}
\begin{split}
&\bm{\mathrm F}^n=(4\widehat{\bm{\mathrm F}}(\bm{\mathrm Q}^n,\Delta t/2)-\widehat{\bm{\mathrm F}}(\bm{\mathrm Q}^n,\Delta t))/\Delta t,\\
&\partial_t\bm{\mathrm F}^n=4(\widehat{\bm{\mathrm F}}(\bm{\mathrm Q}^n,\Delta t)-2\widehat{\bm{\mathrm F}}(\bm{\mathrm Q}^n,\Delta t/2))/\Delta t^2.
\end{split}
\end{equation}
Similarly, the approximated flux $\bm{\mathrm F}^{\ast}$ and its time derivative $\partial_t\bm{\mathrm F}^{\ast}$ can be obtained, by simply replacing the superscript $n$ in Eq.(\ref{equ:S2O4_linear_F(t)_int_III}) with $\ast$.
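
The linear fit of Eqs.(\ref{eq_S2O4_F(t)_int})--(\ref{equ:S2O4_linear_F(t)_int_III}) can be checked numerically with any smooth flux history: the two time integrals over a half and a full time step return approximations of the flux value and its time derivative at $t^n$. The sketch below does this for an arbitrary, made-up function $F(t)$; it is only a verification of the fitting formulas, not part of the scheme.
\begin{verbatim}
import numpy as np

def integrate(f, a, b, n=4000):
    # composite trapezoidal rule, accurate enough for this illustration
    t = np.linspace(a, b, n + 1)
    y = f(t)
    return (b - a) / n * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

F = lambda t: np.sin(1.3 * t) + 0.5 * t**2   # made-up smooth flux history
tn, dt = 0.2, 0.05

Fhat_half = integrate(F, tn, tn + 0.5 * dt)  # integral over [t^n, t^n+dt/2]
Fhat_full = integrate(F, tn, tn + dt)        # integral over [t^n, t^n+dt]

Fn  = (4.0 * Fhat_half - Fhat_full) / dt            # fitted flux value at t^n
dFn = 4.0 * (Fhat_full - 2.0 * Fhat_half) / dt**2   # fitted time derivative

print(Fn,  F(tn))                        # close to the exact flux value
print(dFn, 1.3 * np.cos(1.3 * tn) + tn)  # close to the exact time derivative
\end{verbatim}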
Now the final two-stage gas-kinetic CPR framework can be developed through Eqs.(\ref{eq_S2O4}) and (\ref{equ:S2O4_linear_F(t)_int_III}). At the first stage, with the solution at $t^n$, the gas distribution function is constructed at each SP and FP through Eqs.(\ref{eq_f_dis}) and (\ref{eq_f_con}), respectively. Then the time integrals $\widehat{\bm{\mathrm F}}(\bm{\mathrm Q}^n,\Delta t/2)$ and $\widehat{\bm{\mathrm F}}(\bm{\mathrm Q}^n,\Delta t)$ can be computed through Eq.(\ref{eq_S2O4_F(t)_int}), and thus the solution at each SP is updated by
\begin{equation}
\label{equ:S2O4_CPR_stage_I}
\bm{\mathrm Q}_{i,j}^*=\bm{\mathrm Q}_{i,j}^n + \mathcal{R}_{i,j}(\widehat{\bm{\mathrm F}}),
\end{equation}
where the flux at each SP and FP is computed by
\begin{equation}
\label{equ:S2O4_CPR_stage_I_flux}
\widehat{\bm{\mathrm F}}=\widehat{\bm{\mathrm F}}(\bm{\mathrm Q}^n,\Delta t/2).
\end{equation}
At the second stage, the time integration of the flux $\widehat{\bm{\mathrm F}}(\bm{\mathrm Q}^*,\Delta t/2)$ and $\widehat{\bm{\mathrm F}}(\bm{\mathrm Q}^*,\Delta t)$ at each SP and FP can be obtained in a similar way with the solution at $t^*$. Then the solution at each SP is updated by
\begin{equation}
\label{equ:S2O4_CPR_stage_II}
\bm{\mathrm Q}_{i,j}^{n+1}=\bm{\mathrm Q}_{i,j}^n + \mathcal{R}_{i,j}(\widetilde{\bm{\mathrm F}}),
\end{equation}
where the flux at each SP and FP is computed by
\begin{equation}
\label{equ:S2O4_CPR_stage_II_flux}
\widetilde{\bm{\mathrm F}}=\frac{8}{3}\widehat{\bm{\mathrm F}}(\bm{\mathrm Q}^n,\Delta t/2) - \frac{1}{3}\widehat{\bm{\mathrm F}}(\bm{\mathrm Q}^n,\Delta t) -\frac{8}{3}\widehat{\bm{\mathrm F}}(\bm{\mathrm Q}^*,\Delta t/2) + \frac{4}{3}\widehat{\bm{\mathrm F}}(\bm{\mathrm Q}^*,\Delta t) .
\end{equation}
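
The coefficients in Eq.(\ref{equ:S2O4_CPR_stage_II_flux}) can be verified symbolically: substituting the linear-in-time flux models used in the fit into the four time integrals must return $\Delta t\bm{\mathrm F}^n+\frac16\Delta t^2(\partial_t\bm{\mathrm F}^n+2\partial_t\bm{\mathrm F}^*)$, i.e., the increment added to $\bm{\mathrm Q}^n$ in the second line of Eq.(\ref{eq_S2O4}). A short check with a computer algebra system is given below; the symbol names are ours.
\begin{verbatim}
import sympy as sp

dt, Fn, dFn, Fs, dFs = sp.symbols('dt F_n dF_n F_s dF_s')

# time integrals of the linear flux models F^n + (t - t^n) dF^n and
# F^* + (t - t^*) dF^*, over half and full time steps
A = dt * Fn / 2 + dt**2 * dFn / 8    # corresponds to Fhat(Q^n, dt/2)
B = dt * Fn     + dt**2 * dFn / 2    # corresponds to Fhat(Q^n, dt)
C = dt * Fs / 2 + dt**2 * dFs / 8    # corresponds to Fhat(Q^*, dt/2)
D = dt * Fs     + dt**2 * dFs / 2    # corresponds to Fhat(Q^*, dt)

F_tilde = sp.Rational(8, 3) * A - sp.Rational(1, 3) * B \
        - sp.Rational(8, 3) * C + sp.Rational(4, 3) * D

target = dt * Fn + dt**2 / 6 * (dFn + 2 * dFs)  # second-stage increment
print(sp.simplify(F_tilde - target))            # prints 0
\end{verbatim}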
Compared with traditional CPR methods, which usually adopt the multi-stage R-K method, the current two-stage fourth-order gas-kinetic CPR can be more efficient because the number of stages is reduced from four or five to two, and the viscous flux computation is avoided.
\subsection{Hybrid CPR/SCFV approach for shock capturing}
When simulating high-speed flows with shock waves, it is difficult for a high-order scheme with IDOFs to preserve its high resolution, since the solution is continuous within each element and numerical oscillations always occur due to the Gibbs phenomenon. Limiters or artificial viscosity can be adopted to suppress the oscillations. However, the subcell resolution may be lost, because shock-capturing schemes are only first order, $O({\delta} x)$, in the element containing the shock.
To capture shock waves robustly and accurately, a hybrid CPR/SCFV approach is developed by extending the subcell finite volume (SCFV) limiting procedure \cite{MDumbser2014,MDumbser2016} to the CPR framework. The smooth flow regions are solved by the CPR method directly, while regions containing shock waves are solved by the SCFV method. With the hybrid method, the subcell resolution can be preserved automatically and shock waves can be captured with high resolution and robustness.
To construct a hybrid method, we need to identify the ``troubled cells'' and determine how the solutions are transferred between the two methods.
To distinguish regions containing shocks from smooth regions, a simple and effective shock detector is adopted to mark troubled cells \cite{GFu2017} at the beginning of each time step,
\begin{equation}
\label{equ:detector}
I_{\Omega_0}=\frac{\sum\limits_{l=1}^3 |\bar{Q}_0^n-\bar{Q}_{0,l}^n|}{\max\limits_{0\leq l \leq 3}\bar{Q}_l^n},
\end{equation}
\textcolor{black}{where $\bar{Q}_l^n$ indicates the averaged density or total energy on the target cell with $l=0$, and its three face neighbors with $l=1\sim3$. $\bar{Q}_{0,l}^n$ is the average of polynomial $Q_l^n(\bm{\mathrm x})$ ($l=1\sim3$) on the target cell.}
\textcolor{black}{As recommended in Ref.\cite{GFu2017}, the target cell is marked as a troubled cell when $I_{\Omega_0}>0.12$.} However, in our study this threshold is found to be too low, and too many cells are marked in some cases. Thus, for all numerical tests below, the threshold is set to 0.3, and meanwhile the face neighbors of a troubled cell are also marked. The new strategy seems to work better. Nevertheless, we note that the current detector may not be the best. More investigations are necessary in the future.
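
A possible implementation of the detector with the modified threshold is sketched below; the cell-average arrays, the averages of the neighbours' polynomials on the target cell, and the face-neighbour list are assumed to be available, and all names as well as the tiny example data are illustrative.
\begin{verbatim}
import numpy as np

def mark_troubled_cells(q_bar, q_nbr_proj, nbr, threshold=0.3):
    """Shock detector applied to one indicator field (density or energy).

    q_bar      : (N,) cell averages
    q_nbr_proj : (N, 3) averages of the neighbours' polynomials on each cell
    nbr        : (N, 3) face-neighbour indices
    """
    num = np.abs(q_bar[:, None] - q_nbr_proj).sum(axis=1)
    den = np.maximum(q_bar, q_bar[nbr].max(axis=1))
    marked = num / den > threshold
    spread = marked.copy()
    spread[nbr[marked].ravel()] = True    # also mark the face neighbours
    return spread

# tiny made-up example: five cells, a jump between cells 1 and 2
q_bar = np.array([1.0, 1.0, 3.0, 3.0, 3.0])
nbr = np.array([[4, 1, 2], [0, 2, 3], [1, 3, 0], [2, 4, 1], [3, 0, 2]])
q_nbr_proj = q_bar[nbr]   # crude stand-in for the projected averages
print(mark_troubled_cells(q_bar, q_nbr_proj, nbr))
\end{verbatim}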
For the unmarked cells, the CPR framework is adopted directly to keep the high accuracy and efficiency in smooth flow regions. Meanwhile, a robust SCFV method is applied to the troubled cells, which need to be further partitioned into a set of subcells. For clarity, the original cell is also called a main cell.
Assume that $\Omega_i$ is marked as a troubled cell at $t_n$. For a fourth-order CPR with $k=3$, each cell contains 10 IDOFs. To preserve the subcell resolution and avoid the complexity of partition encountered in the SV method \cite{ZJWang2004}, $\Omega_i$ is uniformly partitioned into $(k+1)^2=16$ subcells $\{\Omega_{i,j}\},~j=1\sim16$, as shown in Fig.\ref{Subcells}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.25]{Figure/SCFV_Limiter_Subcells.eps}\\
\captionsetup{labelsep=period}
\caption{The partition of subcells.}
\label{Subcells}
\end{figure}
The data transfer from CPR to SCFV is directly based on the solution polynomial. By taking the average of $\bm{\mathrm Q}_i^n(\bm{\mathrm x})$ on each subcell $\Omega_{i,j}$, the subcell averaged solution can be obtained
\begin{equation}
\label{eq_subcell_average}
\bar{\bm{\mathrm Q}}_{i,j}^n=\frac1{|\Omega_{i,j}|}\int_{\Omega_{i,j}}\bm{\mathrm Q}_i^n(\bm{\mathrm x})\mathrm{d}\Omega,~~~\forall\Omega_{i,j}\in\{\Omega_{i,j}\}.
\end{equation}
With the subcell averaged solutions, a robust second-order TVD finite volume reconstruction is implemented on each subcell $\Omega_{i,j}$ to obtain a non-oscillatory linear solution distribution, denoted as $\tilde{\bm{\mathrm Q}}_{i,j}^n(\bm{\mathrm x})$. The reconstruction stencil for a subcell only involves its face-neighboring subcells. We note that, to provide the stencil of the reconstruction, the face-neighboring main cells of $\Omega_i$ also need to be partitioned into subcells, even if they are not marked as troubled cells. Details of the reconstruction can be found in Refs.\cite{ZJWang2004,TJBarth1989}. As a result, the original cubic solution polynomial $\bm{\mathrm Q}_i^n(\bm{\mathrm x})$ on the troubled cell $\Omega_i$ is replaced by a set of piecewise linear solution distributions, i.e., $\{\tilde{\bm{\mathrm Q}}_{i,j}^n(\bm{\mathrm x})\}$.
With the new distributions, we can obtain the solutions at the interfaces bounding each subcell, on which the flux is computed. Then the subcell averaged solutions are updated in the finite volume framework. We note that the flux solver and the time-stepping method used on these subcells are consistent with those used in the CPR framework.
The data transformation from SCFV to CPR is actually a reconstruction. At $t^{n+1}$, with the set of subcell averaged solutions on $\Omega_i$, i.e., $\{\bar{\bm{\mathrm Q}}_{i,j}^{n+1}\}$, the cubic solution polynomial $\bm{\mathrm Q}_i^{n+1}(\bm{\mathrm x})$ can be recovered by a least-squares reconstruction.
\begin{equation}
\label{equ_SCFV_t_n+1_LS}
\mathrm{min}\left[\sum_{j=1}^{16}\left(\frac{1}{|\Omega_{i,j}|} \int_{\Omega_{i,j}}\bm{\mathrm Q}_i^{n+1}(\bm{\mathrm x})\mathrm{d}\Omega-\bar{\bm{\mathrm Q}}_{i,j}^{n+1}\right)^2\right].
\end{equation}
To conserve the main cell averaged solution on $\Omega_i$ directly, zero-mean basis functions are adopted in $\bm{\mathrm Q}_i^{n+1}(\bm{\mathrm x})$. Then the shock detector is applied to mark troubled cells at $t^{n+1}$. If $\Omega_i$ is marked again, $\{\bar{\bm{\mathrm Q}}_{i,j}^{n+1}\}$ can be directly used for the limiting procedure at $t^{n+1}$. Otherwise, if $\Omega_i$ is not marked at $t^{n+1}$, the CPR framework is adopted on this cell. The solution at SPs can be computed with $\bm{\mathrm Q}_i^{n+1}(\bm{\mathrm x})$. More details of the limiting procedure can be found in Ref.\cite{MDumbser2016}.
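
The recovery step of Eq.(\ref{equ_SCFV_t_n+1_LS}) amounts to a standard least-squares solve once the averages of the zero-mean basis functions over the 16 subcells have been tabulated. A generic sketch with a made-up basis-averaging matrix and synthetic data is given below; it only illustrates the linear algebra, not the geometric precomputation.
\begin{verbatim}
import numpy as np

def recover_polynomial(M, q_sub, q_main):
    """Recover the CPR polynomial coefficients from subcell averages.

    M      : (16, n_basis) averages of the zero-mean basis functions over
             the 16 subcells (precomputed from the geometry)
    q_sub  : (16, n_var) subcell-averaged variables at t^{n+1}
    q_main : (n_var,) main-cell average, conserved by the zero-mean basis
    """
    rhs = q_sub - q_main[None, :]                # subtract the conserved mean
    c, *_ = np.linalg.lstsq(M, rhs, rcond=None)  # least-squares solve
    return c                                     # (n_basis, n_var)

# synthetic check: 9 zero-mean basis functions (cubic), 4 variables
rng = np.random.default_rng(1)
M = rng.normal(size=(16, 9))
c_true = rng.normal(size=(9, 4))
q_main = rng.normal(size=4)
q_sub = q_main[None, :] + M @ c_true             # consistent subcell data
print(np.allclose(recover_polynomial(M, q_sub, q_main), c_true))   # True
\end{verbatim}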
Since each subcell is treated separately, discontinuities are introduced on the interfaces between subcells. As a result, shock waves can be resolved in the scale of subcells. The thickness of shock waves can be smaller than the size of main cells. On the contrary, for traditional limiters applied to main cells directly, discontinuities can only exist on interfaces between main cells, and shock waves can only be resolved in the scale of main cells. It is obvious that the current method is able to capture shock waves with higher resolution. In short, the hybrid CPR/SCFV approach fully combines the high accuracy and efficiency of CPR with the robustness and high resolution of SCFV.
\subsection{Remarks on efficiency}
In terms of the computational cost, the main difference between the present and the traditional CPR methods lies in the flux evaluation and the time-stepping approach. To achieve fourth-order time accuracy, the traditional CPR usually adopts the widely used five-stage fourth-order R-K method. In contrast, the current scheme only needs two stages.
Denote $T_{gks}$ as the computational cost in a time step for the present method, and $T_{cpr}$ for the traditional CPR. They can be expressed as
\begin{equation}
\label{cost_gks}
\begin{split}
&T_{gks}=2(T_{gks,sp}^{inv+vis}+T_{gks,fp}^{inv+vis}+T_{gks}^{res}),\\
&T_{cpr} =5(T_{cpr,sp}^{inv}+T_{cpr,sp}^{vis}+T_{cpr,fp}^{inv}+T_{cpr,fp}^{vis}+T_{cpr}^{res}),
\end{split}
\end{equation}
in which the factors 2 and 5 indicate the number of stages. The subscripts $sp$ and $fp$ represent the corresponding cost of flux computation at SPs and FPs.
The superscript $res$ indicates the remaining part of the computational cost, including reconstruction and residual evaluation. The superscripts $inv$ and $vis$ are for the inviscid and viscous parts, respectively. For the traditional CPR, the inviscid flux and viscous flux indicated by the superscripts $inv$ and $vis$ are computed separately. The flux at SPs is computed according to the flux function of the N-S equations. The inviscid flux at FPs is usually computed by Riemann solvers. For the gas-kinetic flux solver, these inviscid and viscous fluxes are coupled and computed simultaneously.
Now we can easily estimate the computational cost of these two methods. For simplicity, only smooth flow is considered, thus no limiter is considered in the estimate. The inviscid flux is computed with the Roe scheme in CPR, and the viscous flux by the LDG scheme. Due to the higher-order reconstruction for the viscous flux, $T_{cpr}^{res}$ is a little larger than $T_{gks}^{res}$, by about ten percent, according to numerical experiments. For the flux computation at FPs, the gas-kinetic solver is more expensive than the Roe scheme plus LDG, with a ratio $T_{gks,fp}^{inv+vis}/(T_{cpr,fp}^{inv}+T_{cpr,fp}^{vis})\approx 3.4$. For the flux at SPs, the ratio between the continuous gas-kinetic solver and the analytic N-S flux function becomes $T_{gks,sp}^{inv+vis}/(T_{cpr,sp}^{inv}+T_{cpr,sp}^{vis})\approx 2.1$. The ratios among the different parts for GKS are nearly $T_{gks,sp}^{inv+vis}:T_{gks,fp}^{inv+vis}:T_{gks}^{res} \approx 1:2.5:1.1$. Therefore, it can be estimated that the ratio of the overall computational cost between CPR and the current scheme is about $T_{cpr}/T_{gks}\approx 1.3$ when simulating viscous flows. Detailed data can be found in the following numerical test.
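
The estimate can be reproduced with a short back-of-the-envelope calculation based only on the ratios quoted above (the numbers below are per stage and normalized by the GKS flux cost at the SPs):
\begin{verbatim}
# per-stage costs, normalized by the GKS flux cost at the SPs
gks_sp, gks_fp, gks_res = 1.0, 2.5, 1.1      # ratios 1 : 2.5 : 1.1

cpr_sp  = gks_sp / 2.1                       # GKS/CPR ratio ~2.1 at SPs
cpr_fp  = gks_fp / 3.4                       # GKS/(Roe+LDG) ratio ~3.4 at FPs
cpr_res = gks_res * 1.1                      # CPR residual part ~10% larger

T_gks = 2 * (gks_sp + gks_fp + gks_res)      # two stages per time step
T_cpr = 5 * (cpr_sp + cpr_fp + cpr_res)      # five R-K stages per time step
print(T_cpr / T_gks)                         # ~1.3
\end{verbatim}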
Although the codes are not optimized, one can see that the efficiency improvement of the current method mainly comes from the adoption of the two-stage time stepping method, using only two stages rather than five stages in R-K. The straightforward computation of the time derivative of the flux also contributes to the efficiency gain. Besides, it can also be observed that the overall cost of GKS is less than that of CPR, even if a four-stage R-K method is adopted. Furthermore, for supersonic flows, it is necessary to adopt a limiter for shock capturing, which usually takes a large amount of additional CPU time. As can be expected, the current method should then be even more efficient. {\color{black}For the current hybrid method, a little additional CPU time is required for the data transfer between CPR and SCFV for troubled cells, but the adopted TVD limiter on subcells is low cost.} In addition, for three-dimensional flow simulations the ratio of $T^{res}$ also increases remarkably, thus using the two-stage time-stepping method can achieve higher efficiency.
\section{Numerical tests}\label{section3}
Several benchmark flows are simulated to validate the performance of the current scheme. The ratio of specific heats is $\gamma=1.4$ in all of these tests. The collision time for viscous flows is computed by
\begin{equation}\label{eq_tau}
\tau =\frac{\mu}{p}, \quad
\tau_n =\tau+\epsilon_2\left|\frac{p^L-p^R}{p^L+p^R}\right|^{\epsilon_3}\Delta t.
\end{equation}
The variable $\tau_n$ is used to replace the physical collision time $\tau$ in the exponential function in Eq.(\ref{eq_BGK_exact}), in order to better control the numerical dissipation through the transition from $f_0$ to $g$; the additional dissipation near discontinuities is provided by the second term in $\tau_n$. Here $p^L$ and $p^R$ are the pressures at the left and right sides of a cell interface. The coefficient $\epsilon_2=10$ is chosen. Another coefficient $\epsilon_3=\min(1,5\mathrm{Ma})$ is included to improve the accuracy when solving flows with very low Mach number. For inviscid flows, the physical viscosity $\mu/p$ in $\tau$ is replaced with $\epsilon_1\Delta t$, where $\epsilon_1=0.005$ is adopted in the current study.
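
For reference, Eq.(\ref{eq_tau}) translates directly into code; the small function below is a plain transcription with the parameter values stated above, and the variable names (including the Mach number argument) are illustrative.
\begin{verbatim}
def collision_time(mu, p, pL, pR, dt, Ma, inviscid=False,
                   eps1=0.005, eps2=10.0):
    eps3 = min(1.0, 5.0 * Ma)
    tau = eps1 * dt if inviscid else mu / p   # mu/p replaced for inviscid flow
    tau_n = tau + eps2 * abs((pL - pR) / (pL + pR)) ** eps3 * dt
    return tau, tau_n

print(collision_time(mu=5e-3, p=1.0, pL=1.05, pR=0.95, dt=1e-3, Ma=0.5))
\end{verbatim}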
To compute the time step $\Delta t$ based on the main cell size $h$, the CFL number is fixed to 0.1 in all tests. Boundary conditions are implemented with the help of ghost cells. When evaluating the computational cost, the simulations are carried out on an Intel Core i7-3770 CPU @ 3.40 GHz. For convenience, the current scheme is denoted as CPR-GKS in the following.
For a high-order accurate scheme on triangular meshes, two questions are often encountered when presenting the numerical results. The first one is related to the IDOFs. For a numerical scheme with IDOFs, the variables at these IDOFs are meaningful, which is different from a FV method.
In the present study, the errors in the accuracy test are computed directly from the variables at SPs,
\begin{equation}\label{eq_L1L2error}
\begin{split}
L_2~\mathrm{error}=\sqrt{\frac{\sum_{i=1}^{N}\sum_{j=1}^{10}(q_{i,j}-q_{i,j}^e)^2}{10N}},
\end{split}
\end{equation}
in which $q_{i,j}$ and $q_{i,j}^e$ denote the numerical and analytical solution, respectively. $N$ is the number of cells. Similarly, the residuals for steady flow problems are calculated, in which the steady state is achieved when the residual of velocity is less than $10^{-14}$ for the Couette flow, while $10^{-9}$ for the lid-driven cavity flow. Besides, all 2D contours are presented based on the averaged solution on subcells.
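
For completeness, the error norm of Eq.(\ref{eq_L1L2error}) computed directly from the SP values is a one-line operation; the snippet below is only a restatement of the formula.
\begin{verbatim}
import numpy as np

def l2_error_at_sps(q_num, q_exact):
    # q_num, q_exact: (N_cells, 10) values at the solution points for k = 3
    return np.sqrt(np.mean((q_num - q_exact) ** 2))

print(l2_error_at_sps(np.ones((100, 10)), np.ones((100, 10)) + 1e-6))  # 1e-06
\end{verbatim}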
The second question is how to obtain the variables at given locations, especially along a specific line in 1D flows. Usually interpolation cannot be avoided, which may lead to a result different from the original one. For simplicity, the computational triangular mesh cells are here obtained from the triangulation of rectangular cells, except for the double Mach reflection flow and the viscous shock tube flow. Thus the interpolation becomes simpler. For example, as shown in Fig.\ref{fig:Shu_Osher_Mesh}, where the thick black lines indicate main cells and the thin grey lines subcells, the variables at the equidistant green points can be computed directly from the solution polynomial Eq.(\ref{eq_Q(x,y)}) on both sides. Then a unique value can be determined through a simple arithmetic average of the two sides. In the Shu-Osher flow, the averaged variables of a main cell and the corresponding subcells are also discussed, where the cell centroids can be easily found along the red dashed line and the blue one, respectively.
Furthermore, since the triangular cells are generated from rectangular ones, the cell size can be well controlled, which is important for accuracy tests.
\begin{figure}[H]
\centering
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.23]{Figure/Shu_Osher_Mesh_Zoom.eps}\\
\end{varwidth}
\captionsetup{labelsep=period}
\caption{Enlarged view of the mesh with $h=1/15$ near the centerline $y=1$ in Shu-Osher problem.}
\label{fig:Shu_Osher_Mesh}
\end{figure}
\subsection{Compressible Couette flow}
To quantitatively validate the accuracy of the current scheme, the compressible Couette flow is simulated, which has an analytical solution \cite{KXu2001,CZhang2018}. It is a steady flow between two parallel plates, driven by the upper plate, which has a constant speed $U_1=0.5$ and temperature $T_1=1$. The lower plate is stationary and adiabatic. The computational domain is $[0,4H]\times[0,2H]$ with $H=1$. The computational mesh is obtained by the triangulation of a rectangular mesh.
The Mach number is set as $\mathrm{Ma}=U_1/\sqrt{\gamma R T_1}=0.5$ and the Reynolds number is $\mathrm{Re}=\rho_1 U_1 H/\mu_1=500$ with $\rho_1=1$. The viscosity of the flow is determined by the linear law $\mu=\mu_1 T/T_1$. The Prandtl number is $\mathrm{Pr}=1$.
The flow variables at ghost cells are fixed to the analytical solutions for all boundaries.
For comparison, the results obtained by the traditional CPR method (denoted as CPR-LDG) are also presented, in which the inviscid flux is computed by the Roe scheme, the viscous flux is computed by the LDG scheme, and the five-stage fourth-order R-K method is used for time stepping. The errors and convergence orders of density are presented in Table~\ref{AccuracyCouetteRho}. Both schemes achieve the designed order of accuracy. There is only a slight difference between them in terms of both errors and accuracy orders. An additional flow with a very low Mach number $\mathrm{Ma}=0.02$ is also simulated to verify the performance of the current scheme in nearly incompressible flows. The results are shown in Table~\ref{AccuracyCouetteRho_II}, in which the finest grid is not considered since the error is so low that it is affected by round-off error. The designed order of accuracy can be achieved as well, and the errors significantly decrease with the Mach number. {\color{black}It should be noted that the fourth-order temporal accuracy of the current scheme is also validated through the inviscid isentropic vortex propagation flow, which is not presented here for brevity.}
\begin{table}[H]
\centering
\captionsetup{labelsep=period}
\caption{Accuracy test in compressible Couette flow with $\mathrm{Ma}=0.5$}
\label{AccuracyCouetteRho}
\renewcommand\arraystretch{1.3}
\begin{tabular}{lllll}
\hline
$h$ & \multicolumn{2}{l}{CPR-GKS} & \multicolumn{2}{l}{CPR-LDG} \\ \cline{2-5}
& $L_2$ error & Order & $L_2$ error & Order \\ \hline
1.0 & 3.24E-06 & & 2.55E-06 & \\
0.5 & 1.66E-07 & 4.29 & 1.53E-07 & 4.06 \\
0.25 & 9.90E-09 & 4.07 & 9.52E-09 & 4.01 \\
0.125 & 6.01E-10 & 4.04 & 5.98E-10 & 3.99 \\ \hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\captionsetup{labelsep=period}
\caption{Accuracy test in compressible Couette flow with $\mathrm{Ma}=0.02$}
\label{AccuracyCouetteRho_II}
\renewcommand\arraystretch{1.3}
\begin{tabular}{lllll}
\hline
$h$ & \multicolumn{2}{l}{CPR-GKS} & \multicolumn{2}{l}{CPR-LDG} \\ \cline{2-5}
& $L_2$ error & Order & $L_2$ error & Order \\ \hline
1.0 & 5.76E-12 & & 5.43E-12 & \\
0.5 & 3.93E-13 & 3.87 & 3.23E-13 & 4.07 \\
0.25 & 2.55E-14 & 3.95 & 2.08E-14 & 3.95 \\ \hline
\end{tabular}
\end{table}
For a comparison of the efficiency, Table~\ref{CPU_Couette} shows the computational cost for 1000 time steps with $h=0.125$. We note that, for CPR-LDG, the cost of the flux is the sum of the inviscid and viscous fluxes, which are presented separately. For example, the flux computation at SPs takes 3.29 s for the inviscid flux and 7.68 s for the viscous flux. Thanks to the simplification of the gas distribution function at SPs, CPR-GKS costs less than CPR-LDG for the flux at SPs. For the flux at FPs, CPR-GKS is less efficient than CPR-LDG since the gas distribution function is more complicated. Consistent with the foregoing discussion, the overall costs for flux computation are close to each other. However, CPR-LDG costs much more than CPR-GKS in the other parts, which roughly include the reconstruction and the computation of the flux divergence and flux correction terms. This is mainly due to the use of different time-stepping methods. With the efficient two-stage time-stepping method, CPR-GKS saves the cost of these parts in the extra stages needed by CPR-LDG. As a result, CPR-GKS is more efficient than CPR-LDG. Furthermore, when solving high-speed flows with shock waves, the limiter also takes a large part of the computational cost. It can therefore be expected that the current scheme will be more efficient than traditional CPR, thanks to the two-stage flux evolution. The results of this case demonstrate the high accuracy and efficiency of the current scheme in compressible viscous flows.
\begin{table}[H]
\centering
\captionsetup{labelsep=period}
\caption{Computational cost for 1000 time steps with $h=0.125$.}
\label{CPU_Couette}
\renewcommand\arraystretch{1.3}
\begin{tabular}{lllll}
\hline
Scheme & Flux at SPs & Flux at FPs & Other parts & Total cost \\ \hline
CPR-GKS & 9.41 s & 23.46 s & 10.05 s & 42.92 s \\
CPR-LDG & 3.29 s + 7.68 s & 5.81 s + 11.46 s & 27.66 s & 55.90 s \\
Ratio & 1.17 & 0.74 & 2.75 & 1.30 \\ \hline
\end{tabular}
\end{table}
\subsection{Shu-Osher problem}
The Shu-Osher problem \cite{QWShu1989} involves the interaction between a shock wave and an entropy wave, which is simulated to validate the high resolution of the current scheme. The computational domain is $[0,10]\times[0,2]$ and two mesh sizes are considered with $h=1/15$ and $h=1/30$, respectively.
The initial condition is
\begin{equation}\label{eq_Shu_Osher_initial}
(\rho,U,V,p)=\begin{cases}
(3.857134,2.629369,0,10.33333),& 0\leq x \leq 1, \\
(1+0.2\sin(5x),0,0,1),& 1< x \leq 10.
\end{cases}
\end{equation}
Fig.\ref{fig:Shu_Osher_rho_1D} presents the density distribution at $t=1.8$ along the horizontal centerline $y=1$. The results computed by the 1D HGKS \cite{QBLi2010} with $h=1/1000$ are chosen as the reference. It can be observed that the current scheme captures the density profile smoothly without numerical oscillation. With the mesh size $h=1/30$, the result obtained by CPR-GKS matches well with the reference data.
Since the variables at the IDOFs are meaningful for the current scheme (unlike in a FV method), the cell averaged densities for $h=1/15$ are also presented for comparison (see Fig.\ref{fig:Shu_Osher_1D_Uave}). It can be observed that the main-cell averaged solutions are as accurate as their subcell counterparts. However, the subcell averaged solutions provide many more details of the flow distribution, with higher resolution. Fig.\ref{fig:Shu_Osher_contours_2D} shows the 2D density contours. In particular, an enlarged view near the shock wave is also presented in Fig.\ref{fig:Shu_Osher_thickness}. The thickness of the predicted shock wave is less than the size of the main cells, demonstrating the subcell resolution of the current scheme.
\begin{figure}[H]
\centering
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.33]{Figure/Shu_Osher_1D_rho.eps}\\
\end{varwidth}
\qquad
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.33]{Figure/Shu_Osher_1D_rho_local.eps}\\
\end{varwidth}
\captionsetup{labelsep=period}
\caption{Density distribution along the horizontal centerline at t=1.8 in Shu-Osher problem.}
\label{fig:Shu_Osher_rho_1D}
\end{figure}
\begin{figure}[H]
\centering
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.35]{Figure/Shu_Osher_1D_rho_local_average.eps}\\
\end{varwidth}
\captionsetup{labelsep=period}
\caption{Cell-averaged density distribution of both subcells and main cells with $h=1/15$ at t=1.8 in Shu-Osher problem.}
\label{fig:Shu_Osher_1D_Uave}
\end{figure}
\begin{figure}[H]
\centering
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.45]{Figure/Shu_Osher_2D_Contours_150.eps}\\
\end{varwidth}
\captionsetup{labelsep=period}
\caption{Density contours at t=1.8 with $h=1/15$ in Shu-Osher problem. 20 contours are drawn from 0.8 to 4.6.}
\label{fig:Shu_Osher_contours_2D}
\end{figure}
\begin{figure}[H]
\centering
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.35]{Figure/Shu_Osher_shock_thickness.eps}\\
\end{varwidth}
\captionsetup{labelsep=period}
\caption{Density contours near the shock wave at $t=1.8$ with $h=1/15$ in Shu-Osher problem.}
\label{fig:Shu_Osher_thickness}
\end{figure}
Besides, the troubled cells at $t=1.8$ are shown in Fig.\ref{fig:Shu_Osher_TBC}. It can be observed that the shock detector successfully marks the shock wave region, which is then handled by the SCFV method. Nevertheless, it is noted that the current shock detecting strategy may not be the best. Furthermore, on these subcells the second-order TVD limiter is applied, which usually introduces large numerical dissipation and may impair the resolution of the wave structure when the shock wave interacts with the entropy wave. A high-order limiter may be adopted to improve the accuracy in the near-shock region and reduce the dependence on the shock detector.
\begin{figure}[H]
\centering
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.5]{Figure/Shu_Osher_troubled_cells_150.eps}\\
\end{varwidth}
\captionsetup{labelsep=period}
\caption{Distribution of troubled cells at $t=1.8$ with $h=1/15$ in Shu-Osher problem.}
\label{fig:Shu_Osher_TBC}
\end{figure}
\subsection{Blast wave problem}
The blast wave problem \cite{PWoodward1984} is a typical case with extremely strong shock waves, which can be used to validate the robustness and resolution of strong discontinuity. The computational domain is $[0,10]\times[0,2]$. The computational mesh is the same as that used in the Shu-Osher problem. The initial condition is
\begin{equation}\label{eq_Blast_initial}
(\rho,U,V,p)=\begin{cases}
(1,0,0,1000),& 0\leq x \leq 1, \\
(1,0,0,0.01),& 1< x \leq 9, \\
(1,0,0,100),& 9< x \leq 10.
\end{cases}
\end{equation}
The results computed by the 1D HGKS \cite{QBLi2010} with the mesh size $h=1/1000$ are chosen as the reference data. The density distribution at $t=0.38$ along the horizontal centerline $y=1$ is shown in Fig.\ref{fig:blast_rho_1D}. The current scheme successfully captures the strong shock wave interaction without oscillation. With the mesh size $h=1/30$, the density distribution matches very well with the reference data. Due to the extremely strong shock waves, it is very difficult to suppress the oscillation by applying limiters to the high-order solution polynomial directly. Although the results can be made non-oscillatory in this way, too much numerical dissipation is introduced, leading to a much more dissipative flow distribution even with a fine mesh. In contrast, with the SCFV limiting procedure, the oscillation can be suppressed successfully. Moreover, since discontinuities can exist between subcells, shock waves are captured much more sharply. Compared to a traditional CPR using the MLP limiter with the same mesh size $h=1/30$ \cite{JSPark2016}, the density profile obtained by the current scheme is much better. These results demonstrate the robustness and high resolution of the current scheme for strong shock waves.
\begin{figure}[H]
\centering
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.33]{Figure/Blast_rho_1D.eps}\\
\end{varwidth}
\qquad
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.33]{Figure/Blast_rho_1D_local.eps}\\
\end{varwidth}
\captionsetup{labelsep=period}
\caption{Density distribution along the horizontal centerline at t=0.38 in blast wave problem.}
\label{fig:blast_rho_1D}
\end{figure}
\subsection{Double Mach reflection}
The double Mach reflection \cite{PWoodward1984} is a 2D benchmark flow to validate the robustness and resolution of a numerical scheme in hypersonic flows. Initially, a right-moving shock wave with the Mach number $\mathrm{Ma}=10$ is located at $x=0$ with the condition
\begin{equation}\label{eq_DMR_initial}
(\rho,U,V,p)=\begin{cases}
(8,8.25,0,116.5),& x \leq 0, \\
(1.4,0,0,1),& x> 0,
\end{cases}
\end{equation}
which impinges on a $30^{\circ}$ wedge and leads to the double Mach reflection. The exact post-shock condition is applied to the left boundary and the bottom boundary from $x=-0.2$ to $x=0$. The reflecting boundary condition is applied to the wedge and upper boundary. Fig.\ref{fig:DMR_rho_contours} presents the density contours with the mesh size $h=1/80$ and $h=1/160$. The shock wave is captured sharply and the instability of the slip line is well resolved by the current scheme.
To further verify the subcell resolution of the current scheme, the density contours (blue) of the Mach stem near the wall are shown in Fig.\ref{fig:DMR_shock_thickness} where the black solid lines indicate main cells and the gray dashed lines indicate subcells. It can be clearly seen that the thickness of the Mach stem is smaller than the size of main cells for both $h=1/80$ and $h=1/160$. The shock wave spans across less than three subcells. Since discontinuity is allowed to exist on the interface between subcells, the resolution of shock waves can be much higher.
Furthermore, the contours are smooth in the flow field. In contrast, for many other fourth-order methods, such as in Ref.\cite{GFu2017}, a much finer mesh is usually required in order to achieve the same level of resolution, and it is very difficult to suppress the numerical noise occurring in the flow field, especially under the oblique shock wave.
These results demonstrate the capability of the current scheme to resolve hypersonic flows with strong robustness and high accuracy.
\begin{figure}[H]
\centering
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.28]{Figure/DMR_rho_contours_80.eps}\\
\end{varwidth}
\qquad
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.28]{Figure/DMR_rho_contours_80_local.eps}\\
\end{varwidth}
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.28]{Figure/DMR_rho_contours_160.eps}\\
\end{varwidth}
\qquad
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.28]{Figure/DMR_rho_contours_160_local.eps}\\
\end{varwidth}
\captionsetup{labelsep=period}
\caption{Density distributions at $t=0.2$ and the enlarged view near the Mach stem in double Mach reflection with mesh size $h=1/80$ (top) and $h=1/160$ (bottom). 30 contours are drawn from 2.0 to 22.5.}
\label{fig:DMR_rho_contours}
\end{figure}
\begin{figure}[H]
\centering
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.32]{Figure/DMR_shock_thickness_80.eps}\\
\end{varwidth}
\qquad
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.32]{Figure/DMR_shock_thickness_160.eps}\\
\end{varwidth}
\captionsetup{labelsep=period}
\caption{Density contours of the Mach stem near the wall with mesh size $h=1/80$ (left) and $h=1/160$ (right).}
\label{fig:DMR_shock_thickness}
\end{figure}
\subsection{Lid-driven cavity flow}
The lid-driven cavity flow is a typical 2D incompressible viscous flow. It is bounded by a unit square $[0,1]\times[0,1]$. The upper wall is moving with the speed $U=1$, corresponding to a Mach number $\mathrm{Ma}=0.15$. Other walls are fixed. The non-slip and isothermal boundary conditions are applied to all boundaries with the temperature $T=1$. The initial flow is stationary with the density $\rho=1$ and temperature $T=1$. Two Reynolds numbers are considered, i.e., $\mathrm{Re}=1000$ and $\mathrm{Re}=3200$. As shown in Fig.\ref{fig:Cavity_Mesh}, the computational mesh contains $12\times12\times2$ elements with the minimum mesh size $h_{\min}=0.02$ and a stretching rate of about $1.77$ near the boundaries.
The streamlines for $\mathrm{Re}=1000$ are also shown in Fig.\ref{fig:Cavity_Mesh}. The main flow structures, including the primary and secondary vortices, are well captured. Fig.\ref{fig:Cavity_UV_Re1000} and Fig.\ref{fig:Cavity_UV_Re3200} present the U-velocities along the vertical centerline and the V-velocities along the horizontal centerline. It can be observed that the velocity profiles agree very well with existing benchmark data for both $\mathrm{Re}=1000$ and $\mathrm{Re}=3200$. On such a coarse mesh with a large stretching rate, it is challenging to resolve the velocity profiles accurately, especially the V-velocity profile for $\mathrm{Re}=3200$. The results demonstrate the high accuracy of the current scheme.
\begin{figure}[H]
\centering
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.3]{Figure/Cavity_mesh.eps}\\
\end{varwidth}
\qquad
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.3]{Figure/Cavity_Streamline_Re_1000.eps}\\
\end{varwidth}
\captionsetup{labelsep=period}
\caption{Computational mesh (left) and streamlines for Re=1000 (right) in lid-driven cavity flow.}
\label{fig:Cavity_Mesh}
\end{figure}
\begin{figure}[H]
\centering
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.33]{Figure/Cavity_y_u_Re_1000.eps}\\
\end{varwidth}
\qquad
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.33]{Figure/Cavity_x_v_Re_1000.eps}\\
\end{varwidth}
\qquad
\captionsetup{labelsep=period}
\caption{U-velocities along the vertical centerline and V-velocities along the horizontal centerline in lid-driven cavity flow with $\mathrm{Re}=1000$.}
\label{fig:Cavity_UV_Re1000}
\end{figure}
\begin{figure}[H]
\centering
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.33]{Figure/Cavity_y_u_Re_3200.eps}\\
\end{varwidth}
\qquad
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.33]{Figure/Cavity_x_v_Re_3200.eps}\\
\end{varwidth}
\captionsetup{labelsep=period}
\caption{U-velocities along the vertical centerline and V-velocities along the horizontal centerline in lid-driven cavity flow with $\mathrm{Re}=3200$.}
\label{fig:Cavity_UV_Re3200}
\end{figure}
\subsection{Viscous shock tube problem}
To validate the performance of the current scheme in high-speed viscous flows, the viscous shock tube problem is simulated, which has been studied extensively \cite{Daru2004}. The flow is driven by a strong initial discontinuity at the center and bounded by a unit square. There exist complex unsteady interactions between the shock wave and the boundary layer, which require not only strong robustness but also high resolution of the numerical scheme. The Reynolds number $\mathrm{Re}=200$ is chosen, which is based on a constant dynamic viscosity $\mu=0.005$. The Prandtl number is $\mathrm{Pr}=0.73$. Considering the symmetry, the computational domain is set as $[0,1]\times[0,0.5]$ and the symmetry condition is applied on the upper boundary. The non-slip and adiabatic conditions are adopted on the other boundaries. The initial condition is
\begin{equation}\label{eq_VST_initial}
(\rho,U,V,p)=\begin{cases}
(120,0,0,120/\gamma),& 0\leq x \leq 0.5, \\
(1.2,0,0,1.2/\gamma),& 0.5\leq x \leq 1.
\end{cases}
\end{equation}
The result provided by a high-order GKS (HGKS) in Ref.\cite{GZZhou2018} with the mesh size $h=1/1500$ is chosen as the reference data. Fig.\ref{fig:VST_rho_2D} shows the density contours at $t=1$ with the mesh size $h=1/150$ and $h=1/300$. It can be observed that the complex flow structures are well resolved by the current scheme, including the lambda shock and the vortex structures. Table~\ref{VST_Height} presents the height of the primary vortex, which is in good agreement with the reference data.
The density distribution along the bottom wall is presented in Fig.\ref{fig:VST_rho_wall}, which matches very well with the reference data, especially for the result with the mesh size $h=1/300$. Even with such a coarse mesh, the flow field can be well resolved by the current scheme, demonstrating the good performance of the current scheme in high-speed viscous flows.
\begin{figure}[H]
\centering
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.3]{Figure/VST_2D_rho_contours_150.eps}\\
\end{varwidth}
\qquad
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.3]{Figure/VST_2D_rho_contours_300.eps}\\
\end{varwidth}
\qquad
\captionsetup{labelsep=period}
\caption{Density contours at t=1 in viscous shock tube flow with the mesh size $h=1/150$ (left) and $h=1/300$ (right). 20 uniform contours from 25 to 120.}
\label{fig:VST_rho_2D}
\end{figure}
\begin{table}[H]
\centering
\captionsetup{labelsep=period}
\caption{Comparison of the height of the primary vortex in viscous shock tube flow}
\label{VST_Height}
\renewcommand\arraystretch{1.3}
\begin{tabular}{lllll}
\hline
Scheme & & $h$ & & Height \\ \hline
\multirow{2}{*}{CPR-GKS} & & 1/150 & & 0.168 \\
& & 1/300 & & 0.166 \\
HGKS & & 1/1500 & & 0.166 \\ \hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\begin{varwidth}[t]{\textwidth}
\vspace{0pt}
\includegraphics[scale=0.35]{Figure/VST_rho_wall.eps}\\
\end{varwidth}
\captionsetup{labelsep=period}
\caption{Density distribution along the bottom wall in the viscous shock tube flow.}
\label{fig:VST_rho_wall}
\end{figure}
\section{Conclusions}
A two-stage fourth-order gas-kinetic CPR method is developed for the compressible N-S equations on triangular meshes. It combines the efficient CPR framework with the efficient two-stage fourth-order temporal discretization, based on a robust time-evolving gas-kinetic flux. Besides, a hybrid CPR/SCFV method is developed by extending a robust and accurate subcell finite volume limiting procedure to the CPR framework to improve the resolution of discontinuities. Under the CPR framework, a fourth-order compact reconstruction is straightforward, simple and efficient, which avoids the difficulty of compactness encountered by traditional high-order finite volume GKS. Different from traditional CPR methods, the current method fully integrates the unique features of the gas-kinetic flux solver. The inviscid and viscous fluxes are coupled and computed simultaneously. It is genuinely multi-dimensional by involving both normal and tangential variations in the gas distribution function. More importantly, with the time-evolving flux function, both the flux and its first-order time derivative are available. The two-stage temporal discretization can therefore be extended to the CPR framework in a straightforward manner, which is more efficient than the multi-stage R-K method, by saving the computational cost of additional stages. In addition, with the help of the SCFV limiting procedure, the subcell resolution of flow discontinuities is achieved, and thus shock waves can be captured sharply and robustly, even extremely strong ones.
Several inviscid and viscous benchmark flows are simulated, ranging from nearly incompressible flows to supersonic flows with strong shock waves. It is verified that the current scheme achieves fourth-order accuracy in both space and time for the N-S equations, and its efficiency is higher than that of traditional CPR methods. For low-speed viscous flows, the current scheme shows good agreement with benchmark data. For high-speed flows, the smooth flow structures are accurately resolved, while the strong shock waves are well captured with subcell resolution. The present study demonstrates the high accuracy, efficiency and robustness of the current scheme, which makes it very competitive and promising in practical applications.
Nevertheless, it should be noted that the performance of the current hybrid CPR/SCFV method partly depends on the effectiveness of the shock detector. As only the second-order TVD limiter is considered in the SCFV method, the accuracy may be impaired if cells in smooth regions are mistakenly marked as troubled cells. Thus, more investigations are required. To reduce the dependence on the shock detector, a high-order limiter may also be implemented for the SCFV method. Besides,
it is still necessary to make more comparisons with other typical high-order methods. The extension to three dimensions will be carried out in the future.
\section{Acknowledgments}
This work is supported by the National Natural Science Foundation of China (11672158, 91852109).
We also would like to acknowledge the technical support of PARATERA and the ``Explorer 100'' cluster system of Tsinghua National Laboratory for Information Science and Technology.
\section{References}
\section{Introduction}
\label{sec:intro}
Many problems in science and engineering involve solving complex partial differential equation (PDE) systems repeatedly for different values of some parameters. Examples arise in molecular dynamics, micro-mechanics, and turbulent flows. Often such systems require fine discretization in order to capture the phenomenon being modeled.
As a consequence, traditional numerical solvers are slow and sometimes inefficient. For example, when designing materials such as airfoils, one needs to solve the associated inverse problem where thousands of evaluations of the forward model are needed. A fast method can make such problems feasible.
\paragraph{Conventional solvers vs. Data-driven methods.}
Traditional solvers such as finite element methods (FEM) and finite difference methods (FDM) solve the equation by discretizing the space. Therefore, they impose a trade-off on the resolution: coarse grids are fast but less accurate; fine grids are accurate but slow. Complex PDE systems, as described above, usually require a very fine discretization and are therefore very challenging and time-consuming for traditional solvers.
On the other hand, data-driven methods can directly learn the trajectory of the family of equations from the data. As a result, the learning-based method can be orders of magnitude faster than the conventional solvers.
Machine learning methods may hold the key to revolutionizing scientific disciplines by providing fast solvers that approximate or enhance traditional ones \citep{raissi2019physics, jiang2020meshfreeflownet, greenfeld2019learning, kochkov2021machine}. However, classical neural networks map between finite-dimensional spaces and can therefore only learn solutions tied to a specific discretization. This is often a limitation for practical applications and therefore the development of mesh-invariant neural networks is required.
We first outline two mainstream neural network-based approaches for PDEs -- the finite-dimensional operators and Neural-FEM.
\paragraph{Finite-dimensional operators.}
These approaches parameterize the solution operator as a deep convolutional neural network between finite-dimensional Euclidean spaces \cite{guo2016convolutional, Zabaras, Adler2017, bhatnagar2019prediction,khoo2017solving}.
Such approaches are, by definition, mesh-dependent and will need modifications and tuning for different resolutions and discretizations in order to achieve consistent error (if at all possible). Furthermore, these approaches are limited to the discretization size and geometry of the training data and hence, it is not possible to query solutions at new points in the domain. In contrast, we show, for our method, both invariance of the error to grid resolution, and the ability to transfer the solution between meshes.
\paragraph{Neural-FEM.}
The second approach directly parameterizes the solution function as a neural network
\citep{Weinan, raissi2019physics,bar2019unsupervised,smith2020eikonet, pan2020physics}. This approach is designed to model one specific instance of the PDE, not the solution operator. It is mesh-independent and accurate, but for any given new instance of the functional parameter/coefficient, it requires training a new neural network. The approach closely resembles classical methods such as finite elements, replacing the linear span of a finite set of local basis functions with the space of neural networks.
The Neural-FEM approach suffers from the same computational issue as classical methods: the optimization problem needs to be solved for every new instance.
Furthermore, the approach is limited to a setting in which the underlying PDE is known.
\begin{figure}
\centering
\includegraphics[width=12cm]{figs/FourierNN_NV2.png}\\
\small{
Zero-shot super-resolution: Navier-Stokes Equation with viscosity $\nu=1\mathrm{e}{-4}$; Ground truth on top and prediction on bottom; trained on $64\times64\times20$ dataset; evaluated on $256\times256\times80$ (see Section \ref{sec:superresolution}).}
\caption{ {\bf top:} The architecture of the Fourier layer; \textbf{bottom:} Example flow from Navier-Stokes.}
\label{fig:1}
\end{figure}
\paragraph{Neural Operators.}
Recently, a new line of work proposed learning mesh-free, infinite-dimensional operators with neural networks
\citep{lu2019deeponet,Kovachki,nelsen2020random,li2020neural,li2020multipole, patel2021physics}.
The neural operator remedies the mesh-dependent nature of the finite-dimensional operator methods discussed above by producing a single set of network parameters that may be used with different discretizations. It has the ability to transfer solutions between meshes.
Furthermore, the neural operator needs to be trained only once. Obtaining a solution for a new instance of the parameter requires only a forward pass of the network, alleviating the major computational issues incurred in Neural-FEM methods.
Lastly, the neural operator requires no knowledge of the underlying PDE, only data. Thus far, neural operators have not yielded efficient numerical algorithms that can parallel the success of convolutional or recurrent neural networks in the finite-dimensional setting due to the cost of evaluating integral operators. Through the fast Fourier transform, our work alleviates this issue.
\paragraph{Fourier Transform.}
The Fourier transform is frequently used in spectral methods for solving differential equations, since differentiation is equivalent to multiplication in the Fourier domain.
Fourier transforms have also played an important role in the development of deep learning. In theory, they appear in the proof of the universal approximation theorem \citep{hornik1989multilayer} and, empirically, they have been used to speed up convolutional neural networks \citep{mathieu2013fast}.
Neural network architectures involving the Fourier transform or the use of sinusoidal activation functions have also been proposed and studied \citep{bengio2007scaling,mingo2004Fourier, sitzmann2020implicit}.
Recently, some spectral methods for PDEs have been extended to neural networks \citep{fan2019bcr, fan2019multiscale, kashinath2020enforcing}. We build on these works by proposing a neural operator architecture defined directly in Fourier space with quasi-linear time complexity and state-of-the-art approximation capabilities.
\paragraph{Our Contributions.}
\label{ssec:OC}
We introduce the Fourier neural operator, a novel deep learning architecture able to learn mappings between infinite-dimensional spaces of functions; the integral operator is restricted to a convolution, and instantiated through a linear transformation in the Fourier domain.
\begin{itemize}[leftmargin=*]
\item The Fourier neural operator is the first work that learns the resolution-invariant solution operator for the family of Navier-Stokes equations in the turbulent regime, where previous graph-based neural operators do not converge.
\item By construction, the method shares the same learned network parameters irrespective of the discretization used on the input and output spaces. It can do zero-shot super-resolution: trained on a lower resolution and directly evaluated on a higher resolution, as shown in Figure \ref{fig:1}.
\item The proposed method consistently outperforms all existing deep learning methods even when fixing the resolution to be $64 \times 64$. It achieves error rates that are $30\%$ lower on Burgers' Equation, $60\%$ lower on Darcy Flow, and $30\%$ lower on Navier Stokes (turbulent regime with viscosity $\nu=1\mathrm{e}{-4}$).
When learning the mapping for the entire time series, the method achieves $<1\%$ error with viscosity $\nu=1\mathrm{e}{-3}$ and $8\%$ error with viscosity $\nu=1\mathrm{e}{-4}$.
\item On a $256 \times 256$ grid, the Fourier neural operator has an inference time of only $0.005$s compared to the $2.2s$ of the pseudo-spectral method used to solve Navier-Stokes.
Despite its tremendous speed advantage, the method does not suffer from accuracy degradation when used in downstream applications such as solving the Bayesian inverse problem, as shown in Figure \ref{fig:baysian}.
\end{itemize}
We observed that the proposed framework can approximate complex operators arising in PDEs that are highly non-linear, with high frequency modes and slow energy decay.
The power of neural operators comes from combining linear, global integral operators (via the Fourier transform) and non-linear, local activation functions. Similar to the way standard neural networks approximate highly non-linear functions by combining linear multiplications with non-linear activations, the proposed neural operators can approximate highly non-linear operators.
\section{Learning Operators}
\label{sec:operator}
Our methodology learns a mapping between two infinite dimensional spaces from a finite
collection of observed input-output pairs. Let $D \subset \mathbb{R}^d$ be a bounded, open set and \(\mathcal{A} = \mathcal{A}(D;\mathbb{R}^{d_a})\) and \(\mathcal{U}= \mathcal{U}(D;\mathbb{R}^{d_u})\) be separable Banach spaces of functions taking values in \(\mathbb{R}^{d_a}\) and \(\mathbb{R}^{d_u}\) respectively. Furthermore let \(G^\dagger : \mathcal{A} \to \mathcal{U}\) be a (typically) non-linear map. We study maps \(G^\dagger\) which arise as the solution operators of parametric PDEs -- see Section \ref{sec:numerics} for examples. Suppose we have observations \(\{a_j, u_j\}_{j=1}^N\) where
\(a_j \sim \mu\) is an i.i.d. sequence from the probability measure \(\mu\) supported on
\(\mathcal{A}\) and \(u_j = G^\dagger(a_j)\) is possibly corrupted with noise. We aim to build an approximation of \(G^\dagger\) by
constructing a parametric map
\begin{equation}
\label{eq:approxmap}
{G} : \mathcal{A} \times \Theta \to \mathcal{U}
\qquad
\text{or equivalently,}
\qquad
{G}_{\theta} : \mathcal{A} \to \mathcal{U}, \quad \theta \in \Theta
\end{equation}
for some finite-dimensional parameter space \(\Theta\) by choosing
\(\theta^\dagger \in \Theta\) so that \({G}(\cdot, \theta^\dagger) = {G}_{\theta^\dagger} \approx G^\dagger\).
This is a natural framework for learning in infinite-dimensions as one could define a cost functional \(C : \mathcal{U} \times \mathcal{U} \to \mathbb{R}\) and seek a minimizer of the problem
\[\min_{\theta \in \Theta} \mathbb{E}_{a \sim \mu} [C({G}(a,\theta), G^\dagger(a))]\]
which directly parallels the classical finite-dimensional
setting \citep{Vapnik1998}. Showing the existence of minimizers, in the infinite-dimensional setting, remains a challenging open problem. We will approach this problem in the test-train setting by using a data-driven
empirical approximation to the cost used to determine
$\theta$ and to test the accuracy of the approximation.
Because we conceptualize our methodology in the infinite-dimensional setting, all finite-dimensional approximations share a common set of parameters which are consistent in infinite dimensions.
A table of notation is shown in Appendix \ref{table:burgers}.
\paragraph{Learning the Operator.} Approximating the operator \(G^\dagger\) is a different and typically much more challenging task than finding the solution \(u \in \mathcal{U}\) of a PDE for a single instance of the parameter \(a \in \mathcal{A}\). Most existing methods, ranging from classical finite elements, finite differences, and finite volumes to modern machine learning approaches such as physics-informed neural networks (PINNs) \citep{raissi2019physics} aim at the latter and can therefore be computationally expensive. This makes them impractical for applications where a solution to the PDE is required for many different instances of the parameter. On the other hand, our approach directly approximates the operator and is therefore much cheaper and faster, offering tremendous computational savings when compared to traditional solvers. For an example application to Bayesian inverse problems, see Section \ref{sec:bayesian}.
\paragraph{Discretization.} Since our data \(a_j\) and \(u_j\) are, in general, functions, to work with them numerically, we assume access only to point-wise evaluations.
Let \(D_j = \{x_1,\dots,x_n\} \subset D\) be an \(n\)-point discretization of the domain \(D\) and assume we have observations \(a_j|_{D_j} \in \mathbb{R}^{n \times d_a}\), \(u_j|_{D_j} \in \mathbb{R}^{n\times d_u}\), for a finite collection of input-output pairs indexed by $j$.
To be discretization-invariant, the neural operator can produce an answer \(u(x)\) for any \(x \in D\), potentially $x \notin D_j$.
Such a property is highly desirable as it allows a transfer of solutions between different grid geometries and discretizations.
\begin{figure}
\centering
\includegraphics[width=14cm]{figs/fourier_full_arch5.png}\\
\small{
{\bf (a) The full architecture of neural operator}: start from input $a$. 1. Lift to a higher dimension channel space by a neural network $P$. 2. Apply four layers of integral operators and activation functions. 3. Project back to the target dimension by a neural network $Q$. Output $u$.
{\bf (b) Fourier layers}: Start from input $v$. On top: apply the Fourier transform $\mathcal{F}$; a linear transform $R$ on the lower Fourier modes and filters out the higher modes; then apply the inverse Fourier transform $\mathcal{F}^{-1}$. On the bottom: apply a local linear transform $W$.
}
\caption{ {\bf top:} The architecture of the neural operators; \textbf{bottom:} Fourier layer.}
\label{fig:arch}
\end{figure}
\section{Neural Operator}
The neural operator, proposed in \citep{li2020neural}, is formulated as an iterative architecture $v_0 \mapsto v_1 \mapsto \ldots \mapsto v_T$ where $v_j$ for $j=0,1,\dots,T-1$
is a sequence of functions each taking values in $\mathbb{R}^{d_v}$. As shown in Figure \ref{fig:arch} (a), the input \(a \in \mathcal{A}\) is first lifted to a higher dimensional representation $v_0(x) = P(a(x))$ by the local transformation \(P\) which is usually parameterized by a shallow fully-connected neural network. Then we apply several iterations of updates $v_t \mapsto v_{t+1}$ (defined below). The output $u(x) = Q(v_T(x))$ is the projection of $v_T$ by the local transformation $Q: \mathbb{R}^{d_v} \to \mathbb{R}^{d_u}$. In each iteration, the update $v_t \mapsto v_{t+1}$ is defined as the composition of a non-local integral operator $\mathcal{K}$ and a local, nonlinear activation function $\sigma$.
\begin{definition}[Iterative updates]
Define the update to the representation $v_t \mapsto v_{t+1}$ by
\begin{equation}\label{def:int}
v_{t+1}(x) := \sigma\Big( W v_t(x)
+ \bigl(\mathcal{K}(a;\phi)v_t\bigr)(x) \Big), \qquad \forall x \in D
\end{equation}
where $\mathcal{K}: \mathcal{A} \times \Theta_{\mathcal{K}} \to \mathcal{L}(\mathcal{U}(D; \mathbb{R}^{d_v}),\mathcal{U}(D; \mathbb{R}^{d_v}))$ maps to bounded linear operators on $\mathcal{U}(D; \mathbb{R}^{d_v})$ and is parameterized by $\phi \in \Theta_{\mathcal{K}}$, $W: \mathbb{R}^{d_v} \to \mathbb{R}^{d_v}$ is a linear transformation, and $\sigma : \mathbb{R} \to \mathbb{R}$ is a non-linear activation function whose action is defined component-wise.
\end{definition}
We choose $\mathcal{K}(a;\phi)$ to be a kernel integral transformation parameterized by a neural network.
\begin{definition}[Kernel integral operator $\mathcal{K}$]
Define the kernel integral operator mapping in (\ref{def:int}) by
\begin{equation}
\label{def:K_int}
\bigl(\mathcal{K}(a;\phi)v_t\bigr)(x) :=
\int_{D} \kappa\big(x,y,a(x),a(y);\phi\big) v_t(y) \mathrm{d}y, \qquad \forall x \in D
\end{equation}
where $\kappa_{\phi}: \mathbb{R}^{2(d+d_a)} \to \mathbb{R}^{d_v \times d_v}$ is a neural network parameterized by $\phi \in \Theta_{\mathcal{K}}$.
\end{definition}
Here $\kappa_\phi$ plays the role of a kernel function which we learn from data. Together definitions 1 and 2 constitute a generalization of neural networks to infinite-dimensional spaces as first proposed in \cite{li2020neural}. Notice that even though the integral operator is linear, the neural operator can learn highly non-linear operators by composing linear integral operators with non-linear activation functions, analogous to standard neural networks.
If we remove the dependence on the function \(a\) and impose $\kappa_{\phi}(x,y) = \kappa_{\phi}(x-y)$, we obtain that (\ref{def:K_int}) is a convolution operator, which is a natural choice from the perspective of fundamental solutions. We exploit this fact in the following section by parameterizing $\kappa_{\phi}$ directly in Fourier space and using the Fast Fourier Transform (FFT) to efficiently compute (\ref{def:K_int}). This leads to a fast architecture that obtains state-of-the-art results for PDE problems.
\section{Fourier Neural Operator}
\label{sec:fourier}
We propose replacing the kernel integral operator in (\ref{def:K_int}), by a convolution operator defined in Fourier space. Let \(\mathcal{F}\) denote the Fourier transform of a function $f: D \to \mathbb{R}^{d_v}$ and $\mathcal{F}^{-1}$ its inverse then
\begin{align*}
(\mathcal{F} f)_j(k) = \int_{D} f_j(x) e^{- 2i \pi \langle x, k \rangle} \mathrm{d}x, \qquad
(\mathcal{F}^{-1} f)_j(x) = \int_{D} f_j(k) e^{2i \pi \langle x, k \rangle} \mathrm{d}k
\end{align*}
for $j=1,\dots,d_v$ where \(i = \sqrt{-1}\) is the imaginary unit. By letting $\kappa_{\phi}(x,y,a(x),a(y)) = \kappa_{\phi}(x-y)$ in (\ref{def:K_int}) and applying the convolution theorem, we find that
\[\bigl(\mathcal{K}(a;\phi)v_t\bigr)(x) = \mathcal{F}^{-1} \bigl( \mathcal{F}(\kappa_\phi) \cdot \mathcal{F}(v_t) \bigr )(x), \qquad \forall x \in D. \]
We, therefore, propose to directly parameterize $\kappa_\phi$ in Fourier space.
\begin{definition}[Fourier integral operator $\mathcal{K}$] Define the Fourier integral operator
\begin{equation}
\label{eq:Fourier}
\bigl(\mathcal{K}(\phi)v_t\bigr)(x)=
\mathcal{F}^{-1}\Bigl(R_\phi \cdot (\mathcal{F} v_t) \Bigr)(x) \qquad \forall x \in D
\end{equation}
where $R_\phi$ is the Fourier transform of a periodic function $\kappa: \bar{D} \to \mathbb{R}^{d_v \times d_v}$ parameterized by \(\phi \in \Theta_\mathcal{K}\).
An illustration is given in Figure \ref{fig:arch} (b).
\end{definition}
For frequency mode \(k \in D\), we have $(\mathcal{F} v_t)(k) \in \mathbb{C}^{d_v}$ and $R_\phi(k) \in \mathbb{C}^{d_v \times d_v}$. Notice that since we assume $\kappa$ is periodic, it admits a Fourier series expansion, so we may work with the discrete modes $k \in \mathbb{Z}^d$. We pick a finite-dimensional parameterization by truncating the Fourier series at a maximal number of modes
\(k_{\text{max}} = |Z_{k_{\text{max}}}| = |\{k \in \mathbb{Z}^d : |k_j| \leq k_{\text{max},j}, \text{ for } j=1,\dots,d\}|.\)
We thus parameterize $R_\phi$ directly as a complex-valued $(k_{\text{max}} \times d_v \times d_v)$-tensor comprising a collection of truncated Fourier modes and therefore drop $\phi$ from our notation. Since $\kappa$ is real-valued, we impose conjugate symmetry.
We note that the set $Z_{k_{\text{max}}}$ is not the canonical choice for the low frequency modes of $v_t$. Indeed, the low frequency modes are usually defined by placing an upper-bound on the $\ell_1$-norm of $k \in \mathbb{Z}^d$. We choose $Z_{k_{\text{max}}}$ as above since it allows for an efficient implementation.
\paragraph{The discrete case and the FFT.}
Assuming the domain $D$ is discretized with $n \in \mathbb{N}$ points, we have that $v_t \in \mathbb{R}^{n \times d_v}$ and $\mathcal{F} (v_t) \in \mathbb{C}^{n \times d_v}$. Since we convolve $v_t$ with a function which only has $k_{\text{max}}$ Fourier modes, we may simply truncate the higher modes to obtain $\mathcal{F} (v_t) \in \mathbb{C}^{k_{\text{max}} \times d_v}$. Multiplication by the weight tensor $R \in \mathbb{C}^{k_{\text{max}} \times d_v \times d_v}$ is then
\begin{equation}
\label{eq:fft_mult}
\bigl( R \cdot (\mathcal{F} v_t) \bigr)_{k,l} = \sum_{j=1}^{d_v} R_{k,l,j} (\mathcal{F} v_t)_{k,j}, \qquad k=1,\dots,k_{\text{max}}, \quad l=1,\dots,d_v.
\end{equation}
When the discretization is uniform with resolution \(s_1 \times \cdots \times s_d = n\), $\mathcal{F}$ can be replaced by the Fast Fourier Transform. For $f \in \mathbb{R}^{n \times d_v}$, $k = (k_1, \ldots, k_{d}) \in \mathbb{Z}_{s_1} \times \cdots \times \mathbb{Z}_{s_d}$, and $x=(x_1, \ldots, x_{d}) \in D$, the FFT $\hat{\mathcal{F}}$ and its inverse $\hat{\mathcal{F}}^{-1}$ are defined as
\begin{align*}
(\hat{\mathcal{F}} f)_l(k) = \sum_{x_1=0}^{s_1-1} \cdots \sum_{x_{d}=0}^{s_d-1} f_l(x_1, \ldots, x_{d}) e^{- 2i \pi \sum_{j=1}^{d} \frac{x_j k_j}{s_j} }, \\
(\hat{\mathcal{F}}^{-1} f)_l(x) = \sum_{k_1=0}^{s_1-1} \cdots \sum_{k_{d}=0}^{s_d-1} f_l(k_1, \ldots, k_{d}) e^{2i \pi \sum_{j=1}^{d} \frac{x_j k_j}{s_j} }
\end{align*}
for $l=1,\dots,d_v$.
In this case, the set of truncated modes becomes
\[Z_{k_{\text{max}}} = \{(k_1, \ldots, k_{d}) \in \mathbb{Z}_{s_1} \times \cdots \times \mathbb{Z}_{s_d} \mid k_j \leq k_{\text{max},j} \text{ or }\ s_j-k_j \leq k_{\text{max},j}, \text{ for } j=1,\dots,d\}.\]
When implemented, $R$ is treated as a $(s_1 \times \cdots \times s_d \times d_v \times d_v)$-tensor and the above definition of $Z_{k_{\text{max}}}$ corresponds to the ``corners'' of $R$, which allows for a straightforward parallel implementation of (\ref{eq:fft_mult}) via matrix-vector multiplication.
In practice, we have found that choosing $k_{\text{max},j} = 12$, which yields $k_{\text{max}} = 12^d$ parameters per channel, is sufficient for all the tasks that we consider.
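For concreteness, the following is a minimal sketch of a single discrete Fourier layer in one spatial dimension, written in NumPy. It combines the truncated mode-wise multiplication of (\ref{eq:fft_mult}) with the local linear term $W$ and a ReLU activation as in (\ref{def:int}); the shapes, the random weights, and the use of the real FFT (which handles the conjugate symmetry) are illustrative assumptions rather than the reference implementation.
\begin{verbatim}
import numpy as np

def fourier_layer_1d(v, R, W):
    """One Fourier layer in 1-d (illustrative sketch).

    v : (n, d_v) real array, v_t sampled on a uniform grid
    R : (k_max, d_v, d_v) complex array, weights on the retained modes
    W : (d_v, d_v) real array, the local linear transform
    """
    n, d_v = v.shape
    k_max = R.shape[0]

    # FFT along the spatial axis; keep only the lowest k_max modes
    v_hat = np.fft.rfft(v, axis=0)                 # (n//2 + 1, d_v)
    out_hat = np.zeros_like(v_hat)
    # mode-wise matrix multiplication of the retained Fourier coefficients
    out_hat[:k_max] = np.einsum("kij,kj->ki", R, v_hat[:k_max])

    # inverse FFT back to physical space (truncated modes stay zero)
    spectral = np.fft.irfft(out_hat, n=n, axis=0)

    # add the local term W v_t(x) and apply the ReLU activation
    return np.maximum(spectral + v @ W.T, 0.0)

# toy usage with random (untrained) weights
n, d_v, k_max = 64, 8, 12
rng = np.random.default_rng(0)
v = rng.standard_normal((n, d_v))
R = (rng.standard_normal((k_max, d_v, d_v))
     + 1j * rng.standard_normal((k_max, d_v, d_v))) / d_v
W = rng.standard_normal((d_v, d_v)) / d_v
v_next = fourier_layer_1d(v, R, W)                 # shape (64, 8)
\end{verbatim}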
\paragraph{Parameterizations of $R$.}
In general, $R$ can be defined to depend on $(\mathcal{F} a)$ to parallel (\ref{def:K_int}).
Indeed, we can define $R_\phi: \mathbb{Z}^d \times \mathbb{R}^{d_v} \to \mathbb{R}^{d_v \times d_v}$
as a parametric function that maps \(\bigl(k,(\mathcal{F} a)(k))\) to the values of the appropriate Fourier modes. We have experimented with linear as well as neural network parameterizations of $R_\phi$.
We find that the linear parameterization has a similar performance to the previously described direct parameterization,
while neural networks have worse performance. This is likely due to the discrete structure of the space $\mathbb{Z}^d$.
Our experiments in this work focus on the direct parameterization presented above.
\paragraph{Invariance to discretization.}
The Fourier layers are discretization-invariant because they can learn from and evaluate functions which are discretized in an arbitrary way. Since parameters are learned directly in Fourier space, resolving the functions in physical space simply amounts to projecting on the basis $e^{2\pi i \langle x, k \rangle}$ which are well-defined everywhere on $\mathbb{R}^d$. This allows us to achieve zero-shot super-resolution as shown in Section \ref{sec:superresolution}.
Furthermore, our architecture has a consistent error at any resolution of the inputs and outputs. On the other hand, notice that, in Figure \ref{fig:error}, the standard CNN methods we compare against have an error that grows with the resolution.
\paragraph{Quasi-linear complexity.}
The weight tensor $R$ contains $k_{\text{max}} < n$ modes, so the inner multiplication has complexity $O(k_{\text{max}})$. Therefore, the majority of the computational cost lies in computing the Fourier transform $\mathcal{F}(v_t)$ and its inverse. General Fourier transforms have complexity $O(n^2)$; however, since we truncate the series, the complexity is in fact $O(n k_{\text{max}})$, while the FFT has complexity $O(n \log n)$. Generally, we have found using FFTs to be very efficient. However, a uniform discretization is required.
\section{Numerical experiments}
\label{sec:numerics}
In this section, we compare the proposed Fourier neural operator with multiple finite-dimensional architectures as well as operator-based approximation methods on the 1-d Burgers' equation, the 2-d Darcy Flow problem, and 2-d Navier-Stokes equation. The data generation processes are discussed in Appendices \ref{app:burgers}, \ref{app:darcy}, and \ref{app:ns} respectively. We do not compare against traditional solvers (FEM/FDM) or neural-FEM type methods since our goal is to produce an efficient operator approximation that can be used for downstream applications. We demonstrate one such application to the Bayesian inverse problem in Section \ref{sec:bayesian}.
We construct our Fourier neural operator by stacking four Fourier integral operator layers as specified in (\ref{def:int}) and (\ref{eq:Fourier}) with the ReLU activation as well as batch normalization. Unless otherwise specified, we use $N=1000$ training instances and $200$ testing instances. We use Adam optimizer to train for $500$ epochs with an initial learning rate of $0.001$ that is halved every $100$ epochs. We set $k_{\text{max},j} = 16, d_v=64$ for the 1-d problem and $k_{\text{max},j} = 12, d_v=32$ for the 2-d problems.
Lower resolution data are downsampled from higher resolution.
All the computation is carried out on a single Nvidia V100 GPU with 16GB memory.
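As a rough illustration of this training protocol (and not the exact code used in our experiments), a PyTorch-style training loop with the stated optimizer settings could look as follows; the model, the data loader, and the relative $L^2$ loss function are placeholders.
\begin{verbatim}
import torch

def relative_l2(pred, target):
    # relative L2 error averaged over the batch (assumed loss choice)
    diff = torch.norm(pred.flatten(1) - target.flatten(1), dim=1)
    return (diff / torch.norm(target.flatten(1), dim=1)).mean()

def train(model, train_loader, epochs=500, lr=1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    # halve the learning rate every 100 epochs
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                                step_size=100, gamma=0.5)
    for epoch in range(epochs):
        for a, u in train_loader:   # sampled input/output functions on a grid
            optimizer.zero_grad()
            loss = relative_l2(model(a), u)
            loss.backward()
            optimizer.step()
        scheduler.step()
    return model
\end{verbatim}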
\paragraph{Remark on Resolution.}
Traditional PDE solvers such as FEM and FDM approximate a single function and therefore their error to the continuum decreases as the resolution is increased. On the other hand, operator approximation is independent of the ways its data is discretized as long as all relevant information is resolved. Resolution-invariant operators have consistent error rates among different resolutions as shown in Figure \ref{fig:error}. Further, resolution-invariant operators can do zero-shot super-resolution, as shown in Section \ref{sec:superresolution}.
\paragraph{Benchmarks for time-independent problems (Burgers and Darcy):}
{\bf NN:} a simple point-wise feedforward neural network.
{\bf RBM:} the classical Reduced Basis Method (using a POD basis) \citep{DeVoreReducedBasis}.
{\bf FCN:} a state-of-the-art neural network architecture based on Fully Convolution Networks \citep{Zabaras}.
{\bf PCANN:} an operator method using PCA as an autoencoder on both the input and output data and interpolating the latent spaces with a neural network \citep{Kovachki}.
{\bf GNO:} the original graph neural operator \citep{li2020neural}.
{\bf MGNO:} the multipole graph neural operator \citep{li2020multipole}.
{\bf LNO:} a neural operator method based on the low-rank decomposition of the kernel $\kappa(x,y) := \sum^r_{j=1} \phi_j(x) \psi_j(y)$, similar to the unstacked DeepONet proposed in \citep{lu2019deeponet}.
{\bf FNO:} the newly proposed Fourier neural operator.
\paragraph{Benchmarks for time-dependent problems (Navier-Stokes):}
{\bf ResNet:} $18$ layers of 2-d convolution with residual connections \citep{he2016deep}.
{\bf U-Net:} A popular choice for image-to-image regression tasks consisting of four blocks with 2-d convolutions and deconvolutions \citep{ronneberger2015u}.
{\bf TF-Net:} A network designed for learning turbulent flows based on a combination of spatial and temporal convolutions \citep{wang2020towards}.
{\bf FNO-2d:} 2-d Fourier neural operator with an RNN structure in time.
{\bf FNO-3d:} 3-d Fourier neural operator that directly convolves in space-time.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figs/FourierNN_error.png}
{\small \textbf{Left:} benchmarks on Burgers equation; \textbf{Mid:} benchmarks on Darcy Flow for different resolutions; \textbf{Right:} the learning curves on Navier-Stokes $\nu=1\mathrm{e}{-3}$ with different benchmarks. Train and test on the same resolution.
For acronyms, see Section \ref{sec:numerics}; details in Tables \ref{table:ns}, \ref{table:burgers}, \ref{table:darcy}.}
\caption{Benchmarks on Burgers' equation, Darcy Flow, and Navier-Stokes}
\label{fig:error}
\end{figure}
\subsection{Burgers' Equation}
\label{ssec:burgers}
The 1-d Burgers' equation is a non-linear PDE with various applications including modeling the one dimensional flow of a viscous fluid. It takes the form
\begin{align}
\begin{split}
\partial_t u(x,t) + \partial_x ( u^2(x,t)/2) &= \nu \partial_{xx} u(x,t), \qquad x \in (0,1), t \in (0,1] \\
u(x,0) &= u_0(x), \qquad \qquad \:\: x \in (0,1)
\end{split}
\end{align}
with periodic boundary conditions where $u_0 \in L^2_{\text{per}}((0,1);\mathbb{R})$ is the initial condition and $\nu \in \mathbb{R}_+$ is the viscosity coefficient. We aim to learn the operator mapping the initial condition to the solution at time one, $G^\dagger: L^2_{\text{per}}((0,1);\mathbb{R}) \to H^r_{\text{per}} ((0,1);\mathbb{R})$ defined by $u_0 \mapsto u(\cdot, 1)$ for any $r > 0$.
The results of our experiments are shown in Figure \ref{fig:error} (a) and Table \ref{table:burgers} (Appendix \ref{app:burgers}). Our proposed method obtains the lowest relative error compared to any of the benchmarks. Further, the error is invariant with the resolution, while the error of convolution neural network based methods (FCN) grows with the resolution.
Compared to other neural operator methods such as GNO and MGNO that use Nystr\"om sampling in physical space, the Fourier neural operator is both more accurate and more computationally efficient.
\subsection{Darcy Flow}
\label{sec:darcy}
We consider the steady-state of the 2-d Darcy Flow equation on the unit box which is the second order, linear, elliptic PDE
\begin{align}\label{ssec:darcy}
\begin{split}
- \nabla \cdot (a(x) \nabla u(x)) &= f(x) \qquad x \in (0,1)^2 \\
u(x) &= 0 \qquad \quad \:\:x \in \partial (0,1)^2
\end{split}
\end{align}
with a Dirichlet boundary where $a \in L^\infty((0,1)^2;\mathbb{R}_+)$ is the diffusion coefficient and $f \in L^2((0,1)^2;\mathbb{R})$ is the forcing function. This PDE has numerous applications including modeling the pressure of subsurface flow, the deformation of linearly elastic materials, and the electric potential in conductive materials. We are interested in learning the operator mapping the diffusion coefficient to the solution,
$G^\dagger: L^\infty((0,1)^2;\mathbb{R}_+) \to H^1_0 ((0,1)^2;\mathbb{R}_+)$ defined by $a \mapsto u$. Note that although the PDE is linear, the operator $G^\dagger$ is not.
The results of our experiments are shown in Figure \ref{fig:error} (b) and Table \ref{table:darcy} (Appendix \ref{app:darcy}). The proposed Fourier neural operator obtains nearly one order of magnitude lower relative error compared to any benchmarks. We again observe the invariance of the error with respect to the resolution.
\subsection{ Navier-Stokes Equation}
\label{sec:ns}
We consider the 2-d Navier-Stokes equation for a viscous, incompressible fluid in vorticity form on the unit torus:
\begin{align}
\begin{split}
\partial_t w(x,t) + u(x,t) \cdot \nabla w(x,t) &= \nu \Delta w(x,t) + f(x), \qquad x \in (0,1)^2, t \in (0,T] \\
\nabla \cdot u(x,t) &= 0, \qquad \qquad \qquad \qquad \quad x \in (0,1)^2, t \in [0,T] \\
w(x,0) &= w_0(x), \qquad \qquad \qquad \quad x \in (0,1)^2
\end{split}
\end{align}
where $u \in C([0,T]; H^r_{\text{per}}((0,1)^2; \mathbb{R}^2))$ for any $r>0$ is the velocity field, $w = \nabla \times u$ is the vorticity, $w_0 \in L^2_{\text{per}}((0,1)^2;\mathbb{R})$ is the initial vorticity, $\nu \in \mathbb{R}_+$ is the viscosity coefficient, and $f \in L^2_{\text{per}}((0,1)^2;\mathbb{R})$ is the forcing function. We are interested in learning the operator mapping the vorticity up to time 10 to the vorticity up to some later time $T > 10$,
$G^\dagger: C([0,10]; H^r_{\text{per}}((0,1)^2; \mathbb{R})) \to C((10,T]; H^r_{\text{per}}((0,1)^2; \mathbb{R}))$
defined by $w|_{(0,1)^2 \times [0,10]} \mapsto w|_{(0,1)^2 \times (10,T]}$.
Given the vorticity it is easy to derive the velocity. While vorticity is harder to model compared to velocity, it provides more information. By formulating the problem on vorticity, the neural network models mimic the pseudo-spectral method.
We experiment with the viscosities $\nu = 1\mathrm{e}{-3}, 1\mathrm{e}{-4}, 1\mathrm{e}{-5}$, decreasing the final time $T$ as the dynamic becomes chaotic.
Since the baseline methods are not resolution-invariant, we fix the resolution to be $64 \times 64$ for both training and testing.
\begin{table}[h]
\caption{Benchmarks on Navier Stokes (fixing resolution $64 \times 64$ for both training and testing)}
\label{table:ns}
\begin{center}
\begin{tabular}{l|rc|cccc}
\multicolumn{1}{c}{}
&\multicolumn{1}{c}{}
&\multicolumn{1}{c}{}
&\multicolumn{1}{c}{}
&\multicolumn{1}{c}{}\\
& {\bf Parameters}& {\bf Time}& $\nu=1\mathrm{e}{-3}$ &$\nu=1\mathrm{e}{-4}$ &$\nu=1\mathrm{e}{-4}$ & $\nu=1\mathrm{e}{-5}$\\
{\bf Config}&& {\bf per} &$T=50$ &$T=30$ &$T=30$ & $T=20$\\
&& {\bf epoch} &$ N=1000$ &$ N=1000$ &$N=10000$ & $ N=1000$\\
\hline
FNO-3D & $6,558,537$ & $38.99s$ &${\bf 0.0086}$ &$0.1918$ &${\bf 0.0820}$ &$0.1893$ \\
FNO-2D & $414,517$ & $127.80s$ &$0.0128 $ &${\bf 0.1559}$ &$0.0834$ &${\bf 0.1556}$ \\
U-Net & $24,950,491$ & $48.67s$ &$0.0245 $ &$0.2051$ &$0.1190$ &$0.1982$ \\
TF-Net & $7,451,724$ & $47.21s$ &$0.0225 $ &$0.2253$ &$0.1168$ &$0.2268$ \\
ResNet &$266,641$ & $78.47s$ &$0.0701 $ &$0.2871$ &$0.2311$ &$0.2753$ \\
\hline
\end{tabular}
\end{center}
\end{table}
As shown in Table \ref{table:ns}, the FNO-3D has the best performance when there is sufficient data ($\nu=1\mathrm{e}{-3}, N=1000$ and $\nu=1\mathrm{e}{-4}, N=10000$). For the configurations where the amount of data is insufficient ($\nu=1\mathrm{e}{-4}, N=1000$ and $\nu=1\mathrm{e}{-5}, N=1000$), all methods have $>15\%$ error with FNO-2D achieving the lowest. Note that we only present results for spatial resolution $64 \times 64$ since all benchmarks we compare against are designed for this resolution. Increasing it degrades their performance while FNO achieves the same errors.
\paragraph{2D and 3D Convolutions.}
FNO-2D, U-Net, TF-Net, and ResNet all do 2D-convolution in the spatial domain and recurrently propagate in the time domain (2D+RNN). The operator maps the solution at the previous $10$ time steps to the next time step (2D functions to 2D functions). On the other hand, FNO-3D performs convolution in space-time. It maps the initial time steps directly to the full trajectory (3D functions to 3D functions).
The 2D+RNN structure can propagate the solution to any arbitrary time $T$ in increments of a fixed interval length $\Delta t$, while the Conv3D structure is fixed to the interval $[0, T]$ but can transfer the solution to an arbitrary time-discretization. We find the 3-d method to be more expressive and easier to train compared to its RNN-structured counterpart.
\subsection{Zero-shot super-resolution.}
\label{sec:superresolution}
The neural operator is mesh-invariant, so it can be trained on a lower resolution and evaluated at a higher resolution, without seeing any higher resolution data (zero-shot super-resolution).
Figure \ref{fig:1} shows an example where we train the FNO-3D model on $64 \times 64 \times 20$ resolution data in the setting above with ($\nu=1\mathrm{e}{-4}, N=10000$) and transfer to $256 \times 256 \times 80$ resolution, demonstrating super-resolution in space-time. The Fourier neural operator is the only model among the benchmarks (FNO-2D, U-Net, TF-Net, and ResNet) that can do zero-shot super-resolution. Surprisingly, it can do super-resolution not only in the spatial domain but also in the temporal domain.
\subsection{Bayesian Inverse Problem}
\label{sec:bayesian}
In this experiment, we use a function space Markov chain Monte Carlo (MCMC) method \citep{Cotter_2013} to draw samples from the posterior distribution of the initial vorticity in Navier-Stokes given sparse, noisy observations at time $T=50$. We compare the Fourier neural operator acting as a surrogate model with the traditional solvers used to generate our train-test data (both run on GPU). We generate 25,000 samples from the posterior (with a 5,000 sample burn-in period), requiring 30,000 evaluations of the forward operator.
As shown in Figure \ref{fig:baysian} (Appendix \ref{app:bayesian}), FNO and the traditional solver recover almost the same posterior mean which, when pushed forward, recovers well the late-time dynamic of Navier Stokes.
In sharp contrast, FNO takes $0.005s$ to evaluate a single instance while the traditional solver, after being optimized to use the largest possible internal time-step which does not lead to blow-up, takes $2.2s$. This amounts to $2.5$ minutes for the MCMC using FNO and over $18$ hours for the traditional solver. Even if we account for data generation and training time (offline steps) which take $12$ hours, using FNO is still faster! Once trained, FNO can be used to quickly perform multiple MCMC runs for different initial conditions and observations, while the traditional solver will take $18$ hours for every instance. Furthermore, since FNO is differentiable, it can easily be applied to PDE-constrained optimization problems without the need for the adjoint method.
\paragraph{Spectral analysis.}
Due to the way we parameterize $R_\phi$, the function output by (\ref{eq:Fourier}) has at most $k_{\text{max},j}$ Fourier modes per channel. This, however, does not mean that the Fourier neural operator can only approximate functions up to $k_{\text{max},j}$ modes. Indeed, the activation functions which occur between integral operators and the final decoder network $Q$ recover the high frequency modes.
As an example, consider a solution to the Navier-Stokes equation with viscosity $\nu=1\mathrm{e}{-3}$. Truncating this function at $20$ Fourier modes yields an error around $2\%$
while our Fourier neural operator learns the parametric dependence and produces approximations to an error of $\leq 1\%$ with only $k_{\text{max},j}=12$ parameterized modes.
\paragraph{Non-periodic boundary condition.} Traditional Fourier methods work only with periodic boundary conditions. However, the Fourier neural operator does not have this limitation. This is due to the linear transform $W$ (the bias term), which keeps track of the non-periodic boundary. As an example, the Darcy Flow and the time domain of Navier-Stokes have non-periodic boundary conditions, and the Fourier neural operator still learns the solution operator with excellent accuracy.
\section{Discussion and Conclusion}
\textbf{Requirements on Data.} Data-driven methods rely on the quality and quantity of data. To learn the Navier-Stokes equation with viscosity $\nu=1\mathrm{e}{-4}$, we need to generate $N=10000$ training pairs $\{a_j,u_j\}$ with the numerical solver. However, for more challenging PDEs, generating even a few training samples can already be very expensive. A future direction is to combine neural operators with numerical solvers to alleviate the requirements on data.
\textbf{Recurrent structure.} The neural operator has an iterative structure that can naturally be formulated as a recurrent network where all layers share the same parameters without sacrificing performance. (We did not impose this restriction in the experiments.)
\textbf{Computer vision.} Operator learning is not restricted to PDEs. Images can naturally be viewed as real-valued functions on 2-d domains and videos simply add a temporal structure.
Our approach is therefore a natural choice for problems in computer vision where invariance to discretization is crucial \citep{chi2020fast}.
\section*{Acknowledgements}
The authors want to thank Ray Wang and Rose Yu for meaningful discussions.
Z. Li gratefully acknowledges the financial support from the Kortschak Scholars Program.
A. Anandkumar is supported in part by Bren endowed chair, LwLL grants, Beyond Limits, Raytheon, Microsoft, Google, Adobe faculty fellowships, and DE Logi grant.
K. Bhattacharya, N. B. Kovachki, B. Liu, and A. M. Stuart gratefully acknowledge the financial support of the Army Research Laboratory through the Cooperative Agreement Number W911NF-12-0022. Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-12-2-0022.
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
\section{Introduction}
Since the pioneering work of Anderson on the localization of the electronic wave function in disordered media, eigenvector localization has become a fascinating and active area of research \cite{anderson}.\
In his original paper, Anderson argued that disorder introduced in the diagonal elements of a Hamiltonian matrix will lead to localization of the electronic
wave function.\ Later this theory successfully explained the phenomenon of metal-insulator transition.\
The theory of Anderson localization found its application in almost all the areas of physics including condensed matter physics \cite{cond_matter},
chaos \cite{chaos}, photonics \cite{photonics}, etc.\ Additionally, the phenomenon of localization-delocalization transition has been investigated for different systems
such as random banded matrix \cite{RBM}, power-law random banded matrix (PRBM) \cite{PLBM}, vibration in glasses \cite{vibration_glasses}, percolation systems \cite{percoln_threshold}, etc.\
Most of these studies concentrated on analyzing the impact of diagonal and off-diagonal disorder on the localization properties.\ Further, there exist systems in which disorder originates from randomness in their geometry, leading to extensive research on localization
in topologically disordered systems
\cite{topolo_disorder, topology_andr_insul, Anderson_topological}.\
Many complex systems can be described as graphs or networks consisting of {\it nodes} and {\it links}.\ The {\it nodes} correspond to the elements of a system and links
represent the interactions between these elements.\ Various network models have been proposed to capture and mimic properties of real-world complex systems,
among which Erd\"os-Renyi random network \cite{random}, scale-free network \cite{albert}, and small-world network \cite{small-world} models have been the most popular ones.\ The small-world networks are characterized by high clustering coefficient and small characteristic path length arising due to the topological disorder or random distributions of the connections in an originally regular network.\
Real-world systems exhibiting topological disorder include ramified fractals, percolation networks, polymers \cite{Ramified_fractals, percolation, polymers}, etc.\
Other examples of real-world complex systems depicting the small-world characteristics include brain network \cite{brain} and ecological network \cite{food_web}.\
Here, we construct small-world networks using the Watts and Strogatz algorithm \cite{small-world} as follows.\ Starting from a regular network where each node is connected with its $k$ nearest neighbors, the connections are rewired randomly with a probability $p_{r}$.\ For the intermediate rewiring probability, the network undergoes the small-world transition characterized by high clustering and low path-length.\
Further, several dynamical processes on networks can be better understood by spectra of the corresponding adjacency and Laplacian matrices.\ For example, the small-world network as a quantum model has been studied in terms of localization-delocalization transition of the spectra of underlying adjacency matrices \cite{Locn_small_world_t1_t2}.\
Using the level statistics, it was shown that small-world networks having diagonal disorder and rewired links with different values of the coupling constant manifest a localization-delocalization transition at a critical rewiring probability \cite{Locn_small_world_t1_t2}.\
Furthermore, for the quantum diffusion of a particle initially localized at a single site on small-world networks, the diffusion time was shown to be associated with the participation ratio and to be larger for regular networks than for networks with a shorter path length \cite{diffusion}.\
Further, quantum transport modeled by a continuous-time quantum walk (CTQW) on small-world networks has also been investigated \cite{CTQW}; however, the small-world model used there was slightly different from the one proposed by Watts and Strogatz: additional bonds were added to a ring lattice to make it a small-world network.\
It was argued that adding a large number of bonds leads to a suppression of the transition probability of the CTQW, which is just the opposite of its classical counterpart, i.e., the continuous-time random walk, where adding shortcuts leads to an enhancement of the transition probability \cite{CTQW}.\
This paper investigates the localization properties of the eigenvectors of the adjacency matrix of small-world networks in the presence of disorder in the network topology, which arises from random rewiring of the links of an originally regular network structure.\
We emphasize that,
unlike the original Anderson tight-binding model having diagonal disorder and nearest-neighbor interactions, we do not introduce diagonal disorder and rather consider long-range hopping (interactions).\
Using multifractal analysis, we analyze the localization properties of the eigenvectors of the adjacency matrix over the entire eigenvalue
spectrum as the network undergoes topological transitions from an initial regular structure to a random structure via the small-world network as a consequence of the link rewiring.\ We probe the localization properties of the entire eigenvalue spectrum since the existence of even a fraction of delocalized eigenvectors has been shown to impart crucial changes in the behaviour of the corresponding system. For example, an infinitesimal fraction of delocalized eigenvectors has been shown to impact the transport properties of the underlying system \cite{CTQW, N_extend}.\
The idea of a multifractal system was first introduced by Mandelbrot \cite{manderbolt}, and it later found applications in various areas of real-world complex systems such as stock market data \cite{stock},
foreign exchange data \cite{foreign_exchange}, time-series data of sunspots \cite{time}, traffic \cite{traffic}, air pollution \cite{air_polution}, heartbeat dynamics \cite{heartbeat},
etc.\
We find that for small values of the rewiring probability
($p_{r} \leq 0.01$), an increase in
$p_{r}$, i.e., an increase in the randomness, leads to an increase in the degree of localization, whereas for high rewiring probabilities
($p_{r} \geq 0.01$), localization decreases with an increase in $p_{r}$.\
Furthermore, we discover that only a very small number of rewirings, i.e., a very small deviation from the regular structure,
is required for the occurrence of the delocalization-localization transition of the eigenvectors, captured using the IPR statistics.\ The probability density function of the logarithm of the IPR shows scale invariance at the critical rewiring probability corresponding to the transition.\
\section{Method}
A network denoted by {\it G} = \{V,\,E\} consists of a set of {\it nodes} and interaction {\it links}.\ The set of {\it nodes} is represented by V = $\{v_{1}, v_{2}, v_{3}, $\ldots$, v_{N}\}$ and
the {\it links}
by E = $\{e_{1}, e_{2}, e_{3}, $\ldots$ ,e_{M}\}$, where $N$ and $M$ are the sizes of $V$ and $E$, respectively.\ Mathematically, a network can be represented by its adjacency matrix $A$ whose elements
are defined
as $A_{ij}$ = $1$ if nodes $i$ and $j$ are connected and $0$ otherwise.\ Further, here we consider simple networks without any self-loops or multiple connections.\
The eigenvalues of the adjacency matrix $A$ are denoted by $\left\{\lambda_{1}, \lambda_{2}, \lambda_{3},\ldots,\lambda_{N}\right\}$ where
$\lambda_{1} \geq \lambda_{2} \geq $\ldots$ \geq \lambda_{N}$ and
the corresponding orthonormal eigenvectors
as $\left\{\bm{x}_{1}, \bm{x}_{2}, \bm{x}_{3}, \ldots, \bm{x}_{N}\right\}$.\
Starting with a regular network in which all the nodes have an equal degree, we rewire each edge of the network with a probability $p_r$.\
This rewiring procedure allows one to transform a regular network with $p_r = 0$ into a random
network with $p_r = 1$.\ For intermediate $p_{r}$ values, the network manifests the small-world behavior which is quantified by a very high
clustering coefficient and a very small average shortest path length \cite{small-world}. We would also like to divulge important topological properties of the network capturing the various topological transitions upon link rewiring. First, the initial regular network ($p_{r} = 0$) has periodic boundary conditions and each node is connected to its $k/2$ nearest neighbours on each side. Let the shortest distance between any given pair of nodes $i$ and $j$ be denoted by $r_{i,j}$; thus the average shortest path length of the network is $r$ = $ {\sum_{i \neq j} {r_{i,j}}}/ {N(N-1)}$. For $p_{r}$ = 0, $r$ scales like $r \sim N/2k$, which leads to its Hausdorff dimension being equal to 1. The Hausdorff dimension $d$ can be determined from the scaling of $r$ with the network size $N$, defined as $r \sim N^{1/d}$. When the initial regular network is perturbed, for $p_{r}$ $<$ $0.01$, the networks have finite dimensions, i.e., $r$ grows as $r \sim N^{\gamma}$ where $0$ $<$ $\gamma$ $<$ 1. For $p_{r}$ = 0.001, fitting $r \sim N^{\gamma}$ yields $\gamma$ $\approx$ $0.27$ and $d \approx 3.7$. Upon a further increase in the rewiring probability, which leads to the occurrence of the small-world transition for $p_{r}$ $\geq$ $0.01$, $r$ scales like $r \sim \ln N$, which makes the network have infinite dimension \cite{sc_mf}.
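As an illustration of this construction, a short sketch (using NetworkX; the particular list of $p_r$ values is only for demonstration) that generates the rewired networks and measures the two quantities discussed above is given below.
\begin{verbatim}
import networkx as nx

N, k = 2000, 20     # network size and (even) degree of the initial ring
for p_r in [0.0, 0.001, 0.01, 0.1, 1.0]:
    # regular ring lattice rewired with probability p_r
    G = nx.connected_watts_strogatz_graph(N, k, p_r, seed=0)
    r = nx.average_shortest_path_length(G)   # exact BFS; slow for large N
    cc = nx.average_clustering(G)
    print(f"p_r = {p_r:>6}: r = {r:.2f}, CC = {cc:.3f}")
\end{verbatim}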
We investigate the localization properties of the eigenvectors as the
network transitions from the regular structure to a random one.\ Localization of an eigenvector means that a few entries of the eigenvector have much higher values
compared to the others.\ We quantify localization of the $\bm{x}_{j}$ eigenvectors
by measuring the inverse participation ratio (IPR) denoted as $Y_{\bm{x}_{j}}$.\ The IPR of an eigenvector $\bm{x}_{j}$ is defined as \cite{ipr}
\begin{equation}
Y_{\bm{x}_{j}} = \sum_{i=1}^N (x_{i})_j^{4},
\label{ipr}
\end{equation}
where $(x_{i})_j$ is the $i^{th}$ component of the normalized eigenvectors $\bm{x}_{j}$ with $j$ $\in\left\{1,2,3 \ldots ,N\right\}$.\ The most delocalized eigenvector $\bm{x}_{j}$ will have
all its components equal, i.e., $(x_{i})_j = \frac{1}{\sqrt{N}}$, with IPR value being $1/N$.\ Whereas, for the most localized eigenvector, only one component of the eigenvector will be non-zero, and the normalization condition of the eigenvectors ensures that the non-zero component should be equal to unity. Thus the value of IPR for the most
localized eigenvector is equal to $1$.\ It is also worth noting that there may exist fluctuations in the IPR values for a given state $\bm{x}_{j}$ for different
realizations of the
network for a given rewiring
probability.\ We report the results for the ensemble average
$Y_{\bm{x}_{j}}$ which we define as a sum of IPR values over $\bm{x}_{j}$ lying in the range
$\lambda<\lambda_{j}<\lambda+d\lambda$ divided by the number of such eigenvectors $NP(\lambda)d\lambda$ in this range, where $P(\lambda)$ is the probability distribution function (PDF) of $\lambda$.\ We now elaborate on the averaging process for a discrete eigenvalue spectrum.\ Let $\lambda^{R}$ = \{$\lambda_{1}$,$\lambda_{2}$, $\ldots$ ,$\lambda_{N\times R}\}$ such that
$\lambda_{1} \leq \lambda_{2} \leq $\ldots$ \leq \lambda_{N\times R}$ is a set of eigenvalues of a network for all $R$ random realizations where $N \times R$ is the size of $\lambda^{R}$.\
The corresponding eigenvector set of the $\lambda^{R}$ are denoted by {$\bm{x}^R$} $=$ $\left\{\bm{x}_{1}, \bm{x}_{2}, \bm{x}_{3}, \ldots, \bm{x}_{N\times R}\right\}$.
We then divide $\lambda^{R}$ for a given value of $d\lambda$ into further $m$ subsets where $m$ = $(\lambda_{N \times R} - \lambda_{1}) $/$ d\lambda$.\
For each
$\lambda^{j}$ $\subset$ $\lambda^{R}$ and the corresponding eigenvectors $\bm{x}^{j}$ $\subset$ {$\bm{x}^R$}, $\forall j = 1,2,$\ldots$ ,m $;
$\lambda^{j}$ = $\{\lambda_{1}, \lambda_{2},$\ldots$,\lambda_{l^{j}}\}$ and corresponding eigenvector $\bm{x}^{j}$ = $\left\{\bm{x}_{1}, \bm{x}_{2}, \bm{x}_{3}, \ldots, \bm{x}_{l^j}\right\}$ where $l^{j}$ is the size of $j^{th}$ subset such
that $\sum_{j=1}^m l^{j}$ = $N \times R$ with a constraint that $\lambda_{l^{j}} - \lambda_{1^{j}} \leq d\lambda$ for each subset, the corresponding set of IPRs for $\bm{x}^{j}$
will be $\{Y_{\bm{x}_{1}},Y_{\bm{x}_{2}}, $\ldots$, Y_{\bm{x}_{l^j}}\}$.\
Hence, the average IPR ($Y_{\bm{x}_{j}}(\lambda)$) for each subset $\bm{x}^{j}$ can be calculated as
$ \frac {\sum_{i=1}^{l^{j}} Y_{\bm{x}_{i}}}
{l^{j}}$, where $\lambda$ is the central value of each subset, i.e., $\lambda$ + $\frac{d\lambda}{2}$ = $\lambda_{l^{j}}$ and $\lambda$ - $\frac{d\lambda}{2}$ = $\lambda_{1^{j}}$. Here, we have taken $N = 2000$, $m = 200$, and $R = 50$ for each rewiring probability. All the physical quantities follow the same averaging procedure in this paper.
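A minimal sketch of this procedure, diagonalizing the adjacency matrix, computing the IPR of every eigenvector via Eq.~(\ref{ipr}), and bin-averaging over realizations, is given below (NumPy/NetworkX; the parameter values match those stated above, and the helper names are ours).
\begin{verbatim}
import networkx as nx
import numpy as np

N, k, R, m = 2000, 20, 50, 200   # size, degree, realizations, bins

def ipr_spectrum(p_r, seed):
    G = nx.connected_watts_strogatz_graph(N, k, p_r, seed=seed)
    A = nx.to_numpy_array(G)
    evals, evecs = np.linalg.eigh(A)   # columns are orthonormal eigenvectors
    ipr = np.sum(evecs**4, axis=0)     # Y_x = sum_i x_i^4 for each eigenvector
    return evals, ipr

def averaged_ipr(p_r):
    evals, iprs = zip(*(ipr_spectrum(p_r, s) for s in range(R)))
    evals, iprs = np.concatenate(evals), np.concatenate(iprs)
    # split the N*R eigenvalues into m windows of width d_lambda
    # and average the IPR within each window
    edges = np.linspace(evals.min(), evals.max(), m + 1)
    idx = np.clip(np.digitize(evals, edges) - 1, 0, m - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mean_ipr = np.array([iprs[idx == j].mean() if np.any(idx == j)
                         else np.nan for j in range(m)])
    return centers, mean_ipr
\end{verbatim}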
\begin{figure}[t]
\centering
\includegraphics[width=.48\textwidth]{ipr_iprdistr_ms.eps}
\caption{IPR of the eigenvectors plotted as a function of the corresponding eigenvalues for various values of the rewiring probability.\
The dashed green lines plotted at $0.0005$ and $0.0015$ correspond to the minimum possible value of the IPR ($1/N$) and the random-matrix-predicted value for the maximally delocalized state ($3/N$).\
Here, $N = 2000$ and $\langle k \rangle = 20$ are kept fixed for all the networks.}
\label{ipr distribution}
\end{figure}
Further, in the seminal paper of Wegner \cite{Wegner} it was found that at criticality, the generalized IPRs (GIPR), defined as $\chi_q = \sum_{i=1}^N {x_i}^{2q}$, show
an anomalous scaling with the system size
$N$, i.e., $\langle \chi_q\rangle \propto N^{-\tau(q)}$, where
$-\tau(q) = (q-1) \times D_q$.\ For localized eigenvectors, $\langle \chi_q\rangle \propto N^{0}$, and for completely delocalized eigenvectors $\langle \chi_q\rangle \propto N^{-d(q-1)}$, where $d$ is the dimension of the system.\ However,
if the eigenvector corresponds to the critical state, $D_{q}$ becomes a non-linear function of $q$ and therefore the scaling is described by many exponents $D_{q}$, indicating that a critical eigenvector depicts multifractal behavior.
We use the standard box-counting method as described in \cite{cri_multifrac} for the multifractal analysis.\
Let us consider an eigenvector $\bm{x}_j$ whose components are represented as $(x_{1})_j, (x_{2})_j \ldots (x_{N})_j$.\ We then divide the $N$ sites into $N_L$ number of boxes
with each box having the size $l$.\ The box probability $\mu^{k}(l)$ of the $k^{th}$ box of the size $l$ is defined as
\begin{equation}
\mu^{k}(l) = \sum_{i= (k-1)l+1}^{kl} (x_{i})_j^2.
\label{multifractal}
\end{equation}
The $q^{\rm th}$ moment of the box probability is thus
\begin{equation}
\chi_q = \sum_k \mu_k^q(l) \sim l^{-\tau(q)},
\label{multifractal}
\end{equation}
\begin{figure} [t]
\centering
\includegraphics[width=.48\textwidth]{DQ.eps}
\caption{The generalized fractal dimension $D_{q}$ plotted as a function of the exponent $q$ for the eigenvector
${\bm{x}^{j}}$ corresponding to $\lambda_{TR}^{+}, \lambda_{TR-1}^{+}, \lambda_{TR+1}^{+}$
for various rewiring probabilities $p_r$. The $\color{red} ---$ , $ \color{blue} \times$, $ \color{green} \diamond$ are for $\lambda_{TR}^{+}, \lambda_{TR+1}^{+}, \lambda_{TR-1}^{+}, $ respectively.}
\label{ev_multifractal}
\end{figure}
In the above equation, if the scaling exponent $\tau(q)$ is a linear function of the parameter $q$, it corresponds to the mono-fractal behavior, and for the nonlinear relation it indicates the multifractal property of the eigenvector.\
Note that, apart from the box-counting method, there exists an alternative method widely used in localization theory through multifractal analysis. In this method, instead of varying the length of the box ($l$), the system size ($N$) is varied while keeping the value $l$ = 1 fixed.\ One usually first calculates $\chi_q$ and observes its scaling with the linear size of the system $L$, i.e., $\chi_q \sim L^{-\tau(q)}$.
This is also equivalent to $\chi_q$ $\sim$ $N^{-\delta(q)}$, where $N$ = $L^{d}$ and $\delta(q)$ = $\tau(q)$/$d$. The linear size of a network is defined as its diameter, which is the longest of the shortest paths between all pairs of nodes. Thus, approaching the problem through this method would require varying the network size to very large values, which becomes computationally very expensive. We therefore prefer the box-counting method in our analysis instead of the above-described method. Nevertheless, both methods yield the same results.
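For completeness, a schematic implementation of this box-counting procedure for a single eigenvector, computing $\chi_q(l)$ over several box sizes and extracting $D_q$ from a log-log fit, could read as follows; the box sizes are assumed to divide $N$, and $q=1$ is excluded.
\begin{verbatim}
import numpy as np

def generalized_dimensions(x, qs, box_sizes):
    """D_q for a normalized eigenvector x via box counting.

    Assumes each l in box_sizes divides len(x) and that 1 is not in qs.
    """
    N = x.size
    Dq = []
    for q in qs:
        log_l, log_chi = [], []
        for l in box_sizes:
            # box probabilities mu_k(l): sum of x_i^2 over each box of size l
            mu = np.add.reduceat(x**2, np.arange(0, N, l))
            log_l.append(np.log(l))
            log_chi.append(np.log(np.sum(mu**q)))   # chi_q(l)
        # chi_q(l) ~ l^{-tau(q)}: the fitted slope equals -tau(q) = (q-1) D_q
        slope, _ = np.polyfit(log_l, log_chi, 1)
        Dq.append(slope / (q - 1.0))
    return np.array(Dq)

# e.g. qs = [2, 3, 5, 10], box_sizes = [2, 4, 8, 10, 20, 40] for N = 2000
\end{verbatim}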
\section{Results}
We analyze the localization properties of the eigenvectors of the adjacency matrix of the networks with the variation of $p_{r}$.\ Rewiring of the connections affects two major structural properties of the network: the clustering coefficient ($CC$)
and the average shortest path length ($r$).\
For small values of the rewiring probability ($p_{r} < 0.01$), $CC$ remains at a very high value whereas $r$ shows a drastic drop and attains a very low value. For $p_{r}$ $\geq$ $0.01$, there is no further change in $r$ as it has already attained a very low value, but $CC$ starts decreasing. Note that higher clustering is known to drive localization whereas a smaller path length is believed to support delocalization. Thus,
to understand the contrasting impact of these two structural properties on the eigenvector localization, we first characterize the eigenvalue spectrum into different regimes based on the localization properties of the corresponding eigenvectors.
The central part ($\lambda_{TR}^{-} \leq \lambda \leq \lambda_{TR}^{+}$) of the spectrum consists of critical eigenvectors having IPR of the order of
$10^{-3}$.\
This is the most localized part of the eigenvalue spectrum.\ Further,
$Y_{x_{j}}(\lambda)$ has a U-shape for the smaller eigenvalues ($\lambda < \lambda_{TR}^{-} $), while it remains almost constant for the higher eigenvalues ($\lambda > \lambda_{TR}^{+} $), forming the tail part of the spectrum.
The eigenvalues $\lambda_{TR}^{-}$ and $\lambda_{TR}^{+}$ separate the central part from the smaller and the larger eigenvalues, respectively.\
We refer to the central regime as the critical state regime, since here all the eigenvectors are at the critical state, as identified using the multifractal analysis. Both sides of the central part are referred to as the mixed state, as in this regime both delocalized eigenvectors (with IPR $\sim 10^{-3}$) and critical-state eigenvectors (with IPR $\sim 10^{-4}$) co-exist.
Using the multifractal analysis, we first determine $\lambda_{TR}^{+}$ for various values of the rewiring probability.\ This will help us to know the nature of the change in
the width of the central part (critical states regime) with the change in the rewiring probability.\ We then analyze the change in the degree of localization of eigenvectors with the change
in the rewiring probability.\
\begin{figure} [t]
\centering
\includegraphics[width=.48\textwidth]{Delta_Dq.eps}
\caption{Plot of $\Delta D_{q}$ as a function of rewiring probability $p_{r}$ for $q$ = $2\ (\color{red} \bullet) $ , $5\ (\color{blue} \blacksquare) $ and $10\ (\color{green} \blacktriangle) $
respectively. (a) $\lambda \approx 1.271 $ (b) $\lambda \approx 1.371 $. These are the eigenvalues from the central regime.}
\label{delta_dq}
\end{figure}
{\bf Calculation of} $\bm\lambda_{\bm T \bm R}^{\bm +}$:\ Figure \ref{ipr distribution} plots $Y_{x_{j}}(\lambda)$ as a function of $\lambda$ for various values of the rewiring probability.\
All the different regimes, critical and mixed, can easily be identified from Figure~\ref{ipr distribution}.\
First, we discuss the impact of rewiring on the value of $\lambda_{TR}^{+}$, which helps us to further understand the change in
the width of the central part (critical states regime) with the variation in $p_{r}$.\ To achieve this, we analyze the multifractal behavior of a few
eigenvectors separating the critical regime with the mixed regime corresponding to the higher eigenvalue, i.e., $\bm{x}^j$ corresponding to $\lambda_{TR}^{+}$, $\lambda_{TR+1}^{+}$, and $\lambda_{TR-1}^{+}$.\
Here, $\lambda_{TR+1}^{+}$ and $\lambda_{TR-1}^{+}$ refer to the eigenvalues immediately after and before $\lambda_{TR}^{+}$, respectively, such that the relation $\lambda_{TR-1}^{+}<\lambda_{TR}^{+}<\lambda_{TR+1}^{+}$ holds.
The eigenvectors $\bm{x}^j$ corresponding to $\lambda_{TR+1}^{+}$ are delocalized, hence we expect them to have $D_{q}$ $\rightarrow$ $1$; whereas $\lambda_{TR-1}^{+}$ lies in the
critical regime and its eigenvector should therefore have a multifractal character.
Figure \ref{ev_multifractal} plots $D_{q}$ as a function of $q$ for the eigenvector $\bm{x}^j$ corresponding to $\lambda_{TR}^{+}, \lambda_{TR+1}^{+}$ and $\lambda_{TR-1}^{+}$ for different values of $p_r$.\
For $0.001 \leq p_{r} \leq 0.05$, the eigenvectors $\bm{x}^j$ corresponding to $ \lambda_{TR}^{+}$ and $\lambda_{TR-1}^{+}$ show multifractal characteristics accompanied by a wide range of
generalized fractal dimension values; on the other hand, $\bm{x}^j$ corresponding to $\lambda_{TR+1}^{+}$ has $D_{q}\rightarrow 1\, \forall$ $q>0$.\
We find that $\lambda_{TR}^{+}$ for the various values of the rewiring probability lies in the range $2.03 \leq \lambda_{TR}^{+} \leq 3.34$,
i.e., there is no significant change in the value of $\lambda_{TR}^{+}$ with the rewiring probability for fixed network parameters such as the size $N$ and the average degree $k$; hence the width of the central part remains almost fixed.\
We furthermore notice that $\lambda_{TR}^{+}$, for all values of the rewiring probability, always coincides with the boundary between the bulk part and the tail part of the
eigenvalue density $\rho(\lambda)$. Since the radius of the bulk part of the eigenvalue spectrum depends largely on the network parameters \cite{camellia}, it is not surprising that the value of $\lambda_{TR}^{+}$ remains almost the same. The tail part of the eigenvalue spectrum has a very low probability density. Mathematically, this means that,
$\rho$($\lambda_{TR}^{+}+\epsilon$) $\rightarrow$ $0$ and $\rho$($\lambda_{TR}^{+}-\epsilon$) $\rightarrow$ $\delta$ where
$\delta > \epsilon$ and $\epsilon$ $\ll$ $1$.\ This can be easily understood with the following argument.\ The eigenvalue spacing $\lambda_{i+1}$$-$$\lambda_{i}$ $\ll$ $\xi$ for
$\lambda$ $<$ $\lambda_{TR}^{+}$ whereas for $\lambda$ $>$ $\lambda_{TR}^{+}$, $\lambda_{i+1}$$-$$\lambda_{i}$ $>$ $\zeta$ where $\zeta$ $>$ $\xi$ and $\xi$ $\ll$ $1$.\
It has been argued that the eigenvalue spectrum corresponding to localized states is continuous while that of delocalized states is discrete \cite{spec_loca_deloca}.\ Thus, $\lambda_{TR}^{+}$ is
the eigenvalue separating the central regime of high IPR values from the regime of low IPR values.\ Additionally, it also separates the bulk part from the tail of the eigenvalue density.\ Therefore, these
calculations of the eigenvalue separating the regime of higher IPR values from that of lower IPR values are in agreement with the previously known conjecture on
localization.\ More interestingly, the methodology followed here provides the exact value of the eigenvalue separating the critical regime from the mixed regime.\
For $p_{r} \geq 0.05$, as the randomness increases further, it becomes difficult to divide the spectrum into different regimes based on the localization properties of the eigenvectors, and all three regimes start coinciding with each other (Figure \ref{ipr distribution}).\ Additionally, it can be seen that the $D_{q}$ curves for
$\bm{x}^j$ corresponding to $\lambda_{TR}^{+}, \lambda_{TR+1}^{+}$ and $\lambda_{TR-1}^{+}$ start coinciding with each other (Figure \ref{ev_multifractal}).
\begin{figure} [t]
\centering
\includegraphics[width=.48\textwidth]{ipr_D2.eps}
\caption{The correlation dimension $D_2$ and IPRs of the eigenvectors are plotted as a function of the corresponding eigenvalues for the following four different rewiring probabilities:
(a)-(b) $p_r = 0.001$; (c)-(d) $p_r = 0.0021$; (e)-(f) $p_r = 0.01$; and (g)-(h) $p_r = 0.05$.}
\label{D2}
\end{figure}
{\bf Change in the localization properties with $p_{r}$}: We next discuss the impact of rewiring on the degree of localization of the eigenvectors.\ Specifically, we focus on the
eigenvectors belonging to the central regime as this part of the spectrum undergoes the localization-delocalization transition with the increase in the rewiring probability.\
The other eigenvectors, lying outside the central regime, do not show a significant change in their localization properties.\ For $p_{r}$ = $0.001$, the eigenvectors nearer to the band edge, i.e. $\bm{x}^j$ corresponding to $\lambda_{TR}^{+}$ and $\lambda_{TR-1}^{+}$, are characterized by strong multifractality with a wide range of generalized multifractal dimensions.\ On the other hand, the eigenvectors inside the band are characterized by weak multifractality satisfying $D_{q}$ = $1-\beta q$ $\forall$ $q>0$ with $\beta \ll$ 1.\ Weak multifractality means that the eigenvectors corresponding to the critical state are close to the extended states, analogous to the Anderson transition in $d = 2+\epsilon$ dimensions with $\epsilon \ll$ 1.\ Conversely, strong multifractality means that
the corresponding eigenvector is more inclined towards localization, similar to the conventional Anderson transition in $d \gg$ 1 dimensions \cite{Anders_trans}.\
As the rewiring probability is increased further, for $0.001<p_{r} \leq 0.05$, we do not find any significant change in the multifractal characteristics of the
eigenvectors lying at the band edge.\ However, the eigenvectors lying inside the band are now described by strong multifractal characteristics.\ To quantify the change in the strength of multifractality of the eigenvectors
with the variation in $p_{r}$,
we calculate the decay of $D_{q}$ with respect to $q$.\ For this, we define $\Delta D_{q} = D_{0}-D_{q}$.\ Note that $D_{0}$ = $d$ (in our case, $d = 1$) irrespective of the nature of the eigenvector.
Hence, $\Delta D_{q}$ provides a correct measure to compare the decay of $D_{q}$ with respect to $q$, in turn providing insight into the strength of multifractality.\ Thus, we use $\Delta D_{q}$ as a measure of the degree of
localization.\ Figure \ref{delta_dq} plots $\Delta D_{q}$ as a function of $p_{r}$ for two different eigenvalues from the central part.\
Figure \ref{delta_dq} demonstrates that as the rewiring probability increases, $\Delta D_{q}$ increases until the onset of the small-world transition ($p_{r} \approx 0.01$).
Thereafter, it decreases as $p_{r}$ increases further, up to $p_r = 1$.\ The increase in $\Delta D_{q}$ for the initial $p_{r}$ values indicates an increase in the
multifractal characteristics, i.e., an enhancement in the degree of localization with the increase in the rewiring probability.\
Thus, based on the effect of rewiring of the connections on the localization properties of the
eigenvectors, $p_{r}$ can be divided into two domains.\ First, $0.001<p_{r}\leq 0.01$ where an increase in the rewiring probability leads to an increase in the degree of localization of eigenvectors;
while for $0.01 \leq p_{r} \leq 1$, eigenvectors undergo a continuous decrease in the localization.\ Moreover,
the transition takes place exactly at the onset of the small-world transition.\ This can be further explained by the following.\ For $0 < p_{r} \leq 0.01$, the average clustering coefficient of the network remains
constant at $CC = 3/4$, while the average shortest path length drops drastically.\ It is commonly believed that a shorter $r$ supports diffusion, whereas higher clustering is known to
drive the system toward the localization transition \cite{CC_locn}.\ Therefore, there exists an interplay between these two structural quantities in deciding the localization properties of the
eigenvectors.\ For $p_{r} \leq 0.01$, the high number of triangles accounts for localization, whereas for $p_{r}\geq 0.01$, there is a significant
decrease in $CC$ while $r$ remains small, thereby leading to a decrease in the degree of localization of the eigenvectors.\
We would like to stress that distorting the initial regular network by rewiring a few connections (for very small $p_r$) does not cause localization of the eigenvectors; rather, they reach the critical states, as detected by the calculation of the correlation dimension ($D_{2}$).\
\begin{figure} [t]
\centering
\includegraphics[width=0.52\textwidth]{P_Ipr_N.eps}
\caption{ Distribution function $P(\ln(Y_{x_{j}}))$ for rewiring probabilities (a) 0.001, (b) 0.01, and (c) 0.2.\ $\color{red} -$, $ \color{green} ---$,
$\color{blue}\cdots $, $ \color{brown} -\cdot-\cdot-$ are used for $N = 1000, 2000, 4000, 6000$ respectively. }
\label{L_Ipr_N}
\end{figure}
The correlation dimension ($D_{2}$) of the eigenvectors provides insight into the scaling of the IPRs.\
For the localized eigenvectors, $D_{2}$ $\rightarrow$ $0$, while $D_{2}$ $\rightarrow$ $1$ for
the completely delocalized eigenvector.\ On the other hand, if $0< D_{2}<1$, the eigenvector is said to be at the critical state.\ Therefore, we next calculate the correlation dimension of the eigenvectors for various values of $p_{r}$.\ Figure \ref{D2} presents $D_2$ and the IPR of the eigenvectors as a function of the corresponding eigenvalues for four different $p_{r}$ values.\
The plot clearly depicts a sharp change in $D_{2}$ at the point which separates the central and the delocalized regimes.\ For $\lambda$ $>$ $\lambda_{TR}^{+}$, $D_2$ $>$ $0.94$
for all values of the rewiring probability, indicating delocalized eigenvectors.\ However, at the critical point, which separates the critical and the mixed regimes, the value of $D_2$
differs between the $p_r$ values.\ For $p_r = 0.001, 0.002, 0.01$ and $0.05$, the values of $D_2$ are equal to $0.53, 0.66, 0.66$ and $0.72$, respectively.\ The range $0.4<D_{2}<0.90$ for the
eigenvectors in the critical regime clearly suggests that though they reach the critical state as a result of the link rewiring, they do not get completely localized ($D_{2} \rightarrow 0$).\
Further, the value of $D_{2}$ at $\lambda_{TR}^{+}$ is the minimum for the entire spectrum.\ Thus, the eigenvectors at the boundary of the central part are the most localized in the spectrum for the initial values of the rewiring
probability.\
{\bf IPR Statistics}: So far, we have discussed the impact of rewiring on the localization properties of the eigenvectors when the IPR and other physical measures of the eigenvectors are
averaged over a small eigenvalue window.\
However, an analysis of the IPR statistics
can provide us further information about the system.\ For instance, in the case of power-law random banded matrix (PRBM), it was found that at the
critical point of the localization-delocalization transition, the width and the shape of the distribution of the logarithm of IPR do not change with the system size, or we can say that it is scale invariant \cite{ipr_N}.\ We calculate the distribution function of IPR for various rewiring probabilities for different
system sizes.\ Figure \ref{L_Ipr_N} shows that, for $p_{r}$ = 0.001, $\rho(\ln(Y_{x_{j}}))$ remains invariant with the change in the network size as neither its shape nor its width changes
with $N$.\
For $p_{r} > 0.001$,
the distribution function $\rho(\ln(Y_{x_{j}}))$ shows a continuous decrease in width with an increase in $N$ (Figure \ref{L_Ipr_N}).\ Thus, we can infer
that $p_{r}$ = $0.001$ is the critical
rewiring probability for the localization-delocalization transition.\
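A minimal sketch of this scale-invariance check is given below; the Watts--Strogatz construction, the system sizes, and the eigenvalue window standing in for the central regime are illustrative assumptions rather than the settings used in this work.
\begin{verbatim}
import numpy as np
import networkx as nx

def ln_ipr_central(N, k=10, p_r=0.001, lam_max=2.0, seed=0):
    # Illustrative parameters; lam_max is a stand-in for lambda_TR^+.
    G = nx.watts_strogatz_graph(N, k, p_r, seed=seed)
    evals, evecs = np.linalg.eigh(nx.to_numpy_array(G))
    ipr = (evecs ** 4).sum(axis=0)
    central = np.abs(evals) < lam_max
    return np.log(ipr[central])

for N in (1000, 2000, 4000):
    s = ln_ipr_central(N)
    print(N, "width of ln(IPR) distribution:", round(s.std(), 3))
# At the critical rewiring probability the width is expected to be roughly
# independent of N; away from criticality it shrinks as N grows.
\end{verbatim}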
\begin{figure} [t]
\centering
\includegraphics[width=.48\textwidth]{DQ_K.eps}
\caption{Plot of $D_{q}$ as a function of $q$ and $k$ for various rewiring probabilities: (a) $p_{r}$ = 0.001, (b) $p_{r}$ = 0.005, (c)
$p_{r}$ = 0.01, (d) $p_{r}$ = 0.05, (e) $p_{r}$ = 0.1, (f) $p_{r}$ = 1. In all cases, $\lambda$ $\approx$ $0$ is considered and $D_{q}$ is averaged over all the eigenvectors belonging to d$\lambda$ $=$ $0.25$, as described in the Method section. }
\label{Dq_k}
\end{figure}
\begin{figure} [t]
\centering
\includegraphics[width=.48\textwidth]{MFD_N.eps}
\caption{Plot of $D_{q}$ as a function of $q$ and $N$ for various rewiring probabilities. (a) $p_{r}$ = 0.001, $\lambda$ $\approx$ $1.47$; (b) $p_{r}$ = $0.005$, $\lambda$ $\approx$ 1.403; (c)
$p_{r}$ = $0.01$, $\lambda$ $\approx$ $1.527$; (d) $p_{r}$ = $0.05$, $\lambda$ $\approx$ $1.69$. Here, $\lambda$ belongs to the central regime and $D_{q}$ is averaged over all the eigenvectors belonging to d$\lambda$, as described in the Method section. }
\label{Dq_N}
\end{figure}
\section{Impact of variation in average degree ($k$) on $D_{q}$}
In this section, we discuss the impact of the average degree $k$ on $D_{q}$. Note that the largest eigenvalue of a network is bounded by the largest degree $k^{max}$ \cite{camellia}. Moreover, for a random network $\lambda_{1}$ $\approx$ $[1+o(1)]k$, where $o(1)$ denotes a function that converges to $0$ \cite{lam_bound}. Thus, varying $k$ may affect the eigenvalue spectrum drastically even for a fixed network size. Here, we have considered three values of $k$ = $10$, $15$, $20$ with $N$ = $2000$ fixed. First, we calculate the value of $\lambda_{TR}^{+}$ for $k$ = $10$ and $15$. Although such a small change in the average degree leads to notable changes in $\lambda_{1}$, it has no significant impact on $\lambda_{TR}^{+}$.
We next probe the impact of $k$ on $D_{q}$ for various values of the rewiring probability. We observe no significant changes in the nature of $D_{q}$ for the tail part of the eigenvalue spectrum ($\lambda$ $>$ $\lambda_{TR}^{+}$) with the change in the average degree. However, for the central regime, a decrease in the average degree leads to an increase in the strength of the multifractality of the eigenvectors, as depicted in Figure \ref{Dq_k}. As already discussed, the strength of the multifractality of an eigenvector indicates its degree of localization. Thus, decreasing the average degree suggests an enhancement in the degree of eigenvector localization.
\section{Effect of Finite Size}
It is well known that critical phenomena are accurately defined only in the thermodynamic limit, i.e., $N$ $\rightarrow$ $\infty$. Further, the multifractality of the eigenvectors might be due to the finite size of the system and may not persist in the infinite-size limit. Hence, one needs to be careful regarding the critical point. Nevertheless, $D_{q}$ certainly reveals the tendency towards more localized or more delocalized behavior of a given eigenvector. Therefore, we have calculated $D_{q}$ for various system sizes to check the impact of finite-size effects on our analysis.
In Figure \ref{Dq_N}, $D_{q}$ is plotted for eigenvalues lying in the central regime for various values of the rewiring probability as the network size is varied from $2000$ to $20000$. It is evident from Figure \ref{Dq_N} (a,b) that there is no significant change in $D_{q}$ as the network size is changed from $2000$ to $20000$. This is also supported by Figure \ref{L_Ipr_N} (a), where the distribution $\rho(\ln(Y_{x_{j}}))$ remains scale invariant, thus giving rise to a unique fractal dimension $D_{q}$.
However, there exists a slight difference between $D_{q}$ at $N = 2000$ and $D_{q}$ at $N = 20000$
in the case of $p_{r}$ = $0.01$ [Figure \ref{Dq_N} (c)], though $D_{q}$ saturates after $N = 8000$. In contrast,
we do find a significant change in $D_{q}$ with a change in the network size in the case of $p_{r}$ = $0.05$, though the eigenvectors still show multifractal characteristics.
Thus, we see that the change in the value of $D_{q}$ upon varying $N$ increases with an increase in the rewiring probability, which appears very intriguing. One possible reason for the larger fluctuations in Figure \ref{Dq_N} (d) could be the higher rewiring probability: for a given rewiring probability, the number of rewired links ($N_{r}$) on average equals $(N \times p_{r} \times k)/2$. Thus, for Figure \ref{Dq_N} (d), $N_{r}$ equals $10^{3}$ and $10^{4}$ for $N = 2000$ and $20000$, respectively. This difference is very large compared with that for smaller rewiring probabilities, leading to larger changes in the network topology for higher $N$.
Note that the fluctuation of $D_{2}$ with system size for a critical eigenvector of the power-law random banded matrix (PRBM) was also reported and investigated in \cite{Dq_N}.
\section{ Discussion and Conclusion}
We have investigated the localization behavior of the eigenvectors of small-world networks.\ First, we characterized the eigenvalue spectrum into different regimes: the central regime, corresponding to critical-state eigenvectors, and the mixed regime, where delocalized eigenvectors co-exist with some critical-state eigenvectors.\
Using the multifractal analysis, we find that there is no significant change in the eigenvalue ($\lambda_{TR}^{+}$) separating the central regime from the mixed regime.\ Additionally, we notice no significant change in
$\lambda_{TR}^{+}$ with an increase in $N$, i.e., for $N \rightarrow \infty$, $\lambda_{TR}^{+}$($N$) $\sim$ $\mathcal{O}(1)$.\ Further, we demonstrated that the rewiring procedure can be divided into two domains.\
For small rewiring, $p_{r}$ $\leq$ $0.01$, an increase in the random connections leads to a continuous enhancement in the localization of the eigenvectors corresponding to the central regime, while for
higher rewiring probabilities, $p_{r}$ $\geq$ $0.01$, the eigenvectors gradually lose their degree of localization.\ Interestingly, this change in the behavior of the eigenvectors takes place at the onset of the small-world transition, possibly arising from the fact that for $p_{r}$ $\leq$ $0.01$ a decrease in the characteristic path length ($r$) co-exists with a high
clustering coefficient ($CC = 3/4$).\
It is well known that higher clustering drives localization of the eigenvectors.\ On the other hand, for $p_{r}$ $\geq$ $0.01$, there is a significant decrease in $CC$ while $r$ remains small, and the eigenvectors undergo a continuous decrease in the degree of localization as the randomness in the connections increases.\
We would also like to emphasize that distorting the initial regular network
topology by rewiring a few connections does not lead to localization of the eigenvectors; instead, it drives them toward the critical states with $0.4<D_{2}<0.90$.\ Further, only very little rewiring, i.e., a small amount of randomness relative to the regular structure, is required
to reach the critical states, which we have captured here using the IPR statistics.\ The probability density function of the
logarithm of the IPR remains scale-invariant at the critical rewiring probability corresponding to the transition.\
Our work may be useful for understanding various dynamical processes occurring on small-world networks. For instance, in \cite{epilepsy}, epilepsy in small-world neural networks was investigated and it was argued that network activities depend on the proportion of long-distance connections. In that example, for small, intermediate, and high proportions of long-distance connections, the network activity was shown to behave as normal, seizure, and bursts, respectively. Normal activity was characterized by a low population of firing neurons. Seizure activity was characterized by significantly higher population firing rates, while burst activity in the network was characterized by higher firing rates which rise and fall rapidly. A spontaneous action potential in one neuron was shown to lead to activity in neurons having a common postsynaptic target. Thus, once a wave is initiated, it can give rise to new waves of activity in other regions through the long-distance connections. In this paper, we have shown that for a small proportion of long-distance connections ($p_{r}$ $<$ 0.01), the eigenvectors are more localized than for higher $p_{r}$ values. This suggests that the probability that an initiated wave will generate another wave through the long-distance connections is low, since it dies out in the local region, perhaps due to constructive interference, making this region behave normally. On the other hand, for an intermediate proportion of long-distance connections (0.01 $\leq$ $p_{r}$ $<$ 0.1), the eigenvectors are less localized than those at small rewiring probabilities, so there is a finite probability that an initiated wave can trigger new waves through the long-distance connections, which may lead to seizure. Finally, at higher rewiring probabilities the eigenvectors are again least localized, which can lead to burst activity in the network.
\section{Acknowledgments}
S.J. acknowledges the Government of India, BRNS Grant No. 37(3)/14/11/2018-BRNS/37131, for financial support.
\bibliographystyle{elsarticle-num-names}
\section{Introduction}
We consider the problem of generating a minimal sequence of observing locations
to achieve complete line-of-sight visibility coverage of an environment. In
particular, we are interested in the case when the environment is initially
unknown. This is particularly useful for autonomous agents that map out unknown,
or otherwise unreachable environments, such as undersea caverns. Military
personnel may avoid dangerous situations by sending autonomous agents to scout
new territory. We first assume the environment is known in order to gain
insights.
Consider a domain $\Omega \subseteq \mathbb{R}^d$. Partition the domain
$\Omega=\free\cup\obs$ into an open set $\free$ representing the free space,
and a closed set $\obs$ of finite obstacles without holes. We will refer to the
$\obs$ as the environment, since it is characterized by the obstacles.
Let $x_i\in\free$ be a vantage point, from which a range sensor,
such as LiDAR, takes omnidirectional measurements $\mathcal{P}_{x_i}:S^{d-1}\to\mathbb{R}$.
That is, $\mathcal{P}_{x_i}$ outputs the distance to closest obstacle for each direction in the unit sphere.
One can map the range measurements to the visibility set $\visset_{x_i}$;
points in $\visset_{x_i}$ are visible from $x_i$:
\begin{equation} \begin{aligned}
x \in \visset_{x_i} \text{ if } \|x-x_i\|_2 < \mathcal{P}_{x_i} \Big( \frac{x-x_i}{\|x-x_i\|_2} \Big)
\end{aligned} \end{equation}
As more range
measurements are acquired, $\free$
can be approximated by the \emph{cumulatively visible set} $\Omega_k$:
\begin{equation} \begin{aligned} \Omega_k = \bigcup_{i=0}^k \mathcal{V}_{x_i} \end{aligned} \end{equation}
By construction, $\Omega_k$ admits partial ordering: $\Omega_{i-1} \subset \Omega_{i}$.
For suitable choices of $x_i$, it is possible that
$ \Omega_n \to \free$
(say, in the Hausdorff distance).
We aim at determining a \emph{minimal set of vantage points} $\vanpts$ from which
every $x \in \free$ can be seen.
One may formulate a constrained optimization problem and
look for sparse solutions. When the environment is known, we have the \emph{surveillance} problem:
\begin{equation} \begin{aligned}\label{eq:surveillance-problem}
\min_{\vanpts\subseteq \free} \ | \vanpts | ~~~\text{subject to } \free=\bigcup_{x\in \vanpts} \visset_{x} \ .
\end{aligned} \end{equation}
When the environment is not known a priori, the agent must be careful to avoid collisions with obstacles.
Each new vantage point must be currently visible. That is, $x_{k+1} \in \Omega_k$.
Define the set of admissible sequences:
\begin{equation} \begin{aligned}
\mathbf{A}(\free):=\{ (x_0,\dots,x_{n-1}) \ |\ n\in\mathbb{N},\ x_0\in\free, \ x_{k+1}\in\Omega_k\}
\end{aligned} \end{equation}
For the unknown environment, we have the \emph{exploration} problem:
\begin{equation} \begin{aligned}\label{eq:exploration-problem}
\min_{\vanpts \in\mathbf{A}(\free)} \ |\vanpts| ~~~\text{subject to } \free=\bigcup_{x\in \vanpts} \visset_{x}.
\end{aligned} \end{equation}
The problem is feasible as long as obstacles do not have holes.
\begin{figure}[hptb]
\centering
\includegraphics[width=2in]{figures/notation_sketch0.pdf} \quad
\includegraphics[width=2in]{figures/notation_sketch1.pdf}
\caption{An illustration of the environment. Dashed and dotted lines are the
horizons from $x_0$ and $x_1$, respectively. Their shadow boundary, $B_1$,
is shown in thick, solid blue. The area of the green region represents
$g(x_1; \Omega_0)$. }
\label{fig:setup}
\end{figure}
\subsection{Related works}
The surveillance problem is related to the art gallery problem in computational
geometry, where the task is to determine the minimum set of guards who can
together observe a polygonal gallery. Vertex guards must be stationed at the
vertices of the polygon, while point guards can be anywhere in
the interior. For simply-connected polygonal scenes, Chv\'atal
showed that $\lfloor n/3 \rfloor$ vertex guards, where $n$ is the number of vertices,
are sometimes necessary and always sufficient \cite{chvatal1975combinatorial}.
For polygonal scenes with $h$ holes, $\lfloor (n+h)/3 \rfloor$ point
guards are sufficient \cite{bjorling1995efficient,hoffmann1991art}. However,
determining the optimal set of observers is NP-complete
\cite{urrutia2000art,o1983some,lee1986computational}.
Goroshin et al. propose an alternating minimization scheme for optimizing the
visibility of $N$ observers \cite{goroshin2011approximate}.
Kang et al. use a system of differential
equations to optimize the location and orientation of $N$ sensors to maximize
surveillance \cite{kang2017optimal}. Both works assume the number of sensors is given.
For the exploration problem, the ``wall-following'' strategy may be used to map
out simple environments \cite{zhang2004experimental}. LaValle and
Tovar et al. \cite{tovar2004gap,lavalle2006planning,tovar2007distance} combine
wall-following with a gap navigation tree to keep track of gaps, critical
events which hide a connected region of the environment that is occluded from a
vantage point. Exploration is complete when all gaps have been eliminated.
This approach does not produce any geometric representation of the environment upon completion, due to limited information from gap sensors.
A class of approaches pick new
vantage points along shadow boundaries (aka frontiers), the boundary between free
and occluded regions \cite{yamauchi1997frontier}.
Ghosh et al. propose a frontier-based approach for 2D polygonal environments which requires $r+1$ views,
where $r$ is the number of reflex angles \cite{ghosh2008online}.
For general 2D environments, Landa et al.
\cite{landa2006visibility,landa2007robotic,landa2008visibility} use high order
ENO interpolation to estimate curvature, which is then used to determine how
far past the horizon to step.
However, it is not necessarily optimal to pick only points along the shadow
boundary, e.g. when the map is a star-shaped polygon \cite{ghosh2008online}.
Next-best-view algorithms try to find vantage points that maximize a utility
function, consisting of some notion of \emph{information gain} and other
criteria such as path length. The vantage point does not have to lie along the
shadow boundary. A common measure of information gain is the volume of \emph{entire}
unexplored region within sensor range that is not occluded by obstacles
\cite{gonzalez2002navigation, bircher2016receding, bircher2018receding,
heng2015efficient}. Surmann et al. count the number of intersections of rays
into the occlusion \cite{surmann2003autonomous}, while Valente et
al. \cite{valente2014information} use
the surface area of the shadow boundary, weighted by the viewing angle from the vantage points,
to define potential information gain. The issue with these heuristics is
that they are independent of the underlying geometry.
In addition, computing the information gain at each potential vantage point is
costly, and another heuristic is used to determine which points to sample.
There have been some attempts to incorporate deep learning into the
exploration problem, but they focus on navigation rather than
exploration.
The approach of Bai et al. \cite{bai2017toward} terminates when there is no
occlusion within view of the agent, even if the global map is still incomplete.
Tai and Liu \cite{tai2016mobile,tai2017virtual,lei2016robot} train agents to
learn obstacle avoidance.
Our work uses a gain function to steer a greedy approach, similar to the
next-best-view algorithms. However, our measure of information gain takes the
geometry of the environment into account. By taking advantage of precomputation
via convolutional neural networks, our model learns shape priors for a large
class of obstacles and is efficient at runtime. We use a volumetric
representation which can handle arbitrary geometries in 2D and 3D. Also, we
assume that the sensor range is larger than the domain, which makes the problem
more global and challenging.
\section{Greedy algorithm}
\label{sec:greedy}
We propose a greedy approach which sequentially determines a new vantage point, $x_{k+1}$, based on the information gathered from all previous vantage points, $x_0,x_1,\cdots, x_{k}$.
The strategy is greedy because $x_{k+1}$ would be a location that \emph{maximizes the information gain}.
For the surveillance problem, the environment is known. We define the \emph{gain} function:
\begin{equation}g(x;\Omega_k) := | \visset_x \cup \Omega_k| - |\Omega_k |, \label{gain-func} \end{equation}
i.e. the volume of the region that is visible from $x$ but not from $x_0,x_1,\cdots,x_{k}$.
Note that $g$ depends on $\obs$, which we omit for clarity of notation.
The next vantage point should be chosen to maximize the newly-surveyed volume. We define the greedy surveillance algorithm as:
\begin{equation} x_{k+1} = \arg \max_{x\in \free} g(x;\Omega_k).\label{eq:greedy-surv}\end{equation}
The problem of exploration is even more challenging since, by definition, the
environment is not known. Subsequent vantage points must lie within the current visible set $\Omega_k$.
The corresponding greedy exploration algorithm is
\begin{equation} x_{k+1} = \arg \max_{x\in \Omega_k} g(x;\Omega_k).\label{eq:greedy-exp}\end{equation}
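For concreteness, the following minimal Python sketch implements both greedy updates over a discretized free space; the visibility oracle, the discretization, and all names are placeholders of our own and not the level-set machinery described later in the paper.
\begin{verbatim}
import numpy as np

def greedy_vantage_points(visible_from, n_cells, start=0, explore=True):
    # Greedy surveillance (explore=False) or exploration (explore=True).
    # The free space is discretized into n_cells cells that also serve as
    # candidate vantage points; visible_from(i) returns a boolean mask of
    # length n_cells giving the visibility set of cell i.  Both the oracle
    # and the discretization are illustrative placeholders.
    covered = visible_from(start).copy()
    order = [start]
    while not covered.all():
        candidates = np.flatnonzero(covered) if explore else np.arange(n_cells)
        gains = np.array([np.count_nonzero(visible_from(i) & ~covered)
                          for i in candidates])
        if gains.max() == 0:
            break                      # nothing left to gain from here
        best = int(candidates[gains.argmax()])
        order.append(best)
        covered |= visible_from(best)
    return order
\end{verbatim}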
However, we remark that in practice, one is
typically interested only in a subset $\mathscr{S}$ of all possible environments
$\mathcal{S}:=\{\obs|\obs\subseteq\mathbb{R}^d\}$.
For example, cities generally follow a grid-like pattern. Knowing these priors
can help guide our estimate of $g$ for certain types of $\obs$, even when
$\obs$ is unknown initially.
We propose to encode these priors formally into the parameters, $\theta$, of a learned function:
\begin{equation} \begin{aligned} g_\theta(x; \Omega_k, B_k ) \text{ for } \obs \in \mathscr{S}, \end{aligned} \end{equation}
where $B_k$ is the part of $\partial{\Omega}_k$ that may actually lie in the free space $\free$:
\begin{equation} \begin{aligned}
B_k &= \partial \Omega_k \backslash \obs.
\end{aligned} \end{equation}
See Figure \ref{fig:gain} for an example gain function.
We shall demonstrate that while training for $g_\theta$, incorporating the shadow boundaries
helps, in some sense, localize the learning of $g$, and is essential in creating usable $g_\theta$.
\begin{figure}[hptb]
\vspace{.8em}
\centering
\includegraphics[height=1.4in,trim={0 0.11in 0 0.11in},clip]{figures/example_gain_map.pdf}
\includegraphics[height=1.4in,trim={0 0.11in 0 0.11in},clip]{figures/example_gain_func.pdf}
\caption{ Left: the map of a scene consisting of two disks. Right: the
intensity of the corresponding gain function. The current vantage point is
shown as the red dot. The location which maximizes the gain function is shown
as the red {\tt x}.} \label{fig:gain}
\end{figure}
\subsection{A bound for the known environment}
\label{sec:bound-surv}
We present a bound on the optimality of the greedy algorithm, based on
submodularity \cite{krause2014submodular}, a useful property of set functions.
We start with standard definitions.
Let $V$ be a finite set and $f:2^V \to \mathbb{R}$ be a set function which
assigns a value to each subset $S\subseteq V$.
\begin{definition}{(Monotonicity)}
A set function $f$ is \emph{monotone} if for
every $A\subseteq B \subseteq V$, $$f(A)\le f(B).$$
\end{definition}
\begin{definition}{(Discrete derivative)}
The \emph{discrete derivative} of $f$ at $S$ with respect to $v\in V$
is $$\Delta_f(v|S):= f(S\cup \{v\}) - f(S).$$
\end{definition}
\begin{definition}{(Submodularity)} A set function $f$ is \emph{submodular} if
for every $A\subseteq B\subseteq V$ and $v\in V\setminus B$,
$$\Delta_f(v|A) \ge \Delta_f(v|B).$$
\end{definition}
In other words, set functions are submodular if they have diminishing returns.
More details and extensions of submodularity can be found in \cite{krause2014submodular}.
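As a quick numerical sanity check of these definitions, the toy sketch below verifies the diminishing-returns inequality for a coverage function built from random stand-in visibility sets; the set sizes and sampling scheme are illustrative assumptions, not the actual visibility computation used in this work.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_pts, n_cells = 12, 500
# Random stand-in "visibility sets": V[i] marks the cells seen from point i.
V = rng.random((n_pts, n_cells)) < 0.3

def coverage(S):
    # f(S) = |union of the visibility sets of the points in S|
    return int(np.any(V[sorted(S)], axis=0).sum()) if S else 0

for _ in range(1000):
    B = set(rng.choice(n_pts, size=6, replace=False).tolist())
    A = set(sorted(B)[:3])                       # A is a subset of B
    v = int(rng.integers(n_pts))
    if v in B:
        continue
    dA = coverage(A | {v}) - coverage(A)
    dB = coverage(B | {v}) - coverage(B)
    assert dA >= dB                              # diminishing returns
print("submodularity held for all sampled (A, B, v)")
\end{verbatim}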
Now, suppose the environment $\obs$ is known.
Let $\vanpts$ be the set of vantage points, and let $\visarea(\vanpts)$ be the
volume of the region visible from $\vanpts$:
\begin{equation} \begin{aligned}
\mathscr{V}(\vanpts) &:= \bigcup_{x\in\vanpts} \visset_x \\
\visarea(\vanpts) &:= \Big| \mathscr{V}(\vanpts) \Big|
\end{aligned} \end{equation}
\begin{lemma}{}
The function $\visarea$ is monotone.
\end{lemma}
\begin{proof}
Consider $A\subseteq B\subseteq \free$. Since $\visarea$ is the cardinality of unions of sets, we have
\begin{equation*} \begin{aligned}
\visarea(B) &= \Big| \bigcup_{x\in B} \visset_x \ \Big| \\
&= \Big| \bigcup_{x\in A\cup\{B\setminus A \}} \visset_x \ \Big| \\
&\ge \Big| \bigcup_{x\in A} \visset_x \ \Big| \\
&= \visarea(A).
\end{aligned} \end{equation*}
\end{proof}
\begin{lemma}{}
The function $\visarea$ is submodular.
\end{lemma}
\begin{proof}
Suppose $A\subseteq B$ and $\{v\} \in \free\setminus B$. By properties of unions and intersections, we have
\begin{equation*} \begin{aligned}
\visarea(A\cup \{v\}) + \visarea(B) &= \Big| \bigcup_{x\in (A\cup \{v\})} \visset_x \Big| + \Big| \bigcup_{x\in B} \visset_x \Big| \\
&\ge \Big| \bigcup_{x\in A\cup \{v\}\cup B} \visset_x \Big| + \Big| \bigcup_{x\in (A\cup\{v\}) \cap B} \visset_x \Big| \\
&= \Big| \bigcup_{x\in B\cup \{v\}} \visset_x \Big| + \Big| \bigcup_{x\in A} \visset_x \Big| \\
&= \visarea( B\cup \{v\}) + \visarea(A)\\
\end{aligned} \end{equation*}
Rearranging, we have
\begin{equation*} \begin{aligned}
\visarea(A\cup \{v\}) + \visarea(B) &\ge \visarea( B\cup \{v\}) + \visarea(A)\\
\visarea(A\cup \{v\}) - \visarea(A) &\ge \visarea( B\cup \{v\}) - \visarea(B)\\
\Delta_\visarea (v|A) &\ge \Delta_\visarea(v|B).
\end{aligned} \end{equation*}%
\end{proof}
Submodularity and monotonicity enable a bound which compares the relative
performance of the greedy algorithm to the optimal solution.
\begin{theorem}{}
\label{thm:bound-surv}
Let $\vanpts_k^\ast$ be the optimal set of $k$ sensors.
Let $\vanpts_n= \{x_i\}_{i=1}^n$ be the set of $n$ sensors placed using the greedy surveillance algorithm \eqref{eq:greedy-surv}.
Then,
$$\visarea(\vanpts_n) \ge (1-e^{-n/k}) \visarea(\vanpts_k^\ast) .$$
\end{theorem}
\begin{proof}
For $l<n$ we have
\begin{align}
\visarea(\vanpts_k^\ast) &\le \visarea(\vanpts_k^\ast \cup \vanpts_l) \label{eq:bound-mono}\\
&= \visarea(\vanpts_l) + \Delta_\visarea(\vanpts_k^\ast | \vanpts_l) \\
&= \visarea(\vanpts_l) + \sum_{i=1}^k \Delta_\visarea(x_i^\ast|\vanpts_l \cup\{x_1^\ast,\dots,x_{i-1}^\ast\} ) \\
&\le \visarea(\vanpts_l) + \sum_{i=1}^k \Delta_\visarea(x_i^\ast|\vanpts_l) \label{eq:bound-sub}\\
&\le \visarea(\vanpts_l) + \sum_{i=1}^k \visarea(\vanpts_{l+1}) - \visarea(\vanpts_l) \label{eq:bound-greedy} \\
&= \visarea(\vanpts_l) + k \big[ \visarea(\vanpts_{l+1}) - \visarea(\vanpts_l) \big].
\end{align}
Line (\ref{eq:bound-mono}) follows from monotonicity, (\ref{eq:bound-sub}) follows from submodularity
of $\visarea$, and (\ref{eq:bound-greedy}) from definition of the greedy algorithm.
Define $\delta_l := \visarea(\vanpts_k^\ast) - \visarea(\vanpts_l)$, with $\delta_0:=\visarea(\vanpts_k^\ast)$.
Then
\begin{equation*} \begin{aligned}
\visarea(\vanpts_k^\ast) - \visarea(\vanpts_l) &\le k \big[ \visarea(\vanpts_{l+1}) - \visarea(\vanpts_l)\big] \\
\delta_l &\le k \big[ \delta_l - \delta_{l+1}\big] \\
\delta_l \Big( 1 - k \Big) &\le - k \delta_{l+1} \\
\delta_l \Big( 1 - \frac{1}{k} \Big) &\ge \delta_{l+1} \\
\end{aligned} \end{equation*}
Expanding the recurrence relation with $\delta_n$, we have
\begin{equation*} \begin{aligned}
\delta_n &\le \Big( 1 - \frac{1}{k} \Big) \delta_{n-1}\\
&\le \Big( 1-\frac{1}{k} \Big)^n \delta_0 \\
&= \Big( 1-\frac{1}{k} \Big)^n \visarea(\vanpts_k^\ast) \\
\end{aligned} \end{equation*}
Finally, substituting back the definition for $\delta_n$, we have the desired result:
\begin{align}
\delta_n \le \Big( 1-\frac{1}{k} \Big)^n \visarea(\vanpts_k^\ast) \nonumber \\
\visarea(\vanpts_k^\ast) - \visarea(\vanpts_n) \le \Big( 1-\frac{1}{k} \Big)^n \visarea(\vanpts_k^\ast) \nonumber \\
\visarea(\vanpts_k^\ast)\Big( 1- (1-1/k)^n\Big) \le \visarea(\vanpts_n) \nonumber\\
\visarea(\vanpts_k^\ast)\Big( 1- e^{-n/k} \Big) \le \visarea(\vanpts_n) \label{eq:bound-exp}
\end{align}
where (\ref{eq:bound-exp}) follows from the inequality $1-x\le e^{-x}$.
\end{proof}
In particular, if $n=k$, then $(1-e^{-1})\approx 0.63$.
This means that $k$ steps of the greedy algorithm is guaranteed to cover at least 63\%
of the total volume, if the optimal solution can also be obtained with $k$ steps. When $n=3k$, the greedy algorithm covers at least 95\% of the total volume.
In \cite{nemhauser1978best}, it was shown that no polynomial time algorithm can achieve a better bound.
\subsection{A bound for the unknown environment}
\label{sec:bound-exp}
When the environment is not known, subsequent vantage points must lie within
the current visible set to avoid collision with obstacles:
\begin{equation} \begin{aligned}
x_{k+1} \in \mathscr{V}(\vanpts_k)
\end{aligned} \end{equation}
Thus, the performance of the exploration algorithm has a strong dependence on the environment
$\obs$ and the initial vantage point $x_1$. We characterize this dependence
using the notion of the \emph{exploration ratio}.
Given an environment $\obs$ and $A\subseteq\free$, consider the ratio of the marginal value of the greedy exploration algorithm,
to that of the greedy surveillance algorithm:
\begin{equation} \begin{aligned}
\rho(A) &:= \frac{ \displaystyle{\sup_{x\in \mathscr{V}(A)}}\Delta_\visarea(x|A ) }{ \displaystyle{ \sup_{x\in\free}} \Delta_\visarea(x|A)}.
\end{aligned} \end{equation}
That is, $\rho(A)$ characterizes the relative gap (for lack of a better word) caused by the collision-avoidance constraint $x\in\mathscr{V}(A)$.
Let $A_x =\{A\subseteq \free|x\in A\}$ be the set of vantage points which contain $x$.
Define the \emph{exploration ratio} as
\begin{equation} \begin{aligned}
\rho_x &:= \inf_{A\in A_x} \rho(A).
\end{aligned} \end{equation}
The exploration ratio is the worst-case gap between the two greedy
algorithms, conditioned on $x$.
It helps to provide a bound for the difference between the optimal
solution set of size $k$, and the one prescribed by $n$ steps of the greedy exploration algorithm.
\begin{theorem}{}
\label{thm:bound-exp}
Let $\vanpts_k^\ast =\{x_i^\ast\}_{i=1}^k$ be the optimal sequence of $k$ sensors which includes $x_1^\ast=x_1$.
Let $\vanpts_n= \{x_i\}_{i=1}^n$ be the sequence of $n$ sensors placed using the greedy exploration algorithm
\eqref{eq:greedy-exp}.
Then, for $k,n>1$:
$$\visarea(\vanpts_n) \ge \Big[1-\exp \Big({\frac{-(n-1)\rho_{x_1}}{k-1}}\Big) \Big(1-\frac{\visarea(x_1)}{\visarea(\vanpts_k^\ast)} \Big) \Big] \visarea(\vanpts_k^\ast) .$$
\end{theorem}
This is reminiscent of Theorem~\ref{thm:bound-surv}, with two subtle differences.
The $\big[1-\frac{\visarea(x_1)}{\visarea(\vanpts_k^\ast)}\big]$ term accounts for the shared vantage point $x_1$.
If $\visarea(x_1)$ is large, then the exponential term has little effect, since $\visarea(x_1)$ is already close to $\visarea(\vanpts_k^\ast)$. On the other hand, if it is small, then
the exploration ratio $\rho_{x_1}$ plays a factor.
The idea of the proof is similar, with some subtle differences in algebra to account for the shared vantage point
$x_1$, and the exploration ratio $\rho_{x_1}$.
\begin{proof}
We have, for $l<n$:
\begin{align}
\visarea(\vanpts_k^\ast) &\le \visarea(\vanpts_k^\ast \cup \vanpts_l) \nonumber\\
&= \visarea(\vanpts_l) + \Delta_\visarea(\vanpts_k^\ast | \vanpts_l) \nonumber\\
&= \visarea(\vanpts_l) + \sum_{i=1}^k \Delta_\visarea(x_i^\ast|\vanpts_l \cup\{x_1^\ast,\dots,x_{i-1}^\ast\} ) \label{eq:bd-exp-tele}\\
&\le \visarea(\vanpts_l) + \sum_{i=1}^k \Delta_\visarea(x_i^\ast|\vanpts_l) \label{eq:bd-exp-sub}\\
&= \visarea(\vanpts_l) + \Delta_\visarea(x_1^\ast|\vanpts_l) + \sum_{i=2}^k \Delta_\visarea(x_i^\ast|\vanpts_l) \nonumber\\
&= \visarea(\vanpts_l) + \sum_{i=2}^k \Delta_\visarea(x_i^\ast|\vanpts_l) \label{eq:bd-exp-x1}\\
&\le \visarea(\vanpts_l) + \sum_{i=2}^k \max_{x\in\free} \Delta_\visarea(x|\vanpts_l) \nonumber\\
&\le \visarea(\vanpts_l) + \frac{1}{\rho_{x_1}} \sum_{i=2}^k \max_{x\in\mathscr{V}(\vanpts_l)} \Delta_\visarea(x|\vanpts_l) \label{eq:bd-exp-gap}\\
&\le \visarea(\vanpts_l) + \frac{1}{\rho_{x_1}} \sum_{i=2}^k \visarea(\vanpts_{l+1}) - \visarea(\vanpts_l) \label{eq:bd-exp-greedy} \\
&= \visarea(\vanpts_l) + \frac{k-1}{\rho_{x_1}} \big[ \visarea(\vanpts_{l+1}) - \visarea(\vanpts_l) \big]. \nonumber
\end{align}
Line \eqref{eq:bd-exp-tele} is a telescoping sum, \eqref{eq:bd-exp-sub} follows from submodularity of $\visarea$,
\eqref{eq:bd-exp-x1} uses the fact that $x_1^\ast\in\vanpts_l$,
\eqref{eq:bd-exp-gap} follows from the definition of $\rho_{x_1}$ and \eqref{eq:bd-exp-greedy} stems from the definition
of the greedy exploration algorithm \eqref{eq:greedy-exp}.
As before, define $\delta_l := \visarea(\vanpts_k^\ast) - \visarea(\vanpts_l)$. However, this time, note
that $\delta_1:=\visarea(\vanpts_k^\ast) - \visarea(\vanpts_1) = \visarea(\vanpts_k^\ast) - \visarea(x_1) $.
Then
\begin{equation*} \begin{aligned}
\visarea(\vanpts_k^\ast) - \visarea(\vanpts_l) &\le \frac{k-1}{\rho_{x_1}} \big[ \visarea(\vanpts_{l+1}) - \visarea(\vanpts_l)\big] \\
\delta_l &\le \frac{k-1}{\rho_{x_1}} \big[ \delta_l - \delta_{l+1}\big] \\
\delta_l \Big( 1 - \frac{k-1}{\rho_{x_1}} \Big) &\le - \frac{k-1}{\rho_{x_1}} \delta_{l+1} \\
\delta_l \Big( 1 - \frac{\rho_{x_1}}{k-1} \Big) &\ge \delta_{l+1} \\
\end{aligned} \end{equation*}
Expanding the recurrence relation with $\delta_n$, we have
\begin{equation*} \begin{aligned}
\delta_n &\le \Big( 1 - \frac{\rho_{x_1}}{k-1} \Big) \delta_{n-1}\\
&\le \Big( 1-\frac{\rho_{x_1}}{k-1} \Big)^{n-1} \delta_1 \\
&= \Big( 1- \frac{\rho_{x_1}}{k-1} \Big)^{n-1} \big[ \visarea(\vanpts_k^\ast) -\visarea(x_1)\big] \\
\end{aligned} \end{equation*}
Now, substituting back the definition for $\delta_n$, we arrive at
\begin{align}
\delta_n &\le \Big( 1-\frac{\rho_{x_1}}{k-1} \Big)^{n-1} \big[ \visarea(\vanpts_k^\ast)-\visarea(x_1)\big] \nonumber \\
\visarea(\vanpts_k^\ast) - \visarea(\vanpts_n) &\le \Big( 1-\frac{\rho_{x_1}}{k-1} \Big)^{n-1} \big[ \visarea(\vanpts_k^\ast)-\visarea(x_1)\big] \nonumber \\
\visarea(\vanpts_k^\ast) -\visarea(x_1) -\big[ \visarea(\vanpts_n)-\visarea(x_1)\big] &\le \Big( 1-\frac{\rho_{x_1}}{k-1} \Big)^{n-1} \big[ \visarea(\vanpts_k^\ast)-\visarea(x_1)\big] \nonumber \\
\big[\visarea(\vanpts_k^\ast) -\visarea(x_1)\big] \Big(1- \big[ 1-\frac{\rho_{x_1}}{k-1} \big]^{n-1}\Big) &\le \big[ \visarea(\vanpts_n)-\visarea(x_1)\big] \nonumber \\
\big[\visarea(\vanpts_k^\ast) -\visarea(x_1)\big] \Big(1- e^{-\frac{(n-1)\rho_{x_1}}{k-1}} \Big) &\le \big[ \visarea(\vanpts_n)-\visarea(x_1)\big] \nonumber.
\end{align}
Finally, with some more algebra
\begin{align}
\big[ \visarea(\vanpts_n)-\visarea(x_1)\big] &\ge \Big(1- e^{-\frac{(n-1)\rho_{x_1}}{k-1}} \Big) \big[\visarea(\vanpts_k^\ast) -\visarea(x_1)\big] \nonumber \\
\visarea(\vanpts_n) &\ge \visarea(x_1) + \Big(1- e^{-\frac{(n-1)\rho_{x_1}}{k-1}} \Big) \big[\visarea(\vanpts_k^\ast) -\visarea(x_1)\big] \nonumber \\
\visarea(\vanpts_n) &\ge \visarea(x_1) + \Big(1- e^{-\frac{(n-1)\rho_{x_1}}{k-1}} \Big) \visarea(\vanpts_k^\ast) - \visarea(x_1) + \visarea(x_1) e^{-\frac{(n-1)\rho_{x_1}}{k-1}} \nonumber \\
\visarea(\vanpts_n) &\ge \Big(1- e^{-\frac{(n-1)\rho_{x_1}}{k-1}} \Big) \visarea(\vanpts_k^\ast) + \visarea(x_1) e^{-\frac{(n-1)\rho_{x_1}}{k-1}} \nonumber \\
\visarea(\vanpts_n) &\ge \Big(1- e^{-\frac{(n-1)\rho_{x_1}}{k-1}} \big[ 1-\frac{\visarea(x_1)}{\visarea(\vanpts_n^\ast)} \big] \Big) \visarea(\vanpts_k^\ast) \nonumber.
\end{align}
\end{proof}
\subsubsection*{Exploration ratio example}
We demonstrate an example where $\rho_{x}$ can be an arbitrarily small
factor that is determined by the geometry of $\free$.
Figure~\ref{fig:alley} depicts an illustration of the setup for the narrow alley environment.
\begin{figure}[hptb]
\centering
\includegraphics[width=.5\textwidth]{figures/alley.pdf}
\caption{A map with a narrow alley. Scale exaggerated for illustration.}
\label{fig:alley}
\end{figure}
Consider a domain $\Omega=[0,1]\times[0,1]$ with a thin vertical wall of width $\varepsilon\ll1$,
whose center stretches from $(\frac{3}{2}\varepsilon,0)$ to
$(\frac{3}{2}\varepsilon,1)$. A narrow opening of size
$\varepsilon^2\times\varepsilon$ is centered at
$(\frac{3}{2}\varepsilon,\frac{1}{2})$.
Suppose $x_1=x_1^\ast=A$ so that
$$\visarea(\{x_1\})=\varepsilon +\mathcal{O}(\varepsilon^2),$$
where the $\varepsilon^2$ factor is due to the small sliver of the narrow alley visible from $A$.
By observation, the optimal solution contains two vantage points. One such solution places $x_2^\ast =C$.
The greedy exploration algorithm can only place $x_2\in\mathscr{V}(x_1)=[0,\varepsilon]\times[0,1]$.
One possible location is $x_2=B$.
Then, after 2 steps of the greedy algorithm, we have $$\visarea(\vanpts_2)=\varepsilon+\mathcal{O}(\varepsilon^2).$$
Meanwhile, the total visible area is $$\visarea(\vanpts_2^\ast)=1-\mathcal{O}(\varepsilon)$$
and the ratio of greedy to optimal area coverage is
\begin{equation} \begin{aligned}
\label{eq:alley-greedy-ratio-1}
\frac{\visarea(\vanpts_2)}{\visarea(\vanpts_2^\ast)} =\frac{\varepsilon+\mathcal{O}(\varepsilon^2)}{1-\mathcal{O}(\varepsilon)} = \mathcal{O}(\varepsilon)
\end{aligned} \end{equation}
The exploration ratio is $\rho_{x_1}=\mathcal{O}(\varepsilon^2)$, since
\begin{equation} \begin{aligned}
\max_{x\in\mathscr{V}(\{x_1\})} \Delta_\visarea(x|\{x_1\})&=\mathcal{O}(\varepsilon^2) \\
\max_{x\in\free} \Delta_\visarea(x|\{x_1\})&=1-\mathcal{O}(\varepsilon) \\
\end{aligned} \end{equation}
According to the bound, with $k=n=2$, we should have
\begin{equation} \begin{aligned}
\frac{\visarea(\vanpts_2)}{\visarea(\vanpts_2^\ast)} &\ge \Big(1- e^{-\frac{(n-1)\rho_{x_1}}{k-1}} \big[ 1-\frac{\visarea(x_1)}{\visarea(\vanpts_2^\ast)} \big] \Big) \\
&= \Big(1- e^{-\mathcal{O}(\varepsilon^2)} \big[ 1-\mathcal{O}(\varepsilon) \big] \Big)\\
&= \Upomega(\varepsilon)
\end{aligned} \end{equation}
which reflects what we see in \eqref{eq:alley-greedy-ratio-1}.
On the other hand, if $\vanpts_2=\{C,B\}$ and $\vanpts_2^\ast=\{C,B\}$, we would have
$$\visarea(\{x_1\})=1-\mathcal{O}(\varepsilon)$$ and $\rho_{x_1}=1$,
since both the greedy exploration and surveillance step coincide.
According to the bound, with $k=n=2$, we should have
\begin{equation} \begin{aligned}
\frac{\visarea(\vanpts_2)}{\visarea(\vanpts_2^\ast)} &\ge \Big(1- e^{-\frac{(n-1)\rho_{x_1}}{k-1}} \big[ 1-\frac{\visarea(x_1)}{\visarea(\vanpts_n^\ast)} \big] \Big) \\
&\ge 1-\mathcal{O}(\varepsilon)
\end{aligned} \end{equation}
which is the case, since $\visarea(\vanpts_2)=\visarea(\vanpts_2^\ast)$.
By considering the first vantage point $x_1$ as part of the bound,
we account for some of the unavoidable uncertainties associated with unknown environments during exploration.
\subsection{Numerical comparison}
\label{sec:bound-num}
We compare both greedy algorithms on random arrangements of up to 6 circular obstacles.
Each algorithm starts from the same initial position and runs until all free area is covered.
We record the number of vantage points required over 200 runs for each number of obstacles.
Surprisingly, the exploration algorithm sometimes requires fewer vantage points
than the surveillance algorithm. Perhaps the latter is too aggressive, or perhaps the collision-avoidance
constraint acts as a regularizer.
For example, when there is a single circle, the greedy surveillance algorithm places the
second vantage point $x_2$ on the opposite side of this obstacle. This may lead to two slivers
of occlusion forming on either side of the circle, which will require 2 additional vantage points to cover.
With the greedy exploration algorithm, we do not have this problem, due to the collision-avoidance constraint.
Figure~\ref{fig:circles-135} shows selected examples with 1 and 5 obstacles.
Figure~\ref{fig:circles-histogram} shows the histogram of the number of steps needed
for each algorithm. On average, both algorithms require a similar number of steps, but
the exploration algorithm has a slight advantage.
\begin{figure}[hptb]
\centering
\includegraphics[width=.99\textwidth]{figures/circles_known_vs_unknown_001_01.pdf}
\includegraphics[width=.99\textwidth]{figures/circles_known_vs_unknown_015_05.pdf}
\caption{Comparing the greedy algorithm for the known (left) and unknown (right) environment on circular obstacles.
Spikes on each vantage point indicate the ordering, e.g. the initial point has no spike. Gray areas
are shadows from each vantage point. Lighter regions are visible from more vantage points.}
\label{fig:circles-135}
\end{figure}
\begin{figure}[hptb]
\centering
\includegraphics[height=.93\textheight]{figures/circles_histogram.pdf}
\caption{Histogram of number of vantage points needed for the
surveillance (blue) and exploration (orange) greedy algorithms to completely
cover environments consisting up of to 6 circles.}
\label{fig:circles-histogram}
\end{figure}
\section{Learning the gain function}
In this section, we discuss the method for approximating the gain function when the map is not known.
Given the set of previously-visited vantage points, we compute the cumulative visibility
and shadow boundaries. We approximate the gain function by applying the trained
neural network on this pair of inputs, and pick the next point according to
\eqref{eq:greedy-surv}. This procedure repeats until there are no shadow
boundaries or occlusions.
The data needed for the training and evaluation of $g_\theta$ are computed
using level sets \cite{osher1988fronts,sethian1999level,osher2006level}.
Occupancy grids may be applicable, but we choose level sets since they have proven
to be accurate and robust. In particular, level sets are necessary for subpixel
resolution of shadow boundaries and they allow for efficient visibility
computation, which is crucial when generating the library of training
examples.
The training geometry is embedded by a signed distance function, denoted by $\map$.
For each vantage point $x_i$, the visibility set is represented by the level
set function $\vis(\cdot,x_i)$, which is computed efficiently using the algorithm described in
\cite{tsai2004visibility}.
In the calculus of level set functions, unions and intersections of sets are
translated, respectively, into taking maximum and minimum of the corresponding characteristic
functions. The cumulatively visible sets $\Omega_k$ are represented by the
level set function $ \Psi_k(x)$, which is defined recursively by
\begin{align}
\Psi_0(x) &=\vis(x,{x_0}), \\
\Psi_k(x) &=\max \left\{ \Psi_{k-1}(x), \vis(x,{x_k}) \right\}, \quad k=1,2,\dots
\end{align}
where the max is taken point-wise. Thus we have
\begin{align}
\free &= \{x|\map(x) >0\},\\
\visset_{x_i} &= \{x|\vis(x,{x_i})>0\},\\
\Omega_k &= \{x|\Psi_k(x) >0\}.
\end{align}
The shadow boundaries $B_k$
are approximated by the ``smeared-out'' function:
\begin{equation}
b_k(x) := \delta_\varepsilon (\Psi_k) \cdot \left[ 1-H(G_k(x)) \right],
\end{equation}
where $H(x)$ is the Heaviside function and
\begin{align}
\delta_\varepsilon(x) &= \frac{2}{\varepsilon} \cos^2\left( \frac{\pi x }{\varepsilon} \right) \cdot \mathds{1}_{[-\frac{\varepsilon}{2},\frac{\varepsilon}{2}]} (x), \\
\graze(x,x_0) &= (x_0-x)^T \cdot \nabla \map(x),\\
G_0 &= \graze(x,x_0),\\
G_k(x) &= \max\{G_{k-1}(x),\graze(x,x_k)\}, \quad k=1,2,\dots
\end{align}
Recall that the shadow boundaries are the portions of $\partial \Omega_k$ that lie in free space;
the role of $1-H(G_k)$ is to mask out the portions of the obstacles that are currently visible from
$\{x_i\}_{i=1}^k$. See Figure~\ref{fig:ls_notation} for an example of $\graze$.
In our implementation, we take
$\varepsilon=3\Delta x$ where $\Delta x$ is the grid node spacing. We refer
the readers to \cite{tsai2005total} for a short review of relevant
details.
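A minimal NumPy sketch of this level-set bookkeeping is given below; the disk obstacle, the grid resolution, and especially the brute-force line-of-sight surrogate for $\vis(\cdot,x_i)$ are illustrative assumptions only and do not reproduce the PDE-based visibility solver cited above.
\begin{verbatim}
import numpy as np

# Illustrative setup: unit square with one disk obstacle.
n = 64
xs = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
phi = np.hypot(X - 0.5, Y - 0.5) - 0.15          # signed distance, >0 = free
dx = xs[1] - xs[0]
eps = 3 * dx

def visibility_levelset(x0):
    # Crude line-of-sight surrogate for psi(., x0): positive if visible.
    psi = np.empty_like(phi)
    ts = np.linspace(0.0, 1.0, 48)[:, None]
    for i in range(n):
        for j in range(n):
            seg = (1 - ts) * np.array([X[i, j], Y[i, j]]) + ts * np.array(x0)
            d = np.hypot(seg[:, 0] - 0.5, seg[:, 1] - 0.5) - 0.15
            psi[i, j] = phi[i, j] if d.min() > 0 else -abs(phi[i, j])
    return psi

def smeared_delta(u):                             # delta_eps(u)
    inside = np.abs(u) <= eps / 2
    return np.where(inside, (2 / eps) * np.cos(np.pi * u / eps) ** 2, 0.0)

def grazing(x0):                                  # (x0 - x) . grad(phi)
    dphidx, dphidy = np.gradient(phi, dx)
    return (x0[0] - X) * dphidx + (x0[1] - Y) * dphidy

vantage = [(0.1, 0.1), (0.9, 0.2)]
Psi = visibility_levelset(vantage[0])
G = grazing(vantage[0])
for xk in vantage[1:]:
    Psi = np.maximum(Psi, visibility_levelset(xk))   # Psi_k update
    G = np.maximum(G, grazing(xk))                   # G_k update
b = smeared_delta(Psi) * (G <= 0)        # b_k = delta_eps(Psi_k) [1 - H(G_k)]
print("shadow-boundary mass:", b.sum() * dx * dx)
\end{verbatim}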
When the environment $\obs$ is known, we can compute the gain function exactly
\begin{equation}g(x; \Omega_k) = \int H \Big( H\big(\vis(\xi,x)\big) - H\big( \Psi_k(\xi) \big)\Big) \ d\xi. \end{equation}
We remark that the integrand will
be 1 where the new vantage point uncovers something not previously seen.
Computing $g$ for all $x$ is costly;
each visibility and volume computation requires $\mathcal{O}(m^d)$ operations, and repeating this for all points
in the domain results in $\mathcal{O}(m^{2d})$ total flops.
We approximate it with a function ${\tilde g}_\theta$ parameterized
by $\theta$:
\begin{equation} \begin{aligned} {\tilde g}_\theta(x; \Psi_k, \map, b_k) \approx g(x; \Omega_k).
\end{aligned} \end{equation}
If the environment is unknown, we directly approximate the gain function
by learning the parameters $\theta$ of a function
\begin{equation} \begin{aligned}
g_\theta(x;\Psi_k,b_k) \approx g(x; \Omega_k) H(\Psi_k)
\end{aligned} \end{equation}
using only the observations as input. Note the $H(\Psi_k)$ factor is needed for
collision avoidance during exploration because it is not known \emph{a priori} whether
an occluded location $y$ is part of an obstacle or free space. Thus $g_\theta(y)$ must be zero.
\subsection{Training procedure}
We sample the environments uniformly from a library. For each $\obs$, a sequence of
data pairs is generated and included into the training set $\mathcal{T}$:
\begin{equation} \begin{aligned} \big(\{ \Psi_k, b_k\}, g(x;\Omega_k)H(\Psi_k) \big), \qquad k=0,1,2,\dots.\end{aligned} \end{equation}
For a given environment $\obs$, define a path $\vanpts=\{x_i\}_{i=0}^k$
as admissible if $\map(x_0)>0$ and $\Psi_i(x_{i+1})>0$ for $i=0,\dots,k-1$.
That is, it should only contain points in free space and
in the case of exploration, subsequent points must be visible from at least one
of the previous vantage points.
Let $\mathcal{A}$ be the set of admissible paths.
Then the training set should ideally include all paths in $\mathcal{A}$. However,
this is too costly, since there are $\mathcal{O}(m^{kd})$ paths consisting of $k$ steps.
Instead, to generate causally relevant data, we use an $\varepsilon$-greedy
approach: we uniformly sample initial positions. With probability
$\varepsilon$, the next vantage point is chosen randomly from admissible set.
With probability $1-\varepsilon$, the next vantage point is chosen according to
\eqref{eq:greedy-surv}. Figure~\ref{fig:manifold} shows an illustration of the
generation of causal data along the subspace of relevant shapes.
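A schematic version of this $\varepsilon$-greedy sampling is sketched below; the callables standing in for the visibility and exact-gain computations, as well as the values of $\varepsilon$ and the number of steps, are illustrative assumptions rather than the actual implementation.
\begin{verbatim}
import numpy as np

def epsilon_greedy_path(free_mask, visible_from, exact_gain,
                        n_steps=6, eps=0.3, seed=0):
    # free_mask: boolean grid of the free space; visible_from(idx) and
    # exact_gain(covered) are placeholder callables standing in for the
    # level-set visibility and gain computations (illustrative only).
    rng = np.random.default_rng(seed)
    free_idx = np.argwhere(free_mask)
    start = tuple(free_idx[rng.integers(len(free_idx))])   # random start
    covered = visible_from(start)
    path, samples = [start], []
    for _ in range(n_steps):
        gain = exact_gain(covered) * covered       # mask keeps x in Omega_k
        samples.append((covered.copy(), gain.copy()))   # one training pair
        if rng.random() < eps:                     # random admissible step
            admissible = np.argwhere(covered & free_mask)
            nxt = tuple(admissible[rng.integers(len(admissible))])
        else:                                      # greedy exploration step
            nxt = np.unravel_index(int(np.argmax(gain)), gain.shape)
        path.append(tuple(int(v) for v in nxt))
        covered = covered | visible_from(path[-1])
    return path, samples
\end{verbatim}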
\begin{figure}[ht]
\centering
\includegraphics[width=.98\textwidth]{figures/manifold_figure/manifold.pdf}
\caption{Causal data generation along the subspace of relevant shapes.
Each dot is a data sample corresponding to a sequence of vantage points.}
\label{fig:manifold}
\end{figure}
The function $g_\theta$ is
learned by minimizing the empirical loss across all data pairs for each $\obs$
in the training set $\mathcal{T}$:
\begin{equation} \begin{aligned}
\underset{\theta}{\mathrm{argmin}} \ \frac{1}{N}\sum_{\obs \in \mathcal{T} } \sum_k L\Big(g_\theta(x;\Psi_k,b_k), g(x;\Omega_k) H(\Psi_k) \Big),
\end{aligned} \end{equation}
where $N$ is the total number of data pairs. We use the cross entropy loss function:
\begin{equation} \begin{aligned}
L(p,q)=-\int \Big[ p(x) \log q(x) + \big(1-p(x)\big) \log \big(1-q(x)\big) \Big] \ dx.
\end{aligned} \end{equation}
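For reference, a minimal sketch of the discretized (pixelwise binary) cross-entropy loss is shown below; the clipping constant {\tt eps} is an assumption to avoid evaluating $\log 0$.
\begin{verbatim}
import numpy as np

def bce_loss(target, pred, eps=1e-7):
    """Discrete analogue of L(p, q): sum over grid cells of
    -[ p log q + (1 - p) log(1 - q) ]."""
    q = np.clip(pred, eps, 1.0 - eps)
    return -np.sum(target * np.log(q) + (1.0 - target) * np.log(1.0 - q))
\end{verbatim}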
\begin{figure}[hptb]
\centering
\subfloat[][a)]{\includegraphics[height=.25\textheight,trim={0 0.02in 0 0.02in},clip]{figures/manifold_figure/seed_97126_austin3_22_2454_3639_4_03_map.pdf}}\quad
\subfloat[][b)]{\includegraphics[height=.25\textheight,trim={0 0.02in 0 0.02in},clip]{figures/manifold_figure/seed_97126_austin3_22_2454_3639_4_03_vis.pdf}}\phantom{test}\\
\subfloat[][c)]{\includegraphics[height=.25\textheight,trim={0.01in 0.02in 0 0.02in},clip]{figures/manifold_figure/seed_97126_austin3_22_2454_3639_4_03_horizon.pdf}}\quad
\subfloat[][d)]{\includegraphics[height=.25\textheight,trim={0 0.02in 0 0.02in},clip]{figures/manifold_figure/seed_97126_austin3_22_2454_3639_4_03_gain.pdf}}\\
\caption{A training data pair consists of the cumulative visibility and
shadow boundaries as input, and the gain function as the output. Each sequence of vantage
points generates a data sample which depends strongly on the shapes of the
underlying map with current vantage points shown in red. b) The cumulative
visibility of the current vantage points. c) The corresponding shadow
boundaries. d) The corresponding gain function.}
\label{fig:training-data} \end{figure}
\subsection*{Network architecture}
We use convolutional neural networks (CNNs) to approximate the gain function,
which depends on the shape of $\obs$ and the location $x$. CNNs have been
used to approximate functions of shapes effectively in many applications.
Their feedforward evaluations are efficient if the off-line training cost is
ignored. The gain function $g(x)$ does not depend \emph{directly} on $x$, but
rather, $x$'s visibility of $\free$, with a domain of dependence bounded by
the sensor range.
We employ a fully convolutional
approach for learning $g$, which makes the network applicable to domains of
different sizes. The generalization to 3D is also straight-forward.
We base the architecture of the CNN on U-Net \cite{ronneberger2015u}, which has
had great success in dense inference problems, such as image segmentation. It
aggregates information from various layers in order to have wide receptive
fields while maintaining pixel precision. The main design choice is to make
sure that the receptive field of our model is sufficient. That is, we want to
make sure that the value predicted at each voxel depends on a sufficiently
large neighborhood. For efficiency, we use convolution kernels of size $3$ in
each dimension. By stacking multiple layers, we can achieve large receptive
fields. Thus the complexity for feedforward computations is linear in the
total number of grid points.
Define a \emph{conv block} as the following layers:
convolution, batch norm, leaky {\tt relu},
stride 2 convolution, batch norm, and leaky {\tt relu}.
Each \emph{conv block} reduces the image size by a factor of 2.
The latter half of the network increases the image size using \emph{deconv blocks}:
bilinear 2x upsampling, convolution, batch norm, and leaky {\tt relu}.
Our 2D network uses 6 \emph{conv blocks} followed by 6 \emph{deconv blocks},
while our 3D network uses 5 of each block. We choose the number of blocks to
ensure that the receptive field is at least the size of the training images: $128\times128$
and $64\times64\times64$. The first \emph{conv block} outputs 4 channels.
The number of channels doubles with each \emph{conv block}, and halves with
each \emph{deconv block}.
The network ends with a single-channel convolution layer with a kernel of size~1,
followed by a sigmoid activation. This ensures that the network aggregates all
information into a prediction of the correct size and range.
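A minimal PyTorch-style sketch of the building blocks described above is shown below; it omits the U-Net skip connections and exact channel bookkeeping, and is only meant to illustrate the block structure.
\begin{verbatim}
import torch.nn as nn

def conv_block(c_in, c_out):
    """conv -> BN -> leaky ReLU -> stride-2 conv -> BN -> leaky ReLU.
    Halves the spatial size."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_out), nn.LeakyReLU(),
        nn.Conv2d(c_out, c_out, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(c_out), nn.LeakyReLU())

def deconv_block(c_in, c_out):
    """bilinear 2x upsampling -> conv -> BN -> leaky ReLU.
    Doubles the spatial size."""
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_out), nn.LeakyReLU())

# final layer: single-channel, kernel-size-1 convolution and a sigmoid
head = nn.Sequential(nn.Conv2d(4, 1, kernel_size=1), nn.Sigmoid())
\end{verbatim}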
\section{Numerical results}
We present some experiments to demonstrate the efficacy of our approach. Also, we demonstrate its limitations.
First, we train on $128\times128$ aerial city blocks cropped from the INRIA Aerial Image Labeling Dataset \cite{maggiori2017dataset}.
It contains binary images with building labels from several urban areas, including Austin, Chicago, Vienna, and Tyrol.
We train on all the areas except Austin, which we hold out for evaluation.
We call this model {\bf City-CNN}. Second, we train a similar model, {\bf NoSB-CNN}, on the same training data, but omit the shadow boundaries from the input.
Third, we train another model, {\bf Radial-CNN}, on synthetically-generated radial maps, such
as the one in Figure \ref{fig:shapes-gain}.
Given a map, we randomly select an initial location.
In order to generate the sequence of vantage points, we apply
\eqref{eq:greedy-surv}, using $g_\theta$ in place of $g$. Ties are broken by
choosing the closest point to $x_k$. We repeat this process until there are no
shadow boundaries, the gain function is smaller than $\epsilon$, or the
residual is less than $\delta$, where the residual is defined as:
\begin{equation} \label{eq:residual}
r = \frac{|\free \setminus \Omega_k| } {|\free| } .
\end{equation}
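The greedy loop just described can be summarized by the following sketch; {\tt update\_observation} (which takes a measurement and returns the updated $\Psi_k$ and residual) and {\tt predict\_gain} (which evaluates $g_\theta$ on the grid) are hypothetical helpers, and the shadow-boundary check is omitted.
\begin{verbatim}
import numpy as np

def explore(x0, predict_gain, update_observation,
            max_steps=100, eps=0.1, delta=1e-3):
    """Greedy exploration: place the next vantage point at the argmax of
    the predicted gain, breaking ties by distance to the current point."""
    x, path = x0, [x0]
    for _ in range(max_steps):
        psi, residual = update_observation(x)
        gain = predict_gain(psi)
        if gain.max() < eps or residual < delta:
            break                       # termination conditions
        candidates = np.argwhere(gain == gain.max())
        dists = np.linalg.norm(candidates - np.asarray(x), axis=1)
        x = tuple(candidates[np.argmin(dists)])
        path.append(x)
    return path
\end{verbatim}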
We compare these against the algorithm which uses the exact gain function,
which we call {\bf Exact}. We also compare against {\bf Random}, a random
walker, which chooses subsequent vantage points uniformly from the visible
region, and {\bf Random-SB} which samples points uniformly in a small neighborhood
of the shadow boundaries. We analyze the number of steps required to cover the scene and the
residual as a function of the number of steps.
\begin{figure}[htbp]
\vspace{.8em}
\centering
\includegraphics[height=1.8in]{figures/seed_52478_austin36_67_3489_3103_1_01_pred_gain.pdf}
\includegraphics[height=1.8in]{figures/seed_52478_austin36_67_3489_3103_1_01_gain.pdf}
\caption{Comparison of predicted (left) and exact (right) gain function
for an Austin map. Although the functions are not identical, the predicted gain
function peaks in similar locations to the exact gain function, leading to
similar steps.}
\label{fig:gain-compare}
\end{figure}
Lastly, we present simulation for exploring 3D environments.
Due to the limited availability of datasets, the model, {\bf 3D-CNN}, is
trained using synthetic $64\times64\times64$ voxel images consisting of
tetrahedrons, cylinders, ellipsoids, and cuboids of random positions, sizes,
and orientations.
At the site\footnote{http://visibility.page.link/demo}, the
interested reader may inspect the performance of the {\bf 3D-CNN} in some other
challenging 3D environments.
For our experiments using trained networks, we use a CPU-only machine with
a four-core Intel Core i5-7600 CPU @ 3.50GHz and 8 GB of RAM.
Additionally, we use an Nvidia Tesla K40 GPU with 12 GB of memory for training
and predicting the gain function in 3D scenes.
\begin{figure}[htb]
\vspace{.8em}
\centering
\includegraphics[height=1.3in]{figures/austin3_00_2914_3284_num_steps.pdf}
\includegraphics[height=1.3in]{figures/austin3_00_2914_3284_residual.pdf}
\caption{Distribution of the residual and number of steps generated
across multiple runs over an Austin map. The proposed method is robust against
varying initial conditions. The algorithm reduces the residual to roughly 0.1 \% within
39 steps by using a threshold on the predicted gain function as a termination
condition. }
\label{fig:city_stats}
\end{figure}
\begin{figure}[htpb]
\centering
\includegraphics[height=4in,trim={0 0.02in 0 0.02in},clip]{figures/austin3_00_2914_3284_cnn_36.pdf}
\caption{\label{fig:austin} An example of 36 vantage points (red disks) using {\bf City-CNN} model. White regions are free space while gray regions are occluded. Black borders indicate edges of obstacles.}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=4in]{figures/austin2_00_1513_2258_with_random_sb_residual_vs_steps.pdf}
\caption{Graph showing the decrease in residual over 50 steps among
various algorithms starting from the same initial position for an Austin map.
Without using shadow boundary information, {\bf NoSB-CNN} can at times be worse than
{\bf Random}. Our {\bf City-CNN} model is significantly faster than {\bf Exact}
while remaining comparable in terms of residual.}
\label{fig:comparison}
\end{figure}
\subsection*{2D city}
The {\bf City-CNN} model works well on 2D Austin maps.
First, we compare the predicted gain function to the exact gain function on a $128\times128$ map, as in Figure \ref{fig:gain-compare}.
Without knowing the underlying map, it is difficult to accurately determine the gain function.
Still, the predicted gain function peaks in locations similar to those in the exact gain function. This results
in similar sequences of vantage points.
\emph{The algorithm is robust to the initial positions.} Figure~\ref{fig:city_stats}
shows the distribution of the number of steps and the residual across over 800 runs
from varying initial positions over a $512\times512$ Austin map. In practice, using the
shadow boundaries as a stopping criterion can be unreliable. Due to numerical
precision and discretization effects, the shadow boundaries may never
completely disappear. Instead, the algorithm terminates when the maximum
predicted gain falls below a certain threshold $\epsilon$. In this example, we
used $\epsilon = 0.1$. Empirically, this strategy is robust. On average, the
algorithm required 33 vantage points to reduce the occluded region to within
0.1\% of the explorable area.
Figure \ref{fig:austin} shows an example sequence consisting of 36 vantage points.
Each subsequent step is generated in under 1 sec using the CPU and instantaneously with a GPU.
Even when the maximizer of the predicted gain function is different from that
of the exact gain function, the difference in gain is negligible. This is
evident when we see the residuals for {\bf City-CNN} decrease at similar rates
to {\bf Exact}. Figure \ref{fig:comparison} demonstrates an example of the
residual as a function of the number of steps for one such sequence generated
by these algorithms on a $1024\times1024$ map of Austin.
We see that {\bf City-CNN} performs comparably to the {\bf Exact} approach in terms
of residual. However, {\bf City-CNN} takes 140 seconds to generate 50 steps on
the CPU while {\bf Exact}, an $\mathcal{O}(m^4)$ algorithm, takes more than 16
hours to produce 50 steps.
\begin{figure}[htpb]
\vspace{.5em}
\centering
\includegraphics[height=4in,trim={0 0.02in 0 0.02in},clip]{figures/austin3_00_2914_3284_no_hor_50.pdf}
\caption{\label{fig:nosb} A sequence of 50 vantage points generated from {\bf NoSB-CNN}. The points cluster near flat edges due to ambiguity and the algorithm becomes stuck. Gray regions without black borders have not been fully explored.}
\end{figure}
\subsection*{Effect of shadow boundaries}
\emph{The inclusion of the shadow boundaries as input to the CNN is critical for the algorithm to work.}
Without the shadow boundaries, the algorithm cannot distinguish between obstacles and occluded regions.
If an edge corresponds to an occluded region, then choosing a nearby vantage point will
reduce the residual. However, choosing a vantage point near a flat obstacle will result in no change to the
cumulative visibility. At the next iteration, the input is the same as in the previous iteration, and the result will be the same;
the algorithm becomes stuck in a cycle. To avoid this, we prevent vantage points from repeating
by zeroing out the gain function at the repeated point and recomputing the argmax.
Still, the vantage points
tend to cluster near flat edges, as in Figure \ref{fig:nosb}. This clustering behavior causes the {\bf NoSB-CNN} model to
be, at times, worse than {\bf Random}. Figure \ref{fig:comparison} shows how the clustering inhibits the reduction
of the residual.
\subsection*{Effect of shape}
The shape of the obstacles, i.e. $\Omega^c$, used in training affects the gain function predictions.
Figure \ref{fig:shapes-gain} compares the gain functions produced by {\bf City-CNN} and {\bf Radial-CNN}.
\begin{figure}[phtb]
\vspace{2em}
\centering
\subfloat[][a)]{\includegraphics[height=1.5in,trim={0 0.11in 0 0.11in},clip]{figures/seed_19536_radial_00000_07_map.pdf}}\phantom{filler}\quad
\subfloat[][b)]{\includegraphics[height=1.5in,trim={0 0.11in 0 0.11in},clip]{figures/seed_19536_radial_00000_07_gain.pdf}}\\
\subfloat[][c)]{\includegraphics[height=1.5in,trim={0 0.11in 0 0.11in},clip]{figures/seed_19536_radial_00000_07_pred_gain.pdf}}\quad
\subfloat[][d)]{\includegraphics[height=1.5in,trim={0 0.11in 0 0.11in},clip]{figures/seed_19536_radial_00000_07_pred_gain_radial.pdf}}\\
\caption{Comparison of gain functions produced with various models on a
radial scene. Naturally, the CNN model trained on radial obstacles best
approximates the true gain function. a) The underlying radial map with vantage
points show in red. b) The exact gain function c) {\bf City-CNN} predicted
gain function. d) {\bf Radial-CNN} predicted gain function.}
\label{fig:shapes-gain}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[height=4in,trim={0 0.02in 0 0.02in},clip]{figures/austin3_00_2914_3284_frequency.pdf}
\caption{\label{fig:frequency} Distribution of vantage points generated by {\bf City-CNN} method from various initial positions. Hot spots are brighter and are visited more frequently since they are essential for completing coverage.}
\end{figure}
\subsection*{Frequency map}
Here we present a study of how frequently particular locations in $\Omega$ are
selected as vantage points. We generated sequences of vantage points
starting from over 800 different initial conditions using {\bf City-CNN} model
on a $512\times512$ Austin map. Then, we model each vantage point as a Gaussian with
fixed width, and overlay the resulting distribution on the Austin map in Figure
\ref{fig:frequency}. This gives us a frequency map of the most recurring
vantage points. These hot spots reveal regions that are more secluded, and
whose visibility is therefore more sensitive to the selection of vantage
points.
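A minimal sketch of how such a frequency map can be accumulated is shown below; the width {\tt sigma} is a display parameter, not a quantity from the text.
\begin{verbatim}
import numpy as np

def frequency_map(vantage_points, shape, sigma=3.0):
    """Accumulate an isotropic Gaussian of fixed width at each vantage
    point and normalize for display as a heat map."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    freq = np.zeros(shape)
    for (i, j) in vantage_points:
        freq += np.exp(-((yy - i)**2 + (xx - j)**2) / (2.0 * sigma**2))
    return freq / freq.max()
\end{verbatim}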
The efficiency of the CNN method
allows us to address many surveillance related
questions for a large collection of relevant geometries.
\subsection*{Art gallery}
Our proposed approach outperforms the computational geometry solution
\cite{o1987art} to the art gallery problem, even though we do not assume the
environment is known. The key issue with computational geometry approaches is
that they are heavily dependent on the triangulation.
In an extreme example, consider an art gallery that is a simple convex $n$-gon. Even though it is sufficient to place a single vantage point anywhere in the interior of the room, the triangulation-based approach produces a solution with $\lfloor n/3 \rfloor$ vertex guards.
Figure \ref{fig:art-gallery} shows an example gallery consisting of 58 vertices. The computational
geometry approach requires $\lfloor\frac{n}{3}\rfloor=19$ vantage points to completely cover the scene,
even if point guards are used \cite{bjorling1995efficient,hoffmann1991art}.
The gallery contains $r=19$ reflex angles, so the work of \cite{ghosh2008online} requires $r+1=20$ vantage points.
On average,
{\bf City-CNN}
requires only 8 vantage points.
\begin{figure}[bpt]
\centering
\includegraphics[height=2.5in]{figures/enterprise_art_gallery.pdf}
\includegraphics[height=2.5in]{figures/enterprise_07.pdf}
\caption{Comparison of the computational geometry approach and the {\bf City-CNN}
approach to the art gallery problem. The red circles are the vantage points computed by the methods.
Left: A result computed by the computational geometry approach, given the environment.
Right: An example sequence of 7 vantage points generated by the {\bf City-CNN}
model. }
\label{fig:art-gallery}
\end{figure}
\subsection*{3D environment}
We present a 3D simulation of a 250m$\times$250m environment based on Castle Square Parks in Boston.
See Figure \ref{fig:3d-urban} for snapshots of the algorithm in action.
The map is discretized as a level set function on a $768\times768\times64$ voxel grid.
At this resolution, small pillars are accurately reconstructed by our exploration algorithm.
Each step can be generated in 3 seconds using the GPU or 300 seconds using the
CPU. Parallelization of the distance function computation will further reduce the computation time significantly.
Processing a map of this size was previously infeasible.
Lastly, Figure~\ref{fig:3d-pipes} shows snapshots from the exploration of a
more challenging, cluttered 3D scene with many nooks.
\section{Conclusion}
From the perspective of inverse problems, we proposed a greedy algorithm for
autonomous surveillance and exploration. We show that this formulation can be
well-approximated using convolutional neural networks, which learn
geometric priors for a large class of obstacles. The inclusion of shadow
boundaries, computed using the level set method, is crucial for the success of
the algorithm.
One of the advantages of using the gain function \eqref{gain-func}, an
integral quantity, is its stability with respect to noise in positioning and sensor
measurements. In practice, we envision that it can be used in
conjunction with SLAM algorithms \cite{durrant2006simultaneous,
bailey2006simultaneous} for a wide range of real-world applications.
One may also consider $n$-step greedy algorithms, where $n$ vantage points are
chosen simultaneously. However, being more greedy is not necessarily better.
If the performance metric is the cardinality of the solution set, then it is
not clear that multi-step greedy algorithms lead to smaller solutions. We saw in
section~\ref{sec:greedy} that, even for the single circular obstacle, the greedy
surveillance algorithm may sometimes require more steps than the exploration
algorithm to attain complete coverage.
If the performance metric is based on the rate in which the objective function
increases, then a multi-step greedy approach would be appropriate. However, on a grid
with $m$ nodes in $d$ dimensions, there are $\mathcal{O}(m^{nd})$ possible
combinations. For each combination, computing the visibility and gain function
requires $\mathcal{O}(nm^d)$ cost. In total, the complexity is $\mathcal{O}(nm^{d(n+1)})$,
which is very expensive, even when used for offline training of a neural
network. In such cases, it is necessary to selectively sample only the relevant
combinations. One way to do so is through a tree search algorithm.
\begin{figure}[htpb]
\centering
\includegraphics[width=5in]{figures/3d_urban_0.png}
\includegraphics[width=5in]{figures/3d_urban_1.png}
\caption{Snapshots demonstrating the exploration of an initially unknown
3D urban environment using sparse sensor measurements. The red spheres
indicate the vantage point. The gray surface is the reconstruction of the
environment based on line of sight measurements taken from the sequence of
vantage points. New vantage points are computed in virtually real-time using
{\bf 3D-CNN}.}
\label{fig:3d-urban}
\end{figure}
\begin{figure}[thpb]
\centering
\includegraphics[width=5in]{figures/20180511_messy_pipes_limited_range_1.png}
\includegraphics[width=5in]{figures/20180511_messy_pipes_limited_range_2.png}
\caption{Snapshots of {\bf 3D-CNN} applied to exploration of a cluttered scene.}
\label{fig:3d-pipes}
\end{figure}
\chapter{Introduction}
As we approach an era of autonomous drones and self-driving cars, it becomes
increasingly important to develop efficient algorithms for path-planning and
data processing. Depth sensors, such as LiDAR (Light Detection and Ranging),
allow robotic agents to create virtual maps of the scene. Armed with this
information, agents can perform complicated tasks in environments consisting of
multiple obstacles. We consider two such problems involving line-of-sight
visibility; two points are visible from one another if there is no obstacle in
the line segment that connects the points. Lastly, we consider point cloud
feature extraction and classification through the use of random rays.
\section{Contributions}
Broadly speaking, we develop mathematical algorithms for various problems
involving visibility. Key components of these algorithms often involve
computationally expensive operations. We apply neural networks to efficiently
approximate those operations. Our success so far seems to be rooted in the
generation of causally relevant data and in the flexibility of neural
networks to interpolate such data.
In Chapter~\ref{chap:exploration}, we consider the exploration problem: an
agent equipped with a depth sensor must map out a previously unknown
environment using as few sensor measurements as possible. We propose an approach
based on supervised learning of a greedy algorithm. We provide a bound on the
optimality of the greedy algorithm using submodularity theory. Using a level
set representation, we train a convolutional neural network to determine
vantage points that maximize visibility. We show that this method drastically
reduces the on-line computational cost and determines a small set of vantage
points that solve the problem. This enables us to efficiently produce
highly-resolved and topologically accurate maps of complex 3D environments.
Unlike traditional next-best-view and frontier-based strategies, the proposed
method accounts for geometric priors while evaluating potential vantage points.
While existing deep learning approaches focus on obstacle avoidance and local
navigation, our method aims at finding near-optimal solutions to the more
global exploration problem. We present realistic simulations on 2D and 3D urban
environments.
In Chapter~\ref{chap:surveillance}, we consider surveillance-evasion
differential games, where a pursuer must try to constantly maintain visibility
of a moving evader. The pursuer loses as soon as the evader becomes occluded.
Optimal controls for the game can be obtained from a Hamilton-Jacobi-Isaacs
equation. We use an upwind scheme to compute the feedback value
function, corresponding to the end-game time of the differential game. Although
the value function enables optimal controls, it is prohibitively expensive to
compute, even for a single pursuer and single evader on a small grid. We
consider a discrete variant of the surveillance-evasion game. We propose two locally
optimal strategies based on the static value function for the
surveillance-evasion game with multiple pursuers and evaders. We show that
Monte Carlo tree search and self-play reinforcement learning can train a deep
neural network to generate reasonable strategies for on-line game play. Given
enough computational resources and offline training time, the proposed model
can continue to improve its policies and efficiently scale to higher resolutions.
In Chapter~\ref{chap:raysense}, we present an algorithm for the classification
of point clouds. The algorithm exploits properties of the point clouds'
signature in a new framework called \emph{RaySense}. The RaySense signature
is computed by finding nearest neighbors along a set of randomly generated
rays. From the signature, statistical information about the whole data set, as
well as certain geometric information, can be extracted, independent of the
choice of ray set. A RaySense signature is not merely a subset of the point
cloud: while all points sampled by each ray retain some local geometrical
information of the point cloud, certain specific points are sampled repeatedly
by different rays, giving a more global ``sketch'' of the point cloud's shape.
We propose a convolutional neural network, \emph{RayNN}, that uses RaySense
signatures for point cloud classification. We evaluate RayNN on the 3D
ModelNet benchmark and compare its performance with other state-of-the-art
methods. RayNN results are comparable to other methods, while enjoying lower
complexity. However, in the presence of additional corruptions, such as the
introduction of unseen outliers or removal of data points, RayNN appears to be
more robust.
\section{Visibility level sets}
We first review our representation of geometry and visibility.
All the functions described below can be computed efficiently in $\mathcal{O}(m^d)$, where
$m$ is the number of grid points in each of $d$ dimensions.
\subsection*{Level set functions}
Level set functions \cite{osher1988fronts,sethian1999level,osher2006level} are useful as an implicit
representation of geometry. Let $\obs \subseteq \mathbb{R}^d$ be a closed set
of a finite number of connected components
representing the obstacle. Define the occluder function $\map$ with the following properties:
\begin{equation} \begin{aligned}
\begin{cases}
\map(x) < 0 & x \in \obs \\
\map(x) = 0 & x \in \partial \obs \\
\map(x) > 0 & x \notin \obs \\
\end{cases}
\label{eq:occluder}
\end{aligned} \end{equation}
The occluder function is not unique; notice that for any constant $c>0$, the function $c \map$
also satisfies (\ref{eq:occluder}). We use the signed distance function as the occluder function:
\begin{equation} \begin{aligned}
\map(x):=
\begin{dcases}
-\inf_{y\in \partial \obs} \|x-y\|_2 & x\in \obs \\
\inf_{y\in \partial \obs} \|x-y\|_2 & x \notin \obs
\end{dcases}
\end{aligned} \end{equation}
The signed distance function is a viscosity solution to the Eikonal equation:
\begin{equation} \begin{aligned}
|\nabla \map| &= 1 \\
\map(x) &= 0 \text{ for } x \in \partial \obs
\end{aligned} \end{equation}
It can be computed, for example, using the fast sweeping method \cite{tsai2002rapid} or
the fast marching method \cite{tsitsiklis1995efficient}.
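On a uniform grid, a simple (if less accurate) alternative to these solvers is the Euclidean distance transform; the sketch below, assuming a boolean obstacle mask, is only illustrative.
\begin{verbatim}
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(obstacle_mask, h=1.0):
    """Grid approximation of the signed distance occluder function:
    negative inside the obstacle, positive in free space."""
    dist_out = distance_transform_edt(~obstacle_mask)  # free -> obstacle
    dist_in = distance_transform_edt(obstacle_mask)    # obstacle -> free
    return h * (dist_out - dist_in)
\end{verbatim}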
\subsection*{Visibility function}
Let $\free$ be the open set representing free space. Let $\visset_{x_0}$ be the set of points in $\free$
visible from $x_0\in\free$. We seek a function $\vis(x,x_0)$ with properties
\begin{equation} \begin{aligned}
\begin{cases}
\vis(x,x_0) > 0 & x \in \visset_{x_0} \\
\vis(x,x_0) = 0 & x \in \partial \visset_{x_0} \\
\vis(x,x_0) < 0 & x \notin \visset_{x_0} \\
\end{cases}
\label{eq:visibility}
\end{aligned} \end{equation}
Define the visibility level set function $\vis$:
\begin{equation} \begin{aligned}
\vis(x,{x_0}) = \min_{r\in[0,1]} \map(x_0 + r(x-{x_0}))
\end{aligned} \end{equation}
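This definition can be checked directly by sampling the occluder function along the segment from $x_0$ to $x$, as in the brute-force sketch below (in contrast to the efficient PDE-based computation referenced next); the sampling density {\tt n\_samples} is an assumption.
\begin{verbatim}
import numpy as np
from scipy.ndimage import map_coordinates

def visibility(phi, x, x0, n_samples=200):
    """Brute-force psi(x, x0) = min_{r in [0,1]} phi(x0 + r (x - x0)),
    with phi given as a grid and interpolated linearly."""
    x, x0 = np.asarray(x, float), np.asarray(x0, float)
    r = np.linspace(0.0, 1.0, n_samples)
    pts = x0[None, :] + r[:, None] * (x - x0)[None, :]
    vals = map_coordinates(phi, pts.T, order=1)
    return vals.min()
\end{verbatim}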
It can be computed efficiently using the fast sweeping method based on the PDE formulation described in \cite{tsai2004visibility}:
\begin{equation} \begin{aligned}
\nabla \vis \cdot \frac{x-{x_0}}{|x-{x_0}|} &= \min \Big\{ H(\vis-\map) \nabla \vis \cdot \frac{x-{x_0}}{|x-{x_0}|},0 \Big\} \\
\vis({x_0},{x_0}) &= \map({x_0})
\end{aligned} \end{equation}
where $H$ is the characteristic function of $[0,\infty)$.
\subsection*{Shadow function}
\label{sec:shadow}
When dealing with visibility, it is useful to represent the shadow regions.
The gradient of the occluder function $\nabla \map$ is perpendicular to the
level sets of $\map$. The dot product
of $\nabla \map$ and the viewing direction $(x_0-x)$ characterizes the cosine
of the grazing angle $\theta$ between the obstacle boundary and the viewing direction. In particular,
$|\theta|<\pi/2$ for the portion of the obstacle boundary that is directly visible to $x_0$.
Define the grazing function:
\begin{equation} \begin{aligned}
\graze(x,x_0) &= (x_0-x)^T \cdot \nabla \map(x)\\
\end{aligned} \end{equation}
By masking with the occluder function, we can characterize the portion of the
obstacle boundary that is not visible from the vantage point $x_0$. Define the
auxiliary and auxiliary visibility functions: \begin{equation} \begin{aligned} \aux(x,x_0) &= \max \{
\map(x), \graze(x,x_0) \} \\ \tilde{\aux}(x,x_0) &= \min_{r\in[0,1]} \aux(x
+ r(x_0-x),x_0) \\ \end{aligned} \end{equation}
By masking the auxiliary visibility function with the obstacle, we arrive at the desired
shadow function \cite{takei2014efficient}:
\begin{equation} \begin{aligned}
\xi(x,x_0) &= \max\{ \tilde{\aux}(x,x_0), -\map(x)\}
\end{aligned} \end{equation}
\captionsetup[subfloat]{labelformat=empty}
\begin{figure}[hptb]
\centering
\subfloat[Occluder function $\map(\cdot)$]{\includegraphics[width=.35\textwidth]{figures/ls_notation_occluder.pdf}} \quad
\subfloat[Visibility function $\vis(\cdot)$]{\includegraphics[width=.35\textwidth]{figures/ls_notation_visibility.pdf}} \\
\subfloat[Grazing function $\graze(\cdot)$]{\includegraphics[width=.35\textwidth]{figures/ls_notation_dot.pdf}} \quad
\subfloat[Auxiliary function $\aux(\cdot)$]{\includegraphics[width=.35\textwidth]{figures/ls_notation_aux.pdf}} \\
\subfloat[Auxiliary visibility function $\tilde{\aux}(\cdot)$]{\includegraphics[width=.35\textwidth]{figures/ls_notation_aux_tilde.pdf}} \quad
\subfloat[Shadow function $\xi(\cdot)$]{\includegraphics[width=.35\textwidth]{figures/ls_notation_shadow.pdf}}
\caption{Level set functions from a vantage point $x_0$ (blue dot). Each function is negative in the shaded region. Obstacle boundary shown as black contour.} \label{fig:ls_notation}
\end{figure}
The difference between the shadow function and the visibility function is that
the shadow function excludes the obstacles.
Although
$$\tilde{\xi}(x,x_0) = \max\{-\map(x),\vis(x,x_0)\}$$
looks like a candidate shadow function, it
is not correct. In particular
$$\{ x|\tilde{\xi}(x,x_0)=0\}$$
includes the portion of the obstacle boundary visible to $x_0$.
Figure~\ref{fig:ls_notation} summarizes the relevant level set functions used in this work.
\section{Convolutional neural networks}
Convolutional neural networks (CNNs) have become the undisputed
state-of-the-art for image classification tasks, where an input image has to be
binned into one of $k$ classes. CNNs are advantageous in that they use a
learned classifier over a learned set of features. This has been empirically
shown to be better than the previous state-of-the-art, where learned
classifiers were trained on hand-tuned features.
Training generally requires large amounts of data and
computational time.
A neural network architecture consists of various layers, which map the input
to the desired output. There are many architectures; the most popular image
classification networks are Alex-Net \cite{krizhevsky2012imagenet} , VGG
\cite{simonyan2014very}, Inception \cite{szegedy2015going}, and ResNet
\cite{he2016deep}. More recently, fully convolutional neural networks have
shown promise in dense inference tasks, which require an output for each
input. One such network, U-Net \cite{ronneberger2015u}, is popular due to its
simple architecture and ability to aggregate information across multiple
scales.
\subsection*{Neural networks}
We review some basic concepts.
A single layer neural network is defined by a function
\[
y=f(x;W,b):=\mathbf{\sigma}(Wx+b),
\]
where $x\in\mathbb{R}^m$ is the input and $y\in\mathbb{R}^n$ is the output.
The parameters $W$ and $b$ are \emph{learned}, where
$W:\mathbb{R}^{m}\to\mathbb{R}^{n}$ is a linear function; i.e. an $n\times m$
real-valued matrix, while $b \in \mathbb{R}^{n}$ is a bias term. Lastly,
$\sigma:\mathbb{R}^n\to\mathbb{R}^n$ is a nonlinear activation function.
Common nonlinearities include \emph{sigmoid}: $\sigma(y)=1/(1+e^{-y})$ and
\emph{ReLU}: \mbox{$\sigma(y) = \max\{0,y\}$}. The activation function is an
``element-wise'' function; i.e. if
$y=(y_{1},y_{2,}\cdots,y_{n})$ , then
$$
\sigma(y):=(\sigma(y_{1}),\sigma(y_{2}),\cdots,\sigma(y_{n})).
$$
For classification tasks, the \emph{softmax} function normalizes the output so that it sums
to $1$:
$$
S(y):=\frac{e^{y}}{\displaystyle \sum_{j=1}^n e^{y_j}}.
$$
Through the compositions of functions, a neural network can have multiple layers:
\begin{align*}
y^{(L)} & :=f_{L}(f_{L-1}\circ\cdots\circ f_{1}(x,W^{(1)},b^{(1)}),\dots,W^{(L)},b^{(L)})\\
& \equiv f(x;W^{(1)},\cdots,W^{(L)},b^{(1)},\cdots,b^{(L)}),
\end{align*}
where
\begin{align}
y^{(0)} & :=x \nonumber,\\
y^{(\ell+1)} & :=\sigma_{\ell}(W^{(\ell+1)}y^{(\ell)}+b^{(\ell+1)}),\,\,\,\ell=0,1,\cdots,L-1. \label{eq:nn-recursion}
\end{align}
The function $\sigma_{\ell}$ is the activation function for layer $\ell$, or an
operation that changes the dimensionality of the vector at that layer.
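A minimal sketch of the forward pass in \eqref{eq:nn-recursion}, with a ReLU activation at every layer, is shown below.
\begin{verbatim}
import numpy as np

def forward(x, weights, biases, sigma=lambda y: np.maximum(0.0, y)):
    """y^(l+1) = sigma(W^(l+1) y^(l) + b^(l+1)), starting from y^(0) = x."""
    y = x
    for W, b in zip(weights, biases):
        y = sigma(W @ y + b)
    return y
\end{verbatim}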
\subsection*{Residual networks}
For very deep neural networks with many layers, \cite{he2016deep} showed that
incorporating a \emph{residual~connection} into \eqref{eq:nn-recursion} eases
the training process and improves accuracy. A \emph{residual connection} with
layer $\ell$ is similar to an explicit Euler scheme -- the operation is an update
to that layer rather than an operation on the entire output of that layer:
\begin{align*}
y^{(\ell+1)} & :=y^{(\ell)}+\sigma_{\ell}(W^{(\ell+1)}y^{(\ell)}+b^{(\ell+1)}),\,\,\,\ell=0,1,\cdots,L-1.
\end{align*}
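A corresponding sketch with residual connections (assuming all layers have the same width, so the addition is well-defined):
\begin{verbatim}
import numpy as np

def forward_residual(x, weights, biases,
                     sigma=lambda y: np.maximum(0.0, y)):
    """y^(l+1) = y^(l) + sigma(W^(l+1) y^(l) + b^(l+1))."""
    y = x
    for W, b in zip(weights, biases):
        y = y + sigma(W @ y + b)
    return y
\end{verbatim}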
\subsection*{Training}
The parameters $\theta:=(W^{(1)},\cdots,W^{(L))},b^{(1)},\cdots,b^{(L)})$,
are learned via stochastic gradient descent to minimize the \emph{empirical risk}:
\[
\frac{1}{N}\sum_{j=1}^{N}\mathcal{L}(\tilde{y}_{j},f(\tilde{x}_{j};\theta))
\]
where $\mathcal{L}$ is the loss function and $\{(\tilde{x}_{j},\tilde{y}_{j})\}_{j=1}^{N}$ are the $N$ training examples.
The loss function is typically the \emph{squared loss} for regression tasks:
\begin{equation*} \begin{aligned}
\mathcal{L}(p,q) &= \|p-q\|_2^2,
\end{aligned} \end{equation*}
or the \emph{cross entropy loss} for classification tasks:
\begin{equation*} \begin{aligned}
\mathcal{L}(p,q) &= -p \cdot \log(q).
\end{aligned} \end{equation*}
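As a toy illustration of empirical risk minimization by stochastic gradient descent, the sketch below fits a single linear layer under the squared loss (the factor of 2 in the gradient is absorbed into the learning rate); it is not meant to reflect how the networks in this work are trained.
\begin{verbatim}
import numpy as np

def sgd_linear(xs, ys, lr=1e-2, epochs=10, seed=0):
    """Minimize (1/N) sum_j ||(W x_j + b) - y_j||^2 by SGD."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(ys.shape[1], xs.shape[1]))
    b = np.zeros(ys.shape[1])
    for _ in range(epochs):
        for j in rng.permutation(len(xs)):
            err = (W @ xs[j] + b) - ys[j]   # gradient of the squared loss
            W -= lr * np.outer(err, xs[j])
            b -= lr * err
    return W, b
\end{verbatim}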
\subsection*{Convolution}
Next, we describe the operations that act on 2D
image-based inputs.
Generally, the input is a stack of 2D images. The third
(stack) dimension is the feature dimension. One can think of each pixel in the
image as a feature vector. For example, an RGB image has 3 channels, where
each pixel is a vector describing the intensity of red, green, and blue.
A \emph{convolution} layer convolves the input with a filter, whose weights are
learned. The filter operates on all image channels, but only acts on a small
neighborhood of spatial pixels, determined by the kernel size. Each filter
takes as input a 3D image, and outputs a 2D image. Each convolutional layer
may have multiple filters, resulting in a 3D stack of filtered images as
output.
Let $x\in\mathbb{R}^{M_\text{in}\times N_\text{in}\times D_\text{in}}$ be an
input vector-valued image,
where $M_\text{in}$, $N_\text{in}$, and $D_\text{in}$
are the width, height and number of channels, respectively.
Let $s_{k}$ be the kernel size (generally odd so that there is a
well-defined center) and $D_{out}$ be the number of filters. Then
the output of the convolutional layer is
$y\in\mathbb{R}^{M_{out}\times N_{out}\times D_{out}}$,
where $M_{out}=M_{in}-s_{k}+1$, $N_{out}=N_{in}-s_{k}+1$, and each
entry in $y:=W\ast x+b$ is given by
\[
y_{ijk}=b_{k}+\sum_{r=1}^{D_{in}}\sum_{q=0}^{s_{k}-1}\sum_{p=0}^{s_{k}-1}W_{p,q,r,k} \ x_{i+p,j+q,r}
\]
where $W\in\mathbb{R}^{s_{k}\times s_{k}\times D_{in}\times D_{out}}$
and $b\in\mathbb{R}^{D_{out}}$ is the bias.
In general, a zero padding of $s_{p}$ pixels can be introduced to
the spatial dimensions of the input image to change the output size.
This is particularly useful to make the output image the same size
as the input. Also, a stride of $s_{s}$ pixels can be used if it
is not desirable to apply filter to consecutive pixels. In this case,
$M_{out}=\frac{1}{s_{s}}(M_{in}-s_{k}+2s_{p})+1$ , $N_{out}=\frac{1}{s_{s}}(N_{in}-s_{k}+2s_{p})+1$
and
\[
y_{ijk}=b_{k}+\sum_{r=1}^{D_{in}}\sum_{q=0}^{s_{k}-1}\sum_{p=0}^{s_{k}-1}W_{p,q,r,k} \ \tilde{x}_{s_{s}\cdot i+p,s_{s}\cdot j+q,r}
\]
where $\tilde{x}$ is the zero-padded image.
A common choice for kernel size is $s_{k}=3$, with zero padding $s_{p}=1$
and stride $s_{s}=1$.
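A direct (unoptimized) implementation of this formula is sketched below.
\begin{verbatim}
import numpy as np

def conv2d(x, W, b, stride=1, pad=0):
    """y[i,j,k] = b[k] + sum_{p,q,r} W[p,q,r,k] x[stride*i+p, stride*j+q, r],
    where x has shape (M_in, N_in, D_in) and W has shape (s_k, s_k, D_in, D_out)."""
    x = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    s_k, _, _, d_out = W.shape
    m_out = (x.shape[0] - s_k) // stride + 1
    n_out = (x.shape[1] - s_k) // stride + 1
    y = np.zeros((m_out, n_out, d_out))
    for i in range(m_out):
        for j in range(n_out):
            patch = x[stride*i:stride*i + s_k, stride*j:stride*j + s_k, :]
            # contract the patch against every filter at once
            y[i, j, :] = b + np.tensordot(patch, W, axes=([0, 1, 2], [0, 1, 2]))
    return y
\end{verbatim}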
\subsection*{Pooling}
Pooling is a downsampling of
the image, in order to reduce computational efforts and also allow
the model to be spatially invariant. The most common is \emph{max-pooling}
with kernel size $2$ and stride $2$. This replaces each 2x2 patch
in the input with its max value:
\[
Y_{ijk}=\max_{p,q\in\{0,1\}^{2}}\left[X_{2\cdot i+p,2\cdot j+q,k}\right]
\]
For general kernel size $s_{k}$ and stride $s_{s}$, we have
\[
Y_{ijk}=\max_{(p,q)\in\{0,\dots,s_{k}-1\}^{2}}\left[X_{s_{s}\cdot i+p,s_{s}\cdot j+q,k}\right]
\]
Average pooling, where the $\max$ is replaced by the average, is also common.
The convolution and pooling operations generalize to 3D images in a straight-forward manner; in those cases, the input will be a stack of 3D images.
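A direct sketch of the 2D max-pooling operation defined above:
\begin{verbatim}
import numpy as np

def max_pool(x, s_k=2, s_s=2):
    """Y[i,j,k] = max over an s_k x s_k patch of X, taken with stride s_s."""
    m_out = (x.shape[0] - s_k) // s_s + 1
    n_out = (x.shape[1] - s_k) // s_s + 1
    y = np.zeros((m_out, n_out, x.shape[2]))
    for i in range(m_out):
        for j in range(n_out):
            y[i, j, :] = x[s_s*i:s_s*i + s_k,
                           s_s*j:s_s*j + s_k, :].max(axis=(0, 1))
    return y
\end{verbatim}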
\iffalse
\begin{enumerate}
\item \emph{Softmax:} A transformation that normalizes the input across
the classes so that the output corresponds to a probability that each
pixel belongs to a certain class.
\[
P_{ijk}=\frac{\exp(X_{ijk})}{{\displaystyle \sum_{r=1}^{D_{in}}}\exp(X_{ijr})}
\]
Here $P_{ijk}$ is the probability that pixel $X_{ij}$ belongs to
class $k$.
\item \emph{Fully Connected}: Each output node of a fully connected layer
is simply an affine transformation of all nodes in the previous layer.
Let $D_{out}$ be the desired number of output nodes. Then $Y\in\mathbb{R}^{D_{out}}$
where each entry
\[
Y_{k}=W_{0}^{k}+\sum_{r=1}^{D_{in}}\sum_{q=1}^{N_{in}}\sum_{p=1}^{M_{in}}W_{pqr}^{k}X_{pqr}
\]
\item \emph{Cross-Entropy Loss}: Let $L_{ij}\in\{0,\dots,C_{out}\}$ be
the true label of pixel $X_{ij}$. The cross-entropy loss is defined
as
\[
-\sum_{ijk}{\bf 1}_{L_{ij}=k}\log(P_{ijk})
\]
where $\mathbf{1}$ is the indicator function.
\end{enumerate}
\fi
\section*{Approximation theory}
We review some literature relating to the capabilities of neural networks
in approximating certain classes of functions. The term \emph{hidden layer(s)} is
used to refer to the intermediate layer(s) of a neural network.
The number of \emph{hidden units} refers to the dimension of the intermediate layers.
\subsection*{Universality}
Define a sigmoidal function $\sigma$ as any function with the properties:
$\lim_{x\to-\infty}\sigma(x)=0$ and $\lim_{x\to\infty}\sigma(x)=1$.
Cybenko \cite{cybenko1989approximation} showed that
neural networks with one hidden layer and an arbitrary
continuous sigmoidal function can approximate continuous functions
$f:[0,1]^d \to \mathbb{R}$ to arbitrary precision.
More precisely, let the neural network be a sum of the form
$$
f_\theta(x)= \sum_{j=1}^n c_j \sigma(W_j^T x + b_j).
$$
Then, given any $f\in C([0,1]^d)$ and $\varepsilon>0$, there
exists $f_\theta$ such that
$$
|f_\theta(x)-f(x)|< \varepsilon \text{ for all } x\in [0,1]^d.
$$
Similarly, Hornik et al.\ \cite{hornik1989multilayer} showed that a neural network, with
as few as a single hidden layer, using an arbitrary nondecreasing sigmoidal function,
is capable of approximating any Borel measurable function
$f:\mathbb{R}^d\to\mathbb{R}^m$ to any level of accuracy.
The work of \cite{zhou2020universality} generalizes these ideas to convolutional neural networks.
\subsection*{Rate of convergence}
However, the existence of a neural network that approximates a desired
function well does not mean that there is a good way to construct such a
neural network, nor does it mean that the amount of computational resources
needed for its construction is acceptable. The number of hidden units must be
sufficiently large. A relevant question is how the number of hidden units $n$
should be increased in order to meet the accuracy $\varepsilon$,
and how this depends on the input dimension $d$, the output dimension
$m$, and the number of training samples $N$.
Barron \cite{barron1993universal,barron1994approximation} gives
such an analysis for a single hidden layer neural network approximating
a class of functions $f:\mathbb{R}^d \to \mathbb{R}$ with bounded domains.
The analysis also depends on the first absolute moment of the Fourier transform of $f$:
$$C_{f,d}:=\int_{\mathbb{R}^{d}}|\omega|_{1}|\hat{f}(\omega)|d\omega.$$
The analysis uses a slightly different sigmoidal function $\sigma$, which is Lipschitz and
satisfies $\sigma(x)\rightarrow\pm1$ at least polynomially fast as $x\rightarrow\pm\infty$,
i.e., there exists $p>0$ such that $\frac{\pm1-\sigma(x)}{|x|^{p}}$ is bounded
as $x\to\pm\infty$.
The result is as follows. Define the neural network
\[
f_{\theta}(x):=\sum_{k=1}^{n}c_{k}\sigma(W_{k}^T x+b_{k})+c_{0}.
\]
Then as $n,d,N\rightarrow\infty$, we have
\[
||f-f_{\theta}||_2^{2}\sim\mathcal{O}\Big(\frac{C_{f,d}^{2}}{n}\Big)+\mathcal{O}\Big(\frac{nd}{N}\log N\Big).
\]
In particular, the second term on the right hand side increases as $n$ increases.
This says that increasing the network capacity (number of hidden units) leads
to \emph{overfitting} if the number of training examples does not increase accordingly.
Meanwhile, the first term decreases as $n$ increases. When
\[
n\sim C_{f,d}\left(\frac{N}{d\log N}\right)^{1/2},
\]
the optimal error is achieved:
\[
||f-f_{\theta}||_2^{2}\sim\mathcal{O}\Big(C_{f,d}\left(\frac{d}{N}\log N\right)^{1/2}\Big)
\]
That is, the rate of convergence relative to the number of training examples is
$\sqrt{\log N/N}$, with a rate $1/2$ that is independent of $d$. However, the
constant $C_{f,d}$ may be exponential in $d$.
\section{Introduction}
We consider a multiplayer \emph{surveillance-evasion} game consisting of two
teams, the pursuers and the evaders. The pursuers must maintain line-of-sight
visibility of the evaders for as long as possible as they move through an
environment with obstacles. Meanwhile, the evaders aim to hide from the
pursuers as soon as possible. The game ends when the pursuers lose sight of the
evaders. We assume all players have perfect knowledge of the obstacles and the
game is closed-loop -- each player employs a feedback strategy, reacting
dynamically to the positions of all other players.
In section~\ref{sec:hji}, we consider the game in the context of Hamilton-Jacobi-Isaacs
(HJI) equations. We propose a scheme to compute the value function, which,
informally, describes how ``good'' it is for each player to be in a specific
state. Then each player can pick the strategy that optimizes the value function
locally. Due to the principle of optimality, local optimization with respect to
the value function is globally optimal. This is because the value function
encodes information from all possible trajectories. As a result, the value
function is also very expensive to compute.
Section~\ref{sec:locally} discusses locally-optimal policies and
section~\ref{sec:learning-policy} presents search-based methods
to learn policies for the multiplayer version of the game.
\subsection{Related works}
The surveillance-evasion game is related to a popular class of games called
pursuit-evasion \cite{ho1965differential,isaacs1965differential}, where the
objective is for the pursuer to physically capture the evader. Classical
problems take place in obstacle-free space with constraints on the players'
motion. Variants include the lion and man
\cite{karnad2009lion,sgall2001solution}, where both players have the same
maneuverability, and the homicidal chauffeur \cite{merz1974homicidal}, where
one player drives a vehicle, which is faster, but has constrained mobility.
Lewin et al.\ \cite{lewin1975surveillance} considered a
game in an obstacle-free space, where the pursuer must keep the evader within a
detection circle.
Bardi et al.\ \cite{bardi1999numerical} proposed a semi-Lagrangian scheme for
approximating the value function of the pursuit-evasion game as viscosity solution
to the Hamilton-Jacobi-Isaacs equation, in a bounded domain with no obstacles.
In general, these methods are very expensive, with complexity $\mathcal{O}(m^{kd})$ where
$k$ is the number of players and $d$ is the dimension. This is because the
value function, once computed, can provide the optimal controls for all
possible player positions.
A class of methods tries
to deal with the curse of dimensionality by solving for the solutions of
Hamilton-Jacobi equations at individual points in space and time. These methods
are causality-free; the solution at one point does not depend on solutions at
other points, making them conveniently parallelizable. They are efficient,
since one only solves for the value function locally, where it is needed,
rather than globally. Chow et al.
\cite{chow2017algorithm,chow2018algorithm,chow2019algorithm} use the Hopf-Lax
formula to efficiently solve Hamilton-Jacobi equations for a class of
Hamiltonians. Sparse grid characteristics, due to Kang et al.\ \cite{kang2017mitigating}, is another
causality-free method which finds the solution by solving a boundary value
problem for each point. Unfortunately, these methods do not apply to domains
with obstacles since they cannot handle boundary conditions.
The visibility-based pursuit-evasion game, introduced by Suzuki et al.
\cite{suzuki1992searching}, is a version where the
pursuer(s) must compute the shortest path to find all hidden evaders in a
cluttered environment, or report it is not possible. The number of evaders is
unknown and their speed is unbounded. Guibas et al.
\cite{guibas1997visibility} proposed a graph-based method for polygonal
environments. Other settings include multiple pursuers
\cite{stiffler2014complete}, bounded speeds \cite{tovar2008visibility},
unknown, piecewise-smooth planar environments \cite{sachs2004visibility}, and
simply-connected two-dimensional curved environments
\cite{lavalle2001visibility}.
The surveillance-evasion game has been studied previously
in the literature.
LaValle et al.\ \cite{lavalle1997motion} use dynamic programming to compute
optimal trajectories for the pursuer, assuming a known evader trajectory. For
the case of an unpredictable evader, they suggest a local heuristic: maximize
the probability of visibility of the evader at the next time step.
They also mention, but do not implement, an idea to locally maximize the evader's time to occlusion.
Bhattacharya et al.\ \cite{bhattacharya2008approximation,bhattacharya2011cell}
used geometric arguments to partition the environment into several regions
based on the outcome of the game.
In \cite{bhattacharya2009existence,zou2018optimal}, they use geometry and optimal control to compute
optimal trajectories for a single pursuer and single evader near the corners of a polygon.
The controls are then extended to the whole domain containing polygonal obstacles by partitioning based on the corners
\cite{zou2016optimal},
for the finite-horizon tracking problem \cite{zou2016visibility},
and for multiple players by allocating a pursuer for each evader via the Hungarian matching algorithm
\cite{zhang2016multi}.
Takei et al.\ \cite{takei2014efficient} proposed an efficient
algorithm for computing the static value function corresponding to the open loop game,
where each player moves according to a fixed strategy determined at initial time.
Their open loop game is conservative towards the pursuer, since the evader can
optimally counter any of the pursuer's strategies. As a consequence, the game
is guaranteed to end in finite time, as long as the domain is not
star-shaped.
In contrast, a closed loop game allows players to react dynamically to each other's actions.
In \cite{cartee2019time,gilles2019evasive,takei2015optimal}, the authors
propose optimal paths for an evader to reach a target destination, while
minimizing exposure to an observer. In \cite{gilles2019evasive}, the observer
is stationary. In \cite{cartee2019time}, the observer moves according to a
fixed trajectory. In \cite{takei2015optimal}, the evader can tolerate brief
moments of exposure so long as the consecutive exposure time does not exceed a
given threshold. In all three cases, the observer's controls are restricted to
choosing from a known distribution of trajectories; they are not allowed to
move freely.
Bharadwaj et al.\ \cite{bharadwaj2018synthesis} use reactive synthesis to
determine the pursuer's controls for the surveillance-evasion game on a discrete
grid. They propose a method of \emph{belief abstraction} to coarsen the state
space and only refine as needed. The method is quadratic in the number of
states: $\mathcal{O}(m^{2kd})$ for $k$ players. While it is more computationally
expensive than the Hamilton-Jacobi based methods, it is more flexible in being
able to handle a wider class of temporal surveillance objectives, such as
maintaining visibility at all times, maintaining a bound on the spatial
uncertainty of the evader, or guaranteeing visibility of the evader infinitely
often.
Recently, Silver et al.\ developed the AlphaGoZero and AlphaZero programs that
excel at playing Go, Chess, and Shogi, without using any prior knowledge of the
games besides the rules \cite{silver2017mastering,silver2017mastering2}. They
use Monte Carlo tree search, deep neural networks and self-play reinforcement
learning to become competitive with the world's top professional players.
\subsection*{Contributions}
We use a Godunov upwind scheme to compute the value function for the
closed loop surveillance-evasion game with obstacles in two dimensions. The
state space is four dimensional. The value function allows us to compute the
optimal feedback controls for the pursuers and evaders. Unlike the static game
\cite{takei2014efficient}, it is possible for the pursuer to win. However, the
computation is $\mathcal{O}(m^{kd})$ where $k$ is the number of players and $d$ the
dimensions.
As the number of players grows, computing the value function becomes infeasible.
We propose locally optimal strategies for the multiplayer surveillance-evasion
game, based on the value function for the static game. In addition, we propose
a deep neural network trained via self play and Monte Carlo tree search to
learn controls for the pursuer. Unlike Go, Chess, and Shogi, the
surveillance-evasion game is not symmetric; the pursuers and evaders require
different tactics. We use the local strategies to help improve the
efficiency of self-play.
The neural network is trained offline on a class of environments. Then,
during play time, the trained network can be used to play games efficiently on
previously unseen environments. That is, at the expense of preprocessing time
and optimality, we present an algorithm which can run efficiently. While the
deviation from optimality may sound undesirable, it actually is reasonable.
Optimality assumes perfect actions and instant reactions. In real
applications, noise and delays will perturb the system away from optimal
trajectories. We show promising examples in 2D.
\section{Visibility level sets}
We review our representation of geometry and visibility.
All the functions described below can be computed efficiently in $\mathcal{O}(m^d)$, where
$m$ is the number of grid points in each of $d$ dimensions.
\subsection*{Level set functions}
Level set functions \cite{osher1988fronts,sethian1999level,osher2006level} are useful as an implicit
representation of geometry. Let $\obs \subseteq \mathbb{R}^d$ be a closed set
of a finite number of connected components
representing the obstacle. Define the occluder function $\map$ with the following properties:
\begin{equation} \begin{aligned}
\begin{cases}
\map(x) < 0 & x \in \obs \\
\map(x) = 0 & x \in \partial \obs \\
\map(x) > 0 & x \notin \obs \\
\end{cases}
\label{eq:occluder}
\end{aligned} \end{equation}
The occluder function is not unique; notice that for any constant $c>0$, the function $c \map$
also satisfies (\ref{eq:occluder}). We use the signed distance function as the occluder function:
\begin{equation} \begin{aligned}
\map(x):=
\begin{dcases}
-\inf_{y\in \partial \obs} \|x-y\|_2 & x\in \obs \\
\inf_{y\in \partial \obs} \|x-y\|_2 & x \notin \obs
\end{dcases}
\end{aligned} \end{equation}
The signed distance function is a viscosity solution to the Eikonal equation:
\begin{equation} \begin{aligned}
|\nabla \map| &= 1 \\
\map(x) &= 0 \text{ for } x \in \partial \obs
\end{aligned} \end{equation}
It can be computed, for example, using the fast sweeping method \cite{tsai2002rapid} or
the fast marching method \cite{tsitsiklis1995efficient}.
\subsection*{Visibility function}
Let $\free$ be the open set representing free space. Let $\visset_{x_0}$ be the set of points in $\free$
visible from $x_0\in\free$. We seek a function $\vis(x,x_0)$ with properties
\begin{equation} \begin{aligned}
\begin{cases}
\vis(x,x_0) > 0 & x \in \visset_{x_0} \\
\vis(x,x_0) = 0 & x \in \partial \visset_{x_0} \\
\vis(x,x_0) < 0 & x \notin \visset_{x_0} \\
\end{cases}
\label{eq:visibility}
\end{aligned} \end{equation}
Define the visibility level set function $\vis$:
\begin{equation} \begin{aligned}
\vis(x,{x_0}) = \min_{r\in[0,1]} \map(x_0 + r(x-{x_0}))
\end{aligned} \end{equation}
It can be computed efficiently using the fast sweeping method based on the PDE formulation described in \cite{tsai2004visibility}:
\begin{equation} \begin{aligned}
\nabla \vis \cdot \frac{x-{x_0}}{|x-{x_0}|} &= \min \Big\{ H(\vis-\map) \nabla \vis \cdot \frac{x-{x_0}}{|x-{x_0}|},0 \Big\} \\
\vis({x_0},{x_0}) &= \map({x_0})
\end{aligned} \end{equation}
where $H$ is the characteristic function of $[0,\infty)$.
\subsection*{Shadow function}
\label{sec:shadow}
When dealing with visibility, it is useful to represent the shadow regions.
The gradient of the occluder function $\nabla \map$ is perpendicular to the
level sets of $\map$. The dot product
of $\nabla \map$ and the viewing direction $(x_0-x)$ characterizes the cosine
of the grazing angle $\theta$ between the obstacle boundary and the viewing direction. In particular,
$|\theta|<\pi/2$ for the portion of the obstacle boundary that is directly visible to $x_0$.
Define the grazing function:
\begin{equation} \begin{aligned}
\graze(x,x_0) &= (x_0-x)^T \cdot \nabla \map(x)\\
\end{aligned} \end{equation}
By masking with the occluder function, we can characterize the portion of the
obstacle boundary that is not visible from the vantage point $x_0$. Define the
auxiliary and auxiliary visibility functions: \begin{equation} \begin{aligned} \aux(x,x_0) &= \max \{
\map(x), \graze(x,x_0) \} \\ \tilde{\aux}(x,x_0) &= \min_{r\in[0,1]} \aux(x
+ r(x_0-x),x_0) \\ \end{aligned} \end{equation}
By masking the auxiliary visibility function with the obstacle, we arrive at the desired
shadow function \cite{takei2014efficient}:
\begin{equation} \begin{aligned}
\xi(x,x_0) &= \max\{ \tilde{\aux}(x,x_0), -\map(x)\}
\end{aligned} \end{equation}
\captionsetup[subfloat]{labelformat=empty}
\begin{figure}[hptb]
\centering
\subfloat[Occluder function $\map(\cdot)$]{\includegraphics[width=.35\textwidth]{figures/ls_notation_occluder.pdf}} \quad
\subfloat[Visibility function $\vis(\cdot)$]{\includegraphics[width=.35\textwidth]{figures/ls_notation_visibility.pdf}} \\
\subfloat[Grazing function $\graze(\cdot)$]{\includegraphics[width=.35\textwidth]{figures/ls_notation_dot.pdf}} \quad
\subfloat[Auxiliary function $\aux(\cdot)$]{\includegraphics[width=.35\textwidth]{figures/ls_notation_aux.pdf}} \\
\subfloat[Auxiliary visibility function $\tilde{\aux}(\cdot)$]{\includegraphics[width=.35\textwidth]{figures/ls_notation_aux_tilde.pdf}} \quad
\subfloat[Shadow function $\xi(\cdot)$]{\includegraphics[width=.35\textwidth]{figures/ls_notation_shadow.pdf}}
\caption{Level set functions from a vantage point $x_0$ (blue dot). Each function is negative in the shaded region. Obstacle boundary shown as black contour.} \label{fig:ls_notation}
\end{figure}
The difference between the shadow function and the visibility function is that
the shadow function excludes the obstacles.
Although
$$\tilde{\xi}(x,x_0) = \max\{-\map(x),\vis(x,x_0)\}$$
looks like a candidate shadow function, it
is not correct. In particular
$$\{ x|\tilde{\xi}(x,x_0)=0\}$$
includes the portion of the obstacle boundary visible to $x_0$.
Figure~\ref{fig:ls_notation} summarizes the relevant level set functions used in this work.
\section{Value function from HJI equation}
\label{sec:hji}
Without loss of generality, we formulate the two player game, with a single pursuer and single evader.
The domain $\Omega\subseteq \mathbb{R}^d$ consists of obstacles and free space:
$\Omega=\obs\cup\free$. Consider a pursuer and an evader whose trajectories
are given by ${x_P},{x_E}:[0,\infty) \to \free$, respectively.
Let $A:= S^{d-1}\cup \{{\bf 0}\}$ be the compact set of control values.
The feedback controls map the players' positions to a control value:
\begin{equation} \begin{aligned}{\sigma_P},{\sigma_E}\in{\mathcal{A}}:=\{\sigma:\free\times\free\to A \ | \ \sigma \text{ measurable}\},\end{aligned} \end{equation}
where ${\mathcal{A}}$ is the set of admissible controls.
The players move with velocities ${f_P},{f_E}:
\Omega \to [0,\infty)$ according to the dynamics
\begin{equation} \begin{aligned}
\dot{{x_P}}(t) &= {f_P}({x_P}(t)) {\sigma_P}({x_P}(t),{x_E}(t)) \qquad &\dot{{x_E}}(t) &= {f_E}({x_E}(t)) {\sigma_E}({x_P}(t),{x_E}(t))\\
{x_P}(0)&= {\xp^0} \qquad &{x_E}(0)&= {\xe^0} \\
\end{aligned} \end{equation}
For clarity of notation, we will omit the dependence of the controls on
the players' positions. For simplicity, we assume velocities are isotropic,
meaning the speed does not depend on the direction of motion. In real-world scenarios, this may not
be the case. For example, an airplane's dynamics might be constrained by its
momentum and turning radius.
As a slight relaxation, we consider the finite-horizon version of the game, where the pursuers
win if they can prolong the game past a time threshold $T$.
Let $\mathcal{T_\text{end}} := \{({x_P},{x_E}) \ | \ \xi({x_P},{x_E})\le 0\}$ be the end-game set of losing positions,
where $\xi$ is the shadow function defined in section~\ref{sec:shadow}.
Define the payoff function
\begin{equation} \begin{aligned}
\mathcal{J}[{\xp^0},{\xe^0},t,{\sigma_P},{\sigma_E}] :=
\inf\{0\le \tau \le t|({x_P}(\tau),{x_E}(\tau)) \in \mathcal{T_\text{end}} \},
\end{aligned} \end{equation}
where $\mathcal{J}[{\xp^0},{\xe^0},t,{\sigma_P},{\sigma_E}]:=t$ if the set $\{0\le \tau \le t \,|\, ({x_P}(\tau),{x_E}(\tau))\in\mathcal{T_\text{end}}\}$ is empty.
The payoff is the minimum time-to-occlusion for a given set of
initial positions and controls.
Define the finite-horizon value function as:
\begin{equation} \begin{aligned}
V({\xp^0},{\xe^0},t) = \sup_{{\sigma_P}\in{\mathcal{A}}} \inf_{{\sigma_E}\in{\mathcal{A}}} \mathcal{J}[{\xp^0},{\xe^0},t,{\sigma_P},{\sigma_E}]
\end{aligned} \end{equation}
The value function describes the length of the game played to time $t$, starting from all pairs of initial positions, and assuming optimal controls.
We are interested in $V({\xp^0},{\xe^0},T)$ for a sufficiently large $T$, which characterizes
the set of initial positions from which the pursuers can maintain visibility of the evaders
for at least $T$ time units.
As $T\to \infty$, we recover the infinite-horizon value function.
By using the principle of optimality and Taylor expansion, one can derive
the Hamilton-Jacobi-Isaacs equation \cite{bardi2008optimal,crandall1983viscosity,evans1984differential}:
\iffalse
Consider a pair of initial positions ${\xp^0}$ and ${\xe^0}$. For clarify of notation, we use ${x_P}(t)$ and ${x_E}(t)$ to denote the positions
of the pursuer and evader at time $t$ according to the controls ${\sigma_P}$ and ${\sigma_E}$, assuming ${x_P}(0)={\xp^0}$ and ${x_E}(0)={\xe^0}$.
The value function satisfies the Dynamic Programming Principle:
\begin{align}
V({\xp^0},{\xe^0}) &= \sup_{\sigma_P} \inf_{\sigma_E} \mathcal{J}[{\xp^0},{\xe^0},{\sigma_P},{\sigma_E}] \\
&= t + \sup_{\sigma_P} \inf_{\sigma_E} \mathcal{J}[{x_P}(t),{x_E}(t),{\sigma_P},{\sigma_E}] \\
&= t + \sup_{\sigma_P} \inf_{\sigma_E} V({x_P}(t),{x_E}(t) )
\end{align}
Rearranging and dividing by $t>0$, we have
\begin{equation} \begin{aligned}
\frac{1}{t} \Big[ V({\xp^0},{\xe^0}) - V({x_P}(t),{x_E}(t))\Big] &= 1
\label{eq:dpp}
\end{aligned} \end{equation}
By Taylor expansion, we have
\begin{align}
{x_P}(t) &= {\xp^0} + t \dot{{x_P}}(0) + \mathcal{O}(t^2) \\
{x_E}(t) &= {\xe^0} + t \dot{{x_E}}(0) + \mathcal{O}(t^2) \\
V({x_P}(t),{x_E}(t)) &= V({\xp^0},{\xe^0}) + t \ \nabla_{x_P} V \cdot \dot{{x_P}}(0) + t \ \nabla_{x_E} V \cdot \dot{{x_E}}(0) + \mathcal{O}(t^2)
\end{align}
Substituting back into (\ref{eq:dpp}), and taking the limit
\begin{align}
\lim_{t\to0^+} \frac{1}{t} \Big[ V({\xp^0},{\xe^0}) - V({x_P}(t),{x_E}(t))\Big] &= 1 \\
\lim_{t\to0^+} \frac{1}{t} \Big[ -t \ \nabla_{x_P} V \cdot \dot{{x_P}}(0) - t \ \nabla_{x_E} V \cdot \dot{{x_E}}(0) + \mathcal{O}(t^2) \Big] &= 1 \\
- \ \nabla_{x_P} V \cdot \dot{{x_P}}(0) - \ \nabla_{x_E} V \cdot \dot{{x_E}}(0) &= 1 \\
- {f_P}({\xp^0})\ \nabla_{x_P} V \cdot {\sigma_P}({\xp^0},{\xe^0}) - {f_E}({\xe^0}) \ \nabla_{x_E} V \cdot {\sigma_E}({\xp^0},{\xe^0}) &= 1
\end{align}
\fi
\begin{equation} \begin{aligned}
V_t + \inf_{{\sigma_E}\in A} \sup_{{\sigma_P} \in A} \{-{f_P}\, \nabla_{x_P} V \cdot {\sigma_P} - {f_E}\, \nabla_{x_E} V \cdot {\sigma_E}\}
&= 1 \ , &\text{ on } \free\setminus\mathcal{T_\text{end}} \\
V({x_P},{x_E},0) &= 0 \\
V({x_P},{x_E},t) &= 0 \ , & ({x_P},{x_E})\in\mathcal{T_\text{end}} \\
V({x_P},{x_E},t) &= \infty \ , & {x_P} \text{ or }{x_E} \in \obs \label{eq:hji}
\end{aligned} \end{equation}
It has been shown that the value function is the viscosity solution \cite{bardi2008optimal,crandall1983viscosity,evans1984differential} to \eqref{eq:hji}.
Since the velocities are isotropic, this simplifies to the following Eikonal equation:
\begin{equation} \begin{aligned}
V_t -{f_P} |\nabla_{x_P} V| +{f_E} |\nabla_{x_E} V| &= 1 \ , &\text{ on } \free\setminus\mathcal{T_\text{end}} \\
V({x_P},{x_E},0) &= 0 \\
V({x_P},{x_E},t) &= 0 \ , & ({x_P},{x_E} )\in \mathcal{T_\text{end}} \\
V({x_P},{x_E},t) &= \infty \ , & {x_P} \text{ or }{x_E} \in \obs \label{eq:hji-eikonal}
\end{aligned} \end{equation}
The optimal controls can be recovered by computing the gradient of the value function:
\begin{align} \label{eq:optcontrols}
{\sigma_P} &= \frac{\nabla_{x_P} V}{|\nabla_{x_P} V|} \ , \qquad
{\sigma_E} = -\frac{\nabla_{x_E} V}{|\nabla_{x_E} V|} \ .
\end{align}
\subsection{Algorithm}
\iffalse
We solve (\ref{eq:hji-eikonal}) as the steady state of the time-dependent equation \cite{osher1993level}:
\begin{equation} \begin{aligned}
V_t -{f_P} |\nabla_{x_P} V| +{f_E} |\nabla_{x_E} V| &= 1 \qquad &\text{ on } \free\setminus\mathcal{T_\text{end}} \\
V({x_P},{x_E},t) &= 0 &\qquad ({x_P},{x_E} )\in \mathcal{T_\text{end}} \\
V({x_P},{x_E}) &= \infty &\qquad {x_P} \text{ or }{x_E} \in \obs \label{eq:hji-eikonal-time}
\end{aligned} \end{equation}
\fi
Following the ideas in \cite{tsai2003fast}, we discretize the gradient using an upwind scheme as follows.
Let ${x_P}_{i,j},{x_E}_{k,l}\in\free$ be the discretized positions with grid
spacing $h$. Denote $V_{i,j,k,l}$ as the numerical solution to (\ref{eq:hji-eikonal}) for
initial positions ${x_P}_{i,j},{x_E}_{k,l}$. We estimate the gradient using finite
difference. For clarity, we will only mark the relevant subscripts, e.g.
$V_{i+1}:=V_{i+1,j,k,l}$.
\begin{equation} \begin{aligned}
{x_P}_{x^-} &:= \frac{1}{h} \Big(V_{i} - V_{i-1}\Big), \qquad &{x_E}_{x^-} &:= \frac{1}{h} \Big(V_{k} - V_{k-1}\Big),\\
{x_P}_{x^+} &:= \frac{1}{h} \Big(V_{i+1} - V_{i}\Big), \qquad &{x_E}_{x^+} &:= \frac{1}{h} \Big(V_{k+1} - V_{k}\Big),\\
{x_P}_{y^-} &:= \frac{1}{h} \Big(V_{j} - V_{j-1}\Big), \qquad &{x_E}_{y^-} &:= \frac{1}{h} \Big(V_{l} - V_{l-1}\Big),\\
{x_P}_{y^+} &:= \frac{1}{h} \Big(V_{j+1} - V_{j}\Big), \qquad &{x_E}_{y^+} &:= \frac{1}{h} \Big(V_{l+1} - V_{l}\Big).\\
\end{aligned} \end{equation}
Let $a^- := -\min(0,a)$ and
$a^+ := \max(0,a)$. Define
\begin{equation} \begin{aligned}
{\text{sgn}}\max(a,b) &:=
\begin{cases}
a^+ & \text{ if } \max(a^+,b^-) = a^+ \\
-b^- & \text{ if } \max(a^+,b^-) = b^- \\
\end{cases} \\
\end{aligned} \end{equation}
and
\begin{equation} \begin{aligned}
\partial {x_P}_x V &= {\text{sgn}}\max ({x_P}_{x^+}, {x_P}_{x^-} ) \\
\partial {x_P}_y V &= {\text{sgn}}\max ({x_P}_{y^+}, {x_P}_{y^-} ) \\
\partial {x_E}_x V &= {\text{sgn}}\max ({x_E}_{x^-}, {x_E}_{x^+} ) \\
\partial {x_E}_y V &= {\text{sgn}}\max ({x_E}_{y^-}, {x_E}_{y^+} ) \\
\end{aligned} \end{equation}
Finally, the desired numerical gradients are
\begin{equation} \begin{aligned}
|\nabla_{x_P} V| &= \Big( (\partial {x_P}_x V )^2 + (\partial {x_P}_y V)^2 \Big)^{1/2} \\
|\nabla_{x_E} V| &= \Big( (\partial {x_E}_x V )^2 + (\partial {x_E}_y V)^2 \Big)^{1/2} \\
\end{aligned} \end{equation}
Then we have a simple explicit scheme.
\begin{equation} \begin{aligned}
V^{n+1} = V^{n} + \Delta t (1 + {f_P} |\nabla_{x_P} V| - {f_E} |\nabla_{x_E} V| )
\label{eq:hji-scheme}
\end{aligned} \end{equation}
The CFL condition dictates that the time step $\Delta t$ should satisfy
\begin{equation} \begin{aligned}
\Delta t \le \frac{h}{16 \max({f_P},{f_E})}
\end{aligned} \end{equation}
For a given environment, we precompute the value function by iteration until convergence.
During play time, we initialize ${\xp^0},{\xe^0}$ and compute the optimal trajectories according to
(\ref{eq:optcontrols}) using $\Delta t$ time increments.
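As an illustration, one explicit update of the scheme (\ref{eq:hji-scheme}) can be sketched by transcribing the sgn-max differences above with NumPy. The sketch below assumes constant speeds, uses periodic differences (\texttt{np.roll}) for brevity, and omits the obstacle boundary treatment and the pinning of $V$ on $\mathcal{T}_\text{end}$; it is meant only to make the update concrete.
\begin{verbatim}
import numpy as np

def sgnmax(a, b):
    # a+ if max(a+, b-) equals a+, otherwise -b-.
    ap = np.maximum(a, 0.0)
    bm = np.maximum(-b, 0.0)
    return np.where(ap >= bm, ap, -bm)

def hji_step(V, fP, fE, h, dt):
    # V is indexed as V[i, j, k, l]: (i, j) pursuer position, (k, l) evader position.
    dp = lambda ax: (np.roll(V, -1, axis=ax) - V) / h   # forward difference
    dm = lambda ax: (V - np.roll(V, 1, axis=ax)) / h    # backward difference
    dPx, dPy = sgnmax(dp(0), dm(0)), sgnmax(dp(1), dm(1))
    dEx, dEy = sgnmax(dm(2), dp(2)), sgnmax(dm(3), dp(3))
    gradP = np.sqrt(dPx**2 + dPy**2)   # |grad_{x_P} V|
    gradE = np.sqrt(dEx**2 + dEy**2)   # |grad_{x_E} V|
    return V + dt * (1.0 + fP * gradP - fE * gradE)
\end{verbatim}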
\subsubsection{Boundary conditions}
The obstacles appear in the HJI equation as boundary conditions.
However, direct numerical implementation leads to artifacts near the obstacles.
Instead, we model the obstacles by setting the velocities to be small inside obstacles.
We regularize the velocities by adding a smooth transition \cite{takei2014efficient}:
\begin{align}
v_\epsilon(x) &=
\begin{cases}
v(x) & \map(x)>0 \\
v_\text{min} + \frac{v(x)-v_\text{min}}{2} \Big[ \cos\Big(\frac{\map(x)\pi}{2\epsilon}\Big)+1 \Big] & \map(x)\in[-2\epsilon,0]\\
v_\text{min} & \map(x) < -2\epsilon
\end{cases}
\end{align}
where $\map$ is the signed distance function to the obstacle boundaries.
In the numerical experiments, we use $\epsilon=16\Delta x$ and $v_\text{min} = 1/100$.
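A minimal sketch of this regularization, assuming the velocity \texttt{v} and the signed distance \texttt{phi} are arrays sampled on the grid (names are illustrative):
\begin{verbatim}
import numpy as np

def regularized_velocity(v, phi, eps, v_min=0.01):
    # v_eps equals v in free space, v_min deep inside the obstacle,
    # and a cosine blend in the transition band phi in [-2*eps, 0].
    blend = v_min + 0.5 * (v - v_min) * (np.cos(phi * np.pi / (2.0 * eps)) + 1.0)
    v_eps = np.where(phi > 0, v, v_min)
    band = (phi <= 0) & (phi >= -2.0 * eps)
    return np.where(band, blend, v_eps)
\end{verbatim}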
\subsection{Numerical results}
\subsubsection*{Stationary pursuer}
We verify that the scheme converges numerically for the case in which the pursuer is stationary.
When ${f_P}=0$, the HJI equation \eqref{eq:hji-eikonal} becomes the time-dependent Eikonal equation:
\begin{equation} \begin{aligned}
V_t + {f_E} |\nabla_{x_E} V| &= 1 \qquad &\text{ on } \free\setminus\mathcal{T_\text{end}} \\
V({x_P},{x_E},0) &= 0 \\
V({x_P},{x_E},t) &= 0 &\qquad ({x_P},{x_E} )\in \mathcal{T_\text{end}} \label{eq:hji-stationary}
\end{aligned} \end{equation}
In particular, for sufficiently large $t$, the value function reaches a steady state, and satisfies the Eikonal equation:
\begin{equation} \begin{aligned}
{f_E} |\nabla_{x_E} V| &= 1 \qquad &\text{ on } \free\setminus\mathcal{T_\text{end}} \\
V({x_P},{x_E}) &= 0 &\qquad ({x_P},{x_E} )\in \mathcal{T_\text{end}} \label{eq:hji-stationary-steady}
\end{aligned} \end{equation}
For this special case, the exact solution is known; the solution corresponds to the evader's travel time to
the end-game set $\mathcal{T_\text{end}}$.
Also, this case effectively reduces the computational cost
from $\mathcal{O}(m^4)$ to $\mathcal{O}(m^2)$, so that we can reasonably compute solutions at higher resolutions.
We consider the domain $\Omega=[0,1)\times[0,1)$ with a single circular obstacle of radius $0.15$ centered at $(1/2,1/2)$.
The pursuer is stationary at ${x_P}_0=(1/8,1/2)$.
We use $\Delta t = \Delta x/20$ and iterate until the solution no longer changes in the $L_1$ sense, using a tolerance of $10^{-5}$.
We compute the ``exact'' solution using the fast marching method on a high-resolution grid with $M=2048$.
We vary $m$ from $16$ to $1024$ and observe convergence in the $L_1$ and $L_2$ sense,
as seen in Table~\ref{tab:hji_conv}.
In Figure~\ref{fig:hji_contour}, we plot the level curves comparing the computed solution at $m=512,1024$ to the ``exact'' solution.
Notice the discrepancies are a result of the difficulty in dealing with boundary conditions.
However, these errors decay as the grid is refined.
The case where the evader is stationary is not interesting.
\begin{table}[]
\begin{center}
\caption{Error for the stationary pursuer case, compared to the known solution computed using fast marching method at resolution $M=2048$.\vspace{1em}}
\label{tab:hji_conv}
\begin{tabular}{c|c|c}
$m $ & $L_1$ error & $L_2$ error \\
\hline
$16 $ & $0.08972215 $ &$0.01288563 $ \\
$32 $ & $0.03177683 $ &$0.00159669 $ \\
$64 $ & $0.02442984 $ &$0.00111537 $ \\
$128 $ & $0.01059728 $ &$0.00021345 $ \\
$256 $ & $0.00515584 $ &$0.00005214 $ \\
$512 $ & $0.00304322 $ &$0.00001961 $ \\
$1024$ & $0.00086068 $ &$0.00000142 $
\end{tabular}
\end{center}
\end{table}
\begin{figure}[hptb]
\centering
\includegraphics[width=.45\textwidth]{figures/hj_convergence_512.pdf} \quad
\includegraphics[width=.45\textwidth]{figures/hj_convergence_1024.pdf}
\caption{Comparison of contours of the ``exact'' solution (blue) with those
computed by the scheme (\ref{eq:hji-scheme}) (red) using grid resolutions $m=512$ (left) and
$m=1024$ (right). The pursuer (blue square) is stationary. The error emanates from the obstacle
due to boundary conditions, but the scheme converges
as the grid is refined.} \label{fig:hji_contour}
\end{figure}
\subsubsection*{A circular obstacle}
The evader has the advantage in the surveillance-evasion game. It is
difficult for the pursuer to win unless it is sufficiently fast.
But once it is fast enough, it can almost always win.
Define the winning regions for the pursuer and evader, respectively:
\begin{align}
\mathcal{W}_{x_P} &= \{({x_P},{x_E}) | V({x_P},{x_E}) > T_\text{max}\} \\
\mathcal{W}_{x_E} &= \{({x_P},{x_E}) | V({x_P},{x_E}) \le T_\text{max}\}
\end{align}
Here, we use $T_\text{max}=0.9\,T$ to tolerate numerical artifacts due to boundary conditions.
In Figure~\ref{fig:hj_circles_winset}, we show how the winning region
for a fixed evader/pursuer position changes as the pursuer's speed increases.
Since it is difficult to visualize data in
4D, we plot the slices $V({\xp^0},\cdot)$ and $V(\cdot,{\xe^0})$ where
${\xp^0}={\xe^0}=(1/8,1/2)$.
We use $m=64$, $\Delta t = \Delta x/20$ and iterate until $T=10$.
We fix ${f_P}=1$ and compute $V$ for each ${f_E}\in\{\frac{1}{3},\frac{1}{2},\frac{2}{3}\}$.
The computation for each value function takes 16 hours.
\begin{figure}[hptb]
\centering
\includegraphics[width=.78\textwidth]{figures/hj_circle_winsets_cE_66.pdf}
\includegraphics[width=.78\textwidth]{figures/hj_circle_winsets_cE_50.pdf}
\includegraphics[width=.78\textwidth]{figures/hj_circle_winsets_cE_33.pdf}
\caption{Comparison of winning initial positions for the evader (left,
red contour) against a pursuer with fixed initial position (blue square) and
vice versa -- winning initial positions for the pursuer (right, blue contour)
against an evader with fixed initial position (red circle). Left column shows
$V({\xp^0},\cdot)$ while right column shows $V(\cdot,{\xe^0})$, where higher values of
$V$ are yellow, while lower values are dark blue. From top to bottom, the pursuer is $1.5$, $2$
and $3$ times faster than the evader.
The pursuer must be sufficiently fast to have a chance at winning.
} \label{fig:hj_circles_winset} \end{figure}
Figure~\ref{fig:hj_circles_traj} shows trajectories from several initial positions
with various speeds. Interestingly, once the evader is cornered, the optimal controls
dictate that it is futile to move. That is, the value function is locally constant.
\begin{figure}[hptb]
\centering
\includegraphics[width=.45\textwidth]{figures/hj_traj_circles_32_08_04_60_cE_100.pdf}
\includegraphics[width=.45\textwidth]{figures/hj_traj_circles_32_08_04_60_cE_50.pdf}
\includegraphics[width=.45\textwidth]{figures/hj_traj_circles_48_08_04_32_cE_100.pdf}
\includegraphics[width=.45\textwidth]{figures/hj_traj_circles_48_08_04_32_cE_50.pdf}
\caption{Trajectories of several games played around a circle. The
pursuer loses when it has the same speed as the evader (left column). When the pursuer is 2x faster than the evader,
it is possible to win; the evader essentially gives up once it is cornered, since no controls will change the outcome (right column).
Initial positions are shown as stars. Black lines connect positions at constant time intervals.}
\label{fig:hj_circles_traj} \end{figure}
\subsubsection*{More obstacles}
In Figure~\ref{fig:hj_dice_traj} we consider a more complicated environment with multiple obstacles.
Here, the pursuer is twice as fast as the evader. Although there are many obstacles, the dynamics
are not so interesting in the sense that the evader will generally navigate towards a single obstacle.
Again, the evader tends to give up once the game has been decided.
Finally, in Figure~\ref{fig:hj_human_traj} we show suboptimal controls for the
evader. In particular, the evader is controlled manually. Although manual
controls do not help the evader win, they lead to more interesting
trajectories.
\begin{figure}[htb]
\centering
\includegraphics[width=.45\textwidth]{figures/hj_traj_dice_32_08_07_37_cE_50.pdf}
\includegraphics[width=.45\textwidth]{figures/hj_traj_dice_56_08_57_35_cE_50.pdf}
\caption{Trajectories of games played around 5 circular obstacles. Pursuer (blue) is 2x as fast as evader (red).
The evader wins (left) if it can quickly hide. Otherwise it will give up once it is captured (right).
Initial positions are shown as stars. Black lines connect positions at constant time intervals.}
\label{fig:hj_dice_traj} \end{figure}
\begin{figure}[hptb]
\centering
\includegraphics[width=.45\textwidth]{figures/hj_traj_circles_32_08_32_21_cE_33_human.pdf}
\includegraphics[width=.45\textwidth]{figures/hj_traj_dice_56_08_57_35_cE_50_human.pdf}
\caption{Manually controlled evader against an optimal pursuer. The evader loses in both cases, but does not give up.}
\label{fig:hj_human_traj} \end{figure}
\subsection{Discussion}
Notice that an optimal trajectory for the pursuer balances distance and
visibility. While moving closer to the evader will guarantee that it cannot
``get away'', this is not sufficient, since being close leads to more
occlusions. On the other hand, moving far away gives better visibility of the
environment, but may make it impossible to catch up once the evader turns a corner.
Although we described the formulation for a single pursuer and evader,
the same scheme holds for multiple pursuers and evaders. The end-game set just
needs to be modified to take into account the multiple players.
That is, the game ends as soon as any evader is occluded from all pursuers.
However, the computational complexity of the scheme is $\mathcal{O}(m^{kd})$, where $k$ is the number of players, which
quickly becomes infeasible even for small grid sizes.
At the expense of offline compute time, the game can be played efficiently
online. The caveat is that the value function is only valid for the specific
map and velocities computed offline.
\section{Locally optimal strategies}
\label{sec:locally}
As the number of players increases, computing the value function from the HJI
equations is no longer tractable.
We consider a discrete version of the game, with the aim of feasibly
computing controls for games with multiple pursuers and multiple evaders.
Each player's position is now restricted to a grid, and at each turn,
the player can move to a new position within a neighborhood determined by its
velocity. Players cannot move through obstacles.
Formally, define the arrival time functions
\begin{equation} \begin{aligned}
\parrival(x,y) &:= \min_{{\sigma_P}\in{\mathcal{A}}} \min \{t|{x_P}(0)=x,{x_P}(t)=y\}\\
\earrival(x,y) &:= \min_{{\sigma_E}\in{\mathcal{A}}} \min \{t|{x_E}(0)=x,{x_E}(t)=y\} .
\end{aligned} \end{equation}
The set of valid actions consists of the positions $y$ which can be reached from $x$
within a $\Delta t$ time increment:
\begin{equation} \begin{aligned}
\pvalid(t)&:=\{y\in\free | \parrival({x_P}(t),y) \le \Delta t\}\\
\evalid(t)&:=\{y\in\free | \earrival({x_E}(t),y) \le \Delta t\}.
\end{aligned} \end{equation}
In a $\Delta t$ time step, each player can move to a position
\begin{equation} \begin{aligned}
{x_P}(t+\Delta t) \in \pvalid(t) \\
{x_E}(t+\Delta t) \in \evalid(t).
\end{aligned} \end{equation}
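As a small illustration, once the arrival-time field from a player's current position has been computed (for example with a fast marching Eikonal solver), the valid action set is simply a threshold on that field; the helper below is a sketch with illustrative names.
\begin{verbatim}
import numpy as np

def valid_moves(arrival_time, dt):
    # arrival_time : grid of travel times from the player's current position,
    #                set to infinity inside obstacles.
    # Returns the grid indices reachable within one dt time increment.
    return np.argwhere(arrival_time <= dt)
\end{verbatim}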
Analogously, for multiple players, denote the number of pursuers and evaders as
$k_P$ and $k_E$, respectively. Define
\begin{equation} \begin{aligned}
{{\bf P}} &= ({x_P}_1,\dots, {x_P}_{k_P})\\
{{\bf E}} &= ({x_E}_1,\dots, {x_E}_{k_E})\\
\bfpvalid(t) &= \{{{\bf P}}| {x_P}_i \in \pvalid(t) \ , \ i=1,\dots,k_P\} \\
\bfevalid(t) &= \{{{\bf E}}| {x_E}_j \in \evalid(t) \ , \ j=1,\dots,k_E\}
\end{aligned} \end{equation}
so that in $\Delta t$ time, each team can move to
\begin{equation} \begin{aligned}
{{\bf P}}(t+\Delta t) \in \bfpvalid(t) \\
{{\bf E}}(t+\Delta t) \in \bfevalid(t) \\
\end{aligned} \end{equation}
The game ends as soon as one evader is occluded from all
pursuers. The end-game set is
\begin{equation} \begin{aligned}
\mathcal{T_\text{end}} = \{ ({{\bf P}},{{\bf E}}) \ |\ \exists \ j : \xi({x_P}_i,{x_E}_j)\le 0 \text{ for } i=1,\dots,k_P\} .
\end{aligned} \end{equation}
We propose two locally optimal strategies for the pursuer.
\subsection{Distance strategy}
The trajectories from the section~\ref{sec:hji} suggest that the pursuer must
generally remain close to the evader. Otherwise, the evader can quickly hide behind obstacles.
A simple strategy for the pursuer is to move towards the evader:
\begin{equation} \begin{aligned}
{x_P}(t+\Delta t) &= \argmin_{x\in\pvalid(t)} \parrival\Big(x,{x_E}(t) \Big).
\end{aligned} \end{equation}
That is, in the time increment $\Delta t$, the pursuer should pick the action that minimizes
its travel time to the evader's current position at time $t$.
For the multiplayer game, we propose a variant of the
Hausdorff distance, where max is replaced by a sum:
\begin{align*}
{\bf \parrival}({\bf x, {x_E}}(t)) :=
\frac{1}{2}\Big[ \sum_{i=1}^{k_P} \min_j \parrival[i]\Big( x_i,{x_E}_j(t) \Big)^2 \ \Big]^{1/2} + \frac{1}{2}\Big[ \sum_{j=1}^{k_E} \min_i \parrival[i]\Big( x_i,{x_E}_j(t)\Big)^2 \ \Big]^{1/2} .
\end{align*}
Informally, the first term encourages each pursuer to be close to some evader,
while the second term encourages each evader to have some pursuer close to it. Using a sum
rather than a maximum helps to prevent ties.
The optimal action according to the distance strategy is
\begin{equation} \begin{aligned}
{{{\bf P}}(t+\Delta t)} = \argmin_{{\bf x} \in \bfpvalid(t)}{\bf \parrival}({\bf x, {x_E}}(t))
\end{aligned} \end{equation}
In the next section, we will use search algorithms to refine policies. Rather
than determining the \emph{best} action, it is useful to quantify the utility
of each action. To do so, define
$p_\text{distance}:\Omega \times \free \times \dots \times \free \to \mathbb{R}$ as the policy,
which outputs the probability that the agent should take a given action, conditioned on the current player positions.
We normalize using the \emph{softmax} function to generate the policy:
\begin{equation} \begin{aligned}
\alpha &= \Big[\displaystyle{\sum_{{\bf x}\in\bfpvalid(t)}} e^{ -{\bf \parrival}({\bf x},{\bf {x_E}}(t))}\Big]^{-1}\\
p_\text{distance}\big({\bf x}| ({\bf {x_P}}(t),{\bf {x_E}}(t))\big) &=
\begin{dcases}
\alpha e^{ -{\bf \parrival}({\bf x},{\bf {x_E}}(t))} & {\bf x} \in \bfpvalid(t)\\
0 & \text{otherwise}
\end{dcases}
\end{aligned} \end{equation}
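A minimal sketch of this normalization, with the team distance supplied as a callable over candidate joint actions (the names and data layout are assumptions of the sketch):
\begin{verbatim}
import numpy as np

def distance_policy(candidates, team_distance):
    # candidates    : list of joint pursuer actions in the valid action set.
    # team_distance : callable x -> rho(x, E(t)), the Hausdorff-like sum above.
    rho = np.array([team_distance(x) for x in candidates])
    w = np.exp(-(rho - rho.min()))   # softmax of -rho; the shift is for stability
    return w / w.sum()
\end{verbatim}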
For the discrete game with $\Delta t$ time increments,
one can enumerate the possible positions for each ${x_P}_i$
and evaluate the travel time to find the optimal control.
The arrival time function $\parrival[i](\cdot,{x_E}_j(t))$ to each ${x_E}_j$ at the current
time $t$ can be precomputed in $\mathcal{O}(m^d)$ time.
In the general case, where each pursuer may have a different velocity field ${f_P}_i$, one
would need to compute $k_P$ arrival time functions.
If $a_{\Delta t}(P_i)$ is the
max number of possible actions each pursuer can make in $\Delta t$ increment, then the total
computational complexity for one move is
$$ O\Big(k_P k_E m^d + \prod_{i=1}^{k_P} a_{\Delta t}(P_i)\Big). $$
For the special case when the pursuers have the same ${f_P}$, the complexity reduces to
$$ O\Big(k_E m^d + \prod_{i=1}^{k_P} a_{\Delta t}(P_i)\Big). $$
\subsection{Shadow strategy}
Recall that, for a stationary pursuer, the value function for the evader becomes
the Eikonal equation:
\begin{equation} \begin{aligned}
{f_E} |\nabla_{x_E} V| &= 1 \qquad &\text{ on } \free\setminus\mathcal{T_\text{end}} \\
V({x_P},{x_E}) &= 0 &\qquad ({x_P},{x_E} )\in \mathcal{T_\text{end}},
\end{aligned} \end{equation}
whose solution is the travel time to the shadow set.
Define the time-to-occlusion as
\begin{equation} \begin{aligned}
\tstar({x_P},{x_E}^0) := \min_{{\sigma_E}\in{\mathcal{A}}} \min \{ t\ge 0 \ | \ {x_E}(0)={x_E}^0 \ , \ ({x_P},{x_E}(t)) \in \mathcal{T_\text{end}} \ \} .
\end{aligned} \end{equation}
It is the shortest time in which an evader at ${x_E}^0$ can be occluded from a stationary pursuer at ${x_P}$.
Thus, a reasonable strategy for the evader is to pick the action which brings it
closest to the shadow formed by the pursuer's position:
\begin{equation} \begin{aligned}
{x_E}(t+\Delta t) = \argmin_{y \in\evalid(t)} \tstar({x_P}(t),y) .
\label{eq:evader-action}
\end{aligned} \end{equation}
A conservative strategy for the pursuer, then, is to maximize time-to-occlusion, assuming that the evader can anticipate its actions:
\begin{align}
\tstar^\ast(x,{x_E}(t)) &= \min_{y \in\evalid(t)} \tstar(x,y)\\
{x_P}(t+\Delta t) &= \argmax_{x\in\pvalid(t)} \tstar^\ast(x,{x_E}(t)) \label{eq:local-takei}.
\end{align}
\emph{Remark:} The strategy (\ref{eq:local-takei}) is a local variant of the static value
function proposed in \cite{takei2014efficient}. In that paper, they suggest
using the static value function for feedback controls by moving towards the
globally optimal destination, and then recomputing at $\Delta t$ time
intervals. Here, we use the locally optimal action.
For multiple players, the game ends as soon as any evader is hidden from all pursuers. Define the time-to-occlusion for multiple players:
\begin{equation} \begin{aligned}
\bftstar({{{\bf P}}},{{\bf E}}^0):= \min_{{\sigma_E}_i\in{\mathcal{A}}} \min \{ t\ge 0 \ | \ {{\bf E}}(0)={{\bf E}}^0 \ , \ ({{\bf P}},{{\bf E}}(t)) \in \mathcal{T_\text{end}} \ \} .
\end{aligned} \end{equation}
Then, the strategy should consider the shortest time-to-occlusion among all possible evaders' actions in the $\Delta t$ time increment:
\begin{equation} \begin{aligned}
\bftstar^\ast({\bf x},{\bf {x_E}}(t)) &:= \min_{{\bf y} \in\bfevalid(t)} \bftstar({\bf x},{\bf y}) \\
{{\bf P}}(t+\Delta t) &= \argmax_{{\bf x}\in\bfpvalid(t)} \bftstar^\ast({\bf x},{\bf {x_E}}(t)).
\end{aligned} \end{equation}
The corresponding shadow policy is:
\begin{equation} \begin{aligned}
\alpha &= \Big[\displaystyle{\sum_{{\bf x}\in\bfpvalid(t)}} e^{ \bftstar^\ast({\bf x},{\bf {x_E}}(t))}\Big]^{-1}\\
p_\text{shadow}\big({\bf x}| ({\bf {x_P}}(t),{\bf {x_E}}(t))\big) &=
\begin{dcases}
\alpha e^{ \bftstar^\ast({\bf x},{\bf {x_E}}(t))} & {\bf x} \in \bfpvalid(t)\\
0 & \text{otherwise}
\end{dcases}
\end{aligned} \end{equation}
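The shadow policy can be sketched in the same way, with the time-to-occlusion supplied as a callable (for instance, evaluated from precomputed arrival times restricted to the shadow set); again, the names below are illustrative.
\begin{verbatim}
import numpy as np

def shadow_policy(p_candidates, e_candidates, time_to_occlusion):
    # p_candidates : list of joint pursuer actions reachable in dt.
    # e_candidates : list of joint evader actions reachable in dt.
    # For each pursuer action, take the worst case (minimum) over evader responses.
    tstar = np.array([min(time_to_occlusion(P, E) for E in e_candidates)
                      for P in p_candidates])
    w = np.exp(tstar - tstar.max())   # softmax of +tstar; shift for stability
    return w / w.sum()
\end{verbatim}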
This strategy is computationally expensive.
One can precompute the arrival time to each evader by solving an
Eikonal equation in $\mathcal{O}(m^d)$ time.
For each combination of pursuer
positions, one must compute the joint visibility function and corresponding
shadow function in $\mathcal{O}(m^d)$ time.
Then the time-to-occlusion can be found by evaluating the precomputed
arrival times to find the minimum within the shadow set, again in $\mathcal{O}(m^d)$ time.
The computational complexity for one move is
\begin{equation} \begin{aligned} O \Big( k_E m^d + m^d \cdot \prod_{i=1}^{k_P} a_{\Delta t}(P_i) \Big). \end{aligned} \end{equation}
One may also consider alternating minimization strategies to achieve
\begin{equation} \begin{aligned} O \Big(k_E m^d + m^d \cdot \sum_{i=1}^{k_P} a_{\Delta t}(P_i) \Big), \end{aligned} \end{equation}
though we leave that for future work.
\subsection{Blend strategy}
We have seen in section~\ref{sec:hji} that optimal controls for the pursuer balance the
distance to, and visibility of, the evader. Thus a reasonable approach would be
to combine the distance and shadow strategies. However, it is not clear how
they should be integrated. One may consider a linear combination, but
the appropriate weighting depends on the game settings and environment.
Empirically, we observe that the product
of the policies provides promising results across a range of scenarios. Specifically,
\begin{equation} \begin{aligned}
p_\text{blend} \propto p_\text{shadow} \cdot p_\text{distance}
\end{aligned} \end{equation}
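In a sketch, the blended policy is just the elementwise product of the two policies over the same action set, renormalized to sum to one:
\begin{verbatim}
def blend_policy(p_shadow, p_distance):
    # p_shadow, p_distance : NumPy arrays of probabilities over the same actions.
    w = p_shadow * p_distance
    return w / w.sum()
\end{verbatim}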
\subsection{Numerical results}
We present some representative examples of the local policies.
First, we consider the game with a circular obstacle, a single pursuer and a single evader
whose speeds are ${f_P}=3$ and ${f_E}=2$, respectively.
Figure~\ref{fig:traj-local-circle} illustrates the typical trajectories for each policy.
In general, the distance strategy leads the pursuer into a cat-and-mouse game
with the evader; the pursuer, when close enough, will jump to the evader's
position at the previous time step. The shadow strategy keeps the pursuer far
away from obstacles, since this allows it to \emph{steer the shadows} in the
fastest way. The blend strategy balances the two approaches and resembles the
optimal trajectories based on the HJI equation in section~\ref{sec:hji}.
\begin{figure}[hptb]
\centering
\includegraphics[width=.42\textwidth,trim={0 0 0.1in 0},clip]{figures/traj/traj_seed_808_m_64_vP_3_vE_2_mcts_0_distance_vs_shadow_000.pdf}
\includegraphics[width=.42\textwidth,trim={0.1in 0 0 0},clip]{figures/traj/traj_seed_808_m_64_vP_3_vE_2_mcts_0_distance_vs_shadow_001.pdf}
\includegraphics[width=.42\textwidth,trim={0 0 0.1in 0},clip]{figures/traj/traj_seed_808_m_64_vP_3_vE_2_mcts_0_tstar_vs_shadow_000.pdf}
\includegraphics[width=.42\textwidth,trim={0.1in 0 0 0},clip]{figures/traj/traj_seed_808_m_64_vP_3_vE_2_mcts_0_tstar_vs_shadow_001.pdf}
\includegraphics[width=.42\textwidth,trim={0 0 0.1in 0},clip]{figures/traj/traj_seed_808_m_64_vP_3_vE_2_mcts_0_tstardist_vs_shadow_000.pdf}
\includegraphics[width=.42\textwidth,trim={0.1in 0 0 0},clip]{figures/traj/traj_seed_808_m_64_vP_3_vE_2_mcts_0_tstardist_vs_shadow_001.pdf}
\caption{Distance strategy (top) follows the evader closely, shadow strategy
(middle) stays far to gain better perspective, while the blend strategy
(bottom) strikes a balance.}
\label{fig:traj-local-circle}
\end{figure}
Next, we highlight the advantages of the shadow strategy with a
2 pursuer, 2 evader game on a map with two crescent-shaped obstacles.
The pursuer and evader speeds are ${f_P}=4$ and ${f_E}=2$, respectively.
The openness of the environment creates large occlusions. The pursuers use the
shadow strategy to cooperate and essentially corner the evaders.
Figure~\ref{fig:traj-shadow-eyes} shows snapshots of the game.
The distance strategy loses immediately since the green pursuer does not
properly track the orange evader.
\begin{figure}[hptb]
\centering
\includegraphics[width=.42\textwidth,trim={0 .1in .1in 0},clip]{figures/traj/traj_eyes_seed_01_m_128_vP_4_vE_2_mcts_0_tstar_vs_shadow_000.pdf}
\includegraphics[width=.42\textwidth,trim={.1in .1in 0 0},clip]{figures/traj/traj_eyes_seed_01_m_128_vP_4_vE_2_mcts_0_tstar_vs_shadow_001.pdf}
\includegraphics[width=.42\textwidth,trim={0 .0 .1in .1in},clip]{figures/traj/traj_eyes_seed_01_m_128_vP_4_vE_2_mcts_0_tstar_vs_shadow_002.pdf}
\includegraphics[width=.42\textwidth,trim={.1in .0 0 .1in},clip]{figures/traj/traj_eyes_seed_01_m_128_vP_4_vE_2_mcts_0_tstar_vs_shadow_003.pdf}
\includegraphics[width=.42\textwidth]{figures/traj/traj_eyes_seed_01_m_128_vP_4_vE_2_mcts_0_distance_vs_shadow_000.pdf}
\caption{(Top 2 rows) The blue and green pursuers cooperate by using the shadow strategy. Green initially has responsibility for the orange
evader, but blue is able to take over.
(Bottom) The distance strategy loses immediately.
}
\label{fig:traj-shadow-eyes}
\end{figure}
We present cases where the distance and shadow strategies fail in
Figure~\ref{fig:traj-failure}. The evader tends to stay close to the obstacle,
since that enables the shortest path around the obstacle. Using the distance
strategy, the pursuer aggressively follows the evader. The evader is able to
counter by quickly jumping behind sharp corners. On the other hand, the shadow
strategy moves the pursuer away from obstacles to reduce the size of shadows.
As a consequence, the pursuer will generally be too far away from the evader
and eventually lose. In environments with many nonconvex obstacles, both
strategies will fail.
\begin{figure}[hptb]
\centering
\includegraphics[width=.45\textwidth]{figures/traj/traj_cross_seed_823_m_32_vP_4_vE_3_mcts_0_distance_vs_shadow_000.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_dice_seed_05_m_32_vP_3_vE_2_mcts_0_tstar_vs_shadow_000.pdf}
\caption{Failure modes for the local strategies. Blindly using the distance
strategy (left) allows the evader to exploit the sharp concavities. The
shadow strategy (right) keeps the pursuer far away to reduce the size of shadows, but often,
the pursuer is too far away to catch the evader.}
\label{fig:traj-failure} \end{figure}
Finally, we show that blending the shadow and distance strategies is very
effective in compensating for the shortcomings of each individual policy.
The pursuers are able to efficiently track the evaders while maintaining a safe distance.
Figure~\ref{fig:traj-blend-dice} shows an example with 2 pursuers and 2 evaders on
a map with multiple obstacles, where ${f_P}=3$ and ${f_E}=2$.
\begin{figure}[hptb]
\centering
\includegraphics[width=.45\textwidth]{figures/traj/traj_dice_seed_04_m_32_vP_3_vE_2_mcts_0_tstardist_vs_shadow_000.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_dice_seed_04_m_32_vP_3_vE_2_mcts_0_tstardist_vs_shadow_001.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_dice_seed_04_m_32_vP_3_vE_2_mcts_0_tstardist_vs_shadow_002.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_dice_seed_04_m_32_vP_3_vE_2_mcts_0_tstardist_vs_shadow_003.pdf}
\caption{The pursuers (blue and green)
are able to win by combining the distance and shadow strategy. The pursuers stay close, while maintaining
enough distance to avoid creating large shadow regions. The pursuers are slightly faster than the evaders.
}
\label{fig:traj-blend-dice}
\end{figure}
\section{Learning the pursuer policy}
\label{sec:learning-policy}
We propose a method for learning optimal controls for the pursuer, though our
methods can be applied to find controls for the evader as well. Again, we
consider a discrete game, where each player's position is restricted on a grid,
and at each turn, the player can move to a new position within a neighborhood
determined by their velocity. All players move simultaneously. The game
termination conditions are checked at the end of each turn.
We initialize a neural network which takes as input any game state,
and produces a policy and value pair. The policy is a probability distribution
over actions. Unlike in section~\ref{sec:hji}, the value, in this context, is
an estimate of the likely winner given the input state.
Initially the policy and value estimates are random. We use Monte
Carlo tree search to compute refined policies. We play the game using the
refined policies, and train the neural network to learn the refined policies.
By iterating in this feedback loop, the neural network continually learns to
improve its policy and value estimates. We train on games in various
environments and play games on-line on maps that were not seen during the
training phase.
\subsection{Monte Carlo tree search}
In this section, we review the Monte Carlo tree search algorithm,
which allows the agent to plan ahead and refine policies. For clarity of notation,
we describe the single pursuer, single evader scenario, but the method applies to an arbitrary
number of players.
Define the set of game states $\mathcal{S}:=\{({x_P},{x_E})\in\free\times\free\}$ so that
each state $s\in\mathcal{S}$ characterizes the position of the players.
Let $\mathcal{A}\subseteq \free$ be the set of actions.
Let $T(s,a) : \mathcal{S}\times \mathcal{A} \to \mathcal{S}$ be the transition function
which outputs the state resulting from taking action $a$ at state $s$.
Let ${f}:\mathcal{S} \to \mathbb{R}^{m^d}\times[-1,1]$ be an evaluator function
which takes the current state as input and provides a policy and value estimate: ${f}(s)=(\vec{p},v)$.
Formally, Monte Carlo tree search is a mapping that takes as input the current state $s_0$, the evaluator function ${f}$, and a parameter $M$
indicating the number of search iterations: $\mathcal{M}(s_0,{f};M)$.
It outputs a refined policy $\vec{\pi}^\ast$.
Algorithm \ref{alg:mcts} summarizes the MCTS algorithm.
At a high level, MCTS simulates game play starting from the current state,
keeping track of nodes it has visited during the search.
Each action is chosen according to a formula $U(s,a)$ which balances exploration and exploitation.
Simulation continues until the algorithm reaches a \emph{leaf node} $s_n$, a state which has not previously
been visited. At this point, we use the evaluator function ${f}(s_n)=(\vec{p},v)$ to estimate a policy
and value for that leaf node. The value $v$ is propagated to all parent nodes.
One iteration of MCTS ends when it reaches a leaf node.
MCTS keeps track of statistics that help guide the search. In particular
\begin{itemize}
\item $N(s,a)$: the number of times the action $a$ has been selected from state $s$
\item $W(s,a)$: the cumulative value estimate for each state-action pair
\item $Q(s,a)$: the mean value estimate for each state-action pair
\item $P(s,a)=(1-\varepsilon)p(a|s) + \varepsilon\eta$: the prior policy, computed by evaluating ${f}$.
Dirichlet noise $\eta$ is added to allow a chance for each move to be chosen.
\item $U(s,a)=Q(s,a) + P(s,a) \frac{\sqrt{\sum_b N(s,b) }} {1+N(s,a)}$ is the \emph{upper confidence bound} \cite{rosin2011multi}.
The first term exploits moves with high value, while the second term encourages moves that have not been selected.
\end{itemize}
When all $M$ iterations are completed, the desired refined policy is proportional to $N(s_0,a)^{1/\tau}$,
where $\tau$ is a smoothing term.
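The two ingredients that drive the search, the selection rule $U(s,a)$ and the smoothed visit counts returned at the root, can be sketched as follows; the dictionary-based bookkeeping is an assumption of the sketch and not necessarily how the statistics are stored in practice.
\begin{verbatim}
import numpy as np

def select_action(Q, P, N, actions):
    # Q, P, N are keyed by action for the current state.
    total = sum(N[a] for a in actions)
    if total == 0:
        u = [P[a] for a in actions]
    else:
        u = [Q[a] + P[a] * np.sqrt(total) / (1 + N[a]) for a in actions]
    return actions[int(np.argmax(u))]

def refined_policy(N, actions, tau=1.0):
    # Root visit counts smoothed by the temperature tau.
    counts = np.array([N[a] for a in actions], dtype=float) ** (1.0 / tau)
    return counts / counts.sum()
\end{verbatim}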
\begin{algorithm}
\caption{Monte Carlo tree search: $\mathcal{M}(s_0,{f},M)$} \label{alg:mcts}
\begin{algorithmic}
\State $N(s,a) \gets 0$
\State $Q(s,a) \gets 0$
\State $W(s,a) \gets 0$
\State visited = $\{ \emptyset \}$
\For{$i = 1,\dots,M $}
\State $n\gets0$
\While{$s_n \notin$ visited}
\If{ $\sum_b N(s_n,b) > 0$}
\State $a_n^\ast = \arg\max_a Q(s_n,a) + P(s_n,a) \frac{\sqrt{\sum_b N(s_n,b) }} {1+N(s_n,a)}$
\Else
\State $a_n^\ast = \arg\max_a P(s_n,a)$
\EndIf
\State $s_{n+1} = T(s_n,a_n^\ast)$
\State $n \gets n+1$
\EndWhile
\State $(p,v) = {f}(s_n)$
\State $P(s_n,a) = (1-\varepsilon)p(a|s_n) + \varepsilon\eta$
\State visited.append($s_n$)
\For{$j = 0, \dots, n-1$}
\State $N(s_j,a_j^\ast) \gets N(s_j,a_j^\ast) + 1$
\State $W(s_j,a_j^\ast) \gets W(s_j,a_j^\ast) + v$
\State $Q(s_j,a_j^\ast) \gets W(s_j,a_j^\ast) / N(s_j,a_j^\ast)$
\EndFor
\EndFor
\State $\pi^\ast(a|s_0) = N(s_0,a)^{1/\tau} / \sum_b N(s_0,b)^{1/\tau}$
\State \Return $\pi^\ast$
\end{algorithmic}
\end{algorithm}
\subsection{Policy and value network}
We use a convolutional neural network which takes in the game state
and produces a policy and value estimate.
Although the state can be completely characterized by the positions
of the players and the obstacles, the neural network requires more context in order to be
able to generalize to new environments.
We provide the following features as input to the neural network, each of which
is an $m\times m$ image:
\begin{itemize}
\item Obstacles as binary image
\item Player positions, a separate binary image for each player
\item Joint visibility of all pursuers, as a binary image
\item Joint shadow boundaries of all pursuers
\item Visibility from each pursuer's perspective, as a binary image
\item Shadow boundary from each pursuer's perspective
\item Valid actions for each player, as a binary image
\item Each evader's policy according to (\ref{eq:evader-action})
\end{itemize}
AlphaZero \cite{silver2017mastering2} suggests that training the policy and
value networks jointly improves performance. We use a single network based on U-Net
\cite{ronneberger2015u}, which splits off to give output policy and value.
The input is $m\times m \times C_{\text{in}}$, where $C_{\text{in}}=2+4k_P+2k_E$ and $k_P,k_E$ are the number of
pursuers and evaders, respectively.
The U-Net consists of $\log_2(m)+1$ \emph{down-blocks}, followed by the same
number of \emph{up-blocks}.
All convolution layers in the down-blocks and up-blocks use size 3 kernels.
Each down-block consists of input, conv, batch
norm, relu, conv, batch norm, residual connection from input, relu, followed by
downsampling with stride 2 conv, batch norm, and relu. A residual connection links
the beginning and end of each block, before downsampling. The width
of each conv layer in the $l^{th}$ down-block is $l\cdot C_\text{in}$. Each
up-block is the same as the down-block, except instead of downsampling, we use
bilinear interpolation to upsample the image by a factor of 2. The upsampled
result is concatenated with the predownsampled output from the corresponding (same size)
down-block, followed by conv, batch norm, relu. The width of each conv layer in
the up-block is same as those in the down-block of corresponding size.
Then, the network splits into a policy and value head.
The policy head consists of $1\times 1$ conv with width 8, batch norm, relu, and
$1\times 1$ conv with width $k_P$. The final activation layer is a softmax to output $p\in \mathbb{R}^{m\times m \times k_P}$, a policy for each pursuer.
The value head is similar, with $1\times 1$ conv with width 8, batch norm, relu, and
$1\times 1$ conv with width 1. The result passes through a tanh activation and
average pooling to output a scalar $v\in[-1,1]$.
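As an illustration of the two output heads (the U-Net trunk is omitted, and the layer names are ours), a PyTorch-style sketch is given below; it follows the description above but is not the exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class PolicyValueHeads(nn.Module):
    # `features` is the B x C x m x m output of the U-Net trunk (omitted here).
    def __init__(self, in_channels, k_P):
        super().__init__()
        self.policy = nn.Sequential(
            nn.Conv2d(in_channels, 8, 1), nn.BatchNorm2d(8), nn.ReLU(),
            nn.Conv2d(8, k_P, 1))
        self.value = nn.Sequential(
            nn.Conv2d(in_channels, 8, 1), nn.BatchNorm2d(8), nn.ReLU(),
            nn.Conv2d(8, 1, 1), nn.Tanh())

    def forward(self, features):
        logits = self.policy(features)                 # B x k_P x m x m
        B, kP, m, _ = logits.shape
        p = torch.softmax(logits.view(B, kP, -1), -1).view(B, kP, m, m)
        v = self.value(features).mean(dim=(1, 2, 3))   # tanh, then average pool
        return p, v
\end{verbatim}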
\subsection{Training procedure}
Since we do not have the true value and policy, we cannot train the networks
in the usual supervised fashion. Instead, we use MCTS to generate refined policies,
which serve as the training label for the policy network.
Multiple games are played with actions selected according to MCTS refined policies.
The game outcomes act as the label for the value for each state in the game.
We train over various maps consisting of 2-7 obstacles, including circles, ellipses, squares,
and tetrahedrons.
More specifically,
let ${f}_\theta(s) $ be the neural network parameterized by $\theta$, which takes a state $s$ as input, and outputs a policy $\vec{\pi}_\theta(s)$ and value $v_\theta(s)$.
Let $s_j(0)$ be the initial positions.
For $j=1,\dots,J$, play the game using MCTS:
\begin{align}
\vec{\pi}_j(\cdot|s_j(k)) &= \text{MCTS}(s_j(k), f_\theta; M) \\
s_j(k+1) &= T\Big(s_j(k), \arg\max_a \vec{\pi}_j(a|s_j(k))\Big)
\end{align}
for $k=0,\dots,K_j$. The game ends at
\begin{equation} \begin{aligned}
K_j = \inf\{k | s_j(k) \in \mathcal{T}_\text{end}\}
\end{aligned} \end{equation}
Then the "true" policy and value are
\begin{align}
\vec{\pi}_j^\ast(k) &= \vec{\pi}_j(\cdot|s_j(k))\\
v^\ast_j(k)&=
\begin{cases}
1 & K_j > K_\text{max} \\
-1 & \text{otherwise}
\end{cases}
\end{align}
The parameters $\theta$ of the neural network
are updated by stochastic gradient descent (SGD) on the loss function:
\begin{equation} \begin{aligned}
\min_\theta \sum_{j=1}^J \sum_{k=0}^{K_j} L_{\text{policy}} & \Big(\vec{\pi}_\theta(s_j(k)) , \vec{\pi}_j^\ast(k) \Big) + L_\text{value} \Big(v_\theta(s_j(k)),v_j^\ast(k)\Big)\\
L_\text{policy}(\vec{p},\vec{q}) &= -\vec{p} \cdot \log \vec{q} \\
L_\text{value}(p,q) &= (p-q)^2 \\
\end{aligned} \end{equation}
We use a learning rate of $0.001$ and the Adam optimizer \cite{kingma2014adam}.
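A sketch of the combined objective for one batch, assuming the policies are stored as flattened probability tensors (a simplification of the per-pursuer spatial policies above):
\begin{verbatim}
import torch

def training_loss(pi_theta, v_theta, pi_star, v_star, eps=1e-8):
    # pi_theta, pi_star : B x A tensors of action probabilities.
    # v_theta, v_star   : length-B tensors of value estimates and game outcomes.
    policy_loss = -(pi_star * torch.log(pi_theta + eps)).sum(dim=-1).mean()
    value_loss = ((v_theta - v_star) ** 2).mean()
    return policy_loss + value_loss
\end{verbatim}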
\subsection{Numerical results}
A key difficulty in learning a good policy for the pursuer
is that it requires a good evader. If the evader is static,
then the pursuer can win with any random policy.
During training and evaluation, the game is played with the evader moving
according to (\ref{eq:evader-action}). Although all players move
simultaneously, our MCTS models each team's actions sequentially, with the
pursuers moving first. This is conservative towards the pursuers, since the
evaders can counter.
We train using a single workstation with 2 Intel Xeon CPU E5-2620 v4 2.10GHz
processors and a single NVidia 1080-TI GPU. For simplicity, ${f_P}$ and ${f_E}$ are
constant, though it is straightforward to have spatially varying
velocity fields. We use a grid size of $m=16$. We set $K_{\text{max}}=100$ and
$M=1000$ MCTS iterations per move. One step of training consists of playing
$J=64$ games and then training the neural network for 1 epoch based on training
data for the last 10 steps of training. Self-play game data is generated in
parallel, while network training is done using the GPU with batch size 128. The total
training time is 1 day.
The training environments consist of between 2 and 6 randomly oriented obstacles,
each uniformly chosen from the set of ellipses, diamonds, and rectangles. We emphasize
that the environments shown in the experiments are not in the training set.
We compare our trained neural network against uniform random and dirichlet
noise-based policies, as well as the local policies from
section~\ref{sec:locally}. In order to draw a fair comparison, we make sure
each action requires the same amount of compute time. Each MCTS-based move in
the two-player game takes 4 seconds, while the multiplayer game takes about 10 seconds
per move, on average. Since the noise-based policies require less overhead,
they are able to use more MCTS iterations. The shadow strategies become very
expensive as more players are added. For the 1v1 game, we use $\hat{M}=1000$,
while the 2v2 game can only run for $\hat{M}=250$ in the same amount of time as
the Neural Net.
Specifically,
\begin{itemize}
\item Distance strategy
\item Shadow strategy
\item Blend strategy
\item $\mathcal{M}(\cdot,{f}_\text{distance},1000)$ where ${f}_\text{distance}(s) = (p_\text{distance},0)$.
\item $\mathcal{M}(\cdot,{f}_\text{shadow},\hat{M})$ where ${f}_\text{shadow}(s) = (p_\text{shadow},0)$.
\item $\mathcal{M}(\cdot,{f}_\text{blend},\hat{M})$ where ${f}_\text{blend}(s) = (p_\text{blend},0)$.
\item $\mathcal{M}(\cdot,{f}_\nu,2000)$ where ${f}_\nu(s) = (\text{Uniform},0)$.
\item $\mathcal{M}(\cdot,{f}_\eta,2000)$ where ${f}_\eta(s) = (\text{Dir}(0.3),0)$.
\item $\mathcal{M}(\cdot,{f}_\theta,1000)$ where ${f}_\theta$ is the trained Neural Network
\end{itemize}
\subsection*{Two players}
As a sanity check, we show an example on a single circular obstacle with a
single pursuer and single evader. As we saw from the previous section, the
pursuer needs to be faster in order to have a chance at winning. We let
${f_P}=2$ and ${f_E}=1$.
Figure~\ref{fig:nnet-traj-circle} shows an example trajectory using Neural Net.
The neural network model gives reasonable policies.
Figure~\ref{fig:nnet-traj-v} shows an adversarial human evader playing against the Neural Net pursuer,
on a map with two obstacles. The pursuer changes strategies depending on the shape of the obstacle.
In particular, near the corners of the ``V'' shape, it maintains a safe distance rather than blindly following
the evader.
\begin{figure}[hptb]
\centering
\includegraphics[width=.45\textwidth]{figures/traj/traj_seed_01_m_16_vP_2_vE_1_mcts_1000_nnet_vs_shadow_000.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_seed_01_m_16_vP_2_vE_1_mcts_1000_nnet_vs_shadow_001.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_seed_01_m_16_vP_2_vE_1_mcts_1000_nnet_vs_shadow_002.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_seed_01_m_16_vP_2_vE_1_mcts_1000_nnet_vs_shadow_003.pdf}
\caption{Snapshots of the trajectory for the Neural Net pursuer around a circular obstacle.
The pursuer (blue) tracks the evader (red) while maintaining a safe distance.
View from left to right, top to bottom.
Stars indicate the initial positions, and the black line (of sight) connects the players at the end of each time interval.}
\label{fig:nnet-traj-circle}
\end{figure}
\begin{figure}[hptb]
\centering
\includegraphics[width=.45\textwidth]{figures/traj/traj_vshape_seed_82_m_16_vP_2_vE_1_mcts_0_nnet_vs_shadow_000.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_vshape_seed_82_m_16_vP_2_vE_1_mcts_0_nnet_vs_shadow_001.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_vshape_seed_82_m_16_vP_2_vE_1_mcts_0_nnet_vs_shadow_002.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_vshape_seed_82_m_16_vP_2_vE_1_mcts_0_nnet_vs_shadow_003.pdf}
\caption{Trajectory for the Neural Net pursuer against an adversarial human evader on a map with two obstacles. The pursuer
transitions between following closely, and leaving some space, depending on the shape of the obstacle.}
\label{fig:nnet-traj-v}
\end{figure}
In order to do a more systematic comparison, we run multiple games
over the same map and report the game time statistics for each method.
We fix the pursuer's position at $(1/2,1/4)$ and vary the evader's initial location within the free space.
Figure~\ref{fig:value-slice-setup} shows the setup for the two maps considered for the statistical studies in this section.
One contains a single circle in the center, as we have seen previously. The other contains 5 circular obstacles,
though the one in the center has some extra protrusions.
Figure~\ref{fig:value-slice-1v1dice} shows an image corresponding to the length of the game
for each evader position; essentially, it is a single slice
of the value function for each method.
Table~\ref{tab:1v1dice} shows the number of games won. The shadow strategy particularly benefits
from using MCTS for policy improvement, going from a 16\% to a 67.15\% win rate. Our neural network
model outperforms the rest with a 70.8\% win rate.
\begin{figure}[hptb]
\centering
\includegraphics[width=.45\textwidth]{figures/value_slices/value_2v2circle_setup.png}
\includegraphics[width=.45\textwidth]{figures/value_slices/value_2v2dice_setup.png}
\caption{Setup for computing a slice of the value function for the circular obstacle (left) and 5 obstacle map (right). The pursuer's initial position is fixed (blue) while the
evader's changes within the free space.}
\label{fig:value-slice-setup}
\end{figure}
\begin{figure}[hptb]
\centering
\includegraphics[width=.3\textwidth]{figures/value_slices/value_1v1dice_dist.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_1v1dice_tstar.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_1v1dice_tstardist.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_1v1dice_dist1000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_1v1dice_tstar1000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_1v1dice_tstardist1000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_1v1dice_uniform2000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_1v1dice_dir2000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_1v1dice_nnet1000.png}
\caption{One slice of the "value" function for single pursuer, single evader game with 5 obstacles. Bright spots
indicate that the pursuer won the game if that pixel was the evader's initial position.}
\label{fig:value-slice-1v1dice}
\end{figure}
\begin{table}[]
\caption{Game statistics for the 1 pursuer vs 1 evader game with 5 circular obstacles, where ${f_P}=2$ and ${f_E}=1$.\vspace{1em}}
\label{tab:1v1dice}
\begin{tabular}{l|cc}
Method & Win \% (137 games) & Average game time \\
\hline
Distance &52.55 & 53.56 \\
Shadow &16.06 & 22.36 \\
Blend &63.50 & 64.45 \\
MCTS($\cdot$, Distance; 1000) &55.47 & 62.40 \\
MCTS($\cdot$, Shadow; 1000) &67.15 & 69.45 \\
MCTS($\cdot$, Blend; 1000) &58.39 & 63.27 \\
MCTS($\cdot$, Uniform; 2000) &60.58 & 65.84 \\
MCTS($\cdot$, Dirichlet; 2000) &65.69 & 69.02 \\
MCTS($\cdot$, Neural Net; 1000) &70.80 & 71.61 \\
\end{tabular}
\end{table}
\subsection*{Multiple players}
Next, we consider the multiplayer case with 2 pursuers and 2 evaders on a circular obstacle map
where ${f_P}=2$ and ${f_E}=2$.
Even on a $16\times16$ grid, the computation of the corresponding feedback value function
would take several days.
Figure~\ref{fig:nnet-traj-circle-multi} shows a sample trajectory. Surprisingly, the neural network has learned
a smart strategy. Since there is only a single obstacle, it is sufficient for each
pursuer to guard one opposing corner of the map. Although all players have the same speed, it is possible
to win.
\begin{figure}[phtb]
\centering
\includegraphics[width=.45\textwidth]{figures/traj/traj_seed_06_m_16_vP_2_vE_2_mcts_1000_nnet_vs_shadow_000.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_seed_06_m_16_vP_2_vE_2_mcts_1000_nnet_vs_shadow_001.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_seed_06_m_16_vP_2_vE_2_mcts_1000_nnet_vs_shadow_002.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_seed_06_m_16_vP_2_vE_2_mcts_1000_nnet_vs_shadow_003.pdf}
\caption{Trajectories for the multiplayer game played using NNet around a circle. Pursuers are blue and green, while evaders are red and orange.
Blue has learned the tactic of remaining stationary in the corner, while green manages the opposite side.
The evaders' movements are sporadic because there is no chance of winning; there are no shadows in which to hide.}
\label{fig:nnet-traj-circle-multi}
\end{figure}
Figure~\ref{fig:value-slice-2v2circle} shows a slice of the value function, where 3 players' positions are fixed,
and one evader's position varies.
Table~\ref{tab:2v2circle} shows the game statistics.
Here, we see some deviation from the baseline. As the number of players increases,
the number of actions increases, and it is no longer sufficient to use random sampling.
The neural network is learning useful strategies to help guide the
Monte Carlo tree search to more significant paths. The distance and blend strategies
are effective by themselves. MCTS helps improve performance for Distance. However, 250 iterations is
not enough to search the action space, and actually leads to poor performance for Blend and Shadow.
For this game setup, MCTS(Distance,1000) performs the best with a 73.5\% win rate, followed by
Blend with 65.4\% and Neural Net with 59.9\%. Although the trained network is not the best
in this case, the results are very promising. We want to emphasize that the model was trained with no prior
knowledge. Given enough offline time and resources, we believe the proposed approach can scale to larger grids and
learn stronger policies than the local heuristics.
\begin{figure}[phtb]
\centering
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2circle_dist.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2circle_tstar.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2circle_tstardist.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2circle_dist1000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2circle_tstar250.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2circle_tstardist250.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2circle_uniform2000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2circle_dir2000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2circle_nnet1000.png}
\caption{One slice of the value function for 2 pursuer, 2 evader game on the circular obstacle.}
\label{fig:value-slice-2v2circle}
\end{figure}
\begin{table}[]
\caption{Game statistics for the 2 pursuer vs 2 evader game with a circular obstacle.\vspace{1em}}
\label{tab:2v2circle}
\begin{tabular}{l|cc}
Method & Win \% (162 games) & Average game time \\
\hline
Distance & 56.8 & 58.8 \\
Shadow & 46.3 & 50.1 \\
Blend & 65.4 & 67.9 \\
MCTS($\cdot$, Distance; 1000) & 73.5 & 76.6 \\
MCTS($\cdot$, Shadow; 250) & 40.7 & 44.5 \\
MCTS($\cdot$, Blend; 250) & 00.0 & 4.4 \\
MCTS($\cdot$, Uniform; 2000) & 00.0 & 5.3 \\
MCTS($\cdot$, Dirichlet; 2000) & 27.8 & 32.8 \\
MCTS($\cdot$, Neural Net; 1000) & 59.9 & 61.7 \\
\end{tabular}
\end{table}
\iffalse
We repeat the experiment for the more complicated map with 5 obstacles,
with ${f_P}={f_E}=2$, and show the results in
Figure~\ref{fig:value-slice-2v2dice} and Table~\ref{tab:2v2dice}. The map
is much more difficult and the evader generally wins. For this particular map and game setting,
where all players
Distance has a slight edge over the other strategies, though
\begin{table}[]
\caption{Game statistics for the 2 pursuer vs 2 evader game with 5 obstacles.\vspace{1em}}
\label{tab:2v2dice}
\begin{tabular}{l|ccc}
Method & Win \% (137 games) & Average game time \\
\hline
Distance & 6.6 & 8.89 \\
Shadow & 0 & 2.68 \\
Blend & 0.7 & 3.95 \\
MCTS($\cdot$,Distance; 1000) & 0.7 & 3.77 \\
MCTS($\cdot$,Shadow; 250) & 0 & 2.24 \\
MCTS($\cdot$,Blend; 250) & 0 & 1.77 \\
MCTS($\cdot$,Uniform; 2000) & 0 & 2.10 \\
MCTS($\cdot$,Dirichlet; 2000) & 0 & 2.26 \\
MCTS($\cdot$,Neural Net; 1000) & 0 & 3.0
\end{tabular}
\end{table}
\begin{figure}[phtb]
\centering
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2dice_dist.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2dice_tstar.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2dice_tstardist.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2dice_dist1000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2dice_tstar250.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2dice_tstardist250.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2dice_uniform2000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2dice_dir2000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2dice_nnet1000.png}
\caption{One slice of the value function for the dice}
\label{fig:value-slice-2v2dice}
\end{figure}
\fi
Figure~\ref{fig:mcts-stats-multi} shows a comparison of the depth of search for
$M=1000$ MCTS iterations. Specifically, we report the depth of each leaf node, as
measured by game time. To be fair, we allow the uniform and Dirichlet baselines
to run for 2000 MCTS iterations to match the runtime needed for 1 move. Also,
the shadow strategies are extremely costly, and can only run 250 MCTS
iterations in the same amount of time. However, we show the statistics for
$M=1000$ to gain better insights. Ideally, a good search would balance breadth
and depth. The neural network appears to search further than the baselines. Of
course, this alone is not sufficient to draw any conclusions. For example, a
naive approach could be a depth-first search.
In Figure~\ref{fig:mcts-stats}, we show a similar chart for the single pursuer,
single evader game with a circular obstacle. In this case, the game is
relatively easy, and all evaluator functions are comparable.
\begin{figure}[hptb]
\centering
\includegraphics[width=.45\textwidth,trim={0 0.15in 0 0},clip]{figures/mcts_stats/seed_02_m_16_vP_2_vE_2_mcts_2000_uniform_vs_shadow.pdf} \quad
\includegraphics[width=.45\textwidth,trim={0 0.15in 0 0},clip]{figures/mcts_stats/seed_02_m_16_vP_2_vE_2_mcts_2000_dirichlet_vs_shadow.pdf}\\[.5em]
\includegraphics[width=.45\textwidth,trim={0 0.15in 0 0},clip]{figures/mcts_stats/seed_02_m_16_vP_2_vE_2_mcts_1000_distance_vs_shadow.pdf} \quad
\includegraphics[width=.45\textwidth,trim={0 0.15in 0 0},clip]{figures/mcts_stats/seed_02_m_16_vP_2_vE_2_mcts_1000_tstar_vs_shadow.pdf} \\[.5em]
\includegraphics[width=.45\textwidth,trim={0 0in 0 0},clip]{figures/mcts_stats/seed_02_m_16_vP_2_vE_2_mcts_1000_tstardist_vs_shadow.pdf} \quad
\includegraphics[width=.45\textwidth,trim={0 0in 0 0},clip]{figures/mcts_stats/seed_02_m_16_vP_2_vE_2_mcts_1000_nnet_vs_shadow.pdf} \quad
\caption{Histogram of leaf node depth for MCTS using various evaluator
functions for the multiplayer game around a circular obstacle. The colors show
increments of 100 iterations. The multiplayer game has a much larger action
space, making tree search difficult. The neural network appears to search deeper
into the tree.}
\label{fig:mcts-stats-multi}
\end{figure}
\begin{figure}[hptb]
\centering
\includegraphics[width=.45\textwidth,trim={0 0.15in 0 0},clip]{figures/mcts_stats/seed_00_m_16_vP_2_vE_1_mcts_2000_uniform_vs_shadow.pdf} \quad
\includegraphics[width=.45\textwidth,trim={0 0.15in 0 0},clip]{figures/mcts_stats/seed_00_m_16_vP_2_vE_1_mcts_2000_dirichlet_vs_shadow.pdf}\\[.5em]
\includegraphics[width=.45\textwidth,trim={0 0.15in 0 0},clip]{figures/mcts_stats/seed_00_m_16_vP_2_vE_1_mcts_1000_distance_vs_shadow.pdf} \quad
\includegraphics[width=.45\textwidth,trim={0 0.15in 0 0},clip]{figures/mcts_stats/seed_00_m_16_vP_2_vE_1_mcts_1000_tstar_vs_shadow.pdf} \\[.5em]
\includegraphics[width=.45\textwidth,trim={0 0.00in 0 0},clip]{figures/mcts_stats/seed_00_m_16_vP_2_vE_1_mcts_1000_tstardist_vs_shadow.pdf} \quad
\includegraphics[width=.45\textwidth,trim={0 0.00in 0 0},clip]{figures/mcts_stats/seed_00_m_16_vP_2_vE_1_mcts_1000_nnet_vs_shadow.pdf} \quad
\caption{Histogram of leaf node depth for MCTS using various evaluator
functions for the single pursuer vs single evader game around a circular
obstacle. The colors show increments of 100 iterations. The game is relatively easy
and thus all algorithms appear comparable. Note that Uniform and Dirichlet are allowed 2000 MCTS iterations,
since they require less overhead to run.}
\label{fig:mcts-stats}
\end{figure}
\section{Conclusion and future work}
We proposed three approaches for approximating optimal controls for the
surveillance-evasion game. When there are few players and the grid size is
small, one may compute the value function via the Hamilton-Jacobi-Isaacs
equations. The offline cost is immense, but on-line game play is very
efficient. The game can be played continuously in time and space, since
the controls can be interpolated from the value function. However, the value
function must be recomputed if the game settings, such as the obstacles or player
velocities, change.
When there are many players, we proposed locally optimal strategies for the
pursuer and evader. There is no offline preprocessing. All computation is done
on-line, though the computation does not scale well as the velocities or number
of pursuers increases. The game is discrete in time and space.
Lastly, we proposed a reinforcement learning approach for the
multiplayer game. The offline training time can be enormous, but on-line game play
is very efficient and scales linearly with the number of players. The game is played
in discrete time and space, but the neural network model generalizes to
maps not seen during training. Given enough computational resources,
the neural network has the potential to approach the optimal controls
afforded by the HJI equations, while being more efficient than the local strategies.
There are many avenues to explore for future research.
We are working on the extension of our reinforcement learning approach to 3D,
which is straightforward, but requires more computational resources.
Figure~\ref{fig:3dseg} shows an example surveillance-evasion game in 3D. Along
those lines, a multi-resolution scheme is imperative for scaling to higher
dimensions and resolutions. One may also consider different game objectives,
such as seeking out an initially hidden evader, or allowing brief moments of
occlusion.
\begin{figure}[hptb]
\vspace{1em}
\centering
\includegraphics[width=.65\textwidth]{figures/3d_seg_crop.png}
\caption{A snapshot of a 3D surveillance-evasion game around a sphere.}
\label{fig:3dseg}
\end{figure}
\iffalse
\subsection{The surveillance-constrained patrol problem}
In \cite{bharadwaj2019strategy}, we considered a surveillance-constrained
patrol problem where a pursuer must optimize short-term visibility of the
environment, with the constraint that
it must always keep the evader within its line-of-sight. In this section, we briefly
review the ideas in that paper, and mention a direction for future work which combines
the ideas presented in Chapters~\ref{chap:exploration} and \ref{chap:surveillance}.
The game is played in discrete space and time. We assume that
both players have a map of the environment; the pursuer must be faster than the
evader, otherwise it may not have any flexibility to do anything other than the
surveillance constraint.
Formally, let $K\in \mathbb{N}$. Short-term visibility means that the pursuer's
visibility of the environment at time $t$ is only valid for $K$ time steps, after which those
portions are assumed to be occluded again. Define the short-term visibility set
\begin{equation} \begin{aligned}
\Omega_{i-K}^i := \bigcup_{j=i-K}^i \visset({x_P}_j)
\end{aligned} \end{equation}
As in Chapter~\ref{chap:exploration}, one may define the gain function
\begin{equation}
g^K(x;\Omega_{i-K}^i) := | \visset_x \cup \Omega_{i-K}^i| - |\Omega_{i-K}^i|, \label{eq:short-gain-func}
\end{equation}
Then the problem can be stated as a constrained optimization problem:
\begin{equation} \begin{aligned}
&\max_{{x_P}(i)} g^K({{x_P}(i)};\Omega_{i-K}^i) \\ &\text{ subj. to } {x_E}(t) \in \visset({x_P}(t)) \text{ for } t\ge i
\end{aligned} \end{equation}
The constraint is challenging since it needs to hold for all future time.
In \cite{bharadwaj2019strategy}, we satisfy the constraint by using reactive synthesis tools
to precompute the \emph{feasible set}
$$\mathcal{F} :=\{ ({x_P}(0),{x_E}(0)) |{x_E}(t) \in \visset({x_P}(t)) \text{ for } t\ge0\} ,$$
which is the set of positions from which it is possible to maintain the surveillance requirement for all time.
The problem can now be reformulated as
\begin{equation} \begin{aligned}
&\max_{{x_P}(i)} g^K({{x_P}(i)};\Omega_{i-K}^i) \\&\text{ subj. to } ({x_P}(i),{x_E}(i)) \in \mathcal{F},
\end{aligned} \end{equation}
where the optimization of the objective function can be done greedily during game-play.
Figure~\ref{fig:patrol} shows snapshots from an example surveillance-constrained patrol game.
For future work, we envision the use of the value function from section~\ref{sec:hji} to compute
the feasible set for the \emph{continuous} version of the game.
Optimization of $g^K$ can be done using a greedy approach powered by a convolutional neural network, as in
Chapter~\ref{chap:exploration}. Monte Carlo tree search may also help refine strategies and generate
causually relevant training data that surpasses the one-step lookahead policy afforded by the greedy algorithm.
\begin{figure}[ht!]
\subfloat[$t=0$]{
\includegraphics[width=0.3\textwidth]{figures/patrol/casestudyfigs_high_res/frame_green_0000.png}
}
\subfloat[$t=10$ \label{fig:case1t10}]{
\includegraphics[width=0.3\textwidth]{figures/patrol/casestudyfigs_high_res/frame_green_0010.png}
}
\subfloat[$t=16$ \label{fig:case1t16}]{
\includegraphics[width=0.3\textwidth]{figures/patrol/casestudyfigs_high_res/frame_green_0016.png}
}\\
\subfloat[$t=20$ \label{fig:case1t20}]{
\includegraphics[width=0.3\textwidth]{figures/patrol/casestudyfigs_high_res/frame_green_0020.png}
}
\subfloat[$t=24$ \label{fig:case1t24}]{
\includegraphics[width=0.3\textwidth]{figures/patrol/casestudyfigs_high_res/frame_green_0024.png}
}
\subfloat[$t=33$ \label{fig:case1t33}]{
\includegraphics[width=0.3\textwidth]{figures/patrol/casestudyfigs_high_res/frame_green_0033.png}
}
\caption{Example of the surveillance-constrained patrol game with a single
pursuer (blue) and single evader (orange). The pursuer tries to optimize
short-term visibility of the environment when possible, but must always
maintain visibility of the evader. The green cells correspond to the pursuer's
planned path to a vantage point. Obstacles are shown in red, with occlusions
in black.}
\label{fig:patrol}
\end{figure}
\fi
\section{Introduction}
We consider a multiplayer \emph{surveillance-evasion} game consisting of two
teams, the pursuers and the evaders. The pursuers must maintain line-of-sight
visibility of the evaders for as long as possible as they move through an
environment with obstacles. Meanwhile, the evaders aim to hide from the
pursuers as soon as possible. The game ends when the pursuers lose sight of the
evaders. We assume all players have perfect knowledge of the obstacles and the
game is closed-loop -- each player employs a feedback strategy, reacting
dynamically to the positions of all other players.
In section~\ref{sec:hji}, we consider the game in the context of Hamilton-Jacobi-Isaacs
(HJI) equations. We propose a scheme to compute the value function, which,
informally, describes how ``good'' it is for each player to be in a specific
state. Then each player can pick the strategy that optimizes the value function
locally. Due to the principle of optimality, local optimization with respect to
the value function is globally optimal. This is because the value function
encodes information from all possible trajectories. As a result, the value
function is also very expensive to compute.
Section~\ref{sec:locally} discusses locally-optimal policies and
section~\ref{sec:learning-policy} presents search-based methods
to learn policies for the multiplayer version of the game.
\subsection{Related works}
The surveillance-evasion game is related to a popular class of games called
pursuit-evasion \cite{ho1965differential,isaacs1965differential}, where the
objective is for the pursuer to physically capture the evader. Classical
problems take place in obstacle-free space with constraints on the players'
motion. Variants include the lion and man
\cite{karnad2009lion,sgall2001solution}, where both players have the same
maneuverability, and the homicidal chauffeur \cite{merz1974homicidal}, where
one player drives a vehicle, which is faster, but has constrained mobility.
Lewin et al.\ \cite{lewin1975surveillance} considered a
game in an obstacle-free space, where the pursuer must keep the evader within a
detection circle.
Bardi et al.\ \cite{bardi1999numerical} proposed a semi-Lagrangian scheme for
approximating the value function of the pursuit-evasion game as the viscosity solution
to the Hamilton-Jacobi-Isaacs equation, in a bounded domain with no obstacles.
In general, these methods are very expensive, with complexity $\mathcal{O}(m^{kd})$, where
$m$ is the number of grid points per dimension, $k$ is the number of players, and $d$ is the dimension.
The expense arises because the value function is computed over the entire joint state space;
once computed, it provides the optimal controls for all possible player positions.
A class of methods try
to deal with the curse of dimensionality by solving for the solutions of
Hamilton-Jacobi equations at individual points in space and time. These methods
are causality-free; the solution at one point does not depend on solutions at
other points, making them conveniently parallelizable. They are efficient,
since one only solves for the value function locally, where it is needed,
rather than globally. Chow et. al.
\cite{chow2017algorithm,chow2018algorithm,chow2019algorithm} use the Hopf-Lax
formula to efficiently solve Hamilton-Jacobi equations for a class of
Hamiltonians. Sparse grid characteristics, due to Kang et. al. \cite{kang2017mitigating}, is another
causality-free method which finds the solution by solving a boundary value
problem for each point. Unfortunately, these methods do not apply to domains
with obstacles since they cannot handle boundary conditions.
The visibility-based pursuit-evasion game, introduced by Suzuki et al.\
\cite{suzuki1992searching}, is a version where the
pursuer(s) must compute the shortest path to find all hidden evaders in a
cluttered environment, or report it is not possible. The number of evaders is
unknown and their speed is unbounded. Guibas et al.\
\cite{guibas1997visibility} proposed a graph-based method for polygonal
environments. Other settings include multiple pursuers
\cite{stiffler2014complete}, bounded speeds \cite{tovar2008visibility},
unknown, piecewise-smooth planar environments \cite{sachs2004visibility}, and
simply-connected two-dimensional curved environments
\cite{lavalle2001visibility}.
The surveillance-evasion game has been studied previously
in the literature.
LaValle et al.\ \cite{lavalle1997motion} use dynamic programming to compute
optimal trajectories for the pursuer, assuming a known evader trajectory. For
the case of an unpredictable evader, they suggest a local heuristic: maximize
the probability of visibility of the evader at the next time step.
They also mention, but do not implement, an idea to locally maximize the evader's time to occlusion.
Bhattacharya et al.\ \cite{bhattacharya2008approximation,bhattacharya2011cell}
used geometric arguments to partition the environment into several regions
based on the outcome of the game.
In \cite{bhattacharya2009existence,zou2018optimal}, they use geometry and optimal control to compute
optimal trajectories for a single pursuer and single evader near the corners of a polygon.
The controls are then extended to the whole domain containing polygonal obstacles by partitioning based on the corners
\cite{zou2016optimal},
for the finite-horizon tracking problem \cite{zou2016visibility},
and for multiple players by allocating a pursuer for each evader via the Hungarian matching algorithm
\cite{zhang2016multi}.
Takei et al.\ \cite{takei2014efficient} proposed an efficient
algorithm for computing the static value function corresponding to the open loop game,
where each player moves according to a fixed strategy determined at initial time.
Their open loop game is conservative towards the pursuer, since the evader can
optimally counter any of the pursuer's strategies. As a consequence, the game
is guaranteed to end in finite time, as long as the domain is not
star-shaped.
In contrast, a closed loop game allows players to react dynamically to each other's actions.
In \cite{cartee2019time,gilles2019evasive,takei2015optimal}, the authors
propose optimal paths for an evader to reach a target destination, while
minimizing exposure to an observer. In \cite{gilles2019evasive}, the observer
is stationary. In \cite{cartee2019time}, the observer moves according to a
fixed trajectory. In \cite{takei2015optimal}, the evader can tolerate brief
moments of exposure so long as the consecutive exposure time does not exceed a
given threshold. In all three cases, the observer's controls are restricted to
choosing from a known distribution of trajectories; they are not allowed to
move freely.
Bharadwaj et al.\ \cite{bharadwaj2018synthesis} use reactive synthesis to
determine the pursuer's controls for the surveillance-evasion game on a discrete
grid. They propose a method of \emph{belief abstraction} to coarsen the state
space and only refine as needed. The method is quadratic in the number of
states: $\mathcal{O}(m^{2kd})$ for $k$ players. While it is more computationally
expensive than the Hamilton-Jacobi based methods, it is more flexible in being
able to handle a wider class of temporal surveillance objectives, such as
maintaining visibility at all times, maintaining a bound on the spatial
uncertainty of the evader, or guaranteeing visibility of the evader infinitely
often.
Recently, Silver et al.\ developed the AlphaGo Zero and AlphaZero programs that
excel at playing Go, Chess, and Shogi, without using any prior knowledge of the
games besides the rules \cite{silver2017mastering,silver2017mastering2}. They
use Monte Carlo tree search, deep neural networks and self-play reinforcement
learning to become competitive with the world's top professional players.
\subsection*{Contributions}
We use a Godunov upwind scheme to compute the value function for the
closed loop surveillance-evasion game with obstacles in two dimensions. The
state space is four dimensional. The value function allows us to compute the
optimal feedback controls for the pursuers and evaders. Unlike the static game
\cite{takei2014efficient}, it is possible for the pursuer to win. However, the
computation is $\mathcal{O}(m^{kd})$, where $k$ is the number of players and $d$ the
dimension.
As the number of players grows, computing the value function becomes infeasible.
We propose locally optimal strategies for the multiplayer surveillance-evasion
game, based on the value function for the static game. In addition, we propose
a deep neural network trained via self play and Monte Carlo tree search to
learn controls for the pursuer. Unlike Go, Chess, and Shogi, the
surveillance-evasion game is not symmetric; the pursuers and evaders require
different tactics. We use the local strategies to help improve the
efficiency of self-play.
The neural network is trained offline on a class of environments. Then,
during play time, the trained network can be used to play games efficiently on
previously unseen environments. That is, at the expense of preprocessing time
and optimality, we present an algorithm which can run efficiently. While the
deviation from optimality may sound undesirable, it is actually reasonable.
Optimality assumes perfect actions and instant reactions. In real
applications, noise and delays will perturb the system away from optimal
trajectories. We show promising examples in 2D.
\section{Value function from HJI equation}
\label{sec:hji}
Without loss of generality, we formulate the two player game, with a single pursuer and single evader.
The domain $\Omega\subseteq \mathbb{R}^d$ consists of obstacles and free space:
$\Omega=\obs\cup\free$. Consider a pursuer and an evader whose positions over time
are given by the trajectories ${x_P},{x_E}:[0,\infty) \to \free$, respectively.
Let $A:= S^{d-1}\cup \{{\bf 0}\}$ be the compact set of control values.
The feedback controls map the players' positions to a control value:
\begin{equation} \begin{aligned}{\sigma_P},{\sigma_E}\in{\mathcal{A}}:=\{\sigma:\free\times\free\to A \ | \ \sigma \text{ measurable}\},\end{aligned} \end{equation}
where ${\mathcal{A}}$ is the set of admissible controls.
The players move with velocities ${f_P},{f_E}:
\Omega \to [0,\infty)$ according to the dynamics
\begin{equation} \begin{aligned}
\dot{{x_P}}(t) &= {f_P}({x_P}(t)) {\sigma_P}({x_P}(t),{x_E}(t)) \qquad &\dot{{x_E}}(t) &= {f_E}({x_E}(t)) {\sigma_E}({x_P}(t),{x_E}(t))\\
{x_P}(0)&= {\xp^0} \qquad &{x_E}(0)&= {\xe^0} \\
\end{aligned} \end{equation}
For clarity of notation, we will omit the dependence of the controls on
the players' positions. For simplicity, we assume velocities are isotropic,
meaning they do not depend on the controls. In real-world scenarios, this may not
be the case. For example, an airplane's dynamics might be constrained by its
momentum and turning radius.
As a slight relaxation, we consider the finite-horizon version of the game, where the pursuers
win if they can prolong the game past a time threshold $T$.
Let $\mathcal{T_\text{end}} := \{({x_P},{x_E}) \ | \ \xi({x_P},{x_E})\le 0\}$ be the end-game set of losing positions,
where $\xi$ is the shadow function defined in section~\ref{sec:shadow}.
Define the payoff function
\begin{equation} \begin{aligned}
\mathcal{J}[{\xp^0},{\xe^0},t,{\sigma_P},{\sigma_E}] :=
\inf\{0\le \tau \le t|({x_P}(\tau),{x_E}(\tau)) \in \mathcal{T_\text{end}} \},
\end{aligned} \end{equation}
where $\mathcal{J}[{\xp^0},{\xe^0},t,{\sigma_P},{\sigma_E}]:=t$ if no such $\tau$ exists.
The payoff is the minimum time-to-occlusion for a given set of
initial positions and controls.
Define the finite-horizon value function as:
\begin{equation} \begin{aligned}
V({\xp^0},{\xe^0},t) = \sup_{{\sigma_P}\in{\mathcal{A}}} \inf_{{\sigma_E}\in{\mathcal{A}}} \mathcal{J}[{\xp^0},{\xe^0},t,{\sigma_P},{\sigma_E}]
\end{aligned} \end{equation}
The value function describes the length of the game played to time $t$, starting from all pairs of initial positions, and assuming optimal controls.
We are interested in $V({\xp^0},{\xe^0},T)$ for a sufficiently large $T$, which characterizes
the set of initial positions from which the pursuers can maintain visibility of the evaders
for at least $T$ time units.
As $T\to \infty$, we recover the infinite-horizon value function.
By using the principle of optimality and Taylor expansion, one can derive
the Hamilton-Jacobi-Isaacs equation \cite{bardi2008optimal,crandall1983viscosity,evans1984differential}:
\iffalse
Consider a pair of initial positions ${\xp^0}$ and ${\xe^0}$. For clarify of notation, we use ${x_P}(t)$ and ${x_E}(t)$ to denote the positions
of the pursuer and evader at time $t$ according to the controls ${\sigma_P}$ and ${\sigma_E}$, assuming ${x_P}(0)={\xp^0}$ and ${x_E}(0)={\xe^0}$.
The value function satisfies the Dynamic Programming Principle:
\begin{align}
V({\xp^0},{\xe^0}) &= \sup_{\sigma_P} \inf_{\sigma_E} \mathcal{J}[{\xp^0},{\xe^0},{\sigma_P},{\sigma_E}] \\
&= t + \sup_{\sigma_P} \inf_{\sigma_E} \mathcal{J}[{x_P}(t),{x_E}(t),{\sigma_P},{\sigma_E}] \\
&= t + \sup_{\sigma_P} \inf_{\sigma_E} V({x_P}(t),{x_E}(t) )
\end{align}
Rearranging and dividing by $t>0$, we have
\begin{equation} \begin{aligned}
\frac{1}{t} \Big[ V({\xp^0},{\xe^0}) - V({x_P}(t),{x_E}(t))\Big] &= 1
\label{eq:dpp}
\end{aligned} \end{equation}
By Taylor expansion, we have
\begin{align}
{x_P}(t) &= {\xp^0} + t \dot{{x_P}}(0) + \mathcal{O}(t^2) \\
{x_E}(t) &= {\xe^0} + t \dot{{x_E}}(0) + \mathcal{O}(t^2) \\
V({x_P}(t),{x_E}(t)) &= V({\xp^0},{\xe^0}) + t \ \nabla_{x_P} V \cdot \dot{{x_P}}(0) + t \ \nabla_{x_E} V \cdot \dot{{x_E}}(0) + \mathcal{O}(t^2)
\end{align}
Substituting back into (\ref{eq:dpp}), and taking the limit
\begin{align}
\lim_{t\to0^+} \frac{1}{t} \Big[ V({\xp^0},{\xe^0}) - V({x_P}(t),{x_E}(t))\Big] &= 1 \\
\lim_{t\to0^+} \frac{1}{t} \Big[ -t \ \nabla_{x_P} V \cdot \dot{{x_P}}(0) - t \ \nabla_{x_E} V \cdot \dot{{x_E}}(0) + \mathcal{O}(t^2) \Big] &= 1 \\
- \ \nabla_{x_P} V \cdot \dot{{x_P}}(0) - \ \nabla_{x_E} V \cdot \dot{{x_E}}(0) &= 1 \\
- {f_P}({\xp^0})\ \nabla_{x_P} V \cdot {\sigma_P}({\xp^0},{\xe^0}) - {f_E}({\xe^0}) \ \nabla_{x_E} V \cdot {\sigma_E}({\xp^0},{\xe^0}) &= 1
\end{align}
\fi
\begin{equation} \begin{aligned}
V_t + \inf_{{\sigma_E}\in A} \sup_{{\sigma_P} \in A} \{-{f_P}\nabla_{x_P} V \cdot {\sigma_P} - {f_E}\nabla_{x_E} V \cdot {\sigma_E}\}
&= 1 \ , &\text{ on } \free\setminus\mathcal{T_\text{end}} \\
V({x_P},{x_E},0) &= 0 \\
V({x_P},{x_E},t) &= 0 \ , & ({x_P},{x_E})\in\mathcal{T_\text{end}} \\
V({x_P},{x_E},t) &= \infty \ , & {x_P} \text{ or }{x_E} \in \obs \label{eq:hji}
\end{aligned} \end{equation}
It has been shown that the value function is the viscosity solution \cite{bardi2008optimal,crandall1983viscosity,evans1984differential} to \eqref{eq:hji}.
For isotropic controls, this simplifies to the following Eikonal equation:
\begin{equation} \begin{aligned}
V_t -{f_P} |\nabla_{x_P} V| +{f_E} |\nabla_{x_E} V| &= 1 \ , &\text{ on } \free\setminus\mathcal{T_\text{end}} \\
V({x_P},{x_E},0) &= 0 \\
V({x_P},{x_E},t) &= 0 \ , & ({x_P},{x_E} )\in \mathcal{T_\text{end}} \\
V({x_P},{x_E},t) &= \infty \ , & {x_P} \text{ or }{x_E} \in \obs \label{eq:hji-eikonal}
\end{aligned} \end{equation}
The optimal controls can be recovered by computing the gradient of the value function:
\begin{align} \label{eq:optcontrols}
{\sigma_P} &= \frac{\nabla_{x_P} V}{|\nabla_{x_P} V|} \ , \qquad
{\sigma_E} = -\frac{\nabla_{x_E} V}{|\nabla_{x_E} V|} \ .
\end{align}
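For illustration, the following minimal NumPy sketch (our own naming; it assumes the value function has been stored as a four-dimensional array indexed by the discretized pursuer and evader positions, evaluated away from the domain boundary) recovers the feedback controls in (\ref{eq:optcontrols}) by central differences:
\begin{verbatim}
import numpy as np

def feedback_controls(V, iP, jP, kE, lE, h):
    """Approximate the optimal controls at one grid state.

    V : 4D array V[i, j, k, l] of the value function.
    (iP, jP), (kE, lE) : grid indices of the pursuer and evader.
    h : grid spacing.
    Returns unit vectors sigma_P (ascent of V) and sigma_E (descent of V).
    """
    # central differences in the pursuer's coordinates
    gP = np.array([
        (V[iP + 1, jP, kE, lE] - V[iP - 1, jP, kE, lE]) / (2 * h),
        (V[iP, jP + 1, kE, lE] - V[iP, jP - 1, kE, lE]) / (2 * h),
    ])
    # central differences in the evader's coordinates
    gE = np.array([
        (V[iP, jP, kE + 1, lE] - V[iP, jP, kE - 1, lE]) / (2 * h),
        (V[iP, jP, kE, lE + 1] - V[iP, jP, kE, lE - 1]) / (2 * h),
    ])
    eps = 1e-12  # avoid division by zero where V is locally constant
    sigma_P = gP / (np.linalg.norm(gP) + eps)   # pursuer ascends V
    sigma_E = -gE / (np.linalg.norm(gE) + eps)  # evader descends V
    return sigma_P, sigma_E
\end{verbatim}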
\subsection{Algorithm}
\iffalse
We solve (\ref{eq:hji-eikonal}) as the steady state of the time-dependent equation \cite{osher1993level}:
\begin{equation} \begin{aligned}
V_t -{f_P} |\nabla_{x_P} V| +{f_E} |\nabla_{x_E} V| &= 1 \qquad &\text{ on } \free\setminus\mathcal{T_\text{end}} \\
V({x_P},{x_E},t) &= 0 &\qquad ({x_P},{x_E} )\in \mathcal{T_\text{end}} \\
V({x_P},{x_E}) &= \infty &\qquad {x_P} \text{ or }{x_E} \in \obs \label{eq:hji-eikonal-time}
\end{aligned} \end{equation}
\fi
Following the ideas in \cite{tsai2003fast}, we discretize the gradient using an upwind scheme as follows.
Let ${x_P}_{i,j},{x_E}_{k,l}\in\free$ be the discretized positions with grid
spacing $h$. Denote $V_{i,j,k,l}$ as the numerical solution to (\ref{eq:hji-eikonal}) for
initial positions ${x_P}_{i,j},{x_E}_{k,l}$. We estimate the gradient using finite
differences. For clarity, we will only mark the relevant subscripts, e.g.
$V_{i+1}:=V_{i+1,j,k,l}$.
\begin{equation} \begin{aligned}
{x_P}_{x^-} &:= \frac{1}{h} \Big(V_{i} - V_{i-1}\Big), \qquad &{x_E}_{x^-} &:= \frac{1}{h} \Big(V_{k} - V_{k-1}\Big),\\
{x_P}_{x^+} &:= \frac{1}{h} \Big(V_{i+1} - V_{i}\Big), \qquad &{x_E}_{x^+} &:= \frac{1}{h} \Big(V_{k+1} - V_{k}\Big),\\
{x_P}_{y^-} &:= \frac{1}{h} \Big(V_{j} - V_{j-1}\Big), \qquad &{x_E}_{y^-} &:= \frac{1}{h} \Big(V_{l} - V_{l-1}\Big),\\
{x_P}_{y^+} &:= \frac{1}{h} \Big(V_{j+1} - V_{j}\Big), \qquad &{x_E}_{y^+} &:= \frac{1}{h} \Big(V_{l+1} - V_{l}\Big).\\
\end{aligned} \end{equation}
Let $a^- := -\min(0,a)$ and
$a^+ := \max(0,a)$. Define
\begin{equation} \begin{aligned}
{\text{sgn}}\max(a,b) &:=
\begin{cases}
a^+ & \text{ if } \max(a^+,b^-) = a^+ \\
-b^- & \text{ if } \max(a^+,b^-) = b^- \\
\end{cases} \\
\end{aligned} \end{equation}
and
\begin{equation} \begin{aligned}
\partial {x_P}_x V &= {\text{sgn}}\max ({x_P}_{x^+}, {x_P}_{x^-} ) \\
\partial {x_P}_y V &= {\text{sgn}}\max ({x_P}_{y^+}, {x_P}_{y^-} ) \\
\partial {x_E}_x V &= {\text{sgn}}\max ({x_E}_{x^-}, {x_E}_{x^+} ) \\
\partial {x_E}_y V &= {\text{sgn}}\max ({x_E}_{y^-}, {x_E}_{y^+} ) \\
\end{aligned} \end{equation}
Finally, the desired numerical gradients are
\begin{equation} \begin{aligned}
|\nabla_{x_P} V| &= \Big( (\partial {x_P}_x V )^2 + (\partial {x_P}_y V)^2 \Big)^{1/2} \\
|\nabla_{x_E} V| &= \Big( (\partial {x_E}_x V )^2 + (\partial {x_E}_y V)^2 \Big)^{1/2} \\
\end{aligned} \end{equation}
Then we have a simple explicit scheme.
\begin{equation} \begin{aligned}
V^{n+1} = V^{n} + \Delta t (1 + {f_P} |\nabla_{x_P} V| - {f_E} |\nabla_{x_E} V| )
\label{eq:hji-scheme}
\end{aligned} \end{equation}
The CFL condition dictates that the time step $\Delta t$ should satisfy
\begin{equation} \begin{aligned}
\Delta t \le \frac{h}{16 \max({f_P},{f_E})}
\end{aligned} \end{equation}
For a given environment, we precompute the value function by iteration until convergence.
During play time, we initialize ${\xp^0},{\xe^0}$ and compute the optimal trajectories according to
(\ref{eq:optcontrols}) using $\Delta t$ time increments.
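As a concrete reference, here is a minimal NumPy sketch of one update of the scheme (\ref{eq:hji-scheme}); the helper names are ours, the domain boundary is handled by periodic wraparound purely for brevity, and \texttt{fP}, \texttt{fE} are assumed to be the (regularized) speed fields broadcast over the pursuer and evader axes, respectively:
\begin{verbatim}
import numpy as np

def sgn_max(a, b):
    """sgn-max of the upwind scheme: a^+ if it dominates b^-, else -b^-."""
    a_plus = np.maximum(a, 0.0)
    b_minus = -np.minimum(b, 0.0)
    return np.where(a_plus >= b_minus, a_plus, -b_minus)

def hji_step(V, fP, fE, end_game, h, dt):
    """One explicit update of the 4D value array V[i, j, k, l].

    fP : speed field broadcast over the pursuer axes, e.g. shape (m, m, 1, 1).
    fE : speed field broadcast over the evader axes, e.g. shape (m, m).
    end_game : boolean mask of the end-game set (V is pinned to 0 there).
    Obstacles are handled through the regularized speeds, not through V.
    """
    def diff(axis):
        # one-sided differences; np.roll wraps around the boundary, which is
        # a simplification of the boundary handling used in the experiments
        fwd = (np.roll(V, -1, axis) - V) / h   # D^+ V
        bwd = (V - np.roll(V, 1, axis)) / h    # D^- V
        return fwd, bwd

    pxf, pxb = diff(0); pyf, pyb = diff(1)     # pursuer x, y
    exf, exb = diff(2); eyf, eyb = diff(3)     # evader  x, y

    # pursuer maximizes: sgnmax(D^+, D^-); evader minimizes: arguments swapped
    dPx = sgn_max(pxf, pxb); dPy = sgn_max(pyf, pyb)
    dEx = sgn_max(exb, exf); dEy = sgn_max(eyb, eyf)

    gradP = np.sqrt(dPx**2 + dPy**2)
    gradE = np.sqrt(dEx**2 + dEy**2)

    V_new = V + dt * (1.0 + fP * gradP - fE * gradE)
    V_new[end_game] = 0.0   # boundary condition on the end-game set
    return V_new
\end{verbatim}
One would call \texttt{hji\_step} repeatedly, with $\Delta t$ chosen according to the CFL condition, until the final time $T$ is reached or the solution stops changing.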
\subsubsection{Boundary conditions}
The obstacles appear in the HJI equation as boundary conditions.
However, direct numerical implementation leads to artifacts near the obstacles.
Instead, we model the obstacles by setting the velocities to be small inside obstacles.
We regularize the velocities by adding a smooth transition \cite{takei2014efficient}:
\begin{align}
v_\epsilon(x) &=
\begin{cases}
v(x) & \map(x)>0 \\
v_\text{min} + \frac{v(x)-v_\text{min}}{2} \Big[ \cos\Big(\frac{\phi(x)\pi}{2\epsilon}\Big)+1 \Big] & \map(x)\in[-2\epsilon,0]\\
v_\text{min} & \map(x) < -2\epsilon
\end{cases}
\end{align}
where $\map$ is the signed distance function to the obstacle boundaries.
In the numerical experiments, we use $\epsilon=16\Delta x$ and $v_\text{min} = 1/100$.
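The smoothing above can be transcribed directly; in the sketch below (our own helper names), \texttt{phi} holds the signed distance $\map$ sampled on the grid:
\begin{verbatim}
import numpy as np

def regularize_velocity(v, phi, eps, v_min=0.01):
    """Smoothly taper the speed v to v_min inside obstacles.

    phi : signed distance to obstacle boundaries (positive in free space).
    eps : transition width (the text uses eps = 16 * dx).
    """
    blend = v_min + 0.5 * (v - v_min) * (np.cos(np.pi * phi / (2.0 * eps)) + 1.0)
    return np.where(phi > 0, v, np.where(phi >= -2.0 * eps, blend, v_min))
\end{verbatim}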
\subsection{Numerical results}
\subsubsection*{Stationary pursuer}
We verify that the scheme converges numerically for the case in which the pursuer is stationary.
When ${f_P}=0$, the HJI equation \eqref{eq:hji-eikonal} becomes the time-dependent Eikonal equation:
\begin{equation} \begin{aligned}
V_t + {f_E} |\nabla_{x_E} V| &= 1 \qquad &\text{ on } \free\setminus\mathcal{T_\text{end}} \\
V({x_P},{x_E},0) &= 0 \\
V({x_P},{x_E},t) &= 0 &\qquad ({x_P},{x_E} )\in \mathcal{T_\text{end}} \label{eq:hji-stationary}
\end{aligned} \end{equation}
In particular, for sufficiently large $t$, the value function reaches a steady state, and satisfies the Eikonal equation:
\begin{equation} \begin{aligned}
{f_E} |\nabla_{x_E} V| &= 1 \qquad &\text{ on } \free\setminus\mathcal{T_\text{end}} \\
V({x_P},{x_E}) &= 0 &\qquad ({x_P},{x_E} )\in \mathcal{T_\text{end}} \label{eq:hji-stationary-steady}
\end{aligned} \end{equation}
For this special case, the exact solution is known; the solution corresponds to the evader's travel time to
the end-game set $\mathcal{T_\text{end}}$.
Also, this case effectively reduces the computational cost
from $\mathcal{O}(m^4)$ to $\mathcal{O}(m^2)$, so that we can reasonably compute solutions at higher resolutions.
We consider the domain $\Omega=[0,1)\times[0,1)$ with a single circular obstacle of radius $0.15$ centered at $(1/2,1/2)$.
The pursuer is stationary at ${x_P}_0=(1/8,1/2)$.
We use $\Delta t = \Delta x/20$ and iterate until the solution no longer changes in the $L_1$ sense, using a tolerance of $10^{-5}$.
We compute the ``exact'' solution using the fast marching method on a high-resolution grid with $M=2048$.
We vary $m$ from $16$ to $1024$ and observe convergence in the $L_1$ and $L_2$ sense,
as seen in Table~\ref{tab:hji_conv}.
In Figure~\ref{fig:hji_contour}, we plot the level curves comparing the computed solution at $m=512,1024$ to the ``exact'' solution.
Notice the discrepancies are a result of the difficulty in dealing with boundary conditions.
However, these errors decay as the grid is refined.
The case where the evader is stationary is not interesting.
\begin{table}[]
\begin{center}
\caption{Error for the stationary pursuer case, compared to the known solution computed using fast marching method at resolution $M=2048$.\vspace{1em}}
\label{tab:hji_conv}
\begin{tabular}{c|c|c}
$m $ & $L_1$ error & $L_2$ error \\
\hline
$16 $ & $0.08972215 $ &$0.01288563 $ \\
$32 $ & $0.03177683 $ &$0.00159669 $ \\
$64 $ & $0.02442984 $ &$0.00111537 $ \\
$128 $ & $0.01059728 $ &$0.00021345 $ \\
$256 $ & $0.00515584 $ &$0.00005214 $ \\
$512 $ & $0.00304322 $ &$0.00001961 $ \\
$1024$ & $0.00086068 $ &$0.00000142 $
\end{tabular}
\end{center}
\end{table}
\begin{figure}[hptb]
\centering
\includegraphics[width=.45\textwidth]{figures/hj_convergence_512.pdf} \quad
\includegraphics[width=.45\textwidth]{figures/hj_convergence_1024.pdf}
\caption{Comparison of contours of the ``exact'' solution (blue) with those
computed by the scheme (\ref{eq:hji-scheme}) (red) using grid resolutions $m=512$ (left) and
$m=1024$ (right). The pursuer (blue square) is stationary. The error emanates from the obstacle
due to boundary conditions, but the scheme converges
as the grid is refined.} \label{fig:hji_contour}
\end{figure}
\subsubsection*{A circular obstacle}
The evader has the advantage in the surveillance-evasion game. It is
difficult for the pursuer to win unless it is sufficiently fast.
But once it is fast enough, it can almost always win.
Define the winning regions for the pursuer and evader, respectively:
\begin{align}
\mathcal{W}_{x_P} &= \{({x_P},{x_E}) \ | \ V({x_P},{x_E},T) > T_\text{max}\} \\
\mathcal{W}_{x_E} &= \{({x_P},{x_E}) \ | \ V({x_P},{x_E},T) \le T_\text{max}\}
\end{align}
Here, we use $T_\text{max}=0.9\,T$ to tolerate numerical artifacts due to boundary conditions.
In Figure~\ref{fig:hj_circles_winset}, we show how the winning region
for a fixed evader/pursuer position changes as the pursuer's speed increases.
Since it is difficult to visualize data in
4D, we plot the slices $V({\xp^0},\cdot)$ and $V(\cdot,{\xe^0})$ where
${\xp^0}={\xe^0}=(1/8,1/2)$.
We use $m=64$, $\Delta t = \Delta x/20$ and iterate until $T=10$.
We fix ${f_P}=1$ and compute $V$ for each ${f_E}\in\{\frac{1}{3},\frac{1}{2},\frac{2}{3}\}$.
The computation for each value function takes 16 hours.
\begin{figure}[hptb]
\centering
\includegraphics[width=.78\textwidth]{figures/hj_circle_winsets_cE_66.pdf}
\includegraphics[width=.78\textwidth]{figures/hj_circle_winsets_cE_50.pdf}
\includegraphics[width=.78\textwidth]{figures/hj_circle_winsets_cE_33.pdf}
\caption{Comparison of winning initial positions for the evader (left,
red contour) against a pursuer with fixed initial position (blue square) and
vice versa -- winning initial positions for the pursuer (right, blue contour)
against an evader with fixed initial position (red circle). Left column shows
$V({\xp^0},\cdot)$ while right column shows $V(\cdot,{\xe^0})$, where higher values of
$V$ are yellow, while lower values are dark blue. From top to bottom, the pursuer is $1.5$, $2$
and $3$ times faster than the evader.
The pursuer must be sufficiently fast to have a chance at winning.
} \label{fig:hj_circles_winset} \end{figure}
Figure~\ref{fig:hj_circles_traj} shows trajectories from several initial positions
with various speeds. Interestingly, once the evader is cornered, the optimal controls
dictate that it is futile to move. That is, the value function is locally constant.
\begin{figure}[hptb]
\centering
\includegraphics[width=.45\textwidth]{figures/hj_traj_circles_32_08_04_60_cE_100.pdf}
\includegraphics[width=.45\textwidth]{figures/hj_traj_circles_32_08_04_60_cE_50.pdf}
\includegraphics[width=.45\textwidth]{figures/hj_traj_circles_48_08_04_32_cE_100.pdf}
\includegraphics[width=.45\textwidth]{figures/hj_traj_circles_48_08_04_32_cE_50.pdf}
\caption{Trajectories of several games played around a circle. The
pursuer loses when it has the same speed as the evader (left column). When the pursuer is twice as fast as the evader,
it is possible to win; the evader essentially gives up once it is cornered, since no controls will change the outcome (right column).
Initial positions are shown as stars. Black lines connect positions at constant time intervals.}
\label{fig:hj_circles_traj} \end{figure}
\subsubsection*{More obstacles}
In Figure~\ref{fig:hj_dice_traj} we consider a more complicated environment with multiple obstacles.
Here, the pursuer is twice as fast as the evader. Although there are many obstacles, the dynamics
are not so interesting in the sense that the evader will generally navigate towards a single obstacle.
Again, the evader tends to give up once the game has been decided.
Finally, in Figure~\ref{fig:hj_human_traj} we show suboptimal controls for the
evader. In particular, the evader is controlled manually. Although manual
controls do not help the evader win, they lead to more interesting
trajectories.
\begin{figure}[htb]
\centering
\includegraphics[width=.45\textwidth]{figures/hj_traj_dice_32_08_07_37_cE_50.pdf}
\includegraphics[width=.45\textwidth]{figures/hj_traj_dice_56_08_57_35_cE_50.pdf}
\caption{Trajectories of games played around 5 circular obstacles. Pursuer (blue) is 2x as fast as evader (red).
The evader wins (left) if it can quickly hide. Otherwise it will give up once it is captured (right).
Initial positions are shown as stars. Black lines connect positions at constant time intervals.}
\label{fig:hj_dice_traj} \end{figure}
\begin{figure}[hptb]
\centering
\includegraphics[width=.45\textwidth]{figures/hj_traj_circles_32_08_32_21_cE_33_human.pdf}
\includegraphics[width=.45\textwidth]{figures/hj_traj_dice_56_08_57_35_cE_50_human.pdf}
\caption{Manually controlled evader against an optimal pursuer. The evader loses in both cases, but does not give up.}
\label{fig:hj_human_traj} \end{figure}
\subsection{Discussion}
Notice that an optimal trajectory for the pursuer balances distance and
visibility. While moving closer to the evader helps guarantee that it cannot
``get away'', this alone is not sufficient, since being close leads to larger
occlusions. On the other hand, moving far away gives better visibility of the
environment, but may make it impossible to catch up once the evader turns a corner.
Although we described the formulation for a single pursuer and evader,
the same scheme holds for multiple pursuers and evaders. The end-game set just
needs to be modified to take into account the multiple players.
That is, the game ends as soon as any evader is occluded from all pursuers.
However, the computational complexity of the scheme is $\mathcal{O}(m^{kd})$, which
quickly becomes infeasible even for small grid sizes.
At the expense of offline compute time, the game can be played efficiently
online. The caveat is that the value function is only valid for the specific
map and velocities computed offline.
\section{Locally optimal strategies}
\label{sec:locally}
As the number of players increases, computing the value function from the HJI
equations is no longer tractable.
We consider a discrete version of the game, with the aim of feasibly
computing controls for games with multiple pursuers and multiple evaders.
Each player's position is now restricted on a grid, and at each turn,
the player can move to a new position within a neighborhood determined by its
velocity. Players cannot move through obstacles.
Formally, define the arrival time functions
\begin{equation} \begin{aligned}
\parrival(x,y) &:= \min_{{\sigma_P}\in{\mathcal{A}}} \min \{t|{x_P}(0)=x,{x_P}(t)=y\}\\
\earrival(x,y) &:= \min_{{\sigma_E}\in{\mathcal{A}}} \min \{t|{x_E}(0)=x,{x_E}(t)=y\} .
\end{aligned} \end{equation}
The set of valid actions are the positions $y$ which can be reached from $x$
within a $\Delta t$ time increment:
\begin{equation} \begin{aligned}
\pvalid(t)&:=\{y\in\free | \parrival({x_P}(t),y) \le \Delta t\}\\
\evalid(t)&:=\{y\in\free | \earrival({x_E}(t),y) \le \Delta t\}.
\end{aligned} \end{equation}
In a $\Delta t$ time step, each player can move to a position
\begin{equation} \begin{aligned}
{x_P}(t+\Delta t) \in \pvalid(t) \\
{x_E}(t+\Delta t) \in \evalid(t).
\end{aligned} \end{equation}
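As a sketch of how these valid action sets can be computed in practice, the following (our own, hypothetical helpers; the grid spacing is absorbed into the speed field and speeds are assumed strictly positive) approximates the arrival time by Dijkstra's algorithm on an 8-connected grid and then thresholds it at $\Delta t$:
\begin{verbatim}
import heapq
import numpy as np

def travel_time(speed, src, obstacles):
    """Dijkstra approximation of the arrival time tau(src, .) on a grid.

    speed     : 2D array of speeds (f_P or f_E) at each cell.
    src       : (i, j) start cell.
    obstacles : boolean mask; blocked cells are never visited.
    """
    m, n = speed.shape
    tau = np.full((m, n), np.inf)
    tau[src] = 0.0
    heap = [(0.0, src)]
    moves = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
             if (di, dj) != (0, 0)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if t > tau[i, j]:
            continue
        for di, dj in moves:
            ni, nj = i + di, j + dj
            if 0 <= ni < m and 0 <= nj < n and not obstacles[ni, nj]:
                step = np.hypot(di, dj) / speed[ni, nj]  # edge length / speed
                if t + step < tau[ni, nj]:
                    tau[ni, nj] = t + step
                    heapq.heappush(heap, (t + step, (ni, nj)))
    return tau

def valid_actions(tau, dt):
    """Cells reachable within a dt budget (includes staying put)."""
    return np.argwhere(tau <= dt)
\end{verbatim}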
Analogously, for multiple players, denote the number of pursuers and evaders as
$k_P$ and $k_E$, respectively. Define
\begin{equation} \begin{aligned}
{{\bf P}} &= ({x_P}_1,\dots, {x_P}_{k_P})\\
{{\bf E}} &= ({x_E}_1,\dots, {x_E}_{k_E})\\
\bfpvalid(t) &= \{{{\bf P}}| {x_P}_i \in \pvalid(t) \ , \ i=1,\dots,k_P\} \\
\bfevalid(t) &= \{{{\bf E}}| {x_E}_j \in \evalid(t) \ , \ j=1,\dots,k_E\}
\end{aligned} \end{equation}
so that in $\Delta t$ time, each team can move to
\begin{equation} \begin{aligned}
{{\bf P}}(t+\Delta t) \in \bfpvalid(t) \\
{{\bf E}}(t+\Delta t) \in \bfevalid(t) \\
\end{aligned} \end{equation}
The game ends as soon as one evader is occluded from all
pursuers. The end-game set is
\begin{equation} \begin{aligned}
\mathcal{T_\text{end}} = \{ ({{\bf P}},{{\bf E}}) \ |\ \exists \ j : \xi({x_P}_i,{x_E}_j)\le 0 \text{ for } i=1,\dots,k_P\} .
\end{aligned} \end{equation}
We propose two locally optimal strategies for the pursuer.
\subsection{Distance strategy}
The trajectories from section~\ref{sec:hji} suggest that the pursuer must
generally remain close to the evader. Otherwise, the evader can quickly hide behind obstacles.
A simple strategy for the pursuer is to move towards the evader:
\begin{equation} \begin{aligned}
{x_P}(t+\Delta t) &= \argmin_{x\in\pvalid(t)} \parrival\Big(x,{x_E}(t) \Big).
\end{aligned} \end{equation}
That is, in the time increment $\Delta t$, the pursuer should pick the action that minimizes
its travel time to the evader's current position at time $t$.
For the multiplayer game, we propose a variant of the
Hausdorff distance, where max is replaced by a sum:
\begin{align*}
{\bf \parrival}({\bf x, {x_E}}(t)) :=
\frac{1}{2}\Big[ \sum_{i=1}^{k_P} \min_j \parrival[i]\Big( x_i,{x_E}_j(t) \Big)^2 \ \Big]^{1/2} + \frac{1}{2}\Big[ \sum_{j=1}^{k_E} \min_i \parrival[i]\Big( x_i,{x_E}_j(t)\Big)^2 \ \Big]^{1/2} .
\end{align*}
Informally, the first term encourages each pursuer to be close to an evader,
while the second term encourages a pursuer to be close to each evader. The sum helps
to prevent ties.
The optimal action according to the distance strategy is
\begin{equation} \begin{aligned}
{{{\bf P}}(t+\Delta t)} = \argmin_{{\bf x} \in \bfpvalid(t)}{\bf \parrival}({\bf x, {x_E}}(t))
\end{aligned} \end{equation}
In the next section, we will use search algorithms to refine policies. Rather
than determining the \emph{best} action, it is useful to quantify the utility
of each action. To do so, define
$p_\text{distance}:\Omega \times \free \times \dots \times \free \to \mathbb{R}$ as the policy,
which outputs the probability that the agent should take each action, conditioned on the current player positions.
We normalize using the \emph{softmax} function to generate the policy:
\begin{equation} \begin{aligned}
\alpha &= \Big[\displaystyle{\sum_{{\bf x}\in\bfpvalid(t)}} e^{ -{\bf \parrival}({\bf x},{\bf {x_E}}(t))}\Big]^{-1}\\
p_\text{distance}\big({\bf x}| ({\bf {x_P}}(t),{\bf {x_E}}(t))\big) &=
\begin{dcases}
\alpha e^{ -{\bf \parrival}({\bf x},{\bf {x_E}}(t))} & {\bf x} \in \bfpvalid(t)\\
0 & \text{otherwise}
\end{dcases}
\end{aligned} \end{equation}
For the discrete game with $\Delta t$ time increments,
one can enumerate the possible positions for each ${x_P}_i$
and evaluate the travel time to find the optimal control.
The arrival time function $\parrival[i](\cdot,{x_E}_j(t))$ to each ${x_E}_j$ at the current
time $t$ can be precomputed in $\mathcal{O}(m^d)$ time.
In the general case, where each pursuer may have a different velocity field ${f_P}_i$, one
would need to compute $k_P$ arrival time functions.
If $a_{\Delta t}(P_i)$ is the
max number of possible actions each pursuer can make in $\Delta t$ increment, then the total
computational complexity for one move is
$$ O\Big(k_P k_E m^d + \prod_{i=1}^{k_P} a_{\Delta t}(P_i)\Big). $$
For the special case when the pursuers have the same ${f_P}$, the complexity reduces to
$$ O\Big(k_E m^d + \prod_{i=1}^{k_P} a_{\Delta t}(P_i)\Big). $$
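A sketch of the resulting policy computation (our own naming; \texttt{tau\_to\_evaders} holds the precomputed arrival-time fields, one per evader, e.g. obtained from the \texttt{travel\_time} helper sketched earlier, and isotropic, identical pursuer speeds are assumed so that travel times are symmetric):
\begin{verbatim}
import numpy as np

def distance_score(cand, tau_to_evaders):
    """Aggregated travel-time score for a candidate joint pursuer move.

    cand           : list of candidate cells, one per pursuer.
    tau_to_evaders : tau_to_evaders[j] is the arrival-time field to evader j.
    """
    d = np.array([[tau_to_evaders[j][tuple(x)]
                   for j in range(len(tau_to_evaders))]
                  for x in cand])               # d[i, j]: pursuer i to evader j
    term_P = np.sqrt((d.min(axis=1) ** 2).sum())  # each pursuer near some evader
    term_E = np.sqrt((d.min(axis=0) ** 2).sum())  # each evader covered by someone
    return 0.5 * term_P + 0.5 * term_E

def distance_policy(candidates, tau_to_evaders):
    """Softmax of the negative score over all admissible joint moves."""
    costs = np.array([distance_score(c, tau_to_evaders) for c in candidates])
    weights = np.exp(-(costs - costs.min()))      # shift for numerical stability
    return weights / weights.sum()
\end{verbatim}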
\subsection{Shadow strategy}
Recall that, for a stationary pursuer, the value function for the evader becomes
the Eikonal equation:
\begin{equation} \begin{aligned}
{f_E} |\nabla_{x_E} V| &= 1 \qquad &\text{ on } \free\setminus\mathcal{T_\text{end}} \\
V({x_P},{x_E}) &= 0 &\qquad ({x_P},{x_E} )\in \mathcal{T_\text{end}},
\end{aligned} \end{equation}
whose solution is the travel time to the shadow set.
Define the time-to-occlusion as
\begin{equation} \begin{aligned}
\tstar({x_P},{x_E}^0) := \min_{{\sigma_E}\in{\mathcal{A}}} \min \{ t\ge 0 \ | \ {x_E}(0)={x_E}^0 \ , \ ({x_P},{x_E}(t)) \in \mathcal{T_\text{end}} \ \} .
\end{aligned} \end{equation}
It is the shortest time in which an evader at ${x_E}^0$ can be occluded from a stationary pursuer at ${x_P}$.
Thus, a reasonable strategy for the evader is to pick the action which brings it
closest to the shadow formed by the pursuer's position:
\begin{equation} \begin{aligned}
{x_E}(t+\Delta t) = \argmin_{y \in\evalid(t)} \tstar({x_P}(t),y) .
\label{eq:evader-action}
\end{aligned} \end{equation}
A conservative strategy for the pursuer, then, is to maximize time-to-occlusion, assuming that the evader can anticipate its actions:
\begin{align}
\tstar^\ast(x,{x_E}(t)) &= \min_{y \in\evalid(t)} \tstar(x,y)\\
{x_P}(t+\Delta t) &= \argmax_{x\in\pvalid(t)} \tstar^\ast(x,{x_E}(t)) \label{eq:local-takei}.
\end{align}
\emph{Remark:} The strategy (\ref{eq:local-takei}) is a local variant of the static value
function proposed in \cite{takei2014efficient}. In that paper, they suggest
using the static value function for feedback controls by moving towards the
globally optimal destination, and then recomputing at $\Delta t$ time
intervals. Here, we use the locally optimal action.
For multiple players, the game ends as soon as any evader is hidden from all pursuers. Define the time-to-occlusion for multiple players:
\begin{equation} \begin{aligned}
\bftstar({{{\bf P}}},{{\bf E}}^0):= \min_{{\sigma_E}_i\in{\mathcal{A}}} \min \{ t\ge 0 \ | \ {{\bf E}}(0)={{\bf E}}^0 \ , \ ({{\bf P}},{{\bf E}}(t)) \in \mathcal{T_\text{end}} \ \} .
\end{aligned} \end{equation}
Then, the strategy should consider the shortest time-to-occlusion among all possible evaders' actions in the $\Delta t$ time increment:
\begin{equation} \begin{aligned}
\bftstar^\ast({\bf x},{\bf {x_E}}(t)) &:= \min_{{\bf y} \in\bfevalid(t)} \bftstar({\bf x},{\bf y}) \\
{{\bf P}}(t+\Delta t) &= \argmax_{{\bf x}\in\bfpvalid(t)} \bftstar^\ast({\bf x},{\bf {x_E}}(t)).
\end{aligned} \end{equation}
The corresponding shadow policy is:
\begin{equation} \begin{aligned}
\alpha &= \Big[\displaystyle{\sum_{x\in\bfpvalid(t)}} e^{ \bftstar^\ast({\bf x},{\bf {x_E}}(t))}\Big]^{-1}\\
p_\text{shadow}\big({\bf x}| ({\bf {x_P}}(t),{\bf {x_E}}(t))\big) &=
\begin{dcases}
\alpha e^{ \bftstar^\ast({\bf x},{\bf {x_E}}(t))} & {\bf x} \in \bfpvalid(t)\\
0 & \text{otherwise}
\end{dcases}
\end{aligned} \end{equation}
This strategy is computationally expensive.
One can precompute the arrival time to each evader by solving an
Eikonal equation in $\mathcal{O}(m^d)$ operations.
For each combination of pursuer
positions, one must compute the joint visibility function and corresponding
shadow function in $\mathcal{O}(m^d)$ operations.
Then the time-to-occlusion can be found by evaluating the precomputed
arrival times to find the minimum within the shadow set, again in $\mathcal{O}(m^d)$ operations.
The computational complexity for one move is
\begin{equation} \begin{aligned} O \Big( k_E m^d + m^d \cdot \prod_{i=1}^{k_P} a_{\Delta t}(P_i) \Big). \end{aligned} \end{equation}
One may also consider alternating minimization strategies to achieve
\begin{equation} \begin{aligned} O \Big(k_E m^d + m^d \cdot \sum_{i=1}^{k_P} a_{\Delta t}(P_i) \Big), \end{aligned} \end{equation}
though we leave that for future work.
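A hedged sketch of the multiplayer shadow policy follows; \texttt{joint\_visible} stands in for a routine returning the joint visibility mask of a set of pursuer positions (the shadow function $\xi$ itself is not reproduced here), and \texttt{evader\_taus} are the evaders' arrival-time fields:
\begin{verbatim}
import numpy as np

def time_to_occlusion(pursuer_cells, evader_taus, joint_visible, obstacles):
    """Shortest time any evader needs to reach the joint shadow created by
    the candidate pursuer positions (a stand-in for T*)."""
    shadow = ~joint_visible(pursuer_cells) & ~obstacles   # occluded free cells
    if not shadow.any():
        return np.inf                                     # no occlusion possible
    return min(tau[shadow].min() for tau in evader_taus)

def shadow_policy(candidates, evader_taus, joint_visible, obstacles):
    """Softmax over candidate joint moves, favouring large time-to-occlusion."""
    scores = np.array([time_to_occlusion(c, evader_taus, joint_visible, obstacles)
                       for c in candidates])
    scores = np.minimum(scores, 1e9)          # cap moves that rule out occlusion
    weights = np.exp(scores - scores.max())
    return weights / weights.sum()
\end{verbatim}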
\subsection{Blend strategy}
We have seen in section~\ref{sec:hji} that optimal controls for the pursuer balance the
distance to, and visibility of, the evader. Thus a reasonable approach would be
to combine the distance and shadow strategies. However, it is not clear how
they should be integrated. One may consider a linear combination, but
the appropriate weighting depends on the game settings and environment.
Empirically, we observe that the product
of the policies provides promising results across a range of scenarios. Specifically,
\begin{equation} \begin{aligned}
p_\text{blend} \propto p_\text{shadow} \cdot p_\text{distance}
\end{aligned} \end{equation}
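In code, the blend is a pointwise product followed by renormalization over the admissible joint moves; a minimal sketch under our earlier conventions:
\begin{verbatim}
import numpy as np

def blend_policy(p_shadow, p_distance):
    """Pointwise product of the shadow and distance policies, renormalized.

    Both inputs are probability vectors over the same ordering of admissible
    joint pursuer moves (e.g. outputs of shadow_policy and distance_policy).
    """
    p = p_shadow * p_distance
    return p / p.sum()
\end{verbatim}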
\subsection{Numerical results}
We present some representative examples of the local policies.
First, we consider the game with a circular obstacle, a single pursuer and a single evader
whose speeds are ${f_P}=3$ and ${f_E}=2$, respectively.
Figure~\ref{fig:traj-local-circle} illustrates the typical trajectories for each policy.
In general, the distance strategy leads the pursuer into a cat-and-mouse game
with the evader; the pursuer, when close enough, will jump to the evader's
position at the previous time step. The shadow strategy keeps the pursuer far
away from obstacles, since this allows it to \emph{steer the shadows} in the
fastest way. The blend strategy balances the two approaches and resembles the
optimal trajectories based on the HJI equation in section~\ref{sec:hji}.
\begin{figure}[hptb]
\centering
\includegraphics[width=.42\textwidth,trim={0 0 0.1in 0},clip]{figures/traj/traj_seed_808_m_64_vP_3_vE_2_mcts_0_distance_vs_shadow_000.pdf}
\includegraphics[width=.42\textwidth,trim={0.1in 0 0 0},clip]{figures/traj/traj_seed_808_m_64_vP_3_vE_2_mcts_0_distance_vs_shadow_001.pdf}
\includegraphics[width=.42\textwidth,trim={0 0 0.1in 0},clip]{figures/traj/traj_seed_808_m_64_vP_3_vE_2_mcts_0_tstar_vs_shadow_000.pdf}
\includegraphics[width=.42\textwidth,trim={0.1in 0 0 0},clip]{figures/traj/traj_seed_808_m_64_vP_3_vE_2_mcts_0_tstar_vs_shadow_001.pdf}
\includegraphics[width=.42\textwidth,trim={0 0 0.1in 0},clip]{figures/traj/traj_seed_808_m_64_vP_3_vE_2_mcts_0_tstardist_vs_shadow_000.pdf}
\includegraphics[width=.42\textwidth,trim={0.1in 0 0 0},clip]{figures/traj/traj_seed_808_m_64_vP_3_vE_2_mcts_0_tstardist_vs_shadow_001.pdf}
\caption{Distance strategy (top) follows the evader closely, shadow strategy
(middle) stays far to gain better perspective, while the blend strategy
(bottom) strikes a balance.}
\label{fig:traj-local-circle}
\end{figure}
Next, we highlight the advantages of the shadow strategy with a
2 pursuer, 2 evader game on a map with two crescent-shaped obstacles.
The pursuer and evader speeds are ${f_P}=4$ and ${f_E}=2$, respectively.
The openness of the environment creates large occlusions. The pursuers use the
shadow strategy to cooperate and essentially corner the evaders.
Figure~\ref{fig:traj-shadow-eyes} shows snapshots of the game.
The distance strategy loses immediately since the green pursuer does not
properly track the orange evader.
\begin{figure}[hptb]
\centering
\includegraphics[width=.42\textwidth,trim={0 .1in .1in 0},clip]{figures/traj/traj_eyes_seed_01_m_128_vP_4_vE_2_mcts_0_tstar_vs_shadow_000.pdf}
\includegraphics[width=.42\textwidth,trim={.1in .1in 0 0},clip]{figures/traj/traj_eyes_seed_01_m_128_vP_4_vE_2_mcts_0_tstar_vs_shadow_001.pdf}
\includegraphics[width=.42\textwidth,trim={0 .0 .1in .1in},clip]{figures/traj/traj_eyes_seed_01_m_128_vP_4_vE_2_mcts_0_tstar_vs_shadow_002.pdf}
\includegraphics[width=.42\textwidth,trim={.1in .0 0 .1in},clip]{figures/traj/traj_eyes_seed_01_m_128_vP_4_vE_2_mcts_0_tstar_vs_shadow_003.pdf}
\includegraphics[width=.42\textwidth]{figures/traj/traj_eyes_seed_01_m_128_vP_4_vE_2_mcts_0_distance_vs_shadow_000.pdf}
\caption{(Top 2 rows) The blue and green pursuers cooperate by using the shadow strategy. Green initially has responsibility of the orange
evader, but blue is able to take over.
(Bottom) The distance strategy loses immediately.
}
\label{fig:traj-shadow-eyes}
\end{figure}
We present cases where the distance and shadow strategy fail in
Figure~\ref{fig:traj-failure}. The evader tends to stay close to the obstacle,
since that enables the shortest path around the obstacle. Using the distance
strategy, the pursuer aggressively follows the evader. The evader is able to
counter by quickly jumping behind sharp corners. On the other hand, the shadow
strategy moves the pursuer away from obstacles to reduce the size of shadows.
As a consequence, the pursuer will generally be too far away from the evader
and eventually lose. In environments with many nonconvex obstacles, both
strategies will fail.
\begin{figure}[hptb]
\centering
\includegraphics[width=.45\textwidth]{figures/traj/traj_cross_seed_823_m_32_vP_4_vE_3_mcts_0_distance_vs_shadow_000.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_dice_seed_05_m_32_vP_3_vE_2_mcts_0_tstar_vs_shadow_000.pdf}
\caption{Failure modes for the local strategies. Blindly using the distance
strategy (left) allows the evader to exploit the sharp concavities. The
shadow strategy (right) keeps the pursuer far away to reduce the size of shadows, but often,
the pursuer is too far away to catch the evader.}
\label{fig:traj-failure} \end{figure}
Finally, we show that blending the shadow and distance strategies is very
effective in compensating for the shortcomings of each individual policy.
The pursuers are able to efficiently track the evaders while maintaining a safe distance.
Figure~\ref{fig:traj-blend-dice} shows an example with 2 pursuers and 2 evaders on
a map with multiple obstacles, where ${f_P}=3$ and ${f_E}=2$.
\begin{figure}[hptb]
\centering
\includegraphics[width=.45\textwidth]{figures/traj/traj_dice_seed_04_m_32_vP_3_vE_2_mcts_0_tstardist_vs_shadow_000.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_dice_seed_04_m_32_vP_3_vE_2_mcts_0_tstardist_vs_shadow_001.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_dice_seed_04_m_32_vP_3_vE_2_mcts_0_tstardist_vs_shadow_002.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_dice_seed_04_m_32_vP_3_vE_2_mcts_0_tstardist_vs_shadow_003.pdf}
\caption{The pursuers (blue and green)
are able to win by combining the distance and shadow strategy. The pursuers stay close, while maintaining
enough distance to avoid creating large shadow regions. The pursuers are slightly faster than the evaders.
}
\label{fig:traj-blend-dice}
\end{figure}
\section{Learning the pursuer policy}
\label{sec:learning-policy}
We propose a method for learning optimal controls for the pursuer, though our
methods can be applied to find controls for the evader as well. Again, we
consider a discrete game, where each player's position is restricted to a grid,
and at each turn, the player can move to a new position within a neighborhood
determined by its velocity. All players move simultaneously. The game
termination conditions are checked at the end of each turn.
We initialize a neural network which takes as input any game state,
and produces a policy and value pair. The policy is a probability distribution
over actions. Unlike in section~\ref{sec:hji}, the value, in this context, is
an estimate of the likely winner given the input state.
Initially the policy and value estimates are random. We use Monte
Carlo tree search to compute refined policies. We play the game using the
refined policies, and train the neural network to learn the refined policies.
By iterating in this feedback loop, the neural network continually learns to
improve its policy and value estimates. We train on games in various
environments and play games on-line on maps that were not seen during the
training phase.
\subsection{Monte Carlo tree search}
In this section, we review the Monte Carlo tree search algorithm,
which allows the agent to plan ahead and refine policies. For clarity of notation,
we describe the single pursuer, single evader scenario, but the method applies to an arbitrary
number of players.
Define the set of game states $\mathcal{S}:=\{({x_P},{x_E})\in\free\times\free\}$ so that
each state $s\in\mathcal{S}$ characterizes the position of the players.
Let $\mathcal{A}\subseteq \free$ be the set of actions.
Let $T(s,a) : \mathcal{S}\times \mathcal{A} \to \mathcal{S}$ be the transition function
which outputs the state resulting from taking action $a$ at state $s$.
Let ${f}:\mathcal{S} \to \mathbb{R}^{m^d}\times[-1,1]$ be an evaluator function
which takes the current state as input and provides a policy and value estimate: ${f}(s)=(\vec{p},v)$.
Formally, Monte Carlo tree search is a mapping that takes as input the current state $s_0$, the evaluator function ${f}$, and a parameter $M$
indicating the number of search iterations: $\mathcal{M}(s_0,{f};M)$.
It outputs a refined policy $\vec{\pi}^\ast$.
Algorithm \ref{alg:mcts} summarizes the MCTS algorithm.
At a high level, MCTS simulates game play starting from the current state,
keeping track of nodes it has visited during the search.
Each action is chosen according to a formula $U(s,a)$ which balances exploration and exploitation.
Simulation continues until the algorithm reaches a \emph{leaf node} $s_n$, a state which has not previously
been visited. At this point, we use the evaluator function ${f}(s_n)=(\vec{p},v)$ to estimate a policy
and value for that leaf node. The value $v$ is propagated to all parent nodes.
One iteration of MCTS ends when it reaches a leaf node.
MCTS keeps track of statistics that help guide the search. In particular
\begin{itemize}
\item $N(s,a)$: the number of times the action $a$ has been selected from state $s$
\item $W(s,a)$: the cumulative value estimate for each state-action pair
\item $Q(s,a)$: the mean value estimate for each state-action pair
\item $P(s,a)=(1-\varepsilon)p(a|s) + \varepsilon\eta$: the prior policy, computed by evaluating ${f}$.
Dirichlet noise $\eta$ is added to allow a chance for each move to be chosen.
\item $U(s,a)=Q(s,a) + P(s,a) \frac{\sqrt{\sum_b N(s,b) }} {1+N(s,a)}$ is the \emph{upper confidence bound} \cite{rosin2011multi}.
The first term exploits moves with high value, while the second term encourages moves that have not been selected.
\end{itemize}
When all $M$ iterations are completed, the desired refined policy is proportional to $N(s_0,a)^{1/\tau}$,
where $\tau$ is a smoothing term.
\begin{algorithm}
\caption{Monte Carlo tree search: $\mathcal{M}(s_0,{f},M)$} \label{alg:mcts}
\begin{algorithmic}
\State $N(s,a) \gets 0$
\State $Q(s,a) \gets 0$
\State $W(s,a) \gets 0$
\State visited = $\{ \emptyset \}$
\For{$i = 1,\dots,M $}
\State $n\gets0$
\While{$s_n \notin$ visited}
\If{ $\sum_b N(s_n,b) > 0$}
\State $a_n^\ast = \arg\max_a Q(s_n,a) + P(s_n,a) \frac{\sqrt{\sum_b N(s_n,b) }} {1+N(s_n,a)}$
\Else
\State $a_n^\ast = \arg\max_a P(s_n,a)$
\EndIf
\State $s_{n+1} = T(s_n,a_n^\ast)$
\State $n \gets n+1$
\EndWhile
\State $(p,v) = {f}(s_n)$
\State $P(s_n,a) = (1-\varepsilon)p(a|s_n) + \varepsilon\eta$
\State visited.append($s_n$)
\For{$j = 0, \dots, n-1$}
\State $N(s_j,a_j^\ast) \gets N(s_j,a_j^\ast) + 1$
\State $W(s_j,a_j^\ast) \gets W(s_j,a_j^\ast) + v$
\State $Q(s_j,a_j^\ast) \gets W(s_j,a_j^\ast) / N(s_j,a_j^\ast)$
\EndFor
\EndFor
\State $\pi^\ast(a|s_0) = N(s_0,a)^{1/\tau} / \sum_b N(s_0,b)^{1/\tau}$
\State \Return $\pi^\ast$
\end{algorithmic}
\end{algorithm}
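For concreteness, the following Python sketch mirrors Algorithm~\ref{alg:mcts}. The state encoding, the transition function \texttt{T}, the action enumerator \texttt{actions}, and the evaluator \texttt{f} are placeholders for the actual game implementation; terminal-state handling and tie-breaking are omitted, so this is an illustration rather than the exact code used in our experiments.
\begin{verbatim}
import math
import numpy as np

def mcts(s0, f, T, actions, M=1000, eps=0.25, tau=1.0):
    """Monte Carlo tree search sketch following Algorithm 1.
    s0: hashable root state; f: state -> (policy dict, value);
    T: (state, action) -> next state; actions: state -> list of actions."""
    N, W, Q, P = {}, {}, {}, {}     # per-(state, action) statistics
    visited = set()

    def expand(s):
        p, v = f(s)
        noise = np.random.dirichlet([0.3] * len(actions(s)))
        for a, eta in zip(actions(s), noise):
            P[(s, a)] = (1 - eps) * p[a] + eps * eta
            N[(s, a)] = W[(s, a)] = Q[(s, a)] = 0.0
        visited.add(s)
        return v

    for _ in range(M):
        s, path = s0, []
        while s in visited:          # descend until a leaf node is reached
            total = sum(N[(s, b)] for b in actions(s))
            if total > 0:            # upper confidence bound selection
                a = max(actions(s), key=lambda a: Q[(s, a)]
                        + P[(s, a)] * math.sqrt(total) / (1 + N[(s, a)]))
            else:
                a = max(actions(s), key=lambda a: P[(s, a)])
            path.append((s, a))
            s = T(s, a)
        v = expand(s)                # evaluate the leaf with f
        for (sj, aj) in path:        # propagate the value to parent nodes
            N[(sj, aj)] += 1
            W[(sj, aj)] += v
            Q[(sj, aj)] = W[(sj, aj)] / N[(sj, aj)]

    counts = np.array([N[(s0, a)] ** (1.0 / tau) for a in actions(s0)])
    return counts / counts.sum()     # refined policy at the root
\end{verbatim}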
\subsection{Policy and value network}
We use a convolutional neural network which takes in the game state
and produces a policy and value estimate.
Although the state can be completely characterized by the positions
of the players and the obstacles, the neural network requires more context in order to be
able to generalize to new environments.
We provide the following features as input to the neural network, each of which
is an $m\times m$ image:
\begin{itemize}
\item Obstacles as binary image
\item Player positions, a separate binary image for each player
\item Joint visibility of all pursuers, as a binary image
\item Joint shadow boundaries of all pursuers
\item Visibility from each pursuer's perspective, as a binary image
\item Shadow boundary from each pursuer's perspective
\item Valid actions for each player, as a binary image
\item Each evader's policy according to (\ref{eq:evader-action})
\end{itemize}
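The following NumPy sketch indicates one way these planes can be assembled into a single input tensor; the helper callables (\texttt{visibility\_mask}, \texttt{shadow\_boundary}, \texttt{valid\_action\_mask}, \texttt{evader\_policy\_plane}) are hypothetical stand-ins for the game-geometry routines and are not part of our code.
\begin{verbatim}
import numpy as np

def build_input(obstacles, pursuers, evaders,
                visibility_mask, shadow_boundary,
                valid_action_mask, evader_policy_plane):
    """Stack the feature planes listed above into an (m, m, C) array.
    obstacles: (m, m) binary array; pursuers/evaders: lists of (row, col)."""
    m = obstacles.shape[0]

    def position_plane(x):
        plane = np.zeros((m, m), dtype=np.float32)
        plane[x] = 1.0
        return plane

    pursuer_vis = [visibility_mask(x, obstacles) for x in pursuers]
    pursuer_shadow = [shadow_boundary(x, obstacles) for x in pursuers]
    planes = (
        [obstacles.astype(np.float32)]                       # obstacle map
        + [position_plane(x) for x in pursuers + evaders]    # player positions
        + [np.maximum.reduce(pursuer_vis)]                   # joint visibility
        + [np.maximum.reduce(pursuer_shadow)]                # joint shadow boundary (illustrative placeholder)
        + pursuer_vis                                        # per-pursuer visibility
        + pursuer_shadow                                     # per-pursuer shadow boundary
        + [valid_action_mask(x, obstacles) for x in pursuers + evaders]
        + [evader_policy_plane(x, pursuers, obstacles) for x in evaders]
    )
    return np.stack(planes, axis=-1)
\end{verbatim}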
AlphaZero \cite{silver2017mastering2} suggests that training the policy and
value networks jointly improves performance. We use a single network based on U-Net
\cite{ronneberger2015u}, which splits off to give output policy and value.
The input is $m\times m \times C_{\text{in}}$, where $C_{\text{in}}=2+4k_P+2k_E$ and $k_P,k_E$ are the number of
pursuers and evaders, respectively.
The U-Net consists of $\log_2(m)+1$ \emph{down-blocks}, followed by the same
number of \emph{up-blocks}.
All convolution layers in the down-blocks and up-blocks use size 3 kernels.
Each down-block consists of input, conv, batch
norm, relu, conv, batch norm, residual connection from input, relu, followed by
downsampling with stride 2 conv, batch norm, and relu. A residual connection links
the beginning and end of each block, before downsampling. The width
of each conv layer in the $l^{th}$ down-block is $l\cdot C_\text{in}$. Each
up-block is the same as the down-block, except instead of downsampling, we use
bilinear interpolation to upsample the image by a factor of 2. The upsampled
result is concatenated with the predownsampled output from the corresponding (same size)
down-block, followed by conv, batch norm, relu. The width of each conv layer in
the up-block is same as those in the down-block of corresponding size.
Then, the network splits into a policy and value head.
The policy head consists of $1\times 1$ conv with width 8, batch norm, relu, and
$1\times 1$ conv with width $k_P$. The final activation layer is a softmax to output $p\in \mathbb{R}^{m\times m \times k_P}$, a policy for each pursuer.
The value head is similar, with $1\times 1$ conv with width 8, batch norm, relu, and
$1\times 1$ conv with width 1. The result passes through a tanh activation and
average pooling to output a scalar $v\in[-1,1]$.
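A condensed PyTorch sketch of one down-block and the two output heads is shown below. The $1\times 1$ convolution on the residual branch (to match channel widths) and the exact layer ordering are our own assumptions where the description above leaves them implicit.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class DownBlock(nn.Module):
    """conv-bn-relu-conv-bn, residual from the input, relu,
    then downsampling by a stride-2 conv-bn-relu."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv1 = nn.Conv2d(c_in, c_out, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(c_out)
        self.conv2 = nn.Conv2d(c_out, c_out, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(c_out)
        self.skip = nn.Conv2d(c_in, c_out, 1)   # width-matching residual (assumption)
        self.down = nn.Conv2d(c_out, c_out, 3, stride=2, padding=1)
        self.bn3 = nn.BatchNorm2d(c_out)

    def forward(self, x):
        h = F.relu(self.bn1(self.conv1(x)))
        h = self.bn2(self.conv2(h))
        h = F.relu(h + self.skip(x))            # pre-downsampling output, reused by the up-block
        return h, F.relu(self.bn3(self.down(h)))

class PolicyValueHeads(nn.Module):
    def __init__(self, c_in, k_p):
        super().__init__()
        self.p1 = nn.Conv2d(c_in, 8, 1); self.pbn = nn.BatchNorm2d(8)
        self.p2 = nn.Conv2d(8, k_p, 1)
        self.v1 = nn.Conv2d(c_in, 8, 1); self.vbn = nn.BatchNorm2d(8)
        self.v2 = nn.Conv2d(8, 1, 1)

    def forward(self, x):
        # policy: per-pursuer softmax over the m x m positions
        logits = self.p2(F.relu(self.pbn(self.p1(x))))
        b, k, m, _ = logits.shape
        policy = F.softmax(logits.view(b, k, -1), dim=-1).view(b, k, m, m)
        # value: tanh followed by global average pooling to a scalar in [-1, 1]
        value = torch.tanh(self.v2(F.relu(self.vbn(self.v1(x))))).mean(dim=(2, 3))
        return policy, value
\end{verbatim}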
\subsection{Training procedure}
Since we do not have the true value and policy, we cannot train the networks
in the usual supervised fashion. Instead, we use MCTS to generate refined policies,
which serve as the training label for the policy network.
Multiple games are played with actions selected according to MCTS refined policies.
The game outcomes act as the label for the value for each state in the game.
We train over various maps consisting of 2-7 obstacles, including circles, ellipses, squares,
and tetrahedrons.
More specifically,
let ${f}_\theta(s) $ be the neural network parameterized by $\theta$, which takes a state $s$ as input, and outputs a policy $\vec{\pi}_\theta(s)$ and value $v_\theta(s)$.
Let $s_j(0)$ be the initial positions.
For $j=1,\dots,J$, play the game using MCTS:
\begin{align}
\vec{\pi}_j(a|s_j(k)) &= \text{MCTS}(s_j(k), f_\theta; M) \\
s_j(k+1) &= T\big(s_j(k),\, \arg\max_a \vec{\pi}_j(a|s_j(k))\big)
\end{align}
for $k=0,\dots,K_j$. The game ends at
\begin{equation} \begin{aligned}
K_j = \inf\{k | s_j(k) \in \mathcal{T}_\text{end}\}
\end{aligned} \end{equation}
Then the ``true'' policy and value are
\begin{align}
\vec{\pi}_j^\ast(k) &= \vec{\pi}_j(\cdot|s_j(k))\\
v^\ast_j(k)&=
\begin{cases}
1 & K_j > K_\text{max} \\
-1 & \text{otherwise}
\end{cases}
\end{align}
The parameters $\theta$ of the neural network
are updated by stochastic gradient descent (SGD) on the loss function:
\begin{equation} \begin{aligned}
\min_\theta \sum_{j=1}^J \sum_{k=0}^{K_j} L_{\text{policy}} & \Big(\vec{\pi}_\theta(s_j(k)) , \vec{\pi}_j^\ast(k) \Big) + L_\text{value} \Big(v_\theta(s_j(k)),v_j^\ast(k)\Big)\\
L_\text{policy}(\vec{p},\vec{q}) &= -\vec{q} \cdot \log \vec{p} \\
L_\text{value}(p,q) &= (p-q)^2 \\
\end{aligned} \end{equation}
We use a learning rate of $0.001$ and the Adam optimizer \cite{kingma2014adam}.
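A minimal sketch of one network update, assuming PyTorch and head outputs of the form described above, is as follows; the batching of self-play data and the optimizer schedule are simplified.
\begin{verbatim}
import torch
import torch.nn.functional as F

def train_step(net, optimizer, batch):
    """One optimizer step on the joint policy + value loss.
    batch: (states, target_policies, target_values) tensors from self-play."""
    states, pi_star, v_star = batch
    policy, value = net(states)            # policy: (B, k_P, m, m), value: (B, 1)
    # cross-entropy between the MCTS-refined policy and the network policy
    policy_loss = -(pi_star * torch.log(policy.clamp_min(1e-8))).sum(dim=(1, 2, 3)).mean()
    value_loss = F.mse_loss(value.squeeze(1), v_star)   # (v_theta - v*)^2
    loss = policy_loss + value_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)   # learning rate 0.001
\end{verbatim}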
\subsection{Numerical results}
A key difficulty in learning a good policy for the pursuer
is that it requires a good evader. If the evader is static,
then the pursuer can win with any random policy.
During training and evaluation, the game is played with the evader moving
according to (\ref{eq:evader-action}). Although all players move
simultaneously, our MCTS models each team's actions sequentially, with the
pursuers moving first. This is conservative towards the pursuers, since the
evaders can counter.
We train using a single workstation with 2 Intel Xeon CPU E5-2620 v4 2.10GHz
processors and a single NVidia 1080-TI GPU. For simplicity, ${f_P}$ and ${f_E}$ are
constant, though it is straightforward to have spatially varying
velocity fields. We use a grid size of $m=16$. We set $K_{\text{max}}=100$ and
$M=1000$ MCTS iterations per move. One step of training consists of playing
$J=64$ games and then training the neural network for 1 epoch based on training
data for the last 10 steps of training. Self-play game data is generated in
parallel, while network training is done using the GPU with batch size 128. The total
training time is 1 day.
The training environments consist of between 2 to 6 randomly oriented obstacles,
each uniformly chosen from the set of ellipses, diamonds, and rectangles. We emphasize
that the environments shown in the experiments are not in the training set.
We compare our trained neural network against uniform random and Dirichlet
noise-based policies, as well as the local policies from
section~\ref{sec:locally}. In order to draw a fair comparison, we make sure
each action requires the same amount of compute time. Each MCTS-based move in
the 2 player game takes 4 secs while the multiplayer game takes about 10 secs
per move, on average. Since the noise-based policies require less overhead,
they are able to use more MCTS iterations. The shadow strategies become very
expensive as more players are added. For the 1v1 game, we use $\hat{M}=1000$,
while the 2v2 game can only run for $\hat{M}=250$ in the same amount of time as
the Neural Net.
Specifically,
\begin{itemize}
\item Distance strategy
\item Shadow strategy
\item Blend strategy
\item $\mathcal{M}(\cdot,{f}_\text{distance},1000)$ where ${f}_\text{distance}(s) = (p_\text{distance},0)$.
\item $\mathcal{M}(\cdot,{f}_\text{shadow},\hat{M})$ where ${f}_\text{shadow}(s) = (p_\text{shadow},0)$.
\item $\mathcal{M}(\cdot,{f}_\text{blend},\hat{M})$ where ${f}_\text{blend}(s) = (p_\text{blend},0)$.
\item $\mathcal{M}(\cdot,{f}_\mu,2000)$ where ${f}_\mu(s) = (\text{Uniform},0)$.
\item $\mathcal{M}(\cdot,{f}_\eta,2000)$ where ${f}_\eta(s) = (\text{Dir}(0.3),0)$.
\item $\mathcal{M}(\cdot,{f}_\theta,1000)$ where ${f}_\theta$ is the trained Neural Network
\end{itemize}
\subsection*{Two players}
As a sanity check, we show an example on a single circular obstacle with a
single pursuer and single evader. As we saw from the previous section, the
pursuer needs to be faster in order to have a chance at winning. We let
${f_P}=2$ and ${f_E}=1$.
Figure~\ref{fig:nnet-traj-circle} shows an example trajectory using Neural Net.
The neural network model gives reasonable policies.
Figure~\ref{fig:nnet-traj-v} shows an adversarial human evader playing against the Neural Net pursuer,
on a map with two obstacles. The pursuer changes strategies depending on the shape of the obstacle.
In particular, near the corners of the "V" shape, it maintains a safe distance rather than blindly following
the evader.
\begin{figure}[hptb]
\centering
\includegraphics[width=.45\textwidth]{figures/traj/traj_seed_01_m_16_vP_2_vE_1_mcts_1000_nnet_vs_shadow_000.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_seed_01_m_16_vP_2_vE_1_mcts_1000_nnet_vs_shadow_001.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_seed_01_m_16_vP_2_vE_1_mcts_1000_nnet_vs_shadow_002.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_seed_01_m_16_vP_2_vE_1_mcts_1000_nnet_vs_shadow_003.pdf}
\caption{Snapshots of the trajectory for the Neural Net pursuer around a circular obstacle.
The pursuer (blue) tracks the evader (red) while maintaining a safe distance.
View from left to right, top to bottom.
Stars indicate the initial positions, and the black line (of sight) connects the players at the end of each time interval.}
\label{fig:nnet-traj-circle}
\end{figure}
\begin{figure}[hptb]
\centering
\includegraphics[width=.45\textwidth]{figures/traj/traj_vshape_seed_82_m_16_vP_2_vE_1_mcts_0_nnet_vs_shadow_000.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_vshape_seed_82_m_16_vP_2_vE_1_mcts_0_nnet_vs_shadow_001.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_vshape_seed_82_m_16_vP_2_vE_1_mcts_0_nnet_vs_shadow_002.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_vshape_seed_82_m_16_vP_2_vE_1_mcts_0_nnet_vs_shadow_003.pdf}
\caption{Trajectory for the Neural Net pursuer against an adversarial human evader on a map with two obstacles. The pursuer
transitions between following closely, and leaving some space, depending on the shape of the obstacle.}
\label{fig:nnet-traj-v}
\end{figure}
In order to do a more systematic comparison, we run multiple games
over the same map and report the game time statistics for each method.
We fix the pursuer's position at $(1/2,1/4)$ and vary the evader's initial location within the free space.
Figure~\ref{fig:value-slice-setup} shows the setup for the two maps considered for the statistical studies in this section.
One contains a single circle in the center, as we have seen previously. The other one contains 5 circular obstacles,
though the one in the center has some extra protrusions.
Figure~\ref{fig:value-slice-1v1dice} shows an image corresponding to the length of the game
for each evader position; essentially, it is a single slice
of the value function for each method.
Table~\ref{tab:1v1dice} shows the number of games won. Shadow strategy particularly benefits
from using MCTS for policy improvements, going from 16\% to 67.15\% win rate. Our neural network
model outperforms the rest with a 70.8\% win rate.
\begin{figure}[hptb]
\centering
\includegraphics[width=.45\textwidth]{figures/value_slices/value_2v2circle_setup.png}
\includegraphics[width=.45\textwidth]{figures/value_slices/value_2v2dice_setup.png}
\caption{Setup for computing a slice of the value function for the circular obstacle (left) and 5 obstacle map (right). The pursuer's initial position is fixed (blue) while the
evader's changes within the free space.}
\label{fig:value-slice-setup}
\end{figure}
\begin{figure}[hptb]
\centering
\includegraphics[width=.3\textwidth]{figures/value_slices/value_1v1dice_dist.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_1v1dice_tstar.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_1v1dice_tstardist.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_1v1dice_dist1000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_1v1dice_tstar1000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_1v1dice_tstardist1000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_1v1dice_uniform2000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_1v1dice_dir2000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_1v1dice_nnet1000.png}
\caption{One slice of the ``value'' function for the single pursuer, single evader game with 5 obstacles. Bright spots
indicate that the pursuer won the game if that pixel was the evader's initial position.}
\label{fig:value-slice-1v1dice}
\end{figure}
\begin{table}[]
\caption{Game statistics for the 1 pursuer vs 1 evader game with 5 circular obstacles, where ${f_P}=2$ and ${f_E}=1$.\vspace{1em}}
\label{tab:1v1dice}
\begin{tabular}{l|cc}
Method & Win \% (137 games) & Average game time \\
\hline
Distance &52.55 & 53.56 \\
Shadow &16.06 & 22.36 \\
Blend &63.50 & 64.45 \\
MCTS($\cdot$, Distance; 1000) &55.47 & 62.40 \\
MCTS($\cdot$, Shadow; 1000) &67.15 & 69.45 \\
MCTS($\cdot$, Blend; 1000) &58.39 & 63.27 \\
MCTS($\cdot$, Uniform; 2000) &60.58 & 65.84 \\
MCTS($\cdot$, Dirichlet; 2000) &65.69 & 69.02 \\
MCTS($\cdot$, Neural Net; 1000) &70.80 & 71.61 \\
\end{tabular}
\end{table}
\subsection*{Multiple players}
Next, we consider the multiplayer case with 2 pursuers and 2 evaders on a circular obstacle map
where ${f_P}=2$ and ${f_E}=2$.
Even on a $16\times16$ grid, the computation of the corresponding feedback value function
would take several days.
Figure~\ref{fig:nnet-traj-circle-multi} shows a sample trajectory. Surprisingly, the neural network has learned
a smart strategy. Since there is only a single obstacle, it is sufficient for each
pursuer to guard one opposing corner of the map. Although all players have the same speed, it is possible
to win.
\begin{figure}[phtb]
\centering
\includegraphics[width=.45\textwidth]{figures/traj/traj_seed_06_m_16_vP_2_vE_2_mcts_1000_nnet_vs_shadow_000.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_seed_06_m_16_vP_2_vE_2_mcts_1000_nnet_vs_shadow_001.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_seed_06_m_16_vP_2_vE_2_mcts_1000_nnet_vs_shadow_002.pdf}
\includegraphics[width=.45\textwidth]{figures/traj/traj_seed_06_m_16_vP_2_vE_2_mcts_1000_nnet_vs_shadow_003.pdf}
\caption{Trajectories for the multiplayer game played using NNet around a circle. Pursuers are blue and green, while evaders are red and orange.
Blue has learned the tactic of remaining stationary in the corner, while green manages the opposite side.
The evaders' movements are sporadic because there is no chance of winning; there are no shadows in which to hide.}
\label{fig:nnet-traj-circle-multi}
\end{figure}
Figure~\ref{fig:value-slice-2v2circle} shows a slice of the value function, where 3 players' positions are fixed,
and one evader's position varies.
Table~\ref{tab:2v2circle} shows the game statistics.
Here, we see some deviation from the baseline. As the number of players increases,
the number of actions increases, and it is no longer sufficient to use random sampling.
The neural network is learning useful strategies to help guide the
Monte Carlo tree search to more significant paths. The distance and blend strategies
are effective by themselves. MCTS helps improve performance for Distance. However, 250 iterations is
not enough to search the action space, and actually leads to poor performance for Blend and Shadow.
For this game setup, MCTS(Distance,1000) performs the best with a 73.5\% win rate, followed by
Blend with 65.4\% and Neural Net with 59.9\%. Although the trained network is not the best
in this case, the results are very promising. We want to emphasize that the model was trained with no prior
knowledge. Given enough offline time and resources, we believe the proposed approach can scale to larger grids and
learn more optimal policies than the local heuristics.
\begin{figure}[phtb]
\centering
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2circle_dist.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2circle_tstar.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2circle_tstardist.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2circle_dist1000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2circle_tstar250.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2circle_tstardist250.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2circle_uniform2000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2circle_dir2000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2circle_nnet1000.png}
\caption{One slice of the value function for 2 pursuer, 2 evader game on the circular obstacle.}
\label{fig:value-slice-2v2circle}
\end{figure}
\begin{table}[]
\caption{Game statistics for the 2 pursuer vs 2 evader game with a circular obstacle.\vspace{1em}}
\label{tab:2v2circle}
\begin{tabular}{l|cc}
Method & Win \% (162 games) & Average game time \\
\hline
Distance & 56.8 & 58.8 \\
Shadow & 46.3 & 50.1 \\
Blend & 65.4 & 67.9 \\
MCTS($\cdot$, Distance; 1000) & 73.5 & 76.6 \\
MCTS($\cdot$, Shadow; 250) & 40.7 & 44.5 \\
MCTS($\cdot$, Blend; 250) & 00.0 & 4.4 \\
MCTS($\cdot$, Uniform; 2000) & 00.0 & 5.3 \\
MCTS($\cdot$, Dirichlet; 2000) & 27.8 & 32.8 \\
MCTS($\cdot$, Neural Net; 1000) & 59.9 & 61.7 \\
\end{tabular}
\end{table}
\iffalse
We repeat the experiment for the more complicated map with 5 obstacles,
with ${f_P}={f_E}=2$, and show the results in
Figure~\ref{fig:value-slice-2v2dice} and Table~\ref{tab:2v2dice}. The map
is much more difficult and the evader generally wins. For this particular map and game setting,
where all players
Distance has a slight edge over the other strategies, though
\begin{table}[]
\caption{Game statistics for the 2 pursuer vs 2 evader game with 5 obstacles.\vspace{1em}}
\label{tab:2v2dice}
\begin{tabular}{l|ccc}
Method & Win \% (137 games) & Average game time \\
\hline
Distance & 6.6 & 8.89 \\
Shadow & 0 & 2.68 \\
Blend & 0.7 & 3.95 \\
MCTS($\cdot$,Distance; 1000) & 0.7 & 3.77 \\
MCTS($\cdot$,Shadow; 250) & 0 & 2.24 \\
MCTS($\cdot$,Blend; 250) & 0 & 1.77 \\
MCTS($\cdot$,Uniform; 2000) & 0 & 2.10 \\
MCTS($\cdot$,Dirichlet; 2000) & 0 & 2.26 \\
MCTS($\cdot$,Neural Net; 1000) & 0 & 3.0
\end{tabular}
\end{table}
\begin{figure}[phtb]
\centering
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2dice_dist.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2dice_tstar.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2dice_tstardist.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2dice_dist1000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2dice_tstar250.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2dice_tstardist250.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2dice_uniform2000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2dice_dir2000.png}
\includegraphics[width=.3\textwidth]{figures/value_slices/value_2v2dice_nnet1000.png}
\caption{One slice of the value function for the dice}
\label{fig:value-slice-2v2dice}
\end{figure}
\fi
Figure~\ref{fig:mcts-stats-multi} shows a comparison of the depth of search for
$M=1000$ MCTS iterations. Specifically, we report depth of each leaf node, as
measured by game time. To be fair, we allow the uniform and Dirichlet baselines
to run for 2000 MCTS iterations to match the runtime needed for 1 move. Also,
the shadow strategies are extremely costly, and can only run 250 MCTS
iterations in the same amount of time. However, we show the statistics for
$M=1000$ to gain better insights. Ideally, a good search would balance breadth
and depth. The neural network appears to search further than the baselines. Of
course, this alone is not sufficient to draw any conclusions. For example, a
naive approach could be a depth-first search.
In Figure~\ref{fig:mcts-stats}, we show a similar chart for the single pursuer,
single evader game with a circular obstacle. In this case, the game is
relatively easy, and all evaluator functions are comparable.
\begin{figure}[hptb]
\centering
\includegraphics[width=.45\textwidth,trim={0 0.15in 0 0},clip]{figures/mcts_stats/seed_02_m_16_vP_2_vE_2_mcts_2000_uniform_vs_shadow.pdf} \quad
\includegraphics[width=.45\textwidth,trim={0 0.15in 0 0},clip]{figures/mcts_stats/seed_02_m_16_vP_2_vE_2_mcts_2000_dirichlet_vs_shadow.pdf}\\[.5em]
\includegraphics[width=.45\textwidth,trim={0 0.15in 0 0},clip]{figures/mcts_stats/seed_02_m_16_vP_2_vE_2_mcts_1000_distance_vs_shadow.pdf} \quad
\includegraphics[width=.45\textwidth,trim={0 0.15in 0 0},clip]{figures/mcts_stats/seed_02_m_16_vP_2_vE_2_mcts_1000_tstar_vs_shadow.pdf} \\[.5em]
\includegraphics[width=.45\textwidth,trim={0 0in 0 0},clip]{figures/mcts_stats/seed_02_m_16_vP_2_vE_2_mcts_1000_tstardist_vs_shadow.pdf} \quad
\includegraphics[width=.45\textwidth,trim={0 0in 0 0},clip]{figures/mcts_stats/seed_02_m_16_vP_2_vE_2_mcts_1000_nnet_vs_shadow.pdf} \quad
\caption{Histogram of leaf node depth for MCTS using various evaluator
functions for the multiplayer game around a circular obstacle. The colors show
increments of 100 iterations. The multiplayer game has a much larger action
space, making tree search difficult. The neural network appears to search deeper
into the tree.}
\label{fig:mcts-stats-multi}
\end{figure}
\begin{figure}[hptb]
\centering
\includegraphics[width=.45\textwidth,trim={0 0.15in 0 0},clip]{figures/mcts_stats/seed_00_m_16_vP_2_vE_1_mcts_2000_uniform_vs_shadow.pdf} \quad
\includegraphics[width=.45\textwidth,trim={0 0.15in 0 0},clip]{figures/mcts_stats/seed_00_m_16_vP_2_vE_1_mcts_2000_dirichlet_vs_shadow.pdf}\\[.5em]
\includegraphics[width=.45\textwidth,trim={0 0.15in 0 0},clip]{figures/mcts_stats/seed_00_m_16_vP_2_vE_1_mcts_1000_distance_vs_shadow.pdf} \quad
\includegraphics[width=.45\textwidth,trim={0 0.15in 0 0},clip]{figures/mcts_stats/seed_00_m_16_vP_2_vE_1_mcts_1000_tstar_vs_shadow.pdf} \\[.5em]
\includegraphics[width=.45\textwidth,trim={0 0.00in 0 0},clip]{figures/mcts_stats/seed_00_m_16_vP_2_vE_1_mcts_1000_tstardist_vs_shadow.pdf} \quad
\includegraphics[width=.45\textwidth,trim={0 0.00in 0 0},clip]{figures/mcts_stats/seed_00_m_16_vP_2_vE_1_mcts_1000_nnet_vs_shadow.pdf} \quad
\caption{Histogram of leaf node depth for MCTS using various evaluator
functions for the single pursuer vs single evader game around a circular
obstacle. The colors show increments of 100 iterations. The game is relatively easy
and thus all algorithms appear comparable. Note that Uniform and Dirichlet are allowed 2000 MCTS iterations,
since they require less overhead to run.}
\label{fig:mcts-stats}
\end{figure}
\section{Conclusion and future work}
We proposed three approaches for approximating optimal controls for the
surveillance-evasion game. When there are few players and the grid size is
small, one may compute the value function via the Hamilton-Jacobi-Isaacs
equations. The offline cost is immense, but on-line game play is very
efficient. The game can be played continuously in time and space, since
the controls can be interpolated from the value function. However, the value
function must be recomputed if the game settings, such as the obstacles or player
velocities, change.
When there are many players, we proposed locally optimal strategies for the
pursuer and evader. There is no offline preprocessing. All computation is done
on-line, though the computation does not scale well as the velocities or number
of pursuers increases. The game is discrete in time and space.
Lastly, we proposed a reinforcement learning approach for the
multiplayer game. The offline training time can be enormous, but on-line game play
is very efficient and scales linearly with the number of players. The game is played
in discrete time and space, but the neural network model generalizes to
maps not seen during training. Given enough computational resources,
the neural network has the potential to approach the optimal controls
afforded by the HJI equations, while being more efficient than the local strategies.
There are many avenues to explore for future research.
We are working on the extension of our reinforcement learning approach to 3D,
which is straightforward, but requires more computational resources.
Figure~\ref{fig:3dseg} shows an example surveillance-evasion game in 3D. Along
those lines, a multi-resolution scheme is imperative for scaling to higher
dimensions and resolutions. One may also consider different game objectives,
such as seeking out an initially hidden evader, or allowing brief moments of
occlusion.
\begin{figure}[hptb]
\vspace{1em}
\centering
\includegraphics[width=.65\textwidth]{figures/3d_seg_crop.png}
\caption{A snapshot of a 3D surveillance-evasion game around a sphere.}
\label{fig:3dseg}
\end{figure}
\iffalse
\subsection{The surveillance-constrained patrol problem}
In \cite{bharadwaj2019strategy}, we considered a surveillance-constrained
patrol problem where a pursuer must optimize short-term visibility of the
environment, with the constraint that
it must always keep the evader within its line-of-sight. In this section, we briefly
review the ideas in that paper, and mention a direction for future work which combines
the ideas presented in Chapters~\ref{chap:exploration} and \ref{chap:surveillance}.
The game is played in discrete space and time. We assume that
both players have a map of the environment; the pursuer must be faster than the
evader, otherwise it may not have any flexibility to do anything other than the
surveillance constraint.
Formally, let $K\in \mathbb{N}$. Short-term visibility means that the pursuer's
visibility of the environment at time $t$ is only valid for $K$ time steps, after which those
portions are assumed to be occluded again. Define the short-term visibility set
\begin{equation} \begin{aligned}
\Omega_{i-K}^i := \bigcup_{j=i-K}^i \visset({x_P}_j)
\end{aligned} \end{equation}
As in Chapter~\ref{chap:exploration}, one may define the gain function
\begin{equation}
g^K(x;\Omega_{i-K}^i) := | \visset_x \cup \Omega_{i-K}^i| - |\Omega_{i-K}^i|, \label{eq:short-gain-func}
\end{equation}
Then the problem can be stated as a constrained optimization problem:
\begin{equation} \begin{aligned}
&\max_{{x_P}(i)} g^K({{x_P}(i)};\Omega_{i-K}^i) \\ &\text{ subj. to } {x_E}(t) \in \visset({x_P}(t)) \text{ for } t\ge i
\end{aligned} \end{equation}
The constraint is challenging since it needs to hold for all future time.
In \cite{bharadwaj2019strategy}, we satisfy the constraint by using reactive synthesis tools
to precompute the \emph{feasible set}
$$\mathcal{F} :=\{ ({x_P}(0),{x_E}(0)) |{x_E}(t) \in \visset({x_P}(t)) \text{ for } t\ge0\} ,$$
which is the set of positions from which it is possible to maintain the surveillance requirement for all time.
The problem can now be reformulated as
\begin{equation} \begin{aligned}
&\max_{{x_P}(i)} g^K({{x_P}(i)};\Omega_{i-K}^i) \\&\text{ subj. to } ({x_P}(i),{x_E}(i)) \in \mathcal{F},
\end{aligned} \end{equation}
where the optimization of the objective function can be done greedily during game-play.
Figure~\ref{fig:patrol} shows snapshots from an example surveillance-constrained patrol game.
For future work, we envision the use of the value function from section~\ref{sec:hji} to compute
the feasible set for the \emph{continuous} version of the game.
Optimization of $g^K$ can be done using a greedy approach powered by a convolutional neural network, as in
Chapter~\ref{chap:exploration}. Monte Carlo tree search may also help refine strategies and generate
causally relevant training data that surpasses the one-step lookahead policy afforded by the greedy algorithm.
\begin{figure}[ht!]
\subfloat[$t=0$]{
\includegraphics[width=0.3\textwidth]{figures/patrol/casestudyfigs_high_res/frame_green_0000.png}
}
\subfloat[$t=10$ \label{fig:case1t10}]{
\includegraphics[width=0.3\textwidth]{figures/patrol/casestudyfigs_high_res/frame_green_0010.png}
}
\subfloat[$t=16$ \label{fig:case1t16}]{
\includegraphics[width=0.3\textwidth]{figures/patrol/casestudyfigs_high_res/frame_green_0016.png}
}\\
\subfloat[$t=20$ \label{fig:case1t20}]{
\includegraphics[width=0.3\textwidth]{figures/patrol/casestudyfigs_high_res/frame_green_0020.png}
}
\subfloat[$t=24$ \label{fig:case1t24}]{
\includegraphics[width=0.3\textwidth]{figures/patrol/casestudyfigs_high_res/frame_green_0024.png}
}
\subfloat[$t=33$ \label{fig:case1t33}]{
\includegraphics[width=0.3\textwidth]{figures/patrol/casestudyfigs_high_res/frame_green_0033.png}
}
\caption{Example of the surveillance-constrained patrol game with a single
pursuer (blue) and single evader (orange). The pursuer tries to optimize
short-term visibility of the environment when possible, but must always
maintain visibility of the evader. The green cells correspond to the pursuer's
planned path to a vantage point. Obstacles are shown in red, with occlusions
in black.}
\label{fig:patrol}
\end{figure}
\fi
\section*{Acknowledgment}
This work was partially supported by NSF grant DMS-1913209.
\bibliographystyle{plain}
\subsubsection*{Layers}
Given an input vector-valued image $I\in\mathbb{R}^{W_{in}\times H_{in}\times C_{in}}$,
a CNN defines a transformation $f:I\to P\in\mathbb{R}^{W_{out}\times H_{out}\times C_{out}}$.
Usually $P$ will be a probability map, i.e. each entry in $P$ contains
the probability that the corresponding pixel belongs to each of the
$C_{out}$ classes. This transformation is a composition of linear
and nonlinear functions. A CNN has multiple layers of each of these.
Some of the most common functions include:
\begin{enumerate}
\item \emph{Convolution}: Convolves the input with a filter, whose weights
are learned. The filter operates on all image channels, but only acts
on a small neighborhood of spatial pixels, determined by the kernel
size. Each filter takes as input a 3D image, and outputs a 2D image.
Each convolutional layer may have multiple filters, resulting in a
3D stack of filtered images as output. Assume the input is $X\in\mathbb{R}^{M_{in}\times N_{in}\times D_{in}}$.
Let $s_{k}$ be the kernel size (generally odd so that there is a
well-defined center) and $D_{out}$ be the number of filters. Then
the output of the convolutional layer is $Y\in\mathbb{R}^{M_{out}\times N_{out}\times D_{out}}$,
where $M_{out}=M_{in}-s_{k}+1$, $N_{out}=N_{in}-s_{k}+1$, and each
entry in $Y:=W\ast X$ is given by
\[
Y_{ijk}=W_{0}^{k}+\sum_{r=1}^{D_{in}}\sum_{q=0}^{s_{k}-1}\sum_{p=0}^{s_{k}-1}W_{p,q,r}^{k}X_{i+p,j+q,r}
\]
where $W\in\mathbb{R}^{s_{k}\times s_{k}\times D_{in}\times D_{out}}$
and $W_{0}\in\mathbb{R}^{D_{out}}$ is a set of biases for each of
the $D_{out}$ filters.\\
In general, a zero padding of $s_{p}$ pixels can be introduced to
the spatial dimensions of the input image to change the output size.
This is particulary useful to make the output image the same size
as the input. Also, a stride of $s_{s}$ pixels can be used if it
is not desirable to apply filter to consecutive pixels. In this case,
$M_{out}=\frac{1}{s_{s}}(M_{in}-s_{k}+2s_{p})+1$ and $N_{out}=\frac{1}{s_{s}}(N_{in}-s_{k}+2s_{p})+1$
and
\[
Y_{ijk}=W_{0}^{k}+\sum_{r=1}^{D_{in}}\sum_{q=0}^{s_{k}-1}\sum_{p=0}^{s_{k}-1}W_{p,q,r}^{k}\tilde{X}_{s_{s}\cdot i+p,s_{s}\cdot j+q,r}
\]
where $\tilde{X}$ is the zero-padded image.\\
A common choice for kernel size is $s_{k}=3$, with zero padding $s_{p}=1$
and stride $s_{s}=1$. (Although this operation is actually known
as correlation in signal processing, the name \emph{convolution} has
stuck in the deep learning community).
\item \emph{Nonlinearity}: an element-wise nonlinear transformation that
allows neural networks to learn arbitrary functions. The most commonly
used is the rectified linear unit, aka ReLU. It is popular due to
its simplicity and has been shown empirically to improve training over
other functions, such as the sigmoid or tanh. Define $Y:=ReLU(X)$
where each entry
\[
Y_{ijk}=\max(0,X_{ijk})
\]
\item \emph{Pooling}: Pooling is a (possibly) nonlinear downsampling of
the image, in order to reduce computational efforts and also allow
the model to be spatially invariant. The most common is max-pooling
with kernel size $2$ and stride $2$. This replaces each 2x2 patch
in the input with its max value:
\[
Y_{ijk}=\max_{p,q\in\{0,1\}^{2}}\left[X_{2\cdot i+p,2\cdot j+q,k}\right]
\]
For general kernel size $s_{k}$ and stride $s_{s}$, we have
\[
Y_{ijk}=\max_{p,q\in\{0,\dots,s_{k}-1\}^{2}}\left[X_{s_{s}\cdot i+p,s_{s}\cdot j+q,k}\right]
\]
\iffalse
\item \emph{Fully Connected}: Each output node of a fully connected layer
is simply an affine transformation of all nodes in the previous layer.
Let $D_{out}$ be the desired number of output nodes. Then $Y\in\mathbb{R}^{D_{out}}$
where each entry
\[
Y_{k}=W_{0}^{k}+\sum_{r=1}^{D_{in}}\sum_{q=1}^{N_{in}}\sum_{p=1}^{M_{in}}W_{pqr}^{k}X_{pqr}
\]
\item \emph{Softmax:} A transformation that normalizes the input across
the classes so that the output corresponds to a probability that each
pixel belongs to a certain class.
\[
P_{ijk}=\frac{\exp(X_{ijk})}{{\displaystyle \sum_{r=1}^{D_{in}}}\exp(X_{ijr})}
\]
Here $P_{ijk}$ is the probability that pixel $X_{ij}$ belongs to
class $k$.
\fi
\item \emph{Cross-Entropy Loss}: Let $L_{ij}\in\{0,\dots,C_{out}\}$ be
the true label of pixel $X_{ij}$. The cross-entropy loss is defined
as
\[
-\sum_{ijk}{\bf 1}_{L_{ij}=k}\log(P_{ijk})
\]
where $\mathbf{1}$ is the indicator function.
\end{enumerate}
The CNN is trained by optimizing the weights of the network
so that the loss between the predicted and true labels is minimized
over the entire training set.
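As an illustration of the convolution formula in item 1 above, the following NumPy sketch computes the output of a single convolutional layer with stride $1$ and no padding, followed by the ReLU of item 2; it is a naive reference implementation rather than an efficient one.
\begin{verbatim}
import numpy as np

def conv_layer(X, W, W0):
    """Naive convolution: X is (M_in, N_in, D_in), W is (s_k, s_k, D_in, D_out),
    W0 is (D_out,). Returns Y of shape (M_in - s_k + 1, N_in - s_k + 1, D_out)."""
    M_in, N_in, D_in = X.shape
    s_k, _, _, D_out = W.shape
    M_out, N_out = M_in - s_k + 1, N_in - s_k + 1
    Y = np.empty((M_out, N_out, D_out))
    for k in range(D_out):
        for i in range(M_out):
            for j in range(N_out):
                patch = X[i:i + s_k, j:j + s_k, :]        # s_k x s_k x D_in window
                Y[i, j, k] = W0[k] + np.sum(patch * W[:, :, :, k])
    return Y

# Example: random input and filters
X = np.random.randn(8, 8, 3)
W = np.random.randn(3, 3, 3, 4)
Y = conv_layer(X, W, np.zeros(4))
relu = np.maximum(Y, 0.0)       # element-wise ReLU
print(Y.shape)                  # (6, 6, 4)
\end{verbatim}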
\section{Introduction and main theorems}
Denote by $B^n (p; r) := \{z \in \mathbb{C}^n \colon \|z-p\| < r \}$ and by
$S^m := \{v \in \mathbb{R}^{m+1} \colon \|v\|=1\}$. With such notation,
the boundary of $B^n (0; 1)$ is $S^{2n-1}$. Then the well known
Forelli's theorem \cite{Forelli77} states
\begin{theorem}[Forelli] \label{Forelli-original}
If a function $f\colon B^n (0; 1) \to \mathbb{C}$ satisfies the following
two conditions
\begin{enumerate}
\item $f\in C^{\infty}(0)$, meaning that for any positive integer $k$ there
exists an open neighborhood $V_k$ of the origin $0$ such that $f \in C^k
(V_k)$, and
\item $f_v (\zeta) := f (\zeta v)$ is holomorphic on $B^1 (0;1)$ for every
$v \in \mathbb{C}^n$ with $\|v\|=1$,
\end{enumerate}
then $f$ is holomorphic on $B^n(0;1)$.
\end{theorem}
Deferring the historical account to a later section, we state the
main theorems of this article.
\begin{definition}[Pencil]
\textup{
For a nonempty open subset $U$ of $S^{2n-1}$, consider
$P_0(U) := \{ \lambda u \colon \lambda \in B^1 (0;1), u \in U \}$.
By a \textit{standard pencil} of holomorphic discs at $0\in \mathbb{C}^n$
we mean the pair $(P_0(U), \psi)$ where $\psi (u, \lambda) := \lambda u$
for each $u \in U$ defines a holomorphic map
$\psi(u, \cdot)\colon B^1(0;1) \to B^n (0;1)$.
}
\textup{
More generally, we say that a pair $(\mathcal{P} (p, U, \Omega), \varphi)$ is a
\textit{$C^k$ pencil of holomorphic discs at $p\in \Omega$}
($k >0$ integer) if $\mathcal{P} (p, U, \Omega) = \varphi (P_0 (U))$ and
$\varphi\colon P_0 (U) \to \Omega$ is a $C^k$ diffeomorphism onto
its image that satisfies:
\begin{enumerate}
\item $\varphi (0) = p$, and
\item $\lambda \to \varphi(\lambda u)\colon B^1 (0;1) \to \Omega$
defines a holomorphic map.
\end{enumerate}
}
\end{definition}
Note that for each $u\in U$, the map $\lambda\to \varphi(\lambda u)$ is a holomorphic embedding.
\begin{theorem} \label{CK20a}
If a complex valued function $f\colon B^n (0;1) \to \mathbb{C}$ satisfies
the following two properties
\begin{enumerate}
\item $f \in C^\infty (0)$, and
\item $\lambda \to f(\lambda u)$ defines a holomorphic function
in $\lambda\in B^1 (0;1)$ for every $u \in U$ for
some nonempty open subset $U$ of $S^{2n-1}$,
\end{enumerate}
then there exists $r>0$ such that $f$ is holomorphic on
$B^n (0; r) \cup P_0 (U)$.
\end{theorem}
Notice that this, together with Hartogs' lemma (Sect.\ 5 of \cite{Chirka06}),
implies Forelli's theorem above.
\begin{theorem} \label{CK20b}
If a domain $\Omega \subset \mathbb{C}^n$ admits a $C^1$ pencil, say
$(\mathcal{P}(p, U, \Omega), \varphi)$, of holomorphic discs at $p\in \Omega$, then
any complex valued function $f\colon \Omega \to \mathbb{C}$
satisfying the conditions
\begin{enumerate}
\item $f \in C^\infty (p)$, and
\item $f$ is holomorphic along the pencil $(\mathcal{P}(p, U, \Omega), \varphi)$, meaning
that $\lambda \to f(\varphi(\lambda u))$ is holomorphic
in $\lambda \in B^1 (0;1)$ for any $u \in U$,
\end{enumerate}
is holomorphic on $B^n (0; r) \cup \mathcal{P}(p, U, \Omega)$ for some
$r>0$.
\end{theorem}
Notice that this answers the second question in Section 6
of \cite{Chirka06} (p.\ 219). It also generalizes the following:
\begin{theorem}[Joo-Kim-Schmalz \cite{JKS13}]\label{JKS13}
If $U = S^{2n-1}$ and if $f\colon\Omega \to \mathbb{C}$ satisfies the conditions
(1) and (2) of the hypotheses of the preceding theorem, then $f$ is holomorphic on $\Omega$.
\end{theorem}
\begin{remark} \textup{
The condition, which we denote by $f \in E(p)$, that $f$ admits a
formal Taylor series at $p$ is weaker than $f \in C^\infty(p)$. The
precise definition that $f\in E(p)$ is as follows: for each positive integer
$k$, there exists a polynomial $p_k (z,\bar z)$ with degree not more
than $k$ satisfying
\[
f(z) - p_k (z,\bar z) = o(\|z\|^k),
\]
as $\|z\|$ tends to $0$. We point out that the condition $f \in C^\infty (p)$
in the preceding theorems can be weakened to $f \in E(p)$ and $f$ is of
class $C^1$ on a neighborhood of $p$. It will become evident, since
those are the only assumptions we are working with throughout the paper.}
\end{remark}
\section{Structure of paper, and remarks}
The original version of Forelli's theorem is concerned with the functions
harmonic along the leaves of the linear foliation of the ball $B^n (0;1)$
by the radial complex discs passing through the origin.
Forelli proved that such a function,
say $\psi\colon B^n (0;1) \to \mathbb{R}$, is pluriharmonic provided also
that $\psi \in C^\infty (0)$. Then the proof arguments imply (with
only very minor adjustments) Theorem \ref{Forelli-original}
as pointed out in \cite{Stoll80}.
There had been many attempts to weaken the condition
$f \in C^\infty (0)$ for $f$ to finite differentiability. This may have
sounded reasonable to a naive mind, as Hartogs' analyticity theorem
does not require any regularity assumptions beyond separate analyticity.
But no success was possible; numerous nonholomorphic examples of $f$
satisfying condition (2) of Theorem \ref{Forelli-original} were found, when
condition (1) was replaced by the condition $f \in C^k$ for any finite
$k$ \cite{KPS09}. On the other hand, there were quite a few successful
attempts (more than 25 years after \cite{Forelli77}), starting with
\cite{Chirka06}: see \cite{KPS09, JKS13, JKS16}, just to name a few.
Then there are related recent papers such as
\cite{BoKu19, Krantz18}. The second named author would like to
acknowledge that professor A. Sadullaev \cite{Sadullaev} informed him of
\cite{Madrak86}, even though we were unable to have access to the paper.
We point out that the methods we introduce are elementary, detailed, and
different. The starting point is with
the formal Taylor series of the given function $f:B^n(0;1)\to \mathbb{C}$
at the origin in both types of variables $z=(z_1, \ldots, z_n),
\bar z = (\bar z_1, \ldots, \bar z_n)$.
Then, under the assumptions of Theorem \ref{CK20a}, the formal
power series turns out to be free of $\bar z$ terms.
Once this step is achieved, we use the answers to a question of Bochner
by Lelong \cite{Lelong51} (also by Zorn \cite{Zorn47} and Ree \cite{Ree49}).
A mild adjustment on Lelong's analysis \cite{Lelong51} leads us to an
all dimensional principle which says that $f$ is holomorphic on
$B^n (0;r)$ for some $r>0$. Then the
conclusion of Theorem \ref{CK20a} follows by Hartogs' lemma
(cf., e.g., \cite{Chirka06}).
At this juncture we, especially the second named author, would like to
thank professors Takeo Ohsawa and Nessim Sibony for pointing out, in two
different occasions and with different suggestions, the possible relevance
of \cite{Zorn47, Ree49, Lelong51}.
Once the standard pencil case is obtained, we show that the case of a general
pencil of nonlinear Riemann surfaces follows from two important sources:
\begin{itemize}
\item[(1)] A variation (perhaps also minor) of the arguments of \cite{JKS13}.
\item[(2)] Finding a standard pencil (in the given pencil) along which the given function is holomorphic.
\end{itemize}
On the other hand, we remark that we do not know how to carry out
the arguments without the $C^1$ assumption on the foliation.
\section{Analysis with formal power series}
\subsection{Analysis with formal power series on standard pencil}
Note that the holomorphicity of $f:B^n(0;1)\to \mathbb{C}$ along a standard
pencil $(P_0(U),\psi)$ is the same as the following assumption:
\begin{itemize}
\item[(2')] $\bar E f \equiv 0$ on $P_0(U),$ where
$E = \sum\limits_{k=1}^n z_k \frac\partial{\partial z_k}$.
\end{itemize}
Denote by $\mathbb{C}[[z_1,\ldots,z_n,\bar{z}_1,\ldots,\bar{z}_n]]$ the ring
of formal power series in the variables
$z_1,\ldots,z_n,\bar{z}_1,\ldots,\bar{z}_n$ with coefficients in $\mathbb{C}$.
Call $S \in \mathbb{C}[[z_1,\ldots,z_n,\bar{z}_1,\ldots,\bar{z}_n]]$ \textit{of
holomorphic type} if $S$ is free of variables $\bar{z}_1,\ldots,\bar{z}_n$.
\begin{proposition}\label{holo formal}
If $S = \sum C_{I}^{J}z^I\bar{z}^J$ is an element of
$\mathbb{C}[[z_1,\ldots,z_n,\bar{z}_1,\ldots,\bar{z}_n]]$ satisfying the equation
$\bar{E}S\equiv 0$ on a
nonempty open subset $V$ of $\mathbb{C}^n$, then $S$ is of
holomorphic type, i.e., $C_{I}^{J}=0$ whenever $J\neq 0$.
\end{proposition}
\begin{proof}
Recall the multi-index notation as follows:
\[\alpha=(\alpha_1,\ldots,\alpha_n), ~z^{\alpha}=z_1^{\alpha_1}
\cdots z_n^{\alpha_n}~\text{and}~|\alpha|:=\alpha_1+\cdots+\alpha_n.\]
We first prove the proposition in the case that $S$ is a polynomial of finite
degree. Note that $\bar{E}S\equiv 0$ on $\mathbb{C}^n$, since $\bar{E}S$ is
real analytic. Consider the monomial term $S_{IJ}$ in $S$ of multi-degree
$(I, J) := (i_1,\ldots,i_n,j_1,\ldots,j_n)$ where $J=(j_1,\ldots,j_n)\neq 0$.
The monomial term in $\bar{E}S$ of multi-degree $(I, J)$ is precisely
$\bar E (S_{IJ})$. Then $\bar{E}S \equiv 0$ implies
\[
|J|C_{i_1,\ldots,i_n}^{j_1,\ldots,j_n}=0.
\]
Consequently, $C_{i_1,\ldots,i_n}^{j_1,\ldots,j_n}=0$, whenever
$J=(j_1,\ldots,j_n) \neq 0$.
Now let $S=\sum C_{I}^{J}z^I\bar{z}^J$ be any formal power series satisfying
$\bar E S \equiv 0$ on $V$. Then, for each nonnegative integer $m$,
we have
\[
\bar{E}(S_m)=(\bar{E}S)_m=0~\text{on}~V,
\]
where
\[
S_m:=\sum_{|I|+|J|\leq m}C_{I}^{J}z^I\bar{z}^J.
\]
Thus we conclude from the previous arguments on finite polynomials that
$S$ is of holomorphic type.
\end{proof}
\begin{corollary}
Let $f\colon B^n(0;1) \to \mathbb{C}$ be a complex valued function
satisfying the hypothesis of Theorem \ref{CK20a}. Then, the formal
Taylor series $S_f$ of $f$ at the origin is of holomorphic type.
\end{corollary}
\subsection{Bochner's problem on the radius of convergence}
Now the proof of Theorem \ref{CK20a} is reduced to establishing the
existence of a region of convergence of $S_f$ at the origin. The goal of this
subsection is to introduce Theorem \ref{Lelong} which is a criterion on the
analyticity of formal power series of holomorphic type in two complex
variables (all dimensional principle will be handled in the next section). The
line of research concerned with the theorem originates from the following
question of Bochner (cf.\ \cite{Zorn47}):
\begin{question}
Let $S=\sum a_{ij}z_1^iz_2^j$ be a formal power series with complex
coefficients such that every substitution of convergent power series with
complex coefficients $z_1=\sum b_it^i$, $z_2=\sum c_it^i$ produces a
convergent power series in $t$. Is $S$ convergent on some neighborhood
of $0\in \mathbb{C}^2?$
\end{question}
This question was answered affirmatively by Zorn in the following form.
\begin{theorem}[\cite{Zorn47}]\label{Zorn}
Let $S=\sum a_{ij}z_1^iz_2^j$ be a formal power series with complex
coefficients such that for any $(a,b)\in \mathbb{C}^2$, $S(at,bt)$ is a power
series in the complex variable $t$ with a positive radius of convergence. Then
there exists a neighborhood $U$ of $0\in \mathbb{C}^2$ such that $S$ is
holomorphic on $U$.
\end{theorem}
Zorn also remarked in \cite{Zorn47} that it would be interesting to know
whether Bochner's conjecture holds in the `real case'. Ree \cite{Ree49}
clarified the meaning of the real case and generalized the work of Zorn as
follows:
\begin{theorem}[\cite{Ree49}]\label{Ree}
Let $S=\sum a_{ij}z^i_1z^j_2$ be a formal power series with complex
coefficients such that for any $(a,b)\in \mathbb{R}^2$, $S(at,bt)$ is a power
series in the complex variable $t$ with a positive radius of convergence. Then
there exists a neighborhood $U$ of $0\in \mathbb{C}^2$ such that $S$ is
holomorphic on $U$.
\end{theorem}
Lelong \cite{Lelong51} introduced the following definition to generalize the
preceding theorems further:
\begin{definition}[\cite{Lelong51}]
\textup{$E\subset \mathbb{C}^2$ is called \textit{normal} if any
formal power series $S\in \mathbb{C}[[z_1,z_2]]$ enjoying the property
that $S_{a,b}(t):=S(at,bt)\in \mathbb{C}[[t]]$ has a positive radius of convergence
for every $(a,b)\in E$ becomes holomorphic on some open neighborhood
of the origin in $\mathbb{C}^2$.
}
\end{definition}
\subsection{Logarithmic capacity}
In the terminology of \cite{Lelong51}, what Theorems \ref{Zorn}
and \ref{Ree} say is that $\mathbb{C}^2$ and $\mathbb{R}^2$ are normal sets,
respectively. For the sake of smooth exposition, we would like to
cite the following definition.
\begin{definition}[cf., \cite{Ransford95}] \label{log capacity}
\textup{Let $\mu$ be a finite Borel measure on $\mathbb{C}$ with compact
support. Then the $\textit{energy}~ I(\mu)$ of $\mu$ is given by
$I(\mu)=\iint \log|z-w|d\mu(w) d{\mu}(z)$. The
$\textit{logarithmic capacity}$ of a subset $E$ in $\mathbb{C}$ is
defined to be $c(E):=\sup e^{I(\mu)}$, where the supremum is taken
over the set of all probability measures on $\mathbb{C}$ with
compact support in $E$.
}
\end{definition}
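For orientation, we recall (see \cite{Ransford95}) that the closed disc $\overline{B^1(0;r)}$ has logarithmic capacity $r$ and a line segment of length $\ell$ has capacity $\ell/4$; in particular, any set containing a nonempty open subset of $\mathbb{C}$ has positive capacity, while countable sets have capacity zero.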
Then the theorem of Lelong in \cite{Lelong51} states:
\begin{theorem}[\cite{Lelong51}]\label{Lelong}
$E\subset \mathbb{C}^2$ is normal if, and only if,
$E':=\{\frac{z_2}{z_1}\in \mathbb{C}\colon (z_1,z_2)\in E, z_1\neq 0\}$
is not contained in any $F_{\sigma}$-set $F$ with $c(F)=0$.
\end{theorem}
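To illustrate the capacity condition, consider the single direction $E=\{(1,0)\}$, so that $E'=\{0\}$ and $c(E')=0$. The formal power series $S=\sum_{k\geq 1}k!\,z_2^k$ restricts along $E$ to $S_{1,0}(t)\equiv 0$, which trivially has infinite radius of convergence, and yet $S$ converges on no neighborhood of the origin in $\mathbb{C}^2$; hence $E$ is not normal, in accordance with Theorem \ref{Lelong}.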
Notice that the proof of Theorem \ref{CK20a} in the case of complex
dimension two follows by the arguments up to this point.
\section{Cases for standard pencils in all dimensions}
Now we generalize Theorem \ref{Lelong} to all dimensions and give a
complete proof of Theorem \ref{CK20a}. We shall follow the line
of arguments by Lelong \cite{Lelong51} but start with the notion of
capacity introduced by Siciak.
\begin{definition}[\cite{Siciak81}]
\textup{Denote by $\textup{PSH}(\mathbb{C}^n)$ the set of all
plurisubharmonic functions on $\mathbb{C}^n$. For each positive
integer $n$ and a subset $E$ of $\mathbb{C}^n$, define
\[
\mathcal{L}_n:=\{u\in \textup{PSH}(\mathbb{C}^n):\exists C_u\in \mathbb{R}
~ \text{such that}~u(z)\leq C_u+\textup{log}~(1+\|z\|)
~ \forall z\in \mathbb{C}^n \},
\]
\[
V_E(z):=\textup{sup}\{u(z)\colon u\in \mathcal{L}_n ,u\leq 0 ~\text{on}~ E\},
~ \forall z\in \mathbb{C}^n.
\]
Then the $\textit{logarithmic capacity}$ of $E$ is defined to be
\[
c_n(E):=e^{-\gamma_n(E)},
\]
where
}
\begin{align*}
\gamma_n(E)&:=\limsup\limits_{\|z\|\to \infty}(V^*_E(z)-\textup{log}\,\|z\|),\\
V^*_E(z)&:=\limsup\limits_{w\to z}V_E(w).
\end{align*}
\end{definition}
Notice that this concept of logarithmic capacity coincides with
the aforementioned logarithmic capacity in the case of $n=1$.
The following theorem (cf. Sect.\ 3 of \cite{Siciak81}, p.172 of
\cite{Klimek91}) plays an important role for our arguments:
\begin{theorem}\label{sequence of pshfunctions}
Let $U$ be an open subset of $\mathbb{C}^n$ and $\{u_k\}$ a sequence of
plurisubharmonic functions on $U$. Define functions $u,u^*$ on $U$ as
\[
u(z):=\limsup\limits_{k\to \infty}u_k(z),\quad
u^*(z):=\limsup\limits_{U \ni w\to z}u(w).
\]
If $u$ is locally bounded from above, then the set
$E:=\{z\in U\colon u(z)<u^*(z)\}$ has a vanishing logarithmic
capacity.
\end{theorem}
Now we investigate the behavior of the sequence
$\{\frac{1}{k}\text{log}\,|P_k(z)|\}$ of plurisubharmonic functions.
\begin{proposition}\label{sequence of polynomials}
Let $\{P_k\}\subset \mathbb{C}[z_1,\ldots,z_n]$ be a sequence of
polynomials with $\textup{deg}\,P_k\leq k$ for each positive integer $k$.
Let $z=(z_1,\ldots,z_n)\in \mathbb{C}^n$,
$r=(r_1,\ldots,r_n)\in {\mathbb{R}^n_{+}}$, where
\[
{\mathbb{R}^n_{+}}:=\{(r_1,\ldots,r_n)\in \mathbb{R}^n\colon r_i>0
~\text{for each}~i = 1,\ldots,n\}.
\]
Also, define a sequence of plurisubharmonic functions
\(
u_k(z):=\frac{1}{k} \textup{log}\, |P_k(z)|
\)
on $\mathbb{C}^n$ and their average
\[
u^{r}_k(z):=\frac{1}{(2\pi)^n}\int_{0}^{2\pi}\cdots\int_{0}^{2\pi}
u_k(z_1+r_1e^{i\theta_1},\ldots,z_n+r_ne^{i\theta_n})\ d\theta_1
\cdots d\theta_n
\]
on the distinguished boundary of the polydisc
$P^n(z;r):=\{(w_1,\ldots,w_n)\in \mathbb{C}^n
\colon |w_i-z_i|<r_i~\text{for all}~ i =1,\ldots,n\}$ of polyradius $r$
centered at $z$. Then the following hold:\\
\begin{enumerate}
\item Let $r_0\in (0,\infty)$ be given. If $r,s\in \mathbb{R}^n_+$
and $r_i, s_i>r_0,~ \forall i = 1,\ldots,n$, then
\begin{equation}\label{estimate n}
|u^{r}_k(z)-u^{s}_k(w)|
< \frac{1}{r_0}(|r-s|+|z-w|)
\end{equation}
for every positive integer $k$ and all $z,w\in \mathbb{C}^n$, where
$|z-w|:=\sum_{i=1}^{n}|z_i-w_i|.$\\
\item Fix $r=(r_1,\ldots,r_n) \in \mathbb{R}_+^n$
and let $\alpha_{r}=
\limsup_{k\to \infty}u^{r}_k(0)$. Then one and the only one of the following
mutually exclusive cases is valid:
\\
\begin{enumerate}
\itemsep 0.4em
\item $\alpha_{r}= -\infty$ and $u_k \to -\infty$ uniformly on each compact subset of $\mathbb{C}^n$.
\item $\alpha_{r}= +\infty$ and $u(z)=\limsup_{k\to \infty}u_k(z)=\infty$ except on a set of vanishing logarithmic capacity.
\item $-\infty<\alpha_{r}<\infty$ and $\{u_k\}$ is uniformly bounded from above on each compact subset of $\mathbb{C}^n$. \\
\end{enumerate}
\end{enumerate}
\end{proposition}
\begin{proof}
(1) In the case of $n=1$, the following inequality has been established
in \cite{Lelong51}:
\begin{equation}\label{estimate 1}
|u^r_k(z)-u^s_k(w)|< \frac{1}{r_0}(|r-s|+|z-w|)
\end{equation}
for any positive integer $k$, $z,w\in \mathbb{C}$, $r,s>r_0$.
Let
\begin{align*}
z&=(z_1,\ldots,z_n),~ w=(w_1,\ldots,w_n)\in \mathbb{C}^n,\\
r&=(r_1,\ldots,r_n),~ s=(s_1,\ldots,s_n)\in {\mathbb{R}^n_{+}}
\end{align*}
be given. Define
\[
z'_i=(w_1,\dots,w_i,z_{i+1},\dots,z_n),~
r'_i=(s_1,\dots,s_i,r_{i+1},\dots,r_n)
\]
for each $i\in \{0,\ldots,n\}$ and fix a positive integer $k$. Since $u_k$
is subharmonic in each variable separately, the following sequence of
inequalities follows from the inequality (\ref{estimate 1}):
\[
|u^{r'_{i-1}}_k(z'_{i-1})-u^{r'_i}_k(z'_i)|<\frac{1}{r_0}(|r_i-s_i|+|z_i-w_i|),
~ i \in\{1,\ldots,n\}.
\]
Then we obtain
\begin{align*}
|u^{r}_k(z)-u^{s}_k(w)|
&\leq
\sum_{i=1}^{n} |u^{r'_{i-1}}_k(z'_{i-1})-u^{r'_i}_k(z'_i)| \\
&<
\sum_{i=1}^{n}\frac{1}{r_0}(|r_i-s_i|+|z_i-w_i|)\\
&=
\frac{1}{r_0}(|r-s|+|z-w|).
\end{align*}
(2-a) If $\alpha_{r}=-\infty$, then $u^{r}_k(0)\to -\infty$ as $k\to \infty$.
Let $K$ be a compact subset of $\mathbb{C}^n$. By the inequality
(\ref{estimate n}) and the sub-mean value inequality $u_k(z)\leq u^{r}_k(z)$,
\[
\lim\limits_{k\to \infty}u_k(z)= -\infty
\]
uniformly in $z\in K$.
(2-b) If $\alpha_{r}=\infty$, then there exists a subsequence
$\{u^r_{n_k}(0)\}$ of $\{u^r_{k}(0)\}$ such that
\[
\lim\limits_{k\to \infty}u^{r}_{n_k}(0)=\infty.
\]
Define a sequence $\{v_k(z)\}$ of plurisubharmonic functions on
$\mathbb{C}^n$ by
\[v_k(z):=\frac{u_{n_k}(z)}{u^{r}_{n_k}(0)}.
\]
It follows from the inequality (\ref{estimate n}) that
$\lim \limits_{k \to \infty}{v}_k^{r}(z)=1$ for all $z\in \mathbb{C}^n$.
Set $v(z):=\limsup \limits_{k\to \infty}v_k(z)$ for any $z\in \mathbb{C}^n$.
One can check directly that
\[
v^*(z):=\limsup \limits_{w\to z}v(w)
=\lim \limits_{r\to 0}(\limsup \limits_{k\to \infty}v_k^{r}(z))=1
\]
for every $z \in \mathbb{C}^n$. Consequently,
\[
\{z\in \mathbb{C}^n\colon u(z)<\infty \} \subset \{z\in \mathbb{C}^n
\colon v(z)\leq 0<1=v^*(z)\} \subset \{z\in \mathbb{C}^n \colon v(z)<v^*(z)\}.
\]
Therefore, $u(z)= \infty $ for all $z\in \mathbb{C}^n$ except on a set of
vanishing logarithmic capacity by Theorem \ref{sequence of pshfunctions}.
(2-c) Let $K$ be a compact subset of $\mathbb{C}^n.$ If $\alpha_{r}$ is
finite, then it follows from the inequality (\ref{estimate n}) that $\{u^{r}_k(z)\}$
is uniformly bounded from above on $K$. Therefore, $\{u_k(z)\}$ is also
uniformly bounded from above on $K$, since $u_k(z)\leq u^{r}_k(z)$ for every
positive integer $k$ and $z \in \mathbb{C}^n$.
\end{proof}
\begin{definition}
\textup{A set $E\subset \mathbb{C}^n$ is called $\textit{normal}$
if any formal
power series $S\in \mathbb{C}[[z_1,$ $\ldots,z_n]]$ for which
$S_{a_1,\ldots,a_n}(t):=S(a_1t,\ldots,a_nt)\in \mathbb{C}[[t]]$ has a positive
radius of convergence $R_{(a_1,\ldots,a_n)}>0$ for every
$ (a_1,\ldots,a_n)\in E$ becomes holomorphic on some open neighborhood
of $0$ in $\mathbb{C}^n$.
}
\end{definition}
Now we use:
\begin{theorem}\label{generalized Lelong}
$E\subset \mathbb{C}^n$ is normal if $c_{n-1}(E')\neq 0$, where
$E':=\{(\frac{z_2}{z_1},\ldots,\frac{z_n}{z_1})\in \mathbb{C}^{n-1}
\colon (z_1,\ldots,z_n)\in E, z_1\neq 0 \}$.
\end{theorem}
In fact, a more general result was known earlier. See \cite{LevenMol88} and
the references therein. But we only need the simple case stated here. We
choose to provide here the following proof for the sake of smooth reading.
\begin{proof}
Let
\[
S=\sum a_{i_1,\ldots,i_n}{z_1}^{i_1}\cdots {z_n}^{i_n}\in \mathbb{C}[[z_1,\ldots,z_n]]
\]
be a formal power series for which
$S_{a_1,\ldots,a_n}(t):=S(a_1t,\ldots,a_nt)$ has a positive radius of
convergence $R_{(a_1,\ldots,a_n)}$ $>0$ for every $(a_1,\ldots,a_n)\in E$.
We are to show that $S$ is holomorphic on some open neighborhood
of $0$ in $\mathbb{C}^n$.
Note that, for any $b=(b_1,\ldots,b_{n-1})\in E'$, the power series
$S_{1,b_1,\ldots,b_{n-1}}$ converges absolutely and uniformly within
the radius $\frac12 R_{1,b_1,\ldots,b_{n-1}}$. So it can be rearranged
as follows, with the same radius of convergence:
\begin{align*}
S_{1,b_1,\ldots,b_{n-1}}(t)&=\sum_{k=0}^{\infty}\Big(
\sum_{i_1,\ldots,i_{n-1}=0}^{k}
a_{k-(i_1+\cdots+i_{n-1}),i_1,\ldots,i_{n-1}}
b_1^{i_1}\cdots b_{n-1}^{i_{n-1}}\Big) t^k \\
&=\sum_{k=0}^{\infty}P_k(b)t^k,
\end{align*}
where
\[
P_k(z):=\sum_{i_1,\ldots,i_{n-1}=0}^{k}a_{k-(i_1+\cdots+i_{n-1}),
i_1,\ldots,i_{n-1}}z_1^{i_1}\cdots z_{n-1}^{i_{n-1}}
\in \mathbb{C}[z_1,\ldots,z_{n-1}].
\]
Notice that $P_k$ is a polynomial of degree less than or equal to
$k$.
The root test implies
\[
\limsup\limits_{k\to \infty}\frac{1}{k}\text{log}\, |P_k(b)|=-\text{log}\,
R_{(1,b)}<\infty ~\text{for all}~ b\in E'.
\]
Choose any ${r}=(r_0,\ldots,r_0) \in \mathbb{R}_+^{n-1}$ and recall that
$c_{n-1}(E')\neq 0$. By Proposition \ref{sequence of polynomials},
there exists a constant $M>0$ such that
\[
\frac{1}{k}\,\text{log}\,|P_k(b)|\leq \text{log}\,M,
\]
or equivalently,
\[
|P_k(b)|<M^k, ~\forall b \in P^{n-1}(0;2r),
\]
for any positive integer $k$. Cauchy estimates implies
\[
|a_{k-(i_1+\cdots+i_{n-1}),i_1,\ldots,i_{n-1}}|
\leq M^k r_0^{-{(i_1+\cdots+i_{n-1})}}
\]
for any multi-index $(i_1,\ldots,i_{n-1})$ satisfying
$i_1+\cdots+i_{n-1}\leq k$. This yields that
\[
|a_{i_1,\ldots,i_{n}}z_1^{i_1}\cdots z_n^{i_n}|<(\tfrac{1}{2})^{k}
\]
whenever
\[
|z_1|<\frac{1}{2M},|z_2|<\frac{r_0}{2M},~\cdots~,|z_n|<\frac{r_0}{2M}
\textrm{ and } i_1+\cdots+i_n=k.
\]
Therefore, we have
\[
\sum_{i_1+\cdots+i_n=k}
|a_{i_1,\ldots,i_{n}}z_1^{i_1}\cdots z_n^{i_n}|<k^n 2^{-k}
\]
for each positive integer $k$, $(z_1,\ldots,z_n)\in P^n(0;r')$, where
$r'=(\frac{1}{2M},\frac{r_0}{2M},\ldots,\frac{r_0}{2M})\in \mathbb{R}^n_+$.
Then we conclude from the Weierstrass $M$-test that the formal power series
$S$ is convergent on $P^n(0;r')$.
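Here the polynomial factor simply counts the multi-indices of total degree $k$,
\[
\#\{(i_1,\ldots,i_n)\in\mathbb{Z}_{\geq 0}^n \colon i_1+\cdots+i_n=k\}
=\binom{k+n-1}{n-1}\leq (k+1)^{n},
\]
and $\sum_{k\geq 1}(k+1)^{n}\,2^{-k}<\infty$, so the comparison series indeed converges.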
\end{proof}
\textit{Proof of Theorem \ref{CK20a}}. Let
$f\colon B^n(0;1)\to \mathbb{C} $ be a function that is smooth at the
origin and holomorphic along a standard pencil $(P_0(U),\psi)$. Then the
formal Taylor series $S_f$ is of holomorphic type by Corollary
\ref{holo formal}. Note that given an open subset $U$ of $S^{2n-1}\subset
\mathbb{C}^n$, the corresponding set $U'$ in
$\mathbb{C}^{n-1}$ contains an open ball, say $B$, of a positive radius.
Note also that $0\neq c_{n-1}(B)\leq c_{n-1}(U')$. So, $S_f$ is
holomorphic on $B^n(0;r)$ for some $r>0$ by
Theorem \ref{generalized Lelong}. Now $f=S_f$ and moreover,
Hartogs' lemma [\textit{Ibid}] implies that
there exists a unique holomorphic extension of $f$ on $B^n(0;r)\cup P_0(U)$.
\hfill $\Box$
\section{Case of a general pencil---Proof of Theorem \ref{CK20b}}
\begin{definition} \textup{
By a \textit{subpencil} of a pencil $(\mathcal{P}(p, U, \Omega), h)$
we mean a pencil of the form $(\mathcal{P}(p, V, \Omega), h')$,
where $h'$ is a restriction of $h$ to $P_0(V)\cap B^n(0;r)$ for some
constant $r$ with $0<r\leq 1$ and an open subset $V$ of $U$.
}
\end{definition}
The key turns out to be finding an appropriate standard subpencil
at $p$ along which $f$ is holomorphic. We therefore proceed in two steps
as follows:
\bigskip
\begin{narrower}
\textbf{Step 1.} There is a subpencil on whose underlying set
$f$ is holomorphic.
\par
\textbf{Step 2.} There is a standard subpencil of
the pencil found in Step 1.
\end{narrower}
\bigskip
\noindent
Notice that it follows by Step 2 and Theorem \ref{CK20a} that
there is an open neighborhood of $p$ on which $f$ is holomorphic.
This of course proves Theorem \ref{CK20b}, the desired final
conclusion.
\bigskip
\textbf{Step 1.} We may assume without loss of generality that $p=0$ in
$\mathbb{C}^n$. Now we recapitulate the following result of \cite{JKS13}:
\begin{theorem}[\cite{JKS13}] \label{CR equation}
Let $(\mathcal{P}(0, U, \Omega), h)$ be a $C^1$ pencil of holomorphic discs at the
origin in $\mathbb{C}^n$. If $f\colon \Omega \to \mathbb{C}$ satisfies the
following conditions
\begin{enumerate}
\item $f\in C^{\infty}(0)$, and
\item $f$ is holomorphic along the pencil,
\end{enumerate}
then for each $v\in U$, there exists a positive number $r_v>0$
such that $f$ satisfies the Cauchy-Riemann equation on
$L_v:=\{h(zv)\in \Omega \colon z\in B^1(0;r_v) \}$.
\end{theorem}
\begin{proof}
We sketch the proof only for the case of $n=2$, even though it works in
all dimensions, as in \cite{JKS13}.
Let $v_0 \in U$ be given. One can choose coordinates around the origin
in $\mathbb{C}^2$ such that $v_0$ becomes
$(1,0)$. Fix a number $\epsilon_0>0$
such that
\[
\tilde{h}\colon B^1(0;\epsilon_0)\times B^1(0;\epsilon_0)\to \Omega,
\quad \tilde{h}(z_1,z_2):=h(z_1,z_1z_2)
\]
is well-defined.
By a change of holomorphic coordinates, the map $\tilde{h}$ can be locally
expressed as
\[
\tilde{h}\colon B^1(0;\epsilon)\times B^1(0;\epsilon)\to \mathbb{C}^2,
\quad \tilde{h}(z_1,z_2)=(z_1,k(z_1,z_2))
\]
where $\epsilon$ is a positive real number and
$k\colon B^1(0;\epsilon)\times B^1(0;\epsilon)\to \mathbb{C}$
satisfies the following conditions:
\begin{enumerate}
\itemsep 0.5mm
\item $k(z_1,z_2)$ is $C^1$ in $z_1,z_2$ and holomorphic in $z_1$.
\item $k(0,z_2)=0$ for any $z_2\in B^1(0;\epsilon) $.
\item $\frac{\partial k(z_1,z_2)}{\partial z_1}\vert_{z_1=0}=z_2$ and
hence $k(z_1,z_2)=z_1z_2+o(|z_1|).$
\end{enumerate}
Note that $\tilde{h}(z_1,0)=(z_1,0)$ represents $h_{v_0}\colon
z\in B^1(0;\epsilon)\to h(zv_0)\in \Omega.$
Let $f=f(z,w)$ be a function satisfying the given assumptions and
let $F(z_1,z_2):=f(z_1,k(z_1,z_2))$.
Then
\[
\frac{\partial F}{\partial z_2}=\frac{\partial f}{\partial w}
\frac{\partial k}{\partial z_2}+\frac{\partial f}{\partial \bar{w}}
\frac{\partial \bar{k}}{\partial z_2},
\]
\[
\frac{\partial F}{\partial \bar{z}_2}=\frac{\partial f}{\partial w}
\frac{\partial k}{\partial \bar{z}_2}+\frac{\partial f}{\partial \bar{w}}
\frac{\partial \bar{k}}{\partial \bar{z}_2}.
\]
Since $\frac{\partial k}{\partial z_2}(z_1,0)$ and
$\frac{\partial k}{\partial \bar{z}_2}(z_1,0)$ are holomorphic in
$z_1$ (see p.1173--1174 of \cite{JKS13}), it follows that
\[
\Big(\frac{\partial k}{\partial \bar{z}_2}\frac{\partial F}{\partial z_2}
-\frac{\partial k}{\partial z_2}\frac{\partial F}{\partial \bar{z}_2}\Big)
\Big|_{(z_1,0)}
\]
is also holomorphic in $z_1$. Notice that this last is equal to
\[
H(z_1)\frac{\partial f}{\partial \bar{w}}(z_1,0),
\]
where
\[
H(z_1):=\Big(\frac{\partial k}{\partial \bar{z}_2}
\frac{\partial \bar{k}}{\partial z_2}
-\frac{\partial k}{\partial z_2}\frac{\partial \bar{k}}{\partial \bar{z}_2}
\Big)(z_1,0).
\]
Let $G(z_1) := H(z_1)\frac{\partial f}{\partial \bar{w}}(z_1,0)$; by the
preceding observation, $G$ is holomorphic in $z_1$.
By a direct computation, $G$ turns out to coincide with $\bar{z}_1g(z_1)$,
where $g$ is a smooth function. Hence every $z_1$-derivative of $G$ vanishes
at the origin, so the Taylor series of the holomorphic function $G$ at
$0\in \mathbb{C}$ vanishes identically, which implies that $G\equiv 0$ near the origin.
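Indeed, since $\partial \bar{z}_1/\partial z_1=0$, we have for every integer $m\geq 0$
\[
\frac{\partial^m G}{\partial z_1^m}(0)
=\Big(\bar{z}_1\,\frac{\partial^m g}{\partial z_1^m}(z_1)\Big)\Big|_{z_1=0}
=0.
\]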
It holds also that $H(z_1)$ is nowhere zero in a punctured neighborhood
of $0$. Altogether, it follows that there exists $r>0$ such that
\[
\frac{\partial f}{\partial \bar{w}}(z_1,0)=0~\text{whenever}~ |z_1|<r,
\]
which implies
\[
0=\frac{\partial F}{\partial\bar{z}_1}(z_1,0)
= \Big(\frac{\partial f}{\partial\bar{z}}+\frac{\partial f}{\partial\bar{w}}
\frac{\partial \bar{k}}{\partial\bar{z}_1}\Big)(z_1,0)
= \frac{\partial f}{\partial\bar{z}}(z_1,0)
\]
for any $z_1$ with $|z_1|<r$. Therefore, $f$ satisfies the Cauchy-Riemann
equation on $L_{v_0}$.
\end{proof}
Moreover, we have
\begin{proposition}
Let $(\mathcal{P} (0, U, \Omega),h)$ and $f:\Omega\subset \mathbb{C}^n \to \mathbb{C}$
be given as in Theorem \ref{CR equation}. Then there exists a subpencil
$(\mathcal{P} (0, V, \Omega),h')$ such that $f$ satisfies the Cauchy-Riemann
equation at every point of its underlying set.
\end{proposition}
\begin{proof}
For each positive integer ${\ell}$, define
\[
V_{\ell}:=\{v\in U\colon \frac{\partial f}{\partial \bar{z}_k}(h(zv))\equiv 0~
\text{for all}~ k\in \{1,\ldots,n\},~ z\in B^1(0;\frac{1}{\ell})\}.
\]
Since $f$ is $C^1$ on $B^n(0;r)$ for some $r>0$, each $V_{\ell}$ is closed
in $U$, and by Theorem \ref{CR equation},
\[
U=\bigcup_{\ell} V_{\ell}
\]
where the union is taken over the set of all positive integers. By the Baire
category theorem, there exists a positive integer $m$ such that $V_m$
contains some nonempty open subset $V$ of $U$.
Then $f$ satisfies the Cauchy-Riemann equation on
\[
\mathcal{P} (0, V, \Omega):=\Big\{h(zv)\in \Omega\colon
z\in B^1(0;\frac{1}{m}), v\in V\Big\},
\]
as desired.
\end{proof}
\textbf{Step 2.}
We are ready to complete the proof of Theorem \ref{CK20b}. Recall that
it remains only to establish the existence of a standard subpencil of
$\mathcal{P}(0, V, \Omega)$ along which $f$ is holomorphic.
\begin{theorem}
Let $(\mathcal{P}(0, V, \Omega),h)$ be a $C^1$ pencil of holomorphic discs at the
origin of $\mathbb{C}^n$ and $W$ a relatively compact open subset of $V$.
Then $P_0(W)\cap B^n(0;r)\subset\mathcal{P}(0, V, \Omega)$ for some $r>0$.
\end{theorem}
\begin{proof}
Suppose that
\[
P_0(W)\cap B^n(0;\tfrac{1}{m})\not\subset \mathcal{P}(0, V, \Omega)
\]
for any positive integer $m$. Fix $v \in W$. Since $h$ is a homeomorphism,
there exists $s_v>0$ such that $zv\in h(B^1(0;1))$, for any $z\in \mathbb{C}$
with $|z|<s_v$.
Then there exist sequences
\[
\{z_k\}\subset B^1(0;1),~\{z'_k\}\subset B^1(0;1),
~\{v'_k\}\subset S^{2n-1}\backslash V,~ \{v_k\}\subset W
\]
satisfying the following conditions:
\begin{enumerate}
\item $z_kv_k=h(z'_kv'_k)\notin \mathcal{P}(0, V, \Omega)$ for each
positive integer $k$,
\item $\lim_{k\to\infty} z_k = 0 = \lim_{k\to\infty} z'_k$.
\end{enumerate}
Taking subsequences whenever necessary, we may assume without
loss of generality that
\[
\lim\limits_{k \to \infty}v'_k= v' \in S^{2n-1} \text{ and }~
\lim\limits_{k \to \infty}v_k= v \in \overline{W}\subset V.
\]
Furthermore, we see that
\[
\big(\lim\limits_{k \to \infty}\tfrac{z_k}{z'_k}\big)v
=\lim\limits_{k \to \infty}\big\{\tfrac{h(z'_kv'_k)}{z'_k}\big\}=v'.
\]
Therefore, there exists a real number $\theta$ such that
$e^{i\theta}v'=v\in V$. Since $\lim_{k\to\infty} v'_k = v'$, there is
a positive integer $N$ such that $e^{i\theta}v'_N\in V$. This implies that
\[
z_Nv_N=h(z'_Nv'_N)=h(e^{-i\theta}z'_Ne^{i\theta}v'_N)\in \mathcal{P}(0, V, \Omega),
\]
contradicting (1) above. This completes the proof.
\end{proof}
\section*{Data availability statement}
This article does not use any associated data.
\chapter*{Abstract}
The quantum field theoretic description of general relativity is a modern approach to gravity where gravitational force is carried by spin-2 gravitons.
In the classical limit of this theory, general relativity as described by the Einstein field equations is obtained.
This limit, where classical general relativity is derived from quantum field theory is the topic of this thesis.
\par
The Schwarzschild-Tangherlini\ metric, which describes the gravitational field of an inertial point particle in arbitrary space-time dimensions, $D$, is analyzed.
The metric is related to the exact three-point vertex function of a massive scalar interacting with a graviton to all orders in $G_N$, and the one-loop contribution to this amplitude is computed from which the leading-order self-interaction contribution to the metric is derived.
\par
To understand the gauge-dependence of the metric, covariant gauge (i.e. $R_\xi$-gauge) is used which introduces the arbitrary parameter, $\xi$, and the gauge-fixing function $G_\sigma$.
In the classical limit, the gauge-fixing function turns out to be the coordinate condition, $G_\sigma=0$.
As gauge-fixing function a novel family of gauges, which depends on an arbitrary parameter $\alpha$ and includes both harmonic and \dDo\ gauge, is used.
\par
Feynman rules for the graviton field are derived and important results are the graviton propagator in covariant \dDo-gauge and a general formula for the n-graviton vertex in terms of the Einstein tensor.
The Feynman rules are used both in deriving the Schwarzschild-Tangherlini\ metric from amplitudes and in the computation of the one-loop correction to the metric.
\par
The one-loop correction to the metric is independent of the covariant gauge parameter, $\xi$, and satisfies the gauge condition $G_\sigma=0$ where $G_\sigma$ is the family of gauges depending on $\alpha$.
It is compared to the literature for particular values of $\alpha$ and $D$ and, also, to an independent derivation using only methods from classical general relativity.
In space-time dimension $D=5$ a logarithm appears in position space, and this phenomenon is analyzed in terms of redundant gauge freedom.
\tableofcontents
\chapter{Introduction}
\labelx{sec:Introduction}
\pagenumbering{arabic}
The modern approach to general relativity using the framework of quantum field theory has successfully been used to derive several exciting results in the theory of gravity.
These include the description of classical binary systems as well as predictions on quantum corrections to gravitational processes at low energies.
For example, the analytic tools from quantum field theory are essential to improve the accuracy of theoretic predictions on gravitational waves.
Here, modern methods of quantum field theory such as on-shell scattering amplitudes and generalized unitarity are helpful.
The predictions on quantum corrections to gravity, being at the moment insignificant to experimental observations, are of great interest for investigations into the quantum theory of gravity.
\par
Well-known physicists have worked on the quantum field theoretic description of gravity.
In Refs.~\cite{Feynman:1996kb,Feynman:1963ax}, Feynman introduced spin-2 particles to describe gravitational interactions.
From these and similar investigations it was realized that it is necessary to include ``ghosts'' in the Feynman rules, which is now understood from the Faddeev-Popov gauge-fixing procedure~\cite{DeWitt:1967ub,DeWitt:1967uc,Faddeev:1973zb,Faddeev:1967fc}.
In Ref.~\cite{tHooft:1974toh}, 't Hooft and Veltman analyzed the one-loop divergencies of quantum gravity.
At one-loop order, they found that ``pure gravity'', that is gravity described by the \EHA\ action, is renormalizable.
Later investigations~\cite{Goroff:1985th} have shown that this fails at two-loop order so that ``pure gravity'' is non-renormalizable.
This is not surprising considering the negative mass dimension of the gravitational constant.
In Weinberg's comprehensive treatment of general relativity, Ref.~\cite{Weinberg:1972kfs}, the field theoretic point of view was consistently used in favor of the geometric description of gravity.
Additional significant contributions include Refs.~\cite{Schwinger:1963re,Schwinger:1968rh,Weinberg:1964ew,Iwasaki:1971vb}.
\par
An important contribution to the quantum field theoretic description of gravity was that of Donoghue who used an effective field theoretic approach which made it possible to deal rigorously with the non-renormalizability of quantum gravity and to compute quantum corrections to gravity at low energies \cite{Donoghue:1995cz,Donoghue:1994dn,Donoghue:1993eb,BjerrumBohr:2002kt,Bjerrum-Bohr:cand}.
In one line of work, quantum corrections to the metric were computed~\cite{Donoghue:2001qc,BjerrumBohr:2002ks}.
\par
Today, methods from quantum field theory have been used to derive a number of results in classical general relativity~\cite{Duff:1973zz,Buonanno:1998gg,Goldberger:2004jt,Holstein:2008sx,Neill:2013wsa,Akhoury:2013yua,Vaidya:2014kza,Damour:2017zjx,Damour:2019lcq,Guevara:2017csg,Cachazo:2017jef,KoemansCollado:2019ggb,Bjerrum-Bohr:2018xdl,Cheung:2018wkq,Bern:2019nnu,Bern:2019crd,Chung:2019yfs,Cheung:2020gyp,Kosower:2018adc,Bjerrum-Bohr:2019kec,Cristofoli:2019neg,Jakobsen:2020ksu,Cristofoli:2020uzm,Cristofoli:2020hnk}.
These include the post-Minkowskian expansion with which the binary system of two gravitationally interacting objects have been analyzed.
Here, scattering amplitudes are used to derive Hamiltonians, potentials and scattering angles of the two classical, interacting objects.
\par
Instead of the four space-time dimensions that we usually associate with the physical world, the field theoretic description of gravity is easily generalized to arbitrary space-time dimensions, $D$.
This is also relevant when the dimensional regularization scheme is used.
Studies of gravity in arbitrary dimensions can be found in Refs.~\cite{Cristofoli:2020uzm,Emparan:2008eg,Collado:2018isu,Cristofoli:2020hnk}.
In arbitrary dimensions, the metric of an inertial point particle is generalized from the Schwarzschild metric to the Schwarzschild-Tangherlini\ metric.
The derivation of this metric from the quantum field theoretic approach to gravity is a main topic of this thesis.
\par
In quantum field theory, gauge theories of spin-1 particles are a great success and play a major role in the Standard Model of particle physics.
Examples are Yang-Mills theory, quantum chromodynamics and quantum electrodynamics.
In the quantum field theoretic approach to general relativity it is found that gravitational interactions can be described by spin-2 particles.
These spin-2 gravitons carry the gravitational force in analogy to e.g. the strong force carried by spin-1 gluons from quantum chromodynamics.
These similarities invite for a new interpretation of the general covariance of general relativity where the graviton field is treated as a gauge particle in analogy to the gluon field.
An exciting discovery is the double copy nature of quantum gravity in terms of Yang-Mills theory~\cite{Bern:2019crd,Bern:2010ue,Cheung:2016say}.
For example, tree amplitudes of gravitons are expanded in terms of products of tree amplitudes of gluons and gravity is said to be the square of Yang-Mills.
\par
In general, the gauge theory of spin-2 gravitons is much more complicated than that of spin-1 particles.
In $D=4$, the negative mass dimension of the gravitational constant is in contrast to the dimensionless coupling constant of Yang-Mills theory.
Also, the Einstein field equations are highly non-linear which usually means that the Feynman rules of quantum gravity include vertices with an arbitrary number of gravitons.
This makes the Schwarzschild-Tangherlini\ metric very different from the Coulomb potential of quantum electrodynamics.
While the Coulomb potential is exact at tree level, the Schwarzschild-Tangherlini\ metric gets corrections from diagrams with an arbitrary number of loops.
This is an exciting fact about the classical limit of quantum field theory that, in general, diagrams with any number of loops contribute.
\par
In this thesis, the Feynman diagram expansion of the Schwarzschild-Tangherlini\ metric is analyzed from the quantum field theoretic approach to general relativity.
Such metric expansions were first analyzed by Duff in Ref.~\cite{Duff:1973zz} and have later been studied in Refs.~\cite{Bjerrum-Bohr:2018xdl,BjerrumBohr:2002ks,Galusha:cand,Cristofoli:2020hnk,Chung:2019yfs,Donoghue:2001qc,Jakobsen:2020ksu}.
In Refs.~\cite{Donoghue:2001qc,BjerrumBohr:2002ks} quantum corrections to the classical metric were considered, while the reduction of triangle n-loop integrals in the classical limit was discussed in Refs.~\cite{Bjerrum-Bohr:2018xdl,Galusha:cand}.
\par
An interesting aspect of the Schwarzschild-Tangherlini\ metric derived from amplitudes is its gauge/coordinate dependence.
In what coordinates is the Schwarzschild-Tangherlini\ metric when computed from amplitudes?
What kind of coordinate conditions is it possible to use in the gauge-fixed action?
We will use the quantum field theoretic point of view and instead of coordinates, we will speak of gauge choices and conditions.
The question is then how the quantum gauge-fixing procedure is related to gauge conditions in the classical limit.
To this aim it is advantageous to use the path integral method and Feynman rules where gauge dependence is explicit.
\par
As a whole, general relativity from quantum field theory is an exciting field which incorporates gravity into the modern description of quantum particles.
It has found importance in providing analytic tools for the analysis of gravitational waves and is of interest for the development of quantum gravity.
To this end, a better understanding of the relation of the Schwarzschild-Tangherlini\ metric to this framework as well as its gauge dependence is desirable.
\par
The thesis is structured as follows.
In Ch.~\ref{sec:Background} we briefly go through relevant aspects of classical general relativity and quantum field theory.
This serves as a starting point from which the two theories can be combined and also, importantly, we state several conventions on signs and notation in this chapter.
\par
The next chapters are concerned with the quantum field theoretic description of gravity.
In Ch.~\ref{sec:GaugeDependence} we analyze the gauge theory of spin-2 gravitons and how it is related to general covariance.
Then, in Ch.~\ref{sec:ExpansionsAround} we expand the objects of general relativity in the graviton field and in the gravitational constant around flat space-time.
In Ch.~\ref{sec:FeynmanRules} we use these results to derive Feynman rules for the graviton.
\par
We choose to work in covariant gauge (i.e. $R_\xi$-gauge) with the arbitrary covariant gauge parameter, $\xi$, and as gauge-fixing function, $G_\sigma$, we choose a family of gauge conditions depending on an arbitrary parameter $\alpha$.
The expansions in Ch.~\ref{sec:ExpansionsAround} are used for Feynman rules and also later in Ch.~\ref{sec:STM} to analyze the classical equations of motion.
We have not found a detailed treatment of quantum gravity in covariant gauge in the literature and the Feynman rules in Ch.~\ref{sec:FeynmanRules} present new results such as the graviton propagator in covariant \dDo-gauge.
Another interesting result of Ch.~\ref{sec:ExpansionsAround} and~\ref{sec:FeynmanRules} is an expression of the n-graviton vertex in terms of the Einstein tensor and an analogous tensor $H^\mn$.
\par
The later chapters are concerned with the derivation of the Schwarzschild-Tangherlini\ metric from amplitudes.
In Ch.~\ref{sec:STM}, general formulas relating the all-order metric from general relativity to amplitudes from quantum field theory are derived.
This includes a detailed analysis of the gauge-fixed classical equations of motion which are solved in a perturbative expansion analogous to a Feynman diagram expansion.
In particular, the Schwarzschild-Tangherlini\ metric is derived from the exact three-point function of a massive scalar interacting with a graviton.
Then, in Ch.~\ref{sec:PerturbativeExpansion2} we specialize to the one-loop diagram contribution to the metric.
This is the first correction to the metric due to graviton self-interaction and depends on the three-graviton vertex.
\par
Both chapters~\ref{sec:STM} and~\ref{sec:PerturbativeExpansion2} present exciting results.
The Feynman rules for the general n-graviton vertex derived in earlier chapters clearly shows how the Feynman diagram expansion of the three-point vertex function is related to the perturbative solution of the classical equations of motion.
The one-loop contribution to the metric is a very general result depending on the arbitrary dimension $D$ and the gauge parameter $\alpha$ from the gauge-fixing function $G_\sigma$.
This result is compared with the literature.
An interesting phenomenon occurs in the metric in $D=5$ where a logarithmic dependence on the radial coordinate appears.
Also, we present a classical derivation of the metric contribution which confirms the results from the amplitude computation.
\par
Finally, in Ch.~\ref{sec:Conclusion} we summarize the results of this thesis and suggest directions for further research.
\chapter{Background}
\labelx{sec:Background}
We briefly discuss the classical theory of general relativity and the path integral approach to quantum field theory.
We show how the action of general relativity can be included in the path integral in a minimal way.
Finally we consider how to interpret the classical limit $\hbar\rightarrow0$ of quantum field theory.
\par
We work with the mostly negative metric and with units ($c=\hbar=1$).
We will use the comma-notation for partial derivatives.
\section{General Relativity}
\labelx{sec:GeneralRelativity}
In the traditional approach to general relativity, gravity is described by the metric tensor which measures the geometry of space-time.
Physical equations are required to be general covariant, that is, invariant under general coordinate transformations.
To this end, tensor fields are introduced which obey definite transformation laws.
In the modern field theoretic approach, it is recognized that general covariance can be thought of as the gauge symmetry of spin-2 gravitons.
In this section we will follow the traditional approach and later in Ch.~\ref{sec:GaugeDependence} we will discuss the description of general relativity in terms of spin-2 gravitons.
\par
We have found the treatments of classical general relativity of Weinberg~\cite{Weinberg:1972kfs} and Dirac~\cite{Dirac:GR} useful.
In particular, we use the same conventions as Dirac~\cite{Dirac:GR} which also coincide with the conventions of Refs.~\cite{Donoghue:1995cz,Bjerrum-Bohr:cand,BjerrumBohr:2002ks}.
These conventions include the mostly negative metric and how the Ricci tensor is defined in terms of the curvature tensor.
We work all the time in arbitrary space-time dimensions $D$.
Investigations in gravity in arbitrary space-time dimensions can be found in Refs.~\cite{Cristofoli:2020uzm,Emparan:2008eg,Collado:2018isu,Cristofoli:2020hnk}.
\par
The strength of gravitational interactions is described by the gravitational constant, $G_N$.
As mentioned in the introduction, in $D=4$ the gravitational constant is dimensionful in contrast to e.g. Yukawa and Yang-Mills couplings.
In general the mass dimension of $G_N$ is:
\begin{align}
[G_N] = [\text{mass}]^{-(D-2)}
\ .
\labelt{nn21}
\end{align}
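A quick way to see this is to demand that the \EHA\ term of the action, $\frac{1}{16\pi G_N}\int d^Dx \sqrt{-g}\, R$, be dimensionless:
\begin{align}
[d^Dx] = [\text{mass}]^{-D}
\ ,\qquad
[R] = [\text{mass}]^{2}
\qquad\Rightarrow\qquad
[G_N] = [\text{mass}]^{2-D}
\ .
\end{align}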
In $D=2$ and $D=3$ general relativity behaves very differently from $D\geq4$.
In this thesis we will only consider $D\geq4$ and by arbitrary dimensions $D$ we always assume $D\geq4$.
Instead of working with $G_N$, we will often use $\kappa$ where:
\begin{equation}
\kappa^2 = 32 \pi G_N
\ .
\labelt{nn22}
\end{equation}
This definition of $\kappa$ agrees with Refs.~\cite{Donoghue:1995cz,Bjerrum-Bohr:cand,BjerrumBohr:2002ks}.
\par
The curvature tensor is:
\begin{equation}
R_{\mn\rs}
=
\Gamma_{\mn\sigma,\rho} - \Gamma_{\mn\rho,\sigma}
-\Gamma_{\beta\mu\rho}\Gamma^\beta_{\nu\sigma} + \Gamma_{\beta\mu\sigma}\Gamma^\beta_{\nu\rho}
\ .
\labelt{nn23}
\end{equation}
We use the comma-notation to denote partial derivatives so that e.g. $g_{\mn,\rho}=\partial_\rho g_\mn$.
\par
The Christoffel symbols are
\begin{equation}
\Gamma_{\rho\mn}
=
\frac{1}{2}
\big(
g_{\rho\mu,\nu} + g_{\rho\nu,\mu} - g_{\mn,\rho}
\big)
\ ,
\labelt{nn24}
\end{equation}
and $\Gamma^\rho_{\mn}=g^\rs \Gamma_{\sigma\mn}$.
\par
From the curvature tensor we get the Ricci tensor, $R_\mn$, and scalar, $R$:
\begin{align}
&R_\mn = g^\rs R_{\rho\mn\sigma}
\ ,
\labelt{nn25}
\\
&R = g^\mn R_\mn
\ .
\labelt{nn26}
\end{align}
The Einstein-Hilbert action can be defined in terms of the Ricci scalar,
\begin{equation}
S_{EH} =
\int \dDx
\sqrt{-g}
\ R
\ ,
\labelt{nn27}
\end{equation}
where $g$ is the determinant of $g_\mn$.
If we have a Lagrangian, $\mathcal{L}_\phi$, describing matter in special relativity, we can add a matter term to the Einstein-Hilbert action,
\begin{align}
S_\phi = \int \dDx \sqrt{-g} \
\mathcal{L}_\phi
\ ,
\labelt{nn28}
\end{align}
where now contractions in $\mathcal{L}_\phi$ should be made with $g_\mn$ and $g^\mn$.
In our case, matter will be described by massive scalar fields and hence:
\begin{align}
\mathcal{L}_\phi
=
\frac{1}{2}
\Big(
g^\mn \phi_{,\mu} \phi_{,\nu}
-m^2\phi^2
\Big)
\ .
\end{align}
The Einstein field equations can be derived from the variational principle $\delta S_\tecl=0$ where:
\begin{align}
S_\tecl = \frac{2}{\kappa^2} S_{EH} + S_\phi
\ .
\labelt{nn29}
\end{align}
The subscript on $S_\tecl$ can be read as ``classical''.
\par
The Einstein field equations are
\begin{equation}
G^\mn = -\frac{\kappa^2}{4} T^\mn
\ ,
\labelt{n210}
\end{equation}
where $G^\mn$ is the Einstein tensor
\begin{equation}
G^\mn = R^\mn - \frac{1}{2} R\ g^\mn
\ ,
\labelt{n211}
\end{equation}
and $T^\mn$ is the energy-momentum tensor of matter.
\par
The Einstein tensor appears when we vary the Einstein-Hilbert action:
\begin{equation}
\delta S_{EH}
=
-
\int \dDx
\sqrt{-g}
\ G^\mn \delta g_\mn
\ .
\labelt{ein1}
\end{equation}
It obeys
\begin{equation}
D_\mu G^\mn = 0
\labelt{n213}
\end{equation}
where $D_\mu$ is the covariant derivative.
\par
Similarly $T^\mn$ appears when we vary the matter action
\begin{equation}
\delta S_\phi = -\frac{1}{2}
\int \dDx \sqg
\ T^\mn \delta g_\mn
\ ,
\labelt{n214}
\end{equation}
and it, too, obeys $D_\mu T^\mn=0$.
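For the scalar field considered here, carrying out this variation gives the standard result
\begin{align}
T^\mn
=
\phi^{,\mu} \phi^{,\nu}
- \frac{1}{2} g^\mn
\Big(
g^\rs \phi_{,\rho} \phi_{,\sigma}
-m^2\phi^2
\Big)
\ ,
\end{align}
where $\phi^{,\mu}=g^{\mu\nu}\phi_{,\nu}$.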
\par
In $D=4$ the well-known Schwarzschild metric describes the gravitational field of an inertial, non-spinning point particle.
In arbitrary space-time dimensions it is generalized to the Schwarzschild-Tangherlini\ metric, which in spherical coordinates is given by:
\begin{align}
d\tau^2 =
(1-\frac{\mu}{r^\dmt}) dt^2
-\frac{1}{1-\frac{\mu}{r^\dmt}} dr^2
- r^2 d\Omega^2_{D-2}
\labelt{n215}
\end{align}
Here, $n=D-3$ and $\mu$ is the Schwarzschild-Tangherlini\ parameter:
\begin{align}
\mu = \frac{16 \pi G_N M}{(D-2) \Omega_{D-2}}
\labelt{mud1}
\end{align}
In this equation $\Omega_{d}$ is the surface area of the unit sphere in $(d+1)$-dimensional space and is given explicitly by:
\begin{align}
\Omega_{d} = \frac{2\sqrt{\pi}^{d+1}}{\Gamma((d+1)/2)}
\ .
\labelt{n217}
\end{align}
The Schwarzschild-Tangherlini\ metric in Eq.~\eqreft{n215} can be found in e.g. \cite{Emparan:2008eg}.
It solves the Einstein field equations in vacuum with a point particle singularity at the center.
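As a quick check, in $D=4$ we have $\Omega_{2}=4\pi$, so Eq.~\eqreft{mud1} gives
\begin{align}
\mu = \frac{16 \pi G_N M}{2\cdot 4\pi} = 2 G_N M
\ ,
\end{align}
and Eq.~\eqreft{n215} reduces to the familiar Schwarzschild metric with $1-\frac{2G_N M}{r}$ in the time-time component.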
\section{Quantum Field Theory}
\labelx{sec:QuantumField}
Quantum field theory unites special relativity and quantum mechanics.
The Standard Model of particle physics is formulated in its framework.
Here, non-abelian gauge theories play a dominant role.
The treatments of quantum field theory of Srednicki~\cite{Srednicki:2007qs} and of Schwartz~\cite{Schwartz:2013pla} have been useful.
In particular, we have used the same approach to path integral quantization as in Srednicki.
\par
Naively, we will treat the gravitational field, $g_\mn$, as any other quantum field and use the \EHA\ action minimally coupled to a scalar field in the path integral.
As already mentioned, this leads to a non-renormalizable quantum theory.
However, in the classical, low-energy limit we can ignore the divergent terms.
\par
Later, we will expand the action in the ``graviton field'' $h_\mn=g_\mn-\eta_\mn$.
This field, $h_\mn$, will describe spin-2 gravitons.
The general covariance of the gravitational action translates into gauge theory of the spin-2 particles.
As with spin-1 particles, it is necessary to ``fix a gauge'' in the path integral.
We will briefly indicate how this is done with the Faddeev-Popov method following Srednicki~\cite{Srednicki:2007qs}.
\par
Our action is that of Eq.~\eqreft{nn29} which is:
\begin{equation}
S_c = \int \dDx \sqrt{-g}
\bigg(
\frac{2R}{\kappa^2} \
+\
\frac{1}{2}(g^{\mu \nu}
\partial_\mu \phi \partial_\nu \phi - m^2 \phi^2)
\bigg)
\ .
\labelt{act1}
\end{equation}
The partition function is then given by:
\begin{equation}
Z_\omega = \int \mathcal{D} g_{\mu \nu} \ \mathcal{D} \phi \ \det(\frac{\delta G}{\delta \epsilon}) \
\delta ( G_\sigma - \omega_\sigma ) \ e^{iS_c}
\ .
\labelt{n219}
\end{equation}
Here we have used the Faddeev-Popov gauge-fixing procedure.
The $\delta$-function picks out a specific gauge-choice so that the path integral extends only over independent field configurations.
The gauge-fixing function $G_\sigma$ breaks the general covariance of the Einstein-Hilbert action.
The Jacobian determinant is expanded by introducing ghosts.
The arbitrary field $\omega_\sigma$ on which $Z_\omega$ depends is integrated out with a Gaussian weight function.
\par
This gives the final expression for the partition function $Z$ in covariant gauge with gauge-fixing function $G_\sigma$:
\begin{align}
Z &= \int
\mathcal{D} \omega_\sigma \ Z_\omega \
\exp(i\int \dDx \ \frac{1}{\kappa^2 \xi} \eta^{\sigma \rho} \omega_\sigma \omega_\rho)
\labelt{n220}
\\
&= \int \mathcal{D} g_{\mu \nu} \ \mathcal{D} \phi \ \mathcal{D}c \mathcal{D}\bar{c} \
e^{iS_\tecl + i\frac{2}{\kappa^2}S_\tegf + iS_\tegh}
\ .
\nonumber
\end{align}
There are three types of fields.
The gravitational field $g_\mn$, the scalar field $\phi$ and the ghost fields $c$ and $\bar c$.
\par
In addition to $S_\tecl$ two new terms appear in the action.
The gauge-fixing term, $S_\tegf$, which comes from the Gaussian weight function and the ghost term $S_\tegh$ which comes from the expansion of the Jacobian determinant.
In this work, we will mainly be concerned with the classical limit of quantum gravity and in this limit we can ignore the ghosts.
The gauge-fixing term, $S_\tegf$, is given by:
\begin{equation}
S_{gf} =
\frac{1}{2 \xi}
\int \dDx\
\eta^{\sigma \rho} G_\sigma G_\rho
\ .
\labelt{gau1}
\end{equation}
Neglecting the ghost term, we get a final expression for the gauge-fixed action:
\begin{align}
S &= S_\tecl + \frac{2}{\kappa^2} S_\tegf
\labelt{act2}
\\
&= \frac{2}{\kappa^2}
\big(
S_\teEH + S_\tegf
\big)
+S_\phi
\ .
\nonumber
\end{align}
This action is relevant for the classical limit of quantum gravity.
We get the Feynman rules after we expand the action in $h_\mn$.
It is, however, more convenient to use the normalization $\hka_\mn=\frac{1}{\kappa}h_\mn$, which clearly shows at which order in $G_N$ the different vertices contribute.
This will be relevant in Ch.~\ref{sec:FeynmanRules} when the Feynman rules are derived.
\par
Let us briefly discuss how the Feynman rules are derived from a given action.
The quadratic terms of the action give rise to propagators.
There will be a second derivative operator between the two fields, which when inverted gives the corresponding propagator.
This is most easily done in momentum space, which we will also use in this work.
\par
The vertex rules come from terms with more than two fields.
In our case we will have two types of vertices, namely a scalar meeting an arbitrary number of gravitons or an arbitrary number of gravitons meeting in a single self-interaction vertex.
We will denote them $\phi^2 h^n$ and $h^n$.
For the Schwarzschild-Tangherlini\ metric computation, the self-interaction vertices are essential.
\par
To derive the vertex rule in momentum space we transform our fields:
\begin{subequations}
\label{n223}
\begin{align}
&\tilde \phi(q)
=
\int \dDx e^{iqx} \phi
\ ,
\labelt{n223a}
\\
&\tilde \hka(q)
=
\int \dDx e^{iqx} \hka
\ .
\labelt{n223b}
\end{align}
\end{subequations}
When we change to momentum space, the space integration removes the exponential factors and introduces a $\delta$-function which describes conservation of momentum.
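With these conventions, each derivative acting on a field simply turns into a factor of $-i$ times the corresponding momentum,
\begin{align}
\phi_{,\rho}(x)
=
\int \dDp{p}
\
\big(-ip_\rho\big)
\
e^{-ipx}
\
\tilde \phi(p)
\ ,
\end{align}
and the product of two such factors is the origin of the overall minus sign in the example below.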
A term in the action which results in a $\phi^2 h^2$ vertex would look like:
\begin{align}
&\kappa^2
\int \dDx\
\eta^{\mu\alpha}\eta^{\nu\beta} \hka_\mn \hka_\ab \eta^\rs \phi_{,\rho} \phi_{,\sigma}
=
-\kappa^2
\int
\dDp{l_\teon}\dDp{l_\tetw}\dDp{p}\dDp{q}
\labelt{exa2}
\\
&\qquad\qquad\qquad\qquad\times
(2\pi)^D \delta^D\big(l_\teon+l_\tetw+p+q\big)
\eta^{\mu\alpha}\eta^{\nu\beta}
\tilde \hka_\mn(l_1) \tilde \hka_\ab(l_2)
\tilde \phi(p) \tilde \phi(q)
\eta^\rs p_\sigma q_\rho
\nonumber{}
\end{align}
To get the corresponding vertex rule we would take the integrand without the $\delta$-function.
We would then remove the fields and multiply by $i$ and $2!2!$.
The two factorial factors come from the two factors of $h$ and the two factors of $\phi$.
For a $h^5$ vertex it would be $5!$ and for a $\phi^2h^3$ it would be $2!3!$.
When we remove the fields, we have to make sure that the leftover vertex is symmetric in the fields.
In the case of Eq.~\eqreft{exa2} this is easily done.
Thus, from the example in Eq.~\eqreft{exa2} we get the following vertex rule, $V^{\mn\ \ab}$:
\begin{align}
V^{\mn\ \ab}
=
-i4\kappa^2 \frac{1}{2}\big(\eta^{\mu\alpha}\eta^{\nu\beta} + \eta^{\mu\beta}\eta^{\nu\alpha}\big) p_\sigma q^\sigma
\labelt{n225}
\end{align}
Here, $\mn$ and $\ab$ are graviton indices of the two graviton lines and $p$ and $q$ are the incoming (or outgoing) momenta of the two scalar lines.
\subsection{The Classical Limit}
\labelx{Sec:TheClassical}
The classical limit of quantum field theory can be defined as the limit where $\hbar\rightarrow0$.
The conventional interpretation of this limit is that even slight variations from classical field configurations makes the integrand oscillate greatly so that only configurations where $\delta S=0$ are significant.
The equations $\delta S=0$ are the classical equations of motion.
The classical limit of quantum field theory is analyzed in detail in Ref.~\cite{Kosower:2018adc} and for the particular case of gravity in Refs.~\cite{Bjerrum-Bohr:2018xdl,Cheung:2020gyp,Bjerrum-Bohr:2019kec}.
\par
In general, Feynman diagrams with any number of loops still contribute in the classical limit.
This is so due to several cancellations of $\hbar$.
An important distinction in classical physics is that between waves and particles which in the quantum theory is blurred.
Thus, in quantum theory, wavenumbers, $l^\mu$, and particle momenta, $p^\mu$, are related through the formula:
\begin{align}
p^\mu = \hbar l^\mu
\ .
\labelt{me31}
\end{align}
If we wish to introduce $\hbar$ explicitly as a dimensionful quantity in a Feynman diagram we should consider, for each (quantum) momentum, whether it belongs to a particle or wave in the classical limit.
If we assume that, by default, any momentum variable in the Feynman diagram represents a particle momentum, then a change to wavenumber introduces a factor $\hbar$.
The importance of this distinction is that in the classical limit a particle has a finite particle momentum while a wave has a finite wavenumber.
For a given quantum particle the question is whether $p^\mu$ or $l^\mu$ in Eq.~\eqreft{me31} should stay finite.
\par
We will work with units $\hbar=1$ which seems contradictory to the limit $\hbar\rightarrow0$.
In this setup, the classical limit rather means that the action $S$ is much larger than $\hbar$, that is much larger than unity.
After we have extracted results in the classical limit, they should be independent of $\hbar$ and we can forget about the initial choice $\hbar=1$.
If we assume that momenta are by default ``particle momenta'', we see from Eq.~\eqreft{me31} that the momentum of classical particles should stay finite while the momentum of classical waves should be sent to zero.
Thus, if $q^\mu$ is the momentum of a wave-like particle in a Feynman diagram, then $q^\mu=\hbar l^\mu$ where $l^\mu$ is the wavenumber which should stay finite.
Then $q^\mu$ must be small in comparison.
\par
In our case, we have two types of particles, massive scalars and gravitons.
The massive scalars are interpreted as point particles and we let their momenta stay finite.
On the other hand, gravitons behave like waves in the classical theory and their momenta are sent to zero.
These conclusions apply both to external momenta and internal loop momenta.
In our case the classical limit is to some extent equivalent to a long-range limit.
Thus, from the theory of Fourier transforms we learn that small wavenumbers in momentum space are related to long distances in position space.
A rigorous discussion of these conclusions is found in Ref.~\cite{Kosower:2018adc}.
\par
In the classical limit massive scalars can be interpreted as point particles.
Also, we can think of these as an effective description of larger extended objects.
In Ref.~\cite{Goldberger:2004jt} finite size effects are described by including non-minimal terms in the action.
\par
In our work we will explore the Schwarzschild-Tangherlini\ metric which is generated by a single particle.
This should be compared to the Coulomb potential from electrodynamics.
While the Coulomb potential is exact at tree-level the Schwarzschild-Tangherlini\ metric gets corrections from diagrams with an arbitrary number of loops.
This is a consequence of the fact that gravitons interact with themselves and makes the investigation of the Schwarzschild-Tangherlini\ metric interesting.
\chapter{Gauge Theory of Gravity in Quantum Field Theory}
\labelx{sec:GaugeDependence}
Gauge symmetry is an important concept in modern physics.
Successful gauge theories are Yang-Mills theory, quantum chromodynamics and electroweak theory.
It is an exciting idea, that general covariance in general relativity can be considered as gauge symmetry of spin-2 particles.
A related insight of modern physics is that of describing gravity as a double copy of Yang-Mills theory~\cite{Bern:2019crd,Bern:2010ue,Cheung:2016say}.
\par
In Sec.~\ref{sec:GeneralRelativity2}, we discuss the description of gravity in terms of spin-2 gravitons on a flat background.
Then, in Sec.~\ref{sec:GaugeTransformations} we derive the transformation properties of the graviton field under gauge transformations.
Finally, in Sec.~\ref{sec:GaugeFixing} we discuss the freedom in choosing a parameterization for the graviton field as well as gauge-fixing functions.
\section{General Relativity in Lorentz Covariant Quantum Field Theory}
\labelx{sec:GeneralRelativity2}
In our approach, the action of general relativity is expanded around flat space-time and the dynamical field is chosen to be this perturbation, $h_\mn$.
This makes the graviton field, $h_\mn$, look very similar to any other quantum field on a flat space-time.
In the long-range limit of gravity, where space-time is approximately flat, it is possible to change completely to a quantum field theoretic point of view.
We then describe spin-2 particles and their gauge symmetry on a flat space-time instead of general covariant objects on an arbitrary space-time.
From this point of view, general coordinate transformations are instead interpreted as gauge transformations and choosing a coordinate system is translated to ``fixing the gauge''.
This is e.g. the point of view developed in~\cite{Feynman:1996kb}.
\par
In general relativity, general covariant tensors are often taken as fundamental quantities.
Instead, we will mostly focus on Lorentz covariant tensors.
Thus, when speaking of tensors, we will mostly mean Lorentz covariant tensors.
Such tensors are defined with respect to the nearly flat space-time far away from matter.
The indices on Lorentz covariant tensors are raised and lowered with the flat space metric.
Thus, the indices on most tensors in this work are raised and lowered with $\eta^\mn$ and $\eta_\mn$.
\par
We will often need to change between position and momentum space.
For this purpose the conventions of relativistic quantum field theory will be used.
Generally, we will use a tilde to denote objects in momentum space.
For example, we have the graviton field in position space $h_\mn(x)$ and in momentum space $\tilde h_\mn (q)$ and their relation is:
\begin{subequations}
\label{nn31}
\begin{align}
&h_\mn(x)
=
\int \dDp{q}
\
e^{-iqx}
\
\tilde h_\mn(q)
\labelt{nn31a}
\\
&\tilde h_\mn(q)
=
\int \dDx
\
e^{iqx}
\
h_\mn(x)
\labelt{nn31b}
\end{align}
\end{subequations}
These transformations are then meant, when we speak of momentum or position space, or Fourier transforms.
Often, we will leave out the explicit dependence on $x$ or $q$ and simply write $h_\mn$ or $\tilde h_\mn$ when we feel no confusion can arise.
\par
Later, in Ch.~\ref{sec:STM} and Ch.~\ref{sec:PerturbativeExpansion2} we will focus in great detail on the Schwarzschild-Tangherlini\ metric.
There, we will keep all our equations Lorentz covariant which is natural when working with amplitudes and Feynman diagrams.
Here, we will develop a notation which makes these expressions clear from a physical, and mathematical, point of view.
\par
The Schwarzschild-Tangherlini\ metric describes the gravitational field of an inertial point particle.
This particle will have momentum which we take as $k^\mu$ and a mass $m = \sqrt{k^2}$.
We then introduce the following projection operators:
\begin{subequations}
\label{nn32}
\begin{align}
&\etat{\mn}=\frac{k_\mu k_\nu}{m^2}
\ ,
\labelt{nn32a}
\\
&\etar{\mn}=\eta_\mn-\frac{k_\mu k_\nu}{m^2}
\ .
\labelt{nn32b}
\end{align}
\end{subequations}
These are projection operators, i.e. their sum is $\eta_\mn$, they are both idempotent, and they are orthogonal to each other.
The tensor $\etat{\mn}$ projects tensors parallel to $k^\mu$ and $\etar{\mn}$ orthogonal to $k^\mu$.
Alternatively, $\etat{\mn}$ projects tensors along the worldline of the particle, while $\etar{\mn}$ projects tensors into the orthogonal space.
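As a quick check, using $k^\mu k_\mu=m^2$:
\begin{align}
{\eta_\prl}^{\mu\nu} {\eta_\prl}_{\nu\rho}
=
\frac{k^\mu k^\nu}{m^2}
\frac{k_\nu k_\rho}{m^2}
=
\frac{k^\mu k_\rho}{m^2}
\ ,
\qquad
{\eta_\prl}^{\mu\nu} {\eta_\bot}_{\nu\rho}
=
\frac{k^\mu k^\nu}{m^2}
\Big(
\eta_{\nu\rho} - \frac{k_\nu k_\rho}{m^2}
\Big)
=
0
\ .
\end{align}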
We will then use similar symbols on tensors to denote projection.
For example, if we have a vector $q^\mu$ we will write:
\begin{subequations}
\labelt{nn33}
\begin{align}
&q^\mu_\prl = {\eta_\prl}^\mu_\nu q^\nu
\ ,
\labelt{nn33a}
\\
&q^\mu_\bot = {\eta_\bot}^\mu_\nu q^\nu
\ .
\labelt{nn33b}
\end{align}
\end{subequations}
Note that $q^\mu_\prl$ is time-like so that $q^\mu_\prl q_\mu^\prl>0$ and in some sense, it is 1-dimensional.
Similarly $q^\mu_\bot$ is space-like so that $q^\mu_\bot q_\mu^\bot<0$ and in some sense, it is $(D-1)$-dimensional.
We will define the short-hand:
\begin{align}
q_\prl = \sqrt{q^\mu_\prl q_\mu^\prl}
\ .
\labelt{nn34}
\end{align}
Then $k_\prl=m$ and $k_\mu q^\mu = m q_\prl$.
\par
It is particularly simple to work in the inertial frame of $k^\mu$.
Here $\eta_\prl^\mn$ and $\eta_\bot^\mn$ are both diagonal and represent the time and space components of $\eta^\mn$ respectively.
Then $\eta_\prl^{00}=1$ and zero otherwise and $\eta_\bot^{ij} = -\delta^{ij}$ and zero otherwise.
Also, $q_\bot^\mu$ is the space components of $q^\mu$ and $q_\prl=q^0$.
Hence, in this special frame, $q_\bot^2 = - \absvec{q}^2$.
\section{Gauge Transformations of the Graviton Field}
\labelx{sec:GaugeTransformations}
In this section we analyze gauge transformations of $h_\mn$.
In general, all fields transform under gravitational gauge transformations.
This is clear from the traditional point of view, since a gravitational gauge transformation, that is a coordinate transformation, changes the functional dependence of every field on the coordinates.
On the other hand, it is sometimes argued that this does not constitute a real change, so that a scalar field is left unchanged under a coordinate transformation.
\par
In this work, the gauge transformations of the fields will not play an essential role.
However, since we have not found detailed accounts of the gauge transformations of $h_\mn$ in other sources, we will include this brief discussion on its gauge transformations.
On the other hand, the formulas for gauge transformations in linearized gravity are well known.
We will see, that these formulas are consequences of the more general equations in this section.
We will use part of these results later in Sec.~\ref{sec:AppearanceOf}.
\par
We derive the formulas from the point of view, that gravitons are spin-2 fields defined on a flat space-time background.
As mentioned, this point of view only works at long distances, when the space-time is approximately flat.
\par
We start with formulas from the traditional framework of general relativity and rewrite them in terms of $h_\mn$.
Under a general transformation of coordinates to $\hat x^\mu(x)$ we have:
\begin{align}
\hat g_\mn (\hat x)
= g_\ab (x)
\frac{\partial x^\alpha}{\partial \hat x^\mu}
\frac{\partial x^\beta}{\partial \hat x^\nu}
\ .
\labelt{gtr1}
\end{align}
Choose a coordinate transformation according to:
\begin{align}
x^\mu = \hat x^\mu + \hat \epsilon^\mu(\hat x)
\labelt{rel1}
\ .
\end{align}
Let us also write the transformation in another symmetric way:
\begin{align}
\hat x^\mu = x^\mu + \epsilon^\mu(x)
\labelt{rel3}
\ .
\end{align}
Thus $\epsilon^\mu(x)$ relates the new coordinates to the old ones as a function of the old coordinates.
In contrast $\hat \epsilon^\mu(\hat x)$ relates the new coordinates to the old ones as a function of the new coordinates.
Often it would be most natural to use $\epsilon^\mu(x)$ to define the new coordinates.
The analysis is, however, simpler when we use $\hat \epsilon(\hat x)$.
They obey the equation
\begin{align}
\epsilon(x)
+ \hat\epsilon(\hat x) = 0
\ ,
\end{align}
from which they can be related to each other by using Eqs.~\eqreft{rel1} and~\eqreft{rel3} and using Taylor expansions.
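For instance, to second order in the transformation parameter,
\begin{align}
\hat\epsilon^\mu(\hat x)
=
-\epsilon^\mu(x)
=
-\epsilon^\mu(\hat x)
+\epsilon^\nu(\hat x)\, \partial_\nu \epsilon^\mu(\hat x)
+\ .\ .\ .
\end{align}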
\par
Let us insert the coordinate transformation Eq.~\eqreft{rel1} into the formula for transforming $g_\mn$ Eq.~\eqreft{gtr1}.
First, we compute the partial derivative of $x^\alpha$:
\begin{align}
\frac{\partial x^\alpha}{\partial \hat x^\mu}
= \delta^\alpha_\mu
+\frac{\partial \hat \epsilon^\alpha(\hat x)}{\partial \hat x^\mu}
\end{align}
We insert this into Eq.~\eqreft{gtr1}:
\begin{align}
\hat g_\mn (\hat x)
&= g_\mn (x)
+ g_{\alpha\nu} (x)
\frac{\hat\partial \hat \epsilon^\alpha(\hat x)}{\hat\partial \hat x^\mu}
+ g_{\mu\beta} (x)
\frac{\hat\partial \hat \epsilon^\beta(\hat x)}{\hat\partial \hat x^\nu}
+ g_{\ab} (x)
\frac{\hat\partial \hat \epsilon^\alpha(\hat x)}{\hat\partial \hat x^\mu}
\frac{\hat\partial \hat \epsilon^\beta(\hat x)}{\hat\partial \hat x^\nu}
\labelt{gmn1}
\\
&=
g_\ab(x)
\Big(
\delta^\alpha_\mu \delta^\beta_\nu
+
\frac{\hat\partial \hat \epsilon^\alpha(\hat x)}{\hat\partial \hat x^\mu}
\delta^\beta_\nu
+
\delta^\alpha_\mu
\frac{\hat\partial \hat \epsilon^\beta(\hat x)}{\hat\partial \hat x^\nu}
+
\frac{\hat\partial \hat \epsilon^\alpha(\hat x)}{\hat\partial \hat x^\mu}
\frac{\hat\partial \hat \epsilon^\beta(\hat x)}{\hat\partial \hat x^\nu}
\Big)
\nonumber{}
\end{align}
The fields are evaluated at different coordinates, either $x$ or $\hat x$.
We want all fields to be evaluated at the same coordinate and we choose $\hat x$.
The only occurrence of $x$ is in $g_\mn(x)$.
We use Eq.~\eqref{eqn:rel1} to relate $x$ to $\hat x$ in $g_\ab(x)$ and make a Taylor expansion:
\begin{subequations}
\label{n312}
\begin{align}
g_\mn(x)
&= g_\mn(\hat x + \hat \epsilon(\hat x))
\labelt{n312a}
\\
&=
\sum_{\nzi}\oov{n!}
\hat \epsilon^\sigma(\hat x)
\ .\ .\ .\ \hat\partial_\sigma\ .\ .\ .\ g_\mn(\hat x)
\labelt{n312b}
\\
&=
g_\mn(\hat x)
+\hat \epsilon^\sigma(\hat x) \hat\partial_\sigma g_\mn(\hat x)
+\oov{2}\hat \epsilon^\sigma(\hat x) \hat \epsilon^\rho(\hat x)
\hat\partial_\sigma \hat\partial_\rho
g_\mn(\hat x)
+\ .\ .\ .
\labelt{n312c}
\\
&=
\sum_{n=0..\infty}
\oov{n!}
\Big(
\hat \epsilon^\sigma(\hat x) \hat\partial_\sigma^{(g)}
\Big)^n
g_\mn(\hat x)
\ .
\labelt{n312d}
\end{align}
\end{subequations}
This is a complicated formula, and it is not easily written without developing some notation.
In Eq.~\eqreft{n312c} the expansion is written out explicitly and in Eq.~\eqreft{n312d} the superscript on $\partial^{(g)}_\sigma$ means that the partial derivative only hits $g_\mn$ and ignores any $\hat \epsilon$.
Now that both sides of equation~\eqref{eqn:gmn1} can be expressed in terms of the same coordinate, we suppress the dependence on coordinates:
\begin{align}
\hat g_\mn =
\Big(
\delta^\alpha_\mu \delta^\beta_\nu
+
\partial_\mu \hat\epsilon^\alpha
\delta^\beta_\nu
+
\delta^\alpha_\mu
\partial_\nu \hat\epsilon^\beta
+
\partial_\mu \hat\epsilon^\alpha
\partial_\nu \hat\epsilon^\beta
\Big)
\sum_{n=0..\infty}
\oov{n!}
\Big(
\hat \epsilon^\sigma \partial_\sigma^{(g)}
\Big)^n
g_\ab
\labelt{gmn2}
\end{align}
We can insert the definitions of $g_\mn$ in terms of $h_\mn$ to arrive at:
\begin{align}
\hspace*{-0.3cm}
\hat h_\mn =
\partial_\mu \hat \epsilon_\nu
+
\partial_\nu \hat \epsilon_\mu
+
\partial_\mu \hat \epsilon_\alpha
\partial_\nu \hat \epsilon^\alpha
+
\Big(
\delta^\alpha_\mu \delta^\beta_\nu
+
\partial_\mu \hat\epsilon^\alpha
\delta^\beta_\nu
+
\delta^\alpha_\mu
\partial_\nu \hat\epsilon^\beta
+
\partial_\mu \hat\epsilon^\alpha
\partial_\nu \hat\epsilon^\beta
\Big)
\sum_{n=0..\infty}
\oov{n!}
\Big(
\hat \epsilon^\sigma \partial_\sigma^{(g)}
\Big)^n
h_\ab
\labelt{gmn3}
\end{align}
This is the transformation law of the graviton field under a transformation with gauge parameter $\epsilon^\mu$.
For example, we can get the well-known linear transformations of linearized gravity if we assume $\hat \epsilon$ and $h_\mn$ to be small of the same order:
\begin{align}
\hat h_\mn \approx
h_\mn
+
\partial_\nu\hat \epsilon_\mu
+
\partial_\mu\hat \epsilon_\nu
\ .
\end{align}
This equation is reminiscent of the gauge transformations of the vector potential in electrodynamics.
\section{Gauge-Fixing Functions and Coordinate Conditions}
\labelx{sec:GaugeFixing}
In this work we use covariant gauge with an arbitrary covariant parameter $\xi$.
This results in the gauge-fixed action from Eq.~\eqreft{act2}:
\begin{align}
S =
\int \dDx \sqrt{-g}
\bigg(
\frac{2R}{\kappa^2} \
+\
\mathcal{L}_\phi
\bigg)
+
\int \dDx
\frac{1}{\kappa^2 \xi} \eta^{\sigma \rho} G_\sigma G_\rho
\ .
\labelt{act3}
\end{align}
What is the classical limit of this action?
In Sec.~\ref{sec:GaugeFixed} we will analyze the classical equations of motion of this action in detail.
The result is that the classical limit of this action is general relativity described by the Einstein field equations together with the coordinate condition $G_\sigma=0$.
\par
In this section we will discuss possible choices of $G_\sigma$ and alternative parameterizations of the graviton field $h_\mn$.
Let us first mention some coordinate choices from the traditional approach to general relativity.
In the study of black holes spherical or cylindrical-type coordinates are often used.
However, these are not well suited for expansions around flat space-time.
Another well-known coordinate condition is harmonic gauge:
\begin{align}
g^\mn \Gamma_\mn^\sigma = 0
\labelt{har1}
\end{align}
The coordinates in this gauge are Cartesian-like and well suited for expansions around flat space-time.
The linearized version of the harmonic gauge condition is familiar from linearized gravity:
\begin{align}
\partial_\mu (h^\mu_\sigma - \frac{1}{2} \eta^\mu_\sigma h^\nu_\nu) = 0
\labelt{ddo1}
\end{align}
However, it is rarely used as an exact coordinate-condition and we do not know of any exact metrics in the linear gauge of Eq.~\eqreft{ddo1}.
\par
The study of gravity from the quantum field theoretic point of view has initiated new investigations into the gauge theory of gravity.
For example, in Refs.~\cite{Cheung:2020gyp,Cheung:2016say} very general choices of gauge functions and parameterizations of the graviton field are studied.
Instead of
\begin{equation}
g_\mn = \eta_\mn + h_\mn
\ ,
\labelt{par2}
\end{equation}
we can use non-linear parameterizations such as:
\begin{equation}
g_\mn =
e^{\pi_\mn}
\ .
\labelt{par3}
\end{equation}
In this equation the exponential function should be evaluated as though $\pi$ is a matrix and contractions should be made with $\eta_\mn$.
With this parameterization, the inverse metric is simply:
\begin{align}
g^\mn = e^{-\pi^\mn}
\ .
\labelt{par4}
\end{align}
Other choices are:
\begin{align}
\sqrt{-g} g_\mn = \eta_\mn + h'_\mn
\labelt{par5}
\ .
\end{align}
In Ref.~\cite{Cheung:2020gyp}, the most general parameterization to second order in $h_\mn$ is considered.
However, in our work we will only consider the simple parameterization of Eq.~\eqreft{par2}.
\par
As with the choice of parameterization, there is a large freedom in the choice of the gauge-fixing function.
It would be interesting to investigate this freedom from the perspective of traditional general relativity.
In our work we will use the following \gff:
\begin{equation}
G_\sigma =
(1-\alpha) \ \partial_\mu (h^\mu_\sigma - \frac{1}{2} \eta^\mu_\sigma h_\nu^\nu)
+ \alpha\ g^\mn \Gamma_{\sigma\mn}
\ .
\labelt{gau3}
\end{equation}
It is an interpolation between the two gauge choices of Eq.~\eqreft{har1} and Eq.~\eqreft{ddo1}, that is between harmonic and \dDo\ gauge.
Here, we use the same terminology as~\cite{Cheung:2020gyp}, that is harmonic gauge is $g^\mn \Gamma_\mn^\sigma=0$ and \dDo\ gauge is
\begin{align}
\eta^\mn \partial_\mu
g_{\sigma\nu}
-\frac{1}{2}
\partial_\sigma
\eta^\mn
g_\mn
=0
\ ,
\labelt{vv42}
\end{align}
where we have written the condition in terms of $g_\mn$ instead of $h_\mn$ to stress that it is meant as an exact constraint rather than an approximate constraint in linearized gravity.
\par
Let us look into the details of the generalized \dDo-type gauge function Eq.~\eqreft{gau3}.
As mentioned, it combines harmonic and \dDo\ gauge so that when $\alpha=0$ we have \dDo\ gauge and when $\alpha=1$ we have harmonic gauge.
However, any choice of $\alpha$ is valid and corresponds to some gauge condition.
The dependence on $\alpha$ is chosen such that when $G_\sigma$ is expanded in $h_\mn$ the linear term is independent of $\alpha$ and the non-linear terms are scaled by $\alpha$.
In particular, this means that the graviton propagator will be independent of $\alpha$.
Because the gauge function agrees with \dDo\ gauge at linear order, we will speak of the gauge choice as being of \dDo-type.
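To see this explicitly, note that to linear order in $h_\mn$ the harmonic term reduces to the \dDo\ term,
\begin{align}
g^{\mu\nu} \Gamma_{\sigma\mu\nu}
=
\partial_\mu (h^\mu_\sigma - \frac{1}{2} \eta^\mu_\sigma h^\nu_\nu)
+ \mathcal{O}(h^2)
\ ,
\end{align}
so the linear part of Eq.~\eqreft{gau3} is independent of $\alpha$ while all non-linear terms carry a factor of $\alpha$.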
\chapter{Expansions Around Flat Space-Time}
\labelx{sec:ExpansionsAround}
It will be convenient to develop notation and concepts to facilitate expansions of the objects from general relativity in the graviton field and in the gravitational constant $G_N$.
The expansions in the graviton field are used to derive Feynman rules for quantum gravity in Ch.~\ref{sec:FeynmanRules}.
The expansions in $G_N$ are used in the analysis of the classical equations of motion in Ch.~\ref{sec:STM}.
In Sec.~\ref{sec:ExpansionsIn} we will distinguish the two types of expansions, namely in $h_\mn$ and in $G_N$.
In Sec.~\ref{sec:ExpansionOf}, we will expand the Einstein tensor, $G^\mn$, and the action, $S$.
We will compute the expansion of $S$ explicitly to third order in $h_\mn$ from which we can derive the graviton propagator and three-graviton vertex in covariant \dDo-type gauge.
We will then relate the expansion of $S$ to that of $G^\mn$.
\par
In the following sections we will work with a multitude of tensors.
Two important ones are $I^\mn_\ab$ and $\maP^\mn_\ab$:
\begin{subequations}
\label{eqn:imt1}
\begin{align}
&I^\mn_\ab = \frac{1}{2}
\Big(
\delta^\mu_\alpha\delta^\nu_\beta
+
\delta^\mu_\beta\delta^\nu_\alpha
\Big)
\ ,
\labelt{imt2}
\\
&\maP^\mn_\ab = I^\mn_\ab - \frac{1}{2}\eta^\mn\eta_\ab
\ .
\labelt{imt3}
\end{align}
\end{subequations}
These tensors, as well as most other tensors in this chapter, are considered as Lorentz covariant tensors, and thus indices are raised and lowered with the flat space metric.
The definitions in Eqs.~\eqreft{imt1} agree with those of Refs.~\cite{Donoghue:1994dn,Bjerrum-Bohr:cand,BjerrumBohr:2002kt}.
\section{Expansions in $h_\mn$ and $G_N$}
\labelx{sec:ExpansionsIn}
We can expand the objects of general relativity in two different, though slightly related, ways.
First, we can expand in the graviton field, $h_\mn = g_\mn - \eta_\mn$.
Second, we can expand in the gravitational constant $G_N$.
The expansions in $h_\mn$ are important for deriving the Feynman rules.
The expansions in $G_N$ are useful when we analyze the classical equations of motion.
Also, the expansions in $G_N$ can be related to the expansions in $h_\mn$.
\par
If we have an object from general relativity such as the Ricci scalar we can expand it in $h_\mn$,
\begin{subequations}
\label{eqn:exp5}
\begin{equation}
R = \sum_{\noi} R_{\chn{n}}
\ ,
\labelt{exp5a}
\end{equation}
or in $G_N$,
\begin{equation}
R = \sum_{\noi} R_{\cGn{n}}
\ .
\labelt{exp5b}
\end{equation}
\end{subequations}
We will use this kind of notation in this and later chapters.
Thus a subscript or superscript with $\chn{n}$ or $\cGn{n}$ denotes the $n$'th term in the expansion in $h_\mn$ or $G_N$ respectively.
\par
Let us start with two simple, but important, examples of expansions in $h_\mn$.
These are $g^\mn=(\eta_\mn+h_\mn)^{-1}$ and $\sqrt{-g}$.
Once these two expansions are known, the action can in principle be expanded to any order in $h_\mn$.
\par
For $g^\mn$, we find:
\begin{align}
g^\mn
&=
\eta^\mn - h^\mn + h^\mu_\rho h^{\rho\nu}
- h^{\mu}_{\rho}h^\rho_\sigma h^{\sigma\nu}
+ \ellipsis
\labelt{exp3}
\end{align}
This expansion should be compared with the geometric series:
\begin{equation}
\frac{1}{1+x} = \sum_{\nzi} (-x)^n
=
1 - x + x^2 - x^3 + \ellipsis
\labelt{exp4}
\end{equation}
Eq.~\eqreft{exp3} can be derived by introducing:
\begin{equation}
\hat h^\mn = g^\mn - \eta^\mn
\ .
\labelt{nn45}
\end{equation}
Using the equation $g^\ab g_{\beta\gamma} = \delta^\alpha_\gamma$ we get an equation between $h_\mn$ and $\hat h_\mn$:
\begin{equation}
\hat h_\mn = - h_\mn - \hat h^\sigma_\mu h_{\sigma\nu}
\ .
\labelt{nn46}
\end{equation}
This is solved inductively by inserting the equation into itself repeatedly.
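Explicitly, the first two iterations give
\begin{align}
\hat h_\mn
=
- h_\mn + h^\sigma_\mu h_{\sigma\nu} + \hat h^\rho_\mu h_\rho^\sigma h_{\sigma\nu}
=
- h_\mn + h^\sigma_\mu h_{\sigma\nu} - h^\rho_\mu h_\rho^\sigma h_{\sigma\nu}
+ \ellipsis
\end{align}
which, after raising the indices with $\eta^\mn$, reproduces the expansion of Eq.~\eqreft{exp3}.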
\par
The expansion of $g^\mn$ in $h_\mn$ is then rather straightforward and follows the structure of the geometric series.
Using the notation introduced in Eqs.~\eqreft{exp5} we would e.g. write:
\begin{equation}
(g^{\mn})_{\chn{3}} = - h^\mu_{\rho}h^\rho_\sigma h^{\sigma\nu}
\ .
\labelt{nn47}
\end{equation}
This is the $\chn{3}$ term of the expansion of $g^\mn$.
\par
Let us turn to $\sqg$.
Here, we use the trace log expansion of the determinant:
\begin{subequations}
\label{eqn:nn48}
\begin{align}
\sqrt{-g} &= \exp(\frac{1}{2}\tr\ln(\eta^\mu_\nu+h^\mu_\nu))
\labelt{nn48a}
\\
&=\exp(\frac{1}{2}\big(
h^\mu_\mu
- \frac{1}{2} h^\mu_\nu h^\nu_\mu
+ \frac{1}{3} h^\mu_\nu h^\nu_\rho h^\rho_\mu
- \frac{1}{4} h^\mu_\nu h^\nu_\rho h^\rho_\sigma h^\sigma_\mu
+ \ellipsis
\big))
\labelt{nn48b}
\\
&=
1
+ \frac{1}{2} h
- \frac{1}{4} \maP^\rs_\mn h^\mn h_\rs
+ \ellipsis
\labelt{nn48c}
\end{align}
\end{subequations}
This series is less straightforward, and it would be interesting to look into methods for deriving the terms more efficiently.
In the third line, we have computed terms to second order in $h_\mn$.
In case of the graviton propagator and the three-graviton vertex it is sufficient to know the linear term.
\par
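To see how the quadratic term of Eq.~\eqreft{nn48c} arises, expand the exponential of Eq.~\eqreft{nn48b} and collect the terms of second order in $h_\mn$:
\begin{align}
\sqg
\approx
1
+ \frac{1}{2}
\Big(
h - \frac{1}{2} h^\mu_\nu h^\nu_\mu
\Big)
+ \frac{1}{2}
\Big(
\frac{1}{2} h
\Big)^2
=
1
+ \frac{1}{2} h
- \frac{1}{4}
\Big(
h^\mu_\nu h^\nu_\mu - \frac{1}{2} h^2
\Big)
\ ,
\end{align}
where $h = h^\mu_\mu$.
The bracket in the last expression is exactly $\maP^\rs_\mn h^\mn h_\rs$ with $\maP$ from Eq.~\eqreft{imt3}.
\par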
Again, using the notation of Eqs.~\eqreft{exp5} we can e.g. write:
\begin{equation}
\sqrt{-g}
_{\chn{2}} = - \frac{1}{4} \maP^\rs_\mn h^\mn h_\rs
\ .
\labelt{nn49}
\end{equation}
In general, the $h^n$ term in the $h_\mn$ expansion will be a function of $n$ factors of $h_\mn$.
For example in Eq.~\eqreft{nn49} we have the $h^2$ term of the $\sqg$-expansion which is a quadratic function of $h_\mn$.
Sometimes it will be useful to show the dependence on $h_\mn$ explicitly so that we e.g. write $\sqrt{-g}_{\chn{2}}(h,h)$ to denote that $\sqrt{-g}_{\chn{2}}$ is a quadratic function of $h_\mn$.
It is then possible to evaluate $\sqrt{-g}_{\chn{2}}$ with different arguments than $h_\mn$.
\par
For example, we can evaluate $\sqg_{\chn{2}}(h,h)$ in $h^{(a)}$ and $h^{(b)}$ which we think of as some given tensors:
\begin{align}
\sqrt{-g}_{\chn{2}}(h^{(a)}_\mn,h^{(b)}_\mn) = - \frac{1}{4} \maP^\rs_\mn h^{(a)\mn} h^{(b)}_\rs
\ .
\labelt{n410}
\end{align}
This prescription is uniquely defined as long as we require the functions to be symmetric in their factors of $h_\mn$.
\par
Let us now consider expansions in $G_N$.
We will relate these to the expansions in $h_\mn$.
As an example, we will consider the Einstein tensor, $G^\mn$, and relate its expansion in $G_N$ to its expansion in $h_\mn$.
\par
By definition, we have
\begin{equation}
G^\mn
=
\sum_{n=1..\infty}
G_{\chn{n}}^\mn
\labelt{n411}
\end{equation}
where we have used that $G^\mn$ does not have any $h^0$ terms.
We assume that $h_\mn$ is at least of first order in $G_N$.
Then $G_{\chn{n}}^\mn$ is at least of $n$'th order in $G_N$.
\par
We will expand the terms $G_{\chn{n}}^\mn$ in $G_N$ separately.
We will do the cases $G_{\chno}^\mn$ and $G_{\chn{2}}^\mn$ explicitly after which the general case can be inferred.
\par
For the linear case:
\begin{subequations}
\label{n412}
\begin{align}
G^\mn_{\chno}(h_\mn)
&=
G^\mn_{\chno}\big(\sum_{\nzi}h^{\cGn{n}}_\mn\big)
\labelt{n412a}
\\
&=
\sum_{\noi} G^\mn_{\chno}\big(h^{\cGn{n}}_\mn\big)
\labelt{n412b}
\end{align}
\end{subequations}
In the first line we inserted the expansion of $h_\mn$ in terms of $G_N$.
Then, in the second line we used that $G^\mn_{\chno}(h_\mn)$ is a linear function of $h_\mn$.
The term $G^\mn_{\chno}\big(h^{\cGn{n}}_\mn\big)$ is of $n$'th order in $G_N$.
\par
We can expand the quadratic term similarly:
\begin{subequations}
\label{eqn:n413}
\begin{align}
G^\mn_{\chn{2}}(h_\mn,h_\mn)
&=
G^\mn_{\chn{2}}
\big(
\sum_{n=0..\infty}h^{\cGn{n}}_\mn
,
\sum_{m=0..\infty}h^{\cGn{m}}_\mn
\big)
\labelt{n413a}
\\
&=
\sum_{n=1..\infty}
\sum_{m=1..\infty}
G^\mn_{\chn{2}}
\big(
h^{\cGn{n}}_\mn
,
h^{\cGn{m}}_\mn
\big)
\labelt{n413b}
\\
&=
G^\mn_{\chn{2}}
\big(
h^{\cGno}_\mn
,
h^{\cGno}_\mn
\big)
+
2
G^\mn_{\chn{2}}
\big(
h^{\cGno}_\mn
,
h^{\cGn{2}}_\mn
\big)
+ \ellipsis
\labelt{n413c}
\end{align}
\end{subequations}
Again, in the first line we inserted the expansions of $h_\mn$ in terms of $G_N$.
In the second line we used that $G^\mn_{\chn{2}}(h,h)$ is linear in both of its arguments.
In the third line we have written out terms explicitly to third order in $G_N$.
For example $G^\mn_{\chn{2}} \big( h^{\cGno}_\mn , h^{\cGn{2}}_\mn \big)$ is of third order in $G_N$.
\par
These expansions can be compared to the expansion of the following polynomial:
\begin{align}
(x_1 + x_2 + \ellipsis + x_n)^n
\ .
\labelt{n414}
\end{align}
For the case $n=3$ there would e.g. be a term $6G_{\chn{3}}^\mn(h^{\cGn{1}},h^{\cGn{2}},h^{\cGn{3}})$ which should be compared to $6x_1 x_2 x_3$ in the expansion of Eq.~\eqreft{n414}.
\par
We can now write down the explicit expansion of $G^\mn$ to third order in $G_N$ in terms of the functions $G_{\chn{n}}^\mn$.
\begin{align}
G^\mn
\approx&
\
G^\mn_{\chno}(h^{\cGno}_\mn)
\labelt{exp1}
\\
&+
G^\mn_{\chno}(h^{\cGn{2}}_\mn)
+
G^\mn_{\chn{2}}
\big(
h^{\cGno}_\mn
,
h^{\cGno}_\mn
\big)
\nonumber
\\
&+
G^\mn_{\chno}(h^{\cGn{3}}_\mn)
+
2 G^\mn_{\chn{2}}
\big(
h^{\cGno}_\mn
,
h^{\cGn{2}}_\mn
\big)
+
G^\mn_{\chn{3}}
\big(
h^{\cGno}_\mn
,
h^{\cGno}_\mn
,
h^{\cGno}_\mn
\big)
\nonumber
\end{align}
where in the first line we have the linear $G_N$ term, in the second line the $(G_N)^2$ terms and in the third line the $(G_N)^3$ terms.
Thus, when we know the expansion of $G^\mn$ in terms of $h_\mn$ we can find the expansion of $G^\mn$ in terms of $G_N$ as well.
\section{Action and Einstein Tensor Expanded in the Graviton Field}
\labelx{sec:ExpansionOf}
It is now the goal to expand the action $S$ and the Einstein tensor $G^\mn$ in the graviton field $h_\mn$.
First, we will focus on the gravitational part of the action, that is $S_\teEH+S_\tegf$, which from Eqs.~\eqreft{nn27} and~\eqreft{gau1} is given by:
\begin{equation}
S_\teEH + S_\tegf
=
\int \dDx
\Big(
\sqg\ R + \frac{1}{2\xi} \eta^\rs G_\rho G_\sigma
\Big)
\labelt{act5}
\end{equation}
For the gauge-fixing function, $G_\sigma$, the \dDo-type function from Eq.~\eqreft{gau3} will be used.
\par
We will use two different expansions of the action.
First, with partial integrations the action can be rewritten so that it depends only on first derivatives of the metric.
We will then expand this form of the action explicitly to third order in $h_\mn$ from which we can derive the three-graviton vertex.
Second, we will write a general expansion in terms of tensor functions $\Ghn{n}{\mn}$ and $\Hhn{n}{\mn}$ which will be related to $G^\mn$ and an analogous tensor $H^\mn$ respectively.
\subsection{Action in terms of First Derivatives}
\labelx{sec:ActionIn}
The idea to rewrite the \EHA\ action in terms of first derivatives of the metric can e.g. be found in Dirac~\cite{Dirac:GR}.
We get:
\begin{subequations}
\begin{align}
S_{EH}
&=
\int \dDx
\sqrt{-g} \
R
\labelt{dir1}
\\
&=
\int \dDx
\sqrt{-g}
\
g^\mn
\Big(
\Gamma^\rho_{\rho\mu,\nu} - \Gamma^\rho_{\mn,\rho}
-
\Gamma^\rho_\mn
\Gamma^\sigma_{\rho\sigma}
+
\Gamma^\rho_{\mu\sigma}
\Gamma^\sigma_{\nu\rho}
\Big)
\labelt{dir2}
\\
&=
\int \dDx
\sqrt{-g}
\
g^\mn
\Big(
\Gamma^\rho_\mn
\Gamma^\sigma_{\rho\sigma}
-
\Gamma^\rho_{\mu\sigma}
\Gamma^\sigma_{\nu\rho}
\Big)
\labelt{dir3}
\end{align}
\end{subequations}
In the second line we inserted the definition of the Ricci scalar $R$ and the third line follows after partial integrations.
The result in the third line is that the first two terms of Eq.~\eqref{eqn:dir2} are removed while the last two terms change sign.
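The partial integrations use the identities
\begin{align}
\partial_\rho \sqg = \sqg\ \Gamma^\sigma_{\sigma\rho}
\ ,
\qquad
\partial_\rho\, g^\mn = - \Gamma^\mu_{\rho\sigma} g^{\sigma\nu} - \Gamma^\nu_{\rho\sigma} g^{\mu\sigma}
\ ,
\end{align}
so that e.g. $\partial_\nu(\sqg\, g^\mn) = -\sqg\ g^\rs \Gamma^\mu_\rs$, which turns the derivative terms of Eq.~\eqreft{dir2} into products of two Christoffel symbols.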
\par
We can write the \EHA\ action entirely in terms of $g_\mn$ and $g^\mn$ by inserting the definition of the Christoffel symbols:
\begin{equation}
S_{EH} = \int \dDx \sqrt{-g}\
\frac{1}{4}
\Big(
2 g^{\sigma\gamma} g^{\rho\delta} g^{\ab}
- g^\gd g^\ab g^\rs
- 2g^{\sigma\alpha}g^{\gamma\rho}g^{\delta\beta}
+ g^{\rs}g^{\alpha\gamma}g^{\beta\delta}
\Big)
g_{\ab,\rho} g_{\gd,\sigma}
\labelt{act6}
\end{equation}
This expression conforms to the traditional idea of a Lagrangian as a function of the field and its first derivatives.
\par
Both $S_\teEH$ and $S_\tegf$ are now quadratic functions of $h_{\mn,\sigma}=g_{\mn,\sigma}$.
In the case of $S_\tegf$ this is so because $G_\sigma$ is linear in $h_{\mn,\sigma}$.
We can now expand everything in $h_\mn$ and collect orders in $h_\mn$.
The only necessary expansions are those of $g^\mn$ and $\sqrt{-g}$ which we know from Sec.~\ref{sec:ExpansionsIn}.
\par
The expansion will be done to third order in $h_\mn$.
Let us start with the gauge-fixing term, which is simpler than the \EHA\ action.
It will be necessary to know the gauge-fixing function $G_\sigma$ to second order in $h_\mn$.
This is found to be
\begin{equation}
G_\sigma \approx
\maP^\mn_\rs h_\mn^{,\rho}
- \alpha\
\Gamma^{\rho\ab}_{\sigma\mn} h^\mn h_{\ab,\rho}
\ ,
\labelt{n419}
\end{equation}
where we have used $\maP$ from Eq.~\eqreft{imt3} and introduced a new tensor $\Gamma^{\rho\ab}_{\sigma\mn}$.
This tensor will sometimes be useful in the following and is defined such that:
\begin{align}
\Gamma_{\rho\mn} = \Gamma_{\rho\mn}^{\sigma\ab} g_{\ab,\sigma}
\ .
\labelt{n420}
\end{align}
It is given entirely in terms of $\delta^\mu_\nu$ by the formula:
\begin{align}
\Gamma^{\rho\ab}_{\sigma\mn} = I^{\ab}_{\sigma\kappa} I^{\rho\kappa}_{\mn} - \frac{1}{2} I^{\ab}_{\mn} \delta^\rho_\sigma
\labelt{gam2}
\end{align}
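As a check, contracting this formula with $g_{\ab,\sigma}$ gives
\begin{align}
\Gamma^{\sigma\ab}_{\rho\mn}\, g_{\ab,\sigma}
=
I^{\sigma\kappa}_{\mn}\, g_{\rho\kappa,\sigma}
- \frac{1}{2}\, g_{\mn,\rho}
=
\frac{1}{2}
\big(
g_{\rho\mu,\nu} + g_{\rho\nu,\mu} - g_{\mn,\rho}
\big)
\ ,
\end{align}
which is the Christoffel symbol with its first index lowered, in agreement with Eq.~\eqreft{n420}.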
It is now straightforward to expand $G_\sigma G^\sigma$ to third order in $h_\mn$:
\begin{align}
G_\sigma G^\sigma
\approx
h^{,\rho}_\mn \maP^\mn_{\rho\kappa} \maP^{\kappa\sigma}_\ab h^\ab_{,\sigma}
- 2\alpha
\maP_\gd^{\rho\kappa} h^\gd_{,\rho}
\Gamma^{\sigma\ab}_{\kappa\mn}
h^\mn h_{\ab,\sigma}
\ .
\labelt{act9}
\end{align}
This expression can now be inserted in the gauge-fixing term $S_\tegf$ of Eq.~\eqreft{act5}.
\par
The expansion of the \EHA\ action is more complicated than that of the gauge-fixing term.
Using Eq.~\eqreft{act6} we get the quadratic term by replacing $g^\mn$ by $\eta^\mn$ and $\sqg$ by unity since the two factors of $h_\mn$ come from $h_{\mn,\sigma}$.
We get:
\begin{align}
\hspace*{-.2cm}
(S_\teEH)_{\chn{2}}
&=
\int \dDx
\frac{1}{4}
\Big(
2 \eta^{\sigma\gamma} \eta^{\rho\delta} \eta^{\ab}
- \eta^\gd \eta^\ab \eta^\rs
- 2\eta^{\sigma\alpha}\eta^{\gamma\rho}\eta^{\delta\beta}
+ \eta^{\rs}\eta^{\alpha\gamma}\eta^{\beta\delta}
\Big)
h_{\ab,\rho} h_{\gd,\sigma}
\labelt{act7}
\\
&=
\int \dDx
\frac{1}{4}
\Big(
2 \eta^{\sigma\gamma} \eta^{\rho\delta} \eta^{\ab}
- \eta^\gd \eta^\ab \eta^\rs
- 2\eta^{\sigma\alpha}\eta^{\gamma\rho}\eta^{\delta\beta}
+ \eta^{\rs}\eta^{\alpha\gamma}\eta^{\beta\delta}
\Big)
h_{\ab,\sigma} h_{\gd,\rho}
\ .
\nonumber{}
\end{align}
In the second line we made partial integrations so that the partial derivatives on the two factors of $h_\mn$ were exchanged.
\par
Using
\begin{align}
2\maP^\ab_{\kappa\rho} \maP_\gd^{\kappa\sigma} h_{\ab}^{,\rho} h^\gd_{,\sigma}
=
\Big(
2\eta^{\sigma\alpha}\eta^{\gamma\rho}\eta^{\delta\beta}
- 2\eta^{\sigma\gamma} \eta^{\rho\delta} \eta^{\ab}
+ \frac{1}{2} \eta^\gd \eta^\ab \eta^\rs
\Big)
h_{\ab,\sigma} h_{\gd,\rho}
\ ,
\labelt{n424}
\end{align}
we can rewrite the quadratic term, Eq.~\eqreft{act7}, as:
\begin{equation}
(S_\teEH)_{\chn{2}} =
\frac{1}{4}
\int \dDx
\ h_{\gd}^{,\rho}
\Big(
\delta^\rho_\sigma \mathcal{P}^\gd_\ab
-2 \mathcal{P}^{\gd}_{\rho\kappa} \mathcal{P}_{\ab}^{\sigma\kappa}
\Big)
h^{\ab}_{,\sigma}
\ .
\labelt{n425}
\end{equation}
This is a rather simple result and the tensor structure of the second term in the quadratic operator is the same as the one that comes from the gauge-fixing term.
\par
The three-graviton term of the \EHA\ action is more involved than that of the gauge-fixing term.
In Eq.~\eqreft{act6} we should in turn replace one factor of $g^\mn$ with $-h^\mn$ and the rest with $\eta^\mn$.
Then we should also add the contribution from $\sqg \approx 1 + \frac{1}{2} h^\nu_\nu$.
Naively, this gives one term for each factor of $g^\mn$, that is 12 terms, and 4 additional terms from the contribution of $\sqg$ multiplied into the brackets.
However, some of these terms are equivalent.
\par
Computing all of these terms we have found that:
\begin{equation}
(S_\teEH)_{\chn{3}} =
\frac{1}{2} \int \dDx
\ U_\tecl^{\mn\ \ab\rho\ \gd\sigma} h_\mn h_{\ab,\rho} h_{\gd,\sigma}
\ ,
\labelt{n426}
\end{equation}
where the three-graviton term can be written in the compact form:
\begin{align}
U_\tecl^{\mn\ \ab\rho\ \gd\sigma} \ h_{\ab,\rho} h_{\gd,\sigma}
=
&
2 I^\mn_\pe \maP^\ab_\rs \maP^{\sigma\phi}_{\gamma\delta} h^{,\epsilon}_\ab h^{\gd,\rho}
- \maP^{\mu\rho}_\ab \maP^{\nu\sigma}_\gd \eta_\rs h^\ab_{,\kappa} h^{\gd,\kappa}
\labelt{ute1}
\\
&
+ \maP^\mn_\rs
\Big(
h^{\rho\alpha}_{,\beta} h^{\sigma\beta}_{,\alpha}
-\frac{1}{2} h^{\alpha,\rho}_{\beta} h^{\beta,\sigma}_{\alpha}
- h^\rs_{,\alpha} h^\ab_{,\beta}
\Big)
\nonumber
\ .
\end{align}
It can also be written in a less compact form, which is more easily compared to the action in Eq.~\eqreft{act6}:
\begin{align}
U_\tecl^{\mn\ \ab\rho\ \gd\sigma} h_\mn h_{\ab,\rho} h_{\gd,\sigma}
= &\frac{1}{2} h^\mu_\nu h_{,\mu} h^{,\nu}
- \frac{1}{4} h h_{,\rho} h^{,\rho}
+ h^\mu_\nu h^\nu_{\mu,\rho} h^{,\rho}
- h^\mu_\nu h_\mu^{\nu,\sigma} h_{\sigma,\rho}^\rho
+\frac{1}{4} h h^\mu_{\nu,\rho} h_\mu^{\nu,\rho}
\nonumber{}
\\
&- h^\nu_\mu h^\mu_{\sigma,\nu} h^{,\sigma}
- h^\mu_\nu h^{,\nu} h^\rho_{\mu,\rho}
+\frac{1}{2} h h^\rho_{\sigma,\rho} h^{,\sigma}
-h^\mu_\nu h^\rho_{\mu,\sigma} h_\rho^{\nu,\sigma}
-\frac{1}{2} h h^\rho_{\nu,\mu} h_\rho^{\mu,\nu}
\nonumber{}
\\
&+h^\mn h_{\mu,\rho}^\sigma h_{\nu,\sigma}^\rho
-\frac{1}{2} h^\mu_\nu h^\rho_{\sigma,\mu} h_\rho^{\sigma,\nu}
+2 h^\mu_\nu h_\rho^{\sigma,\nu} h^\rho_{\mu,\sigma}
\labelt{ute4}
\end{align}
Here, $h=h_\nu^\nu$.
There are 13 terms instead of the 16 terms estimated from the naive counting.
\par
By analogy we define a $U_\tegf$:
\begin{equation}
(S_\tegf)_{\chn{3}} =
\frac{1}{2\xi} \int \dDx
\ U_\tegf^{\mn\ \ab\rho\ \gd\sigma} h_\mn h_{\ab,\rho} h_{\gd,\sigma}
\ .
\labelt{act8}
\end{equation}
The tensor, $U_\tegf$, can easily be read off from Eq.~\eqreft{act9}:
\begin{subequations}
\label{eqn:n430}
\begin{align}
U_{gf}^{\mn\ \ab\rho\ \gd\sigma} \ h_{\ab,\rho} h_{\gd,\sigma}
&=
-2 \alpha \maP^\ab_\rs h^{,\sigma}_\ab \Gamma^{\rho\mn}_{\kappa\gd} h^{\gd,\kappa}
\labelt{n430a}
\\
&=
\alpha
\mathcal{P}^\rs_\ab
h^\ab_{,\sigma}
\Big(
- h_{\rho}^{\mu,\nu}
- h_{\rho}^{\nu,\mu}
+ h^\mn_{,\rho}
\Big)
\ .
\labelt{n430b}
\end{align}
\end{subequations}
In the second line we inserted the definition of $\Gamma^{\rho\mn}_{\kappa\gd}$.
\par
We define
\begin{equation}
U^{\mn\ \ab\rho\ \gd\sigma} =
U_\tecl^{\mn\ \ab\rho\ \gd\sigma}
+
\frac{1}{\xi}
U_\tegf^{\mn\ \ab\rho\ \gd\sigma}
\ ,
\labelt{n431}
\end{equation}
and we get the final expression for the expansion of the gravitational action, Eq.~\eqreft{act5}, to third order in $h_\mn$:
\begin{align}
S_\teEH + S_\tegf
\approx
&\ \frac{1}{4}
\int \dDx
\ h_{\mn}^{,\rho}
\Big(
\delta^\rho_\sigma \mathcal{P}^\mn_\ab
-2(1-\frac{1}{\xi})
\mathcal{P}^{\mn}_{\rho\kappa} \mathcal{P}_{\ab}^{\sigma\kappa}
\Big)
h^{\ab}_{,\sigma}
\nonumber{}
\\
&+ \frac{1}{2} \int \dDx
\ U^{\mn\ \ab\rho\ \gd\sigma} h_\mn h_{\ab,\rho} h_{\gd,\sigma}
\ .
\labelt{n432}
\end{align}
This is the main result of this section.
\par
The tensors $U^{\mn\ \ab\rho\ \gd\sigma}$, $U_\tecl$ and $U_\tegf$ are defined to be symmetric under exchange $\ab\rho \leftrightarrow \gd\sigma$.
With this symmetry as well as symmetry in $\mu\leftrightarrow \nu$, $\alpha \leftrightarrow \beta$ and $\gamma \leftrightarrow \delta$ the definitions of the $U$-tensors can be read off from the expressions in Eqs.~\eqreft{ute1},~\eqreft{ute4} and~\eqreft{n430}.
\par
Let us show how this is done with the simple $U_\tegf$ in Eq.~\eqreft{n430b}.
For example the last term of Eq.~\eqreft{n430b}:
\begin{align}
\alpha
\maP^\rs_\ab h^\ab_{,\sigma} h^\mn_{,\rho}
&=
\alpha \maP^{\ab\rs} I^{\gd\mn} h_{\ab,\rho}h_{\mn,\sigma}
\labelt{n433}
\\
&=
\frac{1}{2}
\alpha
\Big(
\maP^{\ab\rs} I^{\gd\mn}
+
\maP^{\gd\rs} I^{\ab\mn}
\Big)
h_{\ab,\rho}h_{\mn,\sigma}
\labelt{n434}
\end{align}
Hence this term contributes with
\begin{align}
\frac{1}{2}
\alpha
\Big(
\maP^{\ab\rs} I^{\gd\mn}
+
\maP^{\gd\rs} I^{\ab\mn}
\Big)
\labelt{n435}
\end{align}
to $U_\tegf$.
In general $U_\tegf$ is:
\begin{align}
\hspace*{-1cm}
U_\tegf^{\mn\ \ab\rho\ \gd\sigma}
=
-\alpha
\Big(
I^{\mn\rho\kappa}
I^\ab_{\kappa\lambda}
\maP^{\lambda\sigma\gd}
+
I^{\mn\sigma\kappa}
I^\gd_{\kappa\lambda}
\maP^{\lambda\rho\ab}
\Big)
+
\frac{1}{2}
\alpha
\Big(
\maP^{\ab\rs} I^{\gd\mn}
+
\maP^{\gd\rs} I^{\ab\mn}
\Big)
\labelt{n436}
\end{align}
A similar expression can be derived for $U_\tecl$.
\subsection{Action in terms of the Einstein Tensor}
We will now use a different approach for the expansion of the gravitational action which is suited for the metric computation in the classical limit.
We postulate an expansion of $S_\teEH$ to any order in $h_\mn$ in terms of tensor functions $\Ghn{n}{\mn}$:
\begin{align}
S_\teEH
=
-
\int \dDx
\
h_\mn
\sum_{n=1..\infty} \oov{(n+1)} \Ghn{n}{\mn}(h,h,\ellipsis,h)
\ .
\labelt{n437}
\end{align}
The functions $\Ghn{n}{\mn}$ will then be related to the Einstein tensor.
They are evaluated from $n$ factors of $h_\mn$.
We can make an analogous expansion of $S_\tegf$.
We postulate:
\begin{align}
S_\tegf
=
-
\frac{1}{\xi}
\int \dDx
\
h_\mn
\sum_{n=1..\infty} \oov{(n+1)} \Hhn{n}{\mn}(h,h,\ellipsis,h)
\ .
\labelt{n438}
\end{align}
Here $\Hhn{n}{\mn}$ are analogous to $\Ghn{n}{\mn}$ and will be related to a tensor analogous to the Einstein tensor which we will call $H^\mn$.
\par
Let us focus on $S_\teEH$ first.
The case of $S_\tegf$ will be similar.
We require that
\begin{equation}
\Ghn{n}{\mn}(h,h,\ellipsis,h)
\ ,
\labelt{n439}
\end{equation}
is symmetric in its $n$ arguments, that is, it is symmetric in the $n$ factors of $h_\mn$.
In addition, we require that
\begin{equation}
\int \dDx
\
h_\mn
\oov{(n+1)} \Ghn{n}{\mn}(h,h,\ellipsis,h)
\ ,
\labelt{n440}
\end{equation}
is symmetric in the $(n+1)$ factors of $h_\mn$.
Obviously, the integrand is not symmetric by itself, since the $h_\mn$ contracted to $\Ghn{n}{\mn}$ plays an asymmetrical role in comparison to the other $n$ factors of $h_\mn$.
However, the integral can still be symmetric in the $(n+1)$ factors of $h_\mn$ due to partial integrations.
\par
The last condition, that the integral Eq.~\eqreft{n440} is symmetric in its factors of $h_\mn$, means that it is straightforward to vary this integral in $h_\mn$:
\begin{align}
\delta \int \dDx
\
h_\mn
\oov{(n+1)} \Ghn{n}{\mn}(h,h,\ellipsis,h)
=
\int \dDx
\
\Ghn{n}{\mn}(h,h,\ellipsis,h)
\
\delta h_\mn
\labelt{n441}
\end{align}
This makes it easy to relate the functions $\Ghn{n}{\mn}$ to the Einstein tensor.
\par
The requirements Eqs.~\eqreft{n439} and~\eqreft{n440} are always possible to fulfill and uniquely define the functions $\Ghn{n}{\mn}$.
As an example let us relate $\Ghn{2}{\mn}$ to $U^{\mn\ \ab\rho\ \gd\sigma}_c$:
\begin{subequations}
\label{sht5}
\begin{align}
\hspace*{-1cm}
(S_\teEH)_{\chn{3}}
&= -\oov{3}\int\ddx\ h_\mn \Ghn{2}{\mn}(h,h)
\labelt{sht1}
\\
&= \oov{2}\int\ddx\ h_\mn U_\tecl^{\mn\ \ab\rho\ \gd\sigma} h_{\ab,\rho}h_{\gd,\sigma}
\labelt{sht2}
\\
&= \oov{6}\int\ddx\
\Big(
h_\mn U_\tecl^{\mn\ \ab\rho\ \gd\sigma} h_{\ab,\rho}h_{\gd,\sigma}
+h_{\mn,\sigma} U_\tecl^{\ab\ \gd\rho\ \mn\sigma} h_{\ab}h_{\gd,\rho}
+h_{\mn,\rho} U_\tecl^{\gd\ \mn\rho\ \ab\sigma} h_{\ab,\sigma}h_{\gd}
\Big)
\label{eqn:sht3}
\\
&= \oov{6}\int\ddx\
h_\mn
\Big(
U_\tecl^{\mn\ \ab\rho\ \gd\sigma} h_{\ab,\rho}h_{\gd,\sigma}
- U_\tecl^{\ab\ \gd\rho\ \mn\sigma} \partial_\sigma(h_{\ab}h_{\gd,\rho})
- U_\tecl^{\gd\ \mn\rho\ \ab\sigma} \partial_\rho(h_{\ab,\sigma}h_{\gd})
\Big)
\label{eqn:sht4}
\end{align}
\end{subequations}
First, in Eqs.~\eqreft{sht1} and~\eqreft{sht2} we use the definitions of $S_\teEH$ in terms of $\Ghn{n}{\mn}$ and $U_\tecl^{\mn\ \ab\rho\ \gd\sigma}$ respectively.
Then in Eq.~\eqreft{sht3} we pretend that the three factors of $h_\mn$ in the action are distinguished by their written order and rewrite the action in terms of $U_\tecl^{\mn\ \ab\rho\ \gd\sigma}$ so that it is symmetric in these three factors.
Then it certainly obeys an equation similar to Eq.~\eqreft{n441}.
After partial integrations which leave the action unchanged we rewrite the action in Eq.~\eqreft{sht4} in the same form as $\Ghn{n}{\mn}$ in Eq.~\eqreft{n437}.
After expanding the partial derivatives in Eq.~\eqreft{sht4} and using the symmetries of $U$, we find that $\Ghn{2}{\mn}$ is given in terms of $U_\tecl$ according to:
\begin{align}
\hspace*{-1cm}
\Ghn{2}{\mn}(h,h)
=
-\frac{1}{2}
\Big(
U_\tecl^{\mn\ \ab\rho\ \gd\sigma} h_{\ab,\rho}h_{\gd,\sigma}
- 2U_\tecl^{\ab\ \gd\rho\ \mn\sigma} h_{\ab,\sigma}h_{\gd,\rho}
- 2U_\tecl^{\ab\ \gd\rho\ \mn\sigma} h_{\ab}h_{\gd,\rho\sigma}
\Big)
\labelt{ute2}
\end{align}
By similar arguments any term in the action can uniquely be rearranged to obey the two conditions in Eqs.~\eqreft{n439} and~\eqreft{n440}.
\par
We will now relate the functions $\Ghn{n}{\mn}$ to the Einstein tensor.
Using Eq.~\eqreft{n441} we vary the Einstein-Hilbert action in the form of Eq.~\eqreft{n437} and get:
\begin{align}
\delta S_{EH}
=
-
\int \dDx\
\delta h_\mn
\sum_{n=1..\infty} \Ghn{n}{\mn}(h,h,\ellipsis,h)
\ .
\labelt{n444}
\end{align}
This is easily compared to the expression for $\delta S_\teEH$ in terms of $G^\mn$, Eq.~\eqreft{ein1}:
\begin{align}
\delta S_{EH}
= - \int \ddx
\ \sqrt{-g} \ G^\mn \delta h_\mn
\labelt{n445}
\end{align}
In this way $\Ghn{n}{\mn}$ is related to the Einstein tensor:
\begin{align}
\sum_{n=1..\infty} \Ghn{n}{\mn}(h,h,\ellipsis,h)
= \sqrt{-g}\ G^\mn
\labelt{n446}
\end{align}
Using the notation introduced in Sec.~\ref{sec:ExpansionsIn} we can write the relation as:
\begin{subequations}
\label{eqn:n447}
\begin{align}
\Ghn{n}{\mn}
&=
\big(
\sqrt{-g} G^\mn
\big)_{\chn{n}}
\labelt{n447a}
\\
&= \sum_{j=0..n}
G^\mn_{h^{n-j}} \ (\sqrt{-g})_{h^j}
\labelt{n447b}
\end{align}
\end{subequations}
And in particular:
\begin{subequations}
\label{eqn:n448}
\begin{align}
&\Ghn{1}{\mn} = G^\mn_{h^1}
\labelt{n448a}
\\
&\Ghn{2}{\mn} = G^\mn_{h^2} +\frac{1}{2} h\ G^\mn_{h^1}
\labelt{n448b}
\\
&\Ghn{3}{\mn}
= G^\mn_{h^3}
+\frac{1}{2} h\ G^\mn_{h^2}
- \frac{1}{4} \maP^\rs_\ab h^\ab h_\rs\ G^\mn_{h^1}
\labelt{n448c}
\end{align}
\end{subequations}
In general, it makes sense to define
\begin{equation}
\Gpz^\mn = \sqg\ G^\mn
\ ,
\labelt{n449}
\end{equation}
which nicely summarizes the relation of the functions $\Ghn{n}{\mn}$ to the Einstein tensor.
\par
We will now discuss the similar result for $S_\tegf$.
Let us introduce the analogous tensor to the Einstein tensor by:
\begin{equation}
\delta S_\tegf =
\frac{1}{\xi}
\int
\dDx
\
\eta^\rs
G_\rho
\delta G_\sigma
=
-
\frac{1}{\xi}
\int \dDx \
\sqrt{-g}
H^\mn \delta h_\mn
\ .
\labelt{hmn1}
\end{equation}
In Ch.~\ref{sec:STM} the tensor $H^\mn$ will be analyzed in detail for the generalized \dDo-type gauge function introduced in Sec.~\ref{sec:GaugeFixing}.
\par
We require the functions $\Hhn{n}{\mn}$ to obey the same conditions as $\Ghn{n}{\mn}$ in Eqs.~\eqreft{n439} and~\eqreft{n440}.
By similar arguments as for $\Ghn{n}{\mn}$ we find that $\Hhn{n}{\mn}$ is related to $H^\mn$ by:
\begin{align}
\Hhn{n}{\mn}
&=
\big(
\sqrt{-g} H^\mn
\big)_{\chn{n}}
\ .
\labelt{n451}
\end{align}
And we define also
\begin{align}
\Hpz^\mn
=
\sqg \ H^\mn
\labelt{n452}
\end{align}
which summarizes the relation between $\Hhn{n}{\mn}$ and $H^\mn$.
\par
In Sec.~\ref{sec:GravitonSelf} we will derive the vertex rules for the n-graviton self-interaction vertices in terms of $\Ghn{n}{\mn}$ and $\Hhn{n}{\mn}$.
This makes it possible to compare the n-graviton amplitude to the classical equations of motion which we will do in Ch.~\ref{sec:STM}.
\subsection{Einstein Tensor to Second Order in the Graviton Field}
\labelx{sec:EinsteinTensor}
We will now derive results for the expansion of the Einstein tensor $G^\mn$ to second order in $h_\mn$.
These results will not be used for explicit computations in this work.
However, since we have related the expansion of the \EHA\ action to the Einstein tensor, they can be used as an alternative definition of the three-graviton vertex instead of the $U$-tensor.
They are suited for the one-loop computation of the Schwarzschild-Tangherlini\ metric if the triangle integrals are simplified appropriately.
Also, we will introduce a tensor $Q^{\mn\ \ab\ \gd}$ which describes the quadratic term in the \EHA\ action.
\par
The Einstein tensor can be written in the following way:
\begin{subequations}
\begin{align}
G^\mn
&=
\frac{1}{2}
\Big(
g^{\mu\alpha}g^{\nu\beta}
+
g^{\mu\beta}g^{\nu\alpha}
- g^\mn g^\ab
\Big)
R_\ab
\\
&=
\frac{1}{4}
\Big(
g^{\mu\alpha}g^{\nu\beta}
+
g^{\mu\beta}g^{\nu\alpha}
- g^\mn g^\ab
\Big)
g^\gd
F_{\ab\ \gd}^{\rs\ \pe}
\Big(
g_{\pe,\rs}
+
g^{\kappa\lambda}
\Gamma_{\kappa\rs}
\Gamma_{\lambda\pe}
\Big)
\labelt{ein2}
\end{align}
\end{subequations}
Recall, that $\Gamma_{\sigma\mn}$ is linear in $h_{\mn,\sigma}$.
Here $F^{\rs\ \pe}_{\ab\ \gd}$ is a tensor introduced for convenience defined by:
\begin{align}
F^{\ab\ \gd}_{\mn\ \rs}
=
I^\ab_\mn I^\gd_\rs
+
I^\ab_\rs I^\gd_\mn
-2
I^\ab_{\epsilon\zeta}
I^{\zeta\eta}_{\mn}
I^{\gd}_{\eta\theta}
I^{\theta\epsilon}_\rs
\ .
\end{align}
It is expressible entirely in terms of $\delta^\mu_\nu$.
\par
It can be helpful to separate $G^\mn$ into a second derivative part $G_a$ and a first derivative part $G_b$.
\begin{subequations}
\begin{align}
&G^\mn_a
=
\frac{1}{4}
\Big(
g^{\mu\alpha}g^{\nu\beta}
+
g^{\mu\beta}g^{\nu\alpha}
- g^\mn g^\ab
\Big)
g^\gd
F_{\ab\ \gd}^{\rs\ \pe}
\partial_\rho \partial_\sigma g_\pe
\ ,
\\
&G^\mn_b
=
\frac{1}{4}
\Big(
g^{\mu\alpha}g^{\nu\beta}
+
g^{\mu\beta}g^{\nu\alpha}
- g^\mn g^\ab
\Big)
g^\gd
F_{\ab\ \gd}^{\rs\ \pe}
g^{\kappa\lambda}
\Gamma_{\kappa\rs}
\Gamma_{\lambda\pe}
\ .
\end{align}
\end{subequations}
Here, $G_a^\mn$ is at least of first order in $h_\mn$ and $G^\mn_b$ is at least of second order.
\par
For the linear term of $G^\mn$ in $h_\mn$ we expand $G_a^\mn$.
All instances of $g^\mn$ are replaced by $\eta^\mn$ and we get:
\begin{align}
G^\mn_{\chno}
= (G_{a}^\mn)_{\chno}
&=
\frac{1}{2} \maP^{\mn\ab} \eta^\gd
F^{\rs\ \pe}_{\ab\ \gd}
h_{\rs,\pe}
\nonumber{}
\\
&=
\frac{1}{2} Q^{\mn\ \ab\ \gd} h_{\ab,\gd}
\labelt{n457}
\end{align}
The $Q$-tensor describes the tensor structure of $G^\mn_\chno$.
From $G^\mn_\chno$ together with the similar term of $H^\mn_\chno$ we would be able to derive the graviton propagator.
The $Q$-tensor is:
\begin{subequations}
\label{eqn:n459}
\begin{align}
Q^{\mn\ \ab\ \gd}
&=
\maP^{\mn\rs} \eta^\pe
F^{\ab\ \gd}_{\rs\ \pe}
\labelt{n459a}
\\
&=
\eta^\mn \maP^{\ab\gd}
-2
I^\mn_{\sigma\rho}
\maP^{\rho\phi\ab}
\eta_\pe
\maP^{\epsilon\sigma\gd}
\labelt{n459b}
\end{align}
\end{subequations}
Although it is not apparent from its definition, $Q^{\mn\ \ab\ \gd}$ is symmetric in all its pairs of indices, that is, it is symmetric under $\mn \leftrightarrow \ab$, $\mn \leftrightarrow \gd$ and $\ab \leftrightarrow \gd$.
\par
The quadratic term, $G^\mn_{\chn{2}}$, gets contributions from both $G_a^\mn$ and $G_b^\mn$.
For $G_b^\mn$ we replace all instances of $g^\mn$ by $\eta^\mn$ while for $G_a^\mn$ instances of $g^\mn$ should in turn be replaced by $-h^\mn$.
Then, for the $h^2$ terms of $G_a$ and $G_b$:
\begin{subequations}
\begin{align}
&
(G_{a}^\mn)_{\chn{2}}
=
-\Big(
\maP^\mn_{\rho\kappa} \ h^\rho_\sigma \ \maPi^{\sigma\kappa}_\ab
+ \frac{1}{2(\stdim-2)} h^\mn \eta_\ab
\Big)
Q^\ab_{\gd\ \pe}
h^{\gd,\pe}
-\frac{1}{2} F^{\mn\ \rs}_{\gd\ \pe} h_\rs h^{\gd,\pe}
\ ,
\labelt{gea1}
\\
&
(G_{b}^\mn)_{\chn{2}}
=
\frac{1}{2} Q^{\mn\ \ab\ \gd}
\ \eta^\rs
\ \Gamma_{\rho \ab}
\ \Gamma_{\sigma \gd}
\ .
\labelt{geb1}
\end{align}
\end{subequations}
These formulas can be used to define the three-graviton vertex.
Also, they can be used in the perturbative expansion of the classical equations of motion in Sec.~\ref{sec:PerturbativeExpansion1}.
For the Schwarzschild-Tangherlini\ metric computation only the last term of Eq.~\eqreft{gea1} with the $F$-tensor and Eq.~\eqreft{geb1} would contribute.
\chapter{Graviton Feynman Rules}
\labelx{sec:FeynmanRules}
The Feynman rules are derived using the expansion of the action, $S$, in the graviton field, $h_\mn$ from Ch.~\ref{sec:ExpansionsAround}.
In Sec.~\ref{sec:GravitonPropagator}, we analyze the quadratic term of the gauge-fixed gravitational action from which the graviton propagator in covariant \dDo-type gauge is derived.
In Sec.~\ref{sec:ScalarGraviton} we compute the matter interactions from the scalar part of the action.
Finally, in Sec.~\ref{sec:GravitonSelf} we focus on the graviton self-interaction vertices.
Here, we will derive explicit results for the three-graviton vertex as well as expressions for the general n-graviton vertex in terms of the Einstein tensor.
\section{Covariant Gauge Graviton Propagator}
\labelx{sec:GravitonPropagator}
To derive the graviton propagator we need the quadratic term of the gauge-fixed gravitational action.
From Eq.~\eqreft{n432} we get this term:
\begin{equation}
(S_\teEH + S_\tegf)_{\chn{2}}
=
\frac{1}{4}
\int \dDx
\ h_{\mn}^{,\rho}
\Big(
\delta^\rho_\sigma \mathcal{P}^\mn_\ab
-2(1-\oov{\xi}) \mathcal{P}^{\mn}_{\rho\kappa} \mathcal{P}_{\ab}^{\sigma\kappa}
\Big)
h^{\ab}_{,\sigma}
\labelt{n463}
\end{equation}
However, when we derive the Feynman rules we will use the rescaled graviton field $\hka_\mn = \frac{1}{\kappa}h_\mn$ and the rescaled action $\frac{2}{\kappa^2}(S_\teEH+S_\tegf)$.
Using the rescaled quantities, we get:
\begin{equation}
\frac{2}{\kappa^2}(S_\teEH + S_\tegf)_{\chn{2}}
=
\frac{1}{2}
\int \dDx
\ \hka_{\mn}^{,\rho}
\Big(
\delta^\rho_\sigma \mathcal{P}^\mn_\ab
-2(1-\oov{\xi}) \mathcal{P}^{\mn}_{\rho\kappa} \mathcal{P}_{\ab}^{\sigma\kappa}
\Big)
\hka^{\ab}_{,\sigma}
\labelt{n464}
\end{equation}
This is the quadratic graviton action in covariant \dDo-type gauge.
For $\xi=1$ the quadratic term simplifies significantly, similarly to the case of quantum electrodynamics in Feynman-'t Hooft gauge.
\par
We will derive the graviton propagator in momentum space. Transforming Eq.~\eqreft{n464} to momentum space, we get:
\begin{equation}
\frac{2}{\kappa^2}(S_\teEH + S_\tegf)_{\chn{2}}
=
\frac{1}{2}
\int \dDp{p}
\
\tilde \hka_{\mn}^\dagger
\ p^2
\Big(
\mathcal{P}^\mn_\ab
-2(1-\oov{\xi})
\mathcal{P}^{\mn}_{\rho\kappa}
\frac{p^\rho p_\sigma}{p^2}
\mathcal{P}_{\ab}^{\sigma\kappa}
\Big)
\tilde \hka^{\ab}
\labelt{n465}
\end{equation}
We want to invert the quadratic operator between the two gravitons.
Let us analyze its tensor structure:
\begin{equation}
\Delta^\mn_\ab =
\mathcal{P}^\mn_\ab
-2(1-\oov{\xi})
\mathcal{P}^{\mn}_{\rho\kappa}
\frac{p^\rho p_\sigma}{p^2}
\mathcal{P}_{\ab}^{\sigma\kappa}
\labelt{del1}
\end{equation}
It depends on the momentum of the graviton $p^\mu$ and the covariant gauge parameter $\xi$.
\par
We want to invert $\Delta^\ab_\mn$.
We can do this in several ways.
For example, in $D=4$ we can think of $\Delta^\ab_\mn$ as a matrix in 10-dimensional space.
In the inertial frame of $p_\mu$ we can then write $\Delta^\ab_\mn$ as an explicit 10-by-10 matrix and invert it with methods from linear algebra or computer algebra.
\par
Here, we will write an ansatz for the most general covariant quadratic operator depending on the graviton momentum.
There are 5 such independent operators, which we will define as follows:
\begin{subequations}
\label{eqn:ope1}
\begin{align}
&I^\mn_\ab = \frac{1}{2}(\delta^\mu_\alpha\delta^\nu_\beta + \delta^\mu_\beta\delta^\nu_\alpha)
\labelt{ope2}
\\
&T^\mn_\ab = \frac{1}{4} \eta^\mn \eta_\ab
\\
&C^\mn_\ab = \frac{1}{2}
\Big(
\eta^\mn \frac{p_\alpha p_\beta}{p^2}
+ \frac{p^\mu p^\nu}{p^2} \eta_\ab
\Big)
\\
&\maJ^\mn_\ab = I^\mn_{\rho\kappa} \frac{p_\sigma p^\rho}{p^2} I^{\sigma\kappa}_\ab
\\
&K^\mn_\ab = \frac{p^\mu p^\nu}{p^2} \frac{p_\alpha p_\beta}{p^2}
\end{align}
\end{subequations}
Any other quadratic operator built from $p^\sigma$ and $\eta^\mn$ can be written in terms of these operators.
\par
Let us now use an index free notation where matrix multiplication is understood.
Note that
\begin{equation}
\maP = I - 2 T
\ ,
\labelt{n471}
\end{equation}
and:
\begin{equation}
\Delta = \maP -2(1-\frac{1}{\xi})\maP\maJ\maP
\ .
\labelt{n472}
\end{equation}
We want to find $G = \Delta^{-1}$.
We write $G$ as a linear combination of the 5 operators in Eqs.~\eqreft{ope1}:
\begin{equation}
G = \alpha_1 I + \alpha_2 T + \alpha_3 C + \alpha_4 \maJ + \alpha_5 K
\ .
\labelt{pro6}
\end{equation}
The coefficients $\alpha_n$ are determined from the equation
\begin{equation}
\Delta G = I
\ ,
\labelt{pro4}
\end{equation}
which follows from the definition of $G$.
\par
The operators in Eqs.~\eqreft{ope1} are easily multiplied together.
During the calculations an antisymmetric operator enters as well:
\begin{equation}
A^\mn_\ab = \frac{1}{2}
\Big(
\eta^\mn \frac{p_\alpha p_\beta}{p^2}
- \frac{p^\mu p^\nu}{p^2} \eta_\ab
\Big)
\labelt{n475}
\end{equation}
Let us show some examples of the operators being multiplied together:
\begin{subequations}
\label{eqn:n476}
\begin{align}
&T T = \frac{D}{4} T
\labelt{n476a}
\\
&\maJ \maJ = \frac{1}{2} \maJ + \frac{1}{2} K
\\
&\maJ T \maJ = \frac{1}{4} K
\\
&T^\mn_\ab C^\ab_\gd =
\frac{1}{2} T^\mn_\gd
+ \frac{D}{8} C^\mn_\gd
+\frac{D}{8} A^\mn_\gd
\labelt{n476d}
\end{align}
\end{subequations}
These relations are derived by inserting the definitions of the operators from Eqs.~\eqreft{ope1} and manipulating the tensors $\eta^\mn$ and $p^\mu$.
In Eq.~\eqreft{n476d} it is necessary to show the indices since $A^\mn_\gd$ is antisymmetric.
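For example, the first relation follows directly from the trace of the flat metric:
\begin{align}
T^\mn_\ab T^\ab_\gd
=
\frac{1}{16}\, \eta^\mn \eta_\ab\, \eta^\ab \eta_\gd
=
\frac{D}{16}\, \eta^\mn \eta_\gd
=
\frac{D}{4}\, T^\mn_\gd
\ ,
\end{align}
since $\eta_\ab \eta^\ab = D$.
The remaining relations follow in the same way, only with more index bookkeeping.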
\par
Multiplying $G$ with $\Delta$, we find:
\begin{subequations}
\label{eqn:pro3}
\begin{align}
\hspace*{-1.5cm}
\Delta^\mn_\ab G^\ab_\gd
=\
&\alpha_1
I^\mn_\gd
\labelt{pro3a}
\\
&+
\Big(
\alpha_1
(-4+\frac{2}{\xi})
+\alpha_2
(-1+\frac{1}{2\xi})(D-2)
-
\alpha_3
\frac{1}{\xi}
\Big)
T^\mn_\gd
\labelt{pro3b}
\\
&+
\Big(
\alpha_1
(2-2\frac{1}{\xi})
+
\alpha_2
\frac{D-2}{4}(1-\frac{1}{\xi})
+
\alpha_3
(-\frac{D-2}{2}+\frac{D}{4\xi})
-
(\alpha_4+\alpha_5)
\frac{1}{2\xi}
\Big)
C^\mn_\gd
\labelt{pro3c}
\\
&+
\Big(
\alpha_1
2(-1+\frac{1}{\xi})
+
\alpha_4
\frac{1}{\xi}
\Big)
\maJ^\mn_\gd
\labelt{pro3d}
\\
&+
\Big(
\alpha_3
(D-2)(\frac{1}{2}-\frac{1}{\xi})
+
\alpha_5
\frac{1}{\xi}
\Big)
K^\mn_\gd
\labelt{pro3e}
\\
&+
\Big(
\alpha_2
\frac{D-2}{4}(-1+\frac{1}{\xi})
+
\alpha_3
(-\frac{D-2}{2} + \frac{D-4}{4\xi})
-
(\alpha_4+\alpha_5)
\frac{1}{2\xi}
\Big)
A^\mn_\gd
\labelt{pro3f}
\end{align}
\end{subequations}
This equation should be compared with Eq.~\eqreft{pro4} which is $\Delta G=I$ and we get 6 equations which determine $\alpha_n$.
They are not independent and $\alpha_n$ can e.g. be determined from the first 5 lines of Eq.~\eqreft{pro3}.
From the first line, that is Eq.~\eqreft{pro3a}, we get $\alpha_1=1$.
Then, from the fourth line, that is Eq.~\eqreft{pro3d}, we get:
\begin{align}
2(-1+\frac{1}{\xi})
+
\alpha_4
\frac{1}{\xi}
=
0
\labelt{n478}
\end{align}
Hence $\alpha_4 = -2(1-\xi)$.
From Eqs.~\eqreft{pro3b}, \eqreft{pro3c} and~\eqreft{pro3e}, we can then determine $\alpha_2$, $\alpha_3$ and $\alpha_5$.
\par
In the end, the coefficients are determined to be:
\begin{subequations}
\label{eqn:pro5}
\begin{align}
&\alpha_1 = 1
\labelt{pro5a}
\\
&\alpha_2 = -\frac{4}{D-2}
\\
&\alpha_3 = 0
\\
&\alpha_4 = -2(1-\xi)
\\
&\alpha_5 = 0
\end{align}
\end{subequations}
Inserting these values in Eq.~\eqreft{pro3} makes all terms but the first line disappear so that $\Delta G = I$.
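For example, the bracket multiplying $T^\mn_\gd$ in Eq.~\eqreft{pro3b} becomes
\begin{align}
\Big(
-4+\frac{2}{\xi}
\Big)
-\frac{4}{D-2}
\Big(
-1+\frac{1}{2\xi}
\Big)
(D-2)
=
-4+\frac{2}{\xi}
+4-\frac{2}{\xi}
=
0
\ ,
\end{align}
and the brackets in the remaining lines vanish in the same way.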
\par
Finally, we can insert the coefficients from Eqs.~\eqreft{pro5} into the ansatz for $G^\mn_\ab$ in Eq.~\eqreft{pro6} to get the tensor structure of the graviton propagator:
\begin{equation}
G
=
\maPi
-2(1-\xi)\maJ
\ .
\labelt{n480}
\end{equation}
Here we have used $\maPi$ which is the inverse operator to $\maP$ and is given by:
\begin{equation}
\maPi = I - \frac{4}{D-2} T
\ .
\labelt{n481}
\end{equation}
When $\xi=1$ the covariant operator reduces to $\maPi$ which is the well-known \dDo\ propagator.
In four space-time dimensions, that is $D=4$, it is even simpler, since then $\maPi=\maP$.
\par
In the end, $\Delta$ and $G$ are structurally rather similar
\begin{subequations}
\label{eqn:n482}
\begin{align}
&\Delta = \maP - 2(1-\frac{1}{\xi})\maP\maJ\maP
\labelt{n482a}
\\
&G = \maPi
-2(1-\xi)\maJ
\ ,
\end{align}
\end{subequations}
and it is the case that $G = \maPi\Delta\maPi$ when we also let $\xi\rightarrow\frac{1}{\xi}$.
\par
Let us check, that $G$ and $\Delta$ are really inverse operators to each other.
This is most easily done by splitting both $\Delta$ and $G$ into two parts according to their dependence on the covariant gauge parameter $\xi$.
In case of $\Delta$:
\begin{subequations}
\label{eqn:pro2}
\begin{align}
&\Delta
=
\Delc + \frac{1}{\xi} \Delgf
\labelt{pro2a}
\\
&\Delc
=
\mathcal{P}
-2
\mathcal{P}
\maJ
\mathcal{P}
\labelt{pro2b}
\\
&\Delgf
=
2
\mathcal{P}
\maJ
\mathcal{P}
\labelt{pro2c}
\end{align}
\end{subequations}
And in case of $G$:
\begin{subequations}
\label{eqn:n485}
\begin{align}
&G = G_\tecl + \xi G_\tegf
\labelt{n485a}
\\
&G_\tecl = \maPi - 2 \maJ
\\
&G_\tegf = 2 \maJ
\end{align}
\end{subequations}
We will multiply $\Delta$ and $G$ together using these expressions.
The statement under Eq.~\eqreft{n482} translates to ${G_\tecl = \maPi\Delta_\tecl\maPi}$ and ${G_\tegf = \maPi\Delta_\tegf\maPi}$.
\par
First, we note the following property of $\Delta_\tecl$ and/or $Q^{\ab\ \gd\ \mn}$:
\begin{align}
\Delc^\mn_\ab
=
Q^{\mn\ \sr}_\ab \frac{p_\sigma p_\rho}{p^2}
\end{align}
The $Q$-tensor was introduced in Eq.~\eqreft{n459}.
The indices $\ab$ on $Q^{\mn\ \sr}_\ab$ were lowered with the flat space metric, and there should be no confusion since $Q^{\ab\ \gd\ \mn}$ is symmetric in its three ``double indices'', e.g. $\ab \leftrightarrow \gd$.
The non-trivial property of $\Delta_\tecl$ and/or $Q$ is:
\begin{align}
p_\mu\Delc^\mn_\ab
=
p_\mu Q^{\mn\ \sr}_\ab \frac{p_\sigma p_\rho}{p^2}
= 0
\labelt{pro1}
\end{align}
This is an identity which follows from the definitions of $\Delta_\tecl$ and/or $Q$.
It can be derived by inserting the definition of $\Delta_\tecl$
\begin{align}
p_\mu\Delc^\mn_\ab
=
p_\mu\mathcal{P}^\mn_\ab
-2
p_\mu\mathcal{P}^{\mn}_{\rho\kappa}
\frac{p^\rho p_\sigma}{p^2}
\mathcal{P}_{\ab}^{\sigma\kappa}
\ ,
\labelt{n468}
\end{align}
and using the following relation:
\begin{align}
2
p_\mu\mathcal{P}^{\mn}_{\rho\kappa}
\frac{p^\rho p_\sigma}{p^2}
\mathcal{P}_{\ab}^{\sigma\kappa}
=
p_\mu \maP^\mn_\ab
\ .
\labelt{n469}
\end{align}
It can be verified by writing out the definition of $\maP$.
\par
Let us now multiply $\Delta$ and $G$ together using Eqs.~\eqreft{pro2a} and~\eqreft{n485a}.
We get:
\begin{equation}
\Delta G
=
\Delta_\tecl G_\tecl
+
\Delta_\tegf G_\tegf
+
\xi \Delta_\tecl G_\tegf
+
\frac{1}{\xi} \Delta_\tegf G_\tecl
\ .
\labelt{pro7}
\end{equation}
The last two terms, which depend on $\xi$, must necessarily vanish.
This is easily seen for $\Delta_\tecl G_\tegf$ since $G_\tegf$ contracts a momentum, $p^\mu$, to $\Delta_\tecl$ and from Eq.~\eqreft{pro1} we have that ${\Delta_\tecl}^\mn_\ab p_\mu$ vanishes.
\par
To check that $\Delta_\tegf G_\tecl$ vanishes and that the remaining terms reduce to the identity it is helpful to use the following identity:
\begin{align}
\maJ \maP \maJ
&=
\maJ^2 - 2 \maJ T \maJ
\nonumber{}
\\
&= \frac{1}{2} \maJ
\labelt{jpj1}
\ .
\end{align}
This can be checked using the relations given in Eqs.~\eqreft{n476}.
\par
Then, for $\Delta_\tegf G_\tecl$ we get
\begin{subequations}
\label{eqn:n486}
\begin{align}
\Delta_\tegf G_\tecl
&=
2\maP\maJ\maP(\maPi-2\maJ)
\labelt{n486a}
\\
&=
2\maP(\maJ-2\maJ\maP\maJ)
=
0
\ ,
\end{align}
\end{subequations}
where we used the identity Eq.~\eqreft{jpj1} to conclude that the second line vanishes.
For the remaining $\xi$-independent terms of Eq.~\eqreft{pro7} we get:
\begin{subequations}
\label{eqn:n487}
\begin{align}
\Delta_\tecl G_\tecl
+
\Delta_\tegf G_\tegf
&=
(\maP - 2\maP \maJ \maP)(\maPi - 2 \maJ)
+ 4 \maP \maJ \maP \maJ
\labelt{n487a}
\\
&=
I
-4\maP\maJ
+8\maP\maJ\maP\maJ
=
I
\end{align}
\end{subequations}
Again, we used Eq.~\eqreft{jpj1} to conclude that the second line reduces to $I$.
\par
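The inversion can also be checked numerically, in the spirit of the matrix picture mentioned earlier in this section.
The following minimal sketch (in Python with numpy; the numerical values of $p^\mu$ and $\xi$ are arbitrary, and the check does not depend on the signature convention) builds the operators of Eqs.~\eqreft{ope1} as explicit arrays and verifies that the contraction of $\Delta$ from Eq.~\eqreft{del1} with $G$ from Eq.~\eqreft{n480} gives the identity $I$:
\begin{verbatim}
import numpy as np

D = 4
eta = np.diag([1.0] + [-1.0] * (D - 1))   # flat metric (any signature works)
delta = np.eye(D)
xi = 2.7                                  # arbitrary covariant gauge parameter
p = np.array([3.0, 0.4, -1.1, 0.8])       # arbitrary off-shell momentum p^mu
pl = eta @ p                              # p_mu
p2 = p @ pl

# basis operators, all with index structure X^{mu nu}_{alpha beta}
I = 0.5 * (np.einsum('ma,nb->mnab', delta, delta)
           + np.einsum('mb,na->mnab', delta, delta))
T = 0.25 * np.einsum('mn,ab->mnab', eta, eta)
P = I - 2.0 * T
J = 0.25 / p2 * (np.einsum('m,a,nb->mnab', p, pl, delta)
                 + np.einsum('m,b,na->mnab', p, pl, delta)
                 + np.einsum('n,a,mb->mnab', p, pl, delta)
                 + np.einsum('n,b,ma->mnab', p, pl, delta))

def mul(A, B):
    # operator product (A B)^{mu nu}_{gamma delta}
    return np.einsum('mnab,abgd->mngd', A, B)

Delta = P - 2.0 * (1.0 - 1.0 / xi) * mul(mul(P, J), P)
Pinv = I - 4.0 / (D - 2) * T
G = Pinv - 2.0 * (1.0 - xi) * J

print(np.allclose(mul(Delta, G), I))      # prints True
\end{verbatim}
The sketch prints \texttt{True}, confirming that $G$ inverts $\Delta$ for a generic value of $\xi$ and a generic off-shell momentum.
\par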
The major result of this section is the graviton propagator in covariant \dDo-type gauge, which we can now write down:
\begin{figure}[H]
\centering
\begin{tikzpicture}
\node
(nfd1)
{
\includegraphics{fd1}
};
\node
[right=-0.5cm of nfd1]
{
$= \ \frac{i}{p^2+i\epsilon}G^\mn_\ab$
};
\end{tikzpicture}
\end{figure}
Here, $G^\mn_\ab$ is the tensor structure of the propagator and in components it is:
\begin{equation}
G^\mn_\ab
=
I^\mn_\ab-\frac{1}{D-2}\eta^\mn\eta_\ab
- 2(1-\xi)
I^{\mn}_{\rho\kappa} \frac{\ p^\rho p_\sigma}{p^2} I^{\kappa \sigma}_{\ab}
\labelt{n488}
\end{equation}
We have not found this result in earlier literature.
It is a generalization of the simpler \dDo\ propagator to which the covariant propagator reduces for $\xi=1$.
\section{Matter Interaction: Scalar Graviton Vertices}
\labelx{sec:ScalarGraviton}
Our matter field is the scalar field and scalar-graviton vertices describe the interaction of gravitation and matter.
In the classical limit, the scalars can be interpreted as neutral, non-spinning point particles.
The scalar graviton vertex rules come from the expansion of the matter term of the action which from Eq.~\eqreft{nn28} is:
\begin{equation}
S_\phi = \frac{1}{2} \int\dDx
\sqrt{-g}
\Big(
g^\mn \phi_{,\mu} \phi_{,\nu} - m^2 \phi^2
\Big)
\labelt{ver5}
\end{equation}
In the computation of the Schwarzschild-Tangherlini\ metric only the simplest vertex contributes, i.e. the $\phi^2 h$-vertex where a graviton interacts with a scalar.
In this section we will compute both this vertex and the next order vertex, $\phi^2 h^2$.
\par
The interactions of $\phi$ with $h_\mn$ come from the expansions of $g^\mn$ and $\sqg$.
These expansions were analyzed and computed to the necessary order in Sec.~\ref{sec:ExpansionsIn} in Eqs.~\eqreft{exp3} and~\eqreft{nn48}.
For example, the $\phi^2 h$ vertex gets one term when we replace $g^\mn$ by $-h^\mn$ and one term when we replace $\sqg$ by $\frac{1}{2}h^\mu_\mu$.
It becomes:
\begin{align}
(S_\phi)_{\chno}
=
-\frac{1}{2}
\int\dDx\
h^\mn \phi_{,\mu}\phi_{,\nu}
+\frac{1}{4}
\int\dDx\
h^\mu_\mu
\big(
\phi^{,\nu}\phi_{,\nu} - m^2 \phi^2
\big)
\labelt{ver4}
\end{align}
For the $\phi^2 h^2$ term we have to pick up quadratic terms in $h_\mn$ from $\sqg$ and $g^\mn$ and a mixed term from both $\sqg$ and $g^\mn$.
In the end, the scalar action when expanded to second order in $h_\mn$ becomes:
\begin{align}
S_\phi
\approx\
\frac{1}{2} &\int\dDx
\big(
\eta^\mn\phi_{,\mu}\phi_{,\nu}
-m^2\phi^2
\big)
\labelt{ver3}
\\
-\frac{1}{2}
&\int\dDx
\Big(
h^\mn \phi_{,\mu}\phi_{,\nu}
-\frac{1}{2} h^\mu_\mu
\big(
\phi^{,\nu}\phi_{,\nu} - m^2 \phi^2
\big)
\Big)
\nonumber{}
\\
+\frac{1}{2}
&\int\dDx
\Big(
h_\mn h^{\alpha\gamma} \maP^\mn_\ab \phi_{,\gamma}\phi^{,\beta}
-\frac{1}{4} h_\mn h^\ab \maP^\mn_\ab
\big(
\phi^{,\nu}\phi_{,\nu} - m^2 \phi^2
\big)
\Big)
\ .
\nonumber{}
\end{align}
From this action we can read off the $\phi^2 h$ and the $\phi^2 h^2$ vertices as well as the scalar propagator.
Again, we should scale $h_\mn$ into $\hka_\mn=\frac{1}{\kappa}h_\mn$ before doing so.
\par
From the $\phi^2 h$ action we get a vertex rule proportional to
\begin{equation}
\IVe_{\phi^2h}^{\mu\nu}(p,k,m) =
I_{\alpha\beta}^{\mu\nu} p^\alpha k^\beta - \frac{pk-m^2}{2} \eta^{\mu\nu}
\ ,
\labelt{ver1}
\end{equation}
where $p$ and $k$ are incoming and outgoing scalar momenta.
For example, $I^\mn_\ab p^\alpha k^\beta$ comes from the term $h^\mn \phi_{,\mu} \phi_{,\nu}$.
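Schematically, with an incoming scalar of momentum $p$ and an outgoing scalar of momentum $k$, and leaving out the overall factors of $i$ and $\kappa$, the derivatives in Eq.~\eqreft{ver4} turn into momenta so that the two structures multiplying $h^\mn$ become
\begin{align}
-\frac{1}{2}\, \phi_{,\mu}\phi_{,\nu}
\ \rightarrow\
-\frac{1}{2}\, p_\mu k_\nu
\ ,
\qquad
\frac{1}{4}\, \eta_\mn
\big(
\phi^{,\rho}\phi_{,\rho} - m^2\phi^2
\big)
\ \rightarrow\
\frac{1}{4}\, \eta_\mn
\big(
pk - m^2
\big)
\ ,
\end{align}
which reproduces the relative factor of $-\frac{1}{2}$ between the $I^\mn_\ab p^\alpha k^\beta$ and $\eta^\mn$ structures in Eq.~\eqreft{ver1}.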
\par
The $\phi^2 h^2$ vertex rule can be written as $2i\kappa^2 \tau_{\phi^2h^2}^{\mu\nu\ \alpha\beta}(p,k,m)$ where again $p$ and $k$ are scalar momenta:
\begin{align}
&\IVe_{\phi^2h^2}^{\mu\nu\ \alpha\beta}(p,k,m) =
\Big(
I^{\mu\nu\ \eta}_{\hphantom{\mu\nu\ \eta}\lambda} I^{\alpha\beta\ \lambda}_{\hphantom{\alpha\beta\ \lambda}\kappa}
I^{\sigma\rho\ \kappa}_{\hphantom{\sigma\rho\ \kappa}\eta}
- \frac{1}{4} \big( \eta^{\mu\nu} I^{\alpha\beta\ \sigma\rho}
+ \eta^{\alpha\beta} I^{\mu\nu\ \sigma\rho} \big)
\Big)
p_\sigma k_\rho - \frac{pk-m^2}{4} \mathcal{P}^{\mu\nu\ \alpha\beta}
\labelt{ver2}
\end{align}
We can then write down Feynman rules for the scalar propagator and the $\kappa$ and $\kappa^2$ scalar graviton interactions.
\par
The scalar propagator is:
\begin{equation}
\frac{i}{p^2-m^2+i\epsilon}
\ .
\end{equation}
Here $p$ is the momentum of the scalar and $m$ its mass.
\par
The $\phi^2 h$ vertex is:
\begin{figure}[H]
\centering
\begin{tikzpicture}
\node
(nfd2)
{
\includegraphics{fd2}
};
\node
[right=-0.5cm of nfd2]
{
$= \ -i\kappa \ \IVe_{\phi^2h}^{\mn}(p,k,m)$
};
\end{tikzpicture}
\end{figure}
The $\phi^2 h^2$ vertex is:
\begin{figure}[H]
\centering
\begin{tikzpicture}
\node
(nfd3)
{
\includegraphics{fd3}
};
\node
[right=-0.5cm of nfd3]
{
$= \ 2i\kappa^2 \ \IVe_{\phi^2h^2}^{\mn\ \ab}(p,k,m)$
};
\end{tikzpicture}
\end{figure}
The tensors $\IVe_{\phi^2h}^{\mn}(p,k,m)$ and $\IVe_{\phi^2h^2}^{\mn\ \ab}(p,k,m)$ were given in Eqs.~\eqreft{ver1} and~\eqreft{ver2} respectively.
As already mentioned, only the $\phi^2 h$ vertex will contribute to the computation of the Schwarzschild-Tangherlini\ metric.
\section{Gravitational Self-Interaction: n-Graviton Vertices}
\labelx{sec:GravitonSelf}
The graviton self-interaction vertices are important for the derivation of the Schwarzschild-Tangherlini\ metric.
We will first derive an explicit formula for the three-graviton vertex which we will use in Ch.~\ref{sec:PerturbativeExpansion2} to compute the $(G_N)^2$ correction to the Schwarzschild-Tangherlini\ metric in the \dDo-type gauge of Eq.~\eqreft{gau3}.
Then we will consider how the general n-graviton vertex can be written in terms of the functions $\Ghn{n}{\mn}$ and $\Hhn{n}{\mn}$ introduced in Sec.~\ref{sec:ExpansionOf}.
The general n-graviton vertex will be used in Sec.~\ref{sec:TheMetric} to relate the Schwarzschild-Tangherlini\ metric to Feynman diagrams.
\par
The three-graviton vertex is conveniently written in terms of the $U$-tensor introduced in Sec.~\ref{sec:ActionIn}.
From Eq.~\eqreft{n432} we have the three-graviton term of the action written in terms of the $U$-tensor:
\begin{equation}
S_{\chn{3}}
= \frac{1}{\kappa^2} \int\ddx\ h_\mn U^{\mn\ \ab\rho\ \gd\sigma} h_{\ab,\rho}h_{\gd,\sigma}
\end{equation}
\par
As before, we introduce $\hka_\mn$ instead of $h_\mn$.
Also, this time we will explicitly transform to momentum space and follow the prescription discussed around Eq.~\eqreft{exa2}.
Hence:
\begin{equation}
\hka_\mn = \int \dDp{l}
\ e^{-ilx} \thka_\mn(l)
\end{equation}
Inserting this into the three-graviton action we get:
\begin{align}
S_{\chn{3}}
= -\kappa \int
\dDp{l_\teon}
\dDp{l_\tetw}
\dDp{l_\tetr}\
(2\pi)^D\delta^D(l_\teon+l_\tetw+l_\tetr)
\thka_\mn^\teon
U^{\mn\ \ab\rho\ \gd\sigma}
\thka_{\ab}^\tetw
\thka_{\gd}^\tetr
l_{\tetw \rho}
l_{\tetr \sigma}
\ .
\labelt{sel1}
\end{align}
Here we have e.g. written $\thka_\mn^\teon$ instead of $\thka_\mn(l_\teon)$.
\par
We now want to make Eq.~\eqreft{sel1} symmetric in $\thka_\mn^\teon$, $\thka_\mn^\tetw$ and $\thka_\mn^\tetr$.
We do this by adding three copies of Eq.~\eqreft{sel1} where we cyclically permute the graviton fields.
We get:
\begin{align}
S_{\chn{3}}
= -\frac{\kappa}{3} \int
\dDp{l_\teon}
&\dDp{l_\tetw}
\dDp{l_\tetr}\
(2\pi)^D\delta^D(l_\teon+l_\tetw+l_\tetr)
\Big(
U^{\mn\ \ab\rho\ \gd\sigma}
l_{\tetw \rho}
l_{\tetr \sigma}
\nonumber{}
\\
&+
U^{\ab\ \gd\rho\ \mn\sigma}
l_{\tetr \rho}
l_{\teon \sigma}
+
U^{\gd\ \mn\rho\ \ab\sigma}
l_{\teon \rho}
l_{\tetw \sigma}
\Big)
\thka_\mn^\teon
\thka_\ab^\tetw
\thka_\gd^\tetr
\ .
\labelt{sel2}
\end{align}
Now we can read off the vertex rule from the integrand: we ignore the $\delta$-function and the $(2\pi)^D$ factors and multiply by $i\, 3!$, in accordance with the general prescription used for the $n$-graviton vertices below.
\par
For the three-graviton vertex, we then get:
\begin{figure}[H]
\centering
\begin{tikzpicture}
\node
(nfd8)
{
\includegraphics{fd8}
};
\node
[right=-.5cm of nfd8]
{
$=2i\kappa \IVe^{\mn\ \ab\ \gd}_{h^3} (l_1,l_2,l_3)$
};
\end{tikzpicture}
\end{figure}
Here, $\IVe^{\mn\ \ab\ \gd}_{h^3}(l_\teon,l_\tetw,l_\tetr)$ can be written in terms of $U^{\mn\ \ab\rho\ \gd\sigma}$ as:
\begin{equation}
\IVe^{\mn\ \ab\ \gd}_{h^3} (l_{(1)},l_{(2)},l_{(3)})
=
-
\Big(
U^{\mn\ \ab\rho\ \gd\sigma}
l_{\tetw \rho}
l_{\tetr \sigma}
+
U^{\ab\ \gd\rho\ \mn\sigma}
l_{\tetr \rho}
l_{\teon \sigma}
+
U^{\gd\ \mn\rho\ \ab\sigma}
l_{\teon \rho}
l_{\tetw \sigma}
\Big)
\labelt{be10}
\end{equation}
This, then, is the result for the three-graviton vertex.
The $U$-tensors were defined in Eqs.~\eqreft{ute1}, \eqreft{n430}, and~\eqreft{n431}.
\par
The general n-graviton vertex will now be analyzed.
Here, we use the expression for the action from Eqs.~\eqreft{n437} and~\eqreft{n438} which is:
\begin{align}
S_\teEH
+ S_\tegf
=
-
\int \dDx
\
h_\mn
\sum_{n=1..\infty} \oov{(n+1)}
\Big(
\Ghn{n}{\mn}(h,h,\ellipsis,h)
+
\frac{1}{\xi}
\Hhn{n}{\mn}(h,h,\ellipsis,h)
\Big)
\labelt{sel3}
\end{align}
From Eq.~\eqreft{n440} this expression was defined to be symmetric in all factors of $h_\mn$.
We need, however, to transform to momentum space.
This requires a small development of the notation.
\par
We need to find the Fourier transform of e.g. $\Ghn{n}{\mn}(h,h,\ellipsis,h)$.
We will focus on $n=3$ from which the general case can be inferred.
We insert the definition of $h_\mn$ in terms of $\tilde h_\mn$ into $\Ghn{3}{\mn}$:
\begin{align}
\Ghn{3}{\mn}(h_\mn,h_\mn,h_\mn)
=
\Ghn{3}{\mn}
\Big(
\int \dDp{l_\teon} e^{-ixl_\teon} \thmn_\mn^\teon
,
\int \dDp{l_\tetw} e^{-ixl_\tetw} \thmn_\mn^\tetw
,
\int \dDp{l_\tetr} e^{-ixl_\tetr} \thmn_\mn^\tetr
\Big)
\labelt{ber9}
\end{align}
Here, again, $\thmn_\mn^\teon = \thmn_\mn(l_\teon)$.
Since $\Ghn{3}{\mn}$ is linear in its arguments the integrals can be pulled out.
We get:
\begin{align}
\Ghn{3}{\mn}(h_\mn,h_\mn,h_\mn)
=
\int
\dDp{l_\teon}
\dDp{l_\tetw}
\dDp{l_\tetr}
\Ghn{3}{\mn}
\Big(
e^{-ixl_\teon} \thmn_\mn^\teon
,
e^{-ixl_\tetw} \thmn_\mn^\tetw
,
e^{-ixl_\tetr} \thmn_\mn^\tetr
\Big)
\labelt{ber8}
\end{align}
If $\Ghn{3}{\mn}$ did not depend on the partial derivatives of $h_\mn$ we could also pull the exponential factors out.
However, we can still do so if we replace each partial derivative $\partial_\mu$ by $-i l_\mu$, where $l_\mu$ is the momentum of the graviton that was differentiated.
We will use a tilde on $\Ghn{n}{\mn}$ to denote that the partial derivatives have been replaced by $-il_\mu$.
Since there are always two derivatives in $\Ghn{n}{\mn}$, the two factors of $i$ cancel each other and introduce a sign.
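For example, a term $h_{\ab,\rho}\, h_{\gd,\sigma}$ inside $\Ghn{2}{\mn}$ turns into
\begin{align}
h_{\ab,\rho}\, h_{\gd,\sigma}
\ \rightarrow\
(-il_{\teon \rho})(-il_{\tetw \sigma})\, \thmn_\ab^\teon \thmn_\gd^\tetw
=
-\, l_{\teon \rho}\, l_{\tetw \sigma}\, \thmn_\ab^\teon \thmn_\gd^\tetw
\end{align}
inside $\tGhn{2}{\mn}$.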
\par
With this notation we get:
\begin{align}
\Ghn{3}{\mn}(h_\mn,h_\mn,h_\mn)
=
\int
\dDp{l_\teon}
\dDp{l_\tetw}
\dDp{l_\tetr}
e^{-ix(l_\teon+l_\tetw+l_\tetr)}
\tGhn{3}{\mn}
\Big(
\thmn_\mn^\teon
,
\thmn_\mn^\tetw
,
\thmn_\mn^\tetr
\Big)
\labelt{ber7}
\end{align}
And then for the Fourier transform:
\begin{align}
\hspace*{-1cm}
\int \dDx \ e^{iqx}\
\Ghn{3}{\mn}(h_\mn,h_\mn,h_\mn)
=
\int
\dDp{l_\teon}
\dDp{l_\tetw}
\dDp{l_\tetr}
(2\pi)^D \delta^D(l_\teon+l_\tetw+l_\tetr)
\tGhn{3}{\mn}
\Big(
\thmn_\mn^\teon
,
\thmn_\mn^\tetw
,
\thmn_\mn^\tetr
\Big)
\labelt{leq3}
\end{align}
The general case can then be inferred from this example.
\par
Thus, after going to momentum space, the integrand of $\frac{2}{\kappa^2}(S_\teEH+S_\tegf)$ from Eq.~\eqreft{sel3} becomes:
\begin{equation}
\sum_{\noi}
-\frac{2\kappa^{n-1}}{n+1} \thka_\mn^\teze
\Big(
\tGhn{n}{\mn}
(\thka_\mn^\teon,..,\thka_\mn^\tenn)
+
\frac{1}{\xi}
\tHhn{n}{\mn}
(\thka_\mn^\teon,..,\thka_\mn^\tenn)
\Big)
\labelt{sel4}
\end{equation}
Here the momentum-conserving $\delta$-function and factors of $2\pi$ were ignored in the integrand.
Due to the properties of $\Ghn{n}{\mn}$ and $\Hhn{n}{\mn}$ this expression is symmetric in all the factors of $\thka_\mn^{(i)}$ when integrated.
\par
From Eq.~\eqreft{sel4} we can read off the $(n+1)$-graviton self-interaction vertex by multiplying by $\big(i(n+1)!\big)$.
The ${(n+1)}$-graviton vertex rule becomes:
\begin{figure}[H]
\centering
\begin{tikzpicture}
\node
(nfd6)
{
\includegraphics{fd6}
};
\node
[right=-.5cm of nfd6]
{
$=i\ n!\ \kappa^{n-1}\ $
$\IVe_{h^{n+1}}^{\mn\ \mu_1\nu_1\ \mu_2\nu_2\ ...\ \mu_n\nu_n}(q,l_1,l_2,...,l_n)$
};
\end{tikzpicture}
\end{figure}
The tensor $\IVe_{h^{n+1}}^{\mn\ \mu_1\nu_1\ \mu_2\nu_2\ ...\ \mu_n\nu_n}(q,l_1,l_2,...,l_n)$ in terms of $\Ghn{n}{\mn}$ and $\Hhn{n}{\mn}$ becomes:
\begin{align}
\IVe_{h^{n+1}}^{\mn\ \mu_1\nu_1\ \mu_2\nu_2\ ...\ \mu_n\nu_n}(q,l_\teon,l_\tetw,...,l_\tenn)
\thmn_{\mu_1\nu_1}^\teon
\thmn_{\mu_2\nu_2}^\tetw
\ellipsis
\thmn_{\mu_n\nu_n}^\tenn
=
-2
\Big(
&\tGhn{n}{\mn}
\big(
\thmn_{\mu_1\nu_1}^\teon,
\thmn_{\mu_2\nu_2}^\tetw, \ellipsis,
\thmn_{\mu_n\nu_n}^\tenn
\big)
\nonumber{}
\\
+\frac{1}{\xi}
&\tHhn{n}{\mn}
\big(
\thmn_{\mu_1\nu_1}^\teon,
\thmn_{\mu_2\nu_2}^\tetw, \ellipsis,
\thmn_{\mu_n\nu_n}^\tenn
\big)
\Big)
\labelt{ber6}
\end{align}
This, then, is the $(n+1)$-graviton vertex expressed in terms of the Einstein tensor and the analogous tensor $H^\mn$ through the tensors $\Gpz^\mn$ and $\Hpz^\mn$.
This result and the three-graviton vertex in terms of the $U$-tensor are the major results of this section.
As it stands in Eq.~\eqreft{ber6} it is not obvious that the vertex rule is symmetric in the $n+1$ graviton indices and/or momenta since e.g. it does not depend explicitly on $q^\mu$.
It is, however, symmetric in all indices and momenta due to momentum conservation which corresponds to the partial integrations used to make the original integral symmetric in Eq.~\eqreft{n440}.
When this vertex rule is used to derive the Schwarzschild-Tangherlini\ metric, the graviton with momentum $q^\mu$ plays the asymmetrical role of the external graviton while the remaining gravitons are contracted to a scalar line.
\chapter{Schwarzschild-Tangherlini\ Metric from Amplitudes}
\labelx{sec:STM}
In this chapter we will show that the Schwarzschild-Tangherlini\ metric can be derived from the three-point vertex function of a massive scalar interacting with a graviton.
First, in Sec.~\ref{sec:GaugeFixed} we will analyze the equations of motion in covariant gauge in detail.
Then, in Sec.~\ref{sec:PerturbativeExpansion1} we will show how the classical equations of motion can be solved perturbatively in a similar fashion as Feynman expansions.
Finally in Sec.~\ref{sec:TheMetric} we will relate the metric to the three-point vertex function.
\section{Gauge-Fixed Action and Equations of Motion}
\labelx{sec:GaugeFixed}
The gauge-fixed action was discussed in Sec.~\ref{sec:GaugeFixing} and from Eq.~\eqreft{act3} it is:
\begin{align}
S
&=
\frac{2}{\kappa^2}(S_{EH}+S_{gf}) + \int \dDx \sqrt{-g}\ \mathcal{L}_\phi
\labelt{act4}
\\
&=
\int \dDx \sqrt{-g}
\bigg(
\frac{2R}{\kappa^2} \
+\
\mathcal{L}_\phi
\bigg)
+
\int \dDx
\frac{1}{\kappa^2 \xi} \eta^{\sigma \rho} G_\sigma G_\rho
\nonumber
\end{align}
We want to find the classical equations of motion derived from the action by the variational principle $\delta S=0$.
We will refer to these as the \cgE\ equations; they are similar to the Einstein field equations but include a gauge breaking term.
\par
The variation of the general covariant terms of $S_\teEH$ produces the terms from the Einstein field equations and was given in Eqs.~\eqreft{ein1} and~\eqreft{n214}.
In Eq.~\eqreft{hmn1} the tensor $H^\mn$ was defined to be analogous to the Einstein tensor only derived from $S_\tegf$ instead of $S_\teEH$:
\begin{align}
\delta S_\tegf =
\frac{1}{\xi}
\int
\dDx
\
\eta^\rs
G_\rho
\delta G_\sigma
=
-
\frac{1}{\xi}
\int \dDx \
\sqrt{-g}
H^\mn \delta h_\mn
\ .
\labelt{dgs2}
\end{align}
We will now find $H^\mn$ for the generalized gauge function which was introduced in Eq.~\eqreft{gau3} and which we reprint here:
\begin{equation}
G_\sigma =
(1-\alpha) \ \partial_\mu (h^\mu_\sigma - \frac{1}{2} \eta^\mu_\sigma h_\nu^\nu)
+ \alpha\ g^\mn \Gamma_{\sigma\mn}
\ .
\labelt{gau2}
\end{equation}
The gauge function can be rewritten in a simple form using the tensors $\hat h^\mn = g^\mn-\eta^\mn$ and $\Gamma^{\sigma\ab}_{\rho\mn}$ introduced in Eqs.~\eqreft{nn45} and~\eqreft{gam2} respectively:
\begin{subequations}
\begin{align}
G_\sigma
&=
(1-\alpha) \ \eta^{\mn} \Gamma^{\rho\ab}_{\sigma\mn} h_{\ab,\rho}
+ \alpha\ g^\mn \Gamma^{\rho\ab}_{\sigma\mn} h_{\ab,\rho}
\ .
\labelt{gau4}
\\
&=
(\eta^\mn + \alpha(g^\mn-\eta^\mn) )
\Gamma^{\rho\ab}_{\sigma\mn} h_{\ab,\rho}
\\
&=
(\eta^\mn + \alpha \hhatu{\mn})
\Gamma^{\rho\ab}_{\sigma\mn} h_{\ab,\rho}
\ .
\labelt{gac1c}
\end{align}
\end{subequations}
In the third line, we have the resulting expression for $G_\sigma$.
Here it is evident that $\alpha$ scales the non-linear terms of $G_\sigma$ since these come from $\hat h^\mn$.
\par
With Eq.~\eqreft{gac1c} it is straightforward to find $\delta G_\sigma$.
We use that
\begin{equation}
\delta \hat h^\mn = \delta(g^\mn)=-g^{\mu\alpha}g^{\nu\beta}\delta g_\ab
\ ,
\end{equation}
and that, naturally, $\delta g_\mn = \delta h_\mn$.
For $\delta G_\sigma$ we then get:
\begin{equation}
\delta G_\sigma
=
(\eta^\mn + \alpha \hhatu{\mn})
\Gamma^{\rho\ab}_{\sigma\mn} \delta h_{\ab,\rho}
-
\alpha
\Gamma^{\rho\ab}_{\sigma\mn} h_{\ab,\rho}
g^{\mu\gamma}g^{\nu\delta} \delta h_{\gd}
\ .
\labelt{dgs1}
\end{equation}
The partial derivative on $\delta h_\mn$ in the first term makes it necessary to do a partial integration.
\par
We insert $G_\sigma$ and $\delta G_\sigma$ into $\delta S_\tegf$ from Eq.~\eqreft{dgs2} and get:
\begin{subequations}
\begin{align}
\delta S_{gf}
&=
\frac{1}{\xi}
\int
\dDx
\
\eta^\rs
G_\rho
\delta G_\sigma
\\
&=
\frac{1}{\xi}
\int \dDx \
G^\sigma
\Big(
(\eta^\mn + \alpha \hhatu{\mn})
\Gamma^{\rho\gd}_{\sigma\mn} \delta h_{\gd,\rho}
-
\alpha
\Gamma^{\rho\ab}_{\sigma\mn} h_{\ab,\rho}
g^{\mu\gamma}g^{\nu\delta} \delta h_{\gd}
\Big)
\\
&=
-\frac{1}{\xi}
\int \dDx \
\Big(
\Gamma^{\rho\gd}_{\sigma\mn}
\partial_\rho
\big(
G^\sigma
(\eta^\mn + \alpha \hhatu{\mn})
\big)
+
\alpha
G^\sigma
\Gamma^{\rho\ab}_{\sigma\mn} h_{\ab,\rho}
g^{\mu\gamma}g^{\nu\delta}
\Big)
\delta h_{\gd}
\end{align}
\end{subequations}
Here, we have treated $G_\sigma$ as a Lorentz tensor and raised its index with $\eta^\mn$.
Comparing with the expression for $\delta S_\tegf$ in Eq.~\eqreft{dgs2} we can read off $H^\mn$ and we get:
\begin{align}
\sqrt{-g} H^\mn
&=
\Hpz^\mn
\nonumber{}
\\
&= \alpha G^\rho \Gamma_{\rho\ab} g^{\alpha\mu}g^{\beta\nu}
+
\Gamma^{\sigma\mn}_{\rho\ab}
\partial_\sigma
\Big(
G^\rho
\big(
\eta^{\ab} + \alpha\hhatu{\ab}
\big)
\Big)
\ .
\labelt{hte1}
\end{align}
We see that, in some sense, $\Hpz^\mn$ is simpler than $H^\mn$.
This is in contrast to the Einstein tensor, where $G^\mn$ is defined without a $\sqg$ while $\Gpz^\mn = \sqg G^\mn$.
\par
The tensor $H^\mn$ enters the Einstein equations as a gauge breaking term added to the Einstein tensor:
\begin{equation}
G^{\mn} + \frac{1}{\xi} H^{\mn} = -\frac{\ \kappa^2}{4} T^{\mn}
\ .
\labelt{ein3}
\end{equation}
This is the \cgE\ equation which follows from $\delta S=0$ and which we will now analyze.
\par
We expect the \cgE\ equation to describe classical general relativity.
Hence, we expect the metric to satisfy the Einstein field equations:
\begin{equation}
G^\mn = -\frac{\ \kappa^2}{4} T^\mn
\ .
\end{equation}
If the metric satisfies this equation, then $H^\mn$ is forced to disappear.
The conclusion is that, if we expect to get a solution from classical general relativity, we must have the additional equation $H^\mn=0$.
\par
Let us see if we can deduce that $H^\mn$ vanishes directly from the \cgE\ equation.
We find that the metric obeys an additional simple equation which follows from the \cgE\ equation by taking the covariant derivative on each side.
Since both $D_\mu G^\mn =0$ and $D_\mu T^{\mn}=0$ we get that Eq.~\eqreft{ein3} implies:
\begin{equation}
D_\mu H^\mn = 0
\ .
\labelt{gau5}
\end{equation}
We interpret this additional equation as our gauge (coordinate) condition.
We will now show that in a perturbative expansion we can conclude from Eq.~\eqreft{gau5} that also $H^\mn=0$.
This is done indirectly by deducing that $G_\sigma$ must vanish, from which it follows that $H^\mn=0$.
We insert the definition of the covariant derivative from e.g. Weinberg~\cite{Weinberg:1972kfs} into Eq.~\eqreft{gau5}:
\begin{equation}
\sqrt{-g} D_\mu H^\mn
=
\partial_\mu
\big(
\Hpz^\mn
\big)
+
\Gamma^\nu_\ab \Hpz^\ab
\end{equation}
We want to expand this equation perturbatively in $G_N$.
We then know that each term in the expansion must disappear by itself.
We assume that the graviton field is at least of linear order in $G_N$ from which it follows that both $\Gamma^\nu_\ab$ and $\Hpz^\mn$ are so too.
To linear order in $G_N$ we then get:
\begin{equation}
\sqrt{-g} D_\mu H^\mn
\approx
\partial_\mu \Hpz^\mn
\ .
\end{equation}
For $H^\mn$ (which coincides with $\Hpz^\mn$ at this order) we find, to linear order:
\begin{equation}
H^\mn
\approx
\Gamma^{\sigma\mn}_{\rho\ab} \eta^\ab \partial_\sigma G^\rho
\ .
\end{equation}
To linear order in $G_N$ we then conclude that Eq.~\eqreft{gau5} reduces to:
\begin{equation}
D_\mu H^\mn
\approx
\Gamma^{\sigma\mn}_{\rho\ab} \eta^\ab \partial_\sigma \partial_\mu G^\rho
\ .
\end{equation}
This can be simplified further by inserting the definition of $\Gamma^{\sigma\mn}_{\rho\ab}$.
We then get:
\begin{equation}
D_\mu H^\mn
\approx
\frac{1}{2} \partial_\sigma\partial^\sigma G^\nu
\end{equation}
To first order in $G_N$ we get that $\partial^2 G_\sigma$ disappears.
With the boundary condition that $G_\sigma$ goes to zero at infinity we conclude that $G_\sigma=0$ to first order in $G_N$.
\par
This means that $G_\sigma$ is at least second order in $G_N$ which implies the same for $\Hpz^\mn$.
We can then follow the same line of reasoning and conclude that to second order in $G_N$ we also have $\partial^2 G_\sigma=0$.
By induction we conclude that the equation $\partial^2G_\sigma=0$ is satisfied to all orders in $G_N$ and that then $G_\sigma=0$.
This implies that $H^\mn$ also disappears.
\par
The conclusion is that the \cgE\ equation is equivalent to the Einstein field equations with the gauge/coordinate condition $G_\sigma=0$.
Hence our equations of motion in the classical limit are the Einstein field equations,
\begin{align}
G^\mn = -\frac{\kappa^2}{4} T^\mn
\ ,
\end{align}
and the gauge condition:
\begin{align}
G_\sigma = 0
\ .
\labelt{dgs3}
\end{align}
They are combined into a single equation in the \cgE\ equation:
\begin{align}
G^\mn + \frac{1}{\xi} H^\mn = -\frac{\kappa^2}{4} T^\mn
\ .
\end{align}
The fact that the \cgE\ equation implies the gauge condition in Eq.~\eqreft{dgs3} is a significant result of this chapter.
\par
We will now rewrite this equation in a similar way as Weinberg~\cite{Weinberg:1972kfs} splitting it into linear and non-linear parts in $h_\mn$.
This will be the starting point of the perturbative expansion of the \cgE\ equation in the next section.
We separate $G^\mn$ and $H^\mn$ into linear and non-linear parts in $h_\mn$.
Thus as in Sec.~\ref{sec:ExpansionsIn} we have
\begin{align}
G^\mn
&=
\sum_{n=1..\infty} (G^\mn)_{\chn{n}}
\\
&=
G^\mn_{\chno} + G^\mn_\tenl
\ ,
\end{align}
and similarly for $H^\mn$.
Here $G_\tenl^\mn$ and $H_\tenl^\mn$ are the non-linear parts of the expansions of $G^\mn$ and $H^\mn$ in $h_\mn$.
We then rewrite the \cgE\ equation as:
\begin{align}
G^\mn_\chno + \frac{1}{\xi} H^\mn_\chno = -\frac{\kappa^2}{4} T^\mn-G^\mn_\tenl-\frac{1}{\xi}H^\mn_\tenl
\labelt{cge1}
\end{align}
The part $G^\mn_\tenl$ can be interpreted as gravitational energy-momentum.
We can then introduce the total energy-momentum tensor $\tau^\mn$:
\begin{align}
\tau^\mn = T^\mn + \frac{4}{\kappa^2} G^\mn_\tenl
\ .
\labelt{dgs7}
\end{align}
We will now see that this tensor is locally conserved.
\par
In terms of the energy-momentum tensor, $\tau^\mn$, the Einstein field equations become:
\begin{align}
G^\mn_\chno = -\frac{\kappa^2}{4} \tau^\mn
\ .
\end{align}
Inserting the formula for the linear term of $G^\mn$ from Eq.~\eqreft{n457} we get:
\begin{align}
Q^{\mn\ \ab\ \gd} h_{\ab,\gd} = -\frac{\kappa^2}{2} \tau^\mn
\end{align}
Here, the $Q$-tensor was defined in Eq.~\eqreft{n459}.
An important property of the $Q$-tensor was discussed around Eq.~\eqreft{pro1}, namely that $Q^{\mn\ \ab\ \gd} p_\beta p_\gamma p_\delta=0$ where $p^\mu$ is any space-time vector.
Recall also, that $Q^{\mn\ \ab\ \gd}$ is symmetric in its three pairs of indices e.g. $\mn\leftrightarrow \ab$.
Using these properties we get that when the fields obey the equations of motion the energy-momentum tensor is conserved:
\begin{align}
0 =
\partial_\mu Q^{\mn\ \ab\ \gd} h_{\ab,\gd}
=
-\frac{\kappa^2}{2} \partial_\mu \tau^\mn
\end{align}
Thus $\tau^\mn$ is locally conserved on the equations of motion.
\par
An analogous equation can be derived for $H^\mn_\tenl$.
When the fields obey the \cgE\ equation we have that $H^\mn$ vanishes.
We then get:
\begin{align}
H^\mn_\chno = - H^\mn_\tenl
\labelt{dgs4}
\end{align}
That is, the linear part of $H^\mn$ equals the non-linear part with a negative sign.
The linear part of $H^\mn$ is:
\begin{equation}
H^\mn_\chno = \maP^\mn_{\sigma\kappa} \maP^{\rho\kappa}_\gd \partial^\sigma \partial_\rho h^\gd
\ .
\end{equation}
This can, for example, be derived from Eqs.~\eqreft{gac1c} and~\eqreft{hte1} of this section.
This should be compared to the operator $\Delta_\tegf$ from Eq.~\eqreft{pro2c}.
Similarly to Eq.~\eqreft{n486a} which is ${\Delta_\tegf G_\tecl=0}$, we get that:
\begin{equation}
{G_\tecl}^\mn_\ab H^\ab_\chno = 0
\end{equation}
Using this and the equation of motion Eq.~\eqreft{dgs4} we conclude that the non-linear part of $H^\mn$ satisfies:
\begin{equation}
{G_{(c)}}^\mn_\ab H^\ab_\tenl = 0
\labelt{dgs5}
\end{equation}
This is a non-trivial equation for $H^\mn_\tenl$.
Recall, that the $G^\mn_\ab$-tensors are the tensor structure of the graviton propagator.
\par
We summarize Eq.~\eqreft{dgs5} and the conservation law of $\tau^\mn$ in the equations:
\begin{subequations}
\label{eqn:eom1}
\begin{align}
&{G_\tegf}_\ab^\mn \tau^\ab = 0
\ ,
\labelt{eom2}
\\
&{G_\tecl}^\mn_\ab H^\ab_\tenl = 0
\ .
\labelt{eom3}
\end{align}
\end{subequations}
These are consequences of the analysis of the \cgE\ equation.
In the next section these equations will be used to derive an expression for the metric independent of the covariant gauge parameter $\xi$.
\par
In the final part of this section we will discuss the equations of motion in terms of the tensors $\Gpz^\mn$ and $\Hpz^\mn$ introduced in Eqs.~\eqreft{n449} and~\eqreft{n452}.
This analysis is useful since the n-graviton vertices are easily related to $\Gpz^\mn$.
Multiplying the \cgE\ equation by $\sqg$ we get:
\begin{align}
\Gpz^\mn + \frac{1}{\xi} \Hpz^\mn = -\frac{\kappa^2}{4} \sqg \ T^\mn
\ .
\end{align}
And we have the two simpler equations:
\begin{subequations}
\begin{align}
&\Gpz^\mn = -\frac{\kappa^2}{4} \sqg\ T^\mn
\\
&\Hpz^\mn = 0
\end{align}
\end{subequations}
Note that the linear terms of $\Gpz^\mn$ and $G^\mn$ are equal and similarly for $\Hpz^\mn$:
\begin{subequations}
\label{eqn:dgs6}
\begin{align}
&\Gpz^\mn_\chno = G^\mn_\chno
\\
&\Hpz^\mn_\chno = H^\mn_\chno
\labelt{dgs6b}
\end{align}
\end{subequations}
Thus, if we split the equations of motion in terms of $\Gpz^\mn$ and $\Hpz^\mn$ into linear and non-linear parts as in Eq.~\eqreft{cge1} we get simple relations between the non-linear parts.
\par
For example, we have
\begin{align}
\Hpz^\mn_\chno = -\Hpz^\mn_\tenl
\ ,
\end{align}
and due to Eqs.~\eqreft{dgs6} and Eq.~\eqreft{dgs4} we get:
\begin{align}
\Hpz_\tenl^\mn = H_\tenl^\mn
\labelt{hej3}
\end{align}
Similarly, using the Einstein field equations, we get an expression for the energy-momentum tensor, $\tau^\mn$, in terms of $\Gpz^\mn$:
\begin{align}
\tau^\mn = \sqg\ T^\mn + \frac{4}{\kappa^2} \Gpz^\mn_\tenl
\ .
\end{align}
This should be compared to Eq.~\eqreft{dgs7} which relates $\tau^\mn$ to $G^\mn$.
\par
Let us briefly remark on a special property of \dDo-gauge before moving on.
In this gauge $\alpha$ is zero and we see that $\Hpz^\mn$ from Eq.~\eqreft{hte1} is linear in the graviton field.
Hence, in this gauge $\Hpz_\tenl^\mn$ disappears and according to Eq.~\eqreft{hej3} so does $H^\mn_\tenl$.
From the quantum field theoretic point of view this is explained by the fact that $G_\sigma$ is linear in $h_\mn$ in this gauge and then the gauge dependence of the self-interaction vertices disappear.
In this case ``Landau gauge'' is possible where we let $\xi\rightarrow0$.
We can summarize this special property of \dDo-gauge as follows: in this gauge, the graviton field couples directly to the total energy-momentum tensor, $\tau^\mn$.
\section{Perturbative Expansion of the Classical Equations of Motion}
\labelx{sec:PerturbativeExpansion1}
We will now turn to the perturbative expansion of the \CEEq.
Our starting point is Eq.~\eqreft{cge1} where, after inserting the expressions for $G_\chno^\mn$ and $H_\chno^\mn$ in terms of $h_\mn$, we get:
\begin{align}
\Big(
\eta^\rs \mathcal{P}^{\mn\ab}
-2(1-\oov{\xi}) \mathcal{P}^{\mn\rho\phi} \eta_\pe \mathcal{P}^{\ab\sigma\epsilon}
\Big)
h_{\ab,\rs}
=
-\frac{\kappa^2}{2} \tau^\mn
-
2\frac{1}{\xi}H^\mn_\tenl
\labelt{peo1}
\end{align}
Again, in \dDo\ gauge only $\tau^\mn$ would be on the right-hand side since in this gauge $H^\mn_\tenl$ disappears.
It is advantageous to transform to momentum space.
This also makes the similarity with the Feynman diagram expansion clear.
In momentum space, Eq.~\eqreft{peo1} becomes:
\begin{align}
q^2
\Delta^{\mn\ab}
\tilde h_{\ab}
=
\frac{\kappa^2}{2} \tilde\tau^\mn
+
2\frac{1}{\xi} \tilde H^\mn_\tenl
\labelt{cge2}
\ .
\end{align}
Here, $\Delta^{\mn\ab}$ is the operator from Eq.~\eqreft{del1} which is the quadratic operator in the gauge-fixed Einstein-Hilbert action.
The dependence on $q^\mu$ in Eq.~\eqreft{cge2} is not written explicitly, although all objects, including $\Delta^{\mn\ab}$, depend on the momentum variable $q^\mu$.
Note that our perturbative expansion is in the long-range limit.
Hence, the momentum variable $q^\mu_\bot$ should be taken to be small, analogously to the classical limit of the Feynman diagrams.
\par
In Sec.~\ref{sec:GravitonPropagator}, the inverse of this operator, which is the graviton propagator in covariant gauge, $G^\ab_\mn$, was analyzed.
Multiplying with this operator on each side of Eq.~\eqreft{cge2} we get:
\begin{align}
\tilde h_{\ab}
=
\frac{ G_{\ab\mn}}{q^2}
\Big(
\frac{\kappa^2}{2} \tilde\tau^\mn
+
\frac{2}{\xi} \tilde H^\mn_\tenl
\Big)
\ .
\labelt{lin1}
\end{align}
Using Eqs.~\eqreft{eom1} we can rewrite it in a form independent of $\xi$:
\begin{subequations}
\label{eqn:lin2}
\begin{align}
\tilde h_{\ab}
&=
\frac{1}{q^2}
\Big(
\frac{\kappa^2}{2}
{G_\tecl}_{\ab \mn} \tilde\tau^\mn
+
2{G_\tegf}_{\ab \mn} \tilde H^\mn_\tenl
\Big)
\labelt{lin2a}
\\
&=
\frac{\maPi_{\ab\mn}}{q^2}
\Big(
\frac{\kappa^2}{2} \tilde\tau^\mn
+
2 \tilde H^\mn_\tenl
\Big)
\ .
\end{align}
\end{subequations}
The second line is equivalent to setting $\xi=1$ from the beginning, which is allowed since the metric is independent of $\xi$.
\par
We will now expand Eq.~\eqreft{lin1}, or equivalently Eq.~\eqreft{lin2}, perturbatively in $G_N$ using the techniques introduced in Ch.~\ref{sec:ExpansionsAround}.
We assume $T^\mn$ to be of zeroth order in $G_N$ and the graviton field, $h_\mn$, to be of first order.
Then $\tau^\mn$ is of zeroth order and $H_\tenl^\mn$ is of second order.
Inserting the expansions into Eq.~\eqreft{lin1} we get:
\begin{align}
\sum_{n=1..\infty}
\tilde h_\ab^{\cGn{n}}
=
\frac{G_{\ab\mn}}{q^2}
\Big(
\frac{\kappa^2}{2}
\sum_{\nzi}
\tilde\tau^\mn_{\cGn{n}}
+
2\frac{1}{\xi}
\sum_{n=2..\infty}
(\tilde H^\mn_\tenl
)_{\cGn{n}}
\Big)
\ .
\labelt{ger1}
\end{align}
Recall, that $\kappa^2=32\pi G_N$.
Equating terms of equal order in $G_N$ we get:
\begin{align}
&\tilde h_\ab^{\cGn{1}}(q)
=
\frac{G_{\ab\mn}}{q^2}
\frac{\kappa^2}{2}
\tilde T^\mn(q)
\labelt{leq1}
\end{align}
And for $n\geq2$ we get:
\begin{align}
&\tilde h_\ab^{\cGn{n}}
=
\frac{G_{\ab\mn}}{q^2}
\Big(
\frac{\kappa^2}{2}
\tilde\tau^\mn_{G^{n-1}}
+
\frac{2}{\xi}
\big(
\tilde H^\mn_\tenl
\big)_{\cGn{n}}
\Big)
\ .
\labelt{exp2}
\end{align}
In general, the material energy-momentum tensor $T^\mn$, which is included in $\tau^\mn$, can depend in a non-trivial way on $h_\mn$, for instance if the matter part has its own equation of motion.
In the rest of this chapter we will focus on the special case of a single inertial point particle, that is, the Schwarzschild-Tangherlini\ metric.
We will then assume that any gravitational corrections to $T^\mn$ are local and that $T^\mn$ is exact at zeroth order in the perturbative expansion.
\par
With this assumption we can rewrite Eq.~\eqreft{exp2} into:
\begin{align}
&\tilde h_\ab^{\cGn{n}}
=
2\frac{G_{\ab\mn}}{q^2}
\big(
\tilde G^\mn_\tenl
+
\frac{1}{\xi}\tilde H^\mn_\tenl
\big)_{\cGn{n}}
\ .
\labelt{leq2}
\end{align}
Here, we still assume $n\geq2$.
Note that this equation is equivalent to demanding:
\begin{equation}
\big(
G^\mn
+
\frac{1}{\xi}
H^\mn
\big)
_{\cGn{n}} = 0
\ .
\labelt{ger2}
\end{equation}
This follows directly from the \cgE\ equation and the assumption that $T^\mn$ is exact at zeroth order.
\par
We will now specialize to the first terms in the expansion.
From Eq.~\eqreft{leq1}, the first order correction to $g_\mn$ is given by:
\begin{align}
h^{\cGno}_\mn =
\frac{\kappa^2}{2}
\int \dDp{q}
e^{-iqx}\
\frac{G_{\mn\ab}}{q^2}
\
\tilde T^\ab
\labelt{ger3}
\end{align}
This will correspond to the tree diagram in the Feynman graph expansion.
At the end of this section we will compute this simple example explicitly.
\par
Let us go to the case $n=2$ where we can use Eq.~\eqreft{leq2}:
\begin{align}
&\tilde h_\ab^{\cGn{2}}
=
2\frac{G_{\ab\mn}}{q^2}
\big(
\tilde G^\mn_\tenl
+
\frac{1}{\xi}\tilde H^\mn_\tenl
\big)_{\cGn{2}}
\ .
\labelt{leq4}
\end{align}
We have to find the $G_N^2$ term of $G^\mn_\tenl$ and $H^\mn_\tenl$.
We can use Eq.~\eqreft{exp1} for this and we get
\begin{align}
(G^\mn_\tenl)_{\cGn{2}}
=
G_{\chn{2}}^\mn
(
h^{\cGno}_\mn ,
h^{\cGno}_\mn
)
\ ,
\labelt{ger4}
\end{align}
and similarly for $H_\tenl^\mn$.
These expressions, however, are in position space and it is necessary to transform them to momentum space so that we can insert them into Eq.~\eqreft{leq4}.
\par
The transformation to momentum space follows the same pattern as the arguments around Eq.~\eqreft{leq3}.
We treat the case of $G_{\chn{2}}^\mn$ explicitly and that of $H_{\chn{2}}^\mn$ follows the same line of reasoning.
We insert the expressions of $h_\mn$ in terms of $\tilde h_\mn$:
\begin{align}
G_{\chn{2}}^\mn
(
h^{\cGno}_\mn ,
h^{\cGno}_\mn
)
=
G_{\chn{2}}^\mn
(
\int \dDp{l_1} e^{-il_1x} \tilde h^{\cGno}_\mn(l_1) ,
\int \dDp{l_2} e^{-il_2x} \tilde h^{\cGno}_\mn(l_2)
)
\labelt{ger5}
\end{align}
The function $G_{\chn{2}}^\mn$ is linear in its arguments and we can pull out the integrals:
\begin{align}
G_{\chn{2}}^\mn
(
h^{\cGno}_\mn ,
h^{\cGno}_\mn
)
=
\int \dDp{l_\teon} \dDp{l_\tetw}
G_{\chn{2}}^\mn
(
e^{-il_1x} \tilde h^{\cGno}_\mn(l_\teon) ,
e^{-il_2x} \tilde h^{\cGno}_\mn(l_\tetw)
)
\labelt{ger6}
\end{align}
We can pull out the exponential factors as well, if we substitute derivatives in $G_{\chn{2}}^\mn$ with $-il_\teon^\mu$ or $-il_\tetw^\mu$ which, as in Sec.~\ref{sec:GravitonSelf}, we symbolize with a tilde on $G_{\chn{2}}^\mn$:
\begin{align}
G_{\chn{2}}^\mn
(
h^{\cGno}_\mn ,
h^{\cGno}_\mn
)
&=
\int \dDp{l_\teon} \dDp{l_\tetw}
e^{-i(l_\teon+l_\tetw)x}
\tilde G_{\chn{2}}^\mn
(
\tilde h^{\cGno}_\mn(l_\teon) ,
\tilde h^{\cGno}_\mn(l_\tetw)
)
\labelt{ger7}
\\
&=
\int \dDp{q}
e^{-iqx}
\int \dDp{l}
\tilde G_{\chn{2}}^\mn
(
\tilde h^{\cGno}_\mn(l) ,
\tilde h^{\cGno}_\mn(q-l)
)
\nonumber{}
\end{align}
We can now read off $G^\mn_{\chn{2}}$ in momentum space and for $(\tilde G^\mn_\tenl)_{\cGn{2}}$ we get:
\begin{align}
(\tilde G^\mn_\tenl)_{\cGn{2}}
=
\int \dDp{l}
\tilde G_{\chn{2}}^\mn
(
\tilde h^{\cGno}_\mn(l) ,
\tilde h^{\cGno}_\mn(q-l)
)
\labelt{leq5}
\end{align}
The integral on the right-hand side looks similar to a loop integral.
\par
Inserting Eq.~\eqreft{leq5} into the expression for $\tilde h_\mn^{\cGn{2}}$ from Eq.~\eqreft{leq4} we get:
\begin{align}
&\tilde h_\mn^{\cGn{2}}
=
2
\frac{G_{\mn\ab}}{q^2}
\int \dDp{l}
\bigg(
\tilde G^\ab_{\chn{2}}
\Big(
\tilde h^{\cGno}_\mn(l) ,
\tilde h^{\cGno}_\mn(q-l)
\Big)
+
\frac{1}{\xi} \tilde H^\ab_{\chn{2}}
\Big(
\tilde h^{\cGno}_\mn(l) ,
\tilde h^{\cGno}_\mn(q-l)
\Big)
\bigg)
\ .
\labelt{lin3}
\end{align}
The second order metric is derived from a quadratic energy-momentum function of the first order metric, $G_{\chn{2}}^\mn$, and a gauge dependent term, $H_{\chn{2}}^\mn$.
Let us insert the expression for $\tilde h_\mn^\cGno$ in terms of $\tilde T^\mn$ from Eq.~\eqreft{leq1} which will make the relation to the Feynman graph expansion clear.
It is most convenient to focus only on $\tilde G^\ab_{\chn{2}}$ first:
\begin{align}
\int \dDp{l}
\tilde G^\ab_{\chn{2}}
&\Big(
\tilde h^{\cGno}_\mn(l) ,
\tilde h^{\cGno}_\mn(q-l)
\Big)
=
\int \dDp{l}
\tilde G^\ab_{\chn{2}}
\Big(
\frac{\kappa^2}{2}\frac{G_{\mn\ab}\tilde T^\ab(l)}{l^2} ,
\frac{\kappa^2}{2}\frac{G_{\mn\ab}\tilde T^\ab(l-q)}{(l-q)^2}
\Big)
\nonumber{}
\\
&=
\frac{\kappa^4}{4}
\int \dDp{l}
\frac{1}{l^2(l-q)^2}
\tilde G^\ab_{\chn{2}}
\Big(
G_{\mn\ab}\tilde T^\ab(l) ,
G_{\mn\ab}\tilde T^\ab(l-q)
\Big)
\labelt{ger8}
\end{align}
In the second line we used that $\tilde G^\ab_{\chn{2}}$ is a linear function.
Notice that the momentum dependence of the propagators, $G_{\mn\ab}$, is not written explicitly but understood from the context.
In the second line the relevant integral is reminiscent of loop integrals from quantum field theory.
\par
In the final part of this section we will derive an expression for $h^{\cGn{3}}_\mn$.
We need the $G_N^3$ term of $G_\tenl^\mn$ and $H_\tenl^\mn$ and again, Eq.~\eqreft{exp1} can be used:
\begin{align}
(G_\tenl^\mn)_{\cGn{3}}
=
2 G^\mn_{\chn{2}}
\big(
h^{\cGno}_\mn
,
h^{\cGn{2}}_\mn
\big)
+
G^\mn_{\chn{3}}
\big(
h^{\cGno}_\mn
,
h^{\cGno}_\mn
,
h^{\cGno}_\mn
\big)
\labelt{ger9}
\end{align}
Again, an analogous equation holds for $H_\tenl^\mn$.
The Fourier transformation to momentum space is similar to the case of $h^{\cGn{2}}_\mn$ above and the case discussed around Eq.~\eqreft{leq3}:
\begin{align}
G^\mn_{\chn{2}}
\big(
h^{\cGno}_\mn
,
h^{\cGn{2}}_\mn
\big)
=
\int \dDp{q}
e^{-iqx}
\int \dDp{l}
\tilde G_{\chn{2}}^\mn
\Big(
\tilde h^{\cGno}_\mn(l) ,
\tilde h^{\cGn{2}}_\mn(q-l)
\Big)
\labelt{ge10}
\end{align}
And:
\begin{align}
&G^\mn_{\chn{3}}
\big(
h^{\cGno}_\mn
,
h^{\cGno}_\mn
,
h^{\cGno}_\mn
\big)
=
\int \dDp{l_\teon} \dDp{l_\tetw} \dDp{l_\tetr}
e^{-i(l_\teon + l_\tetw + l_\tetr)x}
\tilde G^\mn_{\chn{3}}
\Big(
\tilde h^{\cGno}_\mn (l_\teon)
,
\tilde h^{\cGno}_\mn (l_\tetw)
,
\tilde h^{\cGno}_\mn (l_\tetr)
\Big)
\nonumber{}
\\
&\qquad\qquad
=
\int \dDp{q}
e^{-iqx}
\int \dDp{l_\teon} \dDp{l_\tetw}
\tilde G^\mn_{\chn{3}}
\Big(
\tilde h^{\cGno}_\mn (l_\teon)
,
\tilde h^{\cGno}_\mn (l_\tetw-l_\teon)
,
\tilde h^{\cGno}_\mn (q-l_\tetw)
\Big)
\labelt{ge11}
\end{align}
We can now write down the equation for $h_\ab^{\cGn{3}}$:
\begin{align}
&\tilde h_\ab^{\cGn{3}}
=
2
\frac{G_{\ab\mn}}{q^2}
\Big(
(\tilde G^\mn_\tenl)_{\cGn{3}}
+
\frac{1}{\xi}
(\tilde H^\mn_\tenl)_{\cGn{3}}
\Big)
\ .
\labelt{ge12}
\end{align}
Where
\begin{align}
(\tilde G^\mn_\tenl)_{\cGn{3}}
=
&\ 2\int \dDp{l}
\tilde G_{\chn{2}}^\mn
\Big(
\tilde h^{\cGno}_\mn(l) ,
\tilde h^{\cGn{2}}_\mn(q-l)
\Big)
\labelt{lin4}
\\
+
&\int \dDp{l_\teon} \dDp{l_\tetw}
\tilde G^\mn_{\chn{3}}
\Big(
\tilde h^{\cGno}_\mn (l_\teon)
,
\tilde h^{\cGno}_\mn (l_\tetw-l_\teon)
,
\tilde h^{\cGno}_\mn (q-l_\tetw)
\Big)
\nonumber{}
\end{align}
And an equivalent equation holds for $(\tilde H^\mn_\tenl)_{\cGn{3}}$.
These formulas are similar to two-loop graphs.
\par
In the next section, Sec.~\ref{sec:TheMetric}, the equation for $\tilde h_\ab^{\cGn{3}}$ and that for $\tilde h_\ab^{\cGn{2}}$ will be compared to explicit Feynman diagram expansions.
First, as a concrete example to show how the formulas of this section work, we will use the simple formula for $\tilde h_\ab^{\cGn{1}}$ to derive the first order contribution to the Schwarzschild-Tangherlini\ metric.
\subsection{Newton Potential in Arbitrary Dimension}
\labelx{sec:NewtonPotential}
As an example we will apply Eq.~\eqreft{leq1} to derive the first order correction to the metric in arbitrary dimensions.
As $T^\mn$ we will take the point particle energy-momentum tensor of special relativity (see e.g.~\cite{Weinberg:1972kfs}).
We take a point particle of momentum $k^\mu$ and mass $m$ and use the covariant notation introduced in Eqs.~\eqreft{nn32}:
\begin{subequations}
\begin{align}
T^\mn
&=
\frac{k^\mu k^\nu}{m} \delta^{D-1}(x_\bot)
\ ,
\\
&=
m
\ \eta_\prl^\mn
\ \delta^{D-1}(x_\bot)
\ .
\labelt{ber1}
\end{align}
\end{subequations}
That this tensor describes an inertial particle is easily verified in the reference frame of $k^\mu$.
In momentum space this becomes:
\begin{align}
\tilde T^\mn (q)
=
\int \dDx\
e^{iqx}\
T^\mn
=
2\pi\delta(q_\prl)\
m\
\eta_\prl^\mn
\ .
\labelt{ber2}
\end{align}
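This follows directly from Eq.~\eqreft{ber1}: in the rest frame of $k^\mu$ the $\delta^{D-1}(x_\bot)$ removes the spatial integrations, leaving $m\,\eta_\prl^\mn\int dx_\prl\ e^{iq_\prl x_\prl}=2\pi\delta(q_\prl)\ m\ \eta_\prl^\mn$.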
In this expression it is more easily seen that the tensor is Lorentz covariant.
The energy-momentum tensor $T^\mn$ is conserved since the $\delta$-function sets $q_\prl=0$ and the remaining contraction $q^\bot_\mu \eta^\mn_\prl$ vanishes.
Then, using Eq.~\eqreft{leq1} we get:
\begin{subequations}
\label{eqn:lin5}
\begin{align}
\tilde h_\ab^{\cGn{1}}(q)
&=
\frac{G_{\ab\mn}}{q^2}
\frac{\kappa^2}{2}
2\pi\delta(q_\prl)\
m\
\eta_\prl^\mn
\ ,
\labelt{lin5a}
\\
&=
\frac{\kappa^2m}{2}
\frac{2\pi\delta(q_\prl)}{q_\bot^2}
G_{\ab\mn}\ \eta^\mn_\prl
\ .
\end{align}
\end{subequations}
Since $T^\mn$ is conserved we get:
\begin{align}
G_{\ab\mn}\ \tilde T^\mn = \maPi_{\ab\mn} \tilde T^\mn
\ .
\labelt{ber3}
\end{align}
The tensor structure of $\tilde h_\ab^{\cGn{1}}(q)$ becomes:
\begin{subequations}
\label{eqn:ber4}
\begin{align}
\maPi_{\ab\mn}\ \eta^\mn_\prl
&=
\eta^\prl_\ab - \frac{1}{D-2}\eta_\ab
\ ,
\labelt{ber4a}
\\
&=
\frac{D-3}{D-2}
\big(
\eta^\prl_\ab - \frac{1}{D-3} \eta^\bot_\ab
\big)
\ .
\end{align}
\end{subequations}
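The second line follows from the first by writing $\eta_\ab=\eta^\prl_\ab+\eta^\bot_\ab$ and collecting terms:
\begin{align}
\eta^\prl_\ab - \frac{1}{D-2}\eta_\ab
=
\eta^\prl_\ab - \frac{1}{D-2}\big(\eta^\prl_\ab+\eta^\bot_\ab\big)
=
\frac{D-3}{D-2}
\big(
\eta^\prl_\ab - \frac{1}{D-3}\eta^\bot_\ab
\big)
\ .
\end{align}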
This should be inserted in Eq.~\eqreft{lin5}.
\par
We now have the final expression for $\tilde h_\mn^\cGno$:
\begin{align}
\tilde h_\mn^\cGno
=
\frac{\kappa^2m}{2}
\frac{2\pi\delta(q_\prl)}{q_\bot^2}
\frac{D-3}{D-2}
\big(
\eta^\prl_\mn - \frac{1}{D-3} \eta^\bot_\mn
\big)
\ .
\labelt{ber5}
\end{align}
Using a Fourier integral introduced later in Eq.~\eqreft{vv34} we can go to position space.
The leading order contribution to $h_\mn$ becomes:
\begin{align}
h^{\cGno}_\mn
= -\frac{\mu}{\sqrt{-x_\bot^2}^{D-3}}
\big(
\etat{\mn} - \frac{1}{D-3} \etar{\mn}
\big)
\ .
\labelt{me32}
\end{align}
Here, we used the Schwarzschild-Tangherlini\ parameter from Eq.~\eqreft{mud1}.
This result is in agreement with Refs.~\cite{Collado:2018isu,Emparan:2008eg}.
It is independent of the gauge parameter $\alpha$ since this parameter only enters in the graviton self-interaction vertices.
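As a consistency check we can specialize Eq.~\eqreft{me32} to $D=4$: assuming that the Schwarzschild-Tangherlini\ parameter of Eq.~\eqreft{mud1} reduces to $\mu=2G_Nm$ in four dimensions (the standard Schwarzschild value), we get with $r=\sqrt{-x_\bot^2}$
\begin{align}
h^{\cGno}_\mn\Big|_{D=4}
=
-\frac{2G_Nm}{r}
\big(
\etat{\mn} - \etar{\mn}
\big)
\ ,
\end{align}
so that in the rest frame of $k^\mu$ the leading correction is $g_{00}=1-\frac{2G_Nm}{r}$, in agreement with the Newtonian limit and the linearized Schwarzschild metric in harmonic gauge.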
The metric contribution in Eq.~\eqreft{me32} obeys the first order gauge condition $(G_\sigma)^{\cGno}=0$ where:
\begin{align}
(G_\sigma)^{\cGno} = G_\sigma^{\chno}(h^{\cGno}_\mn)
= \partial^\rho \maP^\mn_\rs h^{\cGno}_\mn
\end{align}
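As a quick check, note that at linear order the gauge function in Eq.~\eqreft{gau2} reduces to the harmonic condition $\partial^\mu h_{\mu\sigma}-\frac{1}{2}\partial_\sigma\,\eta^\ab h_\ab$ for any value of $\alpha$. In momentum space, on the support of $\delta(q_\prl)$ where $q^\mu=q_\bot^\mu$, Eq.~\eqreft{ber5} then gives, using $q_\bot^\mu\eta^\prl_{\mu\sigma}=0$, $q_\bot^\mu\eta^\bot_{\mu\sigma}=q^\bot_\sigma$ and $\eta^\mn\big(\eta^\prl_\mn-\frac{1}{D-3}\eta^\bot_\mn\big)=-\frac{2}{D-3}$:
\begin{align}
q^\mu \tilde h^{\cGno}_{\mu\sigma}
-\frac{1}{2}\, q_\sigma\ \eta^\mn \tilde h^{\cGno}_\mn
\ \propto\
-\frac{1}{D-3}\, q^\bot_\sigma
+\frac{1}{D-3}\, q^\bot_\sigma
=
0
\ .
\end{align}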
\section{The Metric from the Three-Point Vertex Function}
\labelx{sec:TheMetric}
It is now the goal to derive the Schwarzschild-Tangherlini\ metric from a Feynman diagram expansion.
The relevant amplitude will be the exact three-point vertex function of a massive scalar interacting with a graviton which is shown in Fig.~\ref{fig:amp2}.
\begin{figure}[h]
\centering
\captionsetup{width=.8\linewidth}
\includegraphics[width=5cm]{fd9}
\caption{
A massive scalar interacts with a graviton.
The diagram represents the exact three-point vertex function, $i\maM_\teve^\mn$.
\flabel{amp2}
\ffig{fd9}
}
\label{fig:amp2}
\end{figure}
Here the scalar momentum $k^\mu$ is on-shell, $k^2=m^2$, but the graviton momentum, $q^\mu$, is considered arbitrary.
\par
We will use the ideas of Refs.~\cite{Bjerrum-Bohr:2018xdl,Galusha:cand} to reduce the n-loop integrals in the expansion of $\maM_\teve^\mn$ to simple couplings to classical sources.
Essentially, the idea is that in the classical limit only triangle graphs contribute.
Two-loop examples of such diagrams are found in Fig.~\ref{fig:amp3}.
\begin{figure}[h]
\centering
\captionsetup{width=0.8\linewidth}
\begin{tikzpicture}
\node(fd110)
{
$i\maM_{\text{2-loop}}^\mn=$
};
\node(fd11)[right=0cm of fd110]
{
\includegraphics[width=4.4cm]{fd11}
};
\node[right=0cm of fd11]
{
$+$
};
\node(fd12)[right=.5cm of fd11]
{
\includegraphics[width=4.4cm]{fd12}
};
\node[right=0cm of fd12]
{
$+\ .\ .\ .$
};
\end{tikzpicture}
\caption{
$\maM_{\text{$2$-loop}}^\mn$ is expanded in Feynman diagrams.
The ellipsis denotes two permutations of the second diagram.
Other 2-loop diagrams exist which do not contribute in the classical limit.
\flabel{amp3}
\ffig{fd11,fd12}
}
\label{fig:amp3}
\end{figure}
A property of such triangle graphs is that, if the scalar line is removed, the remaining graph is a graviton tree diagram.
The result for the n-loop integrals in the classical limit is that they reduce to a convolution integral which contracts $(n+1)$ classical sources to the graviton tree diagram that remains after the scalar line is removed.
\par
We have not found exact formulas for how the reduction is done in the general case, in part because both Refs.~\cite{Bjerrum-Bohr:2018xdl,Galusha:cand} work only in $D=4$.
We will then not focus on exact equalities but instead indicate how the Schwarzschild-Tangherlini\ metric can be derived from the three-point function given that the integrals reduce as explained.
To this aim, our Feynman rules for the general n-graviton vertex in terms of $\Gpz^\mn$ and $\Hpz^\mn$ will be helpful.
\par
In this section, we will analyze the one-loop and two-loop cases.
From this the general case can be worked out.
In the next chapter we will then compute the one-loop case explicitly using the formulas developed in this section and the three-graviton vertex from Sec.~\ref{sec:GravitonSelf}.
\par
Let us first discuss how, in the end, the Schwarzschild-Tangherlini\ metric will be related to the three-point function.
Then afterwards we will show how this relation can be derived.
The three-point function $\maM_\teve^\mn$ is interpreted as the source of the graviton field.
We relate $\maM_\teve^\mn$ to the tensors introduced in Sec.~\ref{sec:GaugeFixed}:
\begin{equation}
2\pi\delta(kq) \maM^\mn_{\text{vertex}}
=
-\kappa
\tilde \tau^\mn
-\frac{4}{\kappa\xi}
\tilde H_\tenl^\mn
\ .
\labelt{ver6}
\end{equation}
This is exactly the right-hand side of Eq.~\eqreft{cge1} (up to a factor).
It is then straightforward to relate the amplitude to the metric:
\begin{equation}
g_\mn = \eta_\mn
- \frac{\kappa}{2}
\int \frac{d^Dq\ \delta(kq)\ e^{-iq x}}{(2\pi)^{D-1}}
\frac{G_{\mn\ab}}{q^2}
\mathcal{M}_{\text{vertex}}^\ab
\ .
\labelt{ext2}
\end{equation}
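Indeed, inserting Eq.~\eqreft{ver6} into the momentum-space solution Eq.~\eqreft{lin1} gives
\begin{align}
\tilde h_\ab
=
-\frac{\kappa}{2}\
2\pi\delta(kq)\
\frac{G_{\ab\mn}}{q^2}\
\maM_\teve^\mn
\ ,
\end{align}
and the Fourier transform back to position space, together with $g_\mn=\eta_\mn+h_\mn$, reproduces Eq.~\eqreft{ext2}.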
This is an exciting formula relating the Lorentz covariant three-point function from quantum field theory to the generally covariant, all-order metric from general relativity.
Following similar arguments as those that led to Eq.~\eqreft{lin2} we can get an equation independent of $\xi$ relating $g_\mn$ and $\maM_\teve^\mn$.
We will write it in terms of $\tilde h_\mn$ instead of $g_\mn$:
\begin{equation}
\tilde h_\mn = \frac{\mathcal{P}^{-1}_{\mn\ab}}{q^2}
\Big(
\frac{\kappa^2}{2}
\tilde \tau^\ab
+2
\tilde H_{\text{non-linear}}^\ab
\Big)
\ .
\labelt{ver7}
\end{equation}
And, of course, $g_\mn = \eta_\mn + h_\mn$.
\par
Let us now look into the derivation of these formulas.
First, we will look at the tree-level contribution to the three-point function which comes from the diagram in Fig.~\ref{fig:amp6}.
\begin{figure}[h]
\centering
\captionsetup{width=.8\linewidth}
\includegraphics[width=5cm]{fd16}
\caption{
The tree-level contribution to the three-point vertex function.
In the classical limit, this becomes the point particle energy-momentum tensor.
\flabel{amp6}
\ffig{fd16}
}
\label{fig:amp6}
\end{figure}
This will show how the formulas are used in a concrete example.
Also the exact constant of proportionality between the three-point amplitude and $\tilde \tau^\mn$ is easily determined from this example.
\par
The scalar graviton vertex rules were derived in Sec.~\ref{sec:ScalarGraviton}.
The diagram in Fig.~\ref{fig:amp6} will be proportional to the $\phi^2 h$ vertex rule which from Eq.~\eqreft{ver1} is:
\begin{align}
\IVe_{\phi^2h}^{\mu\nu}(k,k-q,m)
&=
I_{\alpha\beta}^{\mu\nu} k^\alpha (k-q)^\beta - \frac{k(k-q)-m^2}{2} \eta^{\mu\nu}
\labelt{ver8}
\\
&=
I_{\alpha\beta}^{\mu\nu} k^\alpha (k-q)^\beta + \frac{kq}{2} \eta^{\mu\nu}
\ .
\nonumber{}
\end{align}
We have $\maM_\teee^\mn = -\kappa\IVe_{\phi^2h}^{\mu\nu}(k,k-q,m)$.
We can simplify the amplitude in the classical limit where we can neglect factors of the graviton momentum, $q^\mu$, in comparison to the particle momentum, $k^\mu$.
In the classical limit the amplitude then reduces to:
\begin{align}
\maM_\teee^\mn = -\kappa k^\mu k^\nu
= -\kappa m^2 \eta_\prl^\mn
\end{align}
According to Eq.~\eqreft{ver6} we then get $\tilde \tau^\mn$ as:
\begin{subequations}
\begin{align}
\tilde \tau^\mn
&=
-\frac{2\pi\delta(kq)}{\kappa} \maM_\teee^\mn
\\
&=
2\pi\delta(q_\prl)
\
m
\ \eta_\prl^\mn
\end{align}
\end{subequations}
This is exactly the energy-momentum tensor from Eq.~\eqreft{ber2} which describes an inertial point particle.
Thus, Eq.~\eqreft{ver6} is correct at tree-level.
\par
Let us turn to the one-loop contribution to the metric.
We will compute this exactly in the next chapter.
Here, we will only show how it can be related to the perturbative expansion of the classical equations of motion.
For the one-loop case only one diagram, the one in Fig.~\ref{fig:amp7}, contributes in the classical limit.
\begin{figure}[h]
\centering
\captionsetup{width=.8\linewidth}
\includegraphics[width=5cm]{fd7}
\caption{
In position space, this diagram is reduced to a local quadratic function of $h^\cGno_\mn$.
\flabel{amp7}\ffig{fd7}
}
\label{fig:amp7}
\end{figure}
The idea is that the massive propagator with momentum $(k+l)$ can be integrated away using the $l_\prl$-integration.
Then we can think of the amplitude, $\maM_\teol^\mn$, as two tree-diagrams of the type of Fig.~\ref{fig:amp6} contracted to the three-graviton vertex with a convolution integral.
\par
In the next chapter we will look at these integrals explicitly.
For now, however, the arguments will only be suggestive.
We take the three-graviton vertex rule from Eq.~\eqreft{ber6}:
\begin{equation}
i2\kappa
\IVe_{h^{3}}^{\mn\ \ab\ \gd}(q,l,l+q)
\tfmn_{\ab}(l)
\tfmn_{\gd}(l+q)
=
-i4\kappa
\Big(
\tGhn{2}{\mn}
\big(
\tfmn_{\ab}^{(l)},
\tfmn_{\gd}^{(l+q)}
\big)
+\frac{1}{\xi}
\tHhn{2}{\mn}
\big(
\tfmn_{\ab}^{(l)},
\tfmn_{\gd}^{(l+q)}
\big)
\Big)
\labelt{vv43}
\end{equation}
Here $\tilde f_\ab(l)$ and $\tilde f_\ab(l+q)$ are the tree-diagrams from Fig.~\ref{fig:amp6} after they have been propagated with the graviton propagator.
Thus, they are the first order correction to the metric, $\thmn^{\cGno}_\mn$, from Eq.~\eqreft{ber5}.
The one-loop amplitude then becomes something like:
\begin{align}
\maM_\teol^\mn
\sim
\int \dDp{l}
\IVe_{h^{3}}^{\mn\ \ab\ \gd}(q,l,l+q)
\thmn^\cGno_{\ab}(l)
\thmn^\cGno_{\gd}(l+q)
\labelt{vv44}
\end{align}
Inserting Eq.~\eqreft{vv43} into Eq.~\eqreft{vv44} we recognize this equation as the source term of $\tilde h^{\cGn{2}}_\mn$ from Eq.~\eqreft{lin3}.
Thus, if the integrals reduce as discussed, this amplitude produces the correct second order contribution to the metric when propagated with the graviton propagator as in Eq.~\eqreft{ext2}.
\par
We will now analyze the two-loop case.
The two-loop triangle graphs which contribute in the classical limit were shown in Fig.~\ref{fig:amp3}.
There are four such graphs, three of them being permutations of each other.
Assuming that the integrals reduce as discussed in the classical limit, we can cut the scalar line into three pieces at the massive propagators.
The result is that three $\phi^2 h$ tree diagrams from Fig.~\ref{fig:amp6} are contracted to the 4-point graviton tree amplitude as depicted in Fig.~\ref{fig:amp5}.
\begin{figure}[h]
\centering
\captionsetup{width=0.8\linewidth}
\begin{tikzpicture}
\node(fd13)
{
\includegraphics{fd13}
};
\node(fd130)[right=-0.3cm of fd13]
{
$=$
};
\node(fd14)[right=-0.3cm of fd130]
{
\includegraphics{fd14}
};
\node(fd140)[right=-0.3cm of fd14]
{
$+\quad3\times$
};
\node(fd15)[right=-0.3cm of fd140]
{
\includegraphics{fd15}
};
\end{tikzpicture}
\caption{
The four-graviton tree amplitude contracted with three classical sources.
\flabel{amp5}
\ffig{fd13,fd14,fd15}
}
\label{fig:amp5}
\end{figure}
\par
In Fig.~\ref{fig:amp5} the shaded blob on the left-hand side is the graviton 4-point tree amplitude.
It is contracted to 3 symmetric classical sources which are depicted as smaller solid blobs.
On the right-hand side of Fig.~\ref{fig:amp5} the graviton tree amplitude is expanded in graviton self-interaction vertices.
The three distinct Feynman graphs of Fig.~\ref{fig:amp3}, which were permutations of each other, have now been reduced to three equivalent terms due to the three classical sources being contracted in a symmetric way.
\par
The terms on the right-hand side of Fig.~\ref{fig:amp5} should be compared to Eqs.~\eqreft{ge11} and~\eqreft{ge10} from the perturbative expansion of the classical equations of motion determining $\thmn^{\cGn{3}}_\mn$.
For example, in the last term two classical sources are contracted to a three-graviton vertex which gives $h^{\cGn{2}}_\mn$.
The three-graviton vertex together with a classical source is then contracted to another three-graviton vertex.
We thus relate the last term of Fig.~\ref{fig:amp5} to the first term of Eq.~\eqreft{lin4} which is:
\begin{equation}
\int \dDp{l}
\Big(
\tilde G_{\chn{2}}^\mn
\big(
\tilde h^{\cGno}_\mn(l) ,
\tilde h^{\cGn{2}}_\mn(q-l)
\big)
+
\frac{1}{\xi}
\tilde H_{\chn{2}}^\mn
\big(
\tilde h^{\cGno}_\mn(l) ,
\tilde h^{\cGn{2}}_\mn(q-l)
\big)
\Big)
\labelt{ge13}
\end{equation}
Similarly we relate the first term of Fig.~\ref{fig:amp5} to the last term of Eq.~\eqreft{lin4} which is:
\begin{equation}
\int \dDp{l_\teon} \dDp{l_\tetw}
\tilde G^\mn_{\chn{3}}
\Big(
\tilde h^{\cGno}_\mn (l_\teon)
,
\tilde h^{\cGno}_\mn (l_\tetw-l_\teon)
,
\tilde h^{\cGno}_\mn (q-l_\tetw)
\Big)
\labelt{ge14}
\end{equation}
This expression is thus related to the diagram where 3 classical sources meet the 4-graviton self-interaction vertex.
\par
In Eq.~\eqreft{lin4} there is a factor of 2 in front of Eq.~\eqreft{ge13} and a factor of unity in front of Eq.~\eqreft{ge14}.
In Fig.~\ref{fig:amp5} it looks as though there is a factor of 3 in front of Eq.~\eqreft{ge13} and a factor of unity in front of Eq.~\eqreft{ge14}.
We should, however, remember that the graviton self-interaction vertices are related to $\Gpz^\mn$ and $\Hpz^\mn$ with a numerical factor which depends on the number of gravitons.
The numerical factor is $(n!)$ for the ${(n+1)}$-graviton vertex.
Thus, instead, in Fig.~\ref{fig:amp5} Eq.~\eqreft{ge13} comes with a factor 12 and Eq.~\eqreft{ge14} comes with a factor 6 (along with other common factors which we disregard).
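Counting explicitly: each of the three equivalent diagrams contains two three-graviton vertices carrying a factor of $2!$ apiece, which gives $3\times2!\times2!=12$, while the single diagram with the four-graviton vertex carries $3!=6$.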
We conclude that also in Fig.~\ref{fig:amp5} the terms come with a correct relative factor of 2.
\par
There is also the difference between Fig.~\ref{fig:amp5} and Eq.~\eqreft{lin4} that the n-graviton vertices are related to $\Gpz^\mn$ and $\Hpz^\mn$ while Eq.~\eqreft{lin4} is in terms of $G^\mn$ and $H^\mn$.
However, in the final part of Sec.~\ref{sec:GaugeFixed} we discussed that $\Gpz^\mn$ and $G^\mn$ can be related to each other and similarly for $\Hpz^\mn$ and $H^\mn$.
\par
The general case can be deduced from these examples and is depicted in Fig.~\ref{fig:amp4}.
\begin{figure}[h]
\centering
\captionsetup{width=0.8\linewidth}
\begin{tikzpicture}
\node(fd100)
{
$i\maM_\tenp^\mn\sim$
};
\node(fd10)[right=0cm of fd100]
{
\includegraphics{fd10}
};
\end{tikzpicture}
\caption{
The shaded blob represents the $(n+2)$-graviton tree amplitude and the smaller solid blobs represent the $\phi^2h$-tree diagrams i.e. the point particle energy-momentum tensor.
\flabel{amp4}
\ffig{fd10}
}
\label{fig:amp4}
\end{figure}
Proving the general case requires, first, consideration of the general combinatorics of comparing the graviton tree-amplitudes to the terms of the classical expansion of the \cgE\ equation.
Here, the general formula for the n-graviton vertex turns out to be suitable for the inductive structure of the triangle diagrams.
Also, the n-loop integrals have to be reduced in the classical limit, so that the assumptions made in this section are fulfilled.
This is left for future investigations which would allow the general n-loop diagram to be reduced explicitly to the corresponding terms in the perturbative classical expansion.
\chapter{Schwarzschild-Tangherlini\ Metric at One-Loop Order}
\labelx{sec:PerturbativeExpansion2}
We will now compute the one-loop correction to the Schwarzschild-Tangherlini\ metric in \dDo-type gauge.
We will use Eq.~\eqreft{ext2} which relates the three-point vertex function to the metric.
First, in Sec.~\ref{sec:OneLoop} we go through the technical details of the Feynman diagram computation.
This includes evaluation of the one-loop triangle integrals in the classical limit and working out the tensor contractions of the three-graviton vertex.
In Sec.~\ref{sec:SecondOrder} we will then use Eq.~\eqreft{ext2} to derive the metric contribution in position space from the amplitude.
This requires us to contract the amplitude with the graviton propagator and perform a Fourier transform to position space.
\par
In space-time dimension $D=5$ a logarithm appears in position space.
This is analyzed in Sec.~\ref{sec:AppearanceOf} where the triangle integrals are treated more carefully and divergences are removed.
The logarithm is connected to a redundant gauge freedom, which also occurs in $D=4$.
Finally, in Sec.~\ref{sec:ClassicalDerivation} we present a derivation of the second order contribution to the Schwarzschild-Tangherlini\ metric using only methods from classical general relativity.
The results of this computation verify the amplitude computation including the appearance of logarithms in $D=5$.
\section{Feynman Diagram Computation}
\labelx{sec:OneLoop}
The diagram in Fig.~\ref{fig:amp1} is the only one that contributes to the three-point vertex function at one-loop order in the classical limit.
Other diagrams do not have the required triangle structure.
In this section we will compute the corresponding amplitude, $\maM_\teol^\mn$, in detail.
\begin{figure}[h]
\centering
\captionsetup{width=0.8\linewidth}
\includegraphics[width=5cm]{fd7}
\caption{Feynman triangle diagram. The solid line is a massive scalar and wiggly lines are gravitons.\flabel{amp1}\ffig{fd7}}
\label{fig:amp1}
\end{figure}
\par
The Feynman graph in Fig.~\ref{fig:amp1} was already discussed in Sec.~\ref{sec:TheMetric} around Fig.~\ref{fig:amp7}.
Here the one-loop triangle integrals were assumed to reduce to a simple convolution integral.
In Sec.~\ref{sec:TriangleIntegrals} we will evaluate these integrals explicitly.
We find that only the spatial part of the integrals conforms to the assumptions made in Sec.~\ref{sec:TheMetric}.
However, in Sec.~\ref{sec:Tensor} we see during the calculation that only the space components of the integrals contribute to the amplitude, so that in the end the assumptions made in Sec.~\ref{sec:TheMetric} are satisfied at one-loop order.
This is seen using the three-graviton vertex in terms of the $U^{\mn\ \ab\rho\ \gd\sigma}$ tensor.
It would be interesting if another argument could show this immediately, which would make the Feynman rule in terms of $\Gpz^\mn$ and $\Hpz^\mn$ advantageous.
\subsection{Triangle Loop Integrals}
\labelx{sec:TriangleIntegrals}
The triangle one-loop integrals relevant for the graph in Fig.~\ref{fig:amp1} are well known in the literature in $D=4$ \cite{Bjerrum-Bohr:cand,BjerrumBohr:2002kt,Donoghue:1994dn} and also in arbitrary dimensions \cite{Cristofoli:2020uzm}.
\par
Our derivation will be slightly different from \cite{Cristofoli:2020uzm}, although the results agree.
Let us first consider the simplest case of the scalar integral:
\begin{equation}
I = \int \dDp{l}
\frac{1}{l^2+i\epsilon}
\frac{1}{(l+q_\bot)^2+i\epsilon}
\frac{1}{(l+k)^2-m^2+i\epsilon}
\labelt{vvv2}
\end{equation}
In the literature, they have been solved with the implicit constraint on $q^\mu$ that $kq\approx0$.
In our work we treat $q^\mu$ as an arbitrary momentum variable.
We have therefore added the subscript on $q_\bot^\mu$ in the triangle integrals so that we still have the analogous relation $q_\bot k=0$.
\par
The classical limit allows us to simplify the massive propagator:
\begin{subequations}
\label{eqn:vvv1}
\begin{align}
\frac{1}{(l+k)^2-m^2+i\epsilon}
&\approx
\frac{1}{2kl+i\epsilon}
\\
&=
\frac{1}{2m}
\frac{1}{l_\prl+i\epsilon}
\labelt{vvv1b}
\end{align}
\end{subequations}
In the first line we used that $l^2 \ll kl$ and in the second line, that $kl=ml_\prl$.
\par
We rewrite Eq.~\eqreft{vvv1b} using the following formula:
\begin{align}
\frac{1}{l_\prl+i\epsilon}
=
\frac{1}{l_\prl}
-
i\pi\delta(l_\prl)
\labelt{rel2}
\end{align}
When we insert this equation into the scalar triangle integral of Eq.~\eqreft{vvv2} we can neglect the first term of Eq.~\eqreft{rel2}.
This is so since the two graviton propagators are even in $l_\prl$.
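Explicitly, since $q_\bot^\mu$ has no component along $k^\mu$, both graviton propagators depend on $l_\prl$ only through $l_\prl^2$; the first term of Eq.~\eqreft{rel2} is a principal value and integrates to zero against this even remainder of the integrand.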
In the classical limit the scalar triangle integral is then reduced to:
\begin{equation}
I =
-\frac{i}{4m}
\int \dDp{l}
2\pi\delta(l_\prl)
\frac{1}{l_\bot^2}
\frac{1}{(l_\bot+q_\bot)^2}
\ .
\labelt{vvv3}
\end{equation}
Here we have neglected $i\epsilon$ since both $l_\bot$ and $q_\bot$ are space-like.
The remaining integral is a bubble integral, which is e.g. solved in Srednicki~\cite{Srednicki:2007qs}.
\par
The integral in Eq.~\eqreft{vvv3} is similar to the integrals from the expansion of the classical equations of motion.
We will symbolize it as $N_{D-1}$:
\begin{subequations}
\label{eqn:vvv4}
\begin{align}
N_{D-1}
&=
\int \dDp{l}
2\pi\delta(l_\prl)
\frac{1}{l_\bot^2}
\frac{1}{(l_\bot+q_\bot)^2}
\labelt{vvv4a}
\\
&=
\int \ddp{l_\bot}
\frac{1}{l_\bot^2}
\frac{1}{(l_\bot+q_\bot)^2}
\end{align}
\end{subequations}
And we find that
\begin{equation}
N_d = -\frac{\Omega_{d-2}\sqrt{-q^2_\bot}^{d-4}}{4(4\pi)^{d-2}\ \sin(\frac{\pi}{2}d)}
\labelt{vvv5}
\end{equation}
where $\Omega_{d-1}$ is the surface area of a sphere in $d$-dimensional space, which was defined in Eq.~\eqreft{n217}.
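As a check, in $D=4$ (i.e.\ $d=D-1=3$) we have $\Omega_1=2\pi$ and $\sin\big(\tfrac{3\pi}{2}\big)=-1$, so Eq.~\eqreft{vvv5} gives
\begin{equation}
N_3
=
\frac{1}{8\sqrt{-q_\bot^2}}
\ ,
\end{equation}
which is the standard three-dimensional bubble integral that appears in four-dimensional classical-limit computations.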
The integral $N_{D-1}$ from Eq.~\eqreft{vvv4} is special since it conforms exactly to the analysis from Sec.~\ref{sec:TheMetric}.
It is a convolution integral which in position space corresponds to simple multiplication.
\par
The tensor triangle integrals can be treated analogously.
Only the integrals corresponding to $I^\mu$ and $I^\mn$ contribute in the classical limit:
\begin{subequations}
\label{eqn:vvv7}
\begin{align}
I^\mu &= \int \dDp{l}
\frac{l^\mu}{\big(l^2+i\epsilon\big)
\big((l+q_\bot)^2+i\epsilon\big)
\big((l+k)^2-m^2+i\epsilon\big)}
\labelt{vvv7a}
\\
I^\mn &= \int \dDp{l}
\frac{l^\mu l^\nu}{\big(l^2+i\epsilon\big)
\big((l+q_\bot)^2+i\epsilon\big)
\big((l+k)^2-m^2+i\epsilon\big)}
\labelt{vvv7b}
\end{align}
\end{subequations}
Integrals with more loop momenta in the numerator do not contribute in the classical limit.
This is due to the fact that the three-graviton vertex adds two graviton momenta to the numerator and terms with extra graviton momenta are neglected in comparison.
\par
The tensor integrals can be solved algebraically by writing an ansatz.
Let us work out $I^\mu$ as an example:
\begin{equation}
I^\mu = A q^\mu_\bot + B k^\mu
\labelt{vvv8}
\end{equation}
The coefficients $A$ and $B$ are found using the equations $q_\mu^\bot I^\mu = q_\bot^2 A$ and $k_\mu I^\mu = m^2 B$.
Then we can use relations such as $2 k_\mu l^\mu = ((l+k)^2-m^2)-l^2$ to reduce the numerators so that we again have scalar integrals without any loop momenta in the numerator.
We will do this for the $I_\prl$ and $I_\bot^\mu$ parts separately.
\par
For the $I_\bot^\mu$ we get a result similar to the scalar triangle integral:
\begin{subequations}
\label{eqn:vvv9}
\begin{align}
I_\bot^\mu
&= A q^\mu_\bot
\labelt{vvv9a}
\\
&=
-\frac{i}{4m}
\int \dDp{l}
2\pi\delta(l_\prl)
\frac{l^\mu_\bot}{l_\bot^2
(l_\bot+q_\bot)^2}
\labelt{vvv9b}
\end{align}
\end{subequations}
In the first line, we inserted the ansatz from Eq.~\eqreft{vvv8} and then, in the second line, the definition from Eq.~\eqreft{vvv7a} using the simplifications of the massive propagator in Eq.~\eqreft{rel2}.
The integral is similar to the $N_{D-1}$-integral defined in Eq.~\eqreft{vvv4} and we define:
\begin{align}
N_{D-1}^\mu
&=
\int \dDp{l}
2\pi\delta(l_\prl)
\frac{l^\mu_\bot}{l_\bot^2
(l_\bot+q_\bot)^2}
\labelt{vv10}
\end{align}
To solve it algebraically we dot with $q_\bot$ and use $2l_\bot q_\bot = (l_\bot+q_\bot)^2-l_\bot^2-q_\bot^2$.
We get:
\begin{align}
\hspace*{-.5cm}
2q^\bot_\mu
N_{D-1}^\mu
&=
\int \dDp{l}
2\pi\delta(l_\prl)
\frac{1}{l_\bot^2}
-
\int \dDp{l}
2\pi\delta(l_\prl)
\frac{1}{(l_\bot+q_\bot)^2}
-
q^2_\bot
\int \dDp{l}
2\pi\delta(l_\prl)
\frac{1}{l_\bot^2
(l_\bot+q_\bot)^2}
\nonumber{}
\\
&=
-q_\bot^2
N_{D-1}
\labelt{vvv6}
\end{align}
The first two integrals in the first line are independent of $q_\bot$ after a shift of $l_\bot$ and thus contain no non-analytic dependence on $q_\bot$; they can be neglected in the classical limit, while the third integral is recognized as $N_{D-1}$.
The expression for $N^\mu_{D-1}$ becomes:
\begin{align}
N^\mu_{D-1}
=
-\frac{1}{2}q_\bot^\mu
N_{D-1}
\labelt{vv11}
\end{align}
Again, $N_{D-1}^\mu$ has the same simple interpretation as $N_{D-1}$.
\par
We will now compute the parallel part of $I^\mu$:
\begin{subequations}
\label{eqn:vv12}
\begin{align}
I_\prl &= mB
\labelt{vv12a}
\\
&=
\frac{1}{2m}
\int \dDp{l}
\frac{1}{l^2
(l+q_\bot)^2}
\frac{l_\prl}{l_\prl+i\epsilon}
\labelt{vv12b}
\end{align}
\end{subequations}
Again, in the first line we inserted the ansatz from Eq.~\eqreft{vvv8} and then in the second line the definition from Eq.~\eqreft{vvv7a}.
This time, however, the simplification of the massive propagator is different since the remaining integral is now odd in $l_\prl$.
Instead of producing a $\delta$-function, the factor $1/(l_\prl+i\epsilon)$ cancels against the $l_\prl$ in the numerator, so that the final factor of Eq.~\eqreft{vv12b}, $l_\prl/(l_\prl+i\epsilon)$, is unity under the integral.
After a Wick rotation the remaining integral is of the same type as $N_D$ and we get:
\begin{align}
I_\prl =
i\frac{N_D}{2m}
\end{align}
The final expression for $I^\mu$ becomes:
\begin{subequations}
\begin{align}
I^\mu
&= -\frac{i}{4m} N_{D-1}^\mu
+ \frac{i}{2m^2} N_D k^\mu
\\
&=
\frac{i}{8m} N_{D-1}\ q_\bot^\mu
+\frac{i}{2m^2} N_D\ k^\mu
\end{align}
\end{subequations}
We can treat $I^\mn$ in the same fashion and the result is:
\begin{align}
I^\mn
&=
-\frac{i}{4m} N^\mn_{D-1}
-\frac{i}{4m^2}
N_D \
(q^\mu_\bot k^\nu + k^\mu q^\nu_\bot)
\end{align}
Where $N^\mn_{D-1}$ is:
\begin{subequations}
\begin{align}
N^\mn_{D-1}
&=
\int \dDp{l}
2\pi\delta(l_\prl)
\frac{l^\mu_\bot l^\nu_\bot}{l_\bot^2
(l_\bot+q_\bot)^2}
\\
&=
\frac{q_\bot^2}{4(D-2)} N_{D-1}
\Big(
(D-1)\qbmnu{\mu}{\nu}
-
\eta^\mn_\bot
\Big)
\end{align}
\end{subequations}
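Two quick consistency checks of this expression: contracting with $\eta^\bot_\mn$ makes the right-hand side vanish, in agreement with the left-hand side, where the contraction cancels a graviton propagator and leaves an integral with no non-analytic dependence on $q_\bot$; and contracting with $q^\bot_\mu q^\bot_\nu$ gives $\frac{(q_\bot^2)^2}{4}N_{D-1}$ on both sides, up to terms analytic in $q_\bot^2$.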
We have now computed all the triangle integrals that contribute to our amplitude in the classical limit.
The results agree with Ref.~\cite{Cristofoli:2020uzm}, the only difference being that we have expressed them in terms of $N_{D-1}$ and $N_D$.
\par
As already mentioned, the $N_{D-1}$-integrals have the simple interpretation in position space of two classical sources multiplied together and contracted to the three-graviton vertex.
This is not the case for the factors of $N_D$ in $I^\mu$ and $I^\mn$.
We can summarize it as follows.
The space components of the triangle integrals are given by the $N_{D-1}$-integrals which have the interpretation of a convolution between two classical sources.
The time and mixed components of the triangle integrals are given by the $N_D$ integral.
\par
According to the ideas in Sec.~\ref{sec:TheMetric}, only the $N_{D-1}$ integrals ought to contribute to the amplitude.
This is indeed the case as we will see.
We would like to have a simpler argument for this fact, however.
It seems that the $N_D$ part of the integral would describe mixed components of the metric, that is components with one time and one space index.
In $D=4$ the standard expression for the Schwarzschild metric in harmonic gauge found in e.g.~\cite{Weinberg:1972kfs} does not have such components.
However, there exist other expressions for the metric in harmonic gauge which have mixed components~\cite{Petrov:2020abu}.
This is also expected to be the case for the Schwarzschild-Tangherlini\ metric in arbitrary dimensions.
It would be interesting if the expressions for the metric with mixed components could also be derived from the amplitude in which case the $N_D$ part of the triangle integrals would possibly play a role.
\par
Finally, we should comment on the fact that the expression for $N_d$ in Eq.~\eqreft{vvv5} diverges when $d$ is even.
This divergence should be treated with the methods of dimensional regularization.
This means that for even $d$ we should expand $N_d$ in a Laurent series.
The divergent pole will be an analytic function in $q_\bot$ which describes local corrections to the metric.
The pole should be subtracted from $N_{d}$ and the non-analytic finite term is the relevant part of $N_{d}$.
We will analyze this in Sec.~\ref{sec:AppearanceOf}.
By using regularized integrals, it turns out that we can ignore the divergence in all space-time dimensions except $D=5$.
In Sec.~\ref{sec:SecondOrder} we will then derive the metric contribution in all dimensions $D\neq5$ and specialize on $D=5$ in Sec.~\ref{sec:AppearanceOf}.
\subsection{Three-Graviton Vertex and Tensor Manipulations}
\labelx{sec:Tensor}
We will now go through the tensor manipulations of the amplitude computation.
We use the three-graviton vertex rule in terms of the $U$-tensor.
We expand the Feynman diagram in Fig.~\ref{fig:amp1} as:
\begin{subequations}
\label{eqn:vv13}
\begin{align}
2\pi\delta(kq) i\maM^\mn_\teol
&=
-i\frac{4}{\kappa}
\Big(
\tilde G_\teol+\frac{1}{\xi} \tilde H_\teol
\Big)
\labelt{vv13a}
\\
&
=
2\pi\delta(kq)m^4(2i\kappa)(-i\kappa)^2i^3
\int \dDp{l}
\frac{1}{l^2(l+q_\bot)^2
\big((l+k)^2-m^2+i\epsilon\big)}
\nonumber{}
\\
&\qquad\qquad\qquad\qquad\qquad
\times
f_\ab f_\gd
\IVe^{\ab\ \gd\ \mn}_{h^3}(l,-l-q,q)
\ .
\labelt{vv13b}
\end{align}
\end{subequations}
In the front, we have gathered factors of $i$, $\kappa$ and $m$.
Afterwards we have the loop integration and the propagators which make the triangle integrals.
Then, at the end of Eq.~\eqreft{vv13}, we have the tensor structure including the three-graviton vertex $\IVe_{h^3}$ and the tensors $f_\ab$ defined as:
\begin{align}
f_\ab = \frac{k^\mu k^\nu}{m^2} \maPi_{\mn\ab}
= \eta_\prl^\mn \maPi_{\mn\ab}
\ .
\end{align}
The tensors $f_\ab$ describe the tensor structure of the $\phi^2 h$ vertices contracted to the graviton propagator.
Here, we have already made use of two simplifications due to the classical limit.
First, we have neglected factors of $l$ and $q$ in the scalar-graviton vertex.
Second, we have neglected the momentum dependent part of the graviton propagator.
The first simplification is similar to the simplification of the tree-level diagram.
Here, the graviton momenta are neglected in comparison to the scalar momenta.
The momentum dependent part of the propagator is less obvious since graviton momenta are introduced both in the denominator and numerator.
\par
There are two graviton propagators with momenta $l^\mu$ and $(l+q)^\mu$.
The momentum dependent part of the graviton propagator with e.g. momentum $l_\mu$ is:
\begin{align}
- 2(1-\xi)
I^{\mn}_{\rho\kappa} \frac{\ l^\rho l_\sigma}{l^2} I^{\kappa \sigma}_{\ab}
\ .
\end{align}
Thus there are also integrals with more loop momenta squared in the denominator than the triangle integrals.
However, in these integrals one of the graviton momenta $l^\mu$ or $q^\mu$ is always contracted to the scalar vertex.
A simple explanation for neglecting the momentum-dependent part of the propagator is that the scalar-graviton vertex represents the point-particle energy-momentum tensor, which is conserved.
When the graviton momentum is contracted with the conserved energy-momentum tensor, the result vanishes.
\par
Let us analyze the integral where the graviton propagator with momentum $l^\mu$ meets the scalar graviton vertex:
\begin{subequations}
\begin{align}
\int \dDp{l}
\frac{lk}{l^2 l^2 (l+q)^2 \big((l+k)^2-m^2+i\epsilon\big)}
=
\frac{1}{2}
\int \dDp{l}
\frac{\big((l+k)^2-m^2\big)-l^2}{
l^2 l^2 (l+q)^2 \big((l+k)^2-m^2+i\epsilon\big)}
\\
=
\frac{1}{2}
\int \dDp{l}
\frac{1}{
l^2 l^2 (l+q)^2 }
-
\frac{1}{2}
\int \dDp{l}
\frac{1}{
l^2 (l+q)^2 \big((l+k)^2-m^2+i\epsilon\big)}
\end{align}
\end{subequations}
In the second line, the first term does not have the right structure for the classical limit.
The second term has the structure of the triangle integrals.
However, there will be three factors of graviton momenta ($l^\mu$ or $q^\mu$) in the numerator, one remaining from the propagator and two from the three-graviton vertex.
This is one extra factor of graviton momentum compared to the terms from the momentum independent part of the propagator.
\par
In the classical limit, we can thus make the simplifications in the definition of $f_\ab$.
Note that $f_\ab$ is the tensor structure of the first order metric contribution, $h_\mn^\cGno$, from Eq.~\eqreft{ber4a}:
\begin{equation}
f_\mn = \frac{D-3}{D-2} \eta_\mn^\prl - \frac{1}{D-2} \eta_\mn^\bot
\ .
\labelt{vv22}
\end{equation}
Focus now on the tensor part of the integral with the three-graviton vertex and two factors of $f_\ab$.
We use the vertex rule in terms of $U^{\mn\ \ab\rho\ \gd\sigma}$ from Eq.~\eqreft{be10}:
\begin{align}
\hspace*{-1cm}
f_\ab f_\gd
\tau^{\ab\ \gd\ \mn}_{h^3}(l,-l-q,q)
=
f_\ab f_\gd
\Big( U^{\mn\ \ab\rho\ \gd\sigma} l_\rho (l+q)_\sigma
+ U^{\ab\ \gd\rho\ \mn\sigma} (l+q)_\rho q_\sigma
- U^{\gd\ \mn\rho\ \ab\sigma} q_\rho l_\sigma
\Big)
\end{align}
The final two terms of this equation can be simplified by expanding $(l+q)$ and using the symmetries of $U^{\mn\ \ab\rho\ \gd\sigma}$.
We get
\begin{align}
f_\ab f_\gd
\tau^{\ab\ \gd\ \mn}_{h^3}(l,-l-q,q)
=
f_\ab f_\gd
\Big( U^{\mn\ \ab\rho\ \gd\sigma} l_\rho (l+q)_\sigma
+ U^{\ab\ \gd\rho\ \mn\sigma} q_\rho q_\sigma
\Big)
\ ,
\end{align}
which we insert in the expression for the amplitude in Eq.~\eqreft{vv13}.
Inserting the triangle integrals as well, we get:
\begin{align}
2\pi \delta(kq)
i\mathcal{M}_{\text{1-loop}}^\mn
=
-
2\pi\delta(kq)
2 m^4 \kappa^3
f_\ab f_\gd
\Big( U^{\mn\ \ab\rho\ \gd\sigma} (I_\rs+I_\rho q_\sigma)
+ U^{\ab\ \gd\rho\ \mn\sigma} q_\rho q_\sigma I
\Big)
\ .
\end{align}
All the tensors of this expression are known and the task is now to compute all the contractions.
\par
The integral $I_\rs+I_\rho q_\sigma$ is given by:
\begin{align}
I_\rs+I_\rho q_\sigma
=
i
\frac{q^2\ N_{D-1}}{16(D-2)m}
\Big(
(D-3)\qbmnd{\rho}{\sigma}
+ \eta^\bot_\rs
\Big)
+
i
\frac{N_D}{4m^2}
\big(
k_\rho q^\bot_\sigma
-
q^\bot_\rho k_\sigma
\big)
\labelt{vv14}
\end{align}
Because of the symmetries of $f_\ab f_\gd U^{\mn\ \ab\rho\ \gd\sigma}$, only the symmetric part of the integral contributes.
This is exactly what we want, since the symmetric part is given in terms of the $N_{D-1}$-integrals which follow the ideas of Sec.~\ref{sec:TheMetric}.
We can then conclude that the $N_D$ integral does not contribute to the one-loop diagram in Fig.~\ref{fig:amp1} in the classical limit.
\par
Inserting the integral from Eq.~\eqreft{vv14} in the amplitude we get
\begin{align}
\hspace*{-.6cm}
2\pi\delta(kq) \mathcal{M}_{\text{1-loop}}^\mn
&=
2\pi\delta(q_\prl) \frac{m^2 \kappa^3 q_\bot^2 N_{D-1}}{2}
f_\ab f_\gd
\Big(
- \frac{1}{4(D-2)}
U^{\mn\ \ab\rho\ \gd\sigma}
M_\rs^\bot
+ U^{\ab\ \gd\rho\ \mn\sigma} \qbmnd{\rho}{\sigma}
\Big)
\ ,
\labelt{vv15}
\end{align}
where we have defined:
\begin{align}
M_\rs^\bot
=
(D-3)\qbmnd{\rho}{\sigma}+\eta^\bot_\rs
\ .
\end{align}
The superscript on $M^\bot_\rs$ makes it clear that it has only space-like components.
\par
We will compute most of the contractions in Eq.~\eqreft{vv15} in detail.
Let us start with the contribution to $\tHmn^\mn_\teol$ which is simpler than the one to $\tGmn^\mn_\teol$.
Thus we compute the part which comes from $U_\tegf$.
The first term of Eq.~\eqreft{vv15} has the tensor structure:
\begin{align}
U^{\mn\ \ab\rho\ \gd\sigma}_{gf} f_\ab f_\gd M_\rs^\bot
\ .
\end{align}
To evaluate this it is convenient to use the expression for $U_\tegf$ in Eq.~\eqreft{n430b} which is in terms of $h_{\mn,\rho}$.
Instances of $h_\mn$ should be replaced by $f_\mn$ and partial derivatives by $M_\rs^\bot$.
For example:
\begin{align}
-\mathcal{P}^\rs_\ab
h^\ab_{,\sigma}
h_{\rho}^{\mu,\nu}
\rightarrow
-\mathcal{P}^\rs_\ab
f^\ab
f^{\mu}_\rho
\Mbo^\nu_\sigma
\ .
\labelt{vv20}
\end{align}
We then get:
\begin{align}
U^{\mn\ \ab\rho\ \gd\sigma}_{gf} f_\ab f_\gd M_\rs^\bot
&=
\alpha
\mathcal{P}^\rs_\ab
f^\ab
\Big(
- f_{\rho}^\mu \Mbo^{\nu}_\sigma
- f_{\rho}^\nu \Mbo^{\mu}_\sigma
+ f^\mn M^\bot_\rs
\Big)
\\
&=
\alpha
\eta_\prl^\rs
\Big(
- f_{\rho}^\mu \Mbo^{\nu}_\sigma
- f_{\rho}^\nu \Mbo^{\mu}_\sigma
+ f^\mn M^\bot_\rs
\Big)
=
0
\end{align}
Going from the first to the second line we used that $\maP f = \eta_\prl$.
The second line is easily seen to disappear since $M^\bot_\rs$ and $\eta_\prl^\mn$ are orthogonal.
\par
The second term of Eq.~\eqreft{vv15} has the tensor structure:
\begin{align}
U^{\ab\ \gd\rho\ \mn\sigma}_{gf} f_\ab f_\gd \qbmnd{\rho}{\sigma}
\ .
\labelt{vv17}
\end{align}
Here, the two factors of $f$ are contracted in an asymmetrical way.
Then Eq.~\eqreft{n430b} for $U_\tegf$ is less useful and we use instead Eq.~\eqreft{n436}.
\par
For the first term of Eq.~\eqreft{n436} we get:
\begin{subequations}
\label{eqn:vv16}
\begin{align}
\hspace*{-1cm}
-\alpha
\Big(
I^{\ab\rho\kappa}
I^\gd_{\kappa\lambda}
\maP^{\lambda\sigma\mn}
+
I^{\ab\sigma\kappa}
I^\mn_{\kappa\lambda}
\maP^{\lambda\rho\gd}
\Big)
f_\ab f_\gd \qbmnd{\rho}{\sigma}
&=
-\alpha
\Big(
f^{\rho\kappa}
f_{\kappa\lambda}
\maP^{\lambda\sigma\mn}
\qbmnd{\rho}{\sigma}
+
f^{\sigma\kappa}
I^\mn_{\kappa\lambda}
\eta_\prl^{\lambda\rho}
\qbmnd{\rho}{\sigma}
\Big)
\labelt{vv16a}
\\
&=
-\alpha\frac{1}{(D-2)^2}\maP^{\rho\sigma\mn}
\qbmnd{\rho}{\sigma}
\labelt{vv16b}
\end{align}
\end{subequations}
On the right-hand side of Eq.~\eqreft{vv16a} the second term vanishes and the first term simplifies to the one in the second line.
\par
For the second term of Eq.~\eqreft{n436} we get:
\begin{subequations}
\begin{align}
\frac{1}{2}
\alpha
\Big(
\maP^{\gd\rs} I^{\mn\ab}
+
\maP^{\mn\rs} I^{\gd\ab}
\Big)
f_\ab f_\gd \qbmnd{\rho}{\sigma}
&=
\frac{1}{2}
\alpha
\Big(
\eta_\prl^{\rs} f^{\mn}\qbmnd{\rho}{\sigma}
+
\maP^{\mn\rs} f_\ab f^\ab \qbmnd{\rho}{\sigma}
\Big)
\\
&=
\frac{1}{2}
\alpha
f_\ab f^\ab
\maP^{\mn\rs} \qbmnd{\rho}{\sigma}
\end{align}
\end{subequations}
Here, only the second term of the first line is non-zero.
It simplifies to the one in the second line.
\par
We have now computed the tensor structure in Eq.~\eqreft{vv17}:
\begin{subequations}
\begin{align}
U^{\ab\ \gd\rho\ \mn\sigma}_{gf} f_\ab f_\gd \qbmnd{\rho}{\sigma}
&=
\alpha
\Big(
-\frac{1}{(D-2)^2}
+
\frac{1}{2}
f_\ab f^\ab
\Big)
\maP^{\mn\rs} \qbmnd{\rho}{\sigma}
\\
&=
\alpha
\frac{D-3}{2(D-2)}
\maP^{\mn\rs} \qbmnd{\rho}{\sigma}
\\
&=
\alpha\frac{D-3}{4(D-2)}
(2\qbmnu{\mu}{\nu}-\eta^\mn)
\end{align}
\end{subequations}
This allows us to find $\tHmn^\mn_\teol$ defined in Eq.~\eqreft{vv13a}:
\begin{align}
&\tilde H_\teol^\mn =
-\alpha
\ 2\pi\delta(q_\prl)
\ \frac{\kappa^4 m^2 q^2 N_{D-1}}{
16}
\frac{D-3}{D-2}
\maP^{\mn\rs} \qbmnd{\rho}{\sigma}
\ .
\labelt{vv27}
\end{align}
This is the gauge-dependent part of Eq.~\eqreft{vv15}.
\par
We will now compute the part of Eq.~\eqreft{vv15} which contributes to $\tGmn^\mn$.
There are two tensor structures to be computed, namely
\begin{align}
U_\tecl^{\mn\ \ab\rho\ \gd\sigma} f_\ab f_\gd M_\rs^\bot
\ ,
\labelt{vv18}
\end{align}
and:
\begin{align}
U_\tecl^{\ab\ \gd\rho\ \mn\sigma} f_\ab f_\gd \qbmnd{\rho}{\sigma}
\ .
\labelt{vv19}
\end{align}
Both computations are similar to the ones for $U_\tegf$, only more involved since $U_\tecl$ is more complicated than $U_\tegf$.
We will do Eq.~\eqreft{vv18} in detail and only quote the result for Eq.~\eqreft{vv19}.
\par
For the computation of Eq.~\eqreft{vv18}, we use Eq.~\eqreft{ute1} for the definition of $U_\tecl$.
As in Eq.~\eqreft{vv20} instances of $h_\ab$ should be replaced with $f_\ab$ and the derivatives on $h_\ab$ should be replaced with $M_\rs$.
Let us reprint Eq.~\eqreft{ute1}:
\begin{align}
U_\tecl^{\mn\ \ab\rho\ \gd\sigma} \ h_{\ab,\rho} h_{\gd,\sigma}
=
&
2 I^\mn_\pe \maP^\ab_\rs \maP^{\sigma\phi}_{\gamma\delta} h^{,\epsilon}_\ab h^{\gd,\rho}
- \maP^{\mu\rho}_\ab \maP^{\nu\sigma}_\gd \eta_\rs h^\ab_{,\kappa} h^{\gd,\kappa}
\labelt{vv21}
\\
&
+ \maP^\mn_\rs
\Big(
h^{\rho\alpha}_{,\beta} h^{\sigma\beta}_{,\alpha}
-\frac{1}{2} h^{\alpha,\rho}_{\beta} h^{\beta,\sigma}_{\alpha}
- h^\rs_{,\alpha} h^\ab_{,\beta}
\Big)
\nonumber
\ .
\end{align}
We will consider each term by itself.
\par
The first term of $U_\tecl$ from Eq.~\eqreft{vv21} becomes:
\begin{align}
2 I^\mn_\pe \maP^\ab_\rs \maP^{\sigma\phi}_{\gamma\delta} f_\ab f^{\gd} M^{\epsilon\rho}_\bot
=
2 I^\mn_\pe \eta^\prl_\rs \eta^{\sigma\phi}_\prl M^{\epsilon\rho}_\bot
=
0
\end{align}
It is straightforward to apply $\maP$ to $f$.
We then get $\eta_\prl$ which is orthogonal to $M^\bot$ so that this term vanishes.
\par
The next term of Eq.~\eqreft{vv21} is:
\begin{align}
- \maP^{\mu\rho}_\ab \maP^{\nu\sigma}_\gd \eta_\rs f^\ab f^\gd \Mbo^\kappa_\kappa
=
- \eta^{\mu\rho}_\prl \eta^{\nu\sigma}_\prl \eta_\rs \Mbo^\kappa_\kappa
=
-\eta^\mn_\prl \Mbo^\kappa_\kappa
\end{align}
Again, $\maP f$ is simply $\eta_\prl$.
We need to compute the trace of $\Mbo$:
\begin{align}
\Mbo^\kappa_\kappa = D-3+D-1 = 2(D-2)
\labelt{vv23}
\end{align}
So from the second term of Eq.~\eqreft{vv21} we get:
\begin{equation}
- \maP^{\mu\rho}_\ab \maP^{\nu\sigma}_\gd \eta_\rs f^\ab f^\gd \Mbo^\kappa_\kappa
=
-2(D-2)\eta_\prl^\mn
\ .
\end{equation}
\par
The next three terms of Eq.~\eqreft{vv21} are contracted with $\maP^\mn_\rs$.
We will compute the three terms in the brackets first and then do the contraction.
\par
The first term in the brackets:
\begin{align}
f^{\rho\alpha} f^{\sigma\beta} M_\ab^\bot
=
\frac{1}{(D-2)^2} M^\rs_\bot
\end{align}
Again, we used that $M_\ab^\bot$ is orthogonal to $k^\mu$ and the explicit structure of $f_\ab$ from Eq.~\eqreft{vv22}.
\par
The second term in the brackets:
\begin{align}
-\frac{1}{2} f^\ab f_\ab M^\rs_\bot
=
-\frac{1}{2} \frac{(D-3)^2+D-1}{(D-2)^2} M^\rs_\bot
\end{align}
Here, we inserted $f^\ab f_\ab$ directly.
\par
The third term in the brackets:
\begin{align}
-f^\rs f^\ab M_\ab^\bot
=
\frac{1}{D-2} f^\rs \Mbo^\kappa_\kappa
=
2 f^\rs
\end{align}
The trace, $\Mbo^\kappa_\kappa$, was given in Eq.~\eqreft{vv23}.
\par
The sum of these three terms becomes:
\begin{align}
f^{\rho\alpha} f^{\sigma\beta} M_\ab
-\frac{1}{2} f^\ab f_\ab M^\rs
-f^\rs f^\ab M_\ab
=
-\frac{1}{2}\frac{D-3}{D-2} M^\rs
+2 f^\rs
\ .
\labelt{vv24}
\end{align}
These are the three terms in the brackets from Eq.~\eqreft{vv21}.
They should be contracted with $\maP^\mn_\rs$.
For ${\maP^\mn_\rs M^\rs}$, we get:
\begin{align}
\maP^\mn_\rs M^\rs = -(D-2) \eta_\prl^\mn
+(D-3)
\Big(\qbmnu{\mu}{\nu}-\eta_\bot^\mn
\Big)
\end{align}
The three terms in Eq.~\eqreft{vv24} contracted with $\maP^\mn_\rs$ become:
\begin{align}
\maP^\mn_\rs
\Big(
f^{\rho\alpha} f^{\sigma\beta} M_\ab
-\frac{1}{2} f^\ab f_\ab M^\rs
-f^\rs f^\ab M_\ab
\Big)
=
\frac{D+1}{2} \eta_\prl^\mn
-
\frac{1}{2}\frac{(D-3)^2}{D-2}
\big(
\qbmnu{\mu}{\nu}-\eta_\bot^\mn
\big)
\ .
\end{align}
We have now computed all the terms of Eq.~\eqreft{vv21}.
\par
Thus, we have computed the contribution to $\tGmn^\mn$ in Eq.~\eqreft{vv18}:
\begin{align}
U^{\mn\ \ab\rho\ \gd\sigma} f_\ab f_\gd M_\rs
=
-\frac{3(D-3)}{2} \eta_\prl^\mn
-
\frac{1}{2}\frac{(D-3)^2}{D-2}
\big(
\qbmnu{\mu}{\nu}-\eta_\bot^\mn
\big)
\labelt{vv25}
\end{align}
We will not do the detailed computation of the other contribution to $\tGmn^\mn$ from Eq.~\eqreft{vv19}.
The result is:
\begin{align}
U^{\ab\ \gd\rho\ \mn\sigma} f_\ab f_\gd \qbmnd{\rho}{\sigma}
=
\frac{1}{2}\frac{D-4}{D-2} \eta_\prl^\mn
-
\frac{1}{2} \frac{D-1}{D-2}
\big(
\eta_\bot^\mn - \qbmnu{\mu}{\nu}
\big)
\labelt{vv26}
\end{align}
With Eqs.~\eqreft{vv25} and~\eqreft{vv26} we can compute $\tGmn^\mn$.
We combine Eqs.~\eqreft{vv13a} and~\eqreft{vv15}:
\begin{align}
&\tilde G_\teol^\mn
=
2\pi\delta(q_\prl)
\ \frac{\kappa^4 m^2 q^2 N_{D-1}}{
64
}
\bigg(
\frac{D-7}{D-2} \eta^\mn_\parallel
- \frac{(D-3)(3D-5)}{(D-2)^2}
\Big(
\eta_\bot^\mn - \frac{\ q^\mu q^\nu}{q^2}
\Big)
\bigg)
\ .
\labelt{amp1}
\end{align}
We have now computed the amplitude in Fig.~\ref{fig:amp1}.
The major results are Eqs.~\eqreft{vv27} and~\eqreft{amp1} for $\tHmn^\mn_\teol$ and $\tGmn^\mn_\teol$, respectively, in terms of which the amplitude is given in Eq.~\eqreft{vv13a}.
\section{Second Order Correction to the Metric}
\labelx{sec:SecondOrder}
With the result for the one-loop contribution to the three-point function we can derive the second order contribution to the Schwarzschild-Tangherlini\ metric.
Let us summarize the results of Sec.~\ref{sec:OneLoop} for the amplitude in Fig.~\ref{fig:amp1}:
\begin{subequations}
\label{eqn:vv28}
\begin{align}
&2\pi\delta(kq) \maM^\mn_\teol
=
-\frac{4}{\kappa}
\Big(
\tilde G_\teol^\mn
+\frac{1}{\xi}
\tilde H_\teol^\mn
\Big)
\ ,
\labelt{vv28a}
\\
&\tilde G_\teol^\mn
=
2\pi\delta(q_\prl)
\ \frac{\kappa^4 m^2 q^2 N_{D-1}}{
64
}
\bigg(
\frac{D-7}{D-2} \eta^\mn_\parallel
- \frac{(D-3)(3D-5)}{(D-2)^2}
\Big(
\eta_\bot^\mn - \frac{\ q^\mu q^\nu}{q^2}
\Big)
\bigg)
\ ,
\labelt{vv28b}
\\
&\tilde H_\teol^\mn =
-\alpha
\ 2\pi\delta(q_\prl)
\ \frac{\kappa^4 m^2 q^2 N_{D-1}}{
16}
\frac{D-3}{D-2}
\maP^{\mn\rs} \qbmnd{\rho}{\sigma}
\ .
\labelt{vv28c}
\end{align}
\end{subequations}
Note that $\ttau^\mn_\teol=\frac{4}{\kappa^2}\tGmn_\teol^\mn$.
We have chosen to work with $\tGmn_\teol^\mn$ instead of $\ttau^\mn_\teol$ because it is analogous to $\tHmn_\teol^\mn$.
The functional dependence on $q^\mu$ is included in the integral $N_{D-1}$ which was defined in Eq.~\eqreft{vvv5}.
\par
We will find the metric and go to position space according to Eqs.~\eqreft{ext2} and~\eqreft{ver7}.
First let us briefly check whether the results in Eqs.~\eqreft{vv28} obey the relations discussed in Eqs.~\eqreft{eom1}.
These were the conservation law of $\ttau^\mn$ and the analogous equation for $\tHmn^\mn$:
\begin{equation}
{G_\tecl}^\mn_\ab \tHmn^\ab_\tenl = 0
\ .
\labelt{vv29}
\end{equation}
It is easily checked that the one-loop contribution to $\ttau^\mn$ in Eq.~\eqreft{vv28b} is conserved.
That is, $q_\mu \tGmn^\mn_\teol$ disappears since $\tGmn^\mn_\teol$ depends on the combination $(\eta_\bot^\mn - \frac{\ q^\mu q^\nu}{q^2})$.
It is less apparent that Eq.~\eqreft{vv29} is satisfied for the one-loop contribution to $\tHmn^\mn$.
\par
Let us verify that ${G_{(c)}}^\mn_\ab \tHmn_\teol^\ab$ vanishes.
We insert the tensor structure of $\tHmn_\teol^\ab$ and then that of ${G_{(c)}}^\mn_\ab$:
\begin{align}
{G_c}_\mn^\ab \maP^{\mn\rs} \qbmnd{\rho}{\sigma}
&=
\qbmnd{\mu}{\nu}
-2 \maJ_\mn^\ab \maP_\ab^\rs \qbmnd{\rho}{\sigma}
\labelt{vv30}
\end{align}
We then use Eq.~\eqreft{jpj1} to reduce the second term on the right-hand side:
\begin{subequations}
\label{eqn:vv31}
\begin{align}
\maJ_\mn^\ab \maP_\ab^\rs \qbmnd{\rho}{\sigma}
&=
\maJ_\mn^\ab \maP_\ab^\rs \maJ_\rs^\gd \eta_\gd
\labelt{vv31a}
\\
&=
\frac{1}{2} \maJ_\mn^\gd \eta_\gd
\\
&=
\frac{1}{2}
\qbmnd{\mu}{\nu}
\end{align}
\end{subequations}
Combining Eqs.~\eqreft{vv30} and~\eqreft{vv31} we see that ${G_{(c)}}^\mn_\ab \tHmn_\teol^\ab=0$ is satisfied.
These equations for $\tHmn^\mn_\teol$ and $\tGmn^\mn_\teol$ ensure that the second order contribution to the metric is independent of $\xi$.
\par
We need to contract $\maM_\teol^\mn$ with the graviton propagator to get the metric.
We can use Eq.~\eqreft{ver7} or equivalently:
\begin{align}
\thmn^{\cGn{2}}_\mn
=
2 \frac{G^\tecl_{\mn\ab}}{q^2} \tGmn^\ab_\teol
+
2 \frac{G^\tegf_{\mn\ab}}{q^2} \tHmn^\ab_\teol
\labelt{vv32}
\end{align}
If we replace $G^\tecl$ and $G^\tegf$ in Eq.~\eqreft{vv32} by $\maPi$ we get Eq.~\eqreft{ver7}.
We can do this due to Eq.~\eqreft{vv29} and the conservation law of $\ttau^\mn$.
After propagating with $G^\tecl$ and $G^\tegf$ we get:
\begin{subequations}
\label{eqn:vv33}
\begin{align}
&\frac{G^\tecl_{\mn\ab}}{q^2} 2 \tGmn^\ab_\teol
=
2\pi\delta(q_\prl)
\frac{\kappa^4 m^2 N_{D-1}}{32}
\bigg(
4\frac{(D-3)^2}{(D-2)^2} \eta^\prl_\mn
+ \frac{(D-3)(3D-5)}{(D-2)^2} \frac{\ q_\mu q_\nu}{q^2}
- \frac{D-7}{(D-2)^2} \eta^\bot_\mn
\bigg)
\labelt{vv33a}
\\
&\frac{G^\tegf_{\mn\ab}}{q^2} 2 \tHmn^\ab_\teol
=
-
2\pi\delta(q_\prl)
\alpha
\frac{\kappa^4 m^2 N_{D-1}}{8} \frac{D-3}{D-2} \frac{\ q_\mu q_\nu}{q^2}
\labelt{vv33b}
\end{align}
\end{subequations}
It is especially simple to compute Eq.~\eqreft{vv33b}.
The tensor structure of $\tHmn_\teol^\mn$ is
\begin{align}
\maP^{\mn\rs} \qbmnd{\rho}{\sigma}
\ ,
\end{align}
and it is thus straightforward to contract it with $\maPi_{\mn\ab}$.
In the case of Eq.~\eqreft{vv33a} we applied $\maPi$ to each term of Eq.~\eqreft{vv28b}.
\par
We can now write down an expression for $h^{\cGn{2}}_\mn$ in momentum space:
\begin{align}
\thmn^{\cGn{2}}_\mn
=
2\pi\delta(q_\prl)
\frac{\kappa^4 m^2 N_{D-1}}{32}
\frac{(D-3)^2}{(D-2)^2}
\bigg(
4 \eta^\prl_\mn
+
\Big(
\frac{3D-5}{D-3}
-4\alpha\frac{D-2}{D-3}
\Big)\frac{\ q_\mu q_\nu}{q^2}
- \frac{D-7}{(D-3)^2} \eta^\bot_\mn
\bigg)
\labelt{vv38}
\end{align}
We recall the definition of $N_{D-1}$:
\begin{align}
N_{D-1}
=
\frac{
\Omega_{D-3} \sqrt{-q^2_\bot}^{D-5}
}{
4(4\pi)^{D-3} \cos(\frac{\pi}{2}D)
}
\labelt{vv39}
\end{align}
Again, this expression diverges in odd space-time dimensions.
For now, we will ignore this and continue; in Sec.~\ref{sec:AppearanceOf} we analyze it in detail.
\par
To transform $\thmn_\mn^{\cGn{2}}$ to position space we need to transform $({-q^2_\bot})^{\frac{D-5}{2}}$ and the same function multiplied by $\frac{\ q_\mu q_\nu}{q^2}$.
For this purpose we can use the formula:
\begin{equation}
\int \frac{d^dq_\bot}{(2\pi)^d} \ e^{-ix_\bot q_\bot}
(-q_\bot^2)^{\frac{n}{2}} =
\frac{2^n}{\sqrt{\pi}^d}
\frac{\Gamma(\frac{d+n}{2})}{\Gamma(-\frac{n}{2})}
\frac{1}{(-x_\bot^2)^\frac{d+n}{2}}
\ .
\labelt{vv34}
\end{equation}
The same integral is found in Ref.~\cite{Collado:2018isu}.
It transforms $({-q^2_\bot})^{\frac{n}{2}}$ into a similar function of $x_\bot^2$.
We could have used the dimensions of the integral to make an ansatz for the result.
Note that when $\frac{n}{2}$ is a non-negative integer the integral vanishes, since we divide by $\Gamma(-\frac{n}{2})$.
This is consistent, since for such $n$ the function $(-q_\bot^2)^{\frac{n}{2}}$ is an analytic function of $q_\bot^2$.
Such a function does not contribute in the classical limit.
We will see, however, that the vanishing of this integral cancels the divergence of $N_{D-1}$.
We will understand this better in Sec.~\ref{sec:AppearanceOf}.
\par
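As a quick sanity check of Eq.~\eqreft{vv34} (a cross-check only, not part of the derivation), one can compare it with a familiar special case: for $d=3$ and $n=-2$ it should reproduce the Coulomb-like transform of $1/q_\bot^2$, namely $1/(4\pi\sqrt{-x_\bot^2})$. A short sympy sketch, where the symbol \texttt{X} stands for $\sqrt{-x_\bot^2}$:
\begin{verbatim}
import sympy as sp

d, n, X = sp.symbols('d n X', positive=True)   # X stands for sqrt(-x_bot^2)

# right-hand side of Eq. (vv34)
rhs = 2**n / sp.sqrt(sp.pi)**d * sp.gamma((d + n)/2) / sp.gamma(-n/2) / X**(d + n)

# d = 3, n = -2: expect the Coulomb potential 1/(4*pi*X)
print(sp.simplify(rhs.subs({d: 3, n: -2})))
\end{verbatim}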
Using the integral in Eq.~\eqreft{vv34} we transform $N_{D-1}$ and get:
\begin{align}
\int
\frac{d^{D-1}q_\bot\ e^{-iq_\bot x_\bot}\ }{(2\pi)^{D-1}}
N_{D-1}
=
\Bigg(
\frac{
1
}{
\Omega_{D-2}\ (D-3)\
\sqrt{-x_\bot^2}^{D-3}
}
\Bigg)^2
\ .
\labelt{vv36}
\end{align}
It is clear that the divergence in $N_{D-1}$ has disappeared in position space after using the integral in Eq.~\eqreft{vv34}.
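The prefactor in Eq.~\eqreft{vv36} can also be cross-checked dimension by dimension by combining Eqs.~\eqreft{vv39} and~\eqreft{vv34}. The following sympy sketch (a cross-check only; it is restricted to even $D$, where no regularization is needed, and assumes the convention $\Omega_d = 2\pi^{(d+1)/2}/\Gamma(\frac{d+1}{2})$ for the area of the unit $d$-sphere) confirms the squared structure for $D=4,6,8$:
\begin{verbatim}
import sympy as sp

def Omega(d):
    # assumed convention: area of the unit d-sphere
    return 2*sp.pi**sp.Rational(d + 1, 2) / sp.gamma(sp.Rational(d + 1, 2))

def vv36_check(D):
    # N_{D-1} = C * (-q_bot^2)^{(D-5)/2}, Eq. (vv39)
    C = Omega(D - 3) / (4*(4*sp.pi)**(D - 3)*sp.cos(sp.pi*D/2))
    # Fourier factor of Eq. (vv34) with d = D-1 and exponent n = D-5
    d, n = D - 1, D - 5
    ft = sp.Integer(2)**n / sp.sqrt(sp.pi)**d \
         * sp.gamma(sp.Rational(d + n, 2)) / sp.gamma(-sp.Rational(n, 2))
    # coefficient of 1/(-x_bot^2)^(D-3) claimed in Eq. (vv36)
    rhs = (1/(Omega(D - 2)*(D - 3)))**2
    return sp.simplify(C*ft - rhs)

for D in (4, 6, 8):
    print(D, vv36_check(D))    # 0 in each case
\end{verbatim}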
The Fourier transform of $\frac{\ q_\mu q_\nu}{q^2}$ can also be derived directly from Eq.~\eqreft{vv34}.
We differentiate it with respect to $x^\mu$ twice:
\begin{align}
\frac{\partial}{\partial x_\bot^\mu}
\frac{\partial}{\partial x_\bot^\nu}
\int \frac{d^dq_\bot}{(2\pi)^d} \ e^{-ix_\bot q_\bot}\
(-q_\bot^2)^{\frac{n}{2}}
=
- \int \frac{d^dq_\bot}{(2\pi)^d} \ e^{-ix_\bot q_\bot}\
q^\bot_\mu q^\bot_\nu (-q_\bot^2)^{\frac{n}{2}}
\labelt{vv48}
\end{align}
Using this method we get the Fourier transform of $N_{D-1} \frac{q^\bot_\mu q^\bot_\nu}{q^2_\bot}$:
\begin{equation}
\int \frac{d^{D-1}q_\bot \ e^{-iq_\bot x_\bot}}{(2\pi)^{D-1}}
N_{D-1}
\frac{q^\bot_\mu q^\bot_\nu}{q^2_\bot}
=
\Bigg(
\frac{
1
}{
\Omega_{D-2}\ (D-3)\
\sqrt{-x_\bot^2}^{D-3}
}
\Bigg)^2
\frac{1}{D-5}
\Big(
2(D-3)\frac{x^\bot_\mu x^\bot_\nu}{x_\bot^2}
-\etar{\mn}
\Big)
\ .
\labelt{vv35}
\end{equation}
This integral is seen to diverge in the space-time dimension $D=5$ and is thus only valid for $D\neq5$.
This is due to the fact that we have not subtracted the divergent term in $N_{D-1}$.
It only becomes problematic in $D=5$ where a logarithmic dependence on the radial coordinate appears in the metric.
\par
Using the integrals from Eqs.~\eqreft{vv36} and~\eqreft{vv35} we can transform $\thmn_\mn^{\cGn{2}}$ to position space for all space-time dimensions $D\neq5$.
We will use the Schwarzschild-Tangherlini\ parameter, $\mu$, defined in Eq.~\eqreft{mud1} and we get:
\begin{align}
\hspace*{-.5cm}
h^{\cGn{2}}_\mn =
\frac{\mu^2}{r^{2(D-3)}}
\Bigg(
\oov{2} \etat{\mn}
-
\frac{
(4\alpha-3)D
-8\alpha + 5
}{
4(D-5)
}
\Xmnb{\mu}{\nu}
-
\frac{
2(1-\alpha)D^2
-(13-10\alpha)D
+25-12\alpha
}{
4(D-3)^2(D-5)
}
\etar{\mn}
\Bigg)
\ .
\labelt{vv37}
\end{align}
Here, we have introduced the variable $r$ defined by $r^2=-x_\bot^2$.
In the reference frame of $k^\mu$ we have $r=\absvec{x}$.
The equation is valid for all $D\neq5$ and for all values of the gauge parameter $\alpha$.
The special choice of $\alpha=\frac{5}{6}$ removes the pole in $D=5$ and for this special choice Eq.~\eqreft{vv37} can also be used in $D=5$.
This is easy to see in momentum space in Eq.~\eqreft{vv38} where the coefficient of $\frac{\ q_\mu q_\nu}{q^2}$ disappears in $D=5$ for $\alpha=\frac{5}{6}$.
The same conclusion holds for any choice of $\alpha$ which depends on $D$ in such a way that $\alpha=\frac{5}{6}$ when $D=5$.
We would expect higher order corrections to the metric to depend on arbitrary powers of $\alpha$.
\par
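That the pole cancels for $\alpha=\frac{5}{6}$ can be seen directly on the coefficients in Eq.~\eqreft{vv37}; the following sympy sketch (a cross-check only) confirms it:
\begin{verbatim}
import sympy as sp

D, alpha = sp.symbols('D alpha')

# coefficients of x_mu x_nu / r^2 and eta_perp in Eq. (vv37)
cX   = -((4*alpha - 3)*D - 8*alpha + 5) / (4*(D - 5))
cEta = -(2*(1 - alpha)*D**2 - (13 - 10*alpha)*D + 25 - 12*alpha) / (4*(D - 3)**2*(D - 5))

for c in (cX, cEta):
    print(sp.factor(sp.simplify(c.subs(alpha, sp.Rational(5, 6)))))
# expect -1/12 and -(D - 9)/(12*(D - 3)**2); the (D-5) pole has cancelled in both
\end{verbatim}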
In the literature similar expansions are found in Refs.~\cite{Collado:2018isu,BjerrumBohr:2002ks,Goldberger:2004jt}.
Both Refs.~\cite{BjerrumBohr:2002ks,Goldberger:2004jt} work in $D=4$ while Ref.~\cite{Collado:2018isu} derives the second order contribution to the metric in arbitrary dimensions in \dDo-gauge.
We find that their result agrees with our Eq.~\eqreft{vv37} when we put $\alpha=0$.
Note, however, that there is a misprint in their article.
The relevant equation in their article~\cite{Collado:2018isu} is Eq.~(5.34).
In the fourth line of this equation the square on $(D-p-3)^2$ should be removed, so that instead it reads $(D-p-3)$.
We thank Paolo Di Vecchia for confirming this.
Note, also, that in their equation $p$ should be set to zero and $R_p$ should be related to $\mu$ appropriately.
The divergence of Eq.~\eqreft{vv37} in $D=5$ is not discussed in Ref.~\cite{Collado:2018isu}.
\par
We have also checked that Eq.~\eqreft{vv37} in $D=4$ for $\alpha=1$ agrees with the standard formula for the Schwarzschild metric in harmonic gauge from e.g. Ref.~\cite{Weinberg:1972kfs}.
For general $\alpha$ and $D\neq5$ we have compared it to the classical derivation in Sec.~\ref{sec:ClassicalDerivation} and we find agreement.
\par
As expected the metric in Eq.~\eqreft{vv37} satisfies the gauge condition $G_\sigma=0$ which at second order in $G_N$ reads:
\begin{align}
G^{\cGn{2}}_\sigma =
\maP^\mn_\rs {h_{\cGn{2}}}_\mn^{,\rho}
- \alpha \Gamma_{\sigma\ab}^{\rho\mn} \ h_\cGno^\ab \ h^{\cGno}_{\mn,\rho}
\ .
\end{align}
Here, $h_\cGno^\mn$ should be taken from Eq.~\eqreft{me32}.
\section{Appearance of Logarithms in the Metric}
\labelx{sec:AppearanceOf}
What happened to the divergence in $N_{D-1}$ and why is Eq.~\eqreft{vv37} for $h^{\cGn{2}}_\mn$ not valid in $D=5$?
First, we will consider the $N_{D-1}$ integral again and remove the divergent part.
Then we will use the regularized expression for $N_{D-1}$ to derive the metric in $D=5$.
Here, it turns out that a logarithmic dependence on the radial coordinate appears in position space.
We will then analyze this phenomenon in terms of gauge transformations and show that besides $D=5$ we only expect logarithms in $D=4$.
\par
The divergent factor in the expression for $N_{D-1}$ in Eq.~\eqreft{vv39} is $1/\cos(\frac{\pi}{2}D)$.
This is so since $\cos(\frac{\pi}{2}D)$ vanishes when $D$ is an odd integer.
It is then convenient to analyze $N_{D-1}$ for the cases of even and odd $D$ separately.
\par
When $D$ is even the expression in Eq.~\eqreft{vv39} is finite and reduces to
\begin{align}
N_{D-1} = \frac{(-1)^n}{8(16\pi)^n n!} \frac{(-q^2_\bot)^n}{\sqrt{-q^2_\bot}}
\ ,
\end{align}
where $n = \frac{1}{2}(D-4)$ is an integer and $n\geq0$.
This expression is finite and non-analytic in $q^2$ and it presents no difficulties when it is Fourier transformed to position space with Eq.~\eqreft{vv34}.
\par
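This reduction can be cross-checked against Eq.~\eqreft{vv39} for the first few even dimensions. A short sympy sketch (a cross-check only; \texttt{q2} stands for $-q_\bot^2$ and the convention $\Omega_d = 2\pi^{(d+1)/2}/\Gamma(\frac{d+1}{2})$ is assumed):
\begin{verbatim}
import sympy as sp

q2 = sp.symbols('q2', positive=True)    # q2 stands for -q_bot^2

def Omega(d):
    # assumed convention: area of the unit d-sphere
    return 2*sp.pi**sp.Rational(d + 1, 2) / sp.gamma(sp.Rational(d + 1, 2))

def N_general(D):
    # Eq. (vv39)
    return Omega(D - 3)*sp.sqrt(q2)**(D - 5) / (4*(4*sp.pi)**(D - 3)*sp.cos(sp.pi*D/2))

def N_even(D):
    # the reduced expression above, with n = (D-4)/2
    n = (D - 4)//2
    return (-1)**n * q2**n / (8*(16*sp.pi)**n * sp.factorial(n) * sp.sqrt(q2))

for D in (4, 6, 8, 10):
    print(D, sp.simplify(N_general(D) - N_even(D)))   # 0 for each even D
\end{verbatim}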
Next let us consider odd $D$.
In this case, the expression for $N_{D-1}$ in Eq.~\eqreft{vv39} is divergent.
Instead, we expand $N_{D-1}$ in a Laurent series in a small deviation from the odd dimension.
We write $D=5+2n+2\epsilon$ and expand $N_{D-1}$ in $\epsilon$:
\begin{align}
N_{D-1}
=
\frac{1}{\epsilon} \frac{(-1)^{n+1}\Omega_{2+2n} (-q^2_\bot)^n}{(4\pi)^{3+2n}}
+
\frac{(-1)^{n+1} n!}{16\pi^2 (4\pi)^n (1+2n)!}
(-q^2_\bot)^n
\ln(-q^2_\bot r_0^2)
+ \ellipsis
\end{align}
There are other finite terms in the expansion but the one shown is the only non-analytic term.
The pole term is an analytic function, namely an integer power of $q^2$.
The arbitrary length scale, $r_0$, comes from the dimensional dependence of $G_N$.
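The residue of the pole can be extracted directly from Eq.~\eqreft{vv39}. A short sympy sketch (a cross-check only, done for $n=0$, i.e. around $D=5$, with the same sphere-area convention $\Omega_d = 2\pi^{(d+1)/2}/\Gamma(\frac{d+1}{2})$ as above):
\begin{verbatim}
import sympy as sp

eps, q2 = sp.symbols('epsilon q2', positive=True)   # q2 stands for -q_bot^2

def Omega(d):
    return 2*sp.pi**((d + 1)/2) / sp.gamma((d + 1)/2)

# Eq. (vv39) evaluated at D = 5 + 2*eps, i.e. n = 0
D = 5 + 2*eps
N = Omega(D - 3)*sp.sqrt(q2)**(D - 5) / (4*(4*sp.pi)**(D - 3)*sp.cos(sp.pi*D/2))

# residue of the 1/eps pole; expect -Omega_2/(4*pi)^3 = -1/(16*pi^2)
print(sp.simplify(sp.limit(eps*N, eps, 0)))
\end{verbatim}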
We conclude that the relevant part of $N_{D-1}$ for the classical limit is
\begin{align}
N_{D-1}
=
\frac{(-1)^{n+1} n!}{16\pi^2 (4\pi)^n (1+2n)!}
(-q^2_\bot)^n
\ln(-q^2_\bot r_0^2)
\ ,
\labelt{vv40}
\end{align}
where still $D=5+2n$.
The Fourier transform to position space of this expression for $N_{D-1}$ is equivalent to the transformation of the formula for $N_{D-1}$ in Eq.~\eqreft{vv39}.
We can use Eq.~\eqreft{vv34} which gives the Fourier transform of $(-q^2)^{\frac{n}{2}}$ if we use the following expression for the logarithm:
\begin{align}
\ln(-q^2)
= \frac{1}{\epsilon}
\Big(
(-q^2)^{\epsilon}-1
\Big)
\labelt{vv41}
\end{align}
Here, $\epsilon$ is an infinitesimal parameter.
It is now clear, that the Fourier transforms of both expressions for $N_{D-1}$ are the same.
This is so since, when we insert Eq.~\eqreft{vv41} into Eq.~\eqreft{vv40}, the constant term in Eq.~\eqreft{vv41}, $(-\frac{1}{\epsilon})$, is transformed to zero because of the property of Eq.~\eqreft{vv34} discussed below that equation.
The $q$-dependent term in Eq.~\eqreft{vv41} then makes the expression for $N_{D-1}$ in Eq.~\eqreft{vv40} equivalent to the one in Eq.~\eqreft{vv39}.
\par
However, this is not the case when we transform $N_{D-1} \frac{q^\bot_\mu q^\bot_\nu}{q^2_\bot}$.
Here, the transformed expression in position space is free of divergence when we use Eq.~\eqreft{vv40} but as we have seen it has a divergence in $D=5$ when we use Eq.~\eqreft{vv39}.
Let us indicate how Eqs.~\eqreft{vv41} and~\eqreft{vv34} can be used to transform ${N_{D-1} \frac{q^\bot_\mu q^\bot_\nu}{q^2_\bot}}$ to position space when $D$ is odd, using Eq.~\eqreft{vv40} for the definition of $N_{D-1}$.
This will show how formulas which are all continuous in the dimension of space-time, $D$, can produce such different results in $D\neq5$ and $D=5$.
\par
First, we find:
\begin{align}
\hspace*{-1cm}
\int \frac{d^{D-1}q_\bot \ e^{-iq_\bot x_\bot}}{(2\pi)^{D-1}}
\frac{ N_{D-1}}{q^2_\bot}
&=
\frac{(-1)^{n} n!}{16\pi^2 (4\pi)^n (1+2n)!}
\int \frac{d^{D-1}q_\bot \ e^{-iq_\bot x_\bot}}{(2\pi)^{D-1}}
(-q^2_\bot)^{n-1}
\ln(-q^2_\bot )
\labelt{47}
\\
&=
\frac{(-1)^{n} n!}{16\pi^2 (4\pi)^n (1+2n)!}
\int \frac{d^{D-1}q_\bot \ e^{-iq_\bot x_\bot}}{(2\pi)^{D-1}}
\frac{1}{\epsilon}
\Big(
(-q^2_\bot)^{n-1+\epsilon}
-
(-q^2_\bot)^{n-1}
\Big)
\nonumber{}
\end{align}
In the first line we inserted the definition of $N_{D-1}$ from Eq.~\eqreft{vv40} and in the second line we used Eq.~\eqreft{vv41} to rewrite the logarithm function.
We have set the arbitrary scale to unity, $r_0=1$, and $n$ is still defined by $D=5+2n$.
We can now use Eq.~\eqreft{vv34} to evaluate the integral:
\begin{align}
\hspace*{-1cm}
\int \frac{d^{D-1}q_\bot \ e^{-iq_\bot x_\bot}}{(2\pi)^{D-1}}
\frac{ N_{D-1}}{q^2_\bot}
=
\frac{(-1)^{n} n!}{16\pi^2 (4\pi)^n (1+2n)!}
\frac{4^{n-1}}{\pi^{2+n} (-x_\bot^2)^{2n+1}}
\frac{1}{\epsilon}
\Big(
\frac{4^\epsilon \Gamma(2n+1+\epsilon)}{\Gamma(1-n-\epsilon)(-x^2_\bot)^\epsilon}
-
\frac{\Gamma(2n+1)}{\Gamma(1-n)}
\Big)
\labelt{vv46}
\end{align}
All dependence on the infinitesimal parameter $\epsilon$ has been gathered in the final factor.
Of course, $\epsilon$ is only introduced as an intermediate parameter and disappears from the final expression.
Focus on the last factor:
\begin{align}
\frac{1}{\epsilon}
\Big(
\frac{4^\epsilon\Gamma(2n+1+\epsilon)}{\Gamma(1-n-\epsilon)(-x^2_\bot)^\epsilon}
-
\frac{\Gamma(2n+1)}{\Gamma(1-n)}
\Big)
=
\Gamma(2n+1)
\bigg(
\frac{\psi(1-n) }{\Gamma(1-n)}
+
\frac{\psi(2n+1)
-
\ln(-\frac{x^2_\bot}{4})}{\Gamma(1-n)}
\bigg)
\labelt{vv45}
\end{align}
On the right-hand side, $\epsilon$ has disappeared again.
The function $\psi(z)$ is the digamma function, which gives the derivative of the $\Gamma$-function through $\Gamma'(z) = \Gamma(z) \psi(z)$.
To arrive at the Fourier transform of $N_{D-1} \frac{q^\bot_\mu q^\bot_\nu}{q^2_\bot}$ we would now insert Eq.~\eqreft{vv45} into Eq.~\eqreft{vv46} and then differentiate Eq.~\eqreft{vv46} twice with respect to $x_\bot^\mu$ as in Eq.~\eqreft{vv48}.
However, we can already draw important conclusions without doing the final steps in detail.
\par
Focus on the right-hand side of Eq.~\eqreft{vv45} where there are two terms in the brackets.
The first term, ${\psi(1-n)/\Gamma(1-n)}$, is finite and non-zero for all integers $n\geq0$ (for $n\geq1$ both numerator and denominator diverge and the limit is taken).
However, for integer $n\geq0$ the second term is non-zero only for $n=0$.
This is so since the numerator of the second term is finite for all $n\geq0$ while the denominator $\Gamma(1-n)$ diverges when $n$ is an integer $\geq1$.
Hence, the second term only plays a role in $D=5$ and in this way $D=5$ is so different from other dimensions.
\par
If we proceed with the computations in Eqs.~\eqreft{vv46} and~\eqreft{vv45}, the first term on the right hand side of Eq.~\eqreft{vv45} will produce the same results in all odd $D\neq5$ as the integral in Eq.~\eqreft{vv35} which was valid for all $D\neq5$.
In $D=5$, i.e. $n=0$, both terms in Eq.~\eqreft{vv45} are finite and the resulting integral would then also be finite.
Let us put $n=0$ in Eq.~\eqreft{vv45}:
\begin{align}
\frac{1}{\epsilon}
\Big(
\frac{4^\epsilon\Gamma(1+\epsilon)}{\Gamma(1-\epsilon)(-x^2_\bot)^\epsilon}
-
1
\Big)
=
2\psi(1)
-
\ln(-\frac{x^2_\bot}{4})
\ .
\labelt{hej1}
\end{align}
Using that $\psi(1)=-\gamma$ where $\gamma$ is the Euler-Mascheroni constant we get that this simplifies to:
\begin{align}
\frac{1}{\epsilon}
\Big(
\frac{4^\epsilon\Gamma(1+\epsilon)r_0^{2\epsilon}}{\Gamma(1-\epsilon)(-x^2_\bot)^\epsilon}
-
1
\Big)
=
-
\ln(-\frac{x^2_\bot e^{2\gamma}}{4r_0^2})
\ .
\labelt{hej2}
\end{align}
Here, we reintroduced the arbitrary scale, $r_0$.
This expression should be inserted in Eq.~\eqreft{vv46} which when differentiated twice with respect to $x_\bot^\mu$ would give the desired integral.
It is then clear that the result will depend on the combination $\ln(-\frac{x^2_\bot e^{2\gamma}}{4r_0^2})$.
\par
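The $\epsilon\rightarrow0$ limit used here is elementary to verify; a short sympy sketch (a cross-check only; \texttt{x2} stands for $-x_\bot^2$):
\begin{verbatim}
import sympy as sp

eps, x2, r0 = sp.symbols('epsilon x2 r0', positive=True)   # x2 stands for -x_bot^2

expr = (4**eps * sp.gamma(1 + eps) * r0**(2*eps)
        / (sp.gamma(1 - eps) * x2**eps) - 1) / eps

print(sp.simplify(sp.limit(expr, eps, 0)))
# expect 2*log(2) - 2*EulerGamma + 2*log(r0) - log(x2),
# i.e. minus the logarithm on the right-hand side of Eq. (hej2)
\end{verbatim}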
We compute the final steps and find the result for $N_{4} \frac{q^\bot_\mu q^\bot_\nu}{q^2_\bot}$ in position space:
\begin{align}
\hspace*{-.5cm}
\int \frac{d^5q \ \delta(q_\prl) \ e^{-iqx}}{(2\pi)^4}
N_4
\frac{q_\mu q_\nu}{q^2}
=
-\frac{1}{32\pi^4\sqrt{-x_\bot^2}^{4}}
\bigg(
\etar{\mn}
-6 \Xmnb{\mu}{\nu}
-\Big(
\etar{\mn} - 4\Xmnb{\mu}{\nu}
\Big)
\ln(-\frac{x_\bot^2 e^{2\gamma}}{4r_0^2})
\bigg)
\ .
\labelt{vv49}
\end{align}
Again, $\gamma$ is the Euler-Mascheroni constant which can be removed by redefining the scale $r_0$.
\par
Using Eq.~\eqreft{vv49} we compute the second order metric in $D=5$.
After a redefinition of $r_0$ we get:
\begin{eqnarray}
h^{(2)}_\mn =
\frac{\mu^2}{r^{4}}
\Big(
\oov{2} \etat{\mn}
- \frac{2(6\alpha-5)\ln\frac{r}{r_0}-1}{16} \etar{\mn}
+ \frac{(6\alpha-5)(4\ln\frac{r}{r_0}-1)}{8} \Xmnb{\mu}{\nu}
\Big)
\ .
\labelt{vv50}
\end{eqnarray}
Again, $r^2 = -x_\bot^2$.
We have not found this result in earlier literature, although a similar situation occurs in $D=4$ at third order in $G_N$ in \dDo\ gauge \cite{Goldberger:2004jt}.
Note that we can make the logarithm disappear with the special choice $\alpha=\frac{5}{6}$.
The arbitrary scale, however, would in principle still be there.
In analogy, in $D=4$ we know that for $\alpha=1$ in harmonic gauge there are no logarithms.
\par
How can it be that the metric in $D=5$ depends on the arbitrary scale $r_0$?
It will now be seen that the freedom in choosing this scale corresponds to a redundant gauge freedom.
Thus, there is a certain gauge transformation of $h_\mn$ which leaves $h_\mn$ in the gauge $G_\sigma=0$.
This case is similar to the case of linearized gravity in \dDo-gauge where the coordinates can be translated by a harmonic function while the metric stays in \dDo-gauge.
The exact gauge transformations of the graviton field were discussed in Sec.~\ref{sec:GaugeTransformations}.
\par
Let us transform our coordinates according to:
\begin{equation}
x^\mu
\rightarrow
x^\mu
+ \beta
\frac{\mu^2}{r^4}
x_\bot^\mu
+ \ ...
\labelt{cor1}
\end{equation}
This is of the form $x^\mu\rightarrow x^\mu+\epsilon^\mu(x)$.
The ellipsis indicates higher order corrections to this transformation.
To second order in $G_N$ only the second order metric, $h^{\cGn{2}}_\mn$, will be changed by the transformation in Eq.~\eqreft{cor1}.
The change will be
\begin{align}
h^{\cGn{2}}_\mn
\rightarrow
h^{\cGn{2}}_\mn
-
\epsilon_{\mu,\nu}
-
\epsilon_{\nu,\mu}
\ ,
\labelt{me25}
\end{align}
where $\epsilon_\mu=\beta\frac{\mu^2x^\bot_\mu}{r^4}$.
Since the first order metric is unchanged, only the second order gauge condition will be changed.
We find that:
\begin{align}
G_\sigma^{\cGn{2}}
\rightarrow
G_\sigma^{\cGn{2}}
+\partial^2 \epsilon_\sigma
\labelt{me26}
\end{align}
Thus, if $\epsilon_\sigma$ is a harmonic function, we will stay in the \dDo-type gauge $G_\sigma=0$.
The situation is thus completely analogous to the case of linearized gravity.
\par
The choice of $\epsilon^\mu=\beta\frac{\mu^2x_\bot^\mu}{r^4}$ makes $\epsilon^\mu$ a harmonic function.
This can be seen from the following argument.
We start from the potential of a point source, which obeys:
\begin{align}
\partial^2 \frac{1}{r^{D-3}}
\propto
\delta^{D-1}(x_\bot)
\labelt{me27}
\end{align}
Then we take the partial derivative on each side:
\begin{align}
\partial^2 \frac{x_\mu^\bot}{r^{{D-1}}}
\propto
\partial_\mu \partial^2 \frac{1}{r^{D-3}}
\propto
\partial_\mu \delta^{D-1}(x_\bot)
\labelt{me28}
\end{align}
Putting $D=5$ we get the result that $\epsilon^\mu$ is harmonic when $r\neq0$.
\par
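The same statement can be checked explicitly in components; a short sympy sketch (a cross-check only, using the four spatial coordinates of $D=5$ and dropping the constant prefactor $\beta\mu^2$):
\begin{verbatim}
import sympy as sp

# D = 5: four spatial coordinates, r^2 = x1^2 + ... + x4^2
x = sp.symbols('x1:5', real=True)
r2 = sum(xi**2 for xi in x)

component = x[0] / r2**2      # one component of x_bot^mu / r^4
laplacian = sum(sp.diff(component, xi, 2) for xi in x)
print(sp.simplify(laplacian))  # 0, so the shift is harmonic away from r = 0
\end{verbatim}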
The conclusion is that the metric stays in the \dDo-type gauge when we make the coordinate transformation in Eq.~\eqreft{cor1}.
This will introduce the coefficient $\beta$ in Eq.~\eqreft{cor1} as a free parameter in the metric.
How does the metric change?
We use Eq.~\eqreft{me25}:
\begin{equation}
h^{\cGn{2}}_\mn
\rightarrow
h^{\cGn{2}}_\mn
- 2\beta
\frac{\mu^2}{r^4}
\Big(
-4 \Xmnb{\mu}{\nu}
+
\eta^\bot_\mn
\Big)
\end{equation}
Let us compare this to a rescaling of the arbitrary length scale $r_0$ in $h^{\cGn{2}}_\mn$ in Eq.~\eqreft{vv50}.
We find that when we let $r_0 \rightarrow \lambda r_0$ then $h^{\cGn{2}}_\mn$ changes by:
\begin{equation}
h^{\cGn{2}}_{\mn}
\rightarrow
h^{\cGn{2}}_{\mn}
+
\ln(\lambda)
\frac{6\alpha-5}{8}
\frac{\mu^2}{r^4}
\Big(
-4 \Xmnb{\mu}{\nu} + \delrb{\mn}
\Big)
\end{equation}
We see that the freedom to change coordinates according to Eq.~\eqreft{cor1} exactly corresponds to the freedom in choosing the arbitrary scale $r_0$.
\par
What happens in space-time dimensions other than $D=5$?
Here, the equivalent transformation would be
\begin{align}
\epsilon^\sigma = \beta \frac{x_\bot^\sigma}{r^{D-1}}
\ ,
\labelt{me29}
\end{align}
which is seen from Eq.~\eqreft{me28} to be a harmonic function.
In contrast to $\beta$ in the transformation from Eq.~\eqreft{cor1}, $\beta$ in Eq.~\eqreft{me29} is dimensionful.
\par
We can introduce an arbitrary parameter in the metric in any dimension if we translate our coordinates with $\epsilon^\sigma$ from Eq.~\eqreft{me29}.
However, we only expect logarithms to appear in $D=4$ besides $D=5$.
The reason is to be found in the dimension of $\beta$.
Only in $D=4$ and $D=5$ is it comparable to the dimension of the Schwarzschild-Tangherlini\ parameter $\mu$.
The mass dimension of $\beta$ is found to be $(-D+1)$ and that of $\mu$ is $(-D+3)$.
When can $\beta$ be expressed as an integer power of $\mu$?
This requires
\begin{align}
-D+1 = m(-D+3)
\ ,
\labelt{me30}
\end{align}
for some integer $m$.
For $D=4$ we find $m=3$ while for $D=5$ we get $m=2$.
For other values of $D\geq4$ we find that $m$ is not an integer.
The interpretation is that in $D=5$ a logarithm appears at second order, while in $D=4$ it appears at third order.
This conclusion is in agreement with the results in this chapter as well as Ref.~\cite{Goldberger:2004jt}.
In Ref.~\cite{Goldberger:2004jt} the Schwarzschild metric in $D=4$ was computed perturbatively to third order in \dDo-gauge and it was found that a logarithm appears in the metric at this order.
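The counting in Eq.~\eqreft{me30} can be tabulated explicitly; a small Python sketch (illustration only):
\begin{verbatim}
from fractions import Fraction

# m = (D-1)/(D-3) from Eq. (me30); an integer only for D = 4 (m = 3) and D = 5 (m = 2)
for D in range(4, 12):
    m = Fraction(D - 1, D - 3)
    print(D, m, m.denominator == 1)
\end{verbatim}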
\section{Classical Derivation of the Schwarzschild-Tangherlini\ Metric}
\labelx{sec:ClassicalDerivation}
In this section we derive the Schwarzschild-Tangherlini\ metric with methods from classical general relativity.
The method is analogous to that in Weinberg~\cite{Weinberg:1972kfs} where the Schwarzschild metric is derived in harmonic coordinates in $D=4$ by a coordinate transformation from the Schwarzschild metric in spherical coordinates.
Here we work in arbitrary dimensions $D\geq4$ with the generalized \dDo-type gauge from Sec.~\ref{sec:GaugeFixing}.
That is, we will derive the Schwarzschild-Tangherlini\ metric perturbatively to second order in $G_N$ in coordinates which satisfy $G_\sigma=0$ where $G_\sigma$ is the function in Eq.~\eqreft{gau3}.
\par
The Schwarzschild-Tangherlini\ metric was given in Eq.~\eqreft{n215} and is:
\begin{align}
&d\tau^2 =
(1-\frac{\mu}{R^\dmt}) dt^2
-\frac{1}{1-\frac{\mu}{R^\dmt}} dR^2
- R^2 d\Omega^2_{D-2}
\labelt{met1}
\end{align}
The Schwarzschild-Tangherlini\ parameter $\mu$ was defined in Eq.~\eqreft{mud1} and is proportional to $mG_N$.
Also, we use $n=D-3$.
In Eq.~\eqreft{met1} we have used $R$ as the radial coordinate.
The goal is to change to new coordinates in terms of a new radial coordinate, $r$.
The old radial coordinate $R$ is a function of the new one, $r$, and we expand $R(r)$ perturbatively in powers of $\frac{\mu}{r^{D-3}}$ and determine the coefficients of this expansion from the gauge condition $G_\sigma=0$.
\par
The new coordinates, $x^\mu$, should be Cartesian-like and are defined in terms of $r$ by passing from the spherical coordinates $r$ and angle variables to Cartesian coordinates:
\begin{align}
&x^0 = t
\labelt{met2}
\\
&x^1 = r \cos(\theta_1) \nonumber \\
&x^2 = r \sin(\theta_1)\cos(\theta_2) \nonumber \\
&x^3 = r \sin(\theta_1)\sin(\theta_2)\cos(\theta_3) \nonumber \\
&... \nonumber \\
&x^{D-1} = r \sin(\theta_1)\sin(\theta_2) ... \sin(\theta_{D-2})
\nonumber{}
\end{align}
Here the $x^i$, $i=1\ ...\ (D-1)$, are the Cartesian-like coordinates and the $\theta_i$, $i=1\ ...\ (D-2)$, are the angle coordinates.
The angle coordinates are included in the Schwarzschild-Tangherlini\ metric in $d\Omega^2_{D-2}$.
The angles $\theta_i$ run from $0$ to $\pi$ when $i=1\ ...\ (D-3)$ and $\theta_{D-2}$ runs from $0$ to $2\pi$.
\par
The new coordinates $x^\mu$ are interpreted as a Lorentz covariant vector and indices on $x^\mu$ are then raised and lowered with the flat space metric.
We will also use the same Lorentz covariant notation as introduced in Sec.~\ref{sec:GeneralRelativity2}.
We want to write the Schwarzschild-Tangherlini\ metric in terms of the new coordinates.
We use the following relations:
\begin{subequations}
\label{eqn:met3}
\begin{align}
&dR^2 = \frac{dR^2}{dr^2}(\frac{x_\bot dx_\bot}{r})^2
\ ,
\labelt{met3a}
\\
&d\Omega^2_{D-2} = -
\frac{1}{r^2}
\big(
dx_\bot^2 + (\frac{x_\bot dx_\bot}{r})^2
\big)
\ .
\labelt{met3b}
\end{align}
\end{subequations}
Here we have already generalized to the Lorentz covariant notation.
Then the radial variable $r$ is defined as $r^2 = -x_\bot^2$.
Thus, although the Schwarzschild-Tangherlini\ metric in Eq.~\eqreft{met1} describes a stationary particle, the equations in terms of the Cartesian-like coordinates are easily generalized to describe a particle with any momentum.
\par
We insert Eqs.~\eqreft{met3} in the Schwarzschild-Tangherlini\ metric from Eq.~\eqreft{met1}:
\begin{align}
d\tau^2 =
(1-\frac{\mu}{R^{\dmt}}) dt^2
-\frac{1}{1-\frac{\mu}{R^{\dmt}}} \frac{dR^2}{dr^2} (\frac{x_\bot dx_\bot}{r})^2
+ \frac{R^2}{r^2} \Big( dx_\bot^2 + (\frac{x_\bot dx_\bot}{r})^2 \Big)
\labelt{met4}
\end{align}
We can read off the metric, $g_\mn$:
\begin{align}
&g_{\mn} =
(1-\frac{\mu}{R^{\dmt}}) \eta^\prl_\mn
+\frac{
1
}{
1-\frac{\mu}{R^{\dmt}}
}
\frac{dR^2}{dr^2}
\Xmnb{\mu}{\nu}
+
\frac{R^2}{r^2}
(\eta^\bot_\mn - \Xmnb{\mu}{\nu})
\ .
\labelt{met5}
\end{align}
We will expand this expression for the metric in powers of $\frac{\mu}{r^{\dmt}}$.
\par
We expand $R$ in the dimensionless quantity $\frac{\mu}{r^{\dmt}}$:
\begin{align}
R = r(1 + a\frac{\mu}{r^{\dmt}} + b \ \Big(a\frac{\mu}{r^\dmt}\Big)^2 + ...)
\labelt{met6}
\end{align}
The coefficient, $a$, has been included in the $\mu^2$ term of the expansion for convenience.
Determining $a$ gives the first order correction to the metric while $b$ gives the second.
Likewise $a$ is determined from the first order gauge condition $G_\sigma^{\cGno}=0$ and $b$ from the second order gauge condition $G_\sigma^{\cGn{2}}=0$.
\par
Expanding the metric from Eq.~\eqreft{met5} to first order, we find:
\begin{align}
&g_{\mn} \approx
\eta_\mn
-\frac{\mu}{r^{\dmt}} \eta^\prl_\mn
+2a\frac{\mu}{r^{\dmt}} \eta^\bot_{\mn}
- \big( 2na-1 \big) \frac{\mu}{r^{\dmt}} \Xmnb{\mu}{\nu}
\labelt{met7}
\end{align}
We read off the first order correction:
\begin{align}
h^\cGno_\mn =
-\frac{\mu}{r^{\dmt}}
\Big(
\eta^\prl_\mn
-2a \eta^\bot_{\mn}
+ \big( 2na-1 \big) \Xmnb{\mu}{\nu}
\Big)
\ .
\labelt{met8}
\end{align}
From Eq.~\eqreft{n419} the first order gauge condition is:
\begin{align}
\maP^\mn_\rs {h^{\cGno}}_\mn^{,\rho} = 0
\labelt{met9}
\end{align}
We want to insert $h^\cGno_\mn$ from Eq.~\eqreft{met8} into the gauge condition from Eq.~\eqreft{met9}.
It is thus necessary to compute the partial derivative of $h_\mn^\cGno$.
We get:
\begin{align}
\partial_\sigma
h^\cGno_\mn
=
-n\frac{\mu}{r^{n+1}}
\Xrhob{\sigma}
\Big(
\eta^\prl_{\mn}
- 2a\eta^\bot_{\mn}
+ \big(2na-1\big) \Xmnb{\mu}{\nu} \Big)
-\oov{r^{n+1}} (2na-1) \lambda_{\sigma \mn}
\labelt{me10}
\end{align}
Here $\lambda_{\sigma \mn}$ is a convenient tensor defined as:
\begin{subequations}
\label{eqn:me11}
\begin{align}
\lambda_{\sigma\mn}
&=
r\ \partial_\sigma
\Xmnb{\mu}{\nu}
\labelt{me11a}
\\
&=
\Big( -\eta^\bot_{\sigma\mu} + \Xmnb{\sigma}{\mu} \Big) \Xrhob{\nu}
+ \Big( -\eta^\bot_{\sigma\nu} + \Xmnb{\sigma}{\nu} \Big) \Xrhob{\mu}
\label{lam1e}
\end{align}
\end{subequations}
It is symmetric in $\mu$ and $\nu$.
\par
We insert $\partial_\sigma h^{\cGno}_\mn$ into the gauge condition and get:
\begin{align}
G^\cGno_\sigma = \frac{\mu}{r^{n+1}}(2na-1)\Xrhob{\sigma}
\labelt{me12}
\end{align}
In this way $a$ is determined to be $a=\frac{1}{2n}$.
Inserting this value for $a$ in the expression for $h^\cGno_\mn$ from Eq.~\eqreft{met8} produces the same result as we computed in Sec.~\ref{sec:NewtonPotential}.
\par
We will now derive the next term in the expansion of the Schwarzschild-Tangherlini\ metric which is determined by $b$.
This is the second order term, which we want to compare with the amplitude computation.
We expand the coefficients of the metric from Eq.~\eqreft{met5} which are $(1-\frac{\mu}{R^{n}})$, $\frac{R^2}{r^2}$, and $1/(1-\frac{\mu}{R^{n}}) \frac{dR^2}{dr^2}$:
\begin{subequations}
\label{eqn:me13}
\begin{align}
&1-\frac{\mu}{R^{\dmt}}
= 1 - \frac{\mu}{r^{\dmt}} + \frac{1}{2} \Big(\frac{\mu}{r^{\dmt}}\Big)^2 + ...
\labelt{me13a}
\\
&\frac{R^2}{r^2} =
1 + \frac{1}{\dmt} \frac{\mu}{r^{\dmt}}
+(2b+1)\frac{1}{4\dmt^2} \Big(\frac{\mu}{r^{\dmt}}\Big)^2 + ...
\\
&\frac{dR^2}{dr^2}
= 1 - \frac{\dmt-1}{\dmt} \frac{\mu}{r^{\dmt}}
- \Big(
2(2\dmt-1)b - (\dmt-1)^2
\Big) \frac{1}{4\dmt^2} \Big(\frac{\mu}{r^{\dmt}}\Big)^2 +...
\\
&\frac{1}{1-\frac{\mu}{R^{\dmt}}}
= 1 + \frac{\mu}{r^{\dmt}} + \frac{1}{2} \Big(\frac{\mu}{r^{\dmt}}\Big)^2 + ...
\\
&\frac{1}{1-\frac{\mu}{R^{\dmt}}} \frac{dR^2}{dr^2} =
1 + \frac{1}{\dmt} \frac{\mu}{r^{\dmt}}
+\Big( 1-\dmt(\dmt-2) - 2(2\dmt-1)b \Big)
\frac{1}{4\dmt^2} \Big(\frac{\mu}{r^{\dmt}}\Big)^2 + ...
\labelt{me13e}
\end{align}
\end{subequations}
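These expansions are mechanical and can be reproduced with a computer algebra system. The following sympy sketch (a cross-check only) verifies the first three of Eqs.~\eqreft{me13}, using $u=\frac{\mu}{r^\dmt}$ as the expansion parameter and evaluating at $r=1$ after differentiating:
\begin{verbatim}
import sympy as sp

u, n, b, r, mu = sp.symbols('u n b r mu', positive=True)

# R(r) from Eq. (met6) with a = 1/(2n); u stands for mu/r^n
R = r*(1 + mu/(2*n*r**n) + b*(mu/(2*n*r**n))**2)

exprs = {
    '1 - mu/R^n ': 1 - mu/R**n,
    'R^2/r^2    ': (R/r)**2,
    '(dR/dr)^2  ': sp.diff(R, r)**2,
}
for name, e in exprs.items():
    # compare with Eqs. (me13a)-(me13c)
    series = sp.series(e.subs({r: 1, mu: u}), u, 0, 3).removeO()
    print(name, sp.expand(series))
\end{verbatim}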
Inserting these expansions into the metric we get its expansion to second order in $G_N$:
\begin{align}
g_\mn \approx
\eta_\mn
- \frac{\mu}{r^\dmt}
\big(&\deltb{\mn} - \oov{\dmt} \delrb{\mn} \big)
+\Big(\fcr\Big)^2
\big(\oov{2} \deltb{\mn}
+ \frac{2b+1}{4\dmt^2} \delrb{\mn}
-\frac{4b+\dmt-2}{4\dmt} \Xmnb{\mu}{\nu} \big)
\labelt{me14}
\end{align}
And we read off $h^{\cGn{2}}_\mn$:
\begin{align}
h^{\cGn{2}}_\mn
=
\frac{\mu^2}{r^{2\dmt}}
\big(\oov{2} \deltb{\mn}
+ \frac{2b+1}{4\dmt^2} \delrb{\mn}
- \frac{4b+\dmt-2}{4\dmt} \Xmnb{\mu}{\nu} \big)
\labelt{me15}
\end{align}
The coefficient $b$ is determined from the second order term of the gauge condition.
\par
From Eq.~\eqreft{n419} the second order gauge condition is found to be:
\begin{align}
G^{\cGn{2}}_\sigma =
\maP^\mn_\rs {h^{\cGn{2}}}_\mn^{,\rho}
- \alpha \Gamma_{\sigma\ab}^{\rho\mn} \ h_\cGno^\ab \ h^{\cGno}_{\mn,\rho}
\labelt{me16}
\end{align}
We insert $h^\cGno_\mn$ from Eqs.~\eqreft{met8} and~\eqreft{me10} and $h^{\cGn{2}}_\mn$ from Eq.~\eqreft{me15}.
The partial derivative of $h^{\cGn{2}}_\mn$ is:
\begin{align}
\partial_\rho
h^{\cGn{2}}_\mn
=
\frac{2\dmt \mu^2}{r^{2\dmt+1}}
\Xrhob{\rho}
\Big(
\oov{2} \deltb{\mn}
+ \frac{2b+1}{4\dmt^2} \delrb{\mn}
- \frac{4b+\dmt-2}{4\dmt} \Xmnb{\mu}{\nu}
\Big)
-\frac{\mu^2}{r^{2\dmt+1}} \frac{4b+\dmt-2}{4\dmt} \lambda_{\rho\mn}
\labelt{me17}
\end{align}
The tensor, $\lambda_{\rho\mn}$, was defined in Eq.~\eqreft{me11}.
We compute the second order gauge condition in two steps.
First, $\alpha \Gamma_{\sigma\ab}^{\rho\mn} \ h_\cGno^\ab \ h^{\cGno}_{\mn,\rho}$, where we get:
\begin{align}
\alpha \Gamma_{\sigma\ab}^{\rho\mn} \ h_\cGno^\ab \ h^{\cGno}_{\mn,\rho}
=
- \alpha
\frac{\mu^2}{r^{2\dmt+1}}
\frac{\dmt+1}{2} \Xrhob{\sigma}
\labelt{me18}
\end{align}
Then, $\maP^\mn_\rs {h^{\cGn{2}}}_\mn^{,\rho}$, where we get:
\begin{align}
\maP^\mn_\rs {h^{\cGn{2}}}_\mn^{,\rho}
=
-
\frac{\mu^2}{r^{2\dmt+1}}
\frac{\dmt^2+1+(\dmt-2)b}{2\dmt}
\Xrhob{\sigma}
\labelt{me19}
\end{align}
The coefficient $b$ is determined from these expressions.
\par
Combining Eqs.~\eqreft{me16}, \eqreft{me18} and~\eqreft{me19}, we get:
\begin{align}
G_\sigma^{\cGn{2}}
=
-\frac{\mu^2}{2nr^{2n+1}}
\Xrhob{\sigma}
\Big(
n^2+1+(n-2)b
-
\alpha
n(n+1)
\Big)
\labelt{me21}
\end{align}
We determine $b$ from $G^{\cGn{2}}_\sigma=0$ to be:
\begin{align}
b = \frac{-(1-\alpha)\dmt^2+\alpha\dmt -1}{\dmt-2}
\labelt{me20}
\end{align}
This value of $b$ diverges when $n=2$, that is, $D=5$.
This signals that the chosen expansion of $R(r)$ does not work in $D=5$, which is expected since logarithms appear in $D=5$.
The value of $b$ is inserted in Eq.~\eqreft{me15} to get the second order contribution to the metric in $D\neq5$.
We find that Eq.~\eqreft{me15} with the value of $b$ in Eq.~\eqreft{me20} agrees with the amplitude computation from Eq.~\eqreft{vv37}.
\par
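The agreement can also be verified by comparing coefficients symbolically. A short sympy sketch (a cross-check only) confirms that Eq.~\eqreft{me15} with $b$ from Eq.~\eqreft{me20} reproduces the coefficients of $\eta^\bot_\mn$ and $\frac{x^\bot_\mu x^\bot_\nu}{r^2}$ in Eq.~\eqreft{vv37}:
\begin{verbatim}
import sympy as sp

D, alpha = sp.symbols('D alpha')
n = D - 3

# b from Eq. (me20)
b = (-(1 - alpha)*n**2 + alpha*n - 1) / (n - 2)

# coefficients of eta_perp and x_mu x_nu / r^2 in Eq. (me15) ...
c_eta_cl = (2*b + 1) / (4*n**2)
c_X_cl   = -(4*b + n - 2) / (4*n)

# ... and in the amplitude result, Eq. (vv37)
c_eta_amp = -(2*(1 - alpha)*D**2 - (13 - 10*alpha)*D + 25 - 12*alpha) / (4*(D - 3)**2*(D - 5))
c_X_amp   = -((4*alpha - 3)*D - 8*alpha + 5) / (4*(D - 5))

print(sp.simplify(c_eta_cl - c_eta_amp), sp.simplify(c_X_cl - c_X_amp))   # 0 0
\end{verbatim}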
We will now specialize on $D=5$.
It is clear that we cannot choose $b$ to satisfy Eq.~\eqreft{me21} when $n=2$ since $b$ disappears from the equation.
However, when $\alpha=\frac{5}{6}$ the equation is identically satisfied, and in this case $b$ can be chosen arbitrarily.
This is in agreement with the results from the amplitude computation.
\par
To find the metric in $D=5$ for general $\alpha$ it is necessary to modify the expansion of $R(r)$.
We generalize the coefficient $b$ from the expansion in Eq.~\eqreft{met6} by letting $b\rightarrow b_0 + b_1 \ln(\frac{r}{r_0})$.
The expansion of $R(r)$ is then of the form:
\begin{align}
R = r
\bigg(
1 + \frac{\mu}{2\dmt r^\dmt} +
(b_0+b_1\ln(\frac{r}{r_0}))
\Big(\frac{\mu}{2\dmt r^\dmt}\Big)^2 + ...
\bigg)
\labelt{me22}
\end{align}
Here, $b_0$ and $b_1$ are coefficients to be determined and $r_0$ is a length scale.
If we define $b=b_0+b_1\ln(\frac{r}{r_0})$ then Eq.~\eqreft{me22} is similar to the earlier expansion in Eq.~\eqreft{met6}, except that $b$ now includes a logarithmic dependence.
\par
The metric in Eq.~\eqreft{met5} is expanded with the new definition of $b$.
For example, the expansion of the coefficient in Eq.~\eqreft{me13e} is changed to:
\begin{align}
\frac{1}{1-\frac{\mu}{R^{\dmt}}} \frac{dR^2}{dr^2} = (...) + 2b_1 \Big(\frac{\mu}{2\dmt r^\dmt}\Big)^2
\ .
\end{align}
The ellipsis indicates the result in Eq.~\eqreft{me13e}, to which the term linear in $b_1$ should be added.
In the case of $h^{\cGn{2}}_\mn$ and its derivative we get:
\begin{subequations}
\begin{align}
&h^{\cGn{2}}_\mn = (...) + 2 b_1 \Xmnb{\mu}{\nu} \Big(\frac{\mu}{2\dmt r^\dmt}\Big)^2
\\
&\partial_\rho h^{\cGn{2}}_\mn = (...) + b_1\frac{\mu^2}{4\dmt^2 r^{2\dmt+1}}\Big(
8\dmt \Xrhob{\rho}\Xmnb{\mu}{\nu}
-2 \Xrhob{\rho}\delrb{\mn}
+2 \lambda_{\rho\mn}
\Big)
\end{align}
\end{subequations}
Again, the ellipses indicate that the results from Eqs.~\eqreft{me15} and~\eqreft{me17} should be inserted.
For example:
\begin{align}
h^{\cGn{2}}_\mn
=
\frac{\mu^2}{r^{2\dmt}}
\bigg(
\oov{2} \deltb{\mn}
+ \frac{2b+1}{4\dmt^2} \delrb{\mn}
+
\big(
-
\frac{4b+\dmt-2}{4\dmt}
+\frac{b_1}{2n^2}
\big)
\Xmnb{\mu}{\nu}
\bigg)
\labelt{me24}
\end{align}
As before, we insert $h^{\cGn{2}}_{\mn,\sigma}$ into the gauge condition from Eq.~\eqreft{me16}.
\par
For the term which depends on $h^{\cGn{2}}_\mn$ in the second order gauge condition we get:
\begin{align}
\maP^\mn_\rs {h^{\cGn{2}}}_\mn^{,\rho}
=
\frac{\mu^2}{r^{2\dmt+1}}
\Big(
-\frac{\dmt^2+1+(\dmt-2)b}{2\dmt}
+\frac{3\dmt-2}{4\dmt^2}b_1
\Big)
\Xrhob{\sigma}
\end{align}
Then, with the inclusion of the logarithmic dependence in $b$ the gauge condition ${G^{\cGn{2}}_\sigma=0}$ from Eq.~\eqreft{me16} becomes:
\begin{align}
G^{\cGn{2}}_\sigma
=
-\frac{\mu^2}{2nr^{2n+1}}
\Xrhob{\sigma}
\Big(
(\dmt-2)b-\frac{3\dmt-2}{2\dmt} b_1 - \alpha \dmt (\dmt+1) + \dmt^2 + 1
\Big)
\end{align}
Recall the definition of $b=b_0+b_1\ln(\frac{r}{r_0})$.
When $n\neq2$ the logarithmic dependence in $b$ is forced to vanish and we get the same result as in Eq.~\eqreft{me20}.
Instead, when $n=2$ the coefficient of $b$ disappears and we get an equation which determines $b_1$ to be:
\begin{align}
b_1 = 5-6\alpha
\ .
\labelt{me23}
\end{align}
As expected, $b_1$ disappears for $\alpha=\frac{5}{6}$.
When $b_1\neq0$ the roles of $b_0$ and $r_0$ are similar and we put $b_0=0$.
Inserting $b_1$ from Eq.~\eqreft{me23} into the expression for $h^{\cGn{2}}_\mn$ in Eq.~\eqreft{me24} we get $h^{\cGn{2}}_\mn$ in $D=5$.
We find that this expression for $h^{\cGn{2}}_\mn$ agrees with the one from the amplitude computation in Eq.~\eqreft{vv50}.
\chapter{Concluding Remarks and Perspectives}
\labelx{sec:Conclusion}
We have analyzed several aspects of classical general relativity treated as a quantum field theory.
These include gauge theory of spin-2 gravitons, expansion of general relativity around flat space-time and Feynman rules of gravity.
Gravitational interactions between particles are carried by gravitons.
The gauge symmetry of the gravitons is the quantum field theoretic manifestation of general covariance from classical general relativity and in the long-range, classical limit gravitons can be interpreted like other quantum fields in the framework of special relativity.
\par
The use of covariant gauge introduced the arbitrary covariant gauge parameter $\xi$ which clearly shows how different quantities such as the graviton propagator or the three-point vertex function depend on the quantum gauge-fixing procedure.
Also, the role of the gauge-fixing function, $G_\sigma$, was analyzed as well as the freedom in choosing this function.
Here, we introduced a novel gauge-fixing function which combines harmonic and \dDo\ gauge in an entire family of gauge choices depending on the gauge parameter $\alpha$.
Our results were thus very general depending on the two gauge parameters $\xi$ and $\alpha$ as well as the arbitrary space-time dimension, $D$.
\par
The expansion of objects from general relativity such as the \EHA\ action around flat space-time requires manipulation of numerous tensors with several indices.
We discussed two distinct expansions, namely in the graviton field and in the gravitational constant, and showed how to relate them to each other.
An important result is the expansion of the \EHA\ action in $h_\mn$ in terms of the tensors $\Gpz^\mn$ and $\Hpz^\mn$, which are related to the Einstein tensor and the analogous gauge-breaking tensor $H^\mn$, respectively.
This made it possible to relate the n-graviton vertex rule to the Einstein tensor and $H^\mn$.
\par
Using the results of our expansions in $h_\mn$ we were able to derive Feynman rules for quantum gravity.
Here, we were in particular interested in the gravitational part of the action from which the graviton propagator and n-graviton self-interaction vertices were derived.
The graviton propagator in covariant \dDo\ gauge as well as the n-graviton vertices in terms of the Einstein tensor and $H^\mn$ are novel results that we have not found in earlier literature.
In addition we expanded the gravitational action in detail to third order in $h_\mn$ which allowed us to derive an explicit result for the three-graviton vertex.
\par
Using the Feynman rules for the n-graviton vertex we were able to relate the exact three-point vertex function of a massive scalar interacting with a graviton to the Schwarzschild-Tangherlini\ metric.
This was done by comparing the Feynman diagram expansion to a perturbative expansion of the classical equations of motion derived from the gauge-fixed action by $\delta S=0$.
Here, it was assumed that the n-loop triangle integrals can be reduced to certain convolution integrals in the classical limit, which has been shown explicitly only in $D=4$~\cite{Bjerrum-Bohr:2018xdl,Galusha:cand}.
The metric thus derived is independent of $\xi$ and satisfies the gauge/coordinate condition $G_\sigma=0$.
It is an exciting result that the Schwarzschild-Tangherlini\ metric can be derived from the Lorentz covariant three-point amplitude to all orders in $G_N$ which deserves further analysis in the future.
\par
Using the formulas relating the three-point amplitude and the Schwarzschild-Tangherlini\ metric to each other, we computed the one-loop contribution to the metric with the generalized \dDo-type gauge function depending on the arbitrary parameter $\alpha$.
This, in particular, required use of the three-graviton vertex and evaluation of triangle one-loop integrals in arbitrary dimension.
For specific values of the parameters $\alpha$ and $D$ we were able to compare the results with the literature.
This was the case for $\alpha=0$ and arbitrary $D$ where we found agreement with \cite{Collado:2018isu} and for $D=4$ and $\alpha=1$ where we found agreement with the standard result for the Schwarzschild metric in harmonic coordinates.
\par
The introduction of generalized Lorentz covariant gauge-fixing functions is an interesting consequence of the quantum field theoretic approach to general relativity and it would be exciting to examine if it is possible to derive exact, all-order results with such coordinate-conditions in classical general relativity.
Here we think of the \dDo\ gauge condition from Eq.~\eqreft{vv42} or the generalized gauge-function from our work Eq.~\eqreft{gau3}.
In this work, we used methods from classical general relativity to compute the Schwarzschild-Tangherlini\ metric perturbatively in the \dDo-type gauge depending on $\alpha$ to second order in $G_N$.
The results of this classical computation agreed with the amplitude method.
\par
In $D=5$, at second order in $G_N$, a logarithmic dependence on the radial coordinate appeared in the metric.
In the amplitude computation it was necessary to carefully remove divergent terms from the triangle integrals in order to arrive at the correct metric in $D=5$.
The appearance of logarithms was analyzed in terms of redundant gauge freedom.
From this analysis it is expected that, besides the case of $D=5$ at second order, logarithms will appear only in $D=4$ at third order; this explains the logarithms in Ref.~\cite{Goldberger:2004jt}, which appear exactly at that order in $D=4$.
\par
The reduction of n-loop triangle integrals in the classical limit is still to be done in detail.
In this thesis, we computed the one-loop integrals, but only after several intermediate calculations were we able to show that only the space components of these integrals contribute, and these are exactly the part with a simple interpretation in the classical limit.
A further investigation would be to include spinning and charged particles, which would produce the Kerr-Newman metric.
Another direction would be to study diagrams with two distinct massive scalars (i.e., four external scalar lines) and an external graviton.
Such diagrams would describe binary systems and possibly a metric for the two-body system could be defined.
A third possibility would be to analyze the $\hbar$ correction to the Schwarzschild-Tangherlini\ metric as was done already in Ref.~\cite{BjerrumBohr:2002ks}.
Is it possible to derive this correction in a sensible way to all orders in $G_N$?
In general, the study of general relativity from quantum field theory promises to provide many exciting results in the future.
\blankpage
\chapter*{Acknowledgments}
I would like to thank my advisors Poul Henrik Damgaard and Emil Bjerrum-Bohr for many helpful discussions during the work on this thesis.
\blankpage
\addcontentsline{toc}{chapter}{Bibliography}
\bibliographystyle{unsrt}
\section{Introduction}
During the past two decades, the dramatic improvement in data collection and acquisition technologies
has enabled scientists to collect a great amount of high-dimensional data,
for which the dimension $p$ can be much larger than the sample size $n$ (a.k.a. small-$n$-large-$p$).
The current research on high-dimensional data mainly focuses on
variable selection and graphical modeling.
The former aims to find a consistent estimate for high-dimensional regression under a
sparsity constraint. The existing methods include
Lasso \citep{Tibshirani1996}, SCAD \citep{FanL2001}, MCP \citep{Zhang2010}, and
rLasso \citep{SongL2015a}, among others.
The latter aims to learn conditional independence relationships for a large set of
variables. The existing methods include graphical Lasso \citep{YuanL2007, FriedmanHT2008},
nodewise regression \citep{MeinshausenB2006}, and $\psi$-learning \citep{LiangSQ2015}, among others.
Recently, many researchers have turned their attention to statistical inference for high-dimensional data,
aiming to quantify the uncertainty of high-dimensional regression, e.g., by constructing confidence intervals and assessing $p$-values for the regression coefficients.
The Bayesian methods \citep{LiangSY2013, SongL2015b} can potentially be used for this purpose,
but are time-consuming. The existing frequentist methods include
desparsified Lasso, multi sample-splitting, and ridge projection, among others.
Refer to \cite{DezeureBMM2015} for a comprehensive overview.
The desparsified Lasso method was proposed in \cite{vandeGeer2014}, which is also essentially
the same as the one developed in \cite{ZhangZhang2014} and \cite{Javanmard2014}.
For the high-dimensional linear regression
\[
{\boldsymbol Y}={\boldsymbol X} {\boldsymbol \beta}+{\boldsymbol \epsilon},
\]
where ${\boldsymbol \epsilon}$ is a vector of zero-mean Gaussian random errors, desparsified Lasso defines a bias-corrected
estimator
\begin{equation} \label{dlassoeq0}
\hat{{\boldsymbol \beta}}_{bc}=\hat{{\boldsymbol \beta}}_{Lasso}+\hat{\Theta}{\boldsymbol X}^{T}({\boldsymbol y}-{\boldsymbol X}\hat{{\boldsymbol \beta}}_{Lasso})/n,
\end{equation}
where $\hat{{\boldsymbol \beta}}_{Lasso}$ is the original Lasso estimator, and $\hat{\Theta}$ is
an approximator to the inverse of $\hat{{\boldsymbol \Sigma}}={\boldsymbol X}^{T}{\boldsymbol X}/n$.
From (\ref{dlassoeq0}), one can obtain
\begin{equation} \label{dlassoeq}
\sqrt{n}(\hat{{\boldsymbol \beta}}_{bc}-{\boldsymbol \beta})=\hat{\Theta}{\boldsymbol X}^{T}{\boldsymbol \epsilon}/\sqrt{n}
+\sqrt{n}(I_{p}-\hat{\Theta} \hat{{\boldsymbol \Sigma}})(\hat{{\boldsymbol \beta}}_{Lasso}-{\boldsymbol \beta}):=
\hat{\Theta}{\boldsymbol X}^{T}{\boldsymbol \epsilon}/\sqrt{n}+\Delta_n,
\end{equation}
where $I_{p}$ denotes the $p \times p$ identity matrix, and $\Delta_n$ is the error term.
With an appropriate estimator $\hat{\Theta}$, e.g., the one obtained by nodewise regression \citep{meinshausen2006},
it can be shown that
$\|\Delta_n\|_{\infty}=o_{p}(1)$ and thus
$\sqrt{n}(\hat{{\boldsymbol \beta}}_{bc}-{\boldsymbol \beta})$ shares the same asymptotic distribution
with $\hat{\Theta}{\boldsymbol X}^{T}{\boldsymbol \epsilon}/\sqrt{n}$.
Further, to calculate confidence intervals for ${\boldsymbol \beta}$, one needs to
approximate the distribution of $\hat{\Theta}{\boldsymbol X}^{T}{\boldsymbol \epsilon}/\sqrt{n}$.
For example, \cite{Javanmard2014} approximated it by $N(0,\hat{\sigma}^{2}\hat{\Theta}\hat{{\boldsymbol \Sigma}}\hat{\Theta}^{T})$,
where $\hat{\sigma}^2$ is a consistent estimator of $\sigma^2$;
and \cite{ZhangandCheng} approximated it using multiplier bootstrap.
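For illustration only, the bias correction in (\ref{dlassoeq0}) can be coded in a few lines; the following Python/numpy sketch assumes that the regularization level and an estimate $\hat{\Theta}$ (e.g., from nodewise regression) are supplied, and all names are illustrative rather than part of any particular package.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

def desparsified_lasso(X, y, Theta_hat, lam=0.1):
    # Bias-corrected (desparsified) Lasso estimator, cf. the display above.
    # Theta_hat is an approximate inverse of Sigma_hat = X'X/n, e.g. obtained
    # by nodewise regression; lam is the Lasso regularization level.
    n, p = X.shape
    beta_lasso = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
    beta_bc = beta_lasso + Theta_hat @ X.T @ (y - X @ beta_lasso) / n
    return beta_bc, beta_lasso
\end{verbatim}
Confidence intervals then follow from a normal approximation such as $N(0,\hat{\sigma}^{2}\hat{\Theta}\hat{{\boldsymbol \Sigma}}\hat{\Theta}^{T})$ mentioned above.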
The multi sample-splitting method was proposed and analyzed in \cite{MeinshausenMB2009}, which
works as follows: split the samples into two equal subsets,
use the first half of the samples for variable selection
and the second half for calculating $p$-values based on the selected variables; repeat this process many times; and aggregate the
$p$-values for statistical inference.
The confidence intervals can be constructed based on their duality with $p$-values.
The idea about sample-splitting and subsequent
statistical inference has also been implicitly contained in \cite{WassermanR2009}.
The multi sample-splitting method is very general
and can be applied to many different types of models.
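As a rough illustration of this procedure, the Python sketch below implements a simplified variant: Lasso selection on one half of the data, OLS $p$-values on the other half with a Bonferroni adjustment by the number of selected variables, and aggregation by twice the median over splits. The quantile aggregation of \cite{MeinshausenMB2009} is more refined, so this should be read as a hedged sketch only.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LassoCV
import statsmodels.api as sm

def multi_split_pvalues(X, y, n_splits=50, seed=0):
    # Simplified multi sample-splitting p-values (illustrative sketch).
    rng = np.random.default_rng(seed)
    n, p = X.shape
    pvals = np.ones((n_splits, p))
    for b in range(n_splits):
        idx = rng.permutation(n)
        i1, i2 = idx[:n // 2], idx[n // 2:]
        sel = np.flatnonzero(LassoCV(cv=5).fit(X[i1], y[i1]).coef_ != 0)
        if sel.size == 0 or sel.size >= i2.size - 1:
            continue
        fit = sm.OLS(y[i2], sm.add_constant(X[i2][:, sel])).fit()
        # Bonferroni adjustment by the number of selected variables
        pvals[b, sel] = np.minimum(fit.pvalues[1:] * sel.size, 1.0)
    # crude aggregation: twice the median across splits, capped at 1
    return np.minimum(2.0 * np.median(pvals, axis=0), 1.0)
\end{verbatim}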
The ridge projection method was studied in \cite{Buhlmann2013}, which can be viewed as
a direct extension of the low-dimensional ridge regression
to the high-dimensional case. The bias of the ridge estimator has been assessed and corrected,
the distribution of the ridge estimator has been derived and approximated,
and thus the method can be used for statistical inference.
The other methods include residual-type bootstrapping \citep{ChatterjeeL2013, LiuYu2013},
covariance test \citep{Lockhart2014}, and group-bound \citep{Meinshausen2015}.
A problem with the residual-type bootstrapping method is the super-efficiency
phenomenon; that is, a confidence interval of a zero regression coefficient might be the
singleton $\{0\}$. The covariance test method relies on the solution path of the Lasso
and is closely related to the post-selection
inference methods \citep{Berk2013, LeeDSYJ2016, Tibshirani2016, FithianST2014}.
The group-bound method provides a good treatment for highly correlated variables, but
often has weak power in detecting the effects of individual variables.
Recently, some methods based on the idea of estimating a low-dimensional component
of a high-dimensional model have also been proposed, see e.g. \cite{Belloni2015}
and \cite{YangYun2017}.
This paper proposes an innovative method, the so-called Markov neighborhood regression (MNR), for high-dimensional inference.
By making use of conditional independence relations among different explanatory variables,
it successfully breaks the high-dimensional inference problem into a series of low-dimensional
inference problems. The proposed method is fundamentally different from the existing ones, such as
desparsified-Lasso, ridge projection, and multi sample-splitting,
which strive to work on the problem in its original scale. The proposed method has been tested on high-dimensional linear,
logistic and Cox regression. The numerical results indicate that the proposed method
significantly outperforms the existing ones, while having competitive computational efficiency. Based on the concept of MNR, this paper also proposes a new method for learning the
causal structure of high-dimensional regression and applies it to identification of drug sensitive genes and cancer driver genes.
The idea of using conditional independence for dimension
reduction is general and can be applied to
many other high-dimensional or big data problems as well.
The remaining part of this paper is organized as follows. Section 2 describes the MNR method and establishes its validity.
Section 3 presents numerical results
on simulated data along with comparisons with some existing methods.
Section 4 proposes a method for learning the
causal structures of high-dimensional regression.
Section 5 presents some real data examples.
Section 6 concludes the paper with a brief discussion.
\section{Markov Neighborhood Regression}
\subsection{Linear Regression}
Suppose that a set of $n$ independent samples $D_{n}=\{(Y^{(i)},{\boldsymbol X}^{(i)})\}_{i=1}^{n}$ has been collected
from the linear regression with a random design:
\begin{equation} \label{modeleq1}
Y=\beta_0+X_{1}\beta_1+ \ldots+ X_{p}\beta_p+\epsilon,
\end{equation}
where $\epsilon$ follows the normal distribution $N(0,\sigma^{2})$,
and the explanatory variables, also known as features,
${\boldsymbol X}=(X_{1}, \ldots, X_{p})$ follows a multivariate normal distribution $N_{p}(\mathbf{0},\Sigma)$. In what follows, we will call
$\{X_i: \beta_i\ne 0, i=1,2,\ldots,p \}$ and $\{X_i: \beta_i=0, i=1,2,\ldots,p\}$
the true and false features, respectively.
Without loss of generality, we assume that the features have been standardized
such that $E(X_{j})=0$ and Var$(X_{j})=1$ for $j=1,\dots,p$.
Further, we represent ${\boldsymbol X}$ by an undirected graph ${\boldsymbol G}=({\boldsymbol V},{\boldsymbol E})$,
where ${\boldsymbol V}=\{1,2,\ldots,p\}$ denotes the set of $p$ vertices, ${\boldsymbol E}=(e_{ij})$ denotes the adjacency matrix,
$e_{ij}=1$ if the $(i,j)$th entry of the precision matrix $\Theta=\Sigma^{-1}$ is nonzero and
0 otherwise.
We use ${\boldsymbol X}_{{\boldsymbol A}}=\{X_k: k\in {\boldsymbol A}\}$ to denote a set of features
indexed by ${\boldsymbol A} \subset {\boldsymbol V}$,
and use $P_{{\boldsymbol V}}$ to denote the joint probability distribution of ${\boldsymbol X}_{{\boldsymbol V}}$.
For a triplet ${\boldsymbol I}, {\boldsymbol J}, {\boldsymbol U} \subset {\boldsymbol V}$,
we use ${\boldsymbol X}_{{\boldsymbol I}} \perp {\boldsymbol X}_{{\boldsymbol J}} | {\boldsymbol X}_{{\boldsymbol U}}$ to denote that ${\boldsymbol X}_{{\boldsymbol I}}$ is {\it conditionally independent} of ${\boldsymbol X}_{{\boldsymbol J}}$ given ${\boldsymbol X}_{{\boldsymbol U}}$.
A {\it path} of length $l>0$ from a vertex $v_0$ to another vertex $v_l$ is a sequence
$v_0,v_1,\ldots,v_l$ of distinct vertices such that $e_{v_{k-1},v_k}=1$ for $k=1,2,\ldots,l$.
The subset ${\boldsymbol U} \subset {\boldsymbol V}$ is said to {\it separate} ${\boldsymbol I} \subset {\boldsymbol V}$ from
${\boldsymbol J} \subset {\boldsymbol V}$ if for every $i \in {\boldsymbol I}$ and $j \in {\boldsymbol J}$, all paths from
$i$ to $j$ have at least one vertex in ${\boldsymbol U}$.
$P_{{\boldsymbol V}}$ is said to satisfy the {\it Markov property} with respect to
${\boldsymbol G}$ if for every triple of disjoint sets ${\boldsymbol I}, {\boldsymbol J}, {\boldsymbol U} \subset {\boldsymbol V}$, it holds that $X_{{\boldsymbol I}} \perp X_{{\boldsymbol J}} | X_{{\boldsymbol U}}$
whenever ${\boldsymbol U}$ separates ${\boldsymbol I}$ and ${\boldsymbol J}$ in ${\boldsymbol G}$.
Let $\xi_{j}=\{k: e_{jk}=1\}$
denote the neighboring set of $X_{j}$ in ${\boldsymbol G}$.
Following from the Markov property of the Gaussian graphical model (GGM), we have
$X_{j} \perp X_{i} |{\boldsymbol X}_{\xi_j}$ for any $i \in {\boldsymbol V} \setminus \xi_j$, as $\xi_j$ forms a separator
between $X_i$ and $X_j$.
For convenience, we call $\xi_j$ the minimum Markov neighborhood of $X_{j}$ in ${\boldsymbol G}$,
and call any superset ${\boldsymbol s}_j \supset \xi_j$ a Markov neighborhood of $X_{j}$ in ${\boldsymbol G}$.
The minimum Markov neighborhood is also termed the Markov blanket in Bayesian networks or
general Markov networks.
To motivate the proposed method, we first look at a simple mathematical fact based on the
well known property of Gaussian graphical models (see e.g. \cite{Lauritzen1996}):
\[
X_i \perp X_j |X_{{\boldsymbol V}\setminus \{i,j\}} \Longleftrightarrow \theta_{ij}=0,
\]
where $\theta_{ij}$ denotes the $(i,j)$-th entry of $\Theta$.
Without loss of generality, we let ${\boldsymbol S}_1=\{2, \ldots, d\}$
denote a Markov neighborhood of $X_1$,
let $\Sigma_d$ denote the covariance matrix of $\{X_{1}\} \cup {\boldsymbol X}_{{\boldsymbol S}_1}$,
and partition $\Theta$ as
\begin{equation} \label{Leq1}
\Theta= \begin{bmatrix}
\Theta_{d} & \Theta_{d,p-d} \\
\Theta_{p-d,d} & \Theta_{p-d}
\end{bmatrix},
\end{equation}
where the first row of $\Theta_{d,p-d}$ and the first column of $\Theta_{p-d,d}$ are exactly zero, as
$X_1 \perp {\boldsymbol X}_{{\boldsymbol V}\setminus(\{1\}\cup {\boldsymbol S}_1)} | {\boldsymbol X}_{{\boldsymbol S}_1}$ holds.
Inverting $\Theta$, we have
$\Sigma_{d}=(\Theta_{d}-\Theta_{d,p-d}\Theta_{p-d}^{-1}\Theta_{p-d,d})^{-1}$,
which is equal to the top $d\times d$-submatrix of $\Theta^{-1}$. Therefore,
\begin{equation} \label{Leq2}
\Sigma_{d}^{-1}=\Theta_{d}-\Theta_{d,p-d}\Theta_{p-d}^{-1}\Theta_{p-d,d}.
\end{equation}
Since the first row of $\Theta_{d,p-d}$ and the first column of $\Theta_{p-d,d}$ are exactly zero,
the $(1,1)$-th entry of $\Theta_{d,p-d}\Theta_{p-d}^{-1}\Theta_{p-d,d}$ is exactly zero. Therefore,
the $(1,1)$-th entry of $\Theta_d$ (and thus of $\Theta$) equals the $(1,1)$-th entry of $\Sigma_{d}^{-1}$.
This suggests that {\it if $\{X_{1}\} \cup {\boldsymbol X}_{{\boldsymbol S}_1} \supset {\boldsymbol X}_{{\boldsymbol S}_*}$ holds and
$n$ is sufficiently large, where ${\boldsymbol S}_*$ denotes the index
set of true features of the model (\ref{modeleq1}), then the statistical inference for
$\beta_1$ can be made based on the
subset regression:}
\begin{equation} \label{modeleq2}
Y=\beta_0'+X_{1}\beta_1+X_{2} \beta_2'+\ldots+X_{d} \beta_d'+\epsilon,
\end{equation}
where the prime on $\beta_k$'s for $k\ne 1$
indicates that those regression coefficients might be modified by the subset regression.
Since ${\boldsymbol S}_1$ forms a Markov neighborhood of $X_{1}$ in the Markov network formed
by all features, we call (\ref{modeleq2})
a {\it Markov neighborhood regression}, which breaks the high-dimensional inference problem
into a series of low-dimensional inference problems.
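The key fact above, namely that the $(1,1)$-th entry of $\Sigma_{d}^{-1}$ coincides with $\theta_{11}$ whenever ${\boldsymbol S}_1$ contains the Markov blanket of $X_1$, can be verified numerically. The sketch below uses a small illustrative precision matrix (values chosen only to keep it positive definite) in which $X_1$ is conditionally dependent only on $X_2$ and $X_3$.
\begin{verbatim}
import numpy as np

p = 8
Theta = np.eye(p)
Theta[0, 1] = Theta[1, 0] = 0.4   # X_1 -- X_2
Theta[0, 2] = Theta[2, 0] = 0.3   # X_1 -- X_3
Theta[2, 4] = Theta[4, 2] = 0.25  # X_3 -- X_5 (outside the neighborhood)
Theta[3, 4] = Theta[4, 3] = 0.20  # X_4 -- X_5
Sigma = np.linalg.inv(Theta)

S1 = [0, 1, 2]                    # {X_1} union its Markov blanket {X_2, X_3}
Sigma_d = Sigma[np.ix_(S1, S1)]   # marginal covariance of (X_1, X_2, X_3)
print(np.linalg.inv(Sigma_d)[0, 0], Theta[0, 0])  # both equal 1 (up to rounding)
\end{verbatim}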
Based on this mathematical fact, we propose the following algorithm:
\begin{algorithm} (Markov Neighborhood Regression) \label{subsetAlg1}
\begin{itemize}
\item[(a)] (Variable selection) Conduct variable selection for the model (\ref{modeleq1})
to get a consistent estimate of ${\boldsymbol S}_*$. Denote the estimate by $\hat{{\boldsymbol S}}_*$.
\item[(b)] ({\it Markov blanket estimation})
Construct a GGM for ${\boldsymbol X}$ and
obtain a consistent estimate of the Markov blanket for each variable.
Denote the estimates by $\hat{\xi}_j$ for $j=1,2,\ldots, p$.
\item[(c)] (Subset regression) For each variable $X_j$, $j=1,\ldots,p$,
let $D_j=\{j\}\cup \hat{\xi}_j \cup \hat{{\boldsymbol S}}_*$ and run an Ordinary Least Square (OLS) regression with
the features given by ${\boldsymbol X}_{D_j}$, i.e.,
\begin{equation} \label{regeqD}
Y=\beta_0+{\boldsymbol X}_{D_j}{\boldsymbol \beta}_{D_j}+\epsilon,
\end{equation}
where $\epsilon \sim N(0,\sigma^2I_n)$ and $I_n$ is an $n\times n$-identity matrix.
Conduct inference for $\beta_j$, including the estimate, confidence interval and $p$-value,
based on the output of (\ref{regeqD}).
\end{itemize}
\end{algorithm}
The Markov neighborhood corresponding to
the subset regression (\ref{modeleq2}) is
$\{2,3,\ldots,d\} \supseteq \hat{\xi}_1 \cup \hat{{\boldsymbol S}}_*$.
In general, $\hat{\xi}_1 \cup \hat{{\boldsymbol S}}_*$ can be any subset of $\{1,2,\ldots,p\}$
depending on the ordering of features in (\ref{modeleq1}).
Algorithm \ref{subsetAlg1} can be implemented in many different ways. For example,
a variety of high-dimensional variable selection algorithms can be
used for step (a), e.g.,
Lasso \citep{Tibshirani1996}, SCAD \citep{FanL2001}, MCP \citep{Zhang2010} and
rLasso \citep{SongL2015a}, which are all able to produce a consistent
estimate for ${\boldsymbol S}_*$ under appropriate conditions. Similarly,
a variety of graphical model learning algorithms can be used for step (b),
e.g., graphical Lasso \citep{YuanL2007, FriedmanHT2008},
nodewise regression \citep{MeinshausenB2006}, and $\psi$-learning \citep{LiangSQ2015},
which all produce a consistent estimate for the underlying GGM under
appropriate conditions.
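For concreteness, a minimal end-to-end sketch of Algorithm \ref{subsetAlg1} is given below. It uses cross-validated Lasso for step (a), a nodewise Lasso regression for step (b), and an OLS fit of the subset regression (\ref{regeqD}) for step (c); these estimators are illustrative stand-ins for SCAD/MCP and $\psi$-learning, and the sketch is written in Python with scikit-learn and statsmodels rather than the R packages used in our numerical studies.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LassoCV
import statsmodels.api as sm

def mnr_inference(X, y, j, alpha=0.05):
    # Markov neighborhood regression for a single coefficient beta_j (sketch).
    n, p = X.shape
    # (a) variable selection on (y, X)
    S_hat = set(np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_ != 0))
    # (b) Markov blanket of X_j from the nodewise regression X_j ~ X_{-j}
    others = [k for k in range(p) if k != j]
    nw = LassoCV(cv=5).fit(X[:, others], X[:, j])
    xi_hat = {others[k] for k in np.flatnonzero(nw.coef_ != 0)}
    # (c) subset OLS regression on D_j = {j} union xi_hat union S_hat
    D_j = sorted({j} | xi_hat | S_hat)
    fit = sm.OLS(y, sm.add_constant(X[:, D_j])).fit()
    pos = 1 + D_j.index(j)        # position of beta_j after the intercept
    ci = fit.conf_int(alpha=alpha)[pos]
    return fit.params[pos], fit.pvalues[pos], tuple(ci)
\end{verbatim}
Looping this function over $j=1,\ldots,p$ yields the coordinate-wise estimates, $p$-values and confidence intervals; in practice, steps (a) and (b) would be computed once and reused for all coordinates.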
To justify Algorithm \ref{subsetAlg1}, we introduce the following lemma, which can be proved based on the basic theory of high-dimensional linear regression and the mathematical relation shown around equations (\ref{Leq1}) and (\ref{Leq2}). Refer to the Supplementary Material for the details of the proof.
\begin{lemma} \label{lemma01}
Let $\hat{\xi}_{j}\supseteq\xi_{j}$ denote any Markov neighborhood of feature $x_{j}$, let
$\hat{{\boldsymbol S}}_* \supseteq {\boldsymbol S}_*$ denote any reduced feature space, and let $D_j=\{j\} \cup \hat{\xi}_{j} \cup \hat{{\boldsymbol S}}_*$.
Consider the subset regression (\ref{regeqD}).
Let $\hat{{\boldsymbol \beta}}_{D_j}$ denote the OLS estimator of ${\boldsymbol \beta}_{D_j}$ from the subset regression, and
let $\hat{\beta}_{j}$ denote the element of $\hat{{\boldsymbol \beta}}_{D_j}$ corresponding to the variable $X_j$.
If $|D_j|=o(n^{1/2})$, as $n\to\infty$, the following results hold:
\begin{itemize}
\item[(i)] $\sqrt{n}(\hat{\beta}_{j}-\beta_{j}) \sim
N(0,\sigma^{2}\theta_{jj})$, where
$\theta_{jj}$ is the $(j,j)$-th entry of the precision matrix $\Theta$.
\item[(ii)]
$\sqrt{n} \frac{\hat{\beta}_j-\beta_j}{\sqrt{ \hat{\sigma}_n^2 \hat{\theta}_{jj}}} \sim N(0,1)$,
where $\hat{\sigma}_n^2=({\boldsymbol y}-{\boldsymbol x}_{D_j}\hat{{\boldsymbol \beta}}_{D_j})^T({\boldsymbol y}-{\boldsymbol x}_{D_j}\hat{{\boldsymbol \beta}}_{D_j})/(n-|D_j|-1)$,
$\hat{\theta}_{jj}$ is the $(j,j)$-th entry of
the matrix $\bigg[\frac{1}{n} \sum_{i=1}^{n}{\boldsymbol x}^{(i)}_{D_j}({\boldsymbol x}^{(i)}_{D_j})^{T} \bigg]^{-1}$, and
${\boldsymbol x}^{(i)}_{D_j}$ denotes the $i$-th row of ${\boldsymbol X}_{D_j}$.
\end{itemize}
\end{lemma}
\begin{remark} \label{remA}
For the case that $n$ is finite, we have
$(n-|D_j|-1) \hat{\sigma}_n^2/\sigma^2 \sim \chi^2(n-|D_j|-1)$, independent of $\hat{{\boldsymbol \beta}}_{D_j}$,
by the standard theory of OLS estimation.
Therefore, we can use $t(n-|D_j|-1)$ to approximate the distribution of
$\sqrt{n} \frac{\hat{\beta}_j-\beta_j}{\sqrt{ \hat{\sigma}_n^2 \hat{\theta}_{jj}}}$;
that is, {\it the estimate, $p$-value and confidence interval of $\beta_j$ can be
calculated from (\ref{regeqD}) as in conventional low-dimensional multiple linear regression. }
\end{remark}
Lemma \ref{lemma01} implies that Algorithm \ref{subsetAlg1} will be valid
as long as the following conditions hold:
\begin{eqnarray}
\hat{\xi}_{j} & \supseteq & \xi_{j}, \ \forall j \in \{1,2,\ldots,p\}, \label{ceq1} \\
\hat{{\boldsymbol S}}_* & \supseteq & {\boldsymbol S}_*, \label{ceq2} \\
|D_j|&= & o(\sqrt{n}). \label{ceq3}
\end{eqnarray}
Condition (\ref{ceq2}) is the so-called screening property, which is known to be satisfied by many
high-dimensional variable selection algorithms, such as SCAD \citep{FanL2001}, MCP \citep{Zhang2010}
and adaptive Lasso \citep{Zou2006}. Lasso also satisfies this condition if the design matrix satisfies the
compatibility condition \citep{vandeGeerB2009}, $|{\boldsymbol S}_*|=o(n/\log(p))$, and the beta-min condition holds.
See \cite{DezeureBMM2015} for more discussions on this issue.
Given the sure screening property of the above variable selection procedure, if the nodewise regression algorithm \citep{MeinshausenB2006} is applied to learn the GGM in step (b) of Algorithm \ref{subsetAlg1}, then
the condition (\ref{ceq1}) can be satisfied. In fact, as long as the GGM construction algorithm
is consistent, the condition (\ref{ceq1}) will be asymptotically satisfied.
Further, the condition (\ref{ceq3}) can be easily
satisfied by a slight twist of the sparsity conditions used in the variable selection and
GGM estimation algorithms.
As an example, we give in the Appendix a set of technical conditions (A0)--(A9)
under which the conditions (\ref{ceq1})--(\ref{ceq3}) can be asymptotically satisfied,
provided that the SCAD algorithm is used for variable selection and
the $\psi$-learning algorithm is used for GGM estimation.
Based on these technical conditions,
the validity of Algorithm \ref{subsetAlg1} is justified in Theorem \ref{Them1}, whose proof is straightforward based on Slutsky's theorem and some existing theoretical results. Refer to
the Supplementary Material for the detail.
If different algorithms are used in Algorithm \ref{subsetAlg1}, then the
conditions used in Theorem \ref{Them1} should be changed accordingly.
We note that many conditions we imposed in proving the theorem
are purely technical and only serve to provide theoretical understanding
of the proposed method. We have no intention of making the conditions the weakest possible.
\begin{theorem} \label{Them1} (Validity of Algorithm \ref{subsetAlg1})
If the conditions (A0)-(A9) hold,
the SCAD algorithm is used for variable selection in step (a),
and the $\psi$-learning algorithm is used for GGM construction in step (b),
then for each $j \in \{1,2,\ldots,p_n\}$, we have
$\sqrt{n} \frac{\hat{\beta}_j-\beta_j}{\sqrt{ \hat{\sigma}_n^2 \hat{\theta}_{jj}}} \sim N(0,1)$ as $n\to\infty$,
where $\hat{\beta}_j$ denotes the estimate of $\beta_j$ obtained from the subset regression,
$\hat{\sigma}_n^2=({\boldsymbol y}-{\boldsymbol x}_{D_j}\hat{{\boldsymbol \beta}}_{D_j})^T({\boldsymbol y}-{\boldsymbol x}_{D_j}\hat{{\boldsymbol \beta}}_{D_j})/(n-|D_j|-1)$,
$\hat{\theta}_{jj}$ is the $(j,j)$-th entry of
the matrix $\bigg[\frac{1}{n} \sum_{i=1}^{n}{\boldsymbol x}^{(i)}_{D_j}({\boldsymbol x}^{(i)}_{D_j})^{T} \bigg]^{-1}$, and
${\boldsymbol x}^{(i)}_{D_j}$ denotes the $i$-th row of ${\boldsymbol X}_{D_j}$.
\end{theorem}
\begin{remark} \label{remB} Following Remark \ref{remA}, we can conduct inference for $\beta_j$ based on the output of the subset regression (\ref{regeqD}) as in
conventional low-dimensional multiple linear regression.
\end{remark}
As implied by Theorem \ref{Them1}, variable selection for regression (\ref{modeleq1})
can be converted into a multiple hypothesis testing problem of simultaneously testing the hypotheses
\begin{equation} \label{mult-test}
H_{0,j}:\ \beta_j=0 \quad \mbox{versus} \quad H_{1,j}: \ \beta_j \ne 0, \quad j=1,2,\ldots,p,
\end{equation}
based on the $p$-values obtained from the subset regressions. The consistency of
this test-based method follows from Theorem 2 of \cite{LiangSQ2015} as discussed in
Section \ref{CausalGaussoian}.
Compared to the regularization methods, e.g. Lasso, MCP, SCAD and adaptive Lasso \cite{Zou2006}, a significant
advantage of this method is that it controls the false discovery rate
of selected features in an explicit way. In addition, since the screening property generally holds for these
regularization methods (see \cite{DezeureBMM2015} for discussions on this issue),
the new method might result in a lower false discovery rate, as shown
in Table \ref{FDRtab}.
On the other hand, since the $p$-value measures the contribution of
a feature to the regression
conditioned on all other $p-1$ features, MNR might not work well
when strong collinearity exists among certain true and false features. This case has been
excluded by Conditions A2 and A4, where the fixed upper bounds
on correlations and $\psi$-partial correlations place some
additional restrictions on the design.
The essential condition required by MNR is sparsity; that is, the true regression model is sparse and
the conditional independence structure among the features is sparse, such that (\ref{ceq1})-(\ref{ceq3}) hold when appropriate algorithms are applied.
Similar conditions have been assumed by some existing methods. For example, desparsified-Lasso requires the true model to be of size
$o(\sqrt{n}/\log p)$ (see e.g., Fact 2 of \cite{DezeureBMM2015}), which is
a little more restrictive than $o(n^{1/2})$ required by MNR;
desparsified-Lasso also requires the precision matrix $\Theta$ to be row-sparse at a level
of $o(n/\log p)$, which is comparable with the Markov blanket size $o(\sqrt{n})$ required by MNR
when $\log(p)=n^{\delta}$ for some $\delta \approx 1/2$.
The multi sample-splitting and the
ridge projection methods require the screening property (\ref{ceq2}) only,
which seems weaker than the conditions required by MNR and desparsified-Lasso.
However, as shown later by numerical examples,
they both essentially fail even for the simple linear regression case.
The use of conditional independence relations seems important for high-dimensional inference.
For MNR, since the essential conditions
are (\ref{ceq1})-(\ref{ceq3}),
a variable screening-based algorithm will also work under appropriate conditions. Based on this observation, we propose Algorithm S1, which together with some numerical results are presented in the Supplementary Material. Compared to Algorithm \ref{subsetAlg1}, Algorithm S1 can be substantially faster but the resulting confidence intervals can be a little wider; that is, Algorithm S1 is an accuracy/efficiency trade-off version of Algorithm \ref{subsetAlg1}.
Finally, we note that there are indeed scenarios in which the conditions (\ref{ceq1})-(\ref{ceq3})
are violated. For example, if all the features are equally correlated or there are a few features whose Markov blanket is of size $O(\sqrt{n})$ or larger, then the condition (\ref{ceq1}) will be violated, as the algorithm always restricts the Markov blanket to be of size $o(\sqrt{n})$ or smaller. Similarly, if the true model is of size
$O(\sqrt{n})$ or larger, then condition (\ref{ceq2}) will be violated. These conditions can also be violated by the algorithms used for Markov blanket estimation or variable selection, particularly when the sample size is small.
The screening property is itself a large sample property. Our numerical experience shows that the MNR method is pretty robust to violations of the conditions (\ref{ceq1})-(\ref{ceq3}). This will be demonstrated in Section \ref{equicSect}.
\subsection{Generalized Linear Models}
The MNR method can be easily extended to the generalized linear models (GLMs)
whose density function is given in the canonical form
\begin{equation} \label{GLMeq1}
f(y|{\boldsymbol x},{\boldsymbol \beta})=\exp(\vartheta y-b(\vartheta)+c(y)),
\end{equation}
where $b(\cdot)$ is continuously differentiable, and $\vartheta$ is the natural parameter relating $y$ to the features ${\boldsymbol x}$ via a linear function
$\vartheta=\beta_0+x_{1}\beta_{1}+\cdots+x_{p} \beta_{p}$.
This class of GLMs includes Poisson regression, logistic regression and linear regression (with known variance).
Note that for Cox proportional hazards models, the parameters can be estimated by maximizing the
partial likelihood function \citep{Cox1975}, based on which the Cox regression
can be converted to a Poisson regression. See, e.g., Chapter 13
of \cite{McCu:Neld:1989} for the details.
This conversion is important, as it enables the use of the MNR method for
high-dimensional survival data analysis.
To justify this extension, we establish the following lemma, where we assume that the features follow a multivariate normal distribution $N(0,\Sigma)$ and each has been standardized to have a mean 0 and variance 1. The proof follows the same logic as that of Lemma
\ref{lemma01}, but the precision matrix involved in Lemma \ref{lemma01} is replaced by the inverse of the Fisher information matrix of the GLM. Refer to the Supplementary Material for the detail.
\begin{lemma}\label{GLMlemma}
Let $\hat{\xi}_{j}\supseteq \xi_{j}$ denote any Markov neighborhood of feature $x_{j}$,
let $\hat{{\boldsymbol S}}_* \supseteq {\boldsymbol S}_*$ denote any reduced feature space, and let
$D_j=\{j\} \cup \hat{\xi}_{j} \cup \hat{{\boldsymbol S}}_*$.
Consider a subset GLM with the features ${\boldsymbol X}_{D_j}$, let $\hat{{\boldsymbol \beta}}_{D_j}$
denote the MLE of ${\boldsymbol \beta}_{D_j}$, and let $\hat{\beta}_{j}$ denote the
component of $\hat{{\boldsymbol \beta}}_{D_j}$ corresponding to feature $X_j$.
If $|D_j|=o(n^{1/2})$, then, as $n\to \infty$, the following results hold:
\begin{itemize}
\item[(i)] $\sqrt{n}(\hat{\beta}_{j}-\beta_{j}) \sim N(0,k_{jj})$,
where $k_{jj}$ denotes the $(j,j)$-th entry of the inverse of the Fisher information matrix
$K=I^{-1}=[E(b''({\boldsymbol x}^{T}{\boldsymbol \beta}){\boldsymbol x}{\boldsymbol x}^{T})]^{-1}$,
and ${\boldsymbol \beta}$ denotes the true regression coefficients.
\item[(ii)] $\sqrt{n}(\hat{\beta}_{j}-\beta_{j})/\sqrt{\hat{k}_{jj}} \sim N(0,1)$, where
$\hat{k}_{jj}$ denotes the $(j,j)$-th entry of the inverse of the observed information matrix
$J_n(\hat{{\boldsymbol \beta}}_{D_j})=-\sum_{i=1}^n H_{\hat{{\boldsymbol \beta}}_{D_j}}(\log f(y_i|{\boldsymbol \beta}_{D_j}, {\boldsymbol x}_{D_j}))/n$
and $H_{\hat{{\boldsymbol \beta}}_{D_j}}(\cdot)$
denotes the Hessian matrix evaluated at the MLE $\hat{{\boldsymbol \beta}}_{D_j}$.
\end{itemize}
\end{lemma}
Lemma \ref{GLMlemma} implies that the estimate, $p$-value and confidence interval of $\beta_j$ can be calculated from
the subset GLM as in conventional low-dimensional GLMs.
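In code, the only change relative to the linear case is that the OLS fit in step (c) is replaced by the corresponding GLM fit on the Markov neighborhood. The Python sketch below illustrates this for logistic regression, assuming that the index set $D_j$ has already been constructed as before; the Wald-type output of the low-dimensional fit is used directly.
\begin{verbatim}
import statsmodels.api as sm

def mnr_glm_inference(X, y, j, D_j, alpha=0.05):
    # Subset logistic regression on the Markov neighborhood D_j (sketch).
    # Wald-type inference for beta_j based on the observed information,
    # as in a conventional low-dimensional GLM.
    fit = sm.Logit(y, sm.add_constant(X[:, list(D_j)])).fit(disp=0)
    pos = 1 + list(D_j).index(j)
    ci = fit.conf_int(alpha=alpha)[pos]
    return fit.params[pos], fit.pvalues[pos], tuple(ci)
\end{verbatim}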
For GLMs, variable selection can be done using the SCAD, MCP or Lasso algorithm, and
variable screening can be done using the sure independence screening algorithm
developed in \cite{FanS2010}. By Theorem 5 of \cite{FanS2010}, we can bound the size
of $\hat{{\boldsymbol S}}_*$ by $O(n^{\frac{1}{2}-\frac{\varepsilon}{2}})$ for a small constant $\varepsilon>0$
with a slight modification of the technical conditions therein.
Therefore, the theorems parallel to Theorem \ref{Them1} and Theorem S1 (in the Supplementary Material) can be proved for GLMs. For simplicity, they are omitted in the paper.
\subsection{Joint Inference} \label{Sect2.3}
The MNR method described above deals with only one coefficient
$\beta_j$ in each subset regression.
In fact, it can be easily extended to conduct joint inference for
several coefficients.
Let ${\boldsymbol A}\subset {\boldsymbol V}$ denote a set of features for which the joint inference
for the corresponding coefficients is desired.
Define $\xi_{{\boldsymbol A}}=\cup_{j \in {\boldsymbol A}} \xi_j$ as the union
of the Markov blankets of the features in ${\boldsymbol A}$.
Let ${\boldsymbol M}={\boldsymbol A} \cup \hat{\xi}_{{\boldsymbol A}} \cup \hat{{\boldsymbol S}}_*$.
Then a subset regression can be conducted with the features included in ${\boldsymbol M}$.
For high-dimensional linear regression, if $|{\boldsymbol M}|=O(n^{1/2})$, then, similar to Theorem 1, we can show
$\sqrt{n} (\hat{{\boldsymbol \beta}}_A- {\boldsymbol \beta}_A) \sim N(0, \sigma^2 \Theta_{AA})$,
where $\Theta_{AA}$ denotes the submatrix of the precision matrix $\Theta$ formed
by the rows and columns indexed by ${\boldsymbol A}$.
Similarly, for high-dimensional GLMs, we can show
$\sqrt{n} (\hat{{\boldsymbol \beta}}_A- {\boldsymbol \beta}_A) \sim N(0, K_{AA})$,
where $K_{AA}$ denotes the submatrix of $K=[E(b''({\boldsymbol x}^T {\boldsymbol \beta}) {\boldsymbol x} {\boldsymbol x}^T)]^{-1}$
formed by the rows and columns indexed by ${\boldsymbol A}$.
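A hedged sketch of this joint-inference step is given below: the subset regression is fitted on ${\boldsymbol M}$, and Bonferroni-adjusted confidence intervals are returned for the coordinates in ${\boldsymbol A}$, mirroring the Bonferroni construction used later in the simulation study; the function and variable names are illustrative.
\begin{verbatim}
import statsmodels.api as sm

def mnr_joint_ci(X, y, A, M, level=0.95):
    # Bonferroni joint confidence intervals for {beta_j : j in A} (sketch).
    # M = A union (Markov blankets of the features in A) union S_hat.
    M = list(M)
    fit = sm.OLS(y, sm.add_constant(X[:, M])).fit()
    alpha_adj = (1.0 - level) / len(A)       # Bonferroni adjustment
    ci = fit.conf_int(alpha=alpha_adj)
    return {j: tuple(ci[1 + M.index(j)]) for j in A}
\end{verbatim}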
\section{Simulation Studies}
\subsection{A Conceptual Experiment} \label{Sect3.1}
We first test the concept of MNR using a large-$n$-small-$p$ example; that is, we check whether
the confidence intervals generated by MNR
coincide with those generated by the OLS method as the sample size
$n$ becomes large.
We generated a dataset from the model (\ref{modeleq1}) with $n=2000$ and
$p$=50, where $\sigma^2$ was set to 1,
the covariates ${\boldsymbol X}$ were generated from a zero-mean multivariate Gaussian distribution with
a Toeplitz covariance matrix given by
$\Sigma_{i,j}=0.9^{|i-j|}$ for $i,j=1,\ldots,p$,
and the true regression coefficients $(\beta_0,\beta_1,\beta_2,\ldots,\beta_5)=(1,0.2,0.4,-0.3,-0.5,1.0)$ and
$\beta_6=\cdots=\beta_p=0$.
Figure \ref{intervalplot} compares the 95\% confidence intervals of $\beta_1,\ldots,\beta_p$
produced by MNR and OLS with the simulated dataset.
For MNR, the nodewise regression algorithm
(with SIS-Lasso performed for each node)
was employed for Markov blanket estimation, and SIS-SCAD was employed for variable selection.
The SIS-Lasso refers to a variable selection procedure implemented in the package {\it SIS}
\citep{SISpackage},
where the sure independence screening (SIS) algorithm \citep{FanLv2008} was first applied for variable screening
and then the Lasso algorithm was applied to select variables from those survived from the screening procedure. The SIS-SCAD and SIS-MCP
can be interpreted in the same way.
As expected from Theorem \ref{Them1}, OLS and MNR produced almost identical
confidence intervals for each regression coefficient.
In this simulation, we set $n$ excessively large, which ensures
the convergence of the sample covariance matrix to
the true covariance matrix $\Theta^{-1}$.
In fact, MNR can work well with a smaller value of $n$ as illustrated
by the following small-$n$-large-$p$ examples.
\begin{figure}[htbp]
\centering
\begin{center}
\begin{tabular}{c}
\epsfig{figure=n2000p50c.eps,height=7.0in,width=3.0in,angle=270}
\end{tabular}
\end{center}
\caption{ The 95\% confidence intervals of $\beta_1,\ldots,\beta_p$
produced by the MNR (solid line) and OLS (dashed line) methods for a dataset with $n=2000$ and $p=50$.}
\label{intervalplot}
\end{figure}
\subsection{An Illustrative Example}
To illustrate the performance of MNR, we generated 100 independent datasets from
the regression (\ref{modeleq1}), where $n=200$, $p=500$, $\sigma^2=1$,
the features were generated from a zero-mean multivariate Gaussian distribution with
a Toeplitz covariance matrix given by
$\Sigma_{i,j}=0.9^{|i-j|}$ for $i,j=1,\ldots,p$, and
the true regression coefficients were given by
$(\beta_0,\beta_1,\beta_2,\ldots,\beta_5)=(1,2,4,-3,-5,10)$ and
$\beta_6=\cdots=\beta_p=0$. We note that the same covariance matrix has
been used in \cite{vandeGeer2014} to illustrate the performance of desparsified-Lasso.
For convenience, we call this model a Toeplitz-covariance linear regression model.
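For reproducibility, one dataset from this model can be simulated along the following lines (a Python sketch; the seed and helper name are arbitrary):
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz

def simulate_toeplitz_regression(n=200, p=500, rho=0.9, sigma=1.0, seed=0):
    # One dataset from the Toeplitz-covariance linear regression model.
    rng = np.random.default_rng(seed)
    Sigma = toeplitz(rho ** np.arange(p))          # Sigma_ij = rho^{|i-j|}
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    beta = np.zeros(p)
    beta[:5] = [2, 4, -3, -5, 10]                  # beta_1, ..., beta_5
    y = 1.0 + X @ beta + rng.normal(0.0, sigma, n) # intercept beta_0 = 1
    return X, y, beta
\end{verbatim}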
\subsubsection{Illustration of MNR}
Algorithm \ref{subsetAlg1} was run for this example as in Section \ref{Sect3.1}, i.e.,
applying SIS-SCAD for variable selection and the nodewise regression algorithm
for Markov blanket estimation.
Table \ref{reg0tab} summarizes the coverage rates and widths
of the 95\% confidence intervals produced by MNR for each regression coefficient.
For the non-zero regression coefficients (denoted by ``signal''), the
mean coverage rate and mean width of the confidence intervals are defined, respectively, by
\begin{equation} \label{measureeq}
\bar{p}_{\rm cover} = \sum_{j=1}^{100}\sum_{i \in {\boldsymbol S}_*} \hat{p}_i^{(j)}/(100\cdot |{\boldsymbol S}_*|),
\quad
\bar{w}_{\rm CI}=\sum_{j=1}^{100} \sum_{i\in {\boldsymbol S}_*} \hat{w}_i^{(j)}/(100\cdot |{\boldsymbol S}_*|),
\end{equation}
and their respective standard deviations are defined by
\begin{equation} \label{measureeq2}
\begin{split}
\sigma(\bar{p}_{\rm cover}) & =\sqrt{{\mbox{Var}}\{\hat{p}_i^{(j)}: i \in {\boldsymbol S}_*, j=1,2,\ldots,100\}/100}, \\
\sigma(\bar{w}_{\rm CI}) & =\sqrt{{\mbox{Var}}\{\hat{w}_i^{(j)}: i\in{\boldsymbol S}_*, j=1,2,\ldots,100\} /100}, \\
\end{split}
\end{equation}
where $\hat{w}_i^{(j)}$ denotes the width of the 95\% confidence interval
of $\beta_i$ constructed with the $j$th dataset, $\hat{p}_i^{(j)}\in \{0,1\}$ indicates the coverage of $\beta_i$ by the confidence interval,
and ${\mbox{Var}}\{\cdot\}$ denotes the variance. By dividing by 100 in (\ref{measureeq2}),
the standard deviation represents the variability of the mean value (averaged over 100 independent
datasets) for a single regression coefficient.
For the zero regression coefficients (denoted by ``noise''),
the mean coverage rate, the mean width, and their standard deviations can be defined similarly.
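Given the per-dataset coverage indicators and interval widths, the summaries in (\ref{measureeq}) and (\ref{measureeq2}) reduce to simple array operations, as the hedged Python helper below illustrates (variable names are illustrative):
\begin{verbatim}
import numpy as np

def summarize_cis(cover, width, idx):
    # cover, width: arrays of shape (n_datasets, p); cover is 0/1.
    # idx: indices of the coefficients summarized ("signal" or "noise").
    c, w = cover[:, idx], width[:, idx]
    n_rep = cover.shape[0]
    return {"coverage": (c.mean(), np.sqrt(c.var() / n_rep)),
            "width": (w.mean(), np.sqrt(w.var() / n_rep))}
\end{verbatim}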
For comparison, we applied the
desparsified Lasso, ridge projection and multi-split methods to this example.
These methods have been implemented in the $R$ package {\it hdi} \citep{Meierhdi2016}.
The comparison indicates that MNR significantly outperforms the existing methods:
for both the non-zero and zero regression coefficients,
the mean coverage rates produced by MNR are much closer to their nominal level.
The reason why desparsified-Lasso suffers from coverage deficiency for non-zero
regression coefficients will be explained in Section \ref{LRSect}.
\begin{table}[htbp]
\caption{Coverage rates and widths of the 95\% confidence intervals
produced by MNR with Algorithm \ref{subsetAlg1} for
the Toeplitz-covariance linear regression model, where ``signal'' and ``noise'' denote non-zero and zero regression coefficients, respectively. For ``signal'', the reported mean value and standard deviation (in the parentheses) are
defined in (\ref{measureeq}) and (\ref{measureeq2}), respectively. For ``noise'', they are defined similarly. }
\vspace{-0.2in}
\label{reg0tab}
\begin{center}
\begin{tabular}{cccccc} \toprule
Measure & & Desparsified-Lasso & Ridge & Multi-Split & MNR \\ \midrule
& signal & 0.384(0.049) & 0.576(0.049) & 0.202(0.040) & 0.956(0.021) \\
\raisebox{1.5ex}{Coverage} & noise & 0.965(0.018) & 0.990(0.010) & 1.000(6.4e-4)
& 0.950(0.022) \\ \midrule
& signal & 0.673(0.005) & 1.086(0.010) & 2.711(0.097) & 0.822(0.011) \\
\raisebox{1.5ex}{Width} & noise & 0.691(0.005) & 1.143(0.008) & 2.790(0.103) & 0.869(0.007) \\ \bottomrule
\end{tabular}
\end{center}
\end{table}
As discussed previously, MNR converts the problem of variable selection
into a multiple hypothesis testing problem. To illustrate the potential of MNR
in variable selection, we converted the $p$-values produced by the
subset regressions to z-scores using the inverse
probability integral transformation
\begin{equation} \label{zscoreq}
Z_i^{(j)}=\Phi^{-1}(1-q_i^{(j)}), \quad \quad i=1,2,\ldots,p, \quad j=1,2,\ldots,100,
\end{equation}
where $q_i^{(j)}$ denotes the $p$-value calculated via
subset regression for feature $i$ with dataset $j$, and $\Phi(\cdot)$ denotes the CDF
of the standard Gaussian distribution.
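In code, the transformation (\ref{zscoreq}) is essentially a one-liner (small $p$-values are clipped to avoid infinite z-scores):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def pvalues_to_zscores(q):
    # z = Phi^{-1}(1 - q), with a guard against q = 0
    q = np.clip(np.asarray(q, dtype=float), 1e-300, 1.0)
    return norm.ppf(1.0 - q)
\end{verbatim}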
Figure S1 (in the Supplementary Material) shows the histogram of the z-scores, which indicates that the true and false features can be
well separated by the z-scores. The empirical Bayesian method developed by \cite{LiangZ2008} was
applied to each of the 100 datasets for simultaneously
testing the hypotheses (\ref{mult-test}). At a FDR level of $q=0.0001$, which
is measured by the q-value of \cite{Storey2002}, the method led to exact
identifications of the true and false features for all 100 datasets, i.e., both the
false selection rate (FSR) and negative selection rate (FSR) are 0.
More results were shown in Table \ref{FDRtab}.
Here the FSR and NSR are defined by
\[
FSR=\frac{\sum_{j=1}^{100}|\hat{{\boldsymbol S}}_{j}\setminus {\boldsymbol S}_*|}{\sum_{j=1}^{100}|\hat{{\boldsymbol S}}_{j}|}, \quad \quad
NSR=\frac{\sum_{j=1}^{100}|{\boldsymbol S}_*\backslash\hat{{\boldsymbol S}}_{j}|}{\sum_{j=1}^{100}|{\boldsymbol S}_*|},
\]
where ${\boldsymbol S}_*$ is the set of true features, and $\hat{{\boldsymbol S}}_{j}$ is the set
of selected features for dataset $j$.
For comparison, SIS-SCAD, SIS-MCP and SIS-Lasso were applied to these datasets for
performing variable selection under their default settings in the package {\it SIS}.
Table \ref{FDRtab} shows that MNR can significantly outperform the existing methods
in high-dimensional variable selection. As mentioned previously, compared to the existing
methods, a significant advantage of the MNR-based variable selection
method is that it controls the FDR of selected features.
\begin{table}[htbp]
\caption{Variable selection for the Toeplitz-covariance linear regression
with the MNR, SIS-SCAD, SIS-MCP and SIS-Lasso methods.}
\vspace{-0.2in}
\label{FDRtab}
\begin{center}
\begin{tabular}{ccccccc} \toprule
& \multicolumn{3}{c}{MNR} & & & \\ \cline{2-4}
\raisebox{1.5ex}{Measure} & $q=0.0001$ & $q=0.001$ & $q=0.01$ &
\raisebox{1.5ex}{SIS-SCAD} & \raisebox{1.5ex}{SIS-MCP} & \raisebox{1.5ex}{SIS-Lasso} \\ \midrule
FSR & 0 & 0.004 & 0.022 & 0.127 & 0.175 & 0.819 \\
NSR & 0 & 0 & 0 & 0 & 0 & 0 \\ \bottomrule
\end{tabular}
\end{center}
\vspace{-0.25in}
\end{table}
\subsubsection{Illustration of Joint Inference with MNR}
To illustrate the use of MNR for joint inference, we
constructed Bonferroni joint confidence intervals based on
the subset regression for each of the following sets of parameters:
$(\beta_1,\beta_2)$, $(\beta_3,\beta_4,\beta_5)$, $(\beta_1,\beta_6)$,
$(\beta_7, \beta_{10})$, and $(\beta_{20},\beta_{200},\beta_{400})$,
which have covered the cases of combinations of nonzero coefficients,
combinations of zero and nonzero coefficients, and combinations of zero coefficients.
For each set of parameters, as described in Section
\ref{Sect2.3}, the subset regression was constructed by taking the union of the Markov neighborhoods of
the corresponding features, and then the 95\% joint confidence intervals
for the set of parameters were constructed
using the standard Bonferroni method. The Markov neighborhood of each feature was constructed
as in Section \ref{Sect3.1} using nodewise regression for GGM estimation and
SIS-SCAD for variable selection. Table \ref{jointtab} summarizes the coverage rates
of the joint confidence intervals, and it
indicates that the proposed method works reasonably well for this example.
\begin{table}[htbp]
\caption{Coverage rates of
95\% joint confidence intervals produced by MNR
for the set of parameters: $(\beta_1,\beta_2)$, $(\beta_3,\beta_4,\beta_5)$, $(\beta_1,\beta_6)$,
$(\beta_7, \beta_{10})$, and $(\beta_{20},\beta_{200},\beta_{400})$,
where the number in the parentheses represents the standard deviation
of the joint coverage rate averaged over 100 independent datasets. }
\vspace{-0.2in}
\label{jointtab}
\begin{center}
\begin{tabular}{cccccc} \toprule
Parameters & $(\beta_1,\beta_2)$ & $(\beta_3,\beta_4,\beta_5)$ & $(\beta_1,\beta_6)$
& $(\beta_7,\beta_{10})$ & $(\beta_{20},\beta_{200}, \beta_{400})$ \\ \midrule
Joint coverage rate & 0.97(0.017) & 0.95(0.022) & 0.93(0.026) & 0.97(0.017) & 0.93(0.026) \\ \bottomrule
\end{tabular}
\end{center}
\end{table}
\subsection{Simulation Studies with More Regression Models}
\subsubsection{Linear Regression} \label{LRSect}
We simulated 100 independent datasets from
the linear regression (\ref{modeleq1}) where $n=200$, $p=500$, $\sigma^2=1$, the features were generated from a zero-mean Gaussian distribution with the precision matrix
$\Sigma^{-1}=\Theta=(\theta_{ij})$ given by
\begin{equation}\label{plugin}
\theta_{ij}=\left\{\begin{array}{ll}
0.5,&\textrm{if $\left| j-i \right|=1, i=2,...,(p-1),$}\\
0.25,&\textrm{if $\left| j-i \right|=2, i=3,...,(p-2),$}\\
1,&\textrm{if $i=j, i=1,...,p,$}\\
0,&\textrm{otherwise,}
\end{array}\right.
\end{equation}
and the regression coefficients were given by
$(\beta_0,\beta_1,\beta_2,\ldots,\beta_5)=(1,2,2.5,3,3.5,4)$ and
$\beta_6=\cdots=\beta_p=0$.
Since the precision matrix has an autoregressive (AR) structure,
for convenience, we call this model an AR(2)-precision linear regression model.
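The precision matrix (\ref{plugin}) and the corresponding feature matrix can be generated as in the following sketch (illustrative Python code; any sampler for $N(0,\Theta^{-1})$ would do):
\begin{verbatim}
import numpy as np

def ar2_precision(p=500):
    # Banded AR(2)-type precision matrix used in the simulations.
    Theta = np.eye(p)
    for i in range(p - 1):
        Theta[i, i + 1] = Theta[i + 1, i] = 0.5
    for i in range(p - 2):
        Theta[i, i + 2] = Theta[i + 2, i] = 0.25
    return Theta

def simulate_ar2_features(n=200, p=500, seed=0):
    rng = np.random.default_rng(seed)
    Sigma = np.linalg.inv(ar2_precision(p))
    return rng.multivariate_normal(np.zeros(p), Sigma, size=n)
\end{verbatim}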
Algorithm \ref{subsetAlg1} was first applied to this example with the numerical
results summarized in
Table \ref{regtab}, where the $\psi$-learning algorithm was employed for Markov blanket estimation, and SIS-MCP was employed for variable selection. The $\psi$-learning algorithm has been implemented in the R-package {\it equSA} \citep{equSApackage}.
It provides an equivalent measure of the partial correlation coefficient,
the so-called $\psi$-partial correlation coefficient, for estimating Gaussian graphical models under
the small-$n$-large-$p$ scenario.
The algorithm consists of two screening stages.
The first stage is correlation screening, which,
via a multiple hypothesis test for correlation coefficients, determines for each feature
a conditioning set used for calculating the $\psi$-partial correlation coefficient.
The second stage is $\psi$-partial correlation screening, which, via a
multiple hypothesis test for $\psi$-partial correlation coefficients, determines
the Gaussian graphical model. Corresponding to the two stages,
the algorithm consists of two tuning parameters, $\alpha_1$ and $\alpha_2$, which specify
the significance levels of the two multiple hypothesis tests, respectively.
In all applications of the $\psi$-learning algorithm in this paper, we set
$\alpha_1=0.1$ and $\alpha_2=0.05$ as suggested by \cite{LiangSQ2015}.
In general, $\alpha_1$ should be slightly large to avoid potential loss of
important features in the conditioning set of each feature.
The nodewise regression algorithm has also been applied to this example for Markov blanket estimation, and the results are similar.
\begin{table}[!t]
\caption{Coverage rates and widths of the 95\% confidence intervals produced by MNR, desparsified-Lasso, and ridge projection for the AR(2)-precision linear, logistic and Cox regression. Refer to the caption of Table \ref{reg0tab} for the notation. }
\vspace{-0.2in}
\label{regtab}
\begin{center}
\begin{tabular}{cccccc} \toprule
Response & Measure & & Desparsified-Lasso & Ridge & MNR \\ \midrule
& & signal & 0.2300(0.0421) & 0.3340(0.0447) & {\bf 0.9500(0.0218)} \\
& \raisebox{1.5ex}{Coverage} & noise & 0.9640(0.0186) & 0.9922(0.0088) & {\bf 0.9503(0.0217)} \\ \cline{2-6}
\raisebox{1.5ex}{Gaussian} & & signal & 0.2810(0.0027) & 0.4481(0.0043) & 0.2806(0.0022) \\
& \raisebox{1.5ex}{Width} & noise & 0.2723(0.0024) & 0.4335(0.0036) & 0.2814(0.0024) \\ \midrule
& & signal & 0.004(0.0063) & 0(0) & {\bf 0.9320(0.0252)} \\
& \raisebox{1.5ex}{Coverage} & noise & 0.9953(0.0068) & 1.0(4.5e-4) & {\bf 0.9373(0.0242)} \\ \cline{2-6}
\raisebox{1.5ex}{Binary} & & signal & 0.6424(0.0101) & 1.0775(0.0110) & 1.9473(0.0529) \\
& \raisebox{1.5ex}{Width} & noise & 0.5782(0.0081) & 1.0100(0.0095) & 0.9799(0.0132) \\ \midrule
& & signal & --- & --- & {\bf 0.9140(0.0281)} \\
& \raisebox{1.5ex}{Coverage} & noise & --- & --- & {\bf 0.9354(0.0246)} \\ \cline{2-6}
\raisebox{1.5ex}{Survival} & & signal & --- & --- & 0.3356(0.0018) \\
& \raisebox{1.5ex}{Width} & noise & --- & --- & 0.2683(0.0017) \\ \bottomrule
\end{tabular}
\end{center}
\end{table}
For comparison, the desparsified Lasso and ridge projection methods were applied to this example.
Both methods have been implemented in the $R$ package {\it hdi} \citep{Meierhdi2016}.
The multi-split method is also available in {\it hdi}, but it often suffered
from a convergence issue in applications to this example and thus is not included for comparison.
Table \ref{regtab} shows that MNR significantly outperforms the existing methods:
The coverage rates produced by MNR are almost identical to their
nominal levels for both zero and non-zero regression coefficients; while
the coverage rates produced by the other methods are far from their
nominal levels, especially for the nonzero regression coefficients.
For the non-zero regression coefficients, the
confidence intervals produced by desparsified-Lasso have about the same widths
as those by MNR, but the former have much lower coverage rates.
The coverage deficiency of desparsified-Lasso
is due to at least two reasons: (i) the bias-corrected estimator $\hat{{\boldsymbol \beta}}_{bc}$
is still biased; and (ii) the required sparsity condition is violated.
The bias of $\hat{{\boldsymbol \beta}}_{bc}$ can be easily seen from the derivation procedure of
$\hat{{\boldsymbol \beta}}_{bc}$, which is due to \cite{ZhangZhang2014}.
Let $Z_j$ denote the residual of the regression $X_j$ versus all other features
${\boldsymbol X}[-j]$, and let $P_{jk}=X_k^T Z_j/X_j^T Z_j$. Then
the following identity holds
\begin{equation} \label{identeq}
\frac{Y^T Z_j}{X_j^T Z_j}=\beta_j+\sum_{k\ne j} P_{jk} \beta_k+\frac{\epsilon^T Z_j}{X_j^T Z_j},
\end{equation}
where $Y$ and $\epsilon$ are as defined in (\ref{modeleq1}). Plugging the Lasso estimator $\hat{{\boldsymbol \beta}}_{Lasso}$ (of the
regression $Y$ versus ${\boldsymbol X}$) into (\ref{identeq}) leads to the bias-corrected estimator
\begin{equation} \label{bceq1}
\hat{\beta}_{bc,j}=\frac{Y^T Z_j}{X_j^T Z_j} - \sum_{k \ne j} P_{jk} \hat{\beta}_{Lasso,k}
= \hat{\beta}_{Lasso,j}+Z_j^T (Y-{\boldsymbol X} \hat{{\boldsymbol \beta}}_{Lasso})/Z_j^T X_j, \quad j=1,2,\ldots,p,
\end{equation}
which is essentially the same as the estimator given in (\ref{dlassoeq0}). Here
$\hat{\beta}_{bc,j}$ and $\hat{\beta}_{Lasso,j}$ denote the $j$-th component of
$\hat{{\boldsymbol \beta}}_{bc}$ and $\hat{{\boldsymbol \beta}}_{Lasso}$, respectively.
The estimator $\hat{{\boldsymbol \beta}}_{bc}$ corrects much of the bias of $\hat{{\boldsymbol \beta}}_{Lasso}$. However,
as implied by (\ref{identeq}), $\hat{{\boldsymbol \beta}}_{bc}$
is still generally biased because the Lasso estimator
$\hat{{\boldsymbol \beta}}_{Lasso}$ is generally biased.
Such a biased estimator shifts the center of the confidence interval and
thus leads to the coverage deficiency problem.
For the error term $\Delta_n$ defined in (\ref{dlassoeq}),
\cite{DezeureBMM2015} proved that it is negligible
if the sparsity condition $|{\boldsymbol S}_*|=o(\sqrt{n}/\log(p))$ holds,
the precision matrix is row-sparse at a level of $o(n/\log(p))$,
and some other regularity conditions on the design matrix hold. Among these conditions,
the model sparsity condition $|{\boldsymbol S}_*|=o(\sqrt{n}/\log(p))$ is a little restrictive and
can be easily violated.
For example, for a problem with $|{\boldsymbol S}_*|=5$ and $p=500$,
the sample size $n$ should be at least a few thousand to
satisfy the condition $|{\boldsymbol S}_*| \ll \sqrt{n}/\log(p)$.
As a result, the error term $\Delta_n$ might not be negligible, which can also
cause the coverage deficiency issue.
Since $\|\hat{{\boldsymbol \beta}}_{Lasso}-{\boldsymbol \beta}\|_1=O_p(|{\boldsymbol S}_*| \sqrt{\log(p)/n})$ \citep{DezeureBMM2015}, violation of the sparsity condition also worsens the bias of $\hat{{\boldsymbol \beta}}_{bc}$.
We note that the model sparsity condition $|{\boldsymbol S}_*|=o(\sqrt{n})$ required by MNR
is much weaker than $|{\boldsymbol S}_*|=o(\sqrt{n}/\log(p))$ under the small-$n$-large-$p$ scenario.
In our numerical experience, the coverage deficiency of desparsified-Lasso is mainly
due to the bias of $\hat{{\boldsymbol \beta}}_{bc}$. We illustrate this issue using two
simulation studies. The first one is given as follows and the other one is given
in Section \ref{equicSect}. In Table \ref{biastable}, we reported
the values of $\hat{{\boldsymbol \beta}}_{bc, j}$'s, $j=1,2,\ldots,8$, for the
AR(2)-precision linear regression. It is easy to see that
the desparsified-Lasso estimate is severely biased for the nonzero coefficients
$\beta_1, \ldots, \beta_5$, which significantly shifts the centers of
the resulting confidence intervals and thus leads to the coverage deficiency problem.
Note that $\hat{\beta}_{bc,6}$ is also biased
due to the strong correlation between $X_6$ and $X_5$.
For comparison, we included in Table \ref{biastable} the MNR estimates of
these coefficients, which are unbiased for both zero and nonzero coefficients.
\begin{table}[htbp]
\caption{Regression coefficient estimates (averaged over 100 independent datasets)
produced by MNR and desparsified-Lasso for the AR(2)-precision linear regression
(with $|{\boldsymbol S}_*|=5$, $p=500$ and $n=200$), where the numbers in the parentheses represent
the standard deviations of the estimates. }
\label{biastable}
\vspace{-0.2in}
\begin{center}
\begin{tabular}{cccccccccc} \toprule
Method & Measure & $\beta_1$ & $\beta_2$ & $\beta_3$ & $\beta_4$ & $\beta_5$ & $\beta_6$ & $\beta_{7}$ & $\beta_{8}$ \\ \midrule
--- & true & 2 & 2.5 & 3 & 3.5 & 4 & 0 & 0 & 0\\ \midrule
& $\hat{{\boldsymbol \beta}}_{bc}$ & 1.841 & 2.274 & 2.698 & 3.270 & 3.849 & -0.051 & -0.007 & 0.016 \\
\raisebox{1.5ex}{desparsified} & SD & (0.008) & (0.009) & (0.009) & (0.007) & (0.007) & (0.006) & (0.007) & (0.007)
\\ \midrule
& $\hat{{\boldsymbol \beta}}_{MNR}$ & 1.997 & 2.503 & 2.994 & 3.498 & 4.001 & 0.014 & 0.004 & -0.002 \\
\raisebox{1.5ex}{MNR} & SD & (0.006) & (0.008) & (0.008) & (0.007) & (0.006) & (0.007) & (0.008) & (0.008) \\ \bottomrule
\end{tabular}
\end{center}
\end{table}
\subsubsection{Logistic Regression}
We simulated 100 datasets for a logistic regression.
For each dataset, we set $n=300$, $p=500$, $(\beta_0,\beta_1,\ldots,\beta_5)=(1,2,2.5,3,3.5,4)$, $\beta_6=\cdots=\beta_p=0$, and generated the covariates from a
zero-mean multivariate Gaussian distribution with the precision matrix given by (\ref{plugin}).
For convenience, we call this model
an AR(2)-precision logistic regression model. Each dataset consisted of 150 case samples and 150 control samples. To alleviate the convergence issues suffered by the GLM estimation procedure
{\it glm} in R, we set $n$ slightly large for this example.
Algorithm \ref{subsetAlg1} was run for the datasets, where the SIS-MCP algorithm was employed
for variable selection and the $\psi$-learning algorithm was employed for
Markov blanket estimation. The nodewise regression algorithm was also applied for
Markov blanket estimation, and the results were similar.
The numerical results were summarized in Table \ref{regtab}, which
indicates that MNR significantly outperforms the other methods.
Desparsified-Lasso and ridge projection essentially fail for this example.
\subsubsection{Cox Regression}
For Cox regression, which is also known as Cox proportional-hazards model,
we let $\lambda(t)$ denote the hazard rate at time $t$ and let
$\lambda_0(t)$ denote the baseline hazard rate. The Cox model can then be expressed as
\begin{equation} \label{Coxeq}
\lambda(t)=\lambda_0(t) \exp(\beta_1X_1+\beta_2X_2+\ldots+\beta_pX_p).
\end{equation}
In the simulation, we set $(\beta_1,\ldots,\beta_5)=(1,1,1,1,1)$, $\beta_6=\cdots=\beta_p=0$,
the baseline hazard rate $\lambda_0(t)=\lambda_0=0.1$, and the censoring hazard rate $\lambda_c=1$;
generated the event time from the Weibull distribution with the shape parameter=1 and the
scale parameter=$\lambda_0\exp(-\sum_{i=1}^p X_i \beta_i)$; generated the censoring time
from the Weibull distribution with the shape parameter=1 and the scale parameter=$\lambda_c$;
set the observed survival time as the minimum of the event time and the censoring
time for each subject; and generated the features $X_1,\ldots, X_p$ from a zero-mean multivariate
normal distribution with the precision matrix given by (\ref{plugin}). For convenience,
we call this model an AR(2)-precision Cox regression model. We simulated 100 datasets
from this model with $n=300$ and $p=500$.
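A sketch of this data-generating mechanism is given below; it follows the description above literally (shape-one Weibull, i.e., exponential, event and censoring times with the stated scale parameters), and returns the observed time together with the event indicator. The function and argument names are illustrative.
\begin{verbatim}
import numpy as np

def simulate_cox_data(X, beta, lam0=0.1, lam_c=1.0, seed=0):
    # Event and censoring times for the AR(2)-precision Cox model (sketch).
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    event_time = rng.weibull(1.0, size=n) * (lam0 * np.exp(-X @ beta))
    censor_time = rng.weibull(1.0, size=n) * lam_c
    time = np.minimum(event_time, censor_time)
    status = (event_time <= censor_time).astype(int)  # 1 = event observed
    return time, status
\end{verbatim}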
Algorithm \ref{subsetAlg1} was run for the datasets, where the SIS-Lasso algorithm was used for
variable selection and the $\psi$-learning algorithm was used for
Markov blanket estimation. The numerical results were summarized in Table \ref{regtab}.
The nodewise regression algorithm was also applied for
Markov blanket estimation, and the results were similar.
In the MNR results, we can observe some slight bias, which mainly comes
from the model selection error and the estimation error of the Markov blankets.
Our numerical experience shows that the nominal level can
be reached by MNR with the correct model and correct Markov neighborhoods
or when $n$ becomes large.
In addition to coverage rates, Table \ref{regtab} reports
mean widths of the confidence intervals resulting from different methods.
For linear regression, the confidence intervals by MNR
are narrower than those by ridge projection, and of about the same width as
those by desparsified-Lasso. However, as analyzed previously,
desparsified-Lasso often suffers from the coverage deficiency issue.
For logistic and Cox regression, the comparison is not meaningful, as the other methods
either fail or are not available.
To explore the potential of MNR in variable selection,
we have calculated z-scores in (\ref{zscoreq})
based on the $p$-values generated by MNR for the datasets simulated above.
Figures S2-S4 show the histograms of the z-scores, which indicate that the true and false features can always be well
separated by z-scores for all these datasets. This is an attractive feature of MNR and its use
for feature selection will be further explored in Section \ref{sect4}.
\subsection{Robustness of Markov Neighborhood Regression} \label{equicSect}
This section studies the robustness of MNR to violations of
the conditions (\ref{ceq1})-(\ref{ceq3}).
This issue has been partially studied in Section 2.2 of the Supplementary Material,
where the condition (\ref{ceq2}) is violated when the size of $\hat{{\boldsymbol S}}_*$ is restricted to 3.
Recall that for the Toeplitz-covariance regression, we have
$|{\boldsymbol S}_*|=5$, $|\xi_j|=2$ for $j=2,3,\ldots, p-1$, and $|\xi_j|=1$ for $j=1$ and $p$.
Therefore, setting $|\hat{{\boldsymbol S}}_*| = 3$ leads to some true features missed in
each subset regression. As shown in Table S1 (in the Supplementary Material), this results in wider confidence intervals for both zero and nonzero
regression coefficients, although the coverage rates are not much affected.
In what follows, we consider one linear regression example where all features are equally
correlated with a correlation coefficient of 0.8. The features were generated from
a zero-mean multivariate Gaussian distribution with the covariance matrix given by
\begin{equation} \label{equieq}
\Sigma_{i,j}=0.8, \quad \mbox{for all $i \ne j$}, \quad \Sigma_{i,i}=1 \quad \mbox{for all $i$.}
\end{equation}
We set $p=500$, $n=300$,
$(\beta_0,\beta_1,\ldots,\beta_{10})=(1,2,2.5,3,3.5,4,$ $5,6,7,-8,-9)$,
and $\beta_{11}= \cdots= \beta_p=0$, and generated 100 independent datasets in total.
For convenience, we will call this model an equi-correlation linear regression model.
The same model has been used in \cite{vandeGeer2014} to illustrate the performance of
desparsified-Lasso, but with different sample sizes and regression coefficients.
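A minimal R sketch of this data-generating setup is as follows; the one-factor representation $X_j=\sqrt{0.8}\,W+\sqrt{0.2}\,Z_j$ reproduces the covariance matrix (\ref{equieq}) without forming it explicitly, and unit-variance Gaussian noise is assumed for the response.
\begin{verbatim}
n <- 300; p <- 500; rho <- 0.8
W <- rnorm(n)                              # shared factor
Z <- matrix(rnorm(n * p), n, p)            # idiosyncratic parts
X <- sqrt(rho) * W + sqrt(1 - rho) * Z     # equi-correlated features with unit variance

beta <- c(2, 2.5, 3, 3.5, 4, 5, 6, 7, -8, -9)
y <- 1 + drop(X[, 1:10] %*% beta) + rnorm(n)   # beta_0 = 1; N(0,1) noise assumed
\end{verbatim}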
For this example, it is easy to see that for each feature $x_j$, the Markov blanket $\xi_j$
consists of all other $p-1$ features. That is,
the condition (\ref{ceq1}) is violated, as we always restrict the estimated Markov blanket
to a size much smaller than $p$.
Algorithm \ref{subsetAlg1} was first applied to this example, where SIS-MCP was used for
variable selection, and nodewise regression
(with SIS-Lasso performed for each
node) was used for Markov blanket estimation. All the algorithms were run under their
default setting in the R package {\it SIS}. For comparison,
desparsified-Lasso and ridge projection methods were also applied to this example. Both
methods were run under their default settings in the R package {\it hdi}.
The numerical results were summarized in Table \ref{equictab}, which indicates that
MNR is quite robust to misspecification of the Markov blanket for this example.
In terms of mean coverage rates and widths, MNR produced the most accurate
confidence intervals among the three methods.
\begin{table}[htbp]
\caption{Coverage rates and widths of the 95\% confidence intervals
produced by desparsified-Lasso, ridge projection and MNR (with Algorithm \ref{subsetAlg1}) for
the equi-correlation linear regression model. Refer to the caption of Table \ref{reg0tab}
for the notation. }
\label{equictab}
\begin{center}
\vspace{-0.2in}
\begin{tabular}{ccccc} \toprule
Measure & --- & Desparsified-Lasso & Ridge & MNR \\ \hline
& signal & 0.916(0.028) & 0.973(0.016) & 0.938(0.024) \\
\raisebox{1.5ex}{Coverage} & noise & 0.963(0.019) & 0.990(0.010) & 0.951(0.022) \\ \midrule
& signal & 0.656(0.003) & 1.066(0.007) & 0.551(0.003) \\
\raisebox{1.5ex}{Width} & noise & 0.657(0.004) & 1.069(0.007) & 0.554(0.003) \\ \bottomrule
\end{tabular}
\end{center}
\vspace{-0.15in}
\end{table}
Compared to the results reported in Table \ref{regtab} for the AR(2)-precision linear regression, desparsified-Lasso works much better for this example. For the AR(2)-precision linear regression,
Table \ref{biastable} shows that $\hat{{\boldsymbol \beta}}_{bc}$ is
severely biased and thus the method suffers from coverage deficiency for nonzero coefficients.
To explore this issue further, we report
in Table \ref{biastable2} $\hat{{\boldsymbol \beta}}_{bc}$ and $\hat{{\boldsymbol \beta}}_{\rm MNR}$ for
the nonzero coefficients $\beta_1,\beta_2,\ldots,\beta_{10}$.
The comparison with the true values shows that $\hat{{\boldsymbol \beta}}_{bc}$ is nearly unbiased for
$\beta_1,\ldots,\beta_8$, although it is systematically smaller than the true values
in magnitude. As a result, desparsified-Lasso produced a good coverage rate for the
nonzero coefficients of this example.
MNR continues to work well: $\hat{{\boldsymbol \beta}}_{\rm MNR}$ is unbiased and accurate for this example.
\begin{table}[htbp]
\caption{Regression coefficient estimates (averaged over 100 independent datasets)
produced by MNR and desparsified-Lasso for the equi-correlation linear regression model
(with $|{\boldsymbol S}_*|=10$, $p=500$ and $n=300$), where the numbers in the parentheses represent
the standard deviations of the estimates. }
\label{biastable2}
\begin{center}
\vspace{-0.2in}
\begin{tabular}{cccccccccccc} \toprule
Method & & $\beta_1$ & $\beta_2$ & $\beta_3$ & $\beta_4$ & $\beta_5$ & $\beta_6$ &
$\beta_{7}$ & $\beta_{8}$ & $\beta_9$ & $\beta_{10}$ \\ \midrule
--- & true & 2 & 2.5 & 3 & 3.5 & 4 & 5 & 6 & 7 & -8 & -9 \\ \midrule
& $\hat{{\boldsymbol \beta}}_{bc}$ & 1.96 & 2.43 & 2.98 & 3.47 & 3.96 & 4.97 & 5.95 & 6.95 & -7.87 & -8.84 \\
\raisebox{1.5ex}{desparsified} & SD & (0.02) & (0.02) & (0.02) & (0.02) & (0.02) & (0.02) &
(0.02) & (0.02) & (0.02) & (0.02) \\ \midrule
& $\hat{{\boldsymbol \beta}}_{\rm MNR}$ & 2.01 & 2.49 & 3.00 & 3.50 & 3.99 & 5.02 & 5.98 & 6.99 & -8.01 & -9.01 \\
\raisebox{1.5ex}{MNR} & SD & (0.01) & (0.01) & (0.02) & (0.01) & (0.02) & (0.02) & (0.02) &
(0.01) & (0.02) & (0.02) \\ \bottomrule
\end{tabular}
\end{center}
\end{table}
In summary, MNR is robust to misspecification of the Markov neighborhood.
It performs reasonably well as long as, for each subset regression,
the Markov neighborhood covers the major contributors of that regression,
namely the features that are most significant in the original regression
as well as those most correlated with the target feature of the subset regression.
\subsection{Computational Complexity}
The MNR method consists of three steps,
variable selection, Markov blanket estimation, and subset regression. Its computational
complexity is typically dominated by the algorithm used for Markov blanket estimation.
For Algorithm \ref{subsetAlg1},
if the Lasso algorithm is employed for variable selection, then, by \cite{Meinshausen2007}, the computational complexity of this step is upper bounded by $O(n^3p)$ under the small-$n$-large-$p$ scenario. Instead of Lasso, this paper employed the SCAD and MCP algorithms for variable selection which have competitive computational complexity with Lasso \citep{Zhang2010}.
If the $\psi$-learning algorithm is used for Markov blanket estimation, then,
by \cite{LiangSQ2015}, the computational complexity of this step is upper bounded
by $O(n^3 p^2)$.
By condition (\ref{ceq3}), the computational complexity of each subset regression is $O(n^2)$ and thus the total computational complexity of the subset regression step is $O(n^2 p)$.
Therefore, the total computational complexity of MNR is upper bounded by $O(n^3 p^2)$.
Alternatively, if the nodewise regression algorithm is used for Markov blanket estimation and Lasso is used for the regression on each node/feature, then the computational complexity of
this step is also upper bounded by $O(n^3p^2)$ as there are $p$ features in total.
In this case, the total computational complexity of MNR is also upper bounded by $O(n^3p^2)$.
If the graphical Lasso is used for Markov blanket estimation, then the total computational complexity of MNR will be $O(p^3)$, as the graphical Lasso has a computational complexity of $O(p^3)$. In a fast implementation of the graphical Lasso algorithm by making use of the block diagonal structure in its solution \citep{WittenFS2011}, the total computational complexity of MNR can be reduced to $O(p^{2+v})$ for some $0< v \leq 1$.
Since desparsified-Lasso employs the nodewise regression algorithm in estimating
the precision matrix and correcting the bias of $\hat{{\boldsymbol \beta}}_{Lasso}$, its
computational complexity is upper bounded by $O(n^3p^2)$, the same bound as Algorithm \ref{subsetAlg1}.
For a dataset generated from the AR(2)-precision linear regression with $p=500$ and $n=200$,
Table \ref{timetab} summarizes the CPU times taken by different methods when running with
a single thread on an Intel(R) Xeon(R) CPU E5-2660 v3@2.60GHz machine.
For MNR, we employed SIS-MCP for variable selection, but different methods for Markov blanket estimation.
We note that ridge projection and multi-split can be substantially faster than MNR, although they are often inferior to MNR in numerical performance.
For this example, ridge projection and multi-split took about 3.2 and 3.1 CPU seconds, respectively.
\begin{table}[htbp]
\begin{center}
\caption{CPU times (in seconds) taken by different methods for
a dataset generated from the AR(2)-precision linear regression with $p=500$ and $n=200$,
where MNR$_a$, MNR$_b$, MNR$_c$, and MNR$_d$
mean that $\psi$-learning, nodewise regression (with SIS-Lasso),
nodewise regression (with SIS-MCP), and nodewise regression (with SIS-SCAD) were used for Markov blanket estimation, respectively.}
\label{timetab}
\begin{tabular}{cccccc} \toprule
Methods & Desparsified-Lasso & MNR$_a$ & MNR$_b$ & MNR$_c$ & MNR$_d$ \\ \midrule
CPU(s) & 258 & 152 & 230 & 205 & 250 \\ \bottomrule
\end{tabular}
\end{center}
\end{table}
Finally, we note that MNR can be substantially accelerated via parallel computing,
for which both the Markov blanket estimation and subset regression steps
can be done in an embarrassingly parallel way.
As described in Section \ref{LRSect}, the $\psi$-learning algorithm consists of two
screening stages, for which both the correlation coefficients and $\psi$-partial
correlation coefficients can be calculated in parallel. Refer to \cite{LiangSQ2015} for
more discussions on this issue. If the nodewise regression algorithm is used for
Markov blanket estimation, its parallel implementation is obvious.
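As a rough sketch of such a parallel implementation, the per-feature work can be distributed with the {\it parallel} package in R; the function \verb|mnr_one_feature()| below is a hypothetical placeholder for the Markov blanket estimation and subset regression of a single feature.
\begin{verbatim}
library(parallel)

# mnr_one_feature(j, X, y) is a hypothetical helper that estimates the Markov
# blanket of feature j and returns the estimate, confidence interval and p-value
# from the corresponding subset regression
res <- mclapply(seq_len(ncol(X)),
                function(j) mnr_one_feature(j, X, y),
                mc.cores = detectCores() - 1)
\end{verbatim}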
\section{Causal Structure Discovery for High-Dimensional Regression} \label{sect4}
The causal relationship between two or more variables refers to a {\it persistent association}
which is expected to hold in all situations, unaffected by
the values of other variables. Because it not only allows
better explanations of past events but also enables better predictions of the future,
causal discovery has been an essential task in many disciplines.
Since, for high-dimensional problems, it is difficult and expensive to identify causal relationships
through intervention experiments, passively observed data have become an important source to
be searched for causal relationships. The challenge of causal discovery from observational
data lies in the fact that statistical associations detected from observational data
are not necessarily causal.
In statistics, the causal relationship or {\it persistent association} can be determined using
conditional independence tests. For a large set of variables, a pair of variables
are considered to have no direct causal relationship if a subset of the remaining variables
can be found such that, conditional on this subset of variables, the two variables are
independent. Based on conditional independence tests,
\cite{SpirtesC2000} proposed the famous PC algorithm
for learning the structure of causal Bayesian networks.
Later, \cite{BuhlmannMM2010} extended
the PC algorithm to high-dimensional variable selection. The extension is called
the PC-simple algorithm, which can be used to search for the causal
structure around the response variable. Note that the causal structure
includes all the possible direct causes and effects of the response variable, i.e., all
the parents and children in the terminology of directed acyclic graphs (DAGs).
For certain problems, we may be able to determine by logical reasoning which variables are parents and
which are children, although PC-simple itself cannot tell them apart.
An alternative algorithm that can be used for local causal discovery is
the HITON-PC algorithm \citep{Aliferisetal2010}, which is also an extension of the PC algorithm.
The major issue with the PC-simple and HITON-PC algorithms is their time complexity.
For both algorithms, in the worst scenario, i.e., when for each of the $p$ features all conditional
independence tests of order from 1 to $p-1$ are conducted, the total number of
conditional independence tests is $O(p\, 2^p)$.
Even under the sparsity constraint,
the total number of conditional tests can still be of a high order polynomial of $p$.
See \cite{BuhlmannMM2010} for more discussions on this issue.
In what follows we describe how the causal structure
around the response variable can be discovered for high-dimensional regression based on the
output of MNR. The proposed algorithm has a much more favorable
computational complexity, which is $O(n^{3/2}p)$ in all scenarios.
For Gaussian, binary and proportional-hazards response data,
the MNR-based procedure is described case by case as follows.
\subsection{Gaussian Response} \label{CausalGaussoian}
Assume that ${\boldsymbol Z}=(Y,{\boldsymbol X})$ jointly follows a multivariate Gaussian distribution $N_{p+1}(0,\Sigma)$.
To distinguish the notation from that used in previous sections, we let
${\boldsymbol G}_z=({\boldsymbol V}_z,{\boldsymbol E}_z)$ denote the graph underlying the joint Gaussian distribution.
Let $\zeta_j=\xi_j \cup {\boldsymbol S}_*$, where $\xi_j$ denotes the
Markov blanket of $X_j$ in the graph ${\boldsymbol G}_z$.
It is easy to see that $\zeta_j$ forms
a separator of $Y$ and $X_j$ in the graph ${\boldsymbol G}_z$.
Then, under the faithfulness condition for the joint distribution $N_{p+1}(0,\Sigma)$, we can show
as in \cite{LiangSQ2015} that
$ \rho(Y, X_j|{\boldsymbol X}_{{\boldsymbol V}\setminus \{j\}}) \ne 0 \Longleftrightarrow \rho(Y, X_j | {\boldsymbol X}_{\zeta_j}) \ne 0$,
where $\rho(\cdot,\cdot|\cdot)$ denotes the partial correlation coefficient.
The validity of the faithfulness condition is supported by the Lebesgue
measure zero argument \citep{Meek1995};
that is, the problems that violate the faithfulness condition usually correspond to some
particular parameter values that form a zero measure set in the space of all possible parameterizations.
Further, by the relationship between partial correlations
and regression coefficients, see e.g., p.436 of \cite{BuhlmannGeerBook2011}, we have
$\rho(Y, X_j | {\boldsymbol X}_{\zeta_j}) \ne 0
\Longleftrightarrow \beta_j \ne 0$, where $\beta_j$ is the
coefficient of $X_j$ in the Markov neighborhood regression $Y \sim X_j+{\boldsymbol X}_{\zeta_j}$.
Therefore, the test for $H_0: \beta_j=0$ versus $H_1: \beta_j \ne 0$
can be conducted via the Markov neighborhood regression.
Given the $p$-values of individual tests,
the causal structure around the response variable $Y$ can be determined
via a multiple hypothesis test.
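This equivalence can also be checked numerically. The following toy R example (not part of the MNR procedure itself) illustrates that the $t$-statistic of $\beta_j$ in the Markov neighborhood regression coincides with the usual $t$-statistic for the sample partial correlation $\hat\rho(Y,X_j|{\boldsymbol X}_{\zeta_j})$.
\begin{verbatim}
set.seed(1)
n  <- 200
Z  <- matrix(rnorm(n * 3), n, 3)                   # plays the role of X_{zeta_j}
xj <- drop(Z %*% c(1, -1, 0.5)) + rnorm(n)
y  <- 2 * xj + drop(Z %*% c(1, 1, 1)) + rnorm(n)

# t-statistic of beta_j in the regression Y ~ X_j + X_{zeta_j}
t_reg <- summary(lm(y ~ xj + Z))$coefficients["xj", "t value"]

# t-statistic of the sample partial correlation rho(Y, X_j | X_{zeta_j})
r    <- cor(resid(lm(y ~ Z)), resid(lm(xj ~ Z)))
t_pc <- r * sqrt((n - 3 - 2) / (1 - r^2))

c(t_reg, t_pc)   # the two statistics coincide
\end{verbatim}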
Since the problem of causal structure discovery is to identify
a small set of variables that have causal or effect relations with the response variable,
a simplified version of Algorithm S1 can be used, which avoids assessing the
effect of every variable on the response. In the simplified algorithm,
the Markov blankets need to be found only for the variables
that survive the variable screening step.
described as follows.
\begin{algorithm} (Simplified MNR for Causal Structure Discovery) \label{subsetAlg3}
\begin{itemize}
\item[(a)] (Variable screening)
Apply a sure independence screening procedure with $Y$ as
the response variable and ${\boldsymbol X}$ as features,
to obtain a reduced feature set, $\hat{{\boldsymbol S}}_* \subseteq \{1,\dots,p\}$, with
the size $|\hat{{\boldsymbol S}}_*|=O(\sqrt{n}/\log(n))$.
\item[(b)] (Markov blanket estimation)
For each variable $X_j \in \hat{{\boldsymbol S}}_*$, apply a sure independence screening
procedure to obtain a reduced neighborhood
$\hat{\xi}_{j}\subseteq \{1,\dots,p\}$ with the size $|\hat{\xi}_j|=O(\sqrt{n}/\log(n))$.
\item[(c)] (Subset Regression) For each feature $X_j\in \hat{{\boldsymbol S}}_*$, run
a subset regression with the features given by $\{X_j\} \cup {\boldsymbol X}_{\hat{\xi}_j} \cup {\boldsymbol X}_{\hat{{\boldsymbol S}}_*}$.
Conduct inference for $\beta_j$, including the estimate, confidence interval and $p$-value,
based on the output of the subset regression.
\item[(d)] (Causal Structure Discovery) Conduct a multiple hypothesis test to identify causal features
based on the $p$-values calculated in step (c).
\end{itemize}
\end{algorithm}
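A minimal R sketch of Algorithm \ref{subsetAlg3} for the Gaussian case is given below; plain correlation ranking is used as a stand-in for the SIS/HZ-SIS screening steps, and the neighborhood sizes are fixed at a user-chosen \verb|k| instead of the $O(\sqrt{n}/\log(n))$ rule.
\begin{verbatim}
simplified_mnr <- function(X, y, k) {
  # (a) variable screening: keep the k features most correlated with y
  S_hat <- order(abs(cor(X, y)), decreasing = TRUE)[1:k]
  out <- lapply(S_hat, function(j) {
    # (b) Markov blanket estimation for X_j: its k most correlated features
    nbr <- setdiff(order(abs(cor(X, X[, j])), decreasing = TRUE), j)[1:k]
    # (c) subset regression on {X_j} U X_nbr U X_S_hat; X_j enters first
    sub <- unique(c(j, nbr, S_hat))
    cf  <- summary(lm(y ~ X[, sub]))$coefficients[2, ]
    c(estimate = unname(cf["Estimate"]), pvalue = unname(cf["Pr(>|t|)"]))
  })
  res <- do.call(rbind, out)
  # (d) multiple hypothesis test (Holm adjustment) to identify causal features
  data.frame(feature = S_hat, estimate = res[, "estimate"],
             p.adj = p.adjust(res[, "pvalue"], method = "holm"))
}
\end{verbatim}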
The consistency of the algorithm for causal structure identification can be established
under slightly modified conditions of Theorem \ref{Them1}. To be more precise,
we only need to restate the conditions (A1)-(A4) for the joint distribution of $(Y,{\boldsymbol X})$,
and then the proof directly follows Theorem 2 of \cite{LiangSQ2015}.
It is easy to see that the computational complexity of this algorithm is $O(n^{3/2}p)$, as the
computational complexity of the SIS algorithm is $O(np)$ \citep{FanLv2008} and there are
a total of $O(\sqrt{n}/\log(n))$ features for which the Markov blanket needs to be estimated.
Hence, Algorithm \ref{subsetAlg3} can potentially be much faster than the PC-simple
and HITON-PC algorithms, especially when $p$ is large.
Again, this algorithm can have many implementations. For example,
the SIS algorithm \citep{FanLv2008} can be used for both variable screening
and Markov blanket estimation.
The HZ-SIS algorithm \citep{XueLiang2017} can also be used for both of them.
\subsection{Binary Response}
For binary response data, if we assume that the features ${\boldsymbol X}$ follow a Gaussian
distribution, then a joint distribution of $(Y,{\boldsymbol X})$ can be defined as in \cite{LeeHastie2015},
for which the conditional distribution of each component of ${\boldsymbol X}$
is Gaussian, as given by a linear regression model, and
the conditional distribution of $Y$ is a binomial distribution, as given
by a logistic regression model.
Further, we can assume that the joint distribution is faithful to
the mixed graphical model formed by $(Y,{\boldsymbol X})$ \citep{Meek1995}.
As pointed out in \cite{LeeHastie2015}, the mixed graphical model is a
pairwise Markov network and the zero regression coefficients (in the
nodewise regression) correspond to the conditional independence.
Therefore, Algorithm \ref{subsetAlg3} is also applicable to the binary response data,
for which variable screening can be done using the GLM SIS algorithm \citep{FanS2010}.
Extending the algorithm to multinomial response data is straightforward. The consistency
of the approach directly follows from Theorem 2 of \cite{XuJiaLiang2019}, which shows
the consistency of a conditional independence test based approach for learning
mixed graphical models. Following \cite{XuJiaLiang2019}, the consistency of the
proposed approach can be established under appropriate conditions
such as the faithfulness of the joint distribution of $(Y,{\boldsymbol X})$ with respect to
the underlying mixed graphical model, the sparsity of Markov blankets, the sparsity
of the true model, and some conditions on generalized linear models.
\subsection{Proportional-Hazards} \label{Coxsection}
For Gaussian and binary response data, we justify Algorithm \ref{subsetAlg3}
for causal structure discovery by representing $(Y,{\boldsymbol X})$ as an undirected graph for which
the causal structure around $Y$ contains both direct causes and effects
of the response variable. Unfortunately, extending this justification to
survival data is hard.
For survival data, the response is modeled through a proportional-hazards rate, which is neither Gaussian nor multinomial, and thus the joint distribution of $(Y,{\boldsymbol X})$
is difficult to define with respect to an undirected graph.
However, this difficulty can be resolved by modeling $(Y,{\boldsymbol X})$ as
a Bayesian network with $Y$ being a child node only.
If $Y$ is a child of $X_j$, then
$\tilde{\zeta}_j= \xi_j \cup \{p+1\} \cup {\boldsymbol S}_*$ forms the Markov blanket
of $X_j$, where $\xi_j$ is the sub-Markov blanket formed with ${\boldsymbol X}$
as implied by the PC algorithm \citep{SpirtesC2000}, $p+1$ is the index of $Y$ (by defining $X_{p+1}=Y$),
and ${\boldsymbol S}_*$ contains all siblings of $X_j$ with respect to the common child $Y$.
By the total conditioning property shown in \cite{PelletE2008} for Bayesian networks, we have
\begin{equation} \label{BNeq1}
X_j \perp Y |{\boldsymbol X}_{\tilde{\zeta}_j\setminus \{j,p+1\}} \Longleftrightarrow
X_j \perp Y |{\boldsymbol X}_{{\boldsymbol V} \setminus \{j,p+1\}},
\end{equation}
which implies that Algorithm \ref{subsetAlg3} is still valid for survival data.
However, the construction of Bayesian networks for non-Gaussian, non-multinomial, or missing
data is beyond the scope of this paper. Therefore, no
illustrative examples are given for this part.
In (\ref{BNeq1}), if $\tilde{\zeta}_j$ is replaced by some super Markov blanket
$\tilde{\zeta}_j' \supset \tilde{\zeta}_j$, the equivalence still holds.
This justification is very general and can be applied to the Gaussian and
multinomial response data as well. The only shortcoming is that it assumes that $Y$ can
only be a child of ${\boldsymbol X}$, which might be too restrictive for the problems considered
with Gaussian and multinomial response data.
\section{Real Data Studies}
This section reports two applications of Algorithm \ref{subsetAlg3},
one is for identification of anti-cancer drug sensitive genes, and the other is for
identification of cancer driver genes.
\subsection{Identification of Drug Sensitive Genes}
Disease heterogeneity is often observed in complex diseases such as cancer.
For example, molecularly targeted cancer drugs are only
effective for patients with tumors expressing targets \citep{GrunwaldH2003, Buzdar2009}.
The disease heterogeneity has directly motivated the development of precision medicine,
which aims to improve patient care by tailoring optimal therapies to an individual patient according to his/her
molecular profile and clinical characteristics.
Identifying sensitive genes to different drugs
is an important step toward the goal of precision medicine.
To illustrate the MNR method, we considered the cancer cell line encyclopedia (CCLE) dataset,
which is publicly available at {\it www.broadinstitute.org/ccle}.
The dataset consists of 8-point dose-response curves for 24 chemical compounds across over 400
cell lines. For different chemical compounds, the numbers of cell lines are
slightly different. For each cell line, the expression values
of $p=18,988$ genes are available. We used the area under the dose-response curve, which was termed
as activity area in \cite{Barretinaetal2012}, to measure the sensitivity of
a drug to each cell line. Compared to other measurements, such as $IC_{50}$
and $EC_{50}$, the activity area could capture the efficacy and potency of the drug
simultaneously. An exploratory analysis indicates that treating the activity area
as the response of a linear regression with respect to
the gene expression values is appropriate.
Since the purpose of this study is to identify the drug sensitive genes instead of
assessing the drug effect for all genes, Algorithm \ref{subsetAlg3} was applied
with the HZ-SIS algorithm used for variable screening and
Markov blanket estimation. In both steps, we set the neighborhood
size to be 40. After getting $p$-values from the subset regressions,
the adjusted $p$-values \citep{Holm1979} were calculated,
and the genes with the adjusted $p$-values less than 0.05 were identified
as the drug sensitive genes.
For drugs with no genes identified
at this significance level, we simply reported
the gene with the smallest $p$-value.
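In R, the selection rule just described amounts to a few lines, where \verb|pvals| denotes the named vector of raw $p$-values from the subset regressions for one drug:
\begin{verbatim}
p.adj <- p.adjust(pvals, method = "holm")              # Holm-adjusted p-values
sel   <- names(which(p.adj < 0.05))                    # significant genes
if (length(sel) == 0) sel <- names(which.min(pvals))   # fall back to the top gene
\end{verbatim}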
The results were summarized in Table \ref{drugtab}.
For Algorithm \ref{subsetAlg3}, different neighborhood sizes have been tried; the
results are similar.
For comparison,
desparsified Lasso, ridge projection and multi sample-splitting
were also applied to this example. As in the MNR method,
for each drug, we selected the genes with
the adjusted $p$-values less than 0.05
as significant; if no genes were selected at this significance level, we reported
the gene with the smallest adjusted $p$-value.
The results were also summarized in Table \ref{drugtab}.
Compared to the existing methods, MNR performs reasonably well for this real data example.
First of all,
for all drugs, desparsified Lasso is simply inapplicable due to
the ultra-high dimensionality of the dataset;
the package {\it hdi} aborted after exceeding the memory limit.
Due to the same issue, {\it hdi} also aborted for some drugs when performing
ridge projection.
is selected by both methods, then the 95\% confidence interval
produced by MNR is narrower.
MNR produced promising results in selection of drug sensitive genes.
For example, for both drugs Topotecan and Irinotecan, MNR selected
the gene SLFN11 as the top drug sensitive gene.
In the literature, \cite{Barretinaetal2012} and \cite{Zoppolietal2012}
reported that SLFN11 is predictive of treatment response for Topotecan and Irinotecan.
For drug 17-AAG, MNR selected NQO1 as the top gene;
in the literature, \cite{HadleyH2014} and \cite{Barretinaetal2012}
reported NQO1 as the top predictive biomarker for 17-AAG.
For drug Paclitaxel, MNR selected BCL2L1 as the top gene.
In the literature, many publications, such as \cite{LeeHLYK2016} and
\cite{Domanetal2016}, reported that the gene BCL2L1 is predictive of treatment response
for Paclitaxel.
For drug PF2341066, \cite{LawrenceSalgia2010} reported that HGF, which
was selected by MNR as the top drug sensitive gene,
is potentially responsible for the effect of PF2341066.
For drug LBW242, RIPK1 is selected by MNR. \cite{Gaither2007} and \cite{Moriwaki2015}
stated that RIPK1 is one of the presumed targets of LBW242 and is involved in promoting cell death.
Finally, we point out that the genes selected by MNR have some overlap with those selected by
the multi sample-splitting method, and for the overlapping genes the 95\% confidence intervals produced by MNR tend
to be narrower.
{\small
\begin{center}
\begin{longtable}{ccccc}
\caption{ Comparison of drug sensitive genes selected by desparsified Lasso, ridge projection,
multi sample-splitting (multi-split) and MNR for 24 anti-cancer drugs, where $^*$ indicates that
this gene was significantly selected and the number in the parentheses denotes the
width of the 95\% confidence interval produced by the method.}
\label{drugtab} \\ \toprule
Drug &Desparsified Lasso & Ridge & Multi-Split & MNR \\ \hline
\endfirsthead
\multicolumn{5}{c}%
{{\bfseries \tablename\ \thetable{} -- continued from previous page}} \\
\hline Drug &Desparsified Lasso & Ridge & Multi-Split & MNR \\ \hline
\endhead
\hline \multicolumn{5}{|r|}{{Continued on next page}} \\ \hline
\endfoot
\endlastfoot
17-AAG&\makecell{--}&\makecell{--}&\makecell{NQO1*(0.138)}&\makecell{NQO1*(0.115)}\\\hline
AEW541&\makecell{--}&\makecell{F3(0.076)}&\makecell{SP1(0.176)}&\makecell{TMEM229B*(0.142)}\\\hline
AZD0530&\makecell{--}&\makecell{PPY2(0.966)}&\makecell{SYN3(0.705)}&\makecell{DDAH2(0.088)}\\\hline
AZD6244&\makecell{--}&\makecell{OSBPL3(0.161)}&\makecell{SPRY2*(0.084)\\LYZ*(0.069)\\RNF125*(0.084)}&\makecell{LYZ*(0.048)\\SPRY2*(0.056)}\\\hline
Erlotinib&\makecell{--}&\makecell{LRRN1(0.102)}&\makecell{PCDHGC3(0.684)}&\makecell{ENPP1(0.123)}\\\hline
Irinotecan&\makecell{--}&\makecell{SLFN11(0.091)}&\makecell{ARHGAP19*(0.134)\\SLFN11*(0.044)}&\makecell{ARHGAP19*(0.108)\\SLFN11*(0.033)}\\\hline
L-685458&\makecell{--}&\makecell{--}&\makecell{MSL2(0.2)}&\makecell{FAM129B(0.187)}\\\hline
Lapatinib&\makecell{--}&\makecell{WDFY4(0.509)}&\makecell{ERBB2*(0.111)}&\makecell{SYTL1(0.062)}\\\hline
LBW242&\makecell{--}&\makecell{RXFP3(0.86)}&\makecell{LOC100009676(0)}&\makecell{RIPK1(0.221)}\\\hline
Nilotinib&\makecell{--}&\makecell{--}&\makecell{RAB37(0.187)}&\makecell{RHOC(0.103)}\\\hline
Nutlin-3&\makecell{--}&\makecell{TTC7B(0.119)}&\makecell{LOC100009676(0)}&\makecell{DNAJB14(0.163)}\\\hline
Paclitaxel&\makecell{--}&\makecell{ABCB1*(0.229)}&\makecell{ABCB1*(0.183)}&\makecell{BCL2L1*(0.289)}\\\hline
Panobinostat&\makecell{--}&\makecell{C17orf105(1.104)}&\makecell{PUM2(0.589)}&\makecell{TGFB2(0.103)}\\\hline
PD-0325901&\makecell{--}&\makecell{ZNF646(0.498)}&\makecell{LYZ*(0.064)\\RNF125*(0.087)}&\makecell{DBN1(0.104)}\\\hline
PD-0332991&\makecell{--}&\makecell{GRM6(0.719)}&\makecell{LOC100506972(0.569)}&\makecell{PUM2(0.244)}\\\hline
PF2341066&\makecell{--}&\makecell{WDFY4(0.487)}&\makecell{SPN*(0.124)}&\makecell{HGF*(0.043)\\ENAH*(0.068)\\GHRLOS2*(0.24)}\\\hline
PHA-665752&\makecell{--}&\makecell{--}&\makecell{LAIR1(0.193)}&\makecell{INHBB(0.039)}\\\hline
PLX4720&\makecell{--}&\makecell{ADAMTS13(0.692)}&\makecell{SPRYD5*(0.118)}&\makecell{PLEKHH3(0.22)}\\\hline
RAF265&\makecell{--}&\makecell{LOC100507235(0.748)}&\makecell{SIGLEC9(0.761)}&\makecell{SEPT11*(0.078)}\\\hline
Sorafenib&\makecell{--}&\makecell{--}&\makecell{SBNO1(0.426)}&\makecell{RPL22*(0.151)\\LAIR1*(0.094)}\\\hline
TAE684&\makecell{--}&\makecell{--}&\makecell{ARID3A*(0.11)}&\makecell{ARID3A*(0.078)}\\\hline
TKI258&\makecell{--}&\makecell{--}&\makecell{SPN(0.12)}&\makecell{KHDRBS1(0.251)}\\\hline
Topotecan&\makecell{--}&\makecell{--}&\makecell{SLFN11*(0.136)}&\makecell{SLFN11*(0.107)}\\\hline
ZD-6474&\makecell{--}&\makecell{MID1IP1(0.158)}&\makecell{NOD1(0.363)}&\makecell{PXK*(0.066)}\\ \bottomrule
\end{longtable}
\vspace{-0.15in}
\end{center}
}
\subsection{Identification of Cancer Driver Genes}
We considered the Lymph dataset \citep{HansDW2007}, which consists of $n = 148$ samples
with 100 node-negative cases (low risk for breast cancer) and 48 node-positive cases
(high risk for breast cancer); the node status serves as our binary response.
For each sample, there are $p=4512$ genes that showed evidence of variation above
the noise level for further study. This dataset has been analyzed
by multiple authors, such as \cite{HansDW2007} and \cite{LiangSY2013}.
Algorithm \ref{subsetAlg3} was applied to this dataset, where variable screening
was done using the GLM SIS algorithm developed in \cite{FanS2010},
and the Markov blanket estimation step
was done using the HZ-SIS algorithm \citep{XueLiang2017}.
In both steps, we set the neighborhood size to be 5. For this dataset,
MNR selected two genes, RGS3 and ATP6V1F, with the adjusted $p$-value less than 0.05.
The details are given in Table \ref{cancertab}; the findings
are consistent with existing knowledge.
For example, RGS3 is known to play a role in modulating the ability of motile lymphoid cells \citep{Bowman1998},
and to be upregulated in p53-mutated breast cancer tumors \citep{Ooe2007}.
ATP6V1F has been reported by many authors in lymph node status studies, see e.g.,
\cite{HansDW2007} and \cite{Dobra2009}.
For comparison, desparsified Lasso and ridge projection methods
were also applied to this example. As aforementioned, the multi sample-splitting algorithm is
not yet available for logistic regression.
Both desparsified Lasso and ridge projection selected only the gene RGS3 as the cancer driver gene.
A closer look at Table \ref{cancertab} shows that MNR outperforms
desparsified Lasso and ridge projection for this example. This can be explained from two perspectives.
First, MNR is the only method that identifies RGS3 as a cancer driver gene at an acceptable
significance level, whereas
desparsified Lasso and ridge projection can only indicate that RGS3 has a smaller adjusted $p$-value
than the other genes, with its adjusted $p$-value remaining greater than 0.05.
Second, for the gene RGS3, the 95\% confidence interval produced by MNR is narrower than
that produced by desparsified Lasso. Moreover, the 95\% confidence interval
produced by ridge projection even contains 0 and is thus less significant.
\begin{table}
\selectfont
\begin{center}
\caption{Comparison of the cancer driver genes selected by the MNR, desparsified Lasso and ridge projection methods
for the Lymph dataset, where $^*$ indicates that
this gene was significantly selected. }
\vspace{-5mm}
\label{cancertab}
\begin{tabular}{ccccccc} \\ \toprule
& Desparsified Lasso & & Ridge & & \multicolumn{2}{c}{MNR} \\ \cline{2-2} \cline{4-4} \cline{6-7}
Gene & RGS3 & & RGS3 & & RGS3* & ATP6V1F* \\
95\%C.I. & (1.145,5.748)& & (-0.251,2.249)& & (0.859,5.178) & (2.073,7.131)\\
Width & 4.603 && 2.500 & & 4.319 & 5.058\\ \bottomrule
\end{tabular}
\vspace{-0.15in}
\end{center}
\end{table}
\section{Discussion}
This paper has proposed the MNR method
for constructing confidence intervals and assessing
$p$-values for high-dimensional regression. The MNR method
has successfully broken the high-dimensional inference problem into
a series of low-dimensional inference problems based on
conditional independence relations among different variables.
The embarrassingly parallel structure
of the MNR method, where the Markov blanket,
confidence interval and $p$-value can be calculated for each variable in parallel,
enables it potentially to be run very fast on multicore computers.
The MNR method has been tested on high-dimensional linear,
logistic and Cox regression. The numerical results indicate that the MNR method
significantly outperforms the existing ones.
The MNR method has also been applied to learn causal structures in
high-dimensional regression, with real data examples
on the identification of drug sensitive genes and cancer driver genes.
This paper has assumed that the features
are Gaussian. Extension of the MNR method to non-Gaussian features
is straightforward. In this case, the conditional independence relations
among the features can be determined using Bayesian networks
based on the concept of Markov blanket as described in Section \ref{Coxsection}.
The theory developed in Section 2 will still hold.
The idea of using conditional independence relations for dimension
reduction is general and potentially can be extended to
other high-dimensional or big data problems as well.
Finally, we note that the performance of the MNR method relies on the algorithms used for
variable selection and Markov blanket estimation, each of which
can require a non-trivial amount of tuning. Sub-optimal performance of these
algorithms may adversely affect the performance of the MNR method; for example, the resulting
confidence intervals can be wider.
\section*{Acknowledgments}
The authors thank the editor, associate editor and two referees for their encouraging and
constructive comments which
have led to significant improvement of this paper.
\vspace{10mm}
\section{Introduction}
We study equations of Painlev\'e type that arise as symmetry constraints for the non-Abelian Volterra lattices
\begin{equation}\label{VL1}\tag*{\mbox{VL$^1$}}
u_{n,x}=u_{n+1}u_n-u_nu_{n-1}
\end{equation}
and
\begin{equation}\label{VL2}\tag*{\mbox{VL$^2$}}
u_{n,x}=u^{\mbox{\rm\tiny T}}_{n+1}u_n-u_nu^{\mbox{\rm\tiny T}}_{n-1}
\end{equation}
where $u$ and $u^{\mbox{\rm\tiny T}}$ can be viewed as a matrix of any fixed size and its transpose. To the author's knowledge, equation \ref{VL1} appeared in \cite{Salle_1982}, while equation \ref{VL2} is new. There exists a sequence of substitutions that links these two equations, as described in Section \ref{s:mod}, but this transformation is implicit, and we prefer to treat both equations independently, albeit in parallel to each other.
Both equations VL$^{1,2}$ are integrable in the sense of the existence of an infinite hierarchy of higher symmetries and conservation laws. In this paper, these properties are almost never used, since we consider only low-dimensional reductions associated with symmetries containing $u_{n+k}$ for $|k|\le2$. These symmetries are listed in Section \ref{s:sym} and include: the lattice itself, its simplest higher symmetry, the classical scaling symmetry and the master-symmetry. The stationary equation for a linear combination of these flows determines a constraint which is consistent with the lattice (there are several non-equivalent cases depending on the choice of coefficients of the linear combination). This constraint admits a reduction of the order; as a result, a discrete equation of the Painlev\'e type arises, while the dynamics in $x$ is reduced to a continuous Painlev\'e equation. The theory of non-Abelian Painlev\'e equations appeared not very long ago (we mention \cite{Balandin_Sokolov_1998} as one of the first publications on this topic). This area is now actively developing and includes many studied examples, but there are still many blank spots, and a classification of the non-Abelian case is lacking.
Section \ref{s:zcr} contains zero curvature representations for the flows we need; under the constraint, they turn into the isomonodromic Lax pairs for the Painlev\'e equations.
In the scalar case $u_n\in\mathbb C$, the first example of a constraint of this type, leading to the dP$_1$ and P$_4$ equations, appeared in \cite{Its_Kitaev_Fokas_1990, Fokas_Its_Kitaev_1991}, see also \cite{Grammaticos_Ramani_1998, Grammaticos_Ramani_2014}. In this example the master-symmetry is not used and the non-isospectrality of the reduction is due to the scaling symmetry. The non-Abelian version of this reduction for both equations VL$^{1,2}$ is given in Section \ref{s:constr1}. The calculations here are quite straightforward and lead to two non-commutative analogs of dP$_1$. Notice that these analogs do not coincide with equations studied in \cite{Gordoa_Pickering_Zhu_2013, Cassatella-Contra_Manas_2012, Cassatella-Contra_Manas_Tempesta_2018} in the framework of other approaches. The respective analogs of P$_4$ equation for $y=u_n$ can be written as
\[
y_{xx}=\frac{1}{2}y_xy^{-1}y_x+\frac{1}{2}[ky-\gamma y^{-1},y_x]+\frac{3}{2}y^3+4xy^2+2(x^2-\alpha)y-\frac{\gamma^2}{2}y^{-1},
\]
where the coefficient $k$ takes two values: 1 for \ref{VL1} and $-3$ for \ref{VL2}.
Section \ref{s:constr2} is devoted to more difficult constraints related with master-symmetries of VL$^{1,2}$. In the scalar case, this reduction was studied in \cite{Adler_Shabat_2019} where it was shown that it leads to equations dP$_{34}$ and P$_5$ or P$_3$ (depending on the values of parameters). Now we present non-Abelian generalizations for these equations. However, the continuous part of the answer is left in the form of a system of two first-order ODEs, since reduction to one second-order ODE leads to rather complicated formulas.
An essential difference in the non-Abelian case is related to the procedure of order reduction. We start from a stationary equation, which is a 5-point O$\Delta$E:
\[
f_n(u_{n-2},u_{n-1},u_n,u_{n+1},u_{n+2};x,\mu,\nu)=0,
\]
where $f_n$ is a polynomial in $u_k$ and $u^{\mbox{\rm\tiny T}}_k$ with coefficients depending on $n,x$ and scalar parameters $\mu,\nu$, and our final result is a discrete Painlev\'e equation of the form
\[
g_n(u_{n-1},u_n,u_{n+1};x,\mu,\nu,\varepsilon,\delta)=0
\]
with additional constants $\varepsilon,\delta\in\mathbb C$. In the scalar case, both equations are completely equivalent since the reduction of order is made due to the first integrals which enter the equation as parameters. In the non-Abelian case, we use {\em partial} first integrals instead. As a result, the obtained Painlev\'e equation defines only a subclass of special solutions of the original equation (two of four matrix initial data are replaced by values of scalar parameters).
\smallskip
{\em Notations.} We use Greek letters to denote scalar coefficients, that is, elements of $\mathbb C$. By default, Latin letters stand for elements of an associative algebra ${\cal A}$ over $\mathbb{C}$, with the exception of the independent variables $n,x$ and $t_i$ (used for the flows from the lattice hierarchies) which, of course, are scalars. The involution $u^{\mbox{\rm\tiny T}}$ on $\cal A$ is such that $(u^{\mbox{\rm\tiny T}})^{\mbox{\rm\tiny T}}=u$, $(\alpha u+\beta v)^{\mbox{\rm\tiny T}}=\alpha u^{\mbox{\rm\tiny T}}+\beta v^{\mbox{\rm\tiny T}}$ and $(uv)^{\mbox{\rm\tiny T}}=v^{\mbox{\rm\tiny T}} u^{\mbox{\rm\tiny T}}$ for any $u,v\in \cal A$ and $\alpha,\beta\in\mathbb C$. We also assume that the algebra ${\cal A}$ has the unity 1 and that expressions involving inverse elements are allowed. In expressions like $u_n+\alpha$, the term $\alpha$ is understood as the scalar $\alpha$ multiplied by $1\in\cal A$, which does not lead to a confusion. For clarity, we can think of ${\cal A}$ as the algebra of matrices of arbitrary fixed size with the operations of matrix transpose and taking the inverse (it is also possible to use the conjugate transpose $u^*$, assuming that some part of scalars should be real).
\section{Modified equations}\label{s:mod}
Although \ref{VL1} and \ref{VL2} equations are very similar, there is no an explicit invertible change between them. It is clear that it is possible to get rid of the involution in \ref{VL2} by passing to the sequence of variables $\dots,u^{\mbox{\rm\tiny T}}_{n-1},u_n,u^{\mbox{\rm\tiny T}}_{n+1},u_{n+2},\dots$, on the expense that equations for variables with even and odd numbers become different. Then \ref{VL1} takes the form of two-component system of Toda type
\[
p_{n,x}=q_np_n-p_nq_{n-1},~~ q_{n,x}=p_{n+1}q_n-q_np_n\quad (u_{2n}=p_n,~ u_{2n+1}=q_n),
\]
and \ref{VL2} turns into another system
\[
p_{n,x}=q_np_n-p_nq_{n-1},~~ q_{n,x}=q_np_{n+1}-p_nq_n\quad (u_{2n}=p_n,~ u^{\mbox{\rm\tiny T}}_{2n+1}=q_n).
\]
Nevertheless, both lattices are related indeed, but in more complicated and unexpected way via the sequence of difference substitutions
\[
\ref*{VL1}~ \leftarrow~ \mbox{mVL}^1~ \leftarrow~ \mbox{pot-mVL}~ \to~ \mbox{mVL}^2~ \to~ \ref*{VL2},
\]
which act in opposite directions and involve inverse elements in ${\cal A}$. The equations participating in this sequence are
\begin{align*}
\mbox{VL$^1$}:&\quad u_{n,x}= u_{n+1}u_n-u_nu_{n-1},\\
\mbox{mVL$^1$}:&\quad v_{n,x}= v_{n+1}(v^2_n-\alpha^2)-(v^2_n-\alpha^2)v_{n-1},\\
\mbox{pot-mVL}:&\quad w_{n,x}= (w_{n+1}+2\alpha w_n)(w^{-1}_{n-1}w_n+2\alpha),\\
\mbox{mVL$^2$}:&\quad v_{n,x}= (v_n-\alpha)v_{n+1}(v_n+\alpha)-(v_n+\alpha)v_{n-1}(v_n-\alpha),\\
\mbox{VL$^2$}:&\quad u_{n,x}= u^{\mbox{\rm\tiny T}}_{n+1}u_n-u_nu^{\mbox{\rm\tiny T}}_{n-1}.
\end{align*}
The following substitutions are easily verified by direct computation. The VL$^1$ equation is related to the modified Volterra lattice mVL$^1$ by the well-known discrete Miura map \cite{Salle_1982, Bogoyavlensky_1991a, Casati_Wang_2020}
\[
\mbox{VL}^1\leftarrow\mbox{mVL}^1:~~ u_n=(v_{n+1}+\alpha)(v_n-\alpha).
\]
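For instance, for the first of them, differentiating $u_n=(v_{n+1}+\alpha)(v_n-\alpha)$ in virtue of mVL$^1$ and using only that $\alpha$ is a scalar gives
\begin{align*}
u_{n,x}&= v_{n+1,x}(v_n-\alpha)+(v_{n+1}+\alpha)v_{n,x}\\
&=\bigl(v_{n+2}(v^2_{n+1}-\alpha^2)-(v^2_{n+1}-\alpha^2)v_n\bigr)(v_n-\alpha)
  +(v_{n+1}+\alpha)\bigl(v_{n+1}(v^2_n-\alpha^2)-(v^2_n-\alpha^2)v_{n-1}\bigr)\\
&=(v_{n+2}+\alpha)(v^2_{n+1}-\alpha^2)(v_n-\alpha)-(v^2_{n+1}-\alpha^2)(v^2_n-\alpha^2)\\
&\quad+(v^2_{n+1}-\alpha^2)(v^2_n-\alpha^2)-(v_{n+1}+\alpha)(v^2_n-\alpha^2)(v_{n-1}-\alpha)\\
&=u_{n+1}u_n-u_nu_{n-1},
\end{align*}
that is, \ref{VL1} holds. The remaining substitutions are checked in the same way.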
The substitution from the potential lattice pot-mVL to mVL$^1$ is of the form
\[
\mbox{mVL}^1\leftarrow\mbox{pot-mVL}:~~ v_n=w_{n+1}w^{-1}_n+\alpha,
\]
and the substitution to mVL$^2$ differs only in the order of the factors:
\[
\mbox{pot-mVL}\to\mbox{mVL}^2:~~ v_n=w^{-1}_nw_{n+1}+\alpha.
\]
Finally, the last substitution is
\[
\mbox{mVL}^2\to\ref{VL2}:~~ u_{2n}=(v_{2n}+\alpha)(v_{2n-1}+\alpha),~~
u_{2n+1}=(v^{\mbox{\rm\tiny T}}_{2n+1}-\alpha)(v^{\mbox{\rm\tiny T}}_{2n}-\alpha).
\]
The mVL$^2$ equation was studied (for zero parameter $\alpha$) in \cite{Adler_Svinolupov_Yamilov_1999}, where it was stated that it did not admit an analog of the Miura map. We see that this is not so: the trick is that the transformation should be defined by different formulas for even and odd variables and using the involution $^{\mbox{\rm\tiny T}}$, although the mVL$^2$ equation itself does not contain it.
In principle, the above substitutions can be applied to the reductions of equations \ref{VL1} and \ref{VL2} which we study below. However, this is not easy given the implicit nature of this transformation. Therefore, we leave the problem of studying these relations for further research, and in this work we consider both lattices and the Painlev\'e equations obtained from them independently, although parallel to each other.
\begin{remark}
It is interesting to note the incomplete analogy with the sequence of substitutions
\[
\mbox{KdV}\xleftarrow{u=v^2\pm v_x+\alpha}
\mbox{mKdV}^1\xleftarrow{v= w_xw^{-1}}
\mbox{pot-mKdV}\xrightarrow{v= w^{-1}w_x}
\mbox{mKdV}^2
\]
(see eg. \cite{Kupershmidt_2000}) for the equations
\begin{align*}
\mbox{KdV}:&\quad u_t= u_{xxx}-3uu_x-3u_xu,\\
\mbox{mKdV$^1$}:&\quad v_t= v_{xxx}-3v^2v_x-3v_xv^2-6\alpha v_x,\\
\mbox{pot-mKdV}:&\quad w_t= w_{xxx}-3w_{xx}w^{-1}w_x-6\alpha w_x,\\
\mbox{mKdV$^2$}:&\quad v_t= v_{xxx}+3[v,v_{xx}]-6vv_xv-6\alpha v_x.
\end{align*}
These equations can be obtained from the corresponding lattice equations by a continuous limit, but no continuous analog of \ref{VL2} is known.
\end{remark}
\section{Symmetries}\label{s:sym}
For each lattice \ref{VL1} or \ref{VL2}, the evolution derivation $D_x=D_{t_1}$ is a member of infinite-dimensional Lie algebra with generators satisfying commutation relations
\begin{equation}\label{DD}
[D_{t_i},D_{t_j}]=0,~~ [D_{\tau_i},D_{t_j}]=jD_{t_{j+i-1}},~~ [D_{\tau_i},D_{\tau_j}]=(j-i)D_{\tau_{j+i-1}},
\quad i,j\ge1.
\end{equation}
This implies, in particular, that all derivations $D_{t_i}$ and $xD_{t_i}+D_{\tau_i}$ commute with $D_x$ and that the stationary equation for any linear combination of these derivations defines a constraint consistent with the lattice. The Painlev\'e equations or their higher analogs appear if such a constraint involves at least one derivation of the form $xD_{t_i}+D_{\tau_i}$. In this paper, we consider only low order constraints obtained by use of derivations $D_x$, $D_{t_2}$, $D_{\tau_1}$ and $D_{\tau_2}$ which we now write down explicitly. The relations (\ref{DD}) for these flows can be verified directly.
For \ref{VL1}, the symmetry $D_{t_2}$ is defined by equation
\begin{equation}\label{VL1.t2}
u_{n,t_2}=(u_{n+2}u_{n+1}+u^2_{n+1}+u_{n+1}u_n)u_n-u_n(u_nu_{n-1}+u^2_{n-1}+u_{n-1}u_{n-2})
\end{equation}
and its analog for \ref{VL2} reads
\begin{equation}\label{VL2.t2}
u_{n,t_2}= (u^{\mbox{\rm\tiny T}}_{n+1}u_{n+2} +(u^{\mbox{\rm\tiny T}}_{n+1})^2 +u_nu^{\mbox{\rm\tiny T}}_{n+1})u_n
-u_n(u^{\mbox{\rm\tiny T}}_{n-1}u_n +(u^{\mbox{\rm\tiny T}}_{n-1})^2 +u_{n-2}u^{\mbox{\rm\tiny T}}_{n-1}).
\end{equation}
The classical scaling symmetry $D_{\tau_1}$ for both lattices is of the form
\[
u_{n,\tau_1}=u_n.
\]
The master-symmetry $D_{\tau_2}$ for \ref{VL1} involves an additional nonlocal variable $s_n$:
\begin{equation}\label{VL1.tau2}
u_{n,\tau_2} = \bigl(n+\tfrac{3}{2}\bigr)u_{n+1}u_n+u^2_n-\bigl(n-\tfrac{3}{2}\bigr)u_nu_{n-1}+[s_n,u_n],\quad s_n-s_{n-1}=u_n.
\end{equation}
Notice that the derivatives of $s_n$ with respect to $x$ and $t_2$ are local, for instance $s_{n,x}=u_{n+1}u_n$. It is clear that in the scalar case $u_n\in\mathbb C$ the introduction of this nonlocality is not necessary (a description of local master-symmetries for the Volterra type lattices can be found, e.g., in \cite{Cherdantsev_Yamilov_1995, Adler_Shabat_Yamilov_2000}). As it turns out, for the lattice \ref{VL2} the nonlocality is not necessary either: the master-symmetry in this case is of the form
\begin{equation}\label{VL2.tau2}
u_{n,\tau_2} = \bigl(n+\tfrac{3}{2}\bigr)u^{\mbox{\rm\tiny T}}_{n+1}u_n+u^2_n-\bigl(n-\tfrac{3}{2}\bigr)u_nu^{\mbox{\rm\tiny T}}_{n-1}.
\end{equation}
The relations (\ref{DD}) imply that the derivations of the form
\begin{equation}\label{Dt}
D_t=\mu_1(xD_{t_2}+D_{\tau_2})+\mu_2(xD_x+D_{\tau_1})+\mu_3D_{t_2}+\mu_4D_x
\end{equation}
commute with $D_x$. Hence it follows that the stationary equation $u_{n,t}=0$ is a constraint compatible with the lattice. Taking into account the scaling transformation and the shift $x\to x-x_0$, one can see that there are two nonequivalent cases leading to the Painlev\'e type equations:
\begin{equation}\label{constr1}
u_{n,t_2}+2(xu_{n,x}+u_{n,\tau_1})=0
\end{equation}
and
\begin{equation}\label{constr2}
xu_{n,t_2}+u_{n,\tau_2}-2\mu(xu_{n,x}+u_{n,\tau_1})-\nu u_{n,x}=0.
\end{equation}
These are the constraints that will be the subject of our further study.
\begin{remark}
In addition to the above symmetries, there are symmetries of the form $u_{n,y}=[a,u_n]$ which are specific to the non-Abelian case. For the lattice \ref{VL1}, $a$ can be any constant element of $\cal A$, and for \ref{VL2} it should satisfy $a=-a^{\mbox{\rm\tiny T}}$. Introducing such a term into (\ref{Dt}) may be interesting because it violates the $GL$-invariance, but we do not consider this possibility in this paper.
\end{remark}
\begin{remark}
By use of the \ref{VL1} or \ref{VL2} equation, one can express all variables $u_{n+k}$ in terms of $u_n,u_{n+1}$ and their derivatives with respect to $x$. Therefore, any symmetry of the lattice can be rewritten as a coupled PDE system. In particular, for the \ref{VL1} equation and its symmetry (\ref{VL1.t2}), the pair $(p,q)=(u_n,u_{n+1})$ satisfies, for any $n$, the associated system
\[
\left\{\begin{aligned}
& q_{t_2}=~ q_{xx}+2q_xq+2(qp)_x+2[qp,q],\\
& p_{t_2}= -p_{xx}+2pp_x+2(qp)_x+2[qp,p].
\end{aligned}\right.
\]
Similarly, for \ref{VL2} and its symmetry (\ref{VL2.t2}), the pair $(p,q)=(u_n,u^{\mbox{\rm\tiny T}}_{n+1})$ satisfies the system
\[
\left\{\begin{aligned}
& q_{t_2}=~ q_{xx}+2q_xq+2(pq)_x+2[pq,q],\\
& p_{t_2}= -p_{xx}+2p_xp+2(qp)_x+2[p,qp].
\end{aligned}\right.
\]
These systems were found in \cite{Adler_Sokolov_2020} when classifying non-Abelian systems with higher conservation laws. They give two generalizations of the Levi scalar system from \cite{Levi_1981}, where the concept of the associated system was introduced; see also \cite{Shabat_Yamilov_1991} for a systematic study of relations between (scalar) Volterra and Toda-type lattices and nonlinear Schr\"odinger type equations.
\end{remark}
\section{Zero curvature representations}\label{s:zcr}
{\em \ref{VL1} equation.} It is easy to check that it is equivalent to the matrix equation
\begin{equation}\label{VL1.Lx}
L_{n,x}=U_{n+1}L_n-L_nU_n
\end{equation}
with
\begin{equation}\label{VL1.LU}
L_n=\begin{pmatrix}
\lambda & \lambda u_n \\
-1 & 0
\end{pmatrix},\quad
U_n=\begin{pmatrix}
\lambda+u_n & \lambda u_n \\
-1 & u_{n-1}
\end{pmatrix}.
\end{equation}
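Indeed, a direct computation gives
\[
U_{n+1}L_n=\begin{pmatrix} \lambda^2 & \lambda^2u_n+\lambda u_{n+1}u_n\\ -\lambda-u_n & -\lambda u_n \end{pmatrix},\qquad
L_nU_n=\begin{pmatrix} \lambda^2 & \lambda^2u_n+\lambda u_nu_{n-1}\\ -\lambda-u_n & -\lambda u_n \end{pmatrix},
\]
so the only nonzero entry of $U_{n+1}L_n-L_nU_n$ is $\lambda(u_{n+1}u_n-u_nu_{n-1})$, which coincides with the corresponding entry of $L_{n,x}$ and reproduces \ref{VL1}.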
The entire \ref{VL1} hierarchy is associated with linear equations of the general form
\[
\Psi_{n+1}=L_n\Psi_n,\quad \Psi_{n,t}+\kappa(\lambda)\Psi_{n,\lambda}=V_n\Psi_n
\]
with the above matrix $L_n$. The compatibility condition for these equations is written as the zero curvature representation
\begin{equation}\label{VL1.Lt}
L_{n,t}+\kappa(\lambda)L_{n,\lambda}=V_{n+1}L_n-L_nV_n.
\end{equation}
Any member of the hierarchy is equivalent to (\ref{VL1.Lt}) for a suitable choice of $V_n=V^{(t)}_n$, with the factors $\kappa^{(t_j)}=0$ for the derivations $D_{t_j}$ and $\kappa^{(\tau_j)}=\lambda^j$ for $D_{\tau_j}$. It is easy to prove that the matrix $V_n$ has the structure
\[
V_n=\begin{pmatrix}
a_n & -\lambda c_{n+1}u_n \\
c_n & \lambda c_n+a_{n-1}
\end{pmatrix}.
\]
In particular, the choice $a_n=\lambda+u_n$ and $c_n=-1$ leads to the matrix $U_n=V^{(x)}_n$ from (\ref{VL1.LU}). The matrices for the other flows we need are obtained with the following choices:
\begin{align*}
V^{(t_2)}_n:&~~ a_n=\lambda^2+\lambda u_n+u_{n+1}u_n+u^2_n+u_nu_{n-1},~~ c_n= -\lambda-u_n-u_{n-1},\\
V^{(\tau_1)}_n:&~~ a_n=n+1,~~ c_n=0,\\
V^{(\tau_2)}_n:&~~ a_n= (n-\tfrac{1}{2})(\lambda+u_n)+s_n,~~ c_n= -n+\tfrac{1}{2}.
\end{align*}
Notice that $V^{(\tau_2)}_n$ contains the nonlocality $s_n$ even in the scalar case, in which the master-symmetry (\ref{VL1.tau2}) becomes local.\medskip
{\em \ref{VL2} equation.} In this case, the zero curvature representation has a less familiar form
\begin{equation}\label{VL2.Lx}
L_{n,x}=U_{n+1}L_n+L_nU^{\mbox{\rm\tiny T}}_n,
\end{equation}
with the matrices
\begin{equation}\label{VL2.LU}
L_n=\begin{pmatrix}
1 & -\lambda\\
0 & \lambda u_n
\end{pmatrix},\quad
U_n=\begin{pmatrix}
\tfrac{1}{2}\lambda & 1\\
-\lambda u_{n-1} & -\tfrac{1}{2}\lambda-u_{n-1}+u^{\mbox{\rm\tiny T}}_n
\end{pmatrix},
\end{equation}
where $U^{\mbox{\rm\tiny T}}$ denotes the matrix transpose of $U$ with the involution applied to each entry (that is, if $\cal A$ is the matrix algebra then this is just the block transpose). Representations of this type arise if the linear equations for the $\psi$-functions are given differently for even and odd numbers (cf. the alternating Miura map from Section \ref{s:mod}):
\begin{gather*}
\Psi_{2n+1}=L_{2n}\Psi_{2n}=L^{\mbox{\rm\tiny T}}_{2n+1}\Psi_{2n+2},\\
\Psi_{2n,x}=-U^{\mbox{\rm\tiny T}}_{2n}\Psi_{2n},\quad \Psi_{2n+1,x}=U_{2n+1}\Psi_{2n+1}.
\end{gather*}
It is easy to see that the compatibility conditions for these equations have the same form (\ref{VL2.Lx}) regardless of the parity of $n$. Note that from here we can pass to a representation of the form $M_{2n,x}=A_{2n+2}M_{2n}-M_{2n}A_{2n}$ with matrices $M_{2n}=(L^{\mbox{\rm\tiny T}}_{2n+1})^{-1}L_{2n}$. Representations of this type (with a step of two lattice sites) were also used for some equations of Volterra type in the scalar situation \cite{Garifullin_Mikhailov_Yamilov_2014}.
More generally, all derivations from the \ref{VL2} hierarchy correspond to the linear equations
\[
\Psi_{2n,t}+\kappa(\lambda)\Psi_{2n,\lambda}=-V^{\mbox{\rm\tiny T}}_{2n}\Psi_{2n},\quad
\Psi_{2n+1,t}+\kappa(\lambda)\Psi_{2n+1,\lambda}=V_{2n+1}\Psi_{2n+1},
\]
with the same matrix $L_n$, which leads to representations of the form
\begin{equation}\label{VL2.Lt}
L_{n,t}+\kappa(\lambda)L_{n,\lambda}=V_{n+1}L_n+L_nV^{\mbox{\rm\tiny T}}_n,
\end{equation}
with $\kappa^{(t_j)}=0$ and $\kappa^{(\tau_j)}=\lambda^j$ as before. In particular, we are interested in the derivations $D_{t_2}$, $D_{\tau_1}$ and $D_{\tau_2}$ which correspond to the matrices
\[
V^{(t_2)}_n=\begin{pmatrix}
\tfrac{1}{2}\lambda^2+\lambda u_{n-1} & \lambda+a^{\mbox{\rm\tiny T}}_n\\
-\lambda u_{n-1}(\lambda+a_{n-1}) &
-\tfrac{1}{2}\lambda^2-u_{n-1}(\lambda+a_{n-1})+u^{\mbox{\rm\tiny T}}_na_{n+1}
\end{pmatrix},
\]
where $a_n=u_n+u^{\mbox{\rm\tiny T}}_{n-1}$, and
\[
V^{(\tau_1)}_n=\begin{pmatrix}
0 & 0\\
0 & 1
\end{pmatrix},\quad
V^{(\tau_2)}_n=nU_n+\frac{1}{2}\begin{pmatrix}
-\lambda & -1\\
3\lambda u_{n-1} & 2\lambda+3u_{n-1}+u^{\mbox{\rm\tiny T}}_n
\end{pmatrix}.
\]
{\em Transition to the isomonodromic Lax pairs.} The matrix $V_n$ and the factor $\kappa(\lambda)$ corresponding to the linear combination (\ref{Dt}) are constructed from matrices and factors corresponding to the basic derivations:
\[
V_n=\mu_1(xV^{(t_2)}+V^{(\tau_2)}_n)+\mu_2(xU_n+V^{(\tau_1)}_n)+\mu_3V^{(t_2)}_n+\mu_4U_n,\quad
\kappa=\mu_1\lambda^2+\mu_2\lambda.
\]
This matrix satisfies equation (\ref{VL1.Lt}) or (\ref{VL2.Lt}), depending on which hierarchy we are considering. In addition, in both cases the compatibility condition for the derivations $D_x$ and $D_t$ is satisfied:
\begin{equation}\label{Ut}
U_{n,t}+\kappa(\lambda)U_{n,\lambda}-V_{n,x}=[V_n,U_n].
\end{equation}
On passing to the stationary equation for (\ref{Dt}), the derivative with respect to $t$ disappears and the zero curvature representations (\ref{VL1.Lt}), (\ref{VL2.Lt}) and (\ref{Ut}) turn into isomonodromic Lax pairs. The discrete part of the constraint is equivalent to an equation of the form
\begin{equation}\label{VL1.Llambda}
\kappa(\lambda)L_{n,\lambda}=V_{n+1}L_n-L_nV_n
\end{equation}
or of the form
\begin{equation}\label{VL2.Llambda}
\kappa(\lambda)L_{n,\lambda}=V_{n+1}L_n+L_nV^{\mbox{\rm\tiny T}}_n,
\end{equation}
and the continuous part is equivalent to an equation of the form
\begin{equation}\label{Ulambda}
\kappa(\lambda)U_{n,\lambda}-V_{n,x}=[V_n,U_n]
\end{equation}
with the matrix entries simplified modulo the constraint under study.
\section{Non-Abelian analogs of \texorpdfstring{dP$_1$}{dP1} and \texorpdfstring{P$_4$}{P4} equations}\label{s:constr1}
For the \ref{VL1} equation, the constraint (\ref{constr1}) takes the form of difference equation
\begin{multline*}
u_{n+2}u_{n+1}u_n+u^2_{n+1}u_n+u_{n+1}u^2_n-u^2_nu_{n-1}-u_nu^2_{n-1}-u_nu_{n-1}u_{n-2}\\
+2x(u_{n+1}u_n-u_nu_{n-1})+2u_n=0,\quad
\end{multline*}
and for the \ref{VL2} it reads
\begin{multline*}
u^{\mbox{\rm\tiny T}}_{n+1}u_{n+2}u_n +(u^{\mbox{\rm\tiny T}}_{n+1})^2u_n +u_nu^{\mbox{\rm\tiny T}}_{n+1}u_n
-u_nu^{\mbox{\rm\tiny T}}_{n-1}u_n -u_n(u^{\mbox{\rm\tiny T}}_{n-1})^2 -u_nu_{n-2}u^{\mbox{\rm\tiny T}}_{n-1} \\
+2x(u^{\mbox{\rm\tiny T}}_{n+1}u_n-u_nu^{\mbox{\rm\tiny T}}_{n-1})+2u_n=0.\quad
\end{multline*}
Each of these equations can be represented as $F_{n+1}u_n-u_nF_{n-1}=0$ (with $F$ replaced by $F^{\mbox{\rm\tiny T}}$ in the second case), and it turns out that the equality $F_n=0$ is itself a constraint consistent with the lattice equation. This leads to the following non-Abelian analogs of dP$_1$ (we remark that different non-Abelian versions of dP$_1$ were studied in \cite{Cassatella-Contra_Manas_2012, Cassatella-Contra_Manas_Tempesta_2018}):
\begin{gather}
\label{dP11}\tag*{\mbox{dP$_1^1$}}
u_{n+1}u_n+u^2_n+u_nu_{n-1}+2xu_n+n-\nu+(-1)^n\varepsilon=0,\\
\label{dP12}\tag*{\mbox{dP$_1^2$}}
u^{\mbox{\rm\tiny T}}_{n+1}u_n+u^2_n+u_nu^{\mbox{\rm\tiny T}}_{n-1}+2xu_n+n-\nu+(-1)^n\varepsilon=0,
\end{gather}
where scalars $\nu$ and $\varepsilon$ play the role of integration constants. (This is only a partial first integral. The integration constants must belong to the center of $\cal A$, because they have to commute with $u_n$ which is a general element of $\cal A$.) Moreover, for consistency with the corresponding lattice, these scalars must be independent of $x$.
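In the scalar case the two equations coincide and, wherever $u_n\neq0$, division by $u_n$ brings them to the familiar dP$_1$ form
\[
u_{n+1}+u_n+u_{n-1}=-2x-\frac{n-\nu+(-1)^n\varepsilon}{u_n},
\]
which may serve as a quick consistency check.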
\begin{proposition}
The VL$^i$ equation is consistent with dP$_1^i$, $i=1,2$, for arbitrary constants $\nu,\varepsilon\in\mathbb C$.
\end{proposition}
\begin{proof}
This is proved by direct calculation as follows. Let $F_n$ be the left-hand side of \ref{dP11}. By differentiating this expression in virtue of \ref{VL1}, we obtain the identity
\[
F_{n,x}=(F_{n+1}-F_n)u_n+u_n(F_n-F_{n-1}).
\]
Similarly, if $F_n$ is the left-hand side of \ref{dP12} then differentiating in virtue of \ref{VL2} gives
\[
F_{n,x}=(F^{\mbox{\rm\tiny T}}_{n+1}+F_n)u_n-u_n(F_n+F^{\mbox{\rm\tiny T}}_{n-1}).
\]
In both cases, the derivative of the constraint $F_n=0$ vanishes identically due to the constraint itself, as required.
\end{proof}
The obtained constraint turns the lattice equation into a closed system of two first-order equations for the variables $u_{n-1}$ and $u_n$. It is possible to rewrite this system as a second-order ODE which generalizes P$_4$.
\begin{proposition}
If a solution of the VL$^i$ equation satisfies the constraint dP$_1^i$, then any of its components $y=u_n$ satisfies the P$_4^i$ equation, $i=1,2\!:$
\begin{align}
\label{P41}\tag*{\mbox{P$_4^1$}}
&y_{xx}=\frac{1}{2}y_xy^{-1}y_x+\frac{1}{2}[y-\gamma y^{-1},y_x]+\frac{3}{2}y^3+4xy^2+2(x^2-\alpha)y-\frac{\gamma^2}{2}y^{-1},\\
\label{P42}\tag*{\mbox{P$_4^2$}}
&y_{xx}=\frac{1}{2}y_xy^{-1}y_x-\frac{1}{2}[3y+\gamma y^{-1},y_x]+\frac{3}{2}y^3+4xy^2+2(x^2-\alpha)y-\frac{\gamma^2}{2}y^{-1}
\end{align}
with the values of parameters
\[
\alpha=\gamma_{n-1}-\gamma_n/2+1,\quad \gamma=\gamma_n:=n-\nu+(-1)^n\varepsilon.
\]
\end{proposition}
\begin{proof}
In the \ref{VL1} case, we have the following system for the variables $(p,y)=(u_{n-1},u_n)$:
\begin{equation}\label{P41.pq}
p_x= 2yp+p^2+2xp+\gamma_{n-1},\quad y_x=-y^2-2yp-2xy-\gamma_n.
\end{equation}
The second equation implies
\begin{equation}\label{py}
p=x-\frac{1}{2}(y^{-1}y_x+y+\gamma_ny^{-1})
\end{equation}
and substitution into the first equation gives \ref{P41}. Similarly, in the \ref{VL2} case, the variables $(p,y)=(u^{\mbox{\rm\tiny T}}_{n-1},u_n)$ satisfy the system
\begin{equation}\label{P42.pq}
p_x= 2py+p^2+2xp+\gamma_{n-1},\quad y_x=-y^2-2yp-2xy-\gamma_n
\end{equation}
which differs from the previous one by just one term. The elimination of $p$ leads to \ref{P42}.
\end{proof}
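In the commutative case the commutator terms in \ref{P41} and \ref{P42} disappear and both equations reduce to the same scalar equation
\[
y_{xx}=\frac{y_x^2}{2y}+\frac{3}{2}y^3+4xy^2+2(x^2-\alpha)y-\frac{\gamma^2}{2y},
\]
that is, to the classical P$_4$ equation with $\beta=-\gamma^2/2$ in its usual parametrization.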
Notice that the substitution (\ref{py}) (accompanied by the additional involution $p\to p^{\mbox{\rm\tiny T}}$ in the \ref{P42} case) defines a B\"acklund transformation for the equations P$_4^i$, since $u_{n-1}$ satisfies the same equations as $u_n$, up to the values of the parameters.
As mentioned in Section \ref{s:mod}, the existence of substitutions between \ref{VL1} and \ref{VL2} suggests that there should be some transformation between equations \ref{P41} and \ref{P42} (which reduces to the identity transformation in the scalar case). However, its explicit form is still unknown.
Isomonodromic Lax pairs for the obtained equations are constructed according to the scheme from the previous section. We simplify the entries of the matrix $V_n=V^{(t_2)}_n+2xU_n+2V^{(\tau_1)}_n$ by use of the constraint dP$_1^i$ and arrive at the following representations.
\begin{proposition}
Equation \ref{dP11} and system (\ref{P41.pq}) for the variables $(u_{n-1},u_n)$ admit the Lax representations of the form (\ref{VL1.Llambda}) and (\ref{Ulambda}), respectively, where $\kappa(\lambda)=2\lambda$, the matrices $L_n$ and $U_n$ are of the form (\ref{VL1.LU}) and
\[
V_n=\begin{pmatrix}
\lambda^2+2x\lambda+\gamma_{n+1}+2\nu+1+\lambda u_n &
\lambda^2u_n -\lambda u_nu_{n-1}-\lambda\gamma_n\\
-\lambda-2x-u_n-u_{n-1}& \gamma_n+2\nu+1-\lambda u_n
\end{pmatrix}.
\]
Equation \ref{dP12} and system (\ref{P42.pq}) for the variables $(u^{\mbox{\rm\tiny T}}_{n-1},u_n)$ admit the Lax representations of the form (\ref{VL2.Llambda}) and (\ref{Ulambda}), respectively, where $\kappa(\lambda)=2\lambda$, the matrices $L_n$ and $U_n$ are of the form (\ref{VL2.LU}) and
\begin{align*}
V_n&=\frac{\lambda}{2}(\lambda+2x+2u_{n-1})\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\\
&\qquad +\begin{pmatrix}
0 & \lambda+2x+u_{n-1}+u^{\mbox{\rm\tiny T}}_n\\
-\lambda^2u_{n-1}+\lambda u^{\mbox{\rm\tiny T}}_nu_{n-1}+\lambda\gamma_{n-1} &
1-2(-1)^n\varepsilon+[u^{\mbox{\rm\tiny T}}_n,u_{n-1}]
\end{pmatrix}.
\end{align*}
\end{proposition}
\section{Non-Abelian analogs of equations \texorpdfstring{dP$_{34}$}{dP34}, \texorpdfstring{P$_5$}{P5} and \texorpdfstring{P$_3$}{P3}}\label{s:constr2}
For the \ref{VL1} equation, the constraint (\ref{constr2}) is equivalent to equation
\begin{gather*}
x(u_{n+2}u_{n+1}u_n+u^2_{n+1}u_n+u_{n+1}u^2_n-u^2_nu_{n-1}-u_nu^2_{n-1}-u_nu_{n-1}u_{n-2})\\
+\bigl(n-\nu+\tfrac{3}{2}\bigr)u_{n+1}u_n+u^2_n-\bigl(n-\nu-\tfrac{3}{2}\bigr)u_nu_{n-1}+[s_n,u_n]\\
-2\mu x(u_{n+1}u_n-u_nu_{n-1})-2\mu u_n =0,\qquad s_n-s_{n-1}=u_n.
\end{gather*}
As in the previous section, it can be represented in the form $F_{n+1}u_n-u_nF_{n-1}=0$, where
\[
F_n=x(u_{n+1}u_n+u^2_n+u_nu_{n-1}-2\mu u_n) +\bigl(n-\nu-\tfrac{1}{2}\bigr)u_n+s_n -\mu n-(-1)^n\varepsilon-\varepsilon_0,
\]
with arbitrary constants $\varepsilon,\varepsilon_0\in\mathbb C$, and one can check that $F_n$ satisfies the identity $F_{n,x}=(F_{n+1}-F_n)u_n+u_n(F_n-F_{n-1})$. Hence, equation $F_n=0$ defines a constraint consistent with \ref{VL1}. In contrast to \ref{dP11}, it contains the nonlocal variable $s_n$, even in the scalar case. To get rid of it, we replace this equation with $G_n=F_{n+1}-F_n=0$, which is equation (\ref{Gn1}) below.
For the \ref{VL2} equation, the constraint (\ref{constr2}) is equivalent to
\begin{gather*}
x\bigl(u^{\mbox{\rm\tiny T}}_{n+1}u_{n+2}u_n +(u^{\mbox{\rm\tiny T}}_{n+1})^2u_n +u_nu^{\mbox{\rm\tiny T}}_{n+1}u_n
-u_nu^{\mbox{\rm\tiny T}}_{n-1}u_n -u_n(u^{\mbox{\rm\tiny T}}_{n-1})^2 -u_nu_{n-2}u^{\mbox{\rm\tiny T}}_{n-1}\bigr)\\
+\bigl(n-\nu+\tfrac{3}{2}\bigr)u^{\mbox{\rm\tiny T}}_{n+1}u_n+u^2_n-\bigl(n-\nu-\tfrac{3}{2}\bigr)u_nu^{\mbox{\rm\tiny T}}_{n-1}
-2\mu x(u^{\mbox{\rm\tiny T}}_{n+1}u_n-u_nu^{\mbox{\rm\tiny T}}_{n-1})-2\mu u_n =0.
\end{gather*}
This equation can be represented in the form $G_nu_n+u_nG^{\mbox{\rm\tiny T}}_{n-1}=0$ and it turns out that the equality $G_n=0$ (equation (\ref{Gn2}) below) is consistent with the derivation defined by \ref{VL2}.
Thus, at this stage, we have replaced the original constraint with a 4-point difference equation, for both lattices.
\begin{proposition}
For any constants $\mu,\nu,\varepsilon\in\mathbb C$, the \ref{VL1} equation is consistent with the constraint
\begin{equation}\label{Gn1}
\begin{gathered}[b]
x(u_{n+2}u_{n+1}+u^2_{n+1}-u^2_n-u_nu_{n-1})-(2\mu x-n+\nu-\tfrac{3}{2})u_{n+1}\\
+(2\mu x-n+\nu+\tfrac{1}{2})u_n-\mu+2(-1)^n\varepsilon=0,
\end{gathered}
\end{equation}
and the \ref{VL2} equation is consistent with the constraint
\begin{equation}\label{Gn2}
\begin{gathered}[b]
x\bigl(u^{\mbox{\rm\tiny T}}_{n+1}u_{n+2}+(u^{\mbox{\rm\tiny T}}_{n+1})^2-u^2_n-u_nu^{\mbox{\rm\tiny T}}_{n-1}\bigr)-(2\mu x-n+\nu-\tfrac{3}{2})u^{\mbox{\rm\tiny T}}_{n+1}\\
+(2\mu x-n+\nu+\tfrac{1}{2})u_n-\mu+2(-1)^n\varepsilon=0.
\end{gathered}
\end{equation}
\end{proposition}
\begin{proof}
Let $G_n$ be the left-hand side of (\ref{Gn1}). A direct computation proves that its derivative in virtue of \ref{VL1} satisfies the identity $G_{n,x}=G_{n+1}u_{n+1}+u_{n+1}G_n-G_nu_n-u_nG_{n-1}$, which implies that the equality $G_n=0$ defines a constraint for this lattice equation. Similarly, if $G_n$ is the left-hand side of (\ref{Gn2}) then differentiation due to \ref{VL2} leads to the identity $G_{n,x}=u^{\mbox{\rm\tiny T}}_{n+1}(G^{\mbox{\rm\tiny T}}_{n+1}+G_n)-u_n(G_n+G^{\mbox{\rm\tiny T}}_{n-1})$.
\end{proof}
To obtain the Painlev\'e equations, we have to reduce the order by one more unit. It is far from immediately clear whether this can be done at all. In the scalar case, it was shown \cite{Adler_Shabat_2019} that equation (\ref{Gn1}) admits the integrating factor $xu_{n+1}+xu_n+n-\nu+\tfrac{1}{2}$. After multiplying by it, the equation is reduced to the form $H_{n+1}-H_n=0$, where $H_n$ is a cubic polynomial in $u_n$. The constraint $H_n=\const$ takes the form of the discrete Painlev\'e equation dP$_{34}$ \cite{Grammaticos_Ramani_2014}
\begin{equation}\label{dP34}\tag*{\mbox{dP$_{34}$}}
(z_{n+1}+z_n)(z_n+z_{n-1})= 4x\frac{\mu z^2_n+2(-1)^n\varepsilon z_n+\delta}{z_n-n+\nu}
\end{equation}
after the change
\begin{equation}\label{zu}
z_n=2xu_n+n-\nu.
\end{equation}
Moreover, the $x$-evolution is governed by continuous Painlev\'e equations. If $\mu\ne0$ then the function $y(x)=1-4\mu x/(z_{n+1}(x)+z_n(x))$ satisfies the P$_5$ equation, for any $n$, and if $\mu=0$ then the function $y(\xi)=(z_{n+1}(x)+z_n(x))/(2\xi)$ with $x=\xi^2$ satisfies the P$_3$ equation.
Our goal is to generalize these results to the non-commutative case. It is convenient to work directly with variables (\ref{zu}), for which the \ref{VL1} and \ref{VL2} equations take, respectively, the form
\begin{equation}\label{VL1z}
2xz_{n,x}=z_{n+1}(z_n-n+\nu)-(z_n-n+\nu)z_{n-1}
\end{equation}
and
\begin{equation}\label{VL2z}
2xz_{n,x}=z^{\mbox{\rm\tiny T}}_{n+1}(z_n-n+\nu)-(z_n-n+\nu)z^{\mbox{\rm\tiny T}}_{n-1}.
\end{equation}
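For instance, (\ref{VL1z}) is nothing but \ref{VL1} rewritten in the variables (\ref{zu}): differentiation of (\ref{zu}) gives $2xz_{n,x}=4xu_n+4x^2u_{n,x}$, while
\[
z_{n+1}(z_n-n+\nu)-(z_n-n+\nu)z_{n-1}=4x^2(u_{n+1}u_n-u_nu_{n-1})+4xu_n,
\]
so that (\ref{VL1z}) is equivalent to the lattice equation $u_{n,x}=u_{n+1}u_n-u_nu_{n-1}$; the same computation, with the transposed neighbours, applies to (\ref{VL2z}).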
The non-Abelian analogs of \ref{dP34} are given in the following statement. In contrast to the scalar situation, the cases $\mu\ne0$ and $\mu=0$ must be considered separately. If all variables are scalars then each of the four equations coincides with \ref{dP34} up to coefficients.
\begin{proposition}
For $\mu\ne0$, let $\sigma=\varepsilon/\mu$ and let $\omega\in\mathbb C$ be an arbitrary constant. Then equation (\ref{Gn1}) admits the partial first integral consistent with (\ref{VL1z}):
\begin{equation}\label{dP341}\tag*{\mbox{dP$_{34}^1$}}
(z_{n-1}+z_n)(z_n+(-1)^n\sigma+\omega)^{-1}(z_n+z_{n+1})=4\mu x(z_n-n+\nu)^{-1}(z_n+(-1)^n\sigma-\omega),
\end{equation}
and equation (\ref{Gn2}) admits the partial first integral consistent with (\ref{VL2z}):
\begin{equation}\label{dP342}\tag*{\mbox{dP$_{34}^2$}}
(z^{\mbox{\rm\tiny T}}_{n-1}+z_n)(z_n+(-1)^n(\sigma-\omega))^{-1}(z_n+z^{\mbox{\rm\tiny T}}_{n+1})
=4\mu x(z_n-n+\nu)^{-1}(z_n+(-1)^n(\sigma+\omega)).
\end{equation}
For $\mu=0$, let $\delta\in\mathbb C$ be an arbitrary constant, then equation (\ref{Gn1}) admits the alternating partial first integral consistent with (\ref{VL1z}):
\begin{equation}\label{dP3410}\tag*{\mbox{d$\widetilde{\rm P}_{34}^1$}}
\left\{\begin{array}{ll}
(z_{n+1}+z_n)(z_n-n+\nu)(z_n+z_{n-1})=4x( 2\varepsilon z_n+\delta), & n=2k,\\
(z_n+z_{n-1})(z_{n+1}+z_n)(z_n-n+\nu)=4x(-2\varepsilon z_n+\delta), & n=2k+1,
\end{array}\right.
\end{equation}
and equation (\ref{Gn2}) admits the partial first integral consistent with (\ref{VL2z}):
\begin{equation}\label{dP3420}\tag*{\mbox{d$\widetilde{\rm P}_{34}^2$}}
(z^{\mbox{\rm\tiny T}}_{n+1}+z_n)(z_n-n+\nu)(z_n+z^{\mbox{\rm\tiny T}}_{n-1})=4x(2(-1)^n\varepsilon z_n+\delta).
\end{equation}
\end{proposition}
\begin{proof}
The statement can be proved by direct, but rather tedious computations. In the case $\mu\ne0$, a more conceptual proof can be obtained by use of representations (\ref{VL1.Llambda})--(\ref{Ulambda}). We have $\kappa(\lambda)=\lambda^2-2\mu\lambda$, therefore for $\lambda=2\mu$ we have the equation $V_{n+1}L_n=L_nV_n$ for the case of \ref{VL1} or $V_{n+1}L_n=-L_nV^{\mbox{\rm\tiny T}}_n$ for the case of \ref{VL2}, and, in both cases, $V_{n,x}=[U_n,V_n]$. From here it is not difficult to prove, in a general form, that vanishing of a quasi-determinant $\Delta_n$ of the matrix $V_n|_{\lambda=2\mu}$ defines a constraint which is consistent both with continuous and discrete dynamics. For instance, the equation
\[
V_x=[U,V],\quad V=\begin{pmatrix} a & b\\ c & d\end{pmatrix},\quad U=\begin{pmatrix} p & q\\ r & s\end{pmatrix},
\]
implies the following relation for $\Delta=b-ac^{-1}d$:
\[
\Delta_x=(p-ac^{-1}r)\Delta-\Delta(s-rc^{-1}d) \quad\Rightarrow\quad \Delta_x|_{\Delta=0}=0;
\]
in a similar way, the discrete equations imply the relations of the form $\Delta_{n+1}=f_n\Delta_ng_n$ or $\Delta_{n+1}=f_n\Delta^{\mbox{\rm\tiny T}}_ng_n$ with some factors $f_n,g_n$. It is easy to verify that equations \ref{dP341} and \ref{dP342} are nothing but the equality $\Delta_n=0$ for the respective matrices $V_n|_{\lambda=2\mu}$, simplified modulo equations (\ref{Gn1}) or (\ref{Gn2}) and the change (\ref{zu}).
\end{proof}
In conclusion, we write down the systems arising from the lattices (\ref{VL1z}) and (\ref{VL2z}) under the above constraints. Each of these systems admits the isomonodromic Lax pair (\ref{Ulambda}) with the matrices constructed from the matrices from Section \ref{s:zcr}.
For the lattice (\ref{VL1z}) and the constraint \ref{dP341}, the variables $q=z_n$ and $p=z_n+z_{n+1}$ satisfy, for any $n$, the system of the form
\begin{equation}\label{P51.pq}
\left\{\begin{array}{l}
2xq_x=p(q-n+\nu)-4\mu x(q+\alpha)p^{-1}(q+\beta),\\
2xp_x=pq+qp+p-p^2+4\mu x(p-2q-\alpha-\beta),
\end{array}\right.
\end{equation}
where $\alpha=(-1)^n\sigma-\omega$ and $\beta=(-1)^n\sigma+\omega$. For the lattice (\ref{VL2z}) and the constraint \ref{dP342}, the variables $q=z_n$ and $p=z_n+z^{\mbox{\rm\tiny T}}_{n+1}$ satisfy an almost identical system (cf.\ the pair (\ref{P41.pq}) and (\ref{P42.pq}))
\begin{equation}\label{P52.pq}
\left\{\begin{array}{l}
2xq_x=p(q-n+\nu)-4\mu x(q+\alpha)p^{-1}(q+\beta),\\
2xp_x=2pq+p-p^2+4\mu x(p-2q-\alpha-\beta),
\end{array}\right.
\end{equation}
where $\alpha=(-1)^n(\sigma+\omega)$ and $\beta=(-1)^n(\sigma-\omega)$. Notice that the system (\ref{P52.pq}) reduces to a single second-order rational equation, by solving its second equation with respect to $q$ and substituting the result into the first equation (recall that in the scalar case the P$_5$ equation is satisfied by the variable $y=1-4\mu xp^{-1}$). However, this is hardly advisable, since the resulting formulas are rather cumbersome. Whether it is possible to eliminate $q$ in the system (\ref{P51.pq}) is not obvious.
In a similar way, for the lattice (\ref{VL1z}) and the constraint \ref{dP3410}, the variables $q=z_n$ and $p=z_n+z_{n+1}$ with even $n$ satisfy the system
\begin{equation}\label{P31.pq}
\left\{\begin{array}{l}
2xq_x=p(q-n+\nu)-4xp^{-1}(2\varepsilon q+\delta),\\
2xp_x=pq+qp+p-p^2-8\varepsilon x
\end{array}\right.
\end{equation}
(of course, a similar system can be derived also for odd $n$), and for the lattice (\ref{VL2z}) and the constraint \ref{dP3420}, the variables $q=z_n$ and $p=z_n+z^{\mbox{\rm\tiny T}}_{n+1}$ satisfy, for any $n$, the system
\begin{equation}\label{P32.pq}
\left\{\begin{array}{l}
2xq_x=p(q-n+\nu)-4xp^{-1}(2(-1)^n\varepsilon q+\delta),\\
2xp_x=2pq+p-p^2-8(-1)^n\varepsilon x.
\end{array}\right.
\end{equation}
Recall that in the scalar case the P$_3$ equation is obtained for the variable $y=p/(2\xi)$, after the change $x=\xi^2$.
\subsubsection*{Acknowledgements}
I am grateful to V.V.~Sokolov for many stimulating discussions. This research was carried out under the State Assignment 0033-2019-0004 (Quantum field theory) of the Ministry of Science and Higher Education of the Russian Federation.
\section{Introduction}
The purpose of this paper is to study fluctuations of the eigenvalue counting measure for the Anderson model on $\Z^d$. We denote $\abs{n}=\sum_{v=1}^d \abs{n_v}$ for any $n\in\Zd$, and write $n\sim m$ for $n,m\in\Zd$ if and only if $\abs{n-m}=1$. Define the operator $H: \ell^2(\Z^d)\longrightarrow \ell^2(\Z^d)$ by
$$(Hu)_n =\Par{\Delta u}_n +\Par{X u}_n = \sum_{m \sim n} u_m+X_n \cdot u_n$$
where $\Set{X_n}_{n\in\Zd}$ is an array of independent, identically distributed (iid) random variables with finite moments, satisfying $\Ex{X_n}=0$. We denote the distribution of each variable $X_n$ by $\textnormal{d}\rho$, which will henceforth be referred to as \emph{the underlying distribution}.
In this paper, we aim to study the fluctuations of the counting measure for the eigenvalues of finite volume approximations. Explicitly, we study fluctuations of polynomial linear statistics of finite volume truncations of $H$: for any $L\in\N$, denote
$$\Lambda_L=[-L,L]\cap\Z,$$
and let $H_L$ be the truncation of $H$ to the cube $\Lambda_L^d \subset \Zd$. That is,\\
$H_L=1_{\Lambda_L^d} H 1_{\Lambda_L^d}$, where
$$\Par{1_{\Lambda_L^d}(u)}_n=\begin{cases}u_n & n\in\Lambda_L^d\\
0 & n\notin\Lambda_L^d\end{cases}.$$
We denote by $N\left(0,\sigma^2\right)$ the normal distribution on $\R$ with mean $0$ and variance $\sigma^2$, and denote by $\overset{d}{\longrightarrow}$ convergence in distribution. We agree that the zero random variable is also normal, by allowing $\sigma^2=0$ (in this case we say the distribution is \emph{degenerate}).\\
The \emph{empirical measure} of $H_L$ is the measure
$$\textnormal{d}\nu_L=\frac{1}{(2L+1)^{d}}\sum_{i=1}^{\abs{\Lambda_L^d}} \delta_{\lambda_i^{(\Lambda_L^d)}}$$
where $\left \{\lambda_{1}^{(\Lambda_L^d)},\lambda_2^{(\Lambda_L^d)},\ldots,\lambda_{\abs{\Lambda_L^d}}^{(\Lambda_L^d)} \right \}=\sigma\Par{H_L}$ are the eigenvalues of $H_L$ (counting multiplicity), and $\delta_\lambda$ is the Dirac measure at $\lambda$. When the empirical measure has a limit as $L \rightarrow \infty$, this limit is known as the density of states of $H$. In our case, it is known that the random measure $\textnormal{d}\nu_L$ converges weakly almost surely to a deterministic measure $\textnormal{d}\nu$ (see e.g.\ \cite{AW} and references within).
We want to focus on the asymptotics of the fluctuations of $\textnormal{d}\nu_L$. A natural way to study these is through \emph{linear statistics} of polynomials, i.e., random variables of the form $\int\poly\,\textnormal{d}\nu_L=\frac{1}{\Par{2L+1}^{d}}\Tr{\poly(H_L)}$ for some polynomial $\poly(x)\in\R[x]$; our main theorem describes their fluctuations on the scale $\Par{2L+1}^{-d/2}$.
Fluctuations of the truncated eigenvalues $\lambda_i^{\Par{\Lambda_L^d}}$ are believed to be related to continuity properties of the spectral measures. There are several results indicating that this is indeed the case. Minami \cite{Minami} studied the microscopic scale of the eigenvalues of the Anderson model in $\Z^d$, after Molchanov \cite{M} did the same for the continuous case in one dimension.
Minami proved that under certain conditions that ensure localization with exponentially decaying eigenfunctions, the eigenvalues of the Anderson model have Poisson behavior on the microscopic scale. For $d=1$, it is well known that localization holds for any ergodic non-deterministic potential \cite{AW}. However, for $d\ge 3$ and for sufficiently low energies, it is conjectured that $H$ has extended states, i.e., the spectrum of $H$ has an absolutely continuous component.
We now state our main theorem:
\begin{thm} \label{main_thm}
Let $\poly(x) \in\R[x]$ be a non-constant polynomial. Then
$$\frac{\Tr{\poly\Par{H_L}}-\Ex{\Tr{\poly\Par{H_L}}}}{(2L+1)^{d/2}} \overset{d}{\longrightarrow} N(0,\sigma(\poly)^2)$$
as $L\rightarrow\infty$, where:
\begin{enumerate}
\item If the underlying distribution $(\textnormal{d}\rho)$ is supported by more than three points, then $\sigma(\poly)^2>0$.
\item If the underlying distribution is supported by exactly two points, there exist polynomials $\polyb_2,\polyb_3,\polyb_5\in\R[x]$, of degrees $2,3,5$ respectively, such that $\sigma(\poly)^2=0$ if and only if $\poly\in\textnormal{span}_\R\Set{\polyb_5,\polyb_3,\polyb_2,1}$.
\item If the underlying distribution is supported by exactly three points, there exists a polynomial $\tildq_3\in\R[x]$ of degree $3$, such that $\sigma(\poly)^2=0$ if and only if $\poly\in\textnormal{span}_\R\Set{\tildq_3,1}$.
\end{enumerate}
\end{thm}
The polynomials $\polyb_2,\polyb_3,\polyb_5, \tildq_3$ depend on $\textnormal{d}\rho$ as well as on the dimension $d$, and are given explicitly in Propositions \ref{deg_23_var} and \ref{deg_5_var} below.
The study of fluctuations of finite truncations of the Anderson model has received a considerable amount of attention, although most results focus on the one-dimensional case. Reznikova \cite{Rez} proved a CLT for the eigenvalue counting function of the truncated Anderson model in one dimension. Kirsch and Pastur \cite{KP} proved a CLT for the trace of truncations of the Green function of the Anderson model in one dimension. Recently, Pastur and Shcherbina \cite{PS} extended this result to other functions of $H$.
In our proof we shall compute the trace of powers of $H_L$ by counting paths on the associated lattice. Path counting and weighted path counting are commonly used in the study of random Schr\"odinger operators and in the study of random matrices (see, e.g., \cite{AW} and \cite{RMT} and references therein).
This paper can be viewed as the second paper in a series, continuing the work of Breuer with the authors \cite{BGW}. In the previous paper, path counting was used to prove a central limit theorem (CLT) for a decaying model over $\N$. In this paper, the methods have been modified to apply to the Anderson model over $\Zd$ for general $d\in\N$. Each of the papers is self-contained, but there are many parallels in their overall structure and propositions.
The rest of the paper is organized as follows: In Section 2 we set up our definitions and prove that `typical' diagonal elements in the matrix representation of $H_L^k$ have a combinatorial description (using path counting). In Sections 3 and 4 we prove our main theorem: in Section 3 we show that the fluctuations of $\Tr{\poly\Par{H_L}}$ converge in distribution to a normal random variable, and in Section 4 we classify all cases in which the limit distribution is non-degenerate. The final proof of Theorem \ref{main_thm} appears at the end of Section 4. We conclude with Section 5 (which is independent of the rest of the paper), in which we state and prove a CLT for $m$-dependent random variables indexed by $\Zd$; this implies the CLT we use in Section 3.
\textbf{Acknowledgments.} We are deeply grateful to Jonathan Breuer for his generous guidance and support throughout this research.
Research by YG was supported by the Israel Science Foundation (Grant No. 399/16). Research by MW was supported in part by the Israel Science Foundation (Grant No. 1612/17) and in part by the ERC Advanced Grant (Grant No. 834735).
\section{Definitions and preliminaries}
Fix $d\in\N$. As stated in the Introduction, we explore the random operator $H:\ell^2\Par{\Z^d}\to \ell^2\Par{\Z^d}$.
It is useful to decompose $H$ as
\begin{equation} \label{operator_decomp}
H=V+\sum_{v=1}^d U_v + \sum_{v=1}^d D_v,
\end{equation}
where $V$ is the random potential operator, and each $U_v$ (respectively $D_v$) is the operator shifting forward (respectively backward) in direction $v$. In other words, let $e_1,e_2,\ldots,e_d$ denote the standard generators of $\Zd$ as a free abelian group. Then for every $n\in\Zd$ and $u\in\ell^2(\Zd)$ we have $(Vu)_n=X_n u_n$, and for every $1\leq v\leq d$ we have $(U_v u)_n=u_{n+e_v}$ and $(D_v u)_n=u_{n-e_v}$. A corresponding decomposition is also given for every finite volume truncation, $H_L$.
Our theorem deals with the asymptotic behavior (as $L\rightarrow\infty$) of\\
$\Tr{\poly\Par{H_L}}$, for polynomials $\poly\in\R[x]$. We consider $\Tr{\poly\Par{H_L}}$ as a polynomial in the variables $\SetPred{X_n}{n\in\Zd}$. To slightly ease notation, we denote our variables by a lowercase Latin letter (such as $x,z$) when referring to a single variable in a polynomial ring, and by uppercase letters (such as $X_n, Z_n, Z$) when referring to variables in polynomial rings which can also be understood as random variables with some distribution.
To work with such multivariate monomials, we introduce the following definitions:
\begin{defin} \label{index_def}
A finitely supported function $\beta:\Zd\to\N\cup\{0\}$ will be called a multi-index. Let $\beta_n$ denote the value $\beta(n)$ for every $n\in\Zd$. Let $X^\beta$ denote the monomial $\prod_{n\in\Zd}X_n^{\beta_n}$
\end{defin}
Fix a multi-index $\delta$ by
$$\delta_n=
\begin{cases}
1 &n=0\\
0 & n \neq 0
\end{cases}
$$
\begin{defin} \label{shifting_def}
For every multi-index $\beta$ and $i\in\Zd$, define $\beta^i$ $($$\beta$ shifted by $i$$)$, by $\beta^i_n=\beta_{n-i}$ for every $n\in\Zd$.
\end{defin}
Note that using these definitions, for $n,i\in\Zd$, $\delta^i_n$ is $1$ if $n=i$ and $0$ otherwise. Additionally, $\beta=\sum_{i\in\Zd} \beta_i \delta^i$ for every multi-index $\beta$ (this is a finite sum as $\beta$ is finitely supported).
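For example, the multi-index $\beta=2\delta+\delta^{e_1}$ has $\beta_0=2$, $\beta_{e_1}=1$ and $\beta_n=0$ for all other $n$, so that $X^\beta=X_0^2X_{e_1}$ and, more generally, $X^{\beta^i}=X_i^2X_{i+e_1}$ for every $i\in\Zd$.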
Next, we fix $k\in\N$ and begin exploring the asymptotic behavior (as $L\rightarrow\infty$) of $\Tr{H_L^k}$. As we shall see, the coefficient of any monomial $X^\beta$ in $\Tr{H_L^k}$ is fixed for sufficiently large $L$, and has a concrete combinatorial description. Furthermore, these coefficients are invariant under translations of the monomials in $\Zd$. The precise statement is given in Proposition \ref{coef_prop} below, which requires some more definitions.
\begin{defin} \label{string_def}
Let $\mathcal S=\Set{V,U_1,U_2,\ldots,U_d,D_1,D_2,\ldots,D_d}$ be considered as formal symbols.\\
Then $\mathcal S^k$ denotes the set of all ordered $k$-tuples with elements from $\mathcal S$, or all strings of length $k$ from the alphabet $\mathcal S$.
\end{defin}
\begin{defin} \label{path_def}
For every $s\in\mathcal S^k$, we define a finite sequence of points, $y_0(s), y_1(s),\ldots,y_k(s)\in\Zd$ as follows:
\begin{itemize}
\item $y_0(s)=(0,0,\ldots,0)$,
\item $y_j(s)=
\begin{cases}
y_{j-1}(s)+e_v & s_j=U_v \\
y_{j-1}(s)-e_v & s_j=D_v \\
y_{j-1}(s)& s_j=V.
\end{cases}$
\end{itemize}
We say that $s$ is \emph{balanced}, if $y_k(s)=y_0(s)$.
\end{defin}
Note that $s\in\mathcal S^k$ is balanced iff for every $v=1,2,\ldots,d$, the symbols $U_v$ and $D_v$ appear in $s$ the same number of times.
\begin{defin} \label{corresponding_index_def}
For every $s\in\mathcal S^k$, define a multi-index $\varphi(s)$ by
$$\varphi(s)_n=\#\SetPred{1\leq j\leq k}{y_j(s)=y_{j-1}(s)=n},$$
for every $n\in\Zd$.
\end{defin}
\begin{defin} \label{path_counting_def}
For every multi-index $\beta$, let $\path^k(\beta)$ be the number of balanced strings $s\in\mathcal S^k$ satisfying $\varphi(s)^i=\beta$, for some $i\in\Zd$.
\end{defin}
Note that for every $s\in\mathcal S^k$ and every non-zero multi-index $\beta$, there is at most one $i\in\Z^d$ for which $\varphi(s)^i=\beta$.
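For instance, the string $s=(V,U_1,D_1)\in\mathcal S^3$ has $y_0(s)=y_1(s)=0$, $y_2(s)=e_1$ and $y_3(s)=0$; it is balanced, the only index $j$ with $y_j(s)=y_{j-1}(s)$ is $j=1$, and therefore $\varphi(s)=\delta$. In particular, $s$ is one of the strings counted by $\path^3(\delta)$.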
\begin{propos} \label{coef_prop}
For every non-zero multi-index $\beta$, and $k,L\in\N$, let $a^k_L(\beta)$ denote the coefficient of $X^\beta$ in the polynomial $\Tr{H_L^k}$. Then:
\begin{enumerate}
\item $0\leq a^k_L(\beta)\leq \path^k(\beta)$,
\item If $\beta_n>0$ for some $n\in\Lambda_{L-k}^d$, we have $a^k_L(\beta)=\path^k(\beta)$,
\item If $\beta_n>0$ for some $n\notin \boxL$, we have $a^k_L(\beta)=0$.
\end{enumerate}
\end{propos}
\begin{proof}
Use (\ref{operator_decomp}) to expand $H^k$. This gives us a bijection between operators in the expansion of $H^k$ and strings in $\mathcal S^k$. Furthermore, let $M_L$ be any matrix in the expansion of $H_L^k$ corresponding to a string $s\in\mathcal S^k$. It is straightforward to verify that if $s\in\mathcal S^k$ is balanced, and $i\in\boxL$, and $y_j(s)+i\in \boxL$ for every $j=1,2,\ldots,k$, we have $\Par{M_L}_{i,i}=X^{\varphi(s)^i}$. Otherwise, we have $\Par{M_L}_{i,i}=0$.\\
Therefore, fixing a multi-index $\beta$, the coefficient $a^k_L(\beta)$ equals the number of strings $s\in\mathcal S^k$, for which $\varphi(s)^i=\beta$ and the additional conditions $y_j(s)+i\in\boxL$ are fulfilled (we simply compute the trace as the sum over all diagonal entries from all matrices in the expansion). The number of such strings is at least $0$ and at most $\path^k(\beta)$ (which is the number of such strings without the additional conditions), proving (1).\\
Note that for any balanced $s\in\mathcal S^k$, we have $\abs{y_j(s)}\leq \frac k2$ for every $j=0,1,\ldots,k$. We deduce that whenever $\beta$ takes a non-zero value in $\Lambda_{L-k}^d$, if $\beta=\varphi(s)^i$ we must have $y_j(s)+i\in\Lambda_{L-k}^d$ for \emph{some} $j$, therefore $y_j(s)+i\in\boxL$ for \emph{every} $j=0,1,\ldots,k$. For such $\beta$, any $i\in\Zd$ and $s\in\mathcal S^k$ satisfying $\varphi(s)^i=\beta$ automatically fulfill the additional conditions, proving (2).\\
Similarly, if $\beta$ obtains a non-zero value outside of $\boxL$, satisfying $\varphi(s)^i=\beta$ guarantees that $y_j(s)+i\notin\boxL$ for some $j$, therefore $X^\beta$ doesn't appear anywhere on the diagonal of $M_L$, proving (3).
\end{proof}
Note that from definition \ref{path_counting_def}, it is clear that $\path^k\Par{\beta^i}=\path^k(\beta)$, for any multi-index $\beta$, any $i\in\Zd$, and any $k\in\N$.
Therefore, when considering the integers $\path^k(\beta)$ which appear as coefficients in the polynomials $\Tr{H_L^k}$, we may restrict our attention to a set of non-zero multi-indices which contains some shifting of every multi-index exactly once.
We denote this set by $B$:
\begin{defin} \label{index_set_def}
Two multi-indices $\beta$ and $\gamma$ are said to be equivalent if $\gamma=\beta^i$ for some $i\in\Zd$. From each equivalence class other than zero, choose a unique representative $\beta$ satisfying $\beta_0>0$ $($one way to make such choices is to require the lexicographic minimum of the support of $\beta$ to be $0$$)$. Let $B$ be the set of all chosen representatives.
\end{defin}
In other words, $B$ is any set of multi-indices with the properties:
\begin{enumerate}
\item For any non-zero multi-index $\gamma$, we have $\gamma^i\in B$ for a unique $i\in\Zd$.
\item $\beta_0>0$ for every $\beta\in B$.
\end{enumerate}
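For instance, the multi-index $\delta^{e_1}+\delta^{2e_1}$ is equivalent to $\delta+\delta^{e_1}$ (they differ by the shift $i=e_1$), and the latter, whose value at $0$ is positive, may be taken as the representative of this equivalence class in $B$.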
\section{A central limit theorem for polynomial linear statistics}
In this section, we prove that for every polynomial $\poly(x) \in \R[x]$,
$$\frac{\Tr{\poly(H_L)}-\Ex{\Tr{\poly(H_L)}}}{\Par{2L+1}^{d/2}}$$
converges in distribution (as $L\rightarrow\infty$) to a normal distribution with variance $\sigma(\poly)^2\in[0,\infty)$ (see Proposition \ref{poly_CLT} below). We start by proving this CLT in the case where $\poly(x)=x^k$ is a monomial, which is easier to prove for an approximated version of the random variable $\Tr{H_L^k}$:
\begin{defin} \label{approximating_def}
For every $k,L\in\N$, let
\begin{equation} \label{approximating_def_eq}
T_L^k=\sum_{\beta\in B}\path^k(\beta) \sum_{i\in\boxL} X^{\beta^i},
\end{equation}
which we consider both as a random variable, and as a polynomial in the variables $\SetPred{X_n}{n\in\Zd}$.
\end{defin}
Note that the above sum is finite, since $\path^k(\beta)=0$ for all but finitely many $\beta\in B$.\\
We start by proving that $T_L^k$ can indeed approximate $\Tr{H_L^k}$, in the following sense:
\begin{propos} \label{approximating_prop}
For every $k\in\N$, the random variables
$$\frac{\Tr{H_L^k}-\Ex{\Tr{H_L^k}}}{\Par{2L+1}^{d/2}} - \frac{T_L^k-\Ex{T_L^k}}{\Par{2L+1}^{d/2}}$$
converge in probability $($as $L\to\infty$$)$ to $0$.
\end{propos}
\begin{proof}
It is sufficient to show that $\Var{T_L^k-\Tr{H_L^k}}=o\Par{L^d}$.
From Proposition \ref{coef_prop} and (\ref{approximating_def_eq}), we have
$$T_L^k-\Tr{H_L^k}=\sum_{\beta\in B} \sum_{i\in\boxL} \Par{\path^k(\beta)-a^k_L\Par{\beta^i}}X^{\beta^i},$$
where $\path^k(\beta)-a^k_L\Par{\beta^i}=0$ whenever $i\in\Lambda_{L-k}^d$. Therefore, the number of non-zero terms in the above sum is at most
$$\abs{B_k}\Par{\abs{\Lambda_L^d}-\abs{\Lambda_{L-k}^d}}=O\Par{L^{d-1}},$$
where $B_k=\SetPred{\beta\in B}{\path^k(\beta)\neq 0}$ is finite. Next, consider the sum
\begin{equation} \label{approximation_var_eq}
\begin{split}
&\Var{T_L^k-\Tr{H_L^k}}=\\
&\sum_{\beta,\gamma\in B_k} \sum_{i,j\in\boxL} \Par{\path^k(\beta)-a^k_L\Par{\beta^i}}\Par{\path^k(\gamma)-a^k_L\Par{\gamma^j}}\Cov{X^{\beta^i} , X^{\gamma^j}}.
\end{split}
\end{equation}
Fixing $\beta,\gamma\in B_k$ and $i\in\Lambda_L^d\setminus\Lambda_{L-k}^d$, we see that whenever $j-i\notin\Lambda_k^d$, the supports of $\beta^i$ and $\gamma^j$ are disjoint, therefore $X^{\beta^i}$ and $X^{\gamma^j}$ are independent. This tells us that there are at most
$$\abs{B_k}^2\cdot \Par{\abs{\Lambda_L^d} - \abs{\Lambda_{L-k}^d}} \cdot \abs{\Lambda_k^d}=O\Par{L^{d-1}}$$
non-zero terms in (\ref{approximation_var_eq}). We know from (1) of Proposition \ref{coef_prop} that
\begin{equation} \label{path_mult_bound}
0\leq\Par{\path^k(\beta)-a^k_L\Par{\beta^i}}\Par{\path^k(\gamma)-a^k_L\Par{\gamma^j}}\leq \path^k(\beta)\path^k(\gamma).
\end{equation}
Since $\SetPred{X_n}{n\in\Zd}$ are identically distributed, we have $\Cov{X^{\beta^i} , X^{\gamma^j}} = \Cov{X^\beta , X^{\gamma^{j-i}}}$. From here we deduce that for fixed $\beta,\gamma\in B_k$, the term $\Cov{X^{\beta^i} , X^{\gamma^j}}$ only obtains a finite number of values: it is either $0$ or uniquely determined by $\beta,\gamma\in B_k$, the value of $j-i\in\Lambda_k^d$, and some of the (finite) moments of the underlying distribution $\textnormal{d}\rho$. Together with (\ref{path_mult_bound}), this gives us a uniform bound on all terms in (\ref{approximation_var_eq}), showing that indeed
$$\Var{T_L^k-\Tr{H_L^k}}=O\Par{L^{d-1}}.$$
\end{proof}
Our next step is a central limit theorem for the random variables $T_L^k$. Although our initial random variables $\SetPred{X_n}{n\in\Zd}$ were iid, for a fixed multi-index $\beta$ the random variables $\SetPred{X^{\beta^i}}{i\in\Zd}$ are generally not independent.
However, for any $i$ and $j$ sufficiently far apart ($j-i\notin\Lambda_k^d$ is sufficient), the variables $X^{\beta^i}$ and $X^{\beta^j}$ are independent. We use CLTs for weakly dependent random variables, by Hoeffding and Robbins \cite{HR}, and Neumann \cite{Neumann}, to prove:
\begin{thm} \label{main_CLT}
Let $B$ be any set of multi-indices such that $\beta_0>0$ for every $\beta\in B$. Let $\Set{a_\beta}_{\beta\in B}$ be a set of coefficients, such that $a_\beta=0$ for all but finitely many $\beta\in B$. Then
$$\frac 1{\Par{2L+1}^{d/2}} \sum_{\beta\in B} a_\beta \sum_{i\in\boxL} \Par{X^{\beta^i}-\Ex{X^{\beta^i}}}\overset{d}{\longrightarrow} N\Par{0,\sigma^2},$$
as $L\rightarrow\infty$, for some $\sigma^2\geq 0$.
\end{thm}
The proof is postponed to the appendix (Section 5).
\begin{cor} \label{approximate_CLT}
For every $k\in\N$,
$$\frac{T_L^k-\Ex{T_L^k}}{\Par{2L+1}^{d/2}}\overset{d}{\longrightarrow} N\Par{0,\sigma_k^2}$$
as $L\rightarrow\infty$, for some $\sigma_k^2\geq 0$.
\end{cor}
Now that we have a central limit theorem for our approximating random variables, we would like to compute the limit variances, and more generally, the limit covariances. We do this first for individual multi-indices:
\begin{lem} \label{basic_cov_lem}
For every two multi-indices $\beta$ and $\gamma$, we have
\begin{equation} \label{basic_cov_eq}
\underset{L\to\infty}\lim \frac 1{\Par{2L+1}^d}\Cov{\sum_{i\in\boxL} X^{\beta^i}, \sum_{i\in\boxL} X^{\gamma^i}}=\sum_{j\in\Zd}\Cov{X^\beta,X^{\gamma^j}}
\end{equation}
\end{lem}
Note that the sum on the right hand side of (\ref{basic_cov_eq}) is uniquely determined by $\beta,\gamma$, and the moments of the underlying distribution $\textnormal{d}\rho$, and it is in fact a finite sum: since $\beta$ and $\gamma$ are finitely supported, the supports of $\beta$ and $\gamma^j$ are disjoint (and therefore $X^\beta$ and $X^{\gamma^j}$ are independent) for all but finitely many $j\in\Zd$.
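For instance, for $\beta=\gamma=\delta$ only the term $j=0$ survives and the right hand side of (\ref{basic_cov_eq}) equals $\Var{X_0}=\Ex{X_0^2}$, while for $\beta=\delta$ and $\gamma=2\delta$ it equals $\Cov{X_0,X_0^2}=\Ex{X_0^3}$ (recall that $\Ex{X_0}=0$).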
\begin{proof}
Since $\SetPred{X_n}{n\in\Zd}$ are identically distributed, the covariances are invariant to translations, and we may write
\begin{equation} \nonumber
\begin{split}
\Cov{\sum_{i\in\boxL} X^{\beta^i}, \sum_{i\in\boxL} X^{\gamma^i}}&=\sum_{i,i'\in\boxL}\Cov{X^{\beta^i},X^{\gamma^{i'}}}\\
&=\sum_{i,i'\in\boxL}\Cov{X^\beta,X^{\gamma^{i'-i}}}\\
&=\sum_{j\in\Zd}z_j(L)\cdot \Cov{X^\beta,X^{\gamma^j}},
\end{split}
\end{equation}
where $z_j(L)=\#\SetPred{i,i'\in\boxL}{j=i-i'}$. Clearly
$$\limL \frac{z_j(L)}{\Par{2L+1}^d}=1$$
for any $j\in\Zd$, and since $\Cov{X^\beta,X^{\gamma^j}}\neq 0$ only for finitely many $j\in\Zd$, the claim follows.
\end{proof}
\begin{cor} \label{approximate_cov}
For every $k,\ell\in\N$,
\begin{equation}
\begin{split}
&\limL \Cov{\frac{T^k_L}{\Par{2L+1}^{d/2}} , \frac{T^\ell_L}{\Par{2L+1}^{d/2}}}=\\
&\limL \frac 1{\Par{2L+1}^d}\Cov{T^k_L , T^\ell_L}=\sum_{\beta,\gamma \in B} \path^k(\beta)\path^\ell(\gamma) \sum_{j\in\Zd} \Cov{X^\beta , X^{\gamma^j}}
\end{split}
\end{equation}
\end{cor}
This allows us to deduce results for the asymptotic behavior of the trace of monomials:
\begin{cor} \label{monom_CLT}
For any $k\in\N$,
$$\frac{\Tr{H_L^k}-\Ex{\Tr{H_L^k}}}{\Par{2L+1}^{d/2}} \overset{d}{\longrightarrow} N\Par{0,\sigma_k^2}$$
as $L\rightarrow\infty$, where
$$\sigma_k^2=\sum_{\beta,\gamma\in B} \path^k(\beta)\path^k(\gamma) \sum_{j\in\Zd} \Cov{X^\beta , X^{\gamma^j}}.$$
Furthermore, for every $k,\ell\in\N$,
\begin{equation} \nonumber
\begin{split}
&\limL \Cov{\frac{\Tr{H_L^k}-\Ex{\Tr{H_L^k}}}{\Par{2L+1}^{d/2}} , \frac{\Tr{H_L^\ell}-\Ex{\Tr{H_L^\ell}}}{\Par{2L+1}^{d/2}}}=\\
&\sum_{\beta,\gamma\in B} \path^k(\beta)\path^\ell(\gamma)\sum_{j\in\Zd} \Cov{X^\beta , X^{\gamma^j}}.
\end{split}
\end{equation}
\end{cor}
\begin{proof}
Follows directly from Proposition \ref{approximating_prop} and Corollaries \ref{approximate_CLT}, \ref{approximate_cov}.
\end{proof}
And now we can prove the CLT for any polynomial:
\begin{propos} \label{poly_CLT}
Let $\poly(x)=\sum_{k=0}^m a_k x^k\in\R[x]$ be a polynomial. Then
$$\frac{\Tr{\poly(H_L)}-\Ex{\Tr{\poly(H_L)}}}{\Par{2L+1}^{d/2}} \overset{d}{\longrightarrow} N\Par{0,\sigma(\poly)^2}$$
as $L\rightarrow\infty$, where
$$\sigma(\poly)^2=\sum_{k,\ell=1}^m a_k a_\ell \sum_{\beta,\gamma \in B} \path^k(\beta)\path^\ell(\gamma) \sum_{j\in\Zd} \Cov{X^\beta , X^{\gamma^j}}.$$
\end{propos}
\begin{proof}
Since $\Tr{\poly(H_L)}-\Ex{\Tr{\poly(H_L)}}$ doesn't depend on $a_0$, we may assume w.l.o.g. that $a_0=0$ and $\deg(\poly)=m>0$. Then
$$
\frac{\Tr{\poly(H_L)}-\Ex{\Tr{\poly(H_L)}}}{\Par{2L+1}^{d/2}}=\sum_{k=1}^m a_k \frac{\Tr{H_L^k}-\Ex{\Tr{H_L^k}}}{\Par{2L+1}^{d/2}},
$$
and from Corollary \ref{monom_CLT} we obtain the value of the variance $\sigma(\poly)^2$. Using Proposition \ref{approximating_prop} and (\ref{approximating_def_eq}), we now rewrite
\begin{equation} \label{limit_decomp}
\begin{split}
&\limL \frac{\Tr{\poly(H_L)}-\Ex{\Tr{\poly(H_L)}}}{\Par{2L+1}^{d/2}}=\\
&\limL \frac 1{\Par{2L+1}^{d/2}} \sum_{k=1}^m a_k \Par{T_L^k-\Ex{T_L^k}}=\\
&\limL \frac 1{\Par{2L+1}^{d/2}} \sum_{k=1}^m a_k \sum_{\beta\in B} \path^k(\beta) \sum_{i\in\boxL} \Par{X^{\beta^i}-\Ex{X^{\beta^i}}}=\\
&\limL \frac 1{\Par{2L+1}^{d/2}}
\sum_{\beta\in B}\Par{\sum_{k=1}^m a_k \path^k(\beta)} \sum_{i\in\boxL} \Par{X^{\beta^i}-\Ex{X^{\beta^i}}}.
\end{split}
\end{equation}
Note that the first equality holds in the sense that both limit random variables have the same distribution.
Theorem \ref{main_CLT} now applies, proving that the limit has a normal distribution.
\end{proof}
\section{Degenerate and non-degenerate cases}
Now that we proved the convergence in Theorem \ref{main_thm}, it remains to determine under which conditions the limit distribution is non-degenerate, that is when $\sigma(\poly)^2>0$ for a non-constant polynomial $\poly\in\R[x]$. It turns out that $\sigma(\poly)^2$ is always positive if $\deg(\poly)\neq 2,3,5$, but for some polynomials of degree $2,3,5$ and some specific underlying distributions, the variance may vanish. We first demonstrate positive variance in degrees $\neq 2,3,5$:
\begin{propos} \label{deg_good_var}
Let $\poly(x)=\sum_{k=0}^m a_k x^k\in\R[x]$ be a non-constant polynomial of degree $m\neq 2,3,5$. Then $\sigma(\poly)^2>0$.
\end{propos}
\begin{proof}
Using (\ref{limit_decomp}), we write:
\begin{equation} \label{limit_var}
\begin{split}
&\sigma(\poly)^2=\Var{\limL \frac{\Tr{\poly(H_L)}-\Ex{\Tr{\poly(H_L)}}}{\Par{2L+1}^{d/2}}}=\\
&\limL \Var{\frac 1{\Par{2L+1}^{d/2}} \sum_{\beta\in B}\Par{\sum_{k=1}^m a_k \path^k(\beta)} \sum_{i\in\boxL} \Par{X^{\beta^i}-\Ex{X^{\beta^i}}}}.
\end{split}
\end{equation}
We follow the same general method used in \cite{BGW} - it is sufficient to find a multi-index $\abeta\in B$, with the following properties:
\begin{enumerate}
\item $\path^m(\abeta)\neq 0$.
\item $\path^k(\abeta)=0$ for every $k<m$.
\item $\sum_{j\in\Zd}\Cov{X^\abeta,X^{\abeta^j}}>0$.
\item $\Cov{X^\abeta, X^{\beta^j}}=0$ for every $j\in\Zd$ and every $\abeta\neq\beta\in B$ satisfying $\path^k(\beta)\neq 0$ for some $1\leq k\leq m$.
\end{enumerate}
If we find such $\abeta$, we deduce from property (4) that the random variables
$$Y_L^1\equiv a_m \path^m(\abeta)\sum_{i\in\boxL} \Par{X^{\abeta^i}-\Ex{X^{\abeta^i}}}$$
and
$$Y_L^2\equiv \sum_{\beta\in B\setminus\{\abeta\}} \Par{\sum_{k=1}^m a_k \path^k(\beta)} \sum_{i\in\boxL} \Par{X^{\beta^i}-\Ex{X^{\beta^i}}}$$
are uncorrelated (for any $L\in\N$), and (\ref{limit_var}) becomes
\begin{equation} \nonumber
\begin{split}
\sigma(\poly)^2&=\limL \Var{\frac {Y_L^1+Y_L^2}{\Par{2L+1}^{d/2}}}\\
&=\limL \Var{\frac {Y_L^1}{\Par{2L+1}^{d/2}}}+\limL \Var{\frac {Y_L^2}{\Par{2L+1}^{d/2}}}\\
&\geq\limL \Var{\frac {Y_L^1}{\Par{2L+1}^{d/2}}}=a_m \path^m(\abeta) \sum_{j\in\Zd}\Cov{X^\abeta , X^{\abeta^j}}>0,
\end{split}
\end{equation}
where the final equality is due to Lemma \ref{basic_cov_lem}. We make the following choices for $\abeta$:
\begin{enumerate}
\item If $m=1$, choose $\abeta=\delta$.
\item If $m\geq 4$ is even, choose $\abeta=\delta+\delta^{\Par{\frac m2-1}e_1}$.
\item If $m\geq 7$ is odd, choose $\abeta=\delta+\delta^{e_1}+\delta^{\Par{\frac{m-3}{2}}e_1}$.
\end{enumerate}
The proof that these $\abeta$ satisfy properties (1) and (2) is straightforward path counting. For (3) and (4), recall that
\begin{equation} \nonumber
\begin{split}
\Cov{X^\abeta, X^{\beta^j}}&=\Ex{X^{\abeta}X^{\beta^j}}-\Ex{X^\abeta}\Ex{X^{\beta^j}}\\
&=\Ex{\prod_{n\in\Zd}X_n^{\abeta_n}X_n^{\beta_n^j}}-\Ex{\prod_{n\in\Zd}X_n^{\abeta_n}}\Ex{\prod_{n\in\Zd}X_n^{\beta_n^j}}\\
&=\prod_{n\in\Zd}\Ex{X_n^{\abeta_n+\beta^j_n}}-\prod_{n\in\Zd}\Ex{X_n^{\abeta_n}}\Ex{X_n^{\beta^j_n}}.
\end{split}
\end{equation}
If there exists any $n\in\Zd$ such that $\abeta_n=1$ and $\beta^j_n=0$, the term $\Ex{X_n}=0$ appears in both products, thus $\Cov{X^\abeta , X^{\beta^j}}=0$ (note that each of the chosen $\abeta$ only takes the values $0$ and $1$). Hence any $\beta^j$ for which $\Cov{X^\abeta, X^{\beta^j}}\neq 0$ must have $\beta^j_n\geq\abeta_n$ for every $n\in\Zd$.
If $\beta^j=\abeta$, since $\beta,\abeta\in B$ we must also have $j=0$ and $\beta=\abeta$. Otherwise, we have $\beta^j_n>\abeta_n$ for some $n\in\Zd$, and it is straightforward to verify that every string $s$ with $\varphi(s)^i=\beta^j$ must have length $>m$, therefore $p^k(\beta)=0$ for every $1\leq k\leq m$.
Note that there is some freedom in the choice of the representative set $B$, but one may choose $B$ such that $\abeta\in B$ in all of the above cases, or alternatively replace the above choice of $\abeta$ with some $\abeta^i\in B$.
\end{proof}
For polynomials $\poly$ of degree $2,3$ or $5$, we must carefully analyze all cases. Since there are specific underlying distributions and polynomials $\poly$ for which $\sigma(\poly)^2=0$, and we want an explicit description of all such cases, we need to explicitly compute all non-zero values of $\path^k(\beta)$, for $1\leq k\leq 5$.
\begin{lem} \label{coef_values}
If $k\in\Set{1,2,\ldots,5}$ and $\gamma$ is a multi-index with $\path^k(\gamma)>0$, then
\begin{enumerate}
\item $\gamma$ is either equivalent to a unique $\beta$ which equals $m\cdot\delta$ $($for some $m\in\Set{1,2,\ldots,5}$$)$, or to one of $\delta+\delta^e$, $2\delta+\delta^e$, or $2\delta+\delta^{-e}$ $($for some $e\in \Set{e_1,e_2,\ldots,e_d}$$)$.
\item The value of $\path^k(\gamma)=\path^k(\beta)$ is given in the table below $($empty entries correspond to $\path^k(\beta)=0$$)$:
$$\begin{array}{|c|c|c|c|c|c|c|c|}
\hline
\raisebox{-.2em}{$k$} \diagdown \raisebox{.2em}{$\beta$} &\delta &2\delta &3\delta &4\delta &5\delta &\delta+\delta^e &2\delta + \delta^{\pm e} \\
\hline
1 &1 & & & & & & \\
\hline
2 & &1 & & & & & \\
\hline
3 &6d & &1 & & & & \\
\hline
4 & &8d & &1 & &4 & \\
\hline
5 &60d^2-30d & &10d & &1 & &5 \\
\hline
\end{array}$$
\end{enumerate}
\end{lem}
To prove the lemma, we found no alternative to enumerating the relevant strings in $\mathcal S^k$ (for $k=1,2,\ldots,5$). We omit this technical proof.
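As an illustration of how the entries of the table arise, consider $\path^3(\delta)$: a balanced string of length $3$ with $\varphi(s)$ equal to a shift of $\delta$ consists of a single $V$ together with the pair $U_v,D_v$ for one direction $v$, so there are $3$ possible positions for the $V$, $d$ choices of $v$ and $2$ possible orders of $U_v,D_v$, giving $\path^3(\delta)=6d$. Similarly, for a fixed $e=e_v$, the four strings counted by $\path^4(\delta+\delta^{e})$ are $VU_vVD_v$, $U_vVD_vV$, $VD_vVU_v$ and $D_vVU_vV$ (the last two have multi-index $\delta+\delta^{-e}$, which is a shift of $\delta+\delta^{e}$).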
Our method of verifying which polynomials $\poly(x)=\sum_{k=0}^5 a_k x^k$ satisfy $\sigma(\poly)^2>0$ is to describe random variables $W_1,W_2,\ldots,W_5$ such that \\
$\Var{\sum_{k=1}^5 a_k W_k}=\sigma(\poly)^2$ (for any choice of coefficients $a_0,a_1,\ldots,a_5$). We then explore the random variable $\sum_{k=1}^5 a_k W_k$ and determine under which conditions it is almost surely constant.\\
If $\deg(\poly)\leq 3$, we may replace $\Set{W_i}$ with a simpler set of random variables, $T_1,T_2,T_3$. We verify this case before approaching polynomials of degree 5:
\begin{propos} \label{deg_23_var}
Let $\poly(x)=\sum_{k=0}^m a_k x^k\in\R[x]$ be a polynomial of degree $1\leq m\leq 3$. Then:
\begin{enumerate}
\item If the underlying distribution $(\textnormal{d}\rho)$ is supported by more than three values, then $\sigma(\poly)^2>0$.
\item If the underlying distribution is supported by exactly three values, denoted $a,b,c\in\R$, then $\sigma(\poly)^2=0$ iff $\poly=a_3 \tildq_3 + a_0$, where
$$\tildq_3(x)=x^3-(a+b+c)x^2+(ab+ac+bc-6d)x.$$
\item If the underlying distribution is supported by exactly two values, denoted $a,b\in\R$, then $\sigma(\poly)^2=0$ iff $\poly=a_3 \polyb_3 + a_2 \polyb_2 + a_0$, where
$$\polyb_3(x)=x^3 - (a^2+ab+b^2+6d)x,\qquad \polyb_2(x)=x^2 - (a+b)x.$$
\end{enumerate}
\end{propos}
\begin{proof}
Let $Z$ denote both a random variable distributed according to $\textnormal{d}\rho$ and the variable in the polynomial ring $\R[Z]$. Define
$$T_1=Z,\qquad T_2=Z^2,\qquad T_3=Z^3+6dZ.$$
Using Lemma \ref{coef_values}, we see that for every $k=1,2,3$, we have\\
$T_k=\sum_{n=1}^3 \path^k(n\delta) Z^n$, thus for every $k,\ell=1,2,3$:
\begin{equation} \nonumber
\begin{split}
\Cov{T_k , T_\ell}=& \sum_{n,m=1,2,3} \path^k(n\delta) \path^\ell(m\delta) \Cov{Z^n, Z^m}\\
=& \sum_{\beta,\gamma=\delta,2\delta,3\delta} \path^k(\beta) \path^\ell(\gamma) \Cov{X^{\beta}, X^{\gamma}}\\
=& \sum_{\beta,\gamma\in B} \path^k(\beta) \path^\ell(\gamma) \sum_{j\in\Zd} \Cov{X^\beta, X^{\gamma^j}}
\end{split}
\end{equation}
(we may assume w.l.o.g. that $\delta,2\delta,3\delta\in B$).
We now deduce from Proposition \ref{poly_CLT} that
$$\sigma(\poly)^2=\Var{a_3 T_3 + a_2 T_2 + a_1 T_1},$$
which is zero iff $\Poly=a_3 T_3 + a_2 T_2 + a_1 T_1$ is almost surely constant, as a random variable.
As a polynomial, $\Poly\in\R[Z]$ has at most $3$ distinct roots, so if $Z$ is supported by more than $3$ points, $\Poly$ is non-constant as a random variable, thus
$\Var{\Poly}>0$, proving (1).\\
Observe that any assignment of a value to the random variable $Z$ corresponds to a ring homomorphism $\R[Z]\rightarrow\R$. Furthermore, if we only assign values from $\Set{a,b,c}$, all three assignment homomorphisms factor through the quotient ring $\R[Z]/\Par{(Z-a)(Z-b)(Z-c)}$. Write $P\equiv Q$ for two polynomials $P,Q$, if they have the same projection in the quotient. Note that $P\equiv Q$ iff as random variables, $P=Q$ almost surely. Clearly $\Var{\Poly}=0$ as a random variable iff $\Poly\equiv \text{const}$ in $\R[Z]$. Now write
\begin{equation} \nonumber
\begin{split}
(Z-a)(Z-b)(Z-c)&=Z^3-(a+b+c)Z^2+(ab+ac+bc)Z-abc\\
&=T_3-(a+b+c)T_2+(ab+ac+bc-6d)T_1-abc,
\end{split}
\end{equation}
and deduce (2): the polynomial $\tildq_3(x)$ has $\sigma(\tildq_3)^2=0$ from the above, therefore $\sigma(a_3\tildq_3 + a_0)^2=0$. If $\poly\neq a_3\tildq_3+a_0$, we see that $\Poly=a_3T_3+a_2T_2+a_1T_1$ is equivalent to a polynomial of degree $1$ or $2$ in $\R[Z]$, and therefore isn't fixed under assignments from $\Set{a,b,c}$.\\
Finally, if $\textnormal{d}\rho$ is supported on $\Set{a,b}$, the same arguments hold with a different quotient ring, $\R[Z]/\Par{(Z-a)(Z-b)}$. Now note that
$$0\equiv (Z-a)(Z-b)=Z^2-(a+b)Z+ab,$$
therefore
$$T_2-(a+b)T_1=Z^2- (a+b)Z\equiv \text{const},$$
proving $\sigma(\polyb_2)^2=0$. We also have
$$Z^3\equiv (a+b)Z^2-abZ\equiv (a^2+ab+b^2)Z-ab(a+b),$$
therefore
$$T_3-(a^2+ab+b^2+6d)T_1=Z^3-\Par{a^2+ab+b^2}Z \equiv \text{const},$$
proving $\sigma(\polyb_3)^2=0$. If $\poly\neq a_3 \polyb_3 + a_2 \polyb_2 + a_0$, then the above computations show that $\Poly$ is equivalent to a polynomial of degree $1$, which isn't equivalent to any constant, therefore $\Var{\Poly}>0$.
\end{proof}
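For example, if the underlying distribution is supported on the three points $-a,0,a$ (which is compatible with the assumption $\Ex{X_n}=0$), then $\tildq_3(x)=x^3-(a^2+6d)x$, while for the two-point support $\Set{-a,a}$ we get $\polyb_2(x)=x^2$ and $\polyb_3(x)=x^3-(a^2+6d)x$.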
\begin{propos} \label{deg_5_var}
Let $\poly(x)=\sum_{k=0}^5 a_k x^k\in\R[x]$ be a polynomial of degree $5$. Then:
\begin{enumerate}
\item If the underlying distribution $(\textnormal{d}\rho)$ is supported by more than two values, then $\sigma(\poly)^2>0$.
\item If the underlying distribution is supported by exactly two values, denoted $a,b\in\R$, then $\sigma(\poly)^2=\sigma(\poly-a_5 \polyb_5)^2$, where
\begin{equation} \nonumber
\begin{split}
&
\text{\scalebox{0.92}
{$2\polyb_5(x)=2x^5-5(a+b)x^4+$}}\\
&
\text{\scalebox{0.92}
{$\left[3(a^4+b^4)+8(a^3 b+a^2 b^2+a b^3)+20d(a^2+b^2)+100dab-120d^2+60d\right]x$}}.
\end{split}
\end{equation}
In particular, $\sigma(\polyb_5)^2=0$.
\end{enumerate}
\end{propos}
\begin{proof}
For every $n\in\Lambda_1^d$, let $Z_n=X_n$. We regard the variables $\Set{Z_n}$ both as $3^d$ independent random variables distributed by $\textnormal{d}\rho$, and as the variables in the polynomial ring
$R=\R\SPar{Z_n\ \middle|\ n\in\Lambda_1^d}$. Define:
$$W_1=3^{-d/2}\sum_{n\in\Lambda_1^d} Z_n,\qquad W_2=3^{-d/2}\sum_{n\in\Lambda_1^d} Z_n^2.$$
and
$$W_3=3^{-d/2}\sum_{n\in\Lambda_1^d} \Par{Z_n^3+6dZ_n}.$$
Let $E$ consist of all unordered pairs $\Set{n,m}$, such that $n,m\in\Lambda_1^d$ differ in exactly one coordinate, that is
$$E=\SetPred{\Set{n,m}}{n,m\in\Lambda_1^d\ ,\ \#\SetPred{1\leq v\leq d}{n_v\neq m_v}=1}.$$
Now define
$$W_4=3^{-d/2}\sum_{n\in\Lambda_1^d} \Par{Z_n^4+8dZ_n^2}+3^{-d/2}\sum_{\Set{n,m}\in E} 4Z_nZ_m$$
and
\begin{equation} \nonumber
\begin{split}
W_5=&\ 3^{-d/2}\sum_{n\in\Lambda_1^d} \Par{Z_n^5+10dZ_n^3+\Par{60d^2-30d}Z_n}+\\
&\ 3^{-d/2}\sum_{\Set{n,m}\in E} 5\Par{Z_n^2 Z_m+Z_nZ_m^2}.
\end{split}
\end{equation}
Following Lemma \ref{coef_values} and a straightforward computation that we omit, we verify that
$$\Cov{W_k,W_\ell}=\sum_{\beta,\gamma\in B}\path^k(\beta)\path^\ell(\gamma)\sum_{j\in\Zd}\Cov{X^\beta,X^{\gamma^j}}$$
for every $k,\ell\in\Set{1,2,\ldots,5}$, and then deduce from Proposition \ref{poly_CLT} that
$$\sigma(\poly)^2=\Var{\sum_{k=1}^5 a_k W_k}.$$
Denote
\begin{equation} \label{W_def}
\Poly=\sum_{k=1}^5 a_k W_k.
\end{equation}
As in the proof of Proposition \ref{deg_23_var}, we note that $\Var{\Poly}=0$ iff $\Poly$ is almost surely constant as a random variable.
This shows that $\Var{\Poly}>0$ if $\textnormal{d}\rho$ is not finitely supported: generally if $\poly\in\R\SPar{x_1,\ldots,x_m}$ is a non-constant multivariate polynomial, and $S$ is a set such that $\poly(s_1,\ldots,s_m)=0$ for every $s_1 ,\ldots,s_m\in S$, then straightforward induction on $m$ shows that $\abs{S}\leq \deg(\poly)$.\\
So assume henceforth that the variables $Z_n$ are supported by a finite set $\supp(\textnormal{d}\rho)\subset\R$. Denote $q(x)=\prod_{a\in \supp(\textnormal{d}\rho)}\Par{x-a}$, and let $Q_n=q(Z_n)\in R$, and let $I\subset R$ be the ideal generated by the polynomials $\Set{Q_n}_{n\in\Lambda_1^d}$. Every possible assignment of values to $\Set{Z_n}$ corresponds to a ring homomorphism $R\to\R$. If we only assign values from $\supp(\textnormal{d}\rho)$ the homomorphism factors through the quotient ring $R/I$. Write $P\equiv Q$ for two polynomials $P,Q$, if they have the same projection in the quotient. Note that $P\equiv Q$ in $R$ iff as random variables, $P=Q$ almost surely. Clearly $\Var{\Poly}=0$ as a random variable iff $\Poly\equiv\text{const}$ in $R$.
Next, we denote $\omega_1=\omega_2=\omega_3=0$,
$$\omega_4=3^{-d/2}\sum_{\Set{n,m}\in E} 4Z_nZ_m,$$
and
\begin{equation} \label{omega5_def}
\omega_5=3^{-d/2}\sum_{\Set{n,m}\in E} 5\Par{Z_n^2 Z_m+Z_n Z_m^2}
\end{equation}
(so each $\omega_k$ is the part of $W_k$ which is a sum of products involving more than one variable). Now rewrite (\ref{W_def}) as
$$\Poly=a_5\omega_5+a_4\omega_4+\sum_{k=1}^5 a_k\Par{W_k-\omega_k},$$
and note that if $\abs{\supp(\textnormal{d}\rho)}\leq k$ then $W_k-\omega_k$ is equivalent to a polynomial of degree lower than $\abs{\supp(\textnormal{d}\rho)}$: every term of the form $Z_n^k$ is equivalent to $Z_n^k-Z_n^{k-\abs{\supp(\textnormal{d}\rho)}}q\Par{Z_n}$, with degree strictly lower than $k$. Thus
$$\sum_{n\in\Lambda_1^d}Z_n^k\equiv \sum_{n\in\Lambda_1^d} \Par{Z_n^k-Z_n^{k-\abs{\supp(\textnormal{d}\rho)}}q\Par{Z_n}},$$
and summing over $n\in\Lambda_1^d$ allows us to reduce $W_k-\omega_k$ to an equivalent combination of $W_1-\omega_1,\ldots,W_{k-1}-\omega_{k-1}$, to eventually obtain
\begin{equation} \label{red_deg}
\Poly\equiv\widetilde \Poly=a_5\omega_5+a_4\omega_4+\sum_{k=1}^{\abs{\supp(\textnormal{d}\rho)}-1} \widetilde a_k\Par{W_k-\omega_k}
\end{equation}
for some $\widetilde a_1,\ldots,\widetilde a_{\abs{\supp(\textnormal{d}\rho)}-1}\in\R$.
We are now ready to prove that $\sVar{\widetilde \Poly}>0$ whenever $\abs{\supp(\textnormal{d}\rho)}\geq 3$. Assume, towards a contradiction, that $\sVar{\widetilde \Poly}=0$; then $\widetilde \Poly-c\in I$ for some constant $c$, so we can find polynomials $H_n\in R$ such that
\begin{equation} \label{var0_eq1}
\widetilde \Poly - c=\sum_{n\in\Lambda_1^d} H_n\cdot Q_n
\end{equation}
in $R$. Fix some $a\in \supp(\textnormal{d}\rho)$, and let $\psi_a:R\to\R[x]$ be the ring homomorphism defined by
$$\psi_a(Z_n)=\begin{cases} x & n=0 \\ a & n\neq 0
\end{cases}\ .$$
We have $\psi_a(Q_n)=q(a)=0$ for every $n\neq0$, so when we apply $\psi_a$ to (\ref{var0_eq1}), we obtain the equality
\begin{equation} \label{var0_eq2}
\psi_a(\widetilde \Poly)-c=h(x)q(x)
\end{equation}
in $\R[x]$, where $h(x)=\psi_a(H_0)$.
Note that $W_k-\omega_k$ has degree $k$ in $R$, therefore $\psi_a\Par{W_k-\omega_k}$ has degree at most $k$. Clearly $\psi_a\Par{a_5\omega_5+a_4\omega_4}$ has degree $2$, so from (\ref{red_deg}) the polynomial in the left hand side of (\ref{var0_eq2}) has degree strictly less than $\abs{\supp(\textnormal{d}\rho)}$. But $q(x)$ has degree $\abs{\supp(\textnormal{d}\rho)}$, so we must have $h(x)=0$ (otherwise the right hand side of (\ref{var0_eq2}) would have degree $\abs{\supp(\textnormal{d}\rho)}$ or higher). We deduce that $\psi_a(\widetilde \Poly)-c=0$ as a polynomial in $\R[x]$.
Since for every $n\in\Lambda_1^d$ there are $\#\SetPred{m}{\Set{n,m}\in E}=2d$ values of $m$ for which $5\cdot Z_n^2 Z_m$ appears in the sum (\ref{omega5_def}), the coefficient of $x^2$ in $\psi_a\Par{\omega_5}$ is $2d\cdot 3^{-d/2}\cdot5a$. We deduce that the coefficient of $x^2$ in $\psi_a(\widetilde \Poly)-c=0$ is
$$a_5\cdot 10d\cdot 3^{-d/2}\cdot a + c'=0,$$
where $c'$ doesn't depend on our choice of $a\in \supp(\textnormal{d}\rho)$. Since $a_5\neq 0$, there is at most one $a\in\R$ satisfying the above equation. However, for any $b\in \supp(\textnormal{d}\rho)$, applying $\psi_b$ to (\ref{var0_eq1}) allows us to obtain $a_5\cdot 10d\cdot 3^{-d/2}\cdot b + c'=0$, which is a contradiction. This concludes the proof of (1).\\
If $\supp(\textnormal{d}\rho)=\Set{a,b}$ then $q\Par{Z_n}=\Par{Z_n-a}\Par{Z_n-b}\in I$, therefore
\begin{equation} \label{deg_2_red}
Z_n^2\equiv\Par{a+b}Z_n-ab
\end{equation}
for every $n\in\Lambda_1^d$, thus (\ref{omega5_def}) becomes $\omega_5\equiv\frac52\Par{a+b}\omega_4-20dab W_1$, which allows us to deduce
\begin{equation} \label{mixed_red}
a_5\omega_5+a_4\omega_4\equiv-20a_5dab W_1
\end{equation}
whenever $a_4=-\frac52 \Par{a+b}a_5$.\\
Finally, from (\ref{deg_2_red}) we verify:
\begin{equation} \label{pure_red}
\begin{split}
Z_n^2\equiv& \Par{a+b}Z_n-ab\\
Z_n^3\equiv& \Par{a^2+ab+b^2}Z_n-ab\Par{a+b}\\
Z_n^4\equiv& \Par{a^3+a^2 b+a b^2+b^3}Z_n-ab\Par{a^2+ab+b^2}\\
Z_n^5\equiv& \Par{a^4+a^3 b+a^2 b^2+a b^3+b^4}Z_n-\text{const}.
\end{split}
\end{equation}
Summing over $n\in\Lambda_1^d$ allows us to reduce $3^{-d/2}\sum_n Z_n^k$ (for $k=2,3,4,5$) to equivalent expressions involving $W_1$ and constants, and along with (\ref{mixed_red}) and the definitions of $W_1,W_4,W_5$ we deduce
\begin{equation} \nonumber
\begin{split}
& 2W_5-5\Par{a+b}W_4+\text{const}\equiv\\
& \text{\scalebox{0.94}{$\Par{-3a^4-8a^3 b-8a^2 b^2-8a b^3-3b^4-20da^2-100dab-20db^2+120d^2-60d}W_1$}}.
\end{split}
\end{equation}
From here it follows that $\sigma\Par{\polyb_5}^2=0$ and that $\sigma\Par{\poly}^2=\sigma\Par{\poly-c\polyb_5}^2$ for any polynomial $\poly$ and constant $c$, concluding our proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{main_thm}]
Given a polynomial $\poly(x)=\sum_{k=0}^m a_k x^k \in\R[x]$, we have
$$\frac{\Tr{\poly\Par{H_L}}-\Ex{\Tr{\poly\Par{H_L}}}}{(2L+1)^{d/2}} \overset{d}{\longrightarrow} N(0,\sigma(\poly)^2)$$
for $\sigma(\poly)^2\in[0,\infty)$ as $L\rightarrow\infty$, from Proposition \ref{poly_CLT}. From Propositions \ref{deg_good_var} and \ref{deg_23_var} we determine the cases in which $\sigma(\poly)^2>0$ whenever $\deg(\poly)\neq 5$. Finally, if $\deg(\poly)=5$, we know from Proposition \ref{deg_5_var} that $\sigma(\poly)^2=\sigma(\poly-a_5 \polyb_5)^2$. If $\poly-a_5\polyb_5$ is non-constant and $\deg(\poly-a_5\polyb_5)$ is $1$ or $4$, we determine that $\sigma(\poly)^2=\sigma(\poly-a_5 \polyb_5)^2>0$ from Proposition \ref{deg_good_var}, otherwise we use Proposition \ref{deg_23_var} to determine the positivity.
\end{proof}
\section{Appendix - Proof of Theorem \ref{main_CLT}}
In the setting of Theorem \ref{main_CLT}, we consider a $d$-dimensional array of weakly dependent random variables.
Explicitly, we prove a central limit theorem valid in the setting of $m$-dependent random variables, a notion we now define:
\begin{defin} \label{m_dep_d1}
Let $\Set{Y_i}_{i\in\Zd}$ be a family of random variables. We say that the family is $m$-dependent if for any two finite sets of indices $I,J\subset\Zd$ which satisfy $\abs{i-j}>m$ for every $i\in I$ and $j\in J$, the corresponding sets of random variables,
$$\Set{Y_i}_{i\in I}, \qquad \Set{Y_j}_{j\in J} $$
are independent.
\end{defin}
Note that this definition extends a notion of $m$-dependence from \cite{HR} defined for sequences of variables indexed by $\N$ (the definition of $m$-dependence in \cite{HR} is equivalent to $m$-dependence as defined above, when we take $d=1$ and $Y_i=0$ for every $i\notin\N$). In \cite{HR}, Hoeffding and Robbins proved the following central limit theorem:
\begin{thm}[Hoeffding-Robbins] \label{thm_HR}
Let $\Set{X_i}_{i\in\N}$ be an $m$-dependent sequence of random variables satisfying
$\Ex{X_i}=0$ and $\Ex{\abs{X_i}^3}\leq R^3 <\infty$ for every $i\in\N$, and
$$\lim_{p\to\infty} p^{-1}\sum_{h=1}^p A_{i+h}=A$$
uniformly for all $i\in\mathbb N$, where
$$A_i=\Ex{X_{i+m}^2}+2\sum_{j=1}^m \Ex{X_{i+m-j}X_{i+m}}.$$
Then
$$\frac{X_1+\ldots+X_n}{n^\frac12} \overset{d}{\longrightarrow} N\Par{0,A}.$$
\end{thm}
Theorem \ref{thm_HR} allows us to deduce a central limit theorem for $d=1$, and the following theorem by Neumann \cite{Neumann} will allow us to carry out an induction argument on $d$:
\begin{thm}[Neumann] \label{thm_Neumann}
Suppose that $\Set{X_{n,k}\ \middle|\ n\in\N\ ,\ k=1,2,\ldots,n}$ is a triangular scheme of random variables with $\Ex{X_{n,k}}=0$ and
$$\sum_{k=1}^n \Ex{X^2_{n,k}}\le C$$
for all $n$ and some $C<\infty$. We assume that
$$\sigma_n^2 = \Var{X_{n,1}+...+X_{n,n}} \underset{n\rightarrow \infty}{\longrightarrow}\sigma^2\in [0,\infty), $$
and that
$$\sum_{k=1}^n \Ex{X_{n,k}^2 1(\abs{X_{n,k}}>\epsilon)}\underset{n \rightarrow \infty}{\longrightarrow}0 $$
holds for all $\epsilon>0$. Furthermore, we assume that there exists a summable sequence $(\theta_r)_{r\in \N}$ such that for all $u \in \N$ and all indices
$$1\le s_1<s_2<...<s_u<s_u+r=t_1\le t_2 \le n,$$
the following upper bounds for covariances hold true: for all measurable functions $g: \R^u \longrightarrow \R$ with $||g||_{\infty}=\sup_{x\in \R^u} |g(x)|\le 1$, we have
\begin{equation} \label{cov_req_1}
\abs{\Cov{g\Par{X_{n,s_1},...,X_{n,s_u}}X_{n,s_u}\ ,\ X_{n,t_1}}}\le \left(\Ex{X_{n,s_u}^2}+\Ex{X_{n,t_1}^2}+\frac{1}{n}\right)\theta_r
\end{equation}
and
\begin{equation} \label{cov_req_2}
\abs{\Cov{g\Par{X_{n,s_1},...,X_{n,s_u}}\ ,\ X_{n,t_1} X_{n,t_2}}}\le \left(\Ex{X_{n,t_1}^2}+\Ex{X_{n,t_2}^2}+\frac{1}{n}\right)\theta_r.
\end{equation}
Then
$$X_{n,1}+...+X_{n,n}\overset{d}{\longrightarrow} N\Par{0,\sigma^2}$$
as $n\rightarrow\infty$.
\end{thm}
Our central limit theorem for $m$-dependent random variables follows:
\begin{propos} \label{prop_aux_CLT}
Let $\Set{Y_i}_{i\in\Zd}$ be an identically distributed $d$-dimensional $m$-dependent array of random variables such that $\Ex{Y_i}=0$, and $\Ex{\abs{Y_i}^3}<\infty$.
Then
$$\frac1{\Par{2L+1}^{d/2}}\sum_{i\in\boxL} Y_i \overset{d}{\longrightarrow} N\Par{0,\sigma^2},$$
where
$$\sigma^2=\lim_{L\to\infty}\frac1{\Par{2L+1}^d}\Var{\sum_{i\in\boxL}Y_i}.$$
\end{propos}
\begin{proof}
By induction on $d$. For $d=1$, this is a straightforward application of Theorem \ref{thm_HR} to the random variables $\Set{X_i}_{i\in\N}$, defined by $X_i=Y_{i+m}+Y_{-i-m}$ (noting that for $i>m$, $\Set{X_i}_{i\in\N}$ are identically distributed and $m$-dependent, and the exclusion of a finite set of random variables $\Set{Y_i\ :\ \abs{i}\leq m}$ from the sum has no effect on the limit distribution).\\
We now assume by induction that the proposition holds for some $d\in\N$, and prove it in dimension $d+1$. For every $L\in\N$ we denote $n=2L+1$ and rewrite
$$\frac1{\Par{2L+1}^{\Par{d+1}/2}}\sum_{i\in\boxLp} Y_i=\sum_{j=-L}^L Z_{n,j},$$
where
$$Z_{n,j}=\frac1{n^{1/2}} \cdot \frac1{n^{d/2}}\sum_{i\in I_{n,j}} Y_i$$
and
\begin{equation} \nonumber
\begin{split}
I_{n,j}&=\boxL\times\Set{j}\\
&=\Set{\Par{i_1 ,\ldots,i_{d+1}}\in\boxLp\ \middle|\ i_{d+1}=j}
\end{split}
\end{equation}
are defined for every $j\in\Lambda_L$. Our proof will be completed by applying Theorem \ref{thm_Neumann} to the random variables
$$X_{n,k}=\begin{cases} Z_{n,k-L-1} & n=2L+1\\
Z_{n+1,k-L-1} & n=2L,\end{cases}$$
which are defined for every $n\in\N$ and $k=1,2,\ldots,n$. We will verify the requirements of the theorem for the corresponding variables $Z_{n,j}$ (we henceforth ignore even values of $n$).\\
Fixing any $j\in\Z$, we may identify $I_{n,j}$ with $\boxL$, and note that the $d$-dimensional array $\Set{Y_i\ \middle|\ i\in\Z^{d+1}\ ,\ i_{d+1}=j}$ is identically distributed and $m$-dependent (the distribution of the array is independent of $j\in\Z$ as well). The induction hypothesis now applies, and we deduce
\begin{equation} \label{ind_hyp}
\sqrt n Z_{n,j}=\frac1{n^{d/2}}\sum_{i\in I_{n,j}}Y_i \overset{d}{\longrightarrow} N\Par{0,\sigma_d^2}
\end{equation}
as $n\rightarrow\infty$, uniformly in $j$, for some $\sigma_d^2\geq0$. The variables $Z_{n,j}$ are ``well behaved'', in the sense that for any sufficiently large $n$,
$$\Ex{Z_{n,j}^2}=\Var{Z_{n,j}}\leq\frac1n(\sigma_d^2+1)$$
(thus there exists $C>0$ such that $\Ex{Z_{n,j}^2}\le \frac{C}{n}$ for all $n\in\N$ and $j\in\Lambda_L$). We deduce that
$$\Ex{Z_{n,j}}=0,\qquad \sum_{j=-L}^L \Ex{Z_{n,j}^2}\le C.$$
Additionally, since the finite sequence $\Set{Z_{n,j}}_{j\in\Lambda_L}$ is both identically distributed and $m$-dependent (for every $n=2L+1\in\N$), one can verify that
$$\Var{\sum_{j=-L}^L Z_{n,j}}\underset{n \rightarrow \infty}{\longrightarrow} \sigma^2<\infty.$$
Next, we prove that
$$\sum_{j=-L}^L \Ex{Z_{n,j}^2 1(\abs{Z_{n,j}}>\epsilon)}\underset{n \rightarrow \infty}{\longrightarrow}0$$
for every $\epsilon>0$. Note that
\begin{equation} \label{var_sum}
\begin{split}
\sum_{j=-L}^L \Ex{Z_{n,j}^2 1\Par{\abs{Z_{n,j}}>\epsilon}}&=n \Ex{Z_{n,j}^2 1\Par{\abs{Z_{n,j}}>\epsilon}}\\
&=\Ex{n \Par{Z_{n,j}}^2 1\Par{\abs{\sqrt n Z_{n,j}}>\epsilon\sqrt n}}.
\end{split}
\end{equation}
From the induction hypothesis, we know that $\sqrt{n}Z_{n,j}\overset{d}{\longrightarrow} N(0,\sigma_d^2)$. We deduce that for every $M>0$ we have
\begin{equation} \label{small_conv}
\sqrt{n}Z_{n,j}1(\abs{\sqrt n Z_{n,j}}>M)\overset{d}{\longrightarrow} \Phi_M,
\end{equation}
where $\Phi_M$ is a random variable satisfying $\Ex{\Phi_M}=0$, and
$$\Var{\Phi_M}= \begin{cases}
2\int_M^\infty \frac{t^2}{\sigma_d \sqrt{2\pi}}\exp{\Par{-\frac{t^2}{2\sigma_d^2}}}dt & \sigma_d^2>0\\
0 & \sigma_d^2=0.
\end{cases}$$
Choose some $M>0$ so that $\Var{\Phi_M}$ is arbitrarily close to $0$. For every $\epsilon>0$, any sufficiently large $n\in\N$ satisfies $\epsilon\sqrt n>M$, so
$$1\Par{\abs{\sqrt n Z_{n,j}}>\epsilon\sqrt n}\leq 1\Par{\abs{\sqrt n Z_{n,j}}>M},$$
and (\ref{var_sum}) now becomes
\begin{equation} \nonumber
\begin{split}
\sum_{j=-L}^L \Ex{Z_{n,j}^2 1\Par{\abs{Z_{n,j}}>\epsilon}}&=\Ex{n \Par{Z_{n,j}}^2 1\Par{\abs{\sqrt n Z_{n,j}}>\epsilon\sqrt n}}\\
&\leq\Ex{n \Par{Z_{n,j}}^2 1\Par{\abs{\sqrt n Z_{n,j}}>M}}\\
&=\Var{\sqrt{n}Z_{n,j}1(\abs{\sqrt n Z_{n,j}}>M)}\underset{n \rightarrow \infty}{\longrightarrow}\Var{\Phi_M}
\end{split}
\end{equation}
(due to (\ref{small_conv})).
It remains to show that there exists a summable sequence $(\theta_r)_{r\in \N}$ so that the upper bounds for covariances required in Neumann's Theorem hold (equations (\ref{cov_req_1}) and (\ref{cov_req_2}), for all relevant cases). From the $m$-dependence of the finite sequence $\Set{Z_{n,j}}_{j\in\Lambda_L}$, we deduce that the left-hand sides of (\ref{cov_req_1}) and (\ref{cov_req_2}) equal $0$ whenever $r>m$, so we conclude by finding some $\theta_1,\ldots,\theta_m<\infty$.
Note that we always have
$$\Var{g\Par{Z_{n,s_1},\ldots,Z_{n,s_u}}Z_{n,s_u}}\leq\Ex{g\Par{Z_{n,s_1},\ldots,Z_{n,s_u}}^2 Z_{n,s_u}^2}\leq \Ex{Z_{n,s_u}^2}$$
(since $||g||_{\infty}\le 1$), therefore
\begin{equation} \nonumber
\begin{split}
& \abs{\Cov{g\Par{Z_{n,s_1},...,Z_{n,s_u}}Z_{n,s_u}\ ,\ Z_{n,t_1}}}\leq \\
& \sqrt{\Var{g\Par{Z_{n,s_1},...,Z_{n,s_u}}Z_{n,s_u}}\Var{Z_{n,t_1}}} \leq \sqrt{\Ex{Z_{n,s_u}^2}\Ex{Z_{n,t_1}^2}} =\Ex{Z_{n,s_u}^2}\\
\end{split}
\end{equation}
since the distribution of $Z_{n,j}$ is independent of $j$. This tells us that (\ref{cov_req_1}) holds as long as $\theta_1,\ldots,\theta_m\geq 1$.
To prove (\ref{cov_req_2}), we use
$$\Var{g\Par{Z_{n,s_1},\ldots,Z_{n,s_u}}}\leq\Ex{g\Par{Z_{n,s_1},\ldots,Z_{n,s_u}}^2}\leq 1$$
(as $||g||_{\infty}\le 1$) to obtain
\begin{equation} \nonumber
\begin{split}
& \abs{\Cov{g\Par{Z_{n,s_1},...,Z_{n,s_u}}\ ,\ Z_{n,t_1}Z_{n,t_2}}}\leq \\
& \sqrt{\Var{g\Par{Z_{n,s_1},...,Z_{n,s_u}}}\Var{Z_{n,t_1}Z_{n,t_2}}} \leq \sqrt{\Var{Z_{n,t_1}Z_{n,t_2}}},
\end{split}
\end{equation}
and we conclude by showing that for some $\theta<\infty$,
$$\sqrt{\Var{Z_{n,t_1}Z_{n,t_2}}}\leq\frac 1n \theta$$
holds for every $n=2L+1$ and $t_1,t_2\in\Lambda_L$. Equivalently, we will show that
$$\sup_{n,t_1,t_2}\Var{\sqrt n Z_{n,t_1}\cdot \sqrt n Z_{n,t_2}}<\infty.$$
From (\ref{ind_hyp}) we deduce that
$$\sup_n\Var{\sqrt n Z_{n,t_1}\cdot \sqrt n Z_{n,t_2}}<\infty$$
for every $t_1,t_2\in\Z$. Furthermore, since our initial variables $\Set{Y_i}_{i\in\Zd}$ are identically distributed, the value of $\Var{\sqrt n Z_{n,t_1}\cdot \sqrt n Z_{n,t_2}}$ depends only on $n$ and $t_2-t_1$, and since our variables are $m$-dependent, it is enough to consider $\abs{t_2-t_1}\in\Set{0,1,\ldots,m+1}$. This concludes our proof.
\end{proof}
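As a sanity check of Proposition \ref{prop_aux_CLT} (purely illustrative and not used in the proofs), the following short script simulates the toy $1$-dependent array $Y_i=X_iX_{i+e_1}$ on $\Zd$ with $d=2$, where the $\Set{X_n}$ are i.i.d.\ Rademacher variables, and compares the empirical mean and standard deviation of the normalised sums with the predicted limiting values $0$ and $1$.
\begin{verbatim}
# Monte-Carlo illustration of the CLT for the toy 1-dependent array
# Y_i = X_i * X_{i+e_1} on Z^2, with X i.i.d. Rademacher variables.
import numpy as np

rng = np.random.default_rng(1)
d, L, trials = 2, 20, 2000
n = 2 * L + 1
sums = np.empty(trials)
for t in range(trials):
    X = rng.choice([-1.0, 1.0], size=(n + 1, n))  # extra row for the e_1 shift
    Y = X[:-1, :] * X[1:, :]                      # Y_i = X_i * X_{i+e_1}
    sums[t] = Y.sum() / n ** (d / 2)
print("empirical mean %.3f, std %.3f  (limit: 0 and 1)"
      % (sums.mean(), sums.std()))
\end{verbatim}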
\begin{proof}[Proof of Theorem \ref{main_CLT}]
Theorem \ref{main_CLT} will follow from Proposition \ref{prop_aux_CLT}, applied to the variables
$$Y_i=\sum_{\beta\in B}a_\beta\Par{X^{\beta^i} - \Ex{X^{\beta^i}}}.$$
Clearly the variables $\Set{Y_i}_{i\in\Zd}$ are identically distributed (since $\Set{X_n}_{n\in\Zd}$ are), and $\Ex{Y_i}=0$. Since every $X_n$ has finite moments, so do $Y_i$ (as a finite sum of products of the variables $\Set{X_n}_{n\in\Zd}$). In particular, $\Ex{\abs{Y_i}^3}<\infty$.\\
Since $a_\beta\neq 0$ only for finitely many $\beta\in B$, one can find sufficiently large $m$, such that whenever $\abs{j-i}>m$ and $a_\beta,a_\gamma\neq 0$, the supports of $\beta^i$ and $\gamma^j$ are disjoint. From here it follows that $\Set{Y_i}_{i\in \Zd}$ is $m$-dependent.
\end{proof}
\section{Introduction}
Efficient interaction between light and matter and particularly the faithful coherent mapping between photons and atomic excitations lie at the heart of many quantum optics processes and applications, such as quantum networks. One appealing platform is room-temperature atomic vapor, which is successfully employed in first generation quantum technologies, including atomic clocks and magnetometers \cite{Patton2014,Knappe2004} and quantum light sources and memories \cite{Lee2016,Finkelstein2018}.
Light-matter interaction can be enhanced by a tight optical mode volume and by a collective coupling of this mode to an ensemble of atoms. While reduced mode volumes are achieved in small optical cavities \cite{Alaeian2020} or with tightly focused beams in free space \cite{Maser2016,Jia2018a}, they are typically incompatible with large ensembles of atoms due to the associated short Rayleigh range $z_\mathrm{R}$.
\begin{figure}[ht!]
\centering
\includegraphics[width=\columnwidth]{Fig_Concept.pdf}
\caption{\textbf{A super-extended evanescent field of a nanofiber interacting with atomic vapor.} (a) Illustration of an optical mode surrounding a thin optical fiber. The fiber on the left (fiber diameter $D=0.9\lambda/n$) has a guided mode with field diameter MFD=$1.2\lambda$, comparable with the wavelength $\lambda$. The fiber on the right ($D=0.37\lambda/n$) has a mode extending to MFD=13$\lambda$, guided over a distance of $5000\lambda$. (b) Mode field diameter (in units of $\lambda$) as a function of fiber diameter (in units of wavelength in matter $\lambda/n$). Blue and red circles mark the parameters of the two fibers shown in (a). Dashed black line marks the physical fiber dimensions; the MFD diverges from this line around $nD/\lambda < 1$. The dashed purple line shows the fraction of the power residing outside the fiber $\eta_\mathrm{power}$ and is larger than $99\%$ for the thin fiber. (c) Calculated Doppler-free absorption spectra for Rb vapor in the evanescent field of a $D=0.9\lambda/n=500$~nm (blue) and $D=0.37\lambda/n=200$~nm (red) silica fibers ($n=1.45$). The effect of transit-time broadening is clearly apparent.
}\label{fig:Figure1}
\end{figure}
An alternative approach employs a tightly confined optical mode that is supported by a low-loss waveguide over an extended length \cite{Spillane2008,Stern2013,Skljarow2020a,Nayak2007,Sague2007,Bajcsy2009}.
The optical mode guided by a dielectric structure with dimensions below the optical wavelength $\lambda$ extends beyond the structure boundaries as an evanescent field and can interact with the surrounding atomic ensemble. In this case, the field typically decays over a range of $\sim\lambda$ away from the structure \cite{Stern2013,Hendrickson2010,Ritter2015}. Such tight mode confinement is advantageous for processes requiring high intensity or a steep field gradient, \emph{e.g.}, non-linear optics at ultra-low power levels or atomic trapping \cite{Foster2004,Vetsch2010}. However, for coherent light-matter interaction with thermal atomic ensembles, tight confinement may limit the effectiveness of the interaction. Transverse motion of the atoms through the optical mode results in transit-time broadening and in a reduction of the absorption cross-section \cite{Stern2013,Hendrickson2010,Jones2015,Skljarow2020a}.

Additionally, the small mode volume limits the fraction of atoms simultaneously interacting with the mode, and the proximity of these atoms to the dielectric waveguide may lead to atom-surface interactions that impair the coherent interaction with the guided field \cite{Epple2014a,Sague2007}.
Finally, for a given uniform atomic density, the collective light-matter coupling strength (or the optical depth) is limited by the fraction of the optical field which resides outside the core material.
In this work, we tackle these challenges by realizing a guided optical mode with an evanescent field part that extends several wavelengths away from the waveguide surface, as illustrated in Fig.~\ref{fig:Figure1}(a). This unique mode is supported by the extremely thin waist of a tapered optical fiber. Tapered optical fibers, with waist diameters as small as a few hundred nanometers, have been shown to enable single mode operation with high transmission \cite{Tong2003}. In the past decade, these nanofibers were used in a multitude of applications from atom trapping \cite{Vetsch2010}, through sensing \cite{Korposh2019}, to chiral quantum optics \cite{Petersen2014} and cavity QED \cite{Takao2015,Bechler2018}. Previous demonstrations of an interface between thermal vapor and a nanofiber have shown several appealing features, such as polarization rotation, electromagnetically induced transparency \cite{Jones2015}, and non-linear effects at low power \cite{Hendrickson2010}. However, all of these have been limited by transit time broadening, as well as by the interaction between the vapor and the waveguide surface.
Here we show that reducing the diameter of a nanofiber to around $0.37\lambda/n$ (where $n$ is the refractive index of the fiber core) yields a super-extended evanescent mode, and that this mode greatly suppresses the above limitations. In our system, the mode extends to a diameter of $2w_0 \approx13\lambda$ and is guided over 5 mm (50 times larger than $z_\mathrm{R}= \pi w_0^2/\lambda$), with over $99\%$ of the optical power residing outside the core material. We interface this fiber with atomic vapor and perform one- and two-photon spectroscopy, as well as saturation and temporal transient measurements. The unique characteristics of the system provide for spectral features much narrower than previously measured and correspondingly longer coherence times, thus establishing its potential for more intricate processes and opening a path to coherent light-matter interactions with thermal vapor in evanescent fields.
\section{Super-extended evanescent field}
A guided optical mode can be characterized by the mode field diameter (MFD), defining an area containing $(1-e^{-2})$ of the optical power. For vacuum-clad fibers, the MFD of the fundamental mode $\mathrm{HE_{11}}$ depends on a single parameter: the fiber diameter $D$ in units of wavelength in matter $\lambda/n$. We show calculations of the MFD as a function of this parameter in Fig.~\ref{fig:Figure1}(b). As the fiber diameter is decreased, the MFD initially follows the diameter of the fiber core. In this regime, a significant fraction of the energy is contained inside the fiber core and its remainder resides in an evanescent field, decaying over $\sim\lambda$. When the fiber diameter is further narrowed below $nD/\lambda = 1$, it can no longer support a mode residing predominantly in the core. Yet a single bound solution, with most of the energy residing outside the core, always exists \cite{Snyder1983}.
In this regime, the MFD varies steeply with the fiber diameter. This is demonstrated by comparing two cases, illustrated in Fig.~\ref{fig:Figure1}(a) and marked by circles in Fig.~\ref{fig:Figure1}(b). A fiber with $D=0.9\lambda/n$ has MFD = 1.2$\lambda$ (blue circle), whereas a narrower fiber with $D=0.37\lambda/n$ has MFD = 13$\lambda$ (red circle). The latter, which we realize in this work, is a super-extended mode that contains a remarkably high fraction of $>99\%$ of the power in the vacuum cladding. This mode is well approximated by a modified Bessel function \mbox{$\mathcal{E}(r>D/2)=\mathcal{E}_0 K_0(\kappa r)$} \cite{Tong2004}, where \mbox{$\kappa=\sqrt{\beta^2-(2\pi/\lambda)^2}$} is the transverse component of the wavevector ($\beta$ is the propagation constant, \emph{i.e.}, the longitudinal component of the wavevector). For $\lambda=780$ nm, a $D=200$ nm silica fiber can guide a mode with MFD = 10 $\mu$m ($\kappa^{-1}\simeq 0.44\mathrm{MFD}=4.43~\mu\mathrm{m}$), such that the mode area is 2500-times larger than the fiber cross-section. The ability to produce such a waveguide by tapering down a standard optical fiber, with adiabatic following of the fundamental mode, is an exciting capability which lies at the edge of feasibility for waveguide tapering \cite{Stiebeiner:10,Sumetsky2006}.
Both fibers highlighted in Fig.~\ref{fig:Figure1}(a) guide the optical modes over many times the equivalent $z_\mathrm{R}$ in free space. However, when interfaced with atomic vapor, the interaction characteristics will strongly depend on mode confinement. Thermal ballistic motion, with thermal velocity $v_\mathrm{T}$ along both the longitudinal and the transverse wavevectors, leads to motional broadening. For modes extending to $\kappa^{-1}{\sim}\lambda/2$ away from the fiber, the transverse transit-time broadening $\Gamma_\mathrm{tt}=\sqrt{2}\kappa v_\mathrm{T}$ (full width at $1/e$) \cite{Hall} becomes similar to the longitudinal Doppler broadening $\sigma=\beta v_\mathrm{T}$. In contrast, for the super-extended optical mode with $\kappa^{-1}\gtrsim 5\lambda$, the transverse motional broadening is suppressed by more than one order of magnitude.
We note that while many Doppler-free techniques exist, including compensation by light shift \cite{Lahad2019,Finkelstein2019}, transit-time broadening cannot be easily mitigated by purely optical means. The calculated (Doppler-free) absorption spectra for both fibers presented in Fig.~\ref{fig:Figure1}(c) indeed show a 10-fold decrease in spectral width for the thinner fiber and a corresponding enhancement in atomic absorption cross-section. In this calculation, we average over atoms with different thermal velocities traversing the two-dimensional field distribution and neglect atomic trajectories which hit the nanofiber. This is a valid approximation for the super-extended mode due to the large ratio of mode-field area to fiber cross-section.
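As a rough numerical illustration of this comparison (an illustrative sketch, not part of our analysis code), both widths can be estimated directly from the decay length. The script below assumes $\lambda=780$ nm, a representative thermal velocity $v_\mathrm{T}=180$ m/s (the value used later in the text), the decay lengths $\kappa^{-1}\simeq\lambda/2$ and $\kappa^{-1}\simeq4.43~\mu$m quoted above for the two fibers, and expresses rates in units of $2\pi\times10^6$ rad/s.
\begin{verbatim}
# Estimate longitudinal Doppler vs transverse transit-time broadening
# for the two fibers of Fig. 1 (illustrative sketch; values from the text).
import numpy as np

lam = 780e-9            # probe wavelength [m]
v_T = 180.0             # thermal velocity [m/s]
beta = 2 * np.pi / lam  # propagation constant ~ free-space wavevector [1/m]

def MHz(omega):         # angular rate [rad/s] -> units of 2*pi*1e6 rad/s
    return omega / (2 * np.pi * 1e6)

for label, kappa_inv in [("D = 500 nm", 0.5 * lam),   # kappa^-1 ~ lambda/2
                         ("D = 200 nm", 4.43e-6)]:    # super-extended mode
    kappa = 1.0 / kappa_inv
    Gamma_tt = np.sqrt(2) * kappa * v_T   # transverse transit-time width
    sigma = beta * v_T                    # longitudinal Doppler width
    print(label, "Gamma_tt = %.1f MHz, sigma = %.0f MHz"
          % (MHz(Gamma_tt), MHz(sigma)))
\end{verbatim}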
\section{Experimental system}
The experimental setup is presented in Fig.~\ref{fig:Vacuum setup}. A fiber with super-extended evanescent field is fabricated by tapering a silica fiber down to a nominal diameter $D$=200 nm (with variations better than $\pm5\%$) in a heat and pull method \cite{Stiebeiner2010}. Fig.~\ref{fig:Vacuum setup}(a) shows a scanning electron microscope image of the nanofiber waist of a tapered fiber that we have pulled using the same parameters and flame trajectory as the tapered fiber used in the subsequent experiments. To fulfill the adiabaticity criterion, the fiber is tapered down from a diameter of 125 $\mu$m to the final diameter of 200 nm over a length of 3.3 cm. The fiber is then glued to a custom mount [Fig.~\ref{fig:Vacuum setup}(b)], which provides three axes of optical access and an ultra-high vacuum-compatible fiber feedthrough. The mount is installed in a vacuum chamber [Fig.~\ref{fig:Vacuum setup}(d)], which is wrapped in resistive heating elements and surrounded by a thermally insulating polyurethane enclosure, allowing to stabilize the system temperature. Rubidium vapor is released into the chamber by breaking a glass capsule containing a metallic rubidium pellet (at natural abundance). We set two different temperature regions in the chamber; the area containing the capsule is kept cooler such that it remains a rubidium reservoir, and its temperature sets the vapor pressure inside the cell.
\begin{figure} [tb]
\centering
\includegraphics[width=\columnwidth]{Fig_Experimental2.pdf}
\caption{\textbf{Experimental system.} (a) Scanning electron microscope image of a tapered fiber waist, with a nominal diameter of 200 nm. (b) Custom, vacuum-compatible, fiber mount with external optical access and a UHV fiber feedthrough (f/t). (c) Schematic of the optical setup. Probe and control fields are coupled into the tapered fiber in counter-propagating directions. Fiber-coupled electro-optical modulators (EOMs) shape the temporal intensity of the two fields. Polarizing beam-splitter (PBS) picks out the outgoing probe, which is further filtered by a band-pass interference filter (IF) and sent to a single-photon counting module (SPCM) or to an avalanche photodiode (APD), allowing us to monitor down to pW powers. (d) Vacuum chamber houses the fiber and a natural abundance metallic Rb pellet. A free-space path provides an absorption reference.}
\label{fig:Vacuum setup}
\end{figure}
We utilize the electronic ladder transitions of rubidium shown in Fig.~\ref{fig:Experimental config}(a). A probe field at 780 nm probes and drives the D2 transition $5S_{1/2} \rightarrow 5P_{3/2}$ in all experiments. Two-photon spectra, demonstrated in Fig.~\ref{fig:Experimental config}(b), are obtained by adding a control field at 776 nm that drives the transition $5P_{3/2} \rightarrow 5D_{5/2}$. The control and probe lasers are frequency stabilized and fed into the nanofiber in counter-propagating directions. We show the optical setup in Fig.~\ref{fig:Vacuum setup}(c). The vacuum chamber also has a long free-space path, shown in Fig.~\ref{fig:Vacuum setup}(d), for measuring a reference absorption spectrum. More details on the fiber and setup are given in Methods.
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{Fig_level_structure_wideEIT.pdf}
\caption{\textbf{Overview of the experiment.} (a) Level structure of rubidium atoms used in the experiment. The ground and intermediate levels are coupled by a probe field $\mathcal{E}$, detuned by $\Delta$ from the resonance frequency. The intermediate level can be coupled to a doubly excited level $5D_{5/2}$ by a control field $\mathcal{E}_\mathrm{c}$. The $5D_{5/2}$ level in $^{85}$Rb is composed of several tightly spaced hyperfine states. The three states accessible in our experiment are spaced over 18.3 MHz.
(b) Typical transmission spectrum of Rb vapor coupled to a super-extended evanescent field guided by a tapered optical fiber. The probe and control fields with propagation constants $\beta$ and $\beta_\mathrm{c}$ counter-propagate in the fiber. The two-photon transition is nearly Doppler-free as the propagation constants differ by about 0.5\%. Both evanescent fields have a similar decay length $\kappa^{-1}$ in the transverse direction. The control field is filtered by polarization and narrow-band interference filter.}
\label{fig:Experimental config}
\end{figure}
The transmission of the fiber at 780 nm is $\sim90\%$ after fabrication, and it drops to $\sim 70\%$ after international transfer, splicing to commercial fibers, and installation in the chamber. However, the transmission may further degrade over time due to rubidium adsorption on the fiber surface. We are able to mitigate this effect and prevent the transmission degradation by heating up the fiber using $1~\mu$W of light at 776 nm that is kept constantly on.
\section{Super-extended evanescent field interfaced with thermal vapor}
We begin with characterizing the super-evanescent mode by measuring the atomic absorption spectra with only the probe light present. In Fig.~\ref{fig:Isat}(a), we show the transmission spectrum of light guided by the tapered fiber and compare it with the free-space transmission spectrum of a large diameter (2 mm) beam measured simultaneously. The spectrum exhibits two transmission dips corresponding to $^{87}$Rb and $^{85}$Rb isotopes according to their natural abundance. The width of these absorption lines is dominated by Doppler broadening due to longitudinal atomic motion along the fiber axis.
Importantly, when comparing the measured spectrum (blue line) to the free-space spectrum (dashed red line), and when fitting it to a numerical model (not shown) that accounts for the multi-level structure of rubidium, we do not observe any additional broadening due to the transverse motion of atoms across the evanescent mode. This is already an indication of an extended mode spanning at least several wavelengths.
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{Fig_Isat_v5.pdf}
\caption{\textbf{Transmission spectra and low-power saturation.} (a) Transmission spectrum of light guided in a tapered fiber with super-extended evanescent field surrounded by Rb vapor with natural abundance (blue). Two distinct dips correspond to two Rb isotopes. A free-space absorption spectrum is measured simultaneously as a reference, through a 280-mm-long optical path across the chamber (red; transmission below $10^{-3}$ around the $^{85}\mathrm{Rb}$ dip is governed by noise and not shown). (b) Transmission spectra for different probe powers $P_{\mathrm{in}}$. (c) Resonant transmission as a function of $P_{\mathrm{in}}$. Solid line is a fit to a saturation model. All data in (a)-(c) are normalized by the off-resonance transmission.}
\label{fig:Isat}
\end{figure}
In Fig.~\ref{fig:Isat}(b), we plot absorption spectra for different probe powers. As the probe power increases, the absorption decreases, with the resonant transmission approaching unity already at a few hundreds of nW. We summarize these results in Fig.~\ref{fig:Isat}(c) by plotting the transmission at the $^{85}$Rb $F{=}3{\rightarrow}F'{=}4$ resonance for different probe powers and fitting the data to a saturation model for an inhomogeneously broadened ensemble of two-level systems \cite{Akulin1992} $T=\mathrm{exp}(-\mathrm{OD_0}/\sqrt{1+P_\mathrm{in}/P_\mathrm{sat}})$, where $\mathrm{OD}_0$ is the resonant, Doppler-broadened, optical depth of the ensemble, $P_\mathrm{in}$ is the probe power, and $P_\mathrm{sat}$ defines the saturation power. We find that the saturation power is $P_\mathrm{sat}=35\pm7$ nW. Given the imperfect transmission through the bare tapered fiber, we estimate the actual saturation power (reaching the atoms) to be $29\pm6$ nW. We note that for an extended evanescent field with decay length $\kappa^{-1}$, the saturation parameter per atom (often denoted as $s$) increases as the atom-nanofiber distance decreases and, for $P_\mathrm{in}=P_\mathrm{sat}$, becomes larger than unity for atoms within a distance of $\kappa^{-1}$ from the nanofiber axis.
This low saturation power is characteristic of a tightly confined optical mode. Even though the realized mode is larger than typical evanescent fields, the measured saturation power is identical, up to experimental uncertainty, to saturation powers reported across several platforms interfaced with hot vapor \cite{Stern2013,Hendrickson2010,Spillane2008}. This universality stems from the trade-off which sets the saturation threshold: Rabi frequency and decoherence rate (dominated by transit-time broadening) both increase linearly with 1/MFD.
As a consequence, we find that this type of non-linearity does not benefit from tighter mode confinement and, in fact, super-extended evanescent fields are advantageous as they enable the same saturation level with better coherence times.
On the flip side, this trade-off also implies that $P_\mathrm{sat}$ measurements cannot unambiguously confirm the dimensions of the optical mode.
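For completeness, a minimal sketch of the fitting procedure behind Fig.~\ref{fig:Isat}(c) is given below. It is illustrative only: the data are synthesised from the saturation model itself with placeholder parameters ($\mathrm{OD}_0=0.8$ is an assumed value, while $P_\mathrm{sat}=35$ nW is the fitted value quoted above), and SciPy's \texttt{curve\_fit} stands in for our actual analysis routine.
\begin{verbatim}
# Fit resonant transmission vs probe power to the saturation model
# T = exp(-OD0 / sqrt(1 + P/Psat)) (illustrative sketch, synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def T_model(P, OD0, Psat):
    return np.exp(-OD0 / np.sqrt(1.0 + P / Psat))

rng = np.random.default_rng(0)
P_in = np.logspace(0, 3, 12)                       # probe power [nW]
T_meas = T_model(P_in, 0.8, 35.0) \
         + 0.01 * rng.standard_normal(P_in.size)   # placeholder "data"
popt, pcov = curve_fit(T_model, P_in, T_meas, p0=[1.0, 30.0])
print("OD0 = %.2f, Psat = %.0f nW" % tuple(popt))
\end{verbatim}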
In order to determine the transit-time broadening and thus the dimensions of the mode, we perform temporal transient measurements. In essence, we prepare a saturated atomic population and measure the rate at which the excited atoms leave the interaction region. In practice, we monitor the response of the resonant transmission to a sudden change in probe power in a pump-probe-like experiment.
The dynamics is governed by the optical Bloch equations for the excited population $\rho_\mathrm{ee}$ and the atomic coherence (optical dipole) $\rho_\mathrm{eg}$
\begin{equation}
\partial_t \rho_\mathrm{ee}=-\Gamma\rho_\mathrm{ee}+2\mathrm{Im}(\Omega^*\rho_\mathrm{eg}),
\end{equation}
\begin{equation}
\partial_t \rho_\mathrm{eg}=-
\sigma\rho_\mathrm{eg}+i\Omega(1-2\rho_\mathrm{ee}),
\end{equation}
where $\Gamma$ is the depopulation rate, \emph{i.e.}, the population decay rate due to both spontaneous emission and transit time across the evanescent field, and $\sigma$ is the total effective decoherence rate, dominated by Doppler dephasing $\sigma\approx \beta v_\mathrm{T}$ ($v_\mathrm{T}=180$ m/s). The probe Rabi frequency is given by $\Omega=G\mathcal{E}$, where $G$ is the ensemble-field coupling constant, and $\mathcal{E}$ is the probe electric field.
In turn, the propagation of the probe field is governed by the equation of motion for the slowly varying envelope of the electric field $(c\partial_z+\partial_t){\mathcal{E}}=iG\rho_\mathrm{eg}$ \cite{Fleischhauer2000}, which reduces to
\begin{equation}
\mathcal{E}_\mathrm{out}=\mathcal{E}_\mathrm{in}+i\rho_\mathrm{eg}GL/c
\end{equation}
for an optically thin medium. Here the outgoing field $\mathcal{E}_\mathrm{out}$ is given by the sum of the incoming field $\mathcal{E}_\mathrm{in}$ and the field scattered by the atomic dipoles. We can therefore extract the evolution of the atomic coherence by monitoring the difference \mbox{$\Delta\mathcal{E}=(P_\mathrm{in}-P_\mathrm{out})/\sqrt{P_\mathrm{in}}$}, where $P_\mathrm{out}$ and $P_\mathrm{in}$ are measured powers on and off resonance, respectively. Such analysis is akin to that employed in cavity ring-up spectroscopy \cite{Rosenblum2015}.
Figure \ref{fig:transient} shows the results of such an experiment. We start with a probe power of $\sim 4P_\mathrm{sat}$, \emph{i.e.,} above the saturation power, and abruptly attenuate it to $0.6P_\mathrm{sat}$.
Equations (1) and (2) under the condition $\sigma \gg \Gamma,\Omega$ result in an over-damped solution with a two-step decay process: a fast decay with rate $\sigma$ followed by a slower decay with rate $\Gamma$.
Indeed we observe in Fig.~\ref{fig:transient}(a) an initial short transient, where the atomic coherence follows the fast change in the incoming field, and subsequently a slow relaxation due to equilibration of atomic population.
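This two-step behaviour can be reproduced by integrating Eqs.~(1) and (2) for a step in the drive. The sketch below is illustrative rather than a model of the full ensemble: it treats a single effective two-level system with the Doppler dephasing folded into $\sigma$, and assumes representative magnitudes ($\Gamma\approx15$ MHz, $\sigma\approx230$ MHz, Rabi frequencies of roughly 24 and 9 MHz before and after the step) consistent with the parameters quoted elsewhere in the text.
\begin{verbatim}
# Integrate the optical Bloch equations (1)-(2) for an abrupt drop in probe
# power, to illustrate the fast (sigma) and slow (Gamma) relaxation steps.
# Rates in units of 2*pi per microsecond; representative (assumed) values.
import numpy as np
from scipy.integrate import solve_ivp

twopi = 2 * np.pi
Gamma = 15.0 * twopi      # depopulation rate (radiative + transit) [1/us]
sigma = 230.0 * twopi     # Doppler-dominated decoherence rate [1/us]
Om_hi, Om_lo = 24.0 * twopi, 9.0 * twopi   # Rabi frequency before/after step

def bloch(t, y, Om):
    ree, u, v = y                      # rho_ee, Re(rho_eg), Im(rho_eg)
    return [-Gamma * ree + 2 * Om * v,
            -sigma * u,
            -sigma * v + Om * (1 - 2 * ree)]

# Steady state of the high-power drive as the initial condition.
ree0 = 2 * Om_hi**2 / (sigma * Gamma + 4 * Om_hi**2)
y0 = [ree0, 0.0, Om_hi * (1 - 2 * ree0) / sigma]

t = np.linspace(0, 0.05, 2000)         # 50 ns after the step [us]
sol = solve_ivp(bloch, (t[0], t[-1]), y0, t_eval=t, args=(Om_lo,),
                max_step=1e-4)
# sol.y[2] (the absorptive quadrature) first adjusts on a ~1/sigma timescale,
# then the population sol.y[0] relaxes on ~1/Gamma ~ 10 ns, as in Fig. 4(a,b).
\end{verbatim}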
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{FigTransient_v6.pdf}
\caption{\textbf{Temporal dynamics in the nanofiber-vapor system.} (a) The transient response of the atomic coherence (optical dipole) $\rho_\mathrm{eg}$, quantified by the difference $\Delta\mathcal{E}$ between the incoming and outgoing probe (black dots, see text) after an abrupt reduction of the probe power (dashed blue line). The atomic coherence initially follows the rapidly changing incoming field and then slowly relaxes due to motional exchange of pumped and unpumped atoms and due to radiative decay. (b) Absorption $A(t)$ following the abrupt power reduction, plotted as the difference from the final steady-state value $A(t_\mathrm{end})$ in a semilog scale. Solid line is a fit to a pure exponential decay, also plotted in (a), from which the relaxation time $\tau_\mathrm{fall}$ is extracted. (c) An abrupt increase in the laser power again initiates a fast following of the field and then a slower decay, determined by the optical pumping rate. (d) Absorption following the abrupt power increase, providing the faster relaxation time $\tau_\mathrm{rise}$ from which the probe Rabi frequency can be determined.}
\label{fig:transient}
\end{figure}
To quantify the slow relaxation rate, we plot in Fig.~\ref{fig:transient}(b) the absorption $A(t)=1-P_\mathrm{out}(t)/P_\mathrm{in}(t)$. The (pumped) excited atoms are gradually replaced by (fresh) ground-state atoms that enter the interaction region, and correspondingly the medium relaxes from low to high absorption. We measure an exponential relaxation time of $\tau_{\mathrm{fall}}=10.7 \pm 0.5$ ns, corresponding to a depopulation rate of $\Gamma=14.9 \pm 0.8$ MHz (hereafter $1~\mathrm{MHz}\equiv 10^6\cdot 2\pi~\mathrm{rad}/ \mathrm{s}$). After subtracting the radiative decay rate (6 MHz), we attribute the remaining rate of $\Gamma_\mathrm{tt}=9$ MHz to transit-time broadening. This is consistent with the numerically calculated transit-time broadening $\Gamma_\mathrm{tt}=\sqrt{2}\kappa v_\mathrm{T}=9$ MHz for a fiber with diameter $D=200$ nm ($\kappa^{-1}=4.428~\mu\mathrm{m}$), as plotted in Fig.~\ref{fig:Figure1}(c).
Upon ramping the strong field back on [Fig. \ref{fig:transient}(c)], we observe again a fast transient response followed by an exponential relaxation, with an absorption decay time [Fig. \ref{fig:transient}(d)] of $\tau_\mathrm{rise}=6.7 \pm 0.3$ ns. We attribute this time scale to optical pumping to the excited state at a rate of \mbox{$\Gamma+4\Omega^2/\sigma=24 $ MHz}, from which we infer a conversion of optical power to Rabi frequency of $\Omega(P_\mathrm{in}) = 2~\mathrm{MHz} \sqrt{P_\mathrm{in}[\mathrm{nW}]}$. This conversion ratio is in good agreement with a weighted average over the calculated field distribution and, specifically, it characterizes atoms at a distance of 2 $\mu\mathrm{m}$ ($\sim0.5\kappa^{-1}$) from the fiber.
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{FigEIT.pdf}
\caption{\textbf{EIT in a bi-chromatic super-extended evanescent field.} (a) Transmission spectrum of a weak probe in the presence of a control field of varying power $P_\mathrm{c}$, counter-propagating in the nanofiber. (b) Full width at half maximum (FWHM) of the transparency resonance. Solid line is a fit to a power-broadening model. (c) Contrast of the transparency resonance.}
\label{fig:figEIT}
\end{figure}
We now move on to study a coherent, two-photon process, taking advantage of the long coherence time provided by the super-extended optical mode. We use a counter-propagating configuration that is nearly Doppler free, as the relative mismatch of the propagation constants $(\beta_\mathrm{c}-\beta)/\beta$ is only about 0.5$\%$. This reduces the longitudinal Doppler broadening to 0.5$\%$ of its value for the one-photon transition. We add the counter-propagating control field and observe the appearance of narrow electromagnetically-induced transparency (EIT) peaks inside the one-photon absorption lines, as shown in Fig.~\ref{fig:Experimental config}(b). Notably, most two-photon processes observed in atomic vapor via a waveguide have been limited to cascaded absorption \cite{Stern2013,Hendrickson2010,Skljarow2020a}. In our system, the small transit-time broadening and high signal-to-noise ratio enable the coherent effect of induced transparency, originating from interference of two possible excitation pathways. Furthermore, we are able to measure EIT with $P_\mathrm{in}$ at the picowatt level or, equivalently, with probe photons entering the medium at a rate of a few photons per $\mu$s, such that the atoms interact with less than a single probe photon (on average) during their lifetime in the mode. This is a prerequisite for observing quantum non-linear optics.
In our configuration, the control field is also guided by the fiber with a similar, super-extended, mode and not applied externally. To leading order, the coherent two-photon process depends on the product of the probe and control fields $\mathcal{E}(r)\cdot\mathcal{E}^*_\mathrm{c}(r)$, which varies with time while a given atom traverses the mode. Both evanescent fields are of the form $K_0(\kappa r)\approx e^{-\kappa r}/\sqrt{r}$, such that their product is approximately proportional to $e^{-2\kappa r}/r$, yielding an increased two-photon transit time broadening of $2\times\Gamma_\mathrm{tt}$.
In Fig.~\ref{fig:figEIT}(a), we plot the transmission around the EIT resonance for different control powers, normalized by the one-photon absorption (in the absence of a control field). We note that increasing the probe power above a few pW reduces the EIT contrast, and, for probe powers higher than $P_\mathrm{sat}$, we have observed cascaded absorption.
We present in Fig.~\ref{fig:figEIT}(b,c) the full width at half maximum (FWHM) of the EIT resonance $\Gamma_\mathrm{EIT}$ and the EIT contrast for different control powers $P_\mathrm{c}$. The width and contrast increase linearly with $P_\mathrm{c}$. The solid line in Fig.~\ref{fig:figEIT}(b) is a fit to an EIT power-broadening model of the form $\Gamma_\mathrm{EIT} = \alpha P_\mathrm{c}+\gamma$. By extrapolating to $P_\mathrm{c}=0$, we find the effective width of the $5S_{1/2}\rightarrow5D_{5/2}$ transition, $\gamma=40\pm1.2$ MHz. The measured EIT line is composed of three transitions to different hyperfine states [$F'=4\rightarrow F''=5,4,3$, see also Fig.~\ref{fig:Experimental config}(b)] whose transition frequencies are spaced over 18.3 MHz. When summing over these transitions with their corresponding oscillator strengths, we find that the measured width $\gamma$ is consistent with the combined contributions of two-photon transit time ($2\times \Gamma_\mathrm{tt}=18$ MHz, as calculated and as measured from transient measurements presented in Fig. \ref{fig:transient}), residual longitudinal Doppler width ($2\sigma=1.3$ MHz), laser linewidth (${\sim }0.8$ MHz), and radiative decay rate ($0.66$ MHz).
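The extrapolation to $P_\mathrm{c}=0$ amounts to a simple linear fit; a minimal sketch with placeholder data (the numbers below are invented for illustration and merely chosen to give an intercept near 40 MHz) is:
\begin{verbatim}
# Extrapolate the EIT width to zero control power: Gamma_EIT = alpha*Pc + gamma
# (illustrative sketch; placeholder data standing in for Fig. 5(b)).
import numpy as np

Pc   = np.array([20.0, 40.0, 60.0, 80.0, 100.0])   # control power, arb. units
fwhm = np.array([48.0, 56.0, 65.0, 72.0, 81.0])    # EIT FWHM [MHz], placeholder
alpha, gamma = np.polyfit(Pc, fwhm, 1)
print("extrapolated gamma at Pc = 0: %.1f MHz" % gamma)
\end{verbatim}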
\section{Conclusions}
We have introduced a new platform to explore light-matter interactions employing a nanofiber-guided mode with a super-extended evanescent field, characterized by low transit-time broadening. This unique mode is reached through adiabatic following of the fundamental mode in a single-mode fiber tapered down to a quarter of a wavelength. Our demonstration, with high overall transmission, sustaining a $13 \lambda$ extended mode over several mm, lies near the asymptotic limit on tapered fiber dimensions capable of such wave-guiding.
Interfaced with atomic vapor, the system balances high local intensity with an extended coherence time. This balance enables saturation of transmission at optical powers that are on par with much more tightly confined modes, and at ultra-low power levels with $\lesssim$ 2 photons present in the interaction region at any given time. We further observe coherent, spectrally-narrow, two-photon resonances, owing to suppression of transit-time broadening. Applying fast in-fiber modulation of the input signal, we observe intricate dynamics of the coupled nanofiber-vapor system, composed of fast modulation of the optical dipole followed by relatively long relaxation times, which are one order of magnitude longer than previously measured for a thermal vapor-waveguide interface.
The super-extended evanescent mode combines several features which make it particularly well suited for photon-photon interactions and quantum non-linear optics with Rydberg atoms \cite{Peyronel2012}. Firstly, more than 99\% of the guided energy resides outside the material core, as opposed to $\lesssim 50$\% in other evanescent field platforms. In addition, the effective field diameter MFD = 10$~\mu $m is suitable for confining photons to below the Rydberg blockade distance. The proximity of Rydberg atoms to dielectric surfaces induces inhomogeneous Van-der-Waals (VdW) interactions which have been studied, for example, in hollow-core fiber experiments \cite{Epple2014a}. Due to a favorable ratio of mode volume to fiber surface, we expect this detrimental effect to be suppressed in our extremely tapered fiber. Indeed, Rydberg excitations in a cold atomic cloud near a standard nanofiber were recently observed \cite{Rajasree2020}. In addition to VdW and other dipolar interactions, static charges generating stray electric fields pose a major challenge to interfacing a waveguide with Rydberg atoms due to the strong polarizability of the latter. Controlling and removing single charges may therefore be necessary as part of generating Rydberg excitations in our system. Such control and expulsion of charge down to the single electron level was already achieved in several other platforms \cite{Frimmer2017,Moore2014}.
We conclude that new and surprising capabilities can emerge from interfacing the platform presented here with various atomic ensembles for waveguide-based quantum optics and sensing applications.
\section*{Acknowledgments}
We thank Liron Stern and Uriel Levy for fruitful discussions. We acknowledge financial support by the Israel Science Foundation and ICORE, the European Research Council starting investigator grant Q-PHOTONICS 678674, the Minerva Foundation with funding from the Federal German Ministry for Education and Research, the Laboratory in Memory of Leon and Blacky Broder, and the Alexander von Humboldt Foundation (Alexander von Humboldt-Professorship).
\section*{Methods}
The probe laser is a Vescent distributed Bragg reflector laser, which is frequency-offset locked to a stable reference laser locked to a cavity. The control laser is a Toptica external-cavity diode laser, which is side-of-fringe locked to an EIT signal obtained in a vapor cell. Both lasers enter the system via fiber-coupled high-bandwidth EOMs (iXblue NIR-MX800-LN-10). The EOM modulation signal is produced by an SRS DG645 delay generator, equipped with a SRD1 fast-risetime module.
The probe signal after exiting the fiber is picked off and filtered, by polarization and wavelength, and measured by an avalanche photo-diode or by a single-photon counting module (SPCM). To measure the tapered fiber transmission at the intensity level of single photons and with high temporal resolution, we use an Excelitas NIR-14-FC SPCM whose output is fed to a Fast ComTec MCS6 time tagger.
We use a Fibercore SM800 fiber, which is tapered gradually along 3.3 cm. The tapering profile consists of three stages, a steep linear taper along 4 mm (taper angle $\simeq$ 5 mrad), a moderate linear taper along 18 mm (taper angle $\simeq$ 2 mrad), and an exponential taper along the remaining 11 mm. The nanofiber waist is 5 mm long.
\section{Introduction}
Quasisymmetry was proposed \cite{B} as a way to achieve magnetic confinement and is the design principle \cite{NZ} underlying several modern optimised stellarators, including NCSX (partially constructed at PPPL) and HSX (built and operated at the University of Wisconsin-Madison). It is, in essence, a spatial symmetry of first-order guiding-centre motion that guarantees integrability.
In a previous work \cite{BKM}, necessary and sufficient conditions were derived for the existence of quasisymmetry, treating both system and symmetry as exact. It is worth noting that these hold for all nonzero values of charge, mass and magnetic moment.
Nevertheless, an approximate symmetry may be just as good as an exact one, especially since the guiding-centre system is only an approximation. Recently, it was suggested \cite{RHB} that approximate considerations of guiding-centre motion can relax the conditions of quasisymmetry.
In this paper, however, we show that any approximate spatial symmetry of the first-order guiding centre system must satisfy the quasisymmetry conditions \cite{BKM} to lowest order. This contradicts the claim presented in \cite{RHB}, which asserts that when a magnetic field is ``weakly quasisymmetric'' there exists an approximate spatial symmetry for first-order guiding-centre motion.
Generalising from spatial symmetries to phase-space symmetries, we also find necessary and sufficient conditions for approximate symmetries that transform the parallel velocity of the guiding centre in addition to its position. In this way, we provide a set of weaker restrictions for an approximate conserved quantity. This set includes the case of \cite{RHB}, which proves to be a symmetry that depends linearly on the parallel velocity at first order. We thereby confirm the approximate conserved quantity deduced in \cite{RHB}, even though the corresponding symmetry is not purely spatial. Moreover, we show that weak quasisymmetry implies a broader class of approximate conserved quantities than the single invariant considered in \cite{RHB}, which we derive and generalise even further for genuine phase-space symmetries. Finally, we show that under the magnetohydrostatic assumption, approximate symmetries reduce to quasisymmetry, as well.
\section{Guiding-centre motion}
\label{gcsection}
The very notion of guiding centre is built on an approximate symmetry. It assumes that the motion of charged particles admits approximately a rotational symmetry about the magnetic field. As a result, the magnetic moment is an adiabatic invariant. This allows one to reduce the original charged-particle motion to a 2-degree-of-freedom system for the gyrocentre, which tracks or, to put it the other way round, guides the particle. Guiding-centre motion averages over the fast, small-radius gyration, and describes the system reduced under gyrosymmetry.
There have been various formulations of the guiding-centre system that agree to first order of approximation. Here we follow Littlejohn's, without taking into account electric fields, time-dependence or relativistic effects, which can be treated though accordingly.
Guiding-centre motion involves different features of the magnetic field that come into play both in contravariant and covariant components. This suggests the language of differential forms as more appropriate. Calculations and results support its use for brevity and hopefully clarity. That being said, notions and notation are kept to a minimum.\footnote{In short, the main tools are as follows. For any vector field $u$, $L_u$ denotes the Lie derivative with respect to $u$, $i_u$ stands for the interior product of a form with $u$, and $u^\flat$ the corresponding 1-form. Finally, $[u,w]=L_uw$ stands for the commutator of any two vector fields $u,w$. The only relations used next are limited to basic properties among $L_u$, $i_u$, the exterior derivative $d$ and the wedge product $\wedge$.} For the calculus of differential forms, besides classical textbooks we refer to the recent tutorial \cite{M} specifically adapted to 3D and plasma physics.
Throughout this paper we consider a 3-dimensional oriented smooth Riemannian manifold $Q$ equipped with associated volume-form $\Omega$, and assume that the magnetic field $B$ is nowhere zero on $Q$. We set $M=Q\times\mathbb{R}$, and also assume enough smoothness for all objects on $M$, wherever needed.
Let $x$ and $v_\parallel$ denote the position and reduced velocity of the guiding centre, respectively. We think of $z=(x,v_\parallel)$ as a point of $M$. Following \cite{L1}, the equations of first-order guiding-centre (FGC) motion for normalised constants ($m=q=1$) read
\begin{eqnarray}
\label{gc1}\eqalign{
\dot{x}&=\tilde{B}_\parallel^{-1}(v_\parallel\tilde{B}+\epsilon\mu\,b\times\nabla|B|),\\
\dot{v}_\parallel&=-\,\mu\tilde{B}_\parallel^{-1}\tilde{B}\cdot\nabla|B|,}
\end{eqnarray}
where $b=B/|B|$, $\tilde{B}=B+\epsilon v_\parallel\textnormal{curl}\, b$ is the so-called modified magnetic field, $\tilde{B}_\parallel=\tilde{B}\cdot b$, $\mu$ is the value of the magnetic moment, and $\epsilon$ is a scaling parameter that indicates the order of the guiding-centre approximation. For a weakly inhomogeneous magnetic field, $\epsilon\ll1$ says that the magnetic field varies slowly within a gyroradius $\rho$ and a gyroperiod $\tau$. This can be expressed as $\rho/L, \tau/T\propto\epsilon$, where $L$ and $T$ stand for the characteristic lengths and time (seen by the particle) over which $B$ changes appreciably. As both $\rho$ and $\tau$ are inversely proportional to the gyrofrequency $\Omega_B=q|B|/m$, one may adopt $\epsilon=m/q$ and treat $\mu$ as the magnetic moment per unit mass instead of normalising the constants.
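As a concrete illustration of (\ref{gc1}), the following self-contained numerical sketch integrates the system for a simple mirror-like field of our own choosing (it is not taken from the references, and all quantities are in arbitrary normalised units); the printed diagnostic checks that $H=\epsilon(v_\parallel^2/2+\mu|B|)$, introduced below, is conserved along the trajectory.
\begin{verbatim}
# Numerical sketch of the first-order guiding-centre equations (gc1) for a
# toy divergence-free mirror field (our own example, arbitrary units):
#   B = -B0*(x*z/L^2) e_x - B0*(y*z/L^2) e_y + B0*(1 + z^2/L^2) e_z.
import numpy as np
from scipy.integrate import solve_ivp

eps, mu, B0, L = 0.01, 0.5, 1.0, 5.0

def B(x):
    X, Y, Z = x
    return np.array([-B0*X*Z/L**2, -B0*Y*Z/L**2, B0*(1 + Z**2/L**2)])

def grad(f, x, h=1e-6):          # central-difference gradient of a scalar
    e = np.eye(3)
    return np.array([(f(x + h*e[i]) - f(x - h*e[i])) / (2*h) for i in range(3)])

def curl(F, x, h=1e-6):          # central-difference curl of a vector field
    e = np.eye(3)
    J = np.array([(F(x + h*e[i]) - F(x - h*e[i])) / (2*h) for i in range(3)]).T
    return np.array([J[2,1]-J[1,2], J[0,2]-J[2,0], J[1,0]-J[0,1]])

def rhs(t, y):
    x, vpar = y[:3], y[3]
    Bx = B(x); modB = np.linalg.norm(Bx); b = Bx/modB
    gradB = grad(lambda p: np.linalg.norm(B(p)), x)
    Btil = Bx + eps*vpar*curl(lambda p: B(p)/np.linalg.norm(B(p)), x)
    Bpar = np.dot(Btil, b)
    xdot = (vpar*Btil + eps*mu*np.cross(b, gradB)) / Bpar
    vdot = -mu*np.dot(Btil, gradB) / Bpar
    return [*xdot, vdot]

sol = solve_ivp(rhs, (0, 40), [0.5, 0.0, 0.0, 0.3], max_step=0.01, rtol=1e-8)
# H = eps*(vpar^2/2 + mu*|B|) should stay constant along the trajectory:
H = eps*(sol.y[3]**2/2 + mu*np.array([np.linalg.norm(B(p)) for p in sol.y[:3].T]))
print("relative drift of H:", (H.max() - H.min()) / H[0])
\end{verbatim}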
An equivalent way to express the above system is
\begin{eqnarray}
\label{gc2}\eqalign{
\tilde{B}\times\dot{x}+\epsilon\dot{v}_\parallel b+\epsilon\mu\nabla|B|=0,\\
b\cdot\dot{x}-v_\parallel=0,}
\end{eqnarray}
explicitly defining $v_\parallel$ as the component of the guiding centre velocity that is parallel to the magnetic field. Although (\ref{gc1}) is in solved form, (\ref{gc2}) is often preferable to use and in fact precedes it in a Hamiltonian derivation.
In this form, the system admits a Hamiltonian formulation in the sense of $i_V\omega=-\,dH$ for $V=(\dot{x},\dot{v}_\parallel)$, where the symplectic form $\omega$ and Hamiltonian function $H$ on $M$ (minus the set where $\tilde{B}_\parallel=0$) are given by
\begin{eqnarray}
\label{symplectic}
\omega=\beta+\epsilon d(v_\parallel b^\flat)\\
\label{hamiltonian}
H(x,v_\parallel)=\epsilon(v_\parallel^2/2+\mu|B|(x)).
\end{eqnarray}
Here $\beta=i_B\Omega$ is a 2-form on $M$ expressing the magnetic flux, and the projection from $M$ to $Q$ that pulls back $\beta$ and $b^\flat$ is dropped to simplify notation. Note that $d\beta=(\textnormal{div}\, B)\Omega=0$, that is, $\beta$ is closed since $B$ is divergence-free.
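As a quick consistency check, one can verify directly that this pair reproduces (\ref{gc2}). Using the identities $i_wi_u\Omega=(u\times w)^\flat$ and $db^\flat=i_{\textnormal{curl}\, b}\Omega$,
\begin{eqnarray}
\nonumber\eqalign{
i_V\omega&=i_{\dot{x}}\beta+\epsilon\, i_V(dv_\parallel\wedge b^\flat)+\epsilon v_\parallel i_{\dot{x}}db^\flat\\
&=(B\times\dot{x})^\flat+\epsilon\dot{v}_\parallel b^\flat-\epsilon(b\cdot\dot{x})\,dv_\parallel+\epsilon v_\parallel(\textnormal{curl}\, b\times\dot{x})^\flat\\
&=(\tilde{B}\times\dot{x})^\flat+\epsilon\dot{v}_\parallel b^\flat-\epsilon(b\cdot\dot{x})\,dv_\parallel,}
\end{eqnarray}
while $-dH=-\epsilon v_\parallel dv_\parallel-\epsilon\mu\, d|B|$. Equating the $dv_\parallel$-components gives $b\cdot\dot{x}=v_\parallel$, and equating the spatial parts gives $\tilde{B}\times\dot{x}+\epsilon\dot{v}_\parallel b+\epsilon\mu\nabla|B|=0$, which are precisely the two equations of (\ref{gc2}).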
In terms of the modified vector potential $\tilde{A}=A+\epsilon v_\parallel b$, system (\ref{gc2}) is also formulated as a variational problem described by the Lagrangian \cite{L1}
\begin{equation}
\label{lagrangian}
L(x,v_\parallel,\dot{x},\dot{v}_\parallel)=\tilde{A}(x,v_\parallel)\!\cdot\dot{x}-H(x,v_\parallel),
\end{equation}
or equally the Poincar\'e-Cartan form $\alpha=Ldt=\tilde{A}^\flat-Hdt$ on extended state space \cite{L2}. The magnetic potential $A^\flat$ always exists locally, since by the Poincar\'e lemma the closed magnetic flux form $\beta$ is locally also exact on $Q$, i.e., $\beta=dA^\flat$. If $\int_S\beta=0$ for all surfaces $S$ representing the second homology group $H_2(Q)$, then $A^\flat$ is global.
\section{Approximate Hamiltonian symmetries}
\label{symmsection}
Approximate symmetries were introduced in \cite{BGI} in a framework very close to Lie's symmetry groups. An independent approach was presented in \cite{CG1} with a particular focus on dynamical systems and connections with normal forms. See also \cite{IK,CG2} for more reading. Here we adapt some of these notions to a Hamiltonian setup.
For a symmetry of an approximate system, being approximate means two things. The first is that the symmetry as a transformation is approximate, and the second is that the symmetry leaves the system approximately invariant. For consistency, the order of approximation in both cases is the same as the system's. By symmetry in this paper, we will mean continuous symmetries on $M$.
The key ingredient to quantify approximate methods is that any object that depends on a small parameter $\epsilon$ is considered only up to terms $O(\epsilon^k)$ for some integer $k$. To apply this, it is useful to work with the equivalence classes of functions described as follows. Any two functions $f,g$ that differ by $O(\epsilon^{k+1})$-terms are regarded as equal. To relax notation, we express this by writing
\begin{equation}
\label{appclass}
f(z;\epsilon)=g(z;\epsilon)+O(\epsilon^{k+1})~\Leftrightarrow~f(z;\epsilon)\approx g(z;\epsilon)
\end{equation}
for some fixed $k$.
Each equivalence class $[f]$ under $\approx$ has a natural representative, namely the $k$th-order Taylor polynomial of $f$ in $\epsilon$ about $\epsilon=0$. Thus, we can think of any $C^k$ function $f$ in $\epsilon$ defined on some manifold $M$ as
\begin{equation}
f(z;\epsilon)\approx\sum\limits_{i=0}^k\epsilon^if_i(z),
\end{equation}
$z\in M$. We do the same for any $\epsilon$-dependent differential form, vector field, and mapping whatsoever on $M$, assuming they are sufficiently smooth in a neighbourhood of $\epsilon=0$.
In the following we simply let $k=1$, as the forthcoming notions straightforwardly generalise to any order of approximation. So, the term ``approximate'' from now on will mean approximate of first order, unless stated otherwise.
\begin{dfn}
An approximate dynamical system on a manifold $M$ is a system of ordinary differential equations $\dot{z}=V(z;\epsilon)$ with $V\approx V_0+\epsilon V_1$, where $V_0,V_1$ are vector fields on $M$.
\end{dfn}
Under the equivalence $\approx$, note that any system that agrees up to first order with the vector field $V$ will do. We can express this by replacing $=$ with $\approx$ in (\ref{gc1}). Within this class, it is useful to work with Littlejohn's representative system that has a Hamiltonian structure.
We think of Hamiltonian systems in terms of symplectic forms, i.e., nondegenerate closed 2-forms on $M$. Relaxing the nondegeneracy requirement, presymplectic forms are just closed 2-forms. In the approximate setting, we ask for
\begin{dfn}\label{Hamiltonian}
An approximate Hamiltonian system $(\omega,H)$ on a manifold $M$ is an approximate dynamical system $V$ that satisfies
\begin{equation}
\label{hamequation}
i_V\omega=-\,dH
\end{equation}
for $\omega=\omega_0+\epsilon\,\omega_1$, $H=H_0+\epsilon H_1$, where $\omega_0$ is a symplectic form, $\omega_1$ is a presymplectic form and $H_0,H_1$ are functions all on $M$.
\end{dfn}
We start by making precise the first aspect: what an approximate transformation of a system is.
\begin{dfn}
An approximate transformation on a manifold $M$ is a smooth map $\Phi:M\times I\longrightarrow M$ with $\Phi(z;\epsilon)\approx\Phi_0(z)+\epsilon\,\Phi_1(z)$, $z\in M$ that is invertible for each $\epsilon\in I$, where $I\subset\mathbb{R}$ is open and contains 0.
\end{dfn}
By continuous transformations we mean that there is a family of transformations depending continuously on a parameter in a manifold of dimension at least 1. Typically this family is required to form a group. In the approximate context, we have the following notion.
\begin{dfn}\label{dfn:appgroup}
A one-parameter approximate transformation group on a manifold $M$ is a set of approximate transformations $\Phi^\tau$ such that\vspace{-0.1cm}
\begin{enumerate}
\item $\Phi^\tau\approx\textnormal{Id}\textnormal{ iff } \tau=0$,
\item $\Phi^{\tau_1}\circ\Phi^{\tau_2}\approx\Phi^{\tau_1+\tau_2}$\vspace{-0.1cm}
\end{enumerate}
for all $\tau,\tau_1,\tau_2\in G$, where $G\subset\mathbb{R}$ is open and contains 0.
\end{dfn}
\begin{dfn}\label{dfn:appgen}
The infinitesimal generator of a one-parameter approximate transformation group $\Phi^\tau$ on a manifold $M$ is defined by
\begin{equation}
\label{eq:appgen}
U(z;\epsilon)\approx\left.\frac{d\Phi^\tau(z;\epsilon)}{d\tau}\right|_{\tau=0}
\end{equation}
\end{dfn}
The converse to this relation, which builds the group from the generator, is given by the solution $\tilde{z}=\Phi^\tau(z;\epsilon)$ to the initial-value problem $d\tilde{z}/d\tau\approx U(\tilde{z})$, $\tilde{z}(0)\approx z$. Equivalently, it can be constructed from the exponential map in the approximate sense, $\Phi^\tau\approx\exp(\tau U)$, where the exponential of a vector field is defined by following it for time one.
Combining Definitions \ref{dfn:appgroup} and \ref{dfn:appgen}, we see that the generator (\ref{eq:appgen}) is a vector field of the form $U=U_0+\epsilon U_1$, where $U_0=\left.d\Phi^\tau_0/d\tau\right|_{\tau=0}$ and $U_1=\left.d\Phi^\tau_1/d\tau\right|_{\tau=0}$.
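For a concrete illustration of this correspondence (an example of ours), take $M=\mathbb{R}$ and $U(z;\epsilon)=1+\epsilon z$. Solving the initial-value problem above gives the flow
\[
\Phi^\tau(z;\epsilon)=z\,e^{\epsilon\tau}+\frac{e^{\epsilon\tau}-1}{\epsilon}
\approx(z+\tau)+\epsilon\,\tau\Bigl(z+\frac{\tau}{2}\Bigr),
\]
which satisfies $\Phi^{\tau_1}\circ\Phi^{\tau_2}\approx\Phi^{\tau_1+\tau_2}$ and recovers $\left.d\Phi^\tau/d\tau\right|_{\tau=0}=1+\epsilon z=U$.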
Moving to the second point, a $k$th-order approximate transformation is a $k$th-order approximate symmetry of a $k$th-order approximate system if it leaves the system invariant up to $O(\epsilon^{k})$-terms. For an autonomous system described exactly by a vector field $V$, a necessary and sufficient condition for a vector field $U$ to be an exact symmetry is that $U$ and $V$ commute. In the approximate case, the symmetry criterion applies accordingly and is given below as a definition.
\begin{dfn}\label{symmetry}
An approximate symmetry of an approximate system $\dot{z}=V(z;\epsilon)$ on a manifold $M$ is a one-parameter approximate transformation group generated by a vector field $U$ on $M$ that satisfies $\left[U,V\right]\approx0$.
\end{dfn}
For approximate Hamiltonian systems and symmetries, the invariance criterion from the exact case also applies here accordingly and is given by the next definition.
\begin{dfn}\label{Hsymmetry}
An approximate Hamiltonian symmetry of an approximate Hamiltonian system $(\omega,H)$ on a manifold $M$ is a one-parameter approximate transformation group generated by a vector field $U$ on $M$ that satisfies $L_U\omega\approx0$ and $L_UH\approx0$.
\end{dfn}
For later reference, it is worth noting that under $d\omega=0$ and $i_V\omega=-\,dH$, for any vector field $U$ we have the relations
\begin{eqnarray}
\label{Lomega}
L_U\omega&=di_U\omega\\
L_UH&=i_UdH=i_Vi_U\omega.
\label{Lhamiltonian}
\end{eqnarray}
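Both identities are immediate: Cartan's formula $L_U=i_U\,d+d\,i_U$ together with $d\omega=0$ gives the first, while $L_UH=i_U\,dH=-\,i_Ui_V\omega=i_Vi_U\omega$ gives the second, using $i_V\omega=-\,dH$ and the antisymmetry of $\omega$.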
An approximate version of Noether's theorem for Hamiltonian systems then follows. While the map from constants of motion to Hamiltonian symmetries is automatic, its inverse, given Definition \ref{Hsymmetry}, stumbles on the exactness of the closed 1-form $i_U\omega$. The next result offers a way out.
\begin{lem}
\label{homology}
Let $V$ be an approximate Hamiltonian system $(M,\omega,H)$ and $U$ an approximate Hamiltonian symmetry generator. If closed trajectories of the set of fields $fU+gV$ with $f,g$ arbitrary functions span $H_1(M)$, then $i_U\omega$ is approximately exact.
\end{lem}
\begin{proof}
If $U$ is a Hamiltonian symmetry generator, then $i_U\omega$ is closed up to first-order terms from (\ref{Lomega}), because $L_U\omega\approx0$. Also, $i_Xi_U\omega\approx0$ for any $X=fU+gV$ from (\ref{Lhamiltonian}), since $L_UH\approx0$. So, $\int_\gamma i_U\omega\approx0$ for every closed trajectory $\gamma$ of any such $X$, and since these trajectories span $H_1(M)$, the closed form $i_U\omega$ is approximately exact.
\end{proof}
\begin{dfn}
A function $K(z;\epsilon)=K_0(z)+\epsilon K_1(z)$, $z\in M$ is an approximate constant of motion for an approximate dynamical system $V$ on a manifold $M$ if $L_VK\approx0$.
\end{dfn}
\begin{thm}\label{noether}
If a function $K$ is an approximate constant of motion for the approximate Hamiltonian system $(\omega,H)$, then there exists an approximate Hamiltonian symmetry generated by a vector field $U$, unique up to equivalence, such that $i_U\omega\approx-\,dK$. Under the assumption of Lemma \ref{homology}, the converse is also true.
\end{thm}
\begin{proof}
For any function $K$, a vector field $U=U_0+\epsilon U_1$ such that $i_U\omega\approx-\,dK$ is well-defined, since $\omega_0$ is nondegenerate. This is because the zeroth-order terms $i_{U_0}\omega_0=-dK_0$ determine $U_0$ uniquely and then the first-order terms $i_{U_1}\omega_0+i_{U_0}\omega_1=-dK_1$ determine $U_1$ uniquely. Thus, we have $L_U\omega\approx0$ from (\ref{Lomega}), since $i_U\omega$ is closed up to first-order terms. If $L_VK\approx0$, then $L_UH\approx0$ too, because from (\ref{Lhamiltonian})
\begin{equation}
\label{HtoK}
L_UH=-\,i_Ui_V\omega=-\,i_VdK=-\,L_VK.
\end{equation}
In the other direction, if a vector field $U$ generates an approximate Hamiltonian symmetry, $i_U\omega\approx-\,dK$ for some global function $K$ by Lemma \ref{homology}. Then, using (\ref{HtoK}), $K$ is approximately conserved, because $L_UH\approx0$.
\end{proof}
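The simplest illustration of Theorem \ref{noether} is $K=H$ itself: $L_VH=-\,i_Vi_V\omega=0$, so the Hamiltonian is an approximate constant of motion, and the corresponding symmetry generator with $i_U\omega\approx-\,dH$ is $U\approx V$, namely the flow of the system itself.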
\begin{rem}\label{varsymm}
Here as well as in \cite{BKM}, we have chosen to use Hamiltonian symmetries. Equally, one can address the same problem in terms of variational symmetries \cite{O}. In other words, assume that $U$ generates an approximate symmetry of the Lagrangian formulation for the system. This means that $U$ leaves $\int{\!L\,dt}$ invariant modulo boundary terms and up to $O(\epsilon)$-terms. Infinitesimally, for GC motion this is expressed as $L_U\alpha\approx df$ for some arbitrary function $f(x,v_\parallel)$, recalling $\alpha=\tilde{A}^\flat-Hdt$ from section \ref{gcsection}. This condition splits by $t$ into $L_U\tilde{A}^\flat\approx df$ and $L_UH\approx0$. Note that $d\tilde{A}^\flat=\omega$, so applying $d$ to the former gives $L_U\omega\approx0$. Therefore we recover Definition \ref{Hsymmetry}. The opposite direction requires Lemma \ref{homology}. Variational symmetries assume $K=U\cdot\tilde{A}-f$ is global from the beginning, and so Noether's formulation in Theorem \ref{noether} and in Theorem \ref{prenoether} below does not require this lemma.
\end{rem}
\section{Noether's theorem for approximate presymplectic systems}
\label{casimirs}
The FGC system does not follow Definition \ref{Hamiltonian}, as $\omega_0=\beta$ is everywhere degenerate and therefore not symplectic. Consequently Theorem \ref{noether} does not apply in this case. Note that nondegeneracy of $\omega_0$ is actually a requirement only in the first direction of the theorem. Thus, if it fails then an approximate conserved quantity may correspond to more than one approximate Hamiltonian symmetry. The implications of this degeneracy for Noether's theorem are illustrated in this section.
A closed 2-form $\omega$ is called presymplectic. Thus, presymplectic forms may be degenerate and of variable rank. The rank of any 2-form $\omega$ is the dimension of the range of the associated linear map $\hat{\omega}$ from tangent vectors to cotangent vectors at each point, given by $\hat{\omega}(X)=i_X\omega$, and $\omega$ is degenerate if and only if the rank is less than the dimension of the manifold. For $\epsilon\neq0$ the guiding-centre form (\ref{symplectic}) in the exact scenario is symplectic except where $\tilde{B}_\parallel=0$. But for $\epsilon=0$ it reduces to $\beta$, which is closed ($\textnormal{div}\, B=0$) and its rank is 2 everywhere (as $B=0$ is excluded) on the 4-dimensional manifold $M=Q\times\mathbb{R}$, so it is presymplectic and nowhere symplectic.
In general, the kernel of $\omega$ consists of all the vector fields that annihilate $\omega$ and degeneracy means nonzero $\ker\omega$ of dimension complementary to the range. In the approximate setup, in order to include any degeneracies arising from the equivalence relation $\approx$, we consider
\begin{dfn}
\label{kernel}
For a 2-form $\omega=\omega_0+\epsilon\,\omega_1$, $\ker\omega$ is the set of all approximate vector fields $S$ such that $i_S\omega\approx0$.
\end{dfn}
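For orientation, here is a simple example of ours, unrelated to the guiding-centre form: on $M=\mathbb{R}^3$ take $\omega=dx\wedge dy+\epsilon\,dx\wedge dz$. Writing $S=S_0+\epsilon S_1$, the zeroth-order part of $i_S\omega\approx0$ forces $S_0=a\,\partial_z$ for some function $a$, and the first-order part then gives $i_{S_1}(dx\wedge dy)=a\,dx$, so
\[
\ker\omega=\bigl\{a\,\partial_z+\epsilon\,(-a\,\partial_y+g\,\partial_z)\;:\;a,g\ \textnormal{functions}\bigr\}.
\]
In particular, the first-order part of an element of $\ker\omega$ is tied to its zeroth-order part.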
For a presymplectic form $\omega$, we continue to say a dynamical system $V$ is Hamiltonian if $i_V\omega=-\,dH$ for some $H$. In contrast to the symplectic case, however, this does not have any solutions $V$ if $dH$ is not in the range of $\omega$, and if $dH$ is in the range then it has an affine space of solutions, consisting of one solution plus anything in $\ker\omega$, so $(\omega,H)$ no longer determines $V$ uniquely. Thus, to specify a presymplectic Hamiltonian system we give $(V,\omega,H)$. We do the same for approximate systems. In the sense of Definition \ref{kernel}, note that nondegeneracy of an approximate 2-form $\omega=\omega_0+\epsilon\,\omega_1$ requires only $\omega_0$ to be nondegenerate. Since the guiding-centre form for $\epsilon\ll1$ fails to meet this requirement, it can be said to be presymplectic. More generally, we say
\begin{dfn}
An approximate presymplectic Hamiltonian system $(V,\omega,H)$ on a manifold $M$ is an approximate dynamical system $V$ that satisfies $i_V\omega=-\,dH$ for $\omega=\omega_0+\epsilon\,\omega_1$, $H=H_0+\epsilon H_1$, where $\omega_0,\omega_1$ are presymplectic forms and $H_0,H_1$ are functions all on $M$, assuming $\omega_0$ is nowhere symplectic.
\end{dfn}
Next we present some symmetry aspects introduced by presymplectic forms. For approximate forms, the points we limit ourselves to are very similar to the exact case, and thus we formulate them directly for approximate presymplectic systems. First, we adopt again Definition \ref{Hsymmetry} for Hamiltonian symmetries. Note though that the kernel of a presymplectic form automatically gives rise to Hamiltonian symmetries of all systems $V$ with the same presymplectic form $\omega$, regardless of the Hamiltonian function $H$.
\begin{prop}\label{trivial}
For an approximate presymplectic Hamiltonian system $(V,\omega,H)$, any vector field in $\ker\omega$ generates an approximate Hamiltonian symmetry for all $H$.
\end{prop}
\begin{proof}
Let $i_S\omega\approx0$ for some vector field $S$. Then $L_S\omega=di_S\omega\approx0$ from (\ref{Lomega}), and $L_SH=i_Vi_S\omega\approx0$ from (\ref{Lhamiltonian}), as $V$ respects the order of approximation of $i_S\omega$.
\end{proof}
Thus, such symmetries are not triggered by the dynamics of a particular system; they merely reduce it to a local symplectic submanifold. Moreover, they trivially satisfy the relation of Theorem \ref{noether} for a constant function. We say
\begin{dfn}
A trivial symmetry of an approximate presymplectic Hamiltonian system $(V,\omega,H)$ is a transformation generated by a vector field in $\ker\omega$.
\end{dfn}
\begin{rem}
Unlike the symplectic case, in the presymplectic case we cannot deduce that a Hamiltonian symmetry $U$ is a symmetry of $V$, only that $[U,V]\in\ker\omega$ and $[U+S,V]=0$ for some $S\in\ker\omega$.
\end{rem}
In order to restore the one-to-one correspondence in Noether's theorem, we need to consider equivalence classes of Hamiltonian symmetries, each differing from one another by a trivial one. This is where an approximate version meets a presymplectic one.
\begin{dfn}
\label{range}
For a 2-form $\omega=\omega_0+\epsilon\,\omega_1$, $\mathrm{ran}\,\omega$ is the set of all approximate 1-forms $i_X\omega$ for approximate vector fields $X$.
\end{dfn}
\begin{thm}\label{prenoether}
If a function $K$ is an approximate constant of motion for the approximate presymplectic Hamiltonian system $(V,\omega,H)$ with $dK\in\textnormal{ran}\,\omega$, then there exists an approximate Hamiltonian symmetry generated by any vector field $U+S$ such that $i_U\omega\approx-\,dK$ and $S\in\ker\omega$. Under the assumption of Lemma \ref{homology}, the converse is also true.
\end{thm}
\begin{proof}
The proof follows from Proposition \ref{trivial} and along the same lines as the proof of Theorem \ref{noether}. In the first direction, for any function $K$ with $dK\in\mathrm{ran}\,\omega$, a vector field $\tilde{U}$ such that $i_{\tilde{U}}\omega\approx-\,dK$ can be defined uniquely modulo elements of $\ker\omega$, i.e., $\tilde{U}=U+S$, where $i_U\omega\approx-\,dK$. Then, as in Theorem \ref{noether}, $\tilde{U}$ generates an approximate Hamiltonian symmetry. In the other direction, if $U+S$ generates an approximate Hamiltonian symmetry, then so does $U$ by Proposition \ref{trivial}. Then, as in Theorem \ref{noether}, $i_U\omega\approx-\,dK$ by Lemma \ref{homology} and $K$ is an approximate conserved quantity.
\end{proof}
For more general presymplectic systems, where $V$ is not unique or only exists on a submanifold of $M$, see \cite{FP,LD} for a (purely) presymplectic version of Noether's theorem.
\subsection{The guiding-centre case}
Returning to FGC motion, we have the following.
\begin{prop}
\label{gcrange}
The range of the guiding-centre 2-form $\omega=\beta+\epsilon d(v_\parallel b^\flat)$ consists of all the 1-forms $a=a_0+\epsilon a_1$ on $M$ such that $a_0\in\mathrm{ran}\,\beta$.
\end{prop}
\begin{proof}
Let $a=a_0+\epsilon a_1$ be a 1-form such that $i_X\omega\approx a$ for some vector field $X=X_0+\epsilon X_1$. Then $i_{X_0}\beta=a_0$ and $i_{X_1}\beta+i_{X_0}d(v_\parallel b^\flat)=a_1$. Using $d(v_\parallel b^\flat)=dv_\parallel\wedge b^\flat+v_\parallel db^\flat$, the second equation gives
\begin{equation}
\label{ran2nd}
i_{X_1}\beta+(i_{X_0}dv_\parallel)b^\flat-(i_{X_0}b^\flat)dv_\parallel+v_\parallel i_{X_0}db^\flat=a_1.
\end{equation}
The first three terms show that $a_1$ is any 1-form for arbitrary $X_1$ and $v_\parallel$-, $b$-components of $X_0$. The latter do not enter the first condition, hence the result.
\end{proof}
\begin{prop}
\label{gckernel}
The kernel of the guiding-centre 2-form $\omega=\beta+\epsilon d(v_\parallel b^\flat)$ consists of all the vector fields $S=\epsilon S_1$ on $M$ such that $S_1\in\ker\beta$.
\end{prop}
\begin{proof}
Let $S=S_0+\epsilon S_1$ be a vector field such that $i_S\omega\approx0$. Then $i_{S_0}\beta=0$ and $i_{S_1}\beta+i_{S_0}d(v_\parallel b^\flat)=0$. Now, the second equation gives
\begin{equation}
\label{ker2nd}
i_{S_1}\beta+(i_{S_0}dv_\parallel)b^\flat-(i_{S_0}b^\flat)dv_\parallel+v_\parallel i_{S_0}db^\flat=0.
\end{equation}
Contracting the above with $\partial_{v_\parallel}$, we have $i_{S_0}b^\flat=0$. The latter together with the first equation yields $i_{S_0}\Omega=0$, applying $i_{S_0}$ on $\beta\wedge b^\flat=|B|\Omega$. Then the last term in (\ref{ker2nd}) also vanishes, because $i_{S_0}db^\flat=i_{S_0}i_c\Omega=-\,i_ci_{S_0}\Omega=0$, where $c=\textnormal{curl}\, b$. So, if we contract (\ref{ker2nd}) with $b$, we get $i_{S_0}dv_\parallel=0$. Thus, $S_0=0$ because $i_{S_0}\Omega=0$ and $i_{S_0}dv_\parallel=0$. Then (\ref{ker2nd}) reduces to just $i_{S_1}\beta=0$.
\end{proof}
\begin{cor}
\label{gcnoether}
For FGC motion, there is a one-to-one correspondence between approximate constants of motion $K$ with $K_0=\textnormal{const.}$ being flux surfaces and classes of approximate Hamiltonian symmetries $U+\epsilon(fb,g)$ where $f,g$ are any functions, and $i_U\omega\approx-\,dK$.
\end{cor}
\begin{proof}
The range of $\beta$ on $M$ consists of all the 1-forms on $Q$ that vanish on $B$, and so for exact 1-forms $dK_0$ this means $i_BdK_0=0$, i.e., $K_0=\textnormal{const.}$ is a flux surface. The kernel of $\beta$ on $M$ consists of all the vector fields $(fb,g)$, where $f,g$ are arbitrary functions. The result follows from Theorem \ref{prenoether} and Propositions \ref{gcrange}, \ref{gckernel}.
\end{proof}
\begin{rem}
Note that for all values of $\mu$ the vector field $V_0=(v_\parallel b,-\mu\,b\cdot\nabla|B|)$ spans $\ker\beta$, assuming $b\cdot\nabla|B|\neq0$. Then, $i_{V_0}dK_0=0$ says that $dK_0$ belongs to $\mathrm{ran}\,\omega$ automatically. In other words, instead of asking $K_0=\textnormal{const.}$ to be a flux surface in Corollary \ref{gcnoether} we can ask $K_0$ to be independent of $\mu$ when $b\cdot\nabla|B|\neq0$.
\end{rem}
\section{Approximate quasisymmetry}
In this section, we address approximate Hamiltonian spatial symmetries for guiding-centre motion. Our goal is to see how quasisymmetry can be approximated using the guiding-centre approximation. Either in the exact or the approximate framework,
\begin{dfn}
Quasisymmetry is a Hamiltonian symmetry on $Q$ of FGC motion for all values of the magnetic moment.
\end{dfn}
\begin{thm}\label{appqs}
Given a magnetic field $B$, a vector field $u=u_0+\epsilon u_1$ on $Q$ generates an approximate quasisymmetry if and only if $L_{u_0}\beta=0$, $L_{u_0}b^\flat=0$, $L_{u_0}|B|=0$, $L_{u_1}\beta=0$.
\end{thm}
\begin{proof}
Substitute $u$, $\omega$ and $H$ into the conditions of Definition \ref{Hsymmetry} and split up by different powers of $\epsilon$, dropping any second-order terms. Starting with $L_uH\approx0$, we get $L_{u_0}|B|=0$. Similarly from $L_u\omega\approx0$, we have $L_{u_0}\beta=0$ and
\begin{equation}
\label{p1}
L_{u_0}d(v_\parallel b^\flat)+L_{u_1}\beta=0
\end{equation}
from the zero- and first-order terms, respectively. Now
\begin{equation}
\label{p2}
L_{u_0}d(v_\parallel b^\flat)=dL_{u_0}(v_\parallel b^\flat)=d(v_\parallel L_{u_0}b^\flat)=dv_\parallel\wedge L_{u_0}b^\flat+v_\parallel dL_{u_0}b^\flat.
\end{equation}
Thus, contracting (\ref{p1}) with $\partial_{v_\parallel}$, we get $L_{u_0}b^\flat=0$. Substituting this into (\ref{p2}) gives $L_{u_0}d(v_\parallel b^\flat)=0$, and so (\ref{p1}) yields $L_{u_1}\beta=0$. Going in the opposite direction, it is straightforward to see that the converse is also true.
\end{proof}
As shown in \cite{BKM}, under the above conditions $u_0$ satisfies several additional properties such as $\textnormal{div}\, u_0=0$, $[u_0,B]=0$, $[u_0,J]=0$, $L_{u_0}(u_0\cdot b)=0$ and others. Note that $u_0$ and $u_1$ are uncoupled.
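For orientation, the basic example is exact axisymmetry: if $B$ is invariant under rotation about a fixed axis, the rotation generator $u_0=\partial_\phi$ (together with $u_1=0$) preserves $\beta$, $b^\flat$ and $|B|$, and therefore satisfies the conditions of Theorem \ref{appqs}.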
\section{Approximate $v_\parallel$-symmetries}
\label{sc:appvs}
Subsequently we ask how departures from quasisymmetry that depend on parallel velocity can relax the conditions on $B$ for FGC motion to have a symmetry. Thus, we investigate the conditions for an approximate Hamiltonian symmetry on phase space $M$, which will often be referred to simply as approximate symmetries. As it turns out (Theorem \ref{appvs}), symmetry generators for all values of the magnetic moment have zero component in the parallel-velocity direction. Thus, we will also refer to symmetries on $M$ as $v_\parallel$-symmetries, which is short for parallel-velocity-dependent symmetries generated on $Q$.
Symmetries that involve velocities are not new to charged particle motion. Gyrosymmetry is an example of an approximate Hamiltonian symmetry involving the perpendicular velocity to leading order.
\begin{expl}\normalfont
Consider the full particle's motion on the cotangent bundle $T^*Q$ with symplectic form $\omega=\beta+dv\wedge dx$, where $v$ is the particle's velocity. In the case of a homogeneous magnetic field, the magnetic moment $\mu=v_\perp^2/2|B|$ is an exact constant of motion and corresponds via $i_U\omega=-\,d\mu$ to the exact symmetry generated by $U=(v_\perp/|B|,v_\perp\!\times b)$ on $T^*Q$, where $v_\perp$ is the perpendicular velocity vector of the particle.
For a weakly-inhomogeneous $B$, we have $\omega=\beta+\epsilon\,dv\wedge dx$
for $\epsilon\ll1$.
The magnetic moment now extends to an adiabatic invariant $K=\epsilon^2 v_\perp^2/2|B|+O(\epsilon^3)$, which under $i_U\omega=-\,dK$ corresponds to an approximate Hamiltonian symmetry that extends to all orders, generated by the vector field $U=U_0+\epsilon U_1+O(\epsilon^2)$ with
\begin{eqnarray}
\label{gs1}
\fl U_0&=(0,v_\perp\!\times b),\\
\label{gs2}
\fl U_1&=|B|^{-1}\!\left(v_\perp,\left\{\!\left(v_\parallel\,b\cdot c-2K_1\,n\cdot\nabla|B|\right)\!b-v_\parallel c+\left[b\times(v_\perp\cdot\nabla b)+n\cdot\nabla b\right]\!/2\right\}\!\times v\right)\!,
\end{eqnarray}
where $n=b\times v_\perp$ and $c=\textnormal{curl}\, b$, as shown in \ref{GSappendix}. Thus, the exact symmetry from the homogeneous case splits between terms of different order. Note that the leading order of $U$ is two orders lower than that of $K$, just as with $V$ and $H$.
Formulas (\ref{gs1})-(\ref{gs2}) recover those in \cite{JS}, where a coordinate-free way is presented to build the so-called roto-rate as a means to gyrosymmetry and the corresponding adiabatic invariant to all orders for nearly-periodic systems.
\end{expl}
For considerations of general symmetries, we need to work on $M=Q\times\mathbb{R}$ with volume form $\Omega\wedge dv_\parallel$. For any vector field $U$ on $M$, we denote by $u$ the projection of $U$ on $Q$, i.e., the spatial components of $U$ collectively, and by $w$ the component of $U$ in the parallel-velocity direction; we write $U=(u,w)$.
\begin{thm}
\label{appvs}
Given a magnetic field $B$, a vector field $U=(u,w)=(u_0+\epsilon u_1,w_0+\epsilon w_1)$ on $M$ generates an approximate Hamiltonian symmetry of FGC motion if and only if
\begin{eqnarray}
\label{vs1}L_{u_0}\beta=0,\\
\label{vs2}d(v_\parallel L_{u_0}b^\flat)+L_{u_1}\beta=0,\\
\label{vs3}L_{u_0}|B|=0,\\
w=0.
\end{eqnarray}
\end{thm}
\begin{proof}
From $L_UH\approx0$, we have $v_\parallel w_0+\mu L_{u_0}|B|=0$. Then for all values of $\mu$, we get (\ref{vs3}) and $w_0=0$. Given Corollary \ref{gcnoether}, take also $w_1=0$ under the equivalence by trivial symmetries. Thus, $U$ has overall zero velocity-component $w$.
Since $w=0$, $L_U\omega\approx0$ reduces to $L_u\omega\approx0$. From the latter, we obtain (\ref{vs1}) from the zeroth-order terms, and $L_{u_0}d(v_\parallel b^\flat)+L_{u_1}\beta=0$ from the first-order ones. This in turn gives (\ref{vs2}), because $L_{u_0}d(v_\parallel b^\flat)=d(v_\parallel L_{u_0}b^\flat)$.
\end{proof}
Next we explore some further consequences and also subcases.
\begin{thm}
\label{appvscons}
If $u=u_0+\epsilon u_1$ generates an approximate Hamiltonian $v_\parallel$-symmetry of FGC motion, then $\textnormal{div}\, u_0=0$, $[u_0,B]=0$ and $i_bL_{u_0}b^\flat=0$. Furthermore,\vspace{-0.1cm}
\begin{enumerate}
\item If $u_1=0$, then $[u_0,c]=0$ and $i_cL_{u_0}b^\flat=0$, where $c=\textnormal{curl}\, b$.
\item If $u_0$ is spatial, then $L_{u_0}b^\flat=i_Bi_{\partial_{v_\parallel}\!u_1}\Omega$.
\end{enumerate}
\end{thm}
\begin{proof}
Note first that $L_{u_0}\Omega\wedge dv_\parallel=di_{u_0}\Omega\wedge dv_\parallel=(\textnormal{div}\, u_0)\Omega\wedge dv_\parallel$. Take then the first symmetry condition (\ref{vs1}) and split by spatial and velocity components. In order to do this, wedge with $dv_\parallel$ and contract with $\partial_{v_\parallel}$, respectively. In the first case, write $L_{u_0}\beta=i_{[u_0,B]}\Omega+i_BL_{u_0}\Omega$, and therefore $L_{u_0}\beta\wedge dv_\parallel=(i_{[u_0,B]}+\textnormal{div}\, u_0\,i_B)\Omega\wedge dv_\parallel$, where $\Omega\wedge dv_\parallel$ is nondegenerate. In the second case, we have $i_{\partial_{v_\parallel}}\!L_{u_0}\beta=i_{\partial_{v_\parallel}\!u_0}i_B\Omega$, since $[\partial_{v_\parallel},u_0]=\partial_{v_\parallel}\!u_0$. Thus, the first symmetry condition splits into
\begin{eqnarray}
\label{con11}
[u_0,B]+(\textnormal{div}\, u_0)B=0,\\
\label{con12}
i_{\partial_{v_\parallel}\!u_0}i_B\Omega=0.
\end{eqnarray}
In the same way, split the second symmetry condition (\ref{vs2}). First of all, note that $db^\flat=i_c\Omega$ and write $d(v_\parallel L_{u_0}b^\flat)=dv_\parallel\wedge L_{u_0}b^\flat+v_\parallel L_{u_0}db^\flat$. Similarly then wedge with $dv_\parallel$, using now $L_{u_0}db^\flat=i_{[u_0,c]}\Omega+i_cL_{u_0}\Omega$, as well. In contracting with $\partial_{v_\parallel}$, note that $i_{\partial_{v_\parallel}}\!L_{u_0}b^\flat=i_{\partial_{v_\parallel}\!u_0}b^\flat$ and $i_{\partial_{v_\parallel}}\!L_{u_0}db^\flat=i_{\partial_{v_\parallel}\!u_0}i_c\Omega$, since $b^\flat$ lies on $Q$.
Thus, as before, the second condition gives
\begin{eqnarray}
\label{con21}
v_\parallel[u_0,c]+v_\parallel(\textnormal{div}\, u_0)c+[u_1,B]+(\textnormal{div}\, u_1)B=0,\\
\label{con22}
L_{u_0}b^\flat-(i_{\partial_{v_\parallel}\!u_0}b^\flat)dv_\parallel+v_\parallel i_{\partial_{v_\parallel}\!u_0}i_c\Omega+i_{\partial_{v_\parallel}\!u_1}i_B\Omega=0.
\end{eqnarray}
From (\ref{con22}) and (\ref{con12}), we have $i_bL_{u_0}b^\flat=0$.
Now using this, $L_{u_0}|B|=i_{[u_0,B]}b^\flat$. Then the third symmetry condition (\ref{vs3}) combined with (\ref{con11}) gives $\textnormal{div}\, u_0=0$ and so $[u_0,B]=0$, as well.
The remaining two statements are automatic from (\ref{con21}), substituting $\textnormal{div}\, u_0=0$, and (\ref{con22}).
\end{proof}
\begin{rem}\label{appvsconseq}
From the proof of Theorem \ref{appvscons}, we see that under $L_{u_0}\beta=0$, $L_{u_0}|B|=0$, the condition $\textnormal{div}\, u_0=0$ is equivalent to either $[u_0,B]=0$ or $i_bL_{u_0}b^\flat=0$.
\end{rem}
Note that, in spite of $w=0$, the first two symmetry conditions (\ref{vs1})-(\ref{vs2}) still have $v_\parallel$-components. The next result shows that (\ref{vs1}) can be reduced to a condition on $Q$ and gives a reformulation of (\ref{vs2}).
\begin{lem}
\label{appvslem}
For any two $v_\parallel$-dependent vector fields $u_0,u_1$ on $Q$, the conditions (\ref{vs1}) and (\ref{vs2}) hold if and only if
\begin{eqnarray}
\label{vs11}
i_{u_0}i_B\Omega=d\psi_0,\\
\label{vs21}
v_\parallel L_{u_0}b^\flat+i_{u_1}i_B\Omega=d\psi_1,
\end{eqnarray}
respectively, where $\psi_0$ is a spatial function on $Q$ and $\psi_1$ is a function on $M$, both defined at least locally. Under (\ref{vs11})-(\ref{vs21}), $\psi_1$ is spatial if and only if $u_0$ is.
\end{lem}
\begin{proof}
$L_{u_0}\beta=di_{u_0}\beta$, since $\beta$ is closed. Thus, by the Poincar\'e lemma, (\ref{vs1}) holds if $i_{u_0}\beta=d\psi_0$ for some local function $\psi_0$ on $M$. The $v_\parallel$-component then gives $\partial_{v_\parallel}\!\psi_0=0$, since $i_{u_0}\beta$ is a 1-form on $Q$. Similarly, (\ref{vs2}) holds if $v_\parallel L_{u_0}b^\flat+i_{u_1}\beta=d\psi_1$ for some local function $\psi_1$ on $M$. In the other direction, $\psi_0$ and $\psi_1$ can be global.
Now, on the one hand, the $v_\parallel$-derivative of (\ref{vs11}) gives $i_{\partial_{v_\parallel}\!u_0}i_B\Omega=0$, since $B$ and $\psi_0$ are spatial. On the other, the $v_\parallel$-component of (\ref{vs21}) yields $v_\parallel i_{\partial_{v_\parallel}\!u_0}b^\flat=\partial_{v_\parallel}\!\psi_1$. To see this, use $L_{u_0}b^\flat=i_{u_0}db^\flat+di_{u_0}b^\flat$ and note that $i_{u_1}i_B\Omega$ lies on $Q$, and so does $i_{u_0}db^\flat$. Therefore, when both conditions hold, we have $\partial_{v_\parallel}\!u_0\times B=0$ and $v_\parallel\,\partial_{v_\parallel}\!u_0\cdot b=\partial_{v_\parallel}\!\psi_1$. Hence $\partial_{v_\parallel}\!\psi_1=0$ if and only if $\partial_{v_\parallel}\!u_0=0$.
\end{proof}
Included in this section to treat the general case, the above lemma can be combined with either Theorem \ref{appqs} or \ref{appvs}.
From Theorem \ref{appvs}, we see already that for a general approximate (Hamiltonian) symmetry, $u_0$ and $u_1$ are now related via (\ref{vs2}). From Lemma \ref{appvslem} and (\ref{vs21}) in particular, we can express $u_1$ in terms of $u_0$ and give another characterisation of approximate Hamiltonian symmetries.
\begin{thm}\label{appvs2}
Given a magnetic field $B$, a vector field $U=(u,w)=(u_0+\epsilon u_1,w_0+\epsilon w_1)$ on $M$ generates an approximate Hamiltonian symmetry of FGC motion up to trivial symmetries if and only if $L_{u_0}\beta=0$, $L_{u_0}|B|=0$, $w=0$, and
\begin{equation}
\label{u1}
u_1=|B|^{-1}b\times(v_\parallel X_0-\nabla\psi_1),
\end{equation}
where $X_0=\textnormal{curl}\, b\times u_0 + \nabla(u_0\cdot b)$, $\nabla$ denotes the spatial gradient and $\psi_1$ is a flux function on $M$ defined at least locally such that
\begin{equation}
\label{u1c2}
\partial_{v_\parallel}\!\psi_1=v_\parallel\,b\cdot\partial_{v_\parallel}\! u_0.
\end{equation}
\end{thm}
\begin{proof}
First of all, note that $X_0^\flat\wedge dv_\parallel=L_{u_0}b^\flat\wedge dv_\parallel$. Thus, the spatial part of (\ref{vs21}) is $v_\parallel X_0+B\times u_1=\nabla\psi_1$. Cross then with $b$ and drop any trivial symmetries to arrive at (\ref{u1}). Dotting with $b$ yields $b\cdot\nabla\psi_1=v_\parallel\,b\cdot X_0=i_bL_{u_0}b^\flat=0$ by Theorem \ref{appvscons}. The velocity part of (\ref{vs21}), as shown in the proof of Lemma \ref{appvslem}, gives (\ref{u1c2}).\end{proof}
Note that equally we can replace $X_0$ with $|B|^{-1}(J\times u_0 + \nabla(u_0\cdot B))$ in (\ref{u1}). Condition (\ref{u1c2}) says that the $v_\parallel$-dependence of $\psi_1$ is determined by the $v_\parallel$-dependence of $u_0$. For example, if $u_0$ is an $n$th-order polynomial in $v_\parallel$, then so is $\psi_1$.
\begin{rem}
From (\ref{u1}), we deduce that, since $u_0$ is spatial if and only if $\psi_1$ is spatial, $u_1$ is nonzero up to trivial symmetries unless $u_0$ depends on $v_\parallel$ or $L_{u_0}b^\flat=0$. In other words, we cannot have both spatial $u_0$ and zero $u_1$, assuming $L_{u_0}b^\flat\neq0$.
\end{rem}
To connect with other formulations, we express some key relations of the previous results in vector calculus notation in \ref{VCappendix}.
\section{Approximate flux surfaces and constants of motion}
Back to Lemma \ref{appvslem}, we see that $B\cdot\nabla\psi_0=0$ from (\ref{vs11}). Thus, even in the case of an approximate phase-space (Hamiltonian) symmetry there exists a flux function $\psi_0$, at least locally, and we assume it is global.
From (\ref{vs21}) and $i_bL_{u_0}b^\flat=0$ we also have $B\cdot\nabla\psi_1=0$, as stated in Theorem \ref{appvscons}. Thus, there exists an approximate, generalised notion of a flux function given by
\begin{equation}
\psi=\psi_0+\epsilon\psi_1
\end{equation}
assuming $\psi_1$ is also global. We say generalised, because although $\psi_0$ is spatial, $\psi_1$ may depend on the parallel velocity.
From now on, we will assume that both $\psi_0$ and $\psi_1$ are global, and we will refer to the level sets of $\psi_0$ as flux surfaces.
Finally, to construct the approximate conserved quantity $K$ that arises from an approximate Hamiltonian symmetry $U$ in general, we employ Corollary \ref{gcnoether}. Recall that $K$ is uniquely determined by $U$ via $i_U\omega\approx-\,dK$ and vice versa, since trivial symmetries have been factored out. For any vector field $U=U_0+\epsilon U_1=(u_0,w_0)+\epsilon(u_1,w_1)$, we have
\begin{eqnarray*}
i_U\omega&\approx i_{U_0}\beta+\epsilon[i_{U_0}d(v_\parallel b^\flat)+i_{U_1}\beta]=i_{u_0}i_B\Omega+\epsilon[(L_{U_0}-di_{u_0})v_\parallel b^\flat+i_{u_1}i_B\Omega]\\
&=i_{u_0}i_B\Omega+\epsilon[w_0b^\flat+v_\parallel L_{u_0}b^\flat+i_{u_1}i_B\Omega-d(v_\parallel u_0\cdot b)],
\end{eqnarray*}
and so, using Theorem \ref{appvs} and Lemma \ref{appvslem}, we arrive at
\begin{equation}
\label{invariant}
K=-\,\psi_0-\epsilon(\psi_1-v_\parallel u_0\cdot b).
\end{equation}
This is a generalisation of the exact invariant \cite{BKM} in that it introduces the generalised flux function $\psi_1$. Note that this formula applies for spatial symmetries too; only the conditions on $u_0$ change. By Lemma \ref{appvslem}, however, the function $K$ is linear in the velocity if $u_0$ is spatial, but nonlinear otherwise. Interestingly enough, $u_1$ does not enter explicitly.
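For example, when $u_0$ is spatial and $\psi_1=0$, (\ref{invariant}) reduces to $K=-\,\psi_0+\epsilon\,v_\parallel u_0\cdot b$, linear in $v_\parallel$ and of the same form as the exact quasisymmetric invariant; this special case reappears as the quantity $p$ in the next section.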
\section{Weak quasisymmetry}
Given Theorems \ref{appvscons}, \ref{appvs2} and Remark \ref{appvsconseq}, we conclude that an approximate Hamiltonian $v_\parallel$-symmetry generator $u_0+\epsilon u_1$ satisfies the conditions
\begin{equation}
\label{vs0}
L_{u_0}\beta=0,\qquad \textnormal{div}\, u_0=0,\qquad L_{u_0}|B|=0
\end{equation}
to zero order and the first-order term $u_1$ is given by (\ref{u1}). The only additional condition (\ref{u1c2}) restricts the velocity-dependence between $u_0$ and $\psi_1$, and so it is automatic if either one is spatial by Lemma \ref{appvslem}.
Here we address the converse with no assumption on $u_1$ whatsoever. Leaving (\ref{u1c2}) aside, we may assume that $\psi_1$ (and so $u_0$) is independent of $v_\parallel$. To connect also with \cite{RHB}, we first treat the restriction to $\psi_1=0$ considered there.
\begin{prop}
\label{RHBi}
If $i_{u_0}i_B\Omega=d\psi_0$, $\textnormal{div}\, u_0=0$ and $L_{u_0}|B|=0$ with spatial $u_0,\psi_0$, then $p=-\,\psi_0+\epsilon\,v_\parallel u_0\cdot b$ is an approximate conserved quantity for FGC motion.
\end{prop}
\begin{proof}
To check the approximate invariance of $p$, compute $L_Vp$ with $V=(\dot{x},\dot{v}_\parallel)$ up to $O(\epsilon)$-terms. Thus, assuming $u_0$ is independent of $v_\parallel$, we have
\begin{eqnarray}
\label{Pinvariance}
L_Vp&=-L_V\psi_0+\epsilon\,u_0\cdot b\,L_Vv_\parallel+\epsilon\,v_\parallel L_V(u_0\cdot b)\nonumber\\
&=-\,i_{\dot{x}}d\psi_0+\epsilon\,\dot{v}_\parallel u_0\cdot b+\epsilon\,v_\parallel i_{\dot{x}}d(u_0\cdot b)\nonumber\\
&=-\,i_{\dot{x}}i_{u_0}i_B\Omega+\epsilon\,\dot{v}_\parallel u_0\cdot b+\epsilon\,v_\parallel i_{\dot{x}}(L_{u_0}b^\flat-i_{u_0}db^\flat)\nonumber\\
&=-\,i_{\dot{x}}i_{u_0}i_{\tilde{B}}\Omega+\epsilon\,\dot{v}_\parallel u_0\cdot b+\epsilon\,v_\parallel i_{\dot{x}}L_{u_0}b^\flat\nonumber\\
&\approx-\,\epsilon\,i_{u_0}(\dot{v}_\parallel b^\flat+\mu d|B|)+\epsilon\,\dot{v}_\parallel u_0\cdot b+\epsilon\,v_\parallel i_bL_{u_0}b^\flat\approx0,
\end{eqnarray}
using (\ref{gc1})-(\ref{gc2}) in the penultimate equality and Remark \ref{appvsconseq} in the last one.
\end{proof}
As with the general case of $K$, the approximate constant $p$ does not involve $u_1$. One might ask whether $u_0$ on $Q$ is the corresponding Hamiltonian symmetry generator under (\ref{vs0}). Theorem \ref{appqs} rules out this possibility. One can verify that $u_0$ does not even generate an approximate symmetry of the guiding-centre equations themselves, regardless of the Hamiltonian structure, in the sense of Definition \ref{symmetry} (or at least modulo $\ker\omega$ to include any degeneracies). However, in light of Noether's theorem adapted here successively, leading to Corollary \ref{gcnoether}, we can construct the symmetry from $p$. Either by direct calculation or by running section \ref{sc:appvs} backwards, we obtain
\begin{prop}
\label{RHBs}
The approximate conserved quantity $p=-\,\psi_0+\epsilon\,v_\parallel u_0\cdot b$ with spatial $u_0,\psi_0$ satisfying $i_{u_0}i_B\Omega=d\psi_0$, $\textnormal{div}\, u_0=0$, $L_{u_0}|B|=0$ corresponds to the approximate Hamiltonian symmetry generated by $u_p=u_0+\epsilon v_\parallel|B|^{-1}b\times X_0$, where $X_0=\textnormal{curl}\, b\times u_0 + \nabla(u_0\cdot b)$, up to trivial symmetries.
\end{prop}
Proposition \ref{RHBi} agrees with \cite{RHB} that $p$ is an approximate conserved quantity under conditions (\ref{vs0}). Contrary to \cite{RHB}, however, Proposition \ref{RHBs} shows that under these conditions the arising symmetry is not purely spatial, but it is spatial to lowest order and depends linearly on parallel velocity in first order.
This is only an example of such symmetries; within this symmetry class we could have in general a nonzero, spatial $\psi_1$. Although they escape quasisymmetry, these symmetries are a weak version of it.
\begin{dfn}
\label{dfn:weakvs}
A weak quasisymmetry is an approximate Hamiltonian symmetry of FGC motion which is spatial to leading order and nontrivially linear in $v_\parallel$ to first order.
\end{dfn}
Propositions \ref{RHBi}-\ref{RHBs} indicate that a spatial vector field $u_0$ that satisfies (\ref{vs0}) is the zeroth-order term of a weak quasisymmetry generator. We extend this to include the case of spatial $\psi_1\neq0$.
\begin{thm}
\label{thm:weakvs}
Assume $u_0$ is a vector field on $Q$ and $u_1=|B|^{-1}b\times(v_\parallel X_0-\nabla\psi_1)$ with $X_0=\textnormal{curl}\, b\times u_0 + \nabla(u_0\cdot b)\neq0$ and $\psi_1$ a flux function on $Q$. The following are equivalent.\vspace{-0.1cm}
\begin{enumerate}
\item $u=u_0+\epsilon u_1$ generates a weak quasisymmetry;
\item $i_{u_0}i_B\Omega=d\psi_0$, $\textnormal{div}\, u_0=0$ and $L_{u_0}|B|=0$.
\end{enumerate}
\end{thm}
\begin{proof}
If $u$ generates a weak quasisymmetry then from Theorems \ref{appvs}-\ref{appvscons} and Lemma \ref{appvslem} we see that the conditions (ii) hold.
In the opposite direction, note first that since $u_0$ is spatial, $L_{u_0}b^\flat$ lies on $Q$ and reduces to $L_{u_0}b^\flat=X_0^\flat$ (Theorem \ref{appvs2}). Now, under the other two conditions, $\textnormal{div}\, u_0=0$ is equivalent to $b\cdot X_0=0$ (Remark \ref{appvsconseq}). Together with $B\cdot\nabla\psi_1=0$, they guarantee that we can define a vector field $u_1$ from (\ref{u1}). Then, Theorem \ref{appvs2} says that $u_0+\epsilon u_1$ is a Hamiltonian $v_\parallel$-symmetry with (\ref{u1c2}) trivially satisfied. Since $X_0$ and $\psi_1$ are independent of $v_\parallel$, it is a weak quasisymmetry.
\end{proof}
\section{Approximate $\mu$-symmetries}
We may as well enlarge the set of symmetries by allowing them to depend on $\mu$, and look for Hamiltonian symmetries on $M$ for specific values of the magnetic moment. However, the next theorem shows that these reduce to phase-space Hamiltonian symmetries.
\begin{thm}
For FGC motion, every approximate $\mu$-dependent Hamiltonian symmetry on $M$ is an approximate Hamiltonian symmetry up to trivial symmetries.
\end{thm}
\begin{proof}
Let $U(x,v_\parallel,\mu,\epsilon)=U_0(x,v_\parallel,\mu)+\epsilon U_1(x,v_\parallel,\mu)$ be the symmetry generator on $M$ with $U_i=(u_i,w_i)$, $i=0,1$, where $w_1=0$ up to trivial symmetries. We work our way partly through Theorems \ref{appvs} and \ref{appvscons} and modify them suitably.
From $L_U\omega\approx0$, the zeroth-order terms give $L_{U_0}\beta=0$, which reduces again to (\ref{vs1}), since $\beta$ is a spatial form on $Q$. But the first-order terms yield $L_{U_0}d(v_\parallel b^\flat)+L_{U_1}\beta=0$, and note that $L_{U_0}(v_\parallel b^\flat)=w_0 b^\flat+v_\parallel L_{u_0}b^\flat$, as $b^\flat$ is spatial too. So now, instead of (\ref{vs2}) the second symmetry condition reads
\begin{equation}
\label{mu1}
d(v_\parallel L_{u_0}b^\flat+w_0 b^\flat)+L_{u_1}\beta=0.
\end{equation}
Take then the $\mu$-component of (\ref{vs1}) and (\ref{mu1}), i.e., contract them with $\partial_\mu$. Similarly to Theorem \ref{appvscons}, we find
\begin{eqnarray}
\label{mu2}
i_{\partial_\mu u_0}i_B\Omega=0,\\
\label{mu3}
-\,(i_{\partial_\mu u_0}b^\flat)dv_\parallel+v_\parallel i_{\partial_\mu u_0}i_c\Omega+(\partial_\mu w_0)b^\flat+i_{\partial_\mu u_1}i_B\Omega=0,
\end{eqnarray}
respectively. The $v_\parallel$-component of (\ref{mu3}) gives $\partial_\mu u_0\cdot b=0$ and together with (\ref{mu2}), that is, $\partial_\mu u_0\times B=0$, they deliver $\partial_\mu u_0=0$. Dotting (\ref{mu3}) with $b$, we also find $\partial_\mu w_0=0$. Then (\ref{mu3}) reduces to $\partial_\mu u_1\times B=0$, which says that $\partial_\mu u_1=0$ up to trivial symmetries. Putting it all together, we conclude that $U$ is independent of $\mu$.
\end{proof}
\section{Relation to magnetohydrostatics}
So far, quasisymmetry and, more generally, symmetries of guiding-centre motion have been treated independently of any other assumption on the magnetic field. In this section, we study approximate Hamiltonian symmetries in the presence of magnetohydrostatics (MHS),
\begin{equation}
\label{mhs}
J\times B=\nabla p,
\end{equation}
where $J=\textnormal{curl}\, B$ is the current density and $p$ is the scalar plasma pressure. This can be viewed as an extra restriction for the magnetic field that can be added to the previous symmetry conditions.
\begin{thm}
\label{appvsmhs}
For an MHS magnetic field with $dp\neq0$ a.e.~on $Q$ and a dense set of irrational surfaces, every approximate Hamiltonian symmetry of FGC motion is an approximate quasisymmetry.
\end{thm}
\begin{proof}
First of all, write (\ref{mhs}) as $i_Bi_J\Omega=dp$ and note that $dB^\flat=i_J\Omega$. For any MHS field
\begin{eqnarray}
\label{rel1}
L_BB^\flat&=i_BdB^\flat+di_BB^\flat=d(p+|B|^2),\\
i_{[J,B]}\Omega&=i_JL_B\Omega-L_Bi_J\Omega=-\,L_BdB^\flat=-\,dL_BB^\flat=0,
\end{eqnarray}
the latter implying $[J,B]=0$, since $\Omega$ is nondegenerate.
Now let $u=u_0+\epsilon u_1$ be the generator of an approximate Hamiltonian symmetry. By Lemma \ref{appvslem}, we have (\ref{vs11}) displayed here, $i_{u_0}i_B\Omega=d\psi_0$, from the first symmetry condition (\ref{vs1}), and by assumption $p=p(\psi_0)$. If $B$ is MHS, then
\begin{equation}
\label{rel2}
L_B(u_0\cdot B)=i_{u_0}L_BB^\flat=i_{u_0}dp+i_{u_0}d|B|^2=0,
\end{equation}
using $[u_0,B]=0$ from Theorem \ref{appvscons}, equation (\ref{rel1}), $i_{u_0}d\psi_0=0$ from (\ref{vs11}), and $i_{u_0}d|B|=0$ from the third symmetry condition (\ref{vs3}).
Next we are going to prove that $u_0$ is independent of $v_\parallel$. To this end, note first that $J$ is tangent to flux surfaces and so
\begin{equation}
\label{rel3}
J=\kappa\,u_0 + \lambda B
\end{equation}
for some functions $\kappa,\lambda$. Crossing with $B$ gives $\kappa=-p'$. Applying $L_B$ to (\ref{rel3}), we deduce $L_B\lambda=0$, because $B$ commutes with $J$ and $u_0$, and $\kappa$ is a flux function. Taking the $v_\parallel$-derivative of (\ref{rel3}), we get
\begin{equation}
\label{rel4}
\kappa\,\partial_{v_\parallel}\!u_0 + (\partial_{v_\parallel}\lambda)B=0,
\end{equation}
since $\kappa,J,B$ are spatial. Finally, the $v_\parallel$-derivative of (\ref{rel2}) gives $\partial_{v_\parallel}\lambda=0$ for $L_B|B|\neq0$. To see this, dot (\ref{rel4}) with $B$ and insert it, so
\begin{equation}
0=L_{\partial_{v_\parallel}}L_B(u_0\cdot B)=L_BL_{\partial_{v_\parallel}}(u_0\cdot B)=-\,\kappa^{-1}(\partial_{v_\parallel}\lambda)L_B|B|^2,
\end{equation}
since $L_B\lambda=0$ and $L_B\kappa=0$. Substituting $\partial_{v_\parallel}\lambda=0$ in (\ref{rel4}), we conclude $\partial_{v_\parallel}\!u_0=0$ for $dp\neq0$.
By density of irrational surfaces, (\ref{rel2}) implies $L_{u_0}(u_0\cdot B)=0$ too. Likewise, $L_{u_0}\lambda=0$ from $L_B\lambda=0$. Thus, $C=u_0\cdot B$ and $\lambda$ are flux functions. Therefore we can write
\begin{equation}
\label{rel5}
L_{u_0}B^\flat=i_{u_0}dB^\flat+di_{u_0}B^\flat=i_{u_0}i_J\Omega+dC=(\lambda+C')d\psi_0.
\end{equation}
Since $u_0$ is spatial, the conditions $i_{u_0}i_B\Omega=d\psi_0$, $[u_0,B]=0$ and $L_{u_0}|B|=0$ imply that $u_0$ generates a local circle action on $Q$. See \cite{BKM}, Definition VIII.1 of a circle action and Theorem VIII.2(i) for a proof, as well as (59) there for the definition of circle-average. Circle-averaging (\ref{rel5}) then gives $0=(\lambda+C')d\psi_0$, because the average of $L_{u_0}$ of anything is zero and $\lambda,C$ are constant along $u_0$. Therefore $L_{u_0}B^\flat=0$.
Together with (\ref{vs3}), this gives $L_{u_0}b^\flat=0$ and so $X_0=0$, and then (\ref{vs2}) reduces to $L_{u_1}\beta=0$. Since $u_0$ is spatial, $\psi_1$ is too from Lemma \ref{appvslem}. Consequently, so is $u_1$ from (\ref{u1}), which completes the proof.
\end{proof}
\section{Discussion}
Compared to \cite{BKM}, Theorem \ref{appqs} shows that approximating quasisymmetry under the guiding-centre precision is to lowest order the same as exact quasisymmetry. While one might hope that the notion of quasisymmetry could be relaxed using approximate spatial (Hamiltonian) symmetries of guiding-centre motion, this theorem shows that it is impossible: if one insists that an approximate Hamiltonian symmetry is spatial then that symmetry must be a quasisymmetry.
This is not entirely unexpected, since the quasisymmetry conditions were derived for all nonzero $q,m,\mu$ \cite{BKM}. Another way of seeing this is to note that $\epsilon$- and $v_\parallel$-terms appear together in the Hamiltonian (or Lagrangian) formulation. Other spatial ways to approximate quasisymmetry could perhaps be more effective, for example, expansions near the magnetic axis.
Among the three conditions of quasisymmetry, $L_ub^\flat=0$ seems the most likely candidate to relax. Not included in earlier treatments, its necessity was first recognised in \cite{BQ}. Theorem \ref{appvs} with Lemma \ref{appvslem} say that an approximate phase-space Hamiltonian symmetry of guiding-centre motion does indeed weaken this condition to $v_\parallel L_{u_0}b^\flat+i_{u_1}i_B\Omega=d\psi_1$. All the same, the remaining two conditions remain unchanged, providing flux surfaces and symmetric field strength. More explicitly, Theorem \ref{appvs2} shows that $L_{u_0}b^\flat$ is basically pushed back to the next-order term of the symmetry, the only restriction between $u_0$ and $\psi_1$ being their velocity dependence. Given Theorem \ref{appvscons} and Remark \ref{appvsconseq}, the arbitrariness of $L_{u_0}b^\flat$ is slightly limited to $i_bL_{u_0}b^\flat=0$, which is equivalent to $\textnormal{div}\, u_0=0$ under the other symmetry conditions.
In conclusion, an approximate Hamiltonian $v_\parallel$-symmetry generated by $u_0+\epsilon u_1$ satisfies the conditions (\ref{vs0}) to zero order and the first-order term is given by (\ref{u1})-(\ref{u1c2}). In the other direction, Theorem \ref{thm:weakvs} shows that a spatial vector field $u_0$ that satisfies (\ref{vs0}) for $L_{u_0}b^\flat\neq0$ with a second spatial flux function $\psi_1$ is the zeroth-order term of a weak quasisymmetry (Definition \ref{dfn:weakvs}) generator with $u_1$ given by (\ref{u1}). We may even extend this and say that a $v_\parallel$-dependent vector field $u_0$ that satisfies (\ref{vs0}) and (\ref{u1c2}) for $L_{u_0}b^\flat\neq0$ and a $v_\parallel$-dependent flux function $\psi_1$, is the zeroth-order term of an approximate Hamiltonian $v_\parallel$-symmetry with $u_1$ given by (\ref{u1}).
Under the typical requirement of magnetohydrostatics though, every Hamiltonian $v_\parallel$-symmetry is spatial and reduces to quasisymmetry again according to Theorem \ref{appvsmhs}.
The approximate constant of motion $K$ coming from an approximate Hamiltonian phase-space symmetry generalises the exact one derived for exact quasisymmetry in two ways. The first one is by introducing approximate flux surfaces via $\psi_1$ and the second one is its nonlinear character in $v_\parallel$, when $u_0$ is not spatial. In any case, there is the question whether the first-order conserved quantity $K$ extends to higher orders leading to an adiabatic invariant. Repeating the symmetry analysis for approximate symmetries of higher order, one imagines building $U$ and therefrom $K$ order by order. As the order increases, variations of $K$ would remain slow over larger time intervals. Ultimately, one would deduce an asymptotic series for $K$, which delivers variations of order $\epsilon$ over very long times, making it an adiabatic invariant assuming convergence.
Rodr\'iguez \textit{et al} \cite{RHB} introduced the notion of a weakly quasisymmetric magnetic field and argued that (for non-MHS fields) weak quasisymmetry implies that FGC motion admits (a) an approximate spatial symmetry, and (b) an approximate constant of motion. Their treatment does not consider first-order corrections to flux surfaces, i.e., they treat the subcase $\psi_1=0$. We partly agree with (b), but disagree with (a). While the weak quasisymmetry conditions for $\psi_1=0$ do indeed imply the existence of an approximate conserved quantity, namely $p$, (Proposition \ref{RHBi}), which is directly analogous to the case of exact quasisymmetry, for spatial $\psi_1\neq0$ they imply a more general approximate conserved quantity, namely $K$ in (\ref{invariant}). Moreover, while the weak quasisymmetry conditions do imply the existence of an approximate symmetry for FGC motion, they do not imply the existence of an approximate spatial symmetry. Instead, our Proposition \ref{RHBs}, based on the one-to-one correspondence between symmetries and invariants, shows that the approximate symmetry associated with weak quasisymmetry even for zero $\psi_1$, namely $u_p$, acts non-trivially on both the guiding-centre position and parallel velocity, i.e., there is no way to regard the symmetry as operating in configuration space alone. Thus, Rodr\'iguez \textit{et al} correctly identify the conserved quantity associated with weak quasisymmetry for zero $\psi_1$, but incorrectly identify the infinitesimal generator of the corresponding phase-space symmetry.
It could be that there are no quasisymmetries (with bounded flux surfaces) other than axisymmetry. The quest for more general symmetries becomes then imperative as a means to relax the quasisymmetry notion. One such option is the longitudinal or second adiabatic invariant coming from a nonlocal symmetry, and the related concept of omnigeneity as a confinement condition (a sufficient one, but is it necessary?). A more direct generalisation, adopted here, was to allow symmetries on phase space instead of restricting to configuration space, and so involve the parallel velocity. Gyrosymmetry after all invokes the perpendicular velocity. Velocity-dependent symmetries could be of use, at least when it comes to guiding-centre integrability. Here we have made a first step of relaxing the requirement of a spatial symmetry by considering parallel-velocity-dependent symmetries within the approximate setup. Others could follow.
\ack
We would like to thank Eduardo Rodr\'iguez, Per Helander and Amitava Bhattacharjee for valuable discussions and helpful interaction. This work was supported by a grant from the Simons Foundation (601970, RSM) and by the Los Alamos National Laboratory LDRD program under project number 20180756PRD4.
\section{Introduction}
\label{sec:intro}
Spectral graph theory studies the relationship between combinatorial and geometric properties of a graph with the
eigenvalues of some matrix associated with it (typically, the adjacency matrix, the combinatorial Laplacian, the signless Laplacian or the
normalised Laplacian). Some concrete results in this direction relate the matching number of a graph (i.e., the maximal number of independent edges of a graph) with the spectrum of the combinatorial Laplacian (cf., \cite{ming:01}) or the eigenvalues of the signless Laplacian with the circumference of the graph (cf., \cite{Wang:13}). Moreover, other results relate the existence of a Hamiltonian cycle in the graph (i.e., a closed path in a connected graph that
contains each vertex exactly once) with bounds on the spectrum of the combinatorial Laplacian (see \cite{heuvel:95,mohar:92}).
Recall that deciding if a graph has a Hamiltonian cycle is an NP-complete problem and, therefore, in this context one usually gives sufficient conditions on the spectrum
of the Laplacian that guarantee the existence of a Hamiltonian cycle. A different approach is given in \cite{butler:10}, where the authors show
that if the non-trivial eigenvalues of the combinatorial Laplacian are sufficiently close to the average degree, then the graph is Hamiltonian.
The discrete Laplacian can be generalised in a natural way to include a magnetic field
which is modelled by a magnetic potential function defined on the set of all directed edges (arcs) of the graph with values in the unit circle,
i.e., $\alpha\colon A\to \R/2\pi\Z$.
Such an operator is called in \cite{shubin:94} the Discrete Magnetic Laplacian (DML for short) and
is denoted by $\Delta_\alpha$ (see also \cite{sunada:94,higuchi-shirai:99}). It includes as special cases the combinatorial Laplacian (if $\alpha=0$) and the signless
Laplacian (if $\alpha_e=\pi$ on all directed edges $e\in A$; see Example~\ref{ex:lapl.weights}).
The analysis of the DML is interesting for theoretical aspects and, also, in
applications to mathematical physics, particularly in solid state and condensed matter physics, where one uses graphs as a model of a solid
(see, e.g., \cite{estrada:15}).
The analysis of the spectrum of the magnetic Laplacian is a particularly rich object of study because the presence of the magnetic potential amplifies the
relationship between the operator and the topology of the graph.
In particular, if the graph is connected and has no cycles (e.g., if the graph is a tree), then the magnetic potential has no effect and the
DML is unitarily equivalent to the usual combinatorial Laplacian.
Besides the evident physical importance of a magnetic field in interaction with a graph, the magnetic potential has many more applications. For example,
the magnetic potential $\alpha$ can be interpreted as a Floquet parameter to analyse the Laplacian on an (infinite) periodic graph
(see \cite{flp:18,fabila-lledo:pre19,flp:20a} as well as \cite{koro-sabu:17,saburova:19}). Moreover, the magnetic potential also plays the role of a spectral control parameter of the system. In fact, using the magnetic potential as a continuous parameter one can modify the spectrum of the Laplacian and, for instance,
raise its first eigenvalue or control the size of spectral gaps in periodic structures (see \cite{fabila-lledo:pre19}).
Nevertheless, in the combinatorics literature, the DML is rarely considered since, in principle, the magnetic field is an additional structure of the graph.
The aim of this article is to show that the DML with combinatorial weights is useful to address certain questions in discrete mathematics. In particular,
we explore the relation between the spectrum of the discrete magnetic Laplacian and two combinatorial properties of the graph:
the matching number and the existence of Hamiltonian cycles.
We extend some of the results in \cite{heuvel:95,ming:01,Wang:13} that include statements involving the spectrum of the combinatorial or signless Laplacian.
Moreover, the magnetic potential allows us to enlarge the spectral obstructions to the existence of a Hamiltonian cycle in the graph or the existence of a perfect matching.
This article is structured as follows: In \Sec{graph.theory}, we introduce the notation for the main discrete structures
needed. We consider finite and simple graphs (i.e., graphs with a finite number of vertices and with no multiple edges or loops). We include in this section the definition
of the DML with combinatorial weights and mention a spectral preorder that controls the spectral spreading of the eigenvalues under edge deletion. We refer to
\cite{flp:20a} for a general analysis of this preorder for multigraphs with general weights and, also, for additional motivation.
In Section~\ref{sec:match}, we introduce some relations
between the matching number of the graph and the eigenvalues of the magnetic Laplacian. Moreover, we generalise some spectral upper and lower
bounds stated in \cite{heuvel:95,ming:01,Wang:13} for the combinatorial or signless Laplacian. In \Sec{ham},
we address the problem of giving spectral obstructions for the graph being Hamiltonian.
In particular, we present examples of graphs where the obstructions given by the usual (or signless) Laplacian in \cite{heuvel:95,ming:01,Wang:13} do not apply.
Nevertheless, for certain values of the magnetic potential the DML provides spectral obstructions for the graph to be Hamiltonian.
\section{Graph theory and spectral preorder}
\label{sec:graph.theory}
\subsection{Discrete graphs}
\label{subsec:disc.graphs}
A \emph{discrete (oriented) graph} (or, simply, a \emph{graph}) $G=(V,A)$
consists of two disjoint and finite sets $V=V(G)$ and
$A=A(G)$, the set of \emph{vertices} and the set of all \emph{oriented edges}, respectively,
and a \emph{connection map} $\map {\bd=\bd^G} A {V \times V}$, where
$\bd e = (\bd_-e,\bd_+e)$ denotes the pair of the \emph{initial} and
\emph{terminal} vertex. We also say that $e$
\emph{starts} at $\bd_-e$ and \emph{ends} at $\bd_+e$. We assume that
each oriented edge $e$ (also called \emph{arrow} or \emph{arc}) comes with its oppositely
oriented edge $\bar e$, i.e., that there is an involution
$\map {\bar \cdot}AA$ such that $e \ne \bar e$ and
$\bd_\pm \bar e = \bd_\mp e$ for all $e \in A$.
If $V({G})$ has $n \in \N$
vertices, we say that ${G}$ is a \emph{finite} graph of \emph{order}
$n$ and we write $\card G=\card{V({G})}=n$.
We denote by
\begin{equation*}
A_v := \set{e \in A}{\bd_- e=v}
\end{equation*}
the set of all arcs starting at $v$ (alternatively we may also write $A_v^{G}$).
We define the \emph{degree} of the vertex $v$ in the graph $G=(V,A)$ by the cardinality of $A_v$, i.e.,
\begin{equation*}
\deg(v) := \deg^G(v) = \card{A_v}.
\end{equation*}
The \emph{inversion map} gives rise to an action of $\Z_2=\Z/2\Z$ on
the set of arcs $A$. An (unoriented) \emph{edge} is an element of the
orbit space $E=A/\Z_2$, i.e., an edge is obtained by identifying the
arc $e$ and $\ol e$. We denote an (unoriented) graph by $G=(V,E)$ and
set $\bd [e]=\{\bd_-e,\bd_+e\}$ for $[e] \in E$. We say that
$[e_1],[e_2] \in E$ \emph{share a vertex} if
$\{\bd_- e_1,\bd_+e_1\} \cap \{\bd_- e_2,\bd_+e_2\} \ne \emptyset$.
To simplify notation, we mostly write $e \in E$ instead of $[e] \in E$
for $e \in A$. Two edges $e_1, e_2 \in E$ in a graph $G=(V,E)$ are
\emph{independent} if they are not loops and if they do not share a
vertex. A \emph{matching} in a graph is a set of pairwise independent
edges. The \emph{matching number} of $G$, denoted by $\mu(G)$, is the
maximum number of pairwise independent edges in
$G$. A vertex $v$ \emph{belongs to a matching $M$} if
$v \in \bigcup_{e \in M} \bd e$. A \emph{perfect matching} $M$ is a
matching where all vertices of $V$ belong to $M$, i.e., where
$\bigcup_{e \in M} \bd e = V$. A graph is \emph{matchable} if it has
a perfect matching. Recall that if a tree is matchable then the
number of its vertices must be even.
\subsection{Magnetic potentials}
\label{subsec:mag.field.pot}
Let $G=(V,A)$ be a graph and consider the group $R=\R/2\pi\Z$ with the operation written additively.
We consider also the corresponding \emph{cochain groups} of $R$-valued functions on vertices and edges respectively:
\begin{equation*}
C^0(G,R):= \left\{\;\map \xi V R\;\right\}
\qquadtext{and}
C^1(G,R)
:= \bigset{\map \alpha A R}
{\forall e \in A\colon \alpha_{\bar e}=-\alpha_e}\;.
\end{equation*}
The \emph{coboundary operator} mapping $0$-cochains to $1$-cochains is given by
\begin{equation*}
\map \de {C^0(G,R)}{C^1(G,R)},
\quadtext{where}
(\de \xi)_e:=\xi(\bd_+e)-\xi(\bd_-e)\;.
\end{equation*}
\begin{definition}
\label{def:mag.pot}
Let $G=(V,A)$ be a graph and $R=\R/2\pi\Z$. \indent
\begin{enumerate}
\item An \emph{$R$-valued magnetic potential} $\alpha$ is an element
of $C^1(G,R)$.
\item We say that $\alpha, \wt \alpha \in C^1(G,R)$ are
\emph{cohomologous} or \emph{gauge-equivalent} and denote this as
$\wt \alpha \sim \alpha$ if $\wt \alpha-\alpha$ is exact, i.e., if
there is $\xi \in C^0(G,R)$ such that $\de \xi=\wt \alpha-\alpha$,
and $\xi$ is called the \emph{gauge}.
\item We say that $\alpha$ is \emph{trivial}, if it is cohomologous to $0$.
\end{enumerate}
\end{definition}
In the sequel, we will omit for simplicity the Abelian group $R$, e.g.\ we will write $C^1(G)$ instead of $C^1(G,R)$ for the
group of magnetic potentials, etc. We refer to \cite[Section~5]{lledo-post:08} for
additional motivation on homologies of graphs (see also \cite{mathai-yates:02}
and references therein for a version of these homologies twisted by the magnetic potential).
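As a simple illustration of gauge equivalence, note that every magnetic potential on a tree is trivial: fixing a root $v_0$ and defining $\xi(v)$ as the sum of the values of $\alpha$ along the unique path from $v_0$ to $v$ gives $\de \xi=\alpha$. In particular, on trees all the magnetic Laplacians introduced below are unitarily equivalent to the one with $\alpha=0$, a fact used in \Sec{match}.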
\subsection{The magnetic Laplacian and spectral preorder}
\label{subsec:disc.mag.lap}
In this section, we will introduce the discrete magnetic Laplacian associated to a graph $G$ with a magnetic potential $\alpha$. We call it simply a
\emph{magnetic graph} and denote it by $\mathbf{G}=(G,\alpha)$. We will also introduce a spectral
relation between the magnetic Laplacian associated to different graphs.
Given a finite graph $G$, we define the Hilbert space $\ell_2(V):= \{ \map \phi { V} \C\}$ (which is isomorphic to $\C^{|V|}$)
with the usual inner product
\begin{equation*}
\left\langle f,g\right\rangle_{\ell_2(V)}
:=\sum_{v \in V} {f(v)} \overline{g(v)} \;.
\end{equation*}
Note that functions on $V$ may be interpreted as $0$-forms (while functions on edges are $1$-forms).
\begin{definition}[Discrete magnetic Laplacian]
\label{def:dml}
Let $G=(V,A)$ be a graph and $\alpha \in C^1(G)$ an $R$-valued magnetic potential, i.e., a map
$\map \alpha A R$ such that $\alpha_{\bar e}=-\alpha_e$ for all $e\in A$,
where $R=\R/2\pi\Z$. The \emph{(discrete) magnetic Laplacian} is an operator
\begin{equation}
\label{eq:def.disc.lapl}
\map{\Delta_\alpha }
{\lsqr { V}} {\lsqr { V}}\;,
\end{equation}
that acts as
\begin{equation}
\label{eq:disc.lapl}
(\Delta_\alpha \phi)(v)
= \sum_{e \in A_v}
\bigl(\phi(v)-\e^{\im \alpha_e} \phi(\bd_+e)\bigr)
= \deg(v) \phi(v)
- \sum_{e \in A_v}
\e^{\im \alpha_e} \phi(\bd_+e).
\end{equation}
\end{definition}
The DML can be seen as a second order discrete operator, and one can show
that $\Delta_\alpha$ is positive semi-definite with spectrum contained in the interval $[0,2\max_{v\in V}\mathrm{deg}(v)]$
(see, e.g., \cite[Section~2.3]{flp:18}). If we need to stress the dependence of the DML on the graph $G$, we will write the Laplacian as $\Delta_\alpha^G$.
If $G=(V,A)$ is a graph of order $|V({G})|=n$ and magnetic potential $\alpha$, we denote the spectrum
of the corresponding magnetic Laplacian by $\sigma(\Delta_\alpha):=\{ \lambda_k(\Delta_\alpha) \mid
k=1,\dots,n\}$. Moreover, we will write the eigenvalues in ascending order
and repeated according to their multiplicities, i.e.,
\begin{equation*}
0 \leq
\lambda_1(\Delta_\alpha)
\leq \lambda_2(\Delta_\alpha)
\leq \cdots \leq \lambda_n(\Delta_\alpha)\;.
\end{equation*}
\begin{example}[Special cases of the magnetic Laplacian]
\label{ex:lapl.weights}
\indent
\begin{enumerate}
\item
\label{lapl.comb}
If $\alpha\sim 0$, then $\Delta^G_\alpha$ is unitarily equivalent with the usual combinatorial Laplacian $\Delta^G_0$ (without magnetic potential).
\item
\label{lapl.signless}
Choosing $\alpha_e=\pi$ for all $e \in A$, then $\Delta^G_\alpha=\Delta^G_\pi$ is the
\emph{signless} Laplacian.
\item If we choose $R=\{0,\pi\}$,
then the magnetic potential is also called \emph{signature}, and $(G,\alpha)$
is called a \emph{signed graph} (see, e.g.~\cite{llpp:15} and
references therein).
\end{enumerate}
\end{example}
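As a small worked example, consider the cycle $C_3$ with constant magnetic potential $\alpha_e=t$ on each arc (and $-t$ on the reversed arcs). In the standard vertex basis the DML is the circulant matrix
\begin{equation*}
\Delta_t^{C_3}=
\begin{pmatrix}
2 & -\e^{\im t} & -\e^{-\im t}\\
-\e^{-\im t} & 2 & -\e^{\im t}\\
-\e^{\im t} & -\e^{-\im t} & 2
\end{pmatrix},
\qquad
\sigma(\Delta_t^{C_3})=\bigl\{\,2-2\cos\bigl(t+\tfrac{2\pi k}{3}\bigr)\;:\;k=0,1,2\,\bigr\}\;,
\end{equation*}
which recovers $\{0,3,3\}$ for $t=0$ (combinatorial Laplacian) and $\{1,1,4\}$ for $t=\pi$ (signless Laplacian). Note that the spectrum depends on $t$ only through the flux $3t$ modulo $2\pi$, in accordance with gauge equivalence.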
We present a spectral preorder for graphs with magnetic potential which is a particular case of the preorder studied for general
weights and multigraphs in \cite{flp:20a}. We refer to this article for additional references, motivation and applications.
\begin{definition}[Spectral preorder of magnetic graphs]
\label{def:spectral-order}
Let $\mathbf{G}=\Wfull$ and $\mathbf{G}'=(G',\alpha')$ be two magnetic graphs with $\card{G}=\card{G'}=n$. We say that
\emph{$\mathbf{G}$ is (spectrally) smaller than $\mathbf{G}'$ with shift $r$}
and denote this by $\mathbf{G}\less[r]\mathbf{G}'$
if
\begin{equation*}
\lambda_k(\Delta_\alpha^{G}) \le \lambda_{k+r}(\Delta_{\alpha'}^{G'})
\qquad \text{for all $1 \le k \le n-r$.}
\end{equation*}
If $r=0$ we write again simply $\mathbf{G} \less \mathbf{G}'$.
\end{definition}
The relation $\less$ is a preorder (i.e., a reflexive and transitive relation) on the class of magnetic graphs (cf. \cite[Proposition~3.11]{flp:20a}).
Moreover, $\mathbf{G}' \less[s] \mathbf{G} \less[r] \mathbf{G}' $ means that $\mathbf{G}' \less[s] \mathbf{G}$ and $ \mathbf{G} \less[r] \mathbf{G}' $. In particular,
if $s=0$ and $r=1$, the relation $\mathbf{G}' \less \mathbf{G} \less[1] \mathbf{G}' $ describes the usual interlacing of eigenvalues:
\begin{equation*}
\lambda_1(\Delta_{\alpha'}^{G'}) \le \lambda_1(\Delta_\alpha^G) \le \lambda_2(\Delta_{\alpha'}^{G'}) \le \lambda_2(\Delta_\alpha^G)
\le \dots \le \lambda_{n-1} (\Delta_{\alpha'}^{G'}) \le \lambda_{n-1}(\Delta_\alpha^G) \le \lambda_n(\Delta_{\alpha'}^{G'}) .
\end{equation*}
In the following result we apply the spectral preorder to control the
spectral spreading of the DML due to edge deletion and keeping the
same magnetic potential on the remaining edges. Recall that for a
graph ${G}=(V,E)$ and an edge $e_0\in E$, we denote by ${G}-e_0$ the
graph given by ${G}-e_0=(V, E \setminus \{e_0\})$.
\begin{theorem}
\label{thm:delete-edge}
Let $\mathbf{G}=\Wfull$ and $\mathbf{G}'=(G',\alpha')$ be two magnetic graphs where $G'=G-e_0$ for some $e_0\in E(G)$, and
$\alpha_e=\alpha'_e$ for all $e\in A(G')$, then
\[\mathbf{G}' \less \mathbf{G} \less[1] \mathbf{G}' \;.\]
\end{theorem}
Applying the previous relation several times, it is clear that if $G'$ is obtained from $G$ by deleting $r$ edges, i.e., if $G'=G-\{e_1,\dots, e_r\}$, then
\begin{equation}\label{eq:delete-r}
\mathbf{G}' \less \mathbf{G} \less[r] \mathbf{G}' \;.
\end{equation}
The preceding theorem generalises to the DML with arbitrary magnetic potential some known interlacing results,
namely \cite[Lemma~2]{heuvel:95} (for combinatorial and signless Laplacians)
or \cite[Theorem~3.2]{mohar:91} and~\cite[Corollary~3.2]{fiedler:73} (for combinatorial Laplacian). The general case for any magnetic potential
and arbitrary weights is proved in~\cite[Theorem~4.1]{flp:20a}, for the normalised Laplacian in \cite[Theorem~1.1]{butler:07} and for
the standard Laplacian in \cite[Theorem~2.2]{chen:04}.
\section{Matching number and the discrete magnetic Laplacian}
\label{sec:match}
In this section, we relate some bounds for the eigenvalues of the magnetic Laplacian with the matching number of the underlying graph.
In particular, we give a spectral obstruction provided by the DML to the existence of a perfect matching of the graph.
In certain examples this obstruction is not effective for the combinatorial and signless Laplacian, but it works for the DML with a certain non-trivial magnetic potential.
In this sense the presence of the magnetic potential (thought of as a continuous parameter)
makes the spectral obstruction applicable for many more cases (see Example~\ref{exa:nonhamiltonian}).
We begin considering the case of trees, so that any DML is unitarily equivalent to usual combinatorial Laplacian.
The first result essentially says that if $T$ is a tree with matching number $\mu(T)$, then the first $\mu(T)+1$ eigenvalues
are smaller than or equal to $2$ and at least $\mu(T)$ eigenvalues are greater than or equal to $2$. Recall that we write the eigenvalues
in ascending order and repeated according to their multiplicities.
\begin{theorem}\label{thm:matching}
Let $T$ be a tree on $n$ vertices and matching number $\mu(T)$, then
\[\lambda_{\mu(T)+1}(\Delta^T)\leq 2\leq \lambda_{n-\mu(T)+1}(\Delta^T)\;.\]
\end{theorem}
\begin{proof}
Consider the graph $G$ as the disjoint union of $\mu(T)$ complete graphs $K_2$ and $n-2\mu(T)$ many isolated vertices, i.e.,
\[
G=\left(\bigcup_{i=1}^{\mu(T)} K_2\right)\cup \left(\bigcup_{i=1}^{n-2\mu(T)} K_1\right)\;.
\]
Then $G$ is a graph on $n$ vertices with $\sigma(\Delta^G)=\{0^{(n-\mu(T))},2^{(\mu(T))}\}$, where the superscripts denote the
multiplicities of each eigenvalue. In particular,
\begin{equation}\label{eq:tree2}
\lambda_{n-\mu(T)+k}(\Delta^G)=2 \quadtext{for all } k\in\{1,2,\dots,\mu(T)\}\;.
\end{equation}
The graph $G$ has $\mu(T)$ edges and is obtained from
the tree $T$ (which has $n-1$ edges) by deleting the $(n-1-\mu(T))$ edges that do not belong to the matching. Then, by \Thm{delete-edge}, we obtain
\begin{equation}\label{eq:tree3}
{G}\less T \less[n-\mu(T)-1] {G}\;.
\end{equation}
Taking $k=1$ in \Eq{eq:tree2} together with the left relation of \Eq{eq:tree3}, it follows that $2=\lambda_{n-\mu(T)+1}(\Delta^G)\leq \lambda_{n-\mu(T)+1}(\Delta^T)$.
Similarly, applying the right relation of \Eq{eq:tree3} to the case $k=\mu(T)$ in \Eq{eq:tree2}, we obtain $\lambda_{\mu(T)+1}(\Delta^T)\leq \lambda_{n}(\Delta^G)=2$.
\end{proof}
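For instance, the path $T=P_3$ on three vertices has $\mu(T)=1$ and $\sigma(\Delta^T)=\{0,1,3\}$, so that $\lambda_{\mu(T)+1}(\Delta^T)=1\leq 2\leq 3=\lambda_{n-\mu(T)+1}(\Delta^T)$, in agreement with \Thm{matching}.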
We mention next some easy consequences of the preceding theorem. Recall first
that if $T$ is matchable, then one and only one eigenvalue is equal to $2$ (see \cite[Theorem $2$]{ming:01}). This fact follows immediately from the preceding theorem.
\begin{corollary}\label{cor:matching1}
Let $T$ be a matchable tree on $n$ vertices, then $n$ is even and
\[
\lambda_{\frac{n}{2}+1}(\Delta^T)=2 \;.
\]
\end{corollary}
\begin{proof}
If $T$ is a matchable tree on $n$ vertices, then $n$ is an even number and $\mu(T)=\frac{n}{2}$. By \Thm{matching}
we obtain that $\lambda_{\frac{n}{2}+1}(\Delta^T)\leq 2\leq \lambda_{\frac{n}{2}+1}(\Delta^T)$ and hence $\lambda_{\frac{n}{2}+1}(\Delta^T)=2$.
\end{proof}
\begin{corollary}\label{cor:matching2}
Let $T$ be a matchable tree on $n$ vertices, then $n$ is even and
\[
\lambda_{\frac{n}{2}}(\Delta^T)<2<\lambda_{\frac{n}{2}+2}(\Delta^T) \;.
\]
\end{corollary}
\begin{proof}
Because $T$ is a matchable tree, by \Cor{matching1}, it follows that $\lambda_{\frac{n}{2}+1}(\Delta^T)=2 $ is an eigenvalue. In general, if $T$ is a tree, we
know from \cite[Theorem~2.1(ii)]{grone:90} that the multiplicity of any integer eigenvalue larger than $1$ is exactly equal to one and, therefore,
the result follows.
\end{proof}
We use next the spectral preorder to show the following spectral bounds. Note that
the second inequality is already shown in \cite{ming:01}.
\begin{theorem}\label{thm:matchingtree}
Let $T$ be a tree on $n$ vertices. If $n>2\mu(T)$, then
\[\lambda_{\mu(T)+1}(\Delta^T) < 2< \lambda_{n-\mu(T)+1}(\Delta^T) \;.\]
\end{theorem}
\begin{proof}
From \Thm{matching} we already have $\lambda_{\mu(T)+1}(\Delta^T)\leq 2\leq \lambda_{n-\mu(T)+1}(\Delta^T)$.
Suppose that $n$ is an odd number. If the value $2$ is an eigenvalue, then it is an integer eigenvalue. From~\cite[Theorem~2.1(i)]{grone:90} we conclude
that $2$ divides $n$, giving a contradiction. Therefore, $2$ is not an eigenvalue and it follows that $\lambda_{\mu(T)+1}(\Delta^T) < 2 < \lambda_{n-\mu(T)+1}(\Delta^T)$.
Suppose that $n$ is an even number.
If $n>2\mu(T)$, then there exists a vertex $v_0\in V(T)$ such that $v_0$ does not belong to the maximum matching.
Let $r=\deg(v_0)$ and denote by $T-v_0$ the graph obtained from $T$ by deleting the vertex $v_0$ and all $r$ edges adjacent to $v_0$.
Denote the connected components of $T-v_0$ by $T_1, T_2, \cdots, T_r$. We can assume that $T_1$ has an odd number of vertices, since otherwise $n=\sum_{i=1}^r \card{V(T_i)}+1$ would be odd, giving a contradiction.
Hence, the graph $T-V(T_1)$ also has an odd number of vertices. Define $n_1=\card{T_1}$, which is an odd number; by the previous (odd) case it follows that
\[
\lambda_{\mu(T_1)+1}(\Delta^{T_1}) < 2 < \lambda_{n_1-\mu(T_1)+1}(\Delta^{T_1})\;.
\]
Similarly, $\card{T-V(T_1)}=n-n_1$ is an odd number and, again,
\[
\lambda_{\mu(T-V(T_1))+1}(\Delta^{T-V(T_1)}) < 2 < \lambda_{n-n_1-\mu(T-V(T_1))+1}(\Delta^{T-V(T_1)})\;.
\]
From \Thm{delete-edge} we obtain for the disjoint union of graphs
\[
T_1\cup (T-V(T_1)) \less T \less[1] T_1\cup (T-V(T_1) )
\]
as $T_1 \cup (T-V(T_1))$ is obtained from $T$ by deleting one edge.
Moreover, we have $\mu(T)=\mu(T_1)+\mu(T-V(T_1))$ and we conclude
$\lambda_{\mu(T)+1}(\Delta^T) < 2< \lambda_{n-\mu(T)+1}(\Delta^T)$.
\end{proof}
We focus next on general finite simple graphs $G$. It is a well known fact that for any matching $M$ of $G$ there exists a spanning tree $T$ of $G$ which includes all the edges of $M$.
\begin{corollary}\label{cor:matG}
Let $G$ be a connected graph on $n$ vertices, $m$ edges and matching number $\mu(G)$. For any magnetic potential $\alpha$ we have
\[
\lambda_{\mu(G)+n-m}(\Delta^G_\alpha)\leq 2\leq \lambda_{n-\mu(G)+1}(\Delta^G_\alpha)\;.
\]
\end{corollary}
\begin{proof}
Consider a spanning tree $T$ of $G$ with the same matching number as $G$, i.e., $\mu(T)=\mu(G)$. Then, $T$ is a graph (with $n-1$ edges) obtained
from $G$ by deleting $m-(n-1)$ edges and by \Thm{delete-edge} we obtain
\[
T \less \mathbf{G} \less[m-n+1] T\;.
\]
By \Thm{matching} together with the previous relation and the fact that all magnetic Laplacians on a tree are unitarily equivalent (see \Exenum{lapl.weights}{lapl.comb}) we have
\[
2\leq \lambda_{n-\mu(T)+1}(\Delta^T)\leq \lambda_{n-\mu(G)+1}(\Delta^G_\alpha)\quadtext{and} \lambda_{\mu(G)+n-m}(\Delta^G_\alpha)\leq \lambda_{\mu(T)+1}(\Delta^T)\leq 2
\]
for any magnetic potential $\alpha$ on $G$ concluding the proof.
\end{proof}
The next corollary gives a simple family of spectral obstructions to the graph being matchable.
\begin{corollary}\label{cor:nonmatch}
Let $G$ be a graph on $n$ vertices where $n$ is even. If there exists a magnetic potential $\alpha$ such that
\[
\lambda_{\frac{n}{2}+1}(\Delta^G_\alpha)<2\;,
\]
then $G$ is not matchable.
\end{corollary}
\begin{proof}
Suppose that $G$ is a matchable graph, then $\mu(G)=\frac{n}{2}$ and from \Cor{matG} we conclude
\[
2\leq \lambda_{n-\mu(G)+1}(\Delta^G_\alpha)=\lambda_{\frac{n}{2}+1}(\Delta^G_\alpha)\;
\]
which gives a contradiction. Therefore, $G$ is not matchable.
\end{proof}
\begin{example}\label{exa:nonhamiltonian}
Consider the graph $G$ given in \Fig{1}. The spectrum of the combinatorial Laplacian (i.e., with magnetic potential $\alpha\sim 0$) is
\[
\sigma(\Delta_0^G)= \left\{ 0,3-\sqrt{5},1,2,3,\sqrt{5}+3 \right\} \;.
\]
\begin{figure}[h]
\centering
{
\begin{tikzpicture}[baseline,vertex/.style={circle,draw,fill, thick, inner sep=0pt,minimum size=2mm},scale=.6]
\node (2) at (2,3) [vertex,label=below:] {};
\node (3) at (4,5) [vertex,label=left:] {};
\node (4) at (4,1) [vertex,label=below:] {};
\node (5) at (6,3) [vertex,label=left:] {};
\node (1) at (8,5) [vertex,label=below:] {};
\node (6) at (8,1) [vertex,label=below:] {};
\draw[-](5) edge node[below] {} (6);
\draw[-](1) edge node[below] {} (5);
\draw[-](2) edge node[left] {} (3);
\draw[-](2) edge node[below] {} (4);
\draw[-](3) edge node[right] {} (5);
\draw[-](4) edge node[below] {} (5);
\end{tikzpicture} }
\caption{Example of a bipartite graph that is not matchable. The spectral obstruction holds only for non-trivial values of the magnetic potential.}
\label{fig:1}
\end{figure}
Observe that the graph $G$ is bipartite, so that the signless combinatorial Laplacian is unitarily equivalent to the usual combinatorial Laplacian. Therefore,
if $\alpha_e=\pi$ for each edge, then the spectra of the combinatorial and signless combinatorial Laplacians coincide, i.e., $\sigma(\Delta_0^G)=\sigma(\Delta_\pi^G)$.
In particular, $\lambda_{\frac{n}{2}+1}(\Delta_0^G)=\lambda_{\frac{n}{2}+1}(\Delta_\pi^G)=\lambda_{4}(\Delta_0^G)=2$ and, therefore, the eigenvalues of the combinatorial and signless Laplacians provide no obstruction to the matchability of $G$. But, if we consider the magnetic potential $\alpha'$ with value equal to $\pi$ only on one edge of the cycle and zero everywhere else, then the spectrum is given by
\[
\sigma(\Delta_{\alpha'}^G)\approx\left\{ 0.23, 0.58, 1, 1.63, 3.41, 5.12\right\}\;.
\]
In particular, $\lambda_{4}(\Delta_{\alpha'}^G)<2$ and therefore we can conclude from \Cor{nonmatch} that $G$ is not matchable.
In this example any non-trivial magnetic potential $\alpha\nsim 0$ provides a spectral obstruction since $\lambda_{4}(\Delta_{\alpha}^G)<2$ as the following
plot of the eigenvalue $\lambda_4(\Delta_\alpha^G)$ for different values of $\alpha$ shows.
\begin{center}
\begin{figure}[hbt!]
\includegraphics[scale=.2]{./Fig1.png}
\caption{Plot of the eigenvalue $\lambda_4(\Delta_\alpha^G)$ as a function of a magnetic potential $\alpha_e=t$ for one edge of the cycle in the graph in \Fig{1} and zero everywhere else.}
\end{figure}
\end{center}
\end{example}
The next proposition generalises to the DML with arbitrary magnetic potential $\alpha$ results
known for the combinatorial and signless Laplacians (see \cite[Theorem~4]{ming:01} and \cite[Lemma~2.4]{Wang:13}).
\begin{proposition}\label{prp:generalmatch}
Let $G$ be a connected graph with $n$ vertices and $m$ edges. Moreover, let $\alpha$ be a magnetic potential on $G$.
\begin{enumerate}
\item If $n>2\mu(G)$, then $\lambda_{\mu(G)+n-m}(\Delta^G_\alpha)<2< \lambda_{n-\mu(G)+1}(\Delta^G_\alpha)$.
\item If $n=2\mu(G)$, then $\lambda_{\frac{3n}{2}-m-1}(\Delta^G_\alpha)<2< \lambda_{\frac{n}{2}+2}(\Delta^G_\alpha)$.
\end{enumerate}
\end{proposition}
\begin{proof}
Consider a spanning tree $T$ of $G$ with the same matching number as $G$, i.e., $\mu(T)=\mu(G)$. Then, $T$ is a graph (with $n-1$ edges) obtained
from $G$ by deleting $m-(n-1)$ edges and by \Thm{delete-edge} we obtain
\begin{equation}\label{eq:generalmatching}
T \less \mathbf{G} \less[m-n+1] T\;.
\end{equation}
First note that the preceding relations in \Eq{eq:generalmatching} together with \Thm{matchingtree} give
\[
\lambda_{\mu(G)+n-m}(\Delta^G_\alpha)\leq \lambda_{\mu(T)+1}(\Delta^T) < 2< \lambda_{n-\mu(T)+1}(\Delta^T) \leq \lambda_{n-\mu(G)+1}(\Delta^G_\alpha)\;.
\]
Second, \Cor{matching2} together with \Eq{eq:generalmatching} gives
\[
\lambda_{\frac{3n}{2}-m-1}(\Delta^G_\alpha)\leq \lambda_{\frac{n}{2}}(\Delta^T)<2<\lambda_{\frac{n}{2}+2}(\Delta^T)\leq \lambda_{n-\mu(G)+2}(\Delta^G_\alpha)
\]
which concludes the proof.
\end{proof}
\section{Hamiltonian graphs and the magnetic potential}
\label{sec:ham}
A cycle which contains every vertex of the graph is called a \emph{Hamiltonian cycle} and a graph
is said to be \emph{Hamiltonian} if it has a Hamiltonian cycle. Some results that connect
the existence of a Hamiltonian cycle in the graph and bounds of the eigenvalues of the combinatorial
Laplacian are given in \cite{heuvel:95,mohar:92}. In particular, the next result generalises the main results of Theorems~$1$ and~$1'$ in~\cite{heuvel:95}.
\begin{theorem}\label{thm:ham-relations}
Let $\mathbf{G}=({G},\alpha)$ be a magnetic graph with $n$ vertices, $m$ edges and with a magnetic potential $\alpha$. If ${G}$ contains a Hamiltonian cycle $C_n$, then
\[
\mathbf{C}_n \less \mathbf{G} \less[m-n] \mathbf{C}_n
\]
where $\mathbf{C}_n=(C_n,\alpha')$ is the magnetic graph with $\alpha'$ denoting the restriction of $\alpha$
to the edges of the cycle, i.e., $\alpha'=\alpha\restr{E(C_n)}$.
\end{theorem}
\begin{proof}
Since $C_n$ is obtained from $G$ by deleting $m-n$ edges, applying Eq.~(\ref{eq:delete-r}) we obtain the required inequalities (see also \Thm{delete-edge}).
\end{proof}
As a consequence of the previous result we mention a first spectral obstruction of the DML to the existence of a Hamiltonian cycle.
\begin{corollary}\label{cor:main}
Let $G=(V,A)$ be a graph on $n$ vertices. Assume that there exists an index $k\in\{1,2,\dots,n\}$ and a constant magnetic potential $\alpha=t$ (i.e., there is $t \in \R/2\pi\Z$ with $\alpha_e=t$ for all $e\in A$) such that
\[
\lambda_k(\Delta^{C_n}_{\alpha'})>\lambda_k(\Delta^G_\alpha),
\]
where $C_n$ is the cycle on $n$ vertices and $\alpha'=t$ is the magnetic potential on $C_n$. Then $G$ is non-Hamiltonian.
\end{corollary}
\begin{proof}
If $G$ were Hamiltonian, \Thm{ham-relations} would imply $\mathbf{C}_n\less \mathbf{G}$, i.e., $\lambda_k(\Delta^{C_n}_{\alpha'})\leq\lambda_k(\Delta^G_\alpha)$ for all $k$, contradicting the assumption.
\end{proof}
\begin{example}
Consider the graph $G$ on $6$ vertices given in \Fig{2} with trivial magnetic potential $\alpha=0$, i.e., $\mathbf{G}=(G,0)$. Then the spectrum
of the combinatorial Laplacian is given by:
\[
\sigma(\Delta_0^G)=\left\{0,\frac{1}{2} \left(7-\sqrt{17}\right),2,3,4,\frac{1}{2} \left(\sqrt{17}+7\right)\right\}\approx \{0,1.43,2,3,4,5.56\}\;.
\]
The spectrum of the cycle on $6$ vertices $C_6$ with magnetic potential $\alpha\sim 0$ is
\[
\sigma(\Delta_0^{C_6})=\{0,1,1,3,3,4\}\;.
\]
\begin{figure}[h]
\centering
{
\begin{tikzpicture}[baseline,vertex/.style={circle,draw,fill, thick, inner sep=0pt,minimum size=2mm},scale=.6]
\node (2) at (2,3) [vertex,label=below:] {};
\node (3) at (4,5) [vertex,label=left:] {};
\node (4) at (4,1) [vertex,label=below:] {};
\node (5) at (6,3) [vertex,label=left:] {};
\node (7) at (4,3) [vertex,label=left:] {};
\node (1) at (8,5) [vertex,label=below:] {};
\draw[-](5) edge node[below] {} (7);
\draw[-](2) edge node[below] {} (7);
\draw[-](2) edge node[left] {} (3);
\draw[-](2) edge node[below] {} (4);
\draw[-](3) edge node[right] {} (5);
\draw[-](4) edge node[below] {} (5);
\draw[-](1) edge node[below] {} (5);
\draw[-](1) edge node[below] {} (3);
\end{tikzpicture}
}
\caption{Example of a graph that can be shown to be non-Hamiltonian using the eigenvalues of the magnetic Laplacian.}
\label{fig:2}
\end{figure}
It follows that $\mathbf{C}_6\less \mathbf{G}$ and the spectrum of the
combinatorial Laplacian gives no obstruction to the existence of a Hamiltonian cycle in $G$.
Similarly, the analysis for the signless Laplacian gives no obstruction.
In fact, consider the graph with magnetic potential $\alpha_e=\pi$ for all $e\in A$, i.e., $\mathbf{G}=(G,\alpha=\pi)$. Then, the eigenvalues
of the signless Laplacian are
\[
\sigma(\Delta_\pi^G)\approx \{0.30,1.22,2,3,3.58,5.87\}\;,
\]
while the spectrum of the $C_6$ for the signless Laplacian coincides with the spectrum of the usual Laplacian since $C_6$ is bipartite, i.e.,
\[
\sigma(\Delta_\pi^{C_6})=\{0,1,1,3,3,4\}\;.
\]
Again, it follows that $\mathbf{C}_6\less \mathbf{G}$ and the spectrum of the
signless Laplacian gives no obstruction.
Consider now the constant magnetic potential $\alpha_e=\pi/2$ for all $e\in A$. The spectrum of the corresponding DML provides the obstruction.
In fact, the spectrum of the magnetic Laplacian associated to the magnetic graph $\mathbf{G}=(G,\alpha=\pi/2)$ is given by
\[
\sigma(\Delta_{\pi/2}^G)\approx\{0.13,1.35,2,3,3.77,5.73\}\;
\]
and the spectrum of the cycle $C_6$ with constant magnetic potential $\alpha_e=\pi/2$, $e\in A$, is
\[
\sigma(\Delta_{\pi/2}^{C_6})\approx\{0.25,0.52,2.39,2.83,3.98\}\;.
\]
It is clear that $\lambda_1(\Delta^{C_6}_{\pi/2})\approx 0.25>0.13=\lambda_1(\Delta^G_{\pi/2})$, hence by \Cor{main} we conclude that $G$ is non-Hamiltonian.
Note that there are other values of the magnetic potential where the first eigenvalue provides a similar obstruction (see \Fig{4}).
\begin{center}
\begin{figure}[hbt!]
\includegraphics[scale=.2]{./Fig2.png}
\caption{First eigenvalue $\lambda_1$ for the Laplacians $\Delta_\alpha^G$ (red) and $\Delta_\alpha^{C_6}$ (blue) for different values of the magnetic potential $\alpha_e=t$,
$e\in A$. There is an open interval of values around $t=\pi/2$ and $t=3\pi/2$ that provide spectral obstructions of the DML to the existence of Hamiltonian cycles.}
\label{fig:4}
\end{figure}
\end{center}
\end{example}
As a consequence of \Thm{ham-relations}, we generalise next a result for the signless Laplacian proved in \cite[Theorem~2.8]{Wang:13} to arbitrary DMLs.
\begin{corollary}\label{cor:final}
Let $G$ be a Hamiltonian graph of order $n>3$, and $\alpha$ any magnetic potential on $G$. Then,
\begin{enumerate}
\item if $n$ is even, then $2\leq \lambda_{\frac{n}{2}+1}(\Delta_\alpha^G)$ and $2<\lambda_{\frac{n}{2}+2}(\Delta_\alpha^G)$.
\item if $n$ is odd, then $\lambda_{\frac{n+1}{2}+1}(\Delta_\alpha^G)> 2$.
\end{enumerate}
\end{corollary}
\begin{proof}
(i)~If $n$ is even, then the Hamiltonian cycle contains a perfect matching, so $G$ is matchable and $\mu(G)=\frac{n}{2}$. By \Cor{matG} we conclude $2\leq \lambda_{n-\mu(G)+1}(\Delta^G_\alpha)=\lambda_{\frac{n}{2}+1}(\Delta^G_\alpha)$ and by \Prp{generalmatch} it follows that $\lambda_{\frac{n}{2}+2}(\Delta_\alpha^G)> 2$.
(ii)~If $n$ is odd then $\mu(G)=\frac{n-1}{2}$ and by \Prp{generalmatch} we conclude
$2< \lambda_{n-\frac{n-1}{2}+1}(\Delta^G_\alpha)$.
\end{proof}
The previous result gives a necessary spectral condition for a graph $G$ to be Hamiltonian and hence, by contraposition, a sufficient condition for a graph to be non-Hamiltonian. In particular, if $G$ is a graph with an even number of vertices and $\alpha$ is
a magnetic potential such that
\begin{equation*}
\lambda_{\frac{n}{2}+1}(\Delta_\alpha^G)< 2
\end{equation*}
then $G$ is non-Hamiltonian. We apply this reasoning in the following example.
\begin{example}
Consider the graph given in \Fig{1} and recall that in
Example~\ref{exa:nonhamiltonian}, we showed that for any $\alpha\not\sim0$
\begin{equation}\label{eq:recycle}
\lambda_{4}(\Delta_\alpha^G)< 2
\end{equation}
concluding that the graph is not matchable. For this example, \Cor{final}~(i) shows that the same inequality Eq.~(\ref{eq:recycle})
can be used again to conclude that $G$ is also non-Hamiltonian.
\end{example}
\section[Introduction]{Introduction} \label{sec:intro}
The propensity score is one of the most widely used causal inference methods for observational studies \citep{Rosenbaum1983}. Propensity score methods include weighting, matching, stratification, regression, and mixed methods such as the augmented weighting estimators. The \CRANpkg{PSweight} package provides a design and analysis pipeline for causal inference with propensity score weighting \citep{Robins1994, Hirano2003, Lunceford2004, LiMorganZaslavsky2018}. There are a number of existing R packages on propensity score weighting (see Table \ref{tb:summary}). Compared to those, \CRANpkg{PSweight} offers three major advantages: it incorporates (i) visualization and diagnostic tools for checking covariate overlap and balance, (ii) a general class of balancing weights, including overlap weights and inverse probability of treatment weights, and (iii) multiple treatments. More importantly, \CRANpkg{PSweight} comprises a wide range of functionalities, whereas each of the competing packages only supports a
subset of these functionalities.
As such, \CRANpkg{PSweight} is currently the most comprehensive platform for causal inference with propensity score weighting, offering analysts a one-stop shop for the design and analysis. Table \ref{tb:summary} summarizes the key functionalities of \CRANpkg{PSweight} in comparison to related existing R packages. We elaborate the main features of \CRANpkg{PSweight} below.
\CRANpkg{PSweight} facilitates better practices in the design stage of observational studies, an aspect that has not been sufficiently emphasized in related packages. Specifically, we provide a design module that facilitates visualizing overlap (also known as the positivity assumption) and evaluating covariate balance without access to the final outcome \citep{austin2015moving}. When there is limited overlap, \CRANpkg{PSweight} allows for symmetric propensity score trimming \citep{Crump2009,Yoshida2019} and optimal trimming \citep{Crump2009,Yang2016} to improve the internal validity. We extend the class of balance metrics suggested in \citet{austin2015moving} and \citet{LiThomasLi2018} for binary treatments, and those in \citet{McCaffrey2013} and \citet{li2019propensity} for multiple treatments. In addition, the design module helps describe the weighted target population by providing the information required in the standard ``Table 1'' of a clinical article.
In addition to the standard inverse probability of treatment weights (IPW), \CRANpkg{PSweight} implements the average treatment effect among the treated (Treated) weights, overlap weights (OW), matching weights (MW) and entropy weights (EW) for both binary \citep{LiGreene13,Mao2018,LiMorganZaslavsky2018,Zhou2020} and multiple treatments \citep{Yoshida2017,li2019propensity}. All weights are members of the family of balancing weights \citep{LiMorganZaslavsky2018}; the last three types of weights target the subpopulation with improved overlap in the covariates between (or across) treatment groups, similar to the target population in randomized controlled trials \citep{thomas2020overlap,thomas2020using}. Among them, OW achieves optimal balance and estimation efficiency \citep{LiMorganZaslavsky2018,LiThomasLi2018}. We also implement the augmented weighting estimators corresponding to each of the above weighting schemes \citep{Mao2018}. By default, \CRANpkg{PSweight} employs parametric regression models to estimate propensity scores and potential outcomes. Nonetheless, it also allows for propensity scores to be estimated by external machine learning methods including generalized boosted regression models \citep{McCaffrey2013} and super learner \citep{van2007super}, or importing any other propensity or outcome model estimates of interest.
\begin{table}[htbp]
\caption{Comparisons of existing R packages that implement propensity score weighting with discrete treatments. Binary treatments and additive estimands are implemented in all packages, and therefore those two columns are omitted.}
\centering\label{tb:summary}
\scriptsize
\begin{tabular}{p{1cm}cccccccc}
\toprule
& Multiple & Balance & IPW/ATT & OW/other & Ratio & Augmented & Nuisance-adj & Optimal \\[-0.6ex]
& treatments & diagnostics & weights & weights & estimands & weighting & variance & trimming \\
\midrule
\CRANpkg{PSweight} & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\
\CRANpkg{twang} & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ \\
\CRANpkg{CBPS} & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times$ & $\times$ & $\checkmark$ & $\checkmark$ & $\times$ \\
\CRANpkg{PSW} & $\times$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times$ \\
\CRANpkg{optweight} & $\checkmark$ & $\times$ & $\checkmark$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ \\
\CRANpkg{ATE} & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times$ & $\times$ & $\times$ & $\checkmark$ & $\times$ \\
\CRANpkg{WeightIt} & $\checkmark$ & $\times$ & $\checkmark$ & $\checkmark$ & $\times$ & $\times$ & $\times$ & $\times$ \\
\CRANpkg{causalweight} & $\checkmark$ & $\times$ & $\checkmark$ & $\times$ & $\times$ & $\checkmark$ & $\times$ & $\times$ \\
\CRANpkg{sbw} & $\times$ & $\checkmark$ & $\checkmark$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item $\checkmark$ indicates that the functionality is currently implemented in the package; $\times$ indicates otherwise.
\item References: \CRANpkg{twang} (Version 1.6): \citet{twang}; \CRANpkg{CBPS} (Version 0.21): \citet{CBPS}; \CRANpkg{PSW} (Version 1.1-3): \citet{PSW}; \CRANpkg{optweight} (Version 0.2.5): \citet{optweight};
\CRANpkg{ATE} (Version 0.2.0): \citet{ATE}; \CRANpkg{WeightIt} (Version 0.10.2): \citet{WeightIt};
\CRANpkg{causalweight} (Version 0.2.1): \citet{causalweight}; \CRANpkg{sbw} (Version 1.1.1): \citet{sbw}.
\end{tablenotes}
\end{table}
To our knowledge, \CRANpkg{PSweight} is the first R package to accommodate a variety of balancing weighting schemes with multiple treatments. Existing R packages such as \CRANpkg{twang}~\citep{twang}, \CRANpkg{CBPS}~\citep{CBPS}, \CRANpkg{optweight}~\citep{optweight} have also implemented weighting-based estimation with multiple treatments, but focus on IPW. The \CRANpkg{PSW} R package~\citep{PSW} implements both OW and MW and allows for nuisance-adjusted variance estimation, but it is only for binary treatments.
To better assist applied researchers to perform propensity score weighting analysis, this article provides a full introduction of the \CRANpkg{PSweight} package. In Section \ref{sec:models}, we explain the methodological foundation of \CRANpkg{PSweight}. Section \ref{sec:overview} outlines the main functions and their arguments. Section \ref{sec:illustrations} illustrates the use of these functions with a data example that studies the causal effect of educational attainment on income. Section \ref{sec:summary} concludes with a short discussion and outlines future development.
\section{Overview of propensity score weighting} \label{sec:models}
Before diving into the implementation details of \CRANpkg{PSweight}, we briefly introduce the basics of the propensity score weighting framework.
\subsection{Binary treatments}\label{sec:binary}
Assume we have an observational study with $N$ units. Each unit $i$ ($i=1,2,\ldots,N$) has a binary treatment indicator $Z_{i}$ ($Z_{i}=0$ for control and $Z_{i}=1$ for treated), a vector of $p$ covariates $\bm{X}_{i}=(X_{1i},\cdots, X_{pi})$. For each unit $i$, we assume a pair of potential outcomes $\{Y_{i}(1),Y_{i}(0)\}$ mapped to the treatment and control status, of which only the one corresponding to the observed treatment is observed, denoted by $Y_i=Z_{i}Y_{i}(1)+(1-Z_{i})Y_{i}(0)$; the other potential outcome is counterfactual.
Causal effects are contrasts of the potential outcomes of the same units in a \emph{target population}, which is usually the population of scientific interest \citep{thomas2020using}. \CRANpkg{PSweight} incorporates a general class of weighted average treatment effect (WATE) estimands. Specifically, assume the observed sample is drawn from a probability density $f(\bm{x})$, and let $g(\bm{x})$ denote the covariate density of the target population. The ratio $h(\bm{x})\propto g(\bm{x})/f(\bm{x})$ is called the \emph{tilting function}, which adjusts the distribution of the observed sample to represent the target population. Denote the conditional expectation of the potential outcome by $m_z(\bm{x})=\bE[Y(z)|\bm{X}=\bm{x}]$ for $z=0,1$. Then, we can represent the average treatment effect over the target population by a WATE estimand:
\begin{equation}\label{eq:estimand1}
\tau^h=\bE_g[Y(1)-Y(0)]=\frac{\bE\{h(\bm{x})(m_1(\bm{x})-m_0(\bm{x}))\}}{\bE\{h(\bm{x})\}}.
\end{equation}
To estimate \eqref{eq:estimand1}, \CRANpkg{PSweight} maintains two standard assumptions: (1) \emph{unconfoundedness}: $\{Y(1),Y(0)\} \perp Z \mid \bm{X}$; (2) \emph{overlap}: $0<P(Z=1|\bm{X})<1$.
The propensity score is the probability of a unit being assigned to the treatment group given the covariates \citep{Rosenbaum1983}: $e(\bm{x})=P(Z=1|\bm{X}=\bm{x})$. While assumption (1) is generally untestable and critically depends on substantive knowledge, assumption (2) can be checked visually from data by comparing the distribution of propensity scores between treatment and control groups.
For a given tilting function $h(\bm{x})$ (and correspondingly a WATE estimand $\tau^h$), we can define the \emph{balancing weights} $(w_1,w_0)$ for the treated and control units: $w_1(\bm{x}) \propto h(\bm{x})/{e(\bm{x})}$ and $w_0(\bm{x})
\propto h(\bm{x})/\{1-e(\bm{x})\}$. These weights balance the covariate distributions between the treated and control groups towards the target population \citep{LiMorganZaslavsky2018}. \CRANpkg{PSweight} implements the following H\'{a}jek estimator for WATE:
\begin{equation}
\label{eq:sampleWATE}
\hat{\tau}^h=\hat{\mu}^h_1-\hat{\mu}^h_0=\frac{\sum_{i=1}^N w_1(\bm{x}_i)Z_i Y_i}{\sum_{i=1}^N w_1(\bm{x}_i)Z_i} -
\frac{\sum_{i=1}^N w_0(\bm{x}_i)(1-Z_i) Y_i}{\sum_{i=1}^N w_0(\bm{x}_i)(1-Z_i)},
\end{equation}
where the weights are calculated based on the propensity scores estimated from the data. Clearly, specification of $h(\bm{x})$ defines the target population and estimands. \CRANpkg{PSweight} primarily implements the following three types of balancing weights (see Table \ref{tab:weights_binary} for a summary):
\begin{itemize}
\item \emph{Inverse probability of treatment weights} (IPW), whose target population is the combined treatment and control group represented by the observed sample, and the target estimand is the average treatment effect among the combined population (ATE).
\item \emph{Treated weights}, whose target population is the treated group, and target estimand is the average treatment effect for the treated population (ATT). Treated weights can be viewed as a special case of IPW because it inversely weights the control group.
\item \emph{Overlap weights} (OW) \citep{LiMorganZaslavsky2018, li2019propensity}, whose target population is the subpopulation with the most overlap in the observed covariates between the treatment and control groups. In medicine this is known as the population in clinical equipoise and is the population eligible to be enrolled in randomized clinical trials.
The target estimand of OW is the average treatment effect for the overlap population (ATO).
\end{itemize}
IPW has been the dominant weighting method in the literature, but has a well-known shortcoming of being sensitive to extreme propensity scores, which induces bias and large variance in estimating treatment effects. OW addresses the conceptual and operational problems of IPW.
Among all balancing weights, OW leads to the smallest asymptotic (and often finite-sample) variance of the weighting estimator \eqref{eq:sampleWATE} \citep{LiMorganZaslavsky2018,LiThomasLi2018}.
Recent simulations also show that OW provides more stable causal estimates under limited overlap \citep{LiThomasLi2018,Mao2018,Yoshida2017,Yoshida2019}, and is more robust to misspecification of the propensity score model \citep{Zhou2020}.
\CRANpkg{PSweight} implements two additional types of balancing weights: matching weights (MW) \citep{LiGreene13}, and entropy weights (EW) \citep{Zhou2020}. Similar to OW, MW and EW focus on target populations with substantial overlap between treatment groups. Though having similar operating characteristics, MW and EW do not possess the same theoretical optimality as OW, and are less used in practice. Therefore, we will not separately describe MW and EW hereafter.
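To fix ideas, the following minimal R sketch computes OW (and, for comparison, IPW) from estimated propensity scores and evaluates the H\'{a}jek estimator \eqref{eq:sampleWATE} by hand; the data frame \code{dat} with a binary $0/1$ treatment \code{z}, outcome \code{y} and covariates \code{x1}, \code{x2} is hypothetical and serves only to illustrate the formulas.
\begin{Scode}
fit <- glm(z ~ x1 + x2, data = dat, family = binomial())  # working propensity model
e   <- fitted(fit)                                         # estimated e(x)
w.ipw <- ifelse(dat$z == 1, 1 / e, 1 / (1 - e))            # IPW
w.ow  <- ifelse(dat$z == 1, 1 - e, e)                      # overlap weights
mu1 <- sum(w.ow * (dat$z == 1) * dat$y) / sum(w.ow * (dat$z == 1))  # weighted mean, treated
mu0 <- sum(w.ow * (dat$z == 0) * dat$y) / sum(w.ow * (dat$z == 0))  # weighted mean, control
tau.hat <- mu1 - mu0                                       # Hajek estimate of the ATO
\end{Scode}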
\begin{table}
\begin{center}
\caption{Target populations, tilting functions, estimands and the corresponding balancing weights for binary treatments in \CRANpkg{PSweight}. \label{tab:weights_binary}}
{\footnotesize
\begin{threeparttable}
\begin{tabular}{lccc}
\toprule
Target population &Tilting function $h(\bm{x})$ & Estimand & Balancing weights $(w_1, w_0)$ \\
\midrule
Combined &1 &ATE &$\left(\frac{1}{e(\bm{x})}, \frac{1}{1-e(\bm{x})}\right)$ \\
Treated &$e(\bm{x})$ &ATT &$\left(1, \frac{e(\bm{x})}{1-e(\bm{x})}\right)$ \\
Overlap &$e(\bm{x})(1-e(\bm{x}))$ &ATO & $(1-e(\bm{x}), e(\bm{x}))$ \\
Matching &$\xi_1(\bm{x})$ & ATM & $\left( \frac{\xi_1(\bm{x})}{e(\bm{x})}, \frac{\xi_1(\bm{x})}{1-e(\bm{x})} \right)$ \\
Entropy &$\xi_2(\bm{x})$ & ATEN & $\left( \frac{ \xi_2(\bm{x})}{e(\bm{x})}, \frac{ \xi_2(\bm{x})}{1-e(\bm{x})} \right)$ \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\small
\item Notes: $\xi_1(\bm{x}) = \min \{e(\bm{x}),1-e(\bm{x})\}$ and $\xi_2(\bm{x})=-\{e(\bm{x})\log(e(\bm{x}))+(1-e(\bm{x}))\log(1-e(\bm{x}))\} $.
\end{tablenotes}
\end{threeparttable}}
\end{center}
\end{table}
In observational studies, propensity scores are generally unknown and need to be estimated. Therefore, propensity score analysis usually involves two steps: (1) estimating the propensity scores, and (2) estimating the causal effects based on the estimated propensity scores. In \CRANpkg{PSweight}, the default model for estimating propensity scores with binary treatments is a logistic regression model. Spline or polynomial models can be easily incorporated by adding \code{bs()}, \code{ns()} or \code{poly()} terms into the model formula. \CRANpkg{PSweight} also allows for importing propensity scores estimated from external routines, such as boosted models or super learner (Section \ref{sec:impillu}).
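For instance, a more flexible propensity score model can be specified directly through the formula interface; the variable names below are hypothetical.
\begin{Scode}
library(splines)
ps.formula <- trt ~ bs(age, df = 3) + poly(income, 2) + gender
\end{Scode}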
Goodness-of-fit of the propensity score model is usually assessed based on the resulting covariate balance. In the context of propensity score weighting, this is measured based on either the absolute standardized difference (ASD):
\begin{equation}\label{eq:ASD1}
\text{ASD} = \left| {\frac{\sum_{i=1}^N w_1(\bm{x}_i)Z_iX_{pi} }{\sum_{i=1}^N w_1(\bm{x}_i)Z_i } - \frac{\sum_{i=1}^N w_0(\bm{x}_i)(1- Z_i)X_{pi} }{\sum_{i=1}^N w_0(\bm{x}_i)(1-Z_i) }}\right|
\Bigg /{\sqrt{\frac{s_{1}^2 + s_{0}^2}{2}}},
\end{equation}
or the target population standardized difference (PSD), $\max\{\text{PSD}_0,\text{PSD}_1\}$, where
\begin{equation}\label{eq:PSD1}
\text{PSD}_z =
\left|{\frac{\sum_{i=1}^N w_z(\bm{x}_i)\mathds{1}\{Z_i=z\}X_{pi} }{\sum_{i=1}^N w_z(\bm{x}_i)\mathds{1}\{Z_i=z\} } - \frac{\sum_{i=1}^N h(\bm{x}_i)X_{pi} }{\sum_{i=1}^N h(\bm{x}_i)}}\right|\Bigg /{\sqrt{\frac{s_{1}^2 + s_{0}^2}{2}}}.
\end{equation}
In \eqref{eq:ASD1} and \eqref{eq:PSD1}, $s_z^2$ is the variance (either unweighted or weighted, depending on user specification) of the $p$th covariate in group $z$, and $(w_0,w_1)$ are the specified balancing weights. Setting $w_0=w_1=1$ corresponds to the unweighted mean differences. ASD and PSD are often displayed as columns in the baseline characteristics table (known as the ``Table 1'') and visualized via a Love plot (also known as a forest plot) \citep{Greifer}. A rule of thumb for determining adequate balance is that the ASD of all covariates is controlled within $0.1$ \citep{austin2015moving}.
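As an illustration, the weighted ASD \eqref{eq:ASD1} for a single covariate can be computed directly; the vectors \code{x} (covariate), \code{z} (binary treatment) and the weights \code{w1}, \code{w0} are hypothetical, and unweighted variances are used in the denominator.
\begin{Scode}
m1  <- sum(w1 * z * x) / sum(w1 * z)              # weighted mean in the treated group
m0  <- sum(w0 * (1 - z) * x) / sum(w0 * (1 - z))  # weighted mean in the control group
asd <- abs(m1 - m0) / sqrt((var(x[z == 1]) + var(x[z == 0])) / 2)
\end{Scode}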
\subsection{Multiple treatments}
\label{sec:multiple}
\cite{li2019propensity} extend the framework of balancing weights to multiple treatments. Assume that we have $J$ $(J\geq 3)$ treatment groups, and let $Z_i$ stand for the treatment received by unit $i$, $Z_i\in \{1,\ldots,J\}$. We further define $D_{ij}=\mathds{1}\{Z_i=j\}$ as a set of multinomial indicators, satisfying $\sum_{j=1}^J D_{ij}=1$ for all $i$. Denote the potential outcome for unit $i$ under treatment $j$ as $Y_{i}(j)$, of which only the one corresponding to the received treatment, $Y_i=Y_i(Z_i)$, is observed. The generalized propensity score is the probability of receiving a potential treatment $j$ given $\bm{X}$ \citep{Imbens2000}: $e_j(\bm{x})=P(Z=j|\bm{X}=\bm{x})$, with the constraint that $\sum_{j=1}^J e_j(\bm{x})=1$.
To define the target estimand, let $m_j(\bm{x})=\bE[Y(j)|\bm{X}=\bm{x}]$ be the conditional expectation of the potential outcome in group $j$. For specified tilting function $h(\bm{x})$ and target density $g(\bm{x})\propto f(\bm{x})h(\bm{x})$, the $j$th average potential outcome among the target population is
\begin{equation}
\label{eq:meanpo}
\mu_j^h=\bE_g[Y(j)]=\frac{\bE\{h(\bm{x})m_j(\bm{x})\}}{\bE\{h(\bm{x})\}}.
\end{equation}
Causal estimands can then be constructed in a general manner as contrasts based on $\mu_j^h$. For example, the most commonly seen estimands in multiple treatments are the pairwise average treatment effects between groups $j$ and $j'$: $\tau_{j,j'}^h=\mu_j^h-\mu_{j'}^h$. This definition can be generalized to arbitrary linear contrasts. Denote $\pmb{a}=(a_{1},\cdots, a_{J})$ as a contrast vector of length $J$. A general class of additive estimands is
\begin{equation} \label{eq:meantau}
\tau^h(\pmb{a})=\sum\limits_{j=1}^{J}a_j\mu_j^h.
\end{equation}
Specific choices for $\bm{a}$ with nominal and ordinal treatments can be found in \citet{li2019propensity}. Similar as before, propensity score weighting analysis with multiple treatments rests on two assumptions: (1) \emph{weak unconfoundedness}: $Y(j)\perp \mathds{1}\{Z=j\} |\bm{X},$ for all $j$, and (2) \emph{Overlap}: the generalized propensity score is bounded away from 0 and 1: $0<e_j(\bm{x})<1$, for all $j$.
With multiple treatments, the tilting function $h(\bm{x})$ specifies the target population, estimand, and balancing weights. For a given $h(\bm{x})$, the balancing weights for the $j$th treatment group $w_j(\bm{x})\propto {h(\bm{x})}/{e_j(\bm{x})}$. Then the H\'{a}jek estimator for $\mu_j^h$ is
\begin{eqnarray}
\label{eq:simplemean}
\hat{\mu}^h_j=\frac{\sum_{i=1}^N w_j(\bm{x}_i)D_{ij}Y_i }{\sum_{i=1}^N w_j(\bm{x}_i)D_{ij}}.
\end{eqnarray}
Contrasts based on $\hat{\mu}^h_j$ can be obtained for any $\bm{a}$ to estimate the additive causal estimand $\tau^h(\bm{a})$. Of note, we only consider types of estimands that are transitive, and therefore the ATT estimands introduced in \citet{Lechner2001} are not implemented. In parallel to binary treatments, \CRANpkg{PSweight} implements five types of balancing weights with multiple treatments: IPW, treated weights, OW, MW, and EW, and the corresponding target estimand of each weighting scheme is its pairwise (between each pair of treatments) counterpart in binary treatments.
Among all the weights, OW minimizes the total asymptotic variances of all pairwise comparisons, and has been shown to have the best finite-sample efficiency in estimating pairwise WATEs \citep{li2019propensity}. Table \ref{tab:weight_multi} summarizes the target population, tilting function and balancing weight for multiple treatments that are available in \CRANpkg{PSweight}.
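As a sketch of \eqref{eq:simplemean} with OW, suppose \code{e} is a hypothetical $N\times J$ matrix of generalized propensity scores, \code{z} the treatment taking values in $1,\ldots,J$, and \code{y} the outcome:
\begin{Scode}
J  <- ncol(e)                                  # number of treatment groups
h  <- 1 / rowSums(1 / e)                       # overlap tilting function h(x)
mu <- sapply(1:J, function(j) {
  wj <- h / e[, j]                             # balancing weight for group j
  sum(wj * (z == j) * y) / sum(wj * (z == j))  # Hajek estimate of mu_j
})
tau12 <- mu[1] - mu[2]                         # pairwise contrast between groups 1 and 2
\end{Scode}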
\begin{table}[htbp]
\centering
\caption{Target populations, tilting functions, and the corresponding balancing weights for multiple treatments in \CRANpkg{PSweight}.}\label{tab:weight_multi}
{\footnotesize
\begin{tabular}{lcc}
\toprule
Target population & Tilting function $h(\bm{x})$ & Balancing weights $\left\{w_j(\bm{x}),~j=1,\ldots,J\right\}$\\\midrule
Combined & 1 & $\left\{1/e_j(\bm{x})\right\}$\\
Treated ($j'$th group) & $e_{j'}(\bm{x})$ & $\left\{e_{j'}(\bm{x})/e_j(\bm{x})\right\}$ \\
Overlap & $\{\sum_{k=1}^J 1/e_k(\bm{x})\}^{-1}$ & $\left\{\{\sum_{k=1}^J 1/e_k(\bm{x})\}^{-1}/e_j(\bm{x})\right\}$\\
Matching &$\min_k \{e_k(\bm{x})\}$ & $\left\{ \min_k \{e_k(\bm{x})\}/e_j(\bm{x}) \right\}$ \\
Entropy &$-\sum_{k=1}^J e_k(\bm{x})\log\{e_k(\bm{x})\}$ & $\left\{-\sum_{k=1}^J e_k(\bm{x})\log\{e_k(\bm{x})\}/ e_j(\bm{x}) \right\}$ \\
\bottomrule
\end{tabular}}
\end{table}
To estimate the generalized propensity scores for multiple treatments, the default model in \CRANpkg{PSweight} is a multinomial logistic model. \CRANpkg{PSweight} also allows for externally estimated generalized propensity scores. Goodness-of-fit of the generalized propensity score model is assessed by the resulting covariate balance, which is measured by the pairwise versions of the ASD and PSD. The detailed formula of these metrics can be found in \cite{li2019propensity}. A common threshold for balance is that the maximum pairwise ASD or maximum PSD is below 0.1.
\subsection{Propensity score trimming}
Propensity score trimming excludes units with estimated (generalized) propensity scores close to zero (or one). It is a popular approach to address the extreme weights problem of IPW. \CRANpkg{PSweight} implements the symmetric trimming rules in \citet{Crump2009} and \citet{Yoshida2019}. Operationally, we allow users to specify a single cutoff $\delta$ on the estimated generalized propensity scores, and only includes units for analysis if $\min_j\{e_{j}(\bm{x})\}\in[\delta,1]$. With binary treatments, the symmetric trimming rule reduces to $e(\bm{x})\in[\delta,1-\delta]$. The natural restriction $\delta<1/J$ must be satisfied due to the constraint $\sum_{j=1}^J e_j(\bm{x})=1$.
To avoid specifying an arbitrary trimming threshold $\delta$, \CRANpkg{PSweight} also implements the optimal trimming rules of \citet{Crump2009} and \citet{Yang2016}, which minimize the (total) asymptotic variance(s) for estimating the (pairwise) ATE among the class of all trimming rules. OW can be viewed as a continuous version of trimming because it smoothly down-weighs the units with propensity scores close to 0 or 1, and thus avoids specifying a threshold.
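A minimal sketch of symmetric trimming is given below; the matrix \code{e} of (generalized) propensity scores, the data frame \code{dat} and the threshold \code{delta} (with \code{delta} $<1/J$) are hypothetical.
\begin{Scode}
keep     <- apply(e, 1, min) >= delta  # retain units with min_j e_j(x) >= delta
dat.trim <- dat[keep, ]                # trimmed sample used in subsequent analyses
\end{Scode}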
\subsection{Augmented weighting estimators}
\label{sec:augest}
\CRANpkg{PSweight} also implements augmented weighting estimators, which augment a weighting estimator by an outcome regression and improve the efficiency. With IPW, the augmented weighting estimator is known as the doubly-robust estimator \citep{Lunceford2004,bang2005doubly,funk2011doubly}. With binary treatments, the augmented estimator with general balancing weights is discussed in \citet{Hirano2003} and \citet{Mao2018}. Below, we briefly outline the form of this estimator with multiple treatments. Recall the conditional mean of $Y_i(j)$ given $\bm{X}_i$ and treatment $Z_i=j$ as $m_{j}(\bm{x}_i)=\bE[Y_i(j)|\bm{X}_i=\bm{x}_i]=\bE[Y_i|\bm{X}_i=\bm{x}_i,Z_{i}=j]$.
This conditional mean can be estimated by generalized linear models, kernel estimators, or machine learning models. \CRANpkg{PSweight} by default employs generalized linear models, but also allows estimated values from other routines. When $m_{j}(\bm{x}_i)$ is estimated by generalized linear models, \CRANpkg{PSweight} currently accommodates three types of outcomes: continuous, binary and count outcomes (with or without an offset), using the canonical link function.
With a pre-specified tilting function, the augmented weighting estimator for group $j$ is
\begin{eqnarray}\label{eq:augest}
\hat{\mu}^{h,\mbox{\tiny{aug}}}_j=\frac{\sum_{i=1}^N w_j(\bm{x}_i)D_{ij}\{Y_i-m_{j}(\bm{x}_i)\} }{\sum_{i=1}^N w_j(\bm{x}_i)D_{ij}}+\frac{\sum_{i=1}^N h(\bm{x}_i)m_{j}(\bm{x}_i) }{\sum_{i=1}^N h(\bm{x}_i)}.
\end{eqnarray}
The first term of (\ref{eq:augest}) is the H\'{a}jek estimator of the regression residuals, and the second term is the standardized average potential outcome (a $g$-formula estimator). With IPW, (\ref{eq:augest}) is consistent for $\bE[Y(j)]$ when either the propensity score model or the outcome model is correctly specified, but not necessarily both. For other balancing weights, (\ref{eq:augest}) is consistent for the WATE when the propensity model is correctly specified, regardless of outcome model specification. When both models are correctly specified, (\ref{eq:augest}) achieves the lower bound of the variance for regular and asymptotically linear estimators \citep{Robins1994,Hirano2003,Mao2018}.
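In code, the augmented estimate \eqref{eq:augest} for group $j$ can be assembled from its two terms; the vectors \code{wj}, \code{h} and \code{mj} (for $w_j(\bm{x}_i)$, $h(\bm{x}_i)$ and the fitted values $m_j(\bm{x}_i)$) are assumed to be available.
\begin{Scode}
resid.j  <- sum(wj * (z == j) * (y - mj)) / sum(wj * (z == j))  # weighted residual term
gform.j  <- sum(h * mj) / sum(h)                                # standardized g-formula term
mu.aug.j <- resid.j + gform.j                                   # augmented estimate for group j
\end{Scode}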
\subsection{Ratio causal estimands}\label{sec:ratioest}
With binary and count outcomes, ratio causal estimands are often of interest. Using notation from the multiple treatments as an example, once we use weighting to obtain estimates for the set of average potential outcomes $\{\mu_j^h,j=1,\ldots,J\}$, we can directly estimate the causal relative risk (RR) and causal odds ratio (OR), defined as
\begin{equation}\label{eq:RROR}
\tau^{h,\mbox{\tiny{RR}}}_{j,j^{\prime}}=\frac{\mu_{j}^h}{\mu_{j^{\prime}}^h},~~~~~~\tau^{h,\mbox{\tiny{OR}}}_{j,j^{\prime}}=\frac{\mu_{j}^h/(1-\mu_{j}^h)}{\mu_{j^{\prime}}^h/(1-\mu_{j^{\prime}}^h)}.
\end{equation}
Here the additive estimand $\tau^{h,\mbox{\tiny{RD}}}_{j,j^{\prime}}=\mu_{j}^h-\mu_{j^{\prime}}^h$ is the causal risk difference (RD). \CRANpkg{PSweight} supports a class of ratio estimands for any given contrasts $\bm{a}$. Specifically, we define the log-RR type parameters by
\begin{equation} \label{eq:meanRR}
\lambda^{h,\mbox{\tiny{RR}}}(\pmb{a})=\sum\limits_{j=1}^{J}a_j\log\left(\mu_j^h\right),
\end{equation}
and the log-OR type parameters by
\begin{equation} \label{eq:meanOR}
\lambda^{h,\mbox{\tiny{OR}}}(\pmb{a})=\sum\limits_{j=1}^{J}a_j\left\{\log\left(\mu_j^h\right)-\log\left(1-\mu_j^h\right)\right\}.
\end{equation}
With nominal treatments, the contrast vector $\bm{a}$ can be specified to encode pairwise comparisons in the log scale (as in \eqref{eq:meanRR}) or in the log odds scale (as in \eqref{eq:meanOR}), in which case $\exp\{\lambda^{h,\mbox{\tiny{RR}}}(\pmb{a})\}$ and $\exp\{\lambda^{h,\mbox{\tiny{OR}}}(\pmb{a})\}$ become the causal RR and causal OR in \eqref{eq:RROR}.
User-specified contrasts $\bm{a}$ can provide a variety of nonlinear estimands. For example, when $J=3$, with $\bm{a}=(1,-2,1)^T$ one can use \CRANpkg{PSweight} to assess the equality of two consecutive causal RR: $H_0: \mu_3^h/\mu_2^h=\mu_2^h/\mu_1^h$.
\subsection{Variance and interval estimation}\label{sec:variance}
\CRANpkg{PSweight} by default implements the empirical sandwich variance for propensity score weighting estimators \citep{Lunceford2004,LiThomasLi2018,Mao2018} based on the M-estimation theory \citep{Stefanski2002}. The variance adjusts for the uncertainty in estimating the propensity score and outcome models, and is sometimes referred to as the nuisance-adjusted sandwich variance. Below we illustrate the main steps with multiple treatments and general balancing weights. Write $\bm{\theta}=\left(\nu_1,\ldots,\nu_J,\eta_1,\ldots,\eta_J,\bm{\beta}^T,\bm{\alpha}^T\right)^T$ as the collection of parameters to be estimated. Then $\left\{\hat{\mu}^{h,\mbox{\tiny{aug}}}_j=\hat{\nu}_j+\hat{\eta}_j:j=1,\ldots,J\right\}$ jointly solve
\begin{align*}
\sum_{i=1}^{N}\Psi_{i}(\bm{\theta})=\sum_{i=1}^{N}
\left(
\begin{array}{c}
w_1(\bm{x}_i)D_{i1}\{Y_{i}-m_{1}(\bm{x}_{i};\bm{\alpha})-\nu_1\}\\
\vdots\\
w_J(\bm{x}_i)D_{iJ}\{Y_{i}-m_{J}(\bm{x}_{i};\bm{\alpha})-\nu_J\}\\
h(\bm{x}_i)\{m_{1}(\bm{x}_{i};\bm{\alpha})-\eta_1\}\\
\vdots\\
h(\bm{x}_i)\{m_{J}(\bm{x}_{i};\bm{\alpha})-\eta_J\}\\
S_{\bm{\beta}}(Z_i,\bm{x}_{i};\bm{\beta})\\
S_{\bm{\alpha}}(Y_i,Z_i,\bm{x}_{i};\bm{\alpha})
\end{array}
\right)=\bm{0},
\end{align*}
where $S_{\bm{\beta}}(Z_i,\bm{x}_{i};\bm{\beta})$ and $S_{\bm{\alpha}}(Y_i,Z_i,\bm{x}_{i};\bm{\alpha})$ are the score functions of the propensity score model and the outcome model. The empirical sandwich variance estimator is
\begin{eqnarray*}
\widehat{\bV}(\hat{\bm{\theta}})=\left\{\sum_{i=1}^{N}
\frac{\partial}{\partial\bm{\theta}^T}\Psi_{i}(\hat{\bm{\theta}})\right\}^{-1} \left\{\sum_{i=1}^{N}\Psi_{i}(\hat{\bm{\theta}})
\Psi_{i}^{T}(\hat{\bm{\theta}})\right\}
\left\{\sum_{i=1}^{N}
\frac{\partial}{\partial\bm{\theta}}\Psi_{i}^T(\hat{\bm{\theta}})\right\}^{-1}.
\end{eqnarray*}
Because $\hat{\mu}^{h,\mbox{\tiny{aug}}}_j=\hat{\nu}_j+\hat{\eta}_j$, the variance of arbitrary linear contrasts based on the average potential outcomes can be easily computed by applying the Delta method to the joint variance $\widehat{\bV}(\hat{\bm{\theta}})$. For the H\'ajek weighting estimators, variance is estimated by removing $S_{\bm{\alpha}}(Y_i,Z_i,\bm{x}_{i};\bm{\alpha})$ as well as the components involving $m_j(\bm{x}_i;\bm{\alpha})$ in $\Psi_{i}(\bm{\theta})$. Finally, when propensity scores and potential outcomes are not estimated through the generalized linear model or are supplied externally, or MW are used (since the tilting function is not everywhere differentiable), \CRANpkg{PSweight} ignores the uncertainty in estimating $\bm{\beta}$ and $\bm{\alpha}$ and removes $S_{\bm{\beta}}(Z_i,\bm{x}_{i};\bm{\beta})$ and $S_{\bm{\alpha}}(Y_i,Z_i,\bm{x}_{i};\bm{\alpha})$ in $\Psi_{i}(\bm{\theta})$ in the calculation of the empirical sandwich variance. Based on the estimated variance, \CRANpkg{PSweight} computes the associated symmetric confidence intervals and p-values via the normal approximation.
For ratio causal estimands, \CRANpkg{PSweight} applies the logarithm transformation to improve the accuracy of the normal approximation \citep{agresti2003categorical}. For estimating the variance of causal RR, we first obtain the joint variance of $\left(\log\left(\hat{\mu}^{h,\mbox{\tiny{aug}}}_1\right),\ldots \log\left(\hat{\mu}^{h,\mbox{\tiny{aug}}}_J\right)\right)^T$ using the Delta method, and then estimate the variance of $\lambda^{h,\mbox{\tiny{RR}}}(\pmb{a})$. Once the symmetric confidence intervals are obtained for $\lambda^{h,\mbox{\tiny{RR}}}(\pmb{a})$ using the normal approximation, we can exponentiate the upper and lower confidence limits to derive the asymmetric confidence intervals for the causal RR. Confidence intervals for the causal OR are computed similarly.
\CRANpkg{PSweight} also allows using the bootstrap to estimate variances, which can be much more computationally intensive than the closed-form sandwich estimator but sometimes gives better performance in small samples. By default, \CRANpkg{PSweight} draws $R=50$ bootstrap replicates with replacement. For each replicate, the weighting estimator (\ref{eq:simplemean}) or the augmented weighting estimator (\ref{eq:augest}) is implemented, providing $R$ estimates of the $J$ average potential outcomes (an $R\times J$ matrix). Then for any contrast vector $\pmb{a}=(a_{1},\cdots, a_{J})^T$, \CRANpkg{PSweight} obtains $R$ bootstrap estimates:
\begin{equation*}
\hat{\mathbb{T}}^h(\pmb{a})_{bootstrap}= \left\{\hat{\tau}^h(\pmb{a})_1=\sum_{j=1}^{J} a_{j} \hat{\mu}^h_{j,1},~\ldots~, \hat{\tau}^h(\pmb{a})_R=\sum_{j=1}^{J} a_{j} \hat{\mu}^h_{j,R} \right\}.
\end{equation*}
The sample variance of $\hat{\mathbb{T}}^h(\pmb{a})_{bootstrap}$ is reported by \CRANpkg{PSweight} as the bootstrap variance; the lower and upper $2.5\%$ quantiles of $\hat{\mathbb{T}}^h(\pmb{a})_{bootstrap}$ form the $95\%$ bootstrap interval estimate.
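As a conceptual illustration of this procedure (a minimal sketch rather than the internal implementation of \CRANpkg{PSweight}; \code{est.fun} is a placeholder for a routine that returns the $J$ average potential outcomes of one resampled data set):
\begin{Scode}
# Sketch of the bootstrap described above; est.fun is hypothetical
boot.contrast <- function(data, est.fun, a, R = 50) {
  est <- replicate(R, {
    idx <- sample(nrow(data), replace = TRUE)       # resample rows with replacement
    sum(a * est.fun(data[idx, , drop = FALSE]))     # contrast a^T mu-hat for one replicate
  })
  c(var = var(est), quantile(est, c(0.025, 0.975))) # bootstrap variance and 95% interval
}
\end{Scode}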
\section{Overview of package} \label{sec:overview}
The \CRANpkg{PSweight} package includes two modules tailored for design and analysis of observational studies. The design module provides diagnostics to assess the adequacy of the propensity score model and the weighted target population, prior to the use of outcome data. The analysis module provides functions to estimate the causal estimands discussed in Section \ref{sec:models}. We briefly describe the two modules below.
\subsection{Design module}
\CRANpkg{PSweight} offers the \code{SumStat()} function to visualize the distribution of the estimated propensity scores, to assess the balance of covariates under different weighting schemes, and to characterize the weighted target population. It is called with the following syntax:
\begin{Scode}
SumStat(ps.formula, ps.estimate = NULL, trtgrp = NULL, Z = NULL, covM = NULL,
zname = NULL, xname = NULL, data = NULL,weight = "overlap", delta = 0,
method = "glm", ps.control = list())
\end{Scode}
By default, the (generalized) propensity scores are estimated by (multinomial) logistic regression, specified through the argument \code{ps.formula}. Alternatively, the \code{gbm()} function in the \CRANpkg{gbm} package \citep{gbmpkg} or the \code{SuperLearner()} function in the \CRANpkg{SuperLearner} package \citep{slpkg} can be called by setting \code{method = "gbm"} or \code{method = "SuperLearner"}. Additional parameters of those functions can be supplied through the \code{ps.control} argument. The argument \code{ps.estimate} accepts propensity scores estimated by external routines. \code{SumStat()} produces a \code{SumStat} object, with estimated propensity scores, unweighted and weighted covariate means for each treatment group, balance diagnostics, and effective sample sizes (defined in \citet{li2019propensity}). We then provide a \code{summary.SumStat()} function, which takes the \code{SumStat} object and summarizes weighted covariate means by treatment groups and the between-group differences in either ASD or PSD. The default options \code{weighted.var = TRUE} and \code{metric = "ASD"} yield the ASD based on weighted standard deviations as in \cite{austin2015moving}. The weighted covariate means can be used to build a baseline characteristics ``Table 1'' describing the target population to which trimming or balancing weights are applied.
\begin{Scode}
summary(object, weighted.var = TRUE, metric = "ASD")
\end{Scode}
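For example, a hypothetical design-stage call (the data set, treatment, and covariate names below are placeholders rather than objects shipped with the package) could look like:
\begin{Scode}
bal <- SumStat(ps.formula = trt ~ age + sex + bmi, data = mydata,
               weight = c("IPW", "overlap"), method = "glm")
summary(bal, weighted.var = TRUE, metric = "PSD")
\end{Scode}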
\begin{table}[htbp]
\centering
\caption{\label{tab:design} Functions in the design module of \CRANpkg{PSweight}.}
{\footnotesize
\begin{tabular}{p{4cm}p{8cm}}
\toprule
Function & Description \\\midrule
\code{SumStat()} & Generate a \code{SumStat} object with information of propensity scores and weighted covariate balance \\
\code{summary.SumStat()} & Summarize the \code{SumStat} object and return weighted covariate means by treatment groups and weighted or unweighted between-group differences in ASD or PSD \\
\code{plot.SumStat()} & Plot the distribution of propensity scores or weighted covariate balance metrics from the \code{SumStat} object\\
\code{PStrim()} & Trim the data set based on estimated propensity scores \\
\bottomrule
\end{tabular}}
\end{table}
Diagnostics of propensity score models can be visualized with the \code{plot.SumStat()} function. It takes the \code{SumStat} object and produces a balance plot (\code{type = "balance"}) based on the ASD and PSD. A vertical dashed line can be set via the \code{threshold} argument, with a default value of $0.1$. The \code{plot.SumStat()} function can also produce a density plot (\code{type = "density"}) or a histogram (\code{type = "hist"}) of the estimated propensity scores. The histogram, however, is only available for the binary treatment case. The plot function is implemented as follows:
\begin{Scode}
plot(x, type = "balance", weighted.var = TRUE, threshold = 0.1, metric = "ASD")
\end{Scode}
In the design stage, propensity score trimming can be carried out with the \code{PStrim()} function. The trimming threshold \code{delta} is set to 0 by default. \code{PStrim()} also enables optimal trimming rules (\code{optimal = TRUE}) that give the most statistically efficient (pairwise) subpopulation ATE, among all possible trimming rules. A trimmed data set along with a summary of trimmed cases will be returned by \code{PStrim()}. This function is given below:
\begin{Scode}
PStrim(data, ps.formula = NULL, zname = NULL, ps.estimate = NULL, delta = 0,
optimal = FALSE, method = "glm", ps.control = list())
\end{Scode}
Alternatively, trimming can also be performed within the \code{SumStat()} function through the \code{delta} argument. All functions in the design module are summarized in Table \ref{tab:design}.
\subsection{Analysis module}
The analysis module of \CRANpkg{PSweight} includes two functions: \code{PSweight()} and \code{summary.PSweight()}. The \code{PSweight()} function estimates the average potential outcomes in the target population, $\{\mu_{j}^{h},j=1,\ldots,J\}$, and the associated variance-covariance matrix. By default, the empirical sandwich variance is implemented, but the bootstrap variance can be obtained with the argument \code{bootstrap = TRUE}. The \code{weight} argument can take \code{"IPW"}, \code{"treated"}, \code{"overlap"}, \code{"matching"} or \code{"entropy"}, corresponding to the weights introduced in Section \ref{sec:models}. More detailed descriptions of each input argument of the \code{PSweight()} function can be found in Table \ref{tab:est}. A typical \code{PSweight()} code snippet looks like
\begin{Scode}
PSweight(ps.formula, ps.estimate, trtgrp, zname, yname, data, weight = "overlap",
delta = 0, augmentation = FALSE, bootstrap = FALSE, R = 50, out.formula = NULL,
out.estimate = NULL, family = "gaussian", ps.method = "glm", ps.control = list(),
out.method = "glm", out.control = list())
\end{Scode}
\begin{table}[!htbp]
\centering
\caption{Arguments for function \code{PSweight()} in the analysis module of \CRANpkg{PSweight}. \label{tab:est}}
{\footnotesize
\begin{tabular}{p{2cm}p{9.5cm}p{2cm}}
\toprule
Argument & Description & Default \\\midrule
\code{ps.formula} & A symbolic description of the propensity score model. & -- \\
\code{ps.estimate} & An optional matrix or data frame with externally estimated (generalized) propensity scores for each observation; can also be a vector with binary treatments. & \code{NULL} \\
\code{trtgrp} & An optional character defining the \emph{treated} population for estimating (pairwise) ATT. It can also be used to specify the treatment level when only a vector of values are supplied for \code{ps.estimate} in the binary treatment setting. & Last value in alphabetic order \\
\code{zname} & An optional character specifying the name of the treatment variable when \code{ps.formula} is not provided. & \code{NULL} \\
\code{yname}& A character specifying the name of the outcome variable in \code{data}. & -- \\
\code{weight} & A character specifying the type of weights to be used. & \code{"overlap"}\\
\code{delta} & Trimming threshold for (generalized) propensity scores. & 0\\
\code{augmentation} & Logical value of whether augmented weighting estimators should be used. & \code{FALSE} \\
\code{bootstrap} & Logical value of whether bootstrap is used to estimate the standard error. & \code{FALSE}\\
\code{R} & Number of bootstrap replicates if \code{bootstrap = TRUE}. & 50\\
\code{out.formula} & A symbolic description of the outcome model to be estimated when \code{augmentation = TRUE}. & \code{NULL}\\
\code{out.estimate} & An optional matrix or data frame containing externally estimated potential outcomes for each observation under each treatment level.& \code{NULL} \\
\code{family} & A description of the error distribution and canonical link function to be used in the outcome model if \code{out.formula} is provided. & \code{"gaussian"}\\
\code{ps.method}& A character specifying the method used to fit the propensity score model. &\code{"glm"}\\
\code{ps.control}&
A list of additional options when \code{ps.method} is set to \code{"gbm"} or \code{"SuperLearner"}.&\code{list()}\\
\code{out.method}& A character specifying the method used to fit the outcome model.& \code{"glm"}\\
\code{out.control}&A list of additional options when \code{out.method} is set to \code{"gbm"} or \code{"SuperLearner"}. &\code{list()}\\
\bottomrule
\end{tabular}}
\end{table}
Similar to the design module, the \code{summary.PSweight()} function synthesizes information from the \code{PSweight} object for statistical inference. A typical code snippet looks like
\begin{Scode}
summary(object, contrast, type = "DIF", CI = TRUE)
\end{Scode}
In the \code{summary.PSweight()} function, the argument \code{type} corresponds to the three types of estimands: \code{type = "DIF"} is the default argument that specifies the additive causal contrasts; \code{type = "RR"} specifies the contrast on the log scale as in equation \eqref{eq:meanRR}; \code{type = "OR"} specifies the contrast on the log odds scale as in equation \eqref{eq:meanOR}. Confidence intervals and p-values are obtained using the normal approximation and reported by the \code{summary.PSweight()} function. The argument \code{contrast} represents a contrast vector $\pmb{a}$ or a matrix with multiple contrast row vectors. If \code{contrast} is not specified, \code{summary.PSweight()} provides all pairwise comparisons of the average potential outcomes. By default, confidence intervals are printed (\code{CI = TRUE}); alternatively, one can print the test statistics and p-values by setting \code{CI = FALSE}.
\section{Case study with the NCDS data} \label{sec:illustrations}
We demonstrate \CRANpkg{PSweight} in a case study that estimates the causal effect of educational attainment on hourly wage, based on the National Child Development Survey (NCDS) data.
The National Child Development Survey (NCDS) is a longitudinal study of children born in the United Kingdom (UK) in $1958$\footnote{https://cls.ucl.ac.uk/cls-studies/1958-national-child-development-study/}. NCDS collected information such as educational attainment, familial background, and socioeconomic and health well-being on $17,415$ individuals. We followed \citet{battistin2011misclassified} to pre-process the data and obtain a subset of 3,642 males employed in 1991 with complete educational attainment and wage information for analysis. For illustration, we use Multiple Imputation by Chained Equations \citep{buuren2010mice} to impute missing covariates and obtain a single imputed data set for all subsequent analyses.\footnote{Ten out of the twelve pre-treatment covariates we considered have missingness. The smallest missingness proportion is $4.9\%$ and the largest is $17.2\%$. We considered one imputed complete data set for illustrative purposes, but note that a more rigorous analysis could proceed by combining analyses from multiple imputed data sets via Rubin's rule.} The outcome variable \code{wage} is the log of the gross hourly wage in pounds. The treatment variable is educational attainment.
For the multiple treatment case, we created \code{Dmult} as a treatment variable with three levels: \code{">=A/eq"}, \code{"O/eq"} and \code{"None"}, representing advanced qualification ($1,806$ individuals), intermediate qualification ($941$ individuals) and no qualification ($895$ individuals). We consider twelve pre-treatment covariates as potential confounders. The variable \code{white} indicates whether an individual identified himself as white; \code{scht} indicates the type of school attended at age $16$; \code{qmab} and \code{qmab2} are math test scores at ages $7$ and $11$; \code{qvab} and \code{qvab2} are reading test scores at ages $7$ and $11$; \code{sib$\_$u} is the number of siblings; \code{agepa} and \code{agema} are the ages of the parents in 1974; the employment status of the mother in the same year is recorded in \code{maemp}; \code{paed$\_$u} and \code{maed$\_$u} are the years of education of the parents. For simplicity, we focus on IPW and the three types of weights that improve covariate overlap: OW, MW and EW \citep{li2019propensity}.
\subsubsection{Estimating generalized propensity scores and balance assessment}
We use \code{Dmult}, the three-level variable, as the treatment of interest. About one half of the population attained an advanced academic qualification, and there are approximately equal numbers of individuals with an intermediate academic qualification or no academic qualification. To illustrate the estimation and inference for ratio estimands, we also introduce a binary outcome of wage, \code{wagebin}. The dichotomized wage was obtained with the cutoff of the average hourly wage of actively employed British males aged $30$-$39$ in $1991$\footnote{https://www.ons.gov.uk/employmentandlabourmarket/peopleinwork/earningsandworkinghours/}. The average hourly wage is $8.23$, and we take $\log(8.23)\approx 2.10$ as the cutoff. Among the study participants, we observe $1610$ and $2032$ individuals above and below the average, respectively, and we are interested in estimating the pairwise (weighted) average treatment effect of academic qualification on obtaining an above-average hourly wage.
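For instance, assuming the variable \code{wage} stores the log hourly wage in the \code{NCDS} data frame, the binary outcome can be constructed as
\begin{Scode}
NCDS$wagebin <- as.numeric(NCDS$wage > log(8.23))
\end{Scode}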
We specify a multinomial logistic regression model, \code{ps.mult}, to estimate the generalized propensity scores.
\begin{Scode}
ps.mult <- Dmult ~ white + maemp + as.factor(scht) + as.factor(qmab) +
as.factor(qmab2) + as.factor(qvab) + as.factor(qvab2) + paed_u + maed_u +
agepa + agema + sib_u + paed_u * agepa + maed_u * agema
\end{Scode}
Then we obtain the propensity score estimates and assess weighted covariate balance with the \code{SumStat()} function.
\begin{Scode}
bal.mult <- SumStat(ps.formula = ps.mult,
weight = c("IPW", "overlap", "matching", "entropy"), data = NCDS)
plot(bal.mult, type = "density")
\end{Scode}
\begin{figure}[!ht]
\centering
\subfigure{\label{fig:mult1}
\includegraphics[width=.35\linewidth]{figure/propdmult1.pdf} }
\subfigure{\label{fig:mult2}
\includegraphics[width=.35\linewidth]{figure/propdmult2.pdf} }
\hspace{-5mm}
\subfigure{\label{fig:mult3}
\includegraphics[width=.35\linewidth]{figure/propdmult3.pdf}}
\caption{\label{fig:propdmult}
Density plots of estimated generalized propensity scores with respect to the three-level treatment variable \code{Dmult}, generated by the \code{plot.SumStat()} function in the \CRANpkg{PSweight} package.}
\end{figure}
The distributions of generalized propensity scores are given in Figure \ref{fig:propdmult} (in alphabetic order of the names of the treatment groups). For the generalized propensity scores to receive the advanced qualification (\code{">=A/eq"}) or no qualification (\code{"None"}), there is a mild lack of overlap due to separation of the group-specific distributions. Since \code{bal.mult} includes four weighting schemes, we plot the maximum pairwise ASD and assess the (weighted) covariate balance in a single Love plot.
\begin{Scode}
plot(bal.mult, metric = "ASD")
\end{Scode}
The covariates are imbalanced across the three groups prior to any weighting. Although IPW can generally improve covariate balance, the maximum pairwise ASD still occasionally exceeds the threshold $0.1$ due to lack of overlap. In contrast, OW, MW and EW all emphasize the subpopulation with improved overlap and provide better balance across all covariates.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{figure/multasd.pdf}
\caption{Love plot with the three-level treatment variable \code{Dmult} using the maximum pairwise ASD metric, generated by the \code{plot.SumStat()} function in the \CRANpkg{PSweight} package.}\label{fig:multasd}
\end{figure}
\subsubsection{Generalized propensity score trimming}
The \CRANpkg{PSweight} package can perform trimming based on (generalized) propensity scores. As IPW does not adequately balance the covariates across the three groups in Figure \ref{fig:multasd}, we explore trimming as a way to improve balance for IPW. There are two types of trimming performed by the \CRANpkg{PSweight} package: (1) symmetric trimming that removes units with extreme (generalized) propensity scores \citep{Crump2009,Yoshida2019} and (2) optimal trimming that provides the most efficient IPW estimator for estimating the (pairwise) ATE \citep{Crump2009,Yang2016}. Specifically, symmetric trimming is supported by both the \code{SumStat()} and \code{PSweight()} functions through the \code{delta} argument. Both functions refit the (generalized) propensity score model after trimming, following the recommendations in \citet{LiThomasLi2018}. We also provide a stand-alone \code{PStrim()} function that performs both symmetric and optimal trimming. Following \citet{Yoshida2019}, with three treatment groups, we exclude all individuals with estimated generalized propensity scores less than $\delta=0.067$. This threshold removes a substantial number of individuals in the advanced qualification group (this information can be pulled from the \code{trim} element of the \code{SumStat} object). As discussed in \citet{Yoshida2019}, propensity score trimming can improve the estimation of the ATE and ATT, but barely has any effect on the estimation of the ATO and ATM. Evidently, Figure \ref{fig:mult4} indicates that IPW controls all pairwise ASD within $10\%$ in the trimmed sample. Trimming had nearly no effect on the weighted balance for OW, MW and EW.
\begin{Scode}
bal.mult.trim <- SumStat(ps.formula = ps.mult, weight = c("IPW", "overlap", "matching",
"entropy"), data = NCDS, delta = 0.067)
bal.mult.trim
\end{Scode}
\begin{Soutput}
1050 cases trimmed, 2592 cases remained
trimmed result by trt group:
>=A/eq None O/eq
trimmed 778 71 201
remained 1028 824 740
weights estimated for: IPW overlap matching entropy
\end{Soutput}
\begin{Scode}
plot(bal.mult.trim, metric = "ASD")
\end{Scode}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{figure/multasdtrim.pdf}
\caption{Love plot with the three-level treatment variable \code{Dmult} using the maximum pairwise ASD metric, after symmetric trimming with $\delta=0.067$. This plot is generated by the \code{plot.SumStat()} function in the \CRANpkg{PSweight} package.}\label{fig:mult4}
\end{figure}
Alternatively, if one does not specify the trimming threshold, the \code{PStrim} function supports the optimal trimming procedure that identifies the optimal threshold based on data. An example syntax is given as follows. By pulling out the summary statistics for trimming, we can see that optimal trimming excludes $27\%$, $9\%$ and $2\%$ of the individuals among those with advanced qualification, intermediate qualification and no qualification, respectively. The exclusion is more conservative compared to symmetric trimming with $\delta=0.067$. However, the resulting covariate balance after optimal trimming is similar to Figure \ref{fig:mult4} and omitted.
\begin{Scode}
PStrim(ps.formula = ps.mult, data = NCDS, optimal = TRUE)
\end{Scode}
\begin{Soutput}
>=A/eq None O/eq
trimmed 479 21 82
remained 1327 874 859
\end{Soutput}
\subsubsection{Estimation and inference of pairwise (weighted) average treatment effects}
We estimate the ratio estimands introduced in Section \ref{sec:ratioest} using the binary outcome \code{wagebin}. For illustration, we only estimate the causal effects based on the data without trimming; the analysis with the trimmed data follows the exact same steps. Based on the multinomial logistic propensity score model, we obtain the pairwise causal RR among the combined population via IPW.
\begin{Scode}
ate.mult <- PSweight(ps.formula = ps.mult, yname = "wagebin", data = NCDS,
  weight = "IPW")
contrasts.mult <- rbind(c(1,-1, 0), c(1, 0,-1), c(0, -1, 1))
sum.ate.mult.rr <- summary(ate.mult, type = "RR", contrast = contrasts.mult)
sum.ate.mult.rr
\end{Scode}
\begin{Soutput}
Closed-form inference:
Inference in log scale:
Original group value: >=A/eq, None, O/eq
Contrast:
>=A/eq None O/eq
Contrast 1 1 -1 0
Contrast 2 1 0 -1
Contrast 3 0 -1 1
Estimate Std.Error lwr upr Pr(>|z|)
Contrast 1 0.607027 0.115771 0.380120 0.83393 1.577e-07 ***
Contrast 2 0.459261 0.052294 0.356767 0.56176 < 2.2e-16 ***
Contrast 3 0.147766 0.121692 -0.090746 0.38628 0.2246
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
\end{Soutput}
By providing the appropriate contrast matrix, we obtain all pairwise comparisons of the average potential outcomes on the log scale with the \code{summary.PSweight()} function, and estimate $\lambda^{h,\mbox{\tiny{RR}}}(\pmb{a})$ for each contrast vector $\pmb{a}$. The p-values provide statistical evidence against the weak causal null $H_0:\lambda^{h,\mbox{\tiny{RR}}}(\pmb{a})=0$. It is found that, among the combined population, the proportion that receives above-average hourly wage when everyone attains advanced qualification is $\exp(0.607)=1.83$ times that when everyone attains no academic qualification. Further, the proportion that receives above-average hourly wage when everyone attains advanced qualification is $\exp(0.459)=1.58$ times that when everyone attains intermediate qualification. Both effects are significant at the $0.05$ level and provide strong evidence against the corresponding causal null (p-value $<0.001$). However, if everyone attains intermediate qualification, the proportion that receives above-average hourly wage is only slightly higher than without qualification, with a p-value exceeding $0.05$. To directly report the causal RR and its confidence intervals, we can simply exponentiate the point estimates and confidence limits provided by the \code{summary.PSweight()} function.
\begin{Scode}
exp(sum.ate.mult.rr$estimates[,c(1,4,5)])
\end{Scode}
\begin{Soutput}
Estimate lwr upr
Contrast 1 1.834968 1.4624601 2.302358
Contrast 2 1.582904 1.4287028 1.753749
Contrast 3 1.159241 0.9132496 1.471493
\end{Soutput}
Focusing on the target population that has the most overlap in the observed covariates, we further use the OW to estimate the pairwise causal RR. OW theoretically provides the best internal validity for pairwise comparisons; Figure \ref{fig:mult4} also indicates that OW achieves better covariate balance among the overlap population. Exponentiating the results provided by the \code{summary.PSweight()} function, we observe each pairwise causal RR has a larger effect size among the overlap weighted population. Interestingly, among the overlap population, the proportion that receives above-average hourly wage when everyone attains intermediate qualification becomes approximately $1.55$ times that when everyone attains no academic qualification, and the associated $95\%$ CI excludes the null. Moreover, the standard errors for the pairwise comparisons are smaller when using OW versus IPW, implying that OW analysis generally corresponds to increased power by focusing on a population with equipoise. We repeat the analysis using both MW and EW; the results are similar to OW for this analysis and therefore omitted for brevity.
\begin{Scode}
ato.mult <- PSweight(ps.formula = ps.mult, yname = "wagebin", data = NCDS,
weight = "overlap")
sum.ato.mult.rr <- summary(ato.mult, type = "RR", contrast = contrasts.mult)
exp(sum.ato.mult.rr$estimates[,c(1,4,5)])
\end{Scode}
\begin{Soutput}
Estimate lwr upr
Contrast 1 2.299609 1.947140 2.715882
Contrast 2 1.527931 1.363092 1.712705
Contrast 3 1.505048 1.257180 1.801785
\end{Soutput}
The above output suggests that among the overlap population, the causal RR for comparing advanced qualification and intermediate qualification is similar in magnitude to that for comparing intermediate qualification and no qualification. We can formally test for the equality of two consecutive causal RR based on the null hypothesis $H_0: \mu_3^h/\mu_2^h=\mu_2^h/\mu_1^h$ (also see Section \ref{sec:ratioest}). Operationally, we need to specify the corresponding contrast vector \code{contrast = c(1, 1, -2)}. The p-value for testing this null is $0.91$ (output omitted for brevity), and suggests a lack of evidence against the equality of consecutive causal RR at the $0.05$ level.
\begin{Scode}
summary(ato.mult, type = "RR", contrast = c(1, 1, -2), CI = FALSE)
\end{Scode}
With the binary outcome \code{wagebin}, we can also estimate the pairwise causal OR among a specific target population. For example, using OW, the causal conclusions regarding the effectiveness of attaining academic qualification do not change, because all three $95\%$ confidence intervals exclude the null. However, the pairwise causal ORs appear larger than the pairwise causal RRs. This is expected because our outcome of interest is not uncommon \citep{nurminen1995use}. For rare outcomes, the causal OR approximates the causal RR.
\begin{Scode}
sum.ato.mult.or <- summary(ato.mult, type = "OR", contrast = contrasts.mult)
exp(sum.ato.mult.or$estimates[,c(1,4,5)])
\end{Scode}
\begin{Soutput}
Estimate lwr upr
Contrast 1 3.586050 2.841383 4.525879
Contrast 2 2.050513 1.696916 2.477791
Contrast 3 1.748855 1.375483 2.223578
\end{Soutput}
As a final step, we illustrate how to combine OW with outcome regression and estimate the pairwise causal RR among the overlap population.
We use the same set of covariates in the binary outcome regression model.
\begin{Scode}
out.wagebin <- wagebin ~ white + maemp + as.factor(scht) + as.factor(qmab) +
as.factor(qmab2) + as.factor(qvab) + as.factor(qvab2) + paed_u + maed_u +
agepa + agema + sib_u + paed_u * agepa + maed_u * agema
\end{Scode}
Loading this outcome regression formula into the \code{PSweight()} function, and specifying \code{family = "binomial"} to indicate the type of outcome, we obtain the augmented overlap weighting estimates on the log RR scale. Exponentiating the point estimates and confidence limits, one reports the pairwise causal RR. The pairwise causal RR reported by the augmented OW estimator is similar to that reported by the simple OW estimator; further, the width of the confidence interval is also comparable before and after outcome augmentation, and the causal conclusions based on pairwise RR remain the same. The similarity between simple and augmented OW estimators implies that OW itself may already be efficient.
\begin{Scode}
ato.mult.aug <- PSweight(ps.formula = ps.mult, yname = "wagebin", data = NCDS,
augmentation = TRUE, out.formula = out.wagebin, family = "binomial")
sum.ato.mult.aug.rr <- summary(ato.mult.aug, type = "RR", contrast = contrasts.mult)
exp(sum.ato.mult.aug.rr$estimates[,c(1,4,5)])
\end{Scode}
\begin{Soutput}
Estimate lwr upr
Contrast 1 2.310628 1.957754 2.727105
Contrast 2 1.540176 1.375066 1.725111
Contrast 3 1.500237 1.253646 1.795331
\end{Soutput}
\subsection{Using machine learning to estimate propensity scores and potential outcomes}\label{sec:impillu}
As an alternative to the default generalized linear models, we can use more advanced machine learning models to estimate propensity scores and potential outcomes. Flexible propensity score and outcome estimation has been demonstrated to reduce bias due to model misspecification and potentially improve covariate balance \citep{lee2010improving,hill2011bayesian,McCaffrey2013}. This can be achieved in \CRANpkg{PSweight}, both for balance checks and for constructing weighted estimators, by specifying the method as the generalized boosted model (GBM) or the super learner. Additional model specifications for these methods can be supplied through \code{ps.control} and \code{out.control}. Machine learning models included in neither \code{gbm} nor \code{SuperLearner} can be estimated externally and then imported through the \code{ps.estimate} and \code{out.estimate} arguments. These two arguments broaden the utility of \CRANpkg{PSweight}, as any externally generated estimates of propensity scores and potential outcomes can be easily incorporated.
We now illustrate the use of GBM as an alternative to the default generalized linear models. For simplicity, this illustration is based on binary education. Specifically, we created \code{Dany} to indicate whether one had attained any academic qualification. There are 2,399 individuals that attained an academic qualification, and 1,243 individuals without any. GBM is a family of non-parametric tree-based regressions that allow for flexible non-linear relationships between predictors and outcomes \citep{Friedman2000}. The following propensity model formula is specified; the formula does not include interaction terms because boosted regression is already capable of capturing non-linear effects and interactions \citep{McCaffrey2004}. In this illustration, we use the AdaBoost \citep{freund1997decision} algorithm to fit the propensity model through the control setting \code{ps.control = list(distribution = "adaboost")}. We use the default values for other model parameters such as the number of trees (\code{n.trees = 100}), interaction depth (\code{interaction.depth = 1}), the minimum number of observations in the terminal nodes (\code{n.minobsinnode = 1}), shrinkage (\code{shrinkage = 0.1}), and bagging fraction (\code{bag.fraction = 0.5}). Alternative values for these parameters can also be passed through \code{ps.control}.
\begin{Scode}
ps.any.gbm <- Dany ~ white + maemp + as.factor(scht) + as.factor(qmab) +
as.factor(qmab2) + as.factor(qvab) + as.factor(qvab2) + paed_u + maed_u+
agepa + agema + sib_u
bal.any.gbm <-SumStat(ps.formula = ps.any.gbm, data= NCDS, weight = "overlap",
method = "gbm", ps.control = list(distribution = "adaboost"))
\end{Scode}
The balance check through \code{plot.SumStat()} suggests substantial improvement in covariate balance, with the ASD of all covariates below 0.1 after weighting. After assessing balance and confirming the adequacy of the propensity score model, we further fit the outcome model using GBM with the default logistic regression and parameters. In the \code{PSweight()} function, we specify both \code{ps.method = "gbm"} and \code{out.method = "gbm"} and leave the \code{out.control} argument at its default. The detailed code and a summary of the output are given below. Here we also define the outcome regression formula without interaction terms, because GBM is capable of capturing interactions between covariates. The results using GBM, in this example, are very similar to those using generalized linear models (results omitted).
\begin{Scode}
out.wage.gbm <- wage ~ white + maemp + as.factor(scht) + as.factor(qmab) +
as.factor(qmab2) + as.factor(qvab) + as.factor(qvab2) + paed_u +
maed_u + agepa + agema + sib_u
ato.any.aug.gbm <- PSweight(ps.formula = ps.any.gbm, yname = "wagebin",
data = NCDS, augmentation = TRUE, out.formula = out.wage.gbm,
ps.method = "gbm", ps.control = list(distribution = "adaboost"),
out.method = "gbm")
summary(ato.any.aug.gbm, CI = FALSE)
\end{Scode}
\begin{Soutput}
Closed-form inference:
Original group value: 0, 1
Contrast:
0 1
Contrast 1 -1 1
Estimate Std.Error z value Pr(>|z|)
Contrast 1 0.186908 0.018609 10.044 < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
\end{Soutput}
\section{Summary} \label{sec:summary}
Propensity score weighting is an important tool for causal inference and comparative effectiveness research. This paper introduces the \CRANpkg{PSweight} package and demonstrates its functionality with the NCDS data example in the context of binary and multiple treatment groups. In addition to providing easy-to-read balance statistics and plots to aid the design of observational studies, \CRANpkg{PSweight} offers point and variance estimation with a variety of weighting schemes for the (weighted) average treatment effects on both the additive and ratio scales. These weighting schemes include the optimal overlap weights recently introduced in \citet{LiMorganZaslavsky2018} and \citet{li2019propensity}, and can help generate valid causal comparative effectiveness evidence for the population at equipoise.
Although propensity score weighting has been largely developed in observational studies, it is also an important tool for covariate adjustment in randomized controlled trials (RCTs). \citet{williamson2014variance} showed that IPW can reduce the variance of the unadjusted difference-in-means treatment effect estimator in RCTs, and \citet{shen2014inverse} proved that the IPW estimator is semiparametric efficient and asymptotically equivalent to the analysis of covariance (ANCOVA) estimator \citep{tsiatis2008covariate}. \citet{zeng2020RCT} generalized these results of IPW to the family of balancing weights. Operationally, there is no difference in implementing propensity score weighting between RCTs and observational studies. Therefore, \CRANpkg{PSweight} is directly applicable to perform covariate-adjusted analysis in RCTs.
The \CRANpkg{PSweight} package is under continuing development to include other useful components for propensity score weighting analysis. Specifically, future versions of \CRANpkg{PSweight} will include components to enable pre-specified subgroup analysis with balancing weights and flexible variable selection tools \citep{yang2020propensity}. We are also studying overlap weighting estimators with time-to-event outcomes and complex survey designs. Those new features are being actively developed concurrently with our extensions to the methodology.
\section{Acknowledgement}
The authors would like to acknowledge the NCDS replication data published on Harvard Dataverse (\url{https://dataverse.harvard.edu/}) \citep{DVNEPCYUL2012}, which provides a coded data set for our analysis in Section \ref{sec:illustrations}.
\subsection{Main Idea}
\begin{figure}[b]
\center
\includegraphics[width = 0.95 \linewidth]{MHPC_diagram_final.pdf}
\caption{Novelty of MHPC compared to conventional simple-model MPC. Left: Simple-model MPC generates the simple-model plan and the full-model plan via a hierarchical sequence of optimizations. Right: MHPC generates the two plans simultaneously via a single optimization. }
\label{fig:MHPC_diagram}
\end{figure}
In this section, we further describe the main idea of MHPC. Compared to simple-model MPC, which formulates a hierarchical sequence of optimization problems, MHPC constructs a single optimization problem posed over a hierarchy of models. The main differences are shown conceptually in Fig.~\ref{fig:MHPC_diagram}. MHPC plans with the full-model dynamics in the near term and with a simple model in the long term. The consistency between the two is enforced by a low-dimensional transition constraint. Due to the reduced dimensionality of the simple model, this transition typically incorporates a projection map. This instantaneous transition constraint is in contrast to methods that impose consensus between models over the entire planning horizon (e.g., \cite{budhiraja2019dynamics}). Figure~\ref{fig:MHPC_diagram} incorporates only two models, but, in general, MHPC can use multiple models, regarding the highest-order model as the `full model' and the lower-order models as `simple models'.
At the core of MHPC is an abstraction schedule specifying the planning horizons for each model. In Fig.~\ref{fig:MHPC_diagram}, $t_{h_f}$ and $t_{h_s}$ respectively represent planning horizon durations for a full model and a simple model. The abstraction schedule affects MHPC performance in terms of disturbance rejection and computational cost. Planning based exclusively on the full model and exclusively on a simple model represent two extremes of how to configure MHPC. In this work, we discuss these effects for several example systems in Section~\ref{sec:example_sys}.
\subsection{Problem Formulation}
\label{subsec:problem formulation}
\vspace{-3px}
\label{subsec:MHPC_problem}
MHPC can be formulated as a multi-phase receding-horizon TO problem, where a phase indicates a period of time during which the dynamics is unchanged. A system with multi-phase dynamics can be modeled as follows:
\begin{subequations}\label{eq_multiplase_system}
\begin{align}
\xi_{k+1} &= \mathbf{f}_i(\xi_k,\u^{[i]}_k),\label{subeq_dyn}\\
\mathbf{x}^{[i+1]}_0 &= \P_{i}(\xi_{N_{i}}),\label{subeq_reset}\\
g_{i}(\xi_{N_{i}}) &= 0, \label{subeq_switch}
\end{align}
\end{subequations}
where $\mathbf{x}$ and $\u$ are the system state and control vectors, $i$ denotes the phase index, $k\in [0,N_i-1]$ the time index, $\mathbf{f}$ gives the dynamics evolution, $\P$ represents the phase transition map, and $g$ represents a constraint on the terminal state. The state and control vectors may have a different dimension in each phase. Note that $\mathbf{f}$ is in discrete time and is obtained via a numerical integration scheme, with forward Euler used in this work. Other integration schemes can be used without loss of generality to the remaining development. Equation~\eqref{subeq_switch} specifies a terminal constraint for the $i$-th phase.
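For example, if $\Dot{\xi} = \mathbf{f}^{c}_i(\xi,\u^{[i]})$ denotes the continuous-time dynamics of phase $i$ (a notation introduced here only for illustration), forward Euler with time step $\Delta t$ gives
\begin{equation*}
\xi_{k+1} = \mathbf{f}_i(\xi_k,\u^{[i]}_k) = \xi_k + \Delta t\, \mathbf{f}^{c}_i(\xi_k,\u^{[i]}_k).
\end{equation*}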
We alert the readers that the formulation~\eqref{eq_multiplase_system} is a generalization of other hybrid systems models commonly used in legged locomotion work (e.g., \cite{westervelt2003hybrid}) where the transition map normally accounts for impacts. In~\eqref{eq_multiplase_system}, $\mathbf{f}$ may specify the dynamics for a phase of the full model (e.g., stance dynamics or flight dynamics of a full quadruped) or of a simple model. The transition map $\P$ may thus describe a reset map at impact, as well as a transition that maps the state of the full model to the state of a simpler model, as captured by the transition constraint in Fig.~\ref{fig:MHPC_diagram}. For example, when working with the SLIP model, the transition map could account for the impact in the full model and then extract the CoM state post-impact as the initial condition for the SLIP.
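As a concrete illustration (the specific maps used for the example systems are detailed later), a transition from a full model to a CoM-based simple model may take the linear form
\begin{equation*}
\mathbf{x}^{[i+1]}_0 = \P_{i}(\xi_{N_{i}}) = \mathbf{E}\,\xi_{N_{i}},
\end{equation*}
where $\mathbf{E}$ is a selection matrix that extracts the CoM position and velocity from the (post-impact) full-model state.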
For each phase $i$ of~\eqref{eq_multiplase_system}, we define a cost function
\begin{equation}
J^{[i]}(\mathbf{x}^{[i]}_0, \mathbf{U}^{[i]}) = \sum_{k=0}^{N_i - 1} \ell_i(\xi_k, \u^{[i]}_k) + \phi_i(\xi_{N_i}),
\end{equation}
where $\ell_i$, $\phi_i$ and $N_i$ represent the running cost, the terminal cost, and the length of horizon, respectively, associated with phase $i$, and $\mathbf{U}^{[i]} = [\u^{[i]}_0,\cdots,\u^{[i]}_{N_i-1}]$. For a system~\eqref{eq_multiplase_system} with $n$ phases, the TO problem can be formulated as follows
\begin{subequations}\label{eq_hybridTO}
\begin{IEEEeqnarray}{cll}
\IEEEeqnarraymulticol{2}{l}{\min_{\mathbcal{U}} \ \ \sum_{i=1}^n J^{[i]}(\mathbf{x}^{[i]}_0,\mathbf{U}^{[i]})}\\
\text{subject~to} \ \ & \eqref{subeq_dyn}, \eqref{subeq_reset},\\
& \eqref{subeq_switch},\\
&\mathbf{h}_i(\xi_k,\u^{[i]}_k)\geq \mathbf{0}, \label{subeq_ineq}
\end{IEEEeqnarray}
\end{subequations}
where $\mathbcal{U} = [\mathbf{U}^{[1]},\cdots,\mathbf{U}^{[n]}]$ and $\mathbf{h}$ describes inequality constraints such as torque limits and friction constraints. Solving~\eqref{eq_hybridTO} with receding horizons then gives rise to MHPC. Extending the horizon of a conventional finite-horizon Optimal Control Problem (OCP) helps maintain long-term stability at the expense of additional computation. For example, the whole-body QP designed in \cite{Sherikov14} adds long-term balance constraints for a humanoid robot such that its state remains viable. An alternate strategy is to embed future costs into a terminal cost. An ideal terminal cost would be the value function of an infinite-horizon OCP, which, however, does not have an analytical solution in general and needs to be approximated. In this work, by planning over the long term with a simple model, MHPC provides a proxy of the value function for the full model to improve its long-term stability.
\subsection{Hybrid Systems Solver for MHPC}
\label{subsec:MHPC_solver}
We employ an efficient algorithm, Hybrid Systems Differential Dynamic Programming (HSDDP), developed in \cite{li2020hybrid}, to solve the multi-phase optimization problem~\eqref{eq_hybridTO}. Other general multi-phase TO solvers could also be used. HSDDP attacks~\eqref{eq_hybridTO} by converting it into an unconstrained optimization problem using Augmented Lagrangian (AL) \cite{lantoine2012hybrid,Howell19} and Reduced Barrier (ReB) methods. A Lagrangian-like function for phase $i$ is constructed as follows
\begin{multline}\label{eq_mode_Lagrangian}
\mathcal{L}^{[i]}(\mathbf{x}^{[i]}_0,\mathbf{U}^{[i]},\lambda_i,\sigma_i,\delta_i) = \\ \sum_{k=0}^{N_i - 1} \underbrace{\ell_i(\xi_k, \u^{[i]}_k) + B_{\delta_i}\big(\mathbf{h}_i(\xi_k,\u^{[i]}_k) \big)}_{L_i(\xi_k, \u^{[i]}_k)} \\
+\underbrace{\phi_i(\xi_{N_i}) + \left(\frac{\sigma_i}{2}\right)^2 g_i^2(\xi_{N_i}) + \lambda_i g_i(\xi_{N_i})}_{\Phi_i(\xi_{N_i})},
\end{multline}
where $\lambda_i$ and $\sigma_i$ are the Lagrange multiplier and penalty coefficient, respectively, and $B_{\delta_i}$ is an element-wise Reduced Barrier function \cite{hauser2006barrier,li2020hybrid} with relaxation parameter $\delta_i$. The Lagrangian-like function for problem~\eqref{eq_hybridTO} is then
\begin{equation}\label{eq_Lagrangian}
\mathcal{L}(\mathbcal{U},\boldsymbol{\lambda}, \boldsymbol{\sigma}, \boldsymbol{\delta}) = \sum_{i=1}^n \mathcal{L}^{[i]}(\mathbf{x}^{[i]}_0,\mathbf{U}^{[i]},\lambda_i,\sigma_i,\delta_i),
\end{equation}
where $\boldsymbol{\lambda} = [\lambda_1,\cdots,\lambda_n]$, $\boldsymbol{\sigma}$ and $\boldsymbol{\delta}$ are similarly defined.
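For reference, one common form of the element-wise relaxed log barrier is (a sketch only; see \cite{hauser2006barrier,li2020hybrid} for the precise form adopted here)
\begin{equation*}
B_{\delta}(z)=
\begin{cases}
-\ln z, & z \geq \delta,\\
\frac{1}{2}\left[\left(\frac{z-2\delta}{\delta}\right)^{2}-1\right]-\ln\delta, & z<\delta,
\end{cases}
\end{equation*}
which recovers the exact log barrier in the interior of the feasible set and extends it quadratically (with matching value and slope at $z=\delta$) near and beyond the constraint boundary.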
The reformulated unconstrained problem is then:
\begin{subequations}\label{eq_hybridTO_unconstr}
\begin{IEEEeqnarray}{cll}
\IEEEeqnarraymulticol{2}{l}{\min_{\mathbcal{U}} \ \ \mathcal{L}(\mathbcal{U},\boldsymbol{\lambda}, \boldsymbol{\sigma}, \boldsymbol{\delta})}\\
\text{subject~to} \ \ & \eqref{subeq_dyn}, \eqref{subeq_reset}.
\end{IEEEeqnarray}
\end{subequations}
HSDDP solves~\eqref{eq_hybridTO_unconstr} with fixed $\boldsymbol{\lambda}, \boldsymbol{\sigma}, \boldsymbol{\delta}$ using DDP, and employs an outer loop to iteratively adjust their values until all constraints are satisfied. The update equations are as follows
\begin{equation}
\boldsymbol{\sigma} \leftarrow \beta_{\sigma}\boldsymbol{\sigma}, \ \
\boldsymbol{\lambda} \leftarrow \boldsymbol{\lambda} + \boldsymbol{\sigma} \circ\mathbf{g}, \ \
\boldsymbol{\delta} \leftarrow \beta_{\delta}\boldsymbol{\delta}, \label{eq_update}
\end{equation}
where $\beta_{\sigma}>1$ and $0<\beta_{\delta}<1$ are update parameters, the vector $\mathbf{g}\in\mathbb{R}^n$ concatenates $g_{i}(\xi_{N_{i}})$ $\forall i=1,\cdots,n$, and the operator $\circ$ denotes the element-wise product. We note that the convergence of this strategy to a feasible point is not strictly guaranteed; however, it is empirically found to be effective. Care must be taken when solving~\eqref{eq_hybridTO_unconstr} with DDP due to the discontinuous jump (for hybrid systems) or state projection (for model transitions) caused by~\eqref{subeq_reset}. This is addressed in HSDDP by employing an impact-aware step~\cite{li2020hybrid}.
We denote $V(\xi_k)$ the optimal cost-to-go evaluated at $\xi_k$. The variation of $V(\xi_k)$ along a nominal trajectory under perturbation $\delta\xi_k$ is approximated to the second order as follows
\begin{equation}\label{eq_value_func_approx}
\delta V(\delta\xi_k) \approx \frac{1}{2}(\delta\xi_k)^\top\mathbf{S}^{[i]}_k\delta\xi_k + (\mathbf{s}^{[i]}_k)^\top\delta\xi_k + s^{[i]}_k,
\end{equation}
where $\mathbf{S}^{[i]}_k$, $\mathbf{s}^{[i]}_k$, and $s^{[i]}_k$ respectively, represent the Hessian, gradient, and scalar terms. HSDDP recursively computes these terms for all $k$ and $i$ by performing a backward sweep. For $0\leq k<N_i$ in phase $i$, the recursive equations are \cite{tassa2012synthesis}
\begin{subequations}\label{eq_value_update_smooth}
\begin{align}
s &= s' - \frac{1}{2}\mathbf{Q}_{\u}^T\mathbf{Q}_{\u\u}^{-1}\mathbf{Q}_{\u}, \label{eq_scalar}\\
\mathbf{s} &= \Qx - \mathbf{Q}_{\u\x}^T\mathbf{Q}_{\u\u}^{-1}\mathbf{Q}_{\u}, \label{eq_gradient}\\
\S &= \mathbf{Q}_{\x\x} - \mathbf{Q}_{\u\x}^T\mathbf{Q}_{\u\u}^{-1}\mathbf{Q}_{\u\x} \label{eq_hessain},
\end{align}
\end{subequations}
in which
\begin{subequations}\label{eq_Qs}
\begin{align}
\Qx &= \mathbf{L}_{\x} + \mathbf{f}_{\mathbf{x}}^\top\mathbf{s}',\\
\mathbf{Q}_{\u} &= \mathbf{L}_{\u} + \mathbf{f}_{\mathbf{u}}^\top\mathbf{s}',\\
\mathbf{Q}_{\x\x} &= \mathbf{L}_{\x\x} + \mathbf{f}_{\mathbf{x}}^\top\S'\mathbf{f}_{\mathbf{x}} + \mathbf{s}' \cdot \mathbf{f}_{\mathbf{_{xx}}},\\
\mathbf{Q}_{\u\u} &= \mathbf{L}_{\u\u} + \mathbf{f}_{\mathbf{u}}^\top\S'\mathbf{f}_{\mathbf{u}} + \mathbf{s}' \cdot \mathbf{f}_{\mathbf{{uu}}},\\
\mathbf{Q}_{\u\x} &= \mathbf{L}_{\u\x} + \mathbf{f}_{\mathbf{u}}^\top\S'\mathbf{f}_{\mathbf{x}} + \mathbf{s}' \cdot \mathbf{f}_{\mathbf{{ux}}},
\end{align}
\end{subequations}
where subscripts on $\mathbf{L}$ denote partial derivatives of the running term $L_i$ defined in~\eqref{eq_mode_Lagrangian}, subscripts on $\mathbf{f}$ denote partial derivatives of the dynamics, and the prime indicates the next time step. Note that $\mathbf{f}_{\mathbf{_{xx}}}$, $\mathbf{f}_{\mathbf{{uu}}}$ and $\mathbf{f}_{\mathbf{{ux}}}$ are tensors. The notation `$\cdot$' denotes vector-tensor multiplication. At $k=N_i$ of phase $i$, the update equations are
\begin{subequations}\label{eq_value_update_jump}
\begin{align}
s &= s',\\
\mathbf{s} &= \boldsymbol{\Phi}_{\mathbf{x}} + \P_{\mathbf{x}}^\top\mathbf{s}',\\
\S & = \boldsymbol{\Phi}_{\mathbf{x}\x} + \P_{\mathbf{x}}^\top\S' \P_{\mathbf{x}}.
\end{align}
\end{subequations}
Note that in~\eqref{eq_value_update_jump}, the prime indicates the next step at $k=0$ of phase $i+1$. The value function approximation thus obtained is used to construct the update policy
\begin{equation}\label{eq_optdu}
\delta\u^* = -\mathbf{Q}_{\u\u}^{-1}(\mathbf{Q}_{\u} + \mathbf{Q}_{\u\x}\delta\mathbf{x}) \equiv \boldsymbol{\kappa} + \mathbf{K}\delta\mathbf{x},
\end{equation}
where $\boldsymbol{\kappa}$ is the search direction (feed-forward correction) and $\mathbf{K}$ is a feedback gain.
A line search method and regularization are typically performed with~\eqref{eq_optdu} to ensure cost reduction \cite{tassa2012synthesis}. Pseudocode for HSDDP is shown in Alg.~\ref{alg_HSDDP}, and the reader is referred to \cite{li2020hybrid} for a detailed description.
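For the line search, one common acceptance criterion, following \cite{tassa2012synthesis}, scales the feed-forward term by a backtracking parameter $\alpha\in(0,1]$ and compares the achieved cost change against the expected change
\begin{equation*}
\Delta J(\alpha) = \alpha\sum\boldsymbol{\kappa}^\top\mathbf{Q}_{\u} + \frac{\alpha^{2}}{2}\sum\boldsymbol{\kappa}^\top\mathbf{Q}_{\u\u}\boldsymbol{\kappa},
\end{equation*}
where the sums run over all time steps and phases; the step is accepted only when the ratio of achieved to expected reduction exceeds a prescribed threshold, and the regularization of $\mathbf{Q}_{\u\u}$ is increased otherwise.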
\begin{algorithm}[t]
\caption{HSDDP Algorithm}
\label{alg_HSDDP}
\begin{algorithmic}[1]
\State Initialize $\boldsymbol{\lambda},\boldsymbol{\sigma}, \boldsymbol{\delta}$.
\While {$\norm{\mathbf{g}}_2>$ tol.}
\State Minimize $\mathcal{L}(\mathbcal{U},\boldsymbol{\lambda},\boldsymbol{\sigma}, \boldsymbol{\delta})$ using DDP (using Eq.~\eqref{eq_value_update_jump} at each mode transition in the backward sweep).
\State Compute the terminal constraint violations $g_{i}(\xi_{N_{i}})$ for all phases $i$.
\State Update $\boldsymbol{\lambda},\boldsymbol{\sigma}, \boldsymbol{\delta}$ using Eq.~\eqref{eq_update}
\EndWhile
\end{algorithmic}
\end{algorithm}
\subsection{Quadruped}\label{subsec_sim_quadruped}
We use a 2D model of the MIT Mini Cheetah \cite{katz2019mini} as the testbed. A bounding gait is simulated for four gait cycles, where the front-stance mode and the flight mode each runs for 72 ms, and the back-stance mode runs for 80 ms. This task is repeated for six MHPC abstraction schedules.
MHPC is configured to re-plan at every gait mode. We alert the reader that in this section we count the planning horizon in gait modes rather than in seconds as in Fig.~\ref{fig:MHPC_diagram}, since each mode may have different timings. For simplicity, denote the number of gait modes in the overall planning horizon, full-model horizon, and simple-model horizon, as $n_o$, $n_f$, and $n_s$, respectively. We fix $n_o = 8$ for five abstraction schedules where $n_f$ is considered at values in $\{0,2,4,6,8\}$, and $n_s = n_o - n_f$. A sixth schedule uses $n_o = n_f = 4$ with no simple-model horizon. In the rest of this paper, MHPC($n_f$, $n_s$) denotes the abstraction schedule ($n_f$, $n_s$). When $n_f \neq 0$, MHPC generates joint torques that are directly applied to the robot. When $n_f = 0$, MHPC degenerates to simple-model MPC and generates the GRF as shown in Fig.~\ref{fig:hierarcy_legged}(a). In the latter case, we use $\boldsymbol{\tau} = \mathbf{J}^\top\mathbf{F}$ to obtain joint torques for the stance leg, and a swing foot controller as in \cite{di2018dynamic} for the swing leg. A heuristic controller used by \cite{li2020hybrid} is employed to warm start the full-model plan, and the simple-model plan is initialized with zeros. The time steps for the dynamic simulation and MHPC are both 1 ms. For each optimization, both the outer loop and inner loop of HSDDP are terminated after a maximum of 3 iterations regardless of convergence.
\begin{figure}[t]
\ifcaptionmod
\captionsetup[subfigure]{font=scriptsize,labelfont=scriptsize}
\fi
\centering
\includegraphics[width = 0.7 \columnwidth]{Robustness_miniCheetah_Rev01.pdf}
\caption{Robustness evaluation of MHPC for six abstraction schedules as applied to the quadruped robot.}
\label{fig:robustness_quadruped}
\vspace{-5px}
\end{figure}
\begin{table}[t]
\centering
\caption{Normalized computation times of each MHPC configuration for the quadruped bounding and biped running examples. The first row represents the abstraction schedule $(n_f, n_s)$.}
\label{tab:comp_times}
\begin{tabular}{*{8}{|c}|}
\hline
\multicolumn{2}{|c|}{}&(0,8) &(2,6) &(4,0) &(4,4) &(6,2) &(8,0) \\
\hline
\multirow{2}{*}{Quad} & Avg & 0.100 & 0.530 & 0.623 & 0.678 & 0.855 & 1\\
\cline{2-8}
& Std & 0.016 & 0.027 & 0.030 & 0.032 & 0.040 & 0\\
\hline
\multirow{2}{*}{Biped} & Avg & $\times$ & 0.400 &0.621 &0.530 & 0.770 & 1\\
\cline{2-8}
& Std & $\times$ & 0.070 & 0.042 & 0.085 & 0.008 & 0\\
\hline
\end{tabular}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width = 0.7 \columnwidth]{Robustness_biped_extended.pdf}
\caption{Robustness evaluation of MHPC for five abstraction schedules as applied to the biped robot.}
\label{fig:robustness_biped}
\vspace{-15px}
\end{figure}
Figure~\ref{fig:robustness_quadruped} statistically quantifies the robustness of MHPC for each abstraction schedule, evaluated by the likelihood of rejecting push disturbances. Disturbances are applied on the trunk for 30 ms in the second flight phase, with magnitudes varying between $10$--$160$~N, and with location and direction randomly sampled from a uniform distribution. The probability of success is estimated based on 200 random tests. The solid curves represent the estimated probability of success, around which the dashed curves indicate a $95\%$ confidence interval. The top right corner indicates a higher probability of rejecting larger disturbances, whereas the bottom left corner indicates more likely failures for small disturbances. The configuration MHPC(0,8) (simple-model MPC) has the worst performance, since it starts to fail at disturbances of 30~N. The configuration MHPC(8,0) (whole-body MPC) has the best robustness, since it can reject disturbances as large as 130~N. The robustness of the other MHPC configurations lies above that of MHPC(0,8), demonstrating that incorporating full-model planning increases robustness. The result of MHPC(2,6) shows that even assigning a short interval of the time horizon to the full model can significantly improve the controller's robustness. Comparison between MHPC(4,4) and MHPC(4,0) reveals that the performance is improved by extending the horizon with the simple model. In this sense, both adding whole-body planning to a simple-model scheme and adding simple-model planning to a whole-body scheme offer performance benefits.
Table~\ref{tab:comp_times} summarizes the averages and standard deviations of the normalized computation times for each MHPC configuration, obtained assuming no disturbances. Since MHPC re-plans at every gait mode, sixteen optimizations are averaged for each MHPC configuration. The average computation times are normalized by that of MHPC(8,0). This normalization is done because the implementation of HSDDP is in MATLAB, so the absolute time required is less meaningful. The speed-ups reported would be expected to translate to other, more efficient implementations. Unsurprisingly, MHPC(0,8) (simple-model MPC) has the lowest computational cost, whereas MHPC(8,0) has the highest, and the computation time of the other schemes increases with $n_f$. This is because the full model has the highest state dimension, and the computational complexity of a DDP iteration is cubic in the state dimension. Note that MHPC(2,6) achieves performance as good as whole-body MPC, while its computational efficiency is the second best. We conclude that with a proper schedule, MHPC can achieve high performance rivaling whole-body MPC while significantly lowering the computational cost.
\subsection{Biped}
The biped testbed used in this work is the simulated five-link planar biped robot Ernie \cite{fevre2019terrain} at the Univ.~of Notre Dame. MHPC is applied for four gait cycles with a target running speed of 1.5~$m/s$. The stance modes and the flight modes run for 110~ms and 80~ms, respectively. The outer loop and inner loop of DDP are terminated after a maximum of 3 and 15 iterations, respectively. All other simulation parameters (e.g., simulation and MHPC time steps, and overall planning horizon) are identical to the quadruped simulation in Section~\ref{subsec_sim_quadruped}.
Figure~\ref{fig:robustness_biped} depicts the probability of success of MHPC for five abstraction schedules in response to disturbances of 0--750~N. The configuration MHPC(0,8) (simple-model MPC) is not shown here since the authors were unable to obtain a stable running gait for the biped with this approach. This finding is attributed to the fact that the balance problem for a biped is typically more difficult than that for a quadruped, and more whole-body details need to be considered in designing a stabilizing controller. It is observed that MHPC(4,4) and MHPC(8,0) fully reject disturbances under 500~N and 600~N, respectively, which correspond to velocity disturbances of 0.5~$m/s$ and 0.6~$m/s$. Further, Table~\ref{tab:comp_times} shows that the computational cost of MHPC(4,4) is roughly half that of MHPC(8,0), demonstrating that by selecting a proper abstraction schedule, MHPC can achieve disturbance rejection performance nearly comparable to whole-body MPC at significantly lower computational cost. We note that MHPC(4,4) takes less computation time than MHPC(4,0) because MHPC(4,4) requires fewer DDP iterations to converge.
\subsection{Quadrotor}
\begin{figure}[b]
\centering
\includegraphics[width=.65\columnwidth]{DroneBalloonsFlying.JPG}
\caption{Quadrotor tests planned point-to-point motions from a starting configuration (right) to a goal configuration (left). The red and blue lines show the full-model and simple-model horizons, respectively.}
\label{fig:Quadrotor_Animation}
\end{figure}
To benchmark MHPC on the quadrotor, a point-to-point trajectory avoiding obstacles is planned and executed (Fig.~\ref{fig:Quadrotor_Animation}). The spherical obstacles are randomly generated then fixed across all trials. MHPC is solved in a receding horizon fashion and the entire simulation lasts 5~$s$ with a 0.02~$s$ time step. The initial guesses are all zeros for both models. The running costs assigned to the full and simple models are quadratic. The full model includes a quaternion orientation term $(1- (\boldsymbol{q}_d^\top \boldsymbol{q})^2)$, minimized at either the desired quaternion $\boldsymbol{q}_d$ or its antipode $-\boldsymbol{q}_d$, which provides the same orientation. A relaxed barrier method is used to impose non-collision constraints with the obstacles considering a spherical volume approximation for the quadrotor. A quadratic terminal cost is assigned to the rotational error at the end of the full-model trajectory, and the final terminal cost is the infinite-horizon LQR cost for the simple model, ignoring obstacles. To evaluate the benefit of MHPC, we first implement MPC with the full model only, and then evaluate the performance when a portion of the horizon is instead allocated to the simple model (Fig.~\ref{fig:Quadrotor_Results}(a)). The length of horizon assigned to the simple model is adjusted so that one iteration of DDP takes the same amount of time in all cases. With this strategy, a horizon reduction of one time step for the full model conservatively gives 2.5 additional time steps for the simple model. Thus, allocating a portion of the horizon to the simple model enables longer-horizon planning overall. At each re-planning step, DDP is warm-started using the previous plan and runs for a maximum of five iterations to simulate real-time behavior.
\begin{figure}[t]
\centering
\includegraphics[width=.95\columnwidth]{figure8update.pdf}
\caption{(a) Different MHPC planning horizons with equivalent computational requirements. (b) Resulting performance as the MHPC horizon composition is varied as depicted in Fig.~\ref{fig:Quadrotor_Results}(a).}
\label{fig:Quadrotor_Results}
\vspace{-10px}
\end{figure}
The total running costs for each simulation are given against horizon composition in Fig.~\ref{fig:Quadrotor_Results}(b).
The total cost reaches a minimum as the original full-model planning horizon is replaced by a longer simple-model horizon, except for the 0.5~$s$ horizon, which under-performed. Interestingly, MHPC attains a similar optimal performance for original horizons between $1$ and $2$~$s$. This result shows that even for computationally cheaper setups (e.g., $1$ $s$ original horizon), MHPC can exceed the performance of more computationally costly full-model MPC instances (e.g., $2$ $s$ original horizon). Overall, these results show that for a fixed computation time budget, MHPC enables longer-horizon prediction with reliably improved performance over full-model MPC. The companion video further illustrates these effects.
\subsection{Planar quadruped and biped}\label{subsec_quadruped}
\begin{figure}[b]
\centering
\includegraphics[width =0.95\linewidth]{Hierarchy_legged.pdf}
\caption{Hierarchy of models for five-link robots with MHPC. (a) Detailed abstraction schedule for planar quadruped bounding. (b) The same abstraction applied to biped running.}
\label{fig:hierarcy_legged}
\end{figure}
The dynamics of a planar quadruped are hybrid since the robot makes and breaks contact with the ground. As a result, the full model of a quadruped can be described by system~\eqref{eq_multiplase_system}, whose dynamics in each phase are obtained as \cite{budhiraja2018differential, li2020hybrid}
\begin{equation}\label{eq_fullmodel_smooth}
\begin{bmatrix}
\H & -\mathbf{J}_{c}^\top \\
-\mathbf{J}_{c} & \mathbf{0}
\end{bmatrix}
\begin{bmatrix}
\Ddot{\mathbf{q}} \\
\boldsymbol{\lambda}_{c}
\end{bmatrix}
= \begin{bmatrix}
\S^\top\boldsymbol{\tau} - \mathbf{C}\Dot{\mathbf{q}} - \boldsymbol{\tau}_g\\
\Dot{\mathbf{J}}_{c}\Dot{\mathbf{q}}
\end{bmatrix},
\end{equation}
where $\mathbf{q} = [\c^\top, \theta, \mathbf{q}^\top_{\text{joint}}]^\top$ is the generalized coordinate, $\c\in\mathbb{R}^2$, $\theta \in \mathbb{R}$, and $\mathbf{q}_{\text{joint}}\in\mathbb{R}^4$ denote the trunk center of mass (CoM), trunk orientation, and joint angles, respectively, and $\H$, $\mathbf{C} \Dot{\mathbf{q}}$, $\boldsymbol{\tau}_g$, $\S$, and $\boldsymbol{\tau}\in\mathbb{R}^4$ denote the inertia matrix, Coriolis force, gravity force, selection matrix, and actuation torque, respectively. $\mathbf{J}_{c}$ and $\boldsymbol{\lambda}_{c}$ represent the contact Jacobian and contact force, respectively, and their expressions depend on which foot is in contact with the ground. During a flight mode, the matrix on the left side of~\eqref{eq_fullmodel_smooth} degenerates to the inertia matrix $\H$. Denoting $\mathbf{x}_f = [\mathbf{q}^\top, \Dot{\mathbf{q}}^\top]^\top$ as the full-model state, the state-space representation of~\eqref{eq_fullmodel_smooth} can then readily be obtained into the form~\eqref{eq_multiplase_system}. The reset map at touchdown is modeled considering impact dynamics as follows
\begin{equation}\label{eq_fullmode_impact}
\P_{\text{TD}}(\mathbf{x}_f^{-}) =
\begin{bmatrix}
\mathbf{I} & \mathbf{0}\\
\mathbf{0} & \mathbf{I}-\inv{\H}\mathbf{J}_{c}^\top(\mathbf{J}_{c}\inv{\H}\mathbf{J}_c^\top)^{-1}\mathbf{J}_c
\end{bmatrix}
\mathbf{x}_f^-,
\end{equation}
where the superscript `-' indicates the moment immediately before contact. The reset map at liftoff is smooth and simply is $\P_{\text{LO}}(\mathbf{x}_f^-) = \mathbf{x}_f^-$, where `-' indicates the moment immediately before breaking contact.
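As a numerical illustration (not the authors' code), the stance dynamics~\eqref{eq_fullmodel_smooth} and the touchdown map~\eqref{eq_fullmode_impact} could be evaluated as below, assuming the rigid-body terms $\H$, $\mathbf{C}\Dot{\mathbf{q}}$, $\boldsymbol{\tau}_g$, $\S$, $\mathbf{J}_c$, and $\Dot{\mathbf{J}}_c$ are supplied by a dynamics library for the current state.
\begin{verbatim}
# Sketch: solve the stance-phase KKT system and apply the touchdown reset.
import numpy as np

def stance_dynamics(H, Cqd, tau_g, S, J_c, Jdot_c, qd, tau):
    nq, nc = H.shape[0], J_c.shape[0]
    A = np.block([[H, -J_c.T], [-J_c, np.zeros((nc, nc))]])
    b = np.concatenate([S.T @ tau - Cqd - tau_g, Jdot_c @ qd])
    sol = np.linalg.solve(A, b)
    return sol[:nq], sol[nq:]           # accelerations, contact forces

def touchdown_reset(H, J_c, x_minus):
    nq = H.shape[0]
    q, qd = x_minus[:nq], x_minus[nq:]
    Hinv = np.linalg.inv(H)
    # Project the pre-impact velocity onto the contact-consistent subspace.
    P = np.eye(nq) - Hinv @ J_c.T @ np.linalg.solve(J_c @ Hinv @ J_c.T, J_c)
    return np.concatenate([q, P @ qd])
\end{verbatim}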
For a quadruped robot, the weight of its legs is typically a small fraction of its total weight. As a result, a common simple model ignores the legs and considers only the trunk dynamics:
\begin{equation}\label{eq_trunk_model}
\begin{aligned}
\Ddot{\c} &= \sum_i \frac{\mathbf{f}_i}{m} -\mathbf{g} \\
I \Ddot{\theta} &= \sum_i (\mathbf{p}_i - \c) \times \mathbf{f}_i ,
\end{aligned}
\end{equation}
where $\c$, $\theta$, and $I$ represent the position of the CoM, the trunk orientation, and the body inertia, respectively. $\mathbf{f}$, $\mathbf{g}$ and $\mathbf{p}$ denote the ground reaction force (GRF), gravitational acceleration, and the foot location, respectively, and $i$ indicates the contact foot index. We denote the state of this trunk model as $\mathbf{x}_s = [\c,\theta, \Dot{\c}, \Dot{\theta}]$. A projection map relating the full and simple models is then defined such that $\mathbf{x}_s = \mathbf{T}\mathbf{x}_f$ where
\begin{equation}\label{eq_contraction}
\mathbf{T} =
\begin{bmatrix}
\mathbf{I}^{3} & \mathbf{0}^{3\times 4} & \mathbf{0}^{3\times3} & \mathbf{0}^{3\times 4} \\
\mathbf{0}^{3\times 3} & \mathbf{0}^{3\times 4} & \mathbf{I}^{3} & \mathbf{0}^{3\times 4}
\end{bmatrix}
\end{equation}
and $\mathbf{I}^3\in\mathbb{R}^{3\times3}$ is an identity matrix.
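A minimal sketch of the trunk dynamics~\eqref{eq_trunk_model} and of the projection $\mathbf{x}_s = \mathbf{T}\mathbf{x}_f$ is given below; the mass, inertia, and state ordering are illustrative assumptions for the planar case.
\begin{verbatim}
# Sketch of the planar trunk model and the full-to-simple state projection.
import numpy as np

def trunk_dynamics(x_s, forces, feet, m=10.0, I=0.5, g=9.81):
    c, th, cd, thd = x_s[:2], x_s[2], x_s[3:5], x_s[5]
    cdd = sum(f / m for f in forces) - np.array([0.0, g])
    # Planar cross product (p_i - c) x f_i gives the pitch torque.
    thdd = sum(np.cross(p - c, f) for p, f in zip(feet, forces)) / I
    return np.concatenate([cd, [thd], cdd, [thdd]])

def project_full_to_trunk(x_f):
    # x_f = [c(2), theta, q_joint(4), cdot(2), thetadot, qdot_joint(4)]
    T = np.zeros((6, 14))
    T[0:3, 0:3] = np.eye(3)             # CoM position and trunk orientation
    T[3:6, 7:10] = np.eye(3)            # their time derivatives
    return T @ x_f
\end{verbatim}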
We now define the low-dimensional transition constraint as in Fig.~\ref{fig:MHPC_diagram}. Take quadruped bounding as an example, which periodically executes four gait modes of motion: a back stance mode, a flight mode, a front stance mode, and another flight mode. Figure~\ref{fig:hierarcy_legged} illustrates an abstraction schedule used for quadruped bounding that assigns the first two modes for the full model~\eqref{eq_fullmodel_smooth}, \eqref{eq_fullmode_impact}, and the subsequent two modes for the trunk model~\eqref{eq_trunk_model}. At the instant of touchdown shown in Fig.~\ref{fig:hierarcy_legged}(a), a transition from full model to trunk model takes place. The low-dimensional transition thus considers the impact discontinuity~\eqref{eq_fullmode_impact} caused by touchdown, as well as the projection map~\eqref{eq_contraction}, which is defined as follows
\begin{equation}\label{eq_trans_constr_quad}
\P_2(\mathbf{x}^{[2]}_{N_2}) = \mathbf{T} \, \P_{\text{TD}}(\mathbf{x}^{[2]}_{N_2}).
\end{equation}
In this context, the function $g_2(\mathbf{x}^{[2]}_{N_2})$ in~\eqref{subeq_switch} measures the vertical distance between the contact foot and ground, ensuring that touchdown occurs on the ground. The value function propagation~\eqref{eq_value_update_jump} across the model transition is then
\begin{subequations}\label{eq_value_update_modeltrans}
\begin{align}
\mathbf{s} &= \boldsymbol{\Phi}_{\mathbf{x}} + \frac{\partial\P_{\text{TD}}}{\partial\mathbf{x}}^\top\mathbf{T}^\top\mathbf{s}',\\
\S & = \boldsymbol{\Phi}_{\mathbf{x}\x} + \frac{\partial\P_{\text{TD}}}{\partial\mathbf{x}}^\top\mathbf{T}^\top\S' \mathbf{T}\frac{\partial\P_{\text{TD}}}{\partial\mathbf{x}},
\end{align}
\end{subequations}
where the prime indicates the first step of the simple-model horizon. Equation~\eqref{eq_value_update_modeltrans} reveals that the trunk model is effectively used to set a terminal cost for the full model via a low-rank approximation, thus biasing the full model toward plans that are favorable over the long term.
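A direct transcription of~\eqref{eq_value_update_modeltrans} could look as follows, assuming the Jacobian of the reset map and the projection matrix are available; the function and variable names are placeholders.
\begin{verbatim}
# Sketch: pull the simple-model value expansion back through T and P_TD.
import numpy as np

def propagate_value_across_transition(Phi_x, Phi_xx, dPTD_dx, T,
                                      s_next, S_next):
    J = T @ dPTD_dx                     # linearized full-to-simple transition
    s = Phi_x + J.T @ s_next
    S = Phi_xx + J.T @ S_next @ J
    return s, S
\end{verbatim}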
We emphasize that the transition~\eqref{eq_trans_constr_quad} is particular to the abstraction schedule at the time instant in Fig.~\ref{fig:hierarcy_legged}(a). Since the optimization window shifts as MHPC proceeds, this transition constraint changes as well. For example, if we shift the overall planning horizon one gait mode into the future, the transition would occur at takeoff and would be described by $\P_2(\mathbf{x}^{[2]}_{N_2}) = \mathbf{T}\mathbf{x}^{[2]}_{N_2}$. Further, the abstraction schedule here assigns two gait modes to each model, whereas in general it could be varied to achieve higher performance. This prospect is explored further in Section~\ref{sec:Sim}.
A five-link biped robot is topologically equivalent to a planar quadruped (Fig.~\ref{fig:hierarcy_legged}) except that both legs are pinned below the trunk CoM. Consequently, the full model~\eqref{eq_fullmodel_smooth} and~\eqref{eq_fullmode_impact}, the simple model~\eqref{eq_trunk_model}, and the abstraction schedule for quadruped bounding are also applied to biped running.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figrue4update.pdf}
\caption{Quadrotor model hierarchy for motion planning around obstacles. The full model is followed by a point-mass model over the planning horizon.}
\label{fig:Quadrotor}
\vspace{-6px}
\end{figure}
\subsection{Quadrotor}
The last example explores the applicability of MHPC beyond legged robots. The quadrotor (Fig.~\ref{fig:Quadrotor}) is modeled as a rigid body with four thrust inputs, each at a moment arm $r$ from the CoM. The acceleration of the quadrotor CoM is modeled as
\begin{equation}
{^b\a_{\text{CoM}}} = {1\over m}\begin{bmatrix} 0 \\ 0 \\ u_1 + u_2 + u_3 + u_4 \end{bmatrix} + ^b\!\mathbf{R}_0 {}^0\! \mathbf{g}\end{equation}
with its rotational dynamics modelled as
\begin{equation}
{}^b\overline{\mathbf{I}} \, ^b\dot{\boldsymbol{\omega}} + ^b\boldsymbol{\omega} \times {}^b\overline{\mathbf{I}}\, ^b\boldsymbol{\omega} = \begin{bmatrix} r (u_2-u_4) \\ r (u_3 - u_1) \\ k_m (u_1+u_3-u_2-u_4) \end{bmatrix},
\nonumber
\end{equation}
where the effort vector $\u = [u_1 , u_2 , u_3 , u_4]^\top$ contains the linear thrust of each rotor, ${}^b\overline{\mathbf{I}} \in \mathbb{R}^{3\times3}$ is the Cartesian inertia of the drone about its CoM, $m$ its mass, and $^b\boldsymbol{\omega}$ its rotational velocity. In all cases, the pre-superscript indicates the frame that is used to express each vector. To approximate the yaw torque produced by the counter-rotating blades, $k_m$ is the ratio between the yaw moment applied to the drone and the difference in rotor thrusts.
The full state of the drone is thus given by $\mathbf{x}_f = [\boldsymbol{q}, {}^b \boldsymbol{\omega} , {}^0 \mathbf{p}_{\text{CoM}} , {}^b\v_{\text{CoM}} ]^\top \in\mathbb{R} ^{13}$ where $\boldsymbol{q}$ gives its orientation quaternion, ${}^0 \mathbf{p}_{\text{CoM}}$ is the position of the CoM, and ${}^b \v_{\text{CoM}}$ its velocity. A fully actuated point mass model with linear dynamics is used as the simple model (Fig.~\ref{fig:Quadrotor}) with state $\mathbf{x}_s =[{}^0 \mathbf{p}_{\text{CoM}}^\top, {}^0 \v_{\text{CoM}}^\top]^\top \in\mathbb{R} ^{6}$. The input for the model is a single force vector in 3D.
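For illustration, the continuous-time dynamics above could be evaluated as in the sketch below; the inertia, arm length, and $k_m$ values are placeholders, and the world-frame gravity vector is assumed to be $[0,0,-g]$.
\begin{verbatim}
# Sketch of the quadrotor translational and rotational accelerations.
import numpy as np

def quadrotor_accel(omega_b, u, R_b0, m=1.0, r=0.2, k_m=0.02,
                    I_b=np.diag([0.01, 0.01, 0.02]), g=9.81):
    u1, u2, u3, u4 = u
    thrust_b = np.array([0.0, 0.0, u1 + u2 + u3 + u4])
    a_com_b = thrust_b / m + R_b0 @ np.array([0.0, 0.0, -g])
    torque_b = np.array([r * (u2 - u4),
                         r * (u3 - u1),
                         k_m * (u1 + u3 - u2 - u4)])
    omega_dot_b = np.linalg.solve(I_b,
                                  torque_b - np.cross(omega_b, I_b @ omega_b))
    return a_com_b, omega_dot_b
\end{verbatim}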
To handle working with unit quaternions in the full model, a change of variables to minimal coordinates is employed where Cayley parameters are used to describe changes in orientation \cite{jackson2021planning} at each iteration.
Although Cayley parameters experience a singularity at a rotation of $\pi$ radians, since DDP makes small changes to the trajectory at each iteration, the singularity can be effectively avoided, even in cases when the nominal trajectory itself makes full rotations. The interested reader is referred to \cite{Jackson20} for detail.
\subsection{Contribution}
\label{subsec:Intro_contribution}
The major contribution of this research is to propose and evaluate Model Hierarchy Predictive Control (MHPC), a new method that carries out planning within MPC based on a hierarchy of models over the planning horizon. This contribution explores the formulation of the MPC problem, and is complementary to many excellent recent advances in numerical solution algorithms (e.g., \cite{mastalli2020crocoddyl,giftthaler2018control}) for optimal control. Overall, MHPC is shown to unify the computational and performance benefits of simple-model MPC and whole-body MPC. Figure~\ref{fig:MHPC_illusrative} conceptually illustrates the main idea for a planar quadruped that needs to account for gaps when planning its movement. In the current stance and next flight, the robot coordinates its legs and body to avoid the gap. Since the subsequent stance and flight are far away, it roughly determines a body motion so that the footsteps can avoid the gap, but it does not propose a concrete plan, i.e., ignores leg details, until the gap comes close. In this way, the quadruped focuses on its near-term balance while having a rough plan in mind for the long term. Building toward this vision, this work first considers a rigorous simulation evaluation for MHPC. As a step toward online MHPC, hardware experiments are then shown for jumping over a gap using the control policies optimized by MHPC in simulation.
The rest of this paper is structured as follows. Section~\ref{sec:MHPC} discusses the main idea of MHPC and formulates the MHPC problem mathematically. An efficient DDP-based solver is introduced for MHPC as well. Section~\ref{sec:example_sys} presents a hierarchy of models for three example systems. The performance of MHPC is then benchmarked in simulation in Section~\ref{sec:Sim}. In Section~\ref{sec:Exp}, we run MHPC offline and execute the control policy in a dynamics simulator and on hardware. Section~\ref{sec:Conclusion} concludes the paper and discusses future work.
\section{INTRODUCTION}
\label{sec:Intro}
\input{Introduction}
\section{Model Hierarchy Predictive Control}
\label{sec:MHPC}
\input{MHPC}
\section{Hierarchy of Models and Abstraction Schedules for Example Systems}
\label{sec:example_sys}
\input{Models}
\section{Simulation Results}
\label{sec:Sim}
\input{Simulation}
\section{Experimental Validation}
\label{sec:Exp}
\input{Experiment}
\section{Conclusions and Future Work}
\label{sec:Conclusion}
\input{Conclusion}
\subsection{Bayesian Logistic Regression}
\label{model:blr}
\begin{align*}
\alpha &\sim \text{Normal}(0, 10, \text{size}=1),\\
\beta &\sim \text{Normal}(0, 2.5, \text{size}=K),\\
X_i &\sim \text{Normal}(0,\ 10,\ \text{size}=K) \quad \forall i \in 1\ldots N\\
\mu_i &= \alpha + {X_i}^T \beta \quad \forall i \in 1..N\\
Y_i &\sim \text{Bernoulli}(\text{logit}=\mu_i) \quad \forall i \in 1..N.
\end{align*}
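As a simple illustration, synthetic data can be drawn from this generative model with a few lines of NumPy (the sizes $N$ and $K$ below are arbitrary):
\begin{verbatim}
# Forward-sample the Bayesian logistic regression model (illustrative sizes).
import numpy as np

rng = np.random.default_rng(0)
N, K = 2000, 10
alpha = rng.normal(0.0, 10.0)
beta = rng.normal(0.0, 2.5, size=K)
X = rng.normal(0.0, 10.0, size=(N, K))
mu = alpha + X @ beta
p = 1.0 / (1.0 + np.exp(-mu))           # logistic link
Y = rng.binomial(1, p)
\end{verbatim}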
\subsection{Robust Regression}
\label{model:robust}
\begin{align*}
\nu &\sim \text{Gamma}(2,\ 10)\\
\sigma &\sim \text{Exponential}(1.0)\\
\alpha &\sim \text{Normal}(0,10)\\
\beta &\sim \text{Normal}(0,\ 2.5,\ \text{size}=K)\\
X_i &\sim \text{Normal}(0,\ 10,\ \text{size}=K) \quad \forall i \in 1\ldots N\\
\mu_i &= \alpha + \beta^T X_i \quad \forall i \in 1\ldots N\\
Y_i &\sim \text{Student-T}(\nu,\ \mu_i,\ \sigma) \quad \forall i \in 1\ldots N\\
\end{align*}
\subsection{Noisy-OR Topic Model}
\label{model:noisyor}
Let $Z$ be the set of all nodes in the network.
Each model has a leak node $O$ which is the parent of all other nodes.
The set of keyphrases is denoted by $K$ and the set of content units is represented as $T$.
\begin{align*}
|\text{Children}(K_i)| \sim \text{Poisson}(\lambda = 3) \quad \forall i \in 1\ldots K \\
\text{W}_{oj} \sim \text{Exponential}(0.1) \quad \forall j \in 1\ldots Z\\
\text{W}_{ij} \sim \text{Exponential}(1.0) \quad \forall i,j \text{ where i} \in \text{Parents}(Z_j)\\
P(Z_j = 1|\text{Parents}(Z_j)) = 1 - \exp(-W_{oj} -\sum_{i}(W_{ij} * Z_i)) \\
\text{Z}_j \sim \text{Bernoulli}(P(Z_j = 1|\text{Parents}(Z_j))) \\
\end{align*}
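The sketch below forward-samples one content layer of the noisy-OR network in NumPy; the keyphrase activation prior, the random wiring, and the reading of the Exponential parameters as means are illustrative assumptions not fixed by the model description above.
\begin{verbatim}
# Forward-sample a simplified two-layer noisy-OR network (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_key, n_content = 30, 300

# Random bipartite wiring: each keyphrase gets ~Poisson(3) children.
parents = [[] for _ in range(n_content)]
for i in range(n_key):
    n_children = min(rng.poisson(3), n_content)
    for j in rng.choice(n_content, size=n_children, replace=False):
        parents[j].append(i)

W_leak = rng.exponential(0.1, size=n_content)    # leak-node weights
Z_key = rng.binomial(1, 0.2, size=n_key)         # assumed keyphrase prior
Z_content = np.zeros(n_content, dtype=int)
for j in range(n_content):
    s = W_leak[j] + sum(rng.exponential(1.0) * Z_key[i] for i in parents[j])
    Z_content[j] = rng.binomial(1, 1.0 - np.exp(-s))
\end{verbatim}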
\subsection{Crowd Sourced Annotation Model}
\label{model:crowd_sourced}
There are $N$ items, $K$ labelers, and each item could be one of $C$ categories.
Each item $i$ is labeled by a set $J_i$ of labelers, such that the size of $J_i$ is sampled randomly and each labeler in $J_i$ is drawn uniformly without replacement from the set of all labelers.
$z_i$ is the true label for item $i$ and $y_{ij}$ is the label provided to item $i$ by labeler $j$.
Each labeler $l$ has a confusion matrix $\theta_l$ such that $\theta_{lmn}$ is the probability that an item with true class $m$ is labeled $n$ by $l$.
\begin{align*}
\pi &\sim \text{Dirichlet}\left(\frac{1}{C}, \ldots, \frac{1}{C}\right)\\
z_i &\sim \text{Categorical}(\pi)\quad \forall i \in 1\ldots N\\
\theta_{lm} &\sim \text{Dirichlet}(\alpha_m)\quad \forall l \in 1\ldots K,\ m \in 1 \ldots C\\
|J_i| &\sim \text{Poisson}(J_\text{loc})\\
l \in J_i &\sim \text{Uniform}(1\ldots K) \quad \text{without replacement}\\
y_{il} &\sim \text{Categorical}(\theta_{l z_i})\quad \forall l \in J_i\\
\end{align*}
Here $\alpha_m \in {\mathbb{R}^+}^C$.
We set $\alpha_{mn} = \gamma\cdot \rho$ if $m=n$ and $\alpha_{mn} = \gamma \cdot (1 - \rho)\cdot \frac{1}{C-1}$ if $m \ne n$, where $\gamma$ is the concentration and $\rho$ is the {\em a-priori} correctness of the labelers.
In this model, $Y_{il}$ and $J_i$ are observed.
In our experiments, we fixed $C=3$, $J_\text{loc}=2.5$, $\gamma=10$, and $\rho=0.5$.
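A forward simulation of this annotation model, using the hyperparameter values quoted above, could look as follows (item and labeler counts are illustrative, and empty labeler sets are avoided for simplicity):
\begin{verbatim}
# Forward-sample the crowd-sourced annotation model (illustrative sizes).
import numpy as np

rng = np.random.default_rng(0)
N, K, C = 1000, 100, 3
J_loc, gamma, rho = 2.5, 10.0, 0.5

alpha = np.full((C, C), gamma * (1 - rho) / (C - 1))
np.fill_diagonal(alpha, gamma * rho)

pi = rng.dirichlet(np.full(C, 1.0 / C))
z = rng.choice(C, size=N, p=pi)                  # true item labels
theta = np.array([[rng.dirichlet(alpha[m]) for m in range(C)]
                  for _ in range(K)])            # labeler confusion matrices

labels = []                                      # (item, labeler, label) triples
for i in range(N):
    n_lab = min(max(1, rng.poisson(J_loc)), K)
    for l in rng.choice(K, size=n_lab, replace=False):
        labels.append((i, l, rng.choice(C, p=theta[l, z[i]])))
\end{verbatim}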
\section{Introduction}
\input{introduction}
\section{System Overview}
\input{overview}
\section{Bayesian Logistic Regression Model}
\input{Models/logisticRegressionModel}
\section{Robust Regression Model}
\input{Models/robustRegressionModel}
\section{Noisy-Or Topic Model}
\input{Models/noisyOrTopicModel}
\section{Crowdsourced Annotation Model}
\input{Models/crowdSourcedAnnotationModel}
\section{Conclusion}
\input{conclusion.tex}
\bibliographystyle{acm-reference-format}
\subsection{Results}
\begin{center}
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[width=1.0\columnwidth]{../figures/annotation}
\captionof{figure}{Predictive log likelihood against samples for Crowd Sourced Annotation Model}
\label{fig:annotation}
\end{minipage}\hfill
\hspace{3mm}
\begin{minipage}[b]{0.45\linewidth}
\begin{adjustbox}{width=1.0\linewidth}
\begin{tabular}{ccccc}
\toprule
\multirow{2}{*}{PPL} & \multirow{2}{*}{Time(s)} & \multicolumn{3}{c}{$n_{eff}$/s} \\
& & min & median & max \\
\midrule
Stan-MCMC & 484.59 & 2.14 & 4.22 & 10.45 \\
Stan-VI (meanfield) & 15.23 & 0.48 & 7.86 & 216.35 \\
Stan-VI (fullrank) & 35.79 & 0.12 & 0.20 & 84.16 \\
\bottomrule
\vspace{1cm}
\end{tabular}
\end{adjustbox}
\captionof{table}{Runtime and $n_{eff}$/s for Crowd Sourced Annotation Model with items=10K and \mbox{labelers=100}}
\label{table:annotation}
\end{minipage}
\end{center}
PPL Bench can also benchmark different inference techniques within a PPL.
Here, we benchmark Stan's NUTS against Stan's VI methods, using both the meanfield and the fullrank algorithms.
We notice that VI with the meanfield algorithm is indistinguishable from NUTS.
However, VI with the fullrank algorithm converges to a slightly different predictive log likelihood.
\subsection{Stan Implementation}
The model requires support for discrete latent variables, which Stan does not have.
As a workaround, we reparameterize the model: the discrete latent variables, whose values ``True'' and ``False'' denote whether a node is activated, are instead represented as relaxed one-hot encoded vectors~\cite{maddison2016concrete}. Each element of the encoded vector is now a real number substituting a boolean variable.
``True'' is represented by $[1, 0]$ and ``False'' by $[0, 1]$ in the one-hot encoding; a relaxed representation of a true value, for example, might look like $[0.9999, 0.0001]$.
The detailed reparameterization process is as follows:
We encode the probabilities of a node in a parameter vector $\alpha = [\alpha_{\text{true}}, \alpha_{\text{false}}]$, then choose a temperature parameter $\tau = 0.1$. For each key-phrase, we assign an intermediate random variable $X_j$:
\begin{align*}
\alpha_{\text{true}} = P(Z_j = 1|\text{Parents}(Z_j)) \\
\alpha_{\text{false}} = 1 - P(Z_j = 1|\text{Parents}(Z_j)) \\
G_j \stackrel{iid}{\sim} \text{Gumbel}(0,1) \\
X_j = \text{softmax}(\log\alpha_j + G_j/\tau), \\
Z_j = 1 - argmax(X_j) \\
\end{align*}
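A minimal NumPy sketch of this relaxation, following the equations above with $\tau=0.1$, is shown below; it is meant only to illustrate the construction, not the Stan code itself.
\begin{verbatim}
# Gumbel-softmax (Concrete) relaxation of a Bernoulli node activation.
import numpy as np

def relaxed_activation(p_true, tau=0.1, rng=np.random.default_rng(0)):
    alpha = np.array([p_true, 1.0 - p_true])     # [alpha_true, alpha_false]
    g = rng.gumbel(size=2)                        # iid Gumbel(0, 1) noise
    logits = np.log(alpha) + g / tau
    x = np.exp(logits - logits.max())
    x = x / x.sum()                               # softmax -> relaxed one-hot X_j
    z = 1 - int(np.argmax(x))                     # recovered hard value Z_j
    return x, z
\end{verbatim}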
\begin{table}[h]
\begin{minipage}[b]{0.4\linewidth}
\includegraphics[width=1.0\columnwidth]{../figures/stan_noisy_or}
\end{minipage}\hfill
\begin{minipage}[b]{0.6\linewidth}
\begin{adjustbox}{width=1.0\linewidth,center}
\begin{tabular}{ccccccc}
\toprule
\multirow{2}{*}{PPL} & \multirow{2}{*}{Words} & \multirow{2}{*}{Topics} & \multirow{2}{*}{Time(s)} & \multicolumn{3}{c}{$n_{eff}$/s} \\
& & & & min & median & max \\
\midrule
Stan & 300 & 30 & 37.58 & 17.50 & 79.83 & 80.39 \\
Jags & 300 & 30 & 0.17 & 15028.96 & 17350.35 & 18032.36 \\
PyMC3 & 300 & 30 & 38.05 & 39.50 & 78.83 & 79.33 \\
\hline
Stan & 3000 & 100 & 438.74 & 2.27 & 6.84 & 7.00 \\
Jags & 3000 & 100 & 0.68 & 3773.66 & 4403.28 & 4504.26 \\
PyMC3 & 3000 & 100 & 298.79 & 3.96 & 10.04 & 14.72 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\captionof{table}{Runtime and $n_{eff}$/s for Noisy-Or Topic Model.}
\label{table:noisy_or}
\end{minipage}
\end{table}
\begin{figure}[h]
\centering
\begin{subfigure}{.5\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{noisy_or_small}
\caption{30 Topics, 300 Words}
\label{fig:noisyor1}
\end{subfigure}%
\begin{subfigure}{.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{noisy_or_large}
\caption{100 Topics, 3000 Words}
\label{fig:noisyor2}
\end{subfigure}
\caption{Predictive log likelihood against samples for Noisy-Or Topic Model}
\label{fig:noisyor}
\end{figure}
\vspace{-8mm}
From Figure~\ref{fig:noisyor}, we can see that all PPLs converge to the same predictive log likelihood.
We see that JAGS is extremely fast at this problem because it is written in C++ while PyMC3 is Python-based.
We suspect that Stan is slower than PyMC3 and JAGS because of the reparameterization, which introduces twice as many variables per node.
\section{Introduction}
\lipsum[2]
\lipsum[3]
\section{Headings: first level}
\label{sec:headings}
\lipsum[4] See Section \ref{sec:headings}.
\subsection{Headings: second level}
\lipsum[5]
\begin{equation}
\xi _{ij}(t)=P(x_{t}=i,x_{t+1}=j|y,v,w;\theta)= {\frac {\alpha _{i}(t)a^{w_t}_{ij}\beta _{j}(t+1)b^{v_{t+1}}_{j}(y_{t+1})}{\sum _{i=1}^{N} \sum _{j=1}^{N} \alpha _{i}(t)a^{w_t}_{ij}\beta _{j}(t+1)b^{v_{t+1}}_{j}(y_{t+1})}}
\end{equation}
\subsubsection{Headings: third level}
\lipsum[6]
\paragraph{Paragraph}
\lipsum[7]
\section{Examples of citations, figures, tables, references}
\label{sec:others}
\lipsum[8] \cite{kour2014real,kour2014fast} and see \cite{hadash2018estimate}.
The documentation for \verb+natbib+ may be found at
\begin{center}
\url{http://mirrors.ctan.org/macros/latex/contrib/natbib/natnotes.pdf}
\end{center}
Of note is the command \verb+\citet+, which produces citations
appropriate for use in inline text. For example,
\begin{verbatim}
\citet{hasselmo} investigated\dots
\end{verbatim}
produces
\begin{quote}
Hasselmo, et al.\ (1995) investigated\dots
\end{quote}
\begin{center}
\url{https://www.ctan.org/pkg/booktabs}
\end{center}
\subsection{Figures}
\lipsum[10]
See Figure \ref{fig:fig1}. Here is how you add footnotes. \footnote{Sample of the first footnote.}
\lipsum[11]
\begin{figure}
\centering
\fbox{\rule[-.5cm]{4cm}{4cm} \rule[-.5cm]{4cm}{0cm}}
\caption{Sample figure caption.}
\label{fig:fig1}
\end{figure}
\subsection{Tables}
\lipsum[12]
See awesome Table~\ref{tab:table}.
\begin{table}
\caption{Sample table title}
\centering
\begin{tabular}{lll}
\toprule
\multicolumn{2}{c}{Part} \\
\cmidrule(r){1-2}
Name & Description & Size ($\mu$m) \\
\midrule
Dendrite & Input terminal & $\sim$100 \\
Axon & Output terminal & $\sim$10 \\
Soma & Cell body & up to $10^6$ \\
\bottomrule
\end{tabular}
\label{tab:table}
\end{table}
\subsection{Lists}
\begin{itemize}
\item Lorem ipsum dolor sit amet
\item consectetur adipiscing elit.
\item Aliquam dignissim blandit est, in dictum tortor gravida eget. In ac rutrum magna.
\end{itemize}
\bibliographystyle{unsrt}
\section{Introduction}
Magnetic fields are observed at various length scales in the universe \cite{r-beck,beck2013,malik2017,bernet2008,clarke,govoni2004,vogt2005,widrow,neronov,taylor2011,go2019}. However, the origin of these fields is still an open problem and there is no clear consensus on whether the large scale magnetic fields observed in galaxies and galaxy clusters are of primordial or of astrophysical origin. Magnetic fields of the order of $10^{-15}$G found using $\gamma$-ray observations of TeV-Blazars in the intergalactic medium~(IGM) indicate that at least a certain fraction of the magnetic field originates from the former~\cite{neronov,taylor2011,kandu2016,tav2011,tak2013,meyer2016,long2015}. Recent observation of magnetic fields located in a filament between two merging galaxy clusters also provides particularly strong evidence of a primordial origin~\cite{go2019}. There are various mechanisms of magnetogenesis in the early universe, and these include inflation and phase transitions, among others~\cite{turner-widrow,ratra,martin-yokoyama,agullo2013,atmjeet2014,Campanelli2015,vachaspati,Sigl:1996dm,kisslinger,qcd,arun2015,zhang2019,ellis2019}. Inflation provides a natural mechanism to generate large scale magnetic fields. However, there are certain problems associated with it, namely strong coupling and backreaction. Various models have been suggested to circumvent these two issues and generate magnetic fields that satisfy the current observational bounds~\cite{rajeev2013,1475-7516-2015-03-040,sharma2017}. Some of these models, however, require low scales of reheating~\cite{sharma2017,sharmahelical,kobayashi2014}. The magnetic fields evolving from causal processes such as phase transitions do not suffer from the problems that affect inflationary magnetogenesis, but they generate magnetic fields that have a correlation length much smaller than the Hubble horizon at that epoch. Hence, these fields, when evolved until today, are correlated on galactic scales or less. However, if a parity violating magnetogenesis mechanism is considered, then magnetic fields with non-zero helicity are produced, which have larger coherence lengths at present along with a slower decay rate compared to non-helical magnetic fields~\cite{kandu2016,jedamzik,rajeev2010,caprini2014,fujita2019,atmjeet2015,chris2005,sav2013,kahn2017,brand2018,brand2015}. In particular, conservation of magnetic helicity sets constraints on the decay time-scale of helical magnetic fields, leading to an inverse cascade of energy. The latter implies that the magnetic field power at smaller scales is transferred to larger scales, contrary to the process of direct cascade observed in the case of non-helical magnetic fields. Magnetic helicity has also been invoked in inflationary magnetogenesis models to explain higher magnetic field strength and correlation length for the same value of reheating temperature~\cite{sharmahelical}. \par
Magnetic fields impact various processes in the universe such as structure formation~\cite{wass1978,kim1994,ss2005}, temperature and polarization anisotropies of the Cosmic Microwave Background radiation (CMB) and bounds on such fields have been placed accordingly by studying the clustering of galaxies and CMB observations~\cite{bar1997,durr1999,sesh2001,sub2003,sesh2009,caprini2009, shaw2010, tri2010,tri2012,tri2014,shi2014,pogo2018,pao2019}. The effect of these fields has also been observed in the observations of CMB brightness temperature fluctuations produced by the neutral hydrogen 21cm line~\cite{sch2009,kunze2018,min2019}. Further, the presence of helicity leads to parity odd cross correlations between the CMB temperature and B- polarization anisotropies and E-and B-polarization anisotropies~\cite{pogo2001,caprini2004,kahn2005,ball2015,planck2015}. These parity odd signals provide a clear signature of stochastic helical magnetic fields. The CMB angular power spectra and Planck likelihoods constrain the maximally helical magnetic field today, $B_{0}$, smoothed on a $1$Mpc length scale to be $<5.6$nG by incorporating the contribution of magnetic fields in cosmological perturbations~\cite{planck2015,kandu2016}. Whereas the constraint on non-helical magnetic fields is $<4.4$nG using Planck 2015 data~~\cite{planck2015,kandu2016}. Constraints on non-helical magnetic field strength have been improved to $<3.4$nG using Planck 2018 data and by further adding BICEP/KECK data \cite{bicep} and South Pole Telescope (SPT) polarization data \cite{spt}, it has been improved to $<3.5$nG and $<2.9$nG, respectively~\cite{pao2019}. These constraints are further expected to modify when the effects of magnetic field on heating and ionization of the IGM are considered~\cite{ss2005,chluba2015,kunze2014,kunze2015}. \par
In the standard model of post-recombination universe, the temperature of the IGM follows the CMB temperature up to a redshift of $z \approx 100$ after which it thermally decouples from CMB and evolves as $a^{-2}$ at low redshifts~\cite{kandu2016,ss2005}. Post recombination, the radiative viscosity of the fluid also reduces, leading to the enhancement of turbulent motion~\cite{jedamzik,jed1998,subb1998}. For scales, where the Reynolds number becomes large, decaying turbulence becomes important and results in the dissipation of magnetic fields. Another process which becomes important post recombination is ambipolar diffusion~\cite{cowling,shu1992}. This process results from the Lorentz force exerted by the magnetic field on the charged particles, thereby accelerating it relative to the neutral particles. The collision between the charged and the neutral particles results in the dissipation of magnetic field and pumps energy into the IGM. The dissipation of magnetic fields caused by this process becomes dominant at lower redshifts. The combined effect of both these processes results in the heating up of the IGM and modifies the thermal and the ionization history of the universe. The impact of non-helical magnetic fields on the temperature and ionization evolution has been studied and constraints on it have been obtained through its impact on CMB temperature and polarization spectra as well as from the EDGES signal~\cite{ss2005,kk2013,chluba2015,kunze2015,min2018,chluba2019}. \par
In this paper, we have studied the decay of helical magnetic fields in the post-recombination universe via decaying turbulence and ambipolar diffusion and studied its imprints on the CMB.
The paper is organized as follows: In Section \ref{s1}, we introduce the two point correlation function of the helical magnetic field and set the notation used in this study. In Section~\ref{s2}, we discuss the two dissipation mechanisms, namely, ambipolar diffusion and decaying turbulence. In Section~\ref{s3}, we discuss the added contribution of helicity on the temperature and ionization fraction evolution with further effects on the CMB. Finally, in Section~\ref{s4}, we summarize our main results.
\section{Power spectrum of helical magnetic field}
\label{s1}
Primordial magnetogenesis processes in the early universe generate Gaussian random magnetic fields. These fields evolve adiabatically in the linear regime, where the velocity induced by the tangled fields is much less than the Alfv\'en velocity. The two-point correlation of the spatial part of the magnetic field, $B(x)$, in Fourier space is given by~\cite{pog2001,cap2004,tina2005,kk2010},
\begin{equation}
\langle B_{i}(\vec{k}) B^{\ast}_{j}(\vec{q}) \rangle = (2\pi)^3\delta_{D}^{3}(\vec{k}-\vec{q})\left(P_{S}(k)(\delta_{ij}-\hat{k}_{i}\hat{k}_{j}) - P_{A}(k)i \epsilon_{ijm}\hat{k}_{m} \right)
\end{equation}
where $P_{S}(k,k_{l},k_{m}) = A_{B}(k/k_{l})^{n_{B}}~W(k,k_{m})$ denotes the power spectrum of the symmetric part, with $k_{l}$, $k_{m}$, $n_{B}$, and $W(k,k_{m})$ denoting the pivot scale, diffusion scale, symmetric spectral index, and window function, respectively. Similarly, $P_{A}(k,k_{l},k_{m}) = A_{H}(k/k_{l})^{n_{A}}~W(k,k_{m})$ denotes the power spectrum of the antisymmetric part related to the helicity of the magnetic field, with $n_{A}$ denoting the antisymmetric spectral index. The normalized window function, $W(k,k_{m})$, is assumed to be a Gaussian of the form,
\begin{equation}
W(k,k_{m}) = \pi^{-3/2}k^{-3}_{m}\text{e}^{-\left(\frac{k}{k_{m}}\right)^{2}}
\end{equation}
The constants $A_{B}$ and $A_{H}$ are given by,
\begin{align}
A_{B} &= 4\pi^{7/2}\left(\frac{k_{l}}{k_{m}}\right)^{n_{B}} \rho_{B0} \frac{1}{\Gamma(\frac{n_{B}+3}{2})} \\
A_{H} &= 4\pi^{7/2}\left(\frac{k_{l}}{k_{m}}\right)^{n_{A}} \rho_{B0} \frac{1}{\Gamma(\frac{n_{B}+3}{2})} \left(\frac{q}{k_{m}}\right)^{n_{B}-n_{A}}
\end{align}
where $\rho_{B0} = \langle B^{2}_{0}(\vec{x})/2 \rangle$ denotes the energy density today smoothed over the diffusion scale. In addition, to satisfy the realizability condition~\cite{kk2010,kk2012}, we have the additional factor of $(q/k_{m})^{n_{B}-n_{A}}$ in the expression for $A_{H}$ where $q = k_{\text{max}}(k_{\text{min}})$ if $n_{A}-n_{B}>0~(n_{A}-n_{B}<0)$, where $k_{\text{max}}$ and $k_{\text{min}}$ denote the maximum and minimum wave number, respectively. For maximally helical magnetic fields, $n_{A}=n_{B}$, for which the additional factor in $A_{H}$ goes away. The dissipation scale, $k_{m}$~\cite{jed1998,subb1998}, is given by,
\begin{equation}\label{km1}
k_{m} = 248.60~\left(\frac{B_{0}}{\text{1nG}}\right)^{-1} \text{Mpc}^{-1}
\end{equation}
where, the expression for $k_{m}$ has been computed using the best-fit parameters of Planck 2018~\cite{pl2018}, $\Omega_{m}=0.315 \pm 0.007$, $\Omega_{b}h^2=0.0224 \pm 0.001$, and $H_{0}=67.4 \pm 0.5~\text{km}\text{s}^{-1}\text{Mpc}^{-1}$.
Apart from the diffusion scale, we need to consider the length scale at which the linear regime approximation breaks down. This scale is approximately equivalent to the magnetic Jeans scale, $k_{J}$. For scales $k > k_{J}$, the magnetic pressure gradients are able to successfully resist gravity and prevent gravitational collapse. Hence, for such scales, non-linear processing prevents the fields from redshifting in a merely expansion-driven adiabatic way. Turbulent decay of the magnetic field becomes important for scales $k_{J}< k < k_{m}$. However, in the linear regime, i.e., for scales $k < k_{J}$, ambipolar diffusion of magnetic fields dominates. We discuss both these processes in the next section.
\section{Dissipation mechanisms of helical magnetic field}
\label{s2}
Post recombination, the number density of ionized particles falls drastically reaching $10^{-4}$ for $z\lesssim 100$~\cite{peeb93}. The temperature of the IGM follows the evolution of $T_{\gamma}$ up to a redshift $\approx 100$ after which it starts falling faster, $\propto a^{-2}$. The dissipation of magnetic field contributes to the heating of the IGM and its ionization, thereby affecting the evolution of the gas temperature and the ionization fraction. The modified temperature evolution is given by~\cite{ss2005,chluba2015,chluba2019},
\begin{align}
\dot{T}_{e} &= -2\frac{\dot{a}}{a}T_{e} + \frac{8 \rho_{\gamma} \sigma_{t}N_{e}}{3 m_{e} N_{tot} c} (T_{\gamma} - T_{e}) + \frac{\Gamma}{1.5k_{B}N_{tot}}
\end{align}
where $N_{tot} = N_{H}(1+f_{He}+X_{e})$ denotes the total number density of particles that are coupled by Coulomb interactions, $T_{\gamma}$ denotes the CMB temperature, $T_{e}$ denotes the gas temperature, and $\sigma_{t}$ denotes the Thomson cross-section. The quantity $f_{He} \approx Y_{p}/[4(1-Y_{p})] \approx 0.079$ for helium mass fraction $Y_{p} = 0.24$; $X_{e}= N_{e}/N_{H}$ denotes the free electron fraction, where $N_{e}$ denotes the number density of electrons and $N_{H}$ denotes the hydrogen number density. The quantity $\rho_{\gamma}$ denotes the photon energy density and is given by $0.26(1+z)^{4}~\text{eV}\,\text{cm}^{-3}$~\cite{chluba2015}. In the above equation, the first term on the R.H.S. represents the decay of the temperature of non-relativistic particles due to expansion. If $T_e<T_{\gamma}$, the second term tends to increase $T_e$. Similarly, if $T_e>T_{\gamma}$, this term tends to decrease $T_e$. Thus the second term tends to bring the electron temperature on par with the CMB temperature.
The quantity $\Gamma$ in the third term represents the additional heating due to magnetic fields, which can be either due to a) ambipolar diffusion, $\Gamma_{\text{ambi}}$, b) decaying turbulence, $\Gamma_{\text{turb}}$, or c) a combination of both, $\Gamma_{\text{ambi+turb}}$. We discuss these processes in the following two subsections.
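For orientation, the sketch below shows a simplified redshift integration of this temperature equation, with the Compton and magnetic heating terms written out explicitly; the cosmology is reduced to matter plus a cosmological constant, and the ionization history $X_e(z)$ and heating rate $\Gamma(z)$ are assumed to be supplied externally, so this only illustrates the structure of the calculation, not \emph{RECFAST++} itself.
\begin{verbatim}
# Simplified Euler integration of dT_e/dz (cgs units; illustrative only).
import numpy as np

sigma_T, m_e, c, k_B = 6.652e-25, 9.109e-28, 2.998e10, 1.381e-16
eV = 1.602e-12
H0 = 67.4 * 1.0e5 / 3.086e24                 # s^-1
Om, fHe = 0.315, 0.079
N_H0 = 1.9e-7                                # hydrogen number density today, cm^-3

def H(z):
    return H0 * np.sqrt(Om * (1 + z)**3 + (1 - Om))

def integrate_Te(Gamma, X_e, z_start=1000.0, z_end=5.0, n=200000):
    z = np.linspace(z_start, z_end, n)
    dz = z[1] - z[0]                         # negative step
    T = 2.725 * (1 + z_start)                # start coupled to the CMB
    for zi in z[:-1]:
        Tg, N_H = 2.725 * (1 + zi), N_H0 * (1 + zi)**3
        N_tot = N_H * (1 + fHe + X_e(zi))
        rho_g = 0.26 * (1 + zi)**4 * eV      # photon energy density, erg cm^-3
        compton = (8 * rho_g * sigma_T * X_e(zi) * N_H
                   / (3 * m_e * c * N_tot)) * (Tg - T)
        heat = Gamma(zi) / (1.5 * k_B * N_tot)
        dTdz = 2 * T / (1 + zi) - (compton + heat) / (H(zi) * (1 + zi))
        T += dTdz * dz
    return T
\end{verbatim}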
\subsection{Ambipolar diffusion}
The residual ionization after recombination facilitates ambipolar diffusion wherein the Lorentz force exerted by magnetic fields on the charged particles leads to a velocity difference between the charged and the neutral particles~\cite{cowling,shu1992}. The collisions between the ionized and the neutral particles lead to the dissipation of magnetic fields and in turn heating up of the IGM. The dissipation due to ambipolar diffusion is given by~\cite{cowling,shu1992},
\begin{equation}
\Gamma_{\text{ambi}} = \frac{(1-X_{e})}{X_{e} \gamma \rho_{b}^{2}} \frac{\langle{|(\nabla \times B(t,x)) \times B(t,x)|}^{2}\rangle}{16 \pi^{2}}
\end{equation}
where, the mean square Lorentz force, $\langle L^{2}\rangle$ is denoted by $\langle{|(\nabla\times B(t,x))\times B(t,x)|}^{2}\rangle/16\pi^{2}$, $X_{e} = N_{e}/N_{H}$ and $\rho_{b} = m_{H} N_{b}$, where $N_{b}$ denotes the baryon number density and $m_{H}$ denotes the mass of the Hydrogen atom. The quantity, $\gamma$, represents the coupling coefficient and is given by $\langle\sigma \nu \rangle_{HH^{+}}/2 m_{H}$ where, $\langle \sigma \nu \rangle \approx 6.49 \times 10^{-10} (T/K)^{0.375} \text{cm}^{3} \text{s}^{-1}$.
In Kunze (2011)~\cite{kk2010}, the expression for $\langle L^{2} \rangle$ was estimated to be,
\begin{align} \label{NHL}
\langle L^{2} \rangle &= \frac{k^{2}_{m}\rho^{2}_{B0}}{[\Gamma(\frac{n_{B}+3}{2})]^{2}}(1+z)^{10} \int^{\infty}_{0} dw w^{2n_{B}+7}e^{-w^{2}}\int^{\infty}_{0} dv v^{2n_{B}+2}e^{-2v^{2}w^{2}} \int^{1}_{-1} dx e^{2w^{2}vx} \\ &\times \nonumber (1-2vx+v^{2})^{\frac{n_{B}-2}{2}}[1+2v^2+(1-4v^2)x^2-4vx^3+4v^2x^4]
\end{align}
where, $w=k/k_{m}$, $v=q/k$, and $x=\vec{k}.\vec{q}/kq$. This integral is then numerically determined and the final expression is then approximated as,
\begin{align}
\langle L^{2} \rangle &\approx 16 \pi^{2} k^{2}_{m} \rho^{2}_{B0} (1+z)^{10} f_{L,NH}(n_{B}+3)
\end{align}
where $f_{L,NH}(n_{B}+3) = 0.8313(1-1.020 \times 10^{-2}(n_{B}+3)) (n_{B}+3)^{1.105}$. We then evaluate $\langle L^{2} \rangle$ for helical magnetic fields using the analysis detailed in Kunze~(2012)~\cite{kk2012},
\begin{align} \label{HL}
\nonumber
\langle L^{2} \rangle &= \frac{k^{2}_{m}\rho^{2}_{B0}}{[\Gamma(\frac{n_{B}+3}{2})]^{2}}(1+z)^{10} \int^{\infty}_{0} dw w^{2n_{B}+7}e^{-w^{2}}\int^{\infty}_{0} dv v^{2n_{B}+2}e^{-2v^{2}w^{2}} \\ &\times \nonumber \int^{1}_{-1} dx e^{2w^{2}vx} (1-2vx+v^{2})^{\frac{n_{B}-2}{2}}[1+2v^2+(1-4v^2)x^2-4vx^3+4v^2x^4] - \\ \nonumber &\frac{2\rho^{2}_{B0}k^2_{m}}{[\Gamma(\frac{n_{B}+2}{2})]^{2}}\left(\frac{q}{k_{m}}\right)^{2(n_{B}-n_{A})} (1+z)^{10} \int^{\infty}_{0} dw w^{2n_{A}+7}e^{-w^{2}}\int^{\infty}_{0} dv v^{2n_{A}+2}e^{-2v^{2}w^{2}} \\ \nonumber &[ \frac{1}{9} \int^{1}_{-1} dx e^{2w^{2}vx} (1-2vx+v^{2})^{\frac{n_{A}-1}{2}} (x-v) +\frac{2}{9} \int^{1}_{-1} dx e^{2w^{2}vx} (1-2vx+v^{2})^{\frac{n_{A}-1}{2}} \\ \nonumber
&(v+2x-3vx^2) +\frac{2}{9} \int^{1}_{-1} dx e^{2w^{2}vx} (1-2vx+v^{2})^{\frac{n_{A}-1}{2}} \\
&(4v+2x-6vx^2)]
\end{align}
Using the condition for maximally helical magnetic fields, $n_{A}=n_{B}$, we then numerically evaluate the integral along lines similar to the non-helical case above. The final expression for $\langle L^{2}\rangle$ for helical magnetic fields is given by,
\begin{equation}
\langle L^{2} \rangle \approx 16 \pi^{2} k^{2}_{m} \rho^{2}_{B0} (1+z)^{10} f_{L,H}(n_{B}+3)
\end{equation}
where $f_{L,H}(n_{B}+3)=0.45(1-0.017(n_{B}+3))(n_{B}+3)^{1.1}$. We use this expression for $f_{L,H}(n_{B}+3)$ to estimate the modified ambipolar diffusion decay of helical magnetic fields.
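Schematically, the resulting ambipolar heating rate for maximally helical fields can be assembled as in the sketch below; the unit conversions and the present-day baryon density are approximate, and the routine is only meant to mirror the expressions above.
\begin{verbatim}
# Ambipolar-diffusion heating rate for maximally helical fields (cgs, schematic).
import numpy as np

m_H = 1.673e-24                               # g
N_b0 = 2.5e-7                                 # baryon number density today, cm^-3

def f_L_helical(nB):
    x = nB + 3.0
    return 0.45 * (1.0 - 0.017 * x) * x**1.1

def gamma_ambi_helical(z, B0_G, nB, X_e, T):
    k_m = 248.60 / (B0_G / 1e-9)              # diffusion scale, Mpc^-1
    k_m_cgs = k_m / 3.086e24                  # cm^-1
    rho_B0 = B0_G**2 / (8.0 * np.pi)          # erg cm^-3
    L2 = 16 * np.pi**2 * k_m_cgs**2 * rho_B0**2 * (1 + z)**10 * f_L_helical(nB)
    rho_b = m_H * N_b0 * (1 + z)**3
    gamma_coupling = 6.49e-10 * T**0.375 / (2.0 * m_H)
    return (1 - X_e) / (X_e * gamma_coupling * rho_b**2) * L2 / (16 * np.pi**2)
\end{verbatim}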
\subsection{Decaying turbulence}
After recombination, due to the drop in radiative viscosity on scales $k > k_{J}$, turbulence starts dominating~\cite{jedamzik,jed1998,subb1998}. This leads to non-linear interactions, which, through mode coupling, transfer magnetic field power from larger to smaller scales until the dissipation scale $k_{m}$ is reached, after which the dissipation proceeds through viscosity. \par
In previous studies, turbulent decay of non-helical fields has been included in the evolution of temperature of the IGM. Non-helical magnetic fields in the regime of turbulent decay evolve as~\cite{kandu2016,jedamzik},
\begin{equation}
\rho_{B} = \frac{\rho_{Bi}}{(1+\frac{\tilde{t}}{\tilde{t}_{d}})^{m}} \label{rhoB}
\end{equation}
where $\rho_{B}$ and $\rho_{Bi}$ denote the flat-space magnetic field energy density and its initial value, respectively, and $\tilde{t}$ and $\tilde{t}_{d}$ denote the time in flat space and the relevant dynamical time for decay, respectively. In the above expression, $m = 2(n_{B}+3)/(n_{B}+5)$. However, due to helicity conservation, maximally helical magnetic fields have $m=2/3$. Hence, Eq.~\ref{rhoB} is modified to~\cite{kandu2016,jedamzik},
\begin{equation}
\rho_{B} = \frac{\rho_{Bi}}{(1+\frac{\tilde{t}}{\tilde{t}_{d}})^{2/3}}
\end{equation}
where the evolution of the magnetic field energy density is independent of the spectral index, $n_{B}$. For non-helical magnetic fields, $\Gamma_{turb}$ is given by the following expression,
\begin{equation}
\Gamma_{\text{turb,NH}} = \frac{B^{2}_{0}}{8\pi} \frac{3m}{2} \frac{[\text{ln}(1+t_{d}/t_{i})]^{m}}{[\text{ln}(1+t_{d}/t_{i})+\text{ln}(t/t_{i})]^{1+m}} H(t) (1+z)^{4}
\end{equation}
where $m$ is again $2(n_{B}+3)/(n_{B}+5)$, $B_{0}$ is the present-day magnetic field strength, $H(t)$ is the Hubble parameter, $t_{i}$ is the initial epoch when the decay starts, i.e., the epoch of decoupling, and $t_{d}$ denotes the physical decay time scale for the turbulence. The quantity $t_{d}/t_{i}$ is given by $(k_J/k_{m})^{(n_{B}+5)/2}\approx 14.5 (B_{0}/1\text{nG})^{-1}(k_{m}/1\text{Mpc}^{-1})^{-1}$, where the magnetic Jeans wavenumber, $k_{J}$, is given by,
\begin{equation}
k_{J}= 14.5^{2/(n_{B}+5)} \left(\frac{B_{0}}{1\text{nG}}\right)^{-2/(n_{B}+5)}
\left(\frac{k_{m}}{1\text{Mpc}}\right)^{({n_{B}+3})/({n_{B}+5})}\text{Mpc}^{-1}.
\end{equation}
Due to the change in the evolution equation for the magnetic field energy density, the expression for $\Gamma_{turb}$ for maximally helical magnetic fields is modified to,
\begin{equation}
\Gamma_{\text{turb,H}} = \frac{B^{2}_{0}}{8\pi}\frac{[\text{ln}(1+t_{d}/t_{i})]^{2/3}}{[\text{ln}(1+t_{d}/t_{i})+\text{ln}(t/t_{i})]^{1+2/3}} H(t) (1+z)^{4}
\end{equation}
Compared to the non-helical case, the exponent $m$ changes to $2/3$ in the helical magnetic field case. This arises from the rate of decay of magnetic field energy density when helicity is conserved. The analytically derived decay rate is also supported by recent numerical simulations conducted by Banerjee and Jedamzik~(2004)~\cite{jedamzik}, Kahniashvili et~al. (2013)~\cite{kahn2013}, and Brandenburg et al. (2015)~\cite{brand2015}.
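The two turbulent heating rates can be compared with a short routine such as the one below, where $t/t_i$ is expressed through redshift assuming a matter-dominated scaling $t\propto (1+z)^{-3/2}$; this scaling and the numerical factors are taken from the expressions above and are meant only as a sketch.
\begin{verbatim}
# Decaying-turbulence heating rate, non-helical vs. maximally helical (sketch).
import numpy as np

def gamma_turb(z, B0_nG, nB, H_of_z, z_i=1088.0, helical=False):
    rho_B = (B0_nG * 1e-9)**2 / (8 * np.pi) * (1 + z)**4   # erg cm^-3
    k_m = 248.60 / B0_nG                                    # Mpc^-1
    td_over_ti = 14.5 / (B0_nG * k_m)
    t_over_ti = ((1 + z_i) / (1 + z))**1.5                  # matter-era scaling
    m = 2.0 / 3.0 if helical else 2.0 * (nB + 3) / (nB + 5)
    pref = 1.0 if helical else 1.5 * m
    num = np.log(1.0 + td_over_ti)**m
    den = (np.log(1.0 + td_over_ti) + np.log(t_over_ti))**(1.0 + m)
    return pref * rho_B * num / den * H_of_z(z)
\end{verbatim}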
Next, we make use of these modified expressions for ambipolar diffusion and turbulent decay to evaluate the temperature and ionization fraction evolution. To this end, we use the extension of the \emph{RECFAST++} code~\cite{chluba2010}, which includes the contribution from magnetic fields, and we have modified it to include the effect of helicity. We have also plotted the variation of the ambipolar diffusion and decaying turbulence rates with redshift $z$ for both non-helical and helical magnetic fields in Fig.~\ref{fig0}. Further, we study the implications of helicity for the CMB temperature and polarization anisotropies using \emph{CAMB}~\cite{lewis2000} in the following section.
\begin{figure}
\centering
\includegraphics[scale=0.4]{ambi_1.png}
\includegraphics[scale=0.4]{turb_1.png}
\caption{The variation of the ambipolar diffusion rate (upper panel) and decaying turbulence rate (lower panel) with redshift, $z$. The magnetic field considered is $B_{0}=3$nG. The solid and dashed lines represent non-helical and helical magnetic fields, respectively. The blue and red lines represent $n_{B}=2.0$ and $n_{B}=-2.9$, respectively.}
\label{fig0}
\end{figure}
\section{Results and Discussion} \label{s3}
\subsection{Evolution of temperature and ionization fraction}
In Fig.~\ref{fig1}, we have plotted the temperature evolution (left panel) and the evolution of the ionization fraction (right panel) due to ambipolar diffusion (upper panel), turbulent decay (middle panel), and the combined effect of both (lower panel), for non-helical and maximally helical magnetic fields. We can infer from the figure that under certain conditions the contribution of the antisymmetric part of the magnetic field power spectrum can lead to a perceivable deviation from the non-helical case in both the baryon temperature evolution and the ionization fraction evolution.
As seen from the upper left panel of Fig.~\ref{fig1}, we note that there is a decrease in the temperature of the gas in the case of a helical field as compared to the non-helical case for certain intermediate values of redshift, $z >12$, for both the blue spectrum~($n_{B}=2.0$) and the nearly scale invariant spectrum~($n_{B}=-2.9$). In the case of the nearly scale invariant spectrum ($n_{B}=-2.9$), the deviation of the gas temperature between the helical and the non-helical case at intermediate redshifts is larger than in the $n_{B}=2.0$ case. These curves, however, merge with each other at lower redshifts, showing that although the magnetic field results in a change in gas temperature at low redshifts (as compared to the case when no magnetic field is considered), this change is independent of whether we consider helical or non-helical fields and whether we consider a blue spectrum or a nearly scale invariant one. \par
The ionization fraction evolution with redshift (upper right panel) also shows a similar trend, with the presence of a magnetic field leading to an increase in the degree of ionization. The helical contribution, however, results in an ionization fraction that is less than in the case of non-helical fields. This feature is exhibited for both the blue spectrum and the nearly scale invariant spectrum. The behaviour of both the temperature and the ionization fraction evolution can be understood from the change in the expression for $\langle L^2 \rangle$ (see Eq.~\ref{NHL} and Eq.~\ref{HL}), which has an additional negative contribution from the antisymmetric part in the case of helical magnetic fields.
Another important aspect worth noting is that while the deviation in the temperature vanishes at low redshifts for the two cases (helical and non-helical magnetic fields), the difference in the ionization fraction between the helical and non-helical cases is clearly perceivable even at low redshifts for both the blue spectrum and the nearly scale invariant spectrum. As we shall see, this leads to a change in the polarization spectrum of the CMB. The above behaviour has been estimated for $B_{0}=3$nG; for stronger magnetic fields, the change will be larger.
Next, equating $\Gamma = \Gamma_{turb}$, we can infer that the contribution of the antisymmetric part is independent of the spectral index and leads to higher values of temperature and ionization fraction as a function of redshift compared to the non-helical magnetic field for higher values of $n_{B}$. This can also be inferred from the middle panel of Fig.~\ref{fig1} where the temperature evolution for the helical magnetic field is shifted to higher values with respect to the temperature evolution for the non-helical magnetic field when $n_{B}=2.0$. However, as $n_{B}$ reduces to $-2.9$, the non-helical magnetic field induced temperature exceeds the helical magnetic field induced temperature. This can be understood by comparing the variation of $\Gamma_{turb,NH}$ and $\Gamma_{turb,H}$ for different values of $n_{B}$. Similarly, the ionization fraction evolution shows that the helical magnetic field induced ionization fraction as a function of redshift is larger than that induced due to non-helical magnetic fields for $n_{B}=2.0$ but for lower spectral indices, the difference diminishes with the evolution almost overlapping for $n_{B}=-2.9$. Next, in the lower panel of Fig.~\ref{fig1}, we show the combined effect of ambipolar diffusion and turbulence on the temperature and the ionization fraction evolution. We can see that the magnetic field decay due to turbulence dominates at high redshifts with ambipolar diffusion taking over at lower redshifts. With the temperature and the ionization fraction evolution in place, we now proceed to study the effect of the helical magnetic field dissipation on the CMB.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{NH_H_ad_1_T.png}
\includegraphics[scale=0.3]{NH_H_1_Xe.png}
\includegraphics[scale=0.3]{NH_H_turb_T.png}
\includegraphics[scale=0.3]{NH_H_turb_Xe.png}
\includegraphics[scale=0.3]{NH_H_ab_turb_T.png}
\includegraphics[scale=0.3]{NH_H_ab_turb_Xe.png}
\caption{\emph{Left panel:} Temperature evolution with redshift due to the additional effect of ambipolar diffusion (A), decaying turbulence (C), and the combined effect of both (E) for non-helical as well as helical magnetic fields. The blue solid and dashed lines represent the contributions of non-helical and helical magnetic fields, respectively, for $n_{B}=2.0$. Similarly, the red solid and dashed lines represent the contributions of non-helical and helical magnetic fields, respectively, for $n_{B}=-2.9$. The magnetic field considered is $B_{0}=3$nG. The black solid line represents the CMB temperature and the brown solid line represents the gas temperature without a magnetic field. \emph{Right panel:} Ionization fraction evolution with redshift, with the same conventions as in the left panel.}
\label{fig1}
\end{figure}
\subsection{Effect on CMB}
The effect on the CMB of ambipolar diffusion and turbulent decay of non-helical magnetic fields has been studied before, and constraints on the magnetic field have been derived~\cite{kunze2015,chluba2015,chluba2019}. In this paper, we study the effect of post-recombination dissipation of helical magnetic fields on the CMB. Fig.~\ref{fig2} shows the effect of a $3$nG magnetic field (both non-helical and maximally helical) on the temperature and polarization anisotropy of the CMB. As is clear from the figure, while the effect on the CMB temperature is small, the effect on the CMB polarization anisotropy is clearly visible at least in a certain range of $\ell$-values.
In Fig.~\ref{fig2}A and Fig.~\ref{fig2}C, we have shown how the CMB temperature is affected by the dissipation of the magnetic field (maximally helical fields denoted by dashed lines and non-helical fields by solid lines) through ambipolar diffusion and turbulent decay, respectively. Fig.~\ref{fig3} incorporates the combined effect of ambipolar diffusion and turbulent decay. In the case of ambipolar diffusion, the effect of helicity shifts the temperature anisotropy spectrum to slightly higher values compared to the case of non-helical magnetic fields. The maximum value of the shift is $\approx 3\%$ for $n_{B}=2$ and decreases with decreasing spectral index; the variation is negligible for $n_{B}=-2.9$. Further, in the case of decaying turbulence, the helical magnetic field shifts the temperature anisotropies to slightly lower values compared to non-helical magnetic fields at higher $\ell$, with the shift being larger for $n_{B}=-2.9$ than for $n_{B}=2.0$. This behaviour is different at lower $\ell$ for $n_{B}=2.0$ compared to $n_{B}=-2.9$. In addition, as decaying turbulence for helical magnetic fields is independent of the spectral index, the temperature anisotropy spectra for all values of the spectral index coincide. In order to resolve the difference between the helical and non-helical magnetic fields, in Fig.~\ref{fig3}C we have plotted the percentage difference in the CMB temperature anisotropy due to helical and non-helical magnetic fields, including both the ambipolar diffusion and the decaying turbulence effects. We can infer from the figure that for $n_{B}= -2.9$ the effect of decaying turbulence dictates the behaviour, as the effect of ambipolar diffusion is negligible in this case. On the other hand, for $n_{B}=2.0$ the combination results in the helical magnetic field leading to higher temperature anisotropies than the non-helical magnetic field for most values of $\ell$, as the effect of ambipolar diffusion dominates in this case. \par
The difference between the helical and non-helical cases is more prominent in the polarization anisotropy of the CMB than in the temperature anisotropy. In the right panel of Fig.~\ref{fig2}, we have shown the polarization anisotropy in the CMB due to ambipolar diffusion (Fig.~\ref{fig2}B) and turbulent decay (Fig.~\ref{fig2}D), and in Fig.~\ref{fig3}B we have shown the combination of both effects. As in the temperature anisotropy case, we have also shown the percentage difference between the helical and the non-helical magnetic field in Fig.~\ref{fig3}D. In the case of ambipolar diffusion, the helical field shifts the polarization anisotropy spectrum to lower values, and the shift decreases with decreasing $n_{B}$. This can be understood from the behaviour of the temperature and ionization fraction evolution, which shows lower values for helical magnetic fields compared to non-helical magnetic fields; the decrease in the ionization fraction leads to a decrease in the polarization anisotropy. We also see that the effect of turbulent decay on the polarization is smaller than that of ambipolar diffusion. In the case of decaying turbulence, the helical magnetic field induced polarization anisotropy has higher values than in the non-helical case except at high $\ell$~(Fig.~\ref{fig2}D).
The net effect of ambipolar diffusion and turbulent decay due to both non-helical and helical magnetic fields on the CMB polarization is shown in Fig.~\ref{fig3}B. We find that for the nearly scale invariant spectrum ($n_{B}=-2.9$) the polarization anisotropy in the CMB is dictated by decaying turbulence, as the effect of ambipolar diffusion is negligible in this case. On the other hand, for $n_{B}=2.0$ the combined plot shows that the CMB polarization power spectrum closely resembles that due to ambipolar diffusion, as the effect of decaying turbulence dictates the behaviour only for a small range of $\ell$. In order to highlight how different the effect is between these two cases, the percentage difference in the polarization anisotropy for these two cases is plotted in Fig.~\ref{fig3}D.
We note that for the blue spectrum ($n_B=2$) there is a dip of $\sim 35\%$ in the helical magnetic field induced polarization anisotropies compared to the non-helical case for small values of $\ell$, roughly between $5$ and $20$. For $40 \le \ell \le 450$ the effect of the helical field is larger, reaching a maximum of about $9\%$. For larger $\ell$, the difference is small and remains within $5\%$. For $n_{B}=-2.9$, the polarization anisotropies induced by the helical field are not significantly different from the non-helical case, the largest difference being less than about $10\%$.
In addition, we have shown in Fig.~\ref{fig4} and Fig.~\ref{fig5} the CMB temperature and polarization anisotropy spectra obtained from Planck 2018~\cite{pl2018}, to compare with the CMB temperature and polarization anisotropies computed using \emph{CAMB} for different magnetic field strengths and spectral indices ($n_{B}=2.0$ and $n_{B}=-2.9$). In Fig.~\ref{fig4}C, D and Fig.~\ref{fig5}C, D, we have plotted the absolute percentage deviation of our models from Planck 2018. We have included the combined effect of ambipolar diffusion and turbulent decay in these plots. It can be inferred from Fig.~\ref{fig4} that the temperature anisotropy spectrum generated by the maximally helical magnetic field for $n_{B}=2.0$ is closer to the Planck 2018 spectrum than that due to the non-helical magnetic field at most $\ell$, for the same magnetic field strength. This was also seen in Fig.~\ref{fig2} and Fig.~\ref{fig3}, where we showed that ambipolar diffusion dominates the behaviour for $n_{B}=2.0$. This implies that the constraint obtained on the magnetic field is relaxed when helicity is included for $n_{B}=2.0$. On the other hand, for $n_{B}=-2.9$, the CMB anisotropy spectra due to the helical magnetic field are farther from the Planck 2018 spectra than in the non-helical case for most values of $\ell$ (see Fig.~\ref{fig5}). Further, as the magnetic field is reduced, the spectra approach the Planck 2018 spectra, with magnetic fields between $1$ and $2$nG showing the least deviation for both values of $n_{B}$. However, for $n_{B}=2.0$ this constraint is relaxed and for $n_{B}=-2.9$ it is tightened, as we saw above. We note here that the constraints discussed are qualitative and have been inferred from the percentage difference plots.
Before we end this section, we present a comparison of our results for the non-helical magnetic field case with previous results, such as Chluba et al.~(2015)~\cite{chluba2015} and Kunze and Komatsu~(2015)~\cite{kunze2015}, and study the variation of our results with an alternative model of the damping profile discussed in Paoletti~et~al.~(2019)~\cite{pao2019}. The overall trend of the change in the CMB anisotropies due to the ambipolar diffusion and decaying turbulence effects of non-helical magnetic fields matches that of Chluba et al.~(2015)~\cite{chluba2015}, where $B_{0}=3$nG and $n_{B}=-2.9$ and $n_{B}=3.0$ had been considered. The amplitudes of the percentage difference in $C_{\ell}$ with and without magnetic field calculated in our analysis are also consistent with their work for $n_{B}=-2.9$.
The constraint on the magnetic field obtained in Chluba et al.~(2015)~\cite{chluba2015}, namely $B_{0}<1.1$nG for $n_{B}=-2.9$, matches the qualitative constraints inferred from the plots in this work. In addition, it is of similar order to the constraints obtained by Kunze and Komatsu~(2015)~\cite{kunze2015}, wherein they employed Recfast++ with CLASS to study the variation of CMB anisotropies with non-helical magnetic fields.
We now compare our results for an alternative damping scale as given below with the results obtained by Paoletti~et~al.~(2019)~\cite{pao2019},
\begin{equation} \label{km}
k_{m} = \frac{\sqrt{5.5 \times 10^{4}}}{\sqrt{\langle B^{2} \rangle/\text{nG}}}\, \frac{(2\pi)^{(n_{B}+3)/2}}{\sqrt{\Gamma[(n_{B}+5)/2]}}\,
\sqrt{h\,\frac{\Omega_{b}h^{2}}{0.022}}\ \text{Mpc}^{-1},
\end{equation}
wherein the damping profile is given by
\begin{equation}
\langle B^{2} \rangle = \frac{1}{2\pi^{2}} \int_{0}^{k_{m}} dk\, k^{2}\, 2P_{B}(k).
\end{equation}
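For orientation, the damping scale in Eq.~\ref{km} can be evaluated directly. The following minimal Python sketch is not the code used for our figures; the default values of $h$ and $\Omega_{b}h^{2}$, the example field strength, and the reading of $\langle B^{2}\rangle$ in units of nG$^{2}$ are illustrative assumptions.
\begin{verbatim}
# Sketch of Eq. (km): damping scale k_m in Mpc^-1.
# Assumed inputs: B2_nG = <B^2> in nG^2, spectral index n_B,
# h and Omega_b h^2 (the defaults below are placeholders).
from math import pi, sqrt
from scipy.special import gamma

def k_m(B2_nG, n_B, h=0.67, omega_b_h2=0.022):
    prefactor = sqrt(5.5e4) / sqrt(B2_nG)
    spectral = (2 * pi) ** ((n_B + 3) / 2) / sqrt(gamma((n_B + 5) / 2))
    cosmology = sqrt(h * omega_b_h2 / 0.022)
    return prefactor * spectral * cosmology

# Spectral prefactor (the factor quoted later in the text) and k_m
# for an illustrative field strength sqrt(<B^2>) = 1 nG:
for n_B in (2.0, -2.9):
    print(n_B,
          (2 * pi) ** ((n_B + 3) / 2) / sqrt(gamma((n_B + 5) / 2)),
          k_m(1.0, n_B))
\end{verbatim}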
The change in the dissipation scale does not affect the overall trend seen previously, as can be inferred from Fig.~\ref{fignew}. In other words, ambipolar diffusion still dominates for $n_{B}=2.0$ and decaying turbulence dominates for $n_{B}=-2.9$. However, the amplitude of the change in $C_{\ell,TT}$ and $C_{\ell,EE}$ alters considerably for $n_{B}=2.0$; a similar change is not observed for $n_{B}=-2.9$. The temperature and ionization fraction evolution with redshift, $z$, also registers a change for $n_{B}=2.0$ but not for $n_{B}=-2.9$. The constraints obtained on $\sqrt{ \langle B^{2} \rangle}$ are considerably tightened compared to those on $B_{0}$. Paoletti et al.~(2019)~\cite{pao2019}, using MCMC, obtained $\sqrt{ \langle B^{2} \rangle} < 0.06$nG for $n_{B}=1.0$ and $\sqrt{ \langle B^{2} \rangle} < 1.06$nG for $n_{B}=-2.9$, which is consistent with our qualitative estimates.
In addition, including the helicity does not alter the overall trend seen for the previous damping scale; however, the amplitude of the difference between the non-helical and helical magnetic fields increases considerably for the ambipolar diffusion case, since the rate is proportional to $k^{2}_{m}$ and the new damping scale is greater than the old one by a factor of $(2\pi)^{(n_{B}+3)/2}/\sqrt{\Gamma[(n_{B}+5)/2]}$. Even for a magnetic field strength as low as $\sqrt{\langle B^{2} \rangle} = 0.4$nG, the amplitude of the percentage difference between $C_{\ell,TT,H}$ and $C_{\ell,TT,NH}$ is $\approx 15\%$ for $n_{B}=2.0$. As stated previously, no difference from the previous result is observed for $n_{B}=-2.9$. We have also plotted the percentage difference in $C_{\ell,TT}$ and $C_{\ell,EE}$ with respect to the Planck 2018 results for the new dissipation scale in Fig.~\ref{fignew}, where the modification due to the new dissipation scale is apparent for $n_{B}=2.0$.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{CTT_NH_H_ad_1.png}
\includegraphics[scale=0.3]{CEE_NH_H_ad_1.png}
\includegraphics[scale=0.3]{CTT_NH_H_t.png}
\includegraphics[scale=0.3]{CEE_NH_H_t.png}
\caption{\small{(A) and (C) denote the CMB temperature anisotropy spectrum where the x-axis denotes the multipole, $\ell$ with the effect of ambipolar diffusion and decaying turbulence, respectively. Similarly, (B), and (D) denote the CMB polarization anisotropy spectrum where the x-axis denotes the multipole, $\ell$ with the effect of ambipolar diffusion and decaying turbulence, respectively. The magnetic field considered here is $B_{0}=3$nG. The blue solid and dashed lines represent contribution of non-helical magnetic fields and maximally helical magnetic fields, respectively, for $n_{B}=2.0$. Keeping everything the same, red solid and dashed lines represent $n_{B}=-2.9$. The black solid line represents the temperature anisotropy spectrum with no magnetic field. }}
\label{fig2}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width = 0.5\textwidth]{CTT_NH_H_ab_t.png}
\includegraphics[width = 0.5\textwidth]{CEE_NH_H_ab_t.png}
\includegraphics[width = 0.5\textwidth]{diff_CTT_ab_t.png}
\includegraphics[width = 0.5\textwidth]{diff_CEE_ab_t.png}
\caption{\small{(A) and (B) denote the CMB temperature and polarization anisotropy spectrum, respectively, where the x-axis denotes the multipole $\ell$, with the combined effect of ambipolar diffusion and decaying turbulence. (C) and (D) denote the percentage difference in $C_{\ell,TT}$ and $C_{\ell,EE}$, respectively, between helical and non-helical magnetic fields, with the x-axis denoting $\ell$. $\Delta C_{\ell}/C_{\ell}$ denotes $(C_{\ell,H}-C_{\ell,NH})/C_{\ell,NH}\times100$, where $C_{\ell,H}$ and $C_{\ell,NH}$ represent the CMB anisotropy due to helical and non-helical magnetic fields, respectively; the combined effect of ambipolar diffusion and decaying turbulence is included in the calculation of the percentage difference. The magnetic field considered here is $B_{0}=3$nG. The blue solid and dashed lines represent the contributions of non-helical and maximally helical magnetic fields, respectively, for $n_{B}=2.0$. Keeping everything else the same, red solid and dashed lines represent $n_{B}=-2.9$. The black solid line represents the temperature anisotropy spectrum with no magnetic field. }}
\label{fig3}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width = 0.5\textwidth]{CTT_NH_H_ab_t_pl.png}
\includegraphics[width = 0.5\textwidth]{CEE_NH_H_ab_t_pl.png}
\includegraphics[width = 0.5\textwidth]{diff_CTT_ab_t_pl.png}
\includegraphics[width = 0.5\textwidth]{diff_CEE_ab_t_pl.png}
\caption{\small{ (A) and (B) denote the CMB temperature and polarization anisotropy spectra for $n_{B}=2.0$ , respectively, where the x-axis denotes the multipole, $\ell$. (C) and (D) denote the percentage difference in $C_{
\ell,TT}$ and $C_{\ell,EE}$, respectively estimated through \emph{CAMB} with respect to Planck 2018 temperature and polarization anisotropies with $\Delta C_{\ell}/C_{\ell}$ denoting $(C_{\ell,pl}-C_{\ell,NH or H})/C_{\ell,pl}~\times 100$ where $C_{\ell,pl}$ represents the Planck 2018 CMB anisotropy and $C_{\ell,NH or H}$ denotes that calculated from \emph{CAMB} for non-helical or helical magnetic fields. The Planck 2018 observations are indicated via the black solid line. All the plots include the combined effect of ambipolar diffusion and decaying turbulence with different strengths of magnetic field. The green, red, blue, and orange solid lines represent $B_{0}=4$nG, $B_{0}=3$nG, $B_{0}=2$nG, and $B_{0}=1$nG, respectively. The corresponding dashed lines represent helical magnetic fields.}}
\label{fig4}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width = 0.5\textwidth]{CTT_NH_H_ab_t_nn29_pl.png}
\includegraphics[width = 0.5\textwidth]{CEE_NH_H_t_nn29_pl.png}
\includegraphics[width = 0.5\textwidth]{diff_CTT_ab_t_nn29_pl.png}
\includegraphics[width = 0.5\textwidth]{diff_CEE_ab_t_nn29_pl.png}
\caption{\small{ (A) and (B) denote the CMB temperature and polarization anisotropy spectra for $n_{B}=-2.9$, respectively, where the x-axis denotes the multipole, $\ell$. (C) and (D) denote the percentage difference in $C_{
\ell,TT}$ and $C_{\ell,EE}$, respectively estimated through \emph{CAMB} with respect to Planck 2018 temperature and polarization anisotropies with $\Delta C_{\ell}/C_{\ell}$ denoting $(C_{\ell,pl}-C_{\ell,NH or H})/C_{\ell,pl}~\times 100$ where $C_{\ell,pl}$ represents the Planck 2018 CMB anisotropy and $C_{\ell,NH or H}$ denotes that calculated from \emph{CAMB} for non-helical or helical magnetic fields. The Planck 2018 observations are indicated via the black solid line. All the plots include the combined effect of ambipolar diffusion and decaying turbulence with different strengths of magnetic field. The green, red, blue, and orange solid lines represent $B_{0}=4$nG, $B_{0}=3$nG, $B_{0}=2$nG, and $B_{0}=1$nG, respectively. The corresponding dashed lines represent helical magnetic fields.}}
\label{fig5}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{diff_CTT_NH_H_ab_t_Bd_n2_nkd.png}
\includegraphics[scale=0.3]{diff_CEE_NH_H_ab_t_Bd_n2_nkd.png}
\includegraphics[scale=0.3]{diff_CTT_NH_H_ab_t_nn29_nkd_old.png}
\includegraphics[scale=0.3]{diff_CEE_NH_H_ab_t_nn29_nkd_old.png}
\caption{The upper two and the lower two panels show the percentage difference with respect to the Planck 2018 results for $n_{B}=2.0$ and $n_{B}=-2.9$, respectively, using the new damping scale. The solid and dashed lines represent non-helical and helical magnetic fields, respectively. The magnetic fields considered for the upper two panels are $0.4$nG, $0.3$nG, $0.2$nG, and $0.1$nG, while those for the lower two panels are $4$nG, $3$nG, $2$nG, and $1$nG.}
\label{fignew}
\end{figure}
\section{Conclusions} \label{s4}
In this paper, we have studied the post-recombination decay of maximally helical magnetic fields. After recombination, the ionization fraction drops to the order of $10^{-4}$, which enables the magnetic field to decay via ambipolar diffusion. Similarly, due to the decrease in viscosity at length scales smaller than the magnetic Jeans length, turbulence dominates and leads to the decay of the magnetic field. Both these processes combined lead to a modification of the temperature and ionization fraction evolution of the IGM. \par
In contrast with earlier studies on this subject, we have included the additional contribution of the helicity of the magnetic field. Various parity breaking magnetogenesis models predict helical magnetic fields, which have a larger coherence length and a slower decay rate compared to non-helical magnetic fields. However, their decay post-recombination and its subsequent implications for the CMB have not been considered before. After taking into account the helical component of the magnetic field, the ambipolar diffusion term and the turbulent decay term are modified.
By considering only the dissipation due to ambipolar diffusion, we find that for maximally helical magnetic fields, which have $n_{A}=n_{B}$, the values of the temperature and ionization fraction are smaller at certain redshifts than in the case of non-helical magnetic fields, as indicated in the upper panel of Fig.~\ref{fig1}. This also affects the CMB temperature and polarization anisotropies, which register a non-negligible change, as depicted in the upper panel of Fig.~\ref{fig2}. Similarly, we evaluated the effect of decaying turbulence and the combined effect of ambipolar diffusion and decaying turbulence on the CMB anisotropies, as depicted in the lower panel of Fig.~\ref{fig2} and in Fig.~\ref{fig3}, respectively. \par
We can infer from Fig.~\ref{fig4} and Fig.~\ref{fig5} that the presence of helicity relaxes the constraints on the magnetic field for $n_{B}=2.0$ and tightens them for $n_{B}=-2.9$. This is because ambipolar diffusion dominates the behaviour for the former and turbulent decay dominates the latter; that is, ambipolar diffusion dictates the constraints for higher spectral indices, while turbulent decay dictates the behaviour for lower spectral indices. The constraints discussed here are qualitative in nature and have not been derived using a sampler such as MCMC, in contrast to the analysis performed by Paoletti et al.~(2019)~\cite{chluba2019}, wherein constraints on non-helical magnetic fields were derived after performing MCMC. However, the overall trend and the qualitative constraints discussed here match their quantitative constraints. After including the antisymmetric part of the magnetic field in the computation, we infer from the percentage difference plots that there is a small change in the constraints. Evidence suggests the presence of helical magnetic fields in various astrophysical systems; hence, the study of their impact on existing observations is important. Further, this could pave the way for another independent constraint on cosmic magnetic fields.
\section*{Acknowledgements}
SJ and TRS acknowledge the facilities at IUCAA
Centre for Astronomy Research and Development
(ICARD), University of Delhi. RS would like to thank Professor Kandaswamy Subramanian for insightful discussions. The research of SJ is supported by INSPIRE Fellowship (IF160769), DST India. TRS acknowledges the project grant from SERB, Govt. of India (EMR/2016/002286).
\bibliographystyle{ws-ijmpd}
\section{Brief history of previous work}
\label{sect:history}
These questions were considered by Bertini, Geiser,
De Jonqui\`eres, Kantor, etc. over a century ago but continue to inspire
new work:
\begin{itemize}
\item{Manin \cite{manin1}, \cite{manin2} studied $G$-surfaces both in the arithmetic and geometric
context, focusing on the induced $G$-action on the geometric Picard group, and on
cohomological invariants of that lattice;}
\item{Iskovskikh \cite{IskMin} laid the groundwork for the $G$-birational
classification of surfaces and their linkings;
}
\item{Bayle, Beauville, Blanc, and de Fernex \cite{baylebeauville,BeaBla,deFer,BlaGGD,blancsubgroups}
classified actions of finite abelian $G$ on surfaces;}
\item{Dolgachev and Iskovskikh \cite{DolIsk} largely completed the surface case;}
\item{Bogomolov and Prokhorov \cite{BogPro,prokhorovII} considered the stable conjugacy problem
for the surface case using cohomological tools introduced by Manin;}
\item{Prokhorov, Cheltsov, Shramov, and collaborators proved numerous theorems for
threefolds -- both concerning specific groups, such as simple groups \cite{ProSimple,CheShr},
as well as general structural properties \cite{PSJordan}.}
\end{itemize}
Much of this work fits into the Minimal Model Program,
using distinguished models to reduce the classification problem
to an analysis of automorphisms of a restricted class of
objects, e.g., del Pezzo surfaces.
With a few exceptions -- the application of the cohomology on the
N\'eron-Severi group, by Manin and Bogomolov--Prokhorov, and the `normalized
fixed curve with action (NFCA)' invariant of Blanc \cite{blancsubgroups}
-- invariants play a limited role.
\
A fundamental observation, recorded in \cite[App.\ A]{reichsteinyoussinessential},
is that the presence of a
point fixed by a given abelian subgroup $H$ of $G$ is a birational invariant of
a smooth projective variety $X$ with generically free $G$-action.
Furthermore, Reichstein and Youssin showed that
the {\em determinant} of the action of abelian stabilizers on the
normal bundle to the locus with the given stabilizer, up to sign, is also a birational invariant
\cite{reichsteinyoussininvariant}. However, for finite groups this is only
useful when the minimal number of
generators of the abelian group equals the dimension of the variety
\cite[Th.~1.1]{reichsteinyoussinessential}. For cyclic groups, it is
applicable only for curves.
The invariants defined in \cite{kontsevichpestuntschinkel,Bbar,kreschtschinkel}
record all eigenvalues for the action
of abelian stabilizers, as well as additional information about the action on the
components of the fixed locus, and on their
function fields. These collections of
data are turned into a $G$-birational invariant, via explicit {\em blowup relations}. The groups
receiving these invariants, the {\em equivariant Burnside groups}, have an elaborate algebraic structure.
And they led to new results in birational geometry, some of which will be discussed below.
\section{Equivariant birational types}
\label{sect:first}
Here we restrict to the situation where $G$ is {\em abelian}
and consider only fixed points of
$X \righttoleftarrow G$. In general, there are no such
fixed points and we obtain no information. However, large classes of actions
do have fixed points, e.g., if $G$ is cyclic and
$h^i(X,{\mathcal O}_X)=0$, for each $i>0$, then the Atiyah-Bott holomorphic
Lefschetz formula \cite[Cor.~4.13]{AtiBot} yields a fixed point.
The vanishing assumption holds for rational and rationally connected $X$.
If $G$ is an abelian $p$-group ($p$ prime) acting on $X$ without
fixed points
then every Chern number of $X$ is divisible by $p$ \cite[Cor.~1.1.2]{Hau}.
To define an invariant of $X \righttoleftarrow G$, we consider collections of weights for the action of $G$
in the tangent bundle at $G$-fixed points in $X$. To formalize this,
let
$$
A=G^\vee = \Hom(G,{\mathbb G}_m)
$$
be the character group of $G$, and $n=\dim X$. Let
$$
{\mathcal S}_n(G)
$$
be the free abelian group on
symbols
$$
[a_1,\ldots,a_n], \quad a_j\in A, \quad \forall j,
$$
subject to the conditions:
\
\begin{itemize}
\item[({\bf G})] {\bf Generation:}
$\{a_1,\ldots,a_n\}$ generate $A$, i.e.,
$$
\sum_{i=1}^n {\mathbb Z} a_i = A,
$$
thus, $n$ is at least the minimal number of generators of $G$;
\
\item[({\bf S})] {\bf Symmetry:} for each permutation $\sigma \in \mathfrak S_n$ we have
$$
[a_{\sigma(1)},\ldots,a_{\sigma(n)}] = [a_1,\ldots,a_n].
$$
\end{itemize}
\
\noindent
Let
\begin{equation}
\label{eqn:Bn}
{\mathcal S}_n(G) \to {\mathcal B}_n(G)
\end{equation}
be the quotient, by relations, for all $2\le r \le n$:
\
\begin{itemize}
\item[($\mathrm{\bf B}_r$)] {\bf Blow-up:}
for all
$
[a_1,\ldots,a_r,b_1,\ldots,b_{n-r}]\in {\mathcal S}_n(G)
$
one has
\begin{align}
\label{keyrelation0}
[a_1,\ldots,a_r,b_1,\ldots,b_{n-r}] &= \nonumber \\
\sum_{1\le i \le r, \, a_i\neq a_{i'} \text{ for }i'<i}
[a_1-a_i,\ldots,a_i,\ldots,a_r-a_i,& b_1,\ldots,b_{n-r}].
\end{align}
\end{itemize}
\
\noindent
These relations reflect the transformations of weights in tangent spaces to components of the fixed locus
upon blowing up along a $G$-stable stratum.
\
\noindent
From the definition, we have
$${\mathcal B}_1(G) = \begin{cases} {\mathbb Z}^{\phi(N)}& \text{if } G \text{ is cyclic of order $N$, } \\
0 & \text{otherwise.}
\end{cases}
$$
\begin{prop} \cite[Prop.~8.1]{kreschtschinkel}
\label{prop:81}
For $n\ge 2$, all relations $\mathrm{({\bf B}_{\it r})}$ are implied by relation $\mathrm{({\bf B}_2)}$.
\end{prop}
\noindent
Thus, ${\mathcal B}_n(G)$ is obtained by imposing the relation:
\
\begin{itemize}
\item[($\mathrm{\bf B}$)] {\bf Blow-up:}
for all
$
[a_1,a_2,b_1,\ldots,b_{n-2}]\in {\mathcal S}_n(G)
$
one has
$$
[a_1,a_2,b_1,\ldots,b_{n-2}] =
$$
\begin{align}
\label{keyrelation}
& [a_1,a_2-a_1,b_1,\ldots,b_{n-2}] + [a_1-a_2,a_2,b_1,\ldots,b_{n-2}], & a_1\neq a_2, \\
& [0,a_1,b_1,\ldots,b_{n-2}], & a_1=a_2. \nonumber
\end{align}
\end{itemize}
\
\begin{proof}[Proof of Proposition~\ref{prop:81}]
We prove the result by induction on $r$.
We first treat the case that $a_1,a_2,\dots,a_r$ are
pairwise distinct; we drop the entries $b_1, \dots, b_{n-r}$ from the notation, as they do not
take part in relations.
Suppose $r\ge 3$.
Then:
\begin{align*}
[a_1,\dots,a_r]&=\text{(by $(\mathbf{B}_{r-1})$)}\,
[a_1,a_2-a_1,\dots,a_{r-1}-a_1,a_r]+\cdots \\ & \qquad\qquad\qquad\quad+
[a_1-a_{r-1},\dots,a_{r-2}-a_{r-1},a_{r-1},a_r] \\
&=\text{(by $(\mathbf{S})$)}\,\,\,\,\,\,\,\,\,\,
[a_1,a_r,a_2-a_1,\dots,a_{r-1}-a_1]+\cdots \\ & \qquad\qquad\qquad\quad+
[a_{r-1},a_r,a_1-a_{r-1},\dots,a_{r-2}-a_{r-1}] \\
&=\text{(by $(\mathbf{B}_2)+(\mathbf{S})$, applied to each term)} \\
&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\![a_1,a_2-a_1,\dots,a_r-a_1]
+[a_1-a_r,a_2-a_1,\dots,a_{r-1}-a_1,a_r]\\
&\qquad+\,\,\cdots\,\,+\\
&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\![a_1-a_{r-1},\dots,a_{r-2}-a_{r-1},a_{r-1},a_r-a_{r-1}]\\ &\qquad\qquad+[a_1-a_{r-1},\dots,a_{r-2}-a_{r-1},a_{r-1}-a_r,a_r].
\end{align*}
The right-hand terms, taken together, are equal by $(\mathbf{B}_{r-1})$ to
\[ [a_1-a_r,\dots,a_{r-1}-a_r,a_r], \]
which together with the left-hand terms gives us what we want.
Next we treat the more general case $a_r\notin \{a_1,\dots,a_{r-1}\}$.
In that case, for every $i$ with $a_i\in \{a_1,\dots,a_{i-1}\}$ we
omit the $i$th term on the right-hand side in the initial application
of $(\mathbf{B}_{r-1})$ and $(\mathbf{S})$ and omit the $i$th line
after the applications of $(\mathbf{B}_2)$ and $(\mathbf{S})$.
We conclude as before.
Finally we treat the case $a_r\in \{a_1,\dots,a_{r-1}\}$.
We start in the same way, by applying $(\mathbf{B}_{r-1})$ and $(\mathbf{S})$
as above.
Now, when we apply $(\mathbf{B}_2)$, we have to pay special attention to
terms with $a_i=a_r$: in the corresponding line we should leave the
left-hand term but omit the right-hand term.
Each of the remaining right-hand terms vanishes by
an application of $(\mathbf{B}_2)$ to a symbol of the form $[0,\dots]$,
i.e., the vanishing of any symbol with two nonzero weights summing to $0$.
The left-hand terms give us directly what we want.
\end{proof}
\subsection{Antisymmetry}
\label{subsect:anti}
Let
$$
{\mathcal B}_{n}(G) \to {\mathcal B}^{-}_{n}(G)
$$
be the projection to the quotient by the additional relation
\begin{equation}
\label{eqn:anti}
[-a_1,a_2,\ldots, a_n]=-[a_1,a_2,\ldots, a_n].
\end{equation}
In particular, symbols of the type $[0, a_2, \ldots, a_n]$
are in the kernel of the projection. We write
$$
[a_1,\ldots, a_n]^-
$$
for the image of a standard generator $[a_1,\ldots, a_n]$.
\
\subsection{Multiplication and co-multiplication}
\label{subsect:mult}
Consider a short exact sequence
$$
0\rightarrow G'\rightarrow G\rightarrow G''\rightarrow 0
$$
and its dual
$$
0\rightarrow A''\rightarrow A\rightarrow A'\rightarrow 0.
$$
The {\em multiplication}
$$
\nabla: {\mathcal B}_{n'}(G')\otimes {\mathcal B}_{n''}(G'')\rightarrow {\mathcal B}_{n}(G), \quad n'+n''=n,
$$
is defined by
$$
[a_1',\ldots, a_{n'}']\otimes [a_1'',\ldots, a_{n''}''] \mapsto \sum \,\, [a_1,\ldots, a_{n'}, a_1'', \ldots, a_{n''}''],
$$
summing over all lifts $a_i \in A$ of $a_i'\in A'$. It descends to a similar map on quotients by
the relation \eqref{eqn:anti}.
The {\em co-multiplication} is defined only on ${\mathcal B}^{-}_{n}(G)$:
$$
\Delta^-: {\mathcal B}^{-}_{n}(G) \rightarrow {\mathcal B}_{n'}^-(G')\otimes {\mathcal B}_{n''}^-(G''), \quad n'+n''=n.
$$
On generators it takes the form
$$
[a_1,\ldots, a_n]^-\mapsto \sum \,\, [a_{I'} \, \mathrm{ mod }\,\, A'']^-\otimes [a_{I''}]^-,
$$
where the sum is over subdivisions $\{1,\ldots, n\} = I'\sqcup I''$ of cardinality $n'$, respectively $n''$,
such that
\begin{itemize}
\item $a_j\in A''$, for all $j\in I''$
\item $a_j, j\in I''$, generate $A''$.
\end{itemize}
The correctness of this definition is proved as in \cite[Prop.~11]{kontsevichpestuntschinkel}. Here are the main steps: By \cite[Prop.~8.1]{kreschtschinkel} (= Proposition~\ref{prop:81}), it suffices to check 2-term relations
$\mathrm{({\bf B}_2)}$, i.e., the image of the relation
$$
[a_1,a_2, \ldots ] ^-= [a_1-a_2, a_2,\ldots]^- + [a_1,a_2-a_1, \ldots]^-
$$
after applying co-multiplication.
The only interesting part is when the first two arguments are distributed
over different factors in the definition
of co-multiplication.
The relation is the same as that for the $\mathcal M_n(G)$-groups, introduced and studied in \cite{kontsevichpestuntschinkel},
unless $a_1=a_2$ -- recall that
$$
[a,a, \ldots]= [a,0,\ldots] \in {\mathcal B}_n(G).
$$
Since
$$
0 = [a,-a,\ldots]^- = - [a,a,\ldots]^-
$$
we have
$$
[a,a,\ldots]^-= [a,0,\ldots]^- = 0.
$$
Now it suffices to repeat the argument in
\cite[Prop.~11]{kontsevichpestuntschinkel}. Using the terminology of that paper, there are four cases,
of which only (1) and (4) are relevant. In both cases, all terms are zero.
\
\subsection{Birational invariant}
We return to the definition of an invariant for $X \righttoleftarrow G$,
$\dim(X)=n$, and $G$ abelian.
Consider irreducible components of the fixed locus
$$
X^G=\coprod_{\alpha} F_{\alpha},
$$
and write
$$
\beta_{\alpha}=[a_{1,\alpha},\ldots,a_{n,\alpha}]
$$
for the unordered $n$-tuple of
weights for the $G$-action on the tangent space $\mathcal T_{x_{\alpha}}X$, for some
$x_{\alpha}\in F_{\alpha}$ -- this does not depend on the choice of $x_{\alpha}$.
The number of zero weights is $\dim(F_{\alpha})$.
We express
\begin{equation}
\label{eqn:def-inv}\beta(X \righttoleftarrow G )=\sum_{\alpha} \, \beta_{\alpha}
\in {\mathcal B}_n(G),
\end{equation}
and write
$$
\beta^-(X\righttoleftarrow G) \in {\mathcal B}^{-}_{n}(G),
$$
for the image under the projection.
\begin{theo} \cite[Th.~3]{kontsevichpestuntschinkel}
\label{thm:inv}
The class
$$
\beta(X \righttoleftarrow G ) \in {\mathcal B}_n(G)
$$
is a $G$-birational invariant.
\end{theo}
The proof relies on $G$-equivariant Weak Factorization, connecting $G$-birational varieties
via blow-ups and blow-downs of smooth $G$-stable subvarieties.
\begin{prop}
\label{prop:cn}
Consider a linear, generically free, action of a cyclic group $C_N$, of order $N$, on ${\mathbb P}^n$, for $n\ge 2$.
Then
$$
\beta^-({\mathbb P}^n\righttoleftarrow C_N)=0 \in {\mathcal B}^{-}_{n}(C_N).
$$
\end{prop}
\begin{proof}
We know that all such actions are equivariantly birational, see, e.g., \cite[Th. 7.1]{reichsteinyoussininvariant}.
Thus it suffices to consider one such action. Take an action with weights $(1,0,\ldots, 0)$. It fixes a hyperplane and a point, and the corresponding class is
$$
[1,0,\ldots, 0] + [-1, -1, \ldots, -1]=[1,0,\ldots, 0] + [-1,0,\ldots, 0],
$$
here, we repeatedly used relation \eqref{keyrelation} to transform the second term.
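Explicitly, the second term is transformed by repeatedly applying the $a_1=a_2$ case of \eqref{keyrelation}, which replaces one of two equal entries by $0$; reordering entries with $(\mathbf{S})$ after each step gives
$$
[-1,-1,\ldots,-1]=[0,-1,\ldots,-1]=\cdots=[-1,0,\ldots,0].
$$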
\end{proof}
\begin{rema} \label{rema:lineartorsion}
We shall show in Section~\ref{subsect:torsion} that
$[1,0]+[-1,0]$ is torsion in ${\mathcal B}_2(C_N)$.
See \cite[Prop.~7 and Lem.~32]{kontsevichpestuntschinkel} for the case
of prime $N$.
The element is nontrivial when
$$
N=7, 9, 10, 11, 13, 14, 15, 17, \ldots.
$$
\end{rema}
\section{Computation of invariants on surfaces}
\label{sect:surf}
\subsection{Sample computations of ${\mathcal B}_2(C_p)$}
This group is generated by symbols $[a_1,a_2]$, where
$$
a_1,a_2 \in \Hom(C_p,{\mathbb G}_m)\cong {\mathbb Z}/p{\mathbb Z}
$$
are not both trivial.
To simplify notation, we write $a_i=0,1,\ldots,p-1$.
Note that
$$
[a,a]=[0,a]\quad \text{ and } \quad [a_1,a_2]=[a_2,a_1]
$$
so that the
symbols
$$[a_1,a_2]: \quad \quad 0<a_1\le a_2 < p$$
generate ${\mathcal B}_2(C_p)$.
The other relation -- for $a_1<a_2$ -- takes the form
$$
[a_1,a_2]=[a_1,a_2-a_1]+[p+a_1-a_2,a_2].
$$
\
\begin{itemize}
\item[($p=2$)]
We obtain relations
$$[1,1]=[0,1], \quad [0,1]=[0,1]+[1,1]$$
forcing ${\mathcal B}_2(C_2)=0$.
\end{itemize}
\
\begin{itemize}
\item[($p=3$)]
We obtain relations
\begin{align*}
[0,1] &= [0,1] + [2,1] = [0,1] + [1,2] \\
[0,2] &= [0,2] + [1,2] \\
[1,1] &= [0,1] \\
[1,2] &= [1,1] + [2,2] \\
[2,2] &= [0,2]
\end{align*}
forcing
$$
[1,2]=0\quad \text{ and } \quad [0,1]=[1,1]=-[2,2]=-[0,2].
$$
We conclude that ${\mathcal B}_2(C_3)\cong {\mathbb Z}$.
\end{itemize}
For instance, consider the standard diagonal action on ${\mathbb P}^2$
$$ (x:y:z) \mapsto (x:\zeta_3 y:\zeta_3^2 z)$$
with the convention that $\zeta_N$ is a primitive $N$th root of unity.
This has invariant
$$\beta({\mathbb P}^2 \righttoleftarrow C_3) = [1,2]+[1,2]+[1,2]=0.$$
On the other hand, the action of $C_3$ on the cubic surface
$$X=\{w^3=f_3(x,y,z)\}, \quad (w:x:y:z)\mapsto (\zeta_3 w:x:y:z)$$
fixes the cubic curve $\{ w=0\}$ and we find that
$$\beta(X\righttoleftarrow C_3)=[2,0]\neq 0,$$
thus $X$ is not $G$-equivariantly birational to ${\mathbb P}^2$, with linear action.
Note that $\beta$ does not allow us to distinguish among these cubic surfaces.
Nor does it distinguish the cubic surfaces from the
degree one del Pezzo surfaces with $C_3$-action
$$Y=\{w^2=z^3 + f_6(x,y) \} \subset {\mathbb P}(3,1,1,2),
$$
given by
$$
(w:x:y:z)\mapsto (w:x:y:\zeta_3 z).
$$
We shall see in Section~\ref{sect:refined} that
taking into account the fixed locus
$[F_{\alpha}]$ gives a complete invariant.
\
\begin{itemize}
\item[($p=5$)]
We have relations
\begin{align*}
[0,1] &= [0,1] + [4,1] \\
[0,2] &= [0,2] + [3,2] \\
[0,3] &= [0,3] + [2,3] \\
[0,4] &= [0,4] + [1,4]
\end{align*}
forcing $[1,4]=[2,3]=0$. We also have
\begin{align*}
[1,2] &= [1,1] + [4,2] \\
[1,3] &= [1,2] + [3,3] \\
[1,4] &= [1,3] + [2,4] \\
[2,3] &= [2,1] + [4,3] \\
[2,4] &= [2,2] + [3,4] \\
[3,4] &= [3,1] + [4,4]
\end{align*}
which shows that ${\mathcal B}_2(C_5)$ is freely generated by
$$
\beta_1=[1,1]\quad \text{ and }\quad \beta_2=[1,2]
$$
with
\begin{align*}
[1,3]=\beta_1-\beta_2, \quad [2,2]=2\beta_2-\beta_1, \quad [2,4]=\beta_2-\beta_1,\\
[3,3]=\beta_1-2\beta_2, \quad [3,4]=-\beta_2, \quad [4,4]=-\beta_1.
\end{align*}
\end{itemize}
For example, $\overline{M}_{0,5}$, a del Pezzo surface of degree 5, has a natural action of $C_5$ by
permuting the coordinates with fixed points given by the roots
of $z^5-1$ and $z^5+1$. We compute $\beta(\overline{M}_{0,5} \righttoleftarrow C_5)$:
$$
\beta_1 +\beta_2+(\beta_1-\beta_2)+(2\beta_2-\beta_1)+
(\beta_2-\beta_1)+(\beta_1-2\beta_2)+(-\beta_2)+(-\beta_1) =0.$$
Indeed, this action is in fact conjugate \cite{BeaBla} to a linear action on
${\mathbb P}^2$
$$ (x:y:z) \mapsto (x:\zeta_5 y:\zeta_5^4 z).$$
However, there is also a nontrivial action of $C_5$ on a del Pezzo surface of degree 1:
$$X=\{w^2=z^3+\lambda x^4z + x(\mu x^5 + y^5) \} \subset {\mathbb P}(3,1,1,2),
$$
$$
(w:x:y:z) \mapsto (w:x:\zeta_5 y:z),
$$
with fixed locus an elliptic curve and invariant
$\beta(X \righttoleftarrow C_5)=[4,0].$
\
Let us compute an example of nonprime order. The group ${\mathcal B}_2(C_4)$ has
generators
$$[0,1],[0,3],[1,1],[1,2],[1,3],[2,3],[3,3]$$
and relations
\begin{align*}
[0,1] &= [0,1]+[3,1] \\
[0,3] &= [0,3]+[1,3] \\
[1,1] &= [0,1] \\
[1,2] &= [1,1] + [3,2] \\
[1,3] &= [1,2] + [2,3] \\
[2,3] &= [2,1] + [3,3] \\
[3,3] &= [0,3]
\end{align*}
whence
$$[1,3]=0, \quad \beta_1:=[1,2]=-[2,3], \quad [0,3]=[3,3]=2[2,3]=-2\beta_1,
$$
$$
[0,1]=[1,1]=2[1,2]=2\beta_1
$$
and ${\mathcal B}_2(C_4)\cong {\mathbb Z}$.
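The eliminations above can also be carried out mechanically: over ${\mathbb Q}$, the dimension of ${\mathcal B}_2(C_N)\otimes {\mathbb Q}$ equals the number of generating symbols minus the rank of the matrix of relations $(\mathbf{B})$. The following sketch (plain Python with {\tt sympy}, not the {\tt Sage} code referred to below; the helper name is ours) reproduces the computations for $N=2,3,4,5$.
\begin{verbatim}
# Sketch: dim B_2(C_N) (x) Q = #symbols - rank(relations (B)).
from math import gcd
from sympy import Matrix

def relations_B2(N):
    def norm(a, b):                     # symmetry (S): unordered pair mod N
        a, b = a % N, b % N
        return (a, b) if a <= b else (b, a)
    # generation (G): a_1, a_2 generate Z/N, i.e. gcd(a_1, a_2, N) = 1
    syms = [(a, b) for a in range(N) for b in range(a, N)
            if (a, b) != (0, 0) and gcd(gcd(a, b), N) == 1]
    index = {s: i for i, s in enumerate(syms)}
    rows = []
    for (a, b) in syms:                 # one relation (B) per generator
        row = [0] * len(syms)
        row[index[(a, b)]] += 1
        if a == b:                      # [a,a] = [0,a]
            row[index[norm(0, a)]] -= 1
        else:                           # [a,b] = [a,b-a] + [a-b,b]
            row[index[norm(a, b - a)]] -= 1
            row[index[norm(a - b, b)]] -= 1
        rows.append(row)
    return syms, Matrix(rows)

for N in (2, 3, 4, 5):
    syms, R = relations_B2(N)
    print(N, len(syms) - R.rank())      # expect 0, 1, 1, 2
\end{verbatim}
The output agrees with the hand computations above.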
\
Consider the del Pezzo surface of degree 1, given by
$$
X=\{ w^2 = z^3 + zL_2(x^2,y^2) + xy M_2(x^2,y^2) \} \subset {\mathbb P}(3,1,1,2),
$$
where $L_2$ and $M_2$ are homogeneous of degree two. It admits a $C_4$-action
$$
(w:x:y:z) \mapsto (iw:x:-y:-z)
$$
with a unique fixed point $(0:1:0:0)$. The weights on the tangent bundle
are $[2,3]$ whence
$$
\beta(X\righttoleftarrow C_4)\neq 0.
$$
Observe that $X^{C_2}$ is a curve of genus four.
\
See \cite[\S 10.1]{blancthesis} for
a classification of automorphisms of large finite order $N$
on del Pezzo surfaces:
\begin{enumerate}
\item{The surface
$$X=\{w^2=z^3+x(x^5+y^5) \} \subset {\mathbb P}(3,1,1,2)$$
admits an automorphism of order $30$
$$(w:x:y:z) \mapsto (-w:x:\zeta_5 y: \zeta_3 z)$$
with fixed point $(0:0:1:0)$ and with weights $[3,2]$,
thus
$$
\beta(X \righttoleftarrow C_{30})=[3,2]\neq 0 \in {\mathcal B}_2(C_{30})\otimes {\mathbb Q},
$$
by a computation in {\tt Sage} \cite{sagemath}. This implies
(see Remark~\ref{rema:lineartorsion}) that
this action is not conjugate to a linear action.
Note that $\dim {\mathcal B}_2(C_{30})\otimes {\mathbb Q} = 33.$
}
\item{The surface
$$X=\{w^2=z^3+xy(x^4+y^4) \} \subset {\mathbb P}(3,1,1,2)$$
admits an automorphism of order $24$
$$(w:x:y:z)\mapsto (\zeta_8w : x : iy : -i\zeta_3 z).$$
The fixed point is $(0:1:0:0)$ with symbol $[21,22]$.
Computing via {\tt Sage} we find
$$\beta(X\righttoleftarrow C_{24}) \neq 0 \in {\mathcal B}_2(C_{24})\otimes {\mathbb Q}.$$
Here $\dim {\mathcal B}_2(C_{24})\otimes {\mathbb Q} = 23.$
}
\end{enumerate}
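We do not reproduce the {\tt Sage} computations here, but the type of check involved can be sketched as follows: the class of a symbol vanishes in ${\mathcal B}_2(C_N)\otimes {\mathbb Q}$ if and only if the corresponding basis vector lies in the row span of the relation matrix, so nonvanishing amounts to a jump in rank. The sketch below reuses the relation-matrix construction from the previous sketch and relies on a floating-point rank computation, so it is a numerical check rather than an exact verification.
\begin{verbatim}
# Sketch: is the class of [3,2] nonzero in B_2(C_30) (x) Q ?
from math import gcd
import numpy as np

def relations_B2(N):
    def norm(a, b):
        a, b = a % N, b % N
        return (a, b) if a <= b else (b, a)
    syms = [(a, b) for a in range(N) for b in range(a, N)
            if (a, b) != (0, 0) and gcd(gcd(a, b), N) == 1]
    index = {s: i for i, s in enumerate(syms)}
    rows = []
    for (a, b) in syms:
        row = [0.0] * len(syms)
        row[index[(a, b)]] += 1
        if a == b:
            row[index[norm(0, a)]] -= 1
        else:
            row[index[norm(a, b - a)]] -= 1
            row[index[norm(a - b, b)]] -= 1
        rows.append(row)
    return syms, index, np.array(rows)

N = 30
syms, index, R = relations_B2(N)
e = np.zeros(len(syms)); e[index[(2, 3)]] = 1.0   # the symbol [3,2] = [2,3]
rank_R = np.linalg.matrix_rank(R)
print(len(syms) - rank_R)              # dim B_2(C_30) (x) Q; the text quotes 33
print(np.linalg.matrix_rank(np.vstack([R, e])) > rank_R)
                                       # True iff the class of [3,2] is nonzero
\end{verbatim}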
\
There are good reasons why we obtain nonvanishing invariants only
when a curve is fixed: If $G=C_N$ is a cyclic group acting generically
freely
on a complex projective smooth rational surface $X\righttoleftarrow G$
then the following are equivalent \cite[Th.~4]{blancthesis}:
\begin{itemize}
\item{No $g \neq 1 \in G$ fixes a curve in $X$ of positive genus.}
\item{The subgroup $G$ is conjugate to a subgroup of $\Aut({\mathbb P}^2)$.}
\end{itemize}
Even more is true: if $G$ contains an element fixing a curve of positive genus then
$X$ is not even stably $G$-birational to projective space with a linear $G$-action; indeed, in this case
$\mathrm H^1(G,\Pic(X))\neq 0$ \cite{BogPro}.
\subsection{Examples for noncyclic groups}
If $G$ is a noncyclic abelian group then ${\mathcal B}_1(G)=0$ by
definition but there are actions on curves:
\begin{exam} \label{exam:Klein1}
Consider the action of $C_2\times C_2$ on ${\mathbb P}^1$ by
$$g_1:=\left( \begin{matrix} 0 & -1 \\
1 & 0 \end{matrix} \right), \quad
g_2:=\left( \begin{matrix} 1 & 0 \\
0 & -1 \end{matrix} \right),$$
with the elements
$$g_1^2=g_1g_2g_1^{-1}g_2^{-1} = \left( \begin{matrix} -1 & 0 \\
0 & -1 \end{matrix} \right)$$
acting trivially. Thus we obtain
$${\mathbb P}^1 \righttoleftarrow (C_2\times C_2).$$
The group has no fixed points whence
$\beta({\mathbb P}^1 \righttoleftarrow (C_2\times C_2))=0.$
The cyclic subgroups do have fixed points
$$({\mathbb P}^1)^{\left<g_1\right>}=\{ (1:\pm i) \},
({\mathbb P}^1)^{\left<g_2\right>}=\{ (1:0),(0:1) \},
({\mathbb P}^1)^{\left<g_1g_2\right>}=\{ (1:\pm 1) \}.$$
\end{exam}
We return to this in Example~\ref{exam:Klein2}.
In Section~\ref{subsect:burndef}, we will discuss how to incorporate information from {\em all} points with nontrivial stabilizer.
We compute ${\mathcal B}_2(C_2\times C_2)$. Writing
$$(C_2 \times C_2)^{\vee} = \{0,\chi_1,\chi_2,\chi_1+\chi_2\},$$
the only admissible symbols are
$$[\chi_1,\chi_2],[\chi_1,\chi_1+\chi_2],[\chi_1+\chi_2,\chi_2]$$
with relations:
\begin{align*}
[\chi_1,\chi_2] &= [\chi_1,\chi_1+ \chi_2] + [\chi_1+\chi_2,\chi_2] \\
[\chi_1,\chi_1+\chi_2] &= [\chi_1,\chi_2] + [\chi_1+\chi_2,\chi_2] \\
[\chi_1+\chi_2,\chi_2] &= [\chi_1,\chi_1+\chi_2] + [\chi_1,\chi_2].
\end{align*}
Thus we obtain the Klein four group again
$${\mathcal B}_2(C_2\times C_2)\cong C_2 \times C_2.$$
\
The classification of finite abelian noncyclic
actions on rational surfaces
may be found in \cite[\S 10.2]{blancthesis}. Examples of actions of
$C_2\times C_2$ on rational surfaces include:
\
\noindent
On ${\mathbb P}^1 \times {\mathbb P}^1$:
\begin{itemize}
\item[(1)]
$(x,y) \mapsto (\pm x^{\pm 1},y)$, without fixed points;
\item[(2)]
$(x,y) \mapsto (\pm x, \pm y)$,
with fixed points $(0,0),(0,\infty),(\infty,0),(\infty,\infty)$,
thus
$$
\beta({\mathbb P}^1 \times {\mathbb P}^1 \righttoleftarrow C_2 \times C_2)=4[\chi_1,\chi_2]=0;
$$
\item[(3)]
the diagonal action
$$(x,y) \mapsto (-x,-y), (x,y)\mapsto(x^{-1},y^{-1})$$
has no fixed points so the symbol sum is empty;
\end{itemize}
\
\noindent
On conic fibrations over ${\mathbb P}^1$:
\begin{itemize}
\item[(4)]
$(x_1:x_2)\times (y_1:y_2) \mapsto$
$$
\begin{array}{rl}
& (x_1:-x_2)\times (y_1:y_2), \quad \text{ respectively, } \\
& (x_1:x_2) \times (y_2(x_1-bx_2)(x_1+bx_2):y_1(x_1-ax_2)(x_1+ax_2),
\end{array}
$$
which also has four fixed points
with the same symbol;
\end{itemize}
\
\noindent
On a degree two del Pezzo surface:
\begin{itemize}
\item[(5)]
$
(w:x:y:z) \mapsto (\pm w: x: y: \pm z)
$ on
$$
\{w^2=L_4(x,y)+z^2L_2(x,y)+z^4 \},
$$
with the involutions fixing a genus three curve and an elliptic curve
meeting in four points whence $\beta(X\righttoleftarrow C_2\times C_2)=0$;
\end{itemize}
\
\noindent
On a degree one del Pezzo surface:
\begin{itemize}
\item[(6)]
$(w:x:y:z) \mapsto (\pm w: x: \pm y: z)$
on
$$
\{w^2=z^3+zL_2(x^2,y^2)+L_3(x^2,y^2)\},
$$
with the involutions fixing a genus four curve and a genus two curve
meeting in six points whence $\beta(X\righttoleftarrow C_2\times C_2)=0$.
\end{itemize}
None of these
actions are distinguished by ${\mathcal B}_2(C_2\times C_2)$.
Case~(1) is stably equivalent to the action on ${\mathbb P}^1$ described
above. The second action is equivalent to a linear action on ${\mathbb P}^2$
-- project from one of the fixed points.
We return to these examples in
Section~\ref{subsect:examples}.
\subsection{Linear actions yield torsion classes}
\label{subsect:torsion}
Let $C_N$ act linearly and generically freely on ${\mathbb P}^n$.
We saw in Proposition~\ref{prop:cn} that
$$\beta({\mathbb P}^n \righttoleftarrow C_N) = [a,0,\ldots,0]+[-a,0,\ldots,0]$$
for some $a$ relatively prime to $N$.
Remark~\ref{rema:lineartorsion} pointed out this is torsion for $n\ge 2$;
we offer a proof now:
\begin{prop}
For $N\in {\mathbb N}$, an $a$ with $\gcd(a,N)=1$, and $n\ge 2$ the element
$$[a,0, \ldots,0]+[-a,0,\ldots,0] \in {\mathcal B}_n(C_N)$$
is torsion.
\end{prop}
\begin{proof}
It suffices to consider $n=2$; the argument we present goes through
without changes for $n>2$.
We will work in ${\mathcal B}_2(C_N)\otimes {\mathbb Q}$. We use
generators for this
space arising from the alternative symbol formalism ${\mathcal M}_2(C_N)$ introduced
in \cite{kontsevichpestuntschinkel} with the property
that \cite[Prop.~7]{kontsevichpestuntschinkel}
$$
{\mathcal B}_2(C_N)\otimes {\mathbb Q} = {\mathcal M}_2(C_N)\otimes {\mathbb Q}.
$$
For $a,b \in \Hom(C_N,{\mathbb G}_m)$ generating the group we set
$$\left<a,b\right>= \begin{cases} [a,b] & \text{ if } a, b \neq 0 \\
\frac{1}{2}[a,0] & \text{ if } a\neq 0, b=0.
\end{cases}
$$
The advantage of these generators is that the relations are uniformly
$$\left<a,b\right> = \left<a,b-a\right> + \left<a-b,b\right> \quad (\mathbf{B}),$$
even when $a=b$.
We follow the proof of \cite[Th.~14]{kontsevichpestuntschinkel}.
For all $a,b$ with $\gcd(a,b,N)=1$ we write
$$
\delta(a,b):=\left<a,b\right> + \left<-a,b\right> + \left<a,-b\right> + \left<-a,-b\right>.
$$
We claim this is zero in ${\mathcal B}_2(C_N)\otimes {\mathbb Q}$.
First, we check that $\delta(a,b)$ is invariant under
$\SL_2({\mathbb Z}/N{\mathbb Z})$.
This has generators
$$
\left(\begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix} \right), \quad
\left(\begin{matrix} 1 & -1 \\ 0 & 1 \end{matrix} \right),
$$
and $\delta(a,b)=\delta(-b,a)$ by the symmetry of the underlying symbols.
We also have
\begin{align*}
\delta(a,b-a)=& \left<a,b-a\right> + \left<-a,b-a\right> +
\left<a,a-b\right> + \left<-a,a-b\right> \\
& \text{ applying $\mathbf{B}$ to second and third terms above } \\
=& \left<a,b-a\right> + \left<-a,b\right> + \left<-b,b-a\right> \\
& +\left<a,-b\right> + \left<b,a-b\right> + \left<-a,a-b\right> \\
& \text{ applying $\mathbf{B}$ to get first and fourth terms below} \\
=& \left<a,b\right> + \left<-a,b\right> + \left<a,-b\right> + \left<-a,-b\right> \\
=& \delta(a,b).
\end{align*}
Average $\delta(a,b)$ over
all pairs $a,b$ with $\gcd(a,b,N)=1$ to obtain
$$S:=\sum_{a,b} \delta(a,b) = 4\sum_{a,b} \left<a,b\right>.$$
Applying the blowup
relation ($\mathbf{B}$) to all terms one finds
$$S=2S,$$
which implies that $S=0 \in {\mathcal B}_2(C_N)\otimes {\mathbb Q}$.
We may regard $\delta(a,b)$ and $S$ as elements of ${\mathcal B}_2(C_N)$.
It follows that $\delta(a,b)$ is torsion in ${\mathcal B}_2(C_N)$,
annihilated by the number of summands in $S$.
Substituting $b=0$, we obtain that
$$
\delta(a,0)=[a,0]+[-a,0]=0 \in {\mathcal B}_2(C_N)\otimes {\mathbb Q}.
$$
\end{proof}
The invariance of $\delta(a,0)$ shows that $[a,0]+[-a,0]$
is independent of the
choice of $a$ relatively prime to $N$.
\subsection{Algebraic structure in dimension 2}
\label{subsect:str-2}
For reference, we tabulate
$$
\dim {\mathcal B}_2(G)\otimes {\mathbb Q},
$$
for $G=C_N$ and small values of $N$:
$$
\begin{tabular}{r|ccccccccccccccc}
{\it N} & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 &13 &14 & 15 & 16 \\
\hline
& 0 & 1 & 1 & 2 & 2 & 3 & 3 & 5 & 4 & 6 & 7 & 8 & 7 & 13 & 10 \\
\end{tabular}
$$
For primes $p\ge 5$ there is a closed formula
\cite[\S 11]{kontsevichpestuntschinkel}:
\begin{equation}
\label{label:dimb}
\dim {\mathcal B}_2(C_p) \otimes {\mathbb Q} = \frac{p^2 -1}{24} + 1 = \frac{(p-5)(p-7)}{24} + \frac{p-1}{2},
\end{equation}
which strongly suggested a connection with the modular curve $X_1(p)$!
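For instance, for the primes appearing in the table above the formula gives
$$
\frac{5^2-1}{24}+1=2,\quad \frac{7^2-1}{24}+1=3,\quad \frac{11^2-1}{24}+1=6,\quad \frac{13^2-1}{24}+1=8,
$$
in agreement with the values listed for $N=5,7,11,13$.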
We also have
\begin{equation}
\label{label:dimbm}
\dim {\mathcal B}_2^-(C_p) \otimes {\mathbb Q} = \frac{(p-5)(p-7)}{24},
\end{equation}
and, by \cite[Prop.~30]{kontsevichpestuntschinkel},
$$
{\mathcal B}_1^-(C_p)\otimes {\mathbb Q}= \mathrm{Ker}({\mathcal B}_2(C_p) \to {\mathcal B}_2^-(C_p))\otimes {\mathbb Q}.
$$
Computations in noncyclic cases have been performed by Zhijia Zhang\footnote{see \url{https://cims.nyu.edu/~zz1753/ebgms/ebgms.html}}; we summarize the results: for primes $p\ge 5$ one has
$$
\dim {\mathcal B}_2(C_p\times C_p) \otimes {\mathbb Q} = \frac{(p-1)(p^3+6p^2-p+6)}{24},
$$
$$
\dim {\mathcal B}_2^-(C_p\times C_p) \otimes {\mathbb Q} =\frac{(p-1)(p^3-p+12)}{24}.
$$
For $G=C_{N_1}\times C_{N_2}$ and small values of $N_1,N_2$, we have:
$$
\begin{tabular}{r|ccccccccccccccc}
${\it N}_1$ & 2 & 2 & 2 & 2 & 2 & 2 & 3 & 3& 3 & 3 & 4 & 4 & 4 & 5 & 6 \\
\hline
${\it N}_2$ & 2 & 4 & 6 & 8 & 10 & 16 & 3 & 6 & 9 & 27 & 8 & 16 & 32 & 25 & 36 \\
\hline
$d_{{\mathbb Q}}$ & 0 & 2 & 3 & 6 & 7 & 21 & 7 & 15 & 37 & 235 & 33 & 105 & 353 &702 & 577 \\
$d_{{\mathbb Q}}^-$ & 0 & 0 & 0 & 1 & 1 & 9 & 3 & 7 & 19 & 163 & 17 & 65 &257& 502 & 433 \\
$d_{2}$ & 2 & 5 & 8 & 13 & 18 & 36 & 7& 15 & 37 & 235 & 34 & 106 &354 & 702 & 578 \\
$d_{2}^-$ & 2 & 3 & 5 & 8 & 12 & 24 &3 & 7 & 19 & 163 & 17 & 65 & 257 & 502 & 433\\
\end{tabular}
$$
\section{Reconstruction theorem}
\label{sect:recon}
The examples offered so far might suggest that very few invariants
in ${\mathcal B}_n(G)$ are actually realized geometrically by smooth projective
varieties $X \righttoleftarrow G$. If one allows nonrational examples far more
invariants arise:
\begin{prop}
Let $p$ be a prime. Then ${\mathcal B}_n(C_p)$ is generated as an abelian group
by $\beta(X \righttoleftarrow C_p)$, where $X$ is smooth and projective.
\end{prop}
\begin{proof}
We proceed by induction on $n$. The case of $n=1$ follows from the
Riemann existence theorem applied to cyclic branched covers of degree $p$ with
the prescribed ramification data. (See also Lemma~\ref{lemm:CI} below.)
For the symbols $[a_1,\ldots,a_{n-1},0]$ we construct $(n-1)$-dimensional varieties $D$
with the prescribed invariants and $D\times {\mathbb P}^1$ with trivial action on the second
factor. Since $[a,a,a_3,\ldots,a_n]=[0,a,a_3,\ldots,a_n]$ we may focus on
symbols $[a_1,a_2,\ldots,a_n], 0<a_1 < a_2< \ldots <a_n<p.$
We are reduced to the following lemma:
\begin{lemm} \label{lemm:CI}
Any sum
\begin{equation} \label{symbolsum}
\sum m_{[a_1,a_2,\ldots,a_n]}[a_1,a_2,\ldots,a_n]
\end{equation}
of symbols
$$
[a_1,a_2,\ldots,a_n], \quad 0<a_1 < a_2< \ldots <a_n<p,
$$
with nonnegative coefficients,
is realized as
$\beta(X \righttoleftarrow C_p)$, where $X$ is smooth, projective, and irreducible.
\end{lemm}
For each symbol $[a_1,a_2,\ldots,a_n]$ appearing in the sum,
take an $n$-dimensional representation $V_{[a_1,a_2,\ldots,a_n]}$ with the prescribed
weights and the direct sum
$$W_{[a_1,a_2,\ldots,a_n]} = (V_{[a_1,a_2,\ldots,a_n]} \oplus {\mathbb C})^{m_{[a_1,a_2,\ldots,a_n]}}$$
where ${\mathbb C}$ is the trivial representation of $C_p$. Write
$$W= \oplus W_{[a_1,a_2,\ldots,a_n]},$$
where the index is over the terms appearing in (\ref{symbolsum}),
and consider the projectivization ${\mathbb P}(W)$ and the $n$-planes
$$P_{[a_1,a_2,\ldots,a_n],j} \subset {\mathbb P}(W_{[a_1,a_2,\ldots,a_n]}), \quad j=1,\ldots,m_{[a_1,a_2,\ldots,a_n]}$$
associated with the summands of $W_{[a_1,a_2,\ldots,a_n]}$,
each with distinguished fixed point
$$
p_{[a_1,a_2,\ldots,a_n],j}=(0:0:\cdots:0:1).
$$
The action on
$$
\mathcal T_{p_{[a_1,a_2,\ldots,a_n]}}P_{[a_1,a_2,\ldots,a_n],j}
$$
coincides
with the action on $V_{[a_1,a_2,\ldots,a_n]}$. The fixed points of ${\mathbb P}(W)$
correspond to the eigenspaces for the $C_p$ action. Write
$M=\sum m_{[a_1,a_2,\ldots,a_n]}$ for the number of summands; each
weight occurs at most $M$ times. Thus the fixed point loci are projective
spaces of dimension $\le M-1$.
Choose a high-degree smooth complete intersection $X$ of dimension $n$,
invariant under the action of $C_p$, containing the $p_{[a_1,a_2,\ldots,a_n],j}$
and tangent to $P_{[a_1,a_2,\ldots,a_n],j}$ for each $[a_1,a_2,\ldots,a_n]$.
This complete intersection exists by
polynomial interpolation applied to the quotient ${\mathbb P}(W)/C_p$; smoothness
follows from Bertini's Theorem. Since complete intersections of
positive dimension are connected, the resulting $X$ is irreducible.
It only remains to show that such a complete intersection need not have fixed
points beyond those specified. Now $X$ has codimension
$(M-1)(n+1)$, so we may assume it avoids the fixed point loci -- away from
the stipulated points
$p_{[a_1,a_2,\ldots,a_n],j}$ -- provided $(M-1)(n+1) \ge M$.
Finally, we consider the special case $M=1$. Here we take
$$W=(V_{[a_1,a_2,\ldots,a_n]} \oplus {\mathbb C})^2,$$
imposing conditions at just one point $(0:0:\cdots:0:1)$.
Here $X\subset {\mathbb P}(W)$ has codimension
$n+1$ and the fixed point loci are ${\mathbb P}^1$'s, so we may avoid extraneous
points of intersection.
\end{proof}
\section{Refined invariants}
\label{sect:refined}
\subsection{Encoding fixed points}
Since ${\mathcal B}_2(C_2)=0$ this invariant says {\bf nothing}
about involutions of surfaces! Bertini, Geiser, and De Jonqui\`eres
involutions are perhaps the most intricate parts of the classification, which relies on
the study of fixed curves. This leads to a natural refinement of the invariants:
For the symbols of type $[a,0]$,
corresponding to curves $F_{\alpha}\subset X$ fixed by $C_N$,
we keep track of the (stable) birational equivalence class of $F_{\alpha}$
and the element of ${\mathcal B}_1(C_N)$ associated with $[a]$.
In general, \cite{kontsevichpestuntschinkel} introduced a group combining
the purely number-theoretic information encoded in ${\mathcal B}_n(G)$
with geometric information encoded in the Burnside group of fields from \cite{KT}.
Let
$$
\mathrm{Bir}_{n-1,m}(k), \quad 0\le m\le n - 1,
$$
be the set of $k$-birational equivalence classes of $(n-1)$-dimensional irreducible varieties over $k$,
which are $k$-birational to products $W\times \mathbb A^m$,
but not to $W'\times \mathbb A^{m+1}$, for any $W'$, and put
\begin{equation}
\label{eqn:bnk}
{\mathcal B}_n(G,k):= \oplus_{m=0}^{n-1} \oplus_{[Y]\in \mathrm{Bir}_{n-1,m}(k)}{\mathcal B}_{m+1}(G).
\end{equation}
Let $X$ be a smooth projective variety of dimension $n$
with a regular, generically free, action of an abelian group $G$.
Put
$$
\beta_k(X\righttoleftarrow G):=\sum_{\alpha} \beta_{k,\alpha},
$$
where, as before, the sum is over components $F_{\alpha}\subset X^G$
of the $G$-fixed point locus, but in addition to the
eigenvalues $a_1,\ldots, a_{n-\dim(F_{\alpha})}\in A$ in the tangent space $\mathcal T_{x_{\alpha}} X$ one keeps information about the function field of the component $F_{\alpha}$.
Choosing $m_{\alpha}$ so that
$$
[F_{\alpha}\times \mathbb A^{n-1-\dim(F_{\alpha})} ] \in \mathrm{Bir}_{n-1,m_{\alpha}}(k)
$$
we set
$$
\beta_{k,\alpha}:= [a_1,\ldots, a_{n-\dim(F_{\alpha})},
\underbrace{0, \ldots, 0}_{m_{\alpha}+1-n+\dim(F_{\alpha})}] \in
\mathcal B_{m_{\alpha}+1}(G),
$$
identified with the summand of (\ref{eqn:bnk}) indexed by $[F_{\alpha}\times \mathbb A^{n-1-\dim(F_{\alpha})}]$.
When $F_{\alpha}$ is not uniruled we get a symbol
in ${\mathcal B}_{\bCodim(F_{\alpha})}(G)$.
\begin{theo} \cite[Remark 5]{kontsevichpestuntschinkel}
The class
$$
\beta_k(X\righttoleftarrow G)\in {\mathcal B}_n(G,k)
$$
is a $G$-birational invariant.
\end{theo}
\subsection{Encoding points with nontrivial stabilizer}
\label{subsect:burndef}
We continue to assume that $G$ is a finite abelian group,
acting regularly and generically freely on a smooth variety $X$.
Let $H\subset G$ arise as the stabilizer of some point of $X$,
$F\subset X^H$ an irreducible component of the fixed point locus with generic stabilizer $H$, and $Y$ the minimal $G$-invariant subvariety containing $F$.
In Section~\ref{subsect:resolve}, we will explain how to blow up $X$ so that
$Y$ is always a disjoint union of translates of $F$.
Additional information about the action of $G$ on $X$ is contained in the action of the quotient
$G/H$, which could act
on the function field of $F$, or by translating $F$ in $X$.
The paper \cite{kreschtschinkel} introduced the group
$$
\Burn_n(G)
$$
as the quotient by certain relations of the free abelian group generated by symbols
\begin{equation}
\label{eqn:symbol}
(H, G/H\mathrel{\reflectbox{$\righttoleftarrow$}} K, \beta),
\end{equation}
where $K$ is a $G/H$-Galois algebra over a field
of transcendence degree $d\le n$ over $k$, up to isomorphism, and $\beta$ is a faithful $(n-d)$-dimensional
representation of $H$ (see \cite[Def.~4.2]{kreschtschinkel} for a precise formulation of conditions on $K$ and relations).
Passing to a suitable $G$-equivariant smooth projective model $X$
-- as discussed in Section~\ref{subsect:resolve} -- its class is defined by
\begin{equation}
\label{eqn:xg}
[X\righttoleftarrow G] :=\sum_{H\subseteq G} \sum_Y (H, G/H\mathrel{\reflectbox{$\righttoleftarrow$}} k(Y),\beta_Y(X))\in \mathrm{Burn}_n(G),
\end{equation}
where the sum is over all $G$-orbits of components $Y\subset X$ with generic stabilizer $H$ as above; the symbol records
the eigenvalues of $H$ in the tangent space at a point $x\in Y$, as well as the $G/H$-action
on the total ring of fractions $k(Y)$.
\begin{exam} \label{exam:Klein2}
We revisit Example~\ref{exam:Klein1} using the dual basis of characters
$\chi_1, \chi_2$ of $G=C_2\times C_2$. Here we have
\begin{align*}
[{\mathbb P}^1 \righttoleftarrow G] =& (\left<1\right>,G\mathrel{\reflectbox{$\righttoleftarrow$}} k(t)) +
(\left<g_1\right>, G/\left<g_1\right> \mathrel{\reflectbox{$\righttoleftarrow$}} \{ (1:\pm i)\},\chi_1) \\
&+ (\left<g_2\right>, G/\left<g_2\right> \mathrel{\reflectbox{$\righttoleftarrow$}} \{ (1:0),(0:1)\},\chi_2) \\
&+ (\left<g_1g_2\right>, G/\left<g_1g_2\right> \mathrel{\reflectbox{$\righttoleftarrow$}} \{ (1:\pm 1)\},\chi_1+\chi_2).
\end{align*}
The action on $k({\mathbb P}^1)=k(t)$
is by $g_1(t)=-t$ and $g_2(t)=-1/t$.
\end{exam}
Blowup relations ensure that $[X\righttoleftarrow G]$ is a well-defined
$G$-birational invariant -- see Section~\ref{subsect:blowupsample}
for more details.
There is a distinguished subgroup
$$
\Burn_n^{\rm triv}(G) \subset \Burn_n(G)
$$
generated by symbols $(1, G\mathrel{\reflectbox{$\righttoleftarrow$}} K=k(X))$.
For `bootstrapping' purposes -- where we seek invariants of $n$-folds
in terms of lower-dimensional strata with nontrivial stabilizers --
we may suppress these tautological symbols to get a quotient
$$
\Burn_n(G) \to\Burn_n^{\rm nontriv}(G).
$$
And there is also a
natural quotient group
$$
\Burn_n(G) \to \Burn_n^G(G)
$$
obtained by suppressing all symbols $(H, G/H\mathrel{\reflectbox{$\righttoleftarrow$}} K, \beta)$ where $H$ is a proper subgroup of $G$.
By \cite[Prop. 8.1 and Prop. 8.2]{kreschtschinkel}, there are natural surjective homomorphisms
$$
\Burn_n^G(G) \to {\mathcal B}_n(G,k)\to {\mathcal B}_n(G).
$$
\begin{exam}
The group $\Burn_1^{\rm nontriv}(G)$ is freely generated by pairs consisting of a nontrivial subgroup $H\subset G$
and an injective character $a:H \hookrightarrow {\mathbb G}_m$; e.g. $\Burn_1^{\rm nontriv}(C_N)\cong {\mathbb Z}^{N-1}$
and $\Burn_1^{\rm nontriv}(C_2\times C_2)\cong {\mathbb Z}^3$.
\end{exam}
\subsection{Examples of blowup relations}
\label{subsect:blowupsample}
We illustrate the relations for fixed points of cyclic actions on surfaces.
More computations are presented in Section~\ref{subsect:examples};
the reader may refer to
Section 4 of \cite{kreschtschinkel} for the general formalism.
All the key ideas are manifest in the surface case because the full set
of blowup relations follows from those in codimension two --
see \cite[Prop.~8.1]{kreschtschinkel} and the special case
Prop.~\ref{prop:81} above.
Suppose that $G=C_N$ acts on the surface $X$ with fixed point $\mathfrak p$ and
weights $a_1$ and $a_2$ that generate $A=\Hom(C_N,{\mathbb G}_m)$. Let $\widetilde{X}$
denote the blowup of $X$ at $\mathfrak p$ and $E\simeq {\mathbb P}^1$ the exceptional
divisor.
Let $\overline H=\operatorname{ker}(a_1-a_2)\subset G$
denote the generic stabilizer of $E$ which acts faithfully on the
normal bundle ${\mathcal N}_{E/\widetilde{X}}$ via
$\bar{a}_1=\bar{a}_2 \in \overline A:=\Hom(\overline H,{\mathbb G}_m)$.
Assume first that $a_1$ and $a_2$ are both prime to $N$, so $\mathfrak p$ is
an isolated fixed point of $X$.
The quotient $G/\overline H$
(when nontrivial) acts faithfully on $E={\mathbb P}^1$ with
fixed points $\mathfrak p_1$ and $\mathfrak p_2$.
If $\overline H=G$ then $a_1=a_2$ and
we get the relation
$$(G,{\rm triv} \mathrel{\reflectbox{$\righttoleftarrow$}} k=k(\mathfrak p), (a_1,a_1))
=(G,{\rm triv} \mathrel{\reflectbox{$\righttoleftarrow$}} k(t)=k(E), (a_1)).$$
If ${\rm triv} \subsetneq \overline H\subsetneq G$ then
\begin{align*}
(G,{\rm triv}& \mathrel{\reflectbox{$\righttoleftarrow$}} k=k(\mathfrak p), (a_1,a_2))
=(\overline H, G/\overline H \mathrel{\reflectbox{$\righttoleftarrow$}} k(t)=k(E), \bar{a}_1=\bar{a}_2 )\\
&+(G, {\rm triv} \mathrel{\reflectbox{$\righttoleftarrow$}} k=k(\mathfrak p_2), (a_1,a_2-a_1))\\
& +(G, {\rm triv} \mathrel{\reflectbox{$\righttoleftarrow$}} k=k(\mathfrak p_1), (a_2,a_1-a_2))
\stepcounter{equation}\tag{\theequation}\label{eqn:blow1}
\end{align*}
where $G/\overline H$ acts on $t$ by a primitive $d$th root of unity
with $d=|G/\overline H|$.
If $\overline H$ is trivial then
\begin{align*}
(G,{\rm triv} \mathrel{\reflectbox{$\righttoleftarrow$}} k=k(\mathfrak p), & (a_1,a_2))
=(G, {\rm triv} \mathrel{\reflectbox{$\righttoleftarrow$}} k=k(\mathfrak p_2), (a_1,a_2-a_1)) \\
&+(G, {\rm triv} \mathrel{\reflectbox{$\righttoleftarrow$}} k=k(\mathfrak p_1), (a_2,a_1-a_2)).
\stepcounter{equation}\tag{\theequation}\label{eqn:blow2}
\end{align*}
Assume now that $a_1=m_1n_1$ and $a_2=m_2n_2$ where $n_1,n_2|N$
(and are relatively prime modulo $N$)
and $m_1$ and $m_2$ are prime to $N$ and each other.
Then we have
$$\mathfrak{p} \in F_1 \cap F_2$$
for irreducible curves $F_1$ and $F_2$ with generic stabilizers
$C_{n_1}$ and $C_{n_2}$ respectively.
Let $\widetilde{F_1}, \widetilde{F_2} \subset \widetilde{X}$ denote
the proper transforms of $F_1$ and $F_2$ and $\mathfrak p_1,\mathfrak p_2 \in E$
their intersections with the exceptional divisor $E$.
Thus the contribution to the strata containing $\mathfrak p$ is
\begin{align*}
(G, {\rm triv} \mathrel{\reflectbox{$\righttoleftarrow$}} k(\mathfrak p), & (a_1,a_2)) \\
+(C_{n_1}, C_N/C_{n_1} \mathrel{\reflectbox{$\righttoleftarrow$}} k(F_1), a_2)
&+(C_{n_2}, C_N/C_{n_2} \mathrel{\reflectbox{$\righttoleftarrow$}} k(F_2), a_1)
\end{align*}
with the latter two terms appearing in the symbol sum on
$\widetilde{X}$, with the $F_i$ replaced by the $\widetilde{F_i}$.
Note that $\overline H=\operatorname{ker}(a_1-a_2)\subsetneq G$
because $a_1\not \equiv a_2$.
Here the blowup formula takes the form (\ref{eqn:blow1}) or (\ref{eqn:blow2})
depending on whether $\overline H$ is trivial or not.
Now suppose that $a_2=0$.
Let $F\subset X$ denote the irreducible component of the fixed locus
containing $\mathfrak p$, so that $a_1$ is the character by which
$G$ acts on ${\mathcal{N}}_{F/X}$. Write $\widetilde{F} \subset \widetilde{X}$
for the proper transform of $F$, $\mathfrak p_1=\widetilde{F} \cap E$,
and $\mathfrak p_2\in E$ for the other fixed point.
Here we get the relation
$$(G,{\rm triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k(F),a_1)=
(G,{\rm triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k(F),a_1) + (G,{\rm triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k(\mathfrak p_2),(a_1,-a_1)),$$
whence the latter term vanishes.
\subsection{Examples}
\label{subsect:examples}
We now complement the computations in Section~\ref{sect:surf}, for $G=C_N$ and small $N$. As before, we work over an algebraically closed base field $k$
of characteristic zero.
\
\noindent
($N=2$)
\begin{itemize}
\item ${\mathcal B}_2(C_2)=0$.
\item ${\mathcal B}_2(C_2,k) \cong \Burn_2^{C_2}(C_2)$ has a copy of
${\mathcal B}_1(C_2)={\mathbb Z}$ for every isomorphism class of curves of positive genus.
\item $\Burn_2(C_2)=\Burn_2^{\mathrm{triv}}(C_2)\oplus \Burn_2^{C_2}(C_2)$.
\end{itemize}
\
\noindent
($N=3$)
\begin{itemize}
\item ${\mathcal B}_2(C_3)\cong {\mathbb Z}$, generated by $[1,1]$,
\[ [1,2]=0, \qquad [2,2]=-[1,1]. \]
\item
${\mathcal B}_2(C_3,k)\cong \Burn_2^{C_3}(C_3)$ is a direct sum of ${\mathbb Z}$, corresponding to
points and rational curves, and a copy of
${\mathcal B}_1(C_3)\cong {\mathbb Z}^2$ for every isomorphism class of curves of positive genus.
\item
$\Burn_2(C_3)=\Burn_2^{\mathrm{triv}}(C_3)\oplus \Burn_2^{C_3}(C_3)$.
\end{itemize}
\
\noindent
($N=4$)
\begin{itemize}
\item ${\mathcal B}_2(C_4)\cong {\mathbb Z}$, generated by $[1,2]$,
\[ [1,1]=2[1,2],\quad [1,3]=0,\quad [2,3]=-[1,2],\quad [3,3]=-2[1,2]. \]
\item ${\mathcal B}_2(C_4,k)$ is a direct sum of ${\mathbb Z}$, corresponding to points
and rational curves, and a copy of ${\mathcal B}_1(C_4)\cong{\mathbb Z}^2$ for every
isomorphism class of curves of positive genus.
\item $\Burn_2(C_4)=\Burn_2^{\mathrm{triv}}(C_4)\oplus \Burn_2^{\mathrm{nontriv}}(C_4)$:
$\Burn_2^{\mathrm{nontriv}}(C_4)$ has,
for every isomorphism class of curves of positive genus,
a copy of ${\mathcal B}_1(C_4)$ and a copy of ${\mathcal B}_1(C_2)$, with an
additional copy of ${\mathcal B}_1(C_2)$ for every
isomorphism class of curves of positive genus with involution.
We claim points and rational curves
contribute
$${\mathbb Z}^2 \subset \Burn_2^{\mathrm{nontriv}}(C_4),$$
generated by $[1,2]$ and $[2,3]$ where
$$[i,j]=(C_4,k,(i,j)), \quad (i,j)=(1,1),(1,2),(1,3),(2,3),(3,3).$$
Abusing notation, write
$$[1,0] = (C_4,k(t),1), \quad
[3,0] = (C_4,k(t),3).$$
We write down the blowup relations, both orbits of points with special
stabilizers and orbits on one-dimensional strata
with nontrivial stabilizer:
\begin{align*}
[0,1]&=[0,1]+[1,3] \\
[0,3]&=[0,3]+[1,3] \\
[1,1]&=[1,0] \\
[1,2]&=[1,1]+[2,3] \\
[1,3]&=[1,2]+[2,3]+(C_2,C_2 \mathrel{\reflectbox{$\righttoleftarrow$}} k(t), 1) \\
[2,3]&=[1,2]+[3,3] \\
[3,3]&=[3,0] \\
(C_2,C_2 \mathrel{\reflectbox{$\righttoleftarrow$}} k^2, (1,1)) &=(C_2,C_2 \mathrel{\reflectbox{$\righttoleftarrow$}} k^2(t), 1) \\
(C_2,C_2 \mathrel{\reflectbox{$\righttoleftarrow$}} k(t), 1)&=(C_2,C_2 \mathrel{\reflectbox{$\righttoleftarrow$}} k(t), 1)+(C_2,C_2 \mathrel{\reflectbox{$\righttoleftarrow$}} k^2(t), 1) \\
(C_2,C_2 \mathrel{\reflectbox{$\righttoleftarrow$}} k^2(t), 1)&=(C_2,C_2 \mathrel{\reflectbox{$\righttoleftarrow$}} k^2(t), 1)+(C_2,C_2 \mathrel{\reflectbox{$\righttoleftarrow$}} k^2(t), 1)
\end{align*}
Thus we find
\begin{gather*}
[1,3]=0, \\
[0,3]=[3,3]=-[1,2]+[2,3],\\
[0,1]=[1,1]=[1,2]-[2,3], \\
(C_2,C_2\mathrel{\reflectbox{$\righttoleftarrow$}} k(t),1)=-[1,2]-[2,3], \\
(C_2,C_2\mathrel{\reflectbox{$\righttoleftarrow$}} k^2(t),1)=0, \\
(C_2,C_2\mathrel{\reflectbox{$\righttoleftarrow$}} k^2,(1,1))=0.
\end{gather*}
\end{itemize}
Here $k^n$ denotes the total ring of fractions for an orbit of length
$n$ and $k^n(t)$ the total ring of fractions of the exceptional locus
of the blowup of such an orbit.
Furthermore, $C_2\mathrel{\reflectbox{$\righttoleftarrow$}} k(t)$ is via $t\mapsto -t$.
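The identities just listed follow from the blowup relations by elementary linear algebra, and this kind of bookkeeping is easily mechanized.
The following sketch (in Python; it is purely illustrative and is not the code behind the larger computer calculations reported later) encodes each symbol as a basis vector, encodes each relation as the vector ``left-hand side minus right-hand side'', and checks that every claimed identity lies in the span of the relations; each identity is in fact an integral combination of the relations, so the rank test over the rationals used below is just a convenient way of exhibiting this.
\begin{verbatim}
import numpy as np

# one basis vector per symbol appearing in the relations above;
# e = (C_2, C_2 -> k(t), 1), q = (C_2, C_2 -> k^2, (1,1)),
# u = (C_2, C_2 -> k^2(t), 1)
symbols = ["[0,1]", "[0,3]", "[1,0]", "[1,1]", "[1,2]", "[1,3]",
           "[2,3]", "[3,0]", "[3,3]", "e", "q", "u"]
idx = {s: i for i, s in enumerate(symbols)}

def vec(terms):
    """Vector of a formal sum, given as {symbol: coefficient}."""
    v = np.zeros(len(symbols))
    for s, c in terms.items():
        v[idx[s]] += c
    return v

relations = [                       # blowup relations, as lhs - rhs
    vec({"[0,1]": 1}) - vec({"[0,1]": 1, "[1,3]": 1}),
    vec({"[0,3]": 1}) - vec({"[0,3]": 1, "[1,3]": 1}),
    vec({"[1,1]": 1}) - vec({"[1,0]": 1}),
    vec({"[1,2]": 1}) - vec({"[1,1]": 1, "[2,3]": 1}),
    vec({"[1,3]": 1}) - vec({"[1,2]": 1, "[2,3]": 1, "e": 1}),
    vec({"[2,3]": 1}) - vec({"[1,2]": 1, "[3,3]": 1}),
    vec({"[3,3]": 1}) - vec({"[3,0]": 1}),
    vec({"q": 1}) - vec({"u": 1}),
    vec({"e": 1}) - vec({"e": 1, "u": 1}),
    vec({"u": 1}) - vec({"u": 2}),
    vec({"[0,1]": 1}) - vec({"[1,0]": 1}),   # symbols are unordered
    vec({"[0,3]": 1}) - vec({"[3,0]": 1}),
]
R = np.array(relations)

def implied(c):
    """True if the vector c lies in the span of the relations."""
    return (np.linalg.matrix_rank(np.vstack([R, c]))
            == np.linalg.matrix_rank(R))

assert implied(vec({"[1,3]": 1}))                           # [1,3] = 0
assert implied(vec({"[0,3]": 1, "[1,2]": 1, "[2,3]": -1}))  # [0,3] = -[1,2]+[2,3]
assert implied(vec({"[3,3]": 1, "[1,2]": 1, "[2,3]": -1}))  # [3,3] = -[1,2]+[2,3]
assert implied(vec({"[0,1]": 1, "[1,2]": -1, "[2,3]": 1}))  # [0,1] = [1,2]-[2,3]
assert implied(vec({"[1,1]": 1, "[1,2]": -1, "[2,3]": 1}))  # [1,1] = [1,2]-[2,3]
assert implied(vec({"e": 1, "[1,2]": 1, "[2,3]": 1}))       # e = -[1,2]-[2,3]
assert implied(vec({"u": 1}))                               # u = 0
assert implied(vec({"q": 1}))                               # q = 0
\end{verbatim}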
For example, the linear action on ${\mathbb P}^2$
$$(x:y:z) \mapsto (x:iy:-iz)$$
has invariant
$$[1,3]+[1,2]+[2,3]+(C_2,C_2 \mathrel{\reflectbox{$\righttoleftarrow$}} k(y/z), 1)=0
\in \Burn_2^{\mathrm{nontriv}}(C_4).$$
\
We close this section with a noncyclic example:
\
\noindent
($C_2\times C_2$) \newline
Write
$$G=C_2\times C_2=\{ 1,g_1,g_2,g_3=g_1g_2\}$$
and
$$G^{\vee} = \{0,\chi_1,\chi_2,\chi_3=\chi_1+\chi_2\}, \,
\chi_1(g_1)=\chi_2(g_2)=-1,\chi_1(g_2)=\chi_2(g_1)=1$$
as before.
\begin{itemize}
\item ${\mathcal B}_2(C_2\times C_2)=
\{0,[\chi_1,\chi_2],[\chi_1,\chi_3],[\chi_2,\chi_3]\}$
with the structure of the Klein four group presented in
Section~\ref{sect:surf}.
\item ${\mathcal B}_2(C_2\times C_2,k) \cong {\mathcal B}_2(C_2\times C_2)$
as ${\mathcal B}_1(C_2\times C_2)=0$.
\item $\Burn_2(C_2\times C_2)=\Burn_2^{\rm triv}(C_2\times C_2)
\oplus \Burn_2^{\rm nontriv}(C_2 \times C_2)$
where the second term is a direct sum of
the subgroup $R$ generated by points and rational curves,
a copy of
$$\Burn_1^{\rm nontriv}(C_2\times C_2)\cong {\mathbb Z}^3$$
for each curve of positive genus, and another copy
for each curve of positive genus equipped with an involution.
\end{itemize}
The group $R$ fits into an exact sequence
$$0 \rightarrow \Burn_1^{\rm nontriv}(C_2\times C_2) \rightarrow R \rightarrow {\mathcal B}_2(C_2\times C_2) \rightarrow 0$$
obtained by computing generators and relations.
\
\paragraph{{\bf Generators:}}
Here we take $1\le i < j \le 3$:
\begin{align*}
[\chi_i,\chi_j] &:=(G, {\rm triv} \mathrel{\reflectbox{$\righttoleftarrow$}} k, (\chi_i,\chi_j)) \\
e_i &:=(\left<g_i\right>, G/\left<g_i\right> \mathrel{\reflectbox{$\righttoleftarrow$}} k(t), \bar{\chi}_i=1)\\
q_i &:=(\left<g_i\right>, G/\left<g_i\right> \mathrel{\reflectbox{$\righttoleftarrow$}} k^2, (\bar{\chi}_i,\bar{\chi}_i)=(1,1)) \\
f_i &:=(\left<g_i\right>, G/\left<g_i\right> \mathrel{\reflectbox{$\righttoleftarrow$}} k^2(t), \bar{\chi}_i=1)
\end{align*}
\paragraph{{\bf Relations:}}
Here we choose $h$ so that $\{h,i,j\}=\{1,2,3\}$:
\begin{align*}
[\chi_i,\chi_j] &= e_h + [\chi_h,\chi_i] + [\chi_h,\chi_j] \quad
\text{(blow up fixed point)} \\
q_i &= f_i \quad \text{(blow up orbit $q_i$)} \\
e_i & = e_i + f_i \quad \text{(blow up general orbit of $e_i$)}
\end{align*}
Thus the $q_i$ and $f_i$ are zero and we have
$$R/\left<e_1,e_2,e_3\right> = {\mathcal B}_2(C_2\times C_2).$$
We revisit the actions of $C_2\times C_2$
on rational surfaces in Section~\ref{sect:surf} using these new techniques:
\begin{enumerate}
\item[(1)]{the action $(x,y) \mapsto (\pm x^{\pm 1},y)$ on ${\mathbb P}^1\times {\mathbb P}^1$
has invariant
$$
f_1+f_2+f_3=0;
$$
}
\item[(2)]{the action $(x,y) \mapsto (\pm x,\pm y)$ on ${\mathbb P}^1 \times {\mathbb P}^1$
has invariant
$$
2e_1+2e_2+4[\chi_1,\chi_2]=0;
$$}
\item[(3)]{the diagonal action on ${\mathbb P}^1\times {\mathbb P}^1$ has invariant
$$
2(q_1+q_2+q_3)=0;
$$}
\item[(4)]{the action on the conic fibration admits an elliptic curve
$$F=\{y_1^2(x_1-ax_2)(x_1+ax_2)=y_2^2(x_1-bx_2)(x_1+bx_2)\}$$
that is fixed by $g_2$ and fibers over $(x_1:x_2)=(0:1),(1:0)$
fixed by $g_1$ and thus has invariant
$$4[\chi_1,\chi_2]+2e_1 +
(\left<g_2\right>,\left<g_1\right>\mathrel{\reflectbox{$\righttoleftarrow$}} k(F),1) \neq 0$$
where $g_1$ acts on $k(F)$ by $x_1/x_2 \mapsto -x_1/x_2$;}
\item[(5)]{the action on the degree two del Pezzo surface has
nontrivial invariant arising from the positive genus curves
fixed by $g_1$ and $g_2$;}
\item[(6)]{the action on the degree one del Pezzo surface has
nontrivial invariant arising from the positive genus curves
fixed by the involution.}
\end{enumerate}
\subsection{Limitation of the birational invariant}
It is an elementary observation, recorded in \cite[App.\ A]{reichsteinyoussinessential},
that the presence of a
point fixed by a given abelian subgroup $H$ of $G$ is a birational invariant of
a smooth projective variety $X$ with generically free $G$-action.
Two smooth $n$-dimensional projective varieties with generically free $G$-action might be
distinguished in this way but nevertheless have the same class in
$
\Burn_n^{\rm nontriv}(G).
$
Indeed, letting $C_2$ act on ${\mathbb P}^1$, we consider the corresponding
product action of $C_2\times C_2$ on ${\mathbb P}^1\times {\mathbb P}^1$.
As well, the action of $C_2\times C_2$ on
${\mathbb P}^1$ gives rise to a diagonal action on ${\mathbb P}^1\times {\mathbb P}^1$.
The former, but not the latter, has a point fixed by $C_2\times C_2$,
so the actions belong to two distinct birational classes.
However, both actions give rise to a vanishing class in
$\Burn_2^{\rm nontriv}(C_2\times C_2)$.
\subsection{Reprise: Cyclic groups on rational surfaces}
As already discussed in Section~\ref{sect:surf},
the presence of higher genus curves in the fixed locus of the action of a
cyclic group of prime order on a rational surface is an important invariant
in the study of the plane Cremona group. For example, for $G=C_2$, these curves
make up entirely the group $\Burn_2^{G}(G)$
and famously characterize birational involutions of the plane up to conjugation
\cite{baylebeauville}.
For more general cyclic groups acting on rational
surfaces, we recover the NFCA invariant of Blanc \cite{blancsubgroups},
which governs his classification.
We recall the relevant definitions:
Let $g\in \mathrm{Cr}_2$ be a nontrivial element of the plane Cremona group, of finite order $m$.
\begin{itemize}
\item {\em Normalized fixed curve} \cite{deFer}:
$$
\mathrm{NFC}(g):=\begin{cases}
\text{isomorphism class of the normalization of the union} \\
\text{of irrational components of the fixed curve.}
\end{cases}
$$
\item {\em Normalized fixed curve with action}:
$$
\mathrm{NFCA}(g):=\left(\mathrm{NFC}(g^r ), g\mid_{\mathrm{NFC}(g^r)} \right)^{m-1}_{r=1},
$$
where $g\mid_{\mathrm{NFC}(g^r)} $ is the automorphism induced by $g$ on $\mathrm{NFC}(g^r )$.
\end{itemize}
One of the main results in this context is the following characterization:
\begin{theo}\cite{blancsubgroups}
Two cyclic subgroups $G$ and $H$ of order $m$ of $\mathrm{Cr}_2$ are conjugate if and only if
$$
\mathrm{NFCA}(g) = \mathrm{NFCA}(h),
$$
for some generators $g$ of $G$ and $h$ of $H$.
\end{theo}
It follows immediately from the definition that the information encoded in
$\Burn_2(G)$, for $G=C_m$, is {\em equivalent} to $\mathrm{NFCA}(g)$.
\begin{rema}
It would be interesting to use symbol invariants to organize the
classification of
cyclic group actions on rational threefolds.
\end{rema}
\section{Cubic fourfolds}
\label{sect:cubic4}
In this section we discuss several illustrative examples, showing
various aspects of the new invariants introduced above.
Equivariant geometry in dimension $\le 3$ is, in principle,
accessible via the Minimal Model Program,
and there is an extensive literature on factorizations of equivariant birational maps.
We focus on dimension four, and in particular,
on cubic fourfolds.
Let $X\subset {\mathbb P}^5$ be a smooth cubic fourfold. No examples are
known to be irrational!
Here we show that there are actions $X\righttoleftarrow G$
where $G$-equivariant rationality fails, including actions on rational cubic fourfolds.
We found it useful to consult the classification of possible abelian automorphism
groups of cubic fourfolds in \cite{may}.
Here is a list of $N>1$, such that the cyclic group $C_N$ acts on a smooth cubic fourfold:
$$
N= 2,3,4,5,6,8,9,10,11,12,15,16, 18, 21, 24,30,32,33,36, 48.
$$
Note that
$$
\mathcal B_4(C_N)\otimes \mathbb Q =0, \quad \text{ for all $N < 27$, $N=30,32$}.
$$
We record their dimensions $d_{{\mathbb Q}}$ in the remaining cases:
$$
\begin{array}{c|ccc}
N & 33 & 36 & 48 \\
\hline
d_{{\mathbb Q}} & 2 & 3 & 7
\end{array}
$$
One can also work with finite coefficients: let
$$
d_p=d_p(N):=\dim \mathcal B_4(C_N)\otimes \mathbb F_p,
$$
we have $d_2=d_3=d_5=0$ for all $N\le 15$, and for $N=18,21$. In the other cases, we find:
$$
\begin{array}{c|ccccccc}
N & 16 & 24 & 30 & 32 & 33 & 36 & 48\\
\hline
d_2 & 1 & 5 & 10 & 12 & 3 & 19 & 50 \\
d_3 & 0 & 0 & 0 & 0 & 2 & 3 & 7 \\
d_5 & 0 & 0 & 0 & 0 & 2& 3& 7 \\
d_7 & 0 & 0 & 0 & 0 & 2 & 3 & 7
\end{array}
$$
\
Thus, to exhibit applications of ${\mathcal B}_4(C_N)$ we have to look at large $N$.
\
\noindent
{\bf Using $\mathcal B_n(G)$:} Consider $X\subset {\mathbb P}^5$ given by
\begin{equation}
\label{eqn:36}
x_1^2x_2+x_2^2x_3+x_3^2x_1+x_4^2x_5+x_5^2x_0+x_0^3 =0.
\end{equation}
It carries the action of $G=C_{36}$, with weights
$$
(0,4,28,16,9,18)
$$
on the ambient projective space,
and isolated $G$-fixed points
$$
P_1=(0:1:0:0:0:0), \ldots , P_5:=(0:0:0:0:0:1).
$$
Computing the weights in the corresponding tangent spaces, we find that
$\beta(X)$ equals
$$
[4,24,31,22]+[28,24,19,10]+[24,12,7,34]+[9,5,17,29]+[14,26,2,9].
$$
Solving a system of 443557 linear equations in 82251 variables, we find, by
computer, that
$$
\beta(X)\neq \beta({\mathbb P}^4\righttoleftarrow C_{36} )= 0 \in \mathcal B_4(C_{36})\otimes \mathbb F_2=\mathbb F_2^{19}.
$$
This implies that $X$ is not $G$-equivariantly birational to ${\mathbb P}^4$.
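The weights at the isolated fixed points are obtained by elementary modular arithmetic, which can be scripted.
The sketch below (Python; illustrative only, and independent of the linear-algebra computation just mentioned) lists, for each coordinate fixed point, the differences of ambient weights, omitting the one direction cut out by the equation of the cubic; we use the sign convention that the weight attached to the direction $x_j$ at the point with $x_i=1$ is $w_i-w_j$ modulo $36$ (the opposite convention replaces every weight by its negative). For instance, the first displayed symbol $[4,24,31,22]$ is recovered this way.
\begin{verbatim}
N = 36
w = [0, 4, 28, 16, 9, 18]            # weights of C_36 on x_0, ..., x_5
monomials = [                        # the cubic (eqn:36), as exponent vectors
    (0, 2, 1, 0, 0, 0),              # x_1^2 x_2
    (0, 0, 2, 1, 0, 0),              # x_2^2 x_3
    (0, 1, 0, 2, 0, 0),              # x_3^2 x_1
    (0, 0, 0, 0, 2, 1),              # x_4^2 x_5
    (1, 0, 0, 0, 0, 2),              # x_5^2 x_0
    (3, 0, 0, 0, 0, 0),              # x_0^3
]

def tangent_weights(i):
    """Weights of C_36 on the tangent space of X at the coordinate point
    P_i (x_i = 1, all other coordinates zero)."""
    # the direction x_j occurring in a monomial x_i^2 x_j has nonzero
    # partial derivative at P_i, so it is cut out by the equation of X
    j_cut = next(m.index(1) for m in monomials if m[i] == 2)
    return [(w[i] - w[j]) % N for j in range(6) if j not in (i, j_cut)]

for i in range(1, 6):                # P_1, ..., P_5; (1:0:...:0) is not on X
    print(i, tangent_weights(i))     # e.g. P_1 gives [4, 24, 31, 22]
\end{verbatim}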
\
\
\noindent
{\bf Using co-multiplication:}
The fourfold $X\subset {\mathbb P}^5$ given by
\begin{equation}
\label{eqn:48}
x_2^2x_3+x_3^2x_4+x_4^2x_5+ x_5^2x_0 + x_0^3 + x_1^3=0
\end{equation}
carries an action of $C_{48}$ with weights $(0,16,3,-6,12,-24)$, and isolated fixed points.
We find that $\beta(X)\in {\mathcal B}_{4}(C_{48})$ equals
$$
[-3,13,9,-27]+[6,22,9,-18]+[-12,4,-9,-18]+[40,27,18,36].
$$
Here, we apply the co-multiplication formula from Section~\ref{subsect:mult}
to the image $\beta^-(X)$ of $\beta(X)$ under the projection
$$
{\mathcal B}_{4}(C_{48}) \rightarrow {\mathcal B}_{4}^-(C_{48}).
$$
Let
$$
G':={\mathbb Z}/3{\mathbb Z}, \quad G'':={\mathbb Z}/16{\mathbb Z}, \quad n'=1\quad \text{ and } \quad n''=3.
$$
We have a homomorphism
$$
\nabla^-: {\mathcal B}_{4}^-(C_{48}) \to {\mathcal B}_{1}^-(C_{3}) \otimes {\mathcal B}_{3}^-(C_{16})
$$
and we find that $\nabla^-(\beta^-(X))$ equals
$$
[1]^-\otimes ([-1,3,-9]^- + [2,3,-6]^- + [-4,-3,-6]^- + [9,6,12]^-).
$$
Now we are computing in ${\mathcal B}_{3}^-(C_{16})$, a much smaller group.
We have
$$
\dim {\mathcal B}_{3}(C_{16})\otimes {\mathbb Q} =3, \text{ but } \dim {\mathcal B}_{3}^-(C_{16})\otimes {\mathbb Q} =0,
$$
however
$$
\dim {\mathcal B}_{3}(C_{16})\otimes {\mathbb F}_2 =8 \text{ and } \dim {\mathcal B}_{3}^-(C_{16})\otimes {\mathbb F}_2 =7.
$$
We find, by computer, that
$$
[-4,-3,-6] = [9,6,12] \in {\mathcal B}_{3}(C_{16})\otimes {\mathbb F}_2,
$$
so that the sum of these terms does not contribute to $\nabla^-(\beta^-(X))$,
and check that
$$
[-1,3,-9]^- + [2,3,-6]^- = [1,2,10]^-\neq 0 \in {\mathcal B}_{3}^-(C_{16}).
$$
It follows that
$$
\nabla^-(\beta^-(X)) \neq 0, \quad \text{ thus} \quad \beta^-(X)\neq 0 \in {\mathcal B}_{4}^-(C_{48}),
$$
and this action of $C_{48}$ on the cubic fourfold is not equivariantly birational to a linear action on ${\mathbb P}^4$, by Proposition~\ref{prop:cn}.
\
\noindent
{\bf Using $\mathcal B_{n}(G,k)$:}
There is also another way to analyze the fourfold in \eqref{eqn:48}:
observe that the divisor $Y\subset X$, a smooth cubic threefold given by
$x_1=0$, is fixed by $C_3 \subset C_{48}$.
This divisor is irrational, and we get a nontrivial contribution to
$\beta_k(X)\in {\mathcal B}_{4}(C_3,k)$, in the summand labeled by
$Y\in \mathrm{Bir}_{3,0}$; thus $X$ is not even $C_3$-equivariantly birational to ${\mathbb P}^4$.
\
The fourfold $X\subset {\mathbb P}^5$ given by
$$
f_3(x_0,x_1,x_2)+ x_3^2x_4+x_4^2x_5+x_5^2f_1(x_0,x_1,x_2)=0
$$
carries the action of $G=C_8$, with weights
$$
(0,0,0,1, 6, 4).
$$
The fourfold is smooth, e.g., for $f_3=x_0^3+x_1^3+x_2^3$ and $f_1=x_0$.
Here, there are no isolated fixed points, but we find
information from fixed point loci in higher dimension.
The $G$-fixed locus contains the degree 3 curve $Y$ given by
$$
x_3=x_4=x_5=f_3(x_0,x_1,x_2)=0,
$$
which is smooth for appropriate $f_3$.
Thus we get a contribution to
$\beta_k(X)\in {\mathcal B}_{4}(G,k)$, in the summand labeled by
$Y\in \mathrm{Bir}_{3,2}$:
$$
[7,2,4] \neq 0 \in {\mathcal B}_3(C_8)\cong \mathbb F_2.
$$
Here, we solve 289 linear equations in 120 variables.
This implies that $X$ is not $G$-equivariantly birational to ${\mathbb P}^4$. Of course,
this also follows by observing that the fourth power of the generator fixes a cubic threefold.
\
\noindent
{\bf Using $\mathrm{Burn}_{n}(G)$:}
Consider $X\subset {\mathbb P}^5$ given by
\begin{equation} \label{C6cubic}
x_0x_1^2+x_0^2x_2-x_0x_2^2-4x_0x_4^2+x_1^2x_2+x_3^2x_5-x_2x_4^2-x_5^3=0.
\end{equation}
It carries the action of $G=C_6$, which acts with weights
\[ (0,0,0,1,3,4). \]
The cubic fourfold $X$ is rational,
since it contains the disjoint planes
$$
x_0=x_1-x_4=x_3-x_5=0\quad \text{ and } \quad
x_2=x_1-2x_4=x_3+x_5=0.
$$
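Both membership statements and the disjointness are elementary to verify; a small symbolic check (a sketch using {\tt sympy}, purely illustrative) could run as follows.
\begin{verbatim}
import sympy as sp

x0, x1, x2, x3, x4, x5 = sp.symbols('x0:6')
f = (x0*x1**2 + x0**2*x2 - x0*x2**2 - 4*x0*x4**2
     + x1**2*x2 + x3**2*x5 - x2*x4**2 - x5**3)

# first plane:  x0 = 0, x1 = x4,   x3 = x5
# second plane: x2 = 0, x1 = 2*x4, x3 = -x5
assert sp.expand(f.subs({x0: 0, x1: x4, x3: x5})) == 0
assert sp.expand(f.subs({x2: 0, x1: 2*x4, x3: -x5})) == 0

# the two sets of linear conditions have only the common zero solution,
# so the planes do not meet in P^5
common = sp.linsolve([x0, x2, x1 - x4, x1 - 2*x4, x3 - x5, x3 + x5],
                     (x0, x1, x2, x3, x4, x5))
print(common)   # {(0, 0, 0, 0, 0, 0)}
\end{verbatim}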
There is a cubic surface $S\subset X$ with $C_3$-stabilizer and
scalar action on the normal bundle. Since the $C_2$-action on $S$
fixes an elliptic curve, we conclude, by \cite{BogPro},
that the cubic surface is not stably $C_2$-equivariantly rational; the corresponding symbol
$$
[ C_3, C_2\mathrel{\reflectbox{$\righttoleftarrow$}} k(S),\beta] \neq 0 \in \Burn_4(C_6),
$$
moreover, it does not interact with any other symbols in
$[X\righttoleftarrow G]$, which implies that $X$ is not $G$-birational to ${\mathbb P}^4$ with linear action.
In this case, no subgroup of $C_6$ fixes a hyperplane section.
We discuss obstructions of such type in Section~\ref{sec.elementary} below -- formally,
$\Burn_4(C_6)$ admits a projection to ${\mathbb Z}$ that distinguishes
the equivariant birational class of $X$ from that of
${\mathbb P}^4$ with linear action.
\
Kuznetsov \cite{Kuz} conjectures which cubic fourfolds $X$
are rational, in terms of their derived categories
$\mathsf D^b(X)$.
Consider the line bundles ${\mathcal O}_X,{\mathcal O}_X(1),{\mathcal O}_X(2)$ and the right
orthogonal complement
\begin{equation} \label{derived}
{\mathcal A}_X = \left< {\mathcal O}_X,{\mathcal O}_X(1),{\mathcal O}_X(2)\right>^{\perp}.
\end{equation}
Conjecturally, $X$ is rational if and only if ${\mathcal A}_X \cong \mathsf D^b(S)$,
the bounded derived category of a K3 surface $S$.
However, the K3 surface need not be canonically determined as there
are many examples of derived-equivalent but nonisomorphic K3 surfaces.
Indeed the following conditions on complex
projective K3 surfaces $S_1$ and $S_2$
are equivalent \cite{Orlov}:
\begin{itemize}
\item{$\mathsf D^b(S_1)\cong \mathsf D^b(S_2)$;}
\item{the Mukai lattices
$$\widetilde{H}(S_1,{\mathbb Z}) \cong \widetilde{H}(S_2,{\mathbb Z})$$
as Hodge structures;}
\item{the transcendental cohomology lattices
$$H^2_{\tran}(S_1,{\mathbb Z}) \cong H^2_{\tran}(S_2,{\mathbb Z})$$
as Hodge structures.}
\end{itemize}
There is an alternative Hodge-theoretic version of the conjecture:
A smooth cubic fourfold $X$ is rational if and only if there exists
a K3 surface $S$ and an isomorphism of integral Hodge structures
\begin{equation} \label{hodge}
H^4(X,{\mathbb Z})_{\tran}\cong H^2(S,{\mathbb Z})_{\tran}(-1),
\end{equation}
where $\tran$ denotes the orthogonal complement
of the Hodge classes and $(-1)$ designates the Tate twist.
Work of Addington and Thomas \cite{AdTh}, and recent extensions
\cite[Cor.~1.7]{BLMNPS}, show that the conditions (\ref{derived})
and (\ref{hodge}) are equivalent. In particular, both
are stable under specialization, consisting of an explicit countable
collection of divisors in moduli \cite{Has00}.
The main theorem of \cite{KT} -- that rationality is stable under
specializations of smooth projective varieties --
gives the equivalence of Kuznetsov's conjecture with
the Hodge-theoretic statement.
Suppose then that $X$ admits an action of a finite group $G$.
If $X$ is rational -- and the conjectures are true -- then
$G$ naturally acts on ${\mathcal A}_X$ and $\mathsf D^b(S)$,
for each surface $S$ arising in (\ref{derived}).
There is an induced action on $\widetilde{H}^*(S,{\mathbb Z})$ as well.
It is natural to speculate that a $G$-equivariant birational
map ${\mathbb P}^4 \stackrel{\sim}{\dashrightarrow} X$ should imply that
we may choose $S$ in its derived equivalence class so that
the $G$-action on the Mukai lattice is induced by a $G$-action on
$S$.
There are several possible obstructions to finding such an $S$:
\begin{itemize}
\item{if $S$ exists then there exists a sublattice of algebraic
classes
$$U := \left( \begin{matrix} 0 & 1 \\
1 & 0 \end{matrix} \right)=H^2(S,{\mathbb Z})^{\perp}$$
in the $G$-invariant part of the abstract
Mukai lattice arising from ${\mathcal A}_X$;}
\item{the action of $G$ on $\Pic(S)$ preserves the ample cone
of $S$.}
\end{itemize}
The first condition fails when $G$ permutes various derived equivalent
K3 surfaces. The second condition fails if $G$ includes a Picard-Lefschetz
transformation associated with a smooth rational curve ${\mathbb P}^1 \subset S$.
Derived equivalent K3 surfaces might have very different
automorphism groups \cite[Ex.~23]{HT17}; this paper discusses
descent of derived equivalence in the presence of Galois actions.
We mention some results on when the group action can be
lifted to the associated K3 surface \cite[\S 8]{Ouchi}:
\begin{itemize}
\item{if $G\neq \{1\}$ acts on $X$ symplectically, i.e., acts trivially
on $\mathrm H^1(\Omega^3_X)$, then $S$ is unique;}
\item{if $X$ is the Klein cubic fourfold
$$
x_0^3 + x_1^2 x_2 + x^2_2 x_3 + x^2_3 x_4 + x^2_4 x_5 + x^2_5 x_1 = 0
$$
then $X$ admits a symplectic automorphism of order $11$ and ${\mathcal A}_X\cong \mathsf D^b(S)$
for a unique K3 surface, which has no automorphism of order $11$.}
\end{itemize}
We speculate that the Klein example should not be $C_{11}$-equivariantly
rational, even though
$$
\beta(X\righttoleftarrow C_{11})=0 \in {\mathcal B}_4(C_{11}),
$$
as the $C_{11}$-action
has isolated fixed points and the target group is trivial
\cite[\S 8]{kontsevichpestuntschinkel}.
\begin{ques}
Let $X$ be a smooth cubic fourfold with the action of a (finite) group $G$.
Suppose that ${\mathcal A}_X \cong \mathsf D^b(S)$ for a K3 surface $S$ with $G$-action,
compatible with the isomorphism. Does it follow that
$$
[X\righttoleftarrow G]=[{\mathbb P}^4\righttoleftarrow G] \in \Burn_4(G),
$$
for some action of $G$ on ${\mathbb P}^4$?
\end{ques}
It is mysterious how the invariants in the Burnside groups interact with
the actions on the Hodge structures on the middle cohomology of $X$.
Obstructions to $G$-equivariant rationality arise from fixed loci in
various dimensions but the Hodge theory encodes codimension-two
cycles only. The example \eqref{C6cubic}, which is rational but not
$C_6$-rational, is particularly striking to us: How is
the cubic surface in the fixed locus coupled with the associated K3
surfaces?
\section{Nonabelian invariants}
\label{sect:nonab}
In this section, $G$ is a finite group, not necessarily abelian.
\subsection{The equivariant Burnside group}
As in Section~\ref{subsect:burndef}, it is defined as the quotient of the ${\mathbb Z}$-module generated by
symbols
$$
(H, N_G(H)/H\mathrel{\reflectbox{$\righttoleftarrow$}} K, \beta),
$$
similar to those in \eqref{eqn:symbol}, by {\bf blow-up relations}. The required relations are a bit
complicated but similar in spirit to what was written above;
precise definitions are in \cite[Section 4]{kreschtschinkel}.
The resulting group
$$
\Burn_n(G)
$$
carries a rich combinatorial structure, that remains largely unexplored.
\subsection{Resolution of singularities}
\label{subsect:resolve}
The class of $X\righttoleftarrow G$, a projective variety with a generically free $G$-action,
is computed on a suitable model
of the function field $k(X)$. We explain how such a model may be found in practice.
While this is a corollary of Bergh's `destackification' procedure \cite{bergh},
the approach here can be helpful in specific examples.
We first review the resolution process of \cite[\S 3]{reichsteinyoussinessential}.
A variety with group action as above is {\em in standard form with respect
to a $G$-invariant divisor $Y$} if
\begin{itemize}
\item{$X$ is smooth and $Y$ is a simple normal crossings divisor;}
\item{the $G$ action on $X\setminus Y$ is free;}
\item{for every $g\in G$ and irreducible component $Z\subset Y$ either
$g(Z)=Z$ or $g(Z)\cap Z=\emptyset$.}
\end{itemize}
We recall several fundamental results. First, we can always
put actions in standard form:
\begin{quote}
If $X$ is smooth and $Y$ is a $G$-invariant closed subset
such that $G$ acts freely on $X\setminus Y$ then there exists a
resolution
$$\pi:\widetilde{X} \rightarrow X$$
obtained as a sequence of blowups along smooth $G$-invariant
centers, such that
$\widetilde{X}$ is in standard form with respect to
$\operatorname{Exc}(\pi)\cup \pi^{-1}(Y)$ \cite[Th.~3.2]{reichsteinyoussinessential}.
\end{quote}
An action in standard form has stabilizers of special type:
\begin{quote}
Assume that $X$ is in standard form with respect to $Y$
and $x\in X$ lies on $m$ irreducible components of $Y$
then the stabilizer $H$ of $x$ is abelian with $\le m$ generators
\cite[Th.~4.1]{reichsteinyoussinessential}.
\end{quote}
The proof of \cite[Th.~4.1]{reichsteinyoussinessential} (see Remark~4.4)
yields \'etale-local coordinates on $X$ about $x$
$$x_1,\ldots,x_k,y_1,\ldots,y_l,z_1,\ldots,z_m$$
such that
\begin{itemize}
\item{$H$ acts diagonally on all the coordinates;}
\item{$y_1=\cdots=y_l=z_1=\cdots=z_m=0$ coincides with $X^H$, i.e., these are the coordinates on which $H$ acts nontrivially;}
\item{$z_1\cdots z_m=0$ coincides with $Y$ and the associated
characters $\chi_i:H\rightarrow {\mathbb G}_m$ generate $\Hom(H,{\mathbb G}_m)$, so
the induced representation
\begin{equation}
(\chi_1,\ldots,\chi_m):H \hookrightarrow {\mathbb G}_m^m \label{inject}
\end{equation}
is injective.}
\end{itemize}
Suppose that $X$ is in standard form with respect to $Y$
with irreducible components $Y_1,\ldots,Y_s$. For each orbit
of these under the action of $G$, consider the reduced divisor
obtained by summing over the orbit. The resulting divisors
$D(1),\ldots,D(r)$ have smooth support -- by the definition of
standard form -- and
$$D(1) \cup \cdots \cup D(r) = Y_1 \cup \cdots \cup Y_s.$$
The line bundles ${\mathcal O}_X(D(i))$ are naturally $G$-linearized and thus
descend to line bundles on the quotient stack $[X/G]$ and we obtain
$$\varphi: [X/G] \rightarrow B{\mathbb G}_m \times \cdots \times B{\mathbb G}_m \quad (r \text{ factors}).$$
We claim $\varphi$ is representable. It suffices to check this by showing
that the induced homomorphism of stabilizers is injective at each point
\cite[\href{https://stacks.math.columbia.edu/tag/04YY}{Tag 04YY}]{stacks-project}.
For $x\in X$, fix the indices $i_1,\ldots,i_m$ so that
$D(i_1),\ldots,D(i_m)$ are the components of $Y$ containing $x$,
and consider the induced
$$\varphi_x: [X/G] \rightarrow B{\mathbb G}_m \times \cdots \times B{\mathbb G}_m \quad (m \text{ factors}).$$
The homomorphism on stabilizers is given by (\ref{inject}), which
is injective.
Thus we have established:
\begin{prop}
Let $X$ be a smooth projective variety with a generically free action by a
finite group $G$, in standard form with respect to a divisor $Y$.
Then Assumption 2 of \cite{kreschtschinkel} holds and the invariants
constructed there may be evaluated on $X$.
\end{prop}
\subsection{The class of $X\righttoleftarrow G$}
On a suitable model $X$,
we consider each stratum $F\subset X$ with nontrivial (abelian)
stabilizer $H\subset G$,
the action of the normalizer $N_G(H)/H$ on (the orbit of) the stratum, and the
induced action of $H$ on its normal bundle, and record these in a symbol. Then we
define
\begin{equation}
\label{eqn:defn-xg}
[X\righttoleftarrow G]:=\sum_{H\subseteq G} \sum_F(H,N_G(H)/H\mathrel{\reflectbox{$\righttoleftarrow$}} k(F), \beta) \in \Burn_n(G),
\end{equation}
as in \eqref{eqn:xg}. The proof that this is a $G$-birational invariant relies on $G$-equivariant Weak Factorization and combinatorics \cite[Section 5]{kreschtschinkel}.
\subsection{Elementary observations}
\label{sec.elementary}
As we discussed, the presence of higher genus curves in the fixed locus of the
action of a cyclic group of prime order on a rational surface is an
important invariant in the study of the plane Cremona group;
see, e.g., \cite{blancsubgroups}.
These make up entirely the group $\Burn_2^{{\mathbb Z}/2{\mathbb Z}}({\mathbb Z}/2{\mathbb Z})$ and
characterize birational involutions of the plane
up to conjugation \cite{baylebeauville}.
More generally, for any nontrivial cyclic subgroup $H$ of $G$ and birational class of an
$(n-1)$-dimensional variety $Y$, not birational to
$Z\times {\mathbb P}^1$ for any variety $Z$ of dimension $n-2$,
we have a projection from $\Burn_n(G)$ onto
the free abelian group on the $N_G(H)$-conjugacy classes of pairs
$(H',a)$, where $H'$ is a subgroup,
$H\subset H'\subset N_G(H)$, and $a\in H^\vee$ is a primitive character.
This sends
$$
(H,\Ind_{H'/H}^{N_G(H)/H}(k(Y)),a),
$$
for any
$H'/H\mathrel{\reflectbox{$\righttoleftarrow$}} k(Y)$, to the
generator indexed by the conjugacy class $[(H',a)]$ of the pair.
A more refined version of this observation might also relax the
restriction on $Y$ but take into account the action $H'/H\mathrel{\reflectbox{$\righttoleftarrow$}} k(Y)$.
We do not go into details, but only point out, for instance, that for $n=2$
and $Y={\mathbb P}^1$
there is a projection
\[ \Burn_2(G)\to \bigoplus_{\substack{[(H',a)]\\ H'/H \text{ not cyclic}}} {\mathbb Z}. \]
Taking $G$ to be the dihedral group of order $12$ and $H$ the center of $G$,
we may distinguish between the two inclusions of $G$ into the
plane Cremona group considered in \cite{isk-s3}, see Section~\ref{subsect:isk} below.
\subsection{Dihedral group of order 12.}
\label{subsect:relations}
We now compute $\Burn_2(G)$, for $G=D_{6}$,
the dihedral group with generators
$$\rho, \quad \sigma,\quad \text{ with } \quad
\rho^6=\sigma^2=\rho\sigma\rho\sigma=e_{D_6}.
$$
We list abelian subgroups, up to conjugacy:
\begin{itemize}
\item order $6$: $C_6=\langle \rho\rangle$
\item order $4$: $D_2=\langle \rho^3,\sigma\rangle$
\item order $3$: $C_3=\langle \rho^2\rangle$
\item order $2$: central $C_2=\langle \rho^3\rangle$, noncentral
$S:=\langle \sigma\rangle$, $S':=\langle \rho^3\sigma\rangle$
\item order $1$: $\mathrm{triv}$
\end{itemize}
The subgroup of order $4$ and the two noncentral subgroups of order $2$ have
normalizer $D_2$; the others are normal.
As before, we use $k(X)$ to denote the function field of the underlying
surface $X$, $K$ to denote the algebra of rational functions of a one-dimensional
stratum with nontrivial stabilizer,
and $k^n$ to denote the algebra of functions of a zero-dimensional
orbit of length $n$.
When we blow up such an orbit, we use $k^n(t)$ to denote the total ring
of fractions of the exceptional locus.
\
\noindent
{\bf Generators:}
\noindent
$(C_6,\langle\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} K,(1))$
\noindent
$(C_6,\langle\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^2,(1,j))$, \quad $j=1,\ldots, 4$, \quad
\noindent
$(C_6,\langle\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^2,(2,3))$
\noindent
$(D_2,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,(a_1,a_2))$, \quad $a_1,a_2\in {\mathbb F}_2^2$, generating ${\mathbb F}_2^2$
\noindent
$(C_3,\langle \bar\rho,\bar\sigma\rangle\mathrel{\reflectbox{$\righttoleftarrow$}} K,(1))$
\noindent
$(C_3,\langle \bar\rho,\bar\sigma\rangle\mathrel{\reflectbox{$\righttoleftarrow$}} k^4,(1,1))$
\noindent
$(C_2,\langle \bar\rho,\bar\sigma\rangle\mathrel{\reflectbox{$\righttoleftarrow$}} K,(1))$
\noindent
$(S,C_2\mathrel{\reflectbox{$\righttoleftarrow$}} K,(1))$
\noindent
$(S',C_2\mathrel{\reflectbox{$\righttoleftarrow$}} K,(1))$
\noindent
$(\mathrm{triv},D_6\mathrel{\reflectbox{$\righttoleftarrow$}} k(X),\mathrm{triv})$
\
\noindent
{\bf Relations:}
\noindent
$(C_6,\langle\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^2,(1,1))=(C_6,\langle \bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^2(t),(1))$
\noindent
$(C_6,\langle\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^2,(1,2))=(C_6,\langle\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^2,(1,1))+(C_6,\langle\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^2,(1,4))$
\noindent
\begin{multline*}
\hskip -0.35cm
(C_6,\langle\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^2,(1,3)) =(C_6,\langle\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^2,(1,2))+(C_6,\langle\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^2,(2,3))+ \\
(C_2,\langle \bar\rho,\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^2(t),(1)),
\end{multline*}
where $\bar\rho$ acts by cube roots of unity on $t$
\noindent
\begin{multline*}
\hskip -0.35cm
(C_6,\langle\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^2,(1,4))=(C_6,\langle\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^2,(1,3))+(C_6,\langle\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^2,(2,3))+ \\
(C_3,\langle \bar\rho,\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^2(t),(1)),
\end{multline*}
where $\bar\rho$ acts by $-1$ on $t$
\
\noindent
$0=2(C_6,\langle\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^2,(1,4))+(C_2,\langle \bar\rho,\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^2(t),(1))$, \\
where $\bar\rho$ acts by cube roots of unity on $t$
\
\noindent
$(C_6,\langle\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^2,(2,3))=(C_6,\langle\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^2,(1,2))+(C_6,\langle\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^2,(1,3))$
\
\noindent
\begin{multline*}
\hskip -0.35cm
(D_2,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,((1,0),(0,1)))=
(D_2,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,((1,0),(1,1)))+ \\ (D_2,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,((0,1),(1,1)))+ (S',C_2\mathrel{\reflectbox{$\righttoleftarrow$}} k(t),(1))
\end{multline*}
\noindent
\begin{multline*}
\hskip -0.35cm
(D_2,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,((1,0),(1,1)))=(D_2,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,((1,0),(0,1)))+ \\(D_2,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,((0,1),(1,1)))+(C_2,\langle \bar\rho,\bar\sigma\rangle \mathrel{\reflectbox{$\righttoleftarrow$}} k^3(t),(1)),
\end{multline*}
permutation action on $k^3$ with $\bar\sigma$ acting by $-1$ on $t$
\
\noindent
\begin{multline*}
\hskip -0.35cm
(D_2,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,((0,1),(1,1)))=(D_2,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,((1,0),(0,1)))+ \\(D_2,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,((1,0),(1,1)))+(S,C_2\mathrel{\reflectbox{$\righttoleftarrow$}} k(t),(1))
\end{multline*}
\
\noindent
$(C_3,\langle \bar\rho,\bar\sigma\rangle\mathrel{\reflectbox{$\righttoleftarrow$}} k^4,(1,1))=(C_3,\langle \bar\rho,\bar\sigma\rangle\mathrel{\reflectbox{$\righttoleftarrow$}} k^4(t),(1))$
\
\noindent
$0=2(C_3,\langle \bar\rho,\bar\sigma\rangle\mathrel{\reflectbox{$\righttoleftarrow$}} k^4,(1,1))$
\noindent
$0=(C_2,\langle \bar\rho,\bar\sigma\rangle\mathrel{\reflectbox{$\righttoleftarrow$}} k^6(t),(1))$
\
\noindent
$0=(S,C_2\mathrel{\reflectbox{$\righttoleftarrow$}} k^2(t),(1))$
\noindent
$0=(S',C_2\mathrel{\reflectbox{$\righttoleftarrow$}} k^2(t),(1))$
\subsection{Embeddings of $\mathfrak S_3\times C_2$ into the Cremona group}
\label{subsect:isk}
Iskovskikh \cite{isk-s3} exhibited two nonconjugate
copies of $G=\mathfrak S_3\times C_2\cong D_6$
in $\BirAut({\mathbb P}^2)$:
\begin{itemize}
\item{the action on $x_1+x_2+x_3=0$ by permutation and reversing signs,
with model ${\mathbb P}^2$;}
\item{the action on $y_1y_2y_3=1$ by permutation and taking inverses,
with model a sextic del Pezzo surface.}
\end{itemize}
To justify the interest in these particular actions we observe that $G$ is the Weyl group of
the exceptional Lie group $\mathsf G_2$,
which acts on the Lie algebra of the torus, respectively on the torus itself, and it is natural to ask whether or not
these actions are equivariantly birational.
It turns out that they are stably $G$-birational \cite[Proposition 9.11]{lemire}, but not $G$-birational.
The proof of failure of $G$-birationality in \cite{isk-s3} relies on the classification of links, via the $G$-equivariant Sarkisov program.
Here we explain how to apply $\Burn_2(G)$ to this problem.
Note that neither model above satisfies the stabilizer condition required in the Definition \eqref{eqn:defn-xg}!
We need to replace the surfaces by appropriate models $X$ and $Y$, in particular,
to blow up points:
\begin{itemize}
\item{$(x_1,x_2,x_3)=(0,0,0)$, with $G$ as stabilizer;}
\item{$(y_1,y_2,y_3)=(1,1,1)$, with $G$ as stabilizer, and
$$
(\omega,\omega,\omega), \quad (\omega^2,\omega^2,\omega^2), \quad
\omega=e^{2\pi i/3},
$$
with $\mathfrak S_3$ as stabilizer.}
\end{itemize}
We describe these actions in more detail, following closely \cite{isk-s3}.
The action on ${\mathbb P}^2$, with coordinates $(u_0:u_1:u_2)$ is given by
$$
\begin{pmatrix}
1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0
\end{pmatrix},
\quad
\begin{pmatrix}
1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & -1
\end{pmatrix},
\quad
\rho^3=
\begin{pmatrix}
1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1
\end{pmatrix}.
$$
There is one fixed point, $(1:0:0)$; after blowing up this point, the exceptional curve is stabilized by the
central involution $\rho^3$, and comes with a nontrivial $\mathfrak S_3$-action, contributing the symbol
\begin{equation}
\label{eqn:symbs3}
(C_2, \mathfrak S_3 \mathrel{\reflectbox{$\righttoleftarrow$}} k({\mathbb P}^1), (1))
\end{equation}
to $[X\righttoleftarrow G]$.
Additionally, the line $\ell_0:=\{ u_0=0\}$ has as stabilizer the central $C_2$, contributing the same symbol.
There are also other contributing terms, of the shape:
\begin{itemize}
\item $(C_6, C_2 \mathrel{\reflectbox{$\righttoleftarrow$}} k^2, \beta)$
\item $(D_2, {\rm triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k, \beta')$
\end{itemize}
for some weights $\beta, \beta'$.
A better model for the second action is the quadric
$$
v_0v_1+v_1v_2+v_2v_0=3w^2,
$$
where $\mathfrak S_3$ permutes the coordinates $(v_0:v_1:v_2)$
and the central involution exchanges the sign on $w$.
There are no $G$-fixed points, but a conic $R_0:=\{ w=0\}$ with stabilizer the central $C_2$ and
a nontrivial action of $\mathfrak S_3$. There are also:
\begin{itemize}
\item a $G$-orbit of length 2:
$$
\{ P_1:=(1:1:1:1), P_2:=(1:1:1:-1)\},
$$
exchanged by the central involution, each point has stabilizer $\mathfrak S_3$ -- these points have to be blown up, yielding a pair of conjugated ${\mathbb P}^1$, with a nontrivial $\mathfrak S_3$-action;
\item another curve $R_1:=\{ v_0+v_1+v_2=0\}$ with effective $G$-action;
\item additional points with stabilizers $C_6$ and $D_2$ in $R_0$ and $R_1$.
\end{itemize}
The essential difference is that the symbol \eqref{eqn:symbs3} appears {\em twice} for the action on ${\mathbb P}^2$, and only {\em once} for the action on the quadric: the pair of conjugated ${\mathbb P}^1$ with $\mathfrak S_3$-action has trivial stabilizer and does not contribute. Further blow-ups will not introduce new curves of this type.
Formally, examining the relations in Section~\ref{subsect:relations}, we see that the symbol \eqref{eqn:symbs3}
is not equivalent to any combination of other symbols, i.e., it is independent of all other symbols.
This implies that
$$
[X\righttoleftarrow G] \neq [Y\righttoleftarrow G] \in \Burn_2(G),
$$
thus $X$ and $Y$ are not $G$-equivariantly birational.
Note that $X$ and $Y$ are equivariantly birational for any proper subgroup of $G$.
\begin{rema}
One can view
the symbol \eqref{eqn:symbs3}
as the analog of a curve of higher genus in the fixed locus of an element in the classification
of abelian actions on surfaces, as discussed in Section~\ref{sect:surf}.
\end{rema}
\bibliographystyle{plain}
\section{Introduction}
\label{sec:intro}
A multi-sentence question (MSQ) is a dialogue turn that contains more than one question (cf. Ex. \ref{ex1}). We refer to the speaker of such a turn as a \textit{querent} (i.e., one who seeks).
\ex. \a.[Querent: ] How can I transport my cats if I am moving a long distance? (Q1)
\b.[] For example, flying them from NYC to London? (Q2) \label{ex1}
A standard question answering system might consider these questions separately:
\ex. \label{answer1} \a.[A1: ] You can take them in the car with you.
\b.[A2: ] British Airways fly from NYC to London.
However, this naïve approach does not result in a good answer, since the querent intends that an answer take both questions into account: in \ref{ex1}, Q2 clarifies that taking pets by car is not a relevant option. The querent is likely looking for an answer like \ref{answer2}:
\ex.\label{answer2} \a.[A: ] British Airways will let you fly pets from NYC to London.
Whilst question answering (QA) has received significant research attention in recent years \cite{TriviaQA,VisualQA}, there is little research to date on answering MSQs, despite their prevalence in English. Furthermore, existing QA datasets are not appropriate for the study of MSQs as they tend to be sequences of standalone questions constructed in relation to a text by crowdworkers (e.g. SQuAD \cite{rajpurkar2016squad}). We are not aware of any work that has attempted to improve QA performance on MSQs, despite the potential for obvious errors as in the example above.
Our contribution towards the broader research goal of automatically answering MSQs is as follows:
\vspace{-0.5em}
\begin{itemize}
\setlength\itemsep{-0.2em}
\item We create a new dataset of 162,745 English two-question MSQs from Stack Exchange.
\item We define five types of MSQ according to how they are intended to be answered, inferring intent from relations between them.
\item We design a baseline classifier based on surface features.
\end{itemize}
\section{Prior work}
\label{sec:prior}
Prior work on QA has focused on either single questions contained within dialogue \cite{choi2018quac,reddy-etal-2019-coqa,sharc,clark2018think}, or questions composed of two or more sentences crowd-sourced by community QA (cQA) services \cite{issuesincqa,cqamsq}. Our definition of MSQs is similar to the latter, but it should be noted that sentences in existing cQA datasets can be declarative or standalone, while in our case they must be a sequence of questions that jointly imply some user intent. Popular tasks on cQA have only considered the semantics of individual questions and answers, while we are more focused on interactions between questions.
\newcite{huang-etal-2008-question} and \newcite{anstypeinf} classify questions to improve QA performance, but their work is limited to standalone questions. \newcite{ciurca2019} was the first to identify MSQs as a distinct phenomenon, and curated a small dataset consisting of 300 MSQs extracted from Yahoo Answers. However, this dataset is too small to enable significant progress on automatic classification of MSQ intent.
\section{Large-scale MSQ dataset}
\label{sec:data}
Stack Exchange is a network of question-answering sites, where each site covers a particular topic. Questions on Stack Exchange are formatted to have a short title followed by a longer body describing the question, making them far more likely to contain MSQs than questions on other question-answering sites, which tend to concentrate the content in the title with only a short description afterwards. There is a voting system which lets us use votes as a proxy for well-formedness, since badly-formed questions are likely to be rated poorly. Finally, the network spans many topics, meaning that we can obtain questions from a variety of domains.
To obtain the data, we used the Stack Exchange Data Explorer\footnote{https://data.stackexchange.com/}, an open source tool for running arbitrary queries against public data from the Stack Exchange network. We chose 93 sites within the network, and queried each site for entries with at least two question marks in the body of the question. We removed any questions with \TeX{} and mark-up tags, then replaced any text matching a RegEx pattern for a website with `\texttt{[website]}'. From this cleaned text, we extracted pairs of MSQs by splitting the cleaned body of the question into sentences, then finding two adjacent sentences ending in `\texttt{?}'. We removed questions under 5 or over 300 characters in length. Finally, we removed any question identified as non-English using \texttt{langid.py} \cite{lui-baldwin-2012-langid}. Many of the questions labelled as `non-English' were in fact badly formed English, making language identification a useful pre-processing step.
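A minimal sketch of the extraction step is shown below (Python; the regular expressions, the naive sentence splitter, and the function names are illustrative stand-ins, and the actual query and cleaning scripts may differ in detail).
\begin{verbatim}
import re
import langid  # langid.py (Lui and Baldwin, 2012)

URL_RE = re.compile(r"https?://\S+|www\.\S+")
MARKUP_RE = re.compile(r"<[^>]+>|\\[a-zA-Z]+|\$")  # rough proxy for mark-up / TeX

def extract_msq_pairs(body: str):
    """Return adjacent sentence pairs that both end in '?'."""
    if MARKUP_RE.search(body):          # drop questions containing TeX or mark-up
        return []
    body = URL_RE.sub("[website]", body)
    # a simple splitter; a proper sentence tokenizer could be used instead
    sents = [s.strip() for s in re.split(r"(?<=[.!?])\s+", body) if s.strip()]
    pairs = []
    for s1, s2 in zip(sents, sents[1:]):
        if not (s1.endswith("?") and s2.endswith("?")):
            continue
        if not all(5 <= len(s) <= 300 for s in (s1, s2)):
            continue
        if langid.classify(s1 + " " + s2)[0] != "en":   # keep English only
            continue
        pairs.append((s1, s2))
    return pairs

example = ("How can I transport my cats if I am moving a long distance? "
           "For example, flying them from NYC to London? Thanks for any advice!")
print(extract_msq_pairs(example))
\end{verbatim}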
After cleaning and processing, we extracted 162,745 questions from 93 topics\footnote{Our dataset is available at \url{https://github.com/laurieburchell/multi-sentence-questions}}. A full list of topics and the number of questions extracted from each is given in \Cref{app:topics}. We restrict the dataset to pairs of questions, leaving longer sequences of MSQs for future work.
\begin{table}[!ht]
\centering
\newcolumntype{L}{>{\raggedright\arraybackslash}X}%
\begin{tabularx}{\textwidth}{rL}
\multicolumn{2}{l}{\textsc{Separable}} \\
\hhline{==}
\textbf{Example} & \textit{What’s the recommended kitten food?} \newline \textit{How often should I feed it?} \\
\textbf{Intent} & Two questions on the same topic (the querent’s kitten). \\
\textbf{Strategy} & Resolve coreference and answer both questions separately. \\
\multicolumn{2}{l}{} \\
\multicolumn{2}{l}{\textsc{Reformulated}} \\
\hhline{==}
\textbf{Example} &
\textit{Is Himalayan pink salt okay to use in fish tanks?} \newline \textit{I read that aquarium salt is good but would pink salt work?} \\
\textbf{Intent} & Speaker wants to paraphrase Q1 (perhaps for clarity). \\
\textbf{Strategy} & Answer one of the two questions. \\
\multicolumn{2}{l}{} \\
\multicolumn{2}{l}{\textsc{Disjunctive}} \\
\hhline{==}
\textbf{Example} &
\textit{Is it normal for my puppy to eat so quickly?} \newline \textit{Or should I take him to the vet?} \\
\textbf{Intent} & Querent offers two potential answers in the form of polar questions. \\
\textbf{Strategy} & Select one of the answers offered (e.g. ``Yes, it is normal") or reject both (e.g. ``Neither -- try feeding it less but more often").\\
\multicolumn{2}{l}{} \\
\multicolumn{2}{l}{\textsc{Conditional}} \\
\hhline{==}
\textbf{Example} &
\textit{Has something changed that is making cats harder to buy?} \newline \textit{If so, what changed?} \\
\textbf{Intent} & Q2 only matters if the answer to Q1 is ``yes''.\\
\textbf{Strategy} & First consider what the answer to Q1 is and then answer Q2. \\
\multicolumn{2}{l}{} \\
\multicolumn{2}{l}{\textsc{Elaborative}} \\
\hhline{==}
\textbf{Example} &
\textit{How can I transport my cats if I am moving a long distance?} \newline \textit{For example, flying them from NYC to London?} \\
\textbf{Intent} & Querent wants a more specific answer. \\
\textbf{Strategy} & Combine context and answer the second question only. \\
\end{tabularx}
\caption{The five types of MSQ we describe, an example of each, the querent's intent, and the resulting answering strategy. \newcite{mann1988rhetorical}'s \textsc{elaboration}, \textsc{condition} and \textsc{restatement} relations correspond roughly to three of the relations we recognise.}
\label{tab:types}
\end{table}
\section{MSQ type as a proxy for speaker intent}
\label{sec:msq type}
MSQs are distinct from sequences of standalone questions in that their subparts need to be considered as a unit (see \ref{ex1} in Section \ref{sec:intro}). This is because they form a \textbf{discourse}: a coherent sequence of utterances \cite{Hobbs1979}. In declarative sentences, the relationship between their different parts is specified by ``discourse relations'' \cite{Stede2011,Kehler2006}, which may be signalled with discourse markers (e.g. \textit{if}, \textit{because}) or discourse adverbials (e.g. \textit{as a result}, see \newcite{rohde2015}). We propose adapting the notion of discourse relations to interrogatives.
A particularly useful approach to discourse relations in the context of MSQs is Rhetorical Structure Theory (RST) \cite{mann1988rhetorical}, which understands them to be an expression of the speaker's communicative intent. Listeners can infer this intent under the assumptions that speakers are ``cooperative'' and keep their contributions as brief and relevant as possible \cite{Grice1975}. Transposing this theory to interrogatives, we can conceptualise the querent's communicative intent as a specific kind of answer. Reflecting this intent, the relation suggests an answering strategy.
We introduce five types of ``question discourse relations'' with a prototypical example from our data set, highlighting the inferred intent and the proposed answering strategy in \Cref{tab:types}.
\section{Classification using contrastive features}
Since \newcite{ciurca2019} found that using conventional discourse parsers created for declaratives is not suitable for extracting discourse relations from MSQs, we design our own annotation scheme and use it to implement a baseline classifier. Following previous work on extracting discourse relations \cite{rohde2015}, we use discourse markers and discourse adverbials alongside other markers indicative of the structure of the question (listed in \cref{app:feats}) to identify explicitly signalled relations.\footnote{Since implicit discourse relations are pervasive and challenging to automatic systems \cite{Sporleder2008}, we make no attempt to extract them here.}
We construct a high-precision, low-recall set of rules to distinguish the most prototypical forms of the five types using combinations of binary contrastive features. To derive the relevant features, we consider the minimal edits to examples of MSQs required to break or change the type of discourse relation between their parts. We then define a feature mask for each MSQ type which denotes whether each feature is required, disallowed or ignored by that type. Each mask is mutually exclusive by design.
Given a pair of questions, the system enumerates the values of each feature, and compares to the definitions in \Cref{app:mask}. If a match is found, the pair is assigned the corresponding MSQ label, otherwise it is assigned \textsc{Unknown}. This process is illustrated in \Cref{app:visual}.
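For concreteness, the following Python sketch mimics this mask-matching procedure. The feature names and masks below are simplified, hypothetical stand-ins for the actual lists in \cref{app:feats} and \Cref{app:mask}, and are included only to illustrate the control flow.
\begin{verbatim}
# Hypothetical feature masks; the real feature set is given in the appendices.
REQUIRED, DISALLOWED, IGNORED = True, False, None

MASKS = {
    "Disjunctive": {"starts_with_or": REQUIRED, "has_if_clause": DISALLOWED},
    "Conditional": {"has_if_clause": REQUIRED, "starts_with_or": DISALLOWED},
}

def extract_features(q1, q2):
    """Enumerate binary surface features of a question pair (toy versions)."""
    q2_low = q2.strip().lower()
    return {
        "starts_with_or": q2_low.startswith("or "),
        "has_if_clause": " if " in " " + q2_low + " ",
    }

def classify(q1, q2):
    feats = extract_features(q1, q2)
    for label, mask in MASKS.items():
        if all(feats[f] == want for f, want in mask.items() if want is not IGNORED):
            return label
    return "Unknown"

print(classify("Should I fly my cats to London?",
               "Or is the ferry a better option?"))   # -> Disjunctive
\end{verbatim}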
\begin{figure*}[!t]
\begin{subfigure}[t]{.5\textwidth}
\centering
%
\includegraphics[width=\linewidth]{figures/conf_mat_labeller.pdf}%
\caption{Confusion matrix for our classifier evaluated on our hand-annotated test set. The classifier can reliably detect \textsc{Disjunctive} and \textsc{Conditional} MSQs, achieving a high overall precision score of 82.9\%.}
\label{fig:cm_300}%
\end{subfigure}
~
\begin{subfigure}[t]{.5\textwidth}
\centering
%
\includegraphics[width=\linewidth]{figures/count_by_type.pdf}%
\caption{Counts of each MSQ type in our test set, according to our annotation and our classifier. While \textsc{Separable} MSQs appear to be the most prevalent, the classifier identifies only a small fraction of them, implying that they are likely to be implicitly signalled. \textsc{Disjunctive} and \textsc{Conditional} are the most likely to be explicitly signalled.}
\label{fig:type_counts}%
\end{subfigure}
\caption{}
\end{figure*}
To evaluate our classifier, 420 MSQs from our test set were annotated by two native speakers. We then evaluate the classifier on the subset of 271 samples for which both annotators agreed on the MSQ type. The resultant confusion matrix is shown in \Cref{fig:cm_300}, with the classifier achieving 82.9\% precision and 26.5\% recall.
Overall, we find that our classifier performs well for a heuristic approach, but that real world data contains many subtleties that can break our assumptions. During the annotation process, we found many instances of single questions followed by a question which fulfils a purely social function, such as ``\textit{Is it just me or this a problem?}'' (a \textit{phatic} question, see \newcite{Robinson1992}). MSQs can also exhibit more than one intent, presenting a challenge for both our classifier and the expert annotators (see \Cref{app:annotator}).
A limitation of our classifier is the focus on explicit MSQs, which can be identified with well-defined features. The low recall of our classifier indicates that MSQs are often implicit, missing certain markers or not completely fulfilling the distinguishing requirements. \Cref{fig:type_counts} shows that while \textsc{Disjunctive} and \textsc{Conditional} MSQs are often explicitly signalled, the other types are likely to be implicit.
\section{Conclusion}
\label{sec:conclusion}
Inspired by the role of discourse relations in MSQ answering strategies, we propose a novel definition of five different categories of MSQs based on their corresponding speaker intents. We introduce a rich and diversified multi-sentence question dataset, which contains 162,000 MSQs extracted from Stack Exchange. This achieves our goal of providing a resource for further study of MSQs. Additionally, we implement a baseline classifier based on surface features as a preliminary step towards successful answering strategies for MSQs.
Future work could improve on our classifier by considering implicit MSQs, with one potential approach being to transform explicit MSQs into implicit examples by removing some markers while ensuring the relation is still valid. Other areas for further work include implementing appropriate answering strategies for different types of MSQs, and investigating whether and how longer chains of MSQs differ compared to pairs of connected questions.
\section*{Acknowledgments}
This work was supported in part by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh. We would like to thank Bonnie Webber for her supervision, and Ivan Titov and Adam Lopez for their useful advice.
\bibliographystyle{acl}
\section*{Introduction}
In recent years, Topological Data Analysis (TDA) has established itself as a useful tool for managing the huge amounts of data of the present digital world~\cite{Ca09}. In particular, persistent homology has assumed a relevant role as an efficient tool for qualitative and topological comparison of data~\cite{EdMo13}, since in several applications we can express the acts of measurement by $\mathbb{R}^m$-valued functions defined on a topological space, thus inducing filtrations on such a space~\cite{BiDFFa08}. These filtrations can be analyzed by means of the standard methods used in persistent homology. For further details about persistent homology we refer the reader to~\cite{EdHa08}.
The importance of group equivariance in machine learning is well known (see, e.g.,~\cite{AnRoPo16,CoWe16,MaVoKo17,MaBoBr15}). Our work on group equivariant non-expansive operators (GENEOs) is devoted to possibly establishing a link between persistence theory and machine learning. Our basic idea is that acts of measurement are directly influenced by the observer, and we should mostly focus on well approximating the observer, rather than precisely describing the data (see, e.g.,~\cite{Fr16}). In some sense, we could see the observer as a collection of GENEOs acting on a suitable space of data and encode in the choice of these operators the invariance we are interested in.
The concept of invariance group leads us to consider the \emph{natural pseudo-distance} as our main tool to compare data. Let us consider two real-valued functions $\varphi$, $\psi$ on a topological space $X$, representing the data we want to compare, and a group $G$ of self-homeomorphisms of $X$. Roughly speaking, the computation of the natural pseudo-distance $d_G$ between $\varphi$ and $\psi$ is the attempt of finding the best correspondence between these two functions with respect to the invariance group $G$.
Unfortunately, $d_G$ is difficult to compute, but \cite{FrJa16} illustrates a possible path to approximate the natural pseudo-distance by means of a dual approach involving persistent homology and GENEOs. In particular, one can see that a good approximation of the space $\mathcal{F}(\varPhi,G)$ of all GENEOs corresponds to a good approximation of the pseudo-distance $d_G$. In order to extend our knowledge about $\mathcal{F}(\varPhi,G)$, we devote this paper to introduce some new methods to construct new GENEOs from a given set of GENEOs.
The outline of our paper is as follows. In Section \ref{model} we briefly present our mathematical framework. In Section \ref{powermeans} we give a new result about building GENEOs by power means and show some examples to explain why this method is useful and meaningful. In Section \ref{series} we illustrate a new procedure to build new GENEOs by means of series of GENEOs. In particular, this is a first example of the construction of an operator starting from an infinite set of GENEOs.
\section{Our mathematical model}
\label{model}
In this section the mathematical model illustrated in~\cite{FrJa16} will be briefly recalled.
Let $X$ be a (non-empty) topological space, and $\varPhi$ be a topological subspace of the topological space $C^{0}_{b}(X,\mathbb{R})$ of the continuous bounded functions from $X$ to $\mathbb{R}$, endowed with the topology induced by the sup-norm $\| \cdot \|_{\infty}$.
The elements of $\varPhi$ represent our data and are called \textit{admissible filtering functions} on the space $X$. We also assume that $\varPhi$ contains at least the constant functions $c$ such that $ |c| \le \sup_{\varphi \in \varPhi} \| \varphi \|_\infty$.
The invariance of the space $\varPhi$ is represented by the action of a subgroup $G$ of the group $\mathrm{Homeo}(X)$ of all homeomorphisms from $X$ to itself.
The group $G$ is used to act on $\Phi$ by composition on the right, i.e. we suppose that $\varphi \circ g$ is still an element of $\varPhi$ for any $\varphi\in \Phi$ and any $g\in G$. In other words, the functions $\varphi$ and $\varphi\circ g$, elements of $\varPhi$, are considered equivalent to each other for every $g\in G$.
In this theoretical framework we use the \emph{natural pseudo-distance $d_G$} to compare functions.
\begin{definition}\label{defdG}
For every $\varphi_1,\varphi_2\in\Phi$ we can define the function $d_G(\varphi_1,\varphi_2):=\inf_{g \in G}\sup_{x \in X}\left|\varphi_1(x)-\varphi_2(g(x))\right|$ from $\varPhi \times \varPhi$ to $\mathbb{R}$. The function $d_G$ is called the \emph{natural pseudo-distance} associated with the group $G$ acting on $\Phi$.
\end{definition}
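As a toy illustration of this definition, the following Python sketch approximates $d_G$ when $X$ is replaced by a uniform grid on $S^1$, the functions are sampled on that grid, and $G$ is approximated by the finite group of cyclic shifts of the grid; this discretization is an illustrative assumption of ours and not part of the theory.
\begin{verbatim}
import numpy as np

def natural_pseudo_distance(phi, psi):
    """Approximate d_G on S^1 with G replaced by the cyclic shifts of the grid."""
    # inf over g of sup_x |phi(x) - psi(g(x))|, with g ranging over all shifts
    return min(np.max(np.abs(phi - np.roll(psi, s))) for s in range(len(psi)))

theta = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
phi = np.abs(np.sin(theta))
psi = np.abs(np.cos(theta))                # a rotated copy of phi
print(natural_pseudo_distance(phi, psi))   # ~ 0, since psi = phi composed with g
\end{verbatim}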
We can consider this (extended) pseudo-metric as the ground truth for the comparison of functions in $\Phi$ with respect to the action of the group $G$. Unfortunately, $d_G$ is usually difficult to compute. However, the natural pseudo-distance can be studied and approximated by a method involving \emph{$G$-equivariant non-expansive operators}.
\begin{definition}
A $G$-equivariant non-expansive operator (GENEO) for the pair $(\Phi,G)$ is a function
$$F : \Phi \longrightarrow \Phi $$
that satisfies the following properties:
\begin{enumerate}
\item F is $G$-equivariant: $F(\varphi\circ g)=F(\varphi)\circ g, \quad \forall \ \varphi\in \Phi, \quad \forall \ g \in G$;
\item F is non-expansive: $\| F(\varphi_{1})-F(\varphi_{2})\|_{\infty} \le \| \varphi_{1} -\varphi_{2}\|_{\infty}, \quad \forall \ \varphi_{1},\varphi_{2}\in \Phi$.
\end{enumerate}
\end{definition}
The symbol $\mathcal{F}(\Phi,G)$ is used to denote the set of all $G$-equivariant non-expansive operators for $(\Phi,G)$.
Obviously $\mathcal{F}(\Phi,G)$ is not empty because it contains at least the identity operator.
\begin{remark}
The non-expansivity property means that the operators in $\mathcal{F}(\Phi,G)$ are $1$-Lipschitz functions and therefore they are continuous. We underline that GENEOs are not required to be linear.
\end{remark}
If $X$ has nontrivial homology in degree $k$, the following key result holds~\cite{FrJa16}.
\begin{theorem}\label{maintheoremforG}
$d_G(\varphi_1,\varphi_2)=\sup_{F\in \mathcal{F}(\varPhi,G)} d_{match}(\mathrm{Dgm}_k(F(\varphi_1)),\mathrm{Dgm}_k(F(\varphi_2)))$, where $\mathrm{Dgm}_k(\varphi)$ denotes the $k$-th persistence diagram of the function $\varphi:X\to\mathbb{R}$ and $d_{match}$ is the classical matching distance.
\end{theorem}
Persistent homology and the natural pseudo-distance are related to each other by Theorem~\ref{maintheoremforG} via GENEOs.
This result enables us to approximate $d_G$ by means of $G$-equivariant non-expansive operators. The construction of new classes of GENEOs is consequently a relevant step in the approximation of the space $\mathcal{F}(\Phi,G)$, and hence in the computation of the natural pseudo-distance, so justifying the interest for the results shown in Sections~\ref{powermeans} and \ref{series}.
\section{Building new GENEOs by means of power means}
\label{powermeans}
In this section we introduce a new method to build GENEOs, concerning the concept of power mean.
Now we recall a proposition that enables us to find new GENEOs, based on the use of $1$-Lipschitz functions (see~\cite{FrQu16}).
\begin{proposition}\label{lipfun}
Let $L$ be a 1-Lipschitz function from $\mathbb{R}^n$ to $\mathbb{R}$, where $\mathbb{R}^n$ is endowed with the norm $\|(x_1,\dots,x_n)\|_\infty=\max\{|x_1|,\dots,|x_n|\}$. Assume also that $F_1, \dots, F_n$ are GENEOs for $(\varPhi,G)$. Let us define the function $L^*(F_1, \dots , F_n):\varPhi \longrightarrow C^0_b(X,\mathbb{R})$ by setting
\begin{equation*}
L^*(F_1, \dots , F_n)(\varphi)(x):= L(F_1(\varphi)(x), \dots , F_n(\varphi)(x)).
\end{equation*}
If $L^*(F_1, \dots , F_n)(\varPhi)\subseteq \varPhi$, the operator $L^*(F_1, \dots , F_n)$ is a GENEO for $(\varPhi,G)$.
\end{proposition}
In order to apply this proposition, we recall some definitions and properties about power means and $p$-norms.
Let us consider a sample of real numbers $x_1, \dots, x_n$ and a real number $p>0$. As is well known, the power mean $M_p(x_1, \dots, x_n)$ of $x_1, \dots, x_n$ is defined by setting
\begin{equation*}
M_p(x_1, \dots, x_n):= \left(\frac{1}{n}\sum_{i=1}^n |x_i|^p\right)^\frac{1}{p}.
\end{equation*}
In order to proceed, we consider the function $\|\cdot \|_p: \mathbb{R}^n \longrightarrow \mathbb{R}$ defined by setting
\begin{equation*}
\| x \|_{p}=(|x_1|^p + |x_2|^p + \dots + |x_n|^p)^\frac{1}{p}
\end{equation*}
where $x=(x_1,\dots,x_n)$ is a point of $\mathbb{R}^n$. It is well known that, for $p\ge 1$, $\|\cdot\|_p$ is a norm and that for any $x \in \mathbb{R}^n$, we have $\lim_{p \to \infty}\|x\|_p=\|x\|_\infty$.
Finally, it is easy to check that if $x \in \mathbb{R}^n$ and $0< p < q < \infty$, it holds that
\begin{equation}
\label{ineqpq}
\|x\|_q \le \|x\|_{p} \le n^{\frac{1}{p} - \frac{1}{q}} \|x\|_q.
\end{equation}
For $q$ tending to infinity, we obtain a similar inequality:
\begin{equation}
\label{ineqpinfty}
\|x\|_\infty \le \|x\|_p \le n^\frac{1}{p} \|x\|_\infty.
\end{equation}
Now we can define a new class of GENEOs. Let us consider $F_1, \dots , F_n$ GENEOs for $(\varPhi,G)$ and $p>0$. Let us define the operator $M_p(F_1, \dots, F_n): \varPhi \longrightarrow C^0_b(X, \mathbb{R})$ by setting
\begin{equation*}
M_p(F_1, \dots, F_n)(\varphi)(x):= M_p(F_1(\varphi)(x), \dots, F_n(\varphi)(x)).
\end{equation*}
\begin{theorem}
If $p \ge 1$ and $M_p(F_1, \dots, F_n)(\varPhi) \subseteq \varPhi$, $M_p(F_1, \dots, F_n)$ is a GENEO for $(\varPhi,G)$.
\end{theorem}
\begin{proof}
If we show that $M_p$ is a $1$-Lipschitz function for $p \ge 1$, Proposition \ref{lipfun} will ensure that $M_p(F_1, \dots, F_n)$ is a GENEO.
Let $p \ge 1$ and $x,y \in \mathbb{R}^n$. Since $\| \cdot \|_p$ is a norm, the reverse triangle inequality holds. Therefore, because of (\ref{ineqpinfty}) we have that:
\begin{align*}
\left| \left(\frac{1}{n}\sum_{i=1}^n |x_i|^p\right)^\frac{1}{p} - \left(\frac{1}{n}\sum_{i=1}^n |y_i|^p\right)^\frac{1}{p}\right| & = \left(\frac{1}{n}\right)^\frac{1}{p} \left| \left(\sum_{i=1}^n |x_i|^p\right)^\frac{1}{p} - \left(\sum_{i=1}^n |y_i|^p\right)^\frac{1}{p}\right|\\
& = \left(\frac{1}{n}\right)^\frac{1}{p} \left| \| x \|_p - \| y \|_p \right|\\
& \le \left(\frac{1}{n}\right)^\frac{1}{p} \| x - y \|_p \\
& \le \left(\frac{1}{n}\right)^\frac{1}{p} n^\frac{1}{p} \| x - y \|_\infty = \| x - y \|_\infty.
\end{align*}
Hence, for $p \ge 1$, $M_p$ is non-expansive (i.e., $1$-Lipschitz) and the statement of our theorem is proved.
\end{proof}
\begin{remark}
If $ 0 < p<1$ and $n > 1$, $M_p$ is not a $1$-Lipschitz function. This can be easily proved by showing that for $x_2=x_3=\dots=x_n=1$ the derivative $\frac{\partial M_p}{\partial x_1}$ is not bounded.
\end{remark}
\subsection{Examples}
In this subsection we want to justify the use of the operator $M_p$. In order to make this point clear, let us consider the space $\varPhi$ of all $1$-Lipschitz functions from the unit circle $S^1$ to $[0,1]$ and the invariance group $G$ of all rotations of $S^1$. Now, we can take into consideration the following operators:
\begin{itemize}
\item the identity operator $F_1:\varPhi \longrightarrow \varPhi$;
\item the operator $F_2:\varPhi \longrightarrow \varPhi$ defined by setting $F_2(\varphi):= \varphi \circ \rho_\frac{\pi}{2}$ for any $\varphi \in \varPhi$, where $\rho_\frac{\pi}{2}$ is the rotation through a $\frac{\pi}{2}$ angle.
\end{itemize}
Let us set $\bar{\varphi} = |\sin{x}|$ and $\bar{\psi}= \sin^2{x}$.
As we can see in Figures 1 and 2, the functions $F_i(\bar{\varphi})$ and $F_i(\bar{\psi})$ have the same persistence diagrams for $i = 1,2$. In order to distinguish $\bar{\varphi}$ and $\bar{\psi}$, we define the operator $F:\varPhi \longrightarrow \varPhi$ by setting $F(\varphi):=M_1(F_1,F_2)(\varphi)= \frac{F_1(\varphi) + F_2(\varphi)}{2}$. In particular,
\begin{equation}
F(\bar{\varphi}):=M_1(F_1,F_2)(\bar{\varphi})= \frac{F_1(\bar{\varphi}) + F_2(\bar{\varphi})}{2}= \frac{|\sin{x}| + |\cos{x}|}{2}
\end{equation}
and
\begin{equation}
F(\bar{\psi}):=M_1(F_1,F_2)(\bar{\psi})= \frac{F_1(\bar{\psi}) + F_2(\bar{\psi})}{2}= \frac{\sin^2{x} + \cos^2{x}}{2}=\frac{1}{2}.
\end{equation}
We can easily check that $F(\bar{\varphi})$ and $F(\bar{\psi})$ have different persistence diagrams; thus $F$ allows us to distinguish between $\bar{\varphi}$ and $\bar{\psi}$. All this proves that the use of the operator $M_1$ can increase the information, letting $F_1$ and $F_2$ cooperate.
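A minimal numerical check of this example is sketched below in Python, under the illustrative assumption that $S^1$ is discretized by a uniform grid; comparing the extrema of the resulting functions already shows that $F$ separates $\bar{\varphi}$ from $\bar{\psi}$.
\begin{verbatim}
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)
phi_bar = np.abs(np.sin(theta))              # \bar{\varphi}
psi_bar = np.sin(theta) ** 2                 # \bar{\psi}

rot = len(theta) // 4                        # quarter turn on this grid
F1 = lambda f: f                             # identity GENEO
F2 = lambda f: np.roll(f, -rot)              # composition with the rotation

def F(f):                                    # M_1(F1, F2)
    return 0.5 * (F1(f) + F2(f))

# Both inputs share the same extrema, but F separates them:
# F(psi_bar) is the constant 1/2, while F(phi_bar) ranges over [1/2, sqrt(2)/2].
for name, f in [("phi", phi_bar), ("psi", psi_bar)]:
    print(name, round(F(f).min(), 3), round(F(f).max(), 3))
\end{verbatim}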
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.35]{phipsi.eps}
\qquad\qquad
\includegraphics[scale=0.35]{F1phipsi.eps}
\caption{On the left: $\bar{\varphi}$ and $\bar{\psi}$ have the same persistence diagrams. On the right: $F_1(\bar{\varphi})$ and $F_1(\bar{\psi})$ have the same persistence diagrams.}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.35]{F2phipsi.eps}
\qquad\qquad
\includegraphics[scale=0.35]{Fphipsi.eps}
\caption{On the left: $F_2(\bar{\varphi})$ and $F_2(\bar{\psi})$ have the same persistence diagrams. On the right: the persistence diagrams of $F(\bar{\varphi})$ and $F(\bar{\psi})$ are different from each other.}
\end{figure}
A similar argument still holds for values of $p$ greater than one. Under the same hypotheses about $\varPhi$, we can consider the same GENEOs $F_1$, $F_2$ and the functions $\bar{\varphi}= |\sin x|$ and $\hat{\psi}= (\sin^2 x)^\frac{1}{p}$. For the sake of simplicity, we fix $p=3$ in the following figures. As we can see in Figures 3 and 4, we cannot distinguish $\bar{\varphi}$ and $\hat{\psi}$ by using persistent homology since their persistence diagrams coincide. Applying $F_1$ or $F_2$ does not help either, but when we apply $M_p(F_1,F_2)$ we can distinguish $\bar{\varphi}$ from $\hat{\psi}$ by means of their persistence diagrams (see Figure 4).
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.35]{phipsip.eps}
\qquad\qquad
\includegraphics[scale=0.35]{F1phipsip.eps}
\caption{On the left: $\bar{\varphi}$ and $\hat{\psi}$ have the same persistence diagrams. On the right: $F_1(\bar{\varphi})$ and $F_1(\hat{\psi})$ have the same persistence diagrams.}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.35]{F2phipsip.eps}
\qquad\qquad
\includegraphics[scale=0.35]{Fphipsip.eps}
\caption{On the left: $F_2(\bar{\varphi})$ and $F_2(\hat{\psi})$ have the same persistence diagrams. On the right: the persistence diagrams of $F(\bar{\varphi})$ and $F(\hat{\psi})$ are different from each other.}
\end{figure}
These examples justify the use of the previously defined power mean operators $M_p(F_1, \dots , F_n)$ to combine the information given by the operators $F_1, \dots , F_n$.
\section{Series of GENEOs}
\label{series}
First we recall some well-known results about series of functions.
\begin{theorem}
Let $(a_k)$ be a positive real sequence such that $(a_k)$ is decreasing and $\lim_{k \to \infty}a_k=0$. Let $(g_k)$ be a sequence of bounded functions from the topological space $X$ to $\mathbb{C}$. If there exists a real number $M>0$ such that
\begin{equation}
\left|\sum_{k=1}^n g_k(x)\right| \le M
\end{equation}
for every $x \in X$ and every $n \in \mathbb{N}$, then the series $\sum_{k=1}^{\infty} a_k g_k$ is uniformly convergent on $X$.
\end{theorem}
The second result ensures that a uniformly convergent series of continuous functions is a continuous function.
\begin{theorem}
Let $(f_n)$ be a sequence of continuous functions from a compact topological space $X$ to $\mathbb{R}$. If the series $\sum_{k=1}^{\infty} f_k$ is uniformly convergent, then $\sum_{k=1}^{\infty} f_k$ is continuous from $X$ to $\mathbb{R}$.
\end{theorem}
Now we can define a series of GENEOs. Let us consider a compact pseudo-metric space $(X,d)$, a space of real-valued continuous functions $\varPhi$ on $X$ and a subgroup $G$ of the group $\text{Homeo}(X)$ of all homeomorphisms from $X$ to $X$, such that if $\varphi \in \varPhi$ and $g \in G$, then $\varphi \circ g \in \varPhi$. Let $(a_k)$ be a positive real sequence such that $(a_k)$ is decreasing and $\sum_{k=1}^{\infty}a_k \le 1$.
Let us suppose that $(F_k)$ is a sequence of GENEOs for $(\varPhi,G)$ and that for any $\varphi \in \varPhi$ there exists $M(\varphi)>0$ such that
\begin{equation}
\left|\sum_{k=1}^n F_k(\varphi)(x)\right| \le M(\varphi)
\end{equation}
for every $x \in X$ and every $n \in \mathbb{N}$. These assumptions fulfill the hypotheses of the previous theorems and ensure that the following operator is well-defined.
Let us consider the operator $F: C_b^0(X,\mathbb{R}) \longrightarrow C_b^0(X,\mathbb{R})$ defined by setting
\begin{equation}
F(\varphi):= \sum_{k=1}^{\infty} a_k F_k(\varphi).
\end{equation}
\begin{proposition}
If $F(\varPhi)\subseteq \varPhi$, then F is a GENEO for $(\varPhi,G)$.
\end{proposition}
\begin{proof}
\begin{itemize}
\item Let $g\in G$. Since $F_k$ is $G$-equivariant for any $k$ and $g$ is uniformly continuous (because $X$ is compact), $F$ is $G$-equivariant:
\begin{align*}
F(\varphi \circ g) & = \sum_{k=1}^{\infty} a_k F_k(\varphi\circ g)\\
& = \sum_{k=1}^{\infty} a_k (F_k(\varphi)\circ g)\\
& = \left( \sum_{k=1}^{\infty} a_k F_k(\varphi)\right) \circ g \\
& = F(\varphi) \circ g
\end{align*}
for any $\varphi \in \varPhi$.
\item Since $F_k$ is non-expansive for any $k$ and $\sum_{k=1}^\infty a_k \le 1$, $F$ is non-expansive:
\begin{align*}
\|F(\varphi_1) - F( \varphi_2)\|_\infty & = \left\| \sum_{k=1}^{\infty} a_k F_k(\varphi_1) - \sum_{k=1}^{\infty} a_k F_k(\varphi_2) \right\|_\infty \\
& = \left\| \lim_{n \to \infty} \left( \sum_{k=1}^n a_k F_k(\varphi_1) - \sum_{k=1}^n a_k F_k(\varphi_2) \right) \right\|_\infty \\
& = \lim_{n \to \infty} \left\| \sum_{k=1}^n a_k (F_k(\varphi_1) - F_k(\varphi_2)) \right\|_\infty \\
& \le \lim_{n \to \infty} \sum_{k=1}^n (a_k \| F_k(\varphi_1) - F_k(\varphi_2)\|_\infty) \\
& \le \lim_{n \to \infty} \sum_{k=1}^n (a_k \| \varphi_1 - \varphi_2 \|_\infty) \\
& = \sum_{k=1}^\infty a_k \| \varphi_1 - \varphi_2 \|_\infty \\
& \le \| \varphi_1 - \varphi_2 \|_\infty.
\end{align*}
\end{itemize}
\end{proof}
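For illustration, the following Python sketch evaluates a truncation of such a series in the setting where $G$ is the group of rotations of a discretized $S^1$, each $F_k$ is composition with a fixed rotation, and $a_k=2^{-k}$ (so that $\sum_{k=1}^{\infty} a_k \le 1$); the truncation level and the specific choice of the $F_k$ are illustrative assumptions.
\begin{verbatim}
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)

def F_k(f, k):
    """k-th GENEO: composition with a fixed rotation (equivariant for rotations)."""
    return np.roll(f, -k)

def F(f, n_terms=40):
    """Truncation of sum_k a_k F_k with a_k = 2**(-k), so that sum_k a_k <= 1."""
    return sum((0.5 ** k) * F_k(f, k) for k in range(1, n_terms + 1))

phi = np.abs(np.sin(theta))
psi = np.abs(np.sin(theta + 0.1))            # a perturbed input
lhs = np.max(np.abs(F(phi) - F(psi)))
rhs = np.max(np.abs(phi - psi))
print(lhs <= rhs + 1e-12)                    # non-expansiveness holds numerically
\end{verbatim}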
\section*{Conclusions}
In this work we have illustrated some new methods to build new classes of $G$-equivariant non-expansive operators (GENEOs) from a given set of operators of this kind. The leading purpose of our work is to expand our knowledge about the topological space $\mathcal{F}(\varPhi,G)$ of all GENEOs. If we can well approximate the space $\mathcal{F}(\varPhi,G)$, we can obtain a good approximation of the natural pseudo-distance $d_G$ (Theorem \ref{maintheoremforG}). Searching for new operators is a fundamental step in getting more information about the structure of $\mathcal{F}(\varPhi,G)$, and hence we need to find new methods to build GENEOs. Moreover, the approximation of $\mathcal{F}(\varPhi,G)$ can be seen as an approximation of the considered observer, represented as a collection of GENEOs. Many questions remain open. In particular, we should study an extended theoretical framework that involves GENEOs from the pair $(\varPhi,G)$ to a different pair $(\Psi, H)$. Future research on this topic is planned.
\section*{Acknowledgment}
The research described in this article has been partially supported by GNSAGA-INdAM (Italy).
\bibliographystyle{model1-num-names}
\section{Introduction}
Massive multiple-input multiple-output (MIMO) is a key technology in the fifth generation (5G) wireless standard \cite{Parkvall17}. In massive MIMO networks, the base stations (BSs) are equipped with a very large number of antennas \cite{Marzetta2010,Larsson2014,Yang2015,Akbar2016a,Marzetta2016}. Massive MIMO offers high spectral efficiency, energy efficiency, and throughput \cite{Nguyen2017,Akbar2018a}. However, practical aspects related to the deployment of massive MIMO in 5G networks are still relatively unexplored. Most existing research in massive MIMO has treated deployments with all base station antennas co-located in a compact antenna array \cite{Marzetta2016,Liu2014,Emil2016b,Akbar2016a,Akbar16b}.
In a co-located deployment, the path-loss between a user and all the antenna elements within a cell is assumed to be the same, because the base station antennas in the array are relatively close to one another and have the same radiation patterns. For this deployment scenario, performance has been characterized analytically for independent Rayleigh fading \cite{Marzetta2016} and for correlated Rayleigh fading \cite{Emil2017bb}, and efficient power control algorithms are available.
In various practical and foreseen deployment scenarios, the antennas connected to a given BS may not be arranged in a compact array. For example, each BS may be connected to several compact sub-arrays that serve different sectors. Alternatively, antennas may be distributed over facades or rooftops of buildings. Consequently, the distance-dependent path-loss may vary among the antenna elements belonging to one BS. If non-omnidirectional antennas are used at BSs, the antenna pointing direction will also influence the path-loss. Herein, we use the term ``distributed antenna array'' (DAA) to refer to a setup where the BS antennas are grouped into sub-arrays. Importantly, for a given user terminal, each sub-array sees a different path loss but the antennas within each sub-array see the same path-loss. The case of concern is a multi-cell DAA system, comprising multiple autonomous cells, where each cell is served by a BS. Furthermore, each DAA BS is connected to multiple compact sub-arrays, as depicted in Fig.~\ref{network_config}. Note that as a special case, a DAA system with only a single cell and each sub-array comprising of only few antennas, would be equivalent to the setup called ``cell-free massive MIMO'' in \cite{Nayebi2015,Ngo2017}.
\begin{figure}[!t]
\centering
\begin{subfigure}[b]{0.5\textwidth}
\centering
{\includegraphics[width=3.4in]{DAA}}
\label{DAA_disp}
\end{subfigure}
\caption{An illustration of the multi-cell DAA massive MIMO system. Each DAA massive MIMO cell operates autonomously, i.e., there is no ``network MIMO'' or ``CoMP''.}
\label{network_config}
\end{figure}
Power control in massive MIMO is essential in order to provide uniformly good service throughout the network. Recent research has extensively studied power control in cell-free massive MIMO systems. By jointly optimizing the number of active access points (APs) and the power transmitted by each AP, it is possible to minimize the total downlink power consumption in cell-free massive MIMO while satisfying the individual downlink rate requirements of users in the network \cite{Chien2019}. Joint power allocation and scheduling in cell-free massive MIMO was studied while considering the intermittent network data traffic \cite{Chen2019}. Similarly, \cite{Francis2019} analyzed different power control policies in cell-free massive MIMO where a subset of available APs may be utilized to serve the users. Furthermore, \cite{Zhang2019} considers the max-min power control for multicast cell-free massive MIMO. Additionally, the power allocation problem in cell-free massive MIMO can also be formulated and solved as a deep-learning problem \cite{Andrea2019}. Since in a multi-cell DAA system the path-losses to the different same-cell DAAs differ, existing algorithms for power control, such as those in \cite[Chap.~5--6]{Marzetta2016}, \cite{Li2016,Cheng2017}, are inapplicable. Some existing multi-cell power control algorithms take the interference caused to BSs in another cell into account, but in a conceptually different way. In the proposed DAA massive MIMO system, the signals received at the distributed BSs within the same cell are treated as desired signals instead of interfering signals. As such, the proposed system model and setup is fundamentally different from traditional massive MIMO.
In this paper, we derive and evaluate new power-control algorithms for multi-cell massive MIMO with DAA. The specific novel contributions of our work are summarized as follows:
\begin{enumerate}
\item
We derive a generalized closed-form expression for the downlink signal-to-interference-plus-noise ratio (SINR) valid for multi-cell massive MIMO with an arbitrary number of sub-arrays per DAA massive MIMO cell, under the assumption of correlated Rayleigh channel fading. Results for uncorrelated channel fading follow as special cases.
\item
We investigate optimal downlink power allocation and equal-$\nu$ power allocation in various configurations of DAA massive MIMO. Specifically, we formulate the downlink power allocation problem as a max-min optimization problem and then rewrite it as a target SINR maximization problem, the solution to which ensures uniformly good quality of service.
\item We analyze and compare the throughputs obtained by various configurations of DAA multi-cell massive MIMO. We also demonstrate that the fading covariance significantly influences the performance.
\end{enumerate}
In addition, we present detailed numerical results to highlight and emphasize the performance advantages achieved by DAA massive MIMO. Notably, for some scenarios, DAA massive MIMO is capable of boosting the network performance by up to $55\%$. Furthermore, we reveal that under the assumption of uncorrelated channel fading, increasing DAAs and reducing the number of antennas per sub-array may not always provide an improvement in performance.
The system model considered in our work is novel and has major distinctions as compared to the cell-free massive MIMO systems. Importantly, the proposed system model is a multi-cell model that is general enough to encompass cell-free massive MIMO as a special case. Furthermore, the proposed system model describes a more realistic scenario where both inter-cell and intra-cell interferences are present and affect the performance of the system. Cell-free massive MIMO systems only have intra-cell interference and cannot describe the impact of inter-cell interference in a straightforward manner.
This paper is a comprehensive extension of our conference paper \cite{Akbar2018}. The main novel contributions include investigating correlated fading and providing new insightful numerical results. In \cite{Akbar2018}, we confirmed the performance benefits obtained by DAA massive MIMO. Building on this preliminary study, this paper provides a comprehensive performance analysis for our proposed power control algorithm in various configurations of DAA massive MIMO. In \cite{Akbar2018}, the users' locations were assumed to be the same for numerical results. Differently, in this paper we consider various user locations and evaluate the median and $95\%$ likely performances, which reliably indicates the network performance. Additionally, in this paper we examine the network performance under the assumptions of both correlated and uncorrelated channel fading, which is an important advancement relative to \cite{Akbar2018}.
\textit{Notations:} We denote vectors and matrices by lower-case boldface symbols and upper-case boldface symbols, respectively. $\mathbb{E}[\cdot]$ denotes the expectation, $\|\cdot\|$ denotes the $\textit{l}_2$ norm, $\textrm{tr}(\cdot)$ denotes the matrix trace, $(\cdot)^{\textrm{H}}$ denotes the Hermitian transpose, and $(\cdot)^{T}$ denotes the matrix transpose, and $\mathbf{I}_M$ denotes an $M\times M$ identity matrix.
\section{System Model}
In this paper, we consider a multi-cell multi-user massive MIMO network, as illustrated in Fig.~\ref{network_config}. The network consists of $L$ cells with $K$ single-antenna users in each cell\footnote{We consider the traditional definition of cells, where a coverage area is divided into non-overlapping regions and the users in each region are served by the network infrastructure located in the region. Different from traditional cellular networks where each cell is served by one BS, our network infrastructure in a cell consists of multiple distributed BSs.}. Notably, each cell has $N$ DAAs deployed at arbitrary locations, where each sub-array is equipped with $M$ antenna elements. As such, there are $M_{\textrm{tot}}=M\times N$ antenna elements in each cell. Each sub-array in a cell is connected to a CPU through backhaul links\footnote{In DAA massive MIMO, there is an information exchange overhead between sub-arrays and the CPU. The impact of this information exchange overhead on the performance of DAA massive MIMO is beyond the scope of the current work. A recent study analyzed the exchange of information between multiple CPUs in a cell-free system and demonstrated that it is possible to obtain performance similar to a single-CPU cell-free system \cite{Palou2019}. Furthermore, we assume that the information exchange between the sub-arrays and the CPU is perfectly synchronized.}. We highlight that no coordination is required between different cells in the system and the CPU in each cell only requires local information via backhaul links that can be implemented using cloud-RAN techniques. Furthermore, we assume that there is perfect synchronization in the system. We denote the BS in the \textit{j}-th cell by $\textrm{BS}_{j}$, where $j\in\{1,\cdots,L\}$. The \textit{n}-th antenna sub-array in the \textit{j}-th cell is represented by $\textrm{BS}_{j}^n$, where $n\in\{1,\cdots,N\}$. Furthermore, we represent the \textit{k}-th user in the \textit{j}-th cell by $\textrm{U}_{jk}$, where $k\in\{1,\cdots,K\}$. We represent the uplink channel between $\textrm{U}_{jk}$ and $\textrm{BS}_{l}^n$ by $\mathbf{h}_{jk}^{ln}$, where $l\in\{1,\cdots,L\}$. Additionally, we assume that the channels follow a correlated Rayleigh fading distribution, i.e., $\mathbf{h}_{jk}^{ln} \sim \mathcal{CN}(\mathbf{0},\mathbf{R}_{jk}^{ln})$, where $\mathbf{R}_{jk}^{ln}$ is the channel covariance matrix, which encapsulates various channel impairments such as average path-loss and spatial correlation. We clarify that the path loss between a user and all antenna elements of a DAA is considered constant. However, due to the physical separation between various DAAs, the path losses between a user and different DAAs in a cell are not constant. As such, existing power control algorithms cannot be applied to the system model considered in our work. The system model considered in this paper is a generalized model as compared to \cite{Nguyen2017,Nayebi2015,Ngo2017} since it is capable of describing various antenna array deployments. For example, we note that cell-free massive MIMO is a special case of our considered system model when $L=1$, $M=1$, and $N=M_{\textrm{tot}}$. Moreover, co-located massive MIMO is another special case of our considered system model when $M=M_{\textrm{tot}}$ and $N=1$.
We assume the network operates in the time division duplex (TDD) mode. Accordingly, the uplink and downlink channels are assumed to be the same and reciprocal within one channel coherence interval \cite{Marzetta2016}. Consequently, the BS utilizes the uplink channel estimates for downlink precoding based on the assumption of channel reciprocity. In the beginning of each channel coherence interval, the users in each cell transmit their pilot sequences to the same-cell BS, which then performs channel estimation. The channel estimation is followed by the downlink data transmission where each BS sends data to the same-cell users.
\subsection{Uplink Channel Estimation Under Perfect Channel Knowledge}
At the beginning of each channel coherence interval, all users send their pre-assigned pilot sequences to the same-cell BS for the purpose of channel estimation. We denote the pilot sequence assigned to $\textrm{U}_{jk}$ by $\mathbf{\boldsymbol{\phi}}_{jk}$, where $\|\mathbf{\boldsymbol{\phi}}_{jk}\|^2=1$, $j\in\{1,\cdots,L\}$, and $k\in\{1,\cdots,K\}$. We assume that each pilot sequence is of length $\tau_p$. We highlight that we need to estimate the channels between a user and all the same-cell DAAs in the system. In contrast, in co-located massive MIMO, the channel estimation is typically performed between a user and a single same-cell BS. Furthermore, we assume that each user within the same cell is assigned an orthogonal pilot sequence, i.e., $\tau_p=K$. The same set of pilot sequences is reused in each cell across the entire network. Consequently, the uplink pilot transmission received at the \textit{n}-th sub-array of $\textrm{BS}_{j}$, i.e., $\textrm{BS}_{j}^n$, is given as
\begin{align} \label{uplink_trans}
\mathbf{Y}^{jn} &= \sum\limits_{l=1}^L\sum\limits_{i=1}^K\mathbf{h}_{li}^{jn}\mathbf{\boldsymbol{\phi}}_{li}^\textrm{H} + \frac{1}{\sqrt{\rho_{\textrm{tr}}}}\mathbf{N}_{j}^n,
\end{align}
where $\mathbf{N}_{j}^n\in \mathbf{\mathbb{C}}^{M \times \tau}$ represents the additive white Gaussian noise (AWGN) at $\textrm{BS}_{j}^n$, and $\rho_{\textrm{tr}}$ is the normalized pilot power per user. Afterwards, the sub-array $\textrm{BS}_{j}^n$ correlates \eqref{uplink_trans} with the known pilot sequence to obtain
\begin{align} \label{uplink_trans1}
\mathbf{y}_{jk}^{jn} &= \left(\sum\limits_{l=1}^L\sum\limits_{i=1}^K\mathbf{h}_{li}^{jn}\mathbf{\boldsymbol{\phi}}_{li}^\textrm{H} + \frac{1}{\sqrt{\rho_{\textrm{tr}}}}\mathbf{N}_{j}^n\right)\mathbf{\boldsymbol{\phi}}_{jk}.
\end{align}
We denote the correlation between pilot sequences assigned to $\textrm{U}_{li}$ and $\textrm{U}_{jk}$ as $\rho_{li}^{jk}=\mathbf{\boldsymbol{\phi}}_{li}^\textrm{H}\mathbf{\boldsymbol{\phi}}_{jk}$. Based on this definition, we re-express \eqref{uplink_trans1} as
\begin{align} \label{uplink_trans2}
\mathbf{y}_{jk}^{jn} &= \mathbf{h}_{jk}^{jn} + \sum\limits_{l=1}^L \sum\limits_{\substack{i=1 \\ (l,i) \neq (j,k)}}^K \rho_{li}^{jk}\mathbf{h}_{li}^{jn} + \frac{1}{\sqrt{\rho_{\textrm{tr}}}}\mathbf{N}_{j}^n\mathbf{\boldsymbol{\phi}}_{jk}.
\end{align}
From \eqref{uplink_trans2}, we obtain the MMSE uplink channel estimate, i.e., $\mathbf{\widehat{h}}_{jk}^{jn}$ of the channel $\mathbf{h}_{jk}^{jn}$ as \cite{Emil2016a}
\begin{align} \label{mmse_est}
\mathbf{\widehat{h}}_{jk}^{jn} &= \mathbf{W}_{jk}^{n}\mathbf{y}_{jk}^{jn},
\end{align}
where
\begin{align}\label{W_val}
\mathbf{W}_{jk}^{n}=\mathbf{R}_{jk}^{jn}(\mathbf{Q}_{jk}^{n})^{-1},
\end{align}
\begin{align}\label{R_val}
\mathbf{R}_{jk}^{jn}=\mathbb{E}\left[\mathbf{h}_{jk}^{jn}(\mathbf{h}_{jk}^{jn})^\textrm{H}\right],
\end{align}
and
\begin{align}\label{Q_val}
\mathbf{Q}_{jk}^{n}&=\mathbb{E}\left[\mathbf{y}_{jk}^{jn}(\mathbf{y}_{jk}^{jn})^\textrm{H}\right]= \sum\limits_{l=1}^L \sum\limits_{i=1}^K \left|\rho_{li}^{jk}\right|^2 \mathbf{R}_{lk}^{jn} + \frac{1}{\rho_{\textrm{tr}}}\mathbf{I}_M.
\end{align}
Under the assumption of full statistical channel knowledge, the covariance matrices $\mathbf{R}_{jk}^{jn}$ are known to the BSs. As such, $\textrm{BS}_{j}^n$ can obtain the matrix $\mathbf{Q}_{jk}^n$ by using \eqref{Q_val}. Afterwards, utilizing \eqref{W_val} together with \eqref{uplink_trans2}, $\textrm{BS}_{j}^n$ obtains the uplink channel estimates using \eqref{mmse_est}. We highlight that the assumption of perfect channel covariance knowledge is commonly used in the massive MIMO literature \cite{Emil2016a,Yin2013,Adhikary2013}. Additionally, the change in the channel covariance information occurs at a slow rate \cite{Adhikary2013}. As such, the channel statistics remain largely unchanged over several channel coherence intervals. Furthermore, several methods exist in the literature to estimate the change in the channel covariance information with small overhead \cite{Marzetta2011,Hoydis2011}.
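For illustration, the following Python sketch reproduces the MMSE estimate \eqref{mmse_est} for a single sub-array with one pilot-sharing interferer; the covariance matrices, pilot powers, and dimensions are synthetic values chosen only for the example.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M = 8                         # antennas in the sub-array
rho_tr = 10.0                 # normalized pilot power
rho_pc = 1.0                  # pilot correlation with the contaminating user

def random_covariance(m, scale):
    A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
    R = A @ A.conj().T
    return scale * m * R / np.real(np.trace(R))

def sample(R):
    C = np.linalg.cholesky(R + 1e-9 * np.eye(len(R)))
    z = (rng.standard_normal(len(R)) + 1j * rng.standard_normal(len(R))) / np.sqrt(2)
    return C @ z

R_des = random_covariance(M, 1.0)    # covariance of the desired channel
R_int = random_covariance(M, 0.1)    # covariance of the pilot-sharing channel
h_des, h_int = sample(R_des), sample(R_int)
noise = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

y = h_des + rho_pc * h_int + noise / np.sqrt(rho_tr)          # de-spread pilot
Q = R_des + abs(rho_pc) ** 2 * R_int + np.eye(M) / rho_tr     # correlation of y
W = R_des @ np.linalg.inv(Q)                                  # MMSE filter W = R Q^{-1}
h_hat = W @ y                                                 # MMSE channel estimate
print(np.linalg.norm(h_des - h_hat) / np.linalg.norm(h_des))  # relative error
\end{verbatim}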
\subsection{Uplink Channel Estimation Under Limited Channel Covariance Knowledge}
In this subsection, we discuss a more practical scenario where BSs only have limited knowledge of the channel covariance. Specifically, we assume that BSs do not have full knowledge of the channel covariance matrix $\mathbf{R}_{jk}^{jn}$. Furthermore, BSs only have knowledge about the diagonal elements of the channel covariance matrices. Under these assumptions, we obtain the element-wise EW-MMSE uplink channel estimate of the $z$-th element of $\mathbf{h}_{jk}^{jn}$, where $z\in\{1,\dotsc,M\}$, as \cite{Hoydis2011}
\begin{align} \label{locha}
\left[\mathbf{\widehat{\bar{h}}}{}_{jk}^{jn}\right]_z &= \frac{\left[\mathbf{R}_{jk}^{jn}\right]_{z}}{\sum_{l=1}^{L}\sum_{i=1}^{K}\left|\rho_{li}^{jk}\right|^2\left[\mathbf{R}_{li}^{jn}\right]_{z} + \frac{1}{\rho_{\textrm{tr}}}} \left[\mathbf{y}_{jk}^{jn}\right]_{z}.
\end{align}
We highlight that the diagonal elements of the channel covariance matrices are easy to estimate and require only a few additional resources \cite{Emil2016a}.
Consequently, the EW-MMSE channel estimate for $\mathbf{h}_{jk}^{jn}$ is obtained as \cite{Hoydis2011}
\begin{align} \label{mmse_est2}
\mathbf{\widehat{\bar{h}}}{}_{jk}^{jn} &= \mathbf{\widehat{W}}_{jk}^{n}\mathbf{y}_{jk}^{jn},
\end{align}
where
\begin{align}\label{W_val2}
\mathbf{\widehat{W}}_{jk}^{n}=\mathbf{D}_{jk}^{jn}(\bm{\Lambda}_{jk}^{n})^{-1}.
\end{align}
We highlight that $\mathbf{D}_{jk}^{jn}$ and $\bm{\Lambda}_{jk}^{n}$ are $M\times M$ diagonal matrices. We define $\mathbf{D}_{jk}^{jn}$ as
\begin{align}\label{R_val2}
\mathbf{D}_{jk}^{jn} =
\begin{bmatrix}
\left[\mathbf{R}_{jk}^{jn}\right]_{1} & 0 & 0 & 0\\
0 & \left[\mathbf{R}_{jk}^{jn}\right]_{2} & 0 & 0\\
0 & 0 & ... & 0\\
0 & 0 & 0 & \left[\mathbf{R}_{jk}^{jn}\right]_{M}
\end{bmatrix}
\end{align}
and $\mathbf{\bm{\Lambda}}_{jk}^{n}$ as
\begin{align}\label{Q_val2}
\begin{bmatrix}
\sum\limits_{l,i}\left|\rho_{li}^{jk}\right|^2\left[\mathbf{R}_{li}^{jn}\right]_1 + \frac{1}{\rho_{\textrm{tr}}} & 0 & 0\\
0 & ... & 0\\
0 & 0 & \sum\limits_{l,i}\left|\rho_{li}^{jk}\right|^2\left[\mathbf{R}_{li}^{jn}\right]_M + \frac{1}{\rho_{\textrm{tr}}}
\end{bmatrix}.
\end{align}
Afterwards, utilizing \eqref{W_val2} together with \eqref{uplink_trans2}, $\textrm{BS}_{j}^n$ obtains the uplink channel estimates according to \eqref{mmse_est2}, under the assumption of imperfect channel knowledge at the BSs.
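Continuing the previous sketch, the EW-MMSE estimate \eqref{locha} can be evaluated with only the covariance diagonals, as shown below.
\begin{verbatim}
# Reuses y, h_des, R_des, R_int, rho_pc and rho_tr from the previous sketch.
d_des = np.real(np.diag(R_des))
d_int = np.real(np.diag(R_int))
lam = d_des + abs(rho_pc) ** 2 * d_int + 1.0 / rho_tr   # diagonal of Lambda
h_hat_ew = (d_des / lam) * y                             # element-wise MMSE filter
print(np.linalg.norm(h_des - h_hat_ew) / np.linalg.norm(h_des))
\end{verbatim}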
\subsection{Downlink Data Transmission}
During the downlink data transmission phase, $\textrm{BS}_{j}$ transmits data symbols to each user in the \textit{j}-th cell. Based on the assumption of channel reciprocity, the channel estimates obtained through the uplink channel estimation are utilized in downlink transmission. Accordingly, the symbol transmitted by $\textrm{BS}_j$ for the $K$ same-cell users is represented as
\begin{align}\label{symbol_n}
x_{j} &= \sum_{i=1}^{K}\sum_{n=1}^{N} \nu_{ji}^n \mathbf{{a}}_{ji}^{n}q_{ji},
\end{align}
where $\nu_{ji}^n \geq 0$ is the real-valued downlink power control coefficient for $\textrm{U}_{ji}$ at $\textrm{BS}_{j}^n$, $\mathbf{{a}}_{ji}^{n}$ is the downlink precoding vector for $\textrm{U}_{ji}$ at $\textrm{BS}_{j}^n$, $q_{ji}$ is the data symbol intended for $\textrm{U}_{ji}$, and ${q}_{ji} \sim \mathcal{CN}\left(0,1\right)$. The downlink power control coefficients are chosen to satisfy the power constraint $\mathbb{E}\left[|x_{j}|^2\right] \leq 1$. Using \eqref{symbol_n}, we simplify this power constraint and represent it as
\begin{align}\label{pow_constaint}
\sum_{i=1}^K \sum_{n=1}^{N} ({\nu_{ji}^n})^2\mathbb{E}\left[\|\mathbf{{a}}_{ji}^{n}\|^2\right] &\leq 1,~\forall~j.
\end{align}
We highlight that the constraint in \eqref{pow_constaint} represents the total transmit power constraint in cell $j$, which is normalized such that the maximum power is 1. From \eqref{symbol_n}, the downlink signal received at $\textrm{U}_{jk}$ is
\begin{align}\label{pow_constaint1}
r_{jk} &=\sum_{l=1}^L\sum_{i=1}^K\sum_{n=1}^N \nu_{li}^n (\mathbf{h}_{jk}^{ln})^\textrm{H} \mathbf{{a}}_{li}^{n}q_{li}+ n_{jk},
\end{align}
where $n_{jk}$ is the AWGN at $\textrm{U}_{jk}$. Assuming that the users only have the statistical information about the channels, and the instantaneous channel information is not available at the users due to the lack of downlink pilots \cite{Akbar2016a}, we represent the downlink signal received at $\textrm{U}_{jk}$ as
\begin{align} \label{received1}
r_{jk} &=\sum\limits_{n=1}^N \nu_{jk}^n \mathbb{E}\left[(\mathbf{h}_{jk}^{jn})^\textrm{H} \mathbf{{a}}_{jk}^{n}\right]q_{jk} + \sum_{\substack{l,i,n \\(l,i) \neq (j,k)}}^{L,K,N} \nu_{li}^n(\mathbf{h}_{jk}^{ln})^\textrm{H} \mathbf{{a}}_{li}^{n}q_{li} \nonumber \\ & +\sum\limits_{n=1}^N \nu_{jk}^n\bigg((\mathbf{h}_{jk}^{jn})^\textrm{H}\mathbf{{a}}_{jk}^{n} - \mathbb{E}\left[(\mathbf{h}_{jk}^{jn})^\textrm{H}\mathbf{{a}}_{jk}^{n}\right]\bigg)q_{jk} + n_{jk}.
\end{align}
\normalsize
We highlight that the first term on the right hand side (RHS) of \eqref{received1} denotes the $N$ superimposed copies of the downlink data symbol $q_{jk}$ received from the $N$ sub-arrays in cell $j$. We highlight that \eqref{received1} is a generalized expression for received signal at $\textrm{U}_{jk}$, which is valid for an arbitrary number of DAAs in a cell.
\subsection{Achievable Downlink Rate and Throughput}
In this subsection, we derive a closed-form expression for the downlink rate. Based on the derived expression, we obtain the downlink throughput.
We note that the last three terms on the RHS of \eqref{received1} constitute the effective noise. We also note that these three terms are uncorrelated with the first term in \eqref{received1}. Accordingly, the downlink rate for $\textrm{U}_{jk}$ is given as \cite{Emil2017bb}
\begin{align}\label{SE_chan}
{R}_{jk}&=\log_{2}\left(1+\gamma_{jk}\right)& \textrm{b/s/Hz},
\end{align}
where $\gamma_{jk}$ is the effective downlink SINR for $\textrm{U}_{jk}$ given by \eqref{SINR_chan} at the top of the next page. We obtain the downlink throughput achieved by $\textrm{U}_{jk}$ as
\begin{figure*}[!t]
\begin{align} \label{SINR_chan}
\gamma_{jk}=\frac{\left| \textstyle{\sum_{n=1}^N} \nu_{jk}^n \mathbb{E}\left[(\mathbf{h}_{jk}^{jn})^\textrm{H} \mathbf{{a}}_{jk}^{n} \right]\right|^2}{\textstyle{\sum_{l=1}^L\sum_{i=1}^K} \mathbb{E}\left[\left|\sum_{n=1}^N\nu_{li}^n(\mathbf{h}_{jk}^{ln})^\textrm{H} \mathbf{{a}}_{li}^{n}\right|^2\right] - \left|\textstyle{\sum_{n=1}^N} \nu_{jk}^n\mathbb{E}\left[(\mathbf{h}_{jk}^{jn})^\textrm{H} \mathbf{{a}}_{jk}^{n}\right]\right|^2 + \sigma_{n}^2}.
\end{align}
\setcounter{equation}{19}
\begin{align} \label{SINR}
{\gamma}_{jk}^{\textrm{MMSE}}=\frac{\left|\textstyle{\sum_{n=1}^N}\textrm{tr}\left(\nu_{jk}^n\mathbf{W}_{jk}^{n}\mathbf{R}_{jk}^{jn}\right)\right|^2} {\textstyle{\sum_{l=1}^L \sum_{i=1}^K\sum_{n=1}^N} \textrm{tr}\left((\nu_{li}^n)^2\mathbf{W}_{li}^n \mathbf{Q}_{li}^n(\mathbf{W}_{li}^n)^\textrm{H}\mathbf{R}_{jk}^{ln}\right) + \textstyle{\sum_{l=1,l\neq j}^L} \left|\sum_{n=1}^N\textrm{tr}\left(\nu_{lk}^n\mathbf{W}_{lk}^n\mathbf{R}_{jk}^{ln}\right)\right|^2 + \sigma_{n}^2}.
\end{align}
\setcounter{equation}{21}
\begin{align} \label{SINR_EW}
{\gamma}_{jk}^{\textrm{EW-MMSE}}=\frac{\left|\textstyle{\sum_{n=1}^N}\textrm{tr}\left(\nu_{jk}^n\mathbf{\widehat{W}}_{jk}^{n} \mathbf{R}_{jk}^{jn}\right)\right|^2} {\textstyle{\sum_{l=1}^L \sum_{i=1}^K\sum_{n=1}^N} \textrm{tr}\left((\nu_{li}^n)^2\mathbf{\widehat{W}}_{li}^n \mathbf{Q}_{li}^n(\mathbf{\widehat{W}}_{li}^n)^\textrm{H}\mathbf{R}_{jk}^{ln}\right) + \textstyle{\sum_{l=1,l\neq j}^L} \left|\sum_{n=1}^N\textrm{tr}\left(\nu_{lk}^n\mathbf{\widehat{W}}_{lk}^n\mathbf{R}_{jk}^{ln}\right)\right|^2 + \sigma_{n}^2}.
\end{align}
\hrulefill
\vspace*{4pt}
\end{figure*}
\setcounter{equation}{18}
\begin{align}\label{tp_chan2}
\textrm{Throughput}_{jk}& = \textrm{BW} \times \ell \times {R}_{jk} & \textrm{b/s},
\end{align}
where $\textrm{BW}$ denotes the channel bandwidth and $\ell$ denotes the portion of the coherence interval used for the downlink data transmission. In this paper, we use the average per-user throughput as the performance metric for comparison, which is based on the numerical average obtained from individual user throughputs. We highlight that if all the users in the network achieve the same downlink SINR, the average per-user throughput is the same as the individual user throughput. We next provide a closed-form expression for the SINR in the following theorem.
\begin{theorem} \label{theorem}
Assuming that the BSs perform maximum ratio transmission (MRT) in the downlink, i.e., $\mathbf{a}_{jk}^{n} = \mathbf{\widehat{h}}_{jk}^{jn}$, the closed-form expression for the downlink effective SINR at $\textrm{U}_{jk}$ is obtained as in \eqref{SINR} at the top of the next page.
\end{theorem}
\begin{IEEEproof}
The proof is given in Appendix \ref{SINR_proof}.
\end{IEEEproof}
The closed-form expression for the downlink SINR given in \eqref{SINR} can be re-written as
\setcounter{equation}{20}
\begin{align}\label{SINR_reduced}
{\gamma}_{jk}^{\textrm{MMSE}}=\frac{\left|\textstyle{\sum_{n=1}^N}\nu_{jk}^n\chi_{jk}^{n}\right|^2} {\textstyle{\sum_{l,i,n}^{L,K,N}}(\nu_{li}^n)^2\zeta_{jk}^{lin} + \textstyle{\sum_{l\neq j}^{L}}|\textstyle{\sum_{n=1}^N}\nu_{lk}^n\xi_{jk}^{ln}|^2 + \sigma_{n}^2},
\end{align}
where
$\chi_{jk}^{n} = \textrm{tr}(\mathbf{W}_{jk}^{n}\mathbf{R}_{jk}^{jn})$,
$\zeta_{jk}^{lin} = \textrm{tr}(\mathbf{W}_{li}^n \mathbf{Q}_{li}^n(\mathbf{W}_{li}^n)^\textrm{H}\mathbf{R}_{jk}^{ln})$, and
$\xi_{jk}^{ln} = \textrm{tr}(\mathbf{W}_{lk}^n\mathbf{R}_{jk}^{ln})$.
\begin{theorem} \label{theorem2}
Assuming that the BSs perform maximum ratio transmission (MRT) in the downlink, i.e., $\mathbf{a}_{jk}^{n} = \mathbf{\widehat{\bar{h}}}_{jk}^{jn}$, the closed-form expression for the downlink effective SINR at $\textrm{U}_{jk}$ is obtained as in \eqref{SINR_EW} at the top of the page.
\end{theorem}
\begin{IEEEproof}
The proof is given in Appendix \ref{SINR_proof2}.
\end{IEEEproof}
The closed-form expression for the downlink SINR given in \eqref{SINR_EW} can be re-written as
\setcounter{equation}{22}
\begin{align}\label{SINR_reduced2}
{\gamma}_{jk}^{\textrm{EW-MMSE}}=\frac{\left|\textstyle{\sum_{n=1}^N}\nu_{jk}^n\bar{\chi}_{jk}^{n}\right|^2} {\textstyle{\sum_{l,i,n}^{L,K,N}}(\nu_{li}^n)^2\bar{\zeta}_{jk}^{lin} + \textstyle{\sum_{l\neq j}^{L}}|\textstyle{\sum_{n=1}^N}\nu_{lk}^n\bar{\xi}_{jk}^{ln}|^2 + \sigma_{n}^2},
\end{align}
where
$\bar{\chi}_{jk}^{n} = \textrm{tr}(\mathbf{\widehat{W}}_{jk}^{n}\mathbf{R}_{jk}^{jn})$,
$\bar{\zeta}_{jk}^{lin} = \textrm{tr}(\mathbf{\widehat{W}}_{li}^n \mathbf{Q}_{li}^n(\mathbf{\widehat{W}}_{li}^n)^\textrm{H}\mathbf{R}_{jk}^{ln})$, and
$\bar{\xi}_{jk}^{ln} = \textrm{tr}(\mathbf{\widehat{W}}_{lk}^n\mathbf{R}_{jk}^{ln})$.
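For reference, the following Python sketch evaluates the closed-form SINR \eqref{SINR_reduced} and the resulting per-user throughput \eqref{tp_chan2} from precomputed trace quantities; the input arrays are random placeholders rather than values derived from an actual channel model.
\begin{verbatim}
import numpy as np

def downlink_sinr(nu, chi, zeta, xi, sigma2):
    """Closed-form MMSE SINR for every user.

    nu   : (L, K, N) power-control coefficients nu_{li}^n
    chi  : (L, K, N) entries chi_{jk}^n
    zeta : (L, K, L, K, N) entries zeta_{jk}^{lin}
    xi   : (L, K, L, N) entries xi_{jk}^{ln}
    """
    L, K, N = nu.shape
    gamma = np.zeros((L, K))
    for j in range(L):
        for k in range(K):
            signal = abs(np.sum(nu[j, k] * chi[j, k])) ** 2
            interf = np.sum(nu ** 2 * zeta[j, k])           # sum over all (l, i, n)
            for l in range(L):                              # coherent terms, l != j
                if l != j:
                    interf += abs(np.sum(nu[l, k] * xi[j, k, l])) ** 2
            gamma[j, k] = signal / (interf + sigma2)
    return gamma

L, K, N = 2, 10, 4
rng = np.random.default_rng(1)
nu   = rng.uniform(0.0, 0.1, (L, K, N))        # placeholder power coefficients
chi  = rng.uniform(0.5, 1.0, (L, K, N))        # placeholder trace quantities
zeta = rng.uniform(0.0, 0.05, (L, K, L, K, N))
xi   = rng.uniform(0.0, 0.2, (L, K, L, N))

gamma = downlink_sinr(nu, chi, zeta, xi, sigma2=1.0)
rate = np.log2(1.0 + gamma)                    # b/s/Hz
throughput = 20e6 * 0.45 * rate                # BW = 20 MHz, ell = 0.45
print(round(throughput.mean() / 1e6, 2), "Mb/s average per user")
\end{verbatim}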
\section{Downlink Power Control in Distributed Antenna Array Massive MIMO}
In this section, we formulate the downlink power control problem as a max-min optimization problem. Max-min power control maximizes the minimum rate or SINR over all the users in the network. As such, every user in the network receives a uniform quality of service. Max-min power control has previously been studied for conventional co-located BSs \cite{Yang2014,Marzetta2016}. However, the application of max-min power control to DAA massive MIMO networks, where each array has multiple antenna elements, has not been investigated.
The proposed algorithm only requires the estimated channel information between users and the same-cell DAAs and no coordination is required between different cells. Importantly, in TDD operation BSs can listen to the transmissions from other cells and estimate all the parameters that are needed. As such, it is possible to estimate the inter-cell parameters using the same procedure as the intra-cell parameters. Accordingly, additional signaling is not required. The goal of the max-min optimization problem is to maximize the minimum downlink SINR for all the users in the network. As such, we formulate the network-wide max-min optimization problem using \eqref{SINR_reduced} as
\begin{align} \label{opt_problem}
\begin{aligned}
& \underset{\{\nu_{li}^n\}}{\text{max}}~~~\underset{\forall~j,k}{\text{min}}
& & \frac{\left|\textstyle{\sum_{n=1}^N}\nu_{jk}^n\chi_{jk}^{n}\right|^2} {\textstyle{\sum_{l,i,n}^{L,K,N}}(\nu_{li}^n)^2\zeta_{jk}^{lin} + \textstyle{\sum_{l\neq j}^{L}}|\textstyle{\sum_{n=1}^N}\nu_{lk}^n\xi_{jk}^{ln}|^2 + \sigma_{n}^2} \\
& \text{s. t.}
& & \hspace{-0.4cm} \textstyle{\sum_{i=1}^K\textstyle{\sum_{n=1}^N}(\nu_{li}^n)^2\textrm{tr}(\mathbf{W}_{li}^n \mathbf{Q}_{li}^n\left(\mathbf{W}_{li}^n\right)^\textrm{H}) \leq 1},\forall~l,n,\\
& & & \hspace{-0.4cm} \nu_{li}^n \geq 0, \;\forall~l,i,n,
\end{aligned}
\end{align}
where the constraint $\sum_{i=1}^K\sum_{n=1}^N(\nu_{li}^n)^2\textrm{tr}(\mathbf{W}_{li}^n \mathbf{Q}_{li}^n\left(\mathbf{W}_{li}^n\right)^\textrm{H}) \leq 1$ is obtained from \eqref{pow_constaint} under the assumption that MRT is used at the BSs. The first constraint in the optimization problem \eqref{opt_problem} establishes that the normalized sum of all the power control coefficients does not exceed $1$. The second constraint ensures that the power control coefficients are non-negative. Assuming that the target SINR is ${\gamma}$, we rewrite the optimization problem given in \eqref{opt_problem} in epigraph form as
\begin{align} \label{opt_problem1}
\begin{aligned}
& \underset{\{\nu_{li}^n\},{\gamma}}{\text{max}}
& & \gamma \\
& \text{s. t.}
& & \hspace{-0.25cm} \frac{\left|\textstyle{\sum_{n=1}^N}\nu_{jk}^n\chi_{jk}^{n}\right|^2} {\textstyle{\sum_{l,i,n}^{L,K,N}}(\nu_{li}^n)^2\zeta_{jk}^{lin} + \textstyle{\sum_{l\neq j}^{L}}|\textstyle{\sum_{n=1}^N}\nu_{lk}^n\xi_{jk}^{ln}|^2 + \sigma_{n}^2} \geq {\gamma} \\
& & &\hspace{-0.25cm} \forall\;j,k, \\
& & & \hspace{-0.25cm} \textstyle{\sum_{i=1}^K\textstyle{\sum_{n=1}^N}(\nu_{li}^n)^2\textrm{tr}(\mathbf{W}_{li}^n \mathbf{Q}_{li}^n\left(\mathbf{W}_{li}^n\right)^\textrm{H}) \leq 1},\forall~l,n,\\
& & &\hspace{-0.25cm} \nu_{li}^n \geq 0,\; \forall~l,i,n.
\end{aligned}
\end{align}
This problem can be solved as a quasi-convex program. We next formulate a convex feasibility problem based on \eqref{opt_problem1}, which we use in a bisection algorithm \cite{Boyd2004} to search for the value of $\gamma\in[\gamma_{\textrm{min}},\gamma_{\textrm{max}}]$ that is the global optimum to \eqref{opt_problem1}, where ${\gamma}_{\textrm{min}}$ and ${\gamma}_{\textrm{max}}$ define the search range \cite{Ngo2017,Boyd2004}.
\begin{proposition} \label{prop_1}
The constraint set in the optimization problem \eqref{opt_problem1} is convex and the optimization problem is quasi-concave. For a given constant value of $\gamma$, the optimization problem in \eqref{opt_problem1} is re-written as the convex feasibility problem\footnote{We highlight that \eqref{opt_problem2} is a general convex feasibility problem where the objective function is zero. Furthermore, \eqref{opt_problem1} can be considered as a special case of the general convex feasibility problem \eqref{opt_problem2}.}
\begin{align} \label{opt_problem2}
\begin{aligned}
& \underset{\{\nu_{li}^n\}}{\text{max}}
& & 0 \\
& \text{s. t.}
& & \|\mathbf{x}_{jk}\| \leq \frac{1}{\sqrt{\gamma}}{\left|\textstyle{\sum_{n=1}^N}\nu_{jk}^n\chi_{jk}^{n}\right|},\; \forall~j,k, \\
& & & \textstyle{\sum_{i=1}^K\sum_{n=1}^N(\nu_{li}^n)^2\textrm{tr}(\mathbf{W}_{li}^n \mathbf{Q}_{li}^n\left(\mathbf{W}_{li}^n\right)^\textrm{H}) \leq 1},\; \forall~l,\\
& & &\nu_{li}^n \geq 0,\;\forall\;l,i,n,
\end{aligned}
\end{align}
where $\mathbf{x}_{jk} = [\mathbf{\tilde{x}}_{jk}~~\mathbf{\bar{x}}_{jk}~~\sqrt{\sigma_{n}^2}]^T$. We define $\mathbf{\tilde{x}}_{jk}$ and $\mathbf{\bar{x}}_{jk}$ as $\mathbf{\tilde{x}}_{jk}=[\mathbf{\tilde{x}}_{jk}^{11}\dotsc\mathbf{\tilde{x}}_{jk}^{li}\dotsc\mathbf{\tilde{x}}_{jk}^{LK}]$ and $\mathbf{\bar{x}}_{jk}=[\mathbf{\bar{x}}_{jk}^{11}\dotsc\mathbf{\bar{x}}_{jk}^{li}\dotsc\mathbf{\bar{x}}_{jk}^{LK}]$, respectively, where
\begin{align}\label{def_x11}
\mathbf{\tilde{x}}_{jk}^{li}=[(\nu_{li}^1)(\zeta_{jk}^{li1})^{\frac{1}{2}}\dotsc(\nu_{li}^N)(\zeta_{jk}^{liN})^{\frac{1}{2}}]
\end{align}
and
\begin{align}\label{def_x12}
\mathbf{\bar{x}}_{jk}^{li}=\begin{cases}
\varrho_{jk}^{lk1}+\varrho_{jk}^{lk2}+\dotsc+\varrho_{jk}^{lkN}, & l\neq j,\\
{0}, & l=j.
\end{cases}
\end{align}
\end{proposition}
\begin{IEEEproof}
The proof is given in Appendix \ref{SOCP_proof}.
\end{IEEEproof}
We next outline the step-by-step approach to solving the convex feasibility problem \eqref{opt_problem2} as follows:
\begin{enumerate}
\item In each iteration of the bisection algorithm, we set $\bar{\gamma}=({\gamma}_{\textrm{min}}+{\gamma}_{\textrm{max}})/2$ and solve the feasibility problem \eqref{opt_problem2} by setting ${\gamma}=\bar{\gamma}$.
\item If the problem is infeasible, we set ${\gamma}_{\textrm{max}}=\bar{\gamma}$ otherwise we set ${\gamma}_{\textrm{min}}=\bar{\gamma}$.
\item The algorithm iteratively refines ${\gamma}_{\textrm{min}}$ and ${\gamma}_{\textrm{max}}$ and stops the search when ${\gamma}_{\textrm{max}}-{\gamma}_{\textrm{min}}<\varepsilon$, where $\varepsilon>0$ is the error tolerance.
\end{enumerate}
To summarize, the bisection algorithm modifies $\gamma_{\textrm{min}}$ and $\gamma_{\textrm{max}}$ based on the feasibility test to find the value of $\gamma$ at which the optimization problem \eqref{opt_problem1} is optimal. We highlight that the max-min power control in cell-free massive MIMO \cite{Nayebi2015} is a special case of the power control problem considered in this paper, which can be obtained from \eqref{opt_problem2} when the network has one cell and each antenna array has one or more antenna elements.
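The following Python sketch outlines the bisection procedure; it uses CVXPY as a stand-in for CVX, and the constraint construction is our own translation of \eqref{opt_problem2}, so indexing conventions and solver settings may need adaptation in an actual implementation.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def feasible(gamma, chi, zeta, xi, tWQ, sigma2):
    """Feasibility test of the SOCP for a fixed target SINR gamma (a sketch).

    chi: (L, K, N), zeta: (L, K, L, K, N), xi: (L, K, L, N),
    tWQ: (L, K, N) with entries tr(W_{li}^n Q_{li}^n (W_{li}^n)^H).
    Returns the coefficients nu if the problem is feasible, otherwise None.
    """
    L, K, N = chi.shape
    nu = cp.Variable((L * K, N), nonneg=True)       # row l*K + i holds nu_{li}^.
    cons = []
    for l in range(L):                              # per-cell power constraints
        rows = slice(l * K, (l + 1) * K)
        cons.append(cp.sum(cp.multiply(cp.square(nu[rows, :]), tWQ[l])) <= 1)
    for j in range(L):
        for k in range(K):
            signal = cp.sum(cp.multiply(nu[j * K + k, :], chi[j, k]))
            parts = [cp.vec(cp.multiply(nu, np.sqrt(zeta[j, k]).reshape(L * K, N)))]
            for l in range(L):                      # coherent cross-cell terms
                if l != j:
                    parts.append(cp.reshape(
                        cp.sum(cp.multiply(nu[l * K + k, :], xi[j, k, l])), (1,)))
            parts.append(np.array([np.sqrt(sigma2)]))
            cons.append(cp.norm(cp.hstack(parts)) <= signal / np.sqrt(gamma))
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return nu.value if prob.status in ("optimal", "optimal_inaccurate") else None

def max_min_power_control(chi, zeta, xi, tWQ, sigma2,
                          g_lo=0.0, g_hi=100.0, eps=1e-3):
    """Bisection over the target SINR, following the three steps listed above."""
    nu_best = None
    while g_hi - g_lo > eps:
        g_mid = 0.5 * (g_lo + g_hi)
        nu = feasible(g_mid, chi, zeta, xi, tWQ, sigma2)
        if nu is None:
            g_hi = g_mid
        else:
            g_lo, nu_best = g_mid, nu
    return g_lo, nu_best
\end{verbatim}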
The number of iterations required by the bisection algorithm is $\log_2( \gamma_{\textrm{max}} - \gamma_{\textrm{min}}) - \log_2(\varepsilon)$ \cite{emilbook}. As such, the algorithm has low computational complexity and converges quickly, since the search range is halved in each iteration. We highlight that a second-order cone program (SOCP) is solved in each iteration. The computational complexity of the SOCP is $\mathcal{O}(K_{\textrm{tot}}^4)$, where $K_{\textrm{tot}}=LK$ \cite{Lobo1998,bashar2018}. The total number of arithmetic operations required to solve the optimization problem \eqref{opt_problem2} is therefore $[\log_2( \gamma_{\textrm{max}} - \gamma_{\textrm{min}}) - \log_2(\varepsilon)] \times \mathcal{O}(K_{\textrm{tot}}^4)$.
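For illustration only, the following Python sketch implements the bisection search outlined above. The feasibility oracle \texttt{is\_feasible} is a placeholder for a conic solver (e.g., CVX/CVXPY, which we assume here) that solves \eqref{opt_problem2} for a fixed $\gamma$; its name and interface are our assumptions rather than part of the proposed method.
\begin{verbatim}
# Python sketch of the bisection search over the SINR target gamma.
# `is_feasible(gamma)` must solve the SOCP feasibility problem for the
# given target (a conic solver such as CVX/CVXPY is assumed) and
# return True if a feasible power allocation exists.
def max_min_sinr_bisection(is_feasible, gamma_min, gamma_max, eps=1e-3):
    while gamma_max - gamma_min > eps:
        gamma = 0.5 * (gamma_min + gamma_max)
        if is_feasible(gamma):
            gamma_min = gamma   # target achievable: raise the lower bound
        else:
            gamma_max = gamma   # target infeasible: lower the upper bound
    return gamma_min            # largest SINR target known to be feasible
\end{verbatim}
The number of loop iterations matches the $\log_2[(\gamma_{\textrm{max}}-\gamma_{\textrm{min}})/\varepsilon]$ estimate quoted above.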
The max-min power control in \eqref{opt_problem1} maximizes the minimum SINR. In this work, we evaluate the average per-user throughput to demonstrate the performance of the power control algorithms. We highlight that the average per-user throughput is dependent on the effective SINR as given in \eqref{SE_chan} and \eqref{tp_chan2}. Therefore, by evaluating the average per-user throughput, we also demonstrate the improvement in SINR. Furthermore, we highlight that using the procedure outlined in this subsection, it is possible to obtain the max-min power control for \eqref{SINR_reduced2}.
\subsection{Equal-$\nu$ Power Allocation}
In this paper, we use equal-$\nu$ power allocation as a baseline for comparison with the proposed power control. In equal-$\nu$ power allocation, the total available downlink transmit power is shared equally among all the users in a cell \cite{Yang2014}. As such, the downlink power control coefficients ${\nu_{li}^n}$ are equal for all the users. From \eqref{pow_constaint} and assuming that the full available power is used by the BSs during the downlink transmission, we obtain
\begin{align}\label{pow_constaint_equal1}
(\nu)^2\sum_{l=1}^L\sum_{i=1}^K \sum_{n=1}^{N} \mathbb{E}\left[\|\mathbf{{a}}_{li}^{n}\|^2\right] &= L.
\end{align}
Assuming that the BSs perform MRT, the power control coefficient is obtained as
\begin{align}\label{equal_pow}
{\nu} &= \sqrt{\frac{L}{\sum_{l=1}^L\sum_{i=1}^K \sum_{n=1}^{N}\textrm{tr}\left(\mathbf{W}_{li}^n \mathbf{Q}_{li}^n\left(\mathbf{W}_{li}^n\right)^\textrm{H}\right)}}.
\end{align}
Intuitively, equal-$\nu$ power allocation implies that every user is allocated power from a serving antenna array proportionally to the mean-square of its channel estimate. As such, a user receives more power from the arrays with good propagation conditions than the arrays with weaker propagation conditions.
With equal-$\nu$ power allocation, the power control coefficient $\nu$ remains the same regardless of the channel conditions. Accordingly, equal-$\nu$ power allocation is not expected to give a higher throughput than max-min power control. However, equal-$\nu$ allocation serves as an important benchmark for the network performance.
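As a minimal illustration, the equal-$\nu$ coefficient in \eqref{equal_pow} can be evaluated as in the following Python sketch; the nested-list layout of the matrices $\mathbf{W}_{li}^n$ and $\mathbf{Q}_{li}^n$ and the function name are assumptions made purely for the sketch.
\begin{verbatim}
import numpy as np

# Sketch: equal-nu coefficient defined above.  W[l][i][n] and Q[l][i][n]
# hold the matrices W_{li}^n and Q_{li}^n (array layout assumed here).
def equal_nu_coefficient(W, Q, L):
    total = 0.0
    for l in range(len(W)):
        for i in range(len(W[l])):
            for n in range(len(W[l][i])):
                Wn, Qn = W[l][i][n], Q[l][i][n]
                total += np.real(np.trace(Wn @ Qn @ Wn.conj().T))
    return np.sqrt(L / total)
\end{verbatim}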
\section{Numerical Results}
In this section, we numerically demonstrate the performance benefits of DAA massive MIMO. Throughout the section, we assume that $L=2$ and there are $K=10$ users in each cell. We consider hexagonal cells with radius $1000$ m. Each user within a cell is allocated an orthogonal pilot sequence and the same pilot sequence set is repeated in the neighbouring cell\footnote{Assuming that the channel coherence time is $T_c=3\;\textrm{ms}$ and the coherence bandwidth is $W_c=300\;\textrm{kHz}$, the coherence block length is $S=T_c W_c = 900$ symbols. If we assume that $40\%$ of the coherence block length is used for the uplink pilot training, we have $360$ orthogonal pilot sequences available in a cell. As such, orthogonal pilot sequences can be allocated to $360$ users within a cell, which is a reasonably large number of users. It is therefore reasonable to assume that all the users within a cell can be allocated orthogonal pilot sequences.}. Additionally, we assume that there are $M_{\textrm{tot}}=400$ antenna elements in each cell. The CDFs are obtained from the average per-user throughput for $100$ different realizations of user locations, where the users' locations are uniformly distributed within a cell. Furthermore, the sub-arrays are deployed in multiple tiers at an offset distance of $120$ m from the center of the cell. We assume that $20\;\textrm{MHz}$ bandwidth is available for transmission and $45\%$ of the coherence interval is used for the downlink transmission. Furthermore, all the cells in the network have the same geometry. We use CVX \cite{CVX} to solve the max-min power control problem. For the sake of clarity, we represent a DAA massive MIMO having $N$ antenna arrays and $M$ antennas per sub-array as $\{M,N\}$ DAA massive MIMO. We obtain the median per-user throughput by computing the median of the average per-user throughputs obtained for $100$ different realizations of the user locations. The path loss model is $1/d^\kappa$, where $d$ is the distance between a user and a sub-array and $\kappa=3.76$. In Section V.A and Section V.B, we assume that the BSs have perfect channel knowledge. In Section V.C, we assume that the BSs have limited channel knowledge. The simulation parameters remain the same unless stated otherwise.
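For concreteness, the two summary statistics used throughout this section can be computed from the simulated per-user throughputs as in the following Python sketch; the array layout and function name are our assumptions. Here, the $95\%$ likely performance is the throughput exceeded by $95\%$ of the samples, i.e., the $5$th percentile of the CDF.
\begin{verbatim}
import numpy as np

# tp: array of average per-user throughputs (Mbit/s), collected over the
# 100 random realizations of the user locations (layout assumed).
def summarize_throughput(tp):
    samples = np.asarray(tp).ravel()
    return {"median": np.median(samples),             # median per-user throughput
            "95%-likely": np.percentile(samples, 5)}  # 5th percentile of the CDF
\end{verbatim}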
\subsection{Max-Min and Equal-$\nu$ Power Allocation under Uncorrelated Channel Fading}
In this subsection, we examine the impact of max-min power allocation and equal-$\nu$ power allocation in DAA massive MIMO under the assumption of uncorrelated channel fading.
\begin{figure}[!t]
\centering
\includegraphics[width=3.4in]{MaxminUncorrelated100to10}
\caption{The CDFs of max-min power allocation under uncorrelated channel fading for $\{100,4\}$, $\{50,8\}$, $\{25,16\}$, and $\{10,40\}$ DAA configuration.} \label{MaxminUncorrelated100to10}
\end{figure}
We first analyze the throughput achieved by applying max-min power control in DAA massive MIMO. Fig.~\ref{MaxminUncorrelated100to10} depicts the CDFs of max-min power allocation for $\{100,4\}$, $\{50,8\}$, $\{25,16\}$, and $\{10,40\}$ DAA massive MIMO. We note that DAA massive MIMO provides a large improvement in the per-user throughput. Importantly, as $N$ increases, there is an increase in the per-user throughput. For example, when $N$ increases from $4$ to $40$, the median per-user throughput increases from $11.6$ Mbit/s to $15.0$ Mbit/s, which is equivalent to a $22.7\%$ increase. Similarly, when $N$ increases from $4$ to $40$, the $95\%$ likely performance increases from $7.6$ Mbit/s to $14.0$ Mbit/s, which corresponds to a $45.6\%$ improvement in the per-user throughput. Furthermore, we note that the individual throughput values are closer to the median value when $N=40$. In contrast, the spread of the individual throughput values around the median is larger for $N=4$. As such, in the $\{10,40\}$ DAA configuration, the majority of the users enjoy a throughput close to the median throughput value.
We next examine the impact of decreasing the number of antennas in each sub-array to a small value, i.e., below $M=10$. Fig.~\ref{MaxminUncorrelated10to2} depicts the CDFs of the per-user throughput for $\{10,40\}$, $\{4,100\}$, and $\{2,200\}$ DAA massive MIMO. In contrast to the results obtained in Fig.~\ref{MaxminUncorrelated100to10}, we observe that increasing $N$ above $40$ and decreasing $M$ below $10$ reduces the per-user throughput. For example, when $M$ is reduced from $10$ to $2$, the median per-user throughput decreases from $15.0$ Mbit/s to $14.1$ Mbit/s, which corresponds to a $5.5\%$ decrease. Similarly, when $M$ decreases from $10$ to $2$, the $95\%$ likely performance drops from $13.9$ Mbit/s to $13.2$ Mbit/s, or equivalently by $4.9\%$. As such, we observe a $5\%$ loss in performance by reducing the number of antenna elements per sub-array below $10$. We highlight that this behaviour is due to a loss in channel hardening \cite{Chen2018}. Specifically, reducing $M$ deteriorates the benefits offered by favourable propagation in massive MIMO as a result of increased uncertainty in the channel statistics. Furthermore, we highlight that in our simulations the exact $M$ at which we observe a degradation in performance is between $M=4$ and $M=10$.
\begin{figure}[!t]
\centering
\includegraphics[width=3.4in]{MaxminUncorrelated10to2}
\caption{The CDFs of max-min power allocation under uncorrelated channel fading for $\{10,40\}$, $\{4,100\}$, and $\{2,200\}$ DAA configuration.} \label{MaxminUncorrelated10to2}
\end{figure}
We now analyze equal-$\nu$ power allocation in DAA massive MIMO. Fig.~\ref{EqualUncorrelated100to10} shows the CDFs of the per-user throughput for various configurations of DAA massive MIMO. We highlight that equal-$\nu$ power control achieves a lower per-user throughput than max-min power control for the same DAA configuration. For example, for $\{10,40\}$ DAA massive MIMO, equal-$\nu$ power control achieves a $14.1$ Mbit/s median per-user throughput and a $10.7$ Mbit/s $95\%$ likely performance, which are $5.8\%$ and $23.0\%$ lower, respectively, than those achieved by max-min power control for the same DAA configuration. Furthermore, when $N$ increases from $4$ to $40$, the median per-user throughput increases from $11.5$ Mbit/s to $14.1$ Mbit/s (or equivalently, by $18.5\%$), and the $95\%$ likely performance increases from $4.8$ Mbit/s to $10.7$ Mbit/s (or equivalently, by $55.4\%$).
\begin{figure}[!t]
\centering
\includegraphics[width=3.4in]{EqualUncorrelated100to10}
\caption{The CDFs of equal-$\nu$ power allocation under uncorrelated channel fading for $\{100,4\}$, $\{50,8\}$, $\{25,16\}$, and $\{10,40\}$ DAA configuration.} \label{EqualUncorrelated100to10}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=3.4in]{EqualUncorrelated10to2}
\caption{The CDFs of equal-$\nu$ power allocation under uncorrelated channel fading for $\{10,40\}$, $\{4,100\}$, and $\{2,200\}$ DAA configuration.}\label{EqualUncorrelated10to2}
\end{figure}
Fig.~\ref{EqualUncorrelated10to2} depicts the CDFs of the per-user throughput for equal-$\nu$ power allocation when each sub-array is equipped with $M=10$ or less antenna elements. Similar to our observations in Fig.~\ref{MaxminUncorrelated10to2}, we note that reducing $M$ below $10$ degrades the median per-user throughput and the $95\%$ likely performance. For example, when $M$ reduces from $10$ to $2$, the median per-user throughput decreases from $14.1$ Mbit/s to $12.7$ Mbit/s (or equivalently, by $9.7\%$). Additionally, for the same reduction in $M$, the $95\%$ likely performance reduces from $10.7$ Mbit/s to $9.2$ Mbit/s (or equivalently, by $13.8\%$).
We highlight that under the assumption of uncorrelated channel fading, DAA massive MIMO improves the per-user throughput up to a certain value of $\{M,N\}$. In our simulations, when $M$ is below 10, we observe a loss in the achievable per-user throughput as compared to the maximum throughput achieved when $M=10$. We highlight that this behaviour is due to the significant loss in channel hardening when the number of antenna elements per sub-array is below 10 \cite{Chen2018}.
\begin{table*}[!t]
\centering
\caption{A Summary of the Throughputs (Mbit/s) for various DAA Configurations}
\label{table_simulation}
\centering
\scalebox{1.2}{\begin{tabular}{|c|c|c|c||c|c|} \hline
\textbf{Power Allocation} &\textbf{DAA Configuration} & \multicolumn{2}{c||}{\textbf{Uncorrelated Fading}} & \multicolumn{2}{c|}{\textbf{Correlated Fading}} \\ \hline \hline
& & Median & $95\%$ Likely Performance & Median & $95\%$ Likely Performance \\\hline
\multicolumn{6}{c}{} \\\hline \hline
\multirow{3}{2.5cm}{Max-Min} & $(100,4)$ & $11.6$ & $7.6$ & $10.3$ & $9.3$ \\\hhline{|~-----|}
& $(50,8)$ & $13.6$ & $10.4$ & $10.5$ & $9.6$ \\\hhline{|~-----|}
& $(25,16)$ & $14.2$ & $12.6$ & $11.9$ & $10.9$ \\\hhline{|~-----|}
& $(10,40)$ & \cellcolor{blue!15} $\mathbf{15.0}$ & \cellcolor{blue!15} $\mathbf{13.9}$ & $11.9$ & $11.0$\\\hhline{|~-----|}
& $(4,100)$ & $14.8$ & $13.9$ & $12.0$ & $11.2$ \\\hhline{|~-----|}
& $(2,200)$ & $14.1$ & $13.2$ & \cellcolor{blue!15} $\mathbf{12.9}$ & \cellcolor{blue!15} $\textbf{11.9}$ \\\hline \hline
\multicolumn{6}{c}{} \\\hline \hline
\multirow{3}{2.5cm}{Equal-$\nu$}& $(100,4)$ & $11.5$ & $4.8$ & $7.8$ & $5.0$ \\\hhline{|~-----|}
& $(50,8)$ & $13.2$ & $6.2$ & $9.0$ & $6.6$ \\\hhline{|~-----|}
& $(25,16)$ & ${13.5}$ & $9.3$ & $10.0$ & $6.6$ \\\hhline{|~-----|}
& $(10,40)$ & \cellcolor{blue!15} $\mathbf{14.1}$ & \cellcolor{blue!15} $\mathbf{10.7}$ & $10.6$ & $7.1$ \\\hhline{|~-----|}
& $(4,100)$ & $13.8$ & $10.7$ & $10.6$ & \cellcolor{blue!15} $\mathbf{8.1}^{*}$ \\\hhline{|~-----|}
& $(2,200)$ & $12.7$ & $9.2$ & \cellcolor{blue!15} $\mathbf{11.3}$ & $7.2$ \\\hline \hline
\multicolumn{6}{l}{\scriptsize Note: The unit for all values is Mbit/s. $^{*}$ represents the outlier value.} \\
\end{tabular}}
\end{table*}
\subsection{Max-Min and Equal-$\nu$ Power Allocation under Correlated Channel Fading}
In this subsection, we assume that the wireless channels undergo correlated channel fading. We generate the covariance matrix for the correlated channels using the one-ring channel model \cite{Adhikary2013}. We assume a uniformly distributed angular spread. The standard deviation of the angular spread is $5$ degrees and the spacing between adjacent antenna elements is $\frac{\lambda}{2}$, where $\lambda$ is the wavelength of the frequency used for transmission. Furthermore, we assume that the shadow fading is independent.
\begin{figure}[!t]
\centering
\includegraphics[width=3.4in]{MaxMinCorrelatedAll}
\caption{The CDFs of max-min power allocation under correlated channel fading for $\{100,4\}$, $\{50,8\}$, $\{25,16\}$, $\{10,40\}$, $\{4,100\}$, and $\{2,200\}$ DAA configuration.} \label{MaxMinCorrelatedAll}
\end{figure}
Fig.~\ref{MaxMinCorrelatedAll} shows the CDFs of max-min power control for $\{100,4\}$, $\{50,8\}$, $\{25,16\}$, $\{10,40\}$, $\{4,100\}$, and $\{2,200\}$ DAA massive MIMO. We highlight that the median and the $95\%$ likely performance of the per-user throughput under correlated channel fading are lower than those obtained for uncorrelated channel fading for the same $N$ and $M$. For example, for the $\{10,40\}$ DAA configuration, the median per-user throughput obtained from Fig.~\ref{MaxMinCorrelatedAll} is $11.9$ Mbit/s, which is $20.7\%$ lower than that obtained from Fig.~\ref{MaxminUncorrelated100to10} for the same DAA configuration. Similarly, the $95\%$ likely performance decreases to $11.0$ Mbit/s, which corresponds to a $20.8\%$ decrease as compared to uncorrelated channel fading. We observe that when $N$ increases from $4$ to $200$, the median per-user throughput increases from $10.3$ Mbit/s to $12.9$ Mbit/s (or equivalently, by $19.9\%$). Interestingly, when the channel fading is correlated, the network can deploy as few as two antenna elements per sub-array and still achieve an improvement in the throughput as compared to the configurations with larger $M$ (smaller $N$). Likewise, we also achieve an improvement in the $95\%$ likely performance. For instance, when $N$ increases from $4$ to $200$, the $95\%$ likely performance increases from $9.3$ Mbit/s to $11.9$ Mbit/s (or equivalently, by $21.5\%$). We highlight that, in addition to the increased throughput, max-min power control also improves the fairness in the network because each user achieves the same downlink throughput.
\begin{figure}[!t]
\centering
\includegraphics[width=3.4in]{EqualCorrelatedAll}
\caption{The CDFs of equal-$\nu$ power allocation under correlated channel fading for $\{100,4\}$, $\{50,8\}$, $\{25,16\}$, $\{10,40\}$, $\{4,100\}$, and $\{2,200\}$ DAA configuration.} \label{EqualCorrelatedAll}
\end{figure}
Fig.~\ref{EqualCorrelatedAll} depicts the CDFs of equal-$\nu$ power control for various DAA configurations. We highlight that equal-$\nu$ power allocation achieves a lower per-user throughput as compared to max-min power control. Moreover, the performance of equal-$\nu$ power control under correlated channel fading is lower than that under uncorrelated channel fading. For example, for the $\{10,40\}$ DAA configuration, we observe a $25.1\%$ reduction in the median per-user throughput and a $33.4\%$ reduction in the $95\%$ likely performance as compared to uncorrelated channel fading. Furthermore, we observe that increasing $N$ from $4$ to $200$ increases the median per-user throughput by $31.4\%$. We also observe a $31.1\%$ increase in the $95\%$ likely performance for the same increase in $N$.
Table~\ref{table_simulation} provides a summary of the median and the $95\%$ likely performance for equal-$\nu$ and max-min power allocation under uncorrelated and correlated channel fading. The maximum value for each power allocation (max-min, equal-$\nu$) and fading type (correlated, uncorrelated) is highlighted in light blue colour. For the $95\%$ likely performance, we have considered the value for the $\{4,100\}$ DAA configuration as an outlier. Based on the results in Table~\ref{table_simulation}, we emphasise that the channel covariance is important in determining the performance of the DAA massive MIMO network. We highlight that $\{100,4\}$, which is the closest configuration to co-located massive MIMO, provides the lowest median and $95\%$ likely performance in all the cases, as evident from Table~\ref{table_simulation}. As such, we recommend that DAA massive MIMO should be preferred over co-located massive MIMO regardless of the channel fading conditions (correlated or uncorrelated).
\subsection{Max-Min and Equal-$\nu$ Power Allocation under Limited Channel Covariance Knowledge}
In this subsection, we analyze the impact of having limited channel covariance knowledge on the performance of max-min and equal-$\nu$ power allocations for $K=5$. Fig.~\ref{final-EWMMSE} depicts the CDFs of equal-$\nu$ and max-min power control under the assumption of imperfect channel knowledge, where EW-MMSE is used for channel estimation. We observe that even under the assumption of imperfect channel covariance knowledge, max-min power control is able to achieve better performance as compared to equal-$\nu$ power control. Furthermore, EW-MMSE under the assumption of imperfect channel knowledge performs reasonably well. For example, when max-min power control is used with EW-MMSE for the $\{40,10\}$ configuration, the median per-user throughput is $98\%$ of that obtained when full channel knowledge is available. Similarly, equal-$\nu$ power control with EW-MMSE for the $\{40,10\}$ configuration achieves nearly the same median per-user throughput as when full channel knowledge is available to the BSs. As such, under the current system configuration and simulation parameters, the impact of having limited knowledge about the channel covariance is not significant.
\begin{figure}[!t]
\centering
\includegraphics[width=3.4in, height=2.9in]{final-EWMMSE}
\caption{The CDFs of max-min and equal-$\nu$ power allocation under correlated channel fading for $\{10,40\}$ and $\{40,10\}$ DAA configuration for EW-MMSE and full channel knowledge.} \label{final-EWMMSE}
\end{figure}
\section{Conclusion}
In this paper, we investigated the optimal max-min downlink power allocation in various configurations of the DAA massive MIMO network. To this end, we first derived a generalized closed-form expression for the downlink SINR under the assumption of correlated Rayleigh channel fading. Afterwards, we formulated a network-wide max-min optimization problem based on the derived closed-form downlink SINR expression. We then solved the max-min optimization problem and obtained the downlink power control coefficients. We further compared the median and $95\%$ likely performance of the optimal power allocation with equal-$\nu$ power allocation. Our numerical results indicated that adding DAAs in the network provides a large improvement in the average per-user throughput. Also, we demonstrated that the channel covariance is an important factor in determining the performance of DAA massive MIMO. Overall, the proposed system model gives network designers great flexibility to deploy BS antenna arrays in arbitrary locations and reveals how to reap the benefits offered by massive MIMO.
\appendices
\section{Proof of Theorem \ref{theorem}}\label{SINR_proof}
In this appendix, we derive the closed-form expression for the downlink SINR in \eqref{SINR_chan} for the MMSE channel estimator. The proof follows the same approach as in \cite{Emil2016a}.
We note that for MRT, the downlink precoding vector is given as $\mathbf{\widehat{a}}_{jk}^{n} = \mathbf{\widehat{h}}_{jk}^{jn}$. Accordingly, the numerator of \eqref{SINR_chan} is simplified as
\begin{align}\label{appendix1}
\mathbb{E}\left[(\mathbf{h}_{jk}^{jn})^H \mathbf{\widehat{h}}_{jk}^{jn}\right] &=\textrm{tr}(\mathbf{W}_{jk}^{n}\mathbb{E}[\mathbf{y}_{jk}^n(\mathbf{h}_{jk}^{jn})^H ]),\nonumber \\ &=
\textrm{tr}(\mathbf{W}_{jk}^{n}\mathbf{R}_{jk}^{jn}).
\end{align}
We now simplify the first term in the denominator of \eqref{SINR_chan}. We note that when $(j,k)\neq (l,i)$, $\mathbf{{h}}_{jk}^{ln}$ and $\mathbf{\widehat{h}}_{li}^{ln}$ are independent. For this case we have
\begin{align}\label{first_term}
\mathbb{E}\left[|(\mathbf{h}_{jk}^{ln})^H \mathbf{\widehat{h}}_{li}^{ln} |^2\right] & = \textrm{tr}(\mathbf{W}_{li}^{n}\mathbf{Q}_{li}^{n}(\mathbf{W}_{li}^{n})^H\mathbf{R}_{jk}^{ln}).
\end{align}
We now consider the case where $(j,k)=(l,i)$. In this case, $\mathbf{{h}}_{jk}^{ln}$ and $\mathbf{\widehat{h}}_{li}^{ln}$ are not independent. By utilizing the fact that $\mathbf{h}_{jk}^{ln}$ and $\mathbf{\widehat{h}}_{li}^{ln}-\mathbf{W}_{li}^{n}\mathbf{h}_{jk}^{ln}$ are independent, we have
\begin{align}\label{sec_term}
\mathbb{E}\left[|(\mathbf{h}_{jk}^{ln})^H \mathbf{\widehat{h}}_{li}^{ln} |^2\right]&= \textrm{tr}(\mathbf{W}_{li}^{n}\mathbf{Q}_{li}^{n}(\mathbf{W}_{li}^{n})^H\mathbf{R}_{jk}^{ln}) \notag \\
&+ |\textrm{tr}(\mathbf{W}_{li}^n\mathbf{R}_{jk}^{ln})|^2.
\end{align}
Combining \eqref{first_term} and \eqref{sec_term}, the first term in the denominator of \eqref{SINR_chan} is written as
\begin{align}\label{den_first}
\mathbb{E}\left[|(\mathbf{h}_{jk}^{ln})^H \mathbf{\widehat{h}}_{li}^{ln} |^2\right]&= \textrm{tr}(\mathbf{W}_{li}^{n}\mathbf{Q}_{li}^{n}(\mathbf{W}_{li}^{n})^H\mathbf{R}_{jk}^{ln}) + \nonumber \\
&\begin{cases}
0, & (j,k)\neq(l,i)\\
|\textrm{tr}(\mathbf{W}_{lk}^n\mathbf{R}_{jk}^{ln})|^2, & (j,k)=(l,i)
\end{cases}
\end{align}
Substituting \eqref{appendix1} and \eqref{den_first} in \eqref{SINR_chan} we obtain \eqref{SINR}, which completes the proof.
\section{Proof of Theorem \ref{theorem2}}\label{SINR_proof2}
In this appendix, we derive the closed-form expression for the downlink SINR in \eqref{SINR_chan} for the EW-MMSE channel estimator. Using \eqref{mmse_est2}, we write $\mathbf{\widehat{a}}_{jk}^{n} = \widehat{\mathbf{\bar{h}}}{}_{jk}^{jn}$. Afterwards, the numerator of \eqref{SINR_chan} is rewritten as
\begin{align}\label{appendix2}
\mathbb{E}\left[(\mathbf{h}_{jk}^{jn})^H \mathbf{\widehat{\bar{h}}}{}_{jk}^{jn}\right] &=\textrm{tr}(\mathbf{\widehat{W}}_{jk}^{n}\mathbb{E}[\mathbf{y}_{jk}^n(\mathbf{h}_{jk}^{jn})^H ]),\nonumber \\ &=
\textrm{tr}(\mathbf{\widehat{W}}_{jk}^{n}\mathbf{R}_{jk}^{jn}).
\end{align}
We next simplify the first term in the denominator of \eqref{SINR_chan}. We note that when $(j,k)\neq (l,i)$, $\mathbf{{h}}_{jk}^{ln}$ and $\mathbf{\widehat{\bar{h}}}{}_{li}^{ln}$ are independent. For this case we have
\begin{align}\label{first_term2}
\mathbb{E}\left[|(\mathbf{h}_{jk}^{ln})^H \mathbf{\widehat{\bar{h}}}{}_{li}^{ln} |^2\right] & = \textrm{tr}(\mathbf{\widehat{W}}_{li}^{n}\mathbf{Q}_{li}^{n}(\mathbf{\widehat{W}}_{li}^{n})^H\mathbf{R}_{jk}^{ln}).
\end{align}
We now consider the case where $(j,k)=(l,i)$. In this case, $\mathbf{{h}}_{jk}^{ln}$ and $\mathbf{\widehat{\bar{h}}}{}_{li}^{ln}$ are not independent. However, by noting that $\mathbf{h}_{jk}^{ln}$ and $\mathbf{\widehat{\bar{h}}}{}_{li}^{ln}-\mathbf{\widehat{W}}_{li}^{n}\mathbf{h}_{jk}^{ln}$ are independent, we simplify the term as
\begin{align}\label{sec_term2}
\mathbb{E}\left[|(\mathbf{h}_{jk}^{ln})^H \mathbf{\widehat{\bar{h}}}{}_{li}^{ln} |^2\right]&= \textrm{tr}(\mathbf{\widehat{W}}_{li}^{n}\mathbf{Q}_{li}^{n}(\mathbf{\widehat{W}}_{li}^{n})^H\mathbf{R}_{jk}^{ln}) \notag \\
&+ |\textrm{tr}(\mathbf{\widehat{W}}_{li}^n\mathbf{R}_{jk}^{ln})|^2.
\end{align}
Combining \eqref{first_term2} and \eqref{sec_term2}, the first term in the denominator of \eqref{SINR_chan} is written as
\begin{align}\label{den_first2}
\mathbb{E}\left[|(\mathbf{h}_{jk}^{ln})^H \mathbf{\widehat{\bar{h}}}{}_{li}^{ln} |^2\right]&= \textrm{tr}(\mathbf{\widehat{W}}_{li}^{n}\mathbf{Q}_{li}^{n}(\mathbf{\widehat{W}}_{li}^{n})^H\mathbf{R}_{jk}^{ln}) + \nonumber \\
&\begin{cases}
0, & (j,k)\neq(l,i)\\
|\textrm{tr}(\mathbf{\widehat{W}}_{lk}^n\mathbf{R}_{jk}^{ln})|^2, & (j,k)=(l,i)
\end{cases}
\end{align}
Substituting \eqref{appendix2} and \eqref{den_first2} in \eqref{SINR_chan} we obtain \eqref{SINR_EW}, which completes the proof.
\section{Proof of Proposition \ref{prop_1}} \label{SOCP_proof}
In this appendix, we prove that the constraint set in \eqref{opt_problem1} is convex. We write the first (SINR) constraint of the optimization problem \eqref{opt_problem1} as
\begin{align} \label{socp1}
\frac{\left|\textstyle{\sum_{n=1}^N}\nu_{jk}^n\chi_{jk}^{n}\right|^2} {\textstyle{\sum_{l,i,n}^{L,K,N}}(\nu_{li}^n)^2\zeta_{jk}^{lin} + \textstyle{\sum_{l\neq j,n}^{L,N}}|\nu_{lk}^n\xi_{jk}^{ln}|^2 + \sigma_{n}^2} \geq \widehat{\gamma}_{jk} .
\end{align}
We now introduce an auxiliary variable $\varrho_{jk}^{lin}$ satisfying $|\nu_{lk}^n\xi_{jk}^{ln}|^2 \leq (\varrho_{jk}^{lin})^2 $ and simplify \eqref{socp1} to obtain
\begin{align}\label{socp2}
\left(\textstyle{\sum_{l,i,n}^{L,K,N}}(\nu_{li}^n)^2\zeta_{jk}^{lin} + \textstyle{\sum_{l\neq j,n}^{L,N}} (\varrho_{jk}^{lin})^2 + \sigma_{n}^2\right)^{\frac{1}{2}} \leq \notag \\ \frac{1}{\sqrt{\widehat{\gamma}_{jk}}}{\left|\textstyle{\sum_{n=1}^N}\nu_{jk}^n\chi_{jk}^{n}\right|},
\end{align}
which is equivalent to
\begin{align}\label{socp3}
\|\mathbf{x}_{jk}\| &\leq \frac{1}{\sqrt{\widehat{\gamma}_{jk}}}{\left|\textstyle{\sum_{n=1}^N}\nu_{jk}^n\chi_{jk}^{n}\right|},
\end{align}
where $\mathbf{x}_{jk} = [\mathbf{\tilde{x}}_{jk}~~\mathbf{\bar{x}}_{jk}~~\sqrt{\sigma_{n}^2}]^T$. The terms $\mathbf{\tilde{x}}_{jk}$ and $\mathbf{\bar{x}}_{jk}$ are defined in \textit{Proposition 1}.
We highlight that the constraint given in \eqref{socp3} can be represented in the standard second-order-cone (SOC) form and is therefore convex. Furthermore, the remaining constraints in \eqref{opt_problem1} are convex, so the constraint set of \eqref{opt_problem1} is convex and the optimization problem in \eqref{opt_problem1} is quasi-concave. Accordingly, for a fixed $\gamma$, we re-write the optimization problem in \eqref{opt_problem1} as \eqref{opt_problem2} given in \textit{Proposition 1}.
\section{Introduction}
The Crewther relation~\cite{Crewther:1972kn, Adler:1973kz} provides a non-trivial relation among three fundamental constants, $3S = K R'$, where $S$ is the anomalous constant of $\pi^0\to\gamma\gamma$, $K$ is the coefficient of the Bjorken sum rules for polarized deep-inelastic electron scattering~\cite{Bjorken:1966jh}, and $R'$ is the isovector part of the cross-section ratio for electron-positron annihilation into hadrons~\cite{Adler:1974gd}. In the theory of quantum chromodynamics (QCD)~\cite{Politzer:1973fx, Gross:1973id}, the Crewther relation is extended to the ``Generalized Crewther Relation (GCR)"~\cite{Bjorken:1967px, Bjorken:1969mm, Broadhurst:1993ru, Brodsky:1995tb, Crewther:1997ux, Gabadadze:1995ei, Braun:2003rp}:
\begin{eqnarray}
D^{\rm NS}(a_s) C^{\rm Bjp}(a_s) &=& 1+\Delta^{*}_{\rm csb}, \label{GCR1}
\end{eqnarray}
or
\begin{eqnarray}
D(a_s) C^{\rm GLS}(a_s) &=& 1+\Delta_{\rm csb}, \label{eq:GCR}
\end{eqnarray}
where $a_s = \alpha_s/\pi$, $D^{\rm NS}$ is the non-singlet Adler function, $C^{\rm Bjp}$ is derived from the Bjorken sum rules for polarized deep-inelastic electron scattering, $D$ is the Adler function, and $C^{\rm GLS}$ is the coefficient of the Gross-Llewellyn Smith (GLS) sum rules~\cite{Gross:1969jf}. The $\Delta^{*}_{\rm csb}$ and $\Delta_{\rm csb}$ are conformal-breaking terms; for example, the $\Delta_{\rm csb}$-term takes the form
\begin{eqnarray}
\Delta_{\rm csb} = \frac{\beta(a_s)}{a_s}K(a_s)=-\sum_{i\geq2}\sum_{k=1}^{i-1} K_k \beta_{i-1-k} a_s^i , \label{eq:csbbeta}
\end{eqnarray}
where $\beta(a_s)=-\sum_{i\geq0}\beta_{i} a_s^{i+2}$ is the usual $\beta$-function, and the coefficients $K_k$ are free of $\{\beta_i\}$-functions.
The perturbative QCD (pQCD) corrections to $D^{\rm NS}$ and the Bjorken sum rules have been computed up to the ${\cal O}(\alpha_s^3)$-level~\cite{Gorishnii:1990vf, Surguladze:1990tg, Chetyrkin:1994js, Chetyrkin:1996ez, Larin:1991tj} and the ${\cal O}(\alpha_s^4)$-level~\cite{Baikov:2008jh, Baikov:2010je}, respectively. The GCR (\ref{GCR1}) between the non-singlet Adler function and the Bjorken sum rules has been discussed in Refs.\cite{Cvetic:2016rot, Garkusha:2018mua, Shen:2016dnq}. The GCR (\ref{eq:GCR}) between the Adler function and the GLS sum rules has been investigated up to the ${\cal O}(\alpha_s^3)$-level in Ref.\cite{Brodsky:1995tb}. Using the known $\alpha_s^4$-order corrections~\cite{Baikov:2010iw, Baikov:2012zn}, we have the chance to derive a more accurate GCR (\ref{eq:GCR}) up to the ${\cal O}(\alpha_s^4)$-level. It is well known that a physical observable is independent of any choice of theoretical conventions such as the renormalization scheme and renormalization scale. This property is called ``renormalization group invariance" (RGI)~\cite{Petermann:1953wpa, GellMann:1954fq, Peterman:1978tb, Callan:1970yg, Symanzik:1970rt}. For a fixed-order pQCD prediction, if the perturbative coefficient before $\alpha_s$ and the corresponding $\alpha_s$-value at each order do not match well with each other, the RGI shall be explicitly broken~\cite{Wu:2013ei, Wu:2014iba}. Conventionally, people adopt the ``guessed" typical momentum flow of the process as the optimal renormalization scale, with the purpose of eliminating the large logarithms so as to improve the pQCD convergence, of minimizing the contributions from the higher-order loop diagrams, or of achieving a theoretical prediction in agreement with the experimental data. Such a treatment directly breaks the RGI and reduces the predictive power of pQCD. Thus it is important to have a proper scale-setting approach to achieve a scale-invariant fixed-order prediction.
In the literature, the principle of maximum conformality (PMC)~\cite{Brodsky:2011ta, Brodsky:2012rj, Mojaza:2012mf, Brodsky:2013vpa} has been proposed to eliminate those two artificial ambiguities. The purpose of the PMC is not to find an optimal renormalization scale but to fix an effective $\alpha_s$ of the process by using the renormalization group equation (RGE); and the PMC prediction satisfies all self-consistency conditions of the renormalization group~\cite{Brodsky:2012ms}. Two multi-scale approaches have been suggested to achieve the goal of the PMC, which are equivalent in the sense of perturbation theory~\cite{Bi:2015wea}, and a collection of their successful applications can be found in Ref.\cite{Wu:2015rga}. For the multi-scale approach, the PMC sets the scales via an order-by-order manner; the individual scales at each order reflect the varying virtuality of the amplitudes at those orders. It has been noted that the PMC multi-scale approach has two types of residual scale dependence because of unknown perturbative terms~\cite{Zheng:2013uja}. Those residual scale dependences are both $\alpha_s$-power suppressed and exponentially suppressed, but their magnitudes could still be large due to the poor convergence of the perturbative series of either the PMC scale or the pQCD approximant~\cite{Wu:2019mky}.
Lately, the PMC single-scale approach~\cite{Shen:2017pdu} has been suggested to suppress the residual scale dependence by introducing an overall effective $\alpha_s$. The argument of such an effective $\alpha_s$ corresponds to the overall effective momentum flow of the process, which is also independent of any choice of renormalization scale. It has been shown that by using the PMC single-scale approach and the $C$-scheme strong coupling constant~\cite{Boito:2016pwf}, one can achieve a strict demonstration of the scheme-invariant and scale-invariant PMC prediction up to any fixed order~\cite{Wu:2018cmb}. Moreover, the resulting renormalization scheme- and scale-independent conformal series is helpful not only for achieving precise pQCD predictions but also for a reliable prediction of the contributions of unknown higher orders, which can be estimated by using the $\rm Pad\acute{e}$ resummation approach~\cite{Basdevant:1972fe, Samuel:1992qg, Samuel:1995jc}; some of its applications have been performed in Refs.\cite{Du:2018dma, Yu:2018hgw, Yu:2019mce, Huang:2020rtx, Yu:2020tri}. In the present paper, we shall adopt the PMC single-scale approach to deal with the GCR (\ref{eq:GCR}), and then, for the first time, we shall estimate the unknown $5_{\rm th}$-loop contributions for the GCR (\ref{eq:GCR}). A novel demonstration of the scheme independence of the commensurate scale relation up to all orders shall also be presented.
\section{Generalized Crewther relation under the PMC single-scale approach} \label{II}
It is helpful to define the effective charge for a physical observable~\cite{Grunberg:1980ja, Grunberg:1982fw, Dhar:1983py}, which incorporates the entire radiative correction into its definition. For example, the GLS sum rules indicate that the isospin singlet structure function $xF_3(x,Q^2)$ satisfies an unsubtracted dispersion relation~\cite{Gross:1969jf}, i.e.,
\begin{eqnarray}
\frac{1}{2} \int_0^1 \frac{d x}{x} xF_3(x,Q^2) = 3 C^{\rm GLS}(a_s),
\end{eqnarray}
where $xF_3(x,Q^2)=xF^{\nu p}_3(x,Q^2)+xF^{\bar{\nu} p}_3(x,Q^2)$, and $Q$ is the momentum transfer. The entire radiative QCD corrections can be defined as an effective charge $a_{F_3}(Q)$. Moreover, the Adler function of the cross-section ratio for the electron-positron annihilation into hadrons~\cite{Adler:1974gd}
\begin{eqnarray}
D(Q^2) &=& -12 \pi^2 Q^2 \frac{d}{d Q^2}\Pi(Q^2), \label{AdlerF}
\end{eqnarray}
where
\begin{eqnarray}
\Pi(Q^2)&=& -\frac{Q^2}{12\pi^2}\int^{\infty}_{4 m_{\pi}^2}\frac{R_{e^+e^-}(s)ds}{s(s+Q^2)}.
\end{eqnarray}
The Adler function $D$ can be defined as the effective charge $a_D(Q)$. Thus the Adler function $D$ and the GLS sum rules coefficient $C^{\rm GLS}$ can be rewritten as
\begin{eqnarray}
D(a_s)&=&1+ a_D(Q),\label{D}\\
C^{\rm GLS}(a_s)&=&1- a_{F_3}(Q).\label{CGLS}
\end{eqnarray}
The effective charges $a_D(Q)$ and $a_{F_3}(Q)$ are by definition pQCD calculable, which can be expressed as the following perturbative form,
\begin{eqnarray}
a_{\cal S}&=&r^{\cal S}_{1,0}a_s + (r^{\cal S}_{2,0}+\beta_{0}r^{\cal S}_{2,1}) a_s^{2}\nonumber\\
&&+(r^{\cal S}_{3,0}+\beta_{1}r^{\cal S}_{2,1}+ 2\beta_{0}r^{\cal S}_{3,1}+ \beta_{0}^{2}r^{\cal S}_{3,2})a_s^{3}\nonumber\\
&& +(r^{\cal S}_{4,0}+\beta_{2}r^{\cal S}_{2,1}+ 2\beta_{1}r^{\cal S}_{3,1} + \frac{5}{2}\beta_{1}\beta_{0}r^{\cal S}_{3,2} \nonumber\\
&&+3\beta_{0}r^{\cal S}_{4,1}+3\beta_{0}^{2}r^{\cal S}_{4,2} +\beta_{0}^{3}r^{\cal S}_{4,3}) a_s^{4}+\mathcal{O}(a^5_s), \label{cDij}
\end{eqnarray}
where ${\cal S}=D$ or ${F_3}$, respectively. $r^{\cal S}_{i,j=0}$ are conformal coefficients with $r^{\cal S}_{1,0}=1$, and $r^{\cal S}_{i,j\neq0}$ are nonconformal coefficients. The $\beta$-pattern at each order is determined by the RGE~\cite{Mojaza:2012mf, Brodsky:2013vpa}. The coefficients $r^{D, F_3}_{i,j}$ up to the four-loop level under the $\overline{\rm MS}$ scheme can be read from Refs.\cite{Baikov:2012zm, Baikov:2010je} by using the general QCD degeneracy relations~\cite{Bi:2015wea}.
\begin{figure}[htb]
\includegraphics[width=0.50\textwidth]{Qstar.eps}
\caption{The PMC scales ${\tilde Q}_*$ and ${\hat Q}_*$ versus the momentum scale $Q$ up to LL, NLL and N$^2$LL accuracies, respectively.} \label{Qstar}
\end{figure}
After applying the PMC single-scale approach~\cite{Shen:2017pdu}, all $\{\beta_i\}$-terms are used to fix the correct $\alpha_s$-value by using the RGE, and up to four-loop accuracy, we obtain the following conformal series
\begin{eqnarray}
D(a_s)|_{\rm PMC}&=&1+\left[r^D_{1,0}a_s({\tilde Q}_*)+r^D_{2,0}a^2_s({\tilde Q}_*) \right. \nonumber \\
&& \quad\quad \left. +r^D_{3,0}a^3_s({\tilde Q}_*) +r^D_{4,0}a^4_s({\tilde Q}_*) \right], \\
C^{\rm GLS}(a_s)|_{\rm PMC}&=&1- \left[r^{F_3}_{1,0}a_s({\hat Q}_*) +r^{F_3}_{2,0}a^2_s({\hat Q}_*) \right. \nonumber \\
&& \quad\quad \left. +r^{F_3}_{3,0}a^3_s({\hat Q}_*) +r^{F_3}_{4,0}a^4_s({\hat Q}_*) \right], \label{CGLSpmc}
\end{eqnarray}
where ${\tilde Q}_*$ and ${\hat Q}_*$ are given as perturbative series, which can be derived from the pQCD series of $a_D$ and $a_{F_3}$. ${\tilde Q}_*$ and ${\hat Q}_*$ correspond to the overall momentum flows of the effective charges $a_D(Q)$ and $a_{F_3}(Q)$, which are independent of any choice of renormalization scale. This property confirms that the PMC does not choose an optimal renormalization scale, but finds the correct momentum flow of the process. Using the four-loop pQCD series of $a_D$ and $a_{F_3}$, we determine their magnitudes up to N$^2$LL accuracy by replacing the coefficients ${\hat r}_{i,j}$ in Eqs.~(8-11) of Ref.\cite{Shen:2017pdu} with $r^D_{i,j}$ or $r^{F_3}_{i,j}$, respectively. We present ${\tilde Q}_*$ and ${\hat Q}_*$ up to different accuracies in Fig.~\ref{Qstar}. As shown by Fig.~\ref{Qstar}, because the perturbative series of $a_D$ and $a_{F_3}$ have good perturbative convergence, their magnitudes at different accuracies, such as the leading-log (LL), the next-to-leading-log (NLL), and the next-to-next-to-leading-log (N$^2$LL) accuracies, are very close to each other. Thus one can treat the N$^2$LL-accurate ${\tilde Q}_*$ and ${\hat Q}_*$ as their exact values.
\begin{figure}[htb]
\includegraphics[width=0.50\textwidth]{RPMCS.eps}
\caption{The $D(a_s) C^{\rm GLS}(a_s)$ versus the momentum $Q$ under the PMC single-scale approach up to two-loop, three-loop, and four-loop levels, respectively. The uncalculated five-loop result is predicted by using the Pad\'{e} approximation approach. } \label{RPMC}
\end{figure}
By applying the PMC single-scale approach, one can improve the precision of $D(a_s)$ and $C^{\rm GLS}(a_s)$. Fig.~\ref{RPMC} shows the PMC predictions of $D(a_s) C^{\rm GLS}(a_s)$ up to two-loop, three-loop, and four-loop levels, respectively. Moreover, the PMC scheme-invariant and scale-invariant conformal series provides a reliable platform for predicting the uncalculated higher-order terms~\cite{Du:2018dma}. For the first time, we also present the uncalculated five-loop result in Fig.~\ref{RPMC}, which is predicted by using the Pad\'{e} approximation approach (PAA)~\cite{Basdevant:1972fe, Samuel:1992qg, Samuel:1995jc} and by using the preferable [N/M]=[0/n-1]-type Pad\'{e} generating function, which makes the PAA geometric series self-consistent with the PMC prediction~\cite{Du:2018dma}. More explicitly, the PAA has been introduced for estimating the $(n+1)_{\rm th}$-order coefficient of a given $n_{\rm th}$-order perturbative series and for making feasible conjectures on the likely high-order behavior of the series. For example, for the following conformal series
\begin{equation}
\rho(Q)=\sum^n_{i=1} r_{i,0}\; a_s^i,
\end{equation}
for the present cases, we have $n=4$ and $\rho(Q)=a_D|_{\rm PMC}$ or $a_{F_3}|_{\rm PMC}$; its $[N/M]$-type fractional generating function by using the PAA is defined as
\begin{equation}
\rho^{N/M}(Q) = a_s\times\frac{b_0+b_1 a_s+\cdots+b_N a_s^N}{1+c_1 a_s+\cdots+c_M a_s^M}, \label{Paam}
\end{equation}
where $M\geq1$ and $N+M=n-1$. Then the unknown $(n+1)_{\rm th}$-order coefficient $r_{n+1,0}$ can be predicted from the known coefficients $\{r_{1,0},\cdots,r_{n,0}\}$ by expanding the fractional generating function over $a_s$. That is, Eq.(\ref{Paam}) can be re-expressed as
\begin{equation}
\rho^{N/M}(Q) = \sum^n_{i=1} r_{i,0} a_s^i + r_{n+1,0} a_s^{n+1}+\cdots.
\end{equation}
We first express all the coefficients $\{b_0,\cdots,b_N\}$ and $\{c_1,\cdots,c_M\}$ in terms of the known coefficients $r_{\{1,\cdots,n\},0}$, and then obtain the coefficient $r_{n+1,0}$ in terms of $\{b_i\}$ and $\{c_i\}$, which can finally be expressed by $\{r_{1,0},\cdots,r_{n,0}\}$; a numerical sketch of this procedure is given below. We present the PAA predictions for $a_D|_{\rm PMC}$ and $a_{F_3}|_{\rm PMC}$ in Table~\ref{pmcpaa1} and Table~\ref{pmcpaa2}, respectively, in which the known coefficients (``Exact") at different orders are also presented as comparisons. They show that as more perturbative terms are known, the PAA-predicted coefficients become closer to the ``Exact" ones.
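The following Python sketch implements this procedure for the preferable $[0/(n-1)]$-type generating function; the order-by-order matching conditions are rewritten here as an equivalent linear system for the $\{c_i\}$, and the function name and array layout are our own choices rather than part of the PAA literature.
\begin{verbatim}
import numpy as np

# Predict r_{n+1,0} from r = [r_{1,0}, ..., r_{n,0}] using the [0/(n-1)]-type
# Pade generating function  a*b0/(1 + c_1 a + ... + c_{n-1} a^{n-1}).
# Matching (r_1 a + ... + r_n a^n)(1 + c_1 a + ...) = b0 a order by order
# yields a linear system for the c_j; then
#   r_{n+1} = -(c_1 r_n + c_2 r_{n-1} + ... + c_{n-1} r_2).
def paa_next_coefficient(r):
    r = np.asarray(r, dtype=float)
    n = len(r)
    A = np.array([[r[k - j - 1] if k - j - 1 >= 0 else 0.0
                   for j in range(n - 1)] for k in range(1, n)])
    c = np.linalg.solve(A, -r[1:])
    return -sum(c[j] * r[n - 1 - j] for j in range(n - 1))

# conformal coefficients of a_D|_PMC -> five-loop [0/3] prediction ~ -34.2
print(paa_next_coefficient([1.0, 1.84028, -3.03222, -23.2763]))
\end{verbatim}
This reproduces, for instance, the $[0/3]$ entry $r^D_{5,0}\simeq-34.2$ quoted in Table~\ref{pmcpaa1}.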
\begin{table}[htb]
\begin{tabular}{ccc}
\hline
~~~ ~~~ & ~~~$\rm Exact$~~~ & ~~~$\rm PAA$ ~~~ \\
\hline
$r^D_{1,0}$ & ~1 & - \\
\hline
$r^D_{2,0}$ & ~1.84028 & - \\
\hline
$r^D_{3,0}$ & -3.03222 & [0/1]:~3.38662 \\
\hline
$r^D_{4,0}$ & -23.2763 & [0/2]:-17.3926 \\
\hline
$r^D_{5,0}$ & - & [0/3]:-34.1991 \\
\hline
\end{tabular}
\caption{The preferable [0/$n$-1]-type PAA predictions of three-, four-, and five-loop coefficients of $a_{D}|_{\rm PMC}$ under the PMC single-scale approach. The known coefficients (``Exact") are also presented as comparisons.} \label{pmcpaa1}
\end{table}
\begin{table}[htb]
\begin{tabular}{ccc}
\hline
~~~ ~~~ & ~~~$\rm Exact$~~~ & ~~~$\rm PAA$ ~~~ \\
\hline
$r^{F_3}_{1,0}$ & ~1 & - \\
\hline
$r^{F_3}_{2,0}$ & ~0.840278 & - \\
\hline
$r^{F_3}_{3,0}$ & -5.71277 & [0/1]:~0.706067 \\
\hline
$r^{F_3}_{4,0}$ & -16.0776 & [0/2]:-10.1939 \\
\hline
$r^{F_3}_{5,0}$ & - & [0/3]:~18.2157 \\
\hline
\end{tabular}
\caption{The preferable [0/$n$-1]-type PAA predictions of three-, four-, and five-loop coefficients of $a_{F_3}|_{\rm PMC}$ under the PMC single-scale approach. The known coefficients (``Exact") are also presented as comparisons. } \label{pmcpaa2}
\end{table}
Fig.~\ref{RPMC} shows that the two-, three-, and four-loop predictions and the predicted five-loop result are close in shape; in particular, the predicted five-loop curve almost coincides with the four-loop one. This is due to the fact that the PMC conformal series is free of renormalon divergence~\cite{Beneke:1994qe, Neubert:1994vb, Beneke:1998ui}, which in turn results in good pQCD convergence. The scheme-independent $D(a_s)|_{\rm{PMC}}$ and $1/C^{\rm GLS}(a_s)|_{\rm PMC}$ have the same conformal coefficients, but as shown by Fig.~\ref{Qstar}, their PMC scales are not identical, ${\tilde Q}_*\neq{\hat Q}_*$, thus there is a small deviation from unity:
\begin{equation}
D(a_s)|_{\rm PMC}~C^{\rm GLS}(a_s)|_{\rm PMC} \approx 1. \label{eq:GCRPMC0}
\end{equation}
This deviation, though very small in the large-$Q$ region (e.g., for $Q\simeq10^3$ GeV the ratio is about $0.998$ for the five-loop prediction), reflects the intrinsic conformal-breaking nature of QCD.
In the GCR (\ref{eq:GCR}), the scheme-dependent $\Delta_{\rm csb}$-term has been introduced to collect all the conformal-breaking terms. This leads to an explicit scheme dependence of the GCR (\ref{eq:GCR}) under the conventional scale-setting approach, due to the mismatching of $\alpha_s$ and its corresponding expansion coefficients. On the other hand, by applying the PMC single-scale approach, we can achieve an exactly scheme- and scale-invariant GCR at any fixed order. More explicitly, after applying the PMC single-scale approach, we obtain the following conformal series,
\begin{eqnarray}
a_D(Q)=\sum_{i=1}^n a_{F_3}^i(Q_*), \label{eqGSICR}
\end{eqnarray}
where all the coefficients are exactly equal to $1$, and the PMC scale $Q_*$ is independent of the choice of renormalization scale; it can be fixed up to N$^2$LL accuracy by using the known four-loop coefficients\footnote{ The scale $Q_*$ satisfies Eqs.(8-11) of Ref.\cite{Shen:2017pdu}, whose value can be determined by replacing $a_s$ and ${\hat r}_{i,j}$ with $a_{F_3}$ and $r^{D F_3}_{i,j}$. The $r^{D F_3}_{i,j}$ is a function of $r^D_{i,j}$ and $r^{F_3}_{i,j}$~\cite{Brodsky:2013vpa}. One can derive Eqs.(\ref{T0}-\ref{T2}) by substituting those functions into Eqs.(9-11) of Ref.\cite{Shen:2017pdu}. }, i.e.,
\begin{eqnarray}
\ln\frac{Q^2_*}{Q^2}=T_0 +T_1 a_{F_3}(Q) +T_2 a^2_{F_3}(Q),
\label{qstar}
\end{eqnarray}
where
\begin{eqnarray}
T_0&=&r^{F_3}_{2,1}-r^D_{2,1}, \label{T0}\\
T_1&=&2(r^{F_3}_{3,1}-r^D_{3,1}+r^D_{2,0} r^D_{2,1}-r^{F_3}_{2,0} r^{F_3}_{2,1}) \nonumber \\ &&+[r^{F_3}_{3,2}-r^D_{3,2}+(r^D_{2,1})^2-(r^{F_3}_{2,1})^2]\beta_0 ,\\
T_2&=&3(r^{F_3}_{4,1}-r^D_{4,1})-2r^{F_3}_{2,1} r^{F_3}_{3,0}+4r^D_{2,0} r^D_{3,1}+3r^D_{2,1} r^D_{3,0}\nonumber \\
&&-4(r^D_{2,0})^2 r^D_{2,1}-r^D_{2,1} r^{F_3}_{3,0}+(r^{F_3}_{2,0})^2(r^D_{2,1}+5r^{F_3}_{2,1})\nonumber \\
&&-2r^{F_3}_{2,0}(3r^{F_3}_{3,1}-r^D_{3,1}+r^D_{2,0} r^D_{2,1})\nonumber \\
&&+\{3(r^{F_3}_{4,2}-r^D_{4,2})+2r^D_{2,0} r^D_{3,2}+4r^D_{3,1} r^D_{2,1}\nonumber \\
&&-3r^D_{2,0} (r^D_{2,1})^2-2r^{F_3}_{2,1}(3r^{F_3}_{3,1}-r^D_{3,1}+r^D_{2,0} r^D_{2,1})\nonumber \\
&&+r^{F_3}_{2,0}[r^D_{3,2}-3r^{F_3}_{3,2}+6(r^{F_3}_{2,1})^2-(r^D_{2,1})^2]\}\beta_0\nonumber \\
&&+\{r^{F_3}_{4,3}-r^D_{4,3}+2r^D_{3,2}r^D_{2,1}+2(r^{F_3}_{2,1})^3-(r^D_{2,1})^3\nonumber \\
&&+r^{F_3}_{2,1}[r^D_{3,2}-3r^{F_3}_{3,2}-(r^D_{2,1})^2]\}\beta_0^2\nonumber \\
&&+\frac{3}{2}[r^{F_3}_{3,2}-r^D_{3,2}+(r^D_{2,1})^2-(r^{F_3}_{2,1})^2]\beta_1. \label{T2}
\end{eqnarray}
Eq.~(\ref{eqGSICR}) is exactly scheme independent and can be treated as a kind of commensurate scale relation (CSR). The CSR was suggested in Ref.\cite{Brodsky:1994eh} with the purpose of ensuring the scheme independence of the pQCD approximants among different renormalization schemes, and all the original CSRs suggested in Ref.\cite{Brodsky:1994eh} are at the NLO level. The PMC single-scale approach provides a way to extend the CSR to any order. A general demonstration of the scheme independence of the CSR (\ref{eqGSICR}) shall be given in the next section. As a special case, taking the conformal limit in which all $\{\beta_i\}$-terms tend to zero, we have $Q_* \to Q$, and the relation (\ref{eqGSICR}) reduces to the original Crewther relation~\cite{Broadhurst:1993ru}
\begin{eqnarray}
[1+a_D(Q)][1-a_{F_3}(Q)]\equiv 1.
\end{eqnarray}
By applying the PMC single-scale approach, one can obtain similar scheme-independent relations among different observables. The relation (\ref{eqGSICR}) not only provides a fundamental scheme-independent relation but also has phenomenologically useful consequences.
For example, the effective charge $a_{F_3}$ can be related to the effective charge $a_R$ of the $R$-ratio for the $e^+e^-$ annihilation cross section ($R_{e^+e^-}$). The measurable $R$-ratio can be expressed by the perturbatively calculated Adler function, i.e.,
\begin{eqnarray}
R_{e^+e^-}(s)&=&\frac{1}{2\pi i}\int^{-s+i \epsilon}_{-s-i \epsilon}\frac{D(a_s(Q))}{Q^2}dQ^2 \nonumber\\
&=&3\sum_f q_f^2(1+a_R(Q)),
\end{eqnarray}
where $q_f$ is the electric charge of the active flavor. Similarly, the perturbative series of the effective charge $a_R(Q)$ can be written as
\begin{eqnarray}
a_R&=&r^R_{1,0}a_s + (r^R_{2,0}+\beta_{0}r^R_{2,1})a_s^{2}\nonumber\\
&&+(r^R_{3,0}+\beta_{1}r^R_{2,1}+ 2\beta_{0}r^R_{3,1}+ \beta_{0}^{2}r^R_{3,2})a_s^{3}\nonumber\\
&& +(r^R_{4,0}+\beta_{2}r^R_{2,1}+ 2\beta_{1}r^R_{3,1} + \frac{5}{2}\beta_{1}\beta_{0}r^R_{3,2} \nonumber\\
&&+3\beta_{0}r^R_{4,1}+3\beta_{0}^{2}r^R_{4,2}+\beta_{0}^{3}r^R_{4,3}) a_s^{4}+\mathcal{O}(a^5_s),
\end{eqnarray}
where the coefficients $r^R_{i,j}$ under the $\overline{\rm MS}$-scheme can be derived from Refs.\cite{Baikov:2008jh, Baikov:2010je, Baikov:2012zm, Baikov:2012zn}~\footnote{Here, we will not consider the nonperturbative contributions, which may be important in the small-$Q^2$ region~\cite{Shifman:1978bx, Jaffe:1982pm, Shuryak:1981kj, Shuryak:1981pi, Ji:1993sv, Braun:1986ty, Balitsky:1989jb, Ross:1993gb, Stein:1994zk, Stein:1995si}, but are negligible for comparatively large $s$ and $Q^2$.}. After applying the PMC single-scale approach, we obtain a relation between $a_R$ and $a_{F_3}$ up to N$^3$LO, i.e.,
\begin{eqnarray}
a_{F_3}(Q)&=& \sum_{i=1}^{n} r_{i,0} a^{i}_R(\bar{Q}_*), \label{aF3}
\end{eqnarray}
whose first four coefficients are
\begin{eqnarray}
r_{1,0}&=&1, \\
r_{2,0}&=&-1, \\
r_{3,0}&=&1+\gamma^{\rm S}_3\left[\frac{99}{8}-\frac{3(\sum_f q_f)^2}{4\sum_f q^2_f}\right],\\
r_{4,0}&=&-1+\frac{(\sum_f q_f)^2}{\sum_f q^2_f}\left(\frac{27\gamma^{\rm NS}_2 \gamma^{\rm S}_3}{16}+\frac{3\gamma^{\rm S}_3}{2}-\frac{3\gamma^{\rm S}_4}{4}\right)\nonumber \\
&&-\gamma^{\rm S}_3 \left(\frac{99}{4}+\frac{891\gamma^{\rm NS}_2}{32}\right) +\frac{99\gamma^{\rm S}_4}{8},
\end{eqnarray}
where $\gamma^{\rm S, NS}_{i}$ are singlet and non-singlet anomalous dimensions which are unrelated to $\alpha_s$-renormalization, and the effective PMC scale $\bar{Q}_*$ can be determined up to N$^2$LL accuracy, which reads
\begin{eqnarray}
\ln\frac{\bar{Q}^2_*}{Q^2}=S_0+S_1 a_R(\bar{Q}_*)+S_2 a_R^2(\bar{Q}_*),
\label{qstarF3}
\end{eqnarray}
where
\begin{eqnarray}
S_0=&&-\frac{r_{2,1}}{r_{1,0}}, \\
S_1=&&\frac{ \beta _0 (r_{2,1}^2-r_{1,0} r_{3,2})}{r_{1,0}^2}+\frac{2 (r_{2,0} r_{2,1}-r_{1,0} r_{3,1})}{r_{1,0}^2},
\end{eqnarray}
and
\begin{eqnarray}
S_2=&&\frac{3 \beta _1 (r_{2,1}^2-r_{1,0}r_{3,2})}{2 r_{1,0}^2}\nonumber\\
&&\!\!\!\!\!\!\!\! +\frac{4(r_{1,0} r_{2,0} r_{3,1}-r_{2,0}^2 r_{2,1})+3(r_{1,0} r_{2,1} r_{3,0}-r_{1,0}^2 r_{4,1})}{r_{1,0}^3} \nonumber \\
&&\!\!\!\!\!\!\!\! +\frac{ \beta _0 (6 r_{2,1} r_{3,1} r_{1,0}-3 r_{4,2} r_{1,0}^2+2 r_{2,0} r_{3,2} r_{1,0}-5 r_{2,0} r_{2,1}^2)}{r_{1,0}^3}\nonumber\\
&&\!\!\!\!\!\!\!\! +\frac{ \beta _0^2 (3 r_{1,0} r_{3,2} r_{2,1}- 2r_{2,1}^3- r_{1,0}^2 r_{4,3})}{ r_{1,0}^3}.
\end{eqnarray}
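Since $\bar{Q}_*$ appears on both sides of Eq.~(\ref{qstarF3}) through $a_R(\bar{Q}_*)$, it can be obtained numerically by a simple fixed-point iteration, as in the following Python sketch; the running effective charge \texttt{a\_R} must be supplied externally (e.g., from a four-loop RGE solver), which is an assumption of the sketch rather than part of the derivation.
\begin{verbatim}
import numpy as np

# Solve ln(Qstar^2/Q^2) = S0 + S1*a_R(Qstar) + S2*a_R(Qstar)^2 by fixed-point
# iteration.  `a_R` is a user-supplied running effective charge (assumed).
def pmc_scale(Q, S0, S1, S2, a_R, tol=1e-10, max_iter=100):
    Qstar = Q
    for _ in range(max_iter):
        a = a_R(Qstar)
        Qnew = Q * np.exp(0.5 * (S0 + S1 * a + S2 * a * a))
        if abs(Qnew - Qstar) < tol * Q:
            return Qnew
        Qstar = Qnew
    return Qstar
\end{verbatim}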
Experimentally, the effective charge $a_R$ has been constrained by measuring $R_{e^+e^-}$ above the thresholds for the production of $(c\bar{c})$-bound state~\cite{Mattingly:1993ej}, i.e.
\begin{equation}
a^{\rm exp}_R(\sqrt{s}=5 {\rm GeV})\simeq 0.08 \pm0.03. \label{ar}
\end{equation}
Substituting it into Eq.~(\ref{aF3}), we obtain
\begin{eqnarray}
a_{F_3}(Q=12.58^{+1.48}_{-1.26}~\rm GeV)
&=&0.073^{+0.025}_{-0.026}, \label{aF3result1}
\end{eqnarray}
which is consistent with the PMC prediction derived directly from Eq.~(\ref{CGLSpmc}) within errors, i.e.,
\begin{equation}
a_{F_3}(Q=12.58^{+1.48}_{-1.26}~{\rm GeV})|_{\rm PMC}=0.063^{+0.002+0.001}_{-0.001-0.001},
\end{equation}
where the first error is for $\Delta Q=\left(^{+1.48}_{-1.26}\right)~{\rm GeV}$ and the second error is for $\Delta\alpha_{s}(M_Z) =0.1179\pm0.0011$~\cite{Zyla:2020zbs}. At present, the GLS sum rules are measured at small $Q^2$-values~\cite{Leung:1992yx, Quintas:1992yv}; an extrapolation of the data gives~\cite{Kataev:1994rj}
\begin{equation}
a^{\rm ext}_{F_3}(Q=12.25~{\rm GeV}) \simeq 0.093\pm 0.042,
\end{equation}
which agrees with our prediction within errors.
\section{A demonstration of the scheme independence of commensurate scale relation}
In this section, we give a novel demonstration of the scheme independence of the CSR to all orders by relating different pQCD approximants within the effective charge method. The effective charge $a_A$ can be expressed as a perturbative series over another effective charge $a_B$,
\begin{eqnarray}
a_A &=& r^{\rm AB}_{1,0}a_{B} + (r^{\rm AB}_{2,0}+\beta_{0}r^{\rm AB}_{2,1})a_{B}^{2}+ \nonumber\\
&& (r^{\rm AB}_{3,0}+\beta_{1}r^{\rm AB}_{2,1}+ 2\beta_{0}r^{\rm AB}_{3,1}+ \beta_{0}^{2}r^{\rm AB}_{3,2})a_{B}^{3}+ \nonumber\\
&& (r^{\rm AB}_{4,0}+\beta^{B}_{2}r^{\rm AB}_{2,1}+ 2\beta_{1}r^{\rm AB}_{3,1} + \frac{5}{2}\beta_{1}\beta_{0}r^{\rm AB}_{3,2}+ \nonumber\\
&& 3\beta_{0}r^{\rm AB}_{4,1}+3\beta_{0}^{2}r^{\rm AB}_{4,2}+\beta_{0}^{3}r^{\rm AB}_{4,3}) a_{B}^{4}+\mathcal{O}(a_{B}^5), \label{eq:aAaB}
\end{eqnarray}
where $a_A$ and $a_B$ stand for the effective charges under arbitrary schemes $A$ and $B$, respectively. The previously mentioned $a_D$ and $a_{F_3}$ are examples of such effective charges. The $\{\beta_i\}$-functions are usually calculated under the $\overline{\rm MS}$-scheme, and the $\{\beta_i\}$-functions for the $A/B$ scheme can be obtained by using their relation to the $\overline{\rm MS}$-scheme ones, i.e. $\beta^{A/B}={\partial \alpha_{s, A/B}}/{\partial \alpha_{s, \overline{\rm MS}}}{\beta^{\overline{\rm MS}}}$. The effective charge $a_B$ at any scale $\mu$ can be expanded in terms of a $C$-scheme coupling $\hat{a}_B(\mu)$ at the same scale~\cite{Wu:2018cmb}, i.e.,
\begin{eqnarray}
a_{B} &=& {\hat a}_{B}+C\beta_0 {\hat a}_{B}^2+ \left(\frac{\beta_2^B}{\beta_0}- \frac{\beta_1^2}{\beta_0^2}+\beta_0^2 C^2+\beta_1 C\right) {\hat a}_{B}^3 \nonumber\\
& & +\left[\frac{\beta_3^B}{2 \beta _0}-\frac{\beta_1^3}{2 \beta _0^3}+\left(3 \beta_2^B-\frac{2 \beta _1^2}{\beta _0}\right) C+\frac{5}{2}\beta_0 \beta_1 C^2 \right. \nonumber\\
&& +\beta_0^3 C^3 \Big] {\hat a}_{B}^4 + {\cal O}({\hat a}_{B}^5),
\label{eq:Expandhata}
\end{eqnarray}
where by choosing a suitable $C$, the coupling ${\hat a}_{B}$ can be equivalent to $a_B$ defined for any scheme at the same scale, i.e. $a_B={\hat a}_{B}|_{C}$. By using the $C$-scheme coupling, the relation (\ref{eq:aAaB}) becomes
\begin{eqnarray}
a_A&=&r_1 {\hat a}_{B} + (r_2 + \beta_0 r_1 C) {\hat a}_{B}^{2}+\bigg[r_3 + \bigg(\beta_1 r_1+2\beta_0 r_2\bigg)C\nonumber\\
&&+\beta_0^2 r_1 C^2+ r_1\bigg(\frac{\beta_2^B}{\beta_0}-\frac{\beta_1^2}{\beta_0^2}\bigg)\bigg] {\hat a}_{B}^{3} \nonumber\\
&&+\bigg[r_4 + \bigg(3\beta_0 r_3+2\beta_1 r_2+3 \beta_2^B r_1-\frac{2\beta_1^2 r_1}{\beta_0}\bigg)C \nonumber\\
&&+\bigg(3\beta_0^2 r_2+\frac{5}{2} \beta_1 \beta_0 r_1\bigg)C^2+r_1\beta_0^3 C^3 \nonumber \\
&&+r_1\bigg(\frac{\beta_3^B}{2\beta_0}-\frac{\beta_1^3}{2\beta_0^3}\bigg)
+r_2\bigg(\frac{2\beta_2^B}{\beta_0}-\frac{2\beta_1^2}{\beta_0^2}\bigg)\bigg] {\hat a}_{B}^{4}\nonumber \\
&&+\mathcal{O}({\hat a}_{B}^5),
\end{eqnarray}
where the coefficients $r_i$ are
\begin{eqnarray}
r_1 &=& r^{\rm AB}_{1,0}, \label{r1-conf-beta}\\
r_2 &=& r^{\rm AB}_{2,0}+\beta_{0}r^{\rm AB}_{2,1}, \label{r2-conf-beta}\\
r_3 &=& r^{\rm AB}_{3,0}+\beta_{1}r^{\rm AB}_{2,1}+ 2\beta_{0}r^{\rm AB}_{3,1}+ \beta_{0}^{2}r^{\rm AB}_{3,2}, \label{r3-conf-beta}\\
r_4 &=& r^{\rm AB}_{4,0}+\beta^{B}_{2}r^{\rm AB}_{2,1}+ 2\beta_{1}r^{\rm AB}_{3,1} + \frac{5}{2}\beta_{1}\beta_{0}r^{\rm AB}_{3,2} \nonumber\\
&&+3\beta_{0}r^{\rm AB}_{4,1}+3\beta_{0}^{2}r^{\rm AB}_{4,2}+\beta_{0}^{3}r^{\rm AB}_{4,3}. \label{r4-conf-beta}
\end{eqnarray}
Following the standard PMC single-scale approach, we obtain the following CSR
\begin{equation}
a_A(Q)=\sum_{i=1}^n r^{\rm AB}_{i,0}{\hat a}_{B}^i(Q_{**}), \label{ABCSR}
\end{equation}
where the effective PMC scale $Q_{**}$ is obtained by requiring all nonconformal terms to vanish, and can be expanded as a power series over ${\hat a}_{B}(Q_{**})$, i.e.,
\begin{eqnarray}
\ln\frac{Q_{**}^2}{Q^2} = \sum^{n-2}_{i=0} \hat{S}_i {\hat a}_{B}^i(Q_{**}),
\label{eq:C-PMC-scale}
\end{eqnarray}
whose first three coefficients are
\begin{eqnarray}
\hat{S}_0 &=& -\frac{r^{\rm AB}_{2,1}}{r^{\rm AB}_{1,0}}-C, \label{S0}\\
\hat{S}_1 &=& \frac{2\left(r^{\rm AB}_{2,0}r^{\rm AB}_{2,1}-r^{\rm AB}_{1,0}r^{\rm AB}_{3,1}\right)}{(r^{\rm AB}_{1,0})^2}
+\frac{(r^{\rm AB}_{2,1})^2-r^{\rm AB}_{1,0} r^{\rm AB}_{3,2}}{(r^{\rm AB}_{1,0})^2}\beta_0 \nonumber\\&&+\frac{\beta_1^2}{\beta_0^3}-\frac{\beta^{B}_2}{\beta_0^2}, \label{S1}
\end{eqnarray}
and
\begin{eqnarray}
\hat{S}_2 &=&\frac{3r^{\rm AB}_{1,0}r^{\rm AB}_{2,1}r^{\rm AB}_{3,2} -(r^{\rm AB}_{1,0})^2r^{\rm AB}_{4,3} -2(r^{\rm AB}_{2,1})^3}{(r^{\rm AB}_{1,0})^3}\beta_0^2 + \nonumber\\
&& \hspace{-0.8cm}\frac{3r^{\rm AB}_{1,0}\left(2r^{\rm AB}_{2,1} r^{\rm AB}_{3,1} -r^{\rm AB}_{1,0} r^{\rm AB}_{4,2}\right)+ r^{\rm AB}_{2,0}\left[2 r^{\rm AB}_{1,0} r^{\rm AB}_{3,2}-5 (r^{\rm AB}_{2,1})^2\right]}{(r^{\rm AB}_{1,0})^3}\beta_0 \nonumber\\
&&\hspace{-0.8cm}+\frac{3r^{\rm AB}_{1,0}\left(r^{\rm AB}_{3,0}r^{\rm AB}_{2,1} -r^{\rm AB}_{1,0}r^{\rm AB}_{4,1}\right)+4r^{\rm AB}_{2,0} \left(r^{\rm AB}_{1,0}r^{\rm AB}_{3,1}-r^{\rm AB}_{2,0} r^{\rm AB}_{2,1}\right)}{(r^{\rm AB}_{1,0})^3} \nonumber\\
&&\hspace{-0.8cm}+ \frac{3\left[(r^{\rm AB}_{2,1})^2-r^{\rm AB}_{1,0}r^{\rm AB}_{3,2}\right]}{2(r^{\rm AB}_{1,0})^2}\beta_1- \frac{\beta_1^3}{2\beta_0^4}+\frac{\beta^{B}_2 \beta_1}{\beta_0^3}-\frac{\beta^{B}_3}{2\beta_0^2}.\label{S2}
\end{eqnarray}
One may observe that only the LL coefficient $\hat{S}_{0}$ depends on the scheme parameter $C$, and all the higher order coefficients $\hat{S}_{i}$ ($i\geq1$) are free of $C$.
Moreover, the $C$-scheme coupling ${\hat a}_{B}(Q_*)$ satisfies the following relation~\cite{Wu:2018cmb},
\begin{equation}
\frac{1}{{\hat a}_{B}(Q_{**})}+\frac{\beta_1}{\beta_0} \ln {\hat a}_{B}(Q_{**}) = \beta_0\left(\ln\frac{Q_{**}^2}{\Lambda^2}+C\right).
\end{equation}
Substituting Eq.~(\ref{eq:C-PMC-scale}) into the right hand side of the equation, we obtain
\begin{eqnarray}
&& \frac{1}{{\hat a}_{B}(Q_{**})}+\frac{\beta_1}{\beta_0} \ln {\hat a}_{B}(Q_{**}) \nonumber\\
&=& \beta_0 \bigg[ \ln\frac{Q^2}{\Lambda^2}-\frac{r^{\rm AB}_{2,1}}{r^{\rm AB}_{1,0}}+\sum^{n-2}_{i= 1} \hat{S}_i {\hat a}_{B}^i(Q_{**}) \bigg]. \label{Csas2}
\end{eqnarray}
By using this equation, we can derive a solution for ${\hat a}_{B}(Q_{**})$. Because all the coefficients in Eq.~(\ref{Csas2}) are free of $C$ at any fixed order, the magnitude of ${\hat a}_{B}(Q_{**})$ shall be exactly free of $C$. Together with the scheme-independent conformal coefficients and the fact that the value of $C$ can be chosen to match any renormalization scheme, we can conclude that the CSR (\ref{ABCSR}) is exactly scheme independent. This demonstration can be extended to all orders.
\section{Summary}
The PMC provides a systematic approach to determine an effective $\alpha_s$ for a fixed-order pQCD approximant. By using the PMC single-scale approach, the determined effective $\alpha_s$ is scale-invariant, i.e., free of any choice of renormalization scale. Because all nonconformal terms have been eliminated, the resultant pQCD series is scheme independent, satisfying the requirements of RGI. Furthermore, by applying the PMC single-scale approach, we obtain a scheme-independent GCR, $D(a_s)|_{\rm PMC}C^{\rm GLS}(a_s)|_{\rm PMC} \approx 1$, which provides a significant connection between the Adler function and the GLS sum rules. We have shown that their corresponding effective couplings satisfy a scheme-independent CSR, $a_D(Q)=\sum_{i=1}^{n} a_{F_3}^i(Q_*)$. Furthermore, we obtain a CSR that relates the effective charge $a_{F_3}$ to the effective charge of the $R$-ratio, $a_{F_3}(Q)=\sum_{i=1}^{4} r_{i,0} a^{i}_R(\bar{Q}_*)$. This leads to $a_{F_3}(Q=12.58^{+1.48}_{-1.26}~\rm GeV) = 0.073^{+0.025}_{-0.026}$, which agrees with the extrapolated measured value within errors. A demonstration of the scheme independence of the CSR has been presented. The scheme- and scale-independent CSRs shall provide important tests of pQCD theory.
\hspace{1cm}
{\bf Acknowledgements:} This work is partly supported by the Chongqing Graduate Research and Innovation Foundation under Grant No.~ydstd1912 and No.~CYB21045, the National Natural Science Foundation of China under Grant No.~11625520 and No.~12047564, and the Fundamental Research Funds for the Central Universities under Grant No.~2020CQJQY-Z003.
\section{Preliminary}
We review the basic relevant concepts of CoL, and some basic notational conventions. A reader may want to consult \cite{JapCL1} for further details.
CoL understands interactive computational problems as games between two players: the {\em machine} and the {\em environment}. The symbolic names for these two players are
$\twg$ and
$\tlg$,
respectively.
A
{\bf move} means a finite string over the keyboard alphabet.
A {\bf labmove} is a move prefixed with $\top$ or $\bot$. A {\bf run} is a (finite or infinite) sequence of labmoves, and a
{\bf position} is a finite run.
Runs will be delimited by ``$\langle$'' and ``$\rangle$''.
$\langle\rangle$ denotes the {\bf empty run}.
The following is a brief definition of the concept of a constant game.
\begin{definition}\label{game}
A {\bf constant game} is a pair $A=(\legal{A}{},\win{A}{})$, where:\vspace{10pt}
1. $\legal{A}{}$ is a set of runs satisfying the condition that a finite or infinite run is in $\legal{A}{}$ iff all of its nonempty finite --- not necessarily proper --- initial
segments are in $\legal{A}{}$ (notice that this implies $\langle\rangle\in\legal{A}{}$). The elements of $\legal{A}{}$ are
said to be {\bf legal runs} of $A$. \vspace{5pt}
2. $\win{A}{}$ is a function that sends every run $\Gamma$ to one of the players $\top$ or $\bot$.
\end{definition}
Unfortunately, the above definition is not sufficient
to represent games equipped with some kind of
heuristics. AlphaGo is such an example.
For this reason, we introduce a new game which we call
{\it a constant game with heuristics}, denoted by
$\watom{A}{h}$ where $h$ is a heuristic function.
For example, AlphaGo can be represented by $\watom{Go}{h}$ where
$h$ represents the powerful heuristics of AlphaGo.
\begin{definition}\label{game}
A {\bf constant game with heuristics} is a triple $\watom{A}{h} =(\legal{A}{},\win{A}{},h^A)$, where:\vspace{10pt}
$h^A$ is a heuristic function for the machine to follow, i.e., the machine's strategy for the game $A$.
$h^A$ typically depends on the run of the game.
\end{definition}
Often we need to preprogram the environment's strategy as well.
For this reason, we introduce a new game which we call
{\it a constant game with environment's strategy }, denoted by
$\watom{A}{\nheu{s}}$ where $\nheu{s}$ describes the environment's strategy for the game.
\section{Examples}\label{sec:modules}
As an example of a multi-agent system, we will look at Starbucks. This example introduces
several interesting concepts, such as how services {\em flow} among agents.
It is formulated with God, the Folger coffee maker (coffee provider), the Starbucks owner, a user and the bank (dollar provider).
We assume the following:
\begin{itemize}
\item
In our example, God
provides the coffee-making manual to the Folger and collects \$10. It also provides the dollar-making manual
to the bank and collects ten coffees.
\item God is not actually implemented. Instead, the Folger and the bank play the role of God whenever necessary.
\item The store owner plays the roles of barista and cashier.
\item The owner tries to borrow \$8 from the bank and pay 8 coffees to it. He also tries
to pay \$10 and get ten coffees from the Folger.
\item Each coffee costs a dollar.
\item The user tries to get two coffees by paying two dollars to the owner.
He also tries to get two dollars by paying two coffees to the bank.
\item The user is active from the beginning.
\end{itemize}
Now we want to implement the above.
The first task is to determine the representation of a coffee.
A coffee is represented by an (imaginary or real, depending on your
need) coffee machine.
We assume that the owner has a coffee manual/heuristic which provides the 'rules of thumb' to make a good coffee.
A coffee machine -- similar to an ATM machine --
can be seen as a game between the owner with a manual and
its customer with a sequence of interactions.
Assume we have a particular coffee machine with an LCD monitor where
(1) the user of the machine selects $x\ (= 1,2,\ldots)$ grams of sugar, and then
$y\ (= 1,2,\ldots)\times 10$ cc of milk, and
(2) the owner selects $z\ (= 1,2,\ldots,10)$ spoons of coffee.
For simplicity, we assume that the owner uses the following heuristic evaluation function
\[ h(x,y,z) = | z - xy -1 |. \]
\noindent
In other words, if the owner selects $z$ such that $z = xy+1$, then
he/she knows that a good coffee has been made.
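To make the owner's use of this heuristic concrete, the following is a minimal Python sketch; the function names {\tt h} and {\tt owner\_move} are illustrative and not part of the CoL formalism.
\begin{verbatim}
# Owner's heuristic: 0 means a "good" coffee for x grams of sugar
# and y*10 cc of milk with z spoons of coffee.
def h(x, y, z):
    return abs(z - x * y - 1)

# The owner's strategy: pick the z in 1..10 that minimizes h(x, y, z).
def owner_move(x, y):
    return min(range(1, 11), key=lambda z: h(x, y, z))

# Example: the user's script c0 asks for 3 grams of sugar and 10 cc of milk.
print(owner_move(3, 1))   # -> 4, since z = x*y + 1
\end{verbatim}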
The simplest way of representing this device is to represent it as a general
atom $\watom{C}{h}$ with the above heuristic $h$\footnote{A coffee machine can be represented without
using general atoms, but it is cumbersome.}.
Similarly, the consumer's coffee preferences can be programmed in the user's scripts.
The following illustrates the user scripts $\nheu{c_0},\nheu{c_1}$ used in the coffee-making example below. \\
$\nheu{c_0} = \{\langle \oo 3,\oo 1\rangle \}$. \% 3 grams of sugar, 10cc of milk\\
$\nheu{c_1} = \{\langle \oo 4,\oo 2\rangle \}$. \% 4 grams of sugar, 20cc of milk\\
As in the case of coffee, the same approach can be employed to
represent a dollar, i.e., as a credit-card paying machine or a POS machine.
A credit-card paying machine can be seen as an interactive
constant game. To make things simple, we assume the bank is a provider for one dollar and $r$ is
a manual for making a dollar.
\newenvironment{exmple}{
\begingroup \begin{tabbing} \hspace{2em}\= \hspace{3em}\= \hspace{3em}\=
\hspace{3em}\= \hspace{3em}\= \hspace{3em}\= \kill}{
\end{tabbing}\endgroup}
\newenvironment{example2}{
\begingroup \begin{tabbing} \hspace{8em}\= \hspace{2em}\= \hspace{2em}\=
\hspace{10em}\= \hspace{2em}\= \hspace{2em}\= \hspace{2em}\= \kill}{
\end{tabbing}\endgroup}
An example is provided by the following $*C,o,u,*1$ agents.
In $\watom{1}{\nheu{d_0}}$ of the $*C$ agent, $\nheu{d_0}$ describes God's preprogrammed
requirements in making the first dollar.
Similarly, in $\watom{C}{\nheu{c_0}}$ of the bank agent, $\nheu{c_0}$ describes God's preprogrammed
requests in making the first
coffee.
Now consider $\watom{C}{h}$ in *C. Here $h$ is a heuristic function for making a coffee.
That is, $h$ is a coffee-making manual.
\begin{exmple}
\> $agent\ *C$. \% Folger coffee provider \\
\> $\nheu{d_0} = \ldots$ \% God's requirements in the first dollar \\
\> \vdots \\
\> $\nheu{d_9} = \ldots$ \% God's requirements in the tenth dollar\\
\> $h(x,y,z) = \ldots$ \% coffee-making manual \\
\> $ (( \watom{1}{\nheu{d_0}} \mlc \ldots \mlc \watom{1}{\nheu{d_9}}) \mli \watom{C}{h})^{God}$. \% the coffee manual
costs ten (customized) dollars.\\
\end{exmple}
\begin{exmple}
\> $agent\ o $. \% starbucks owner \\
\> $((C \mlc \ldots \mlc C) \mli (1 \mlc \ldots \mlc 1))^k$. \% pay 8 coffees and get \$8 from bank.\\
\> $( (1 \mlc \ldots \mlc 1) \mli (C \mlc \ldots \mlc C))^f$. \% pay \$10 and get 10 coffees from Folger.\\
\end{exmple}
\begin{exmple}
\> $agent\ u$. \% the client\\
\> $((C \mlc C) \mli (1 \mlc 1))^k$. \% pay 2 coffees and get \$2 from bank.\\
\> $((1 \mlc 1) \mli (C \mlc C))^o.$ \% pay two dollars and get two coffees from owner. \\
\end{exmple}
\begin{exmple}
\> $agent\ *1$. \% the bank\\
\> $\nheu{c_0} = \ldots$ \% God's requirements in the first coffee \\
\> \vdots \\
\> $\nheu{c_9} = \ldots$ \% God's requirements in the tenth coffee\\
\> $h(x,y,z) = \ldots$ \% coffee-making manual \\
\> $r(\ldots) = \ldots$ \% dollar-making manual \\
\> $((\watom{C}{\nheu{c_0}} \mlc \ldots \mlc \watom{C}{\nheu{c_9}}) \mli \watom{1}{r})^{God}$. \% dollar-making manual
costs 10 ( customized) coffees.\\
\end{exmple}
Now consider the user agent $u$. The user is active from the beginning and tries to do the following:
(1) obtain two coffees from the owner and pass them along to the bank, and (2) obtain two dollars from the bank and then
pass them along to the
owner. Task (2) easily succeeds, as $u$ makes two dollars by copying the moves of the bank
(the bank makes moves according to the recipe $r$).
From this, the agent
$u$ successfully pays the owner $o$ two dollars. The owner $o$ pays \$10 to $*C$ (\$2 from $u$, \$8 from *1)
all using the copy-cat method.
Upon request, $*C$ makes ten (real or imaginary) coffees using the
``coffee manual'' $h$. $o$ makes ten coffees by copying $*C$.
Note that the user can make two coffees and $*1$ can make 8 coffees both by copying $o$.
\section{Introduction}\label{sintr}
The design and implementation of multi-agent systems
is recognized as a key component of general AI. Yet it remains the case that existing approaches -- classical logic,
$\pi$-calculus, linear logic, etc.\ -- are too simplistic to
encode real-world multi-agent systems. Implementing the Starbucks scenario
in AI is such an example.
{\em Computability logic} (CoL) \cite{Jap0}-\cite{JapCL12} is an
elegant theory of (multi-)agent computability. In CoL, computational problems are seen as games between a machine and its environment and logical operators stand for operations on games.
It understands interaction among agents in its most general --- game-based --- sense.
There are many fragments of CoL.
To represent resources such as coffee, we choose \mbox{\bf CL2} -- a basic fragment of CoL -- as our target language. \mbox{\bf CL2}\ is obtained by adding to \mbox{\bf CL1}\ a second kind of atoms called
{\it general atoms}. A general atom
models an arbitrary interactive computing problem such as a coffee
machine.
In this paper, we discuss a web-based implementation of multi-agent programming based on \mbox{\bf CL2} \cite{JapCL2}.
We assume the following in our model:
\begin{itemize}
\item Each agent corresponds to a web site with a URL. An agent's resourcebase (RB) is described in its homepage.
\item There are three kinds of agents: God, resource providers/consumers and regular agents.
A resource provider for a resource $R$, written as $*R$, is an agent who is given a resource manual by God.
It can thus produce as many copies of resource $R$ using the manual.
A resource consumer is an agent who gives the resource to God.
\item God is both the ultimate provider for every
resource and its ultimate consumer.
\item Our goal here is to program every agent including resource providers/consumers. For this, we assume that
a resource provider has a machine's manual/heuristic $h$ for creating a resource $R$.
Similarly, the counterstrategy of a resource consumer
-- the consumer's script -- is preprogrammed in the environment's strategy $\nheu{s}$. Note that, unlike the machine's
strategy, a consumer's counterstrategy varies from one resource consumer to another.
To represent these manuals/scripts, we extend the general atom $P$ to $\watom{P}{h}$ for resource providers,
and to $\watom{P}{\nheu{s}}$ for resource consumers.
\end{itemize}
In this paper, we present \mbox{\bf CL2$^{\Psi}$}\
which is a web-based implementation of \mbox{\bf CL2}. This implementation is rather simple and
straightforward.
What is interesting is that \mbox{\bf CL2$^{\Psi}$}\ is an appealing multi-agent
programming model where resources are involved.
\section{\mbox{\bf CL2$^{\Psi}$}}\label{s2tb}
We review the propositional computability
logic called $\mbox{\bf CL2}$ \cite{JapCL1}.
As always, there are infinitely many {\bf elementary atoms} in the language, for which we will be using the letters
$p,q,r,\ldots$ as metavariables. There are also infinitely many
{\bf general atoms} in the language, for which we will be using the letters $P,Q,R,\ldots$.
We introduce {\bf general atoms with machine's strategy/heuristics}, denoted by $\watom{P}{h},\ldots$ and
{\bf general atoms with environment's strategy}, denoted by $\watom{P}{\nheu{s}}, \ldots$.
The two atoms: $\twg$ and $\tlg$ have a special status in that their interpretation is fixed. Formulas of this language, referred to as {\bf $\mbox{\bf CL2}$-formulas}, are built from atoms in the standard way:
\begin{definition}
The class of $\mbox{\bf CL2}$-formulas
is defined as the smallest set of expressions such that all atoms are in it and, if $F$ and $G$ are in it, then so are
$\gneg F$, $F\mlc G$, $F \mld G$, $F \mli G$, $F\adc G$, $F \add G$.
\end{definition}
Now we define $\mbox{\bf CL2$^{\Psi}$}$, a slight extension to $\mbox{\bf CL2}$ with environment parameters.
Let $F$ be a $\mbox{\bf CL2}$-formula.
We introduce a new {\it env-annotated} formula $F^\omega$ which reads as `play $F$ against an agent $\omega$'.
For an $\adc$-occurrence $O$ (or an occurrence $O$ of a general atom) in $F^\omega$, we say
$\omega$ is the {\it matching} environment of $O$.
For example, $(p \adc (q \adc r))^{w}$ is an agent-annotated formula and
$w$ is the matching environment of both occurrences of $\adc$.
We extend this definition to subformulas and formulas. For a subformula $F'$ of the above $F^\omega$,
we say that $\omega$ is the {\it matching} environment of both $F'$ and $F$.
In introducing environments to a formula $F$, one issue is
whether we allow `env-switching' formulas of the form
$(F[R^u])^w$. Here $F[R]$ represents a formula with some occurrence of a subformula $R$.
That is, the machine initially plays $F$ against agent $w$ and then switches
to play against another agent $u$ in the course of playing $F$.
For technical reasons, we focus on non `env-switching' formulas.
This leads to the following definition where $h$ is a heuristic function:
\begin{definition}
The class of $\mbox{\bf CL2$^{\Psi}$}$-formulas
is defined as the smallest set of expressions such that
(a) For any $\mbox{\bf CL2}$-formula $F$ and any agent $\omega$, $F^\omega$ is in it, and (b) if $H$ and $J$ are in it, then so are
$\gneg H$, $H\mlc J$, $H\mld J$, $H\mli J$.
\end{definition}
\begin{definition}
\noindent Given a $\mbox{\bf CL2$^{\Psi}$}$-formula $J$, the skeleton of $J$ -- denoted by
$skeleton(J)$ -- is obtained by
replacing every occurrence $F^\omega$ by $F$.
\end{definition}
\noindent For example, $skeleton((p \adc (q \adc r))^{w}) = p \adc (q \adc r)$.
We often use $F$ instead of $F^{\omega}$ when the annotation is irrelevant.
The following definitions come from \cite{JapCL2}. They apply both
to $\mbox{\bf CL2}$ and $\mbox{\bf CL2$^{\Psi}$}$.
Understanding $E\mli F$ as an abbreviation of $\neg E \mld F$, a {\bf positive} occurrence of a subformula is one that is in the scope of an even number of $\neg$'s. Otherwise, the occurrence is {\bf negative}.
A {\bf surface occurrence} of a subformula means an occurrence that is not in the scope of a choice ($\add$ or $\adc$) operator.
A formula is {\bf elementary} iff it does not contain the choice operators and general atoms.
The {\bf elementarization} of a formula is the result of replacing, in it, every surface occurrence of the form $F_1\add ... \add F_n$ by $\oo$, every surface occurrence of the form $F_1\adc ... \adc F_n$ by $\pp$,
every positive surface occurrence of each general atom by $\oo$, and
every negative
surface occurrence of each general atom by $\pp$.
A formula is {\bf stable} iff its elementarization is valid in classical logic, otherwise
it is {\bf instable}.
$F${\bf -specification} of $O$, where $F$ is a formula and $O$ is a surface occurrence in $F$, is a string $\alpha$ which can be defined by:
\begin{itemize}
\item $F$-specification of the occurrence in itself is the empty string.
\item If $F$ = $\neg G$, then $F$-specification of an occurrence that happens to be in $G$ is the same as the $G$-specification of that occurrence.
\item If $F$ is $G_1\mlc ... \mlc G_n$, $G_1\mld ... \mld G_n$, or $G_1\mli G_2$, then $F$-specification of an occurrence that happens to be in $G_i$ is the string $i.\alpha$, where $\alpha$ is the $G_i$-specification of that occurrence.
\end{itemize}
The proof system of \mbox{\bf CL2$^{\Psi}$}\ is identical to that of $\mbox{\bf CL2}$ and
has the following three rules, with $H$, $F$ standing for $\mbox{\bf CL2$^{\Psi}$}$-formulas and $\vec H$ for a set
of $\mbox{\bf CL2$^{\Psi}$}$-formulas: \\
Rule (A): ${\vec H}\vdash F$, where $F$ is stable and, whenever $F$ has a positive (resp. negative) surface occurrence of $G_1\adc ... \adc G_n$ (resp. $G_1\add ... \add G_n$) whose matching environment is $\omega$, for each i$\in \{1,...,n\}$, $\vec H$ contains the result of replacing in $F$ that occurrence by $G_i^\omega$.
Rule (B): $H\vdash F$, where $H$ is the result of replacing in $F$ a negative (resp. positive) surface occurrence of $G_1\adc ... \adc G_n$ (resp. $G_1\add ... \add G_n$) whose matching environment is $\omega$ by $G_i^\omega$ for some i$\in \{1,...,n\}$.
Rule (C): $F'\vdash F$, where $F'$ is the result of replacing in $F$
one negative surface occurrence of
some general atom $P$ and one positive surface occurrence of
some general atom $P$ by a nonlogical elementary atom that does not occur
in $F$.
\begin{examplee}\label{ex01}
$\mbox{\bf CL2$^{\Psi}$} \vdash (C \mlc C)\mli (C\mld C)^\omega$
where $\omega$ is an agent.
Note that $\omega$ plays no role in the proof procedure. Similarly, the machine's manual and the environment's script
play no role in the proof procedure.
\end{examplee}
\begin{enumerate}
\item $(p\mlc q)\mli (p\mld q)^\omega$, rule A, 0
\item $(p\mlc C) \mli (p\mld C)^\omega$, rule C, 1
\item $(C \mlc C)\mli (C\mld C)^\omega$, rule C, 2
\end{enumerate}
\section{Hyperformulas}\label{s3hyper}
To facilitate the execution procedure, we modify \mbox{\bf CL2$^{\Psi}$}\ to obtain
\mbox{\bf CL2$^{o,\Psi}$}.
Unlike \mbox{\bf CL2$^{\Psi}$}, this new language allows arbitrary hyperformulas.
Its rules
are Rules (A) and (B) of \mbox{\bf CL2$^{\Psi}$}\ plus the following Rule (C$^o$)
instead of the old Rule (C): \\
Rule (C$^o$): $F'\vdash F$, where $F'$ is the result of replacing in $F$ one negative surface occurrence of
some general atom $P$ and
one positive surface occurrence of
some general atom $P$ by a hybrid atom $P_q$.
In the above, we introduced hybrid atoms. Each hybrid atom is a pair consisting of a general atom $P$, called its general component, and a nonlogical elementary atom $q$, called its elementary component. Hybrid atoms were introduced
in \cite{JapCL2} to distinguish the elementary atoms introduced by Rule (C)
from all other elementary atoms.
Now atoms can be of one of the three (elementary, general or hybrid)
sorts.
All the terminology and definitions of the previous section extend to hyperformulas.
One exception is that in the elementarization
of a hyperformula, every surface occurrence of each hybrid atom must also be replaced
by the elementary component of that atom.
We can easily convert a $\mbox{\bf CL2$^{\Psi}$}$ proof to a modified one: if $q$ is obtained from $P$ by Rule (C), replace all
occurrences of $q$ by $P_q$. Apply this procedure to all of its descendants in the proof tree as well.
\begin{examplee}\label{ex01}
$\mbox{\bf CL2$^{o,\Psi}$} \vdash (C \mlc C)\mli (C\mld C)^\omega$
where $\omega$ is an agent.
Note that $\omega$ plays no role in the proof procedure.
\end{examplee}
\begin{enumerate}
\item $(C_p \mlc C_q)\mli (C_p\mld C_q)^\omega$, rule A, 0
\item $(C_p\mlc C) \mli (C_p\mld C)^\omega$, rule C$^o$, 1
\item $(C \mlc C )\mli (C\mld C)^\omega$, rule C$^o$, 2
\end{enumerate}
\section{Execution Phase}\label{s22tog}
The machine model of \mbox{\bf CL2}\ is designed to process only one query/formula at one time.
In distributed systems such as \mbox{\bf CL2$^{\Psi}$}, however, it is natural for an agent to receive/process
multiple queries. For this reason, our machine processes multiple formulas one by one.
Multiple queries cause some complications, as the RB of the machine evolves to RB' in the course of solving a query.
In such a case, subsequent queries must be solved with respect to RB'.
To be specific, it maintains a queue $Q = \langle Q_1,\ldots,Q_n \rangle$ for storing multiple incoming
queries. We assume that the machine processes $Q_1,\ldots,Q_n$ by executing the following $n$
procedures sequentially:
\[ Exec(RB_1\mli Q_1), Exec(RB_2\mli Q_2),\ldots, Exec(RB_n\mli Q_n) \]
\noindent
Here $RB_1$ is the original RB associated with the machine. We assume here that, for
$1\leq i\leq n$, $RB_i$ evolves
to $RB_{i+1}$ after solving $Q_i$.
This leads to the following definition: \\\\
{\bf procedure} EXEC$(K,Q)$: $K$ is the RB of the agent and $Q$ is a queue of incoming queries.\\
\begin{itemize}
\item If $K = \Gamma$ and $Q = (Q_1,\ldots,Q_n)$ is nonempty, then we do the following:
the machine tries to solve the first
query by invoking $Exec(\Gamma\mli Q_1)$ and then calls EXEC$(\Gamma', (Q_2,\ldots,Q_n))$, where $\Gamma'$ is the RB obtained after solving $Q_1$.
\item Else ($Q$ is empty): wait for new incoming service calls.
\end{itemize}
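The sequential processing described above can be summarized by the following minimal Python sketch. It is only illustrative: {\tt exec\_query} is a hypothetical placeholder for the $Exec(RB_i\mli Q_i)$ step and is assumed to return the evolved resourcebase.
\begin{verbatim}
from collections import deque

def EXEC(rb, queries):
    queue = deque(queries)
    while queue:
        q = queue.popleft()
        # Solve the first pending query against the current resourcebase;
        # the resourcebase evolves (RB_i -> RB_{i+1}) while Q_i is solved.
        rb = exec_query(rb, q)
    # Queue empty: wait for new incoming service calls (omitted here).
    return rb
\end{verbatim}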
Below we will introduce an algorithm that executes
a formula $J$. The algorithm is a minor variant
of the one in \cite{JapCL2} and contains two stages: \\
{\bf Algorithm Exec(J)}: \% $J$ is a $\mbox{\bf CL2$^{\Psi}$}$-formula \\
\begin{enumerate}
\item The first stage is to initialize a temporary variable $E$ to $J$ and
a position variable $\Omega$ to the empty position $\langle\rangle$, and to
activate all the agents specified in $J$.
\item The second stage is to play $J$ according to the following $mainloop$ procedure
(which is from \cite{JapCL2}):
\end{enumerate}
procedure $mainloop(Tree)$: \% $Tree$ is a proof tree of $J$ \\
{\bf Case} $E$ is derived by Rule (B): \\
\hspace{3em} Let $H$ be the premise of $E$ in the proof. $H$ is the result of substituting, in $E$, a certain negative (resp. positive) surface occurrence of a subformula $G_1\adc\ldots\adc G_n$ (resp. $G_1\add\ldots\add G_n$) by $G^\omega_i$ for some $i\in\{1,\ldots,n\}$. Here, we assume that
$\omega$ is the matching environment of that occurrence.
Let $\gamma$ be the $E$-specification of that occurrence. Then make the move $\gamma i$, update $E$ to $H$, and inform $\omega$ of the move $\gamma i$;
repeat $mainloop$.
{\bf Case} $E$ is derived by Rule (C$^o$): \\
\hspace{3em}
Let $H$ be the premise of $E$ in the proof. $H$ is the result of
replacing in $E$ some positive surface occurrence $\pi$ and some
negative surface occurrence $\nu$ of a general atom $P $ by a hybrid atom
$P_q $. Let $\langle\oo\pi_1,\ldots,\oo\pi_n\rangle$ and $\langle\oo\nu_1,\ldots,\oo\nu_m\rangle$
be $\Omega^\pi$ and $\Omega^\nu$, respectively\footnote{$\Omega^\pi$ and $\Omega^\nu$ may be programmed in
$h$ in $\watom{P}{h}$.}.
Here $\Omega^\pi$ is the subrun of the occurrence $\pi$ and $\Omega^\nu$ is the subrun of the occurrence $\nu$ of the hybrid atom introduced.
Then:
make the $m+n$ moves $\pi\nu_1,\ldots,\pi\nu_m,\nu\pi_1,\ldots,\nu\pi_n$
(in this order); update $\Omega$ to $\langle\Omega,\pp\pi\nu_1,\ldots,
\pp\pi\nu_m,\pp\nu\pi_1,\ldots,\pp\nu\pi_n\rangle$.
Update $E$ to $H$; repeat $mainloop$.
{\bf Case} $E$ is derived by Rule (A): \\
Follow the procedure innerloop described below.
Below, ``the environment makes a move'' means that
either the environment makes a move or $\pp$ makes a
move for the environment using a given heuristic function.\\
$innerloop$: Keep granting permission until the environment
makes a move $\alpha$. \\
Subcase (i): $\alpha=\gamma\beta$, where $\gamma$
$E$-specifies a surface occurrence of a general atom. Then update $\Omega$
to $\langle\Omega,\oo\gamma\beta\rangle$
and repeat $innerloop$. \\
Subcase (ii): $\alpha=\gamma\beta$, where $\gamma$ $E$-specifies a surface occurrence of a hybrid atom. Let $\sigma$ be the $E$-
specification of the other occurrence of the same hybrid atom.
Then make the move $\sigma\beta$, update $\Omega$ to $\langle\Omega,\oo\gamma\beta,\pp\sigma\beta\rangle$ and repeat $innerloop$. \\
Subcase (iii): $\alpha = \gamma i$, where $\gamma$ $E$-specifies a positive (negative) surface occurrence of a
subformula $G_1\adc\ldots\adc G_n$ ($G_1\add\ldots\add G_n$) and $i\in\{1,\ldots,n\}$.
Let $H$ be the result of substituting, in $E$, that occurrence by $G^\omega_i$, where $\omega$ is the matching environment of that occurrence.
Then update $E$ to $H$, and repeat $mainloop$. \\
If $\alpha$ does not satisfy the conditions of any of the above Subcases (i),(ii),(iii), ignore it.
\input exam
\section{Conclusion} \label{s5thr}
In this paper, we proposed a multi-agent programming model based on $\mbox{\bf CL2$^{\Psi}$}$.
Unlike other formalisms such as LogicWeb\cite{Loke} and distributed logic programming\cite{LCF},
this model does not require any centralized control.
Our next goal is to replace $\mbox{\bf CL2$^{\Psi}$}$ with the much more expressive $\mbox{\bf CL12}$~\cite{Japtow}.
\bibliographystyle{plain}
\section{Introduction}
\label{intro}
Heavy-flavor hadrons, containing open or hidden charm and beauty flavors are believed to be important probes for the understanding of Quantum Chromodynamics (QCD) in high-energy hadronic collisions: starting from the study of production mechanisms in proton-proton (pp) collisions to the investigation of Cold Nuclear Matter (CNM) effects in proton-nucleus (p--A) collisions and their suppression in the search of Quark Gluon Plasma (QGP) in nucleus-nucleus (A--A) collisions~\cite{Xu:2017hgt,Adare:2012yxa,Tang:2020ame}. In addition, the study of heavy-flavor production as a function of the charged-particle multiplicity may provide insights into multiple hard partonic scatterings~\cite{Abelev:2012rz,Acharya:2020pit,Adam:2015ota}. Recently, the observation of heavy-ion-like features in small systems (pp and p$-$A) continues to generate considerable interest in the scientific community. For example, the discovery of collective-like phenomena~\cite{Li:2011mp,Khachatryan:2010gv}, strangeness enhancement~\cite{ALICE:2017jyt} etc., and corresponding phenomenological studies~\cite{Deb:2020ezw,Mishra:2020epq} in high-multiplicity pp and p$-$A collisions are few among them. In this context, the observed QGP-like phenomena warrants a deeper understanding involving many complex dynamical processes like resonance decays, jets, underlying events (UE) etc. Therefore, small systems need to be re-investigated properly including the light and heavy-flavor sectors, as the production dynamics of these sectors are different in nature. To observe similar effects and in particular, the interplay of hard processes and UE, heavy-flavors are very useful tools. The study of heavy-flavor transverse momentum ($p_{\rm T}$) spectra is one of the main tools to disentangle collective effects from trivial correlations. The production dynamics of heavy-flavor can be better understood through a differential analysis involving tools to separate jetty and isotropic events~\cite{Cuautle:2015kra}. Phenomenological Monte Carlo generators such as PYTHIA8~\cite{Pythia} are widely used to compare expectations where different known physics mechanisms are taken into account. Multi-Parton Interactions (MPIs) together with Color Reconnection (CR) in PYTHIA8 could mimic collective-like expansion~\cite{Ortiz:2013yxa}. In this present paper, we have performed a multi-differential study of heavy-flavor production (J/$\psi$, $\rm D^{0}$ and $\Lambda_{c}^{+}$) as a function of charged-particle multiplicity and transverse spherocity using pQCD-inspired event generator, PYTHIA8~\cite{Pythia}.
As J/$\psi$ suppression is a potential probe of QGP \cite{Matsui:1986dk}, leading to enhancement of open charms like $\rm D^{0}$ and $\Lambda_{c}^{+}$, this
multi-differential study in pp collisions taking minimum-bias (MB) and high-multiplicity events will be of real help in understanding the underlying
production dynamics of heavy-flavors.
The simulation of pp collisions implemented in PYTHIA8 starts with the basic concept, p+p $\rightarrow$ n, defining ``n'' particles produced by the collision of two protons. If we consider a proton, then in first approximation one would get a distribution of quarks: u, u, and d. But in reality, a proton is an active soup of partons: quark-antiquark pairs appearing from the vacuum or gluons, which create the so-called sea of particles inside a proton. A larger fraction of low-$x$ gluons and sea quarks starts to appear at higher $Q^{2}$ (higher colliding energy), coming from parton splitting. Multi-partonic interaction (MPI) is the natural consequence of this composite nature of the proton~\cite{Sjostrand:2007gs}. MPIs are mainly responsible for the soft underlying events, but there is a finite probability of occurrence of several hard scattering processes in the same hadron-hadron collision~\cite{Sjostrand:2007gs,Butterworth:1996pi,Adam:2015ota}. The large number of MPIs in pp collisions generates a very high density of QCD matter. This concept is different from the traditional one, which has back-to-back jet production in pp collisions. Therefore, it is worth separating these MPI-dominant events (isotropic) from jet-rich (jetty) events and revisiting the pp results at the LHC. Surprisingly, recent PYTHIA8 event topology studies have shown that high-multiplicity pp collisions exhibit prominent features of heavy-ion collisions~\cite{Ortiz:2017jho,Acharya:2019mzb,Tripathy:2020jue}. This questions the interpretation of heavy-ion results, which relies heavily on the results obtained in minimum bias pp collisions. Run 1 and Run 2 of the LHC have provided a large number of measurements to explore pp physics, probing both the soft and hard QCD regimes, and thereby a few tests of possible QGP signatures in pp collisions. There are still some phenomena, like baryon-to-meson ratios, degree of collectivity etc., that are yet to be understood completely. It is now an ideal time to pursue these studies further using realistic Monte Carlo event generators and to open windows for such measurements for the future LHC Run 3. One such interesting puzzle is the enhanced baryon-to-meson ratio for heavy hadrons in the intermediate-$p_{\rm{T}}$ region in central heavy-ion collisions at midrapidity~\cite{Acharya:2017kfy,Adam:2019hpq,Oh:2009zj}. This enhancement is described by multi-quark dynamics through quark coalescence or recombination~\cite{Greco:2003mm,Oh:2009zj}. Minijet partons produced in relativistic heavy-ion collisions coalesce with partons from the QGP expected to be formed in such collisions, leading to an enhanced production of hadrons with intermediate transverse momentum~\cite{Greco:2003xt}. Therefore, the enhanced production of baryons over mesons is thought to be an indication of QGP formation in heavy-ion collisions. Recently, LHC experiments have also observed a similar signature for the charmed baryon-to-meson ratio in Pb-Pb and p-Pb collisions~\cite{Acharya:2018ckj,Aaij:2018iyy,Acharya:2017kfy}. Such a ratio has also been measured in pp collisions~\cite{Acharya:2020lrg}. Taking the opportunity to use transverse spherocity, a tool which separates jet-rich events from isotropic events (rich in MPI), we have analysed the charmed baryon-to-meson ratio in pp collisions using PYTHIA8 and looked for the existence of such an enhancement in the intermediate-$p_{\rm{T}}$ region.
Further, we have analysed its counterpart ratio in the light-flavor sector to understand the behaviour, and hence the production mechanism, when going from the light-flavor to the heavy-flavor sector.
The paper is organised as follows. We begin with a brief motivation for the study in Section~\ref{intro}. In Section~\ref{eventgen}, the detailed analysis methodology along with brief description about PYTHIA8 are given. Section~\ref{result} discusses about the results and finally they are summarized in Section~\ref{sum}. In the appendix, we have compared the production cross-sections of J/$\psi$ and $\rm D^{0}$ between experimental data from ALICE and PYTHIA8 in the same kinematic range to have a glimpse of the spectral shapes, which are essential for a $p_{\rm T}$-differential study.
\section{Event generation and Analysis methodology}
\label{eventgen}
PYTHIA8, a standalone Monte-Carlo event generator, is widely used to simulate ultra-relativistic collisions among the particles like electron-electron, electron-positron, proton-proton and proton-antiproton. It has been quite successful in explaining many of the experimental data from the LHC qualitatively with different incorporated physics processes.
PYTHIA8 includes a Multi-Parton Interaction (MPI) scenario, which allows heavy-flavor quarks to be produced through $2\rightarrow2$ hard
sub-processes. Detailed explanation on PYTHIA8 physics processes and their implementation can be found in Ref.~\cite{Pythia}.
The results reported in this paper are obtained from simulated inelastic, non-diffractive events using PYTHIA version 8.215~\cite{Sjostrand:2014zea} with the 4C tune (Tune:pp = 5)~\cite{Corke:2010yf}. Further, the non-diffractive component of the total cross section for all hard QCD processes (HardQCD:all = on) is considered, which includes the production of heavy quarks, along with the MPI-based scheme of color reconnection (ColourReconnection:reconnect = on). A cut of $p_{\rm T}\geq$ 0.5 GeV/$c$ (using PhaseSpace:pTHatMinDiverge) is used to avoid the divergences of QCD processes in the limit $p_{\rm{T}}$ $\rightarrow$ 0. For the production of quarkonia through the NRQCD framework~\cite{Shao:2012iz,Caswell:1985ui,Bodwin:1994jh}, we use the Charmonium:all flag in the simulation. The study of $\rm D^{0}$, J/$\psi$ and $\Lambda_{c}^{+}$ production is done at mid-rapidity. J/$\psi$, $\rm D^{0}$ and $\Lambda_{c}^{+}$ are reconstructed via the $e^{+}+e^{-}$ ($|y|<$ 0.9)~\cite{Lofnes}, $K^{-}+\pi^{+}$ ($|y|<$ 0.5)~\cite{Hamon:2018zqs} and $p+K^{-}+\pi^{+}$ ($|y|<$ 0.5)~\cite{Acharya:2017kfy} decay channels, respectively, and their yields are obtained through invariant mass reconstruction, keeping the detector acceptance of ALICE in mind. This analysis is performed by generating 100 million events for J/$\psi$ and approximately 50 million events each for $\rm D^{0}$ and $\Lambda_{c}^{+}$ in pp collisions at $\sqrt{s}$ = 13 TeV. The charged-particle multiplicity, $N_{\rm ch}$, is measured at mid-rapidity ($|\eta|<$ 1.0), considering all the charged particles including the decay products of J/$\psi$, $\rm D^{0}$ and $\Lambda_{c}^{+}$. As the aim of this work is to perform a multi-differential study using transverse spherocity and the charged-particle multiplicity ($N_{\rm ch}$), we have chosen minimum bias (0-100\%) collisions and events with the top $20\%$ of $N_{\rm ch}$ for our study.
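For orientation, the generator settings quoted above can be collected into a short steering sketch. This is only a minimal illustration, assuming the PYTHIA~8 Python bindings are available; the same readString commands apply to the C++ interface, and the full analysis chain (decay reconstruction, spherocity and multiplicity selections) is omitted.
\begin{verbatim}
import pythia8

pythia = pythia8.Pythia()
pythia.readString("Beams:eCM = 13000.")                # pp at sqrt(s) = 13 TeV
pythia.readString("Tune:pp = 5")                       # 4C tune
pythia.readString("HardQCD:all = on")                  # hard QCD processes
pythia.readString("Charmonium:all = on")               # NRQCD charmonium production
pythia.readString("ColourReconnection:reconnect = on") # MPI-based color reconnection
pythia.readString("PhaseSpace:pTHatMinDiverge = 0.5")  # avoid the pT -> 0 divergence
pythia.init()

for _ in range(1000):       # event loop (statistics reduced for illustration)
    if not pythia.next():
        continue
    # ... analyse pythia.event here ...
\end{verbatim}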
Transverse spherocity ($S_{0}$) is defined for a unit vector $\hat{n} (n_{T},0)$ that minimizes the following ratio~\cite{Cuautle:2015kra,Salam:2009jx}.
\begin{eqnarray}
S_{0} = \frac{\pi^{2}}{4} \bigg(\frac{\Sigma_{i}~|\vec p_{\rm T_{i}}\times\hat{n}|}{\Sigma_{i}~p_{\rm T_{i}}}\bigg)^{2}.
\label{eq1}
\end{eqnarray}
By construction, $S_{0}$ is infrared and collinear safe~\cite{Salam:2009jx} and ranges from 0 to 1. $S_{0}$ close to 0 corresponds to jetty events, while $S_{0}$ close to 1 means the events are isotropic in nature. As transverse spherocity distinguishes events based on different topological limits, i.e., events with back-to-back jet structures (jetty) versus events dominated by multiple soft scatterings (isotropic), in this analysis only the events with at least 5 charged particles in $|\eta|<$ 0.8 with $p_{\rm T}>$ 0.15 GeV/$c$ are considered, so that the concept of event topology becomes statistically meaningful. $S_{0}$ cuts are applied to the generated events in order to sort out jetty and isotropic events from the total events. For minimum bias collisions, the cut for jetty events is $0 \leq S_{0} < 0.37$, corresponding to the lowest 20\% of the $S_{0}$ distribution, and $0.72 < S_{0} \leq 1$ is for isotropic events, corresponding to the highest 20\% of the $S_{0}$ distribution. Further, minimum bias events are divided into six multiplicity classes and the corresponding spherocity cuts for isotropic and jetty events are tabulated in Table~\ref{tab:mult_sp}. For consistency, the $N_{\rm ch}$ intervals chosen here are the same as in Ref.~\cite{Khatun:2019dml}. In order to maximize the statistics, the bin width is taken smaller at lower multiplicities and larger at high-multiplicity bins. Figure~\ref{sp_dis} shows the transverse spherocity distribution in different multiplicity classes for pp collisions at $\sqrt{s}$ = 13 TeV. Here, it is observed that high-multiplicity events are more isotropic in nature, which is in accordance with earlier works on transverse spherocity~\cite{Khatun:2019dml,Khuntia:2018qox,Tripathy:2019blo}. The peak of the transverse spherocity distribution shifts towards isotropic events with increasing charged-particle multiplicity. This shows that a higher contribution of softer events comes from multiple hard partonic scatterings in high-multiplicity pp collisions, which generate an almost isotropic distribution of particles~\cite{Khuntia:2018qox}. Therefore, the differential study of particle production as a function of multiplicity and event-shape classes is of great importance for understanding the particle production mechanism. As the transverse spherocity distribution depends on charged-particle multiplicity, the cuts for jetty and isotropic events vary among multiplicity classes, as shown in Table~\ref{tab:mult_sp}. For the sake of simplicity, from here onwards we refer to transverse spherocity simply as spherocity.
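As a concrete illustration of the spherocity definition above, the following Python sketch evaluates $S_0$ for one event by scanning candidate unit vectors $\hat{n}(\phi)$ in the transverse plane; a finite scan only approximates the minimization, and the function and variable names are illustrative.
\begin{verbatim}
import math

def spherocity(px, py, n_steps=360):
    # px, py: lists of transverse momentum components of the selected
    # charged particles (at least a few particles are assumed).
    sum_pt = sum(math.hypot(x, y) for x, y in zip(px, py))
    best = float("inf")
    for i in range(n_steps):
        phi = 2.0 * math.pi * i / n_steps
        nx, ny = math.cos(phi), math.sin(phi)
        # |pT x n| in the transverse plane is |px*ny - py*nx|
        cross = sum(abs(x * ny - y * nx) for x, y in zip(px, py))
        best = min(best, cross / sum_pt)
    return (math.pi ** 2 / 4.0) * best ** 2

# Back-to-back (jetty) events give S0 close to 0;
# isotropic events give S0 close to 1.
\end{verbatim}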
\begin{figure}[ht!]
\includegraphics[scale=0.43]{Figures/spherocity_distribution_Nch.pdf}
\caption[]{(Color Online) Transverse spherocity distributions for different charged-particle multiplicity in pp collisions at $ \sqrt{s} =\mathrm{13~TeV}$ using PYTHIA8. Different line styles and colors are for different multiplicity classes. }
\label{sp_dis}
\end{figure}
\begin{table}[htp]
\caption{Charged-particle multiplicity (Mult.) classes ($N_{\rm ch}$) ($|\eta|<$ 1.0) and corresponding spherocity ranges for jetty and isotropic events. Here the lowest and highest 20\% events of the spherocity distribution for a given multiplicity class are considered as jetty and isotropic events, respectively.}
\centering
\scalebox{1.17}
{
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{1}{|c|}{\bf Mult. Classes }&\multicolumn{2}{c|}{\bf $S_{0}$ range} \\
\cline{2-3}
\multicolumn{1}{|c|}{($N_{\rm ch}$)} &{\bf Jetty events} & {\bf Isotropic events}\\
\hline
\multirow{1}{*}{$5-10$}
&$0-0.29$ &$0.64-1$ \\
\cline{2-3}
\cline{1-3}
\multirow{1}{*}{$10-15$}
& $0-0.38$ &$0.70-1$ \\
\cline{2-3}
\cline{1-3}
\multirow{1}{*}{$15-20$}
&$0- 0.44$&$0.74-1$\\
\cline{2-3}
\cline{1-3}
\multirow{1}{*}{$20-30$}
&$0-0.49$ &$0.77- 1$ \\
\cline{2-3}
\cline{1-3}
\multirow{1}{*}{$30-40$}
&$0- 0.54$&$0.80- 1$ \\
\cline{2-3}
\cline{1-3}
\multirow{1}{*}{$40-150$}
&$0- 0.58$ &$0.82- 1$ \\
\cline{2-3}
\cline{1-3}
\end{tabular}
}
\label{tab:mult_sp}
\end{table}
With this detailed analysis methodology, we now proceed for the estimation of transverse momentum spectra, relative integrated yield and relative mean transverse momentum of $\rm D^{0}$, J/$\psi$ and $\Lambda_{c}^{+}$ in pp collisions at $\sqrt{s}$ = 13 TeV.
\section{Results and Discussion}
\label{result}
\subsection{Transverse momentum spectra}
\label{hard_qcd}
\begin{figure*}[!ht]
\begin{center}
\includegraphics[scale=0.66]{Figures/ptspectra_ratio.pdf}
\caption {(Color online) $p_{\rm T}$-spectra (left panel) of isotropic, jetty and spherocity-integrated events, and their ratios (right panel) to the spherocity-integrated ones for $\rm D^{0}$ (top), J/$\psi$ (middle) and $\Lambda_{c}^{+}$ (bottom) for minimum bias pp collisions at $\sqrt{s}$ = 13 TeV using PYTHIA8. }
\label{fig:pTmin}
\end{center}
\end{figure*}
\begin{figure*}[!ht]
\begin{center}
\includegraphics[scale=0.66]{Figures/pt_spectra_top_20.pdf}
\caption {(Color online) $p_{\rm T}$-spectra (left panel) of isotropic, jetty and spherocity-integrated events, and their ratios (right panel) to the spherocity-integrated ones for $\rm D^{0}$ (top), J/$\psi$ (middle) and $\Lambda_{c}^{+}$ (bottom) for high-multiplicity pp collisions at $\sqrt{s}$ = 13 TeV using PYTHIA8. }
\label{fig:pTtop20}
\end{center}
\end{figure*}
Left panels of Fig. \ref{fig:pTmin} show the transverse momentum ($p_{\rm T}$) spectra for $\rm D^{0}$ (top), J/$\psi$ (middle) and $\Lambda_{c}^{+}$ (bottom) for isotropic, jetty and spherocity-integrated events in minimum bias pp collisions at $\sqrt{s}$ = 13 TeV. The right panels show the ratio of the $p_{\rm T}$ spectra for isotropic and jetty events to the spherocity-integrated ones. The ratios clearly indicate that the particle production from isotropic events dominate at low-$p_{\rm T}$ and after a certain $p_{\rm T}$, the particle production from jetty events starts to dominate. The crossing point of the jetty and isotropic events for $\Lambda_{c}^{+}$ and J/$\psi$ are found to be similar. However, for $\rm D^{0}$ the crossing point is at a higher $p_{\rm T}$. This may suggest that the soft production of $\rm D^{0}$ is dominant till higher-$p_{\rm T}$ compared to $\Lambda_{c}^{+}$ and J/$\psi$. We also estimate the $p_{\rm T}$ spectra in high-multiplicity pp collisions in different spherocity classes, which is shown in Fig. \ref{fig:pTtop20}. Here, the crossing point of the jetty and isotropic events for all the studied particles are found to be similar. The comparison between Fig. \ref{fig:pTmin} and Fig. \ref{fig:pTtop20} indicates that the heavy-flavor particle production from jetty events dominates at a lower $p_{\rm T}$ in high-multiplicity pp collisions compared to the minimum bias ones. At high multiplicity for low-$p_{\rm T}$ region, the separation between the isotropic and jetty events are small compared to minimum bias events.
\subsection{Relative integrated yield and relative mean transverse momentum}
\label{yield_pt_mult}
\begin{figure*}[!ht]
\begin{center}
\includegraphics[scale=0.46]{Figures/dNdy_all.pdf}
\includegraphics[scale=0.66]{Figures/Mean_pT_all-new.pdf}
\caption {(Color online) Left panel: Self-normalised yields with respect to the corresponding event types, Middle panel: mean transverse momenta ($\langle p_{\rm{T}} \rangle$) scaled to its MB values, and Right panel: ratio of $\langle p_{\rm{T}} \rangle$ in different event types to the spherocity-integrated ones as a function of multiplicity for $\rm D^{0}$ (top), J/$\psi$ (middle) and $\Lambda_{c}^{+}$ (bottom). The error bars in the data points are the statistical uncertainties.}
\label{fig5:meanpt:spherocity}
\end{center}
\end{figure*}
The relative yields of $\rm D^{0}$, J/$\psi$ and $\Lambda_{c}^{+}$ are measured at the mid-rapidity ($|y| < 0.9$) using the following relation:
\begin{eqnarray}
\frac{Y_{\rm particle}}{\langle Y_{\rm particle}\rangle}= \frac{N_{\rm particle}^{i}}{N_{\rm particle}^{\rm total}}\,\frac{N_{\rm evt}^{\rm total}}{N_{\rm evt}^{i}},
\label{eq1}
\end{eqnarray}
where $N_{\rm particle}^{i}$ and $N_{\rm evt}^{i}$ are the number of $\rm D^{0}$, J/$\psi$ and $\Lambda_{c}^{+}$, and the number of events in the $i^{th}$ multiplicity bin, respectively. $N_{\rm particle}^{\rm total}$ and $N_{\rm evt}^{\rm total}$ are the total number of $\rm D^{0}$, J/$\psi$ and $\Lambda_{c}^{+}$ produced, and the total number of minimum-bias events, respectively. The uncertainties in the measurement of the number of $\rm D^{0}$, J/$\psi$ and $\Lambda_{c}^{+}$ particles are $\sqrt{N_{\rm D^{0}}}$, $\sqrt{N_{J/\psi}}$ and $\sqrt{N_{\Lambda_{c}^{+}}}$, respectively. These uncertainties are propagated using the standard error propagation formula to estimate the uncertainties in the relative $\rm D^{0}$, J/$\psi$ and $\Lambda_{c}^{+}$ yields. The mean transverse momenta ($\langle p_{\rm{T}}\rangle$) of $\rm D^{0}$, J/$\psi$ and $\Lambda_{c}^{+}$ are calculated for each multiplicity bin and the corresponding uncertainty is given by the ratio of the standard deviation ($\sigma$) to the square root of the number of entries in that bin ($\sigma/\sqrt{N_{\rm bin}^{p_{\rm T}}}$).
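A minimal Python sketch of these two estimators is given below; it treats the particle counts as independent Poisson counts when propagating the errors, which is only an approximation, and the function names are illustrative.
\begin{verbatim}
import math

def relative_yield(n_particle_i, n_evt_i, n_particle_tot, n_evt_tot):
    # Self-normalised yield in multiplicity bin i and its uncertainty,
    # propagating sqrt(N) errors on the two particle counts.
    y = (n_particle_i / n_particle_tot) * (n_evt_tot / n_evt_i)
    rel_err = math.sqrt(1.0 / n_particle_i + 1.0 / n_particle_tot)
    return y, y * rel_err

def mean_pt(pt_values):
    # <pT> in a multiplicity bin and its uncertainty sigma / sqrt(N).
    n = len(pt_values)
    mean = sum(pt_values) / n
    sigma = math.sqrt(sum((p - mean) ** 2 for p in pt_values) / n)
    return mean, sigma / math.sqrt(n)
\end{verbatim}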
Left(middle) panel of Fig. \ref{fig5:meanpt:spherocity} shows the integrated yields($\langle p_{\rm{T}}\rangle$) of $\rm D^{0}$, J/$\psi$ and $\Lambda_{c}^{+}$ scaled to the corresponding integrated yields($\langle p_{\rm{T}}\rangle$) of spherocity-integrated events in minimum bias collisions as a function of charged-particle multiplicity. For all the particles, the relative yield and the relative mean transverse momentum increase with charged-particle multiplicity. Enabling the CR in PYTHIA8, produces effects on the final particle distributions, which could resemble those due to flow~\cite{Ortiz:2013yxa}. An increase in the $\langle p_{\rm{T}}\rangle$ with $N_{\rm ch}$ is attributed to the presence of CR between the interacting strings. The relative yields and relative $\langle p_{\rm{T}}\rangle$ are found to be higher for jetty events compared to isotropic ones. The right panel of Fig. \ref{fig5:meanpt:spherocity} shows the ratio of relative mean transverse momentum from isotropic and jetty events to the $S_{0}$-integrated events. Interestingly, the relative $\langle p_{\rm{T}}\rangle$ of the studied particles for isotropic events stay systematically below the spherocity-integrated ones for low-multiplicity events and approaches towards spherocity integrated ones with increase of multiplicity. For jet-like events the $\langle p_{\rm{T}}\rangle$ is higher than that of spherocity-integrated events and the relative increase in $\langle p_{\rm{T}}\rangle$ saturates at high multiplicity. This behavior is similar to the observed behavior of $\langle p_{\rm{T}}\rangle$ of light-flavor
charged-particles in different spherocity classes by ALICE at the LHC~\cite{Acharya:2019mzb}.
Figure~\ref{fig5:meanpt:spherocity} further reveals a clear distinction in the production mechanisms between $\rm D^{0}$ and $\Lambda_{c}^{+}$ versus J/$\psi$. For example, relative yields of J/$\psi$ for jetty, isotropic and $S_{0}$-integrated events are close to each other and are less than that of $\rm D^{0}$ and $\Lambda_{c}^{+}$. This means more number of open flavors are produced in high-multiplicity events as compared to charmonia and is also reflected in the complementary study of $\langle p_{\rm{T}}\rangle$. The $\langle p_{\rm{T}}\rangle$ of J/$\psi$ has the dominant effect of jetty events, whereas, $\langle p_{\rm{T}}\rangle$ of $\rm D^{0}$ and $\Lambda_{c}^{+}$ are dominated by isotropic ones. This can be explained by the multi-quark dynamics by the fact that $\rm D^{0}$-mesons and $\Lambda_{c}^{+}$ baryons are produced via string fragmentation. Here, the latter carry the flow-like characteristics originating from CR mechanism~\cite{Ortiz:2013yxa}. But, J/$\psi$ which is a bound state of heavy charm and anti-charm quarks, has a very little contribution from CR~\cite{Thakur:2017kpv,Deb:2018qsl}. Further, greater number of light-quarks are produced from MPI compared to heavy-quarks, which makes more light quarks to come to the close proximity of a c-quark, as compared to its own counter part ($\bar{c}$) and hence higher probability of production of open heavy-flavors than charmonia in a high-multiplicity environment. However, enhancement of heavy-baryon over meson still need to be understood which we have tried to explore in the next section.
\subsection{Baryon-to-meson ratio}
\label{sec:yield_ratio}
A significant enhancement of baryon-to-meson ratios for light hadrons has been observed in central heavy-ion collisions compared to pp collisions in the intermediate $p_{\rm{T}}$ region~\cite{Abelev:2013xaa}. The enhancement can be explained by coalescence model through hadronize-combination of constituent quarks~\cite{Oh:2009zj,Greco:2003mm,Minissale:2015zwa}. Recently, ALICE and LHCb have observed enhancement of charmed baryon-to-meson ratio which indicates charm quarks may hadronize through coalescence as well. Although, minimum bias pp collisions do not show significant enhancement of baryon-to-meson ratio in the intermediate $p_{\rm{T}}$ region~\cite{Acharya:2020lrg,Aaij:2018iyy}, in this paper we have tried to unfold the possibility of such effects in high-multiplicity events in different event shapes. Indication of such enhancement would be sensitive to thermalization effect in pp collisions~\cite{Sarkar:2019nua}. The relative abundance of baryons and mesons can shed light on the process of fragmentation - a non-perturbative process. Formation of jets of partons into high transverse momentum hadrons is described by fragmentation function which incorporate how partons from jet combine with quarks and antiquarks from the vacuum to form hadrons. Because of MPIs, jet-partons in pp collisions can combine with quarks and antiquarks produced from MPIs to form hadrons via string fragmentation. Since the momenta of quarks and antiquarks from secondary MPIs are smaller than those of partons from jets, these hadrons have momenta lower than independent fragmentation of jet partons and that is what we observe from Fig.~\ref{fig5:lambda_to_dzero}. The $p_{\rm T}$-differential $\Lambda^{+}_{c}/\rm D^{0}$ ratio for jetty events is higher compared to isotropic events in minimum bias sample. One interesting observation from Fig.~\ref{fig5:lambda_to_dzero} is that the behaviour of $\Lambda^{+}_{c}/\rm D^{0}$ $p_{\rm T}$-differential ratio for all event topologies follow heavy-ion-like trend i.e. enhancement of baryon-to-meson ratio in the intermediate $p_{\rm{T}}$ region followed by a decreasing behaviour. Although, the minimum bias samples show a clear event topology
dependence, the top 20\% high-multiplicity pp events are driven by the final state multiplicity without a distinction of event
types.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.438]{Figures/charged_lambda_to_dzero_MB.pdf}
\includegraphics[scale=0.438]{Figures/charged_lambda_to_dzero_TOP_20.pdf}
\caption {(Color online) $p_{\rm{T}}$-differential particle ratio of $\Lambda_{c}^{+}$ to $\rm D^{0}$, for minimum bias and high-multiplicity (top 20\%) pp collisions in isotropic (blue squares), jetty (red triangles) and spherocity integrated (open circles) events using PYTHIA8.}
\label{fig5:lambda_to_dzero}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.438]{Figures/neutral_lambda_to_anti_kaon_MB.pdf}
\includegraphics[scale=0.438]{Figures/neutral_lambda_to_anti_kaon_TOP_20.pdf}
\caption {(Color online) $p_{\rm{T}}$-differential particle ratio of $\Lambda^{0}$ to $\rm K^{-}$, for minimum biased and high-multiplicity (top 20\%) pp collisions in isotropic (blue squares), jetty (red triangles) and spherocity integrated (open circles) events using PYTHIA8.}
\label{fig5:lambda_to_K}
\end{center}
\end{figure}
Contrary to heavy-flavors, when we study similar ratio in the light flavor sector ($\Lambda^{0}/K^{-}$), we observe a completely opposite trend with spherocity classes for $p_{\rm T}$ $>$ 4 GeV/$c$ (Fig.~\ref{fig5:lambda_to_K}): ratio is higher for isotropic samples as compared to jetty ones. This is because of the fact that the former are driven predominantly by hard collisions and can have maximum contributions from jettiness of the events in comparison with the contributions from hadronization. However, for light flavors, most of the contributions could be MPI dominant. Here, the $\Lambda$ enhancement is linked to the increased density of quarks and gluons, particularly the strange quarks (s) from MPI and CR in the final state.
For the heavy flavor versus light flavor behaviour of the baryon over meson ratio, heavier particles will have a larger boost which will be
reflected in the baryon-to-meson ratios.
Therefore, one can see the shift of peak of the ratio to higher $p_{\rm{T}}$ for heavy-flavors. For the top 20\% high-multiplicity pp
events, as seen from Figs.~\ref{fig5:lambda_to_dzero} and~\ref{fig5:lambda_to_K}, event topology has no effect on heavy-flavor sector, whereas in case of the light-flavors, we do observe a clear dependence of the discussed ratio on different event types.
\section{Summary}
\label{sum}
This paper focuses on the production of heavy-flavor hadrons -- of J/$\psi$, $\rm D^{0}$ and $\Lambda_{c}^{+}$ in pp collisions at $\sqrt{s}$ = 13 TeV using 4C tuned PYTHIA8 event generator at mid-rapidity. In addition, an event shape observable called spherocity is used for the first time in the heavy-flavor sector as a differentiator of integrated events into jetty and isotropic to have a better understanding of production dynamics of the heavy-flavor hadrons.
Important findings from this study are summarized below:
\begin{enumerate}
\item We see a clear dependence of the spherocity distribution with charged-particle multiplicity. The spherocity distribution is increasingly skewed with the increase in charged-particle multiplicity.\\
\item A clear spherocity dependence of heavy-flavor $p_{\rm T}$-spectra, integrated yield, $\langle p_{\rm{T}}\rangle$ and particle ratios is observed in both minimum bias and high-multiplicity pp collisions.\\
\item The crossing point of the ratios of $p_{\rm T}$-spectra from jetty and isotropic events to the spherocity-integrated ones shifts to lower $p_{\rm T}$ with the increase in charged-particle multiplicity. This indicates that spherocity differentiates events (jetty versus isotropic) more accurately in high-multiplicity pp collisions keeping a small gap in the multiplicity of heavy-flavor hadrons.\\
\item Relative yield and relative $\langle p_{\rm{T}}\rangle$ are found to be increasing with the increase in charged-particle multiplicity and they are higher for jetty events compared to isotropic ones. These results suggest that spherocity acts as a nice tool to differentiate events dominated by soft versus hard particle production processes.
\item The spherocity dependence of relative yields and relative $\langle p_{\rm{T}}\rangle$ for $\rm D^{0}$ and $\Lambda_{c}^{+}$ show a similar trend while for J/$\psi$ the difference from jetty to isotropic events is found to be lesser. This novel observation hints to different production dynamics of open charm compared to charmonia and the MPIs with color reconnection mechanism plays a major role for such a behavior in PYTHIA8. \\
\item The $\Lambda^{+}_{c}/\rm D^{0}$ ratio in jetty events is found to be higher compared to the isotropic events while an opposite trend for $\Lambda^{0}/K^{-}$ ratio is observed for the minimum bias sample. This is an interesting observation as spherocity dependence of particle ratios show a completely different behaviour for heavy flavor compared to light flavor sector. This clearly indicates to a MPI dominant contribution for $\Lambda^{0}/K^{-}$ while the $\Lambda^{+}_{c}/\rm D^{0}$ ratio is driven predominantly by hard collisions and can have maximum contributions from jets.\\
\end{enumerate}
A multi-differential study taking event topology and multiplicity into account is necessary in small systems at LHC energies when looking into the observation of heavy-ion-like features in high-multiplicity pp collisions. The LHC experiments have planned dedicated high-multiplicity triggered events and the associated detector upgrades, which will provide a proper platform in this direction. The study of heavy-flavor production will play an important role in testing pQCD, as heavy quarks are produced early in time and witness the complete space-time evolution of the system. However, the present limitations in terms of proper identification of secondary vertices, efficiency at low $p_{\rm T}$, and the signal-to-background ratio will be overcome to a large extent with the detector upgrades. It is worth mentioning here that ALICE ITS3, planned for installation in LHC Long Shutdown 3 (LS3), will be a novel vertex detector consisting
of curved wafer-scale ultra-thin silicon sensors arranged in perfectly cylindrical layers. It will feature an unprecedentedly low material budget of 0.05\% $X_0$ per layer, with the innermost layer positioned at
only 18~mm radial distance from the interaction point \cite{LS3,Future}. This will allow a higher detection efficiency for heavy-flavor particles, opening up a new domain of pQCD studies. The present study will be even more exciting to carry out on experimental data in the upcoming LHC Run 3 and Run 4.
\section{Acknowledgement}
This research work has been carried out under the financial support from DAE-BRNS, Government of India, Project No. 58/14/29/2019-BRNS of Raghunath Sahoo. S.T. acknowledges the support from the postdoctoral fellowship of DGAPA UNAM.
\section{Introduction}
\label{sec:introduction}
\par Differential privacy is the state-of-the-art concept for privacy preservation because it formalizes a strong privacy guarantee on a solid mathematical foundation. That is, even if two data sets differ in only one record, it is hard to tell one data set from the other. In differential privacy, the privacy guarantee is quantified by the probability with which attackers can tell one data set from the other. This probability is controlled by a parameter called the privacy budget.
\par One of the most important reasons for the popularity of differential privacy is that it makes no assumptions about the background knowledge available to attackers \cite{29}. That is, even if attackers know all data records except one, they cannot get much information about the record they do not know. The amount of information which attackers can extract is strictly controlled by the privacy budget. However, intuitively, the more background knowledge attackers have, the stronger their ability to extract information. For example, if attackers know that a target record belongs to a female, they can filter out all records of males, resulting in a high probability of getting the target information.
\par In this paper, we explore how background knowledge can be transformed into consuming extra privacy budget of differential privacy mechanisms, i.e., how attackers can use their background knowledge to consume unexpected privacy budget. We find that this transformation is possible because differential privacy mechanisms do not take one property of queries into account, namely the linear property.
\par The linear property makes it possible to divide one query into two queries. For example, the counting query is one class of linear queries. For the query $count(\{b_1,b_2,b_3\})$, the following equality holds.
\begin{eqnarray*}
& & count(\{b_1,b_2,b_3\}) \\
&=& count(\{b_1,b_2 \})+count(\{ b_3\})\\
&=&count(\{b_1,b_3 \})+count(\{ b_2\})\\
\end{eqnarray*}
\par Suppose that attackers aim at obtaining the answer of $count(\{b_1,b_2,b_3\})$ and that the consumed privacy budget is $\epsilon$ each time a differential privacy mechanism responds to one query.
\par Attackers can issue two different queries, namely $count(\{b_1,b_2 \})$ and $count(\{b_1,b_3 \})$, to a differential privacy mechanism and obtain two responses. From the differential privacy mechanism's perspective, it answers two different queries and each query consumes privacy budget $\epsilon$.
\par However, if $\{b_2,b_3\}$ is part of the attackers' background knowledge, attackers can get two answers for the same query. Specifically, by adding the first response, which is an answer of $count(\{b_1,b_2 \})$, to $count(\{b_3\})$, which attackers compute by themselves, attackers obtain an answer of $count(\{b_1,b_2,b_3\})$. By adding the second response to $count(\{b_2\})$, attackers obtain another answer of $count(\{b_1,b_2,b_3\})$. So, from the attackers' perspective, the differential privacy mechanism answers the same query twice, and the privacy budget consumed each time is $\epsilon$. That is, the total consumed privacy budget is $2\epsilon$ for the same query.
\par The possibility of obtaining multiple answers for one query leads to a risk of leaking membership information of records. Specifically, attackers obtain multiple answers of the query $count(\{b_1,b_2,b_3\})$ and know that the records $b_2$ and $b_3$ are in the target data set. If the record $b_1$ is in the target data set, the mean of the obtained samples is 3; otherwise, the mean is 2. By estimating the mean of the samples, attackers can determine whether the record $b_1$ is in the target data set.
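\par As an illustration of this example, the following sketch (not part of the formal construction; it assumes Python with numpy, and all identifiers are illustrative) simulates a Laplace-perturbed counting query and shows how an attacker who knows $\{b_2,b_3\}$ turns two different issued queries into two noisy answers of $count(\{b_1,b_2,b_3\})$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
epsilon = 0.5          # privacy budget charged per answered query
sensitivity = 1.0      # global sensitivity of the counting query

def laplace_count(records, eps):
    # noisy count as the Laplace mechanism would return it
    return len(records) + rng.laplace(0.0, sensitivity / eps)

target_set = {"b1", "b2", "b3"}   # b1 is the target record
known = {"b2", "b3"}              # attackers' background knowledge

# query 1: count({b1,b2}); the attacker adds count({b3}) locally
a1 = laplace_count(target_set & {"b1", "b2"}, epsilon) + len(known - {"b2"})
# query 2: count({b1,b3}); the attacker adds count({b2}) locally
a2 = laplace_count(target_set & {"b1", "b3"}, epsilon) + len(known - {"b3"})

# both a1 and a2 are noisy answers of count({b1,b2,b3});
# their mean concentrates near 3 if b1 is present, near 2 otherwise
print(a1, a2, (a1 + a2) / 2)
\end{verbatim}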
\par In a word, if the background knowledge of attackers is sufficient, attackers can consume an unreasonably large privacy budget in total while differential privacy mechanisms remain unaware, resulting in a risk of information leakage. This risk may be a big challenge for applications of differential privacy mechanisms, because the affected linear queries, such as the counting query and the sum query, are elementary queries which differential privacy mechanisms handle well in previous research.
\par In order to demonstrate the unexpected information leakage of differential privacy mechanisms, we take advantage of the above vulnerability to construct a membership inference attack method against the Laplace mechanism. Our main contributions are as follows.
\par (1) We point out that the linear property of queries is a source of information leakage for differential privacy. To the best of our knowledge, this paper is the first to discuss information leakage caused by the linear property of queries.
\par (2) A method is proposed to obtain multiple i.i.d. (independent and identically distributed) samples of a linear query's answer under the constraints of differential privacy. In general, multiple i.i.d. samples of a query's answer cannot be obtained directly under differential privacy. However, the linear query has two special properties: the first is the linear property, and the second is that its sensitivity can be calculated easily. Based on these two properties, we propose a method to obtain multiple i.i.d. samples of a linear query's answer.
\par (3) A membership inference attack method against the Laplace mechanism is proposed. The goal of membership inference attacks is to determine whether a target record is in a target data set. Differential privacy claims that even if attackers know all records except one, they cannot extract information about the record they do not know. Therefore, membership inference attacks are an appropriate way to demonstrate the information leakage of differential privacy. In the proposed attack method, the decision is made by a hypothesis test based on the multiple i.i.d. samples obtained.
\subsection{Related Work}
\par Differential privacy is the state-of-the-art concept for privacy preservation \cite{26}. Since the concept of differential privacy was proposed, there have been some analyses of its information leakage, mainly along two lines. The first line is related to the assumption of data generation. In differential privacy, records in data sets are assumed to be independent, but this assumption does not always hold in real application scenarios \cite{31}. For example, Bob or one of his 9 immediate family members may have contracted a highly infectious disease; with high probability, either the whole family is infected together or none of them is \cite{32}. The correlation among family members makes it easier for attackers to infer whether Bob is infected by querying ``how many people are infected by the disease in Bob's family''. There are some papers aiming at fixing the correlation problem. Lv et al.\ study the problem in big data scenarios \cite{33}, Wu et al.\ from the perspective of game theory \cite{19}, and Zhang et al.\ in machine learning \cite{34}. The key to fixing the correlation problem is to find an appropriate quantity to quantify the correlation among data records; the noise added by differential privacy mechanisms is then calibrated by this quantity. Chen et al.\ quantify the degree of correlation by a Gaussian correlation model \cite{35}, and Zhu et al.\ propose the concept of correlated sensitivity \cite{24}. A strength of this line is that the unexpected information leakage is not tied to particular differential privacy mechanisms; a weakness is that the unexpected information leakage only concerns correlated records rather than all records.
\par The second line is related to extended conceptions of sensitivity in differential privacy. There is a difficult trade-off in differential privacy mechanisms, namely the trade-off between the magnitude of noise and data utility \cite{27} \cite{28}. In order to reduce the magnitude of noise, various alternative conceptions of sensitivity have been proposed, such as elastic sensitivity \cite{22} and record sensitivity \cite{26}. Some of these conceptions are weak and lead to unexpected information leakage. The local sensitivity is an example \cite{23}. The goal of local sensitivity is to release data with database-dependent additive noise; that is, the noise magnitude is determined not only by the query to be released, but also by the data set itself. However, the noise calibrated by local sensitivity is too small, resulting in information leakage. For example, let $f_{med}(x) = median(x_1,x_2,\dots,x_n)$, where the $x_i$ are sorted real numbers from a bounded interval such as $[0,\Lambda]$. If $f_{med}(x)$ is released with noise magnitude proportional to the local sensitivity, then the probability of receiving a non-zero answer is zero when $x_1=\dots = x_{m+1}=0$, $x_{m+2}=\dots=x_n=\Lambda$, whereas the probability of receiving a non-zero answer is non-negligible when $x_1=\dots = x_{m}=0$, $x_{m+1}=\dots=x_n=\Lambda$, where $n$ is odd and $m=\frac{n+1}{2}$. The analyses of extended conceptions of sensitivity are difficult because how large the noise magnitude needs to be is hard to quantify.
\par The goal of membership inference attacks is to determine whether a given data record is in a target data set. The attack was first demonstrated by Homer et al.\ \cite{36}. Their paper had such a big influence on the privacy community that the USA National Institutes of Health (NIH) switched policies to prevent membership information leakage in the process of releasing statistical data. Membership inference attacks have received much attention since then. Shokri et al.\ quantitatively investigate membership inference attacks against machine learning models \cite{43}. Rahman et al.\ analyze membership inference attacks against deep learning models protected by differential privacy mechanisms \cite{41}. Yeom et al.\ investigate the relationship between overfitting and membership inference attacks, finding that ``overfitting is sufficient to allow an attacker to perform membership inference attack'' \cite{42}. Backes et al.\ explore membership inference attacks against miRNA expression data sets \cite{44}. Almadhoun et al.\ discuss membership inference attacks against differential privacy mechanisms by taking advantage of the dependence between data records \cite{37}, and they also analyze an application of membership inference against genomic data sets \cite{38}.
\par In brief, the first line concerns special records which are correlated with each other, and the second line concerns the trade-off between noise magnitude and data utility. Different from these two lines, we discuss unexpected information leakage from a novel angle. In this paper, from the view of query functions, we show that differential privacy mechanisms do not take the linear property of queries into account, resulting in unexpected information leakage.
\subsection{Organization}
\par In the next section, background knowledge is introduced briefly, including the conception of differential privacy, the linear property and the hypothesis test. In Section 3, how to obtain multiple answers for linear queries is presented. In Section 4, the proposed membership inference attack method is presented and an instance of the attack is given. In Section 5, experimental results are shown. At last, the conclusion is given.
\section{background}
\par The introduction of background knowledge is divided into three parts including basic conceptions of differential privacy, the linear property and the hypothesis test method.
\subsection{Differential Privacy}
\par Differential privacy is the most popular conception of privacy preservation. It formalizes a strong privacy guarantee: even if attackers know all data records except one, they cannot get much information about the record they do not know \cite{30} \cite{39}. To that end, differential privacy mechanisms guarantee that their outputs are insensitive to any particular data record. That is, the presence or absence of any record has limited influence on the outputs of differential privacy mechanisms. The presence or absence of one record in a data set is captured by the conception of neighboring databases. Specifically, databases $D$ and $D'$ are neighboring databases, denoted by $D \sim D'$, if there is only one different record between $D$ and $D'$, denoted by $d(D,D')=1$.
\par \textbf{Definition 1}(differential privacy) Any random mechanism $M:D^n\to R^d$ preserves $\epsilon$-DP(differential privacy) if for any neighboring databases $D$ and $D'$ such that $d(D,D')=1$ and for all sets of possible output $S$:
\begin{eqnarray*}
P\{M(D)\in S\} \le e^\epsilon P\{M(D')\in S\}
\end{eqnarray*}
\par Here, the $\epsilon$ is the privacy budget. The privacy guarantee is quantified by the privacy budget. The smaller the privacy budget is, the stronger the privacy guarantee is.
\par The Laplace mechanism is a famous and foundational mechanism in the field of differential privacy. It deals with numerical data and is based on the conception of global sensitivity. The global sensitivity quantifies the maximum difference of a query's answer caused by the presence or absence of one record.
\par \textbf{Definition 2} (global sensitivity) For a given database $D$ and query $q$, the global sensitivity denoted by $\Delta D$ is
\begin{eqnarray*}
\Delta D = \max \limits_{D \sim D'} |q(D)-q(D')|
\end{eqnarray*}
\par The Laplace mechanism covers this difference by noise drawn from a Laplace distribution with location parameter $\mu=0$ and scale parameter $b=\frac{\Delta D}{\epsilon}$.
\par \textbf{Definition 3} (Laplace mechanism) For given database $D$, query $q$ and privacy budget $\epsilon$, the output of Laplace mechanism is
\begin{eqnarray*}
M(q,D,\epsilon) = q(D) + Lap(0,\frac{\Delta D}{\epsilon})
\end{eqnarray*}
\par Here, the $Lap(0,\frac{\Delta D}{\epsilon})$ represents noise drawn from a Laplace distribution with location parameter 0 and scale parameter $\frac{\Delta D}{\epsilon}$.
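\par For concreteness, the following minimal sketch (assuming Python with numpy; the function names are ours, not a standard library API) implements Definition 3 for a counting query, whose global sensitivity is 1.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def laplace_mechanism(q, D, epsilon, sensitivity):
    # return q(D) perturbed by Lap(0, sensitivity/epsilon)
    return q(D) + rng.laplace(0.0, sensitivity / epsilon)

# counting query: its global sensitivity is 1, because adding or
# removing one record changes the count by at most 1
D = ["record%d" % i for i in range(100)]
noisy = laplace_mechanism(len, D, epsilon=0.5, sensitivity=1.0)
print(noisy)   # close to 100, perturbed by noise of scale 2
\end{verbatim}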
\begin{comment}
\par As for exponential mechanism, it is based on a function called utility function. The utility function denoted by $q$ is a measure for the quality of output. The inputs of utility function are database $D$ and the possible output r. The utility function should be insensitive for present or absence of any one record. The sensitivity of $q$ is $\Delta q$
\begin{eqnarray*}
\Delta q = \max \limits_{\forall r,D, D'} |q(D,r)-q(D',r)|
\end{eqnarray*}
\par The exponential mechanism is suitable for application scenarios where outputs are not real number or make no sense after adding noise. The exponential mechanism assigns greatest probability to result whose value of utility function is greatest.
\par \textbf{Definition 4} (Exponential mechanism) For a utility function $q$ and database $D$, the mechanism $M$
\begin{eqnarray*}
M(D,q) = \{return\ r\ with\ probability \propto exp^{\frac{\epsilon q(D,r)}{2\Delta q}}\}
\end{eqnarray*}
\end{comment}
\par Differential privacy mechanisms have many good properties which make it possible to build complex differential privacy mechanisms from basic building-block algorithms. Two of them are the Sequential Composition Theorem and the Parallel Composition Theorem.
\par \textbf{Sequential Composition Theorem} Let $A_1,A_2, \cdots, A_k$ be $k$ algorithms that satisfy $\epsilon_1$-DP,$\epsilon_2$-DP, $\cdots$,$\epsilon_k$-DP respectively. Publishing $t = <t_1,t_2,\cdots, t_k>$ satisfies $(\sum_{i=1}^{k}\epsilon_i)$-DP. Where, $t_1=A_1(D),t_2=A_2(t_1,D), t_3=A_3(<t_1,t_2,D>),\cdots, t_k=A_k(<t_1,t_2,t_3,\cdots,t_{k-1}>,D)$.
\par \textbf{Parallel Composition Theorem} Let $A_1,A_2, \cdots, A_k$ be $k$ algorithms that satisfy $\epsilon_1$-DP,$\epsilon_2$-DP, $\dots$,$\epsilon_k$-DP respectively. Publishing $t = <t_1,t_2,\cdots, t_k>$ satisfies $(max_{i\in [1,2\cdots k]}\epsilon_i)$-DP. Where, $t_1=A_1(D_1),t_2=A_2(t_1,D_2), t_3=A_3(<t_1,t_2,D_3>),\cdots, t_k=A_k(<t_1,t_2,t_3,\cdots,t_{k-1}>,D_k)$ and $D_i \cap D_j = \emptyset $ for all $i\ne j$,
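\par The two composition theorems can be read as simple budget-accounting rules. The following sketch (illustrative Python, not a mechanism implementation) contrasts them.
\begin{verbatim}
def sequential_budget(budgets):
    # sub-mechanisms may touch the same records: budgets add up
    return sum(budgets)

def parallel_budget(budgets):
    # sub-mechanisms run on pairwise disjoint subsets: only the maximum counts
    return max(budgets)

eps = [0.1, 0.1, 0.3]
print(sequential_budget(eps))  # 0.5
print(parallel_budget(eps))    # 0.3
\end{verbatim}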
\subsection{Linear Property}
\par The linear query is one kind of fundamental query and is widely used. For example, the sum and counting queries are popular linear queries. The linear query has a very good property, namely the linear property.
\par \textbf{Definition 4} (linear property) For linear query $q$ and data set $D$ such that $D = D_1 \cup D_2$ and $ \varnothing = D_1 \cap D_2 $, we have
\begin{eqnarray*}
q(D) = q(D_1) +q(D_2)
\end{eqnarray*}
\par The counting query is one kind of linear queries so it satisfies the linear property.
\subsection{Hypothesis Test}
\par The hypothesis test is a statistical tool to determine whether the fluctuation of experimental results is only due to randomness. Its main goal is to guarantee the confidence level of experimental conclusions. To that end, there are logically three steps in a hypothesis test. Firstly, an assumption about the experimental conclusions is made. The assumption needs to be contrary to the conclusions which are reached from the experimental results. This assumption is called the null hypothesis, denoted by $H_0$; the opposite hypothesis is called the alternative hypothesis, denoted by $H_1$. Secondly, a statistic is chosen. The statistic needs to be related to the assumption, and its probability distribution function must be known. Thirdly, the probability of the experimental results is calculated through the known distribution. If this probability is smaller than a predetermined threshold, the experimental conclusions are reliable. This procedure guarantees that, when the assumption which is contrary to the experimental conclusions is right, the experimental results occur with extremely small probability. That is, the experimental results are not only due to randomness.
\par The concrete steps of the hypothesis test method are as follows. Firstly, a predetermined probability threshold, called the significance level, is chosen. Secondly, the probability of the experimental results, called the $p$ value, is calculated. At last, a decision is made. Specifically, if the $p$ value is smaller than the predetermined significance level, the alternative hypothesis is accepted; otherwise, the null hypothesis is accepted.
\par The one-sample t-test is a useful hypothesis test method. It is used to determine whether samples come from a distribution with a certain mean.
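\par A minimal sketch of the one-sample t-test, assuming Python with numpy and scipy, is given below; note that scipy's implementation uses the unbiased sample variance, which differs slightly from the statistic defined later in this paper.
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# 20 noisy answers fluctuating around a true value of 3
samples = rng.laplace(3.0, 2.0, size=20)

# H0: the samples come from a distribution with mean 2
t_stat, p_value = stats.ttest_1samp(samples, popmean=2.0)
alpha = 0.05
print("reject H0" if p_value < alpha else "accept H0")
\end{verbatim}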
\begin{comment}
content...
\subsection{Membership Inference Attacks Model }
\par The membership inference attacks is to judge whether a target record is in a target data set. Concretely, there is a target data set with sensitive information and a differential privacy mechanism is used to protect the data set. An attacker with some background knowledge has black-box access to the differential privacy mechanism. Given a target record, the attacker guesses: whether the target record is in target data set. The formal definition of membership inference attack is as follows:
\par \textbf{Definition 5} (membership inference attacks): Given a subset $D_{know}$ of target data set $D$ , black-box access to differential privacy mechanism $M$ and a target record $x$, the attacker guesses whether the target record is in the data set $D$ or not.
$$ A(D_{know},x,M)=\left\{
\begin{array}{rcl}
1 & & {x\in D}\\
0 & & {x\notin D}
\end{array} \right. $$
\par Here, the $A$ represents membership inference attacks method. If $A$ asserts that $x$ is in data set $D$, it outputs 1. If $A$ asserts that $x$ is not in data set $D$, it outputs 0.
\par The ability of attackers is limited. In specific, the black-box access means that an attacker submits a query to differential privacy mechanisms and gets a result as return. The attacker cannot get any other information from the differential privacy mechanism.
\end{comment}
\section{multiple i.i.d. samples for linear query's answer }
\par This section presents how to obtain multiple i.i.d. samples of a linear query's answer under the constraints of differential privacy. Firstly, the idea of the proposed method is introduced; then, the method itself is presented.
\par Usually, multiple i.i.d. samples of a query's answer cannot be obtained directly under the constraints of differential privacy. Differential privacy mechanisms are random mechanisms, so they can output different answers each time the same query arrives. However, in order to reduce the consumption of the privacy budget, differential privacy mechanisms output the same answer for queries which have been answered before. To that end, some engineering methods are available. For example, a table is maintained which records answered queries as well as their answers. When a query comes, the differential privacy mechanism looks up whether the query is in the table. If the query was answered before, the recorded answer is returned, so that the mechanism does not need to compute the answer again.
\par The linear property makes it possible to divide a linear query into two parts. For example, the counting query is a linear query. There are four numbers in a set $B=\{b_1,b_2,b_3,b_4\}$. The equality $count(B)=count(\{b_1,b_2\})+count(\{b_3,b_4\})$ holds. In addition, there are multiple divisions for one query. For instance, the equality $count(B)=count(\{b_1\})+count(\{b_2,b_3,b_4\})$ also holds.
\par The linear property makes it possible to obtain multiple different answers for one linear query under the constraints of differential privacy. Specifically, one linear query can be divided into two parts, such as $count(B)=count(\{b_1,b_2\})+count(\{b_3,b_4\})$. If attackers want to obtain the answer of $count(B)$ under the condition that all queries are calculated only by a differential privacy mechanism, attackers can issue the query $count(B)$ to the differential privacy mechanism. In addition, in order to obtain another answer of $count(B)$, attackers can also issue the query $count(\{b_1,b_2\})$ to the differential privacy mechanism and calculate $count(\{b_3,b_4\})$ by themselves when they know the set $\{b_3,b_4\}$. In brief, by multiple divisions of a query, it is possible to obtain multiple answers for a linear query under the constraints of differential privacy.
\par There is a necessary assumption for attackers to obtain multiple answers for a linear query. In specific, attackers need to know a set of records in the target data set which is protected by the differential privacy mechanism. The set of records can be regarded as background knowledge of attackers.
\par The assumption about the background knowledge in the proposed method is weaker than the assumption in differential privacy. Differential privacy claims that even if attackers know all data records except one, the record which attackers do not know can be protected \cite{30} \cite{39}. That is, the background knowledge of attackers could be all records except one in differential privacy. In the proposed method, the background knowledge is only a set of records instead of all records.
\subsection{Proposed Method}
\begin{figure*}[htbp]
\centering
\includegraphics[width=18.0cm]{./figure/proposed_method2.jpg}
\caption*{\\ Fig 1: method to obtain multiple answers for the target query and membership inference attacks method}
\end{figure*}
\par Attackers can issue the query $q_s(D)$ to the differential privacy mechanism $M$. The $q_s(D)$ means computing the query $q$ over the target data set $D$ under the condition $s$, which specifies the range of data records. In other words, $q_s(D)$ is equal to $q(D\cap D_s)$, where $D_s$ is the set of records which satisfy the condition $s$. For example, consider ``count the number of students whose age is greater than 10 in a classroom'': the query $q$ is the counting query, the data set $D$ is the set of all students in the classroom, and the condition $s$ is ``the age is greater than 10''.
\par Suppose that the target record is denoted by $x$ and the background knowledge of attackers is a set of data records denoted by $D_{know}$. The goal is to obtain multiple answers for a linear query $q(\{x\}\cup D_{know})$. The $s$ represents the condition such that $D_s= \{x\}\cup D_{know}$. Therefore, the target query can be expressed as $q_s$ and its answer can be expressed as $M(q_s,D,\epsilon)$.
\par Attackers choose a subset $ D_i \subset D_{know}$, construct a query condition $s_i$ such that $D_{s_i} = D_i\cup \{x\}$, issue the query $q_{s_i}(D)$ to the differential privacy mechanism and obtain $a_i = M(q_{s_i},D,\epsilon)= q_{s_i}(D)+Lap(0,\Delta D/\epsilon)$ as the return. Let $\hat{a}_i = q(D_{know}\setminus D_i) + a_i$.
\par The data set $D_i$ is a subset of $D_{know}$ and the $D_{know}$ is background knowledge of attackers so attackers can compute $q(D_{know}\setminus D_i)$ locally by themselves. The $\hat{a}_i$ is an answer of the target query $q(\{x\}\cup D_{know})$, which will be proved in next subsection.
\par Through another subset $D_j \subset D_{know}$, attackers can get another answer of the target query. Therefore, attackers can get multiple answers of the target query. If $D_i \cap D_j = \emptyset$ for all $i\ne j$, the totally consumed privacy budget is different from the differential privacy mechanism's perspective and attackers' perspective. The claim is proved in next subsection.
\par The whole idea of this paper is showed in Fig 1, including the method to obtain multiple answers for the target query and the membership inference attacks method which is presented in detail at next section.
\begin{algorithm}
\caption*{method to obtain multiple answers of $q(\{x\}\cup D_{know})$}
\label{alg:Framwork}
\begin{algorithmic}[1]
\REQUIRE ~~\\
the background knowledge $D_{know}$;\\
the differential privacy mechanism $M$;\\
a linear query $q$;\\
the total privacy budget $\epsilon_t$;\\
\ENSURE ~~\\
the $m$ answers $\hat{a}_1,\hat{a}_2,\dots,\hat{a}_m$ ;\\
\STATE let $\epsilon = \frac{\epsilon_t}{m}$
\FOR{$i=1$ to $m$}
\STATE choose a new subset $D_i \subset D_{know}$ \\ such that $D_i\cap D_j = \emptyset$ for all $j\ne i$ ;
\STATE construct a query condition $s_i$ such that $D_{s_i} = D_i\cup \{x\}$;
\STATE $a_i = M(q_{s_i},D,\epsilon)$;
\STATE $\hat{a}_i = a_i + q(D_{know}\setminus D_i)$
\ENDFOR
\RETURN $\hat{a}_1,\hat{a}_2,\dots,\hat{a}_m$
\end{algorithmic}
\end{algorithm}
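\par A possible realization of the above procedure for the counting query is sketched below (Python, assuming numpy; the Laplace mechanism is simulated locally rather than accessed as a black box, and all identifiers are illustrative).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def mechanism_count(D, allowed, eps):
    # stand-in for black-box access: noisy count of records of D
    # that satisfy the condition "record in allowed" (sensitivity 1)
    true_count = sum(1 for r in D if r in allowed)
    return true_count + rng.laplace(0.0, 1.0 / eps)

def collect_answers(D, D_know, x, eps_total, m):
    eps = eps_total / m
    known = list(D_know)
    answers = []
    for i in range(m):
        D_i = {known[i]}                              # disjoint singleton subsets
        a_i = mechanism_count(D, D_i | {x}, eps)      # query q_{s_i}(D)
        answers.append(a_i + len(set(known) - D_i))   # + q(D_know \ D_i)
    return answers

D = set(range(50))          # target data set
D_know = set(range(10))     # background knowledge, a subset of D
x = 49                      # target record (here: present in D)
print(collect_answers(D, D_know, x, eps_total=5.0, m=10))
\end{verbatim}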
\subsection{Correctness and Privacy}
\par In this subsection, the proposed method is analyzed in terms of correctness and privacy. In Theorem 1, we prove that multiple answers of a linear query can be obtained. In Theorem 2, we analyze the amount of background knowledge required by attackers. Then, the totally consumed privacy budget is analyzed from the two perspectives.
\par \textbf{Theorem 1}: $\hat{a}_1,\hat{a}_2,\dots,\hat{a}_m$ are i.i.d. samples of the target linear query's answer calculated by the differential privacy mechanism over the target data set $D$.
\par \textbf{Proof}:
\par For $\forall i$, we have
\begin{eqnarray*}
\hat{a}_i &=& q(D_{know}\setminus D_i) + a_i \\
\hat{a}_i &=& q(D_{know}\setminus D_i) + M(q_{s_i},D,\epsilon) \\
\hat{a}_i &=& q(D_{know}\setminus D_i) + q_{s_i}(D) + Lap(0,\frac{\Delta D}{\epsilon})\\
\end{eqnarray*}
\par We know that $q_{s_i}(D) = q((D_i\cup \{x\})\cap D)$. And we also know
\begin{eqnarray*}
(D_i\cup \{x\})\cap D \ \subset \ D_i\cup \{x\}
\end{eqnarray*}
\par And
\begin{eqnarray*}
(D_i\cup \{x\}) \ \cap \ (D_{know}\setminus D_i) = \emptyset
\end{eqnarray*}
\par In addition, $q$ is linear query. We have
\begin{eqnarray*}
& & q(D_{know}\setminus D_i) + q_{s_i}(D)\\
&=& q(D_{know}\setminus D_i) + q((D_i\cup \{x\})\cap D) \\
&=& q(D_u ) \\
\end{eqnarray*}
\par Here $D_u$ is union set of $D_{know}\setminus D_i$ and $(D_i\cup \{x\})\cap D$.
\begin{eqnarray*}
D_u &=& (D_{know}\setminus D_i) \cup ((D_i\cup \{x\})\cap D) \\
D_u &=& ((D_{know}\setminus D_i)\cap D) \cup ((D_i\cup \{x\})\cap D) \\
D_u &=& ((D_{know}\setminus D_i) \cup (D_i\cup \{x\}))\cap D \\
D_u &=& (D_{know} \cup \{x\})\cap D \\
\end{eqnarray*}
\par So, we have
\begin{eqnarray*}
q(D_u ) &=& q((D_{know} \cup \{x\})\cap D ) \\
q(D_u ) &=& q_s(D) \\
\end{eqnarray*}
\par So, we have
\begin{eqnarray*}
\hat{a}_i &=& q(D_{know}\setminus D_i) + q_{s_i}(D) + Lap(0,\frac{\Delta D}{\epsilon})\\
\hat{a}_i &=& q_s(D) + Lap(0,\frac{\Delta D}{\epsilon}) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)\\
\hat{a}_i &=& M(q_s,D,\epsilon)
\end{eqnarray*}
\par According to equality (1), for all $\hat{a}_1,\hat{a}_2,\dots,\hat{a}_m$, the noise terms $Lap(0,\frac{\Delta D}{\epsilon})$ are i.i.d. samples from the same Laplace distribution. So, the $\hat{a}_1,\hat{a}_2,\dots,\hat{a}_m$ are i.i.d. samples of the target query's answer.
$\hfill\Box$
\par The background knowledge is the foundation of the proposed method. In the proposed method, the background knowledge is a set of known records, although it is captured by a prior probability distribution in some papers such as \cite{25}. It is easier to obtain some records of a target data set than to obtain a prior probability distribution of the target data set. For example, with respect to the scores of students, it is easier to collect some students' scores than to obtain a prior probability distribution of students' scores.
\par The number of records in $D_{know}$ is a way to quantify the amount of background knowledge of attackers.
\par \textbf{Theorem 2}: The number of records in $D_{know}$ needs to be at least $m$.
\par \textbf{Proof}:
\par As shown before, each subset of $D_{know}$ is used to generate one i.i.d. sample of the target query's answer. The number of i.i.d. samples is $m$. Therefore, the number of records in $D_{know}$ should guarantee that the number of usable subsets of $D_{know}$ is equal to or greater than $m$.
\par In the proposed method, all used subsets are disjoint with each other. Therefore, the number of usable subsets is at most equal to the number of records in $D_{know}$, which is attained when each subset contains exactly one record of $D_{know}$.
\par In a word, the number of records in $D_{know}$ needs to be at least $m$.
\par $\hfill\Box$
\par The privacy budget is the key parameter for privacy guarantee in differential privacy mechanisms. The smaller the privacy budget is, the stronger the privacy guarantee is. One good property of differential privacy is that the totally consumed privacy budget can be calculated in a cumulative way.
\par \textbf{Theorem 3}: From attackers' perspective, the totally consumed privacy budget is $\epsilon_t$.
\par \textbf{Proof}
\par For $\forall i$, the consumed privacy budget of $M(q_{s_i},D,\epsilon)$ is $\epsilon = \frac{\epsilon_t}{m}$. By the theorem 1, attackers obtain $m$ answers for the target query from the differential privacy mechanism. According to the Sequential Composition Theorem, the totally consumed privacy budget is
\begin{eqnarray*}
\sum_{i=1}^{m}\epsilon = m\epsilon = \epsilon_t
\end{eqnarray*}
$\hfill\Box$
\par From the attackers' perspective, the totally consumed privacy budget can be unreasonably large if there are enough records in the set of background knowledge. Specifically, if attackers can consume $\epsilon$ privacy budget to obtain one answer of the target query, then attackers can consume $m\epsilon$ privacy budget to obtain $m$ different answers of the target query using the proposed method. So, if there are enough records in the set of background knowledge, attackers can consume an unreasonably large privacy budget for the target query. However, from the differential privacy mechanism's perspective, the totally consumed privacy budget is different.
\par \textbf{Theorem 4}: From the differential privacy mechanism's perspective, when the target record $x$ is not in the target data set, the totally consumed privacy budget is $\epsilon$. When the target record $x$ is in the target data set, the differential privacy mechanism responds to $m$ different queries with $\epsilon$ privacy budget for each response, resulting in $\epsilon_t$ in total.
\par \textbf{Proof}
\par Firstly, when the target record $x$ is not in the target data set $D$, for $\forall\ i$ we have
\begin{eqnarray*}
q_{s_i} = q((D_i \cup \{x\})\cap D) = q(D_i\cap D)=q(D_i)
\end{eqnarray*}
\par The privacy budget consumed by $q_{s_i}$ is denoted by $\epsilon_i$. For all $1\le i \ne j \le m$, we know $D_i \cap D_j = \emptyset$. According to the Parallel Composition Theorem, the totally consumed privacy budget is
\begin{eqnarray*}
\max\limits_{1\le i\le m} \epsilon_i = \epsilon
\end{eqnarray*}
\par Secondly, when the target record $x$ is in the target data set $D$, for $\forall\ i$ we have
\begin{eqnarray*}
q_{s_i} = q((D_i \cup \{x\})\cap D) = q(D_i \cup \{x\})
\end{eqnarray*}
\par For all $1\le i \ne j \le m$, we have
\begin{eqnarray*}
(D_i\cup\{x\}) \cap (D_j\cup\{x\}) = \{x\} \ne \emptyset
\end{eqnarray*}
\par According to the Sequential Composition Theorem, the totally consumed privacy budget is
\begin{eqnarray*}
\sum_{i=1}^{m}\epsilon_i = \sum_{i=1}^{m}\epsilon = m\epsilon = \epsilon_t
\end{eqnarray*}
\par In a word, from the differential privacy mechanism's perspective, it responds to $m$ different queries with $\epsilon$ privacy budget for each response, resulting in $\epsilon_t$ in total. That is, from the differential privacy mechanism's perspective, the target query itself only consumes privacy budget $\epsilon$.
$\hfill\Box$
\par In a word, the totally consumed privacy budget is delicately analyzed from the attackers' perspective and from the differential privacy mechanism's perspective in Theorems 3 and 4, respectively. The attackers can consume an unreasonably large privacy budget while the differential privacy mechanism may not be aware of it. Whether the differential privacy mechanism is aware of the unreasonably large consumed privacy budget depends on whether the target record is in the target data set. That is, the presence or absence of the target record has a key influence on the behavior of the differential privacy mechanism. Thus, it is possible to infer whether the target record is in the target data set by observing the outputs of the differential privacy mechanism.
\section{membership inference attacks method}
\par In order to demonstrate the unexpected information leakage of differential privacy mechanisms due to the linear property, a membership inference attack method against the Laplace mechanism is constructed based on linear queries.
\par In this section, the main content is divided into five subsections. In the first subsection, the membership inference attack model is introduced. In the second subsection, an analysis of the security foundation of the Laplace mechanism is given. In the third subsection, the reasons why the counting query is an appropriate linear query for performing membership inference attacks are discussed. In the fourth subsection, the membership inference attack method is presented. In the last subsection, the success rate of the proposed attack method is analyzed.
\subsection{Membership Inference Attack model}
\par A membership inference attack is to determine whether a data record is in a data set. Specifically, there is a target data set with sensitive information, and a differential privacy mechanism is used to protect the target data set. Attackers have black-box access to the differential privacy mechanism. Given a data record, attackers guess whether the given record is in the target data set. The formal definition of membership inference attacks is as follows:
\par \textbf{Definition 5} (membership inference attacks): Given black-box access to a differential privacy mechanism $M$ and a target record $x$, attackers guess whether the target record is in the target data set $D$.
$$ A(x,M)=\left\{
\begin{array}{rcl}
1 & & {x\in D}\\
0 & & {x\notin D}
\end{array} \right. $$
\par Here, the $A$ represents membership inference attacks method. If the attacks method asserts that the target record is in the target data set, it outputs 1. Otherwise, it outputs 0.
\par There are three assumptions about the black-box access of the differential privacy mechanism.
\par (1) For the same query, the black-box access returns the same answer no matter how many times the query is issued. If attackers could get a new answer for the same query from the black-box access every time they issue the query, then attackers could obtain multiple i.i.d. samples of the target query's answer trivially. With multiple i.i.d. samples, it is trivial for attackers to infer sensitive information of records. Thus, this assumption is reasonable.
\par (2) For every different query, the black-box access returns an answer. The second assumption guarantees the utility of differential privacy mechanisms. If the black-box access refuses to answer queries which are issued first time, the utility of differential privacy mechanisms is damaged, resulting in useless mechanisms.
\par (3) There is a threshold of the totally consumed privacy budget. When the totally consumed privacy budget is greater than the threshold, the black-box access aborts. The consumed privacy budget is accumulative and when totally consumed privacy budget is greater than a certain limitation, leaking information is trivial.
\par The ability of attackers is limited. The black-box access means that attackers issue a query to a differential privacy mechanism and get a result in return. Attackers cannot get any other information from the differential privacy mechanism.
\subsection {Security Analysis of The Laplace Mechanism}
\par The security of the Laplace mechanism is based on perturbation. Differential privacy requires that the presence or absence of one record cannot be identified through the mechanism's outputs. The Laplace mechanism satisfies this requirement by perturbation. Specifically, queries' answers are perturbed by noise. The noise is drawn from a Laplace distribution whose variance is related to two factors. The first factor is the strength of the privacy guarantee. The second factor is the maximum difference of the query's answer caused by the presence or absence of any one record. The maximum difference is captured by the conception of sensitivity, denoted by $\Delta D$. The strength of the privacy guarantee is captured by the conception of the privacy budget, denoted by $\epsilon$. The variance of the noise distribution is related to the ratio of the sensitivity to the privacy budget. Specifically, the variance is $2(\Delta D/\epsilon)^2$.
\par The key idea behind the Laplace mechanism is: If the sensitivity is far less than fluctuation of added noise, it is hard to tell that variation of the Laplace mechanism's outputs happens due to noise distribution's fluctuation or due to presence or absence of a record. This idea can be described intuitively by formula $\Delta D \ll 2(\Delta D/\epsilon)^2$.
\par The goal of covering the sensitivity can be achieved if the privacy budget is appropriately small. That is, for appropriately small privacy budget, the sensitivity could be much less than variance of noise distribution.
\par In a word, for the Laplace mechanism, the sensitivity is covered by fluctuation of noise. So, the security foundation of the Laplace mechanism can be described by formula $\Delta D \ll 2(\Delta D/\epsilon)^2$ intuitively. However, the formula may not hold for some queries, which we will discuss in next subsection.
\par The hypothesis test is an appropriate tool to construct a membership inference attack method against the Laplace mechanism. In the Laplace mechanism, the difference caused by the presence or absence of one record is covered by the fluctuation of noise. However, the hypothesis test method is designed to determine whether variation is due only to random noise or to other causes. In other words, the hypothesis test can be used to determine whether variation of the Laplace mechanism's outputs is due only to the randomness of noise or due to the presence or absence of a record.
\subsection { Counting Query }
\par The counting query is an appropriate query with which to perform membership inference attacks against the Laplace mechanism. Specifically, it is acceptable in most cases for the value of the privacy budget to lie between 0 and 1. Let us check the inequality $\Delta D \ll 2(\Delta D/\epsilon)^2$. For the counting query, when the privacy budget is 1, the sensitivity is 1 and the variance of the noise distribution is 2: the sensitivity is half of the variance. When the privacy budget is 0.5, the sensitivity is 1 and the variance is 8: the sensitivity is one eighth of the variance. In a word, the inequality $\Delta D \ll 2(\Delta D/\epsilon)^2$ may not hold for the counting query when the privacy budget takes values which are generally considered appropriate.
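\par The following small numerical check (illustrative Python) reproduces the comparison between the sensitivity and the noise variance $2(\Delta D/\epsilon)^2$ for several privacy budgets.
\begin{verbatim}
sensitivity = 1.0
for eps in (1.0, 0.5, 0.1):
    variance = 2 * (sensitivity / eps) ** 2
    print(eps, sensitivity, variance)
# eps = 1.0 -> variance 2   (sensitivity is one half of it)
# eps = 0.5 -> variance 8   (sensitivity is one eighth of it)
# eps = 0.1 -> variance 200
\end{verbatim}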
\par There are three other reasons why the target linear query should be the counting query. Firstly, the counting query makes the attack method technically sound. The counting query satisfies the linear property, and the sensitivity of the counting query is always 1. These two properties make it possible to obtain multiple i.i.d. samples of the counting query's answer. In addition, the hypothesis test method works on i.i.d. samples. Thus, the proposed membership inference attack method is technically sound due to these properties of the counting query.
\par Secondly, the proposed attack method is based on the counting query, so it can work in a wide range of settings. The counting query is very common. On the one hand, it is a common need to count the records which meet specific requirements, such as the $sql$ statement ``select count(*) from table student where student.age $\le$ 20''. On the other hand, many complex queries are based on the counting query.
\par Thirdly, the difference of the query's answer caused by the presence or absence of one record is most easily detected by the hypothesis test method when the target query is the counting query. As mentioned before, this difference is smaller than or equal to the sensitivity, and the sensitivity is far smaller than the variance of the noise distribution. Thus, it is hard to detect whether the variation of the Laplace mechanism's outputs is due to the presence or absence of one record. However, when the target query is the counting query, the difference caused by the presence or absence of any record is 1, reaching the value of the sensitivity. So, the counting query makes it easiest to detect whether the variation of the outputs is caused by the presence or absence of one record.
\subsection{Instance Attacks}
\par This subsection presents how to perform membership inference attacks against the Laplace mechanism.
\par Attackers aim for determining whether a target record $x$ is in a target data set $D$. The background knowledge of attackers is denoted by $D_{know}$ which is a subset of target data set $D$. The target linear query is the counting query $q(\{x\}\cup D_{know})$. The $s$ represents the condition such that $D_s= \{x\}\cup D_{know}$. So, the target query can be expressed as $q_s(D)$ and its answer can be expressed as $M(q_s,D,\epsilon)$.
\par The membership inference attack method consists of two subroutines. The first subroutine obtains multiple i.i.d. samples of the target query's answer; it is feasible due to the linear property of the counting query, as proved in the previous section. By the first subroutine, attackers obtain $m$ i.i.d. samples $\hat{a}_1,\hat{a}_2,\dots,\hat{a}_m$ of the target query's answer. When the target record $x$ is in the target data set $D$,
\begin{eqnarray*}
q_s(D) &=& q(D_s\cap D) \\
&=& q((D_{know}\cup\{x\})\cap D)\\
&=& q(D_{know}\cup\{x\})
\end{eqnarray*}
\par Therefore, the mean of the samples is $q(D_{know}\cup\{x\})$. When the target record $x$ is not in the data set $D$,
\begin{eqnarray*}
q_s(D) &=& q(D_s\cap D) \\
&=& q((D_{know}\cup\{x\})\cap D)\\
&=& q(D_{know})
\end{eqnarray*}
\par Therefore, the mean of samples is $q(D_{know})$.
\par Attackers can determine whether the target record is in the target data set by comparing $q(D_{know})$ with the mean of samples. The procedure of comparison is implemented by hypotheses test method in the second subroutine.
\par In the second subroutine, the hypothesis test method is used to determine whether the samples which are obtained in the first subroutine are drawn from a distribution with mean $q(D_{know})$. If the mean of samples is determined to be $q(D_{know})$, the assertion is that the target record is not in the target data set. Otherwise, the assertion is that the target record is in the target data set.
\par In the second subroutine, the key is to find an appropriate hypothesis test method, that is, to find an appropriate statistic. The one-sample t-test is chosen. The $T$ statistic follows the $T$ distribution and is given by
\begin{eqnarray*}
T &=& \frac{\bar{X} - \mu}{ S/\sqrt{m}}
\end{eqnarray*}
\par Here, the $\mu$ is the distribution mean and the $m$ is the number of samples. The $\bar{X}$ is the sample mean and the $S$ is the sample standard deviation. When the samples are denoted by $\hat{a}_1,\hat{a}_2,\dots,\hat{a}_m$, we have
\begin{eqnarray*}
\bar{X} = (\hat{a}_1+\hat{a}_2+\dots+\hat{a}_m)/m
\end{eqnarray*}
\par And,
\begin{eqnarray*}
S=\sqrt{\frac{(\hat{a}_1-\bar{X})^2+(\hat{a}_2-\bar{X})^2+\dots+(\hat{a}_m-\bar{X})^2}{m}} \\
\end{eqnarray*}
\par The one-sample t-test is an appropriate hypothesis test method because it is designed to determine whether samples are drawn from a distribution with a certain mean.
\par The significance level is 0.05. There are two widely accepted choices for the significance level, namely $0.01$ and $0.05$. We desire that, as long as the difference is statistically significant, the alternative hypothesis can be accepted, so we choose $0.05$ as the significance level.
\par The $p$ value can be looked up in a table by the degrees of freedom and the value of the statistic $T$. The degrees of freedom are $m-1$, and the value of the statistic $T$ is calculated as above.
\begin{algorithm}
\caption*{Membership Inference Attacks Method A}
\label{alg:Framwork}
\begin{algorithmic}[1]
\REQUIRE ~~\\
the background knowledge $D_{know}$ of attackers;\\
the target record $x$;\\
the Laplace mechanism $M$;\\
the counting query $q$;\\
total privacy budget $\epsilon_t$;\\
the number of samples $m$;\\
the significance level $\alpha=0.05$;\\
\ENSURE ~~\\
the $x$ is in data set $D$ or not;\\
\STATE construct conditions $s$ such that $D_s = D_{know} \cup \{x\}$;
\STATE let $\epsilon = \frac{\epsilon_t}{m}$;
\FOR{$i=1$ to $m$}
\STATE $\hat{a}_i = M(q_s,D,\epsilon)$ by the first subroutine;
\ENDFOR
\STATE make\ hypothesis: \\
the null hypothesis is $H_0$: $\mu_0 = q(D_{know})$; \\
the alternative hypothesis is $H_1$: $\mu_0 \ne q(D_{know})$; \\
\STATE calculate $\bar{X} = \frac{\hat{a}_1+\hat{a}_2+\dots+\hat{a}_m}{m}$;\\
\STATE calculate $S=\sqrt{\frac{(\hat{a}_1-\bar{X})^2+(\hat{a}_2-\bar{X})^2+\dots+(\hat{a}_m-\bar{X})^2}{m}}$;\\
\STATE calculate $T = \frac{\bar{X} - \mu_0}{\frac{S}{\sqrt{m}}}$;\\
\STATE find $p$ according to $T$ and freedom $m-1$;\\
if $p<\alpha$, the assertion is $x\in$ D; \\
if $p>\alpha$, the assertion is $x\notin D$ \\
\RETURN assertion
\end{algorithmic}
\end{algorithm}
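\par An end-to-end sketch of the attack, combining the sample-collection subroutine with the one-sample t-test, is shown below (Python, assuming numpy and scipy; the Laplace mechanism is simulated locally, the decision threshold is $\alpha=0.05$, and scipy's t-test uses the unbiased sample variance, a minor difference from the statistic defined above).
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def noisy_count(D, allowed, eps):
    # simulated black-box answer of the counting query (sensitivity 1)
    return sum(1 for r in D if r in allowed) + rng.laplace(0.0, 1.0 / eps)

def membership_attack(D, D_know, x, eps_total, m, alpha=0.05):
    eps = eps_total / m
    known = list(D_know)
    samples = []
    for i in range(m):
        D_i = {known[i]}                              # disjoint subsets
        a_i = noisy_count(D, D_i | {x}, eps)          # issued query
        samples.append(a_i + len(set(known) - D_i))   # + q(D_know \ D_i)
    # H0: the sample mean is q(D_know), i.e. x is not in D
    _, p_value = stats.ttest_1samp(samples, popmean=len(D_know))
    return p_value < alpha        # True: assert that x is in D

D = set(range(50))
D_know = set(range(30))
print(membership_attack(D, D_know, x=45,  eps_total=30.0, m=30))  # target in D
print(membership_attack(D, D_know, x=999, eps_total=30.0, m=30))  # target not in D
\end{verbatim}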
\subsection{Success Rate Analysis}
\par The success rate of the proposed attack method is tightly related to two factors: the amount of background knowledge of attackers and the threshold of the privacy budget in the black-box access. According to these two factors, there are four cases faced by attackers, as shown in Fig 2.
\begin{figure}[htbp]
\centering
\includegraphics[width=7.0cm]{./figure/linear_property.png}
\caption*{Fig 2: cases faced by attackers}
\end{figure}
\par Suppose that the threshold of the privacy budget is $\epsilon_l$. When the totally consumed privacy budget is greater than the threshold from the differential privacy mechanism's perspective, the black-box access of the differential privacy mechanism aborts. ``Enough background knowledge'' means that attackers can construct as many queries as needed such that the totally consumed privacy budget becomes greater than the threshold, resulting in the abortion of the black-box access.
\par For case 4, there is enough background knowledge available to attackers; that is, there is a sufficient number of records in $D_{know}$. In addition, the target record $x$ is in the target data set. By Theorem 4, from the differential privacy mechanism's perspective, the totally consumed privacy budget is $m\epsilon$. Because the background knowledge is enough, attackers can make $m$ big enough such that $m\epsilon>\epsilon_l$, resulting in the abortion of the black-box access.
\par For case 3, there is a sufficient number of records in $D_{know}$ and the target record $x$ is not in the target data set. By Theorem 4, from the differential privacy mechanism's perspective, the totally consumed privacy budget is $\epsilon$ no matter how many queries attackers issue, so the black-box access never aborts. By Theorem 3, from the attackers' perspective, the totally consumed privacy budget is $m\epsilon$. Thus, attackers can issue as many queries as needed such that $m\epsilon$ is unreasonably large, so that attackers can trivially detect the absence of the target record by the hypothesis test method.
\par For cases 1 and 2, the background knowledge is not enough. Thus, attackers cannot consume an unreasonably large privacy budget or force the black-box access to abort. In these cases, the success rate is tightly related to the exact amount of background knowledge. For these two cases, we give a delicate analysis of the success rate in Theorem 5.
\par In brief, from the attackers' perspective, they issue as many queries as possible to consume as much privacy budget as possible. Specifically, there are three possible results. Firstly, the black-box access aborts, which means that the totally consumed privacy budget is greater than the threshold and the target record is in the target data set. Secondly, the black-box access does not abort and the totally consumed privacy budget is so large that attackers can detect the absence of the target record. Thirdly, all records of the attackers' background knowledge are used and attackers obtain multiple answers of the target query; attackers then determine whether the target record is in the target data set by the hypothesis test method. The success rate of using the hypothesis test method is analyzed in Theorem 5 next.
\begin{table*}[!htbp]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\diagbox{assertion}{hypothesis}& $H_0$& $H_1$\\
\hline
$H_0$& $P\{accept\ H_0|H_0\ is\ true\} = 1-\alpha = 0.95 $& $P\{accept\ H_0|H_1\ is\ true\} = \delta $\\
\hline
$H_1$& $P\{accept\ H_1|H_0\ is\ true\} = \alpha = 0.05 $ & $P\{accept\ H_1|H_1\ is\ true\} = 1-\delta $\\
\hline
\end{tabular}
\caption*{Table 1: success rate of hypothesis test}
\end{table*}
\par The hypothesis test method has two types of error, shown in Table 1. Based on Table 1, the success rate of the proposed attack method can be calculated. The proof of Theorem 5 is deferred to the appendix.
\par \textbf{Theorem 5} The success rate $R$ of the proposed membership inference attacks method is
\begin{eqnarray*}
R = \frac{1}{2}(1.95-\int_{-T_{(m-1,0.975)}+\frac{(\mu_0-\mu_1)*\sqrt{m}}{S}}^{T_{(m-1,0.975)}+\frac{(\mu_0-\mu_1)*\sqrt{m}}{S}}f(t)dt)\\
\end{eqnarray*}
$\hfill\Box$
\par Here, the $T_{(m-1,0.975)}$ is the value of the $T$ statistic when the degrees of freedom of the $T$ distribution are $m-1$ and the cumulative probability is 0.975. The $f(t)$ is the probability density function of the $T$ distribution with $m-1$ degrees of freedom. The $\mu_0$ is the mean assumed in the null hypothesis. The $\mu_1$ is the true mean of the distribution when the null hypothesis is wrong. We have
\begin{eqnarray*}
\mu_0= q(D_{know}) \ \ and \ \ \mu_1 = q(D_{know}\cup\{x\})
\end{eqnarray*}
\par In the above formula, $\mu_0-\mu_1$ is equal to $-1$ because the query $q$ is the counting query. The $m$ is the number of samples obtained from the Laplace mechanism by black-box access. The $S$ is the standard deviation of the obtained samples.
\par As mentioned before, the standard deviation of the noise distribution is $\sqrt{2}\Delta D/\epsilon$. In addition, for the counting query the sensitivity $\Delta D$ is 1. The standard deviation of the samples can be substituted with the standard deviation of the noise distribution; that is, $S$ can be substituted with $\sqrt{2}\Delta D/\epsilon$. Then, the $R$ is
\begin{eqnarray*}
R \approx \frac{1}{2}(1.95-\int_{-T_{(m-1,0.975)} -\frac{\epsilon\sqrt{m}}{\sqrt{2}}}^{T_{(m-1,0.975)}-\frac{\epsilon\sqrt{m}}{\sqrt{2}}}f(t)dt)\\
\end{eqnarray*}
\par The $\epsilon$ is the privacy budget consumed by each query. According to the above formula, the $R$ increases when the number of obtained samples increases.
\par In terms of the totally consumed privacy budget, the pattern is different. Specifically, the totally consumed privacy budget $\epsilon_t$ is equal to $m\epsilon$. So, the $R$ is
\begin{eqnarray*}
R \approx \frac{1}{2}(1.95-\int_{-T_{(m-1,0.975)} -\epsilon_t/\sqrt{2m}}^{T_{(m-1,0.975)}-\epsilon_t/\sqrt{2m}}f(t)dt)\\
\end{eqnarray*}
\par According to the above formula, the $R$ decreases when the number of obtained samples increases under the condition that the totally consumed privacy budget is predetermined.
\par By Theorem 2, the maximum number of samples is equal to the number of records in $D_{know}$. Let $r$ be the number of records in $D_{know}$. The above two formulas become
\begin{eqnarray*}
R \approx \frac{1}{2}(1.95-\int_{-T_{(r-1,0.975)} -\frac{\epsilon\sqrt{r}}{\sqrt{2}}}^{T_{(r-1,0.975)}-\frac{\epsilon\sqrt{r}}{\sqrt{2}}}f(t)dt)\\
\end{eqnarray*}
\par And
\begin{eqnarray*}
R \approx \frac{1}{2}(1.95-\int_{-T_{(r-1,0.975)} -\epsilon_t/\sqrt{2r}}^{T_{(r-1,0.975)}-\epsilon_t/\sqrt{2r}}f(t)dt)\\
\end{eqnarray*}
\par In the above analyses, the consumed privacy budget is calculated from the attackers' perspective. However, there are two cases if the consumed privacy budget is calculated from the differential privacy mechanism's perspective. Firstly, if the target record is in the target data set, the consumed privacy budget is the same as that from the attackers' perspective. Secondly, if the target record is not in the target data set, the consumed privacy budget is $\epsilon$, which is the amount of privacy budget consumed by one query.
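\par The success-rate formula of Theorem 5 can also be evaluated numerically. The following sketch (Python, assuming scipy; it uses $m-1$ degrees of freedom as in the exact formula above) computes $R$ for a few sample sizes.
\begin{verbatim}
import numpy as np
from scipy import stats

def success_rate(m, eps):
    # R for m samples, each answered with privacy budget eps
    df = m - 1
    t_crit = stats.t.ppf(0.975, df)
    # shift = -(mu_0 - mu_1)*sqrt(m)/S with mu_0 - mu_1 = -1, S = sqrt(2)/eps
    shift = eps * np.sqrt(m) / np.sqrt(2.0)
    integral = stats.t.cdf(t_crit - shift, df) - stats.t.cdf(-t_crit - shift, df)
    return 0.5 * (1.95 - integral)

for m in (5, 10, 20):
    print(m, round(success_rate(m, eps=1.0), 3))
\end{verbatim}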
\section{experiment}
\par In this section, the detailed information of extensive experiments is given. The goal is to evaluate the success rate of proposed membership inference attacks method. As analyzed before, for the case 3 and 4, attackers can determine that the target record is in the target data set or not with probability 1. In this section, we evaluate the success rate for the case 1 and 2 where the background knowledge of attackers are not enough.
\par For the case 1 and 2, the success rate is related to two factors, namely consumed privacy budget and the number of obtained samples. In specific, the privacy budget is the key parameter to control the amount of information released by differential privacy mechanisms. The number of samples has important influence on performance of hypothesis test method. The intuition is consistent with the theoretical analysis in theorem 5. So we pay our attention to the two factors and investigate their influence on the success rate. All experiments are based on the counting query.
\par A privacy budget of 10 would be unreasonably large in real application scenarios. However, the key contribution of this paper is that, for delicately designed queries, the total consumed privacy budget differs between the attacker's perspective and the differential privacy mechanism's perspective. As analyzed before, attackers can consume an unreasonably large privacy budget while the differential privacy mechanism is unaware of it. This is why we choose a large value of the total consumed privacy budget when verifying the success rate of the proposed attack.
\par The number of samples ranges from 4 to 29. When the number of samples is less than 4, the success rate is not satisfactory: the sample is so small that the sample standard deviation has a large error. For example, with only one sample the sample standard deviation is 0, which produces a huge error. Experimentally, with fewer than 4 samples the results fluctuate strongly, and the experiments would have to be repeated a very large number of times to exhibit clear patterns.
\begin{figure*}[htbp]
\centering
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=5.7cm]{./figure/constant_total_privacy_budget/0_to_1/theoretical_result300.png}
\caption*{(a)theoretical results\\ privacy budget is from 0.1 to 1 }
\end{minipage}
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=5.7cm]{./figure/constant_total_privacy_budget/0_to_1/300.png}
\caption*{(b)experiment results\\ privacy budget is from 0.1 to 1}
\end{minipage}
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=5.7cm]{./figure/constant_total_privacy_budget/0_to_10/theoretical_result240.png}
\caption*{(c)theoretical results\\ privacy budget is from 0.1 to 10}
\end{minipage}
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=5.7cm]{./figure/constant_total_privacy_budget/0_to_10/240.png}
\caption*{(d)experiment results\\ privacy budget is from 0.1 to 10}
\end{minipage}
\caption*{Fig 3: the total consumed privacy budget is held constant}
\end{figure*}
\begin{figure*}[htbp]
\centering
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=5.7cm]{./figure/total_privacy_budget_by_multiple_times/0_to_1/theoretical_result220.png}
\caption*{(a)theoretical results\\ privacy budget is from 0.01 to 0.1}
\end{minipage}
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=5.7cm]{./figure/total_privacy_budget_by_multiple_times/0_to_1/220.png}
\caption*{(b)experiment results\\ privacy budget is from 0.01 to 0.1}
\end{minipage}
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=5.7cm]{./figure/total_privacy_budget_by_multiple_times/0_to_10/theoretical_result220.png}
\caption*{(c)theoretical results\\ privacy budget is from 0.01 to 0.33}
\end{minipage}
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=5.7cm]{./figure/total_privacy_budget_by_multiple_times/0_to_10/220.png}
\caption*{(d)experiment results\\ privacy budget is from 0.01 to 0.33}
\end{minipage}
\caption*{Fig 4: the privacy budget consumed by each query is held constant}
\end{figure*}
\par In the first experiment, the total consumed privacy budget is held constant in order to verify the theoretical analysis of the success rate. The privacy budget ranges from 0.1 to 1 and from 0.1 to 10, respectively, and the number of samples ranges from 4 to 29. We repeat the experiments and show the averaged results in Fig 3.
\par In Fig 3, the privacy budget is the total privacy budget consumed by all samples. For instance, if the privacy budget is 1 and the number of samples is 10, the privacy budget consumed per sample is 0.1.
\par The results in Fig 3 are consistent with the theoretical analysis. The success rate increases as the total consumed privacy budget increases: a larger total budget means a larger budget allocated to each sample, so each sample is perturbed with smaller noise and the hypothesis test performs better. When the total consumed privacy budget is held constant, the success rate decreases as the number of samples increases, because more samples means a smaller budget per sample and hence larger noise on each sample. For example, if the total consumed privacy budget is 1, then with 5 samples the budget per sample is 0.2, whereas with 10 samples it is 0.1.
\par In the second experiment, the privacy budget of each sample is held constant in order to verify the theoretical analysis of the success rate. As in the first experiment, the number of samples ranges from 4 to 29, and the privacy budget ranges from 0.01 to 0.1 and from 0.01 to 0.33, respectively. We repeat the experiments and show the averaged results in Fig 4.
\par In Fig 4, the privacy budget is the privacy budget consumed by each sample. For instance, if the privacy budget is 0.2 and the number of samples is 10, the total consumed privacy budget is 2.
\par The results in Fig 4 are consistent with the theoretical analysis. The success rate increases as the privacy budget of each sample increases. When the privacy budget is smaller, the added noise is larger, and it is hard to tell whether the variation of the Laplace mechanism's outputs is due to the presence or absence of one record or to the fluctuation of the added noise, so the success rate is low. When the privacy budget is larger, the added noise is smaller, and it is easier to distinguish the variation caused by the presence or absence of one record from the fluctuation of the added noise.
\par The success rate also increases as the number of samples increases. Intuitively, when the privacy budget of each sample is constant, each sample carries roughly the same amount of information, so as the number of samples increases, the amount of information released by the differential privacy mechanism increases, and the success rate increases accordingly.
\par In all the above experiments, the total consumed privacy budget is calculated from the attacker's perspective. Since the proposed method is an attack, it is meaningful to compute the total consumed privacy budget from that perspective.
\par From the differential privacy mechanism's perspective, if the total consumed privacy budget exceeds the threshold of the privacy budget, black-box access is aborted, which itself tells the attacker that the target record is present in the target data set. If the total consumed privacy budget does not exceed the threshold, the attacker can keep consuming privacy budget until the success rate is acceptable or all records in $D_{know}$ have been used.
\section{Conclusion}
\par In this paper, we find that differential privacy does not take the linear property of queries into account, resulting in unexpected information leakage. Specifically, due to the linear property, the total consumed privacy budget of delicately designed queries may differ between the attacker's perspective and the differential privacy mechanism's perspective, and this difference leads to unexpected information leakage. In addition, we show how attackers can consume extra privacy budget by exploiting their background knowledge, contrary to the common belief that the amount of background knowledge has almost no influence on the privacy guarantee of differential privacy.
\par To demonstrate the unexpected information leakage, we present a membership inference attack against the Laplace mechanism. Based on the linear property of queries, we propose a method to obtain multiple i.i.d. samples of a linear query's answer; a hypothesis test on these samples then determines whether a target record is in the target data set. Using the counting query, we perform extensive experiments to verify the proposed attack, and the experimental results show that it works well.
\bibliographystyle{plain}
\section{Introduction}
\subsection{Background}
Motivated by the work of Beresnevich, Haynes and Velani \cite{BHV}, in this paper, we study metric multiplicative Diophantine approximation. We mainly use an elementary method introduced in \cite{Yu2}. We will focus on dimension two, but our strategy also works for higher-dimensional cases. Let $\gamma,\beta,\gamma'$ be three real numbers and $\psi:\mathbb{N}\to \mathbb{R}^+$ be a function (approximation function). We will be interested in the following set,
\[
W(\psi,\beta,\gamma,\gamma')=\{x\in [0,1]: \|qx-\gamma\|\|q\beta-\gamma'\|<\psi(q)\text{ infinitely often}\},
\]
where $\|x\|$ for $x\in\mathbb{R}$ is the distance between $x$ and the integer set $\mathbb{Z}.$ More specifically, we want to know under which conditions (on $\psi,\beta,\gamma,\gamma'$) the set $W(\psi,\beta,\gamma,\gamma')$ has positive or even full Lebesgue measure. This is a challenging problem in the study of metric Diophantine approximation. Recently, there have been many breakthroughs in this direction, and we will briefly introduce them. The study of $W(\psi,\beta,\gamma,\gamma')$ is closely related to, if not contained in, the study of inhomogeneous Diophantine approximation, where the central object to study is
\[
W(\psi,\gamma)=\{x\in [0,1]: \|qx-\gamma\|<\psi(q)\text{ infinitely often}\}.
\]
If we define
\[
\psi'(q)=\frac{\psi(q)}{\|q\beta-\gamma'\|},
\]
then $W(\psi,\beta,\gamma,\gamma')=W(\psi',\gamma).$ If we put $\gamma=0$, then the study of $W(\psi,0)$ is the classical metric Diophantine approximation. It is almost impossible to avoid mentioning the following result,
\begin{theorem*}[Duffin-Schaeffer conjecture/Koukoulopoulos-Maynard theorem, \cite{DS}, \cite{KM2019}]
Let $\psi$ be an approximation function. If $\sum_q \psi(q)\phi(q)/q=\infty,$ then $W(\psi,0)$ has full Lebesgue measure. Otherwise, if $\sum_q \psi(q)\phi(q)/q<\infty,$ then for Lebesgue almost all $x$ there are only finitely many coprime pairs $(p,q)$ such that $|x-p/q|<\psi(q)/q$. Here, $\phi(.)$ is the Euler phi function.
\end{theorem*}
From the above result, the Lebesgue measure of $W(\psi,\beta,0,\gamma')$ can be understood very well. However, when $\gamma\neq 0,$ much less is known.
\begin{theorem LL*}\footnote{This list is not complete.}
\begin{itemize}
\item {Sz\"{u}sz's theorem \cite{Szusz}}: If $\psi$ is non-increasing and $\sum_{q=1}^{\infty} \psi(q)=\infty$ then $W(\psi,\gamma)$ has full Lebesgue measure for all real number $\gamma.$
\item {Ram\'{i}rez's examples \cite{Ramirez}}: Without the monotonicity of the approximation function $\psi$, the condition $\sum_{q=1}^{\infty} \psi(q)=\infty$ alone cannot ensure that $W(\psi,\gamma)$ has positive Lebesgue measure.
\item {Extra divergence \cite{Yu}}: For each $\epsilon>0,$ if $\sum^{\infty}_{q=1} q^{-\epsilon} \psi(q)=\infty,$ then for every real number $\gamma,$ $W(\psi, \gamma)$ has full Lebesgue measure.
\item {Erd\H{o}s-Vaaler type \cite{Yu2}}: Let $\psi$ be an approximation function with $\psi(q)=O(q^{-1}(\log\log q)^{-2})$ and $\sum_{q=1}^{\infty} \psi(q)=\infty.$ Then for each non-Liouville $\gamma$ we have $|W(\psi,\gamma)|=1.$
\end{itemize}
\end{theorem LL*}
It is very difficult to use the above results to study $W(\psi,\beta,\gamma,\gamma')$ via $W(\psi',\gamma).$ This is mainly because the extra conditions (monotonicity, upper bound, extra divergence) are difficult to verify for approximation functions of the form $\psi(q)/\|q\beta-\gamma'\|.$
\begin{theorem LLL*}\footnote{This list only contains results before the year 2019 and it is not complete.}
\begin{itemize}
\item {Gallagher,\cite{Gallagher62}:} Let $\psi$ be a non-increasing approximation function with $\sum_{q=1}^{\infty} \psi(q)\log q=\infty.$ For Lebesgue almost all $(x,y)\in [0,1]^2$, there are infinitely many integers $q$ with
\[
\|qx\|\|qy\|\le\psi(q).
\]
\item {Beresnevich-Haynes-Velani, \cite[Theorem 2.3]{BHV}:} Let $\psi$ be a non-increasing approximation function with $\sum_{q=1}^{\infty} \psi(q)\log q=\infty.$ Then for all real number $\gamma,$ Lebesgue almost all $(x,y)\in [0,1]^2,$ there are infinitely many integers $q$ with
\[
\|qx-\gamma\|\|qy\|\le \psi(q).
\]
\item{Chow \cite{C18}, Koukoulopoulos-Maynard \cite{KM2019}:} Let $\psi$ be a non-increasing approximation function with $\sum_{q=1}^{\infty} \psi(q)\log q=\infty.$ Then for all non-Liouville number $\beta$ and all real number $\gamma',$ the set $W(\psi,\beta,0,\gamma')$ has full Lebesgue measure.
\end{itemize}
\end{theorem LLL*}
The central problem in this area is the following conjecture by Beresnevich, Haynes and Velani \cite[Conjecture 2.1]{BHV}, which has now been proved by Chow and Technau in \cite{CT}.
\begin{conjecture}\label{BHVC}
Let $\psi$ be a non-increasing approximation function with $\sum_{q=1}^{\infty} \psi(q)\log q=\infty.$ Then for all real numbers $\gamma, \gamma'$ Lebesgue almost all $(x,y)\in [0,1]^2,$ there are infinitely many integers $q$ with
\[
\|qx-\gamma\|\|qy-\gamma'\|\le \psi(q).
\]
\end{conjecture}
The proof in \cite{CT} relies on a fine analysis of the arithmetic and additive structures of Bohr sets. In this paper, we shall provide a different (and more elementary) proof for the following slightly weaker result.
\begin{corollary}[Corollary of Theorem \ref{MAIN MONNTON}]\label{Coro2}
Let $\psi$ be a non-increasing approximation function with $\sum_{q=1}^{\infty} \psi(q)\log q=\infty.$ Then for all non-Liouville number $\gamma$, real number $\gamma'$ and Lebesgue almost all $(x,y)\in [0,1]^2,$ there are infinitely many integers $q$ with
\[
\|qx-\gamma\|\|qy-\gamma'\|\le \psi(q).
\]
\end{corollary}
The extra non-Liouville condition is not necessary because of the results in \cite{CT}. However, under this condition, one may obtain a stronger result. To be more concrete and precise, we state the following conjecture.
\begin{conjecture}\label{con2}
Let $\psi$ be an approximation function with $\sum_{q=1}^{\infty} \psi(q)\log q=\infty$ and $\psi(q)=O(q^{-1}(\log q)^{-2}).$ Then for all non-Liouville number $\gamma$ and real number $\gamma'$ Lebesgue almost all $(x,y)\in [0,1]^2,$ there are infinitely many integers $q$ with
\[
\|qx-\gamma\|\|qy-\gamma'\|\le \psi(q).
\]
\end{conjecture}
We will prove (in Theorem \ref{MAIN}) a fibred version of Conjecture \ref{con2} under a slightly stronger requirement on $\psi,$ i.e. $\psi(q)=O(q^{-1}(\log q)^{-2}(\log\log q)^{-2}).$ Unlike the situation with monotonic $\psi,$ the unfibred statement is no longer a direct consequence of the fibred version and Conjecture \ref{con2} is still open even with this stronger requirement on $\psi.$
It is interesting to continue in this direction and try to obtain a sharp result for multiplicative Diophantine approximation. For example, what would be the Duffin-Schaeffer conjecture in the multiplicative setting?
\subsection{Results in this paper}
Before we state the main results, we mention that for monotonic approximation functions, the optimal results are proved in \cite{CT}. Our results are weaker in the sense that we need to require stronger Diophantine conditions for some parameters. For example, Corollary \ref{Coro2} requires one of the shift parameters $\gamma,\gamma'$ to be not Liouville. Our method also extends to deal with non-monotonic approximation functions. In this situation, we believe that some of the stronger Diophantine conditions are in fact necessary.
\subsubsection{Diophantine parameters}
Our method works very well when some Diophantine conditions are presented. Before we state the results, some standard terminologies are needed.
\begin{defn}
We say that a real number $\gamma$ is Diophantine if there are positive numbers $\rho,c$ such that $\|n\gamma\|\ge c n^{-\rho}$ for all $n\ge 1.$ The infimum of all such possible $\rho$ (we allow $c$ to change) is referred to as the Diophantine exponent of $\gamma.$ More generally, let $\gamma_1,\dots,\gamma_r$ be $r\ge 2$ real numbers which are $\mathbb{Q}$-linearly independent. We say that $(\gamma_1,\dots,\gamma_r)$ is a Diophantine $r$-tuple if there are positive numbers $\rho,c$ such that
\[
\|n_1\gamma_1+\dots+n_r\gamma_r\|\ge c(\max\{|n_1|,\dots,|n_r|\})^{-\rho}
\]
for all $n_1,\dots,n_r\in \mathbb{Z}.$ The infimum of all such possible $\rho$ is referred to as the Diophantine exponent\footnote{In some literatures, this is called the dual exponent.} of $(\gamma_1,\dots,\gamma_r).$
\end{defn}
In general, the Diophantine exponent of $r$ $\mathbb{Q}$-linearly independent numbers is at least $r.$ It is possible to see that if $(\gamma_1,\dots,\gamma_r)$ is Diophantine then $(M_1\gamma_1,\dots,M_r\gamma_r)$ is Diophantine for all non-zero integers $M_1,\dots,M_r.$ Furthermore, their Diophantine exponents are equal, although the associated constants might be different. Examples of Diophantine tuples include tuples of $\mathbb{Q}$-linearly independent algebraic numbers, for example $(\sqrt{2},\sqrt{3}).$ Other examples include tuples of natural logarithms of algebraic numbers, for example $(\ln 2,\ln 3).$
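\par For a quick numerical feel for such pairs, the following short script (illustrative only, and of course not a proof) estimates how small $\|n_1\sqrt{2}+n_2\sqrt{3}\|$ can be for $\max\{|n_1|,|n_2|\}\le N$, which gives a crude empirical lower bound on the Diophantine exponent of $(\sqrt{2},\sqrt{3})$.
\begin{verbatim}
# Crude numerical illustration (not a proof): the smallest value of
# ||n1*sqrt(2) + n2*sqrt(3)|| over max(|n1|,|n2|) <= N, and the exponent rho
# it suggests via ||n1*a + n2*b|| ~ max(|n1|,|n2|)^(-rho).
import math

def worst_case(a, b, N):
    best = (1.0, 1, 1)
    for n1 in range(-N, N + 1):
        for n2 in range(-N, N + 1):
            if n1 == 0 and n2 == 0:
                continue
            x = n1 * a + n2 * b
            d = abs(x - round(x))        # distance to the nearest integer
            if d < best[0]:
                best = (d, n1, n2)
    return best

for N in (10, 50, 200):
    d, n1, n2 = worst_case(math.sqrt(2), math.sqrt(3), N)
    rho = -math.log(d) / math.log(max(abs(n1), abs(n2)))
    print(N, d, round(rho, 2))
\end{verbatim}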
We first provide the following Erd\H{o}s-Vaaler type result.
\begin{theorem}[Main Theorem I]\label{MAIN}
Let $\psi(q)=O((q\log q(\log\log q)^2)^{-1})$ be an approximation function. Let $\gamma,\beta$ be irrational numbers such that $(\gamma,\beta)$ is Diophantine. Let $\gamma'$ be a real number. Then for all small enough $\omega>0$ (in a manner that depends on $(\gamma,\beta),\gamma$ and $\beta$), if the following condition holds,
\begin{align}
\sum_{\substack{q\in \mathbb{N}\\\|q\beta-\gamma'\|\in [q^{-\omega},1)}}\frac{\psi(q)}{\|q\beta-\gamma'\|}=\infty,\label{Div}
\end{align}
then $|W(\psi,\beta,\gamma,\gamma')|=1.$
\end{theorem}
\begin{remark}
The power $2$ in $q\log q (\log\log q)^2$ is by no means the optimal value for our method; it can very likely be improved to $1$ or even $0.$ The value $2$ comes from a rather crude bound (\ref{II'1}) in the proof of this theorem. See Section \ref{proofofmain} for more details.
\end{remark}
\begin{remark}\label{Conv}
The divergence condition (\ref{Div}) in the above results looks rather technical. A way of viewing it is that after fixing $\gamma,\beta$ and $\gamma'$ we can choose a small enough $\omega>0$ and consider the set of integers $q$ with $\|q\beta-\gamma'\|\in [q^{-\omega},1].$ Restricted to this set of integers, we can design approximation functions $\psi$ freely subject to an upper bound condition and a divergence condition (\ref{Div}). In fact, under the condition that $\psi$ is supported on the set where $\|q\beta-\gamma'\|\in [q^{-\omega},1],$ the condition (\ref{Div}) is also necessary.
\end{remark}
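\par To see condition (\ref{Div}) in action, one can track its partial sums numerically. The sketch below is purely illustrative: the choices of $\psi$, $\beta$, $\gamma'$ and $\omega$ are ours, natural logarithms are used for convenience, and a slowly growing partial sum is of course no proof of divergence.
\begin{verbatim}
# Illustrative check of the restricted divergence condition (Div): partial sums
# of psi(q)/||q*beta - gamma'|| over q with ||q*beta - gamma'|| >= q^(-omega).
# All parameter choices here are ours; natural logarithms are used.
import math

beta, gamma_p, omega = math.sqrt(3), 0.25, 0.1
dist = lambda x: abs(x - round(x))            # ||x||
psi = lambda q: 1.0 / (q * math.log(q) ** 2)  # an admissible choice of psi

partial = 0.0
for q in range(2, 10**6 + 1):
    d = dist(q * beta - gamma_p)
    if d >= q ** (-omega):
        partial += psi(q) / d
    if q in (10**3, 10**4, 10**5, 10**6):
        print(q, round(partial, 3))
\end{verbatim}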
If $\psi$ is monotonic, then we can prove the following fibred Chow-Technau's theorem with Diophantine parameters.
\begin{theorem}[Chow-Technau's theorem for Diophantine parameters]\label{MAIN MONNTON}
Let $\psi(q)$ be a non-increasing approximation function. Let $\gamma,\beta$ be irrational numbers such that $(\gamma,\beta)$ is Diophantine. Let $\gamma'$ be a real number. If $\sum_{q=1}^{\infty} \psi(q)\log q=\infty,$ then
\[
|W(\psi,\beta,\gamma,\gamma')|=1.
\]
\end{theorem}
\begin{remark}
This is a weaker version of a result recently proved by Chow and Technau. In fact, Chow-Technau's theorem (\cite[Corollary 1.10]{CT} with $k=2$) says that the conclusion holds assuming only that $\beta,\gamma$ are irrational and $\beta$ is non-Liouville. Theorem \ref{MAIN MONNTON} holds, for example, when $(\gamma,\beta)=(\sqrt{2},\sqrt{3})$ or $(\sqrt{3},\log 2).$ However, Chow-Technau's result can be used, for example, when $(\gamma,\beta)=(\pi,e),$ in which case one cannot use Theorem \ref{MAIN MONNTON} directly, as it is not known whether $(\pi,e)$ is Diophantine.
\end{remark}
\subsubsection{General parameters}
Our method can also be used to consider the case when $(\gamma,\beta)$ is not assumed to be Diophantine. Let $\sigma(.)$ be a function taking integer variables and positive values so that $\sigma(N)$ is the best Diophantine exponent for $(\gamma,\beta)$ up to height $N.$ That is to say, $\sigma(N)$ is the infimum of all numbers $\sigma>0$ such that for all $-N\le k_1,k_2\le N$ with $k_1,k_2$ not both zero,
\[
\|k_1\gamma+k_2\beta\|\ge \max\{|k_1|,|k_2|\}^{-\sigma}.
\]
In particular, if $(\gamma,\beta)$ is Diophantine, then $\sigma(.)$ is a bounded function. In general, $\sigma(.)$ is a non-decreasing function. When there is possible confusion, we write $\sigma_{(\gamma,\beta)}$ to indicate the $\sigma$ function associated with the pair $(\gamma,\beta).$ Similarly, one can define the $\sigma$ function for real numbers. For example, $\sigma_\gamma(.)$ is defined just as $\sigma_{(\gamma,\beta)}$ but with $k_2=0$ throughout. Thus if $\sigma_\gamma$ is unbounded then $\gamma$ is Liouville. We do not exclude rational numbers. If $\gamma$ is rational, then it is not Diophantine in this sense, and $\sigma_\gamma$ is not a well-defined function as it attains $\infty.$
\begin{theorem}[Main Theorem II]\label{MAIN2}
Let $\psi(q)=O((q\log q (\log\log q)^{3})^{-1})$ be an approximation function. Let $\gamma,\beta$ be irrational numbers such that $\sigma_{(\gamma,\beta)}(q)=O((\log\log\log q)^{1/2}).$ Let $\epsilon>0.$ Then there is a small number $c>0$ such that the following holds.
Let $\omega:\mathbb{N}\to \mathbb{R}$ be the function
\[
\omega(q)=
\begin{cases}
1 & \log\log q\le 1\\
c/(\log\log\log q)^{1/2} & \log\log q>1
\end{cases}.
\]
Let $\gamma'$ be a real number. If
\begin{align*}
\sum_{\substack{q\in\mathbb{N}\\\|q\beta-\gamma'\|\in [q^{-\omega(q)},1)}}\frac{\psi(q)}{\|q\beta-\gamma'\|}=\infty,
\end{align*}
then $|W(\psi,\beta,\gamma,\gamma')|=1.$
\end{theorem}
Again, if $\psi$ is monotonic, then it is possible to prove a result with cleaner conditions.
\begin{theorem}\label{MAIN MONO2}
Suppose that $\psi$ is a monotonic approximation function such that
\[
\sum_{q=1}^{\infty} \psi(q)\frac{\log q}{(\log\log q)^{1/2}}=\infty.
\]
Let $\gamma,\beta$ be irrational numbers such that $\sigma_{(\gamma,\beta)}(q)=O((\log\log q)^{1/2}).$ Let $\gamma'$ be a real number. Then $|W(\psi,\beta,\gamma,\gamma')|=1.$
\end{theorem}
For example the approximation function $\psi(q)=1/(q(\log q)^2 (\log\log q)^{1/2})$ satisfies the above condition. We remark that it is not possible to completely drop the Diophantine condition for $(\gamma,\beta).$ In fact, \cite[Theorem 1.14]{CT} shows that under the hypothesis (for $\psi$) in Theorem \ref{MAIN MONO2}, there exist infinitely many choices of $\gamma,\beta,\gamma'$ such that $|W(\psi,\beta,\gamma,\gamma')|=0.$ We believe that the joint Diophantine condition for $(\gamma,\beta)$ can be reduced to a Diophantine condition for $\beta$ only.
\begin{conjecture}
Suppose that $\psi$ is a monotonic approximation function such that
\[
\sum_{q=1}^{\infty} \psi(q)\frac{\log q}{(\log\log q)^{1/2}}=\infty.
\]
Let $\beta$ be an irrational number such that $\sigma_{\beta}(q)=O ((\log\log q)^{1/2}).$ Let $\gamma,\gamma'$ be real numbers. Then $|W(\psi,\beta,\gamma,\gamma')|=1.$
\end{conjecture}
\section{Notation}\label{notation}
\begin{itemize}
\item $d(.)$ is the divisor function. We will not use any other 'reserved' arithmetic functions. For example, $\omega$ in this paper is NOT the distinct prime factors function.
\item $A^{\psi,\gamma}_q$:
Let $\psi$ be an approximation function and $\gamma$ be a real number. For each integer $q\ge 1,$ we use $A^{\psi,\gamma}_q$ to denote the set
\[
A^{\psi,\gamma}_q=\{x\in [0,1]: \|qx-\gamma\|<\psi(q)\}.
\]
We can assume that $\psi(q)<1/2$ for all $q\ge 1.$ In fact, if there are infinitely many $q$ with $\psi(q)\ge 1/2,$ then $W(\psi,\gamma)$ would be the whole unit interval. If $\gamma$ and $\psi$ are clear from the context, we will write $A_q$ instead of $A^{\psi,\gamma}_q.$
\item $\chi_A$: The indicator function of a set $A.$
\item $B(x,r)$: Metric ball centred at $x$ with radius $r,$ where $r>0$ and $x$ belongs to a metric space.
\item $I_r=\chi_{B(0,r)}.$ Namely, $I_r:\mathbb{R}\to\{0,1\}$ is such that $I_r(x)=1$ if and only if $x\in [-r,r].$
\item $\Delta_{\psi}(q,q')$: The value $q\psi(q')+q'\psi(q)$, where $\psi$ is a given approximation function and $q,q'$ are positive integers. When $\psi$ is clear from the context, we write it as $\Delta(q,q').$
\item $\|x\|$: The distance of a real number $x$ to the set of integers.
\item $\{x\}$: The unique number in $(-1/2,1/2]$ with $\{x\}-x$ being an integer.
\item $\log $: Base $2$ logarithmic function.
\item $\mathcal{I}_M$: The collection of intervals $I\subset [0,1]$ of length $1/M$ and with endpoints in $M^{-1}\mathbb{N},$ where $M\ge 1$ is an integer.
\item $|A|$: The Lebesgue measure of $A\subset \mathbb{R}$ where $A$ is a Lebesgue measurable set.
\item Conditioned sums: Let $A$ be a countable set of positive numbers. It is standard to use $\sum_{x\in A} x$ for the sum of elements in $A$ since the order of summation does not matter. Sometimes, $A$ can be described by some property, say $P.$ By saying that under the condition $P$
\[
\sum_{x\in (0,\infty)}x,
\]
we actually mean $\sum_{x\in A} x.$ This has an advantage when $P$ is a complicated property, and it would be too long to appear under the summations symbol. More generally, let $B$ be a set of positive numbers, by saying that under the condition $P$
\[
\sum_{x\in B}x,
\]
we mean
\[
\sum_{\substack{x\in B\\ x\text{ satisfies } P}}x.
\]
\item Asymptotic symbols: For two functions $f,g:\mathbb{N}\to (0,\infty)$ we use $f=O(g)$ to mean that there is a constant $C>0$ with
\[
f(q)\le Cg(q)
\]
for all $q\ge 1.$
We use $f=o(g)$ to mean that
\[
\lim_{q\to\infty} \frac{f(q)}{g(q)}=0.
\]
For convenience, we also use $O(g), o(g)$ to denote an auxiliary function $f$ with the property that $f=O(g)$, $o(g)$ respectively. The precise form of the function $f$ changes across the contexts, and it can always be explicitly written down.
\end{itemize}
\section{Preliminaries}
There are several standard results that will be needed in the proofs of the main results. The first one is the Borel-Cantelli lemma. The following version can be found in \cite[Proposition 2]{BDV ref}.
\begin{lemma}[Borel-Cantelli]\label{Borel}
Let $(\Omega, \mathcal{A}, m)$ be a probability space and let $E_1, E_2, \ldots \in \mathcal{A}$ be a sequence of events in $\Omega$ such that $\sum_{n=1}^{\infty}{m(E_n)} = \infty$. Then
\[m(\limsup_{n \to \infty}{E_n}) \ge \limsup_{Q \to \infty}{\frac{\left(\sum_{s=1}^{Q}{m(E_s)}\right)^2}{\sum_{s,t=1}^{Q}{m(E_s \cap E_t)}}}.\]
If $\sum_{n=1}^{\infty}{m(E_n)} < \infty,$ then $m(\limsup_{n \to \infty}{E_n})=0.$
\end{lemma}
We also need the following discrepancy result, \cite[Section 1.4.2]{DT97}. Let $k\ge 1$ be an integer. Let $(a_1,\dots,a_k)\in \mathbb{T}^k$ be a $\mathbb{Q}$-linearly independent vector. Let $I\subset\mathbb{T}^k$ be a box (a Cartesian product of intervals). We wish to count
\[
S_I(Q,a_1,\dots,a_k)=\#\{q\in \{1,\dots,Q\}: q(a_1,\dots,a_k)\in I\}.
\]
We write
\[
S_I(Q,a_1,\dots,a_k)=Q |I|+ E^I_{a_1,\dots,a_k}(Q).
\]
We have the following upper bound for the error term $E^I_{a_1,\dots,a_k}(Q)$
\[
E^I_{a_1,\dots,a_k}(Q)=o_I(Q).
\]
This follows from the ergodicity of irrational rotation $+(a_1,\dots,a_k)$ on $\mathbb{T}^k.$ In some special case, we have a much better understanding of the error term. Suppose that $(a_1,\dots,a_k)$ is Diophantine, for example, when $a_1,\dots,a_k$ are $\mathbb{Q}$-linearly independent algebraic numbers. Then there are a number $\rho\in (0,1)$, a constant $C>0,$ such that for all boxes $I,$ all integers $Q\ge 1,$
\[
|E^I_{a_1,\dots,a_k}(Q)|\le CQ^{\rho}.
\]
The infimum of all such $\rho$ is called the discrepancy exponent of $(a_1,\dots,a_k).$ We have the following result \cite[Theorem 1.80]{DT97}, which relates the discrepancy exponent to the Diophantine exponent.
\begin{lemma}\label{Discrepancy bound}
Let $a_1,\dots,a_k$ be $k\ge 1$ $\mathbb{Q}$-linearly independent numbers. Suppose that the Diophantine exponent of $(a_1,\dots,a_k)$ is $\rho$ (which is at least $k$). Then the discrepancy exponent of $(a_1,\dots,a_k)$ is at most $1-1/\rho.$
\end{lemma}
We use $D_{a_1,\dots,a_k}(Q)$ to denote
\[
\sup_{I}E^I_{a_1,\dots,a_k}(Q)
\]
where the $\sup$ is taken over all boxes $I\subset [0,1]^k.$ Let $M_1,\dots,M_k$ be integers. It can be checked that
\[
D_{M_1a_1,\dots,M_ka_k}(Q)\le M_1\dots M_k D_{a_1,\dots,a_k}(Q).
\]
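\par As a quick sanity check of these discrepancy bounds, one can count visits of an orbit to a fixed interval and compare the count with the expected proportion. The sketch below (illustrative only; the choices of $\alpha$ and the interval are ours) prints the error term $E^I_{\alpha}(Q)$ for a one-dimensional rotation.
\begin{verbatim}
# Illustrative computation of the error term E^I_alpha(Q) for the rotation by
# alpha = sqrt(2) and the interval I = [0, 0.3); the choices of alpha, I are ours.
import math

alpha, a, b = math.sqrt(2), 0.0, 0.3
for Q in (10**2, 10**3, 10**4, 10**5):
    hits = sum(1 for q in range(1, Q + 1) if a <= (q * alpha) % 1.0 < b)
    print(Q, round(hits - Q * (b - a), 3))
\end{verbatim}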
Let $\psi$ be an approximation function and $\gamma$ be a real number. In order to use Lemma \ref{Borel}, we need to estimate the size of intersections $A_q\cap A_{q'}.$ We have the following result from \cite[Lemma 4.1]{Yu2}.
\begin{lemma}\label{master}
Let $H>2$ be an integer. Let $\psi$ be an approximation function and $\gamma$ be an irrational number. For integers $1\le q'<q$ such that $\Delta(q',q)<H\gcd(q,q')$ we have the following estimate
\[
|A_q\cap A_{q'}|\le 2(2H+1)\min\{\psi(q)/q, \psi(q')/q'\} \gcd(q,q') I_{\Delta(q,q')/\gcd(q,q')}(\{\gamma(q'-q)/\gcd(q,q')\}).
\]
Otherwise when $\Delta(q,q')\ge H\gcd(q,q')$, we have
\[
|A_q\cap A_{q'}|\le 4(1+C_0/(2H))\psi(q)\psi(q'),
\]
where $C_0>1$ is an absolute constant.
\end{lemma}
\section{Outline of the proofs}
The proofs of the main theorems are very similar. Here we provide an outline of the proof of Theorem \ref{MAIN}. Let $\beta,\gamma,\gamma'$ be real numbers and $\psi$ be an approximation function. We want to consider the limsup of the sets $\{A_q=A^{\psi',\gamma}_q\}_{q\ge 1}$ where
\[
\psi'(q)=\frac{\psi(q)}{\|q\beta-\gamma'\|}.
\]
We will use the divergence Borel-Cantelli lemma (Lemma \ref{Borel}). For this, we need to study the intersections $A_q\cap A_{q'}$ for different positive integers $q,q'.$ In this direction, Lemma \ref{master} offers us some helpful information. We need to use this lemma to provide some estimates on the sum
\[
\sum_{1\le q'\le q}|A_{q'}\cap A_q|
\]
which will be helpful for the divergence Borel-Cantelli lemma. To estimate the above sum, we see that it is helpful to study the sequence of $0,1$'s,
\[
\{I_{\Delta(q,q')/\gcd(q,q')}(\{\gamma(q'-q)/\gcd(q,q')\})\}_{1\le q'\le q}.
\]
This sequence is essentially driven by the irrational rotation (on the unit interval) with rotating angle $\gamma.$ Due to the uniform ergodicity of irrational rotations, we expect that the above sequence over $\{0,1\}$ is `well distributed'. More precisely, we expect that the appearances of $1$'s are almost periodic with gaps around $1/\{\gamma\}.$ Up to some quantifiable measures (depending on $\gamma$), we can treat this sequence as the constant sequence of $1$'s. With this strategy, we have resolved the inhomogeneous parameter $\gamma.$
Next, we consider the inhomogeneous parameter $\gamma'.$ It is now carried by the modified approximation function $\psi'.$ The factor $\|q\beta-\gamma'\|$ makes $\psi'$ appear somewhat irregular. Luckily, $\psi'$ is not too irregular. The strategy is to flatten the range of $\psi'.$ For example, let $u\in (0,1/2)$ be a number and consider the set $G_u$ of $q$'s with
\[
\|q\beta-\gamma'\|\in [u,2u].
\]
On $G_u$, we can essentially ignore the effect of the factor $\|q\beta-\gamma'\|$. Here, the set $G_u$ is controlled by the irrational rotation with angle $\beta.$ Again, by ergodicity, we expect that $G_u$ is `well distributed' in $\mathbb{N}.$ We can then use this fact and Lemma \ref{master} to obtain good estimates for
\[
\sum_{\substack{1\le q'\le q\\ \|q'\beta-\gamma'\|\in [u,2u]}}|A_{q'}\cap A_q|
\]
with various $u.$ Then we can sum the above estimates for different values of $u$ and obtain a good estimate for
\[\sum_{1\le q'\le q}|A_{q'}\cap A_q|.\]
This is the way we treat the inhomogeneous parameter $\gamma'.$
From here, we see that two irrational rotations (with parameters $\gamma$ and $\beta$) dominate the above arguments. A very crucial point here is that we need to treat them at the same time. This effectively leads to the consideration of the two-dimensional irrational rotation with angle $(\gamma,\beta).$ In order for this to work, we need that the pair $(\gamma,\beta)$ is Diophantine. This condition basically says that the orbit $\{(q\gamma,q\beta)\mod \mathbb{Z}^2\}_{q\ge 1}$ is 'well distributed' over $[0,1]^2.$ With this regularity, we can partially ignore the effects caused by the inhomogeneous parameters $\gamma,\gamma'$ and the result will follow.
\section{Bounding intersections}
The main purpose of this section is to prove several lemmas on sums of measures of intersections. Throughout the rest of this paper, we let $H>2$ be an integer and $C_0$ be as in Lemma \ref{master}. The results of this section are the following Lemmas \ref{Lemma} and \ref{Lemma3}. The proofs are complicated, and the reader can skip them for now and read Section \ref{proofofmain} to see how they are used to prove the main theorems.
In what follows, for each integer $q>1,$ define
\[
F(q)=\sum_{r|q} \frac{\log r}{r}.
\]
Suppose that $\psi$ is an approximation function. Let $\beta,\gamma,\gamma'$ be real numbers and $\omega$ be a positive number. Define
\begin{align}
\psi_{\beta,\gamma',\omega}^{'}(q)=
\begin{cases}
\frac{\psi(q)}{\|\beta q-\gamma'\|} & \|q\beta-\gamma'\|\in [q^{-\omega},1]\\
0 & \text{else}
\end{cases}.\label{psi}
\end{align}
Often, $\beta,\gamma'$ are clear from the context. If so, we write $\psi'_{\omega}$ instead of $\psi'_{\beta,\gamma',\omega}.$ In addition, if $\omega$ is also clear from the context, we simply write $\psi'.$ Here, $\omega$ may not be a constant as $q$ varies. In fact, we will consider the situation when $\omega:\mathbb{N}\to [0,\infty)$ is a function. In this case, we write $\psi'_{\omega(.)}$ for the function
\[
q\to \psi'_{\omega(q)}(q).
\]
\begin{lemma}\label{Lemma}
Suppose that $\psi(q)=O(q^{-1}(\log q)^{-1}(\log\log q)^{-2}).$ Let $\gamma$ be an irrational non-Liouville number, $\beta$ be a real number such that $(\beta,\gamma)$ is Diophantine and $\gamma'$ be a real number. Then there is a positive number $\omega_0$ depending on the Diophantine exponents of $\gamma,\beta, (\beta,\gamma)$ such that for all $\omega\in (0,\omega_0),$ the approximation function $\psi'=\psi'_{\beta,\gamma',\omega}$ satisfies
\[
\sum_{1\le q'<q} |A^{\psi',\gamma}_q\cap A^{\psi',\gamma}_{q'}|=O\left(\psi'(q)+\frac{\psi'(q)}{(\log\log q)^2}F(q)\right)+4(1+C_0/(2H))\psi'(q)\sum_{1\le q'< q} \psi'(q')
\]
where the implicit constant can be made explicit.
\end{lemma}
\begin{remark}\label{Remark}
If we only assume $\psi(q)=O(q^{-1}(\log q)^{-1}),$ then we have the following conclusion
\[
\sum_{1\le q'<q} |A^{\psi',\gamma}_q\cap A^{\psi',\gamma}_{q'}|=O\left(\psi'(q)+\psi'(q)\sum_{r|q} \frac{\log r}{r}\right)+4(1+C_0/(2H))\psi'(q)\sum_{1\le q'< q} \psi'(q').
\]
\end{remark}
\begin{lemma}\label{Lemma3}
Suppose that $\psi(q)=O(q^{-1}(\log q)^{-1}(\log\log q)^{-3}).$ Let $(\gamma,\beta)$ be a pair of numbers whose $\sigma$ function satisfies $\sigma(q)=O((\log\log q)^{1/2}).$ Then there is a constant $C_1>0$ such that the approximation function $\psi'=\psi'_{\beta,\gamma',\omega(.)}$ satisfies
\[
\sum_{1\le q'<q} |A^{\psi',\gamma}_q\cap A^{\psi',\gamma}_{q'}|=O\left(\psi'(q)+\frac{\omega(q)\psi'(q)}{(\log\log q)^{3}}F(q)\right)+4(1+C_0/(2H))\psi'(q)\sum_{1\le q'< q} \psi'(q'),
\]
where $\omega(q)=C_1/\sigma(q).$ If $\psi(q)=O(q^{-1}(\log q)^{-1} (\log\log q)^{1/2}),$ then as in above (with $\omega(q)=C_1/(\log\log q)^{1/2}$) we have
\[
\sum_{1\le q'<q} |A^{\psi',\gamma}_q\cap A^{\psi',\gamma}_{q'}|=O\left(\psi'(q)+\psi'(q)F(q)\right)+4(1+C_0/(2H))\psi'(q)\sum_{1\le q'< q} \psi'(q').
\]
\end{lemma}
\begin{remark}
The second part of this lemma will be very useful in the case when we know that $\psi'$ is supported on the set where $F(q)=\sum_{r|q} \log r/r$ is uniformly bounded from above.
\end{remark}
\subsection{A counting lemma with Diophantine parameters: proof of Lemma \ref{Lemma}}
First, we will deal with the case when $(\gamma,\beta)$ is a Diophantine pair. All the main ideas will be included in this special case. Later on, we will add extra technical arguments to weaken this Diophantine condition.\footnote{Of course, the integrity of this manuscript suffers from this non-linear proving style. The reader may need to constantly re-read this section in order to be able to finish the later proofs. We have carefully set up the breakpoints inside the proof such that the reader can conveniently come back and re-read what will be needed.} In this way, those extra arguments will not block our view too much, at least for now. Weakening the Diophantine condition is not for free. We shall see later that the weaker the Diophantine condition we impose on $(\gamma,\beta),$ the less information we can use from $\psi.$
\begin{proof}[Proof of Lemma \ref{Lemma}]
Without loss of generality we assume that $q\ge 1024.$ We also assume that $\psi(q)\le q^{-1}(\log q)^{-1}(\log\log q)^{-2}.$ Let $\omega>0$ be a number that will be determined later. In the proof, we will need to introduce a constant $C>0$ whose value will be updated throughout the proof. Consider $\psi'=\psi'_{\beta,\gamma',\omega}.$ In what follows, we will use $\psi'$ as the default approximation function, and we write $A$ for $A^{\psi',\gamma}$ and $\Delta$ for $\Delta_{\psi'}.$ By Lemma \ref{master}, for each integer $H>2$, if $\Delta(q,q')<H\gcd(q,q')$ then we have
\[
|A_q\cap A_{q'}|\le 2(2H+1)\min\{\psi'(q)/q,\psi'(q')/q'\}\gcd(q,q')I_{\Delta(q,q')/\gcd(q,q')}(\{\gamma(q'-q)/\gcd(q,q')\})
\]
otherwise, for a $C_0>0,$
\[
|A_q\cap A_{q'}|\le 4(1+C_0/(2H))\psi'(q)\psi'(q').
\]
Now we wish to estimate the sum
\begin{align}
\sum_{\substack{1\le q'<q\\\|q'\beta-\gamma'\|\in [{q'}^{-\omega},1)}} \min\left\{\frac{\psi'(q')}{q'},\frac{\psi'(q)}{q}\right\}\gcd(q',q)I_{\Delta(q,q')/\gcd(q,q')}(\{\gamma(q'-q)/\gcd(q,q')\})\label{SUM}
\end{align}
There are now two different 'rotations' hidden in the above sum. The first one is inside the term
\[
\{\gamma (q'-q)/\gcd(q,q')\}
\]
and the second one is inside the value of $\psi'$ which depends heavily on $\|q\beta-\gamma'\|.$ We will use both the 'rotations' in the following argument.
Before the main part of the proof, let us observe some simple facts. First, we see that for each $1>\rho_1>0,$
\begin{align}
\sum_{1\le q'<q^{1-\rho_1}} \frac{\psi'(q)}{q}\gcd(q',q)I_{\Delta(q,q')/\gcd(q,q')}(\{\gamma (q'-q)/\gcd(q,q')\})\le \psi'(q)\frac{q^{1-\rho_1}}{q}d(q)=\psi'(q)o(1).\label{1}
\end{align}
Next observe that
\begin{align}
&\sum_{1\le q'<q} \frac{\psi'(q)}{q}\gcd(q',q)I_{\Delta(q,q')/\gcd(q,q')}(\{\gamma (q'-q)/\gcd(q,q')\})\\
=&\frac{\psi'({q})}{q}\sum_{r|q} r \sum_{q': \gcd(q',q)=r}I_{\Delta(q,q')/r}(\{\gamma (q'-q)/r\}).\label{DIVISOR SUM}
\end{align}
Then we see that for each $\rho_2>1,$
\begin{align}
\sum_{r|q}r\sum_{\substack{q':\gcd(q',q)=r\\mathbf{q}'\le q/r^{\rho_2}}}I_{\Delta(q,q')/r}(\{\gamma (q'-q)/r\})\le \sum_{r|q}r \frac{q}{r^{1+\rho_2}}\le \zeta(\rho_2)q.\label{2}
\end{align}
Therefore, to estimate (\ref{SUM}), it is enough to reduce the range of the sum to $q'\ge q^{1-\rho_1}$ as well as $q'\ge q/r^{\rho_2}$ for each divisor $r|q$ in (\ref{DIVISOR SUM})\footnote{For convenience, we record here the condition that $q'\ge \max\{q^{1-\rho_1},q/r^{\rho_2}\}$ for fixed values $1>\rho_1>0,\rho_2>1.$}. In what follows, those conditions are always implied. Those conditions imply in particular that for large enough $q>0,$
\[
(\log\log q')^2\ge \frac{1}{2}(\log\log q)^2
\]
and
\[
\log q'\ge (1-\rho_1)\log q.
\]
So far, the above arguments do not rely on any Diophantine conditions for $(\gamma,\beta).$ We now split the expression (\ref{SUM}) according to whether $\psi'(q')q\ge \psi'(q)q'$ or $\psi'(q')q\le \psi'(q)q'$. We consider them separately with similar arguments. There, we will see how the Diophantine condition for $(\gamma,\beta)$ is used.
\subsubsection*{Case I: $\psi'(q')q\ge \psi'(q)q'$}
We first deal with summand for which $\psi'(q')q\ge \psi'(q)q'.$ This implies that
\[
\min\left\{\frac{\psi'(q')}{q'},\frac{\psi'(q)}{q}\right\}=\frac{\psi'(q)}{q}.
\]
In what follows, we will not explicitly write down this condition. For concreteness, we choose $\rho_1=1/2$ and $\rho_2=2$ in this case. Let $l\ge 0$ be an integer and we consider the set
\[
G_\omega^l=\{q'\ge 1, q'\in\mathbb{N}:\|q'\beta-\gamma'\|\in [2^l/{q'}^{\omega},2^{l+1}/{q'}^{\omega}]\}.
\]
Since $\omega$ is fixed throughout the proof, we simply write $G^l=G^l_\omega.$
Now observe that (decompose $\mathbb{N}$ into sets $G^l$ with various $l$)
\begin{align*}
\sum_{\substack{1\le q'<q\\\|q'\beta-\gamma'\|\in [{q'}^{-\omega},1)}} \frac{\psi'(q)}{q}\gcd(q',q)I_{\Delta(q,q')/\gcd(q,q')}(\{\gamma (q'-q)/\gcd(q,q')\})\\
\le \frac{\psi'(q)}{q}\sum^{l\le \omega \log q}_{l= 0}\sum_{r|q}r \sum_{\substack{q': \gcd(q',q)=r\\mathbf{q}'\in G^l}}I_{\Delta(q',q)/r}(\gamma(q'-q)/r).
\end{align*}
For each integer $k\ge 0$ we introduce the following set
\[
D_k(q)=\{q'\ge 1, q'\in\mathbb{N}: q'\in [q/2^{k+1},q/2^k]\}.
\]
Recall that we implicitly have the condition $q'\ge q^{1/2}$, and this forces $2^k$ to be at most $\sqrt{q}.$\footnote{We have $2^k\le q^{1/2}.$ We also have $q/2^k\ge q/r^2,$ which forces $2^k\le r^2.$ In general, if we do not specify the values of $\rho_1,\rho_2,$ we have $2^k\le \min\{q^{\rho_1}, r^{\rho_2}\}.$} Then we see that
\begin{align*}
\sum_{\substack{1\le q'<q\\\|q'\beta-\gamma'\|\in [{q'}^{-\omega},1)}} \frac{\psi'(q)}{q}\gcd(q',q)I_{\Delta(q,q')/\gcd(q,q')}(\{\gamma (q'-q)/\gcd(q,q')\})\\
\le \frac{\psi'(q)}{q}\sum^{l\le \omega \log q}_{l\ge 0}\sum_{r|q}r \sum_{k:1\le 2^k\le q^{1/2}}\sum_{\substack{q':\gcd(q',q)=r\\mathbf{q}'\in G^l\\mathbf{q}'\in D_k(q)}}I_{\Delta(q',q)/r}(\gamma(q'-q)/r).
\end{align*}
If $q'\in G^l\cap D_k(q)$, then we have (recall that $(\log\log q')^2\ge (\log\log q)^2/2, \log q'\ge \log q/2$, $\psi'(q')q\ge \psi'(q)q'$ and $\psi(q)\le q^{-1}(\log q)^{-1}(\log\log q)^{-2}$)
\[
\Delta(q',q)\le 2\psi'(q')q\le 2\frac{2^{k+1}}{\log q'(\log\log q')^2} \frac{q^{\omega}}{2^l 2^{\omega k}}\le 8\frac{2^{k+1}}{\log q(\log\log q)^2} \frac{q^{\omega}}{2^l2^{\omega k}}.
\]
For each integer pair $k,l,$ we wish to estimate the sum
\begin{align}
&S_{k,l,r}(q) =\sum_{\substack{q': \gcd(q',q)=r\\mathbf{q}'\in G^l\\mathbf{q}'\in D_k(q)}}I_{\Delta(q',q)/r}(\gamma(q'-q)/r)&\nonumber\\
&\le \#\{1\le s\le q/r-1: sr\in G^l\cap D_k(q), \{\gamma(s-q/r)\}\in B(0,2^{k+4-l-\omega k}q^{\omega}/r\log q(\log\log q)^2)\}&\nonumber\\
&\le \#\{1\le s\le q/r-1: sr\in D_k(q), \|sr\beta-\gamma'\|\in [2^{l+\omega k}/q^{\omega},2^{l+1+\omega(k+1)}/q^{\omega}],\nonumber&\\
&\{\gamma(s-q/r)\}\in B(0,2^{k+4-l-\omega k}q^{\omega}/r\log q(\log\log q)^2)\}.&\label{R} \end{align}
If $q/(2^kr)<1$ then no non-zero multiple of $r$ is inside $D_k(q).$ If $q/(2^kr)\ge 1,$ then there are at most $3q/(2^kr)$ many possible multiples of $r$ in $D_k(q).$ We wish to use Lemma \ref{Discrepancy bound} to estimate $S_{k,l,r}(q).$ From our assumption, $(\gamma,\beta)$ is a Diophantine pair. Thus, there are numbers $\epsilon, C>0,$ such that
\begin{align}
S_{k,l,r}(q)\le 8\frac{2^{k+1}}{\log q(\log\log q)^2} \frac{q^{\omega}}{2^l2^{\omega k}}\times\frac{1}{r} \times \frac{2^{l+1}2^{(k+1)\omega}}{q^{\omega}}\frac{3q}{2^k r}+C r \left(\frac{3q}{2^k r}\right)^{\epsilon}.\label{E1}
\end{align}
The factor $r$ in the above discrepancy error term comes from the fact that we are analysing irrational rotation with $(r\beta,\gamma)$ rather than $(\beta,\gamma).$ Now, if we ignore the condition that $q'\in G^l$ in the sum, then we have the upper bound
\[
S_{k,l,r}(q)\le \#\{1\le s\le q/r-1: sr\in D_k(q), \{\gamma(s-q/r)\}\in B(0,2^{k+4-l-\omega k}q^{\omega}/r\log q(\log\log q)^2)\}.
\]
Again, Lemma \ref{Discrepancy bound} for irrational rotation with parameter $\gamma$ gives us a positive number $\epsilon'<1$ with
\begin{align}
S_{k,l,r}(q)\le 8\frac{2^{k+1}}{\log q(\log\log q)^2} \frac{q^{\omega}}{2^l 2^{\omega k}}\frac{1}{r} \frac{3q}{2^k r}+C\left(\frac{3q}{2^k r}\right)^{\epsilon'}.\label{E2}
\end{align}
Finally, as $\gamma$ is irrational and not Liouville, there is a number $\alpha>1$ such that $S_{k,l,r}(q)=0$ whenever
\begin{align}
8\frac{2^{k+1}}{r\log q(\log\log q)^2} \frac{q^{\omega}}{2^l2^{\omega k}}\le \left(\frac{q}{r}\right)^{-\alpha}. \label{C2}
\end{align}
To see this, observe that as $s$ ranges over $\{1,\dots,q/r-1\}$, the value of $\{\gamma(s-q/r)\}$ ranges over $\{\gamma\},\{2\gamma\},\dots,\{\gamma(q/r-1)\}.$ As $\gamma$ is irrational and not Liouville, we see that $\|n\gamma\|\ge n^{-\alpha}$ for a number $\alpha>1$ and all $n\ge 2.$ The estimate (\ref{E1}) provides in general more information than the estimate (\ref{E2}). However, the discrepancy error in (\ref{E1}) is in general larger than in (\ref{E2}). We will use (\ref{E1}) when $r$ is small compared to $q$ and (\ref{E2}) when $r$ is large.
Now we need to consider the sum
\[
\frac{\psi'(q)}{q}\sum^{l\le \omega \log q}_{l\ge 0}\sum_{r|q}r \sum_{k:1\le 2^k\le q^{1/2}}S_{k,l,r}(q).
\]
Let us first use the Estimate (\ref{E2}). In the sum, we only need to consider the case when Condition (\ref{C2}) is not satisfied, i.e. we have the following condition
\begin{align}
r<2^{\frac{k+4-l-\omega k}{1+\alpha}}\frac{1}{(\log q)^{1/(1+\alpha)}(\log\log q)^{2/(1+\alpha)}} q^{\frac{\omega+\alpha}{1+\alpha}}. \label{C2'}
\end{align}
The second term in (\ref{E2}) sums up to
\begin{align}
&\ll \frac{\psi'(q)}{q}\sum_{r|q}r\sum_{k,l} \left(\frac{3q}{2^k r}\right)^{\epsilon'}&\nonumber\\
&\ll \psi'(q)(\log q)^2\sum_{r|q} \left(\frac{r}{q}\right)^{1-\epsilon'}=\psi'(q)o(1)&\label{I}
\end{align}
For the last line, we have used the condition (\ref{C2'}) and the fact that the number of divisors function $d(q)=o(q^{\delta})$ for all $\delta>0$. For the first term in (\ref{E2}), it sums up to
\begin{align*}
&\ll \frac{\psi'(q)}{q}\sum_{r|q}r\sum_{k,l}\frac{q^\omega q} {r^2}&\\
&\ll \psi'(q)q^\omega (\log q)^2\sum_{r|q}\frac{1}{r}.&
\end{align*}
Thus, if we impose the condition $r\ge q^{2\omega}$ in the sum, we see that
\begin{align}
\psi'(q)q^\omega (\log q)^2\sum_{\substack{r|q\\r\ge q^{2\omega}}}\frac{1}{r}\le \psi'(q)q^\omega (\log q)^2 d(q)q^{-2\omega}=\psi'(q)o(1).\label{II}
\end{align}
Now, we are left with the terms for which $r<q^{2\omega}.$ We will use the Estimate (\ref{E1}). First, we deal with the error term (the second term), which sums up to
\begin{align*}
&\ll \frac{\psi'(q)}{q}\sum_{\substack{r|q\\r<q^{2\omega}}}r\sum_{k,l}r\left(\frac{3q}{2^k r}\right)^{\epsilon}&\\
&\ll \psi'(q)(\log q)\sum_{\substack{r|q\\r<q^{2\omega}}}\frac{r^{2-\epsilon}}{q^{1-\epsilon}}&\\
&\ll \psi'(q)(\log q) d(q) q^{2\omega(2-\epsilon)-(1-\epsilon)}.&
\end{align*}
We can choose $\omega$ to be small enough so that $2\omega(2-\epsilon)-(1-\epsilon)<0.$\footnote{Here is the first condition on $\omega$.} Then we have
\begin{align}
\psi'(q)(\log q) d(q) q^{2\omega(2-\epsilon)-(1-\epsilon)}=\psi'(q)o(1).\label{III}
\end{align}
The last step is to add up the contributions for the first term of (\ref{E1}),
\begin{align}
\frac{\psi'(q)}{q}\sum^{l\le \omega \log q}_{l\ge 0}\sum_{\substack{r|q\\ r<q^{2\omega}}}r \sum_{k:1\le 2^k\le q^{1/2}}8\frac{2^{k+1}}{\log q(\log\log q)^2} \frac{q^{\omega}}{2^l2^{\omega k}}\times\frac{1}{r} \times \frac{2^{l+1}2^{(k+1)\omega}}{q^{\omega}}\frac{3q}{2^k r}\nonumber\\
\le C\frac{\psi'(q)}{\log q(\log\log q)^2} \log q \sum_{r|q} \frac{\log r}{r}\nonumber\\
=C\frac{\psi'(q)}{(\log\log q)^2} \sum_{r|q} \frac{\log r}{r},\label{IV}
\end{align}
where $C>0$ is a positive constant. Now we combine (\ref{I}),(\ref{II}),(\ref{III}),(\ref{IV}) to obtain (update the value for $C$ if necessary)
\begin{align*}
\sum_{\substack{1\le q'<q\\\|q'\beta-\gamma'\|\in [{q'}^{-\omega},1)}} \frac{\psi'(q)}{q}\gcd(q',q)I_{\Delta(q,q')/\gcd(q,q')}(\{\gamma (q'-q)/\gcd(q,q')\})\\
\le C\left(\psi'(q)+\frac{\psi'(q)}{(\log\log q)^2}\sum_{r|q} \frac{\log r}{r}\right).
\end{align*}
This is the estimate for the terms in (\ref{SUM}) with $\psi'(q')q\ge \psi'(q)q'$.
\subsubsection*{Case II: $\psi'(q')q\le \psi'(q)q'$}
In this case, we have
\[
\min\left\{\frac{\psi'(q')}{q'},\frac{\psi'(q)}{q}\right\}=\frac{\psi'(q')}{q'}.
\]
As in the first case, we are dealing with the following sum
\begin{align*}
\sum_{\substack{1\le q'<q\\\|q'\beta-\gamma'\|\in [{q'}^{-\omega},1)}} \frac{\psi'(q')}{q'}\gcd(q',q)I_{\Delta(q,q')/\gcd(q,q')}(\{\gamma (q'-q)/\gcd(q,q')\})\\
\le \sum^{l\le \omega \log q}_{l\ge 0}\sum_{r|q}r \sum_{k:1\le 2^k\le q^{1/2}}\sum_{\substack{q': \gcd(q',q)=r\\mathbf{q}'\in G^l\cap D_k(q)}}\frac{\psi'(q')}{q'}I_{\Delta(q',q)/r}(\gamma(q'-q)/r).
\end{align*}
Observe that for $q'\in G^l\cap D_k(q),$
\[
\frac{\psi'(q')}{q'}\le T_{k,l}=\min\left\{8\times \frac{2^{2(k+1)}}{q^2}\frac{q^\omega}{2^{\omega k}} \frac{1}{2^l}\frac{1}{\log q (\log\log q)^2}, \frac{\psi'(q)}{q}\right\}.
\]
Therefore we have
\begin{align*}
\sum_{\substack{q': \gcd(q',q)=r\\mathbf{q}'\in G^l\cap D_k(q)}}\frac{\psi'(q')}{q'}I_{\Delta(q',q)/r}(\gamma(q'-q)/r)\\
\le T_{k,l}\sum_{\substack{q': \gcd(q',q)=r\\mathbf{q}'\in G^l\cap D_k(q)}}I_{\Delta(q',q)/r}(\gamma(q'-q)/r).
\end{align*}
As we assumed for this case, we have
\[
\Delta(q',q)\le 2 \psi'(q)q'.
\]
Then for $q'\in G^l\cap D_k(q),$ we have
\[
\Delta(q',q)\le 2 \psi'(q) \frac{q}{2^k}.
\]
The following steps will be very similar to those in (Case I). Again, we need to estimate
\begin{align*}
S_{k,l,r}(q)=\sum_{\substack{q': \gcd(q',q)=r\\mathbf{q}'\in G^l\cap D_k(q)}}I_{\Delta(q',q)/r}(\gamma(q'-q)/r)\\
\end{align*}
Again, as $\gamma$ is not Liouville, we have $S_{k,l,r}=0$ if
\[
2\psi'(q)\frac{q}{2^kr}\le \left(\frac{q}{r}\right)^{-\alpha}.
\]
Therefore, we can assume that (negating the above)
\begin{align}
r^{1+\alpha}< 2\psi'(q)q^{\alpha+1}\frac{1}{2^k}\le \frac{2}{q\log q (\log\log q)^2} \times q^{\omega}\times q^{\alpha+1}\frac{1}{2^k}\le 2q^{\alpha+\omega}.\label{C2''}
\end{align}
As $(\gamma,r\beta)$ is a Diophantine pair, we obtain the following estimate which is similar to (\ref{E1}),
\begin{align}
S_{k,l,r}(q)\le \frac{2\psi'(q)q}{2^kr} \frac{2^{l+1+\omega (k+1)}}{q^\omega} \frac{3q}{2^k r}+C r\left(\frac{q}{2^k r}\right)^{\epsilon}.\label{E1'}
\end{align}
Similarly, we also have the following analogue of (\ref{E2}),
\begin{align}
S_{k,l,r}(q)\le \frac{2\psi'(q)q}{2^k r}\frac{3q}{2^k r}+C \left(\frac{q}{2^k r}\right)^{\epsilon}.\label{E2'}
\end{align}
Our aim is to estimate
\[
\sum_{r|q}r\sum_{k,l} T_{k,l} S_{k,l,r}(q).
\]
As in (Case I), we need to switch between (\ref{E1'}) and (\ref{E2'}) according to the value of $r.$ First, let us observe that the error term (second term) in (\ref{E2'}) creates no significant contribution (with the condition (\ref{C2''})),
\begin{align*}
\sum_{r|q}r\sum_{k,l}T_{k,l} \left(\frac{q}{2^k r}\right)^{\epsilon}\ll \psi'(q)(\log q)^2\sum_{r|q} \left(\frac{r}{q}\right)^{1-\epsilon}\ll \psi'(q)(\log q)^2d(q) q^{-\rho}=\psi'(q)o(1),
\end{align*}
where $\rho>0$ is a positive number depending on $\alpha,\omega,\epsilon.$ Let us now consider the first term in (\ref{E2'}),
\begin{align*}
\sum_{r|q}r\sum_{k,l}T_{k,l}\frac{2\psi'(q)q}{2^k r}\frac{3q}{2^k r}\ll \psi'(q)q^\omega (\log q)^2\sum_{r|q}\frac{1}{r}.
\end{align*}
We will choose to use (\ref{E2'}) only when $r\ge q^{2\omega}.$ In this way, the above is again
\[
\psi'(q)o(1).
\]
We have thus shown that under the condition (\ref{C2''}) and $r\ge q^{2\omega},$
\begin{align}
\sum_{\substack{r|q\\ r\ge q^{2\omega}}}r\sum_{k,l} T_{k,l} S_{k,l,r}(q)=\psi'(q)o(1).\label{I'}
\end{align}
From now on, we will assume $r<q^{2\omega}.$ Next, we see that the error term of (\ref{E1'}) (the second term above) sums up to
\begin{align}
\ll \sum_{\substack{r|q\\ r<q^{2\omega}}} r \sum_{k,l} T_{k,l} \times r\left(\frac{q}{2^k r}\right)^{\epsilon}\nonumber\\
\ll \psi'(q)(\log q)^2\sum_{\substack{r|q\\ r<q^{2\omega}}} \frac{r^2}{q}\left(\frac{q}{2^k r}\right)^{\epsilon}\nonumber\\
\ll \psi'(q)(\log q)^2d(q)q^{-E},\label{II'}
\end{align}
where the exponent $E$ is (recall that $r<q^{2\omega}$)
\[
E=1-\epsilon-2\omega(2-\epsilon).
\]
Now $\epsilon$ is a fixed positive number. We have met this condition just above the Estimate (\ref{III}). Our freedom is to choose $\omega\in (0,1).$ We need to choose $\omega$ to be sufficiently small so that $E>0.$\footnote{This is the second and the last condition on $\omega$.} This is certainly possible. Finally, the main term (the first term) of (\ref{E1'}) sums up to
\begin{align*}
\le \sum_{\substack{r|q\\ r<q^{2\omega}}} r \sum_{k,l} 48\times \frac{2^{2(k+1)}}{q^2}\frac{q^\omega}{2^{\omega k}} \frac{1}{2^l}\frac{1}{\log q (\log\log q)^2} \times \frac{\psi'(q)q}{2^kr} \frac{2^{l+1+\omega (k+1)}}{q^\omega} \frac{q}{2^k r}\\
\le \frac{1000\psi'(q)}{\log q(\log\log q)^2} \sum_{r|q}r\sum_{k,l} \frac{1}{r^2}
\end{align*}
The sum over $k,l$ gives an $O(\log r\log q)$ factor, where the implied constant is absolute. Thus, we see that for a constant $C>0,$
\begin{align}
\frac{1000\psi'(q)}{\log q(\log\log q)^2} \sum_{r|q}r\sum_{k,l} \frac{1}{r^2}\le \frac{C\psi'(q)}{(\log\log q)^2} \sum_{r|q} \frac{\log r}{r}.\label{III'}
\end{align}
We can combine (\ref{I'}), (\ref{II'}), (\ref{III'}) and obtain the required estimate for (\ref{SUM}) in (Case II). From here, the proof is finished.
\end{proof}
\subsection{Weakening the Diophantine condition: proof of Lemma \ref{Lemma3}}\label{W}
In the above proof, the Diophantine condition for $(\gamma,\beta)$ is needed in the following way. First, we need $\gamma$ to be not Liouville to deduce Estimates (\ref{E2}), (\ref{E2'}). Next, together with Condition (\ref{C2'}) we deduced that the sum for $r\ge q^{2\omega}$ gives $\psi'(q)o(1).$ To deal with the sum for $r<q^{2\omega},$ we need to use the condition that $(\gamma,\beta)$ is a Diophantine pair in order to have (\ref{E1}), (\ref{E1'}).
There is still leeway for us. In order to see it, let us examine the Diophantine property of the pair $(\gamma,\beta)$ more precisely. Recall that $\sigma(.)$ is the function taking integer variables and positive values so that $\sigma(N)$ is the best Diophantine exponent for $(\gamma,\beta)$ up to height $N.$ That is to say, $\sigma(N)$ is the infimum of all numbers $\sigma>0$ such that for all $-N\le k_1,k_2\le N$ with $k_1,k_2$ not both zeros,
\[
\|k_1\gamma+k_2\beta\|\ge \max\{|k_1|,|k_2|\}^{-\sigma}.
\]
We recall the following result by Erd\H{o}s-Tur\'{a}n-Koksma, \cite[Theorem 1.21]{DT97}. This result holds for $d$-dimensional torus rotations with any $d\ge 1$. In this paper, we only use the case when $d=1$ or $2$.
\begin{theorem}[ETK]\label{ETK}
Let $\alpha$ be an irrational number. Let $H,N$ be positive integers. Then for each interval $I\subset [0,1]$,
\[
|E^{I}_{\alpha}(N)|\le 9N \left(\frac{1}{H}+\sum_{0<|k|\le H}\frac{2}{||k|+1|}\frac{2}{N\|k\alpha\|}\right).
\]
Let $(\alpha,\beta)$ be a pair of numbers. Let $H, N$ be positive integers. Then for each rectangle $I\subset [0,1]^2$ whose sides are parallel to the coordinate axes,
\[
|E^{I}_{\alpha,\beta}(N)|\le 9N\left(\frac{1}{H}+\sum'_{k_1,k_2}\frac{4}{(|k_1|+1)(|k_2|+1)}\frac{2}{N \|k_1\alpha+k_2\beta\|}\right),
\]
where $\sum'_{k_1,k_2}$ is the sum of pairs of integers $-H\le k_1,k_2\le H$ such that at least one of $k_1,k_2$ is not zero.
\end{theorem}
Here, the factor $4/(|k_1|+1)(|k_2|+1)$ can be replaced with
\[
1/\max\{1,|k_1|\}\max\{1,|k_2|\},
\]
which would only provide us with a marginal improvement. We apply Theorem \ref{ETK} to $(\gamma,\beta)$ and see that
\begin{align*}
|E^I_{\gamma,\beta}(N)|&\le 9N\left(\frac{1}{H}+\sum'_{k_1,k_2}\frac{4}{(|k_1|+1)(|k_2|+1)}\frac{2}{N \|k_1\gamma+k_2\beta\|}\right)&\\
&\le 9N\left(\frac{1}{H}+\frac{8}{N}\sum'_{k_1,k_2}\frac{1}{(|k_1|+1)(|k_2|+1)}\frac{1}{ \max\{k_1,k_2\}^{-\sigma(N)}}\right)&\\
&\le 9N\left(\frac{1}{H}+\frac{8000\sigma(N)}{N}(\log H)H^{\sigma(N)}\right).&
\end{align*}
The factor $8000$ on the last line is by no means the optimal one. Now we see that for an absolute constant $C>0,$
\[
9N\left(\frac{1}{H}+\frac{8000\sigma(N)}{N}(\log H)H^{\sigma(N)}\right)\le C N^{\sigma(N)/(\sigma(N)+1)}(\log N)^{1/(\sigma(N)+1)}
\]
if we choose $H$ to be the smallest positive integer with
\[
\frac{1}{H}\le \frac{8000\sigma(N)}{N}(\log H)H^{\sigma(N)}.
\]
Thus, we have obtained the following discrepancy estimate for the irrational rotation with parameter $(\gamma,\beta),$
\begin{align}
|E^I_{\gamma,\beta}(N)|\le C_1N^{\sigma(N)/(\sigma(N)+1)}(\log N)^{1/(\sigma(N)+1)}.\label{R1}
\end{align}
Recall that we also have the following condition for all $0\le |k_1|,|k_2|\le N$ except $k_1=k_2=0,$
\[
\|k_1\gamma+k_2\beta\|\ge \max\{k_1,k_2\}^{-\sigma(N)}.
\]
In particular, if we let $k_2=0,$ then we have for all $1\le |k_1|\le N,$
\begin{align}
\|k_1\gamma\|\ge k_1^{-\sigma(N)}.\label{R2a}
\end{align}
Then a further use of Theorem \ref{ETK} gives us that
\begin{align}
|E^I_{\gamma}(N)|\le C_1 N^{\sigma(N)/(\sigma(N)+1)}.\label{R2b}
\end{align}
Of course, if $(\gamma,\beta)$ is a Diophantine pair, then $\sigma(\cdot)$ is bounded, and from (\ref{R1}), (\ref{R2a}), (\ref{R2b}) above we can deduce (\ref{E1}), (\ref{E1'}), (\ref{E2}), (\ref{E2'}). This is where the extra room lies. We now prove Lemma \ref{Lemma3}.
\begin{proof}[Proof of Lemma \ref{Lemma3}]
We retain all the notation from the proof of Lemma \ref{Lemma}. We now modify the arguments in order to deduce Lemma \ref{Lemma3}. Let us concentrate on (Case I). Observe that we still have Estimates (\ref{E1}), (\ref{E2}) and Condition (\ref{C2}). The difference is that the exponents $\epsilon,\epsilon',\alpha$ are no longer constants; they need to be chosen according to $q.$ In fact, here we can choose
\[
\epsilon(q)=\epsilon'(q)=(1+o(1))\frac{\sigma(q)}{\sigma(q)+1}
\]
and
\[
\alpha(q)=\sigma(q).
\]
Here the $o(1)$ term is introduced to absorb the logarithmic factor in (\ref{R1}) and can be made more explicit. Next, in order to have Estimate (\ref{I}), it is sufficient to have
\begin{align}
(\log q)^2 d(q) \frac{1}{q^{\rho(q)}}=o(1)\label{Ic}
\end{align}
where
\[
\rho(q)=\frac{0.5-\omega(q)}{(\sigma(q)+1)^2}.
\]
Again, we may fix a small enough $\omega(q)$ to start with. Observe that (by the maximal order of the divisor function)\footnote{The use of the maximal order of the divisor function is an overkill; in most situations, one only needs the average order. For example, if $\psi$ is supported on integers with $d(q)=O(\log q),$ then we can improve this estimate significantly.}
\[
\log d(q)\ll \log q/\log\log q.
\]
Thus we can achieve (\ref{Ic}) if
\[
(\log q)^2d(q)=o(q^{\rho(q)}).
\]
This can be achieved by requiring that
\[
\sigma(q)\le C (0.5-\omega(q))^{1/2}(\log\log q)^{1/2},
\]
where $C>0$ is an absolute constant. The above condition says that $q^{\rho(q)}$ is not too small.
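Let us briefly verify this implication; the following is a rough computation in which we do not track the precise admissible value of $C$ (recall also that $\omega(q)$ is small, so that $0.5-\omega(q)$ stays bounded away from $0$). Taking logarithms and using the divisor bound above, $\log\big((\log q)^{2}d(q)\big)\ll \log q/\log\log q,$ while the assumed bound on $\sigma(q)$ gives $(\sigma(q)+1)^{2}\le 2\sigma(q)^{2}+2\le 2C^{2}(0.5-\omega(q))\log\log q+2,$ and hence
\[
\rho(q)\log q=\frac{(0.5-\omega(q))\log q}{(\sigma(q)+1)^{2}}\gg_{C} \frac{\log q}{\log\log q}.
\]
Choosing $C$ small enough in absolute terms therefore makes $\rho(q)\log q$ dominate $\log\big((\log q)^{2}d(q)\big)$ by a positive multiple of $\log q/\log\log q,$ which gives (\ref{Ic}).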
After re-establishing (\ref{I}), we need to consider (\ref{II}). For (\ref{II}) to hold, it is enough to have
\[
(\log q)^2 d(q)/q^\omega=o(1).
\]
This can be achieved by choosing $\omega$ to be $\gg 1/(\log \log q)$ where the implied constant is absolute. This says that $\omega$ cannot be too small. The critical issue occurs for Estimate (\ref{III}). We see that if $\epsilon$ is very close to one, then $\omega$ must be chosen to be very small because we should have
\[
2\omega(2-\epsilon)-(1-\epsilon)<0.
\]
Thus, we cannot keep $\omega$ fixed; it must vary with $q$ as well. More precisely, we can choose
\begin{align}
\frac{C_1}{\log\log q}\le\omega(q)\le C_2/\sigma(q)\label{Omega}
\end{align}
where $C_1,C_2>0$ are constants. Since $\sigma(q)=O((\log\log q)^{1/2})$, the above requirement can be met. Thus we have
\[
q^{2\omega(q)(2-\epsilon(q))-(1-\epsilon(q))}(\log q)^2 d(q)=o(1).
\]
In this way, we still have Estimates (\ref{I}), (\ref{II}), (\ref{III}). Estimate (\ref{IV}) gives the main term, and no further restriction is needed to deduce it. However, observe that since we have already imposed the condition $r\le q^{2\omega(q)}$, the range of $k$ in the sum is restricted because $2^k\le r^2$. In return, assuming $\psi(q)=O(q^{-1}(\log q)^{-1}(\log\log q)^{-3}),$ we see that
\begin{align}
\frac{\psi'(q)}{q}\sum^{l\le \omega(q) \log q}_{l\ge 0}\sum_{r|q}r \sum_{k:1\le 2^k\le q^{2\omega(q)}}8\frac{2^{k+1}}{\log q(\log\log q)^3} \frac{q^{\omega(q)}}{2^l2^{\omega(q) k}}\times\frac{1}{r} \times \frac{2^{l+1}2^{(k+1)\omega(q)}}{q^{\omega(q)}}\frac{3q}{2^k r}\nonumber\\
\ll \frac{\psi'(q)\omega(q)}{(\log\log q)^{3}}\sum_{r|q}\frac{\log r}{r}. \label{IVc}
\end{align}
Similarly, the arguments in (Case II) can be reconsidered in exactly the same way. This finishes the proof.
\end{proof}
\section{Proofs of Theorems \ref{MAIN}, \ref{MAIN2}} \label{proofofmain}
In this section, we will prove Theorems \ref{MAIN}, \ref{MAIN2}.
\begin{proof}[Proof of Theorem \ref{MAIN}]
First, the requirement that $(\gamma,\beta)$ is Diophantine implies that $\gamma,\beta$ are both irrational and non-Liouville numbers.
Let $\omega$ be a small enough positive number which will be specified later. Recall the construction (\ref{psi}) for $\psi'=\psi'_{\beta,\gamma',\omega}.$ We see that
\[
W(\psi,\beta,\gamma,\gamma')\supset W(\psi',\gamma).
\]
Now we write $A$ for $A^{\psi',\gamma}.$ The proof now divides into two parts.
\subsubsection*{Step 1: Restricting the support of $\psi'$}
In this step, we show that it is possible to restrict the support of $\psi'$ a bit further (to a certain subset $A\subset\mathbb{N}$). The idea is that we have an upper bound condition $\psi(q)=O((q\log q(\log\log q)^2)^{-1}).$ Then we can use this upper bound to deduce another upper bound for $\psi'$ in such a way that
\[
\sum_{q\in A^c}\psi'(q)<\infty.
\]
On the other hand, we also know that $\sum_{q=1}^\infty\psi'(q)=\infty.$ Thus we have
\[
\sum_{q\in A}\psi'(q)=\infty.
\]
This allows us to restrict the support of $\psi'$ to $A.$ We now supply the details.
Recall that
\[
F(q)=\sum_{r|q} \frac{\log r}{r}.
\]
In this step, we show that it is possible to restrict $\psi'$ to integers $q$ with $F(q)=O((\log\log q)^2).$
We analyse the values of $F(q)$ for $q$ with
\[
\|q\beta-\gamma'\|\in \left[\frac{1}{q^{\omega}},1\right].
\]
For each integer $l\ge 0,$ recall the set
\[
G^l=G_\omega^l=\left\{q:\|q\beta-\gamma'\|\in \left[\frac{2^l}{q^{\omega}},\frac{2^{l+1}}{q^{\omega}}\right]\right\}.
\]
Let $K>0$ be an integer and $Q>1000$ be another integer. Consider the following sum
\begin{align*}
\sum_{q\in G^l\cap [Q/2,Q]} F^K(q)=\sum_{r_1,\dots, r_K\le Q} \frac{\log r_1\log r_2\dots \log r_K}{r_1 r_2\dots r_K}\sum_{\substack{q: [r_1,\dots,r_K]|q\\ q\in G^l\cap [Q/2,Q]}}1.
\end{align*}
By using Lemma \ref{Discrepancy bound} we see that
\begin{align}
\sum_{\substack{q: [r_1,\dots,r_K]|q\\ q\in G^l\cap [Q/2,Q]}}1\le \frac{Q}{[r_1,\dots,r_K]}\frac{2^{l+1+\omega}}{Q^{\omega}}+C[r_1,\dots,r_K] \left(\frac{Q}{[r_1,\dots,r_K]}\right)^{\epsilon}\label{E3}
\end{align}
for two constants $\epsilon,C>0$ which depend only on $\beta.$ Here the factor $[r_1,\dots,r_K]$ in the second term on the RHS appears because we are considering the rotation with angle $[r_1,\dots,r_K]\beta$ rather than $\beta.$
From now on, we pose the condition
\begin{align}
1-\omega-\epsilon>0. \label{C3}
\end{align}
This is always possible to achieve since $\epsilon<1.$ We will also need to apply Lemma \ref{Lemma}. From there, we also obtain an upper bound $\omega'_0$ for the possible values of $\omega.$ Now we set $\omega_0$ to be the minimum of $1-\epsilon$ and $\omega'_0.$
The first term on the RHS of (\ref{E3}) is relatively easy to handle and it makes the main contribution. We want to show that the second term does not contribute too much. Observe that if
\[
\frac{Q^{1-\omega}}{[r_1,\dots,r_K]}\ge [r_1,\dots,r_K]\left(\frac{Q}{[r_1,\dots,r_K]}\right)^{\epsilon}
\]
then the RHS of (\ref{E3}) will be bounded from above by
\[
\left(1+\frac{C}{2^{l+1+\omega}}\right) \frac{Q}{[r_1,\dots,r_K]}\frac{2^{l+1+\omega}}{Q^{\omega}}\le \left(1+C\right) \frac{Q}{[r_1,\dots,r_K]}\frac{2^{l+1+\omega}}{Q^{\omega}}.
\]
Otherwise, we have
\begin{align}
[r_1,\dots,r_K]\ge Q^{\frac{1-\omega-\epsilon}{2-\epsilon}}. \label{C4}
\end{align}
We write $\epsilon'=\frac{1-\omega-\epsilon}{2-\epsilon}.$ Observe that
\begin{align*}
\sum_{\substack{r_1,\dots, r_K\le Q\\ [r_1,\dots,r_K]\ge Q^{\epsilon'}}} \frac{\log r_1\log r_2\dots \log r_K}{r_1 r_2\dots r_K}\sum_{\substack{q: [r_1,\dots,r_K]|q\\ q\in G^l\cap [Q/2,Q]}}1\\
\le \frac{\log^K Q}{Q^{\epsilon'}} \sum_{r_1,\dots,r_K\le Q} \sum_{\substack{q: [r_1,\dots,r_K]|q\\ q\in G^l\cap [Q/2,Q]}}1\\
=\frac{\log^K Q}{Q^{\epsilon'}}\sum_{q\in G^l\cap [Q/2,Q]} d^K(q).
\end{align*}
There is a constant $C_1>0$ such that for all $q\ge 1,$
\[
d(q)\le 2^{C_1\log q/\log\log q}.
\]
Thus we see that
\begin{align}
\frac{\log^K Q}{Q^{\epsilon'}}\sum_{q\in G^l\cap [Q/2,Q]} d^K(q)\nonumber\\
\le \frac{\log^K Q}{Q^{\epsilon'}}2^{C_1K\log Q/\log\log Q}\#\big(G^l\cap [Q/2,Q]\big).\label{I'1}
\end{align}
By using Lemma \ref{Discrepancy bound} we see that as $\epsilon<1-\omega$,
\[
\# G^l \cap [Q/2,Q]\le \frac{2^{l+1+\omega}}{Q^{\omega}}Q+CQ^\epsilon\le (2^{l+1+\omega}+C)Q^{1-\omega}.
\]
Next, we bound the first term on the RHS of (\ref{E3}),
\begin{align}
\sum_{r_1,\dots r_K\le Q} \frac{\log r_1\log r_2\dots \log r_K}{r_1 r_2\dots r_K}\frac{Q}{[r_1,\dots,r_K]}\frac{2^{l+1+\omega}}{Q^{\omega}}\nonumber\\
\le Q^{1-\omega}2^{l+1+\omega} (C_2K^2)^K\label{II'1}
\end{align}
where $C_2>0$ is an absolute constant. Here, we have used the fact that
\[
\sum_{r\ge 1} \frac{\log r}{r^{1+K^{-1}}}=-\zeta'(1+K^{-1})= O(K^2)
\]
and that\footnote{This inequality is sharp. However, this is an overkill in the estimate of the sum $\sum_{r_1,\dots,r_K}$. For most of the tuples $(r_1,\dots,r_K),$ $[r_1,\dots,r_K]$ is in fact much larger than $(r_1\dots r_K)^{1/K}$. Intuitively speaking, very few tuples of integers have large GCD's.}
\[
[r_1,\dots,r_K]\ge (r_1\dots r_K)^{1/K}.
\]
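For completeness, we recall why the first of these two facts holds: by the Laurent expansion of $\zeta$ at $s=1$ we have $\zeta(s)=\frac{1}{s-1}+O(1)$ and therefore $\zeta'(s)=-\frac{1}{(s-1)^{2}}+O(1)$ as $s\to 1^{+}$, so that
\[
-\zeta'(1+K^{-1})=K^{2}+O(1)=O(K^{2}).
\]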
Collecting (\ref{I'1}), (\ref{II'1}) we see that
\begin{align}
\sum_{q\in G_{\omega}^l\cap [Q/2,Q]} F^K(q)\le Q^{1-\omega}2^{l+1+\omega} (C_1K^2)^K+\frac{\log^K Q}{Q^{\epsilon'}}2^{C_1K\log Q/\log\log Q}(2^{l+1+\omega}+C)Q^{1-\omega}\label{RAW}
\end{align}
holds for all $Q\ge 1024$ and $K,l\ge 1.$ The above constants $C,C_1,C_2$ do not depend on $Q,K,l.$ We choose $K=K_Q$ as a function of $Q.$ Then, as long as $K\le \epsilon'\log\log Q/(2C_1),$ we see that with another constant $C_3>0,$
\begin{align}
\sum_{q\in G^l\cap [Q/2,Q]} F^{K_Q}(q)\le C_3 2^{l}Q^{1-\omega}((C_1K_Q^2)^{K_Q}+\log^{K_Q} Q/Q^{\epsilon'/2})\label{E4}.
\end{align}
Now we choose $K_Q=[\epsilon'\log\log Q/(2C_1)].$ From here we see that for $M$ with $\log M=8C_1/\epsilon'$,
\begin{align}
\#\{q: q\in G^l\cap [Q/2,Q], F(q)\ge C_1M (\log\log Q)^2\}\le C_42^{l}Q^{1-\omega} \frac{1}{(\log Q)^4}, \label{E5}
\end{align}
where $C_4>0$ is a constant depending on both $\omega$ and $\epsilon'$, which in turn depend only on $\beta.$ From here we see that ($C_5>0$ is a different constant)
\begin{align*}
\sum_{\substack{q\in [Q/2,Q]\\F(q)\ge C_1M(\log\log q)^2}} \psi'(q)&\le\sum_{0\le l\le \omega\log Q} \sum_{\substack{q\in [Q/2,Q]\cap G^l\\F(q)\ge C_1M(\log\log q)^2}} \frac{1}{q \log q (\log\log q)^2} \frac{q^{\omega}}{2^l}\\
&\le C_5\sum_{0\le l\le \omega \log Q} \frac{1}{Q \log Q (\log\log Q)^2} \frac{Q^{\omega}}{2^l} 2^l Q^{1-\omega}\frac{1}{(\log Q)^4}\\
&\le C_5\omega \frac{1}{(\log Q)^4 (\log\log Q)^2}.
\end{align*}
By considering $Q=2^k,k\ge 10$ we see that
\[
\sum_{F(q)\ge C_1M(\log\log q)^2} \psi'(q)<\infty.
\]
However, our assumption is that
\[
\sum_{q}\psi'(q)=\infty.
\]
Therefore we may assume that $\psi'$ is supported on the integers $q$ with $F(q)\le C_1M (\log\log q)^2.$
\subsubsection*{Step 2: Use Lemma \ref{Lemma}}
Now we use Lemma \ref{Lemma} and see that
\[
\sum_{1\le q'<q} |A_q\cap A_{q'}|=O\left(\psi'(q)+\frac{\psi'(q)}{(\log\log q)^2}F(q)\right)+4(1+C_0/(2H))\psi'(q)\sum_{1\le q'< q} \psi'(q').
\]
As we assumed that $\psi'(q)>0$ only when $F(q)\le C_1M (\log\log q)^2,$ the above estimate can be taken one step further,
\begin{align*}
\sum_{1\le q'<q} |A_q\cap A_{q'}|&=O\left(\psi'(q)+C_1M\psi'(q)\right)+4(1+C_0/(2H))\psi'(q)\sum_{1\le q'< q} \psi'(q')\\&= O(\psi'(q))+4(1+C_0/(2H))\psi'(q)\sum_{1\le q'< q} \psi'(q').
\end{align*}
From here we see that (as $\sum_q \psi'(q)=\infty$)
\[
\frac{(\sum_{q\le Q} |A_q|)^2}{\sum_{q,q'\le Q} |A_q\cap A_{q'}|}\ge\frac{(\sum_{q=1}^Q 2\psi'(q))^2}{4(1+C_0/(2H))(\sum_{q=1}^Q\psi'(q))^2+O(\sum_{q=1}^Q \psi'(q))}\ge \frac{1}{1+\frac{C_0}{2H}+o(1)}.
\]
Then by Lemma \ref{Borel} we see that
\[
|\limsup_{q\to\infty} A_q|\ge \frac{1}{1+\frac{C_0}{2H}}.
\]
This finishes the proof as we can choose $H$ to be arbitrarily large.
\end{proof}
\begin{proof}[Proof of Theorem \ref{MAIN2}]
In the proof of Theorem \ref{MAIN}, we obtained (\ref{E3}) from the non-Liouville condition for $\beta.$ This estimate finally gives (\ref{E4}) and (\ref{E5}). Here, we need to choose $\omega,\epsilon$ to vary with $q.$ For example, the construction of $G_\omega^l$ is now \[
G_\omega^l=\left\{q:\|q\beta-\gamma'\|\in \left[\frac{2^l}{q^{\omega(q)}},\frac{2^{l+1}}{q^{\omega(q)}}\right]\right\}
\]
and $\epsilon(q)\le\sigma_\beta(q)/(\sigma_\beta(q)+1)$, where $\sigma_\beta(q)$ is the Diophantine exponent for $\beta$ at height $q.$ In particular, we have $\sigma_\beta(q)\le \sigma_{(\gamma,\beta)}(q)=\sigma(q)=O(\log\log\log q).$ Also, in the definition of $\psi'$, $\omega$ is now a function as well. In this way, $\psi'$ is nonzero only when $\|q\beta-\gamma'\|\gg q^{-\omega(q)}.$ Since $\omega(q)$ decays to $0,$ the support of $\psi'$ is now smaller.
Now the estimate (\ref{RAW}) holds with all the exponents $\omega,\epsilon'$ being functions of $q.$ However, what we really need are the values $\omega(q),\epsilon(q),\epsilon'(q)$ for $q\in [Q,2Q].$ For this reason, it is more convenient to have concrete functional forms for them. More precisely, we choose a small number $c>0$ and define
\[
\epsilon(q)=\frac{(\log\log\log q)^{1/2}}{(\log\log\log q)^{1/2}+1}
\]
\[
\epsilon'(q)=\frac{1-\omega(q)-\epsilon(q)}{2-\epsilon(q)}.
\]
We fix $\omega(q)=c(\log\log\log q)^{-1/2}$ so that $\epsilon'(q)>0.$ The problem is that $\epsilon'(q)\to 0$ as $q\to\infty.$ This would cause $M$ (which now also depends on $q$) to be too large. For this reason we need to treat Estimate (\ref{RAW}) more carefully. Let us consider two sequences $K_Q, H_Q$, $Q\ge 1.$ Then we see by (\ref{RAW}) that
\begin{align*}
&\#\{q:q\in G^l_\omega\cap [Q,2Q],F(q)\ge H_Q\}&\\\le &\frac{1}{H^{K_Q}_Q}\sum_{q\in G_{\omega}^l\cap [Q/2,Q]} F^{K_Q}(q)&\\
\le & \frac{Q^{1-\omega(2Q)}2^{l+1+\omega(Q)} (C'K_Q^2)^{K_Q}+\frac{\log^{K_Q} Q}{Q^{\epsilon'(2Q)}}2^{C'K_Q\log Q/\log\log Q}(2^{l+1+\omega(Q)}+C)Q^{1-\omega(2Q)}}{H^{K_Q}_Q}.&
\end{align*}
Now we want to achieve the following two asymptotics,
\[
\left(\frac{K^2_Q}{H_Q}\right)^{K_Q}=O(1/(\log Q)^4),
\]
\[
\frac{\log^{K_Q} Q}{Q^{\epsilon'(2Q)}}2^{C'K_Q\log Q/\log\log Q}\frac{1}{H_Q^{K_Q}}=O(1/(\log Q)^4).
\]
The first asymptotic is satisfied if we choose
\[
H_Q\ge K^2_Q (\log Q)^{4/K_Q}=e^{2\log K_Q+4\log\log Q/K_Q}.
\]
The second asymptotic is satisfied if we choose
\begin{align*}
H_Q\ge \log Q\times 2^{C'\log Q/\log\log Q}\times (\log Q)^{4/K_Q}/Q^{\epsilon'(2Q)/K_Q}\\=2^{\log\log Q+C'\log 2 \log Q/\log\log Q+4\log\log Q/K_Q}e^{-\epsilon'(2Q)\log Q/K_Q}.
\end{align*}
To satisfy both of the above two inequalities, it is enough to take
\[
K_Q=\frac{\epsilon'(2Q)\log Q}{\log\log Q+\frac{C'\log 2 \log Q}{\log\log Q}}
\]
and
\[
H_Q=e^{2\log K_Q+4\log\log Q/K_Q}.
\]
This makes
\[
K_Q\asymp \log\log Q/(\log\log\log Q)^{1/2},H_Q=o((\log\log Q)^{3})
\]
and
\begin{align}
\#\{q:q\in G^l_\omega\cap [Q,2Q],F(q)\ge (\log\log Q)^{3}\}\ll 2^{l}Q^{1-\omega(2Q)}\frac{1}{(\log Q)^4}. \label{E51'}
\end{align}
So we have found a replacement of (\ref{E5}). Then the rest of the argument after (\ref{E5}) in the proof of Theorem \ref{MAIN} can be used. Thus we can assume that $\psi'$ is supported on the integers $q$ with $F(q)\ll (\log\log q)^3.$ After this step, we can use the first part of Lemma \ref{Lemma3} to deduce the result.
\end{proof}
\section{Monotonic approximation function, proofs of Theorems \ref{MAIN MONNTON}, \ref{MAIN MONO2}}
As before, we will first show the proof of Theorem \ref{MAIN MONNTON} in detail and then illustrate how to add new arguments to prove Theorem \ref{MAIN MONO2}.
\begin{proof}[Proof of Theorem \ref{MAIN MONNTON}]
First, we want to compare $\psi(q)$ with $(q\log q)^{-1}.$ Suppose that
\[
\psi(q)\ge \frac{1}{q\log q}
\]
for only finitely many integers $q.$ Then we can actually assume that
\[
\psi(q)\le \frac{1}{q\log q}
\]
for all $q\ge 2.$ We call this the finite case. Otherwise, we have the infinite case. The central ideas for proving the result in these two cases are very similar, but we need to treat them in different ways.
\subsection*{The finite case}
Suppose that $\psi(q)\le (q\log q)^{-1}.$ Then according to Remark \ref{Remark}, we want to restrict $\psi$ to integers $q$ with $F(q)\le H'$ for a suitable constant $H'>0.$ We could not do this in the proof of Theorem \ref{MAIN}; here, the monotonicity of $\psi$ will play a crucial role.
First we choose a positive number $\omega$ which is small enough so that the conclusion of Remark \ref{Remark} holds. We also need to use (\ref{E4}) from the proof of Theorem \ref{MAIN} with $K_Q=1$; for this, $\omega$ again needs to be chosen small enough.
As before, our goal now is to show that
\begin{align}
\sum_{F(q)\le H'}\psi'(q)=\sum_{\substack{q:\|q\beta-\gamma'\|\in [q^{-\omega},1]\\ F(q)\le H'}}\frac{\psi(q)}{\|q\beta-\gamma'\|}=\infty \label{*}
\end{align}
with a suitable positive number $H'.$
We make use of (\ref{E4}) with $K_Q=1$ and see that for all $Q>1024$ and $l\ge 0$,
\begin{align}
\sum_{q\in G_{\omega}^l\cap [Q/2,Q]} F(q)\le C 2^{l}Q^{1-\omega}(C+\log Q/Q^{\epsilon'/2})\le C'2^lQ^{1-\omega}\label{E4'}
\end{align}
where $C,C'>0$ are constants depending on $\gamma,\beta$ and $G_\omega^l$ is the same as defined in the previous section,
\[
G^l=G_\omega^l=\left\{q:\|q\beta-\gamma'\|\in \left[\frac{2^l}{q^{\omega}},\frac{2^{l+1}}{q^{\omega}}\right]\right\}.
\]
From (\ref{E4'}) we see that for some $C_1>0,$
\[
\#\{q\in G^l\cap [Q/2,Q]: F(q)>H'\}\le \frac{C_1}{H'} 2^l Q^{1-\omega}.
\]
On the other hand, again by Lemma \ref{Discrepancy bound}, we have for a constant $C_2>0$ depending on $\beta,\gamma,$
\[
\# G^l\cap [Q/2,Q]\ge C_2 2^l Q^{1-\omega}.
\]
In order to use Lemma \ref{Discrepancy bound} above, $1-\omega$ must be larger than the discrepancy exponent of $\beta.$ This can be achieved by choosing $\omega$ to be small enough. As a result, we see that for each even number $Q>2048,$
\begin{align}
\sum_{\substack{q\in [Q/2,Q]\\F(q)\le H'}} &\psi'(q) \nonumber\\
&\ge \psi(Q/2)\sum_{0\le l\le \omega\log Q} \sum_{\substack{q\in [Q/2,Q]\cap G^l\\ F(q)\le H'}} \frac{q^{\omega}}{2^l}\nonumber\\
&\ge \psi(Q/2) \frac{Q^{\omega}}{2^{\omega}}\sum_{0\le l\le \omega\log Q} \frac{1}{2^l}\#\{q\in G^l\cap [Q/2,Q]: F(q)\le H'\}\nonumber\\
&\ge \psi(Q/2) \frac{Q^{\omega}}{2^{\omega}} \sum_{0\le l\le \omega\log Q} \frac{1}{2^l} 2^l Q^{1-\omega}(C_2-C_1/H').\label{E5'}
\end{align}
We now choose $H'$ large enough, in a manner that depends only on $C_1,C_2$ (which in turn depend only on $\beta,\gamma$), such that
\[
C_2-C_1/H'>0.5C_2.
\]
As a result, we see that
\[
\sum_{\substack{q\in [Q/2,Q]\\F(q)\le H'}} \psi'(q)\ge 0.5\times 2^{-\omega}C_2\psi(Q/2) Q \log Q.
\]
Taking $Q=2^k$ for $k\ge 11$ we see that
\[
\sum_{\substack{q\in [2^{k-1},2^k]\\F(q)\le H'}} \psi'(q)\ge 0.5\times 2^{-\omega}C_2\psi(2^{k-1}) 2^{k} k\log 2\ge 0.5\times 2^{-\omega}C_2 \psi(2^{k-1})2^{k-1} k.
\]
Notice that as $\psi$ is non-increasing,
\[
\psi(2^{k-1})2^{k-1} k\ge \sum_{2^{k-1}\le q\le 2^{k}} \psi(q) \log q.
\]
From here we see that
\[
\sum_{\substack{q\ge 2048\\ F(q)\le H'}} \psi'(q)\ge \frac{1}{2}\sum_{k\ge 12} \sum_{\substack{q\in [2^{k-1},2^k]\\ F(q)\le H'}} \psi'(q)\ge 0.25\times 2^{-\omega}C_2\sum_{q\ge 2048} \psi(q)\log q=\infty.
\]
This establishes (\ref{*}). Then we see that we can further restrict $\psi'$ on integers $q$ with
\[
F(q)\le H'.
\]
Then we can perform the arguments in (Step 2) of the proof of Theorem \ref{MAIN} to conclude that
\[
|W(\psi',\gamma)|=1.
\]
This finishes the proof for the finite case. Before we continue the proof, let us first see how the above arguments extend to deal with the case when $\beta$ is Liouville. As in the proof of Theorem \ref{MAIN MONO2}, we need to consider $\omega$ as a function of $q$ rather than a constant. More precisely, we need
\[
\omega(q)\le 1/(\sigma_\beta(q)+1).
\]
We will return to this discussion later.
\subsection*{The infinite case}
In this case, there are infinitely many $q$ with
\[
\psi(q)\ge \frac{1}{q\log q}.
\]
Then as $\psi$ is non-increasing we can find infinitely many $Q$ such that
\[
\psi(q)\ge \frac{1}{2q\log q}
\]
for $q\in [Q/2,Q].$ For such a $Q$, consider the approximation function
\[
\psi_Q(q)=\frac{1}{2Q\log Q}
\]
when $Q/2\le q\le Q$ and $\psi_Q(q)=0$ otherwise. As there are infinitely many such $Q,$ we can choose $4096\le Q_1<Q_2<Q_3<\dots$ such that $2Q_i\le Q_{i+1}$ for $i\ge 1.$ Consider the new approximation function
\[
\psi_*=\sum_{i\ge 1} \psi_{Q_i}.
\]
This approximation function satisfies $\psi\ge \psi_*.$ We define
\[
\psi'(q)=\frac{\psi_*(q)}{\|q\beta-\gamma'\|}
\]
if $\|q\beta-\gamma'\|\in [q^{-\omega},1]$ and otherwise $\psi'(q)=0.$
By performing the same arguments as in the finite case, we use Estimate (\ref{E5'}) and see that for each $i\ge 1,$
\begin{align}
\sum_{\substack{q\in [Q_i/2,Q_i]\\ F(q)\le H'}} \psi'(q)\ge 0.5\times 2^{-\omega}C_2\psi(Q_i/2) Q_i \log Q_i\nonumber\\ \ge 0.5\times 2^{-\omega}C_2 \frac{1}{2Q_i\log Q_i} Q_i \log Q_i=2^{-\omega-2}C_2.\label{E6'}
\end{align}
Thus we see that
\[
\sum_{q\ge 2048: F(q)\le H'}\psi'(q)=\infty.
\]
The rest of the argument will be the same as in the finite case and we conclude that
\[
|W(\psi',\gamma)|=1.
\]
This finishes the proof for the infinite case and from here we conclude the theorem.
\end{proof}
\begin{proof}[Proof of Theorem \ref{MAIN MONO2}]
We retain all notation from the proof of Theorem \ref{MAIN MONNTON}. First, we re-examine the proof of the finite case of Theorem \ref{MAIN MONNTON}. The benchmark function is no longer $1/(q\log q)$ but
\[
\frac{(\log\log q)^{1/2}}{q\log q}.
\]
We saw that the arguments in the proof of Theorem \ref{MAIN MONNTON} still hold here but with $\omega$ being chosen to be a function such that
\[
\omega(q)\le 1/(\sigma_\beta(q)+1).
\]
Since we have $\sigma_\beta(q)=O((\log\log q)^{1/2}),$ we see that it is possible to choose $\omega$ in a way that
\[
\omega(q)\asymp (1/\log\log q)^{1/2}.
\]
We can use the second part of Lemma \ref{Lemma3}. Thus we need to show that the approximation function (note that $\omega$ is now a function rather than a constant)
\[
\psi'=\psi'_{\beta,\gamma',\omega}
\]
has a divergent sum over a subset on which $F$ is uniformly bounded from above, i.e.
\[
\sum_{q: F(q)\le H'} \psi'(q)=\infty.
\]
To do this, we need to use (\ref{E5'}) from the proof of Theorem \ref{MAIN MONNTON}. Here, $\omega$ needs to be replaced by $\omega(Q).$ Then we see that
\[
\sum_{q\in [Q,2Q],F(q)\le H'}\psi'(q)\gg \psi(Q/2)Q \log Q \omega(Q).
\]
It follows that
\[
\sum_{F(q)\le H'}\psi'(q)\gg \sum_{q} \frac{\psi(q)\log q}{(\log\log q)^{1/2}}=\infty.
\]
This finishes the proof for the finite case. The infinite case can be treated similarly. We examine each dyadic interval where $\psi(q)\ge \frac{(\log\log q)^{1/2}}{q\log q}.$ Then (\ref{E6'}) needs to be changed to
\begin{align*}
\sum_{q\in [Q_i/2,Q_i], F(q)\le H'} \psi'(q)\gg \frac{(\log\log Q_i)^{1/2}}{Q_i\log Q_i} Q_i \log Q_i \omega(Q_i)\gg 1.
\end{align*}
From here the proof finishes.
\end{proof}
\section{A doubly metric result}
In order to prove Corollary \ref{Coro2}, we need the following standard result.
\begin{lemma}\label{LMA}
Let $\gamma$ be an irrational and non-Liouville number. Then for Lebesgue almost all $\beta\in\mathbb{R},$ $(\gamma,\beta)$ is Diophantine.
\end{lemma}
\begin{proof}
First, it is clear that Lebesgue almost all numbers are $\mathbb{Q}$-linearly independent from $\gamma.$ Next, we let $H'$ be a large positive number and consider the set of $\beta\in [0,1]$ such that
\[
\|q_1\gamma+q_2\beta\|\le \max\{|q_1|,|q_2|\}^{-H'}
\]
for infinitely many integer pairs $(q_1,q_2)$ with $q_1q_2\neq 0.$ For such an integer pair $(q_1,q_2)$ we construct the following set
\[
A_{q_1,q_2}=\{\beta\in [0,1]: \|q_1\gamma+q_2\beta\|\le \max\{|q_1|,|q_2|\}^{-H'}\}.
\]
It is then possible to see that (if we view $[0,1]$ as $\mathbb{T}$) $A_{q_1,q_2}$ is a union of $|q_2|$ many intervals (possibly with overlaps) of length
\[
\frac{2}{|q_2| \max\{|q_1|,|q_2|\}^{H'}}.
\]
The Lebesgue measure of $A_{q_1,q_2}$ is then at most
\[
2\max\{|q_1|,|q_2|\}^{-H'}.
\]
Observe that, by choosing $H'$ to be larger than $2$, we have
\[
\sum_{|q_1|,|q_2|\ge 1} \max\{|q_1|,|q_2|\}^{-H'}\ll\sum_{k\ge 1} k^{-H'+1}<\infty.
\]
Then the convergence part of the Borel--Cantelli lemma implies that
\[
\left|\limsup_{|q_1|,|q_2|>0} A_{q_1,q_2}\right|=0.
\]
From here, we see that for Lebesgue almost all numbers $\beta\in [0,1]$ there are at most finitely many solutions $(q_1,q_2)$ to
\[
\|q_1\gamma+q_2\beta\|\le \max\{|q_1|,|q_2|\}^{-H'}.
\]
Thus there is a constant $c>0$ with
\[
\|q_1\gamma+q_2\beta\|\ge c\max\{|q_1|,|q_2|\}^{-H'}
\]
for all $|q_1|, |q_2|>0.$ We also need to consider the situation in which $q_1$ or $q_2$ is zero. We know that $\gamma$ is non-Liouville by assumption, and we can also assume that $\beta$ is non-Liouville since non-Liouville numbers are Lebesgue typical. By redefining the constants $c,H'$ if necessary, we see that for $|q_1|\ge 1$ and $|q_2|\ge 1$,
\[
\|q_1\gamma\|\ge c |q_1|^{-H'}
\]
as well as
\[
\|q_2\beta \|\ge c |q_2|^{-H'}.
\]
This implies that $(\gamma,\beta)$ is Diophantine. It is simple to extend the range for $\beta$ from $[0,1]$ to $\mathbb{R}.$ From here the proof finishes.
\end{proof}
\begin{proof}[Proof of Corollary \ref{Coro2}]
Since $\gamma$ is non-Liouville, we can apply Theorem \ref{MAIN MONNTON} whenever $\beta$ and $(\beta,\gamma)$ are both Diophantine. From Lemma \ref{LMA} we also know that for Lebesgue almost all $\beta\in [0,1],$ the pair $(\gamma,\beta)$ is Diophantine; thus such $\beta$ is also non-Liouville and irrational. Therefore, for Lebesgue almost all $\beta\in [0,1],$ both $\beta$ and $(\gamma,\beta)$ are Diophantine. We can then apply Theorem \ref{MAIN MONNTON} to those $\beta.$ We denote the set of such $\beta$ by $G.$ Then we see that
\[
\|qx-\gamma\|\|q\beta-\gamma'\|<\psi(q)
\]
infinitely often for Lebesgue almost all $x$ whenever $\beta\in G.$ Then we can use Fubini's theorem to conclude Corollary \ref{Coro2}.
\end{proof}
\subsection*{Acknowledgements.}
HY was financially supported by the University of Cambridge and the Corpus Christi College, Cambridge. HY has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 803711).
\section{Introduction}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{./pictures/F-1-PDF.pdf}
\end{center}
\caption{The paradigm of our distortion-aware feature extraction block. To illustrate the learned distortion, we use the area around the curtain in the image as an example. $81$ locations (red dots) are sampled by our DAMO network; it is clear that the sampling locations for an activation unit (green dot) are mainly around the curtain. Then, we utilize SPM to help deformable convolution focus mainly on informative regions and thus reduce the impact of distortion in panoramas.}
\label{fig:task}
\end{figure*}
3D scene perception and understanding is a fundamental technique for many applications such as robotics and intelligent vehicles. Among various 3D vision
tasks, depth estimation is highly important since it forms the basis for many downstream tasks such as obstacle avoidance and object fetching. Due to the high cost of 3D sensors (e.g., LiDARs), inferring 3D information from 2D RGB images captured by cheap consumer-level cameras is of great importance.
Dense depth estimation is a challenging pixel-level task in 3D scene understanding. Different from stereo matching and Structure from Motion (SfM) methods, monocular depth estimation is an ill-posed problem due to the information loss caused by the projection from 3D space onto a 2D image plane. That is, one pixel in a 2D image may correspond to multiple points in 3D space. Thanks to the power of deep learning, substantial advances have been achieved using prior geometric constraints extracted either explicitly or implicitly from annotated data~\cite{laina2016deeper, eigen2015predicting, yin2019enforcing}. However, most of these methods focus on predicting depth from general perspective images.
Distortion is a major challenge for tasks working on panoramas, such as classification, saliency detection, semantic segmentation and depth estimation~\cite{eder2019convolutions, lee2019spherephd, zhang2018saliency}. Directly applying conventional CNNs to panoramas (e.g., represented in the equirectangular format) can hardly achieve promising performance. Since a panorama is usually produced by stitching several perspective images captured by a perspective camera located at the same place, equirectangular projection can be considered as a transformation from a non-Euclidean space to a Euclidean space and thus introduces distortion to panoramas. As a consequence, the projections of objects have irregular shapes and the distortion becomes extremely significant for pixels close to the poles of the image plane. Therefore, standard convolution is unsuitable for panorama processing.
In this work, we propose a Distortion-Aware Monocular Omnidirectional (DAMO) network by combining strip pooling and deformable convolution to generate accurate depth maps from panoramas with distortion. We first present a distortion-aware feature extraction block to handle distortion introduced by equirectangular projection. Specifically, we utilize deformable convolution to learn offsets for the sampling grids, resulting in sampling locations that are much denser than the regular grid. We then exploit strip pooling to capture anisotropic context information from irregular regions (i.e., the distorted projections of objects) and preserve the integral distortion information for convolution sampling. In addition, to mitigate supervision bias caused by uneven sampling in different areas, we also propose an easy-to-use spherical-aware weight matrix for the objective function. Experiments on the 360D dataset demonstrate that our DAMO network achieves state-of-the-art performance with high efficiency.
Our contributions can be summarized as follows:
\begin{itemize}
\item We propose a DAMO network to handle distortion in panoramas using both deformable convolution and strip pooling module. Experiments on the 360D dataset show that DAMO is superior to the state-of-the-art.
\item We introduce a plug-and-play spherical-aware weight for our objective function to make the network focus on informative areas. This weight helps our network to achieve fast convergence and improved performance.
\end{itemize}
\section{Related Work}
We will briefly describe several existing methods related to our work in this section.
\subsection{Depth Estimation}
Depth estimation has been a hot topic for a long time. Early studies~\cite{scharstein2001a, rajagopalan2004depth, liang2019stereo} in this area focused on developing algorithms to generate point correspondences in stereo images. Different from these methods, Delage et al.~\cite{delage2006a} developed a Bayesian framework to perform 3D indoor reconstruction from one single perspective image based on a strong floor-wall assumption. Saxena et al.~\cite{saxena2005learning, saxena2009make3d} used Markov Random Fields (MRFs) to incorporate multiscale and global image features to predict depth from a single RGB image.
Eigen et al.~\cite{eigen2014depth} proposed the first deep learning based network. They used a multiscale convolutional architecture to predict results in a coarse-to-fine manner. Eigen et al.~\cite{eigen2015predicting} then adopted a multi-task training scheme to further improve the performance of their model. Laina et al.~\cite{laina2016deeper} proposed a regularization concerning loss and a uniform up-projection module for monocular depth estimation, which have been frequently used in subsequent methods. Fu et al.~\cite{fu2018deep} considered the depth estimation task as an ordinal regression problem by applying a spacing-increasing discretization strategy and a well-designed ordinal regression loss. Yin et al.~\cite{yin2019enforcing} improved the supervision capability of the objective function by randomly selecting a number of ternary points and producing a virtual plane for each ternary. With its geometric supervision, the virtual normal loss improves the convergence of the depth estimation model. However, all these methods focus on perspective images and may easily trapped into suboptimal results while being directly applied to panoramas.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{./pictures/F-2-PDF.pdf}
\end{center}
\caption{An overview of our DAMO network. We build a filter-bank layer utilizing a group of parallel rectangular convolutions to extract horizontal distortion-aware features following~\cite{zioulis2018omnidepth,BiFuse20}. Then, these features are fed into an encoder-decoder based architecture~\cite{Ma2017SparseToDense} with a skip connection at each resolution. Besides, we also add SPM to the last building block of each of the first three DAMO layers to help deformable convolution focus on contextual regions.}
\label{fig:pipline}
\end{figure*}
\subsection{Representations on Panorama} \label{subsection:representations}
Equirectangular image is one of the most widely used representations for panoramas and distortion has been a major challenge for years. Su et al.~\cite{su2017flat2sphere:} used an adaptive kernel to handle the distortion near the poles. Following this idea, Zioulis et al.~\cite{zioulis2018omnidepth} designed a set of rectangular filter-banks to deal with the horizontal distortion introduced by equirectangular projection by increasing the receptive field of conventional convolution kernels along the horizontal direction. Although the representation capability of CNNs on panoramas has been improved by these methods, the gap between omnidirectional and perspective images still exists.
Cube map is another commonly used representation for panoramas. This representation faces the challenge of inconsistency between different faces. Cheng et al.~\cite{cubepadding2018} proposed cube padding to reduce the information loss along edges between faces. Wang et al.~\cite{BiFuse20} further extended~\cite{cubepadding2018} to spherical padding and proposed a two-branch encoder-decoder based network to predict depth maps for panoramas. However, their model is hard to train due to its multiple training settings and high time cost. Inspired by the rectangular convolution in~\cite{zioulis2018omnidepth}, we exploit strip pooling~\cite{hou2020strip} to preserve more context details for convolution.
\subsection{Dynamic Mechanism} \label{dynamicmechanism}
Existing deep learning based dynamic mechanisms can be divided into two categories: weight based methods~\cite{jia2016dynamic, hu2018squeeze, ma2020weightnet} and offset based methods~\cite{jeon2017active, dai2017deformable}.
Weight based methods focus on adaptively generating weights for either feature map selection or channel-wise selection. For instance, Jia et al.~\cite{jia2016dynamic} used a flexible filter generation network to produce a set of filter operators that are dynamically conditioned on an individual input, resulting in improved performance on video and stereo prediction tasks. Besides, attention has been investigated to generate weights. Hu et al.~\cite{hu2018squeeze} proposed a light-weight gating mechanism to explicitly model channel-wise dependencies and further improve the model representation ability using global information.
Offset based methods aim at providing offsets for filters to aggregate more geometric information. Jeon et al.~\cite{jeon2017active} proposed an Active Convolution Unit (ACU) to learn its own shape adaptively during training. However, the shapes of filters have to be fixed after training. Moreover, Dai et al.~\cite{dai2017deformable} introduced deformable convolution to learn offsets for each spatial location dynamically, resulting in higher generalization capability than ACU.
\section{Proposed Method}
The overall pipeline of our DAMO network for monocular dense depth estimation on panorama is shown in Fig.~\ref{fig:pipline}. In this section, we will first introduce our Distortion-Aware (DA) module for calibrated semantic feature extraction, including strip pooling and deformable convolution. Then, we will introduce our objective function with a spherical-aware weight matrix.
\subsection{Distortion-aware Module} \label{subsection:DA}
The DA module is shown in Fig.~\ref{fig:pipline}(b). We first utilize deformable convolution to extract learnable distorted information and calibrate semantic features. After that, the learned distortion knowledge is fed into the Strip Pooling Module (SPM)~\cite{hou2020strip}. The feature maps are further activated by multiple times and thus the distortion can be sufficiently learned by our DA module.
\subsubsection{Deformable Convolution} \label{subsection:deformable}
To learn distortion knowledge and model the spatial transformation of convolution kernels on panoramas, we adopt deformable convolution~\cite{dai2017deformable} in this work. In a conventional convolutional neural network, a conv layer computes the output feature map $F$ from the input feature map ${\bf{t}}$ by sliding the kernel weights $\bf{w}$ over a set of sampling locations on a regular grid $R$. Here, the grid of a $3 \times 3$ convolution kernel with a dilation of 1 is defined as $R = \{(\pm{1},\pm{1}),(\pm{1},0),(0,\pm{1}),(0,0)\}$.
According to~\cite{dai2017deformable}, deformable convolution can be formulated as:
\begin{equation}
\begin{aligned}
{F({\boldsymbol{p_{0}}}) = \sum_{ {\boldsymbol{p_{n}}} \in R} {\bf{w}}( {\boldsymbol{p_{n}}}) \cdot {\bf{t}}({{\boldsymbol{p_{0}}} + {\boldsymbol{p_{n}}} + \Delta {\boldsymbol{p_{n}}}})}
\end{aligned}
\label{deformableconv}
\end{equation}
\noindent where ${\boldsymbol{p_{0}}}$ represents a location on output feature map $F$, ${\boldsymbol{p_{n}}}$ is one position on the regular sampling grid $R$ and $\Delta {\boldsymbol{p}_{n}}$ represents the offset corresponding to the position of ${\boldsymbol{p_{n}}}$.
The $i$-th input feature map (with a size of $C_{i} \times B_{i} \times H_{i} \times W_{i}$) is first fed into a convolutional layer to generate a group of 2D offsets for each sampling location on grid $R$. The offsets for both vertical and horizontal translation on $R$ have a size of $18 \times B_{i} \times H_{i} \times W_{i}$. Here, we use a $3 \times 3$ sampling grid $R$ as an example. Then, the sampling grid for deformable convolution is generated through Eq.~\ref{deformableconv} using the 2D offsets $\Delta \boldsymbol{p}_{n}$. Since most of the learned offsets are fractional, bilinear interpolation is used to sample the input feature map at the resulting fractional locations~\cite{dai2017deformable,zhu2019deformable}.
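To make the above construction concrete, we provide a minimal PyTorch-style sketch of a distortion-aware convolution block below. It only illustrates Eq.~\ref{deformableconv}: a plain convolution predicts the $18$ offset channels of a $3\times 3$ kernel, and a deformable convolution consumes them. The sketch relies on \texttt{torchvision.ops.DeformConv2d} (assuming a recent \texttt{torchvision}); the class name, layer sizes and initialization are illustrative assumptions rather than our exact implementation.
\begin{verbatim}
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    """3x3 deformable convolution with learned 2D offsets (illustrative)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Two offsets (dx, dy) for each of the 3x3 = 9 sampling locations.
        self.offset_conv = nn.Conv2d(in_ch, 2 * 3 * 3, kernel_size=3, padding=1)
        nn.init.zeros_(self.offset_conv.weight)  # start from the regular grid R
        nn.init.zeros_(self.offset_conv.bias)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):                    # x: (B, C_in, H, W)
        offsets = self.offset_conv(x)        # (B, 18, H, W)
        return self.deform_conv(x, offsets)  # (B, C_out, H, W)
\end{verbatim}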
\subsubsection{Strip Pooling} \label{subsection:strip}
To preserve more distortion information and help the network to focus on informative regions, we adopt strip pooling~\cite{hou2020strip} in our DA module. Different from standard spatial max pooling that adopts two-dimensional sampling grids, sampling along the vertical and horizontal orientations is performed separately in strip pooling. That is, contextual information in a feature map is selected either along a row or along a column (as shown in Fig.~\ref{fig:pipline}). Similar to the original definition in~\cite{hou2020strip}, strip pooling is defined as:
\begin{equation}
\begin{cases}
y_{c, i}^{h} = {\max\limits_{0 \leq j < W}{x_{c,i,j}}} \\
y_{c, j}^{v} = {\max\limits_{0 \leq i < H}{x_{c,i,j}}}
\end{cases}
\label{strip}
\end{equation}
where $x_{c,i,j}$ is a location of the sampling grid on the input (i.e., ${\bf{x}} \in \mathbb{R}^{C \times H \times W}$), and $W$ and $H$ also serve as the kernel sizes of strip pooling along the horizontal and vertical orientations, respectively. ${y_{c,i}^{h} \in {\bf{y}^{h}}}$ and ${y_{c,j}^{v} \in {\bf{y}^{v}}}$ represent the $i$-th and $j$-th entries of the feature map of channel $c$ for horizontal and vertical strip pooling, respectively.
Both vertical and horizontal strip pooling layers are used in SPM. Once these two expanded output feature maps (with the same size as the input) are added, an element-wise multiplication $G(\cdot,\cdot)$ is used to activate the contextual geometric areas of the input. SPM is sensitive to anisotropic context and produces denser distortion feature maps than the regular max pooling module. Given the combination ${\bf{y}}$ of ${\bf{y^{h}}}$ and ${\bf{y^{v}}}$, the output ${\bf{z}}$ of SPM can be formulated as:
\begin{equation}
\begin{aligned}
{{\bf{z}} = G({\bf{x}}, \sigma(f({\bf{y}})))}
\end{aligned}
\label{spm}
\end{equation}
where $\sigma$ is the sigmoid function, $f$ is the last $1 \times 1$ convolution layer, and the fusion ${y_{c,i,j} \in \bf{y}}$ of the horizontal and vertical strip pooling information is defined as ${y_{c,i,j} = y_{c,i}^{h} + y_{c,j}^{v}}$.
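A minimal sketch of the SPM described by Eqs.~\ref{strip} and \ref{spm} is given below. It follows the formulas literally (max pooling over each row and each column, broadcast addition, a $1\times 1$ convolution and a sigmoid gate); note that this is an illustration of the equations above rather than the exact module of~\cite{hou2020strip}, and the class and variable names are our own.
\begin{verbatim}
import torch
import torch.nn as nn

class StripPoolingModule(nn.Module):
    """Strip pooling gate: z = x * sigmoid(f(y^h + y^v)) (illustrative)."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)  # the 1x1 conv f

    def forward(self, x):                          # x: (B, C, H, W)
        y_h = x.max(dim=3, keepdim=True).values    # (B, C, H, 1): max over each row
        y_v = x.max(dim=2, keepdim=True).values    # (B, C, 1, W): max over each column
        y = y_h + y_v                              # broadcast to (B, C, H, W), Eq. (2)
        return x * torch.sigmoid(self.fuse(y))     # Eq. (3): z = G(x, sigma(f(y)))
\end{verbatim}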
We apply our SPM to each conv$k$ layer of the DAMO layer (i.e., conv2, conv3, conv4 and conv5, as defined in ResNet~\cite{he2016deep}). Specifically, SPMs are used in the last building block of the first three DAMO layers and are stacked for all building blocks of the DA module (see Fig.~\ref{fig:pipline}).
\subsection{Spherical-aware Weighted Loss} \label{subsection:Spherical-Weighted}
Since an equirectangular image represents a panorama in 2D space, distortion is extremely large in the areas around the poles of the sphere. Specifically, for two regions with the same coverage on the spherical surface, the region near a pole occupies a much larger area in the equirectangular image than the region near the equator due to the uneven projection. Loss functions designed for perspective images~\cite{zioulis2018omnidepth, Eder_2020_CVPR, cheng2020depth} can hardly produce optimal results because they over-weight the information-sparse areas near the poles and under-weight the information-dense areas near the equator.
\subsubsection{Weight Matrix}
To achieve balanced supervision in different areas, we introduce a spherical-aware weighting strategy for the objective function. Specifically, the Cartesian coordinates of a pixel ${p_{E} = D(x,y)}$ in the equirectangular image can be converted to spherical coordinates ${p_{S} = \Pi(\theta, \phi)}$ on a spherical surface, where the longitude ${\theta \in [0 , 2\pi]}$ and the latitude ${\phi \in [0, \pi]}$. That is, $\phi_{(x,y)} = \frac{\pi x}{\rm{H}}, \theta_{(x,y)} = \frac{2 \pi y}{\rm{W}}$.
Considering a sphere $\Pi(\theta, \phi)$ with unit radius, we generate a weight matrix for the objective function according to the polar angle of each pixel. Taking the northern hemisphere as an example, the weight is proportional to the area of the spherical cap between the north pole and the current latitude. The weights in the southern hemisphere can be calculated in a similar way.
\begin{equation}
\begin{aligned}
{\mathrm{W}_{(x,y)} = {\int_{0}^{\phi_{(x,y)}}{{\rm{sin}}{\phi} d_{\phi}}}}
\end{aligned}
\label{weight}
\end{equation}
Here, $\mathrm{W}$ is the per-location weight matrix for the objective function, and $\phi_{(x,y)}$ denotes the polar angle of the position $(x,y)$ along the vertical axis.
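Evaluating the integral in Eq.~\ref{weight} gives $\mathrm{W}_{(x,y)}=1-\cos\phi_{(x,y)}$ on the northern hemisphere; a small NumPy sketch of the resulting weight matrix is shown below. The mirrored treatment of the southern hemisphere (measuring the angle from the nearer pole) is our reading of ``calculated in a similar way'' and should be taken as an assumption.
\begin{verbatim}
import numpy as np

def spherical_weight(height, width):
    """Spherical-aware weight matrix for an equirectangular image (illustrative)."""
    phi = np.pi * np.arange(height) / height    # phi_(x, y) = pi * x / H, per row
    phi = np.minimum(phi, np.pi - phi)          # measure from the nearer pole (assumption)
    w_row = 1.0 - np.cos(phi)                   # integral of sin(phi) from 0 to phi
    return np.tile(w_row[:, None], (1, width))  # the weight only depends on the row
\end{verbatim}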
\begin{table*}
\centering
\caption{Comparison to the state-of-the-art $360^{\circ}$ monocular depth estimation methods on the 360D Dataset. The method with $^{\divideontimes}$ represents a model provided by the authors (with its Caffe-based weights converted to PyTorch-based weights), and the method with $^{\star}$ denotes our reproduction. Note that the results and metrics reported in~\cite{BiFuse20} are different from those in~\cite{zioulis2018omnidepth}; to follow the baseline method of the 360D Dataset, we convert its \textit{RMSE(log)} results from base-10 logarithm to natural logarithm. Besides, the results of~\cite{zioulis2018omnidepth} are updated at the authors' GitHub repository$^{1}$.}
\label{table:360D}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
{\multirow{2}{*}{\textbf{Method}}}&\textbf{RMSE}&\textbf{Abs\_REL}&\textbf{RMSE(log)}&\textbf{$\delta_{1}$}&\textbf{$\delta_{2}$}&\textbf{$\delta_{3}$}\\
\cline{2-7}
~&\multicolumn{3}{c|}{Lower the better}&\multicolumn{3}{c|}{Higher the better}\\
\hline
OmniDepth-UResNet~\cite{zioulis2018omnidepth} & 0.3084 & 0.0946 & 0.1315 & 0.9133 & 0.9861 & 0.9962 \\
OmniDepth-RectNet~\cite{zioulis2018omnidepth} & 0.2432 & 0.0687 & 0.0999 & 0.9583 & 0.9936 & 0.9980 \\
\hline
BiFuse-Equi~\cite{BiFuse20} & 0.2667 & - & 0.1006 & 0.9667 & 0.9920 & 0.9966 \\
BiFuse-Cube~\cite{BiFuse20} & 0.2739 & - & 0.1029 & 0.9688 & 0.9908 & 0.9956 \\
BiFuse-Fusion~\cite{BiFuse20} & 0.2440 & - & \textit{0.0985} & \textit{0.9699} & 0.9927 & 0.9969 \\
\hline
OmniDepth-RectNet${}^{\divideontimes}$ & \textit{0.2297} & 0.0641 & 0.0993 & 0.9663 & \textit{0.9951} & \textit{0.9984} \\
\hline
BiFuse-Equi${}^{\star}$ & 0.2415 & \textit{0.0573} & 0.1000 & 0.9681 & 0.9928 & 0.9972 \\
\hline
DAMO & \textbf{0.1769} & \textbf{0.0406} & \textbf{0.0733} & \textbf{0.9865}& \textbf{0.9966} & \textbf{0.9987} \\
\hline
\end{tabular}
\end{table*}
\subsubsection{Loss Function}
Following~\cite{yin2019enforcing, fang2020towards}, we adopt the reverse Huber loss (also called the Berhu loss)~\cite{laina2016deeper} in this work.
\begin{equation}
\mathcal{L}_{d}=
\begin{cases}
\left|d{_{pre}}-d{_{gt}}\right|, & \text{if $\left|d{_{pre}}-d{_{gt}}\right| \leq \tau$} \\
\frac {(d{_{pre}}-d{_{gt}})^2+\tau^2}{2\tau}, & \text{if $\left|d{_{pre}}-d{_{gt}}\right| > \tau$}
\end{cases}
\label{berhu}
\end{equation}
\noindent where $d_{pre}$ and $d_{gt}$ are the predicted and groundtruth depth values, respectively. The Berhu loss can achieve a good balance between the L1 and L2 norms. Specifically, pixels with large residuals are assigned larger weights by the L2 term, while the L1 term pays more attention to regions with small residuals. In our experiments, we set the threshold $\tau$ to $20\%$ of the maximum error between the prediction and the groundtruth.
Finally, our loss function is defined as:
\begin{equation}
{\mathcal{L} = \mathcal{L}_{d} \otimes \mathrm{W} }
\label{weight-berhu}
\end{equation}
Compared to the original Berhu loss $\mathcal{L}_{d}$, our weighted Berhu loss $\mathcal{L}$ can help the network to focus more on informative regions (i.e., regions near the equator) and mitigate the effects introduced by severe distortion (which is significant in areas near the poles) in panoramas.
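A compact PyTorch-style sketch of the weighted Berhu loss of Eqs.~\ref{berhu} and \ref{weight-berhu} is given below. The reduction to a scalar by averaging, the numerical floor on $\tau$ and the optional validity mask are our own assumptions and are not prescribed by the formulas above.
\begin{verbatim}
import torch

def weighted_berhu_loss(pred, gt, weight, mask=None):
    """Reverse Huber (Berhu) loss, weighted per pixel by W (illustrative)."""
    diff = (pred - gt).abs()
    if mask is not None:                            # optionally ignore invalid depths
        diff = diff * mask
    tau = torch.clamp(0.2 * diff.max(), min=1e-6)   # 20% of the maximum error
    berhu = torch.where(diff <= tau, diff, (diff ** 2 + tau ** 2) / (2.0 * tau))
    return (berhu * weight).mean()                  # Eq. (6): element-wise weighting
\end{verbatim}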
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{./pictures/F-3-PDF.pdf}
\end{center}
\caption{Qualitative Comparison on the 360D Dataset. Here, we colorize these depth maps to better distinguish the effectiveness of different methods. Points with dark color (blue) are closer than those with light color (red). The first row shows equirectangular RGB images, while invalid areas are masked with black in groundtruth (in the third row). It is obvious that our DAMO generates more accurate depth maps than existing methods, especially for those distorted areas.}
\label{fig:com1}
\end{figure*}
\section{Experiments} \label{section:Experiments}
We conduct extensive experiments on a widely used omnidirectional dataset to evaluate the performance of our DAMO network. We will first describe the dataset and the evaluation toolbox, and then present the implementation details of our experiments. We further compare our method to the state-of-the-art.
\subsection{Experimental Settings}
\subsubsection{Dataset}
We adopt a large-scale indoor omnidirectional RGBD dataset (i.e., the 360D Dataset~\cite{zioulis2018omnidepth}) to conduct experiments. This dataset contains two real-world datasets (i.e., Stanford2D3D~\cite{Armeni2017Joint} and Matterport3D~\cite{Matterport3D}), and two synthetic datasets (i.e., SunCG~\cite{song2017semantic} and SceneNet~\cite{handa2016scenenet}). Following the original split in ~\cite{zioulis2018omnidepth}, the training and test sets are listed as follows:
\begin{itemize}
\item Training set: a cross-domain set of both real-world and synthetic images from the Stanford2D3D, Matterport3D and SunCG datasets was first obtained. Then, scenes with very close or far regions were removed, resulting in a training dataset with 34,679 RGBD image pairs.
\item Test set: 1,289 omnidirectional image pairs were collected from the Stanford2D3D, Matterport3D and SunCG datasets for testing. The image pairs from the SceneNet dataset were used for validation.
\end{itemize}
\subsubsection{Implementation Details}\label{subsubsection:implement}
Our network was implemented in PyTorch with a single Nvidia RTX Titan GPU. Each RGB image has a resolution of $512 \times 256$, and invalid depth values are masked out. The batch size was set to 8 and the initial learning rate was set to $1 \times 10^{-4}$. We used the Adam optimizer~\cite{kingma2014adam} with its default settings and a poly learning rate policy~\cite{fu2019dual} for training. All models were trained for 20 epochs on the 360D dataset for fair comparison.
\subsubsection{Evaluation Metrics} \label{subsubsection:Metrics}
We adopted the same metrics as previous works~\cite{zioulis2018omnidepth, cheng2020depth} for fair comparison, including the average Absolute Relative Error (Abs\_REL), Root Mean Squared Error (RMSE), Root Mean Squared Error in logarithmic space (RMSE(log)) and the accuracy under a threshold $\delta_{t}$, where ${t \in \{1.25, 1.25^{2}, 1.25^{3} \} }$. Note that we used the same evaluation strategy as~\cite{zioulis2018omnidepth}. That is, predicted depth maps were scaled by the median ratio $\bar{s} = {\rm{median}}{(D_{GT})} / {\rm{median}}{(D_{Pred})}$ to enable direct comparison among multiple datasets with different range scales.
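For reference, a short NumPy sketch of the median-scaled metrics, as we read the protocol above, is given below; the exact masking of invalid pixels and the numerical safeguards are assumptions on our part.
\begin{verbatim}
import numpy as np

def evaluate(pred, gt, eps=1e-8):
    """Median-scaled depth metrics over valid pixels (illustrative)."""
    pred = pred * (np.median(gt) / np.median(pred))  # align scales by the median ratio
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(pred + eps) - np.log(gt + eps)) ** 2))
    ratio = np.maximum(pred / gt, gt / pred)
    deltas = [np.mean(ratio < 1.25 ** k) for k in (1, 2, 3)]
    return abs_rel, rmse, rmse_log, deltas
\end{verbatim}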
\subsection{Comparison to the State-of-the-art} \label{subsubsection:sota}
We compare our DAMO to two existing methods. Note that only the Caffe model is provided by~\cite{zioulis2018omnidepth}, while the source code is provided by~\cite{BiFuse20}. We used the released model or source code for comparison in this work.
\subsubsection{Quantitative Comparison}
As shown in Table~\ref{table:360D}, our method outperforms the baseline method~\cite{zioulis2018omnidepth} by a large margin. Specifically, the {Abs{\_}REL} and {RMSE} values of DAMO are better than those of
OmniDepth-RectNet$^{\divideontimes}$ by $\textbf{36.5\%}$ and $\textbf{22.9\%}$, respectively. Although the same backbone (i.e., ResNet-50~\cite{he2016deep}) is used in BiFuse-Equi~\cite{BiFuse20}, BiFuse-Fusion~\cite{BiFuse20}, and our DAMO model, our model achieves significant performance improvements over the BiFuse-Equi and BiFuse-Fusion methods in almost all metrics. Note that the BiFuse-Fusion method adopts an additional cube map representation branch and thus has more parameters to tune. Besides, the cube map representation has obvious discontinuities between neighboring faces. Although different padding schemes have been proposed to mitigate the edge effect~\cite{cubepadding2018, BiFuse20}, the computational efficiency of this representation is still low. In contrast, our DAMO network has only one equirectangular branch to predict depth on panoramas and outperforms BiFuse-Fusion by nearly $\textbf{26.7\%}$ in RMSE.
\footnotetext[1]{https://github.com/VCL3D/360Vision/tree/master/SingleImageDepthMetrics.}
The superiority of our DAMO network can be attributed to two reasons. \textbf{First}, the DA module can automatically adjust its sampling grids to the distorted projections of objects in panoramas and extract rich geometric information in challenging areas (e.g., polar areas). \textbf{Second}, our weighted Berhu loss helps our model focus on informative areas (especially those near the equator) and alleviates the supervision bias caused by distortion. Consequently, the representation capability of our network on panoramas is improved.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{./pictures/F-4-PDF.pdf}
\end{center}
\caption{Ablation study with strip pooling and deformable convolution on the 360D Dataset. Models with strip pooling can predict clearer depth regions than the base model, while models with deformable convolution generate much sharper boundaries than those with regular convolution. Integrating both strip pooling and deformable convolution in our DA module can further improve the performance.}
\label{fig:com2}
\end{figure*}
\subsubsection{Qualitative Comparison}
We present several predicted depth maps from the 360D dataset in Fig.~\ref{fig:com1}. It can be observed from the zoom-in regions that DAMO can predict more accurate and clearer depth maps than RectNet~\cite{zioulis2018omnidepth} and BiFuse-Equi~\cite{BiFuse20} on panoramas. It is also shown that fine details such as chair and vase (as denoted by a small arrow in Fig.~\ref{fig:com1}) can be successfully estimated. This further demonstrates the representation capability of our DAMO network on various types of objects in equirectangular images.
\setlength{\tabcolsep}{1mm}{
\begin{table}
\centering
\caption{Ablation study with strip pooling and deformable convolution. The second and third parts show the performance achieved by networks with SPM and deformable convolution (where `D' represents deformable convolution), respectively. $\mathcal{L}$ indicates the spherical-aware weighted loss. All models are trained with the same settings and the best results in each part are shown in boldface.}
\label{table:ablations}
\begin{tabular}{|c|c|c|c|c|}
\hline
{\multirow{2}{*}{\textbf{Method}}}&{\multirow{2}{*}{\textbf{Param.}}}&\textbf{RMSE}&\textbf{Abs\_REL}&\textbf{RMSE(log)}\\
\cline{3-5}
~&~&\multicolumn{3}{c|}{Lower the better}\\
\hline
Base Model & {\multirow{2}{*}{\textbf{61.28M}}} &0.2129 & 0.0598 & 0.0959 \\
+ $\mathcal{L}$ &~& \textbf{0.2068} & \textbf{0.0562} & \textbf{0.0904} \\
\hline
+ SPM & {\multirow{2}{*}{+ \textbf{5.83M}}} & 0.1879 & 0.0433 & 0.0769 \\
+ $\mathcal{L}$ &~& \textbf{0.1803} & \textbf{0.0419} & \textbf{0.0749} \\
\hline
+ D (conv5) & {\multirow{2}{*}{+ \textbf{0.24M}}} & 0.1906 & 0.0471 & \textbf{0.0789} \\
+ $\mathcal{L}$ &~& \textbf{0.1882} & \textbf{0.0455} & 0.0794 \\
\hline
\end{tabular}
\end{table}
}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{./pictures/F-5-PDF.pdf}
\end{center}
\caption{Generalization analysis for the proposed methods. We evaluate existing methods in SceneNet to test the generalization ability of the model.}
\label{fig:scannet}
\end{figure*}
\subsection{Ablation Study} \label{subsubsection:ablation}
In this section, we conduct experiments to illustrate the effectiveness of each component of our DAMO network.
\subsubsection{Effectiveness of SPM}
We will first analyze the benefit of exploiting SPM in our DA module. As illustrated in Table~\ref{table:ablations}, the improvement brought by SPM is significant. While the number of parameters of the model is increased by {$\textbf{5.83}$M} (nearly $9.5\%$ compared to the base model), the performance is improved by about $\textbf{27.5\%}$ and $\textbf{11.7\%}$ in {Abs\_REL} and {RMSE}, respectively. We argue that this is a good trade-off between network complexity and performance. As shown in Fig.~\ref{fig:com2}, the base model produces many artifacts, especially near the south and north poles. In contrast, the base model with SPM can extract rich contextual information and predicts fine depth maps on panoramas.
\subsubsection{Deformable vs. Regular Convolution}
We compare deformable convolution to its regular counterpart in this section. As shown in Table~\ref{table:ablations}, the performance of our baseline is improved significantly with deformable convolution. Specifically, {Abs{\_}REL} is improved from $0.0598$ to $0.0471$ (by nearly $\textbf{21.2\%}$) and RMSE is improved from $0.2129$ to $0.1906$ (by nearly $\textbf{10.4\%}$) by integrating deformable convolution into our DAMO network.
As shown in Figs.~\ref{fig:com2}(c) and (e), the model with deformable convolution generates much cleaner depth maps on walls, ceilings and floors. Note that distortion exists mainly in these areas, since ceilings and floors are always located at the north and south poles in images of the 360D dataset, and long walls usually extend from left to right in indoor scenes. It is clear that deformable convolution can learn reasonable offsets to model the transformation of sampling grids and mitigate the harmful distortion effects that panoramas introduce for CNNs. Moreover, deformable convolution produces sharper object boundaries and more accurate depth results in some difficult regions (e.g., see Fig.~\ref{fig:com2}(e)).
\subsubsection{Deformable Convolution with SPM}
Here, we investigate how these two components of our DA module work together in synergy. As shown in Tables~\ref{table:360D} and \ref{table:ablations}, although deformable convolution or SPM alone improves our base model by more than $\textbf{10.0\%}$ in {Abs\_REL} and {RMSE}, the improvement of their combination is even more significant. Specifically, our DAMO network outperforms the base model by nearly $\textbf{31.9\%}$ in {Abs\_REL} and $\textbf{16.9\%}$ in {RMSE}. We believe the main reason for the improvement on panoramas is that the DA module learns the distortion in this domain. The synergy of SPM and deformable convolution is obvious: SPM activates informative regions and helps deformable convolution focus on challenging areas by learning the transformation of sampling grids on panoramas.
\subsubsection{Spherical-aware Weight vs. Regular Weight} \label{subsubsection:weight}
We conduct ablation experiments to demonstrate the effectiveness of our spherical-aware weight for the objective functions. Table~\ref{table:ablations} shows the results of each component with/without the proposed weight. The largest improvement is achieved on our base model: its {Abs{\_}REL} is improved from 0.0598 to 0.0562 (by nearly $\textbf{6.0\%}$) and RMSE is improved from 0.2129 to 0.2068 (by nearly $\textbf{2.8\%}$). When SPM or deformable convolution is used in the model, the spherical-aware weight can still introduce further performance improvement.
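The exact form of the spherical-aware weight is not restated here; the sketch below assumes the common solid-angle weighting $\cos(\mathrm{latitude})$ on an equirectangular grid, which likewise emphasizes supervision near the equator, and is therefore only an illustrative approximation of the proposed weighted objective.
\begin{verbatim}
import math
import torch

def spherical_weighted_l1(pred, gt):
    # pred, gt: (B, 1, H, W) equirectangular depth maps
    B, _, H, W = pred.shape
    lat = (torch.arange(H, dtype=pred.dtype, device=pred.device) + 0.5) / H
    lat = (lat - 0.5) * math.pi                  # latitude in (-pi/2, pi/2)
    w = torch.cos(lat).view(1, 1, H, 1)          # per-row solid-angle weight
    return (w * (pred - gt).abs()).sum() / (w.sum() * B * W)
\end{verbatim}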
\subsubsection{Generalization Analysis}
To further demonstrate the generalization capability of our DAMO network, we test our method without fine-tuning on the validation split of the 360D dataset (i.e., SceneNet~\cite{handa2016scenenet}). It is clear that DAMO outperforms other methods. As shown in Fig.~\ref{fig:scannet}, DAMO predicts finer details and its results are closer to the ground truth than those of other methods. Compared to the test set of 360D, the overall performance drop of our DAMO network on the validation split is lower than that of other methods. For instance, our RMSE increases by $0.1088$, whereas the RMSEs of BiFuse-Equi and OmniDepth-RectNet increase by $0.1437$ and $0.1471$, respectively. This is because RectNet and BiFuse-Equi do not consider the distortion of panoramas and therefore overfit on the training set of 360D. More quantitative details are reported in our supplementary material.
The superiority of DAMO can be attributed to our DA module, which consists of SPM and deformable convolution. Specifically, reasonable offsets are learned with deformable convolution and its sampling grids can target distorted projection of objects at different locations of a panorama. Therefore, our DAMO obtains transformation capability of sampling grids against distortion. Besides, the element-wise multiplication in SPM generates the connection among different channels in each input. Consequently, the transferability of DAMO is also improved. The DA module improves the overall representation capability of our network, especially along occlusion boundaries of objects (see the frame in Fig.~\ref{fig:scannet}).
\section{Conclusion}
We presented a method for omnidirectional dense depth estimation. We introduce deformable convolution to handle the distortion of panoramas and use strip pooling to improve the generalization ability of our DAMO network. To alleviate the supervision bias caused by distortion, we further introduce a spherically weighted objective function to aggregate the abundant information near the equator of the sphere. Experiments show that the proposed method outperforms the state-of-the-art monocular omnidirectional depth estimation methods.
\section{Introduction}
A story is anything which is narrated in the form of a causally/logically linked set of event plots involving some shared characters \cite{mostafazadeh-etal-2016-corpus}. Story generation aims at automatically generating stories with such attributes.
Different from other text generation tasks, story generation is more challenging because it is subject to several constraints: (1) it must maintain \textbf{consistent plots} to form a reasonable story; (2) it must guarantee the \textbf{consistency} of the characters throughout the story; (3) the \textbf{coherency} of the text units such as clauses or sentences must be maintained. In order to improve plot consistency, some previous works focus on the perspective of plot planning and then merge the text units according to the order of plots \cite{lebowitz1987planning,perez2001mexica,porteous2009controlling,riedl2010narrative,li2013story}. We observe that those approaches rely heavily on human annotations and are restricted to the abstract story representation level without surface realization in natural language, such as producing event verb sequences and sentence segments. Therefore, these methods need to work with sentence templates or rules to generate stories.
\begin{table}[!t]
\centering
\begin{tabular}{p{0.98\columnwidth}}
\toprule
\small \textbf{Prompt}: The zombie apocalypse is upon us but all humans are immune to the virus . All other mammals are susceptible \\
\midrule
\small \textbf{Story}: As I sat there I tried to remember how many days it had been since the last one died . $[$\textcolor{blue}{My wife and daughter were dead , I knew they would n't be alive any time soon .}$]_{1a}$ The food ran out quickly , \textcolor{red}{they} had to be taken away to stay in the house , but that was no reason to be angry if there were a few people left living there . After a month of food being gone a little girl sat next to me . She was the only other survivor who was n't bitten , as it was hard for us to eat \textcolor{red}{them} , we were all sickly . That would be another month to go and if the other animals were still alive in the food , they would n't be able to eat her . `` Hey little one . '' I looked up and saw her . $[$\textcolor{blue}{I had a big smile on my face at this point .}$]_{2a}$ $[$\textcolor{blue}{My wife was the only one who noticed , but her smile was all that was visible of me , and not of her .}$]_{1b}$ $[$\textcolor{blue}{I looked at her , and smiled sadly .}$]_{2b}$ She looked sad , but did n't care . She never cared for me .
\\
\bottomrule
\end{tabular}
\caption{A story generated by GPT2.}
\label{case:storyexample}
\vspace{-11mm}
\end{table}
In the past few years, several end-to-end approaches based on Sequence-to-Sequence (Seq2Seq) models \cite{sutskever2014sequence,bahdanau2014neural} have been proposed, which can generate a story in one pass in a left-to-right manner \cite{jain2017story,clark2018neural,fan2018hierarchical}. These methods are data-driven and can directly generate stories in natural language form instead of other abstract representations. However, these methods struggle to capture the high-level interactions between plot points and to maintain consistent plots throughout the story. Thus, several two-stage models for story generation have recently been proposed \cite{martin2018event,xu2018skeleton,yao2019plan,fan-etal-2019-strategies,chen2019learning}. These models usually decompose story generation into two stages: generating a middle form first and then generating the final story. Different middle forms are applied in these methods, such as keywords, sentences and event tuples.
Recently, the OpenAI GPT2/3 language models \cite{radford2019language,brown2020language} achieve strong performance on several language generation tasks. \cite{see-etal-2019-massively} and \cite{guan2020knowledge} verify the performance of GPT2 on story generation, and GPT2 outperforms both end-to-end methods and two-stage methods.
However, after analyzing the generated stories carefully, we observe that there are still some serious issues in the generated stories by GPT2.
Take the story generated by GPT2 shown in Table \ref{case:storyexample} as an example. The story is about survivors at the end of the world. First, plot consistency cannot be guaranteed among multiple sentences of a story, as shown by the blue sentences in Table \ref{case:storyexample}. Sentence $1a$ states that ``My wife and daughter were dead'', but sentence $1b$ talks about ``My wife'' again, which is contradictory. The same problem occurs in sentences $2a$ and $2b$. Second, there are still coreference errors in the generated stories, such as the red words in Table \ref{case:storyexample}: it is not clear whom \textit{they} and \textit{them} refer to.
Moreover, top-$k$ sampling~\cite{radford2019language,see-etal-2019-massively,brown2020language} is usually utilized as the decoding strategy in long text generation. The random operation in sampling disturbs the generation procedure by producing improper tokens, which decreases the quality. This phenomenon is more pronounced at sentence boundaries; therefore, we can sometimes observe poor discourse coherency.
To solve the aforementioned problems, we propose a two-stage generation model based on Transformer-based auto-regressive language models to improve the consistency and coherency of stories. Specifically, the first stage organizes the story outline, which depicts the story plots and events, and the second stage expands the outline into a complete story. Therefore, plot consistency can be controlled and guaranteed explicitly. In addition, coreference supervision signals are incorporated to reduce coreference errors and improve coreference consistency.
Moreover, we design an auxiliary task of discourse relation modeling to enhance the discourse coherency of the generated stories.
Both backbone models in the two stages are designed based on Transformer-based language models. Thus, on one hand, the framework can still inherit the superior performance of GPT2; on the other hand, it can guarantee plot consistency, coreference consistency, as well as discourse coherency.
The main contributions of this paper are summarized as follows:
\begin{itemize}[topsep=0pt]
\setlength\itemsep{-0.4em}
\item We propose to improve the plot and coreference consistency as well as the discourse coherency for the task of story generation.
\item A two-stage framework based on Transformer-based language models is designed to control the plots and improve consistency of generated stories.
\item A coreference constraint is applied to improve the coreference consistency of generated stories.
\item We design a discourse relation modeling component as an auxiliary task during training to enhance the performance of discourse coherency.
\item Experiments on a story dataset from Reddit demonstrate that our model outperforms the baseline methods in terms of both automatic metrics and human evaluation.
\end{itemize}
\section{Methodology}
\subsection{Overview}
\begin{figure*}[!ht]
\centering
\includegraphics[width=15cm]{allall.pdf}
\caption{The framework of our model for story generation.}
\label{fig:allmethod}
\end{figure*}
To begin with, we state the problem of story generation as follows: given a prompt context $\mathbf{X} = \{x_1,...,x_i...,x_k\}$ where $x_i$ denotes each word in the prompt, the model needs to generate a story $\mathbf{Y} = \{y_1,...,y_i,...,y_n\}$ following the prompt $\mathbf{X}$ by maximizing the conditional probability $p(\mathbf{Y}|\mathbf{X})$.
As shown in Figure \ref{fig:allmethod}, to enhance the consistency and coherency of generated stories, we propose a two-stage framework for story generation. The first stage is story outline generation, which generates the plot outline based on the given prompt. Then, in the second stage, the whole story is completed by embellishing the outline generated in the first stage.
Transformer-based language models are introduced as the backbone models for those two stages respectively.
Moreover, a component of discourse relation classification is incorporated to the language model as an auxiliary task to further improve the coherency of the generated stories.
To further improve the consistency, we design a coreference supervision component to encourage the language model to attend on correct entities when generating pronouns by maximizing the attention weights of the corresponding entities.
\subsection{Transformer-based Language Model}
Inspired by the popular pre-trained language models for text generation such as GPT2 \cite{radford2019language}, XLNET \cite{yang2019xlnet} and GPT3 \cite{brown2020language}, we also employ the Transformer-based auto-regressive language models as our backbone frameworks.
Transformer-based language models only contain a decoder. The decoder consists of $N$ identical self-attention blocks, and each block contains two sub-layers: a multi-head self-attention layer and a feed-forward layer. An add \& norm layer is employed around each of the two sub-layers. Formally, given the input $\mathbf{H}^{n-1}$, the output $\mathbf{H}^{n}$ of each decoder block is computed as follows:
\begin{align}
\mathbf{C}^{n} & = \operatorname{LN}\left(\operatorname{SELF-ATT}\left(\mathbf{H}^{n-1}\right)+\mathbf{H}^{n-1}\right) \\
\mathbf{H}^{n} &=\operatorname{LN}\left(\operatorname{FFN}\left(\mathbf{C}^{n}\right)+\mathbf{C}^{n}\right)
\end{align}
where $\operatorname{SELF-ATT}(\cdot)$, $\operatorname{LN}(\cdot)$, and $\operatorname{FFN}(\cdot)$ are respectively self-attention mechanism, layer normalization, and feed-forward network with ReLU activation in between. $\operatorname{SELF-ATT}(\cdot)$ computes attention over the input $\mathbf{H}^{n-1}$ as follows:
\begin{equation}
\operatorname{SELF-ATT}\left(\mathbf{H}^{n-1}\right)=\operatorname{softmax}\left(\frac{\mathbf{Q} \mathbf{K}^{\top}}{\sqrt{d_{k}}}\right) \mathbf{V}
\end{equation}
where $\{\mathbf{Q,K,V}\}$ are the query, key and value vectors that are transformed from the input $\mathbf{H}^{n-1}$, and $\sqrt{d_{k}}$ is the scaling factor, where $d_k$ is the dimension of the query and key vectors. Given the word embeddings $\mathbf{E}=\{e_1,e_2,...,e_m\}$ and corresponding positional embeddings $\mathbf{P}=\{p_1,p_2,...,p_m\}$, the first block input is $\mathbf{H}^0=\mathbf{E}+ \mathbf{P}$.
Finally, a linear function with $\operatorname{softmax}$ activation is used to compute the probability of next word $x_t$ via:
\begin{equation}\label{eq:outprob}
p \left( x_t | x_{\leq{t-1}} \right) = \operatorname { softmax } \left( g \left( h _ { t } \right) \right)
\end{equation}
We calculate negative log-likelihood loss for model training:
\begin{equation}
\mathcal{L}_{\mathrm{lm}}=- \frac{1}{T}\sum_t \log p\left(x_t |x_{\leq t-1} \right)
\end{equation}
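For concreteness, the masked self-attention and the token-level negative log-likelihood described above can be sketched as follows; this is PyTorch-style pseudocode with illustrative sizes that do not match GPT2-117M exactly.
\begin{verbatim}
import math
import torch
import torch.nn.functional as F

def causal_self_attention(H, Wq, Wk, Wv):
    # H: (T, d) block input; Wq, Wk, Wv: (d, d_k) projection matrices
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.t() / math.sqrt(K.size(-1))          # scaled dot product
    mask = torch.triu(torch.ones_like(scores), diagonal=1).bool()
    scores = scores.masked_fill(mask, float("-inf"))    # auto-regressive mask
    return torch.softmax(scores, dim=-1) @ V

def lm_loss(logits, targets):
    # logits: (T, vocab); targets: (T,) next-token ids
    # cross_entropy averages -log p(x_t | x_<t) over the T positions
    return F.cross_entropy(logits, targets)
\end{verbatim}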
\begin{table*}[!ht]
\centering
\resizebox{2.05\columnwidth}{!}{
\begin{tabular}{lcl}
\toprule
S1 & marker & S2 \\
\midrule
Her eyes flew up to his face. & and & Suddenly she realized why he looked so different. \\
The concept is simple. & but & The execution will be incredibly dangerous. \\
You used to feel pride. & because & You defended innocent people. \\
Belter was still hard at work. & when & Drade and barney strolled in. \\
I' ll tell you about it. & if & You give me your number. \\
We plugged bulky headsets into the dashboard. & so & We could hear each other when we spoke into the microphones. \\
It was mere minutes or hours. & before & He finally fell into unconsciousness. \\
And then the cloudy darkness lifted. & though & The lifeboat did not slow down. \\
\bottomrule
\end{tabular}
}
\caption{Example pairs from Books 8 dataset.}
\label{dismaker}
\end{table*}
\subsection{Two-stage Generation}
\noindent\textbf{Outline Preparation}
In order to regard the outline generation task as a supervised learning problem, we must construct a high-quality training dataset including sufficient prompt-outline pairs.
As mentioned above, the outline implies the story plots; therefore, the quality of the outline directly affects the performance of story generation.
If the outline contains too much information, the story generator will simply learn to copy from the outline, restraining imagination and creativity.
On the contrary, if the outline omits key information, the informativeness and consistency of the stories will decrease.
In this work, we investigate two forms of outline: keywords and abstracts. These two forms retain the important information of the story, ignore some details, and are commonly used in two-stage methods \cite{yao2019plan,fan-etal-2019-strategies,chen2019learning}. Our motivation is to use two-stage generation to improve the performance of GPT2, so we do not design a new middle form. Specifically, we use the RAKE algorithm \cite{rose2010automatic} \footnote{https://pypi.org/project/rake-nltk/} to extract the keywords of a story. According to \cite{yao2019plan} and the average length of stories in our corpus, we extract 10 keywords for each story. We use a variation of the TextRank algorithm \cite{barrios2016variations} \footnote{https://radimrehurek.com/gensim/} to extract the abstract of a story. In order to retain important information and ignore some details, we keep 30\% of the sentences of each story as the abstract. Thus, we can obtain (prompt, outline, story) triples automatically to train the two-stage model.
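A minimal sketch of this outline construction, assuming the \texttt{rake-nltk} and \texttt{gensim} ($<$4.0) packages referenced in the footnotes above, is shown below; the story text is a placeholder.
\begin{verbatim}
from rake_nltk import Rake
from gensim.summarization import summarize   # available in gensim < 4.0

def build_outlines(story_text):
    rake = Rake()
    rake.extract_keywords_from_text(story_text)
    keywords = rake.get_ranked_phrases()[:10]       # 10 keywords per story
    abstract = summarize(story_text, ratio=0.3)     # keep ~30% of sentences
    return keywords, abstract
\end{verbatim}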
\noindent\textbf{Prompt to Outline Generation}
A Transformer-based language model decoder is used to generate outlines. Specifically, we concatenate the prompt $\mathbf{X}$ and the outline $\mathbf{Z}$ with the \verb|<SEP>| token to get a sequence $\mathbf{X'}$. For training, we compute the cross entropy of all tokens in $\mathbf{X'}$, as in a normal language model. At test time, given the prompt tokens as context, the decoder generates the outline tokens.
\noindent\textbf{Prompt and Outline to Story Generation}
Another decoder with the same architecture is used to generate stories. We concatenate the prompt $\mathbf{X}$, outline $\mathbf{Z}$ and story $\mathbf{Y}$ with the \verb|<S>| and \verb|<SEP>| tokens to get a sequence $\mathbf{X''}$. For training, we compute the cross entropy of the prompt and story tokens in $\mathbf{X''}$. Note that we do not calculate the loss on the outline tokens, because these tokens come from the story and we avoid computing their loss twice. At test time, given the prompt and outline tokens as context, the decoder generates the story tokens. Next, two components are incorporated in this stage to enhance discourse coherency and coreference consistency.
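A minimal sketch of how a second-stage training sequence and its loss mask could be assembled is given below; the exact placement of the \verb|<S>| and \verb|<SEP>| tokens and the tokenizer interface (a Hugging Face GPT2-style tokenizer to which these special tokens have been added) are assumptions made for illustration.
\begin{verbatim}
def build_second_stage_example(tok, prompt, outline, story):
    # tok: GPT2-style tokenizer with "<S>" and "<SEP>" added as special tokens
    sep = tok.convert_tokens_to_ids("<SEP>")
    bos = tok.convert_tokens_to_ids("<S>")
    ids, loss_mask = [bos], [False]
    for text, keep in ((prompt, True), (outline, False), (story, True)):
        seg = tok.encode(text) + [sep]
        ids += seg
        loss_mask += [keep] * len(seg)  # outline tokens excluded from the loss
    return ids, loss_mask
\end{verbatim}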
\subsection{Discourse Coherency Enhancement}\label{sec:discourse}
In order to improve the discourse representation of the Transformer-based language model, we design discourse relation classification as an auxiliary task. Discourse relations describe how two segments (e.g., clauses, sentences, and larger multi-clause groupings) of discourse are logically connected. These relations can be used to describe the high-level organization of text. Thus, discourse relations are an important aspect of story coherence. In this work, we only consider shallow discourse relations between adjacent sentences, as much research on discourse relation classification does \cite{chen-etal-2016-implicit,lan-etal-2017-multi,bai-zhao-2018-deep}.
\noindent \textbf{Discourse Information Preparation}
In order to obtain discourse labels for adjacent sentences in stories, we need to train a golden discourse relation classification model. However, annotated corpora of implicit and explicit discourse relations are limited. For example, the commonly used Penn Discourse Treebank 2.0 \cite{prasad-etal-2008-penn} contains only about 10k pairs. Following \cite{nie-etal-2019-dissent}, we use discourse markers in place of discourse relations, because we are able to automatically curate a sizable training set of sentence pairs with discourse markers. We use the discourse marker dataset Books 8 from \cite{nie-etal-2019-dissent}, which contains 3.6M sentence pairs, each labeled with one of 8 connectives as the discourse label. Several sentence pairs and their corresponding discourse markers are shown in Table \ref{dismaker}.
We fine-tune BERT \cite{devlin2018bert} \footnote{https://github.com/huggingface/transformers} on this dataset to obtain a golden discourse marker prediction model. We then use this model to tag the discourse relation labels of sentence pairs in our story corpus. Considering that this automatic tagging may introduce errors, we only keep labels with high classification probability, and labels with lower probability are replaced with a ninth label, \textit{unknown}. The sentence pairs whose labels belong to the 8 connectives are used to train our discourse relation classification component.
\noindent \textbf{Discourse-aware Story Generation}
The discourse relation classification component contains a sentence encoder and a two-layer MLP. The encoder is used to extract sentence semantic features, and the MLP is used to convert the features into classification probabilities. The sentence encoder shares parameters with the story decoder, excluding the output layer. For a story $\mathbf{Y}$ that contains sentences $\{\mathbf{S_1},\ldots,\mathbf{S_i},\ldots,\mathbf{S_p}\}$, where each sentence contains words $\mathbf{S_i}=\{y_{i1},\ldots,y_{ij},\ldots,y_{iq}\}$, we take the output $h^w_{ij}$ of the encoder as the word representation and apply a max pooling operation over the words of the sentence to get the sentence representation $h^s_i$:
\begin{align}
\mathbf{H}^s_i & = \operatorname{encoder}(\mathbf{S_i}) \\
h^s_i &= \operatorname{max}(\mathbf{H}^s_i)
\end{align}
Then the MLP is used to classify adjacent sentences as follows:
\begin{align}
f &= \tanh(\mathbf{W_f}[h^s_i,h^s_{i+1}]+b_f) \\
p(dis|\mathbf{S_i},\mathbf{S_{i+1}}) &= \operatorname{softmax}(\mathbf{W_o}f +b_o)
\end{align}
The loss function $\mathcal{L}_{\mathrm{dis}}$ of this component is the cross-entropy of the discourse labels. A joint loss function is then applied to train the second-stage model:
\begin{equation}
\mathcal{L}=\mathcal{L}_{\mathrm{lm}}+ \lambda_1\mathcal{L}_{\mathrm{dis}}
\end{equation}
where $\lambda_1$ is a hyperparameter to balance two tasks.
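The discourse relation head described above (max pooling over the shared decoder states followed by a two-layer MLP) can be sketched as follows; the hidden sizes are illustrative.
\begin{verbatim}
import torch
import torch.nn as nn

class DiscourseHead(nn.Module):
    def __init__(self, d=768, n_rel=8):
        super().__init__()
        self.hidden = nn.Linear(2 * d, d)   # W_f, b_f
        self.out = nn.Linear(d, n_rel)      # W_o, b_o

    def forward(self, H_i, H_j):
        # H_i, H_j: (len_i, d), (len_j, d) decoder states of adjacent sentences
        h_i, _ = H_i.max(dim=0)             # max pooling over the words
        h_j, _ = H_j.max(dim=0)
        f = torch.tanh(self.hidden(torch.cat([h_i, h_j], dim=-1)))
        return self.out(f)                  # logits over 8 discourse markers
\end{verbatim}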
\subsection{Coreference Consistency Enhancement}
Although the Transformer-based language model can capture long-distance dependencies, there are still some coreference errors in the generated stories. In order to encourage the model to attend to the correct entities, we add supervision on the attention weights of entity mention tokens. We use Stanford's CoreNLP tool \footnote{https://stanfordnlp.github.io/CoreNLP/} to extract the coreference annotations of stories.
Specifically, for a story $\mathbf{Y}$ we obtain $p$ coreference clusters, and each cluster contains $q$ entity mentions. We assign each entity mention token $y^c_i$ in the subsequence $\mathbf{Y}^c=\{y^c_1,\ldots,y^c_i,\ldots,y^c_{pq}\}$ a cluster label from $\mathbf{C}=\{c_1,\ldots,c_i,\ldots,c_{pq}\}$. During training, for an entity mention token $y^c_i$, we take the attention weights between the current token and the previous tokens $\{y^c_k\}_{k \leq i-1}$ in the last self-attention layer of the decoder, the sum of which is 1:
\begin{equation}
\sum_{k=1}^{i-1}{\alpha_{ik}} = 1
\end{equation}
We design a coreference loss to maximize attention weights of tokens in the same cluster as follows:
\begin{equation}
\mathcal{L}_{\mathrm{coref}} = -\frac{1}{pq} \sum^{pq}_{i=1} \frac{1}{N_{i}}\sum_{k=1}^{i-1} \mathbbm{1}(c_k=c_i) \log\alpha_{ik}
\end{equation}
where $N_{i}$ is the number of entity mentions in the same cluster $c_i$. Considering these two components, the loss function for the second-stage model is as follows:
\begin{equation}
\mathcal{L}=\mathcal{L}_{\mathrm{lm}}+ \lambda_1\mathcal{L}_{\mathrm{dis}} + \lambda_2\mathcal{L}_{\mathrm{coref}}
\end{equation}
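A sketch of the coreference loss $\mathcal{L}_{\mathrm{coref}}$ is given below, assuming \texttt{attn} holds the last-layer self-attention weights of the decoder and \texttt{clusters[i]} is the cluster id of token $i$ (or \texttt{None} for non-mention tokens).
\begin{verbatim}
import torch

def coref_loss(attn, clusters, eps=1e-12):
    # attn: (T, T) attention of each token over the previous tokens
    # clusters: list of length T with cluster ids (None for non-mentions)
    losses = []
    for i, c in enumerate(clusters):
        if c is None:
            continue
        prev = [k for k in range(i) if clusters[k] == c]   # earlier mentions
        if prev:
            w = attn[i, prev].clamp_min(eps)
            losses.append(-w.log().mean())  # push attention onto antecedents
    return torch.stack(losses).mean() if losses else attn.new_zeros(())
\end{verbatim}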
\section{Experimental Setup}
\subsection{Settings and Data Set}
For the two Transformer decoders, we apply the same model size as GPT2-117M \cite{radford2019language}, so that we can analyze the effect of the pre-trained weights of GPT2. Specifically, the dimension of the word embeddings and of the hidden vectors is set to 768. The number of self-attention blocks is set to 12, and 12 heads are used in the multi-head self-attention. We train the model using Adam \cite{kingma2014adam} with a learning rate of 0.0005. The dropout rate is set to 0.3 for regularization. $\lambda_1$ and $\lambda_2$ are set to 0.1 and 0.3 according to the performance on the validation set. Following \cite{fan2018hierarchical}, we generate stories with random top-$k$ sampling, where the next word is sampled from the top $k=20$ candidates rather than the entire vocabulary distribution.
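For clarity, the random top-$k$ decoding used at test time can be sketched as follows.
\begin{verbatim}
import torch

def top_k_sample(logits, k=20):
    vals, idx = torch.topk(logits, k)            # k most probable candidates
    probs = torch.softmax(vals, dim=-1)
    return idx[torch.multinomial(probs, 1)].item()
\end{verbatim}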
We use the writing prompts dataset from \cite{fan2018hierarchical}, which was collected from Reddit's WRITINGPROMPTS forum\footnote{https://www.reddit.com/r/WritingPrompts/}. WRITINGPROMPTS is a community where online users inspire each other to write by submitting story prompts. Each prompt can have multiple story responses. The prompts have a large diversity of topic, length, and detail. There are 300k stories, and the dataset is split into TRAIN, VAL and TEST sets (90\%/5\%/5\%). For our experiments, we limit the length of the stories to a maximum of 500 words. We use GPT2's BPE vocabulary with a size of 50,527 in our model.
\subsection{Evaluation Metrics}
\noindent\textbf{Automatic Evaluation.}
Many commonly used metrics based on n-gram overlap between the generated text and the human text, such as BLEU \cite{papineni2002bleu}, are not useful in story generation, as also observed in previous work \cite{martin2018event,fan2018hierarchical}, because we do not aim to generate a specific story; we want to generate viable and novel stories.
In order to evaluate different aspects of the stories, we use four types of metrics. We use \textbf{Perplexity} to evaluate the fluency of stories. Perplexity is commonly used to evaluate the quality of language models, and it reflects how fluently the model can produce the correct next word given the preceding words. What's more, in order to evaluate the diversity of stories, we compute \textbf{Distinct-1/2} \cite{li2016persona}, which is the percentage of distinct n-grams in all generated stories and is widely used in conversation generation.
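Distinct-$n$ can be computed straightforwardly as the ratio of distinct $n$-grams to all $n$-grams over the generated corpus, e.g.:
\begin{verbatim}
def distinct_n(stories, n):
    # stories: list of token lists; returns |distinct n-grams| / |all n-grams|
    ngrams = [tuple(s[i:i + n]) for s in stories
              for i in range(len(s) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)
\end{verbatim}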
In order to evaluate the discourse coherency of the stories, we reuse the fine-tuned BERT for evaluation. Specifically, we use BERT to tag discourse labels for sentence pairs in the generated stories in the same way as in the tagging process of the training set in Section \ref{sec:discourse}. We compute the percentage of sentence pairs with \textbf{Unknown} labels in the generated stories. The fewer sentence pairs with unknown labels the model generates, the better the coherency of its stories. In order to evaluate coreference consistency, we compute the average number of \textbf{Coreference Chains} in each story. Specifically, we use Stanford's CoreNLP tool \footnote{https://stanfordnlp.github.io/CoreNLP/} to extract the coreference chains of the generated stories.
\noindent\textbf{Human Evaluation.}
To further evaluate the quality of generated stories, we conduct pair-wise comparisons with two strong baseline models (FConvS2S and GPT2P).
We evaluate the models from the following three perspectives: \textbf{Relevance} to indicate whether a story is relevant to the given prompt, \textbf{Grammaticality} to indicate whether a story is natural and
fluent, and \textbf{Logicality} to indicate whether a story is consistent and coherent in terms of causal dependencies in the context. Three aspects are independently evaluated.
We randomly sample 100 stories from the test set and obtain 300 stories from three models. For each pair of stories (one by our model and the other by a baseline, along with the prompt), three annotators are asked to give a preference (win, lose, or tie) in terms of three metrics respectively. Majority voting is used to make final decisions among the three annotators.
\subsection{Comparison Methods}
\noindent\textbf{Conv Seq2Seq with self-attention (ConvS2S).} We replicate the model proposed by \cite{fan2018hierarchical} using their source code, which applies a convolutional sequence-to-sequence model with gated self-attention to generate stories from prompts.
\noindent\textbf{Fusion of Conv Seq2Seq with self-attention (FConvS2S).} The model is also proposed by \cite{fan2018hierarchical}, which utilizes a fusion mechanism to integrate two \textbf{ConvS2S}.
\noindent\textbf{GPT2.} The model only contains a Transformer-based decoder and has the same model size as GPT2-117M \cite{radford2019language}. We train the model from scratch.
\noindent\textbf{GPT2 with Pre-training (GPT2P).} We first load pre-training weights of GPT2-117M and then fine tune the model on the used dataset.
\noindent\textbf{Ours.} Our overall model contains two-stage generation, discourse relation classification and coreference supervision. In order to evaluate the upper bound of two-stage generation, we use different percentages of the tokens of the ground truth outlines as contexts to generate stories. Ours(0\%) means using our own generated outlines as contexts in the second stage to generate stories; this is our final model. Ours(100\%) means all tokens of the ground truth outlines are used as contexts; this is the upper bound model.
\section{Results and Discussions}
\vspace{-2mm}
\subsection{Automatic Evaluation and Human Evaluation}
\begin{table*}[!htb]
\centering
\resizebox{2\columnwidth}{!}{
\begin{tabular}{lccccc}
\toprule
\textbf{Method} & \textbf{Perplexity}$\downarrow$ & \textbf{Dis-1}(\%)$\uparrow$ & \textbf{Dis-2}(\%)$\uparrow$ & \textbf{Unknown}(\%)$\downarrow$ & \textbf{Coref Chains}$\uparrow$ \\
\midrule
ConvS2S & 34.61 & 0.400 & 5.191 & 76.01 & 5.52 \\
FConvS2S & 33.97 & 0.482 & 6.271 & 75.60 & 5.43 \\
GPT2 & 29.50 & 0.474 & 6.796 & 74.95 & 5.67 \\
GPT2P & 25.64 & 0.493 & 7.333 & 73.61 & 5.61 \\
Ours(0\% ground truth outline) & 30.84 & 0.531 & 7.379 & 75.19 & 5.98 \\
\midrule
Ours(50\% ground truth outline) & 19.21 & 1.311 & 13.253 & 75.15 & 5.97 \\
Ours(100\% ground truth outline) & 10.32 & 1.509 & 15.266 & 74.97 & 5.80 \\
\bottomrule
\end{tabular}
}
\caption{Automatic evaluation results on TEST set.}
\label{overallresult}
\end{table*}
\begin{table*}[!htb]
\centering
\resizebox{2\columnwidth}{!}{
\begin{tabular}{llllllllll}
\toprule
\multirow{2}{*}{\textbf{Method}} & \multicolumn{3}{l}{\textbf{Relevance}} & \multicolumn{3}{l}{\textbf{Grammaticality}} & \multicolumn{3}{l}{\textbf{Logicality}} \\
& \textbf{Win}(\%) & \textbf{Tie}(\%) & \textbf{Lose}(\%) & \textbf{Win}(\%) & \textbf{Tie}(\%) & \textbf{Lose}(\%) & \textbf{Win}(\%) & \textbf{Tie}(\%) & \textbf{Lose}(\%) \\
\midrule
Ours vs. FConvS2S & \textbf{23} & 66 & 11 & \textbf{28} & 53 & 19 & \textbf{40} & 33 & 27 \\
Ours vs. GPT2P & \textbf{21} & 60 & 19 & \textbf{17} & 69 & 14 & \textbf{31} & 47 & 22 \\
\bottomrule
\end{tabular}
}
\caption{Human evaluation results on TEST set.}
\label{humanresult}
\end{table*}
\begin{table*}[!htb]
\centering
\resizebox{1.8\columnwidth}{!}{
\begin{tabular}{llllll}
\toprule
\textbf{Method} & \textbf{Perplexity}$\downarrow$ & \textbf{Dis-1}(\%)$\uparrow$ & \textbf{Dis-2}(\%)$\uparrow$ & \textbf{Unknown}(\%)$\downarrow$ & \textbf{Coref Chains}$\uparrow$ \\
\midrule
\textbf{First stage} \\
\midrule
keyword & 74.46 & 0.964 & 7.132 &/ &/ \\
abstract & 35.53 & 0.776 & 10.060 &/ &/ \\
\midrule
\textbf{Second stage} \\
\midrule
story with keyword & 17.82 & 0.461 & 6.188 & 74.26 & 5.67 \\
story with abstract & 10.65 & 0.512 & 7.358 & 74.54 & 5.81 \\
\bottomrule
\end{tabular}
}
\caption{Comparison of different outlines.}
\label{twostageresult}
\end{table*}
As shown in Table \ref{overallresult}, we compute the four types of metrics for these methods. We can see that GPT2 outperforms FConvS2S and ConvS2S on almost all metrics. This indicates that the self-attention based model is superior to the convolution-based model in story generation. Although FConvS2S and ConvS2S are enhanced with a self-attention mechanism, their ability to capture long-distance dependencies is still weaker than that of GPT2. Compared to GPT2, GPT2P improves the perplexity and distinct scores significantly. GPT2P also generates the fewest sentence pairs with the \textit{unknown} discourse relation. This shows that the pre-trained weights contribute to generating more fluent, diverse and coherent stories. Compared to these methods, our model (Ours(0\%)) achieves the best diversity and coreference performance, which demonstrates the effectiveness of our overall model. The upper bound model (Ours(100\%)) achieves the best perplexity score. This indicates that our model sacrifices some fluency for plot control.
What's more, we can see that all two-stage models have a higher \textit{unknown} score than GPT2 and GPT2P. We suspect that two-stage generation and the discourse relation component may interfere with each other. Next, we conduct ablation experiments to evaluate each component of our method.
Table \ref{humanresult} reports human evaluation results.
Our method achieves the best scores on all three metrics. Specifically, our method mainly improves the scores on Logicality. This shows that our method can generate more coherent stories by utilizing discourse and coreference supervision. Our method performs similarly to GPT2P in terms of Relevance and Grammaticality, because both methods use the Transformer as the decoder and our model does not include a component to improve the relevance to the prompt.
\subsection{Outline Analysis}
\begin{figure}[!ht]
\centering
\includegraphics[width=7cm]{abstractattention.pdf}
\caption{The attention weight distribution of story tokens in different positions.}
\label{abstract}
\end{figure}
We compare the performance of keywords and abstracts as outlines. As shown in Table \ref{twostageresult}, in the first stage, keywords are more difficult to generate than abstracts, as keywords yield a higher perplexity. For the second stage, we can see that stories using the abstract as the outline obtain better scores on four metrics. This indicates that the abstract contributes to generating stories with better diversity and consistency. Therefore, we take the abstract as the outline in our model. In order to evaluate whether the stories are generated following the plot order of the abstract, we plot the attention weight distributions of story tokens over abstract tokens. The attention weight distributions are computed by averaging over 2,000 generated stories. Because of limited space, we only list tokens of the abstract and the story in the front positions. The result is shown in Figure \ref{abstract}. There are several darker lines in the diagonal direction of the figure. This demonstrates that the story's focus follows the plot order of the abstract and that our two-stage model can control the plots of the story well.
\subsection{Discourse Relation Classification}
\begin{table}[!htb]
\centering
\resizebox{1\columnwidth}{!}{
\begin{tabular}{l|lll}
\toprule
\textbf{TLM+Discourse} & \textbf{And}(\%)$\uparrow$ & \textbf{When}(\%)$\uparrow$ & \textbf{Unknown}(\%)$\downarrow$ \\
\midrule
0.1 & 11.43 & 2.90 & 72.94 \\
0.3 & 11.38 & 2.80 & 73.60 \\
0.5 & 10.91 & 2.72 & 73.78 \\
\bottomrule
\end{tabular}
}
\caption{The percentages of discourse relations with different $\lambda_1$.}
\label{table:hyper}
\end{table}
\begin{table}[!htb]
\centering
\resizebox{1\columnwidth}{!}{
\begin{tabular}{l|lll}
\toprule
\textbf{Method} & \textbf{And}(\%)$\uparrow$ & \textbf{When}(\%)$\uparrow$ & \textbf{Unknown}(\%)$\downarrow$ \\
\midrule
ConvS2S & 8.52 & 2.45 & 76.01 \\
FConvS2S & 8.67 & 2.41 & 75.59 \\
GPT2 & 10.14 & 2.50 & 74.96 \\
GPT2P & 10.96 & 2.60 & 73.61 \\
TLM+Discourse & 11.43 & 2.90 & 72.94 \\
\bottomrule
\end{tabular}
}
\caption{The percentages of discourse relations of different methods.}
\label{table:disresult}
\end{table}
We evaluate the effect of discourse relation classification with \textbf{TLM+Discourse}, which enhances the Transformer-based language model with our proposed discourse relation classification component. We show the percentages of \textit{and}, \textit{when} and \textit{unknown}, since the percentages of the other relations are small. We first try several different values for $\lambda_1$ in the loss function, and the results are shown in Table \ref{table:hyper}. When $\lambda_1$ is high, there is no gain on the three discourse metrics. When $\lambda_1$ is 0.1, we obtain the largest gain on the three discourse metrics. This indicates that a large weight affects the main task and damages the performance of the language model. Therefore, we use 0.1 in our final model. We then compare \textbf{TLM+Discourse} with the other baselines. The percentages of discourse relations are shown in Table \ref{table:disresult}. We can see that GPT2 and GPT2P perform better than ConvS2S and FConvS2S. TLM+Discourse generates the fewest sentence pairs with the \textit{unknown} relation and achieves the best scores among all methods. This shows that the discourse relation classification component helps generate more sentence pairs with certain discourse relations and further improves the coherency of the generated stories.
\subsection{Case Study}
\begin{table}[!htb]
\centering
\begin{tabular}{p{0.95\columnwidth}}
\toprule
\textbf{Prompt}: \small The zombie apocalypse is upon us but all humans are immune to the virus . All other mammals are susceptible . \\
\midrule
\small \textbf{GPT2P}: As I sat there I tried to remember how many days it had been since the last one died . $[$\textcolor{blue}{My wife and daughter were dead , I knew they would n't be alive any time soon .}$]_{1a}$ The food ran out quickly , \textcolor{red}{they} had to be taken away to stay in the house , but that was no reason to be angry if there were a few people left living there . After a month of food being gone a little girl sat next to me . She was the only other survivor who was n't bitten , as it was hard for us to eat \textcolor{red}{them} , we were all sickly . That would be another month to go and if the other animals were still alive in the food , they would n't be able to eat her . `` Hey little one . '' I looked up and saw her . $[$\textcolor{blue}{I had a big smile on my face at this point .}$]_{2a}$ $[$\textcolor{blue}{My wife was the only one who noticed , but her smile was all that was visible of me , and not of her .}$]_{1b}$ $[$\textcolor{blue}{I looked at her , and smiled sadly .}$]_{2b}$ She looked sad , but did n't care . She never cared for me . \\
\midrule
\small \textbf{Our (generated abstract)}: He was the last to see the end of this world . The only reason John had survived was by his own hand , and it took much more than he could bear with a few months to heal his wounds . It took a few months for him to be able to see the end and get his arm back from the dead , and he did n't care for a few more years without having to go back . \\
\small \textbf{Our (generated story)}: When he had been first laid to rest the first few days he did n't care , and he had thought it was just a small thing , but it started getting worse . He was the last to see the end of this world . The only reason John had survived was by his own hand , and it took much more than he could bear with a few months to heal his wounds . John had lived the rest of his life on Earth without anything else but the clothes on his back , and the rifle he kept at home when he was alone . The first few days were easy . A little bit of food , a bit of protection . He spent a few days walking around the city and drinking a beer , thinking he was safe , but the next few months were hell . It took a few months for him to be able to see the end and get his arm back from the dead , and he did n't care for a few more years without having to go back . It was better to go back , to be safe , so he would be safe for a while , and so he would n't get infected .
\\
\bottomrule
\end{tabular}
\caption{Comparison of different methods.}
\label{human}
\end{table}
We analyze some generated stories to evaluate our overall model. An example is shown in Table \ref{human}. Stories generated by GPT2P have poor plot consistency and contain some coreference errors, such as the blue sentences and red words in Table \ref{human}. Compared with GPT2P, our model can effectively control the plot consistency of the story through the abstract. Therefore, stories generated by our model have better plot consistency. In addition, our model makes fewer coreference errors than GPT2P and generates stories with better coreference consistency. What's more, the coherency between sentences is also better than that of GPT2P.
\section{Conclusion}
In this paper, we propose a two-stage generation model to improve the consistency and coherency of generated stories. The first stage builds the story outline, and the second stage expands the outline into a complete story. What's more, we design an auxiliary task of discourse relation classification to improve the discourse representation ability of the model. In addition, we enhance the model with coreference supervision to improve coreference consistency in generated stories. Experimental results on a story dataset show that our method is superior to the baseline methods.
\bibliographystyle{acl_natbib}
\section{Introduction}
\label{sec:intro}
\IEEEPARstart{B}{lind} source separation (BSS) is a problem concerned with estimating the original source signals from a mixture signal captured by multiple sensors.
When the number of sources is no greater than that of sensors, i.e., in the (over-)determined case,
independent component analysis (ICA~\cite{comon2010handbook,cichocki2002adaptive,lee1998independent,stone2004independent})
has been a popular approach to BSS because it is simple and effective.
When each original source is a vector and a multivariate random variable,
independent vector analysis (IVA~\cite{kim2007,hiroe2006}), also termed joint BSS~\cite{li2009joint,li2011joint,anderson2011joint}, has been widely studied as an extension of ICA.
In this paper, we focus only on the problem of extracting all the super-Gaussian sources from a linear mixture signal under the following assumptions and improve the computational efficiency of IVA (ICA can also be improved in the same way as IVA, and so we only deal with IVA):
\begin{enumerate}
\item The number of super-Gaussian sources $K$ is known and fewer than that of sensors $M$, i.e., $K < M$.
\item There can be up to $M - K$ stationary Gaussian noises, and thus the problem remains (over-)determined.
\end{enumerate}
The second assumption, which concerns the rigorous development of efficient algorithms, can be violated to some extent when applied in practice (see numerical experiments in Section~\ref{sec:exp}).
To distinguish it from a general BSS, this problem is called blind source extraction (BSE~\cite{cichocki2002adaptive}).
The BSE problem is often encountered in such applications as speaker source enhancement, biomedical signal processing, or passive radar/sonar.
In speaker source enhancement, for instance,
the maximum number of speakers (super-Gaussian sources) can often be predetermined as a certain number (e.g., two or three) while an audio device is equipped with more microphones.
In real-world applications, the observed signal is contaminated with background noises, and most noise signals are more stationary than speaker signals.
BSE can be solved by simply applying IVA as if there were $M$ super-Gaussian sources and selecting the top $K$ $(< M)$ super-Gaussian signals from the $M$ separated signals in some way.
However, this approach is computationally intensive if $M$ gets too large.
To reduce the computing cost, as preprocessing for IVA, the number of channels (or sensors) can be reduced to $K$ by using principal component analysis or by selecting $K$ sensors with high SNRs.
These channel reductions, however, often degrade the separation performance due to the presence of the background noises.
BSE methods for efficiently extracting just one or several non-Gaussian signals (not restricted to super-Gaussian signals) have already been studied mainly in the context of ICA%
~\cite{friedman1974,huber1985,friedman1987exploratory,cardoso1993,delfosse1995adaptive,cruces2004,amari1998adaptive,hyvarinen1997fastica,hyvarinen1999fastica,oja2006fastica,wei2015}.
The natural gradient algorithm and the FastICA method with deflation techniques~\cite{amari1998adaptive,hyvarinen1997fastica,hyvarinen1999fastica} can sequentially extract non-Gaussian sources one by one.
In FastICA, the deflation is based on the orthogonal constraint where the sample correlation between every pair of separated signals equals zero.
This constraint was also used to develop FastICA with symmetric orthonormalization~\cite{hyvarinen1997fastica,hyvarinen1999fastica,oja2006fastica,wei2015} that can simultaneously extract $K$ non-Gaussian signals.
\subsection{Independent vector extraction (IVE)}
Recently, maximum likelihood approaches have been proposed for BSE in which the background noise components are modeled as stationary Gaussians.
These methods include
independent vector extraction (IVE~\cite{koldovsky2018ive,koldovsky2017ive,jansky2020adaptive})
and overdetermined IVA (OverIVA~\cite{scheibler2019overiva,scheibler2020fast,scheibler2020ive,ike2020overiva}),
which will be collectively referred to as IVE in this paper.
When $K$ non-Gaussian sources are all super-Gaussian
(as defined in Assumption~\ref{assumption:superGaussian}), IVE can use a majorization-minimization (MM~\cite{lange2016mm}) optimization algorithm developed for
an auxiliary-function-based ICA (AuxICA~\cite{ono2010auxica})
and
an auxiliary-function-based IVA (AuxIVA~\cite{ono2011auxiva,ono2012auxiva-stereo,ono2018asj}).
In this paper, we only deal with an IVE based on the MM algorithm~\cite{scheibler2019overiva,scheibler2020fast,scheibler2020ive,ike2020overiva,jansky2020adaptive}
and always assume that the super-Gaussian distributions are defined by Assumption~\ref{assumption:superGaussian} in Section~\ref{sec:model:IVE}.
In the MM-based IVE,
each separation matrix $W \in \mathbb{C}^{M \times M}$ is optimized by iteratively minimizing a surrogate function of the following form:
\begin{align*}
g(W) &=
\sum_{i = 1}^K \bm{w}_i^h V_i \bm{w}_i + \trace\left( W_{\z}^h V_{\z} W_{\z} \right) - \log | \det W |^2,
\\
W &= [\bm{w}_1,\ldots,\bm{w}_K,W_{\z}] \in \mathbb{C}^{M \times M},
\\
W_{\z} &= [\bm{w}_{K+1},\ldots,\bm{w}_M] \in \mathbb{C}^{M \times (M - K)}.
\end{align*}
Here, $\bm{w}_1,\ldots,\bm{w}_K \in \mathbb{C}^M$ are filters that extract the target-source signals and $W_{\z}$ is a filter that extracts the $M - K$ noise components.
$V_1,\ldots,V_K,V_{\z} \in \mathbb{C}^{M \times M}$ are the Hermitian positive definite matrices that are updated in each iteration of the MM algorithm.
Interestingly, when $K = M$ or when viewing the second term of $g(W)$ as $\trace\left( W_{\z}^h V_{\z} W_{\z} \right) = \sum_{i = K + 1}^M \bm{w}_i^h V_{\z} \bm{w}_i$,
the problem of minimizing $g(W)$ has been discussed in ICA/IVA literature~\cite{pham2001,degerine2004,degerine2006maxdet,yeredor2009HEAD,yeredor2012SeDJoCo}.
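For reference, evaluating this surrogate for a single frequency bin is straightforward; the NumPy sketch below assumes $V_1,\ldots,V_K$ and $V_{\z}$ are given $M \times M$ Hermitian positive definite matrices and is intended only as an illustration of the objective, not of any particular update rule.
\begin{verbatim}
import numpy as np

def surrogate_g(W, V, Vz, K):
    # W: (M, M) with columns [w_1, ..., w_K, W_z]; V: list of K PSD matrices
    val = sum(np.real(W[:, i].conj() @ V[i] @ W[:, i]) for i in range(K))
    Wz = W[:, K:]
    val += np.real(np.trace(Wz.conj().T @ Vz @ Wz))
    val -= 2.0 * np.log(np.abs(np.linalg.det(W)))   # -log |det W|^2
    return val
\end{verbatim}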
Among the algorithms developed so far, block coordinate descent (BCD~\cite{tseng2001convergence}) methods with simple analytical solutions have attracted much attention in the field of audio source separation because they have been experimentally shown to give stable and fast convergence behaviors.
A family of these BCD algorithms~\cite{ono2010auxica,ono2011auxiva,ono2012auxiva-stereo,ono2018asj}, summarized in Table~\ref{table:alg}, is currently called an iterative projection (IP) method.
The IP method has been widely applied not only to ICA and IVA but also to audio source separation methods using more advanced source models (see~\cite{kitamura2016ilrma,kameoka2019MVAE,makishima2019independent,sekiguchi2019fast,ito2019fastmnmf} for the details of such methods).
\subsection{Conventional BCD (or IP) algorithms for IVE}
When we consider directly applying the IP methods developed for AuxIVA to the BSE problem of minimizing $g(W)$ with respect to $W$,
AuxIVA-IP1~\cite{ono2011auxiva} in Table~\ref{table:alg}, for instance, will cyclically update
$\bm{w}_1 \rightarrow \cdots \rightarrow \bm{w}_K \rightarrow \cdots \rightarrow \bm{w}_M$ one by one.
However, in the BSE problem, the signals of interest are the $K$ non-Gaussian sources, and most of the optimization of $W_{\z}$ should be skipped.
Therefore, in a previous work \cite{scheibler2019overiva},
an algorithm (IVE-OC in Table~\ref{table:alg}) was proposed that cyclically updates $\bm{w}_1 \rightarrow W_{\z} \rightarrow \cdots \rightarrow \bm{w}_K \rightarrow W_{\z}$ one by one with a computationally cheap formula for $W_{\z}$.
In this work \cite{scheibler2019overiva}, the updating equation for $W_{\z}$ was derived solely from the (weak) orthogonal constraint (OC~\cite{koldovsky2017ive,koldovsky2018ive,jansky2020adaptive}) where the sample correlation between the separated $K$ non-Gaussian sources and the $M - K$ background noises equals zero.
(Note that the $K$ non-Gaussian sources are not constrained to be orthogonal in the OC.)
Although the algorithm has successfully reduced the computational cost of IVA by nearly a factor of $K / M$,
the validity of imposing the heuristic OC remains unclear.
After that, IVE-OC has been extended by removing the OC from the model.
One such extension is a direct method that can obtain a global minimizer of $g(W)$ when $K = 1$~\cite{scheibler2020fast,ike2020overiva}.
The other extension is a BCD algorithm for $K \geq 2$ that cyclically updates the pairs
$(\bm{w}_1,W_{\z}) \rightarrow \cdots \rightarrow (\bm{w}_K,W_{\z})$ one by one~\cite{scheibler2020ive},
but the computational cost is not so cheap due to the full update of $W_{\z}$ in each iteration.
These algorithms are called IVE-IP2 in this paper (see Table~\ref{table:alg}).
\subsection{Contributions}
In this work, we propose BCD algorithms for IVE, which are summarized in Table~\ref{table:alg} with comparisons to the previous BCDs (IPs).
The followings are the contributions of this paper.
(i)
We speed up the previous IVE-IP2 for $K \geq 2$ by showing that
$W_{\z}$'s update can be omitted in each iteration of BCD without changing the behaviors of the algorithm (Section~\ref{sec:IVE-IP2-new:K>1}).
This is attained by carefully exploiting the stationary Gaussian assumption for the noise components.
In an experiment of speaker source enhancement, we confirmed that the computational cost of the conventional IVE-IP2 is consistently reduced.
(ii)
We provide a comprehensive explanation of IVE-IP2 for $K = 1$ (Sections~\ref{sec:IVE-IP2:K=1} and \ref{sec:source-extraction}).
Interestingly, it turns out to be a method that iteratively updates the maximum signal-to-noise-ratio (MaxSNR~\cite{vantrees2004,warsitz2007maxsnr}) beamformer
and the power spectrum for the unique ($K = 1$) target-source signal.
The experimental result shows that this algorithm has much faster convergence than conventional algorithms.
Note that IVE-IP2 for $K = 1$ was developed independently and simultaneously in our conference paper~\cite{ike2020overiva} and by Scheibler--Ono~\cite{scheibler2020fast}.
(iii)
We reveal that IVE-OC~\cite{scheibler2019overiva}, which was
developed with the help of the heuristic orthogonal constraint (OC),
can also be obtained as a BCD algorithm for our proposed IVE without the OC (Section~\ref{sec:IVE-IP1}).
(This interpretation of IVE-OC as a BCD algorithm was described in our conference paper~\cite{ike2020overiva}, but it is not mathematically rigorous.
We here provide a rigorous proof of this interpretation.)
(iv)
We further extend the proposed IVE-IP2 for the \textit{semiblind} case where the transfer functions for $L$ ($1 \leq L \leq K$) sources of interest can be given a priori (Section~\ref{sec:semi-BSE}).
We call the proposed semiblind method Semi-IVE.
In Semi-IVE, $L$ separation filters, e.g., $\bm{w}_1,\ldots,\bm{w}_L$, which correspond to the known transfer functions are optimized by iteratively solving the linear constrained minimum variance (LCMV~\cite{vantrees2004}) beamforming algorithm,
and the remaining $\bm{w}_{L + 1},\ldots,\bm{w}_K$ (and $W_{\z}$) are optimized by the full-blind IVE-IP2 algorithm.
We experimentally show that when $L \geq K - 1$ transfer functions are given, Semi-IVE yields surprisingly fast convergence.
\textit{Organization}: The rest of this paper is organized as follows.
The BSE and semi-BSE problems are defined in Section~\ref{sec:problem}.
The probabilistic model for the proposed IVE is compared with related methods in Section~\ref{sec:model}.
The optimization algorithms are developed separately for the BSE and semi-BSE problems in Sections~\ref{sec:BSE} and \ref{sec:semi-BSE}.
The computational time complexity for these methods is discussed in Section~\ref{sec:computational-complexity}.
In Sections~\ref{sec:exp} and \ref{sec:conclusion}, experimental results and concluding remarks are described.
\subsection{Notations}
Let $\mathcal{S}_+^d$ or $\mathcal{S}_{++}^d$ denote the set of all Hermitian positive semidefinite (PSD) or positive definite (PD) matrices of size $d \times d$.
Let $\GL(d)$ denote the set of all (complex) nonsingular matrices of size $d \times d$.
Also, let $\bm{0}_d \in \mathbb{C}^d$ be the zero vector,
let $O_{i,j} \in \mathbb{C}^{i \times j}$ be the zero matrix,
let $I_d \in \mathbb{C}^{d \times d}$ be the identity matrix,
and let $\bm{e}_k$ be a vector whose $k$-th element equals one and the others are zero.
For a vector or matrix $A$, $A^\top$ and $A^h$ represent
the transpose and the conjugate transpose of $A$.
The element-wise conjugate is denoted as $A^\ast = (A^\top)^h$.
For a matrix $A \in \mathbb{C}^{i \times j}$, $\image A$ is defined as the subspace $\{ A \bm{x} \mid \bm{x} \in \mathbb{C}^{j} \} \subset \mathbb{C}^i$.
For a vector $\bm{v}$, $\| \bm{v} \| = \sqrt{\bm{v}^h \bm{v}}$ denotes the Euclidean norm.
\begin{table*}[t]
\begin{center}
{\small
\caption{Optimization process of BCD algorithms for problem~\eqref{problem:maxdet}}
\label{table:alg}
\begin{tabular}{cccccl} \hline
& Method & \multicolumn{1}{c}{Algorithm} & Reference & Assumption & \multicolumn{1}{c}{Optimization process of BCD}
\\ \hline
\multirow{3}{*}{Conventional} &
\multirow{3}{*}{(Aux)IVA} & IP1 & \cite{ono2011auxiva} & - &
$\bm{w}_1 \rightarrow \cdots \rightarrow \bm{w}_K \rightarrow \cdots \rightarrow \bm{w}_M$
\\
&& IP2 & \cite{ono2012auxiva-stereo} & $K = M = 2$ &
$W = [ \bm{w}_1,\bm{w}_2 ]$ (direct optimization)
\\
&& IP2 & \cite{ono2018asj} & (if $M$ is even) &
$(\bm{w}_1, \bm{w}_2) \rightarrow \cdots \rightarrow (\bm{w}_{M - 1}, \bm{w}_M)$
\\
\hline
\multirow{2}{*}{Conventional} &
\multirow{2}{*}{IVE} & IVE-OC$\empty^{1)}$ & \cite{scheibler2019overiva}, \S\ref{sec:model:IVE-OC} & Orthogonal constraint (OC) &
$\bm{w}_1 \rightarrow W_{\z}^1 \rightarrow \cdots \rightarrow \bm{w}_K \rightarrow W_{\z}^1$
\\
&& IP2$\empty^{3)}$ & \cite{scheibler2020ive}, \S\ref{sec:IVE-IP2-old:K>1} & $K \geq 2$ & $(\bm{w}_1, W_{\z}) \rightarrow \cdots \rightarrow (\bm{w}_K, W_{\z})$
\\
\hline
\multirow{3}{*}{Proposed} &
\multirow{3}{*}{IVE} & IP1$\empty^{1)}$ & \S\ref{sec:IVE-IP1} & - &
$\bm{w}_1 \rightarrow (W_{\z},\Omega_{\z}) \rightarrow \cdots \rightarrow \bm{w}_K \rightarrow (W_{\z}, \Omega_{\z})$
\\
&& IP2$\empty^{2)}$ & \S\ref{sec:IVE-IP2:K=1} & $K = 1$ &
$W = [\bm{w}_1, (W_{\z})]$ (direct optimization)
\\
&& IP2$\empty^{3)}$ & \S\ref{sec:IVE-IP2-new:K>1} & $K \geq 2$ &
$(\bm{w}_1, (W_{\z})) \rightarrow \cdots \rightarrow (\bm{w}_K, (W_{\z}))$
\\
\hline
\multirow{2}{*}{Proposed} &
\multirow{2}{*}{Semi-IVE} & \multirow{2}{*}{IP2$\empty^{4)}$} & \S\ref{sec:semi-BSE} & Given $\bm{a}_1,\cdots,\bm{a}_L$ $(L \geq K - 1)$ &
$W = [\bm{w}_1, \ldots, \bm{w}_K, (W_{\z})]$ (direct optimization)
\\
&&& \S\ref{sec:semi-BSE} & Given $\bm{a}_1,\cdots,\bm{a}_{L}$ $(L \leq K - 2)$ &
$(\bm{w}_{L + 1}, (W_{\z})) \rightarrow \cdots \rightarrow (\bm{w}_K, (W_{\z}))$
\\
\hline
\multicolumn{6}{l}{$\empty^{1)}$ The two algorithms are identical as shown in Section~\ref{sec:IVE-IP1}.}
\\
\multicolumn{6}{l}{$\empty^{2)}$ It was developed independently and simultaneously by Scheibler--Ono~\cite{scheibler2020fast} and the authors~\cite{ike2020overiva} in Proceedings of ICASSP2020.}
\\
\multicolumn{6}{l}{$\empty^{3)}$ The proposed IVE-IP2 for $K \geq 2$ is an acceleration of the conventional IVE-IP2 developed in~\cite{scheibler2020ive}.}
\\
\multicolumn{6}{l}{$\empty^{4)}$ $\bm{w}_1,\cdots,\bm{w}_L$, which correspond to $\bm{a}_1,\ldots,\bm{a}_L$, are directly globally optimized as the LCMV beamformers (see Section~\ref{sec:semi-BSE:LCMV}).}
\\
\multicolumn{6}{l}{$\empty^{2, 3, 4)}$ In the proposed IVE-IP2 and Semi-IVE, the optimizations for $W_{\z}$ are skipped (see Sections~\ref{sec:IVE-IP2:K=1}, \ref{sec:IVE-IP2-new:K>1}, and \ref{sec:Semi-IVE}).}
\end{tabular}
}
\end{center}
\end{table*}
\section{Blind and semiblind source extraction}
\label{sec:problem}
\subsection{Blind source extraction (BSE) problem}
Throughout this paper, we discuss IVA and IVE using the terminology from audio source separation in the short-term Fourier transform (STFT) domain.
Suppose that $K$ super-Gaussian target-source signals and a stationary Gaussian noise signal of dimension $M - K$ are transmitted and observed by $M$ sensors.
In this paper, we only consider the case where $1 \leq K < M$.
The observed signal in the STFT domain is modeled at each frequency bin $f = 1,\ldots, F$ and time-frame $t = 1,\ldots,T$ as
\begin{align}
\label{eq:mixture}
\bm{x}(f,t) &= A_{\s}(f) \bm{s}(f,t) + A_{\z}(f) \bm{z}(f,t) \in \mathbb{C}^{M},
\\
A_{\s}(f) &= [\, \bm{a}_{1}(f), \ldots, \bm{a}_{K}(f) \,] \in \mathbb{C}^{M \times K},
\\
\bm{s}(f,t) &= [\, s_{1}(f,t), \ldots, s_{K}(f,t) \,]^\top \in \mathbb{C}^K,
\\
A_{\z}(f) &\in \mathbb{C}^{M \times (M - K)}, \quad \bm{z}(f,t) \in \mathbb{C}^{M - K},
\end{align}
where $s_i(f,t) \in \mathbb{C}$ and $\bm{z}(f,t) \in \mathbb{C}^{M - K}$ are the STFT coefficients of target source $i \in \{1,\ldots,K\}$ and the noise signal, respectively.
$\bm{a}_i(f) \in \mathbb{C}^M$ is the (time-independent) acoustic transfer function (or steering vector) of source $i$ to the sensors,
and $A_{\z}(f) \in \mathbb{C}^{M \times (M - K)}$ is that of the noise.
It is assumed that the source signals are statistically mutually independent.
In the \textbf{blind source extraction (BSE)} problem, we are given an observed mixture $\bm{x} = \{ \bm{x}(f,t) \}_{f,t}$ and the number of target sources $K$.
From these inputs, we seek to estimate the spatial images $\bm{x}_1,\ldots,\bm{x}_K$ for the target sources, which are defined as
$\bm{x}_i(f,t) \coloneqq \bm{a}_i(f) s_i(f,t) \in \mathbb{C}^M$, $i = 1,\ldots,K$.
\subsection{Semiblind source extraction (Semi-BSE) problem}
In the \textbf{semiblind source extraction (Semi-BSE)} problem, in addition to the BSE inputs, we are given $L$ transfer functions $\bm{a}_1, \ldots, \bm{a}_L$ for $L$ super-Gaussian sources, where $1 \leq L \leq K$.
From these inputs, we estimate all the target-source spatial images $\bm{x}_1,\ldots,\bm{x}_K$.
If $L = K$, then Semi-BSE is known as a beamforming problem.
\textbf{Motivation to address Semi-BSE:}
In some applications of audio source extraction, such as meeting diarization~\cite{ito2017data}, the locations of some (but not necessarily all) point sources are available or can be estimated accurately,
and their acoustic transfer functions can be obtained from, e.g., the sound propagation model~\cite{johnson1992array}.
For instance, in a conference situation, the attendees may be sitting in chairs with fixed, known locations.
On the other hand, the locations of moderators, panel speakers, or audience may change from time to time and cannot be determined in advance.
In such cases, by exploiting this partial prior knowledge of the transfer functions, Semi-BSE methods can improve the computational efficiency and separation performance over BSE methods.
In addition, since there are many effective methods for estimating the transfer function of (at least) a dominant source~\cite{warsitz2007blind,khabbazibasmenj2012robust,vorobyov2013principles},
there is a wide range of applications where Semi-BSE can be used to improve the performance of BSE.
\section{Probabilistic models}
\label{sec:model}
We start by presenting the probabilistic models for the proposed auxiliary-function-based independent vector extraction (IVE),
which are almost the same as those of related methods such as
IVA~\cite{kim2007}, AuxIVA~\cite{ono2011auxiva,ono2012auxiva-stereo,ono2018asj}, and the conventional IVE~\cite{koldovsky2017ive,koldovsky2018ive,jansky2020adaptive,scheibler2019overiva,scheibler2020fast,scheibler2020ive}.
\subsection{Probabilistic model of proposed IVE}
\label{sec:model:IVE}
In the mixing model~\eqref{eq:mixture}, the mixing matrix
\begin{align}
\label{eq:A}
A(f) = [\, \bm{a}_1(f), \ldots, \bm{a}_K(f), A_{\z}(f) \, ] \in \mathbb{C}^{M \times M}
\end{align}
is square and generally invertible, and hence the problem can be converted into one that estimates a separation matrix
$W(f) \in \GL(M)$ satisfying $W(f)^h A(f) = I_M$, or equivalently, satisfying
\begin{align}
\label{eq:s=wx}
s_i(f,t) &= \bm{w}_i(f)^h \bm{x}(f,t) \in \mathbb{C}, \quad i = 1,\ldots,K,
\\
\label{eq:z=Wx}
\bm{z}(f,t) &= W_{\z}(f)^h \bm{x}(f,t) \in \mathbb{C}^{M - K},
\end{align}
where we define
\begin{align}
\label{eq:W}
W(f) &= [\, \bm{w}_1(f), \ldots, \bm{w}_K(f), W_{\z}(f) \, ] \in \GL(M),
\\
\bm{w}_i(f) &\in \mathbb{C}^M, \quad i = 1,\ldots,K,
\\
W_{\z}(f) &\in \mathbb{C}^{M \times (M - K)}.
\end{align}
Denote by
$\bm{s}_i(t) = [\, s_i(1,t), \ldots, s_i(F,t) \,]^\top \in \mathbb{C}^F$
the vector of all the frequency components for source $i$ and time-frame $t$.
The proposed IVE exploits the following three assumptions.
Note that Assumption~\ref{assumption:superGaussian} was introduced for developing AuxICA~\cite{ono2010auxica} and
AuxIVA~\cite{ono2011auxiva,ono2012auxiva-stereo,ono2018asj}.
\begin{assumption}[Independence of sources]
The random variables $\{ \bm{s}_i(t), \bm{z}(f,t) \}_{i,f,t}$ are mutually independent:
\begin{align*}
p( \{ \bm{s}_i(t), \bm{z}(f,t) \}_{i,f,t} ) = \prod_{i,t} p(\bm{s}_i(t)) \cdot \prod_{f,t} p(\bm{z}(f,t)).
\end{align*}
\end{assumption}
\begin{assumption}[Super-Gaussian distributions for the target sources~\cite{ono2010auxica,ono2011auxiva}]
\label{assumption:superGaussian}
The target-source signal $\bm{s}_i(t)$ follows a circularly symmetric super-Gaussian distribution:
\begin{align}
- \log p(\bm{s}_i(t)) = G(\| \bm{s}_i(t) \|) + \const,
\end{align}
where $G \colon \mathbb{R}_{\geq 0} \to \mathbb{R}$ is differentiable and $\frac{G'(r)}{r}$ is nonincreasing on $\mathbb{R}_{> 0}$.
Here, $G'$ is the first derivative of $G$.
Candidates for $G$ (or, equivalently, for the probability density function) include
the $\log \cosh$ function
and
the circularly symmetric generalized Gaussian distribution (GGD) with
the scale parameter $\alpha_i \in \mathbb{R}_{> 0}$
and
the shape parameter $0 < \beta < 2$,
which is also known as the exponential power distribution~\cite{gomez1998ggd}:
\begin{align}
G(\| \bm{s}_i(t) \|) = \left( \frac{\| \bm{s}_i(t) \| }{\alpha_i} \right)^\beta
+ 2 F \log \alpha_i.
\end{align}
The GGD is a parametric family of symmetric distributions; when $\beta = 1$, it reduces to the complex Laplace distribution.
It has been experimentally shown in many studies that ICA-type methods including IVA and IVE work effectively for audio source separation tasks when audio signals such as speech signals are modeled by super-Gaussian distributions (see, e.g.,~\cite{kim2007,hiroe2006,koldovsky2018ive,koldovsky2017ive,jansky2020adaptive,scheibler2019overiva,scheibler2020fast,scheibler2020ive,ike2020overiva,ono2011auxiva,ono2012auxiva-stereo,ono2018asj}).
\end{assumption}
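As a concrete illustration of this assumption, the following minimal Python/NumPy sketch (illustrative only; the function names and the value $F = 257$ are our own choices, not part of the proposed method) evaluates the GGD contrast $G$ and the corresponding weight $G'(r)/(2r)$ that appears later in \eqref{eq:MM:phi}, and numerically checks that $G'(r)/r$ is nonincreasing for $0 < \beta < 2$.
\begin{verbatim}
import numpy as np

def ggd_contrast(r, alpha=1.0, beta=0.1, F=257):
    # G(r) = (r / alpha)**beta + 2 F log(alpha)
    return (r / alpha) ** beta + 2 * F * np.log(alpha)

def mm_weight(r, alpha=1.0, beta=0.1, eps=1e-12):
    # G'(r) / (2 r) = (beta / 2) * alpha**(-beta) * r**(beta - 2)
    return 0.5 * beta / (alpha ** beta * np.maximum(r, eps) ** (2.0 - beta))

r = np.linspace(1e-3, 10.0, 1000)
w = mm_weight(r)
assert np.all(np.diff(w) <= 0.0)  # G'(r)/r is nonincreasing for 0 < beta < 2
\end{verbatim}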
\begin{assumption}[Stationary Gaussian distribution for the background noise]
\label{assumption:noise}
The noise signal $\bm{z}(f,t) \in \mathbb{C}^{M - K}$ follows a circularly symmetric complex Gaussian distribution with the zero mean and identity covariance matrix:
\begin{align}
\label{eq:z:pdf}
\bm{z}(f,t) &\sim \mathbb{C} \mathcal{N} \left( \bm{0}_{M - K}, I_{M - K} \right),
\\
p(\bm{z}(f,t)) &= \frac{1}{\pi^{M - K}} \exp \left( - \| \bm{z}(f,t) \|^2 \right).
\end{align}
\end{assumption}
Assumption~\ref{assumption:noise} plays a central role in deriving several efficient algorithms for IVE.
Despite this assumption, as we experimentally confirm in Section~\ref{sec:exp},
the proposed IVE can extract speech signals even in a diffuse noise environment where the noise signal is considered super-Gaussian or nonstationary and has an arbitrarily large spatial rank.
With the model defined by \eqref{eq:s=wx}--\eqref{eq:z:pdf}, the negative loglikelihood, $g_0(W) \coloneqq - \frac{1}{T} \log p(\bm{x} \mid W)$, can be computed as
\begin{alignat}{2}
\nonumber
g_0(W)
&=&\,& \frac{1}{T} \sum_{i = 1}^K \sum_{t = 1}^T G( \| \bm{s}_i(t) \| )
+ \frac{1}{T} \sum_{f = 1}^F \sum_{t = 1}^T \| \bm{z}(f,t) \|^2
\\
\label{eq:loss}
&&&
- 2 \sum_{f = 1}^F \log | \det W(f) | + \const,
\end{alignat}
where $W \coloneqq \{ W(f) \}_{f = 1}^F$ are the variables to be optimized.
\begin{remark}
The stationary Gaussian noise model \eqref{eq:z:pdf} with the identity covariance matrix does not sacrifice generality.
At first glance, it seems better to employ
\begin{align}
\label{eq:z:R}
\bm{z}(f,t) \sim \mathbb{C}\mathcal{N} \left( \bm{0}, \Omega_{\z}(f) \right)
\end{align}
with a general covariance matrix $\Omega_{\z}(f) \in \mathcal{S}_{++}^{M - K}$.
However, we can freely change the variables to satisfy \eqref{eq:z:pdf} using the ambiguity between $A_{\z}$ and $\bm{z}$, given by
\begin{align}
A_{\z}(f) \bm{z}(f,t) &= (A_{\z}(f) \Omega_{\z}(f)^{\frac{1}{2}}) (\Omega_{\z}(f)^{- \frac{1}{2}} \bm{z}(f,t)).
\end{align}
\end{remark}
\subsection{Relation to IVA and AuxIVA}
\label{sec:model:IVA}
If we assume that the $M - K$ noise components also independently follow super-Gaussian distributions,
then the IVE model coincides with that of IVA~\cite{kim2007} or AuxIVA~\cite{ono2011auxiva,ono2012auxiva-stereo,ono2018asj}.
The following are the two advantages of assuming the stationary Gaussian model \eqref{eq:z:pdf}.
(i) As we confirm experimentally in Section~\ref{sec:exp}, when we optimize the IVE model, separation filters $\bm{w}_1,\ldots,\bm{w}_K$ extract the top $K$ highly super-Gaussian (or nonstationary) signals such as speech signals from the observed mixture while $W_{\z}$ extracts only the background noises that are more stationary and approximately follow Gaussian distributions.
On the other hand, in IVA, which assumes super-Gaussian noise models, $K$ ($< M$) target-source signals need to be chosen from the $M$ separated signals after optimizing the model.
(ii) As we reveal in Section~\ref{sec:IVE-IP1}, in the proposed IVE,
it suffices to optimize $\image W_{\z} = \{ W_{\z} \bm{v} \mid \bm{v} \in \mathbb{C}^{M - K} \} \subset \mathbb{C}^M$, the subspace spanned by $W_{\z} \in \mathbb{C}^{M \times (M - K)}$, instead of $W_{\z}$.
Because we can optimize $\image W_{\z}$ very efficiently (Section~\ref{sec:IVE-IP1}), IVE can reduce the computation time of IVA, especially when $K \ll M$.
\subsection{Relation to IVE with orthogonal constraint}
\label{sec:model:IVE-OC}
The proposed IVE is inspired by OverIVA with an orthogonal constraint (OC)~\cite{scheibler2019overiva}.
This conventional method will be called IVE-OC in this paper.
IVE-OC~\cite{scheibler2019overiva} was proposed as an acceleration of IVA~\cite{ono2011auxiva} for the case where $K < M$, while maintaining its separation performance.
The IVE-OC model is obtained from the proposed IVE model by replacing the noise model \eqref{eq:z:pdf} with \eqref{eq:z:R} and introducing two additional constraints:
\begin{align}
\label{eq:IVE-IP1:Im(Wz):1}
W_{\z}(f) &= \begin{bmatrix}
W_{\z}^1(f) \\
-I_{M - K}
\end{bmatrix}, \quad W_{\z}^1(f) \in \mathbb{C}^{K \times (M - K)},
\\
\label{eq:OC:sample}
& \hspace{-10mm} \frac{1}{T} \sum_{t = 1}^T \bm{s}(f,t) \bm{z}(f,t)^h = O_{K, M - K}.
\end{align}
The first constraint \eqref{eq:IVE-IP1:Im(Wz):1} can be imposed because there is no need to extract the noise components themselves (see~\cite{scheibler2019overiva,scheibler2020ive} for details).
The second constraint \eqref{eq:OC:sample}, called an \textit{orthogonal constraint (OC~\cite{koldovsky2018ive})}, was introduced to help the model distinguish between the target-source and noise signals.
OC, which forces the sample correlation between the separated target-source and noise signals to be zero,
can equivalently be expressed as
\begin{align}
& \hspace{-9 mm} W_{\s}(f)^h V_{\z}(f) W_{\z}(f) = O_{K, M - K},
\\
V_{\z}(f) &= \frac{1}{T} \sum_{t = 1}^T \bm{x}(f,t) \bm{x}(f,t)^h \in \mathcal{S}_{+}^M,
\\
W_{\s}(f) &= [\, \bm{w}_1(f), \ldots,\bm{w}_K(f) \,] \in \mathbb{C}^{M \times K},
\end{align}
which, together with \eqref{eq:IVE-IP1:Im(Wz):1}, implies
\begin{align}
\label{eq:IVE-IP1:Im(Wz):2}
& \hspace{-3 mm} W_{\z}^1(f) = (W_{\s}(f)^h V_{\z}(f) E_{\s})^{-1} (W_{\s}(f)^h V_{\z}(f) E_{\z}),
\end{align}
where we define
\begin{align}
\label{eq:Es}
E_{\s} &\coloneqq
[\, \bm{e}_{1}, \ldots, \bm{e}_K \,] = \begin{bmatrix}
I_{K} \\
O_{M - K, K}
\end{bmatrix} \in \mathbb{C}^{M \times K},
\\
\label{eq:Ez}
E_{\z} &\coloneqq
[\, \bm{e}_{K + 1}, \ldots, \bm{e}_M \,] = \begin{bmatrix}
O_{K, M - K} \\
I_{M - K}
\end{bmatrix} \in \mathbb{C}^{M \times (M - K)}.
\end{align}
From \eqref{eq:IVE-IP1:Im(Wz):1} and \eqref{eq:IVE-IP1:Im(Wz):2}, it turns out that $W_{\z}$ is uniquely determined by $W_{\s}$ in the IVE-OC model.
Hence, in a paper on IVE-OC~\cite{scheibler2019overiva}, an algorithm was proposed
in which $W_{\z}$ is updated based on \eqref{eq:IVE-IP1:Im(Wz):1} and \eqref{eq:IVE-IP1:Im(Wz):2} immediately after
any of the other variables $\bm{w}_1,\ldots,\bm{w}_K$ is updated, so that OC is always imposed on the model.
Although the algorithm was experimentally shown to work well,
its validity from a theoretical point of view is unclear
because the update rule for $W_{\z}$ is derived solely from the constraints \eqref{eq:IVE-IP1:Im(Wz):1}--\eqref{eq:OC:sample}
and does not reflect an objective of the optimization problem for parameter estimation, such as minimizing the negative loglikelihood.
In this paper, we develop BCD algorithms for the maximum likelihood estimation of the proposed IVE, which does not rely on OC,
and identify one such algorithm (IVE-IP1 developed in Section~\ref{sec:IVE-IP1}) that exactly coincides with the conventional algorithm for IVE-OC.
This means that OC is not essential for developing fast algorithms in IVE-OC.
By removing OC from the model, we can further provide other, more computationally efficient algorithms for IVE,
which is the main contribution of this paper.
\section{Algorithms for the BSE problem}
\label{sec:BSE}
We develop iterative algorithms for the maximum likelihood estimation of IVE.
The proposed and some conventional algorithms~\cite{scheibler2019overiva,scheibler2020fast,scheibler2020ive,ike2020overiva} are based on the following two methods.
\begin{itemize}
\item One is the conventional majorization-minimization (MM) algorithm developed for AuxICA~\cite{ono2010auxica} and AuxIVA~\cite{ono2011auxiva}.
In MM, instead of dealing with the original objective function $g_0$ (Eq.~\eqref{eq:loss}),
a surrogate function of $g_0$ that is easier to minimize is addressed (Section~\ref{sec:BSE:MM}).
\item The other is block coordinate descent (BCD) algorithms.
In each iteration of BCD, several columns of $W$ are updated so as to globally minimize the above surrogate function with respect to those columns.
In this paper, we propose several BCDs that improve the conventional BCDs.
\end{itemize}
Our proposed algorithms are summarized in Algorithm~\ref{alg:main}.
The optimization processes of all the BCDs are summarized in Table~\ref{table:alg}
and detailed in the following subsections.
The computational time complexities of the algorithms are discussed in Section~\ref{sec:computational-complexity}.
\subsection{Majorization-minimization (MM) approach}
\label{sec:BSE:MM}
We briefly describe how to apply the conventional MM technique developed for AuxICA~\cite{ono2010auxica} to the proposed IVE.
In IVE as well as AuxICA/AuxIVA, a surrogate function $g$ of $g_0$ is designed with an auxiliary variable $r$ that satisfies
\begin{align}
g_0(W) = \min_{r} g (W, r).
\end{align}
Then, variables $W$ and $r$ are alternately updated by iteratively solving
\begin{align}
\label{problem:MM:1}
r^{(l)} &\in \argmin_{r} g(W^{(l - 1)}, r),
\\
\label{problem:MM:2}
W^{(l)} &\in \argmin_{W} g(W, r^{(l)})
\end{align}
for $l = 1,2,\ldots$ until convergence.
In the same way as in AuxIVA~\cite{ono2011auxiva},
by applying Proposition~\ref{prop:MM} in Appendix~\ref{appendix:lemma} to the first term of $g_0(W)$,
problem \eqref{problem:MM:2} in IVE comes down to solving the following $F$ subproblems:
\begin{align}
\label{problem:maxdet}
W(f) \in \argmin_{W(f)} ~ g (W(f), r^{(l)}), \quad f = 1,\ldots,F,
\end{align}
where $r^{(l)} \coloneqq \{ r_i^{(l)}(t) \in \mathbb{R}_{\geq 0} \}_{i,t}$ and
\begin{align}
\nonumber
&\hspace{-10 mm}
g( W(f), r^{(l)} ) = \sum_{i = 1}^K \bm{w}_i(f)^h V_i(f) \bm{w}_i(f)
\\
\label{eq:loss:MM}
&\hspace{-5 mm}
+ \trace \left( W_{\z}(f)^h V_{\z}(f) W_{\z}(f) \right)
- 2\log | \det W(f) |,
\\
\label{eq:Vi}
V_i(f) &= \frac{1}{T} \sum_{t = 1}^T \phi_{i}(t) \bm{x}(f,t) \bm{x}(f,t)^h,
\\
\label{eq:Vz}
V_{\z}(f) &= \frac{1}{T} \sum_{t = 1}^T \bm{x}(f,t) \bm{x}(f,t)^h,
\\
\label{eq:MM:phi}
\phi_{i}(t) &= \frac{ G'(r_i^{(l)}(t)) }{ 2 r_i^{(l)}(t) }, \quad r_i^{(l)}(t) = \| \bm{s}_i^{(l)}(t) \|,
\\
\label{eq:MM:si}
&\hspace{-7 mm}
s_i^{(l)}(f,t) = \bm{w}_i^{(l - 1)}(f)^h \bm{x}(f,t).
\end{align}
Here, the computation of \eqref{eq:MM:phi}--\eqref{eq:MM:si} corresponds to the optimization of \eqref{problem:MM:1}.
Recall that $G'$ in \eqref{eq:MM:phi} is the first derivative of $G \colon \mathbb{R}_{\geq 0} \to \mathbb{R}$ (see Assumption~\ref{assumption:superGaussian}).
To efficiently solve \eqref{problem:maxdet}, we propose BCD algorithms in the following subsections.
From the derivation, it is guaranteed that the objective function is monotonically nonincreasing at each iteration in the MM algorithm.%
\footnote{
Due to space limitations,
analyzing the convergence rate and other convergence properties of the proposed algorithms is left as future work.
}
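For concreteness, a minimal NumPy sketch of the auxiliary-variable update \eqref{problem:MM:1}, i.e., the computation of \eqref{eq:Vi}--\eqref{eq:MM:si} from the current filters under the GGD of Assumption~\ref{assumption:superGaussian}, is given below. The array layout and the function name are our own illustrative choices, not taken from any reference implementation.
\begin{verbatim}
import numpy as np

def build_weighted_covariances(x, Ws, alpha, beta=0.1, eps=1e-12):
    # x     : observed mixture, shape (F, T, M), complex
    # Ws    : current target filters [w_1, ..., w_K], shape (F, M, K), complex
    # alpha : GGD scale parameters, shape (K,)
    F, T, M = x.shape
    # s_i(f, t) = w_i(f)^h x(f, t)                  -> shape (F, T, K)
    s = np.einsum('fmk,ftm->ftk', Ws.conj(), x)
    # r_i(t) = || s_i(t) || over frequency bins     -> shape (T, K)
    r = np.linalg.norm(s, axis=0)
    # phi_i(t) = G'(r_i(t)) / (2 r_i(t)) for the GGD contrast
    phi = 0.5 * beta / (alpha ** beta * np.maximum(r, eps) ** (2.0 - beta))
    # V_i(f) = (1/T) sum_t phi_i(t) x(f,t) x(f,t)^h -> shape (K, F, M, M)
    V = np.einsum('tk,ftm,ftn->kfmn', phi, x, x.conj()) / T
    # V_z(f) = (1/T) sum_t x(f,t) x(f,t)^h          -> shape (F, M, M)
    Vz = np.einsum('ftm,ftn->fmn', x, x.conj()) / T
    return V, Vz
\end{verbatim}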
\subsection{Block coordinate descent (BCD) algorithms and the stationary condition for problem \eqref{problem:maxdet}}
\label{sec:BSE:KKT}
No algorithms have been found that obtain a global optimal solution for problem~\eqref{problem:maxdet} for general $K, M \in \mathbb{N}$.
Thus, iterative algorithms have been proposed to find a local optimal solution.
Among them, a family of BCD algorithms has been attracting much attention
(e.g.,~\cite{ono2010auxica,ono2011auxiva,ono2012auxiva-stereo,ono2018asj,scheibler2019overiva,scheibler2020ive,scheibler2020fast,ike2020overiva,kitamura2016ilrma,kameoka2019MVAE,makishima2019independent,sekiguchi2019fast})
because they have been experimentally shown to work faster and more robustly than other algorithms
such as the natural gradient method~\cite{amari1996natural-gradient}.
The family of these BCD algorithms (specialized to solve \eqref{problem:maxdet}) is currently called an \textit{iterative projection (IP) method}.
As we will see in the following subsections, all the IP algorithms summarized in Table~\ref{table:alg}
can be developed by exploiting
the stationary condition, which is
also called the first-order necessary optimality condition~\cite{nocedal-Jorge2006optimization}.
To simplify the notation, when we discuss~\eqref{problem:maxdet}, we omit the frequency bin index $f$.
For instance, $W(f)$ and $V_i(f)$ are simply denoted as $W$ and $V_i$ (without confusion).
\begin{lemma}[See, e.g., \cite{pham2001,degerine2004,degerine2006maxdet}]
\label{lemma:KKT}
The stationary condition for problem~\eqref{problem:maxdet} is expressed as ($i = 1,\ldots,K$)
\begin{alignat}{3}
\label{eq:KKT:wk}
\frac{\partial g}{\partial \bm{w}_i^\ast} &= \bm{0}_M &\quad &\Longleftrightarrow &\quad W^h V_i \bm{w}_i &= \bm{e}_i,
\\
\label{eq:KKT:Wz}
\frac{\partial g}{\partial W_{\z}^\ast} &= O_{M, M - K} &\quad &\Longleftrightarrow &\quad W^h V_{\z} W_{\z} &= E_{\z},
\end{alignat}
where $E_{\z}$ is given by \eqref{eq:Ez}.
\end{lemma}
To rigorously derive the algorithms, we always assume the following two technical but mild conditions (C1) and (C2) for problem \eqref{problem:maxdet}:%
\footnote{
If (C1) is violated, problem \eqref{problem:maxdet} has no optimal solutions and algorithms should diverge to infinity (see {\cite[Proposition 1]{ike2019ilrma}} for the proof).
Conversely, if (C1) is satisfied, it is guaranteed that problem \eqref{problem:maxdet} has an optimal solution by Proposition~\ref{prop:loss:lower-bounded} in Appendix~\ref{appendix:lemma}.
In practice, the number of frames $T$ exceeds that of sensors $M$, and (C1) holds in general.
Condition (C2) is satisfied automatically if we initialize $W$ as nonsingular.
Intuitively, singular $W$ implies $- \log | \det W | = +\infty$, which will never occur during optimization.
}
\begin{description}
\item[(C1)] $V_1, \ldots, V_K, V_{\z} \in \mathcal{S}_{+}^M$ are positive definite.
\item[(C2)] Estimates of $W \in \mathbb{C}^{M \times M}$ are always nonsingular during optimization.
\end{description}
\subsection{Conventional methods: IVA-IP1, IVA-IP2, and IVE-OC}
\label{sec:BSS-IP}
\subsubsection{IVA-IP1}
\label{sec:BSS-IP1}
Let $W_{\z} = [\bm{w}_{K + 1}, \ldots, \bm{w}_M]$.
As shown in Table~\ref{table:alg}, IVA-IP1~\cite{ono2011auxiva}
cyclically updates each separation filter $\bm{w}_1,\ldots,\bm{w}_M$ by solving
the following subproblem for each $i = 1,\ldots,M$ one by one:
\begin{align}
\label{problem:BSS-IP1}
\bm{w}_i \in \argmin_{\bm{w}_i} g (\bm{w}_1,\ldots,\bm{w}_M, r).
\end{align}
This can be solved under (C1) and (C2) by
\begin{align}
\label{eq:BSS-IP1:1}
\bm{u}_i &\leftarrow (W^h V_i)^{-1} \bm{e}_i \in \mathbb{C}^M,
\\
\label{eq:BSS-IP1:2}
\bm{w}_i &\leftarrow \bm{u}_i \left( \bm{u}_i^h V_i \bm{u}_i \right)^{-\frac{1}{2}} \in \mathbb{C}^M.
\end{align}
Here, we define $V_i \coloneqq V_{\z} \in \mathcal{S}_{++}^M$ for $i = K + 1, \ldots, M$, and the $i$th column of $W$ in \eqref{eq:BSS-IP1:1}, i.e., $\bm{w}_i$, is set to the current value before update.
When applied to the BSE problem, IVA-IP1's main drawback is that
the computation time increases significantly as $M$ grows, since it updates $W_{\z}$ even though there is no need to extract the background noise.
\subsubsection{IVE-OC}
\label{sec:IVE-IP1:OC}
To accelerate IVA-IP1 when it is applied to the BSE problem, IVE-OC~\cite{scheibler2019overiva} updates
$W_{\z}$ using \eqref{eq:IVE-IP1:Im(Wz):1} and \eqref{eq:IVE-IP1:Im(Wz):2}
(see Section~\ref{sec:model:IVE-OC} and Algorithm~\ref{alg:IVE-IP1}).
Although this update rule seems heuristic, we reveal in Section~\ref{sec:IVE-IP1}
that IVE-OC can be interpreted as a BCD algorithm for the proposed IVE, which does not rely on OC.
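To make these update rules concrete, the following minimal NumPy sketch performs one IVE-IP1 (IVE-OC) sweep at a single frequency bin, i.e., \eqref{eq:BSS-IP1:1}--\eqref{eq:BSS-IP1:2} followed by the OC-based $W_{\z}$ update \eqref{eq:IVE-IP1:Im(Wz):1}--\eqref{eq:IVE-IP1:Im(Wz):2}. Zero-based indices and illustrative variable names are assumed; this is a sketch, not a reference implementation.
\begin{verbatim}
import numpy as np

def ive_ip1_sweep(W, V, Vz):
    # W  : separation matrix [w_1, ..., w_K, W_z], shape (M, M), complex
    # V  : weighted covariances V_1, ..., V_K, shape (K, M, M)
    # Vz : sample covariance of the mixture, shape (M, M)
    M = W.shape[0]
    K = V.shape[0]
    for i in range(K):
        # u_i = (W^h V_i)^{-1} e_i ;  w_i = u_i (u_i^h V_i u_i)^{-1/2}
        u = np.linalg.solve(W.conj().T @ V[i], np.eye(M)[:, i])
        W[:, i] = u / np.sqrt(np.real(u.conj() @ V[i] @ u))
        # OC-based update: W_z = [ (Ws^h Vz Es)^{-1} (Ws^h Vz Ez) ; -I_{M-K} ]
        WsVz = W[:, :K].conj().T @ Vz
        Wz_top = np.linalg.solve(WsVz[:, :K], WsVz[:, K:])
        W[:, K:] = np.vstack([Wz_top, -np.eye(M - K)])
    return W
\end{verbatim}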
\subsubsection{IVA-IP2}
\label{sec:IVA-IP2}
When $(K, M) = (1, 2)$ or $(2, 2)$, i.e., when $W = [\bm{w}_1, \bm{w}_2]$, problem \eqref{problem:maxdet} can be solved directly (not iteratively) through a generalized eigenvalue problem~\cite{degerine2006maxdet,ono2012auxiva-stereo},
which is more efficient than IVA-IP1.
We extend this direct method to the case where $K = 1$ and general $M$ ($\geq 2$) in Section~\ref{sec:IVE-IP2:K=1}.
In a previous work~\cite{ono2018asj},
this IVA-IP2 was extended to the case where $K = M \geq 3$.
This algorithm, which is also called IVA-IP2, updates two separation filters, e.g., $\bm{w}_i$ and $\bm{w}_j$, in each iteration by exactly solving the following subproblem:
\begin{align}
(\bm{w}_i, \bm{w}_j) \in \argmin_{\bm{w}_i,\,\bm{w}_j} g (\bm{w}_1,\ldots,\bm{w}_M, r).
\end{align}
We extend this algorithm in Section~\ref{sec:IVE-IP2-new:K>1}.
\subsection{Proposed IVE-IP2 for the case of $K = 1$}
\label{sec:IVE-IP2:K=1}
We focus on the case where $K = 1$ and derive the proposed algorithm IVE-IP2%
\footnote{
The algorithm IVE-IP2 for $K = 1$ was developed independently and simultaneously by Scheibler--Ono (called the fast IVE or FIVE~\cite{scheibler2020fast}) and the authors~\cite{ike2020overiva} in the Proceedings of ICASSP2020.
In this paper, we give a complete proof for the derivation of IVE-IP2 and add some remarks
(see also Section~\ref{sec:source-extraction} for the projection back operation).
}
that is expected to be more efficient than IVE-IP1.
The algorithm is summarized in Algorithm~\ref{alg:IVE-IP2:K=1}.
When $(K, M) = (1, 2)$ or $(2, 2)$, problem \eqref{problem:maxdet} can be solved directly through a generalized eigenvalue problem~\cite{degerine2006maxdet,ono2012auxiva-stereo,ono2018asj}.
We here extend this direct method to the case where $K = 1$ and $M \geq 2$ in the following proposition.
\begin{proposition}
\label{prop:IVE-IP2:K=1}
Let $K = 1$, $M \geq 2$, and $V_1,V_{\z} \in \mathcal{S}_{++}^M$.
A matrix $W = [ \bm{w}_1, W_{\z} ] \in \GL(M)$ with $\bm{w}_1 \in \mathbb{C}^M$ satisfies
stationary conditions \eqref{eq:KKT:wk} and \eqref{eq:KKT:Wz} if and only if
\begin{align}
\label{eq:IVE-IP2:K=1:w1}
\bm{w}_1 &= \bm{u}_1 \left( \bm{u}_1^h V_1 \bm{u}_1 \right)^{- \frac{1}{2}} Q_1, \quad Q_1 \in \mathcal{U}(1),
\\
\label{eq:IVE-IP2:K=1:Wz}
W_{\z} &= U_{\z} \left( U_{\z}^h V_{\z} U_{\z} \right)^{- \frac{1}{2}} Q_{\z}, \quad Q_{\z} \in \mathcal{U}(M - 1),
\\
\label{eq:IVE-IP2:K=1:orth}
U_{\z} &\in \mathbb{C}^{M \times (M - 1)} \quad \text{with} \quad U_{\z}^h V_{\z} \bm{u}_1 = \bm{0}_{M - 1},
\\
\label{eq:IVE-IP2:K=1:eig}
\bm{u}_1 &\in \mathbb{C}^{M \times 1} \quad \text{with} \quad V_{\z} \bm{u}_1 = \lambda V_1 \bm{u}_1,
\end{align}
where $\mathcal{U}(d)$ is the set of all unitary matrices of size $d \times d$.
Also, \eqref{eq:IVE-IP2:K=1:eig} is the generalized eigenvalue problem for $(V_{\z}, V_1)$
with the eigenvalue $\lambda \in \mathbb{R}_{> 0}$ and eigenvector $\bm{u}_1 \in \mathbb{C}^M$.
Moreover, if the generalized eigenvalue $\lambda \in \mathbb{R}_{> 0}$ in \eqref{eq:IVE-IP2:K=1:eig} is chosen as the largest one,
then any $W \in \GL(M)$ obtained by \eqref{eq:IVE-IP2:K=1:w1}--\eqref{eq:IVE-IP2:K=1:eig}
is a global optimal solution for problem \eqref{problem:maxdet}.
\end{proposition}
\begin{proof}
We first show that $W$ satisfies \eqref{eq:KKT:wk}--\eqref{eq:KKT:Wz} if and only if it is computed by \eqref{eq:IVE-IP2:K=1:w1}--\eqref{eq:IVE-IP2:K=1:eig}.
For the ``if'' part, observe that
$U_{\z}^h V_1 \bm{u}_1 = \bm{0}_{M - 1}$ holds by \eqref{eq:IVE-IP2:K=1:orth}--\eqref{eq:IVE-IP2:K=1:eig} and $\lambda \neq 0$.
Thus, $W$ surely satisfies \eqref{eq:KKT:wk}--\eqref{eq:KKT:Wz}.
We prove the ``only if'' part.
The stationary conditions~\eqref{eq:KKT:wk}--\eqref{eq:KKT:Wz} imply that vectors $V_1 \bm{w}_1$ and $V_{\z} \bm{w}_1$ are orthogonal to the subspace $\image W_{\z}$ of dimension $M - 1$.
Hence, it holds that $\bm{w}_1= c \bm{u}_1$ for some $c \in \mathbb{C}$, where $\bm{u}_1$ is given by \eqref{eq:IVE-IP2:K=1:eig}.
This $c$ is restricted by $\bm{w}_1^h V_1 \bm{w}_1 = 1$, and we obtain \eqref{eq:IVE-IP2:K=1:w1}.
In a similar manner, \eqref{eq:KKT:Wz} implies that vector $V_{\z} \bm{u}_1$ is orthogonal to $\image W_{\z}$ of dimension $M - 1$.
Hence, it holds that $W_{\z} = U_{\z} R$ for some $R \in \mathbb{C}^{(M - 1) \times (M - 1)}$, where $U_{\z}$ is given by \eqref{eq:IVE-IP2:K=1:orth}.
This $R$ is restricted by $W_{\z}^h V_{\z} W_{\z} = I_{M - 1}$, and we have \eqref{eq:IVE-IP2:K=1:Wz}.
We next show the latter statement.
By Proposition~\ref{prop:loss:lower-bounded} in Appendix~\ref{appendix:lemma}, global optimal solutions exist, and they must satisfy the stationary conditions, which are equivalent to \eqref{eq:IVE-IP2:K=1:w1}--\eqref{eq:IVE-IP2:K=1:eig}.
Since \eqref{eq:IVE-IP2:K=1:w1}--\eqref{eq:IVE-IP2:K=1:eig} satisfy \eqref{eq:KKT:wk}--\eqref{eq:KKT:Wz},
the sum of the first and second terms of $g$ becomes $M$, which is constant.
On the other hand, for the $\log \det$ term, it holds that
\begin{align*}
| \det W |&= \left( \bm{u}_1^h V_1 \bm{u}_1 \right)^{-\frac{1}{2}} \cdot \det \left( U_{\z}^h V_{\z} U_{\z} \right)^{-\frac{1}{2}} \cdot | \det U |
\\
&= \sqrt{\lambda} \det \left( U^h V_{\z} U \right)^{-\frac{1}{2}} \cdot | \det U |
= \sqrt{\lambda} \det (V_{\z})^{-\frac{1}{2}},
\end{align*}
where we define $U \coloneqq [\bm{u}_1, U_{\z}]$ and use $U_{\z}^h V_{\z} \bm{u}_1 = \bm{0}_{M - 1}$ in the second equality.
Hence, the largest $\lambda$ leads to the smallest $g$, which concludes the proof.
\end{proof}
By Proposition~\ref{prop:IVE-IP2:K=1}, under condition (C1), a global optimal solution for problem~\eqref{problem:maxdet}
can be obtained by updating $W = [\bm{w}_1, W_{\z}]$ using \eqref{eq:IVE-IP2:K=1:w1}--\eqref{eq:IVE-IP2:K=1:eig} with $Q_1 = 1$ and $Q_{\z} = I_{M - 1}$
and choosing the generalized eigenvalue $\lambda$ in \eqref{eq:IVE-IP2:K=1:eig} as the largest one.
Moreover, this algorithm can be accelerated by omitting the computation of \eqref{eq:IVE-IP2:K=1:Wz}--\eqref{eq:IVE-IP2:K=1:orth} and updating only $\bm{w}_1$ according to \eqref{eq:IVE-IP2:K=1:w1} and \eqref{eq:IVE-IP2:K=1:eig}.
This simplification is valid for extracting the single target-source signal because the formulas for computing $\bm{w}_1$, $V_1$, and $V_{\z}$ are independent of $W_{\z}$.
The obtained algorithm IVE-IP2 is shown in Algorithm~\ref{alg:IVE-IP2:K=1}.
Interestingly, because $V_{\z}$ and $V_1$ can be viewed as the covariance matrices of the mixture and noise signals,
the update rules \eqref{eq:IVE-IP2:K=1:w1} and \eqref{eq:IVE-IP2:K=1:eig} turn out to be a MaxSNR beamformer~\cite{vantrees2004,warsitz2007maxsnr}.
Hence, IVE-IP2 can be understood as a method that iteratively updates the MaxSNR beamformer $\bm{w}_1$
and target-source signal $\bm{s}_1$.
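As an implementation sketch of Algorithm~\ref{alg:IVE-IP2:K=1}, the update \eqref{eq:IVE-IP2:K=1:w1} and \eqref{eq:IVE-IP2:K=1:eig} at a single frequency bin can be written as follows. Here we assume SciPy's dense generalized Hermitian eigensolver, whereas the experiments in Section~\ref{sec:exp} use the power method instead; the function name is illustrative.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def ive_ip2_k1(V1, Vz):
    # Solve Vz u = lambda V1 u for the largest generalized eigenvalue and
    # return w_1 = u (u^h V1 u)^{-1/2}, i.e., a MaxSNR beamformer.
    _, U = eigh(Vz, V1)        # eigenvalues in ascending order
    u = U[:, -1]               # eigenvector of the largest eigenvalue
    return u / np.sqrt(np.real(u.conj() @ V1 @ u))
\end{verbatim}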
\subsection{Drawback of conventional IVE-IP2 for the case of $K \geq 2$}
\label{sec:IVE-IP2-old:K>1}
Suppose $2 \leq K < M$.
The conventional IVE-IP2 developed in {\cite[Algorithm 2]{scheibler2020ive}}
is a BCD algorithm that cyclically updates the pairs $(\bm{w}_1, W_{\z}), \ldots, (\bm{w}_K, W_{\z})$ by solving the following subproblem for each $i = 1,\ldots, K$ one by one:
\begin{align}
\label{problem:IVE-IP2}
(\bm{w}_i, W_{\z}) \in \argmin_{ \bm{w}_i,\, W_{\z} } g (\bm{w}_1, \ldots, \bm{w}_K, W_{\z}, r).
\end{align}
It was shown in \cite[Theorem~3]{scheibler2020ive} that a global optimal solution of problem \eqref{problem:IVE-IP2} can be obtained through a generalized eigenvalue problem.
In this paper, we simplify the result of \cite[Theorem~3]{scheibler2020ive} in the next proposition,
based on which we will speed up the conventional IVE-IP2 in Section~\ref{sec:IVE-IP2-new:K>1}.
\begin{proposition}
\label{prop:IVE-IP2:K>1}
Let $2 \leq K < M$, and $V_i,V_{\z} \in \mathcal{S}_{++}^M$.
A global optimal solution of problem~\eqref{problem:IVE-IP2} is obtained as
\begin{align}
\label{eq:IVE-IP2:K>1:wi}
\bm{w}_i &= P_i \bm{b}_i \left( \bm{b}_i^h G_i \bm{b}_i \right)^{- \frac{1}{2}} \in \mathbb{C}^{M \times 1},
\\
\label{eq:IVE-IP2:K>1:Wz}
W_{\z} &= P_{\z} B_{\z} \left( B_{\z}^h G_{\z} B_{\z} \right)^{- \frac{1}{2}} \in \mathbb{C}^{M \times (M - K)},
\\
\label{eq:IVE-IP2:K>1:P}
P_\ell &= \left((W')^h V_\ell \right)^{-1} [\, \bm{e}_i, E_{\z} \,] \in \mathbb{C}^{M \times (M - K +1)},
\\
\label{eq:IVE-IP2:K>1:G}
G_\ell &= P_\ell^h V_\ell P_\ell \in \mathcal{S}_{++}^{M - K + 1}, \quad \ell \in \{ i, \z \},
\\
\label{eq:IVE-IP2:K>1:W'}
W' &= [\, \bm{w}_1, \ldots, \bm{w}_i', \ldots, \bm{w}_K, W'_{\z} \,] \in \GL(M),
\\
\label{eq:IVE-IP2:K>1:orth}
B_{\z} &\in \mathbb{C}^{(M - K + 1) \times (M - K)} \quad \text{with} \quad B_{\z}^h G_{\z} \bm{b}_i = \bm{0}_{M - K},
\\
\label{eq:IVE-IP2:K>1:eig}
\bm{b}_i &\in \mathbb{C}^{(M - K + 1) \times 1} \quad \text{with} \quad G_i \bm{b}_i = \lambda_{\max} G_{\z} \bm{b}_i,
\end{align}
where $E_{\z}$ is defined by \eqref{eq:Ez},
and $\bm{w}_i' \in \mathbb{C}^{M \times 1}$ and $W'_{\z} \in \mathbb{C}^{M \times (M - K)}$ in \eqref{eq:IVE-IP2:K>1:W'} are set arbitrarily as long as $W'$ is nonsingular (for instance they can be set to the current values under condition (C2)).
Also, \eqref{eq:IVE-IP2:K>1:eig} is the generalized eigenvalue problem for $(G_i,G_{\z})$,
and $\bm{b}_i$ is the eigenvector corresponding to the largest generalized eigenvalue $\lambda_{\max} \in \mathbb{R}_{> 0}$.
\end{proposition}
\begin{proof}
The proof is given in Appendix~\ref{appendix:proof-of-prop3}.
\end{proof}
In the conventional IVE-IP2, under condition (C2),
problem \eqref{problem:IVE-IP2} is solved by \eqref{eq:IVE-IP2:K>1:wi}--\eqref{eq:IVE-IP2:K>1:W'}
where $B_{i,\z} \coloneqq [\bm{b}_i, B_{\z}]$ is computed through the following generalized eigenvalue problem
instead of using \eqref{eq:IVE-IP2:K>1:orth}--\eqref{eq:IVE-IP2:K>1:eig}:
\begin{align}
\label{eq:IVE-IP2:K>1:old:GEV}
G_i B_{i,\z} = G_{\z} B_{i,\z} \diag \{ \lambda_\mathrm{max}, \lambda_2, \ldots,\lambda_{M - K + 1} \}.
\end{align}
Since the computational cost of solving \eqref{eq:IVE-IP2:K>1:old:GEV} is greater than that of computing \eqref{eq:IVE-IP2:K>1:orth}--\eqref{eq:IVE-IP2:K>1:eig},
we can speed up the conventional IVE-IP2, as shown in the next subsection.
\subsection{Proposed IVE-IP2 for $K \geq 2$}
\label{sec:IVE-IP2-new:K>1}
The proposed IVE-IP2 for $K \geq 2$, which is summarized in Algorithm~\ref{alg:IVE-IP2:K>1}, is an acceleration of the conventional IVE-IP2 described in the previous subsection.
We here provide another efficient formula to solve \eqref{problem:IVE-IP2}
without changing the behavior of the conventional IVE-IP2.
Thanks to the newly provided Proposition~\ref{prop:IVE-IP2:K>1},
we can update the pair $(\bm{w}_i, W_{\z})$ according to \eqref{eq:IVE-IP2:K>1:wi}--\eqref{eq:IVE-IP2:K>1:eig} in which
only the first generalized eigenvector has to be computed.
Moreover, for the purpose of optimizing $\bm{w}_1,\ldots,\bm{w}_K$,
we do not have to update $W_{\z}$
using Eqs. \eqref{eq:IVE-IP2:K>1:Wz} and \eqref{eq:IVE-IP2:K>1:orth}
when solving problem \eqref{problem:IVE-IP2} for each $i = 1,\ldots,K$.
To see this, observe that
\begin{enumerate}
\item in the MM algorithm (Eqs. \eqref{problem:maxdet}--\eqref{eq:MM:si}), the auxiliary variable $r$, and by extension, the covariance matrices $V_1, \ldots, V_K, V_{\z}$ are independent of $W_{\z}$; and hence,
\item $W_{\z}$ never contributes to the construction of the surrogate function $g$ during iterative optimization.
\end{enumerate}
This observation implies that problem \eqref{problem:IVE-IP2} remains the same during iterative optimization regardless of whether we update $W_{\z}$ or not.
Hence, in IVE-IP2, it is sufficient to update only $\bm{w}_i$ in \eqref{problem:IVE-IP2} using Eqs. \eqref{eq:IVE-IP2:K>1:wi}, \eqref{eq:IVE-IP2:K>1:P}--\eqref{eq:IVE-IP2:K>1:W'},
\eqref{eq:IVE-IP2:K>1:eig}.
This advantage stems from the stationary Gaussian assumption for the noise components.
By contrast, suppose that the noise components are nonstationary or non-Gaussian.
Then, the surrogate function $g$ as well as $V_{\z}$ should depend on $W_{\z}$ in the same way that $V_i$ depends on $\bm{w}_i$, meaning that $W_{\z}$ must be optimized in subproblem~\eqref{problem:IVE-IP2}.
In this way, the stationary Gaussian assumption is important to derive a computationally efficient algorithm for IVE.
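A minimal NumPy/SciPy sketch of the resulting per-source update, i.e., \eqref{eq:IVE-IP2:K>1:wi} together with \eqref{eq:IVE-IP2:K>1:P}--\eqref{eq:IVE-IP2:K>1:W'} and \eqref{eq:IVE-IP2:K>1:eig}, with the $W_{\z}$ update skipped, is shown below. A dense generalized eigensolver and zero-based indexing are used purely for illustration.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def ive_ip2_update_wi(W, Vi, Vz, i, K):
    # W  : current separation matrix (its current value plays the role of W'),
    #      shape (M, M), complex
    # Vi : weighted covariance V_i, shape (M, M)
    # Vz : sample covariance of the mixture, shape (M, M)
    M = W.shape[0]
    E = np.eye(M)[:, [i] + list(range(K, M))]   # [e_i, E_z], shape (M, M-K+1)
    P_i = np.linalg.solve(W.conj().T @ Vi, E)   # P_i = (W^h V_i)^{-1} [e_i, E_z]
    P_z = np.linalg.solve(W.conj().T @ Vz, E)   # P_z = (W^h V_z)^{-1} [e_i, E_z]
    G_i = P_i.conj().T @ Vi @ P_i
    G_z = P_z.conj().T @ Vz @ P_z
    _, B = eigh(G_i, G_z)                       # G_i b = lambda G_z b, ascending
    b = B[:, -1]                                # eigenvector of the largest lambda
    W[:, i] = (P_i @ b) / np.sqrt(np.real(b.conj() @ G_i @ b))
    return W
\end{verbatim}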
\subsection{IVE-IP1: Reinterpretation of the conventional IVE-OC}
\label{sec:IVE-IP1}
In this subsection, we model the noise components by \eqref{eq:z:R} with a general time-independent covariance matrix $\Omega_{\z}$, i.e.,
\begin{align}
- \log p(\bm{z}(t)) = \bm{z}(t)^h \Omega_{\z}^{-1} \bm{z}(t) + \log \det \Omega_{\z} + \const,
\end{align}
and
develop a BCD algorithm called IVE-IP1 that cyclically updates $\bm{w}_1,(W_{\z},\Omega_{\z}),\bm{w}_2,(W_{\z},\Omega_{\z}), \ldots,\bm{w}_K, (W_{\z},\Omega_{\z})$ by solving the following subproblems one by one:
\begin{align}
\label{problem:IVE-IP1:wk}
\bm{w}_i &\in \argmin_{\bm{w}_i} g (\bm{w}_1,\ldots,\bm{w}_K,W_{\z},\Omega_{\z},r),
\\
\label{problem:IVE-IP1:Wz}
(W_{\z},\Omega_{\z}) &\in \argmin_{W_{\z},\, \Omega_{\z}} g (\bm{w}_1,\ldots,\bm{w}_K,W_{\z},\Omega_{\z},r).
\end{align}
Here, the frequency bin index $f$ is omitted.
Because the noise covariance $\Omega_{\z}$ is now taken into account, the objective function $g$ slightly changes to
\begin{alignat}{2}
\nonumber
g( W, \Omega_{\z}, r) &=&\,& \sum_{i = 1}^K \bm{w}_i^h V_i \bm{w}_i
+ \trace \left( W_{\z}^h V_{\z} W_{\z} \Omega_{\z}^{-1} \right)
\\
\label{eq:obj:ip1}
&&&+ \log \det \Omega_{\z} - 2\log | \det W |.
\end{alignat}
It will turn out that the BCD algorithm IVE-IP1 is identical to the conventional IVE-OC summarized in Algorithm~\ref{alg:IVE-IP1}.
In other words, IVE-OC, which relies on the orthogonal constraint (OC), can be interpreted as a BCD algorithm for the proposed IVE without OC.
Since problem \eqref{problem:IVE-IP1:wk} is the same as problem \eqref{problem:BSS-IP1},
the update rules for $\bm{w}_1, \ldots, \bm{w}_K$ are given by \eqref{eq:BSS-IP1:1}--\eqref{eq:BSS-IP1:2}.
On the other hand, an algorithm for problem \eqref{problem:IVE-IP1:Wz} can be obtained from the next proposition.
\begin{proposition}
\label{prop:IVE-IP1:Wz}
Let $K, M \in \mathbb{N}$ with $K < M$,
$V_{\z} \in \mathcal{S}_{++}^M$,
and let $W_{\s} = [\bm{w}_1,\ldots,\bm{w}_K] \in \mathbb{C}^{M \times K}$ be full column rank.
Then, a pair $(W_{\z}, \Omega_{\z})$ is a global optimal solution of problem \eqref{problem:IVE-IP1:Wz}
if and only if it is computed by
\begin{align}
\label{eq:prop:Wz}
W_{\z} &\in \mathbb{C}^{M \times (M - K)} \quad \text{with} \quad W_{\s}^h V_{\z} W_{\z} = O_{K, M - K},
\\
\label{eq:prop:Wz:2}
\Omega_{\z} &= W_{\z}^h V_{\z} W_{\z} \in \mathcal{S}_{++}^{M - K}.
\end{align}
(Note that $W_{\z}$ must be full column rank to guarantee the positive definiteness of $\Omega_{\z}$.)
\end{proposition}
\begin{proof}
The stationary condition, i.e., the first-order necessary optimality condition of problem \eqref{problem:IVE-IP1:Wz}, is expressed as
\begin{alignat}{3}
\label{eq:prop-ip1:1}
\frac{\partial g}{\partial W_{\z}^\ast} &= O &~~
&\Longleftrightarrow&~~&
\begin{cases}
W_{\s}^h V_{\z} W_{\z} = O_{K, M - K},
\\
W_{\z}^h V_{\z} W_{\z} = \Omega_{\z},
\end{cases}
\\
\label{eq:prop-ip1:3}
\frac{\partial g}{\partial \Omega_{\z}^{-1}} &= O &~~
&\Longleftrightarrow&~~&
W_{\z}^h V_{\z} W_{\z} = \Omega_{\z}.
\end{alignat}
Hence, the ``only if'' part is obvious.
To see the ``if'' part, we show that all stationary points are globally optimal.
To this end, it suffices to prove that
\begin{description}
\item[(i)] $g$ takes the same value on all stationary points; and
\item[(ii)] $g$ attains its minimum at some $(W_{\z}, \Omega_{\z})$.
\end{description}
We first check (i).
By~\eqref{eq:prop:Wz:2}, the second term of $g$ becomes
$\trace (W_{\z}^h V_{\z} W_{\z} \Omega_{\z}^{-1} ) = M - K$, which is constant.
Let $(W_{\z}, \Omega_{\z})$ and $(W_{\z}', \Omega_{\z}')$ be two stationary points.
Since $\image W_{\z} = \image W'_{\z}$ by the first part of \eqref{eq:prop-ip1:1}, we have $W_{\z}' = W_{\z} Q$ for some $Q \in \GL(M - K)$.
Then, by \eqref{eq:prop-ip1:3}, $\Omega_{\z}' = Q^h \Omega_{\z} Q$.
Hence, for the $\log \det$ terms of $g$, we have
\begin{alignat*}{2}
&&\,& \log \det \Omega_{\z}' - 2\log |\det [W_{\s}, W_{\z}'] |
\\
&=&& \log \det (Q^h \Omega_{\z} Q) - 2 \log |\det [W_{\s}, W_{\z}] \cdot \det Q|
\\
&=&& \log \det \Omega_{\z} - 2\log |\det [W_{\s}, W_{\z}] |,
\end{alignat*}
which concludes (i).
We next show (ii).
Let us change the variables from $W_{\z}$ and $\Omega_{\z}$ to $W_{\z}' = W_{\z} \Omega_{\z}^{- \frac{1}{2}}$ and $\Omega_{\z}$.
Then, the objective function $g$ with respect to $W_{\z}'$ and $\Omega_{\z}$ is expressed as
\begin{align*}
g(W_{\z}', \Omega_{\z}) =
\trace (W_{\z}'^h V_{\z} W_{\z}') - 2\log | \det [W_{\s}, W_{\z}' ] | + \const,
\end{align*}
which is independent of $\Omega_{\z}$.
By Proposition~\ref{prop:loss:lower-bounded},
the problem of minimizing $g$ with respect to $W_{\z}'$ attains its minimum at some $W_{\z}'$, which in turn implies that problem \eqref{problem:IVE-IP1:Wz} also attains its minimum at some $(W_{\z}, \Omega_{\z})$.
\end{proof}
Proposition~\ref{prop:IVE-IP1:Wz} implies that
in problem \eqref{problem:IVE-IP1:Wz}
it is sufficient to optimize $\image (W_{\z})$, instead of $W_{\z}$, to satisfy the orthogonal constraint (OC), i.e., Eq. \eqref{eq:prop:Wz},
which is part of the stationary condition \eqref{eq:prop-ip1:1}.%
\footnote{
The observation that OC appears as part of the stationary condition was pointed out in \cite{scheibler2020ive}.
}
Since the update formulas \eqref{eq:IVE-IP1:Im(Wz):1} and \eqref{eq:IVE-IP1:Im(Wz):2} for $W_{\z}$ in IVE-OC surely satisfy OC,
they provide a solution of problem \eqref{problem:IVE-IP1:Wz}.
Hence, it turns out that the conventional IVE-OC summarized in Algorithm~\ref{alg:IVE-IP1} can be obtained as BCD for the proposed IVE.
\subsection{Source extraction and projection back}
\label{sec:source-extraction}
After optimizing the separation matrix $W$ in IVE, we can achieve source extraction by computing the minimum mean square error (MMSE) estimator of the source spatial image $\bm{x}_i$ for each $i = 1,\ldots,K$:
\begin{align}
\label{eq:MMSE}
\hat{\bm{x}}_i(f,t) \coloneqq
\underbrace{ \left( W(f)^{-h} \bm{e}_i \right) }_{ \text{projection back} }
\underbrace{ \left( \bm{w}_i(f)^h \bm{x}(f,t) \right) }_{ \text{source extraction} }.
\end{align}
The projection back operation~\cite{murata2001projection-back} adjusts the amplitude ambiguity of the extracted signals between frequency bins.
We here show that the projection back operation, i.e., the computation of $W^{-h} \bm{e}_i$ for $i = 1,\ldots,K$,
requires only $\bm{w}_1,\ldots,\bm{w}_K$ and $\image W_{\z}$, but not $W_{\z}$ itself
(the frequency bin index $f$ is dropped for simplicity).
To see this, let
\begin{align}
\label{eq:W^i}
W^{(i)} = [\bm{w}_1,\ldots,\bm{w}_K, W_{\z}^{(i)}] \in \GL(M), \quad i \in \{1, 2 \}
\end{align}
satisfy $\image W_{\z}^{(1)} = \image W_{\z}^{(2)}$.
Then, we have
\begin{align*}
W^{(1)} = W^{(2)} \begin{bmatrix}
I_K & O \\
O & D \\
\end{bmatrix}
~\text{for some $D \in \GL(M - K)$}.
\end{align*}
This implies $\left( W^{(1)} \right)^{-h} \bm{e}_i = \left( W^{(2)} \right)^{-h} \bm{e}_i$ for $i = 1,\ldots,K$,
meaning that the projection back operation depends only on $\image W_{\z}$, as we required.
\begin{remark}
\label{remark:projection-back}
Since the proposed IVE-IP2 will never update $W_{\z}$ from its initial value,
we update $\image W_{\z}$ using \eqref{eq:IVE-IP1:Im(Wz):1} and \eqref{eq:IVE-IP1:Im(Wz):2} after the optimization,
which corresponds to Step~\ref{step:Im(Wz)} in Algorithm~\ref{alg:main}.
This update rule is valid for this purpose
because it satisfies the stationary condition~\eqref{eq:KKT:Wz},
which is equivalent to~\eqref{eq:prop-ip1:1} with $\Omega_{\z} = I_{M - K}$.
\end{remark}
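A minimal NumPy sketch of the extraction and projection back step \eqref{eq:MMSE} at a single frequency bin is as follows (illustrative names; zero-based index $i$).
\begin{verbatim}
import numpy as np

def extract_with_projection_back(W, x, i):
    # W : optimized separation matrix, shape (M, M), complex
    # x : observed mixture at this frequency bin, shape (T, M), complex
    M = W.shape[0]
    a_i = np.linalg.solve(W.conj().T, np.eye(M)[:, i])  # W^{-h} e_i (projection back)
    s_i = x @ W[:, i].conj()                             # s_i(t) = w_i^h x(t)
    return np.outer(s_i, a_i)                            # x_i(t) = (W^{-h} e_i) s_i(t)
\end{verbatim}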
\section{Algorithm for semiblind BSE problem}
\label{sec:semi-BSE}
\subsection{Proposed Semi-IVE}
\label{sec:semi-ive}
We next address the semiblind source extraction problem (Semi-BSE) defined in Section~\ref{sec:problem}.
In Semi-BSE, we are given a priori the $L$ transfer functions (steering vectors) for target sources $l = 1,\ldots,L$ ($1 \leq L \leq K$), denoted as
\begin{align}
A_1(f) \coloneqq [\, \bm{a}_1(f), \ldots, \bm{a}_L(f) \,] \in \mathbb{C}^{M \times L}.
\end{align}
In this situation, it is natural in the context of the linearly constrained minimum variance (LCMV~\cite{vantrees2004}) beamforming algorithm
to constrain the separation matrix as
$W(f)^h A_1(f) = E_1$, where $E_1$ is defined by \eqref{eq:E1} below.
We can thus immediately propose the Semi-BSE method called Semi-IVE
as IVE with the linear constraints, i.e.,
\begin{align}
\label{problem:semi-BSE:main}
\left.
\begin{array}{cl}
\underset{W}{\minimize} & g_0 (W) \quad \text{(defined by \eqref{eq:loss})} \\
\subject-to & W(f)^h A_1(f) = E_1, \quad f = 1,\ldots,F.
\end{array}
\right\}
\end{align}
Here, $W \coloneqq \{ W(f) \}_{f = 1}^F$.
The optimization of Semi-IVE is also based on the MM algorithm described in Section~\ref{sec:BSE:MM}.
The surrogate function for Semi-IVE is the same as that of IVE, i.e., Eq. \eqref{eq:loss:MM},
and the problem of minimizing the surrogate function becomes the following constrained optimization problem for each frequency bin $f = 1,\ldots,F$:
\begin{align}
\label{problem:semi-BSE}
\left.
\begin{array}{cl}
\underset{W(f)}{\minimize} & g (W(f), r) \quad \text{(defined by \eqref{eq:loss:MM})} \\
\subject-to & W(f)^h A_1(f) = E_1.
\end{array}
\right\}
\end{align}
The goal of this section is to develop an efficient algorithm to obtain a (local) optimal solution for problem~\eqref{problem:semi-BSE}.
Hereafter, we omit frequency bin index $f$ and use the following notations for simplicity:
\begin{align}
\label{eq:E1}
E_1 &= [\, \bm{e}_1^{(M)}, \ldots,\bm{e}_L^{(M)} \,] = \begin{bmatrix}
I_L \\
O_{M - L, L}
\end{bmatrix} \in \mathbb{C}^{M \times L},
\\
\label{eq:E2}
E_2 &= [\, \bm{e}_{L + 1}^{(M)}, \ldots,\bm{e}_M^{(M)} \,] = \begin{bmatrix}
O_{L, M - L} \\
I_{M - L}
\end{bmatrix} \in \mathbb{C}^{M \times (M - L)},
\\
\label{eq:W2}
W_2 &= [\, \bm{w}_{L+1},\ldots,\bm{w}_K, W_{\z} \,] \in \mathbb{C}^{M \times (M - L)}.
\end{align}
Here, the superscript $\empty^{(d)}$ in $\bm{e}_i^{(d)} \in \mathbb{C}^d$ indicates the dimension of the unit vector.
\subsection{LCMV beamforming algorithm for $\bm{w}_1,\ldots,\bm{w}_L$}
\label{sec:semi-BSE:LCMV}
By applying Proposition~\ref{prop:det} in Appendix~\ref{appendix:lemma}, the objective function can be expressed as
\begin{alignat}{3}
\nonumber
g(W) &=&\,& \sum_{i = 1}^K \bm{w}_i^h V_i \bm{w}_i + \trace (W_{\z}^h V_{\z} W_{\z})
\\
\label{eq:loss:semi-BSE}
&&&+ \log \det \left( A_1^h A_1 \right) - \log \det \left( W_2^h W_2 \right).
\end{alignat}
Hence, in problem \eqref{problem:semi-BSE}, separation filters $\bm{w}_1,\ldots,\bm{w}_L$ that correspond to the given transfer functions $\bm{a}_1,\ldots,\bm{a}_L$ can be globally optimized by solving
\begin{align}
\label{problem:BF:W}
\tag{LCMV}
\left.
\begin{array}{cl}
\underset{\bm{w}_i}{\minimize} & \bm{w}_i^h V_i \bm{w}_i \\
\subject-to & \bm{w}_i^h A_1 = (\bm{e}_i^{(L)})^\top \in \mathbb{C}^{1 \times L}.
\end{array}
\right\}
\end{align}
This problem is nothing but the LCMV beamforming problem, which is solved in closed form as
\begin{align}
\bm{w}_i = V_i^{-1} A_1 \left(A_1^h V_i^{-1} A_1 \right)^{-1} \bm{e}_i^{(L)} \in \mathbb{C}^M.
\end{align}
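In code, this closed-form LCMV solution can be sketched as follows (illustrative function name; zero-based index $i < L$).
\begin{verbatim}
import numpy as np

def lcmv_filter(Vi, A1, i):
    # Vi : weighted covariance V_i, shape (M, M), positive definite
    # A1 : given steering vectors [a_1, ..., a_L], shape (M, L)
    L = A1.shape[1]
    Vi_inv_A1 = np.linalg.solve(Vi, A1)       # V_i^{-1} A_1
    gram = A1.conj().T @ Vi_inv_A1            # A_1^h V_i^{-1} A_1, shape (L, L)
    return Vi_inv_A1 @ np.linalg.solve(gram, np.eye(L)[:, i])
\end{verbatim}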
\subsection{Block coordinate descent (BCD) algorithm for $W_2$}
\label{sec:Semi-IVE}
We develop a BCD algorithm for optimizing the remaining variables $W_2$.
Since $W_2 \in \mathbb{C}^{M \times (M - L)}$ is restricted by the $(M - L)L$ linear constraints $W_2^h A_1 = O_{M - L, L}$,
it can be parameterized using $(M - L)^2$ variables.
One such choice is given by
\begin{align}
\label{eq:semi-BSE:W2}
W_2 &= W_2' \overline{W} \in \mathbb{C}^{M \times (M - L)}, \quad \overline{W} \in \mathbb{C}^{(M - L) \times (M - L)},
\\
\label{eq:semi-BSE:W2'}
W_2' &= [A_1, E_2]^{-h} E_2 \in \mathbb{C}^{M \times (M - L)},
\end{align}
where $E_2$ is defined in \eqref{eq:E2}, and it is assumed that $[A_1, E_2]$ is nonsingular.
It is easy to see that this $W_2$ certainly satisfies the linear constraints.
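As a quick numerical sanity check of this parameterization (illustrative code on random data; the matrix sizes are arbitrary), any $W_2 = W_2' \overline{W}$ with $W_2' = [A_1, E_2]^{-h} E_2$ indeed satisfies $W_2^h A_1 = O$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, L = 6, 2
A1 = rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))
E2 = np.eye(M)[:, L:]                             # shape (M, M - L)
B = np.hstack([A1, E2])                           # assumed nonsingular
W2p = np.linalg.solve(B.conj().T, E2)             # W_2' = [A_1, E_2]^{-h} E_2
Wbar = rng.standard_normal((M - L, M - L)) \
     + 1j * rng.standard_normal((M - L, M - L))
W2 = W2p @ Wbar
assert np.allclose(W2.conj().T @ A1, 0.0)         # linear constraints hold
\end{verbatim}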
By substituting \eqref{eq:semi-BSE:W2}--\eqref{eq:semi-BSE:W2'} into \eqref{eq:loss:semi-BSE},
we can reformulate problem \eqref{problem:semi-BSE} as an unconstrained optimization problem of minimizing
\begin{align*}
\overline{g} (\overline{W}, r)
= \sum_{i = L + 1}^K \overline{\bm{w}}_i^h \overline{V}_{\!i} \overline{\bm{w}}_i
+ \trace (\overline{W}_{\!\z}^h \overline{V}_{\!\z} \overline{W}_{\!\z})
- 2 \log |\det \overline{W} |
\end{align*}
with respect to $\overline{W}$, where we define
\begin{align}
\overline{W} &= [\, \overline{\bm{w}}_{L + 1}, \ldots, \overline{\bm{w}}_K, \overline{W}_{\!\z} \,] \in \mathbb{C}^{(M - L) \times (M - L)},
\\
\overline{\bm{w}}_i &\in \mathbb{C}^{(M - L) \times 1}, \quad i = L + 1, \ldots, K,
\\
\overline{W}_{\!\z} &\in \mathbb{C}^{(M - L) \times (M - K)},
\\
\overline{V}_{\!i} &= \left( W_2' \right)^h V_i W_2' \in \mathcal{S}_{++}^{M - L}, ~~~ i \in \{ L + 1, \ldots, K, \z \}.
\end{align}
Interestingly, this problem is nothing but a BSE problem of the form already discussed in Section~\ref{sec:BSE}.
In a similar manner to IVE-IP2, our proposed Semi-IVE algorithm updates $\overline{\bm{w}}_{L + 1},\ldots,\overline{\bm{w}}_{K}$ one by one by solving the following subproblem
for each $i = L + 1, \ldots, K$:
\begin{align}
\label{problem:Semi-IVE}
(\overline{\bm{w}}_i, \overline{W}_{\!\z}) \in \argmin_{\overline{\bm{w}}_i,\,\overline{W}_{\!\z}} \overline{g} (\overline{\bm{w}}_{L + 1},\ldots,\overline{\bm{w}}_K,\overline{W}_{\!\z}, r).
\end{align}
\begin{itemize}
\item When $L = K - 1$ and $\overline{W} = [\, \overline{\bm{w}}_K, \overline{W}_{\!\z} \,]$,
a global optimal solution for problem \eqref{problem:Semi-IVE}
can be obtained by applying Proposition~\ref{prop:IVE-IP2:K=1}.
\item When $1 \leq L \leq K - 2$,
a global optimal solution of \eqref{problem:Semi-IVE} can be obtained by applying Proposition~\ref{prop:IVE-IP2:K>1}.
\item Note that for the same reason as in the proposed IVE-IP2,
updating $\overline{W}_{\!\z}$ in \eqref{problem:Semi-IVE} is not necessary.
\end{itemize}
In summary, Semi-IVE can be presented in Algorithm~\ref{alg:Semi-IVE}, in which we use the following notations:
\begin{align}
I_{M - L} &= [\, \overline{\bm{e}}_{L + 1}, \ldots, \overline{\bm{e}}_K, \overline{E}_{\z} \,] \in \GL (M - L),
\\
\overline{\bm{e}}_i &= \bm{e}_{i - L}^{(M - L)} \in \mathbb{C}^{M - L}, \quad i = L + 1, \ldots, K,
\\
\overline{E}_{\z} &= [\, \bm{e}_{K - L + 1}^{(M - L)}, \ldots, \bm{e}_{M - L}^{(M - L)} \,] \in \mathbb{C}^{(M - L) \times (M - K)}.
\end{align}
\begin{remark}
Semi-IVE does not update $\overline{W}_{\!\z}$ and $W_{\z}$ from the initial values.
Hence, for the same reason as in Remark~\ref{remark:projection-back}, we need to optimize $\image \overline{W}_{\!\z}$ and $\image W_{\z}$ after the optimization and before performing the projection back operation.
For this purpose, we adopt the following formula:
\begin{align}
\label{eq:Semi-IVE:Im(Wz)}
W_{\z} = W_2' \overline{W}_{\!\z},\quad
\overline{W}_{\!\z} = \begin{bmatrix}
(\overline{W}_{\!\s}^h \overline{V}_{\!\z} \overline{E}_{\s})^{-1} (\overline{W}_{\!\s}^h \overline{V}_{\!\z} \overline{E}_{\z}) \\
-I_{M - K}
\end{bmatrix}.
\end{align}
Here, we use the following notations:
\begin{alignat*}{3}
\overline{W} &= [\, \overline{W}_{\!\s}, \overline{W}_{\!\z} \,],
&\quad
\overline{W}_{\!\s} &= [\, \overline{\bm{w}}_{L + 1}, \ldots, \overline{\bm{w}}_{K} \,],
\\
I_{M - L} &= [\, \overline{E}_{\s}, \overline{E}_{\z} \,],
&\quad
\overline{E}_{\s} &= [\, \overline{\bm{e}}_{L + 1}, \ldots, \overline{\bm{e}}_{K} \,].
\end{alignat*}
The formula \eqref{eq:Semi-IVE:Im(Wz)} optimizes $\image \overline{W}_{\!\z}$ (and hence $\image W_{\z}$)
as described in Remark~\ref{remark:projection-back}.
\end{remark}
\section{Computational time complexity}
\label{sec:computational-complexity}
We compare the computational time complexity per iteration of the algorithms presented in Table~\ref{table:alg}.
In IVE-IP1, IVE-IP2, and Semi-IVE, the runtime is dominated by
\begin{itemize}
\item the computation of $V_1,\ldots,V_K \in \mathcal{S}_{++}^M$ at Step \ref{step:V} in Algorithm~\ref{alg:main}, which costs $\mathrm{O}(K M^2 FT)$; or
\item the matrix inversions and generalized eigenvalue decompositions in Algorithms~\ref{alg:IVE-IP1}--\ref{alg:Semi-IVE}, which cost
$\mathrm{O}(K M^3 F)$.
\end{itemize}
Thus, the time complexity of these algorithms is
\begin{align}
\mathrm{O}(K M^2 FT + K M^3 F).
\end{align}
On the other hand, the time complexity of IVA-IP1 and IVA-IP2 is
\begin{align}
\mathrm{O}(M^3 FT + M^4 F)
\end{align}
due to the computation of $M$ covariance matrices $V_1,\ldots,V_M$.
Consequently, the computational time complexity of IVE is smaller than that of IVA by a factor of $K / M$.
\begin{algorithm}[p]
\caption{IVE ($L = 0$) and Semi-IVE ($1 \leq L \leq K$)
based on generalized Gaussian distributions in Assumption~\ref{assumption:superGaussian}:
$G(\| \bm{s}_{i}(t) \|, \alpha_i) = \left( \frac{\| \bm{s}_i(t) \|}{\alpha_i} \right)^{\beta} + 2 F \log \alpha_i$.
Here, $\alpha_1,\ldots,\alpha_K$ are parameters to be optimized.
}
\label{alg:main}
\DontPrintSemicolon
{\setstretch{1.3}
\nl\KwData{Observed signal $\bm{x}$;}
\myinput{Number of sources $K$; and}
\myinput{$L$ transfer functions $A_1 = [\bm{a}_1,\ldots,\bm{a}_L]$,}
\myinput{where $0 \leq L \leq K$.}
\nl\KwResult{Separated spatial images $\bm{x}_1,\ldots,\bm{x}_K$.}
}
{\setstretch{1.4}
\nl\Begin{
\tcc{Initialization}
\nl $W(f) \leftarrow -I_M$\;
\nl $V_{\z}(f) \leftarrow \frac{1}{T} \sum_{t = 1}^T \bm{x}(f,t) \bm{x}(f,t)^h$\;
\nl \If{using IVE-IP1 or IVE-IP2}{
\nl Update $W_{\z}(f)$ using Step \ref{step:IVE-IP1:Wz} in Algorithm~\ref{alg:IVE-IP1}
}
\nl \If{using Semi-IVE}{
\nl $W_2'(f) \leftarrow [\, A_1(f), E_2 \,]^{-h} E_2$ \;
\nl $\overline{V}_{\!\z}(f) \leftarrow W_2'(f)^h V_{\z}(f) W_2'(f)$ \;
\nl Update $W_{\z}(f)$ using \eqref{eq:Semi-IVE:Im(Wz)}\;
}
\tcc{Optimization}
\nl \Repeat{convergence}{
\nl \For{$i = 1,\ldots,K$}{
\nl $s_i(f,t) \leftarrow \bm{w}_i(f)^h \bm{x}(f,t)$\;
\nl $r_i(t) \leftarrow \| \bm{s}_i(t) \|$\;
\nl $\alpha_i^{\beta} = \frac{\beta}{2F} (\frac{1}{T} \sum_t r_i(t)^\beta)$\;
\label{step:alpha}
\nl $\phi_i(t) \leftarrow \frac{G'(r_i(t), \alpha_i)}{2 r_i(t)}
= \frac{\beta}{2} \frac{1}{\alpha_i^\beta r_i(t)^{2 - \beta}}$\;
\nl $\phi_i(t) \leftarrow \min \{ \phi_i(t), 10^5 \times \min \{ \phi_i(t) \}_{t = 1}^T \}$
\tcp{for numerical stability}
\label{step:G'}
\nl $V_i(f) \leftarrow \frac{1}{T} \sum_t \phi_i(t) \bm{x}(f,t) \bm{x}(f,t)^h$\;
\label{step:V}
\nl $V_i(f) \leftarrow V_i(f) + 10^{-3} \trace\{ V_i(f) \} I_M$
\tcp{for numerical stability}
}
\nl Update $W(f)$ for each $f$ using IVE-IP1, IVE-IP2, or Semi-IVE.\;
}
\tcc{Source extraction}
\nl \If{using IVE-IP2}{
\nl Update $W_{\z}(f)$ using Step \ref{step:IVE-IP1:Wz} in Algorithm~\ref{alg:IVE-IP1}
\label{step:Im(Wz)}
}
\nl \If{using Semi-IVE}{
\nl Update $W_{\z}(f)$ using \eqref{eq:Semi-IVE:Im(Wz)}\;
}
\nl $\bm{x}_i(f,t) \leftarrow \left( W(f)^{-h} \bm{e}_i \right) \bm{w}_i(f)^h \bm{x}(f,t)$\;
}
}
}
\end{algorithm}
\begin{algorithm}[p]
\caption{IVE-IP1 (proposed in~\cite{scheibler2019overiva})}
\label{alg:IVE-IP1}
\setstretch{1.2}
\DontPrintSemicolon
\nl \For{$i = 1,\ldots,K$}{
\nl $\bm{u}_i(f) \leftarrow \left( W(f)^h V_i(f) \right)^{-1} \bm{e}_i$\;
\nl $\bm{w}_i(f) \leftarrow \bm{u}_i(f) \left( \bm{u}_i(f)^h V_i(f) \bm{u}_i(f) \right)^{-\frac{1}{2}}$\;
\nl
\label{step:IVE-IP1:Wz}
$W_{\z}(f) \leftarrow
{\small
\begin{bmatrix}
(W_{\s}(f)^h V_{\z}(f) E_{\s})^{-1} (W_{\s}(f)^h V_{\z}(f) E_{\z}) \\
-I_{M - K}
\end{bmatrix}
}$\;
}
\end{algorithm}
\begin{algorithm}[p]
\caption{IVE-IP2 for $K = 1$}
\label{alg:IVE-IP2:K=1}
\setstretch{1.2}
\DontPrintSemicolon
\nl Solve $V_{\z}(f) \bm{u} = \lambda_{\max} V_1(f) \bm{u}$ to obtain the eigenvector $\bm{u}$ corresponding to the largest eigenvalue $\lambda_{\max} $. \;
\nl $\bm{w}_1(f) \leftarrow \bm{u} \left( \bm{u}^h V_1(f) \bm{u} \right)^{- \frac{1}{2}}$\;
}
\end{algorithm}
\begin{algorithm}[p]
\caption{IVE-IP2 for $K \geq 2$}
\label{alg:IVE-IP2:K>1}
\setstretch{1.2}
\DontPrintSemicolon
\nl \For{$i = 1,\ldots,K$}{
\nl \For{$\ell \in \{i, \z \}$}{
\nl $P_\ell(f) \leftarrow \left( W(f)^h V_\ell(f) \right)^{-1} [\, \bm{e}_i, E_{\z} \,]$ \;
%
\nl $G_\ell(f) \leftarrow P_\ell(f)^h V_\ell(f) P_\ell(f)$ \;
}
\nl Solve $G_i(f) \bm{b} = \lambda_{\max} G_{\z}(f) \bm{b}$ to obtain $\bm{b}$ corresponding to the largest eigenvalue $\lambda_{\max}$. \;
\nl $\bm{w}_i(f) \leftarrow P_i(f) \bm{b} \left( \bm{b}^h G_i(f) \bm{b} \right)^{-\frac{1}{2}}$
}
}
\end{algorithm}
\begin{algorithm}[p]
\caption{Semi-IVE}
\label{alg:Semi-IVE}
\setstretch{1.2}
\DontPrintSemicolon
\tcc{LCMV beamforming}
\nl \For{$i = 1,\ldots,L$}{
\nl {\small $\bm{w}_i(f) \leftarrow V_i(f)^{-1} A_1(f) \left( A_1(f)^h V_i(f)^{-1} A_1(f) \right)^{-1} \bm{e}_i$}\;
}
\nl \If{$L = K$}{
return
}
\tcc{BCD}
\nl \For{$i = L + 1,\ldots,K$}{
\nl $\overline{V}_{\!i}(f) \leftarrow W_2'(f)^h V_i(f) W_2'(f)$ \;
}
\nl \If{$L = K - 1$}{
\nl Solve $\overline{V}_{\!\z}(f) \overline{\bm{u}} = \lambda_{\max} \overline{V}_{\!K}(f) \overline{\bm{u}}$ to obtain $\overline{\bm{u}}$
corresponding to the largest eigenvalue $\lambda_{\max}$. \;
%
\nl $\bm{w}_K(f) \leftarrow W_2'(f) \overline{\bm{u}} \left( \overline{\bm{u}}^h \overline{V}_{\!K}(f) \overline{\bm{u}} \right)^{-\frac{1}{2}}$
}
\nl \Else{
%
\nl \For{$i = L + 1,\ldots,K$}{
\nl \For{$\ell \in \{i, \z \}$}{
\nl $\overline{P}_\ell(f) \leftarrow \left( \overline{W}(f)^h \overline{V}_\ell(f) \right)^{-1} [\, \overline{\bm{e}}_i, \overline{E}_{\z} \,]$ \;
%
\nl $\overline{G}_\ell(f) \leftarrow \overline{P}_\ell(f)^h \overline{V}_\ell(f) \overline{P}_\ell(f)$ \;
}
%
\nl Solve $\overline{G}_i(f) \overline{\bm{b}} = \lambda_{\max} \overline{G}_{\z}(f) \overline{\bm{b}}$ to obtain $\overline{\bm{b}}$ corresponding to the largest eigenvalue $\lambda_{\max}$. \;
%
\nl $\bm{w}_i(f) \leftarrow W_2'(f) \overline{P}_i(f) \overline{\bm{b}} \left( \overline{\bm{b}}^h \overline{G}_i(f) \overline{\bm{b}} \right)^{-\frac{1}{2}}$
}
}
}
\end{algorithm}
\section{Experiments}
\label{sec:exp}
In this numerical experiment, we evaluated the following properties of the IVE and Semi-IVE algorithms:
\begin{itemize}
\item The source extraction performance for the speech signals in terms of the signal-to-distortion ratio (SDR~\cite{vincent2006sdr}) between the estimated and oracle source spatial images;
\item the runtime performance.
\end{itemize}
We compared the performance of the following five methods whose optimization procedures are summarized in Table~\ref{table:alg}:
\begin{enumerate}
\item \textbf{IVA-IP1-old}:
The conventional AuxIVA with IP1~\cite{ono2011auxiva}, followed by picking $K$ signals in an oracle manner.
\item \textbf{IVE-IP1-old}:
The conventional IVE-IP1 or IVE-OC~\cite{scheibler2019overiva} (Algorithms~\ref{alg:main} and \ref{alg:IVE-IP1}).
\item \textbf{IVE-IP2-old}:
The conventional IVE-IP2~\cite{scheibler2020ive}
(see~{\cite[Algorithm 2]{scheibler2020ive}} for the implementation when $K \geq 2$).
\item \textbf{IVE-IP2-new}:
The proposed IVE-IP2 (Algorithms~\ref{alg:main}, \ref{alg:IVE-IP2:K=1}, and \ref{alg:IVE-IP2:K>1}),
where the generalized eigenvalue problems are solved using the power method with 30 iterations
(see Section~\ref{sec:exp-implementation} for details; a brief sketch of this step is given after this list).%
\footnote{
Note that in the case of $K = 1$, IVE-IP2-old (called FIVE~\cite{scheibler2020fast}) and IVE-IP2-new are the same algorithm (see Section~\ref{sec:IVE-IP2:K=1}).
However, we distinguish between them because we propose to use the power method to efficiently obtain the first generalized eigenvector in the proposed IVE-IP2-new.
}
\item \textbf{Semi-IVE-$(L)$-new}:
The proposed semiblind IVE algorithm, where the transfer functions of $L$ super-Gaussian sources are given as an oracle
(Algorithms~\ref{alg:main} and \ref{alg:Semi-IVE}).
\end{enumerate}
\subsection{Dataset}
\label{sec:exp-data}
As evaluation data, we generated synthesized convolutive noisy mixtures of speech signals.
\textit{Room impulse response (RIR) data}:
We used the RIR data recorded in room \textsf{E2A} from the RWCP Sound Scene Database in Real Acoustical Environments~\cite{rwcp}.
The reverberation time ($\mathrm{RT}_{60}$) of room \textsf{E2A} is 300 ms.
These data consist of nine RIRs from nine different directions.
\textit{Speech data}:
We used point-source speech signals from the test set of the TIMIT corpus~\cite{timit}.
We concatenated the speech signals from the same speaker so that the length of each signal exceeded ten seconds.
We prepared 168 speech signals in total.
\textit{Noise data}:
We used a background noise signal recorded in a cafe (\textsf{CAF}) from the third `CHiME' Speech Separation and Recognition Challenge~\cite{chime3}.
We chose a monaural signal captured at ch1 on a tablet and used it as a point-source noise signal.
The noise signal was about 35 minutes long.
\textit{Mixture signals}:
We generated 100 mixtures consisting of $K \in \{1, 2, 3\}$ speech signals and $J = 5$ noise signals:
\begin{enumerate}
\item We selected $K$ speech signals at random from the 168 speech data.
We selected $J$ non-overlapping segments at random from the noise data and prepared $J$ noise signals.
We selected $K + J$ RIRs at random from the original nine RIRs.
\item We convolved the $K$ speech and $J$ noise signals with the selected $K + J$ RIRs to create $K + J$ spatial images.
\item We added the obtained $K + J$ spatial images in such a way that
$\mathrm{SNR} \coloneqq 10 \log_{10} \frac{ \frac{1}{K} \sum_{i = 1}^K \lambda_i^{(\mathrm{s})} }{ \sum_{j = 1}^J \lambda_j^{(\mathrm{n})} }$ becomes 0 dB if $K = 1,2$ and 5 dB if $K = 3$,
where $\lambda_i^{(\mathrm{s})}$ and $\lambda_j^{(\mathrm{n})}$ denote the sample variances of the $i$th speech-source and $j$th noise-source spatial images.
\end{enumerate}
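As an illustration of step 3), the scaling of the noise spatial images can be carried out along the following lines. This is only a minimal sketch: the array names, shapes, and the choice of scaling the noise images (rather than the speech images) are ours and are not taken from the scripts used in the experiments.
\begin{verbatim}
import numpy as np

def mix_at_target_snr(speech_imgs, noise_imgs, target_snr_db):
    """Scale the noise spatial images so that the mixture attains the target SNR.

    speech_imgs: list of K arrays of shape (M, T), speech spatial images
    noise_imgs:  list of J arrays of shape (M, T), noise spatial images
    """
    mean_speech_var = np.mean([img.var() for img in speech_imgs])  # (1/K) sum_i lambda_i^(s)
    sum_noise_var = np.sum([img.var() for img in noise_imgs])      # sum_j lambda_j^(n)
    target_ratio = 10.0 ** (target_snr_db / 10.0)
    # gain applied to every noise image so that the SNR defined above equals the target
    g = np.sqrt(mean_speech_var / (sum_noise_var * target_ratio))
    return sum(speech_imgs) + g * sum(noise_imgs)
\end{verbatim}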
\subsection{Experimental conditions}
\label{sec:exp-cond}
For all the methods, we initialized $W(f) = - I_M$ and
assumed that each super-Gaussian source $\bm{s}_i(t) \in \mathbb{C}^F$ follows the generalized Gaussian distribution (GGD).
More concretely, we set $G(r_i(t), \alpha_i) = (\frac{r_i(t)}{\alpha_i})^{\beta} + 2 F \log \alpha_i$ with shape parameter $\beta = 0.1$ in Assumption~\ref{assumption:superGaussian}.
The scale parameters $\alpha_1,\ldots,\alpha_K \in \mathbb{R}_{> 0}$ are updated as
$\alpha_i^{\beta} = \frac{\beta}{2F} (\frac{1}{T} \sum_t r_i(t)^\beta)$
after every update of $W$ (Step~\ref{step:alpha} in Algorithm~\ref{alg:main}).
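For concreteness, this scale update amounts to a single vectorized operation. The following fragment is only a sketch: we assume here, as is standard in AuxIVA-type implementations, that $r_i(t)$ is the Euclidean norm of the $i$th separated source over the frequency bins, and the array shapes are illustrative.
\begin{verbatim}
import numpy as np

def update_scales(s_hat, beta):
    """GGD scale update: alpha_i^beta = beta/(2F) * mean_t r_i(t)^beta.

    s_hat: separated super-Gaussian sources, complex array of shape (K, F, T).
    """
    K, F, T = s_hat.shape
    r = np.linalg.norm(s_hat, axis=1)  # r_i(t), shape (K, T)
    return (beta / (2 * F) * np.mean(r ** beta, axis=1)) ** (1.0 / beta)
\end{verbatim}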
In Semi-IVE-$(L)$-new,
we prepared oracle transfer functions $\bm{a}_1,\ldots,\bm{a}_L$ using oracle spatial images $\bm{x}_1,\ldots,\bm{x}_L$.
We set $\bm{a}_\ell (f)$ to the first eigenvector of
sample covariance matrix
$R_\ell(f) = \frac{1}{T} \sum_{t = 1}^T \bm{x}_\ell(f,t) \bm{x}_\ell(f,t)^h$
for each source $\ell \in \{ 1,\ldots,L \}$.
Here, $\bm{a}_\ell(f)$ was normalized to be $\| \bm{a}_\ell(f) \| = 1$.
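A minimal sketch of this preparation step is given below, assuming the oracle spatial images are stored in a complex array \texttt{x\_img} of shape \texttt{(L, F, T, M)}; the function name and array layout are ours and are used only for illustration.
\begin{verbatim}
import numpy as np

def oracle_transfer_functions(x_img):
    """Principal eigenvector of the sample spatial covariance per source and frequency.

    x_img: oracle spatial images, complex array of shape (L, F, T, M).
    Returns an array of shape (L, F, M) whose vectors have unit norm.
    """
    L, F, T, M = x_img.shape
    # R_l(f) = (1/T) sum_t x_l(f,t) x_l(f,t)^H, stacked as shape (L, F, M, M)
    R = np.einsum('lftm,lftn->lfmn', x_img, x_img.conj()) / T
    _, eigvec = np.linalg.eigh(R)      # eigenvalues in ascending order
    a = eigvec[..., -1]                # last column = principal eigenvector, shape (L, F, M)
    return a / np.linalg.norm(a, axis=-1, keepdims=True)
\end{verbatim}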
The performance was tested for one to three speakers and two to eight microphones.
The sampling rate was 16 kHz,
the reverberation time was 300 ms,
the frame length was 4096 (256 ms),
and the frame shift was 1024 (64 ms).
\subsection{Implementation notes}
\label{sec:exp-implementation}
We implemented all the algorithms in Python 3.7.1.
In IVE-IP2-new and Semi-IVE, we need to solve the generalized eigenvalue problems of form
$A \bm{x} = \lambda_{\mathrm{max}} B \bm{x}$ where we require only the first generalized eigenvector.
To prevent `for loops' in the Python implementation, we solved them
(i) by first transforming the problem into an eigenvalue problem
$\left( B^{-1} A \right) \bm{x} = \lambda_{\mathrm{max}} \bm{x}$,
and then (ii) using power iteration (also known as the power method~\cite{atkinson2008}) to obtain the required first eigenvector.
The number of iterations in the power method was set to 30 in this experiment.
On the other hand, in IVE-IP2-old for $K \geq 2$, all the generalized eigenvectors have to be obtained.
To prevent `for loops,' we implemented it
(i) by first transforming the generalized eigenvector problem into an eigenvector problem in the same way as above,
and then (ii) by calling the \textsf{numpy.linalg.eig} function.
IVE-IP2-old for $K = 1$ (i.e., FIVE~\cite{scheibler2020fast}) was implemented in the same way.
These implementations are not recommended from the standpoint of numerical stability,
but we adopted them to make the algorithms run fast.
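For reference, the batched power iteration can be sketched as follows; the function below is an illustration written for this description (array shapes and names are ours), not an excerpt of the evaluated code.
\begin{verbatim}
import numpy as np

def first_generalized_eigvec(A, B, n_iter=30):
    """Power iteration for the first generalized eigenvector of A x = lambda_max B x.

    A, B: Hermitian positive-definite matrices stacked over frequencies, shape (F, N, N).
    Returns x of shape (F, N), one unit-norm eigenvector estimate per frequency bin.
    """
    F, N, _ = A.shape
    C = np.linalg.solve(B, A)  # C = B^{-1} A for all bins at once (no Python loop)
    rng = np.random.default_rng(0)
    x = rng.standard_normal((F, N)) + 1j * rng.standard_normal((F, N))
    x /= np.linalg.norm(x, axis=-1, keepdims=True)
    for _ in range(n_iter):
        x = np.einsum('fnm,fm->fn', C, x)  # x <- C x, batched over f
        x /= np.linalg.norm(x, axis=-1, keepdims=True)
    return x
\end{verbatim}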
\begin{figure*}[p]
\begin{center}
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/SDR.K1_M10_noi5_snr0_nonoverlap_CAF.pdf}
\caption{One target source ($K = 1$), SNR = 0 [dB], average signal length: 11.46 sec}
\vspace{3 mm}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/SDR.K2_M10_noi5_snr0_nonoverlap_CAF.pdf}
\caption{Two target sources ($K = 2$), SNR = 0 [dB], average signal length: 12.85 sec}
\vspace{3 mm}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/SDR.K3_M10_noi5_snr5_nonoverlap_CAF.pdf}
\caption{Three target sources ($K = 3$), SNR = 5 [dB], average signal length: 12.92 sec}
\vspace{3 mm}
\end{subfigure}
\end{center}
\vspace{-4 mm}
\caption{
SDR [dB] performance as a function of runtime.
Results shown here are averaged over 100 mixtures.
}
\label{fig:SDR}
\end{figure*}
\begin{table*}[p]
\begin{center}
{
\caption{Realtime factors of the algorithms when the number of iterations is set to 50}
\label{table:RTF}
\begin{tabular}{c|ccc|ccc|ccc} \hline
Number of sources & \multicolumn{3}{c|}{$K = 1$} & \multicolumn{3}{c|}{$K = 2$} & \multicolumn{3}{c}{$K = 3$}
\\ \hline
Number of microphones & $M = 2$ & $M = 6$ & $M = 8$ & $M = 3$ & $M = 6$ & $M = 8$ & $M = 4$ & $M = 6$ & $M = 8$
\\ \hline
Signal length [sec] & 11.77 & 11.77 & 11.77 & 12.43 & 12.43 & 12.43 & 12.81 & 12.81 & 12.81
\\ \hline \hline
IVA-IP1-old & 0.08 & 0.58 & 0.99 & 0.18 & 0.61 & 1.05 & 0.31 & 0.70 & 1.20
\\
IVE-IP1-old & 0.05 & 0.13 & 0.18 & 0.14 & 0.27 & 0.36 & 0.28 & 0.44 & 0.67
\\
IVE-IP2-old & 0.07 & 0.32 & 0.48 & 0.21 & 0.59 & 0.93 & 0.38 & 0.81 & 1.45
\\
IVE-IP2-new & 0.06 & 0.16 & 0.22 & 0.19 & 0.40 & 0.59 & 0.35 & 0.59 & 1.00
\\ \hline
Semi-IVE-$(1)$-new & 0.04 & 0.12 & 0.17 & 0.15 & 0.29 & 0.41 & 0.32 & 0.54 & 0.89
\\
Semi-IVE-$(2)$-new & - & - & - & 0.14 & 0.26 & 0.35 & 0.28 & 0.44 & 0.70
\\
Semi-IVE-$(3)$-new & - & - & - & - & - & - & 0.28 & 0.42 & 0.65
\\ \hline
\end{tabular}
}
\end{center}
\end{table*}
\subsection{Experimental results}
\label{sec:exp-res}
Figure~\ref{fig:SDR} shows the SDR performance of each algorithm as a function of the runtime,
and Table~\ref{table:RTF} presents the realtime factor (RTF) of the algorithms when the number of iterations is set to 50 for all the algorithms.
These results were obtained by running the algorithms on a PC with
Intel(R) Core(TM) i7-7820X CPU @ 3.60GHz using a single thread.
\subsubsection{Effectiveness of IVE-IP2-new for extracting unique target source ($K = 1$)}
In Fig.~\ref{fig:SDR}(a), IVE-IP2-new showed faster convergence than the conventional algorithms while achieving a similar SDR performance.
Interestingly, the fully blind IVE algorithms in the $M \geq 6$ cases produced even better SDRs than the informed Semi-IVE-$(1)$-new in the $M = 2$ case.
This result motivates us to use more microphones at the expense of increased runtime.
As desired, the increased runtime of the IVE algorithms, especially the proposed IVE-IP2-new, was shown to be very small compared to that of IVA-IP1-old.
\subsubsection{Effectiveness of IVE-IP2-new compared to IVE-IP2-old}
From Fig.~\ref{fig:SDR} and Table~\ref{table:RTF}, it can be confirmed for $M \geq 6$ that the computational cost of the proposed IVE-IP2-new is consistently smaller than that of IVE-IP2-old while the separation performance is maintained,
which shows the effectiveness of IVE-IP2-new.
On the other hand, if $M$ is small, IVE-IP2-new gave almost the same performance as IVE-IP2-old.
\subsubsection{Comparison of IVA-IP1-old, IVE-IP1-old, and IVE-IP2-new}
If $K = 1$ (Fig.~\ref{fig:SDR}(a)), the proposed IVE-IP2-new gave the fastest convergence among the blind algorithms, which clearly shows the superiority of IVE-IP2-new.
On the other hand, if $K \in \{2, 3\}$ (Fig.~\ref{fig:SDR}(b)--(c)), the convergence of IVE-IP2-new is slower than that of IVE-IP1-old,
mainly because the computational cost per iteration of IVE-IP2-new exceeds that of IVE-IP1-old (see Table~\ref{table:RTF}).
Therefore, IVE-IP1-old is still important when extracting more than two sources.
\subsubsection{Effectiveness of Semi-IVE-$(L)$-new}
In Fig.~\ref{fig:SDR}, the proposed Semi-IVE algorithms naturally outperformed all of the fully blind IVA and IVE algorithms.
Surprisingly, when the $L \coloneqq K - 1$ transfer functions are given a priori (and $M \geq 4$),
the Semi-IVE algorithms (Semi-IVE-$(1)$-new if $K = 2$ and Semi-IVE-$(2)$-new if $K = 3$)
achieved performance comparable to, and sometimes better than, that of Semi-IVE-$(K)$-new.
The convergence of Semi-IVE-$(L)$-new with $L \geq K - 1$ was also extremely fast.
These results clearly show the effectiveness of the proposed Semi-IVE algorithms.
\section{Concluding remarks}
\label{sec:conclusion}
We presented two new efficient BCD algorithms for IVE:
(i) IVE-IP2, which extracts all the super-Gaussian sources from a linear mixture in a fully blind manner, and
(ii) Semi-IVE, which improves IVE-IP2 in the semiblind scenario in which the transfer functions for several super-Gaussian sources are available as prior knowledge.
We also argued that the conventional IVE that relies on the orthogonal constraint (IVE-OC) can be interpreted as BCD for IVE (IVE-IP1).
Due to the stationary Gaussian noise assumption,
these BCD (or IP) algorithms can skip most of the optimization of the filters for separating the noise components, which plays a central role for achieving a low computational cost of optimization.
Our numerical experiment, in which speech signals are extracted from their noisy reverberant mixtures, showed that when $K = 1$, or when at least $K - 1$ transfer functions are given in the semiblind case, the proposed IVE-IP2 and Semi-IVE converge significantly faster than the conventional algorithms,
where $K$ is the number of target-source signals.
The new IVE-IP2 consistently sped up the old IVE-IP2,
and the conventional IVE-IP1 remains important for extracting multiple sources.
\appendices
\section{}
\label{appendix:lemma}
We prepare several propositions that are needed to rigorously develop the algorithms in Sections~\ref{sec:BSE} and \ref{sec:semi-BSE}.
Proposition~\ref{prop:MM} below gives an inequality which is the basis of the MM algorithm for AuxICA~\cite{ono2010auxica}, AuxIVA~\cite{ono2011auxiva}, and (the auxiliary-function-based) IVE.
\begin{proposition}[See {\cite[Theorem~1]{ono2010auxica}}]
\label{prop:MM}
Let $G \colon \mathbb{R}_{\geq 0} \to \mathbb{R}$ be differentiable and satisfy
that $\frac{G'(r)}{r}$ is nonincreasing on $r \in \mathbb{R}_{> 0}$.
Then, for arbitrary $r, \widetilde{r} \in \mathbb{R}_{> 0}$, it holds that
\begin{align}
G(r) \leq \frac{G'(\widetilde{r})}{2\widetilde{r}} \cdot r^2
+ \left( G(\widetilde{r}) - \frac{\widetilde{r} \cdot G'(\widetilde{r})}{2} \right).
\end{align}
The inequality holds with equality if $r = \widetilde{r}$.
\end{proposition}
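As a concrete instance (used only as an illustration), the GGD contrast employed in Section~\ref{sec:exp-cond} satisfies the hypothesis of Proposition~\ref{prop:MM}: for $G(r) = (r / \alpha_i)^{\beta}$ with $0 < \beta \leq 2$ (the additive term $2 F \log \alpha_i$ does not depend on $r$ and can be carried along unchanged), we have $\frac{G'(r)}{r} = \frac{\beta}{\alpha_i^{\beta}}\, r^{\beta - 2}$, which is nonincreasing on $r > 0$, and the proposition yields the quadratic majorizer
\begin{align*}
\left( \frac{r}{\alpha_i} \right)^{\beta}
\leq \frac{\beta\, \widetilde{r}^{\,\beta - 2}}{2 \alpha_i^{\beta}}\, r^2
+ \left( 1 - \frac{\beta}{2} \right) \left( \frac{\widetilde{r}}{\alpha_i} \right)^{\beta},
\end{align*}
with equality at $r = \widetilde{r}$.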
To give a proof of Propositions~\ref{prop:IVE-IP2:K=1}, \ref{prop:IVE-IP2:K>1}, and \ref{prop:IVE-IP1:Wz}, we need the following proposition that provides a sufficient condition for the existence of globally optimal solutions.
\begin{proposition}
\label{prop:loss:lower-bounded}
Suppose $V_1,\ldots,V_n \in \mathcal{S}_{++}^M$ and $1 \leq n \leq M$.
Let $W_2 \in \mathbb{C}^{M \times (M - n)}$ be full column rank.
The function $g$ with respect to
$W_1 = [\bm{w}_{1},\ldots,\bm{w}_n] \in \mathbb{C}^{M \times n}$,
defined by
\begin{align*}
\nonumber
g(W_1) &=
\sum_{i = 1}^n \bm{w}_i^h V_i \bm{w}_i - \log | \det [W_1,W_2] |^2,
\end{align*}
is lower bounded and attains its minimum at some $W_1$.
\end{proposition}
\begin{proof}
The statement for $n = M$ was proved in~\cite{degerine2004,degerine2006maxdet,yeredor2009HEAD,yeredor2012SeDJoCo}.
The statement for $1 \leq n < M$ can be proved in the same way, so we omit the proof.
\end{proof}
Proposition~\ref{prop:det} below gives a useful expression of $|\det W|^2$.
The same statement for $L = 1$ in Proposition~\ref{prop:det} can be found in \cite{nakatani2019wpd-ml},
and here we extend it to general $1 \leq L < M$.
\begin{proposition}
\label{prop:det}
Let
$1 \leq L < M$ and
\begin{align*}
W &= \begin{bmatrix} W_1, W_2\end{bmatrix} \in \mathbb{C}^{M \times M},
\\
W_1 &\in \mathbb{C}^{M \times L}, \quad W_2 \in \mathbb{C}^{M \times (M - L)},
\\
A_1 &\in \mathbb{C}^{M \times L}, \quad W_1^h A_1 = I_L, \quad W_2^h A_1 = O_{M - L,L}.
\end{align*}
Then, $|\det W |^2 = \det \left( A_1^h A_1 \right)^{-1} \det \left( W_2^h W_2 \right)$ holds.
\end{proposition}
\begin{proof}
The orthogonal projection onto $\image A_1$ is given by
$P = A_1 (A_1^h A_1)^{-1} A_1^h \in \mathbb{C}^{M \times M}$.
Then, it holds that
\begin{align*}
| \det W |^2
&= \left| \det ([\, P W_1 + (I - P) W_1, W_2 \,]) \right|^2
\\
&= \left| \det ([\, P W_1, W_2 \,]) \right|^2
\\
&= \det ([\, P W_1, W_2 \,]^h [\, P W_1, W_2 \,])
\\
&= \det \left(
\begin{bmatrix}
W_1^h P^h P W_1 & O_{L, M - L} \\
O_{M - L, L} & W_2^h W_2
\end{bmatrix}
\right)
\\
&= \det (A_1^h A_1)^{-1} \cdot \det (W_2^h W_2),
\end{align*}
where we use $\image (I - P) \supseteq \image W_2$ in the second equality.
\end{proof}
\section{Proof of Proposition~\ref{prop:IVE-IP2:K>1}}
\label{appendix:proof-of-prop3}
\begin{proof}
The proof is very similar to that of \cite[Theorem~3]{scheibler2020ive}.
By Proposition~\ref{prop:loss:lower-bounded}, problem \eqref{problem:IVE-IP2} has global optimal solutions,
and they satisfy the stationary conditions \eqref{eq:KKT:wk}--\eqref{eq:KKT:Wz}.
In Eqs.~\eqref{eq:KKT:wk}--\eqref{eq:KKT:Wz}, the first $K$ rows except for the $i$th row are linear with respect to $\bm{w}_i$ and $W_{\z}$,
and so they can be solved as
\begin{alignat}{3}
\label{eq:IVE-IP:K>1:101}
\bm{w}_i &= P_i G_i^{-1} \bm{c}_i,
&\quad
\bm{c}_i &\in \mathbb{C}^{(M - K + 1) \times 1},
\\
\label{eq:IVE-IP:K>1:102}
W_{\z} &= P_{\z} G_{\z}^{-1} C_{\z},
&\quad
C_{\z} &\in \mathbb{C}^{(M - K + 1) \times (M - K)},
\end{alignat}
where $P_\ell$ and $G_\ell$ for $\ell \in \{i, \z \}$ are defined by \eqref{eq:IVE-IP2:K>1:P}--\eqref{eq:IVE-IP2:K>1:G},
and $\bm{c}_i$ and $C_{\z}$ are free variables (the roles of $G_i^{-1}$ and $G_{\z}^{-1}$ will be clear below).
Substituting \eqref{eq:IVE-IP:K>1:101}--\eqref{eq:IVE-IP:K>1:102} into the objective function $g$, we have
\begin{align}
\nonumber
g &= \bm{c}_i^h G_i^{-1} \bm{c}_i + \trace \{ C_{\z}^h G_{\z}^{-1} C_{\z} \}
\\
\label{eq:prop:phi}
&\quad - \log | \det [\, W_0 \mid P_i G_i^{-1} \bm{c}_i \mid P_{\z} G_{\z}^{-1} C_{\z} \,] |^2 + \const,
\end{align}
where $W_0 \coloneqq [\, \bm{w}_1, \ldots,\bm{w}_{i - 1}, \bm{w}_{i + 1}, \ldots,\bm{w}_K \,] \in \mathbb{C}^{M \times (K - 1)}$.
Now, let us consider minimizing $g$ with respect to $\bm{c}_i$ and $C_{\z}$.
In \eqref{eq:prop:phi}, the $\log \det$ term can be simplified as
\begin{alignat*}{3}
&&&2 \log | \det
\begin{bmatrix}
W_0 \mid P_i G_i^{-1} \bm{c}_i \mid P_{\z} G_{\z}^{-1} C_{\z}
\end{bmatrix}
|
\\
&=&\,&
2 \log | \det
\begin{bmatrix}
W_0^h
\\
P_i^h V_i
\end{bmatrix}^{-1}
\begin{bmatrix}
W_0^h
\\
P_i^h V_i
\end{bmatrix}
\begin{bmatrix}
W_0 \mid P_i G_i^{-1} \bm{c}_i \mid P_{\z} G_{\z}^{-1} C_{\z}
\end{bmatrix}
|
\\
&=&~&
2 \log | \det \begin{bmatrix}
W_0^h W_0 & * & * \\
O_{M - K + 1, K - 1} & \bm{c}_i & C_{\z}
\end{bmatrix}
|
+ \const
\\
&=&~&
2 \log | \det [\, \bm{c}_i, C_{\z} \,] | + \const
\end{alignat*}
Here, we used $V_i P_i = \left( W' \right)^{-h} [\, \bm{e}_i, E_{\z} \,] = V_{\z} P_{\z}$ twice in the second equality.
Hence, by applying Proposition~\ref{prop:IVE-IP2:K=1}, $g$ attains its minimum when
\begin{align}
\bm{c}_i &= \bm{u}_i \left( \bm{u}_i^h G_i^{-1} \bm{u}_i \right)^{- \frac{1}{2}},
\quad C_{\z} = U_{\z} \left( U_{\z}^h G_{\z}^{-1} U_{\z} \right)^{- \frac{1}{2}},
\\
\label{eq:IVE-IP2:prop:Uz}
U_{\z} &\in \mathbb{C}^{(M - K + 1) \times (M - K)} \quad \text{with} \quad U_{\z}^h G_{\z}^{-1} \bm{u}_i = \bm{0},
\\
\label{eq:IVE-IP2:prop:ui}
\bm{u}_i &\in \mathbb{C}^{(M - K + 1) \times 1} \quad \text{with} \quad G_{\z}^{-1} \bm{u}_i = \lambda_{\max} G_i^{-1} \bm{u}_i,
\end{align}
where $\lambda_{\max}$ in \eqref{eq:IVE-IP2:prop:ui} denotes the largest generalized eigenvalue.
Because \eqref{eq:IVE-IP2:prop:Uz}--\eqref{eq:IVE-IP2:prop:ui}
are equivalent to \eqref{eq:IVE-IP2:K>1:orth}--\eqref{eq:IVE-IP2:K>1:eig} through
$\bm{b}_i = G_i^{-1} \bm{u}_i$ and $B_{\z} = G_{\z}^{-1} U_{\z}$,
a global optimal solution for \eqref{problem:IVE-IP2} can be obtained by
\begin{align*}
\bm{w}_i &= P_i G_i^{-1} \bm{u}_i \left( \bm{u}_i^h G_i^{-1} \bm{u}_i \right)^{- \frac{1}{2}}
= P_i \bm{b}_i (\bm{b}_i^h G_i \bm{b}_i)^{-\frac{1}{2}},
\\
W_{\z} &= P_{\z} G_{\z}^{-1} U_{\z} \left( U_{\z}^h G_{\z}^{-1} U_{\z} \right)^{- \frac{1}{2}}
= P_{\z} B_{\z} \left( B_{\z}^h G_{\z} B_{\z} \right)^{-\frac{1}{2}}
\end{align*}
as we desired.
\end{proof}
\bibliographystyle{IEEEtran}
\section{Introduction and main result}
\subsection{Setting}
Leaving precise definitions till the next section, we work in the context of Gibbs measures for
shift-invariant absolutely summable interactions on a space of configurations of the form
$\mathcal{S}^{\Z^d}$, where $\mathcal{S}$ is a finite set. As explained below, we will in fact deal with equilibrium states, that is,
shift-invariant Gibbs measures. The problem we consider is the following. Given an interaction $\Phi$ and an inverse temperature $\beta>0$, there is a simplex of equilibrium states $\mathcal{E\!S}(\beta)$ associated with $\beta\Phi$ (which might not be a singleton for large values of $\beta$, as for instance in the Ising model).
We ask the following:
\begin{quote}
What is the behavior of $\mathcal{E\!S}(\beta)$ when $\beta\to+\infty$?
\end{quote}
When there is a single equilibrium state $\mu_\beta$ for each $\beta$, this question is simply: Does the limit of $(\mu_{\beta})_{\beta>0}$ exist?
If it does, what is the limiting measure? (The natural topology in this problem is the weak topology, see below.) This question is
connected with ground states. We need not explain what they are because they will play no explicit role in the present paper.
Let us only say that a ground state for an interaction $\Phi$ is a probability measure supported on a certain closed subset of $\mathcal{S}^{\Z^d}$,
possibly uncountable, which is invariant under the shift action (that is, a subshift), and determined by the ``maximizing configurations''
of the Hamiltonian of $\Phi$. We refer to \cite{GT2015} for details. (Notice that in this paper we follow the sign convention used in dynamical
systems and also in \cite{Ruelle2004}, namely we ``prefer'' to maximize instead of minimize.)
\subsection{Known results and main theorem}
Although the answer to the above question is known in a number of particular examples, notably in relation with phase transitions
(see, \textit{e.g.}, \cite{DB1985,vE-F-S,Georgii}), the general study of this problem is fairly recent, and it was started by people working
in ergodic theory and dynamical systems. They considered `potentials' on $\mathcal{S}^{\N}$ (or $\mathcal{S}^{\Z}$) for which there is a single
equilibrium state, which is also a Gibbs measure, for each inverse temperature \cite{Bow75}.\footnote{A caution on the terminology is in order. In
statistical physics, an interaction or a potential is a family of real-valued functions on $\mathcal{S}^{\Z^d}$ indexed by the finite subsets of $\Z^d$.
For equilibrium states, a function deriving from the potential appears naturally and can be interpreted as the `mean energy per site'. In dynamical
systems, people consider $d=1$ (the shift representing time evolution), and they only consider this mean energy per site that they call a potential.}
In a nutshell, the situation is the following. For locally constant `potentials', which correspond to finite-range interactions, convergence always takes
place, and it is possible to describe the limit measures \cite{Bre03, CGU11, Lep05}. For Lipschitz `potentials', which correspond for instance to
exponentially decaying pair interactions (as a function of the distance between sites), there is a rather surprising negative result.
Recall that, in this class, equilibrium states and Gibbs measures coincide, and for a given potential and for each $\beta$ there is exactly one Gibbs
measure. It was first proved in \cite{CH10} that there do exist potentials (or interactions) in this class such that the limit of
$(\mu_{\beta})_{\beta>0}$ does not exist when $\beta\to+\infty$.
Another construction was given in \cite{CR15}.
What happens when $d\geq 2$? In sharp contrast with the case $d=1$, it was proved in \cite{CH10} that, when $d\geq 3$, then one can construct
a finite-range interaction (with $\mathcal{S}=\{0,1\}$) such that, for \textit{any} family $(\mu_\beta)_{\beta>0}$ in which $\mu_\beta$ is an
equilibrium state for this interaction at inverse temperature $\beta$, the limit $\lim_{\beta\to\infty}\mu_\beta$ does not exist. (We will comment below
on this rather subtle statement.)
The case $d=2$ is left as an open problem in \cite{CH10} (for reasons that will be explained later on), and in this paper, we solve it.
More precisely, our main theorem is the following.
\begin{theorem}[Main theorem]
\label{maintheorem}
\leavevmode\\
There exists a finite set $\mathcal{S}$ and a finite-range interaction on $\mathcal{S}^{\Z^2}$, such that for any one-parameter family $(\mu_\beta)_{\beta>0}$ in
which $\mu_\beta$ is an equilibrium state (\textit{i.e.}, a shift-invariant Gibbs measure) at inverse temperature $\beta$, the limit $\lim_{\beta\to\infty}
\mu_\beta$ does not exist.
\end{theorem}
Several comments are in order. First, if there were a unique equilibrium state/Gibbs measure for each $\beta$, then there would be a unique choice
for $\mu_\beta$, and the previous result could be formulated more transparently: there exist finite range interactions such that the limit $
\lim_{\beta\to\infty}\mu_\beta$ does not exist. But in our example we did not investigate whether uniqueness holds at low temperature.
Second, by compactness (in the weak topology) of the space of probability measures, if we take any sequence $(\beta_\ell)_{\ell\geq 1}$ of
inverse temperatures such that $\beta_\ell\to+\infty$, there exists a subsequence $(\ell_i)$ such that the sequence
$(\mu_{\beta_{\ell_i}})_{i\geq 1}$, in which $\mu_{\beta_{\ell_i}}$ is an equilibrium state, has a limit.
Our result, as well as the one in \cite{CH10} mentioned above, is about continuous-parameter families. Third, and last, there is
nothing new in the fact that one can choose \textit{some} divergent family of equilibrium states. Consider for instance
the nearest-neighbor Ising model in which one can choose a family which alternates, when $\beta$ is large enough, between the $+$ and $-$ phases. However, it is always possible to choose families which converge to either $\delta_-$ or $\delta_+$. In our example, \textit{it is not
possible} to choose \textit{any} family which converges to a ground state.
Let us also mention that in \cite{CR15} such a non-convergence result (for any $d\geq 2$) was obtained, but for non-locally constant Lipschitz `potentials'.
\subsection{More comments}
The fact that the Gibbs measures of an interaction can behave in a `chaotic' way when temperature goes to zero seems to have been first
proved in \cite{vE-Wioletta} for a class of examples of nearest-neighbor, bounded-spin models, in any dimension. In that example, $\mathcal{S}$
is the unit circle. The paper \cite{CH10} was the first to exhibit this kind of behavior for models with a finite number of `spin' values at each site.
In the above-mentioned paper \cite{CR15}, a stronger property is studied, namely `sensitive dependence'.
Roughly, it means that the non-convergence can indeed occur along any prescribed sequence of temperatures going to zero, by making an arbitrarily small perturbation of the original interaction. We believe that our example exhibits this property but we did not try to prove it.
Finally, let us mention that we only deal with equilibrium states, that is, shift-invariant Gibbs measures. It is well known that there can exist non-shift invariant Gibbs measures at low temperature, \textit{e.g.}, in the three-dimensional Ising model where the so-called `Dobrushin states' appear
\cite{dobrushin-Ising3D,DB1985}. The situation is unclear in that case.
\subsection{On the role of symbolic dynamics}
In \cite{CH10}, \cite{CR15} and the present work, a central role is played by symbolic dynamics, in particular the construction of subshifts with certain
properties.
Informally, a subshift is a subset of configurations in $\mathcal{S}^{\Z^d}$ defined by a (finite or infinite) set of `patterns' which cannot appear anywhere in
these configurations. When $d=1$, we say that we have a 1D subshift. A prominent class of 1D subshifts is that of subshifts of finite type for which there
are finitely many
forbidden patterns. They play a central role to `encode' certain differential dynamical systems such as Axiom A diffeomorphisms \cite{Bow75}.
There are many 1D subshifts that are not of finite type which were introduced for various purposes (for instance the Thue-Morse subshift defined by
substitution rules); see for instance \cite{kurka-book}.
There is a striking and dramatic difference between 1D and 2D subshifts of finite type. For instance, it is formally undecidable whether a 2D subshift of finite type
is empty or not. This undecidability problem is closely related to the existence of nonempty subshifts of finite type without periodic points, or, equivalently, the
existence of Wang tile sets (their definition is given below) such that one can tile the plane but never in a periodic fashion \cite{LS02}.
1D subshifts of finite type are closely related to the zero temperature limit of (one-dimensional) Gibbs measures of finite-range potentials: the limiting
measure, which always exists, is necessarily supported on a subshift of finite type. The above-mentioned examples of non-convergence for non-finite-range potentials \cite{CH10,CR15} rely on the construction of some subshifts which are necessarily not of finite type. Roughly speaking, the idea is to cook
up two subshifts of $\mathcal{S}^{\Z}$, each carrying only one shift-invariant probability measure (among other properties), and a (non-finite-range) potential
such that the corresponding one-parameter family of Gibbs measures $(\mu_\beta)_{\beta>0}$ accumulates at the same time on the two measures as $
\beta\to+\infty$.
In dimension higher than one, we previously said that this non-convergence phenomenon can arise for finite-range potentials.
The underlying phenomenon which we exploit is that one can imbed (in a way made precise below) any (effective) 1D subshift into a higher-dimensional
subshift of finite type. In \cite{Hoc09}, there is a construction which allows one to imbed a 1D effective subshift into a 3D subshift of finite type, which is the one
used in \cite{CH10}. In this paper we use another construction from \cite{DRS12} based on `hierarchical self-simulating tilings'. It makes it possible to imbed any (effective) 1D subshift into a 2D subshift of
finite type. This is a rather cumbersome construction (which we will partly describe below), although the underlying ideas are
simple. (Let us mention that a different embedding construction is given in \cite{AS13}.) Moreover, we use the construction of
certain 1D subshifts given in \cite{CR15}. It is somewhat more flexible than the one used in \cite{CH10}. Once we have a 2D subshift of finite type built up from a certain 1D subshift,
we can then define a finite-range potential which `penalizes' the forbidden patterns (which are finitely many).
\subsection{Organization of the paper}
In Section \ref{Preliminaries} we set the necessary definitions and notations for equilibrium states, subshifts and Wang tilings.
In Section \ref{Imbedding-Proposition} we state the embedding theorem of Durand et al. \cite{DRS12} and establish a key proposition which results from their
construction. In particular, we explain some of the ideas of the proof of the embedding theorem.
Next, we construct in Section \ref{Construction-of-a-base-system} a certain 1D effective subshift that serves as a `base' for a 2D subshift of finite type,
and we define an associated finite-range interaction. Section \ref{Estimates-on-admissible-patterns} contains some estimates involving the admissible
patterns of the 2D subshift of finite type.
Finally, we prove in Section \ref{Non-convergence} our main result (Theorem \ref{maintheorem}), namely that for every one-parameter family
$(\mu_\beta)_{\beta>0}$ in which $\mu_\beta$ is an equilibrium state at inverse temperature $\beta$ for the above interaction, $\lim_{\beta\to\infty}\mu_\beta$ does
not exist.
\section{Equilibrium states, subshifts and tilings}
\label{Preliminaries}
The configuration space is $\mathcal{S}^{\Z^d}$, where $\mathcal{S}$ is a finite set and $d\geq 1$ is an integer.
Regarding equilibrium states, we are interested in $d=2$, but we will also consider the case $d=1$ to
construct some subshifts needed in the proof of the main result.
On $\mathcal{S}^{\Z^d}$, we have the shift operator $\sigma$ defined by
\[
\sigma^{\boldsymbol{i}}(x)_{\boldsymbol{j}}=x_{\boldsymbol{i}+\boldsymbol{j}}
\]
where $x=(x_{\boldsymbol{k}})_{\boldsymbol{k}\in\Z^d}\in\mathcal{S}^{\Z^d}$, $\boldsymbol{i},\boldsymbol{j}\in \Z^d$. In the language of symbolic dynamics \cite{LS02},
$(\mathcal{S}^{\Z^d},\sigma)$ is the $d$-dimensional full shift over $\mathcal{S}$. As usual, $\mathcal{S}^{\Z^d}$ is given the product topology,
which is generated by the cylinder sets, and thus it is a compact metrizable space.
We denote by $\mathfrak{B}$ the Borel $\sigma$-algebra which coincides with the $\sigma$-algebra generated by cylinder sets.
\subsection{Equilibrium states}
We only recall a few definitions and facts, mainly to set notations. We refer to \cite{Georgii,Ruelle2004} for details, as well as to \cite{Kel98} for
a viewpoint from ergodic theory for $\Z^d$-actions.
The basic ingredient is a shift-invariant summable
interaction $\Phi=(\Phi_\Lambda)_{\Lambda \Subset \Z^d}$. ($\Lambda \Subset \Z^d$ means that $\Lambda$ is a nonempty finite subset of $\Z^d$.)
More precisely, for each $\Lambda\Subset \Z^d$, $\Phi_\Lambda:\mathcal{S}^{\Z^d}\to\R$ is $\mathfrak{B}_\Lambda$-measurable,\footnote{\,$\mathfrak{B}_\Lambda$ is the $\sigma$-algebra generated by the coordinate maps $\omega\mapsto \omega_x$ when $x$ is restricted to $\Lambda$.}
$\Phi_\Lambda(x)=\Phi_\Lambda(\tilde{x})$ whenever $x$ and $\tilde{x}$ coincide on $\Lambda$, $\Phi_{\Lambda+\boldsymbol{i}}
=\Phi_\Lambda\circ \sigma^{\boldsymbol{i}}$ for all $\boldsymbol{i}\in\Z^d$, and $\sum_{\Lambda\ni\, 0} \|\Phi_\Lambda\|_\infty<\infty$.
We say that $\Phi$ is of finite range if there exists $R>0$ such that $\Phi_\Lambda\equiv 0$ whenever $\text{diam}(\Lambda)>R$.
Given $\Phi$, define the function $\phi:\mathcal{S}^{\Z^d}\to\R$ by
\begin{equation}\label{def-phi-from-Phi}
\phi(x)=\sum_{\substack{\Lambda\ni\, 0 \\ \Lambda\Subset\Z^d}} \frac{\Phi_\Lambda(x)}{|\Lambda|}\,.
\end{equation}
By definition, the equilibrium states of $\beta\Phi$ are the shift-invariant probability measures which maximize the
quantity
\[
\int \beta \phi\dd\nu + h(\nu)
\]
over all shift-invariant probability measures $\nu$ on $\mathcal{S}^{\Z^d}$. Here $h(\nu)$ is the entropy of $\nu$ (also called the mean entropy per
site in statistical physics), and the supremum (which is attained) is called the (topological) pressure and is
denoted by $P(\beta\phi)=P(\sigma,\beta\phi)$.
For the class of interactions we consider, shift-invariant Gibbs measures coincide with equilibrium states (see \cite[Chapter 15]{Georgii} or
\cite[Theorem 4.2]{Ruelle2004}).
We use the terminology and convention of dynamical systems and thermodynamic formalism where one ``prefers'' to maximize,
whereas in statistical physics one ``prefers'' to minimize.
Note that if $\Phi$ is of finite range then $\phi$ is locally constant, which means that the values of $\phi(x)$ are determined by
finitely many coordinates of $x$.
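As a simple illustration of \eqref{def-phi-from-Phi} (used here only as an example), take $\mathcal{S}=\{-1,+1\}$ and the nearest-neighbor Ising-type pair interaction $\Phi_{\{\boldsymbol{i},\boldsymbol{j}\}}(x)=x_{\boldsymbol{i}}\,x_{\boldsymbol{j}}$ whenever $\|\boldsymbol{i}-\boldsymbol{j}\|_1=1$, and $\Phi_\Lambda\equiv 0$ for every other $\Lambda$. This interaction is of finite range and, since every pair containing the origin has $|\Lambda|=2$, one gets
\[
\phi(x)=\frac{1}{2}\sum_{\substack{\boldsymbol{i}\in\Z^d\\ \|\boldsymbol{i}\|_1=1}} x_{\boldsymbol{0}}\,x_{\boldsymbol{i}},
\]
the sum running over the $2d$ nearest neighbors of the origin.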
\subsection{Subshifts, 2D subshifts of finite type and Wang tilings}
We refer to \cite{LS02} for more details.
We now turn to symbolic dynamics.
For $\Lambda\Subset\Z^d$ and $x\in \mathcal{S}^{\Z^d}$, $x_{\Lambda}$ is
the restriction of the configuration $x$ to $\Lambda$. An element $\omega\in \mathcal{S}^\Lambda$ is called a $\Lambda$-pattern, or simply a
pattern, and $\Lambda$ is its support. But only the ``shape'' of $\Lambda$ matters, not its ``location'' in $\Z^d$.
More precisely, define the equivalence relation $\sim$ on the set of finite subsets of $\Z^d$ by setting
$\Lambda\sim \Lambda'$ if and only if there exists $\boldsymbol{i}\in\Z^d$ such that $\Lambda'=\Lambda+\boldsymbol{i}$. Hence, a support of a pattern
is an equivalence class for $\sim$. Let us also denote by $\sim$ the equivalence relation saying that two patterns
$\omega\in \mathcal{S}^\Lambda$ and $\omega'\in \mathcal{S}^{\Lambda'}$ are congruent if there exists $\boldsymbol{j}\in\Z^d$ such
that $\Lambda'=\Lambda+\boldsymbol{j}$ and $\omega'_{\boldsymbol{i}+\boldsymbol{j}}=\omega_{\boldsymbol{i}}$ for all $\boldsymbol{i}\in \Lambda$.
In the sequel, for the sake of simplicity, we will several times consider a ``localized'' $\Lambda$, and by $\omega\in\mathcal{S}^\Lambda$ we
will mean any pattern $\omega'\sim\omega$.
For $n\geq1$ let
\[
\Lambda_n=\{-n+1, \ldots, 0, \ldots, n-1\}^d
\]
which is the discrete $d$-dimensional cube with volume $\lambda_n=|\Lambda_n|=(2n-1)^d$.
When $d=1$, patterns of the form $\omega_0\cdots \omega_{n-1}$, where $\omega_i\in \mathcal{S}$ and $n\geq 0$, are called $n$-strings or simply strings.
Given a $\Lambda$-pattern $\omega$, let
\[
[\omega]=\big\{x\in \mathcal{S}^{\Z^d}: x_{\Lambda}=\omega\big\}
\]
denote the corresponding cylinder set. Given a finite set of patterns $P$, we write $[P]=\bigcup_{p\,\in P} [\,p\hspace{.5pt}]$.
A (nonempty) subset $X$ of $\mathcal{S}^{\Z^d}$ is a {\it subshift} if it is closed and $\sigma$-invariant.
Equivalently, $X\subseteq\mathcal{S}^{\Z^d}$ is a subshift if there exists a set $F$ of patterns such that
$X=X_F$ where
\[
X_F=\left\{x\in \mathcal{S}^{\Z^d}: \text{no pattern from}\ F\ \text{appears in}\ x\right\}.
\]
Thus $F$ is the set of ``forbidden'' patterns.
Note that $X_F$ may be empty and that different forbidden sets may generate the same subshift.
If $F$ is empty, $X_F=\mathcal{S}^{\Z^d}$.
A subshift $X$ is \textit{of finite type} if there exists a finite set $F$ such that $X=X_F$.
We will use the abbreviation SFT for ``subshift of finite type''.
A subshift $X$ is \textit{effective} if there exists a recursively enumerable set $F$ such that $X=X_F$, that is, if we can have a
Turing machine which, given no input, lists out the elements of $F$. Let us remark that the class of effective subshifts is countable, so we
apparently rule out ``most'' subshifts, but all known examples are in this class, provided that they are defined using computable parameters,
which is not a restriction in practice.
Given a subshift $X$ and an integer $n\geq1$, define the set of (locally) admissible $\Lambda_n$-patterns by
\[
\mathcal{P}_{X,n}=\left\{ \omega \in \mathcal{S}^{\Lambda_n}: \text{no forbidden pattern of}\ X\ \text{appears in}\ \omega\right\}.
\]
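As a standard one-dimensional illustration (not used later), take $d=1$, $\mathcal{S}=\{0,1\}$ and the single forbidden string $11$; the resulting subshift of finite type is the so-called golden mean shift. Then $\mathcal{P}_{X,n}$ is the set of binary strings of length $2n-1$ with no two consecutive $1$'s, and the number of such strings of length $m$ is the $(m+2)$nd Fibonacci number ($1,1,2,3,5,8,\ldots$); for instance $|\mathcal{P}_{X,1}|=2$ and $|\mathcal{P}_{X,2}|=5$.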
Finally, the set of probability measures on $\mathcal{S}^{\Z^d}$ is given the weak topology. A sequence of probability measures
$(\mu_k)_{k\geq 1}$ converges to a probability measure $\mu$ if, for any cylinder set $B$, $\mu_k(B)\to\mu(B)$, as $k\to+\infty$.
Let us briefly explain how a two-dimensional SFT can be seen as a Wang tiling, and vice versa.
Working with tilings is better adapted for some constructions we use later on.
We consider tiles which are unit squares with colored sides. The colors are taken from a finite set
$\mathscr{C}$. For visualization purposes, one can actually use colors, but it can be more convenient to use symbols or integers. Hence, the
word ``color'' means any element from a finite set of symbols $\mathscr{C}$.
Hence a tile is a quadruple of colors (left, right, top and bottom ones), i.e., an element
of $\mathscr{C}^4$. A tile set is a subset $\tau\subset \mathscr{C}^4$. A Wang tiling with tiles from
$\tau$ is a mapping $x:\mathbb{Z}^2\to\tau$ which respects the color matching condition: abutting edges of adjacent tiles
must have the same color. We shall simply say that it is a $\tau$-tiling.
See Fig.~\ref{fig:example-Wang-tiling} for an example (where, to ease visualization, the colors are not only put on the edges).
We can naturally identify each such tiling with a point $x=(x_{i,j})_{(i,j)\in\Z^2}\in \tau^{\Z^2}$, interpreting $\tau$ as an alphabet.
The set $W\subset \tau^{\Z^2}$ of all $\tau$-tilings is obviously a subshift of
finite type (called the Wang shift of $\tau$). Conversely, a SFT can be regarded as a Wang shift.
In this paper, a tiling will mean a Wang tiling.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=250pt]{Wang-tiling.png}
\caption{An example of tile set $\tau$ and a region of $\Z^2$ legally tiled using this tile set.} \label{Wang tile}
\label{fig:example-Wang-tiling}
\end{center}
\end{figure}
\section{Imbedding a 1D effective subshift into a 2D subshift of finite type} \label{Imbedding-Proposition}
We are going to outline how to imbed a one-dimensional effective subshift into a two-dimensional SFT, as explained in detail in \cite{DRS12}.
We define the ``vertical extension'' of a subshift $X\subset \mathcal{A}^\Z$ as
\[
\widehat{X}=\big\{ \hat{x}=(x_{i,j})_{(i,j)\in\Z^2}\in \mathcal{A}^{\Z^2} : \forall j, (x_{i,j})_{i\in\Z}\in X
,\; \forall i,j, x_{i,j}=x_{i,j+1}\big\}.
\]
The following theorem and proposition are key-results in our construction.
\begin{theorem}\cite[Theorem 10]{DRS12}\label{imbedding}
\leavevmode\\
Let $X$ be a one-dimensional effective subshift over a finite alphabet set $\mathcal{A}$.
Then there exist finite alphabets $\mathcal{C}$ and $\breve{B}\subset \mathcal{A}\times \mathcal{C}$,
and a two-dimensional SFT $Y\subset \breve{B}^{\Z^2}$ such that $\pi(Y)=\widehat{X}$ where $\pi$
is the projection from $\breve{B}$ to $\mathcal{A}$.\footnote{We define in the obvious way the projection acting on
patterns or configurations by applying $\pi$ to every symbol.}
\end{theorem}
In plain words, every sequence from a 1D effective subshift can be obtained as a projection of a configuration of some 2D subshift of
finite type in an extended alphabet.
Configurations in $Y$ can be seen as configurations from $\widehat{X}$ marked with some extra symbols taken from $\mathcal{C}$.
These symbols form a new ``layer" which is superimposed on top of $\widehat{X}$.
For reasons explained below, the layer where configurations of $\widehat{X}$ appear is called the {\it input layer}, whereas the layer where patterns over
$\mathcal{C}$ appear is called the {\it computation layer}. Then $\pi$ erases the superimposed layer of data corresponding to $\mathcal{C}$.
The following proposition will play an important role in some computations later on.
\begin{proposition} \label{computation layer}
\leavevmode\\
For each $n\geq 1$ there exists $c_n\leq (2n-1)^2$ such that
\[
|\pi^{-1}\, \widehat{\omega} \cap \mathcal{P}_{Y, n}| =c_n
\]
for all $\omega \in \mathcal{P}_{X,n}$, where $\widehat{\omega}\in\mathcal{P}_{\widehat{X},n}$.
\end{proposition}
This proposition says that the number of admissible patterns in the computation layer does not depend on the input layer.
To prove it, we need some elements of the proof of Theorem \ref{imbedding} found in \cite{DRS12}.
The basic idea of the proof is to run a Turing machine $\EuScript{M}_X$ which checks the forbidden strings of $X$.
The transition rules of $\EuScript{M}_X$ are converted into tiling constraints, since they are described locally.
Then a `space-time diagram' of $\EuScript{M}_X$ is (almost) a tiling based on the corresponding tile set. We can consider the horizontal
dimension as `space', given by the symbols on the tape of the Turing machine, whereas the vertical dimension is `time' which is
given by successive computations of the Turing machine.
There are two difficulties in checking the forbidden strings of $X$. First, we have to check arbitrarily long input strings, since the length of the forbidden strings may not be
bounded. Second, we need to start the Turing machine at every site, since we should check every string starting at every site.
In order to overcome the first difficulty, we will consider a tile set which admits a hierarchical structure, the {\it self-simulating structure} defined in
Subsection \ref{Self-simulating structure}.
The way to solve the second difficulty is explained in Subsection \ref{Simulation to check forbidden strings}.
Colloquially, the idea is to organize the computations using fixed-point self-similar tilings. The idea of a self-similar fixed-point tile set can be sketched as follows.
It is well known that tilings can be used to simulate computations, in the sense that for any Turing machine one can construct a tile set simulating it:
use rows of tiles to simulate the tape in the machine, with successive rows corresponding to consecutive states of the machine.
In turn, these computations can be used to guarantee the desired behavior of bigger blocks, called
macro-tiles. So, for a desired behavior of macro-tiles, we can construct a tile set which guarantees this behavior. If these tiling rules coincide
with the rules for macro-tiles, we get self-similarity as a consequence. The way to achieve this is to use an idea very close to the classical Kleene fixed-point theorem
in computability theory (and which was for instance used to construct self-reproducing automata by von Neumann).
\subsection{Self-simulating structure} \label{Self-simulating structure}
In order to get the self-simulating structure, we consider tilings with ``macro tiles".
\begin{definition}[Macro tiles]
Consider a tile set $\tau$ and an integer $N\geq1$.
A pattern $\omega$ over $\tau^{\{0, 1, \ldots, N-1\}^2}$ is called a {\it $\tau$-macro tile with zoom factor $N$},
if all tiles in $\omega$ satisfy the color matching property.
Denote by $\tau^{(N)}$ the set of all $\tau$-macro tiles with zoom factor $N$.
Every side of a macro-tile consists of a sequence of $N$ colors and we call it a macro-color.
\end{definition}
It is easy to see that a $\tau$-tiling can be seen as a $\tau^{(N)}$-tiling.
In particular, we pay attention to the situation when a $\tau$-tiling can be split uniquely into macro-tiles which act like tiles from another tile set $\rho$.
\begin{definition}[Simulation]
Let $\rho$ and $\tau$ be tile sets and $N\geq1$.
The tile set $\rho$ is {\it simulated} by a tile set $\tau$ with zoom factor $N$ if
there exists an injective map $r: \rho\rightarrow \tau^{(N)}$ such that
\begin{itemize}
\item $t,s \in \rho$ satisfy the color matching property if and only if $r(t), r(s)$ satisfy the color matching property;
\item for every $\tau$-tiling there exists a unique vertical and horizontal $N\times N$ split such that every pattern in the $N\times N$ square is the image of an element in $\rho$ by $r$.
\end{itemize}
\end{definition}
\begin{example}[Coordinate tile]\cite{DRS12}\label{coordinate tile}
Consider the tile set $\rho=\{0\}^4$ consisting of a single tile whose four sides all carry the same color, namely ``0''.
We define a tile set which simulates $\rho$.
Let $N\geq2$ and $\mathscr{C}=(\Z/N\Z)^2$.
Define a tile set $\tau$ by
\[
\tau=\left\{t \in \mathscr{C}^4 : t_b=t_\ell=(i,j), t_r=(i+1, j), t_t=(i, j+1)\ (i,j)\in \mathscr{C}\right\}.
\]
Define a map $r: \rho \rightarrow \tau$ by
$r((0,0,0,0))=$ the $\tau$-macro tile with zoom factor $N$ whose macro colors are
$(0,0) (1,0) \cdots (N-1,0)$ for the bottom and top,
$(0,0) (0,1) \cdots (0, N-1)$ for the left and right; see Fig. \ref{simulation macro tile}.
We call the tile $r((0,0,0,0))$ the {\it coordinate tile} with size $N$.
\end{example}
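To make Example~\ref{coordinate tile} concrete, here is a minimal Python sketch (all function and variable names are ours) which generates the coordinate tile set for a given $N$ and checks that the natural placement of these tiles, namely the tile ``knowing'' coordinate $(i,j)$ placed at position $(i,j)$, satisfies the color matching condition on an $N\times N$ block.
\begin{verbatim}
from itertools import product

def coordinate_tiles(N):
    """Coordinate tile set: a tile is (bottom, left, top, right), colors in (Z/NZ)^2."""
    return {((i, j), (i, j), (i, (j + 1) % N), ((i + 1) % N, j))
            for i, j in product(range(N), range(N))}

def is_legally_tiled(block, N):
    """Check the color matching condition inside an N x N block.

    block[(i, j)] = (bottom, left, top, right) is the tile at column i, row j."""
    for i, j in product(range(N), range(N)):
        b, l, t, r = block[(i, j)]
        if i + 1 < N and r != block[(i + 1, j)][1]:  # right color vs left of east neighbor
            return False
        if j + 1 < N and t != block[(i, j + 1)][0]:  # top color vs bottom of north neighbor
            return False
    return True

N = 4
block = {(i, j): ((i, j), (i, j), (i, (j + 1) % N), ((i + 1) % N, j))
         for i, j in product(range(N), range(N))}
assert is_legally_tiled(block, N) and block[(0, 0)] in coordinate_tiles(N)
\end{verbatim}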
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=300pt]{picture_on_position_macro_tile.pdf}
\caption{A tile in $\tau$ (left) and a $\tau$ macro tile with zoom factor $N$ (right).} \label{simulation macro tile}
\end{center}
\end{figure}
We will consider a sequence $\{\tau_k\}_{k\geq0}$ of tile sets with the following properties:
the level $k+1$ tile set $\tau_{k+1}$ is simulated by the level $k$ tile set $\tau_{k}$ with zoom factor $N_{k}$;
for every $k$ the tile set $\tau_k$ describes simulation of $\EuScript{M}_X$;
the zoom factors $N_k$ increase, so that macro tiles of $\tau_k$ can handle longer and longer input strings as $k$ increases.
See Fig. \ref{sequence of macro tiles}.
We start with the construction of a tile set which simulates an $\ell$-bit colored tile set $\rho \subset (\{0,1\}^{\ell})^4$.
Consider a Turing machine $\EuScript{M}_\rho$ which checks whether a given quadruple of $\ell$-bit colors represents a tile in $\rho$.
Superimposing other tiles on coordinate tiles (Example \ref{coordinate tile}),
we define a tile set which simulates $\rho$.
The superimposed tiles make another ``layer" on the coordinate tiles.
Let $c=(c_b, c_l, c_t, c_r)$ be a quadruple of $\ell$-bit colors.
Consider a macro tile with size $N$ consisting of coordinate tiles with size $N$.
Tiles whose colors contain $0$ are called boundary tiles.
(In Fig.\ \ref{structure of macro tile} the grey and green zones are the boundary tiles.)
On the middle $\ell$ tiles of the bottom (respectively left, top and right) side of the boundary, we distribute the $\ell$-bit color $c_b$ (resp.\ $c_l, c_t$ and $c_r$).
For the rest of the boundary tiles we distribute $0$.
Since we use the coordinate tiles, each tile ``knows" its coordinate and we can distribute colors like this.
In the middle square of the macro tile (the red and yellow zones in Fig.\ \ref{structure of macro tile})
we put tiles which describe a universal Turing machine with the program of $\EuScript{M}_\rho$.
Since the transition rules of a universal Turing machine are local,
they can be embedded into tiles.
Conveying the $\ell$-bit colors on the boundary to the middle square by wires,
we let the universal Turing machine know the input.
Then a space-time diagram with the input $c=(c_b, c_l, c_t, c_r)$ appears in the square.
Since each tile ``knows" its coordinate, this structure is arranged easily.
The size $N$ of macro tiles is chosen to be large enough to contain these structures and to finish the simulation on the computation zone.
If $c\in \rho$, the simulation does not halt and we obtain a macro tile.
If $c\notin \rho$, the simulation halts and there is no macro tile with this structure.
Since a universal Turing machine is deterministic, there is a one-to-one correspondence between an input and the pattern of the simulation.
Hence the tile set, say $\eta$, which builds these macro tiles simulates $\rho$.
Note that the number of tiles in $\eta$ does not depend on $\EuScript{M}_\rho$
since we use a universal Turing machine.
Moreover, tiles for boundaries, wires and the space-time diagram do not depend on $\EuScript{M}_\rho$
and only those for the program do.
By using this fact, we modify the program on a macro tile of $\eta$ and get a self-simulating structure.
We replace the program of $\EuScript{M}_\rho$ with the following three programs:
the program to make the boundaries, wires and computation structures;
the program of $\EuScript{M}_X$;
and the program which rewrites itself.
Then $\eta$ simulates a tile set whose macro tiles have the same structure as in Fig.~\ref{structure of macro tile} and carry the same program as the macro tiles of $\eta$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=300pt]{picture_on_structure_of_macro_tile.pdf}
\caption{A macro tile simulating a tile colored by $\ell$-bit colors.} \label{structure of macro tile}
\end{center}
\end{figure}
Using this construction, we can make a sequence of tile sets which simulate tiles in the next level and carry the same program.
The first level tile set $\tau_0$ simulates the second one $\tau_1$ with zoom factor $N_0$, then
$\tau_1$ simulates the third one $\tau_2$ with zoom factor $N_1$, and so on and so forth.
See Fig. \ref{sequence of macro tiles}.
Since $\tau_0$ simulates $\tau_1$ and $\tau_1$ simulates $\tau_2$,
the patterns of $\tau_0$ in $N_0\times N_0$ squares must represent tiles in $\tau_2$.
If we zoom out, we can see $\tau_2$-tiling in a $\tau_0$-tiling and so on.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=300pt]{picture_on_sequence_of_macro_tiles.pdf}
\caption{Sequence of macro tiles which simulate the previous level tile sets.} \label{sequence of macro tiles}
\end{center}
\end{figure}
\subsection{Simulation to check forbidden strings} \label{Simulation to check forbidden strings}
According to the previous discussion we can construct a sequence $\{\tau_k\}$ of tile sets which simulate the tiles in the next level.
Moreover, macro tiles in any level simulate the same program which includes $\EuScript{M}_X$.
We superimpose tiles which carry alphabet $\mathcal{A}$ on tiles in $\tau_0$.
The new layer is called the input layer and the other one is called the computation layer.
Then $\EuScript{M}_X$ in the macro tiles of $\tau_0$ can access the input layer and it checks whether the input is forbidden or not.
However it is not clear how the programs in the macro tiles of higher level tile sets know the input.
We distribute the infinite strings in the input layer in the following way.
We call a macro tile consisting of tiles in $\tau_k$ a {\it level $k$ macro tile}.
Consider a $\tau_0$-tiling and zoom out to see $\tau_k$-tiling.
Let $\ell_k=N_0 N_1\cdots N_{k-1}$.
Then a level $k$ macro tile is represented by $N_k$ tiles consisting of $\ell_k\times \ell_k$ tiles in $\tau_0$.
Each $\ell_k\times \ell_k$ tile represents a level $k-1$ macro tile.
We distribute $\ell_k$ bits in the input string to $N_k$ level $k-1$ macro tiles in the following way:
The $i$th bit from the left is distributed to the $i$th level $k-1$ macro tile from the bottom.
See Fig. \ref{distribution of an input}.
Not only single bits but also strings can be distributed.
Moreover, we can change the length of the distributed strings, although each of them should be very short compared to the size of the macro tile to which it is
distributed.
Hence the main program in each macro tile simulates the distributed substring of the input.
Note that the way of distributing does not depend on inputs.
By the above discussion, we can obtain the set of tilings whose input layers are the vertical extension of $X$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=300pt]{picture_on_distribution_of_an_input.pdf}
\caption{Distribution of an input string among the macro tiles.} \label{distribution of an input}
\end{center}
\end{figure}
We now prove Proposition \ref{computation layer}.
\begin{proof}[Proof of Proposition \ref{computation layer}]
Fix $n\geq 1$ and $\omega\in \mathcal{P}_{X,n}$.
Take $p\in \pi^{-1}\, \widehat{\omega}\cap \mathcal{P}_{Y,n}$ and $\omega' \in \mathcal{P}_{X,n}$.
Consider a macro tile in the computation layer of $p$ whose size is the maximum size included in the $(2n-1)\times (2n-1)$ square.
The macro tile simulates a very short substring of $\omega$.
Replace the input layer of $p$ by $\widehat{\omega}'$.
Then the macro tile simulates the substring of $\omega'$ in the same position and with the same length as the simulated substring of $\omega$.
Since $\omega'$ is an admissible string of $X$, the simulation does not halt.
Hence we have a new macro tile.
This change of the macro tile makes the macro tiles at higher levels change the colors on their boundaries, and their simulations are also renewed.
Since there is a one-to-one correspondence between inputs and the patterns of simulation,
a map from the patterns in the computation layer of $\pi^{-1}\, \widehat{\omega}\,\cap\, \mathcal{P}_{Y,n}$ to $\pi^{-1}\, \widehat{\omega}'\cap \mathcal{P}_{Y,n}$ is
injective.
Since $\omega$ is arbitrary, this proves that the number of patterns in $\pi^{-1}\, \widehat{\omega}\,\cap\mathcal{P}_{Y,n}$ does not depend on the input $\omega$.
The number of possible patterns in the computation layer depends on the positions of the macro tile with the maximum size included in a square of size
$(2n-1)\times (2n-1)$. Hence $c_n$ is at most $(2n-1)^2$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=300pt]{picture_on_globally_admissible_pattern.pdf}
\caption{An element $p\in \pi^{-1}\widehat{\omega}\cap \mathcal{P}_{Y, n}$.
} \label{globally admissible pattern}
\end{center}
\end{figure}
\end{proof}
\section{Construction of a certain 1D effective subshift} \label{Construction-of-a-base-system}
We construct a particular 1D subshift which we will imbed into a two-dimensional SFT by using Theorem \ref{imbedding}.
Then we define a finite-range interaction which somewhat penalizes the admissible patterns of this SFT.
We consider the alphabet $\Sigma=\{-1, 0, +1\}$.
Let $\Sigma_+=\{0, +1\}$ and $\Sigma_-=\{0, -1\}$.
We construct a sequence $\{X_{k}\}_{k\geq1 }$ of SFTs by giving the sequence $\{F_k\}$ of forbidden sets.
We will consider an increasing sequence $\{\ell_k\}$ of lengths of forbidden strings
and a decreasing sequence $\{r_k\}$ of frequencies of $0$ in forbidden strings.
We will choose these sequences to satisfy $\lim_{k\to\infty}\ell_k=\infty$ and $\lim_{k\to\infty} r_k=0$.
We will give the precise conditions on $\{\ell_k\}$ and $\{r_k\}$ in Section \ref{Estimates-on-admissible-patterns}.
Here we explain the basic idea to make a sequence of SFTs by controlling the frequency of $0$ in forbidden sets.
For each $n$ set $\Sigma^n=\Sigma^{\{0,1,\ldots, n-1\}}$.
For $\omega\in \Sigma^n$ denote by $f_0(\omega)$ the frequency with which $0$ appears in $\omega$, i.e.,
\[
f_0(\omega)=\frac{1}{n}\big|\big\{i\in \{0,1, \ldots, n-1\}: \omega_i=0\big\}\big|.
\]
Let
\[
F_{1}=\left(\Sigma^{2\ell_1-1}\,\setminus\, \Sigma_+^{2\ell_1-1}\right)
\cup \big\{\omega\in \Sigma_+^{2\ell_1-1} : f_0(\omega) \geq r_1\big\}.
\]
The first set forbids the strings including $-1$, while the second set forbids the strings with many zeros.
Similarly define $F_2$ by
\[
F_{2}=\left(\Sigma^{2\ell_2-1}\,\setminus\, \Sigma_-^{2\ell_2-1}\right)
\cup\big\{\omega\in \Sigma_-^{2\ell_2-1} : f_0(\omega) \geq r_2\big\}.
\]
For $m\geq2$ define $F_{2m-1}$ by
\begin{align*}
& F_{2m-1}
=\left(\Sigma^{2\ell_{2m-1}-1}\,\setminus\, \Sigma_+^{2\ell_{2m-1}-1}\right)
\cup \big\{\omega\in \Sigma_+^{2\ell_{2m-1}-1}: f_0(\omega) \geq r_{2m-1}\big\}\\
&\cup \left\{\omega\in \Sigma_+^{2\ell_{2m-1}-1}: \text{there exists a subsequence}\ \eta\ \text{of}\ \omega\ \text{s.t.}\ \eta\in \bigcup_{i=1}^{m-1} F_{2i-1}\right\}
\end{align*}
and
$F_{2m}$ by
\begin{align*}
& F_{2m}=\left(\Sigma^{2\ell_{2m}-1}\,\setminus\, \Sigma_-^{2\ell_{2m}-1}\right)
\cup \big\{\omega\in \Sigma_-^{2\ell_{2m}-1}: f_0(\omega) \geq r_{2m}\big\}\\
&\cup \left\{\omega\in \Sigma_-^{2\ell_{2m}-1}: \
\text{there exists a subsequence}\ \eta\ \text{of}\ \omega\ \text{s.t.}\ \eta\in \bigcup_{i=1}^{m-1} F_{2i}\right\}.
\end{align*}
Let $X_k=X_{F_k}$. By construction we have
\begin{align*}
& X_1\supset X_3\supset \cdots \supset X_{2m-1}\supset \cdots\\
& X_2\supset X_4\supset \cdots \supset X_{2m}\supset \cdots.
\end{align*}
Let $X_+=\bigcap_{m\geq1} X_{2m-1}$, $X_-=\bigcap_{m\geq1} X_{ 2m}$ and $X=X_+\cup X_-$.
Note that $X_\pm\neq\emptyset$ since $\pm1^\infty \in X_\pm$.
By construction we have
$\mathcal{P}_{X_+, \ell_{2m-1}}=\Sigma_+^{2\ell_{2m-1}-1}\,\setminus\, F_{2m-1}$ and
$\mathcal{P}_{X_-, \ell_{2m}}=\Sigma_-^{2\ell_{2m}-1}\,\setminus\, F_{2m}$
for $m\geq1$.
We will choose $\{\ell_k\}$ and $\{r_k\}$ in such a way that $h_{{\scriptscriptstyle top}}(X_1)>h_{{\scriptscriptstyle top}}(X_2)>h_{{\scriptscriptstyle top}}(X_3)> \cdots >h_{{\scriptscriptstyle top}}(X_{2m-1})>h_{{\scriptscriptstyle top}}(X_{2m})>h_{{\scriptscriptstyle top}}(X_{2m+1})>\cdots$ and to keep this kind of condition after imbedding in two dimensions.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=300pt]{picture_on_the_base_system.pdf}
\caption{Sketch of the sequences of nested subshifts of finite type.} \label{sequence of subshifts.}
\end{center}
\end{figure}
Applying Theorem \ref{imbedding} to the subshift $X$,
we know that there exist an alphabet $\breve{B}\subset \Sigma\times \mathcal{C}$
and a two-dimensional SFT $Y$ over $\breve{B}$.
We need to blow up the number of patterns in $Y$.
We replace $\Sigma$ in $\breve{B}$ by
$\widetilde{\Sigma}=\{\widetilde{\omega}=(\omega, s) \in \Sigma\times\{0, \tilde{0}, \ast\} : s=\ast\ \text{if and only if}\ \omega\neq 0\}$.
Let $\widetilde{\mathcal{B}}$ be the corresponding alphabet.
We thus have two possibilities whenever $\omega=0$, which blows up the number of patterns according to the number of $0$'s they contain.
Denote by $\widetilde{Y}$ the corresponding SFT.
Let $\widetilde{Y}_\pm=\widetilde{\pi}^{-1} X_\pm \cap \widetilde{Y}$ where $\widetilde{\pi}$ is the projection from $\widetilde{\mathcal{B}}$ to $
\Sigma$.
Let $F$ be the (finite) set of forbidden patterns for $\widetilde{Y}$. Elements in $F$ are ``cross-shaped''.
Define $\Lambda_F=\{(0,0), (\pm1,0), (0, \pm 1)\}$, and a finite-range interaction $\Phi$ by
\[
\Phi_\Lambda(x)=
\begin{cases}
-|\Lambda| & \text{if} \quad \Lambda=\Lambda_F\;\;\text{and}\;\; x_{\Lambda_F}\in F\\
0 & \text{otherwise}.
\end{cases}
\]
Then, the corresponding locally constant potential $\phi: \widetilde{\mathcal{B}}^{\,\Z^2}\rightarrow \R$ is (recall \eqref{def-phi-from-Phi})
\[
\phi(x)=
\begin{cases}
-1 & \text{if} \quad x_{\Lambda_F}\in F\\
0 & \text{otherwise}.
\end{cases}
\]
That $\phi$ is locally constant means that $\phi(x)=\phi(x')$ whenever $x$ and $x'$ agree on $\Lambda_F$, i.e., whenever $x,x'\in [p]$ for some pattern $p\in\widetilde{\mathcal{B}}^{\,\Lambda_F}$.
Observe that $\phi\equiv0$ on $\widetilde{Y}$, since configurations in $\widetilde{Y}$ have no pattern from $F$.
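In particular, for any probability measure $\mu$ on $\widetilde{\mathcal{B}}^{\,\Z^2}$ we have $\int \phi \dd\mu=-\mu\big(\{x : x_{\Lambda_F}\in F\}\big)$; thus, for shift-invariant $\mu$, the quantity $-\int \phi \dd\mu$ is the expected frequency of forbidden patterns under $\mu$.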
\section{Estimates on admissible patterns} \label{Estimates-on-admissible-patterns}
We start with the conditions we have to impose on $\{\ell_k\}$ and $\{r_k\}$.
For each $k\geq1$ we require that
\[
\tag{S1}
(2\ell_k-1)\, r_{k} \geq1,
\]
which ensures that, for every $k$, there exists at least one admissible string of length $2\ell_k-1$ containing a $0$.
For $t\in \left[0,1\right]$, let
\begin{equation}\label{def-binary-entropy}
H(t)=-t\log t -(1-t)\log (1-t)
\end{equation}
with the usual convention $0\log0=0$.
Let $\ell_1=2^2$ and $r_1=2^{-1}$.
Define $\ell_{k+1}$ and $r_{k+1}$ inductively by the following conditions:
\[
\tag{S2}
2^{-1}r_k \log 2 \geq 10 H(r_{k+1})
\]
\[
\tag{S3}
(4\ell_k-2)^{-1}\geq 10\left(r_{k+1} + 2(2\ell_{k+1}-1)^{-2} \log (2\ell_{k+1}-1)\right)
\]
\[
\tag{S4}
\ell_{k+1}\geq 2^{4\ell_k}.
\]
Note that (S3) implies
\[
(4\ell_k-2)^{-1}\geq 10\left(r_{k+1} + (2\ell_{k+1}-1)^{-2} \log c_{\ell_{k+1}}\right)
\]
since $c_n\leq (2n-1)^2$.
Since conditions (S1), (S2), (S3) and (S4) can be rewritten by means of recursive functions,
the subshift $X$ defined in the previous section is effective.
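As a simple feasibility check (the numerical values below are approximate and serve only to illustrate that the recursion is non-vacuous), starting from $\ell_1=2^2$ and $r_1=2^{-1}$ one may take, e.g., $\ell_2=2^{16}$ and $r_2=2^{-9}$: condition (S4) then holds with equality, (S2) holds since $10\,H(2^{-9})\approx 0.14\leq 2^{-2}\log 2\approx 0.17$, (S3) holds since $10\big(r_2+2(2\ell_2-1)^{-2}\log(2\ell_2-1)\big)\approx 0.02\leq (4\ell_1-2)^{-1}=1/14$, and (S1) holds for $k=2$ since $(2\ell_2-1)\,r_2>1$.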
The input layer of an admissible pattern of $\widetilde{Y}$ can contain forbidden strings,
since the macro tiles on the computation layer only check very short substrings of the input.
Such input patterns are said to be locally admissible.
For later use we define the globally admissible set.
For $n\geq1$ define the globally admissible set $G_{\widetilde{Y}, n}$ of $\widetilde{Y}$ with size $n$ by
\[
G_{\widetilde{Y}, n}=\pi^{-1}\, \mathcal{P}_{\widehat{X}, n} \cap \mathcal{P}_{\widetilde{Y},\,n}.
\]
Since we restrict the admissible set of $\widetilde{Y}$ to the elements whose input layer consists of admissible strings of $X$,
an element of $G_{\widetilde{Y}, n}$ can be extended to bigger and bigger squares.
Define $m_k=(2\ell_k-1) (2\ell_{k-1}-1)^{-1}$.
Then an admissible string of $X_-$ with length $2\ell_k-1$ consists of $m_{k}$ admissible strings of $X_-$ with length $2\ell_{k-1}-1$.
However $\mathcal{P}_{X_-,\, \ell_{k-1}}^{m_k} \neq \mathcal{P}_{X_-, \,\ell_k}$, since we may find forbidden strings in the concatenated part.
Let $k$ be even.
Let $B_{+, k-1}$ be the set of admissible strings containing at least one $0$, i.e.,
$B_{+, \, k-1} = \mathcal{P}_{X_+,\, \ell_{k-1}} \setminus \{ (+1)^{2\ell_{k-1}-1}\}$.
Let $C_{+,\, k}$ be the set of strings obtained by alternately concatenating strings from $B_{+, \,k-1}$ and the string $(+1)^{2\ell_{k-1}-1}$:
\begin{align*}
C_{+,\, k} = & \left\{\omega_1 \cdots \omega_{m_k}\in \mathcal{P}_{X_+,\, \ell_{k-1}}^{m_k}:
\omega_m\in B_{+,k-1}\ \text{if}\ m\ \text{is odd}\right. \\
& \left. \;\;\qquad\qquad\qquad\qquad\qquad\text{and}\ \omega_m=(+1)^{2\ell_{k-1}-1}\ \text{if}\ m\ \text{is even}\right\}\,.
\end{align*}
Since the string $(+1)^{2\ell_{k-1}-1}$ appears between strings from $B_{+,\,k-1}$, no forbidden string appears in the concatenated parts.
Hence we have $C_{+,\, k}\subset \mathcal{P}_{X_+, \, \ell_k}$.
We define $B_{-,\, k}$ and $C_{-,\, k}$ for odd $k$ in the same way.
We use the following lemma to show Proposition \ref{pattern lemma}.
\begin{lemma}
We have
\begin{equation}
\label{string_relation_odd}
|\mathcal{P}_{X_-,\, \ell_k}|^{10}\leq |C_{+,\, k}| \quad\text{if}\;\;k\;\;\text{is odd}
\end{equation}
and
\begin{equation}
\label{string_relation_even}
|\mathcal{P}_{X_+,\, \ell_k}|^{10}\leq |C_{-, \, k}| \quad\text{if}\;\;k\;\;\text{is even}\,.
\end{equation}
\end{lemma}
\begin{proof}
Assume $k$ is odd. By (S2) we have the following.
\begin{align*}
|\mathcal{P}_{X_-,\ell_k}|^{10}
& \leq \left(\sum_{i=0}^{(2\ell_k-1)\, r_k} \binom{2\ell_k-1}{i} \right)^{10}
\leq \left(\e^{(2\ell_k-1)H(r_k)}\right)^{10}\\
&= \e^{10(2\ell_k-1) H(r_{k})}\\
& \leq \e^{2^{-1} (2\ell_k-1)\, r_{k-1}\log 2}
= 2^{\,2^{-1} (2\ell_k-1)\, r_{k-1}}.
\end{align*}
Now
\begin{align*}
|C_{+, k}|
&=|B_{+,\, k-1}|^{2^{-1}m_k}
=\left(\sum_{i \leq (2\ell_{k-1}-1)\,r_{k-1}}\binom{ 2\ell_{k-1}-1}{i}-1\right)^{2^{-1}m_k}\\
& \geq \left(\sum_{i \leq (2\ell_{k-1}-1)\, r_{k-1}}
\binom{ (2\ell_{k-1}-1)\,r_{k-1}}{i}\right)^{2^{-1}m_k}\\
& =2^{\, (2\ell_{k-1}-1)\, r_{k-1}\, 2^{-1}m_k}
= 2^{\,2^{-1}(2\ell_k-1)\, r_{k-1}}\,.
\end{align*}
Hence we get \eqref{string_relation_odd}.
The proof of \eqref{string_relation_even} is very similar and thus left to the reader.
\end{proof}
For each $n\geq1$ set the globally admissible set $G_{\widetilde{Y}_\pm, n}$ of $\widetilde{Y}_\pm$ with size $n$ by
\[
G_{\widetilde{Y}_\pm, n}=\pi^{-1}\, \mathcal{P}_{\widehat{X}_\pm, n} \cap \mathcal{P}_{\widetilde{Y},\,n}.
\]
\begin{proposition}\label{pattern lemma}
We have
\[
|G_{\widetilde{Y}_-,\ \ell_k}|^{10}\leq |G_{\widetilde{Y}_+,\ \ell_k}|\quad \text{if}\;\;k\;\;\text{is odd}
\]
and
\[
|G_{\widetilde{Y}_+,\ \ell_k}|^{10}\leq |G_{\widetilde{Y}_-,\ \ell_k}|\quad \text{if}\;\;k\;\;\text{is even}.
\]
\end{proposition}
\begin{proof}
By Proposition \ref{computation layer} we have
\begin{align*}
|G_{\widetilde{Y}_{\pm}, \ell_k}|
&=\sum_{p\,\in Y_{\pm,\, \ell_k}} \e^{\text{number of}\;0\text{'s in}\;\pi(p)}\\
& =c_{\ell_k} \sum_{\widehat{\omega}\,\in\, \scaleto{\mathcal{P}_{\widehat{X}_\pm,\ell_k}}{7pt}}
\e^{\text{number of}\;0\text{'s in}\;\widehat{\omega}}\\
& = c_{\ell_k} \sum_{\omega\,\in\, \mathcal{P}\!_{X_\pm, \,\ell_k}} \e^{f_0(\omega)(2\ell_k-1)^2}.
\end{align*}
Assume $k$ is odd.
By \eqref{string_relation_odd} we have
\begin{align*}
|G_{\widetilde{Y}_{-}, \ell_k}|^{10}
&=\left(c_{\ell_k}\sum_{\omega\,\in\, \mathcal{P}_{X_-\!, \ell_k}} \e^{f_0(\omega)(2\ell_k-1)^2}\right)^{10}\\
& \leq | \mathcal{P}_{X_-, \ell_k} |^{10} \e^{10\, r_k(2\ell_k-1)^2+10\log c_{\ell_k}}\\
& \leq |C_{+\!,\, k}| \, \e^{10(r_k(2\ell_k-1)+(2\ell_k-1)^{-1}\log c_{\ell_k}) (2\ell_k-1)}.
\end{align*}
Note that every $\omega$ in $C_{+\!, k}$ satisfies $f_0(\omega) (2\ell_k-1) \geq 2^{-1} m_k$.
By (S3) we have
\begin{align*}
f_0(\omega)(2\ell_k-1)^2
& \geq \frac{2\ell_k-1}{2(2\ell_{k-1}-1)}(2\ell_k-1)\\
&\geq 10 (r_k(2\ell_k-1) + (2\ell_k-1)^{-1}\log c_{\ell_k}) (2\ell_k-1)
\end{align*}
for $\omega \in C_{+\!,k}$.
Hence we have
\begin{align}
|G_{\widetilde{Y}_{-}, \ell_k}|^{10}
\leq \sum_{\omega\, \in\, C_{+\!,k}}\e^{f_0(\omega)(2\ell_k-1)^2}
\leq c_{\ell_k} \sum_{\omega\,\in\, \mathcal{P}_{X_+\!, \ell_k}} \e^{f_0(\omega)(2\ell_k-1)^2}
=|G_{\widetilde{Y}_+\!, \ell_k}|.
\label{GYplus}
\end{align}
Therefore, for odd $k$, the proof is finished. We can do very similar calculations for even $k$.
\end{proof}
\section{Proof of the main theorem } \label{Non-convergence}
We now prove Theorem \ref{maintheorem}.
For each $\beta>0$, we pick an arbitrary equilibrium state for $\beta\phi$ (there can be several) and we consider the resulting one-parameter
family $(\mu_{\beta})_{\beta>0}$.
Let $M_{k}$ be the minimum size of the macro-tiles whose simulation checks all substrings with length less than $\ell_k$.
For each $k\geq 1$ we denote by $\theta(k)$ the smallest index which satisfies $2\ell_{\theta(k)}-1\geq 2 M_k$.
By this choice at least one macro tile with size $M_k$ should appear in the box with size $\ell_{\theta(k)}$.
Our main theorem follows from the following proposition, and the rest of this paper is devoted to its proof.
\begin{proposition} \label{oscillation}
Take an arbitrary $\delta\in (0,1]$. Then for all $k$ large enough we have
\begin{align*}
\mu_{\beta_{k}}\left( [G_{\widetilde{Y}_-,\, \ell_{k+1}}] \right) & \geq 1-\delta \quad \text{if}\quad k\;\; \text{is even},\\
\mu_{\beta_{k}}\left([G_{\widetilde{Y}_+,\, \ell_{k+1}}] \right)& \geq 1-\delta \quad \text{if}\quad k\;\; \text{is odd}.
\end{align*}
\end{proposition}
Invoking the ergodic decomposition \cite[Chapter 14]{Georgii},
we can restrict ourselves to ergodic equilibrium states in the proof of Proposition \ref{oscillation}.
Since $\bigcap_{k\geq1} [G_{\widetilde{Y}_-, \ell_k}]=\widetilde{Y}_-$, $\bigcap_{k\geq1}[G_{\widetilde{Y}_+, \ell_k}]=\widetilde{Y}_+$
and $\widetilde{Y}_-\cap \widetilde{Y}_+=\emptyset$, the proposition shows that the one-parameter family $(\mu_\beta)_{\beta>0}$ does not converge.
To prepare the proof of the above proposition, we need the following three lemmas.
\begin{lemma}\label{weight lemma on the expectation}
Let $C=\log\big|\widetilde{\mathcal{B}}\big|$.
Then $\int \phi \dd\mu_{\beta}\geq -C\beta^{-1}$ for any $\beta>0$.
\end{lemma}
\begin{proof}
There exists at least one shift-invariant measure $\mu$ whose support is contained in $\widetilde{Y}$.
Since $\phi\equiv 0$ on $\widetilde{Y}$, this implies that $\int \phi \dd\mu=0$. Hence, since $h(\mu)\geq0$, we get
\[
P(\beta \phi)\geq h(\mu)+\beta \int \phi\dd\mu \geq 0.
\]
Since $\mu_{\beta}$ is an equilibrium state for $\beta\phi$, we have
\[
h(\mu_{\beta})+\beta\int \phi \dd\mu_{\beta}=P(\beta\phi)\geq0.
\]
Therefore
\[
\int \phi \dd\mu_{\beta}\geq-\beta^{-1}h(\mu_{\beta})
\geq -\beta^{-1}\log \big|\widetilde{\mathcal{B}}\big|.
\]
\end{proof}
Given $n\geq1$ and a function $\psi: \widetilde{\mathcal{B}}^{\,\Z^2}\rightarrow \R$, let $S_n\psi=\sum_{i\in \Lambda_n} \psi \circ \sigma^i$.
Let $C'=2C$ and for $n,k \geq1$ define $E_{n,k}$ as the set of configurations $y\in \widetilde{\mathcal{B}}^{\,\Z^2}$ satisfying the following conditions:
\begin{align}
\frac{S_{n+\ell_{\theta(k+1)}-1} \phi(y)}{\lambda_{n+\ell_{\theta(k+1)}-1}} \geq \int \phi \dd\mu_{\beta_{k}} -C \beta_{\ell_{k}}^{-1}
\label{Sn1}
\end{align}
and
\begin{align}
\frac{1}{\lambda_n} S_n\chi_{[G_{\widetilde{Y}_-, \ell_{k+1}}]} (y) \leq \mu_{\beta_{k}} \left([G_{\widetilde{Y}_-,\ell_{k+1}}]\right) + C'2^{-2\ell_{k}}.
\label{Sn2}
\end{align}
\begin{lemma} \label{estimate of Enk}
Take $k\geq1$ and $\varepsilon>0$.
Let $\mu_{\beta_{k}}$ be an ergodic equilibrium state for $\beta_{k}\phi$.
Then for all $n$ large enough
\begin{align}
\mu_{\beta_{k}}(E_{n,k})>1-\varepsilon. \label{probability of Enk}
\end{align}
\end{lemma}
\begin{proof}
Since $\mu_{\beta_{k}}$ is ergodic, we have
\begin{align*}
\lim_{n\to\infty} \frac{1}{\lambda_n} S_n\phi(y)
&= \int \phi \dd\mu_{\beta_{k}}\\
\lim_{n\to\infty}\frac{1}{\lambda_n}S_n\chi_{[G_{\widetilde{Y}_-, \ell_{k+1}}]}(y)
&=\mu_{\beta_{k}} \left([G_{\widetilde{Y}_-,\ell_{k+1}}]\right)
\end{align*}
for $\mu_{\beta_{k}}$ almost every point $y$,
which completes the proof.
\end{proof}
We estimate the number of globally admissible patterns in the configurations of $E_{n,k}$.
\begin{lemma} \label{Enk}
For every $k$ there exists $N\geq1$ such that for every $n\geq N$
\begin{align}
\frac{1}{\lambda_n} \#\Big\{i\in \Lambda_n: y_{i+\Lambda_{\ell_{k+1}}}\in G_{\widetilde{Y}, \ell_{k+1}}\Big\}
=\frac{1}{\lambda_n} S_n \chi_{[G_{\widetilde{Y}, \ell_{k+1}}]}(y)
>1-C'2^{-2\ell_{k}}
\label{patterns-of-Enk}
\end{align}
for every $y\in E_{n,k}$.
\end{lemma}
\begin{proof}
Take $N\geq1$ such that $\lambda_n^{-1} \lambda_{n+\ell_{\theta(k+1)}-1} \lambda_{\ell_{\theta(k+1)}}<2^{\ell_{k}}$,
for all $n\geq N$.
Take $n\geq N$ and $y\in E_{n,k}$.
Since by definition $[G_{\widetilde{Y}, \ell_{k+1}}]\supset [\mathcal{P}_{\widetilde{Y}, \ell_{\theta(k+1)}}]$, we have
\[
1-\frac{1}{\lambda_n} S_n\chi_{[G_{\widetilde{Y}, \ell_{k+1}}]} (y)
=\frac{1}{\lambda_n} S_n \chi_{\widetilde{\mathcal{B}}^{\Z^2}\setminus[G_{\widetilde{Y}, \ell_{k+1}}]} (y)\\
\leq \frac{1}{\lambda_n} S_n \chi_{\widetilde{\mathcal{B}}^{\Z^2}\setminus[\mathcal{P}_{\widetilde{Y}, \ell_{\theta(k+1)}}]} (y).
\]
$S_n \chi_{\widetilde{\mathcal{B}}^{\Z^2}\setminus[\mathcal{P}_{\widetilde{Y}, \ell_{\theta(k+1)}}]} (y)$ is the number of positions in $\Lambda_n$
for which a non-admissible pattern with size $\ell_{\theta(k+1)}$ appears.
Let $i\in \Lambda_{n+\ell_{\theta(k+1)}-1}$ be a position where a forbidden pattern appears, that is, $y_{\Lambda_F+i}\in F$.
Then the patterns in boxes with size $\ell_{\theta(k+1)}$ including $i$ are non-admissible.
The number of such boxes is bounded by $\lambda_{\ell_{\theta(k+1)}}$.
Since the number of positions in $\Lambda_{n+\ell_{\theta(k+1)}-1}$ where a forbidden pattern appears is $-S_{n+\ell_{\theta(k+1)}-1} \phi(y)$,
we have
\begin{align*}
& \frac{1}{\lambda_n} S_n \chi_{\widetilde{\mathcal{B}}^{\Z^2}\setminus[\mathcal{P}_{\widetilde{Y}, \ell_{\theta(k+1)}}]} (y)\\
&\leq- \frac{\lambda_{n+\ell_{\theta(k+1)}-1}}{\lambda_n}\frac{1}{\lambda_{n+\ell_{\theta(k+1)}-1}} S_{n+\ell_{\theta(k+1)}-1} \phi (y) \times \lambda_{\ell_{\theta(k+1)}}.
\end{align*}
See Fig. \ref{non admissible patterns}.
By the choice of $n$, \eqref{Sn1} and Lemma \ref{weight lemma on the expectation}, we have
\begin{align*}
1-\frac{1}{\lambda_n} S_n\chi_{[G_{\widetilde{Y}, \ell_{k+1}}]} (y)
&\leq 2^{\ell_{k}}\left(-\int \phi \dd\mu_{\beta_{\ell_{k}}} +C\beta_{\ell_{k}}^{-1}\right)\\
&\leq 2C\beta_{\ell_{k}}^{-1}2\, ^{\ell_{k}}=C' \, 2^{-2\ell_{k}}.
\end{align*}
\end{proof}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=200pt]{picture_on_non_admissible_patterns.pdf}
\caption{Estimate of the number of non-admissible patterns.} \label{non admissible patterns}
\end{center}
\end{figure}
\begin{proof}[Proof of Proposition \ref{oscillation}]
We argue by contradiction and suppose that there exist $\delta\in (0,1]$ and arbitrarily large $k$ such that
\begin{align}
\mu_{\beta_{k}}\left( [G_{\widetilde{Y}_-, \ell_{k+1}}] \right)\leq 1-\delta \quad \text{if}\ k\ \text{is even}, \label{contradiction}\\
\mu_{\beta_{k}}\left([G_{\widetilde{Y}_+, \ell_{k+1}}] \right)\leq 1-\delta \quad \text{if}\ k\ \text{is odd}. \nonumber
\end{align}
We only deal with the case when $k$ is even, since the other case is similar.
Without loss of generality we may assume $k$ is large enough to satisfy $C' 2^{-2\ell_{k}}\leq \delta/100$
and $\Lambda_{\ell_{k+1}}\supset \Lambda_F$.
By \eqref{Sn2} and \eqref{contradiction} we have
\begin{align}
\frac{1}{\lambda_n}S_n \chi_{[G_{\widetilde{Y}_-, \ell_{k+1}}]}(y)
\leq 1-\delta + C'2^{-2\ell_{k}}
\leq 1-\left(1-\frac{1}{100}\right)\delta
\label{pattern from negative}
\end{align}
for every $y\in E_{n,k}$.
Let $h_{k+1}^-=\frac{1}{\lambda_{\ell_{k+1}}}\log\big|G_{\widetilde{Y}_-, \ell_{k+1}}\big|$.
Using \eqref{GYplus} in Proposition \ref{pattern lemma}, we have
\[
\Big|G_{\widetilde{Y}_-, \ell_{k+1}}\Big|
\geq \sum_{\omega \in C_{-, k+1}}\e^{f_0(\omega)(2\ell_{k+1}-1)^2}
\geq \e^{\min_{\omega\in C_{-, k+1}} f_0(\omega)(2\ell_{k+1}-1)^2}.
\]
By the definition of $C_{-,\, k+1}$ we have
\begin{equation}\label{hkminus}
h_{k+1}^-=\frac{1}{\lambda_{\ell_{k+1}}}\log \Big|G_{\widetilde{Y}_-, \ell_{k+1}}\Big|
\geq\min_{\omega\in C_{-,k+1}} f_0(\omega)
\geq\frac{\lfloor \frac{m_{k+1}}{2}\rfloor}{2\ell_{k+1}-1}
\geq\frac{1}{8\,\ell_{k}}.
\end{equation}
\begin{lemma} \label{estimation on exponential decay}
Take $N\geq1$ such that, for every $n\geq N$ and $y\in E_{n,k}$, \eqref{patterns-of-Enk} holds.
Then we have
\[
\frac{1}{\lambda_n}\log|E_{n,k}|\leq H\big(C'2^{-2\ell_{k}}\big)+ (1-\delta') h_{k+1}^- +o(h_{k+1}^-)
\]
where $\delta'=\frac{9}{1000}\,\delta$.
\end{lemma}
\begin{proof}
Fix $n\geq N$. For $y \in E_{n,k}$ the number of positions in $\Lambda_n$ for which a pattern from $\widetilde{\mathcal{B}}^{\Lambda_{\ell_{k+1}}}\setminus G_{\widetilde{Y},\, \ell_{k+1}}$ appears is bounded above by $ \lambda_n C' 2^{-2\ell_{k}}$ by \eqref{patterns-of-Enk}:
\[
\lambda_n\left(1-\frac{1}{\lambda_n} S_n \chi_{[G_{\widetilde{Y}, \ell_{k+1}}]} (y)\right)
\leq \lambda_n\, C' 2^{-2\ell_{k}}.
\]
The number of possible places for such positions is bounded by
\[
\sum_{r< \lambda_n C'2^{-2\ell_{k}}}\binom{\lambda_n}{r}\leq \e^{H(C'2^{-2\ell_{k}})\lambda_n}\lambda_n^2
\]
(See Appendix \ref{appendix-binom}).
Since $G_{\widetilde{Y}\!,\, \ell_{k+1}}$ is the disjoint union of $G_{\widetilde{Y}_-, \ell_{k+1}}$ and $G_{\widetilde{Y}_+, \ell_{k+1}}$,
\eqref{pattern from negative} and \eqref{patterns-of-Enk} imply
\begin{align*}
S_n \chi_{[G_{\widetilde{Y}_+, \ell_{k+1}}]} (y)
&=S_n \chi_{[G_{\widetilde{Y}, \ell_{k+1}}]} (y)-S_n \chi_{[G_{\widetilde{Y}_-, \ell_{k+1}}]} (y)\\
&\geq \lambda_n\big(1-C' 2^{-2\ell_{k}}\big) -\lambda_n\left(1-\Big(1-\frac{1}{100}\Big)\delta\right)\\
& \geq \frac{\delta}{100} \lambda_n.
\end{align*}
Accounting for overlapping parts, the number of positions carrying patterns from $G_{\widetilde{Y}_+, \ell_{k+1}}$
is bounded above by $\lambda_n \lambda_{\ell_{k+1}}^{-1}$
and bounded below by $\frac{\delta}{100}\frac{\lambda_n}{\lambda_{\ell_{k+1}}}$.
Hence the number of ways to choose patterns from $G_{\widetilde{Y}, \ell_{k+1}}$ is bounded by
\begin{align*}
& \sum_{r=\frac{\delta}{100} \frac{\lambda_n}{\lambda_{\ell_{k+1}}}}^{\frac{\lambda_n}{\lambda_{\ell_{k+1}}}}
\binom{\lambda_n}{r}\, \Big|G_{\widetilde{Y}_-, \ell_{k+1}}\Big|^{ \frac{\lambda_n}{\lambda_{\ell_{k+1}}}-r}\, \Big|G_{\widetilde{Y}_+, \ell_{k+1}}\Big|^r \\
& \leq \sum_{r=\frac{\delta}{100} \frac{\lambda_n}{\lambda_{\ell_{k+1}}}}^{\frac{\lambda_n}{\lambda_{\ell_{k+1}}}}
\binom{\lambda_n}{r} \, \Big|G_{\widetilde{Y}_-, \ell_{k+1}}\Big|^{ \frac{\lambda_n}{\lambda_{\ell_{k+1}}}-r}\, \Big|G_{\widetilde{Y}_-, \ell_{k+1}}\Big|^{\frac{r}{10}}\\
& =\sum_{r=\frac{\delta}{100} \frac{\lambda_n}{\lambda_{\ell_{k+1}}}}^{\frac{\lambda_n}{\lambda_{\ell_{k+1}}}}
\binom{\lambda_n}{r}\, \Big|G_{\widetilde{Y}_-, \ell_{k+1}}\Big|^{ \frac{\lambda_n}{\lambda_{\ell_{k+1}}}-\frac{9}{10}r}\\
& \leq \Big|G_{\widetilde{Y}_-, \ell_{k+1}}\Big|^{ \frac{\lambda_n}{\lambda_{\ell_{k+1}}}(1-\frac{9}{10}\frac{\delta}{100})}
\sum_{r=\frac{\delta}{100} \frac{\lambda_n}{\lambda_{\ell_{k+1}}}}^{\frac{\lambda_n}{\lambda_{\ell_{k+1}}}}
\binom{\lambda_n}{r}\\
& \leq \Big|G_{\widetilde{Y}_-, \ell_{k+1}}\Big|^{ \frac{\lambda_n}{\lambda_{\ell_{k+1}}}(1-\delta')} \frac{\delta}{100} \left(\frac{\lambda_n}{\lambda_{\ell_{k+1}}} \right)^2 \e^{\frac{\lambda_n}{\lambda_{\ell_{k+1}}}H\big(\frac{\delta}{100}\big)}.
\end{align*}
Hence
\[
|E_{n, k}|\leq e^{H(C'2^{-2\ell_{k}})\lambda_n}\lambda_n^2
\big|G_{\widetilde{Y}_-, \ell_{k+1}}\big|^{ \frac{\lambda_n}{\lambda_{\ell_{k+1}}}(1-\delta')} \frac{\delta}{100} \left(\frac{\lambda_n}{\lambda_{\ell_{k+1}}} \right)^2
\e^{\frac{\lambda_n}{\lambda_{\ell_{k+1}}}H\big(\frac{\delta}{100}\big)}.
\]
By taking logarithm and dividing out by $\lambda_n$ we get
\begin{align*}
\frac{1}{\lambda_n}\log|E_{n, k}|
&\leq H\big(C'2^{-2\ell_{k}}\big)
+ \frac{4}{\lambda_n}\log{\lambda_n} + (1-\delta') \frac{1}{\lambda_{\ell_{k+1}}}\log \Big|G_{\widetilde{Y}_-, \ell_{k+1}}\Big|\\
& \quad + \frac{1}{\lambda_n}\log \frac{\delta}{100}+2\log\frac{1}{\lambda_{\ell_{k+1}}}+ \frac{1}{\lambda_{\ell_{k+1}}} H\left(\frac{\delta}{100}\right).
\end{align*}
Since $\delta\in (0,1]$, we have $\lambda_n^{-1} \log \big(\delta/100\big) <0$.
Since $\log x \leq x-1$ for every $x>0$, $H(t)\leq \log2\leq 1$ and $\lambda_n\geq9$ for $n\geq2$, we have
\[
2\log\frac{1}{\lambda_{\ell_{k+1}}}+ \frac{1}{\lambda_{\ell_{k+1}}} H\left(\frac{\delta}{100}\right)
\leq 2\left(\frac{1}{\lambda_{\ell_{k+1}}}-1\right)+\frac{1}{\lambda_{\ell_{k+1}}}
=\frac{3}{\lambda_{\ell_{k+1}}}-2\leq 0.
\]
For all $n$ large enough we have
\[
\frac{4}{\lambda_n} \log \lambda_n \leq 2^{-2 \ell_{k}}.
\]
Hence we have
\[
\frac{1}{\lambda_n}\log|E_{n, k}|
\leq H\big(C'2^{-2\ell_{k}}\big)+ (1-\delta') h_{k+1}^-+2^{-2\ell_{k}}
\]
for all $n$ large enough.
\end{proof}
Now we can finish the proof.
We have a probability measure $\nu_{k+1}^-$ which is invariant and ergodic under $\sigma_{\Lambda_{\ell_{k+1}}}$, whose entropy is $h(\nu_{k+1}^-)=\log\big|G_{\widetilde{Y}_-, \ell_{k+1}}\big|$ and
whose support is included in $[G_{\widetilde{Y}_-, \ell_{k+1}}]$. (See Appendix \ref{appendix-pressure} for details.)
Since no forbidden pattern appears inside $\Lambda_{\ell_{k+1}}$ for an element of $[G_{\widetilde{Y}_-, \ell_{k+1}}]$, only the sites $i$ with $i+\Lambda_F\not\subset\Lambda_{\ell_{k+1}}$ can contribute to $S_{\ell_{k+1}}\phi$, and hence
\[
\int S_{\ell_{k+1}}\phi \dd\nu_{k+1}^-
\geq -\# (\Lambda_{\ell_{k+1}}\setminus \Lambda_{\ell_{k+1}-1})
=-8(\ell_{k+1}-1).
\]
By the variational principle we have
\begin{align*}
P(\beta_{k}\phi) & =P(\sigma, \beta_{k} \phi)
=\lambda_{\ell_{k+1}}^{-1} P(\sigma_{\Lambda_{\ell_{k+1}}}, \beta_{k} S_{\ell_{k+1}}\phi)\\
& \geq \frac{h(\nu_{k+1}^-)}{\lambda_{\ell_{k+1}}}+\frac{\beta_k}{\lambda_{\ell_{k+1}}} \int S_{\ell_{k+1}}\phi \dd \nu_{k+1}^-.
\end{align*}
The `block' shift $\sigma_{\Lambda_{\ell_k}}$ is defined in Appendix \ref{appendix-pressure}.
The second equality is a general fact. (We refer to \cite[Theorem 9.8]{Wal82} for a proof in the case of a continuous transformation of a compact metric space and a continuous function $\phi$. The proof in the present context is obtained by combining the proof of Theorem 9.8 in \cite{Wal82} and Section 4.4 in \cite{Kel98}.)
Since $\mu_{\beta_{k}}$ is an equilibrium state for $\beta_{k}\phi$, we have
\begin{align}
h_{k+1}^-=\lambda_{\ell_{k+1}}^{-1}h(\nu_{k+1}^-)
&\leq P(\beta_{k}\phi)-\frac{\beta_k}{\lambda_{\ell_{k+1}}} \int S_{\ell_{k+1}}\phi \dd \nu_{k+1}^- \nonumber\\
&\leq h(\mu_{\beta_{k}}) +\beta_{k} \int \phi \dd\mu_{\beta_{k}}+\frac{\beta_k}{(2\ell_{k+1}-1)^2}\times 8(\ell_{k+1}-1) \nonumber\\
&\leq h(\mu_{\beta_{k}})+O(2^{-\ell_k}).
\label{entropy inequation}
\end{align}
The last inequality follows because $\phi\leq0$ and because of (S4).
The $L^1$ version of Shannon-McMillan-Breiman theorem \cite{Kel98} tells us that, for any $\eta>0$, there exists $n_0$
such that for any $n\geq n_0$
\[
\sum_{p\in \widetilde{\mathcal B}^{\Lambda_n}} \left| -\frac{1}{\lambda_n}\log\mu_{\beta_{k}}([p])-h(\mu_{\beta_{k}})\right| \mu_{\beta_{k}}([p])\leq \eta.
\]
In particular
\begin{align*}
\eta
&\geq \sum_{p\in [E_{n,k}]} \left| -\frac{1}{\lambda_n}\log\mu_{\beta_{k}}([p])-h(\mu_{\beta_{k}})\right| \mu_{\beta_{k}}([p])\\
& \geq \left|\sum_{p\in [E_{n,k}]} \Big( -\frac{1}{\lambda_n}\log\mu_{\beta_{k}}([p])-h(\mu_{\beta_{k}}) \Big)\mu_{\beta_{k}}([p])\right|
\end{align*}
from which it follows, by taking $\eta=2^{-{\ell_{k}}}$, that
\[
h(\mu_{\beta_{k}}) \mu_{\beta_{k}} ([E_{n,k}])
\leq \sum_{p\in [E_{n, k}]} \left(-\frac{1}{\lambda_n}\mu_{\beta_{k}}([p])\log \mu_{\beta_{k}}( [p]) \right) + 2^{-{\ell_{k}}}\,.
\]
Using Jensen's inequality with the concave function $t\mapsto -t \log t$ and the weights $1/|[E_{n,k}]|$ we get
\[
h(\mu_{\beta_{k}}) \mu_{\beta_{k}} ([E_{n,k}])
\leq \frac{1}{\lambda_n}\log|[E_{n, k}]|-\frac{1}{\lambda_n} \mu_{\beta_{k}}([E_{n,k}] ) \log\mu_{\beta_{k}}([E_{n,k}] ) + 2^{-{\ell_{k}}}\,.
\]
We now apply Lemma \ref{estimate of Enk} with $\varepsilon=\frac{\delta'}{2}$ to get from \eqref{entropy inequation} that, for $n$ large enough,
\[
h_{k+1}^-\left(1-\frac{\delta'}{2}\right)
\leq \frac{1}{\lambda_n}\log|[E_{n, k}]|-\frac{1}{\lambda_n} \left(1-\frac{\delta'}{2}\right)\log\left(1-\frac{\delta'}{2}\right) + 2^{-{\ell_{k}}}+O(2^{-\ell_k}).
\]
Then Lemma \ref{estimation on exponential decay} implies
\[
h_{k+1}^-\left(1-\frac{\delta'}{2}\right)
\leq H\big(C'2^{-2\ell_{k}}\big)+ (1-\delta') h_{k+1}^-+o(h_{k+1}^-)+O(2^{-\ell_k}).
\]
Dividing out by $h_{k+1}^-$, we have
\begin{equation}
1-\frac{\delta'}{2} \leq 1-\delta'+ \frac{H\big(C'2^{-2\ell_{k}}\big)}{h_{k+1}^-}+\frac{O(2^{-\ell_k})}{h_{k+1}^-}+o(1).\label{final_estimate}
\end{equation}
Using \eqref{hkminus} and then letting $k\to \infty$ in \eqref{final_estimate}, we get a contradiction.
\end{proof}
\section{Introduction}
\label{intro}
It is well known that the effective behavior of composite materials can be established by employing a homogenization technique, which is
based on the assumption that the composite's microstructure is periodic, see \cite{SZ}, for example.
The periodic nature of this type of composites allows the identification of a repeating unit cell which has to be appropriately analyzed in order to
establish its effective behavior which is identical to that of the composite.
However, in the case where it is desired to analyze the elastic field of a composite with embedded local damage (e.g. a crack),
the homogenization technique is no longer applicable.
This is because analyzing a repeating unit cell that includes the localized damage implies that the damage is embedded within the composite in
a periodic manner, which is, of course, hardly a realistic scenario.
In a series of investigations, \cite{AR12}, \cite{RA12}, \cite{AR13}, \cite{Ab17a} and \cite{BRA},
localized damage, in the form of stiff and soft inclusions, cavities, finite, semi-infinite and interfacial cracks in composite materials, has been studied.
In these investigations, a multiscale analysis approach has been developed, aimed to establish the behavior of composites with localized damage.
The multiscale formulation involves micro-to-macro analysis.
In the framework of the microscale analysis, the effective behavior of the undamaged (intact) composite is established.
The micromechanically established solution is employed as the applied `far-field' of the damaged composite.
In these investigations, the high-fidelity generalized method of cells (HFGMC), \cite{AAB},
micromechanics model has been employed for the analysis on the microscale.
The HFGMC micromechanical theory is based on the homogenization technique for composites that are characterized by periodic microstructure,
such that a repeating unit cell can be identified. This repeating unit cell is discretized into several subcells.
The displacement vector in each subcell is represented by a second-order expansion (which includes the applied far-field)
in terms of local coordinates. The unknown terms (microvariables) in the expansion are determined by the fulfillment of the equilibrium equations,
interfacial displacement and traction continuity, as well as global boundary conditions.
The latter conditions ensure that the displacements and tractions at opposite surfaces of the repeating unit cell represent far-field-induced global displacements -- which translates to identical strains and stresses on opposite surfaces (which essentially owes to symmetry -- a weaker property of periodic undamaged microstructure).
All the equations are imposed in the average (surface-integral sense).
Thus, instead of satisfying the equilibrium equations, for example, at every point of the subcell,
the volume integrals over the subcell of these equations are satisfied (i.e., set equal to zero).
Similarly, the continuity conditions between neighboring subcells are imposed by equating the surface integrals of the relevant field values.
A great advantage of HFGMC theory stems from its ability to predict, in addition to the effective moduli of the composite, also the fields distributions that result from
the application of specific loading to the composite.
Having established the effective behavior of the undamaged composite, from which the applied far-field on the damaged composite can be determined,
it is possible to proceed to the macroscale analysis, by which the \emph{damaged} composite can be studied.
This composite is subjected to the micromechanically established far-field, which is assumed to hold far away from the damage locus, which is assumed to be limited to one or a few cells, situated far enough from the boundary cells.
The macroscale derivation involves two distinct analysis steps.
In the first one, the representative cell method, \cite{RN97}, is employed. In the framework of this method, the periodic composite with localized
damage is discretized into several identical cells and then reduced to the problem of a single cell by applying the discrete Fourier transform.
In the second one, the resulting governing equations, interfacial and boundary conditions in the Fourier transform domain, are solved by employing the higher-order
theory (HOT), \cite{AAB}. Among the features of the approach is that the sub-problem in the Fourier space obtained for each Fourier transform coefficient (harmonic) can be solved on a separate computer. In other words, the method allows parallel computation (or parallelization) of a major part of the computational effort required for the solution of the full problem, not unlike the parallelization that can be employed in the construction of the stiffness matrix in Finite Element Method algorithms.
The HOT has been originally developed for the analysis of functionally graded materials. The basic analysis of the governing equations is similar
to that of the HFGMC, but differs in the application of the relevant boundary conditions, which replace the periodic boundary conditions of the HFGMC formulation.
Here too, the boundary conditions that are applied at the composite surfaces, are imposed in the integral sense rather than in a point-wise manner.
Chapters 6 and 11 of \cite{AAB} discuss in detail both the HFGMC and the HOT, and supply various verifications and applications.
Having established by the HOT the elastic field in the transform domain, it is possible to proceed by applying inverse Fourier transform and to obtain the real-space
field at any point of the composite impaired by the localized damage.
The macroanalysis involves calculation of initially unknown eigenstresses.
Therefore, an iterative procedure based on Banach's contraction mapping method (e.g., \cite{Rall}) has been employed. This iterative procedure converged rapidly,
yielding the field distributions at any location of the damaged composite.
The aforementioned investigations involved damaged composites with thermoelastic quasistatic fields. Extensions of the multiscale analysis for composites with
localized damage have been performed by \cite{AR14} and \cite{RA16}.
The effects of various types of localized damage in `smart' composites (piezoelectric, electro-magneto-elastic and thermo-electro-magneto-elastic) have been investigated
by \cite{Ab12}, \cite{Ab17b}, \cite{Ab17c}, \cite{Ab17d} and \cite{Ab18}.
Thus far, the described multiscale analysis has been applied to composites with linear constituents, to predict the behavior of linear composites with various types of localized damage.
Unfortunately, the implementation of this method of analysis for the prediction of damaged composites with hyperelastic constituents undergoing large deformations was unsuccessful.
This owes to the fact that the iterative mapping becomes non-contracting as nonlinearity becomes more pronounced.
To this end, instead of relying on a naturally emerging fixed-point iterative mapping that becomes non-contracting, a locally convergent generic solver is employed in the present work.
The most notable general efficient numerical solver for smooth square \emph{systems} of general smoothly-nonlinear equations is Newton's method (see \cite{Newton} for a review). The disadvantage of Newton's method for problems in which the (vector) function the root of which is sought is calculated by a complicated algorithm, is that the associated Jacobian is not available analytically. In such cases one has to use numerical differentiation. The main problem with this approach in terms of algorithmic complexity is that the appropriate step for the calculation of a finite difference is not known in advance and has to be searched for. Moreover, for large systems, the number of calculations is proportional to the number of variables. In terms of memory requirements, the generally non-sparse computed Jacobian matrix has to be stored, which tends to be quite expensive. Finally, the method may diverge if the roots coincide with local extrema of the aforementioned functions, which may be true, especially for oscillatory functions, such as those obtained from approximate inverse Fourier transforms, as in the algorithm described above.
A standard remedy for all the noted disadvantages is using the so-called Quasi-Newton methods. There, approximations for the aforementioned Jacobian are constructed using information from the solution history. Those methods require less computational effort as there is no explicit differentiation stage. The price, however, is a lower convergence rate. One method, proven especially successful in practice, is the so-called Broyden method, \cite{Broyden1965}, which suggests an especially clever approach to approximating the Jacobian, such that the convergence rate is optimized. An obvious shortcoming of the direct application of the Broyden method is that it still has high memory demands, as the approximate Jacobian needs to be stored in the memory. The solution to this shortcoming has been introduced in the literature in what is known as Limited-Memory, or Low-Memory, Broyden methods. The economy in memory is possible due to the fact that only an inner product of the inverse approximate Jacobian with the vector function the root of which is sought needs to be stored. Moreover, the approximate Jacobian according to the Broyden method is updated in each solution iteration by a low-rank matrix. This allows one to store only the histories of one or two special vectors for the execution of the algorithm. One common approach, \cite{Rotten2014}, uses two such vectors, requiring the storage of $2Np$ double-precision numbers, $N$ being the number of equations to be solved and $p$ being the maximum number of iterations up to a reasonable level of convergence. Although this approach is rather popular, as it happens, a more economic algorithm was proposed already in 1970 by Rheinboldt (see \cite{Rheinboldt1970}, \cite{Rheinboldt1998}). In his seminal work, Rheinboldt managed to construct an algorithm requiring the storage of only $Np$ double-precision numbers, which is 50 percent less. The vector histories to be stored in his algorithm were the steps themselves, that is, the suggested updates for the state variables themselves.
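For concreteness, the following fragment is a minimal dense-matrix sketch (in Python) of the classical `good' Broyden iteration just described, with the inverse-Jacobian update applied through the Sherman-Morrison formula; it is only an illustration of the Broyden step on a toy system, and is neither the limited-memory scheme of \cite{Rheinboldt1970} nor the algorithm developed in the present work.
\begin{verbatim}
import numpy as np

def broyden_good(f, x0, tol=1e-10, max_iter=50):
    # classical "good" Broyden method; B approximates the INVERSE Jacobian
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    B = np.eye(x.size)
    for _ in range(max_iter):
        dx = -B @ fx                    # quasi-Newton step
        x_new = x + dx
        fx_new = f(x_new)
        if np.linalg.norm(fx_new) < tol:
            return x_new
        df = fx_new - fx
        Bdf = B @ df
        # Sherman-Morrison form of the "good" Broyden update
        B += np.outer(dx - Bdf, dx @ B) / (dx @ Bdf)
        x, fx = x_new, fx_new
    return x

# toy example: a mildly nonlinear system with root (1, 1)
f = lambda v: np.array([v[0] + 0.1*v[1]**2 - 1.1,
                        v[1] + 0.1*v[0]**2 - 1.1])
print(broyden_good(f, np.array([0.8, 0.9])))
\end{verbatim}
In the low-memory variants the dense matrix above is never formed; only a short history of vectors sufficient to reproduce its action on the current residual is retained.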
The algorithm of Rheinboldt is undoubtedly as efficient and economic as one can hope for. However, one step forward can still be made. The solution steps, although being the required output of a solver, are not the natural candidates for stored information. Those can only point to the \emph{steps} of the algorithm, but do not allow following the \emph{success} of convergence for different vector function components. If one could devise an algorithm which stores directly the function value histories, then it would be possible to observe the success of convergence for specific equations. For example, for the problem addressed in this paper, one may expect a convergence challenge close to the location of the damage, or, alternatively, near specific inclusions in a composite. For a solver that stores directly the function histories, which stand for Cauchy stress components' corrections at each computational spatial cell, one could follow the quality of convergence of the solution for the stress (represented through the eigenstress) at specific `interesting' spatial locations, and perhaps devise certain control or stopping criteria based on such observations.
Having this idea in mind, this work indeed offers a novel low-memory algorithm based on the Broyden step, requiring storage memory for $Np$ double-precision numbers, such that the consecutive vector function histories are stored in the memory directly. The construction of the algorithm assumed the use of the so-called \emph{good} Broyden step, the optimality of which was phenomenologically discussed (and never actually disputed) in the literature, and which is argued for theoretically in Appendix \ref{AppendixB} of this paper.
This algorithm complements the idea of the use of the back and forth Fourier transform for the micromechanical analysis of hyperelastic composites with localized damage in the volume (area)-averaged strong form. The structure of the remainder of the paper is as follows: Section 2 presents the governing equations, Section 3 discusses the numerical solution approach, including the introduction of the new algorithm for the implementation of the low-memory nonlinear solver, Section 4 demonstrates the application of the approach to a set of specific composites with different geometries, albeit only relatively `nicely-shaped' ones (which is to be overcome in future work), resorting to various materials and damage patterns, and Section 5 concludes.
\section{Governing Equations}
\label{sec:2}
We consider a composite whose constituents are isotropic hyperelastic materials.
The undamaged composite is assumed to possess periodic microstructure forming a doubly periodic array (see Fig. \ref{Figure1}(a)). The composite's deformation due to external loading is described with respect to initial Lagrangian coordinates ($X_1$, $X_2$, $X_3$).
The strain energy function $W$ of a constituent is given by
\begin{eqnarray} \label{E1}
\ \ \ \ \ \ \ W = W (I_1, I_2, I_3)
\end{eqnarray}
where $I_1$, $I_2$, $I_3$ are the three standard invariants of the right Cauchy-Green deformation tensor ${\bm C}$. This tensor is expressed in terms of the deformation gradient ${\bm F}$ by
\begin{eqnarray} \label{E2}
\ \ \ \ \ \ \ {\bm C} = {\bm F}^{\top} {\bm F}
\end{eqnarray}
where the superscript $\top$ denotes the transpose operation.
Let us denote by ${\bm S}$ the second (symmetric) Piola-Kirchhoff stress tensor. It follows that
\begin{eqnarray} \label{E3}
\ \ \ \ \ \ \ {\bm S} =2 \frac{\partial W}{\partial {\bm C}}
\end{eqnarray}
The nonsymmetric first Piola-Kirchhoff stress tensor is defined by, \cite{Mal}:
\begin{eqnarray} \label{E4}
\ \ \ \ \ \ \ {\bm T} = {\bm S} {\bm F}^{\top}
\end{eqnarray}
In the absence of body forces, the equilibrium (linear momentum balance) equation is given by
\begin{eqnarray} \label{E5}
\ \ \ \ \ \ \ \nabla \cdot {\bm T} = {\bm 0}
\end{eqnarray}
where the gradient $\nabla$ is calculated with respect to material coordinates.
In the present article, two types of hyperelastic materials are considered.
For the first one, the compressible version of the Mooney-Rivlin strain energy for rubber-like
materials, introduced by \cite{SB}, is considered:
\begin{eqnarray} \label{E6}
\ \ \ \ W = C_1 (\hat {I_1} - 3) + C_2 (\hat{I_2} - 3) + \frac{\kappa}{2} (J - 1)^2
\end{eqnarray}
where $\hat {I_1}=I_1 I^{-1/3}_3$, $\hat {I_2}=I_2 I^{-2/3}_3$, $J =$ det$({\bm F})$, $\kappa$ is the bulk modulus and $C_1$, $C_2$ are material parameters.
For small deformations the material is characterized by the bulk modulus $\kappa$ and the shear modulus $\mu = 2 (C_1 + C_2)$. In the examples shown in the applications section, the values $C_1=0.3$ MPa, $C_2=0.1$ MPa, $\kappa=3$ MPa were employed.
The resulting first Piola-Kirchhoff stress tensor, derived from Eqs. (\ref{E6}), is denoted by ${\bm T}^{MR}$.
The second type of hyperelastic material is characterized by the Murnaghan strain energy function, \cite{Mur},
which is expressed in terms of the invariants of the Cauchy-Green (or the Lagrangian) strain tensor ${\bm E} = \frac{1}{2}({\bm C} - {\bm I})$ as follows:
\begin{equation}
\label{E7}
\begin{split}
\ \ \ W = \frac{\lambda + 2 \mu}{2} J^2_1 - 2 \mu J_2 + \frac{l +2 m}{3} J^3_1 - 2 m J_1 J_2 + n J_3
\end{split}
\end{equation}
where
\begin{eqnarray} \label{E8}
J_1 = \textrm{tr} ({\bm E}), \ \ \ \
J_2 = \frac{J^2_1 - \textrm{tr} ({\bm E}^2)}{2}, \ \ \ \
J_3 = \textrm{det} ({\bm E})
\end{eqnarray}
where `tr()' denotes the trace of a square matrix and $\lambda$, $\mu$, $l$, $m$ and $n$ are material constants (later assumed identical for all the subcells and cells associated with the material in question, and with $\lambda$ and $\mu$ corresponding to the initial, `ambient', linear-elastic behavior). In the examples shown in the applications section, the material parameters employed for the two composite-constituent materials obeying the Murnaghan model were taken from \cite{Chen}.
The resultant first Piola-Kirchhoff stress tensor is denoted by ${\bm T}^{MUR}$, and its components can be represented in the form:
\begin{eqnarray} \label{E9}
\ \ \ \ \ \ \ \ \ \ \ \ \ T^{MUR}_{ij} = \lambda E_{pp} \delta_{ij} + 2 \mu E_{ij} + T^{'}_{ij}
\end{eqnarray}
where $T^{'}_{ij}$ are the additional terms.
The structure of the components of the Cauchy-Green strain tensor $E_{ij} = \left(u_{i,j} + u_{j,i} + u_{k,i} u_{k,j} \right) / 2$
can be utilized to represent the stress tensor $T^{MUR}_{ij}$ as follows:
\begin{eqnarray} \label{E10}
\ \ \ \ \ \ \ T^{MUR}_{ij} = \lambda u_{p,p} \delta_{ij} + \mu (u_{i,j} + u_{j,i}) + T^{NL}_{ij}
\end{eqnarray}
where $T^{NL}_{ij}$ are the components of the (added) nonlinear terms, which can be obtained by subtracting the linear terms from the full expression derivable from the strain energy function.
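As a concrete illustration of the two constitutive models above (a minimal numerical sketch, not the implementation used in this work), the strain energies of Eqs. (\ref{E6}) and (\ref{E7})-(\ref{E8}) can be evaluated for a given deformation gradient, and the corresponding stress obtained by central finite differences; with the convention of Eq. (\ref{E4}), ${\bm T}={\bm S}{\bm F}^{\top}$ is the transpose of $\partial W/\partial{\bm F}$. The Mooney-Rivlin constants are those quoted above, whereas the Murnaghan constants in the fragment are placeholders (the actual values used in the applications are those of \cite{Chen}).
\begin{verbatim}
import numpy as np

def W_mooney_rivlin(F, C1=0.3, C2=0.1, kappa=3.0):   # Mooney-Rivlin, MPa
    C = F.T @ F
    I1, I3 = np.trace(C), np.linalg.det(C)
    I2 = 0.5 * (I1**2 - np.trace(C @ C))
    J = np.linalg.det(F)
    I1h, I2h = I1 * I3**(-1.0/3.0), I2 * I3**(-2.0/3.0)
    return C1*(I1h - 3.0) + C2*(I2h - 3.0) + 0.5*kappa*(J - 1.0)**2

def W_murnaghan(F, lam, mu, l, m, n):                 # Murnaghan energy
    E = 0.5 * (F.T @ F - np.eye(3))
    J1 = np.trace(E)
    J2 = 0.5 * (J1**2 - np.trace(E @ E))
    J3 = np.linalg.det(E)
    return (0.5*(lam + 2.0*mu)*J1**2 - 2.0*mu*J2
            + (l + 2.0*m)/3.0*J1**3 - 2.0*m*J1*J2 + n*J3)

def first_pk(W, F, h=1e-7):
    # dW/dF by central differences; the paper's T = S F^T is its transpose
    P = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            Fp, Fm = F.copy(), F.copy()
            Fp[i, j] += h
            Fm[i, j] -= h
            P[i, j] = (W(Fp) - W(Fm)) / (2.0*h)
    return P.T

F = np.diag([1.05, 1.0, 1.0])   # simple uniaxial stretch as a test case
print(first_pk(W_mooney_rivlin, F))
# placeholder Murnaghan constants (illustrative only, consistent units)
print(first_pk(lambda G: W_murnaghan(G, lam=3.0, mu=1.0,
                                     l=-10.0, m=-5.0, n=-3.0), F))
\end{verbatim}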
As will be discussed in the following, in the presence of localized damage it is advantageous to represent the constitutive equation of the hyperelastic material as follows:
\begin{eqnarray} \label{E11}
\ \ \ \ \ \ \ T_{ij} = \lambda u_{p,p} \delta_{ij} + \mu (u_{i,j} + u_{j,i}) - T^{e}_{ij}
\end{eqnarray}
where $T^{e}_{ij}$ denote the components of the eigenstresses, given for the Mooney-Rivlin material by
\begin{eqnarray} \label{E12}
\ \ \ T^{e}_{ij} = \lambda u_{p,p} \delta_{ij} + \mu (u_{i,j} + u_{j,i}) - (1 -D) T^{MR}_{ij}
\end{eqnarray}
In these equations, $u_i$ denote the components of the displacements, $\delta_{ij}$ denote the components of the Kronecker delta and $\lambda$ and $\mu$ are the Lam\'e constants of the material in the small strain limit. In Eq. (\ref{E12}), $D$ is a damage parameter (a binary field taking the values of either 0 -- undamaged region -- or 1 -- damaged region -- assumed to be localized in space in the following), such that:
\begin{equation}
\label{E13}
\ \ \ \ \ \ \ \ \ \ \ \ \ T_{ij} =\begin{cases} T^{MR}_{ij} \ \ \textrm{for} \ \ D = 0 \\
0 \ \ \ \ \ \ \ \textrm{for} \ \ D = 1
\end{cases}
\end{equation}
Thus, the representation of the stress components by Eq. (\ref{E11}) in conjunction with Eq. (\ref{E12}), provides the desired requirement that in the region of a crack or a hole, the stresses are zero.
The stress tensor of the Murnaghan material that includes damage is also given by Eq. (\ref{E11}), provided that the eigenstress tensor is defined in this case by:
\begin{eqnarray} \label{E14}
\ \ \ \ \ \ \ T^{e}_{ij} = D \left( \lambda u_{p,p} \delta_{ij} + \mu( u_{i,j}+u_{j,i}) \right) - (1 -D) T^{NL}_{ij}
\end{eqnarray}
It can be easily verified that
\begin{equation}
\label{E15}
\ \ \ \ \ \ \ \ \ T_{ij} =\begin{cases} T^{MUR}_{ij} \ \ \textrm{for} \ \ D = 0 \\
0 \ \ \ \ \ \ \ \ \ \textrm{for} \ \ D = 1
\end{cases}
\end{equation}
\section{Method of Solution}
\label{sec:3}
Far away from the perturbed region within which a crack or a cavity exists, the composite's behavior is governed by its (undamaged) global response resulting from homogenization.
Presently, the finite strain HFGMC, \cite{AAB}, is employed, which establishes the following macroscopic incremental constitutive equation:
\begin{eqnarray} \label{M1}
\ \ \ \ \ \ \Delta \bar {\bm T} = {\bm R}^{*} : \Delta \bar {\bm F}
\end{eqnarray}
where $\bar {\bm T}$ and $\bar {\bm F}$ are the global (average) first Piola-Kirchhoff stress and deformation gradient tensors, respectively, and ${\bm R}^{*}$ is the effective fourth-order tangent tensor.
Consequently, it is possible to integrate Eq. (\ref{M1}) and establish the composite's stress-deformation (i.e. $\bar {\bm T} - \bar {\bm F}$) response up to any desired externally applied loading level.
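Schematically, this incremental integration can be organized as in the following Python fragment, where \texttt{effective\_tangent} is a hypothetical placeholder for the finite strain HFGMC evaluation of ${\bm R}^{*}$ at the current average deformation gradient; the fragment only illustrates the load-stepping structure implied by Eq. (\ref{M1}).
\begin{verbatim}
import numpy as np

def integrate_response(effective_tangent, dF_total, n_steps=100):
    # incremental integration of the macroscopic relation dT = R* : dF
    F_bar = np.eye(3)                 # average deformation gradient
    T_bar = np.zeros((3, 3))          # average 1st Piola-Kirchhoff stress
    dF = dF_total / n_steps           # far-field increment per step
    history = []
    for _ in range(n_steps):
        R = effective_tangent(F_bar)  # 4th-order tangent, shape (3,3,3,3)
        T_bar = T_bar + np.einsum('ijkl,kl->ij', R, dF)
        F_bar = F_bar + dF
        history.append((F_bar.copy(), T_bar.copy()))
    return history
\end{verbatim}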
According to the representative cell method, \cite{RN97}, which is presently generalized for the analysis of hyperelastic composites,
a rectangular domain $-H \le X_2 \le H$, $-L \le X_3 \le L$ of the composite is considered which includes the perturbed region.
Although this region includes the localized damage, it is assumed that the region is sufficiently extensive relative to the damaged zone, such that the elastic field at its boundaries
is not influenced by the existence of the localized damage. This assumption is standard for the method (see \cite{AR12}). The far-field stress, as obtained for a given far-field strain by the HFGMC homogenization is insensitive to sufficiently localized damage -- in contrast to the exact stress and strain profiles, which are of the main interest in the analysis. In the examples provided in the following, the elastic fields \emph{at the domain boundaries} are verified to be sufficiently close to those obtained for undamaged domains, with relative error of the order of that of the spatial discretization.
Consequently, the boundary conditions that are applied on $X_2=\pm H$ and $X_3=\pm L$ are referred to as the far-field boundary conditions.
This rectangular region is divided into $(2 M_2 +1) \times (2 M_3 +1)$ cells, see Fig. \ref{Figure1}(b) for $M_2=M_3=2$.
Every cell is labeled by $(K_2,K_3)$ with $K_2=-M_2,...,M_2$ and $K_3=-M_3,...,M_3$.
In each cell, local coordinates $(X^{'}_2, X^{'}_3)$ are introduced whose origins are located at the cell center, see Fig. \ref{Figure1}(c). The equilibrium equations in Eq. (\ref{E5}) of the material within the cell $(K_2,K_3)$ take the form
\begin{eqnarray} \label{M2}
\ \ \ \ \ \ T^{(K_2,K_3)}_{kj,k} = {0}, \ \ \ \ j= 1,2,3 \ ; \ \ k=2,3
\end{eqnarray}
The constitutive equation in the cell, Eq. (\ref{E11}), can be written as
\begin{equation}
\label{M3}
\begin{split}
T^{(K_2,K_3)}_{ij} = \lambda u^{(K_2,K_3)}_{p,p} \delta_{ij} + \mu (u^{(K_2,K_3)}_{i,j} + u^{(K_2,K_3)}_{j,i}) - T^{e(K_2,K_3)}_{ij}
\end{split}
\end{equation}
where the eigenstress components are given by
\begin{equation}
\label{M4}
\begin{split}
T^{e(K_2,K_3)}_{ij} = \lambda u^{(K_2,K_3)}_{p,p} \delta_{ij} + \mu (u^{(K_2,K_3)}_{i,j} + u^{(K_2,K_3)}_{j,i}) - (1 -D) T^{\text{MR}(K_2,K_3)}_{ij}
\end{split}
\end{equation}
for a Mooney-Rivlin material, and
\begin{equation}
\label{M41}
\begin{split}
T^{e(K_2,K_3)}_{ij} = D \left[ \lambda u^{(K_2,K_3)}_{p,p} \delta_{ij} \right. \left.+ \mu (u^{(K_2,K_3)}_{i,j}+ u^{(K_2,K_3)}_{j,i}) \right] - (1 -D) T^{NL(K_2,K_3)}_{ij}
\end{split}
\end{equation}
for a Murnaghan material. For a linearly elastic material, on the other hand, the eigenstress components can be verified to take the form
\begin{equation}
\label{M42}
\begin{split}
T^{e(K_2,K_3)}_{ij} = D \left[ \lambda u^{(K_2,K_3)}_{p,p} \delta_{ij} + \mu (u^{(K_2,K_3)}_{i,j} + u^{(K_2,K_3)}_{j,i} ) \right]
\end{split}
\end{equation}
The continuity of the tractions acting on the $X_2$ and $X_3$ planes requires that
\begin{equation}
\label{M5}
\begin{split}
\left[ {T}_{2j} ( h, X^{'}_3) \right]^{(K_2, K_3)} -
\left [{T}_{2j} (-h, X^{'}_3 ) \right]^{(K_2 +1, K_3)}= 0,
K_2=-M_2,...,M_2-1, \ K_3=-M_3,...,M_3
\end{split}
\end{equation}
\begin{equation}
\label{M6}
\begin{split}
\left[ {T}_{3j} (X^{'}_2 , l) \right]^{(K_2, K_3)} -
\left[ {T}_{3j} (X^{'}_2 , -l) \right]^{(K_2, K_3+1)} = 0,
K_2=-M_2,...,M_2,\ K_3=-M_3,...,M_3-1
\end{split}
\end{equation}
The continuity of displacements between adjacent cells should be imposed, which requires that
\begin{equation}
\label{M7}
\begin{split}
\left[ {u}_j ( h, X^{'}_3) \right]^{(K_2, K_3)} -
\left[ {u}_j (-h, X^{'}_3 ) \right]^{(K_2 +1, K_3)} = 0,
K_2=-M_2,...,M_2-1,\ K_3=-M_3,...,M_3
\end{split}
\end{equation}
\begin{equation}
\label{M8}
\begin{split}
\left[ {u}_j (X^{'}_2 , l) \right]^{(K_2, K_3)} -
\left[ {u}_j (X^{'}_2 , -l) \right]^{(K_2, K_3+1)} = 0,
K_2=-M_2,...,M_2,\ \ \ \ \ K_3=-M_3,...,M_3-1
\end{split}
\end{equation}
Next, the far-field boundary conditions must be imposed at the opposite sides $X_2 = \pm H$, $X_3 = \pm L$ of the rectangular domain as shown in Fig. \ref{Figure1}(b).
The far-field tractions at the boundaries are homogenized quantities, insensitive to sufficiently localized damage (limited to a single cell or, at most, several cells), for sufficiently many cells. In addition, \emph{disregarding the damage}, the assumed rectangular array of cells is doubly-periodic, and therefore definitely has at least two symmetry planes. This implies opposite tractions on opposite surfaces, or identical stresses there, and also identical strains, or displacements differing by constants:
\begin{equation}
\label{M9}
\begin{split}
\left[ {T}_{2j} (h, X^{'}_3) \right]^{(M_2,q)} - \left[ {T}_{2j} (-h, X^{'}_3) \right]^{(-M_2,q)} = 0, \ q=-M_3,...,M_3
\end{split}
\end{equation}
\begin{equation}
\label{M10}
\begin{split}
\ \ \ \left[ {T}_{3j} (X^{'}_2,l) \right]^{(p, M_3)} - \left[ {T}_{3j} (X^{'}_2,-l) \right]^{(p, -M_3)} = 0, \ p=-M_2,...,M_2
\end{split}
\end{equation}
The displacements ($j=1,2,3$) at the opposite sides of the rectangular domain, in turn, satisfy
\begin{equation}
\label{M11}
\begin{split}
\left[ {u}_j (h, X^{'}_3) \right]^{(M_2,q)} - \left[ {u}_j (-h, X^{'}_3) \right]^{(-M_2,q)} = {\Delta}_{2j}, \ q=-M_3,...,M_3
\end{split}
\end{equation}
\begin{equation}
\label{M12}
\begin{split}
\left[ {u}_j (X^{'}_2,l) \right]^{(p, M_3)} - \left[ {u}_j (X^{'}_2,-l) \right]^{(p, -M_3)} = {\Delta}_{3j}, \ p=-M_2,...,M_2
\end{split}
\end{equation}
where ${\Delta}_{2j}$ and ${\Delta}_{3j}$ denote the vector of the far-field displacement differences, and are given by
\begin{equation}
\label{M13}
\begin{split}
{\Delta}_{21} = 2 H {\bar E}_{21} , \
{\Delta}_{22} = 2 H \left( \sqrt{1+2 {\bar E}_{22}}-1 \right) , \
{\Delta}_{23} = 2 H {\bar E}_{23} \\
{\Delta}_{31} = 2 L {\bar E}_{31} , \
{\Delta}_{32} = 2 L {\bar E}_{32} , \
{\Delta}_{33} = 2 L \left( \sqrt{1+2 {\bar E}_{33}}-1 \right)
\end{split}
\end{equation}
(calculated at the \emph{material points} referred to in the boundary conditions in Eqs. (\ref{M11}) and (\ref{M12})),
$\bar {E}_{2j}$ and $\bar {E}_{3j}$ being the average (Lagrangian) strains of the unperturbed periodic composite, which can be calculated from the (current) average deformation gradient $\bar {\bm F}$ (and assigned back to their corresponding material points). The latter can be obtained from Eq. (\ref{M1}) by applying the finite strain HFGMC for a specified far-field loading.
A clarification regarding Eqs. (\ref{M11})-(\ref{M13}) is in order. It is assumed that the $X_1'$ direction is `prismatic', there is no discretization and calculation along it, and thus in Eqs. (\ref{M11})-(\ref{M12}), one has only ${\Delta}_{2j}$ and ${\Delta}_{3j}$ (but not $\Delta_{1j}$). As for Eq. (\ref{M13}), the difference of the displacement in the direction $X_1'$ between two points with different $X_2'$ or $X_3'$ values can be nonzero in the sense that there may be displacement in the $X_1'$ direction in the composite, but, much like the displacements in the other two directions, it is uniform along $X_1'$. Thus the term `plane deformation' only means that one direction ($X_1'$) is `prismatic' (all the elastic fields are independent of $X_1'$).
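As a numerical illustration of Eq. (\ref{M13}), a prescribed far-field strain $\bar{E}_{22}=0.05$ yields $\Delta_{22}=2H\big(\sqrt{1.1}-1\big)\approx 0.0976\,H$, slightly smaller than the value $2H\bar{E}_{22}=0.1\,H$ that a geometrically linear treatment would prescribe.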
\subsection{Analysis in the Fourier transform space}
\label{sec:3.1}
Thus far, the analysis performed in the real space was described. In the following, the double discrete Fourier transform is applied on the governing equations, constitutive relations, interfacial and boundary conditions.
For the displacement vector, for example, this transform is defined as follows:
\begin{equation}
\label{TR}
\begin{split}
\hat {u}_j (X^{'}_2, X^{'}_3, \phi_p, \phi_q) =
\sum^{M_2}_{K_2 = -M_2} \sum^{M_3}_{K_3 = -M_3} {u}^{(K_2, K_3)}_j (X^{'}_2, X^{'}_3)
e^{ i (K_2 \phi_p + K_3 \phi_q ) } , \ j=1,2,3
\end{split}
\end{equation}
with $\phi_p = \frac{2 \pi p}{2 M_2 +1}, p=0, \pm 1, \pm 2, ..., \pm M_2,
\phi_q = \frac{2 \pi q}{2 M_3 +1}, q=0, \pm 1, \pm 2, ..., \pm M_3 $.
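As an illustration of the transform in Eq. (\ref{TR}) and of its inversion (a direct-summation sketch in Python, not the implementation employed in this work), one may write the following; the inverse uses the standard normalization by $(2M_2+1)(2M_3+1)$, and each harmonic pair $(p,q)$ can clearly be processed independently of the others.
\begin{verbatim}
import numpy as np

def forward_cell_dft(u, M2, M3):
    # u[K2, K3] indexed by K2 = -M2..M2, K3 = -M3..M3 (stored 0-based)
    N2, N3 = 2*M2 + 1, 2*M3 + 1
    u_hat = np.zeros((N2, N3), dtype=complex)
    K2 = np.arange(-M2, M2 + 1)[:, None]
    K3 = np.arange(-M3, M3 + 1)[None, :]
    for ip, p in enumerate(range(-M2, M2 + 1)):
        for iq, q in enumerate(range(-M3, M3 + 1)):
            phi_p, phi_q = 2*np.pi*p/N2, 2*np.pi*q/N3
            u_hat[ip, iq] = np.sum(u * np.exp(1j*(K2*phi_p + K3*phi_q)))
    return u_hat

def inverse_cell_dft(u_hat, M2, M3):
    N2, N3 = 2*M2 + 1, 2*M3 + 1
    u = np.zeros((N2, N3), dtype=complex)
    p = np.arange(-M2, M2 + 1)[:, None]
    q = np.arange(-M3, M3 + 1)[None, :]
    for i2, K2 in enumerate(range(-M2, M2 + 1)):
        for i3, K3 in enumerate(range(-M3, M3 + 1)):
            phase = np.exp(-1j*(K2*2*np.pi*p/N2 + K3*2*np.pi*q/N3))
            u[i2, i3] = np.sum(u_hat * phase) / (N2 * N3)
    return u

# round-trip check on random data for M2 = M3 = 2
rng = np.random.default_rng(0)
u = rng.standard_normal((5, 5))
print(np.allclose(inverse_cell_dft(forward_cell_dft(u, 2, 2), 2, 2).real, u))
\end{verbatim}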
The application of this transform on the boundary-value problem in Eqs. (\ref{M2})-(\ref{M12})
for the rectangular domain $-H< X_2< H$, $-L <X_3 <L$, divided into $(2M_2+1) \times (2M_3+1)$ cells,
converts it to the problem for the single representative cell $-h < X'_2 < h$, $-l < X'_3 < l$ with respect to the complex-valued transforms.
For each couple of Fourier harmonics indices, $\lbrace p,q \rbrace$, a separate linear mechanical problem emerges, with a right-hand side known for the current iteration. Each of these separate problems can be solved independently. They can be solved consecutively, by using the same physical memory storage domain. Alternatively, if memory supplies are sufficient and it is computational time which is of the essence, then parallel computing can be employed, and all those Fourier space `single-cell` problems can be solved in parallel, on different processor cores or different computers altogether, assuming the correct parallelization code is written. Continuing with the description of the algorithm, the field equations obtained from the equilibrium and constitutive equations take the form
\begin{eqnarray} \label{T2}
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \hat {T}_{kj,k} = {0}, \ \ \ \ j= 1,2,3 \ ; \ \ k=2,3
\end{eqnarray}
\begin{eqnarray} \label{T3}
\ \ \ \ \hat {T}_{jk} = \lambda {\hat u}_{m,m} \delta_{jk} + \mu ({\hat u}_{j,k} + {\hat u}_{k,j}) - {\hat T}^{e}_{jk}
\end{eqnarray}
where
\begin{equation}
\label{T4}
\begin{split}
\ \ \ \hat {T}^{e}_{jk} = \sum^{M_2}_{K_2=-M_2} \sum^{M_3}_{K_3=-M_3} T^{e(K_2,K_3)}_{jk} e ^{ i(K_2 \phi_p + K_3 \phi_q) }
\end{split}
\end{equation}
and the components of the eigenstresses $T^{e(K_2,K_3)}_{jk}$ are given by Eqs. (\ref{M4}), (\ref{M41}) and (\ref{M42}) for Mooney-Rivlin, Murnaghan and linearly elastic materials, respectively.
The continuity of tractions and displacements between adjacent cells, Eqs. (\ref{M5})-(\ref{M8}),
as well as the conditions which relate these variables at the opposite sides of the rectangle, Eqs. (\ref{M9})-(\ref{M12}), take the Bloch form:
\begin{equation}
\label{T5}
\begin{split}
{\hat T}_{2j} (h, X^{'}_3) = e^{-i \phi_p} \hat {T}_{2j}(-h, X^{'}_3), \ -l \le X^{'}_3 \le l, \ j=1,2,3
\end{split}
\end{equation}
\begin{equation}
\label{T6}
\begin{split}
\hat {u}_j (h, X^{'}_3) = e^{-i \phi_p} \hat {u}_j (-h, X^{'}_3) + \delta_{0,K_3} (2 M_3+1) {\Delta}_{2j} e^{i \phi_p M_2}, -l \le X^{'}_3 \le l
\end{split}
\end{equation}
\begin{equation}
\label{T7}
\begin{split}
\ \ \ \ \ \ \ \hat {T}_{3j} (X^{'}_2,l) = e^{-i \phi_q} \hat {T}_{3j}(X^{'}_2,-l), \ -h \le X^{'}_2 \le h, \ j=1,2,3
\end{split}
\end{equation}
\begin{equation}
\label{T8}
\begin{split}
\hat {u}_j (X^{'}_2,l) = e^{-i \phi_q} \hat {u}_j(X^{'}_2,-l) + \delta_{0,K_2} (2 M_2+1) {\Delta}_{3j} e^{i \phi_q M_3}, -h \le X^{'}_2 \le h
\end{split}
\end{equation}
where $p = 0,...,\pm M_2$, $q = 0,..., \pm M_3$.
The boundary-value problem in Eqs. (\ref{T2})-(\ref{T8}) is solved using the high-order theory (HOT), \cite{AAB}, by dividing the cell domain $-h\le X^{'}_2 \le h$, $-l \le X^{'}_3 \le l$
into a rectangular array of $N_{\beta}\times N_{\gamma}$ subcells (see Fig. \ref{Figure1}(c)). In the framework of the high-order theory, the governing equations, interfacial and boundary conditions are imposed in the integral (averaged) sense. Specifically, traction continuity is imposed in the face-averaged sense, which implies forces continuity for the (Lagrangian) subcells, and the same goes for the displacements (which is equivalent to continuity of center-of-mass displacements of the staggered-mesh subcells -- up to small error related to nonuniform density which is negligible for moderately large strains). Regarding the equilibrium equations -- here integration is performed over the subcell volume, which is equivalent to requiring local balance of surface forces on each subcell. This volume integration cancels out the gradient correction of the stress distribution in the subcell and also the contribution of the eigenstress. The explanation for this is given in the following.
The HOT assumes a parabolic displacement field and a linear strain field in a subcell. As parabolic corrections to the strain are neglected, the subcells have to be small enough for the strain gradient in a subcell to be a small correction to the subcell-average of the strain.
For elastic calculations, the stress should be an algebraic (or transcendental) function of the strain at every point. Since the strain is assumed linear in a subcell, it is necessary that the stress be linear in the coordinates inside the subcell as well. For linear elasticity this is trivial. For nonlinear elasticity, the stress appears formally nonlinear in the coordinates. However, the subcells should be small enough for the stress to be effectively linear in the coordinates, and for parabolic and higher order corrections to be negligible. Regarding the method of solution -- the equilibrium equations and the continuity conditions for the subcells are solved in the Fourier space. The right-hand sides of those equations contain the Fourier transforms of the eigenstresses. Formally, both the equilibrium equations and the continuity conditions for the stresses are written in terms of the Fourier transforms of the total Cauchy (or first Piola-Kirchhoff) stress, which is comprised of the linear stress (proportional to the strain) and the eigenstress' Fourier transform.
The Fourier transform of the eigenstress at the undamaged cells contains only a purely nonlinear contribution. For moderate strain (say, no more than 10 to 20 percent, which is a reasonable upper limit of elastic strain beyond which inelasticity may emerge for a material described by the Mooney-Rivlin model; for a material reasonably described by the Murnaghan equation this estimate, unless purely volumetric strain is assumed, is most probably a highly overestimated, non-tight upper bound), the purely nonlinear correction (whose leading term is quadratic in the strain) changes the result by only a few percent and is thus a small correction. In the damaged cells, the eigenstress' Fourier transform is linear in the strain, but it relates to the linear part of the Fourier transform of the stress (the left-hand side of the equations) as the relative area of the damaged cells. For a single damaged cell and as few as 5 cells in each direction in total, the corresponding correction is no more than a few (say 4) percent. In total, the Fourier transform of the eigenstress is a small correction to the Fourier transform of the linear-elastic part of the stress.
Next, it should be acknowledged that just like the total stress, also the correction introduced by the eigenstress should change linearly with the coordinates inside a subcell (since it is a difference between a coordinate-linear total stress and a coordinate-linear linear-elastic part).
Thus, for a proper choice of (small enough) subcells, the (Fourier transform of the) eigenstress would contain a constant part (the subcell average) and a constant-gradient part, linear in the coordinates inside a subcell. For small enough subcells, the coordinate-linear correction is small relative to the subcell average, both for the linear-elastic part of the stress (and hence for the total stress) and for the eigenstress.
Therefore, one can identify three orders of magnitude comprising the Fourier transform of the total stress. The zeroth order is the subcell-average of the linear-elastic part (the one proportional to the Fourier transform of the strain -- or the symmetric part of the displacement gradient).
The first-order correction to this part consists of two terms, both proportional to one small parameter. The first term is the gradient correction to the linear-elastic stress, which becomes relatively small for small enough subcells. The second term is the subcell-averaged part of the (Fourier transform of the) eigenstress, which has two parts, one proportional to the (small) square of the effective total strain, and one proportional to the small relative area of the damaged region.
The second-order correction to the Fourier transform of the total stress is the gradient part of the eigenstress, which is proportional to the product of two small terms, one vanishing with decreasing subcell size (relative, say, to some geometric or physical macroscopic length scale), and the other being small for reasonably moderate strain (say 10 percent) and a reasonable number of cells (even 5 in each direction), under the assumption of damage localized on no more than a few cells.
To conclude, the gradient part of the Fourier transform of the eigenstress is a higher-order (second-order) correction that can be neglected for a sufficient number of cells and subcells and for not too much damage and strain. Consequently, the \emph{eigenstresses} can be viewed as uniform within a subcell.
As for the subcell-averaged part of the eigenstress -- this contribution cancels out from the equilibrium equation, but remains relevant for the continuity equations. Therefore the equilibrium equations remain linear in the displacements, and can be used for static condensation, much as is the case in \cite{AV15}. On the other hand, the eigenstresses, uniform within each subcell, do make their contribution -- in the stress continuity equations.
\subsection{Inversion of the Fourier transform}
\label{sec:3.2}
Once the solution in the transform domain has been established, the real-space elastic field
can be readily determined at any point in the desired cell
$(K_2,K_3)$ of the considered rectangular region $-H \le X_2 \le H$, $-L \le X_3 \le L$
by the inverse transform formula, which for the displacements, for example, reads:
\begin{equation}
\label{T19}
\begin{split}
{u}^{(K_2, K_3)}_j (X^{'}_2, X^{'}_3) = \frac{1}{(2 M_2+1)(2 M_3+1)} \sum^{M_2}_{p=-M_2} \sum^{M_3}_{q=-M_3} \hat {u}_j (X^{'}_2, X^{'}_3, \phi_p, \phi_q)
e^{ - i (K_2 \phi_p + K_3 \phi_q ) }
\end{split}
\end{equation}
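For concreteness, this inversion step can be sketched in a few lines of Python/NumPy; the array layout and the discrete phases $\phi_p=2\pi p/(2M_2+1)$, $\phi_q=2\pi q/(2M_3+1)$ are assumptions made only for the sake of the sketch:
\begin{verbatim}
import numpy as np

def inverse_cell_transform(u_hat, M2, M3):
    """Recover the real-space displacements u_j^(K2,K3) at a fixed local
    point (X2', X3') from the transform-domain solution, following Eq. (T19).
    u_hat has shape (2*M2+1, 2*M3+1, 3); the first two axes run over
    p = -M2..M2 and q = -M3..M3 (an assumed storage convention)."""
    N2, N3 = 2 * M2 + 1, 2 * M3 + 1
    p = np.arange(-M2, M2 + 1)
    q = np.arange(-M3, M3 + 1)
    phi_p = 2.0 * np.pi * p / N2          # assumed discrete phase angles
    phi_q = 2.0 * np.pi * q / N3
    u = np.zeros((N2, N3, 3), dtype=complex)
    for i2, K2 in enumerate(p):           # loop over cells (K2, K3)
        for i3, K3 in enumerate(q):
            phase = np.exp(-1j * (K2 * phi_p[:, None] + K3 * phi_q[None, :]))
            u[i2, i3] = np.tensordot(phase, u_hat, axes=([0, 1], [0, 1])) / (N2 * N3)
    return u.real                         # imaginary parts vanish to round-off
\end{verbatim}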
In the application of the present analysis, the Fourier-transformed eigenstresses $\hat {T}^{e}_{jk}; \ j,k = 1,2,3;$ in Eq. (\ref{T3}) are not known.
Therefore, an iterative procedure similar to the one that has been applied in the linear case, \cite{AR12}, has to be employed as follows:
\begin{enumerate}
\item Start the iterative procedure by assuming that $\hat {T}^{e}_{jk} = 0$ and solve the boundary-value problem of Eqs. (\ref{T2})-(\ref{T8}) in the transform domain.
\item Apply the inverse transform formula to compute the displacement and stress fields. The latter can be used to compute the (current) eigenstress components ${T}^{e(K_2,K_3)}_{jk}$ in the real space.
\item Obtain an improved estimate, $\tilde{T}^{e(K_2,K_3)}_{jk}$, of the real-space eigenstress based on the value computed in stage 2, according to a chosen update (convergence) criterion.
\item Compute the Fourier transform of this improved estimate, $\tilde{T}^{e(K_2,K_3)}_{jk}$, as obtained in stage 3, by employing Eq. (\ref{T4}).
\item Solve (again) the boundary-value problem equations in the transform domain, this time employing the just-computed values of $\hat{\tilde {T}}^{e}_{jk}$; $j,k = 1,2,3$.
\item Repeat the iterative process until satisfactory convergence of the real-space eigenstress components is obtained.
\end{enumerate}
For $\tilde{T}^{e(K_2,K_3)}_{jk}=T^{e(K_2,K_3)}_{jk}$, this procedure can be identified with the Banach contracting mapping method (e.g., \cite{Rall}).
In linear problems of composites with localized damage, \cite{AR12}, this procedure has converged for every magnitude of the applied external loading.
In the presently addressed hyperelastic problem, however, convergence of this iterative procedure is obtained only for small magnitudes of the far-field loading, for which the material response is essentially linear.
Therefore, the aforementioned procedure is utilized only at the first loading increment, and the corresponding convergent solution is used as an initial guess for an iterative procedure which has $\tilde{T}^{e(K_2,K_3)}_{jk}\neq T^{e(K_2,K_3)}_{jk}$, and employs a nonlinear solver providing $\tilde{T}^{e(K_2,K_3)}_{jk}( T^{e(K_2,K_3)}_{jk})$, as elaborated on in subsection 3.4.
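For orientation, the basic loop (stages 1--6) may be sketched as follows; all helper routines are hypothetical placeholders for the operations referenced in the comments, and the plain update corresponds to the Banach-type choice $\tilde{T}^{e}=T^{e}$:
\begin{verbatim}
import numpy as np

def fixed_point_iteration(T_e0, far_field, tol=1e-8, max_iter=200):
    """Banach-type iteration for the real-space eigenstresses (stages 1-6).
    T_e0 is the initial eigenstress array (zeros at the very first call);
    the four helper routines are hypothetical placeholders."""
    T_e = np.asarray(T_e0, dtype=float).copy()
    for _ in range(max_iter):
        T_e_hat = forward_transform(T_e)                          # Eq. (T4)
        fields_hat = solve_transform_problem(T_e_hat, far_field)  # Eqs. (T2)-(T8)
        u, T = inverse_transform(fields_hat)                      # Eq. (T19)
        T_new = eigenstress_from_fields(u)                        # Eqs. (M4)/(M41)/(M42)
        if np.max(np.abs(T_new - T_e)) < tol:
            return T_new, u, T
        T_e = T_new                                               # plain (Banach) update
    raise RuntimeError("no convergence; switch to the nonlinear solver of"
                       " the subsection on the convergent iterative solution")
\end{verbatim}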
\subsection{A brief description of the HOT}
\label{sec:3.3}
In the framework of the HOT, the single representative cell in the Fourier transform domain is divided into $N_{\beta}$ and $N_{\gamma}$ subcells in the $X^{'}_2$ and $X^{'}_3$ directions,
respectively, see Fig. \ref{Figure1}(c).
Each subcell is labeled by the indices $(\beta \gamma)$ with $\beta=1,...,N_{\beta}$ and $\gamma=1,...,N_{\gamma}$,
and may contain a distinct homogeneous material. The initial dimensions of subcell $(\beta \gamma)$ in the $X^{'}_2$ and $X^{'}_3$ directions
are denoted by $h_{\beta}$ and $l_{\gamma}$, respectively. A local initial coordinates system
$(\bar X^{(\beta)}_2, \bar X^{(\gamma)}_3)$ is introduced in each subcell with its origin located at the subcell center. The finite-strain higher-order theory is based on the following quadratic expansions of the displacement vector $\hat {\bm u}^{(\beta \gamma)}$
in subcell $(\beta \gamma)$:
\begin{equation}
\label{H1}
\begin{split}
\hat {\bm u}^{(\beta \gamma)} =
\hat {\bm W}^{(\beta \gamma)}_{(00)} +
\bar{X}^{(\beta )}_{2} \hat {\bm W}^{(\beta \gamma)}_{(10)}+
\bar{X}^{(\gamma)}_{3} \hat {\bm W}^{(\beta \gamma)}_{(01)} + \\
+ \frac{1}{2} \left(3 \bar{X}^{(\beta )2}_2 -\frac{h^2_{\beta }}{4} \right) \hat {\bm W}^{(\beta \gamma)}_{(20)}
+ \frac{1}{2} \left(3 \bar{X}^{(\gamma)2}_3 -\frac{l^2_{\gamma}}{4} \right) \hat {\bm W}^{(\beta \gamma)}_{(02)}
\end{split}
\end{equation}
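Evaluating this expansion at a local point of a subcell is straightforward; in a minimal sketch (the layout of the microvariable array is an assumption made for illustration only):
\begin{verbatim}
import numpy as np

def subcell_displacement(W, x2bar, x3bar, h_beta, l_gamma):
    """Quadratic expansion of Eq. (H1): W[0..4] hold the microvariables
    W_(00), W_(10), W_(01), W_(20), W_(02), each a 3-vector."""
    return (W[0]
            + x2bar * W[1]
            + x3bar * W[2]
            + 0.5 * (3.0 * x2bar**2 - h_beta**2 / 4.0) * W[3]
            + 0.5 * (3.0 * x3bar**2 - l_gamma**2 / 4.0) * W[4])

# example: with vanishing quadratic microvariables, the value at the
# subcell center (x2bar = x3bar = 0) is simply W_(00)
W = np.zeros((5, 3)); W[0] = [1.0, 0.0, 0.0]
print(subcell_displacement(W, 0.0, 0.0, 0.1, 0.1))
\end{verbatim}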
The unknown coefficients $ \hat {\bm W}^{(\beta \gamma)}_{(mn)}$ are determined, as shown in the following,
from the satisfaction of the equilibrium equations, interfacial and boundary conditions.
In the absence of body forces, the equilibrium equations in the subcell,
expressed in terms of the first Piola-Kirchhoff stress tensor, $\hat {\bm T}^{(\beta \gamma)}$, can be represented in the form
\begin{eqnarray} \label{H2}
\ \ \ \ \ \ \ \ \ \ \ \frac{\partial \hat T^{(\beta \gamma)}_{2j}}{\partial {\bar X}^{(\beta)}_2}
+ \frac{\partial \hat T^{(\beta \gamma)}_{3j}}{\partial {\bar X}^{(\gamma)}_3} = 0, \ \ \ \ \ j=1,2,3
\end{eqnarray}
The surface-averages of the tractions are given by
\begin{equation}
\label{H3}
\begin{split}
\hat {\bm T}^{\pm(\beta \gamma)}_2 = \frac{1}{l_{\gamma}} \int_{ -l_{\gamma} / 2}^{ l_{\gamma} / 2}
\hat {\bm T}^{(\beta \gamma)}_2 \left({\bar X}^{(\beta)}_2 = \pm \frac{ h_{\beta}} {2} \right) \ d {\bar X}^{(\gamma)}_3 \\
\hat {\bm T}^{\pm(\beta \gamma)}_3 = \frac{1}{h_{\beta}} \int_{ -h_{\beta} / 2}^{ h_{\beta} / 2}
\hat {\bm T}^{(\beta \gamma)}_3 \left({\bar X}^{(\gamma)}_3 = \pm \frac{ l_{\gamma}} {2} \right) d {\bar X}^{(\beta)}_2
\end{split}
\end{equation}
and $ \hat {\bm T}^{(\beta \gamma)}_2$, $ \hat {\bm T}^{(\beta \gamma)}_3$ are column vectors defined by
\begin{equation}
\label{H4}
\begin{split}
\hat {\bm T}^{(\beta \gamma)}_2 = \left[ \hat T_{21}, \hat T_{22}, \hat T_{23} \right]^{(\beta \gamma)},
\ \hat {\bm T}^{(\beta \gamma)}_3 = \left[ \hat T_{31}, \hat T_{32}, \hat T_{33} \right]^{(\beta \gamma)}
\end{split}
\end{equation}
In terms of the surface-averages of the tractions, the equilibrium equation (\ref{H2}) reads
\begin{equation}
\label{H5}
\left[ \hat {\bm T}^{+(\beta \gamma)}_2 - \hat {\bm T}^{-(\beta \gamma)}_2 \right]
+\frac{h_{\beta}}{l_{\gamma}} \left[ \hat {\bm T}^{+(\beta \gamma)}_3 - \hat {\bm T}^{-(\beta \gamma)}_3 \right] = 0
\end{equation}
This relation expresses the equilibrium equations imposed in the average sense within subcell $(\beta \gamma)$.
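In code, this averaged balance amounts to a one-line residual per subcell; a minimal sketch (assuming the four face-averaged traction vectors are available as NumPy arrays) is:
\begin{verbatim}
import numpy as np

def averaged_equilibrium_residual(T2_plus, T2_minus, T3_plus, T3_minus,
                                  h_beta, l_gamma):
    """Residual of the surface-averaged equilibrium relation, Eq. (H5);
    it should vanish (to solver tolerance) for every subcell."""
    return (T2_plus - T2_minus) + (h_beta / l_gamma) * (T3_plus - T3_minus)
\end{verbatim}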
The constitutive relations in Eq. (\ref{T3}) can be represented in the form
\begin{eqnarray} \label{CON}
\ \ \ \ \ \ \ \ \ \ \hat T^{(\beta \gamma)}_{jk} = C^{(\beta \gamma)}_{jklm} \hat u^{(\beta \gamma)}_{l,m} - \hat T^{e(\beta \gamma)}_{jk}
\end{eqnarray}
which provides the following expressions for the components of the
surface-averages of the tractions $ \hat {\bm T}^{\pm(\beta \gamma)}_{2}$ and $ \hat {\bm T}^{\pm(\beta \gamma)}_{3}$ (recalling the argumentation given above for the assumption that eigenstresses are uniform across the subcell to sufficient approximation):
\begin{equation}
\label{H6}
\begin{split}
\hat {T}^{\pm(\beta \gamma)}_{2j} = C^{(\beta \gamma)}_{2j12}
\left( \hat W^{(\beta \gamma)}_{1(10)} \pm \frac{3 h_{\beta}}{2} \hat W^{(\beta \gamma)}_{1(20)} \right)
+\\+C^{(\beta \gamma)}_{2j22}
\left( \hat W^{(\beta \gamma)}_{2(10)} \pm \frac{3 h_{\beta}}{2} \hat W^{(\beta \gamma)}_{2(20)} \right)
+C^{(\beta \gamma)}_{2j32}
\left( \hat W^{(\beta \gamma)}_{3(10)} \pm \frac{3 h_{\beta}}{2} \hat W^{(\beta \gamma)}_{3(20)} \right)
+\\+C^{(\beta \gamma)}_{2j13} \hat W^{(\beta \gamma)}_{1(01)}
+ C^{(\beta \gamma)}_{2j23} \hat W^{(\beta \gamma)}_{2(01)}
+ C^{(\beta \gamma)}_{2j33} \hat W^{(\beta \gamma)}_{3(01)} - \hat T^{e}_{2j}, \ j,k,l=1,2,3
\end{split}
\end{equation}
\begin{equation}
\label{H7}
\begin{split}
\hat {T}^{\pm(\beta \gamma)}_{3j} = C^{(\beta \gamma)}_{3j12} \hat W^{(\beta \gamma)}_{1(10)} + C^{(\beta \gamma)}_{3j22} \hat W^{(\beta \gamma)}_{2(10)}
+ C^{(\beta \gamma)}_{3j32} \hat W^{(\beta \gamma)}_{3(10)} +\\
+C^{(\beta \gamma)}_{3j13}
\left( \hat W^{(\beta \gamma)}_{1(01)} \pm \frac{3 l_{\gamma}}{2} \hat W^{(\beta \gamma)}_{1(02)} \right)+
C^{(\beta \gamma)}_{3j23}
\left( \hat W^{(\beta \gamma)}_{2(01)} \pm \frac{3 l_{\gamma}}{2} \hat W^{(\beta \gamma)}_{2(02)} \right) +\\
+C^{(\beta \gamma)}_{3j33}
\left( \hat W^{(\beta \gamma)}_{3(01)} \pm \frac{3 l_{\gamma}}{2} \hat W^{(\beta \gamma)}_{3(02)} \right) - \hat T^{e}_{3j}, \ \ j,k,l=1,2,3
\end{split}
\end{equation}
Substitution of Eqs. (\ref{H6})-(\ref{H7}) into Eq. (\ref{H5}) provides the three relations:
\begin{equation}
\label{H8}
\begin{split}
C^{(\beta \gamma)}_{2j12} \hat W^{(\beta \gamma)}_{1(20)}
+C^{(\beta \gamma)}_{2j22} \hat W^{(\beta \gamma)}_{2(20)}
+C^{(\beta \gamma)}_{2j32} \hat W^{(\beta \gamma)}_{3(20)} \\
+C^{(\beta \gamma)}_{3j13} \hat W^{(\beta \gamma)}_{1(02)}
+ C^{(\beta \gamma)}_{3j23} \hat W^{(\beta \gamma)}_{2(02)}
+ C^{(\beta \gamma)}_{3j33} \hat W^{(\beta \gamma)}_{3(02)} = 0, \ j=1,2,3
\end{split}
\end{equation}
Just like the surface-averaged tractions, the surface-averaged displacements can be defined by
\begin{equation}
\label{H9}
\begin{split}
\hat {\bm u}^{\pm(\beta \gamma)}_2 = \frac{1}{l_{\gamma}} \int_{ -l_{\gamma} / 2}^{ l_{\gamma} / 2}
\hat {\bm u}^{(\beta \gamma)} \left( {\bar X}^{(\beta)}_2 = \pm \frac{ h_{\beta}} {2} \right) \ d {\bar X}^{(\gamma)}_3 \\
\hat {\bm u}^{\pm(\beta \gamma)}_3 = \frac {1}{h_{\beta}} \int_{ -h_{\beta} / 2}^{ h_{\beta} / 2}
\hat {\bm u}^{(\beta \gamma)} \left( {\bar X}^{(\gamma)}_3 = \pm \frac{ l_{\gamma}} {2} \right) \ d {\bar X}^{(\beta)}_2
\end{split}
\end{equation}
These surface-averages $ \hat {\bm u}^{\pm(\beta \gamma)}_i$, $i=2,3$, can be related to the microvariables
$ \hat {\bm W}^{(\beta \gamma)}_{(mn)}$, $(mn)=(00),(10),(01),(20),(02)$, in the expansion in Eq. (\ref{H1}):
\begin{equation}
\label{H10}
\begin{split}
\ \ \hat {\bm u}^{\pm(\beta \gamma)}_2 = \hat {\bm W}^{(\beta \gamma)}_{(00)}
\pm \frac{h_{\beta}}{2} \hat {\bm W}^{(\beta \gamma)}_{(10)}
+ \frac{h^2_{\beta}}{4} \hat {\bm W}^{(\beta \gamma)}_{(20)} , \\
\hat {\bm u}^{\pm(\beta \gamma)}_3 = \hat {\bm W}^{(\beta \gamma)}_{(00)}
\pm \frac{l_{\gamma}}{2} \hat {\bm W}^{(\beta \gamma)}_{(01)}
+ \frac{l^2_{\gamma}}{4} \hat {\bm W}^{(\beta \gamma)}_{(02)}
\end{split}
\end{equation}
Manipulations of Eq. (\ref{H10}) by subtractions and additions yield
\begin{equation}
\label{H11}
\begin{split}
\hat {\bm W}^{(\beta \gamma)}_{(10)} = \frac{1}{h_{\beta}}
\left[ \hat {\bm u}^{+}_2 - \hat {\bm u}^{-}_2 \right]^{(\beta \gamma)}, \ \ \
\hat {\bm W}^{(\beta \gamma)}_{(01)} = \frac{1}{l_{\gamma}}
\left[ \hat {\bm u}^{+}_3 - \hat {\bm u}^{-}_3 \right]^{(\beta \gamma)}
\end{split}
\end{equation}
\begin{equation}
\label{H12}
\begin{split}
\ \ \ \ \ \ \ \hat {\bm W}^{(\beta \gamma)}_{(20)} = \frac{2}{h^2_{\beta}}
\left[ \hat {\bm u}^{+}_2 + \hat {\bm u}^{-}_2 \right]^{(\beta \gamma)}
- \frac{4}{h^2_{\beta}} \hat {\bm W}^{(\beta \gamma)}_{(00)} , \
\ \ \ \ \ \ \hat {\bm W}^{(\beta \gamma)}_{(02)} = \frac{2}{l^2_{\gamma}}
\left[ \hat {\bm u}^{+}_3 + \hat {\bm u}^{-}_3 \right]^{(\beta \gamma)}
- \frac{4}{l^2_{\gamma}} \hat {\bm W}^{(\beta \gamma)}_{(00)}
\end{split}
\end{equation}
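These inversions translate directly into code; a minimal sketch is:
\begin{verbatim}
import numpy as np

def microvariables_from_face_averages(u2p, u2m, u3p, u3m, W00, h_beta, l_gamma):
    """First- and second-order microvariables from the surface-averaged
    displacements, Eqs. (H11)-(H12). u2p/u2m and u3p/u3m are the averages on
    the +/- faces normal to X2' and X3'; W00 is the zeroth microvariable."""
    W10 = (u2p - u2m) / h_beta
    W01 = (u3p - u3m) / l_gamma
    W20 = 2.0 * (u2p + u2m) / h_beta**2 - 4.0 * W00 / h_beta**2
    W02 = 2.0 * (u3p + u3m) / l_gamma**2 - 4.0 * W00 / l_gamma**2
    return W10, W01, W20, W02
\end{verbatim}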
The expressions for $ \hat {\bm W}^{(\beta \gamma)}_{(00)}$ in terms of the surface-averaged displacements can be determined from the equilibrium equation, Eq. (\ref{H5}).
The final form which expresses the equilibrium and constitutive equations is given by
\begin{eqnarray} \label{H13}
\ \ \left\{ \begin{array}{c}
\hat {\bm T}^{\pm}_2 \\
\hat {\bm T}^{\pm}_3 \end{array} \right\}^{(\beta \gamma)} =
\left[ \begin{array}{c}
{\bm K} \\
\end{array} \right]^{(\beta \gamma)} \
\left\{ \begin{array}{c}
\hat {\bm u}^{\pm}_2 \\
\hat {\bm u}^{\pm}_3 \end{array} \right\}^{(\beta \gamma)}
-\left\{ \begin{array}{c}
\hat {\bm T}^{e}_2 \\
\hat {\bm T}^{e}_3 \end{array} \right\}^{(\beta \gamma)}
\end{eqnarray}
where $[\bm K]^{(\beta \gamma)}$ is a $12\times 12$ matrix whose elements depend on the dimensions of the subcell and on the (zero-stress-limit tangent) elastic isotropic stiffness tensor components $C^{(\beta \gamma)}_{jklm}$
of the material filling this subcell. These relations are employed to enforce the continuity of tractions between subcells as well as the traction boundary conditions in Eqs. (\ref{T5}) and (\ref{T7}).
Along with the interfacial continuity conditions of the displacements between the subcells and the boundary conditions in Eqs. (\ref{T6}) and (\ref{T8}),
a system of $12 N_{\beta} N_{\gamma}$ algebraic equations in the surface-averaged displacements $\hat {\bm u}^{\pm (\beta \gamma)}_2$ and $\hat {\bm u}^{\pm (\beta \gamma)}_3$ is obtained. These equations are linear for every `guess' (iteration) of the Fourier-transformed eigenstresses. These are the equations to be solved in stage 1 as described in the previous subsection. The following subsection presents and describes the approach to solving the overall iterative problem.
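Since, for a given eigenstress iterate, the problem decouples between harmonics, each $(p,q)$ system can be assembled and solved independently and, as remarked at the beginning of this section, in parallel. A minimal sketch (the assembly routine is a hypothetical placeholder, and the transformed eigenstresses are assumed to be stored in a dictionary keyed by $(p,q)$) could read:
\begin{verbatim}
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def solve_one_harmonic(task):
    """Assemble and solve the 12*N_beta*N_gamma linear system of one Fourier
    harmonic (p, q); 'assemble_cell_system' is a hypothetical routine that
    builds the system for the current eigenstress transform."""
    p, q, T_e_hat_pq, far_field = task
    A, b = assemble_cell_system(p, q, T_e_hat_pq, far_field)
    return (p, q), np.linalg.solve(A, b)

def solve_all_harmonics(harmonics, T_e_hat, far_field, workers=4):
    """The single-cell problems decouple between harmonics and can be solved
    in parallel (or serially, reusing the same memory)."""
    tasks = [(p, q, T_e_hat[(p, q)], far_field) for (p, q) in harmonics]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(solve_one_harmonic, tasks))
\end{verbatim}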
\subsection{Convergent iterative solution of the nonlinear system of equations}
\label{sec:3.4}
Within the aforementioned approach, there exists a system of equations in the real space that needs to be solved for each loading increment for the mechanics to be correct. The advantage of the Fourier transform method is that each linear system to be solved is a small one, i.e., it does not require much memory or computational time. The drawback is that there is an unknown `right-hand' side (for each wavenumber). Since the unknown `right-hand' side is the Fourier transform of the eigenstress, the unknowns of the underlying nonlinear system can only be the eigenstresses (or their Fourier transforms, which merely composes a linear operator with a nonlinear one, so the natural choice is the real-space eigenstresses). The nonlinear equations that have to be satisfied are the eigenstress update equations, Eqs. (\ref{M4}), (\ref{M41}) or (\ref{M42}), depending on the material, in which $T_{ij}^{MR(K_2,K_3)}$ or $T_{ij}^{NL(K_2,K_3)}$ for a given iteration are perceived as functions of the eigenstress $T^{e(K_2,K_3)}_{ij}$ at the previous iteration, obtained after forward Fourier transforms, solution of a set of linear problems in the Fourier space, an inverse Fourier transform, computation of the deformation gradient and then application of the nonlinear hyperelastic functionals. This underlying nonlinear system can be written as follows:
\begin{equation}
\label{B1}
^{(k)}{T}^{e(K_2,K_3)}_{ij} = g\left(^{(k-1)}{T}^{e(K_2,K_3)}_{ij}\right)
\end{equation}
(here the function $g(x)$ is a nonlinear function for any nonlinear material, such as the Mooney-Rivlin or Murnaghan material).
The mapping in Eq. (\ref{B1}) has a fixed-point corresponding to the solution of the mechanical problem, which for stable hyperelastic materials exists and is unique. This fixed-point may be unstable, in the sense that the mapping may not be contracting in the vicinity of the fixed-point. However, the fixed-point exists and it is a root of the equation
\begin{equation}
\label{B2}
f\left(T^{e(K_2,K_3)}_{ij}\right)\triangleq g\left(T^{e(K_2,K_3)}_{ij}\right) - T^{e(K_2,K_3)}_{ij} =0
\end{equation}
Thus, instead of trying to find a fixed-point of the mapping in Eq. (\ref{B1}), one may look for a root of Eq. (\ref{B2}). The advantage is that the aforementioned mapping may be unstable near the fixed-point, whereas for the root-finding problem, general (at least locally) convergent algorithms are known.
At this point it is important to note that the root-finding algorithm is to be called at the level of the two-scales discretization, that is after the entire domain is divided into cells and each cell is divided into subcells, such that the vector of unknowns is $x_m^{(n)}\triangleq{^{(l,k)}_{(\beta,\gamma)}T_{ij}^{e(K_2,K_3)}}$, it is sought independently for every loading iteration $l$, it changes during the root-finding process counted by the iteration $k$, and it has $d=d_T N_{\beta}N_{\gamma} \times(2M_2+1)(2M_3+1)$ components ($d_T$ components for every subcell in every cell in the domain, where $d_T \le 9$ and the number of independent stress components is determined by the geometry).
Since the loading is assumed to be applied incrementally and one can choose arbitrarily small increments, the first increment can always be taken from the linear part of the hyperelastic stress strain relation. Then for the first increment one can use a rigorously linear material with moduli equal to the initial tangent moduli of the hyperelastic material. This initial problem can be solved using the mapping in Eq. (\ref{B1}) and convergence is then guaranteed. One can then use the obtained eigenstress vector as an initial guess for the next loading increment for which the full hyperelastic material is recovered and a locally-convergent root-finding algorithm is employed. In other words, there is a way to guarantee sufficiently good initial guesses for locally-convergent root-finding algorithms. For the second loading increment it is the convergence of the Banach method which would supply a good enough starting point. For the following increments, a good-enough starting point for any consecutive increment would be supplied by good-enough convergence of the solver for the previous increment. Thus the only requirement for overall convergence is the use of a locally-convergent algorithm.
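A hedged sketch of this incremental strategy is given below; all helper routines are hypothetical placeholders standing for the operations described above (the contracting map of the first increment, the eigenstress map $g$, the locally-convergent root finder and the recovery of the fields from a converged eigenstress):
\begin{verbatim}
import numpy as np

def incremental_loading(load_levels, T_e_shape):
    """Quasi-static incremental loading: the (contracting) Banach map is used
    for the essentially linear first increment; every later increment hands
    the residual f(T_e) = g(T_e) - T_e of Eq. (B2) to a locally-convergent
    root finder, warm-started from the previous increment."""
    T_e = np.zeros(T_e_shape)
    history = []
    for l, load in enumerate(load_levels):
        if l == 0:
            T_e, u, T = fixed_point_iteration(T_e, load)       # first increment
        else:
            residual = lambda x, load=load: eigenstress_map(x, load) - x
            T_e = quasi_newton_root(residual, x0=T_e)          # e.g. DFVS-LM-GBM
            u, T = fields_from_eigenstress(T_e, load)
        history.append((load, T_e.copy(), u, T))
    return history
\end{verbatim}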
Furthermore, since good initial guesses are available here while, on the other hand, the two scales in the problem may render the vector of unknowns extremely large, one seeks the most efficient locally-convergent root-finding algorithm with weak dimension-dependence of its complexity. Although in the examples considered in this work the number of equations to be solved is of the order of several hundreds of thousands, which is not particularly challenging for a modern workstation, increased resolution for adequate description of fibers with non-trivial cross-sections, or three-dimensional problems, would already require tens to hundreds of millions of equations, which is computationally demanding.
Due to the high dimensionality of the problem and the fact that the error in the solution of the Fourier-space problems in intermediate iterations may introduce incorrect frequencies which may result in oscillatory intermediate solution in the real space, using the approach of numerical minimization seems inadvisable here, as many local minima may be encountered. Instead, a direct multidimensional root-finding algorithm would be better-suited. The direct multidimensional locally-convergent root-finding algorithm with the highest convergence rate is Newton's method.
Exact application of Newton's method requires the Jacobian of the vector of equations in each iteration, or its inverse. The Jacobian is usually a full-rank matrix (for a regular multidimensional space). Such a matrix would have $d^2$ entries, which may be extremely demanding in terms of memory for high-resolution calculations for the discussed multiple-scale approach.
This can be described mathematically as follows. The standard Newton update step for finding the nearest root of the equation $\textbf{f}(\textbf{x})=\textbf{0}$, starting from $\textbf{x}=\textbf{x}_0$ is:
\begin{eqnarray} \label{B3}
\ \ \ \ \ \ \ \ \ \textbf{x}_{k+1}={\mathcal{G}}\left(\textbf{x}_{k}\right)=\textbf{x}_k-\textbf{B}_k\textbf{f}_k
\end{eqnarray}
where $\textbf{B}_k=(\nabla\textbf{f}^{\top}_k)^{-\top}$ corresponds to the exact Newton step case.
The inverse of the Jacobian can be spectrally decomposed, without loss of generality, according to the following formula:
\begin{eqnarray} \label{B4}
\ \ \ \ \ \ \ \ \ \ \textbf{B}_k=\sum_{n=1}^d \lambda_n\bar{\textbf{v}}_n\hat{\textbf{v}}_n^{\top}=\sum_{n=1}^d \textbf{u}_n\textbf{v}_n^{\top}
\end{eqnarray}
where $\bar{\textbf{v}}_n$ and $\hat{\textbf{v}}_n^{\top}$ are the right and left eigenvectors of the inverse Jacobian, respectively, and $\lambda_n$ are the eigenvalues. The second decomposition has a different normalization and is more convenient for the derivations employed in the following. The idea here is that if
\begin{eqnarray} \label{B5}
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left\Vert \left\{ \nabla \left[ g(\textbf{x}) \right]^{\top} \right\}^{\top} \right\Vert_s^{\textbf{x}=\textbf{x}^*} \ge 1
\end{eqnarray}
where $\Vert\cdot\Vert_s$ indicates the spectral norm, then another fixed-point iteration is constructed, for which
\begin{eqnarray} \label{B6}
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left\Vert \left\{ \nabla \left[ \mathcal{G}(\textbf{x}) \right]^{\top} \right\}^{\top} \right\Vert_s^{\textbf{x}=\textbf{x}^*} < 1
\end{eqnarray}
Now, the exact Newton step has two problems. First, especially in the case of oscillatory functions -- such as ones obtained by an inverse Fourier transform of an approximate expression -- the Jacobian may vanish at local extrema, which renders the step divergent. This difficulty is often circumvented by the introduction of the so-called Quasi-Newton step, where the Jacobian is substituted with a finite difference calculated by use of previous solution steps, which are always finite. This approach is employed also in the present work. The second problem with the Newton step is that it requires storing $d$ $d$-dimensional vectors in memory, one for each of the orthogonal directions of the $d$-dimensional state space. These vectors are the ones shown in the sum in Eq. (\ref{B4}). Moreover, the calculation of $d$ finite differences in each direction for each of the nonlinear equations in the system, without knowing \emph{a priori} the sufficient step for stable convergent calculation of such finite differences, may be extremely time-consuming. Therefore, the approach of Quasi-Newton methods is to replace the $d$ orthogonal finite-difference approximations of the derivatives of the components of $\textbf{f}$ around the current state in each iteration by a smaller number of vectors related to the solution history. These vectors do not form an orthogonal basis and they are not exactly local. Nevertheless, they may be useful in approximating the Jacobian (or its inverse) in some sense. This is the idea behind the Quasi-Newton method. Derivation of the Quasi-Newton method equations, along with the relevant argumentation, in a form suitable for the following derivation of the final algorithm employed here, is given in \ref{AppendixA}.
In the following, we present an algorithm of the Quasi-Newton type, which is essentially a version of the so-called good Broyden method, albeit implemented in a low-memory form directly storing function values, as derived within the present work. Argumentation for the optimality of the good Broyden method, suitable for the following derivation of the final form of the algorithm employed here, is given in \ref{AppendixB}. The details of the derivation of the new low-memory form of the good Broyden method (the DFVS-LM-GBM algorithm) are provided in \ref{AppendixC}.
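For orientation, a minimal dense-matrix sketch of the classical `good Broyden' iteration is given below. It uses the textbook Sherman--Morrison update of the approximate inverse Jacobian and therefore stores a full $d\times d$ matrix; it is thus precisely the memory-heavy baseline which the low-memory form derived in the following subsection is designed to avoid, and not the DFVS-LM-GBM itself:
\begin{verbatim}
import numpy as np

def good_broyden_dense(f, x0, tol=1e-10, max_iter=100):
    """Classical 'good Broyden' root finder with a dense approximate inverse
    Jacobian B (memory O(d^2)); shown only as a reference point."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    B = np.eye(x.size)                        # initial inverse-Jacobian guess
    for _ in range(max_iter):
        s = -B @ fx                           # quasi-Newton step
        x_new = x + s
        f_new = f(x_new)
        if np.linalg.norm(f_new) < tol:
            return x_new
        y = f_new - fx
        By = B @ y
        B += np.outer(s - By, s @ B) / (s @ By)   # Sherman-Morrison update
        x, fx = x_new, f_new
    raise RuntimeError("good Broyden iteration did not converge")

# usage sketch on a scalar test equation: root[0] is approximately sqrt(2)
root = good_broyden_dense(lambda v: np.array([v[0]**2 - 2.0]), np.array([1.5]))
\end{verbatim}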
\subsubsection{The Directly Function-Value Storing Low-Memory Good Broyden Method}
\label{sec:3.4.1}
The update scheme of the algorithm is standard,
\begin{eqnarray} \label{B11a}
\ \ \ \ \ \ \textbf{x}_{k+1}=\textbf{x}_k+\textbf{s}_{k+1}
\end{eqnarray}
The step (where $k$ denotes the `current' iteration number of the Quasi-Newton solver) is given by:
\begin{eqnarray} \label{B33}
\ \ \ \ \ \textbf{s}_{k+1}=\frac{\beta_k}{\beta_k-\alpha_k}\textbf{c}_k
\end{eqnarray}
where $\textbf{c}_k$ has the following explicit component-wise expression (with $N=d$):
\begin{eqnarray} \label{B49}
c_i^{(k)} = \sum_{m=1}^{k-1}H_{k-1,m}F^{(k)}_{i,m+1}, \forall \ i\le N, k > 2, i,k \in \mathbb{N}
\end{eqnarray}
and the matrix $\textbf{F}^{(k)}$ is given by:
\begin{eqnarray} \label{B31}
\ \ \ \ \ \ \ \ \ \ \textbf{F}^{(k)}=[\textbf{f}_1,\textbf{f}_2,..,\textbf{f}_{k-1},\textbf{f}_k]
\end{eqnarray}
($\textbf{f}$ being the vector function the root of which is sought -- in our case the vector of residues of the real-space eigenstress decomposition-update equations, Eqs. (\ref{M4}), (\ref{M41}) or (\ref{M42}), with the definition in Eq. (\ref{B2})).
The following initial conditions should be used:
\begin{eqnarray} \label{B30a}
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textbf{s}_{2}=\textbf{f}_1, \ \textbf{c}_2=\textbf{f}_2
\end{eqnarray}
Regarding the calculation of the matrix $\textbf{H}$, the following definition is introduced:
\begin{equation} \label{B46}
\phi_{nm} \triangleq
\begin{cases} \frac{\gamma_{n+2}^{(m+1)}}{\beta_{m+1}-\alpha_{m+1}}, \ \textrm{if} \ m < n < k \\ \ \ \ \ \ \ \ 0 \ \ \ \ \ \ , \ \textrm{otherwise}
\end{cases} ; m,n \in \mathbb{N}
\end{equation}
Then, the following (lower triangular) matrix is defined in a component-wise fashion:
\begin{eqnarray} \label{B47}
\ \ \ \ \ \ \ M_{nm} \triangleq \delta_{nm}-\phi_{nm}, \forall \ n,m<k \ ; \ n,m \ \in \mathbb{N}
\end{eqnarray}
where $\delta_{nm}$ is Kronecker's delta and where the $k\times k$ matrix $\textbf{M}$ has to be stored in memory for each iteration, at the same physical address for every consecutive number of the solver step iteration $k$, each time replacing the previous registry, occupying no more than the size of a $p^2$ array (where $p=k_{\text{max}}$).
Next, exact matrix inversion is performed for the defined $p\times p$ matrix, requiring $\mathcal{O}(p^3)$ complexity (still polynomial), yielding $\textbf{H}$:
\begin{eqnarray} \label{B48}
\ \ \ \ \ \ \ \textbf{H}\triangleq\textbf{M}^{-1}
\end{eqnarray}
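In code, this construction is immediate; a minimal sketch, assuming the accumulated values $\phi_{nm}$ are available as a square array, reads:
\begin{verbatim}
import numpy as np

def build_H(phi):
    """Assemble M = I - phi (Eq. (B47)) -- a lower-triangular matrix with
    unit diagonal, since phi[n, m] is nonzero only for m < n -- and return
    H = M^{-1} (Eq. (B48)); the exact inversion cost is polynomial in the
    (small) number of solver iterations."""
    M = np.eye(phi.shape[0]) - phi
    return np.linalg.inv(M)
\end{verbatim}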
Finally, the three scalar quantities in the above formulas should be iterated according to the following equations:
\begin{equation}
\label{B38}
\begin{split}
\ \ \ \ \ \alpha_{n} = \frac{\beta_{n-1}}{\beta_{n-1}-\alpha_{n-1}}\left[\vphantom{\frac{\lambda_{n-1} }{\beta_{n-1}-\alpha_{n-1}}}\mu_n^{(n-1)}+\frac{\lambda_{n-1} }{\beta_{n-1}-\alpha_{n-1}}\gamma_n^{(n-1)}\right], \ n\ge3
\end{split}
\end{equation}
\begin{eqnarray} \label{B39}
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \beta_{n} = \frac{\beta_{n-1}^2\lambda_{n-1}}{(\beta_{n-1}-\alpha_{n-1})^2}, \ n\ge3
\end{eqnarray}
\begin{equation}
\label{B40}
\begin{split}
\gamma_{n}^{(m)} = \frac{\beta_{m-1}}{\beta_{m-1}-\alpha_{m-1}}\left[\vphantom{\frac{\lambda_{n-1} }{\beta_{n-1}-\alpha_{n-1}}}\mu_n^{(m-1)} +\frac{\lambda_{m-1} }{\beta_{m-1}-\alpha_{m-1}}\gamma_n^{(m-1)}\right], \ n,m\ge3
\end{split}
\end{equation}
\begin{equation}
\label{B41}
\begin{split}
\ \ \ \ \ \ \ \ \ \lambda_n = \nu_{nn}^{(n-1)}+2\frac{\gamma_n^{(n-1)}\mu_n^{(n-1)}}{\beta_{n-1}-\alpha_{n-1}}+\frac{\left[\gamma_n^{(n-1)}\right]^2\lambda_{n-1}}{(\beta_{n-1}-\alpha_{n-1})^2}, \ n\ge3
\end{split}
\end{equation}
\begin{equation}
\label{B42}
\begin{split}
\mu_{n}^{(m)} = \nu_{nm}^{(m-1)}+\frac{\gamma_n^{(m-1)}\mu_{m}^{(m-1)}}{\beta_{m-1}-\alpha_{m-1}}+\frac{\gamma_{m}^{(m-1)}\mu_{n}^{(m-1)}}{\beta_{m-1}-\alpha_{m-1}}+\frac{\gamma_n^{(m-1)}\gamma_{m}^{(m-1)}\lambda_{m-1}}{(\beta_{m-1}-\alpha_{m-1})^2}, \ m,n\ge3
\end{split}
\end{equation}
\begin{equation}
\label{B43}
\begin{split}
\nu_{nl}^{(m)} = \nu_{nl}^{(m-1)}+\frac{\gamma_n^{(m-1)}\mu_l^{(m-1)}}{\beta_{m-1}-\alpha_{m-1}}+\frac{\gamma_l^{(m-1)}\mu_n^{(m-1)}}{\beta_{m-1}-\alpha_{m-1}}+\frac{\gamma_n^{(m-1)}\gamma_l^{(m-1)}\lambda_{m-1}}{(\beta_{m-1}-\alpha_{m-1})^2}, \ l,m,n\ge3
\end{split}
\end{equation}
These six equations for the scalar variables required for the implementation of the method need six initial conditions, which are obtained from the six combinations of scalar products involving the initial values of the vector function whose root is sought, as follows:
\begin{equation}
\label{B44}
\begin{split}
\alpha_2=\textbf{f}_1^{\top}\textbf{f}_2, \ \beta_2=\textbf{f}_1^{\top}\textbf{f}_1, \ \lambda_2=\textbf{f}_2^{\top}\textbf{f}_2, \gamma_n^{(2)}=\textbf{f}_1^{\top}\textbf{f}_n, \ \mu_n^{(2)}=\textbf{f}_2^{\top}\textbf{f}_n, \ \nu_{nl}^{(2)}=\textbf{f}_l^{\top}\textbf{f}_n, \ n,l\ge 3
\end{split}
\end{equation}
\subsubsection{Summary of the description of the nonlinear solver}
\label{sec:3.4.2}
To conclude the description of the solver proposed in this work, we note that the Directly Function-Value Storing Low-Memory Good Broyden Method (DFVS-LM-GBM) is implemented by taking the state vector update as prescribed by Eq. (\ref{B11a}), using the step as given by Eq. (\ref{B33}), with the auxiliary direction vector as given in a component-wise fashion in Eq. (\ref{B49}), employing Eq. (\ref{B46})-(\ref{B48}), along with the definition in Eq. (\ref{B31}). In addition, in every iteration, the system of six scalar equations, namely, Eqs. (\ref{B38})-(\ref{B43}), has to be sub-iterated in an internal loop from 2 to $k$ for every successive value of $k$, starting from initial conditions (updated for every $k$), as given by Eqs. (\ref{B44}). Of course, Eqs. (\ref{B30a}) should be used for the initial step and auxiliary direction.
This concludes the description of the nonlinear solver. An extra-low-memory version was proposed here, which uses a total of $\mathcal{O}(p N)+\mathcal{O}(N)+\mathcal{O}(p^2)$ double-precision entries (one notes that the two $(k-1)\times (k-1)$ matrices $\textbf{M},\textbf{H}$, as well as the matrix whose $n,l$-components are denoted by $\nu_{nl}^{(m)}$, are not stored at separate addresses for every state-vector component $i$, but rather only for one, say the first component, and are rewritten to that same address for every consecutive step-iteration $k$). This is asymptotically half of the memory used by the standard low-memory method, for $1\ll p\ll N$.
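As a rough numerical illustration of this estimate (with purely illustrative, assumed values of $N$ and $p$), the dominant $pN$ block of directly stored function values indeed amounts to about half of the $2pN$ storage implied above for the standard low-memory scheme:
\begin{verbatim}
# illustrative memory estimate, double precision; N and p are assumed values
N = 300_000              # number of unknown eigenstress components (assumed)
p = 100                  # cap on the number of solver iterations (assumed)
BYTES = 8

proposed = (p * N + N + p * p) * BYTES   # O(pN) + O(N) + O(p^2)
standard = 2 * p * N * BYTES             # two histories of p vectors of length N
print(f"proposed: {proposed / 1e6:.0f} MB, "
      f"standard low-memory: {standard / 1e6:.0f} MB")
\end{verbatim}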
On the issue of computational complexity, it can be said that this matter is less problematic for quasi-static problems with large spatial resolution, a memory limit being a more stringent requirement; at the least, however, polynomial complexity should be guaranteed with respect to all dimensionless problem parameters much larger than unity, or the computation becomes intractable. The algorithm presented in this subsection indeed guarantees polynomial complexity, the toughest problem to solve in a given iteration being the inversion of a lower-triangular matrix of about a hundred (or $p$) columns, which is not particularly hard and obviously has polynomial complexity. Beyond that, the solver uses a triple loop of complexity $p^2N$ with standard arithmetic operations. Last, the calculation of initial conditions requires $p^2N$ operations for every solver-step iteration, which increases the complexity to the order of $p^3N$. The matrix inversion complexity is $p^3$ for every iteration, which yields a $p^4$ complexity, approximately, and the overall estimate is thus a complexity of $\mathcal{O}(p^3N)+\mathcal{O}(p^4)$, which is, of course, polynomial in $p$ (and $N$-linear).
A block diagram of the overall solution algorithm, addressing the multiple-scale mechanical treatment with the mathematical solver as an integral part is depicted in Fig. \ref{Figure2}.
\section{Applications}
\label{sec:4}
The approach described in the previous section was applied to the analysis of stress distribution in typical examples of composites with hyperelastic constituents distributed doubly-periodically, with various examples of localized damage. In all cases a quasi-statically applied `far-field' stress loading of 20kPa--200kPa in the `vertical' direction $X_2$ was assumed (in fact, a far-field displacement boundary condition was applied for which the desired far-field stress as aforementioned was obtained). In one of the directions the composites were assumed to be prismatic (so-called plane-deformation assumption). The number of (structurally-identical) cells taken was chosen as the minimum number required for convergence in terms of spatial attenuation of the effect of local damage on the profiles dictated by the far-field loading. This number was set to five in each of the two directions for the majority of the calculations. Convergence with the number of cells, assuming a square array, was checked for the case of homogeneous Mooney-Rivlin material with a (roughly) octagonal cavity. The result for three cells (in each direction) turned out to be noticeably different, whereas the result for seven cells turned out to be almost identical to the five by five case. The number of subcells for solution by the HOT was set to eleven in each direction, for all cases except for the example of a single ``crack'' (quotation marks here and onward denote the fact that instead of an exact crack the computation considers a line of subcells in which the stress -- but not the density -- is explicitly set to zero by taking $D=1$) in either homogeneous Mooney-Rivlin material, or a composite of two constituents described by the Murnaghan model. For the two latter cases, the number of subcells was eleven in the vertical direction and ten in the horizontal direction (ten columns). For the case of the single ``crack'' in homogeneous Mooney-Rivlin material, convergence with the number of subcells was checked by taking 20 by 21, instead of 10 by 11 subcells. It was found that reasonably correct results are obtained already for 10 by 11 subcells, with reasonable convergence in the spatial profiles (and hence also the stress-strain profiles). This way, when viewed as a structure, the computational domain (at least for the case of homogeneous material with localized damage) was discretized at the maximum examined resolution into a 100 by 100 `effective subcells' grid.
After running several sets of problems with increasing numbers of subcells and cells, it was empirically observed that computational time increases linearly with the total number of subcells in the domain. This is typical for explicit incremental-stepping algorithms. The following can be said about memory and CPU requirements for the algorithm when comparing to the standard (FEM) approach. A common strategy for solving the problem of finding the mechanical response of composites with hyperelastic constituents, is to apply the loading incrementally, with increments sufficiently small such that the equations of elasticity could be linearized in each increment. Then, the tangent modulus is derived, the spatial domain is divided into finite elements and the equilibrium equations are solved in the weak form, which for linear response within an increment produces the correct solution, as minimization converges to the unique minimum. If minimization algorithms are used, then the increments only need to be small enough for the gradient of the objective function to be \emph{monotonic} in each increment, which is a less strict condition. Then, typically, the L-BFGS iterative formula is used, which is the minimization counterpart of the low memory Broyden-step Quasi-Newton algorithm employed for solving nonlinear equations. If the equation form of the FE method is used rather than minimization, which is possible for the tangent-modulus based strategy, then the linear system can either be solved exactly using a sparse-matrix based Gauss elimination for the stiffness matrix, or iterative linear solvers based on variants of the Gauss-Seidel approach, using preconditioning. In any case, sparse exact solvers are usually linear in the number of elements in terms of complexity, for narrow-bandwidth matrices, and iterative solvers, say of the Quasi-Newton type, are sub-linear in complexity (they have super-linear convergence).
Therefore, in terms of equation solution, both the FEM and the suggested method are linear in the system size, with perhaps different coefficients. As for CPU required for the construction of the equations, again the FEM is linear in system size for stiffness matrix construction, and the proposed algorithm involves back and forth discrete Fourier transforms with solution of linear systems for each harmonic \emph{independently}. This is also linear in system size, with perhaps a different prefactor than for the FEM. In terms of memory usage, the sparse exact solver requires storage size linear in the resolution, and the low memory BFGS or Broyden both also require linear storage. Therefore, asymptotically, the proposed method is comparable with the FEM solution. The implementation here was done in-house, using the Fortran programming language, with no use of black-box commercial software. The potential advantage can come from the fact that in the proposed method, the increments of the loading do not have to be so small as to make the equations \emph{monotonic} in the variables within each increment, since, unlike in the FEM approach, it is not needed for convergence to hold theoretically. Instead, the increments need only to be small enough for the starting point to be sufficiently close to the solution. Then, convergence would hold theoretically within each increments for the nonlinear solver. This \emph{may} become a less stringent requirement than monotonicity conservation, which might allow taking larger increments. The latter, in turn, should not have an effect on the number of iterations needed for convergence. This advantage is inherited by the Broyden step from Newton's method. The number of iterations for convergence is asymptotically system-size independent, but it can add a constant factor to the complexity. This factor can be different than the corresponding factor in the FEM approach. Thus, asymptotically the proposed approach is equivalent to FEM in terms of memory and CPU, for identical loading increments, but \emph{may} allow taking larger increments. One technical advantage of the proposed algorithm is that if for a given increment convergence is achieved, then the solution is at hand, whereas using the FEM, a solution may be obtained by (local) minimization, which is not representative of strong-form equilibrium, and one may have to check for smaller increments, verifying that the solution does not change. Hence the assertion in the abstract that the proposed method can be computationally comparable and perhaps advantageous in certain applications. Regarding the issue of being more problem-specific in terms of treatment of complex geometry and not as robust as FEM in terms of meshing, it should be noted that the parametric HFGMC \cite{Ch14} was recently developed for this purpose. In any event, more meticulous comparison of the proposed and the standard methods would be done in subsequent work.
In the considered examples, only one cell in a typical direction contains the damage, making the assumption of subcell-uniform eigenstress justified. The far-field loading was applied incrementally (without augmenting the hyperelastic model by rate sensitivity) up to about ten percent of uni-directional elongation in each case, which is approximately of the same order in all strain measures. Figures $k$(a) with $k$ ranging from 3 to 10 present a schematic view of the specimen considered in each case, with the damage depicted and a control point shown for which a stress-strain curve is plotted (beneath it). The schematic view is intended to be associated with the engineering problem to be solved, and not necessarily with the specific numerical discretization for which colormaps of the results are shown subsequently. For this reason, for the cases when a crack in the material is considered, the schematic view shows a thin line for a crack, instead of a rectangle with an aspect ratio corresponding to the line of subcells representing the crack. It is understood that in the limit of sufficiently high resolution, the width of the ``crack'' will converge to zero. The same holds for the length of the ``crack'', which may extend farther than, say, a cavity or fiber in its vicinity, but only by a subcell in each direction, which will vanish at sufficiently high resolution. Thus one-subcell-large geometric discrepancies between the schematic view and the computed cases are tolerated. The stress-strain curve below the schematic view shows the first Piola-Kirchhoff stress, representing the force per unit unstrained area, and the square `Lagrangian' strain, which is a reasonable dimensionless measure of relative deformation. In all cases, a dashed line shows hypothetical linear behavior with ambient elastic constants, as a reference, implying developed material nonlinearity (or, owing to the large-strain formulation, combined material and geometric nonlinearity) taking place. It can be noted that the curves are slightly concave in most cases. This owes to the fact that the shown strain is quadratic in the deformation gradient, and a force-displacement curve would have been approximately a square of the shown quasi-linear curve, at least for large enough strain, or slightly less convex than a parabola, but still convex, as expected for a hyperelastic material. Figures $k$(b) show maps of stress distribution in a cross-section of the composite, for the normal vertical component of the first Piola-Kirchhoff stress, $T_{22}$, in units of MPa, in the form of a `cold-warm' color map. Figures $k$(c) show the same for the normal vertical component of the square `Lagrangian' strain measure, $E_{22}$. In all cases the color maps make the emergence of stress and strain concentration close to the damage loci evident. The points chosen for demonstration of local material stress-strain relations, as depicted in Figs. $k$(a), are always as close as possible to the stress concentration maxima loci. Discussion of specific results is given in the following.
Figure \ref{Figure3} shows uniaxial stretch of Mooney-Rivlin material with a square-section cavity represented by 5 subcells in each direction. Nonlinearity for strains increasing from zero is apparent, as well as stress concentration with a maximum value of about 1.5 for the stress and slightly higher for the strain. Figure \ref{Figure4} shows a similar result for a larger and more round cavity. For this example, in the central cell in an array of 5 by 5 cells, a roughly octagonal cavity was created, as follows. A 5 by 5 array of (fully damaged) subcells was placed in the center of the cell. Then four linear arrays of 3 additional (fully damaged) subcells each were added symmetrically below, above, to the right and to the left of the central 5 by 5 array. The resulting domain has four faces three subcell lengths long and four faces with roughly a similar length, hence the term `octagonal'. One should remember that the method of analysis is volume-integral based, and some roughness in the domain boundary is less influential than in a finite-difference method. Larger stress and strain concentration is observed than for the square cavity case, as expected for a composite with a smaller material volume fraction. A maximum value of about 2 can be observed. The effect of the cavity shape is, again, not very pronounced due to the volume-integrated approach of the method of solution. Figure \ref{Figure5} shows a case similar to that of Fig. \ref{Figure4}, only for fewer cells (which implies a larger damaged area fraction). As expected, a higher stress and strain concentration factor, of about 3, is observed. Moreover, apparently the chosen number of cells, namely 3 in each direction, is insufficient for observing asymptotic behavior far from the damage locus.
Figure \ref{Figure6} shows the case with a finite ``crack'' in a Mooney-Rivlin material, stretched in the first mode. The ``crack'' is represented by a single line of six fully damaged subcells, positioned symmetrically in the central cell. One observes sufficiently developed nonlinearity, attained asymptotic stress and strain distribution and concentration of stress and strain close to the ``crack'' tips, with a maximum value of about 3. Figure \ref{Figure6d} compares a stress profile on the ``crack'' axis for the hyperelastic material to the so-called K-field expected for a linear material. One observes the much higher stress localization for the hyperelastic material, with stronger-than-power-law decay (apparent on a log scale not shown here). The higher level of (stress) energy localization for the nonlinear case is not surprising. An important remark is to be made here. Within the framework of the chosen approach, cracks are represented by a line of subcells, in which all stress components vanish due to setting $D=1$ there. In terms of stress such entities are as good as `elongated voids', but unlike voids they have the original material density and they obey displacement continuity just as the original material. Clearly, the stress and strain concentration close to the `tips' of such ``cracks'' will markedly underestimate the stress and strain concentrations around the tips of real cracks. However, the common and basically reasonable understanding of this representation is that the stress and strain at the tips of the ``crack'' are not computed anyway. What is computed is the stress and strain averaged over a subcell for the nonlinear elastic part and a linear trend line over the range of a subcell for the linear-elastic part. This linear profile, starting from zero at the damaged subcell at the edge of the ``crack'' and reaching an attenuated value at the neighboring subcell, cannot capture a peak anyway. The understanding is that upon increasing resolution, points closer to the tip of the ``crack'' are attained, and along with that also the value of the stress at the tip is gradually attained.
Figure \ref{Figure7} shows the case of porous Mooney-Rivlin material with a doubly-periodic array of square-section prismatic cavities, each represented by 3 by 3 arrays of subcells of zero-stiffness material, positioned symmetrically in each cell. The localized damage in this case is manifested in the form of two identical finite ``cracks'', each represented by a single line of five fully damaged subcells located symmetrically below and above one of the cavities (the central one) with a vertical distance of two subcells between the cavity and the subcells of a ``crack''. This is the first example of the present work of localized damage in a hyperelastic composite, albeit with the second phase being void. The stress and strain maps show periodic distributions sufficiently far from the damage locus and the apparent stress and strain concentration close to the four tips of the two finite ``cracks''. One observes that the ``cracks'' do ``release'' the stress and strain in the material below and above them, as one might expect. The strain map reveals an interesting ``zig-zag'' pattern of higher strain around the ``cracks'', a sign of ``interaction'' of (void) inclusions. It should be noted that the schematic view in the top of the figure shows the crack and pores to be of equal length, even though in the exhibited computation the crack was represented by a line of five subcells, and the pores are all three-subcells-long. The reason for the appearing discrepancy is that for five by five subcells pores the porosity is already too high to be realistic, and a ``crack'' three-subcells-long would have an aspect ratio too small to represent a line defect, whereas the length mismatch between the pores and the ``crack'' poses no special problem. For twice higher resolution, a six-subcells-long pore would be closer to a five or seven-subcells-long ``crack'' -- hence the schematic view.
Figures \ref{Figure8}--\ref{Figure9} present the first case of a strictly composite material, consisting of two material phases described by the Murnaghan constitutive equations. It should be noted at this point that the aim of the specific examples is to illustrate how composites with hyperelastic constituents can be analyzed by employing a nonlinear solver. Thus what is needed is a calibrated model of hyperelastic energy for a matrix and fiber inclusions. Since hyperelastic energy is expressed through the arbitrary measure of Lagrange quadratic strain, it is impossible to obtain a meaningful expression without either multiple-scale modeling or experimental calibration. The latter is chosen here, and hence a specific material couple is taken, using the work of \cite{Chen}. However, it so happens that the materials described by the Murnaghan equation in \cite{Chen}, aluminum and silicon carbide, do not exhibit finite-strain hyperelasticity, stretching elastically only to lower strains than those necessary to illustrate pronounced nonlinearity. One possible alternative is to use the example of rubber, described by the Mooney-Rivlin equation. That was indeed executed. In order to show another example, with another constitutive equation, for the description of a two-constituent composite, reference is made to \cite{Chen}. We therefore opt to use the model and numbers cited in \cite{Chen}, where the materials are designated as aluminum and silicon carbide, but note that we understand the materials as virtual/artificial ones, describing actual natural materials at smaller strains, and behaving asymptotically consistently, but somewhat arbitrarily, at larger strain. We do so with the sole purpose of exemplifying the convergence of the algorithm in treating hyperelastic materials with developed nonlinearity. For simplicity, we will refer to those half-artificial materials as `aluminum' and `silicon carbide', using the quotation marks to distinguish the names from those referring to real aluminum and silicon carbide (except for in the figures themselves).
Figure \ref{Figure8} addresses the case of localized damage manifested as a lost fiber, that is, a case of a doubly-periodic fiber-reinforced bulk of material where, after production, one fiber was somehow `pulled-out' and there is a void in its stead. The fiber is represented by an array of 3 by 3 subcells, positioned symmetrically in a cell. Figure \ref{Figure8}(a) reveals a less convex curve than for the Mooney-Rivlin material, which is reasonable, since metals and ceramic materials are more linear in the reversible range than polymers and rubbers (the shown curve resembles a square root in square strain, which means approximately linear strain--relative displacement dependence, unlike the nearly quadratic one for the MR material). The last remark is clearly somewhat speculative, since the calculated regimes are hardly realistic for actual metals and ceramics. However, if one refers to the constitutive models employed to describe rubber on the one hand and metal/ceramic composites on the other -- the Mooney-Rivlin model being more convex in terms of force-displacement dependence than the extrapolation to larger strains of the Murnaghan equation, which is more linear for small strains than the MR model, as confirmed by small-strain experiments -- then the aforementioned comparative judgment can be made in reference to the mathematical models inspired by the actual materials and extrapolated with a certain amount of consistency, rather than to the actual materials themselves. Figure \ref{Figure8}(b) shows a nearly periodic stress map, with maximum stresses in the fibers close to their interfaces with the matrix. Moreover, the stress concentration around the cavity is smaller, and the cavity also reduces the stress below and above it, which is reasonable. Figure \ref{Figure8}(c) is very interesting. For the applied uniaxial stress, a material with a nonzero Poisson ratio (for small strains) develops a biaxial strain state. Indeed, the strain map looks like a superposition of two cases, for each of which the cross-section looks like an array of parallel rods, every other of which looks like a composition of two types of rods interchanged periodically. This picture is perturbed by the cavity in the center, which obviously `releases' the strain below and above it, but shows strain concentration at the `walls' of the cavity, which take about twice the average strain, locally.
Figure \ref{Figure9} shows the classical case of a composite material with damage, namely, a doubly periodic array of `silicon carbide' fibers (the same as in the example with the `lost' fiber) in an `aluminum' matrix (both described by the Murnaghan constitutive equations), with two finite ``cracks'' stretched in the first mode, just below and above a fiber (with the same ``crack'' geometry as in the example with the porous material). A stress-strain curve near a ``crack'' tip shows high stress concentration, as expected due to the combined effect of the presence of a rigid inclusion and a ``crack''. The concavity is indicative of linearity of force--relative displacement dependence up to the maximum point. The emergence of non-monotonicity owes to the fact that the plotted stress and strain values are not invariants and the fiber and the ``crack'' induced large bi-axiality (in other words, the non-monotonicity is related to the emergence of dominant horizontal-plane strains). The maximum reached strain of six percent is sufficient for considering the problem one of finite strain, in light of, for example, the aforementioned stress-strain dependence non-monotonicity. The stress distribution map in Fig. \ref{Figure9}(b) shows the expected periodic pattern far from the damage loci, with a stress concentration factor of about 2 at the `silicon carbide' inclusions. In addition, there is the interesting effect of `stress-screening' in the region between the ``cracks''. This is not surprising -- instead of the loading work resulting in straining the material between the ``cracks'', it results in opening the ``cracks'' in the first mode, creating stress concentration at their tips. The pattern shown in Fig. \ref{Figure9}(c) is similar, apart from the absence of strain concentration at the (relatively more) rigid inclusions. The crack-tip stress singularity, regularized in a physical specimen by inelasticity, is regularized in the computational analysis by the fact that there is no crack tip at all, strictly speaking, but rather a square cavity one subcell in size. Thus the error in describing the crack-tip singularity is encompassed in the error in the geometrical representation of the crack by a row of damaged subcells (Figs. \ref{Figure6d} and \ref{Figure10d}, discussed in more detail below, compare the obtained ``crack''-tip stress fields to standard K-fields of exact cracks, showing reasonable correspondence). At the same time, the fact that corner effects at, say, square-section cavities, such as the one given in the example in Fig. \ref{Figure3}, where the entire cavity is represented by a relatively small number of subcells, are not very pronounced, owes to the geometry-regularizing nature of the volume-integrated PDE-solving approach followed in the present work.
Figure \ref{Figure10} presents the case of a laminate (rather than a fiber-reinforced matrix) of `aluminum' and `silicon carbide' described, as for the case of Figure \ref{Figure9}, by the Murnaghan constitutive equations. This time, however, instead of two ``cracks'' below and above a fiber inclusion, a single ``crack'' (represented by a single horizontal line of 6 fully damaged subcells, positioned symmetrically in the central cell) is assumed in the middle of the aluminum layer, which occupies, symmetrically, 9 of the 11 rows of subcells in each cell. The ``crack'' is of finite length, oriented `horizontally', such that the loading opens it in the first mode. Of course, as before, the ``crack'' is prismatic, manifesting two narrow surfaces that separate everywhere except at the edges, where they are attached tangentially. The evolution of the stress-strain relation near a ``crack'' tip is qualitatively as in the previous case. The stress distribution map, as shown in Fig. \ref{Figure10}(b), is similar to the case of a finite crack in a uniform space stretched in the first mode -- the layering seems to have no effect on the stress, within the visible range. Of course, it is known that a major failure mechanism in laminates is delamination, which results from stress increase around the tips of existing micro-cracks at the laminate interfaces. However, within the idealization where, aside from the explicitly-introduced ``crack'', there are no additional assumed cracks at the interfaces, it appears that, due to stress equilibrium across interfaces, for sufficiently small strains, until nonlinearity kicks in, the lamination is not revealed in the stress map (clearly this is the result of the assumption of idealized interfaces between the layers). This is not surprising, since a layered composite loaded normally to the layering is nothing but a collection of generalized nonlinear springs arranged in series, and in this case all the elements carry the same force. If the layers are homogeneous, this also means the same stress for all elements. For the middle layer, only the average stress is the same, with a non-uniform distribution caused by the ``crack''. The stress concentration factor here has a maximum value of about 1.5, approximately the same as in the case of a uniform material with a single crack. Figure \ref{Figure10}(c) reveals the layered nature of the composite, showing a layered strain pattern in addition to strain concentration at the ``crack'' tips. Again, the pattern is explained by analogy with an array of springs attached in series, the elongation of which is inversely proportional to rigidity, as the more rigid `silicon carbide' exhibits less strain than the aluminum. Figure \ref{Figure10d} shows a stress profile on the ``crack'' axis, on top of the K-field -- the profile for a linear material. As for the Mooney-Rivlin material, one observes here a higher level of stress localization compared to linear elastic fracture mechanics. However, the correspondence between the K-field and the Murnaghan distribution is better than for the Mooney-Rivlin material -- one observes closer proximity between the solid and dashed curves near the ``crack'' tip. This is unsurprising, since Murnaghan materials are much more linear in terms of the force-displacement relation than Mooney-Rivlin materials, as one expects metals to be when compared to, say, rubber. The discrepancy in the `tails' of the curves in Fig. 
\ref{Figure10d} can, in turn, be attributed to the fact that a standard K-field does not contain the far-field stress term. The dashed curves of the K-field for Figs. \ref{Figure6d} and \ref{Figure10d} were obtained from the analytic formula for the stress around a finite-length crack tip, with the stress intensity factor determined by contour integration (J-integral) around the ``crack'' in the numerically-analyzed material. It is important to note that the K-field shown on the plots is regularized by the subcell size, since the inverse square root function starts not at zero but at an argument corresponding to a distance of the order of the subcell length from the presumed ``crack'' tip. This regularization was chosen for better comparison with the hyperelastic material results. In the hyperelastic calculation itself, the same regularization is present implicitly (as discussed above). The ``control'' points for the collection of stress and strain histories were always located in the subcell positioned to the right of the damaged zone, vertically symmetric with respect to the single or the upper damaged zone, one subcell to the right of the rightmost damaged subcell. Subcell-averaged values were used. The physical length dimension scales out in the elastostatic regime assumed here, but the presumed order of magnitude is 1 mm for the cell length.
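For reference, the dashed comparison curves represent the standard mode-I asymptotic field of linear elastic fracture mechanics; on the ``crack'' plane, ahead of the tip, the normal stress is
\[
\sigma(r) \simeq \frac{K_{\mathrm{I}}}{\sqrt{2\pi r}}\,,
\]
with $r$ the distance from the tip, and in the linear-elastic limit the stress intensity factor is related to the path-independent J-integral by $J = K_{\mathrm{I}}^{2}/E'$, where $E' = E/(1-\nu^{2})$ in plane strain and $E' = E$ in plane stress. The far-field stress term and the subcell-length regularization are handled as described above.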
One last remark that should be made at this point concerns the issue of postprocessing, as manifested in the colormaps presented in Figs. \ref{Figure3}-\ref{Figure9}. The colormaps were produced in Matlab, using default plotting settings, one feature of which is smoothing interpolation, helpful for appreciating the asymptotic approach of the sought fields with increasing resolution, even before `infinite' resolution is reached. A shortcoming of such graphical smoothing interpolation is that it hides jumps of certain fields, such as the normal strain, across interfaces between materials with different elastic coefficients. In the presented examples it was decided to favor smoothing, for a better appearance of the field distributions, at the expense of an exact representation of jumps across interfaces, having in mind that the distribution of the distinct phases in the composite is rather regular (an opposite choice might have been made for a random distribution of fibers). Here, the subcells belonging to the distinct phases are easy to spot and count. Thus it should be kept in mind that if subcells adjacent to pore subcells, for instance, appear to have zero strain just as the pore subcells themselves, it is only due to the graphical smoothing interpolation. Also, with regard to the pores and ``cracks'' being characterized by zero strain, it should be noted that the composite was solved as a simply-connected continuum, with points characterized by damage-controlled zero stiffness; hence it was decided not to exclude the pores from the presentation, for self-consistency. The zero strain set in the pores helps to avoid the presentation of large strains formally developed there due to vanishing stiffness and conditions of continuity. Such large strains would have skewed the range of the colormaps. Exclusion of the pore subcells at postprocessing could be a possible alternative, though it would conceal the idea of the method, namely to model the material as simply-connected and undamaged, with superimposed damage accounted for iteratively. In any case, the resulting ranges of the colormaps do not hide any special features weakly distinct in color.
A final point regarding the colormaps is that in certain cases the fields appear to have slightly broken symmetry. This occurs due to the nonsmooth definitions entailed in the smoothing interpolation algorithms within the graphical software, activated by numerical round-off errors. The raw results do in fact respect the symmetry, and the aforementioned postprocessing artifact diminishes with increased computational resolution. Thus the issue should have limited bearing on the comprehension of the results, as long as this caveat is kept in mind.
\section{Conclusions}
\label{sec:5}
This paper presents a computational method for multiscale analysis of hyperelastic composites with localized damage. A doubly-periodic composite with hyperelastic constituents is assumed, augmented by the presence of damage in a localized region, typically in a representative cell of the periodic medium. The analysis involves homogenization for the derivation of `far' fields. A second step consists of decomposing the first Piola-Kirchhoff stress tensor into a uniform linear part and a remainder, comprised of damage-affected and purely-nonlinear terms. The values of the remainder terms are initially assumed to be zero and are then corrected iteratively. A spatial double Fourier transform is employed, and a linear mechanical problem with a predictor inhomogeneous part is then solved in Fourier space, separately for each Fourier harmonic. These emergent linear systems are of size approximately equal to the number of subcells discretizing a representative cell of the periodic composite. The independence between the different Fourier problems permits the use of parallel computing, if memory capabilities allow it. Alternatively, the same processor can be used, with the corresponding increase in CPU time. A trade-off between memory and CPU time can be made through programming optimization. The possibility of such optimization is a noteworthy feature of the method, based on its use of the Fourier transform, making it comparable in this sense to more standard approaches, such as the Finite Element method, in which parallel computing can be employed for the stiffness matrix construction. A key step in the solution is the convergence of the iterative sequence of solutions of linear sub-problems. This convergence is obtained naturally, through a contracting Banach mapping resulting from the stress decomposition, in the case of composites with linear-elastic constituents, and is lost for hyperelastic ones. For the latter case, instead of Banach's mapping, a general numerical nonlinear (Quasi-Newton) solver is employed (to be specific, the Good Broyden step algorithm of the Quasi-Newton family). Moreover, since the iterating vector is rather large, due to the resolution of two spatial scales, a low-memory version of the Good Broyden method is employed, in order to avoid storing in memory the elements of a large square matrix. The most economic low-memory algorithms require storing one vector for each iteration. The present paper, among other things, suggests such an algorithm, the noteworthy property of which is that the vectors to be stored in memory are the values of the vector function the root of which is sought. For the case where such function values are related to stress imbalances in the subcells discretizing the domain of a damaged periodic composite, storing the function values directly gives better convergence control, as one can require convergence specifically, or more exclusively, at points of stress concentration.
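To make the preceding algorithmic description more concrete, a minimal sketch of the textbook (full-matrix) `good Broyden' root-finding iteration is given below in Python; it is provided for orientation only and is not the low-memory variant proposed in this paper, which avoids storing the square matrix and instead retains one residual vector per iteration, as described above. All variable names are illustrative.
\begin{verbatim}
import numpy as np

def broyden_good(F, x0, B0, tol=1e-10, max_iter=100):
    # Textbook 'good' Broyden iteration for F(x) = 0 (full-matrix form).
    # For orientation only: the low-memory variant discussed in the text
    # avoids storing the square matrix B and keeps one vector per iteration.
    x = np.asarray(x0, dtype=float)
    B = np.asarray(B0, dtype=float)   # initial Jacobian approximation
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)   # quasi-Newton step: B s = -F(x)
        x_new = x + s
        F_new = F(x_new)
        y = F_new - Fx
        # rank-one 'good Broyden' update of the Jacobian approximation
        B += np.outer(y - B @ s, s) / (s @ s)
        x, Fx = x_new, F_new
    return x

# Small illustrative system with root (1, 1):
F = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] - x[1]**3])
J0 = np.array([[2.4, 1.0], [1.0, -2.43]])   # Jacobian at the initial guess
print(broyden_good(F, [1.2, 0.9], J0))
\end{verbatim}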
The method of analysis thoroughly described and discussed in the first part of the paper is then applied to specific examples in its second part. Multiple cases of damaged composites are examined, with several materials, periodicity patterns and damage types. Interesting insights are obtained with regard to stress concentration in hyperelastic media. Compared to other approaches, beyond the features already discussed, one can also acknowledge finer points, as raised and discussed for example in \cite{Ch14} (among other things, the issue of subcell size, in view of the interpolation within the subcell being of higher order than in methods resorting to spatial linearization between computational grid points).
To conclude, the suggested approach can be used for quasi-static analysis of loaded periodic composites with solid hyperelastic constituents and spatially-localized damage. It may serve as an alternative to standard methods, such as the Finite Element method, which requires loading increments small enough for local monotonicity to hold, whereas the proposed method is suitable for increments within which the equations can be non-monotonic. This means that larger increments can, in principle, be taken without creating a problem of convergence to local minima, because the present method is a strong-form approach to the integration of PDEs; once a solution has been obtained, no further validation of convergence with smaller increments is required.
With regard to geometric flexibility in describing composite constituents of complex or non-periodic geometry, there is room for further development, which the authors intend to take on in the future. One possible extension is the combination of the employed iterative approach with non-cartesian meshing, such as the one employed in the Parametric HFGMC algorithm \cite{Ch14}. In any case, enhancing the HFGMC/Higher Order Theory approach to the analysis of periodic solid composites with localized damage, so as to account for nonlinear elasticity of the constituent phases, appears to be of value to the development of this take on computational mechanics, which has its own history.
\bibliographystyle{spmpsci}
\section{Introduction}\label{sec:intro}
Einstein's general relativity (GR) has been wildly successful.
The agreement with the observed perihelion precession of Mercury and the 1919 eclipse expedition to verify the prediction of relativistic light-bending around the Sun were the beginning of a century of thorough vetting~\cite{Will:2014kxa}.
The theory has passed every experimental test so far, and it was recently validated in the strong-field regime, most notably through the imaging of a black-hole (BH) shadow in the electromagnetic spectrum by the Event Horizon Telescope~\cite{Akiyama:2019cqa} and through the observation of coalescing binary black holes (BBHs) by the LIGO/Virgo Collaboration~\cite{LIGOScientific:2019fpa,TheLIGOScientific:2016src}.
One century of experimental triumphs did not deter theoretical work on observationally viable extensions of GR for mainly two sets of reasons~\cite{Berti:2015itd}.
The first is observational: some of the most outstanding open questions in physics might be explained by modifying the gravitational sector.
For example, one could introduce an additional scalar field to the gravitational action~\cite{Charmousis:2011bf,Charmousis:2011ea} or allow the graviton to be massive~\cite{deRham:2010tw,DAmico:2011eto,deRham:2010kj} to explain the late-time acceleration of the Universe~\cite{Riess:1998cb,Perlmutter:1998np} without invoking the cosmological constant or dark energy.
The second set of reasons is theoretical: string theory and other ultraviolet completions of the Standard Model usually add higher-order curvature corrections to the Einstein-Hilbert action, implying deviations from GR at high energies and large curvatures~\cite{Polchinski:1998rq,Polchinski:1998rr,Fujii:2003pa}.
Therefore it is important to systematically test the assumptions underlying GR, which are often summarized in terms of Lovelock's theorem~\cite{Berti:2015itd,Alexander:2017jmt}. More specifically, GR assumes that
\begin{itemize}
\item[1)] the gravitational interaction is mediated by the metric tensor alone;
\item[2)] the metric tensor is massless;
\item[3)] spacetime is four-dimensional;
\item[4)] the theory of gravity is position-invariant and Lorentz-invariant; and
\item[5)] the gravitational action is parity-invariant.
\end{itemize}
There is no \emph{a priori} reason why these assumptions should be true, and therefore it is reasonable to explore alternatives to GR by systematically questioning each of them~\cite{Yunes:2013dva,Berti:2015itd}.
Our study is motivated by a combination of these two reasons: we will focus on theories that may address long-standing problems in physics, while questioning the validity of the main assumptions behind GR.
The LIGO-Virgo-KAGRA network of Earth-based detectors just completed their third observing run (O3).
A fourth observing run (O4) is planned in 2022, and future observations will combine data from LIGO Hanford~\cite{TheLIGOScientific:2014jea}, LIGO Livingston~\cite{TheLIGOScientific:2014jea}, Virgo~\cite{TheVirgo:2014hva}, KAGRA~\cite{Akutsu:2020his}, LIGO India~\cite{LIGOIndia}, and third-generation (3g) detectors such as Cosmic Explorer (CE)~\cite{ 2015PhRvD..91h2001D} and the Einstein Telescope~\cite{Punturo:2010zza}.
The space-based observatory LISA~\cite{2017arXiv170200786A}, scheduled for launch in 2034, will extend these observations to the low-frequency window.
As existing ground-based detectors are improved, new ones are built and space-based detectors are deployed, our ability to test GR will be greatly enhanced, but to what level?
The main goal of this study is to combine the anticipated timeline of technological development for Earth- and space-based gravitational-wave (GW) detectors with astrophysical models of binary merger populations to determine what theories will be potentially ruled out (or validated) over the next three decades.
We estimated parameters by running $\sim 10^8$ Fisher matrix calculations using waveform models including the effects of precession~\cite{Hannam:2013oca,Khan:2015jqa,Husa:2015iqa}.
Our null hypothesis is that GR correctly describes our Universe, and that all modifications must reduce to GR in some limit for the coupling constants of the modified theory~\cite{Yunes:2013dva}.
Under this assumption, we employ the parameterized post-Einsteinian (ppE) framework~\cite{Yunes:2009ke,Cornish:2011ys,Sampson:2013lpa,Chatziioannou:2012rf} to place upper limits on the magnitudes of any modification, assuming future GW observations to be consistent with GR.
As our GW observatories are most sensitive to changes in the GW phase, we ignore modifications to the GW amplitude, an approximation that has been shown to be very good~\cite{Tahura:2019dgr}.
\subsection*{Executive Summary}
For the reader's convenience, here we provide an executive summary of the main results of this lengthy study.
\noindent
\textbf{(i) We use public catalogs of BBH populations observable by LISA and by different combinations of terrestrial networks over the next thirty years, and extract merger rates and detection-weighted source parameter distributions.}
While this was not the main goal of this work, we did require astrophysical population models to realistically model GW science over the next three decades.
In the pursuit of constructing forecasts of constraints on GR, we developed useful statistics concerning the distribution of intrinsic parameters for detectable merging BBHs for a variety of population models and detectors.
Among the useful quantities related to BBH mergers calculated here are the expected detection rates for a large selection of population models and detector networks. These rates are listed in Table~\ref{tab:rates} and discussed in Secs.~\ref{sec:T_pdet} and~\ref{sec:S_pdet}.
Detection rates depend not only on the population model, but also on the detector network.
For LISA, we follow the method outlined in Ref.~\cite{Gerosa:2019dbe} to compute detection rates for multiband and massive black hole (MBH) sources.
We constructed synthetic catalogs by filtering the datasets coming from the full population models based on their signal-to-noise ratio (SNR). This yields a detection-weighted distribution of source parameters (discussed in Sec.~\ref{sec:catalog_creation})
which is useful for understanding detection bias and the typical sources accessible to different networks over the next three decades.
In Figs.~\ref{fig:source_properties} and~\ref{fig:source_properties_SPACE} we show these distributions for a large selection of detection network/population model combinations, considering both stellar-origin black holes (SOBHs) and MBHs.
The main conclusions of this analysis are summarized in Fig.~\ref{fig:sources_per_year}, which shows the typical detection rates and SNR distributions for different source models and networks.
This plot contains key information on the relative constraining performance of different population model/detector network combinations, which will be important for the following discussion of tests of GR.
\noindent
\textbf{(ii) We find that improvements over existing GW constraints on theory-agnostic modifications to GR range from 2 to 4 orders of magnitude for ground-based observations, from 2 to 4 orders of magnitude for LISA observations of MBHs, and from 1 to 6 orders of magnitude for multiband observations, depending on what terrestrial network upgrades will be possible, on LISA's mission lifetime, and on the astrophysical distribution of merging BBHs in the Universe.}
The main issue addressed in this work is the scientific return on investment of future detector upgrades in terms of exploring strong-gravity theories beyond GR. Which future detectors and network upgrades are most efficient at constraining beyond-GR physics?
To answer this question, our models use astrophysical populations of SOBHs and MBHs and three reasonable development scenarios for ground-based detectors (ranging from optimistic to pessimistic). We first consider generic (theory-agnostic) modifications of GR, and then focus on specific classes of theories that test key assumptions underlying Einstein's theory.
Our primary conclusions for generic modifications to GR are summarized in Fig.~\ref{fig:SMBH_time} and in Sec.~\ref{sec:general_mod}, where we show bounds on generic deviations from GR at a variety of post-Newtonian (PN) orders, separated by the class of source and marginalized over the detector configurations and population models.
A term in the GW phase that is proportional to $\left( \pi \mathcal{M} f\right)^{b/3}$, where $\mathcal{M}$ is the chirp mass of the binary and $f$ is the GW frequency, is said to be of $(b+5)/2$ PN order.
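As a concrete illustration of this convention, a putative dipole-radiation-like term with $b=-7$ enters at $(-7+5)/2=-1$PN order, i.e., it is enhanced relative to the leading GR (quadrupole) phase term, which corresponds to $b=-5$ and $0$PN order; conversely, a correction with $b=-1$ first appears at $2$PN order, deep in the late inspiral.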
While the range in constraints between the different models and scenarios is large, we have plotted constraints from current pulsar and GW tests of GR for comparison, where available and competitive.
There are several trends present in this figure, most notably:
\begin{itemize}
\item[1)] SOBH multiband sources observed by both LISA and terrestrial networks are the most effective at setting bounds on negative PN effects, outperforming all other classes of sources by at least an order of magnitude.
This observation must be tempered, however, because no multiband sources are observed at all in some of the scenarios we have analyzed.
The detection rate of multiband sources is an open question~\cite{Gerosa:2019dbe,Moore:2019pke}.
We hope that their importance for tests of GR, outlined here and elsewhere~\cite{Barausse:2016eii,Cutler:2019krq,Gnocchi:2019jzp,Carson:2019rda,Carson:2019kkh,Toubiana:2020vtf,Liu:2020nwz}, will stimulate further work on this class of sources.
\item[2)] The MBH mergers observed by LISA outperform SOBH sources observed only in the terrestrial band for negative PN orders in the more pessimistic ground-based detector scenarios.
For most negative PN orders, LISA MBH observations perform at least comparably to the most optimistic terrestrial network scenario, and greatly outperform the other two terrestrial scenarios analyzed in this work.
\item[3)] Terrestrially observed SOBH sources are most effective at constraining positive PN effects, outperforming MBHs and multiband sources.
Furthermore, for positive PN effects, the difference between the different terrestrial network scenarios closes dramatically.
The spread in constraining power among the different terrestrial networks shrinks: it spans 4 orders of magnitude at negative PN orders, but shows significant overlap at positive PN orders.
This suggests that highly sensitive detectors are less important for constraining deviations that first enter at positive PN order, as opposed to negative PN order.
\end{itemize}
In terms of what detectors would have the highest return on investment, LISA's contribution to constraints on negative PN effects is quite high.
Multiband sources are, by far, the most effective testbeds for fundamental physics in the early inspiral of GW signals, but even in the absence of multiband sources (a realistic concern), MBH sources perform as well as, or better than, even the most optimistic terrestrial network scenario we examined.
The difference in terrestrial network scenarios is fairly drastic for negative PN effects, and so ground-based detector upgrades would play an important role if LISA were not available.
The strongest improvement occurs in our most optimistic scenario (including CE and ET), but there is also a clear separation between the ``pessimistic'' and ``realistic'' scenarios.
Terrestrial networks perform the best for positive PN effects, but not by orders of magnitude.
Even at positive PN orders, LISA MBH sources are still as effective as the more pessimistic terrestrial network scenarios.
Furthermore, while constraining positive PN effects, no single terrestrial network scenario drastically outperforms the others: there is a clear hierarchy between the three scenarios, but with significant overlap.
These conclusions are also summarized in Table~\ref{tab:generic_summary}, where we show a concise overview of current constraints on generic ppE parameters coming from observations of pulsars~\cite{Nair:2020ggs} and GWs~\cite{LIGOScientific:2019fpa}, and we compare them against forecasts from our simulations.
\begin{table}
\begin{tabularx}{\linewidth}{ @{\hspace{4pt}}c | @{\hspace{4pt}}c | @{\hspace{4pt}}c | @{\hspace{4pt}}c }
\hline \hline
\makecell{PN order \\ (ppE $b$)}& \makecell{Current \\ Constraint} & \makecell{Best (Worst) \\ Constraint} &\makecell{Best (Worst) \\ Source Class} \\ \hline \hline
-4 (-13)& $-$ & $10^{-25}$ ($10^{-14}$) & MB (T) \\ \hline
-3.5 (-12)& $-$ & $10^{-23}$ ($10^{-14}$) & MB (T) \\ \hline
-3 (-11)& $-$ & $10^{-21}$ ($10^{-12}$) & MB (T) \\ \hline
-2.5 (-10)& $-$ & $10^{-19}$ ($10^{-11}$) & MB (T) \\ \hline
-2 (-9)& $-$ & $10^{-17}$ ($10^{-10}$) & MB (T) \\ \hline
-1.5 (-8)& $-$ & $10^{-15}$ ($10^{-9}$) & MB (T) \\ \hline
-1 (-7)& $2\times10^{-11}$ & $10^{-13}$ ($10^{-11}$) & MB (MBH) \\ \hline
-0.5 (-6)& $1.4\times10^{-8}$ & $10^{-11}$ ($10^{-8}$) & MB (T) \\ \hline
0 (-5)& $1.0\times10^{-5}$ & $10^{-7}$ ($10^{-5}$) & MBH (T) \\ \hline
0.5 (-4)& $4.4\times10^{-3} {}^{\ast}$ & $10^{-7}$ ($10^{-5}$) & MB (T) \\ \hline
1 (-3)& $2.5\times10^{-2} {}^{\ast}$ & $10^{-6}$ ($10^{-4}$) & MB/T (T) \\ \hline
1.5 (-2)& $0.15{}^{\ast}$ & $10^{-5}$ ($10^{-3}$) & T (MB) \\ \hline
2 (-1)& $0.041{}^{\ast}$ & $10^{-4}$ ($10^{-2}$) & T (MB) \\ \hline
\hline
\end{tabularx}
\caption{Summary of the constraints we predict on the theory-agnostic ppE modification parameter $\beta$ as a function of the PN order parameter $b$, as defined in Eqs.~(\ref{eq:ppE1}) and (\ref{eq:ppE2}) below.
We compare these constraints against current constraints from pulsar tests~\cite{Nair:2020ggs} and GW observations from the LVC~\cite{LIGOScientific:2019fpa}, denoted by $({}^{\ast})$.
The LVC analysis used a slightly different formalism, so we mapped their results to the ppE framework for 4 specific sources (GW150914, GW170104, GW170608, and GW170814), we computed the standard deviation of the Markov Chain Monte Carlo (MCMC) samples, and then combined the posteriors assuming a normal distribution to obtain a rough order-of-magnitude estimate of current ppE bounds from the LVC results.
The columns list, from left to right: the PN order of each particular modification, the current constraint (if one exists), the best and worst constraints from our simulations, and the class of astrophysical sources those constraints come from.
All the constraints are $1\sigma$ bounds, and we only show worst-case constraints that still improve on existing bounds.
The source class acronyms are as follows: MB stands for multiband observations of SOBHs, T stands for terrestrial-only observations of SOBHs, and MBH stands for space-based detection of MBHs. }
\label{tab:generic_summary}
\end{table}
\noindent
\textbf{(iii) LISA and future terrestrial network constraints on theory-agnostic modifications to GR follow trends which depend on the PN order, the underlying population of sources, and the detector network.}
Using suitable approximations, we derive analytical expressions that help to elucidate the reason for the hierarchy of constraining power observed in our simulations.
We first examine single observations, and show how different source properties influence the constraints. We then attempt to quantify the importance of stacking multiple observations to develop a cumulative constraint from an entire catalog of observations.
In Sec.~\ref{sec:ind_scaling} [Eqs.~\eqref{eq:single_source_scaling} and \eqref{eq:single_source_scaling2}] we show that, to leading order, the relative constraining power of one class of sources over another depends on the binary masses and on the initial frequency of observation, raised to a power which depends on the PN order in question.
As this power changes sign going from negative to positive PN orders, this scaling explains why multiband and MBH sources are more competitive at negative PN orders, while terrestrial networks are more effective at positive PN orders.
This trend is succinctly summarized in Fig.~\ref{fig:source_class_scaling}.
Besides single-source trends, in Sec.~\ref{sec:multiple_source_scaling} we quantify the effect of stacking observations and the benefit of large catalogs.
In Fig.~\ref{fig:Neff_pn} we show that, as the PN order of the modification goes from negative to positive, the number of single observations meaningfully contributing to the cumulative bound from a catalog rises exponentially.
This helps to further explain the improvement of terrestrial-only catalogs over LISA catalogs for higher PN orders: the very large catalogs coming from third-generation detectors are effectively leveraged to produce much stronger bounds, but only for positive PN orders.
As shown in Fig.~\ref{fig:Neff_distribution}, this depends on the relation between the three parameters of primary concern (the SNR, the chirp mass, and the constraint), and on how their relation evolves as a function of the PN order.
These considerations help us understand the behavior observed in our simulations.
The single-source scaling implies that MBHs and multiband sources should be more efficient at negative PN orders, because of the typical masses and initial frequencies of the observations.
At positive PN orders the balance shifts in favor of terrestrial-only catalogs, further enhanced by the fact that large catalogs bear much more weight for positive PN effects.
The considerations made above also explain the significant overlap of different terrestrial detection scenarios at positive PN orders, and their separation at negative PN orders: negative PN effects are well constrained by single, loud events (favoring the most optimistic detector scenarios), while positive PN effects benefit from large catalogs.
As detection rates are comparable for all three terrestrial scenarios, they perform comparably for positive PN effects.
\noindent
\textbf{(iv) We quantify the expected improvement over current constraints on theory-specific coupling parameters. We derive trends for theory-specific scalings and find that some conclusions following from generic modifications must be \emph{reversed}.}
The analysis of generic deviations from GR is a good theory-agnostic diagnostic tool for estimating the efficacy of future efforts to constrain fundamental physics. This is useful to perform null tests of GR, but at the end of the day, tests of GR focused on specific contending candidates provide the most meaningful physical insights~\cite{Chua:2020oxn}.
Many of the trends observed for generic modifications remain valid when considering specific theories, but the scaling relations we observe in our simulations can change significantly for some of our target theories.
A bird's eye summary of our conclusions can be found in Table~\ref{tab:theory_summary}.
There we identify the current bound on theory-specific parameters, our predicted bounds after thirty years, and the class of sources which is most effective at improving the bounds.
In this table we only include constraints obtained from actual data with a robust statistical analysis, in an effort to limit our comparisons to reliable experimental limits (as opposed to forecasts, simulations, etcetera).
In-depth results by source class and trend derivations are presented in Sec.~\ref{sec:specific_theories}.
We refer the reader to that section for a detailed discussion of individual theories.
In broad terms, the process of mapping generic constraints to theory-specific parameters can impose significant modifications to the trends observed in the analysis of generic constraints.
These modifications can be significant enough to completely reverse the conclusions derived from generic deviations. This should temper any interpretation of our conclusions from general modifications.
We also remark that our analysis for specific theories is far from comprehensive: there is, in principle, a very large number of GR modifications that have different mappings to ppE parameters, and therefore different trends in connection with source distributions.
\begin{table*}
\begin{tabularx}{\linewidth}{ @{\hspace{4pt}}c | @{\hspace{4pt}}c | @{\hspace{4pt}}c | @{\hspace{4pt}}c | @{\hspace{4pt}}c }
\hline \hline
Theory & Parameter & Current bound & \makecell{Most (Least) Stringent \\ Forecasted Bound} & \makecell{Most (Least) \\Constraining Class} \\ \hline \hline
Generic Dipole & $\delta \dot{E}$ & $1.1\times10^{-3}$~\cite{Yunes:2016jcc,Chamberlain:2017fjl}${}^{\ast}$ & $10^{-11} $ ($10^{-6}$) & MB (T)\\ \hline
Einstein-dilaton-Gauss-Bonnet & $\sqrt{\alpha_{\mbox{\tiny EdGB}}}$ & \makecell{$1$ km~\cite{Yagi:2012gp}\\ $3.4$ km~\cite{Nair:2019iur}${}^{\ast}$ }& $10^{-3}$ ($1$) km& T (MBH) \\ \hline
Black Hole Evaporation & $\dot{M}$ & -- & $10^{-8}$ ($10^{2}$) $M_{\odot}/$yr & MB (T)\\ \hline
Time Varying G & $\dot{G}$ & $10^{-13}-10^{-12} \text{ yr}^{-1}$~\cite{Bambi:2005fi,Copi:2003xd,Manchester:2015mda,KONOPLIV2011401,2010A.A...522L...5H} & $10^{-9}$ ($10$) yr${}^{-1}$& MB (T) \\ \hline
Massive Graviton & $m_g$ & \makecell{$10^{-29}\text{eV}$~\cite{Hare:1973px,Goldhaber:1974wg,Talmadge:1988qz,Desai:2017dwg}\\ $10^{-23}\text{eV}$~\cite{LIGOScientific:2019fpa,Brito:2013wya}${}^{\ast}$} & $10^{-26}$ ($10^{-24}$) eV & MBH (MB) \\ \hline
dynamic Chern Simons & $\sqrt{\alpha_{\mbox{\tiny dCS}}}$ & $5.2$ km~\cite{Silva:2020acr} & $10^{-2}$ ($10$) km & T (MB) \\ \hline
Non-commutative Gravity & $\sqrt{\Lambda}$ & $2.1$ $l_p$~\cite{Kobakhidze:2016cqh}${}^{\ast}$& $10^{-3}$ ($10^{-1}$) $l_p$& T (MB) \\ \hline \hline
\end{tabularx}
\caption{
Summary of forecasted constraints on specific modifications of GR.
The source class acronyms are the same as in Table~\ref{tab:generic_summary}.
A (${}^{\ast}$) symbol denotes constraints coming from previous BBH observations, as opposed to other experimental evidence.
When necessary, we have mapped all existing constraints to $1\sigma$ constraints by assuming the posterior to be normally distributed.
We only show worst-case constraints that improve on existing GW bounds.
For consistency with previous work, $\dot{M}$ is given in units of $M_{\odot}/$yr, while we use geometrical units (so that $\delta \dot{E}$ is dimensionless) for the generic dipole radiation bound.
Note that the necessary factor for transforming between the two is $c^3/G =6.41\times10^{12} M_{\odot}/{\rm yr} $.
The time derivative of the gravitational constant, $\dot{G}$, is normalized to the current value of $G$, and it does indeed have units of yr${}^{-1}$ in geometrical units (where $G=c=1$).
}\label{tab:theory_summary}
\end{table*}
Our conclusions on the best return of investment from GW detector development from the generic modification analysis \emph{generally} hold also for specific theories.
EdGB gravity (Sec.~\ref{sec:res_edgb}) and massive graviton theories (Sec.~\ref{sec:res_mg}) are two notable exceptions: in these cases, the dependence of the theory-agnostic parameters on source mass, spin and distance implies that the generic modifications predictions (at $-1$PN and 1PN orders, respectively) must be reversed.
The remainder of the paper presents the calculations summarized above in much more detail. The plan of the paper is as follows.
In Sec.~\ref{sec:detector_networks} we give details on the detector networks implemented in this work. This section includes information about the proposed timelines of detector development, as well as the specific sensitivity curves we have implemented at each stage.
In Sec.~\ref{sec:detector_prob} we discuss the statistics with which this network is used to filter astrophysical populations, including the calculation of detection probabilities for both terrestrial and space-based detectors.
In Sec.~\ref{sec:methodology} we describe the population models, then discuss the calculation of detection rates and the creation of our synthetic catalog.
In Sec.~\ref{sec:stat} we outline the statistics of parameter estimation procedures and waveform models, including a brief overview of Fisher analysis and the modified-GR waveforms implemented in this study.
In Sec.~\ref{sec:results} we present the results of our numerical investigation, as well as analytical estimates that explain certain trends appearing in our findings.
Finally, in Sec.~\ref{sec:conclusions} we discuss limitations of this study and directions for future work.
To improve readability, some technicalities about Bayesian inference and Fisher matrix calculations, the mapping of the ppE formalism to specific theories and our waveform models are relegated to Appendices~\ref{app:Fisher}, \ref{sec:theories} and~\ref{sec:imr_vs_ins}, respectively.
Throughout this paper we will use geometrical units ($G=c=1$) and we assume a flat Universe with the cosmological parameters inferred by the Planck Collaboration~\cite{Ade:2015xua}.
\begin{table*}[t]
\begin{tabular}[b]{>{\centering}p{0.15\textwidth}>{\centering} p{0.15\textwidth}>{\centering} p{0.35\textwidth} p{0.1\textwidth}}
\hline \hline
{Year} & {Detectors} & Noise curves& {Moniker(s)}\\ \hline \hline
\multirow{4}{*}{2022-2023~\cite{Aasi:2013wya}} & \multirow{1}{*}{LIGO Hanford} & \multirow{1}{*}{Advanced LIGO design~\cite{ligo_SN_forecast}} & \multirow{4}{*}{HLVKO4} \\
& \multirow{1}{*}{LIGO Livingston} & \multirow{1}{*}{Advanced LIGO design} & \\
& \multirow{1}{*}{Virgo} & \multirow{1}{*}{Advanced Virgo+ phase 1~\cite{ligo_SN_forecast}} & \\
& \multirow{1}{*}{KAGRA} & \multirow{1}{*}{KAGRA 80Mpc or 128Mpc~\cite{ligo_SN_forecast}} & \\ \hline
\multirow{5}{*}{\makecell{2025-2030~\cite{Aasi:2013wya} \\ (one year observations \\ in alternating years) }} & \multirow{1}{*}{LIGO Hanford} & \multirow{1}{*}{Advanced LIGO A+~\cite{ligo_SN_forecast}} & \multirow{5}{*}{\makecell{HLVKIO5 \\ HLVKIO6 \\ HLVKIO7} } \\
& \multirow{1}{*}{LIGO Livingston} & \multirow{1}{*}{Advanced LIGO A+} & \\
& \multirow{1}{*}{Virgo} & \multirow{1}{*}{Advanced Virgo+ phase 2 high or low~\cite{ligo_SN_forecast}} &\\
& \multirow{1}{*}{KAGRA} & \multirow{1}{*}{KAGRA 80Mpc or 128Mpc} & \\
& \multirow{1}{*}{LIGO India} & \multirow{1}{*}{Advanced LIGO A+} & \\ \hline
\multirow{5}{*}{\makecell{2032-2035 \\ (one year observations \\ in alternating years)}} & \multirow{1}{*}{LIGO Hanford} & \multirow{1}{*}{Advanced LIGO Voyager~\cite{Voyager_detector}} & \multirow{5}{*}{\makecell{HLVKIO8 \\ HLVKIO9 }} \\
& \multirow{1}{*}{LIGO Livingston} & \multirow{1}{*}{Advanced LIGO Voyager} & \\
& \multirow{1}{*}{Virgo} & \multirow{1}{*}{Advanced Virgo+ phase 2 high or low} & \\
& \multirow{1}{*}{KAGRA} & \multirow{1}{*}{KAGRA 80Mpc or 128Mpc} & \\
& \multirow{1}{*}{LIGO India} & \multirow{1}{*}{Advanced LIGO Voyager} & \\ \hline \hline
\multicolumn{4}{c}{ Scenario 1} \\ \hline\hline
\multirow{4}{*}{2035-2039~\cite{Baker:2019nia,Reitze:2019dyk}} & \multirow{1}{*}{Cosmic Explorer} & \multirow{1}{*}{CE phase 1~\cite{CE_psd}} & \multirow{4}{*}{CEKL} \\
& \multirow{1}{*}{Einstein Telescope} & \multirow{1}{*}{ET-D~\cite{Hild:2010id}} & \\
& \multirow{1}{*}{KAGRA} & \multirow{1}{*}{KAGRA 128Mpc} & \\
& \multirow{1}{*}{LISA} & \multirow{1}{*}{LISA~\cite{Cornish:2018dyw,Tanay:2019knc}} & \\ \hline
\multirow{4}{*}{2039-2045~\cite{Baker:2019nia,Reitze:2019dyk}} & \multirow{1}{*}{Cosmic Explorer} & \multirow{1}{*}{CE phase 1} & \multirow{4}{*}{CEKLext} \\
& \multirow{1}{*}{Einstein Telescope} & \multirow{1}{*}{ET-D} & \\
& \multirow{1}{*}{KAGRA} & \multirow{1}{*}{KAGRA 128Mpc} & \\
& \multirow{1}{*}{LISA} & \multirow{1}{*}{LISA} & \\ \hline
\multirow{3}{*}{2045-2050~\cite{Baker:2019nia,Reitze:2019dyk}} & \multirow{1}{*}{Cosmic Explorer} & \multirow{1}{*}{CE phase 2~\cite{CE_psd}} & \multirow{4}{*}{CEK} \\
& \multirow{1}{*}{Einstein Telescope} & \multirow{1}{*}{ET-D} & \\
& \multirow{1}{*}{KAGRA} & \multirow{1}{*}{KAGRA 128Mpc} & \\ \hline \hline
\multicolumn{4}{c}{ Scenario 2} \\ \hline\hline
\multirow{4}{*}{2035-2039} & \multirow{1}{*}{Cosmic Explorer} & \multirow{1}{*}{CE phase 1} & \multirow{4}{*}{CVKL} \\
& \multirow{1}{*}{Virgo} & \multirow{1}{*}{Advanced Virgo+ phase 2 high} & \\
& \multirow{1}{*}{KAGRA} & \multirow{1}{*}{KAGRA 128Mpc} & \\
& \multirow{1}{*}{LISA} & \multirow{1}{*}{LISA} & \\ \hline
\multirow{4}{*}{2039-2045} & \multirow{1}{*}{Cosmic Explorer} & \multirow{1}{*}{CE phase 1} & \multirow{4}{*}{CVKLext} \\
& \multirow{1}{*}{Virgo} & \multirow{1}{*}{Advanced Virgo+ phase 2 high} & \\
& \multirow{1}{*}{KAGRA} & \multirow{1}{*}{KAGRA 128Mpc} & \\
& \multirow{1}{*}{LISA} & \multirow{1}{*}{LISA} & \\ \hline
\multirow{3}{*}{2045-2050} & \multirow{1}{*}{Cosmic Explorer} & \multirow{1}{*}{CE phase 2} & \multirow{4}{*}{CVK} \\
& \multirow{1}{*}{Virgo} & \multirow{1}{*}{Advanced Virgo+ phase 2 high} & \\
& \multirow{1}{*}{KAGRA} & \multirow{1}{*}{KAGRA 128Mpc} & \\ \hline \hline
\multicolumn{4}{c}{ Scenario 3} \\ \hline\hline
\multirow{6}{*}{2035-2039} & \multirow{1}{*}{LIGO Hanford} & \multirow{1}{*}{Advanced LIGO Voyager} & \multirow{6}{*}{\makecell{HLVKIL }} \\
& \multirow{1}{*}{LIGO Livingston} & \multirow{1}{*}{Advanced LIGO Voyager} & \\
& \multirow{1}{*}{Virgo} & \multirow{1}{*}{Advanced Virgo+ phase 2 high or low} & \\
& \multirow{1}{*}{KAGRA} & \multirow{1}{*}{KAGRA 80Mpc or 128Mpc} & \\
& \multirow{1}{*}{LIGO India} & \multirow{1}{*}{Advanced LIGO Voyager} & \\
& \multirow{1}{*}{LISA} & \multirow{1}{*}{LISA} & \\ \hline
\multirow{6}{*}{2039-2045} & \multirow{1}{*}{LIGO Hanford} & \multirow{1}{*}{Advanced LIGO Voyager} & \multirow{6}{*}{\makecell{HLVKILext }} \\
& \multirow{1}{*}{LIGO Livingston} & \multirow{1}{*}{Advanced LIGO Voyager} & \\
& \multirow{1}{*}{Virgo} & \multirow{1}{*}{Advanced Virgo+ phase 2 high or low} & \\
& \multirow{1}{*}{KAGRA} & \multirow{1}{*}{KAGRA 80Mpc or 128Mpc} & \\
& \multirow{1}{*}{LIGO India} & \multirow{1}{*}{Advanced LIGO Voyager} & \\
& \multirow{1}{*}{LISA} & \multirow{1}{*}{LISA} & \\ \hline
\multirow{5}{*}{2045-2050} & \multirow{1}{*}{LIGO Hanford} & \multirow{1}{*}{Advanced LIGO Voyager} & \multirow{5}{*}{\makecell{HLVKI+ }} \\
& \multirow{1}{*}{LIGO Livingston} & \multirow{1}{*}{Advanced LIGO Voyager} & \\
& \multirow{1}{*}{Virgo} & \multirow{1}{*}{Advanced Virgo+ phase 2 high or low} & \\
& \multirow{1}{*}{KAGRA} & \multirow{1}{*}{KAGRA 80Mpc or 128Mpc} & \\
& \multirow{1}{*}{LIGO India} & \multirow{1}{*}{Advanced LIGO Voyager} & \\ \hline \hline
\end{tabular}
\caption{
This table lists the terrestrial detector evolution assumed in this study.
There is a single timeline of detectors until 2035, after which we model three separate scenarios that could play out over the next three decades: Scenarios 1, 2, and 3.
A graphical representation is shown in Fig.~\ref{fig:timeline}.
The various sensitivity curves in column 3 are shown in Fig.~\ref{fig:SN}.
}\label{tab:timeline}
\end{table*}
\begin{figure*}[htb]
\centering
\includegraphics[clip=true, width=\textwidth]{detector_timeline.jpg}
\caption{Graphical representation of Table~\ref{tab:timeline}.
The shaded regions in the figure represent periods of active observation, and the colors/hatching corresponds to the noise curve being implemented, as shown in Fig.~\ref{fig:SN}.
}\label{fig:timeline}
\end{figure*}
\begin{figure*}[t]
\includegraphics[width=0.9\textwidth]{SN.pdf}
\caption{Noise curves for the various detector configurations studied in this work. The shaded bands observed for the Virgo+ phase 2 and KAGRA sensitivities reflect uncertainties in estimates of their anticipated power spectral densities.}\label{fig:SN}
\end{figure*}
\section{Detector Networks}\label{sec:detector_networks}
The construction and enhancement of GW detectors across the world and in space is expected to proceed steadily over the next thirty years. Tests of GR using GW observations are fundamentally tied to this global timeline of detector development, so it is important to have a realistic range of models for detector networks that spans the inevitable uncertainties intrinsic in planning experiments over such a long time.
In this section we describe potential timelines for upgrades and deployment of new detectors, our assumptions on the location of the detectors, and their expected sensitivities.
\subsection{Estimated Timeline}
\label{subsec:timeline}
Three plausible scenarios for the GW detector roadmap as of the writing of this paper are schematically presented in Fig.~\ref{fig:timeline}, with more details in Table~\ref{tab:timeline}.
The timeline starts with the fourth observing run (O4) of the LIGO-Virgo-KAGRA detectors, which are scheduled to take data at their design sensitivities for one year starting in 2022.
After this run, the instruments would be taken offline to be upgraded to higher sensitivity, with the next set of one-year-long observing runs starting in 2025.
At this point, the network would also be joined by LIGO-India.
Subsequent upgrades for the LIGO detectors to LIGO Voyager are planned for the early 2030's.
The plans for 3g detectors are understandably more uncertain, with CE and ET potentially joining the network in 2035.
After a 5--10 year observing run, CE is expected to be taken offline for upgrades, with a second set of runs expected in 2045.
Meanwhile, LISA is scheduled to fly in 2034, with a minimum mission lifetime of 4 years and a possible extension by 6 additional years, for a total of 10 years of observation~\cite{Baker:2019nia}.
Given the timeline described above, one can identify several distinct periods of observations in which a different combination of detectors would be simultaneously online.
During the O4 run, LIGO Hanford (H), LIGO Livingston (L), Virgo (V) and KAGRA (K) are expected to collect data simultaneously, creating the HLVKO4 network.
LIGO India is expected to join the data collection effort in the late 2020's for the O5, O6 and O7 observation campaigns, creating the HLVKIO5/O6/O7 networks.
In the early 2030's, the LIGO detectors (Hanford, Livingston, and India) will be upgraded to the Voyager design, reflected in the HLVKIO8/O9 networks.
The timeline beyond 2035 is quite uncertain, and we cannot model every possible scenario.
Therefore, we chose to model three different timelines:
\begin{itemize}
\item[1)] After 2035, an optimistic detector schedule would see the Virgo and LIGO detectors replaced by the Einstein Telescope (E) and CE (C) detectors, respectively.
Furthermore, LISA (L) is targeting around 2035 as the beginning of its data collection, with a nominal 4-year mission and an additional 6-year extension.
These assumptions correspond to the CEKL and CEKLext networks, respectively.
We follow up the multiband observation campaigns with a final terrestrial-only observation period from 2045-2050 for the CEK network.
This timeline is shown as ``Scenario 1'' in Table~\ref{tab:timeline}.
\item[2)] A less optimistic scenario might see one terrestrial 3g detector receive full funding and come online in the 2030's. We chose to use CE as our one 3g terrestrial detector to create the CVKL, CVKLext, and CVK networks. This is ``Scenario 2'' in Table~\ref{tab:timeline}.
\item[3)] We also consider a pessimistic scenario where no terrestrial 3g detectors will be observing before the 2050's. The network will remain at its O9 sensitivity, but it will still be joined by LISA in the 2030's. This scenario includes the HLVKIL, HLVKILext, and HLVKI+ networks, and is denoted as ``Scenario 3'' in Table~\ref{tab:timeline}.
\end{itemize}
Because these last three observation periods for all three scenarios are less defined and span a wide time range, we assume an 80\% duty cycle when estimating terrestrial-only detection rates, but we use the full observation period for calculating multiband rates.
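As an illustration of this assumption, the 2035--2039 observing window in Scenarios 1--3 spans four calendar years, so an 80\% duty cycle corresponds to roughly $0.8\times 4 \simeq 3.2$ years of effective terrestrial observing time, whereas the multiband rates for the same period are computed using the full four-year window.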
\begin{table}[t]
\begin{tabular}{c c c }
\hline \hline
Detector & Latitude (${}^\circ$) & Longitude (${}^\circ$) \\ \hline \hline
LIGO Hanford & 46.45 & -119.407 \\ %
LIGO Livingston & 30.56 & -90.77 \\ %
Virgo & 43.63 & 10.50 \\ %
KAGRA & 36.41 & 137.31 \\ %
LIGO India &14.23 & 76.43 \\ %
Cosmic Explorer & 40.48 & -114.52 \\ %
Einstein Telescope & 43.63 & 10.50 \\
\hline \hline
\end{tabular}
\caption{Detector locations used in this paper.}\label{table:locations}
\end{table}
\subsection{Estimated Sensitivity}
The detector sensitivities can be characterized in terms of their power spectral density $S_{n}$, which we present in Fig.~\ref{fig:SN}.
We assume that the LIGO detectors will start operating at design sensitivity (``LIGO design''~\cite{ligo_SN_forecast} in Fig.~\ref{fig:SN}) in O4, but will be upgraded to the A+ configuration (``LIGO A+''~\cite{ligo_SN_forecast} in Fig.~\ref{fig:SN}) in time for the O5 observing run.
In the early 2030's, the LIGO detectors will be upgraded to the Voyager sensitivity (``LIGO Voyager''~\cite{Voyager_detector} in Fig.~\ref{fig:SN}).
Virgo observations begin with the Advanced Virgo+ phase 1 noise curve (``Virgo phase $1$''~\cite{ligo_SN_forecast} in Fig.~\ref{fig:SN}) in O4, and they will subsequently be upgraded to Advanced Virgo+ phase 2 (``Virgo phase 2''~\cite{ligo_SN_forecast} in Fig.~\ref{fig:SN}) beginning in O5.
To bracket uncertainties, we consider both an optimistic (``high'') configuration and a pessimistic (``low'') configuration for Virgo+~\cite{ligo_SN_forecast}.
We model the KAGRA detector using the ``128Mpc'' and ``80Mpc'' configurations from Ref.~\cite{ligo_SN_forecast} for optimistic and pessimistic outlooks, respectively (``KAGRA'' in Fig.~\ref{fig:SN}).
LIGO India is planned to join the network in O5 with sensitivity well approximated by the A+ noise curve, mirroring the Hanford and Livingston detectors.
LIGO India will follow the same development path as its American counterparts, and be upgraded to Voyager sensitivity in the early 2030's.
The US-led 3g detector, CE, may replace the LIGO detectors in 2035 at phase 1 sensitivity (``CE phase $1$'' in Fig.~\ref{fig:SN}).
After upgrades are completed in the early 2040's, the detector may come back online with phase 2 noise sensitivity (``CE phase 2'' in Fig.~\ref{fig:SN})~\cite{Reitze:2019dyk}.
The European-led 3g counterpart ET could replace the Virgo detector in 2035.
ET will be modeled with the ET-D sensitivity in this study (``ET-D'' in Fig.~\ref{fig:SN}).
In reality, ET consists of 3 individual detectors arranged in an equilateral triangle, and a fully consistent treatment of ET would incorporate the three detectors separately.
However, after testing on subsets of our populations, we concluded that modeling ET as three identical copies of one of the constituent detectors minimally impacts our estimated constraints on modified gravity, because the correlations between the modified-gravity phase corrections and the extrinsic parameters of the source, such as sky location and orientation, are small.
This approximation significantly reduces the computational resources required to perform this study, so we opted to use it when constructing the Fisher matrices themselves (as discussed in Sec.~\ref{sec:stat}).
When calculating the detection probability, however, we do account for the three detectors separately (cf. Sec.~\ref{sec:rates}). This is because the different orientations and positions of the detectors affect the rates more than they affect parameter estimation.
For networks that include a mixture of 3g and 2g detectors, we will only model the 2g detectors with the most optimistic sensitivity curve, i.e., the ``high'' configuration for Virgo and the ``128Mpc'' configuration for KAGRA. The impact of the different 2g sensitivities is small when implemented alongside a 3g detector, and the shrinking of the parameter space for our models significantly reduces the computational cost of the problem.
For LISA, we model the noise curve using the approximations in Ref.~\cite{Cornish:2018dyw}.
At different points in this work, we required both sky-averaged and non-sky-averaged response functions to various detectors.
For LISA this can be more complicated than terrestrial interferometers, so we plot the sky-averaged noise curve directly from Ref.~\cite{Cornish:2018dyw} (``LISA -- sky-averaged'' in Fig.~\ref{fig:SN}) and the full (non-sky-averaged) sensitivity produced in Ref.~\cite{Tanay:2019knc} (``LISA -- non-sky-averaged'' in Fig.~\ref{fig:SN}).
However, in contrast to Ref.~\cite{Tanay:2019knc}, we do include the factor of 2 to account for the second channel, mirroring the approximation we made for ET.
\subsection{Estimated Location}
\label{sec:network_response_functions}
The relative locations of the various detectors affect the global response function, and thus impact the analysis performed in this paper.
For terrestrial detectors, the various geographical locations of each site are shown in Table~\ref{table:locations}.
The sites of detectors currently built or under construction were taken from data contained in \software{LALSuite}~\cite{lalsuite}.
Since a site has yet to be decided upon for CE, we chose a reasonable location near the Great Basin desert, in Nevada.
For LISA, the detector's position and orientation as a function of time must be taken into account, so we use the time-dependent response function derived in Refs.~\cite{Cutler:1997ta,Berti:2004bd}.
Unlike those papers we use the polarization angle defined by the total angular momentum $\mathbf{J}$, instead of the orbital angular momentum $\mathbf{L}$, because the latter precesses in time, while $\mathbf{J}$ remains (approximately) constant.
\section{Statistical Methods for Population Simulations}\label{sec:detector_prob}
Both terrestrial and space-borne GW detectors have nonuniform sensitivity over the sky. This effect is important when attempting to estimate the expected detection rate and the resulting population catalog.
Terrestrial detector networks can mitigate this selection bias by incorporating more detectors into the network, which can ``fill in'' low-sensitivity regions in the sky.
Incorporating an accurate combination of detectors and their locations can therefore be important; this is why in Sec.~\ref{sec:network_response_functions} we specified the locations used in this study.
For space-borne detectors, some signals may be detectable for much longer than the observation period, so random sky locations map to random spacetime locations, and the effect of only seeing a portion of the signal must be accounted for.
These issues with terrestrial networks and space detectors, and their associated detection probabilities, are discussed in Secs.~\ref{sec:T_pdet} and~\ref{sec:S_pdet}, respectively.
We wish to calculate the probability that the GWs emitted by some source will be detected by a terrestrial network of instruments, which we will refer to as the detection probability.
We will focus primarily on two classes of sources: SOBH binaries~\cite{Gerosa:2018wbw} and MBH binaries~\cite{Klein:2015hvg}.
We will use publicly available SOBH population synthesis models to produce synthetic catalogs of sources which are mainly of interest for the terrestrial network, but which can also be observed as ``multiband'' events by both the terrestrial network and LISA. We will also use MBH binary simulations to create synthetic catalogs for LISA (these sources are typically well outside the frequency band accessible to terrestrial networks). Intermediate-mass BH binaries could also be of interest~\cite{Datta:2020vcj}, but we do not consider them here, mainly because their astrophysical formation models and rates have large uncertainties~\cite{Gair:2010dx,Cutler:2019krq,Jani:2019ffg}.
\begin{table*}[!htb]
\centering
\begin{tabular}{ c c c }
\hline \hline
Detection network & Detector locations & Detector sensitivity curve \\ \hline \hline
\multirow{3}{*}{HLVKO4} & \multirow{1}{*}{ Hanford site} & \multirow{3}{*}{Ad. LIGO design~\cite{ligo_SN_forecast}} \\
& \multirow{1}{*}{ Livingston site} & \\
& \multirow{1}{*}{ Virgo site} & \\ \hline
\multirow{4}{*}{HLVKIO5-O7} & \multirow{1}{*}{ Hanford site} & \multirow{4}{*}{Ad. LIGO A+~\cite{ligo_SN_forecast}} \\
& \multirow{1}{*}{Livingston site} & \\
& \multirow{1}{*}{Virgo site} & \\
& \multirow{1}{*}{KAGRA site} & \\ \hline
\multirow{4}{*}{HLVKIO8-O9} & \multirow{1}{*}{ Hanford site} & \multirow{4}{*}{Ad. LIGO Voyager~\cite{Voyager_detector}} \\
& \multirow{1}{*}{Livingston site} & \\
& \multirow{1}{*}{Virgo site} & \\
& \multirow{1}{*}{KAGRA site} & \\ \hline
\multirow{2}{*}{CEKL(ext)} & \multirow{1}{*}{ Cosmic Explorer site} & \multirow{2}{*}{CE phase 1~\cite{CE_psd}} \\
& \multirow{1}{*}{All ET sites} & \\ \hline
\multirow{1}{*}{CVKL(ext)} & \multirow{1}{*}{ Cosmic Explorer site} & \multirow{1}{*}{CE phase 1} \\ \hline
\multirow{4}{*}{HLVKIL(ext)} & \multirow{1}{*}{ Hanford site} & \multirow{4}{*}{Ad. LIGO Voyager} \\
& \multirow{1}{*}{Livingston site} & \\
& \multirow{1}{*}{Virgo site} & \\
& \multirow{1}{*}{KAGRA site} & \\ \hline
\multirow{2}{*}{CEK} & \multirow{1}{*}{ Cosmic Explorer site} & \multirow{2}{*}{CE phase 2~\cite{CE_psd}} \\
& \multirow{1}{*}{All ET sites} & \\ \hline
\multirow{1}{*}{CVK} & \multirow{1}{*}{ Cosmic Explorer site} & \multirow{1}{*}{CE phase 2} \\ \hline
\multirow{4}{*}{HLVKI+} & \multirow{1}{*}{ Hanford site} & \multirow{4}{*}{Ad. LIGO Voyager} \\
& \multirow{1}{*}{Livingston site} & \\
& \multirow{1}{*}{Virgo site} & \\
& \multirow{1}{*}{KAGRA site} & \\ \hline
\end{tabular}
\caption{
Configurations used at each stage of our analysis to calculate the probability of detection for a given binary for the terrestrial detector network.
Note that networks involving multiple detectors are labeled by the network nodes and not just their number, because the relative position of the detectors impacts the calculation of the detection probability.
Our calculation depends on the assumption that all the detectors have approximately the same sensitivity curve, and so the curve used at each stage is given in the last column.
Because of this assumption, and the extreme disparity in sensitivity between second- and third-generation detectors, we only use the CE detector to calculate rates when CE is part of the network.}\label{table:detection_prob_table}
\end{table*}
\begin{figure}
\includegraphics[width=\linewidth]{pdet_curves.pdf}
\caption{
Detection probability $p_{\mbox{\tiny det}}$ for the four networks examined in this paper.
The black curve is for a single detector (where global position no longer matters, so this is valid for any single right-angle Michelson interferometer).
The blue curve is specifically for the Hanford, Livingston, and Virgo (HLV) network.
The red curve is for the Hanford, Livingston, Virgo, and KAGRA (HLVK) network.
Finally, the green curve represents a network composed of CE and ET (which includes all three of the ET detectors as well as the $60^{\circ}$ angle between each set of arms).}\label{fig:pdet_curves}
\end{figure}
\subsection{Terrestrial Detection Probability}\label{sec:T_pdet}
An accurate calculation of the detection probability for each source requires injections into search pipelines. A simplifying, yet still satisfactorily accurate, assumption used in most of the astrophysical literature (see e.g.~\cite{Dominik:2014yma,Finn:1992xs,Finn:1995ah}) involves computing the SNR $\rho$, defined by
\begin{equation}
\rho^2= 4 \Re \left[ \int \frac{\tilde{h} \; \tilde{h}^{\ast}}{S_n(f)} df \right]\,,
\label{eq:SNR}
\end{equation}
where we recall that $S_{n}(f)$ is the noise power spectral density of the detector, while $\tilde{h}=\tilde{h}(f)$ is the Fourier transform of the contraction between the GW strain and the detector response function.
We can factor out all the detector-dependent quantities from the SNR in the form of the ``projection parameter'' $\omega$ defined as~\cite{Dominik:2014yma,Finn:1995ah}
\begin{equation}\label{eq:omega}
\omega^2 = \frac{(1 + \cos^2 \iota)^2}{4} F_{+}^2(\theta,\phi,\psi) + \cos^2{\iota} F_{\times}^2 (\theta,\phi,\psi)\,,
\end{equation}
where $\iota$ is the inclination of the binary relative to the line of sight, $\theta$ and $\phi$ are the spherical angles of the source relative to the vector perpendicular to the plane of the detector, and $\psi$ is the polarization angle.
The single-detector antenna pattern functions $F_{+}$ and $F_{\times}$ are given by
\begin{align}
F_{+} &= \frac{1}{2} \left( 1+\cos^2\theta \right) \cos 2\phi \cos 2\psi - \cos \theta \sin2 \phi \sin 2\psi \,, \nonumber \\
F_{\times} &= \frac{1}{2} \left( 1+\cos^2\theta \right) \cos 2\phi \sin 2\psi + \cos \theta \sin2 \phi \cos 2\psi \,.
\end{align}
With the projection-parameter approximation, we can approximate the SNR as
\begin{equation}
\rho^{2} \approx \omega^2 \rho_{\mbox{\tiny opt}}^2 \,,
\end{equation}
where $\rho_{\mbox{\tiny opt}}$ is the SNR for an optimally oriented binary with $\theta=0$, $\iota=0$, and $\psi = 0$.
This relation is approximate if the binary is precessing, so that $\iota$ is a function of time, but it is exact otherwise.
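As a concrete illustration, the following minimal Python sketch (the function and variable names are illustrative and are not part of our analysis code) evaluates the antenna patterns above and the projection parameter of Eq.~\eqref{eq:omega}, and then rescales an optimal SNR as in the approximation above.
\begin{verbatim}
import numpy as np

def antenna_patterns(theta, phi, psi):
    # Single-detector F+ and Fx for a right-angle Michelson interferometer.
    a = 0.5 * (1.0 + np.cos(theta) ** 2)
    f_plus = (a * np.cos(2 * phi) * np.cos(2 * psi)
              - np.cos(theta) * np.sin(2 * phi) * np.sin(2 * psi))
    f_cross = (a * np.cos(2 * phi) * np.sin(2 * psi)
               + np.cos(theta) * np.sin(2 * phi) * np.cos(2 * psi))
    return f_plus, f_cross

def projection_parameter(theta, phi, psi, iota):
    # Projection parameter omega of Eq. (omega).
    f_plus, f_cross = antenna_patterns(theta, phi, psi)
    return np.sqrt(0.25 * (1.0 + np.cos(iota) ** 2) ** 2 * f_plus ** 2
                   + np.cos(iota) ** 2 * f_cross ** 2)

# Example: approximate SNR of a mildly inclined source with optimal SNR 20.
rho_opt = 20.0
rho_approx = projection_parameter(0.3, 1.2, 0.7, 0.5) * rho_opt
\end{verbatim}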
The calculation of the detection probability can then be rephrased as a search for the extrinsic source parameters that satisfy $\omega \approx \rho / \rho_{\mbox{\tiny opt}} \geq \rho_{\mbox{\tiny thr}} / \rho_{\mbox{\tiny opt}} \equiv \omega_{\mbox{\tiny thr}} $ for some $\rho_{\mbox{\tiny thr}}$. The probability that $\omega$ satisfies this criterion translates into finding the cumulative probability distribution~\cite{Dominik:2014yma}
\begin{equation}\label{eq:pdet}
p_{\mbox{\tiny det},\mbox{\tiny terr}}(\vec{\lambda}) = \int \Theta\left(\omega'(\theta,\phi, \psi, \iota) - \omega_{\mbox{\tiny thr}} \right)\frac{\sin \theta d\theta d\phi}{4 \pi} \frac{d \psi}{\pi} \frac{d\cos \iota}{2} \,,
\end{equation}
where $\Theta(\cdot)$ is the Heaviside function, which ultimately describes the selection effects of our terrestrial networks. This cumulative probability clearly depends on the source parameter vector $\vec{\lambda}$, inherited from $\omega_{\mbox{\tiny thr}}=\omega_{\mbox{\tiny thr}}(\vec{\lambda})$.
Equation~\eqref{eq:pdet} can be extended to multiple-detector networks by expanding our definition of $\omega$ to
\begin{equation}\label{eq:omega_network}
\omega_{\text{network}}^2 = \sum_i \omega_i^2\,,
\end{equation}
where $\omega_i$ is the projection parameter for a single detector in the network, and $\omega_{\text{network}} = \rho_{\text{network-thr}} / \rho_{\mbox{\tiny opt}}$ with some threshold network SNR, $\rho_{\text{network-thr}}$, and single-detector optimal SNR, $\rho_{\mbox{\tiny opt}}$.
In the case of a multiple-detector network, the locally defined position coordinates $\theta$ and $\phi$ are replaced with the globally defined position coordinates $\alpha$ (the right ascension angle) and $\delta$ (the declination angle).
The polarization angle $\psi$ is changed to the globally defined polarization angle $\bar{\psi}$, which is defined with respect to an Earth-centered coordinate axis instead of the coordinate system tied to a single detector.
Evaluating Eq.~\eqref{eq:pdet} for each network, with the network projection operator defined as Eq.~\eqref{eq:omega_network}, provides a good estimation of the probability we are seeking: a weighting factor for a given binary that incorporates the sensitivity and global geometry of a given detector network, as well as the impact that the intrinsic properties of the source have on its detectability.
Importantly, the intrinsic source parameters themselves only enter into Eq.~\eqref{eq:pdet} through the calculation of $\rho_{\mbox{\tiny opt}}$ in $\omega_{\mbox{\tiny thr}}$.
Once a threshold SNR $\rho_{\mbox{\tiny thr}}$ is set, the detection probability function can be seen as a function of only one number $\omega_{\mbox{\tiny thr}}$ (for a given network), through its dependence on $\rho_{\mbox{\tiny opt}}$.
As Eq.~\eqref{eq:pdet} is a four-dimensional integral and must be calculated numerically, this detail leads to significant savings in computational cost if we can approximate the full function $p_{\mbox{\tiny det},\mbox{\tiny terr}}(\omega_{\mbox{\tiny thr}})$ once for each network.
To do this, we form a grid in $\omega_{\mbox{\tiny thr}}$ with approximately 100 grid points, and evaluate Eq.~\eqref{eq:pdet} for each grid point with $10^9$ samples uniformly distributed in $\bar{\psi}$, $\cos \iota$, $\alpha$, and $\sin \delta$.
Interpolating across the grid in $\omega_{\mbox{\tiny thr}}$ produces an approximation for $p_{\mbox{\tiny det},\mbox{\tiny terr}}(\omega_{\mbox{\tiny thr}})$.
This approximation must be calculated for each specific network, as the quantity $\omega'$ in Eq.~\eqref{eq:pdet} depends on the number and relative location of the detectors, but it only needs to be evaluated once per network, rather than once per source.
The resulting probability functions for the four terrestrial networks examined in this paper are shown in Fig.~\ref{fig:pdet_curves}.
Note that the relative location of each detector in a network impacts the form of $p_{\mbox{\tiny det},\mbox{\tiny terr}}$, so we label the curves by the detector nodes and not just their number (i.e.~the form of $p_{\mbox{\tiny det},\mbox{\tiny terr}}$ will be slightly different for a Hanford, Livingston, and Virgo network when compared to a Hanford, Livingston, and KAGRA network).
Furthermore, an important assumption in this calculation is that the sensitivity of each detector is identical.
This is not a good approximation when jointly considering second- and third-generation detectors, so in these cases we neglect all the second-generation detectors in the network.
The configurations used at each stage are summarized in Table~\ref{table:detection_prob_table}.
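For concreteness, a minimal single-detector Python sketch of how $p_{\mbox{\tiny det},\mbox{\tiny terr}}(\omega_{\mbox{\tiny thr}})$ can be tabulated by Monte Carlo integration is shown below; it reuses the \texttt{projection\_parameter} function from the sketch above and uses far fewer samples than the $10^9$ quoted in the text. The network case follows by summing $\omega_i^2$ over detectors with globally defined angles. The names are illustrative and this is not our production code.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1234)
n_samples = 100_000  # far fewer than the 1e9 samples used in the text

# Uniform sampling over sky position, polarization, and inclination.
theta = np.arccos(rng.uniform(-1.0, 1.0, n_samples))
phi = rng.uniform(0.0, 2.0 * np.pi, n_samples)
psi = rng.uniform(0.0, np.pi, n_samples)
iota = np.arccos(rng.uniform(-1.0, 1.0, n_samples))

omega = projection_parameter(theta, phi, psi, iota)

# Tabulate p_det on a grid in omega_thr, then interpolate as needed.
omega_thr_grid = np.linspace(0.0, 1.0, 100)
p_det_grid = np.array([np.mean(omega >= w) for w in omega_thr_grid])

def p_det_terr(omega_thr):
    return np.interp(omega_thr, omega_thr_grid, p_det_grid)
\end{verbatim}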
\subsection{Space Detection Probability}\label{sec:S_pdet}
For space-based detectors, which operate at much lower frequencies, the picture changes quite drastically.
The terrestrial detection probability of Sec.~\ref{sec:T_pdet} addresses the issue of random sky location and orientation of the sources, but an important effect for detectors like LISA is the time spent in band.
Because signals observable by LISA can be detected for much longer than the observation time $T_{\mbox{\tiny obs}}$ of the LISA mission, the time spent in the frequency range accessible to LISA will characterize the detectability of the binary.
We characterize this effect as outlined below (we refer the reader to Ref.~\cite{Gerosa:2019dbe} for a more thorough derivation and further details).
To determine the time the binary spends in the observational frequency band of LISA, we look for the roots of
\begin{equation}\label{eq:roots_eq}
\rho(t_{\mbox{\tiny merger}}) - \rho_{\mbox{\tiny thr}} = 0\,,
\end{equation}
where $t_{\mbox{\tiny merger}}$ is the time before merger at which the signal starts, $\rho_{\mbox{\tiny thr}}$ is some threshold SNR, and the SNR $\rho(t_{\mbox{\tiny merger}})$ is defined through
\begin{equation}\label{eq:rho_tmerg}
\rho^2(t_{\mbox{\tiny merger}}) = 4 \Re \left[
\int^{\text{min}(f(t_{\mbox{\tiny merger}}-T_{\mbox{\tiny obs}}), 1\,\text{Hz})}_{f(t_{\mbox{\tiny merger}})}
\frac{\tilde{h} \; \tilde{h}^{\ast}}{S_{n}(f)} df
\right]\,.
\end{equation}
Note that, at variance with Ref.~\cite{Gerosa:2019dbe}, we use $1\,$Hz as the upper cutoff for the LISA noise curve.
Once the roots of Eq.~\eqref{eq:roots_eq} (say $T_1$ and $T_2$) have been found, we can obtain the probability of mergers for LISA via
\begin{align}\label{eq:space_merger_rate_SOBH}
p_{\mbox{\tiny det},\mbox{\tiny space}}^{\mbox{\tiny SOBH}} (\vec{\lambda}) &= p_{\mbox{\tiny det},\mbox{\tiny terr}}(\vec{\lambda}) \times \text{min}\left[ \frac{T_1 - T_2}{T_{\mbox{\tiny obs}}}, \frac{T_{\text{wait}} - T_2}{T_{\mbox{\tiny obs}}}\right]
\end{align}
for SOBH binaries, and
\begin{align}\label{eq:space_merger_rate_MBH}
p_{\mbox{\tiny det},\mbox{\tiny space}}^{\mbox{\tiny MBH}} (\vec{\lambda}) &= \text{min}\left[ \frac{T_1 - T_2}{T_{\mbox{\tiny obs}}}, \frac{T_{\text{wait}} - T_2}{T_{\mbox{\tiny obs}}}\right]
\end{align}
for MBH binaries. The probability $p_{\mbox{\tiny det},\mbox{\tiny space}}^{\mbox{\tiny SOBH}}$ is weighted by $p_{\mbox{\tiny det},\mbox{\tiny terr}}$ because all SOBH binaries we consider for LISA are also candidate multiband events, which must be observed both by LISA and by a terrestrial network to be considered ``true'' multiband sources. In these expressions, $T_{\text{wait}}$ is some maximum waiting time for the binary to merge, which (following Ref.~\cite{Gerosa:2019dbe}) we choose to be $5 \times T_{\mbox{\tiny obs}}$ for each detector network iteration.
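A schematic Python sketch of this time-in-band weighting is given below. It assumes a user-supplied function \texttt{snr\_of\_tmerger} that implements Eq.~\eqref{eq:rho_tmerg} and crosses the threshold exactly twice within the search window; all names and the bracketing values are illustrative assumptions, not our actual implementation.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def time_in_band_weight(snr_of_tmerger, rho_thr, t_obs, t_wait,
                        t_min=1.0e5, t_max=3.0e12):
    # Find the roots T1 > T2 of Eq. (roots_eq) and return
    # min[(T1 - T2)/T_obs, (T_wait - T2)/T_obs].
    f = lambda t: snr_of_tmerger(t) - rho_thr
    # Bracket the two crossings around the maximum of the SNR curve.
    t_grid = np.geomspace(t_min, t_max, 400)
    t_peak = t_grid[np.argmax([snr_of_tmerger(t) for t in t_grid])]
    if f(t_peak) < 0.0:
        return 0.0  # the source never reaches the threshold
    t2 = brentq(f, t_min, t_peak)   # crossing closer to merger
    t1 = brentq(f, t_peak, t_max)   # crossing farther from merger
    return min((t1 - t2) / t_obs, (t_wait - t2) / t_obs)
\end{verbatim}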
\subsection{Waveform Model for Population Estimates}
\label{subsec:wf-pop}
When computing the detection probability of a given source, we need a model for the Fourier transform of the time-domain response function $h = F_{+} h_{+} + F_{\times} h_{\times}$. In the terrestrial case, we implement the full precessing inspiral/merger/ringdown model \software{IMRPhenomPv2}~\cite{Hannam:2013oca,Khan:2015jqa,Husa:2015iqa} with an inclination angle of $\iota=0^\circ$ to calculate the optimal SNR, $\rho_{\mbox{\tiny opt}}$.
For the space-based estimates in the next section, we will use the spinning (but nonprecessing) sky-averaged \software{IMRPhenomD} waveform model~\cite{Khan:2015jqa,Husa:2015iqa}, with a small modification: since we are interested in LISA rather than terrestrial, right-angle interferometers, we replace the usual factor of $2/5$ (that arises from sky-averaging) in favor of the sky-averaged LISA sensitivity curve from~\cite{Cornish:2018dyw}, which accounts for the second LISA data channel, sky-averaging, and the $60^{\circ}$ angle between the detector arms. This waveform model depends on parameters $\vec{\lambda}_{D} = [\alpha,\delta,\theta_{\rm L},\phi_{\rm L}, \phi_{\text{ref}}, t_{c,\text{ref}}, D_L, \mathcal{M}, \eta, \chi_1, \chi_2]$, where $\alpha$ is the right ascension, $\delta$ is the declination, $\theta_{\rm L}$ and $\phi_{\rm L}$ are the polar and azimuthal angles of the binary's orbital angular momentum $\mathbf{L}$ in equatorial coordinates at the reference frequency, $\phi_{\text{ref}}$ and $t_{c,\text{ref}}$ are the orbital phase and the time of coalescence at the reference frequency, $D_{L}$ is the luminosity distance, $\mathcal{M}$ and $\eta$ are the redshifted chirp mass and the symmetric mass ratio, and $\chi_i=\mathbf{\hat{L}} \cdot \mathbf{S}_i/m_i^2$ are the dimensionless spin components along $\mathbf{\hat{L}}=\mathbf{L}/|\mathbf{L}|$ with spin angular momentum $\mathbf{S}_i$.
For space-based detectors we must also choose a way to map between time and frequency. The limits of the SNR integral \eqref{eq:SNR} and the antenna patterns (which for LISA are functions of time) depend on this mapping. For multiband SOBH binaries we use the leading-order PN relation~\cite{Peters:1964zz,Berti:2004bd,Gerosa:2019dbe}
\begin{equation}\label{eq:pn_time}
f(t_{\mbox{\tiny merger}}) = \frac{5^{3/8}}{8 \pi} \left(\mathcal{M}\right)^{-5/8} t^{-3/8}_{\mbox{\tiny merger}}\,,
\end{equation}
where again $t_{\mbox{\tiny merger}}$ is the time before merger. For massive black hole (MBH) binaries, observed by LISA only through merger, this PN approximation is insufficient, so we use instead~\cite{Chamberlain:2018snj,Cornish:2020vtw}
\begin{equation}\label{eq:time_phase_deriv}
t_{\mbox{\tiny merger}} = \frac{1}{2\pi} \frac{d \phi}{df}\,,
\end{equation}
where $\phi$ is the GW Fourier phase. When calculating detection rates, we will invert these relations numerically as needed.
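For reference, Eq.~\eqref{eq:pn_time} and its analytic inversion can be written compactly as in the short Python sketch below (geometric units $G=c=1$, with masses and times expressed in seconds; the names are illustrative). The relation of Eq.~\eqref{eq:time_phase_deriv} can be inverted with the same kind of one-dimensional root finding.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def f_of_tmerger(t_merger, chirp_mass):
    # Leading-order PN frequency a time t_merger before merger, Eq. (pn_time).
    return (5.0 ** 0.375 / (8.0 * np.pi)
            * chirp_mass ** (-0.625) * t_merger ** (-0.375))

def tmerger_of_f(f, chirp_mass):
    # Analytic inversion of Eq. (pn_time).
    return ((5.0 ** 0.375 / (8.0 * np.pi * f)) ** (8.0 / 3.0)
            * chirp_mass ** (-5.0 / 3.0))

def invert_monotonic(func, target, lo, hi):
    # Generic numerical inversion, e.g. of t_merger(f) from Eq. (time_phase_deriv).
    return brentq(lambda x: func(x) - target, lo, hi)
\end{verbatim}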
\begin{figure*}[t]
\includegraphics[width=\linewidth]{source_properties.pdf}
\caption{
Distributions of the different source properties detected by each network.
For each detector network, labeled across the y-axis, we plot the distribution of the total detector-frame mass $M_z=M(1+z)$, mass ratio $q=m_2/m_1<1$, redshift $z$, and SNR $\rho$ in log-space (base 10).
Each plot is split, with the upper (grey) half coming from the $\sigma=265$\,km/s SPOPS simulations, and the lower (green) half coming from the $\sigma=0$\,km/s simulations.
}\label{fig:source_properties}
\end{figure*}
\begin{figure*}
\includegraphics[width=\linewidth]{source_properties_SPACE.pdf}
\caption{
Distributions of the different MBH binary source properties detected by LISA.
For each MBH binary simulation, labeled across the y-axis, we plot the distribution of the total detector-frame mass $M_z=M(1+z)$, mass ratio $q=m_2/m_1<1$, redshift $z$, and SNR $\rho$ in log-space (base 10).
Each plot is split in two, with the upper (grey) half corresponding to a ``nominal'' four-year LISA mission, and the lower (green) half corresponding to an extended ten-year mission.
}\label{fig:source_properties_SPACE}
\end{figure*}
\section{Population Simulations}\label{sec:methodology}
A key ingredient of our work is the use of astrophysically motivated BBH population models (Sec.~\ref{sec:pop_synth}).
Our methodology for computing detection rates and for creating synthetic catalogs from the models is explained in Sec.~\ref{sec:rates} and in Sec.~\ref{sec:catalog_creation}, respectively.
\subsection{Population Models}\label{sec:pop_synth}
For ease of comparison with previous work, we use the SPOPS catalogs~\cite{Gerosa:2018wbw} for SOBH binaries (Sec.~\ref{sec:spops}) and the MBH binary merger catalogs used in Ref.~\cite{Klein:2015hvg} (Sec.~\ref{sec:mbh}).
\subsubsection{Stellar Mass Simulations}\label{sec:spops}
We use the public SPOPS catalog of population synthesis simulations~\cite{Gerosa:2018wbw} in an effort to accurately capture the full spin orientations of the binaries at merger.
The SPOPS catalog uses multiscale solutions of the precessional dynamics~\cite{Kesden:2014sla,Gerosa:2015tea} computed through the public code \software{PRECESSION}~\cite{Gerosa:2016sys} to quickly evolve the binary's spin orientations in time until the binary is about to merge.
The catalog is parameterized by three different variables: the strength of the BH natal kicks, the BH spin magnitudes at formation, and the efficiency of tidal alignment~\cite{Gerosa:2018wbw}.
In this model, natal kicks are caused by asymmetric mass ejection during core collapse, imparting a torque on one of the constituents of the binary, while the tidal alignment reflects spin-orbital angular momentum coupling through tidal interactions that can realign the spin vectors with the orbital angular momentum vector (see Ref.~\cite{Gerosa:2018wbw} for further details).
Following Ref.~\cite{Gerosa:2019dbe}, we choose to vary only one parameter of these models while keeping the others fixed.
More specifically, we consider a uniform distribution in spin magnitude and the most realistic (``time'') prescription for tidal alignment of Ref.~\cite{Gerosa:2019dbe}, while varying the natal kick.
To estimate lower and upper constraints on the rates given uncertainties in our population modelling, we use the two most extreme natal kick models, corresponding to $\sigma=0$\,km/s and $\sigma=265$\,km/s, where $\sigma$ is the one-dimensional dispersion of the Maxwellian distribution the kicks are drawn from.
The zero-kick scenario results in a lack of precessional effects and the highest detection rates for all detectors, while the $\sigma=265$\,km/s choice corresponds to a soft upper bound on the size of the kicks, which imparts the largest spin tilts and results in the lowest detection rate.
The two chosen values of $\sigma$ result in optimistic and pessimistic bounds on our projected constraints, and at the same time they provide a useful comparison between highly precessing systems and nonprecessing systems.
\subsubsection{Massive Black Hole Simulations}\label{sec:mbh}
To model MBH binary populations, we adopt the semianalytical models of early Universe BH formation~\cite{Barausse:2012fy,Sesana:2014bea,Antonini:2015cqa} used in the LISA parameter estimation survey of Ref.~\cite{Klein:2015hvg}.
As in that work, we focus on three population models, characterized by different BH seeding mechanisms and different assumptions on the time delay between BH mergers and the mergers of their host galaxies.
These population models are denoted as
\begin{enumerate}
\item PopIII -- seeds are produced from the collapse of population III stars in the early Universe (a light-seed scenario);
\item Q3delays -- seeds are produced from the collapse of a protogalactic disk (heavy-seed scenario), and there are delays between galaxy mergers and BH mergers;
\item Q3nodelays -- seeds are produced from the collapse of a protogalactic disk (heavy-seed scenario), and there are no delays between galaxy mergers and BH mergers.
\end{enumerate}
These three models embody two seed formation mechanisms, with two models representing optimistic and pessimistic heavy-seed scenarios. The difference between PopIII simulations with and without delays is less than a factor of two, so, following Ref.~\cite{Klein:2015hvg}, we consider only the more conservative estimate, in which delays are incorporated.
\subsection{Detection Rate Calculations}\label{sec:rates}
With population synthesis simulations at our disposal, we can now estimate expected detection rates for a given detector network.
This involves taking a model for our Universe that predicts a certain rate of merging BBHs per comoving volume, and filtering the model through the lens of a particular detector configuration and sensitivity.
The detection rate $r$ for a given network is given by the relation~\cite{Gerosa:2019dbe,Belczynski:2015tba}:
\begin{equation}\label{eq:rate}
r = \iint dz \; d\vec{\lambda} \; \mathcal{R}(z) \; p(\vec{\lambda}) \; \frac{dV_{c}(z)}{dz} \; \frac{1}{1+z} \; p_{\mbox{\tiny det}}(\vec{\lambda} , z)\,,
\end{equation}
where $z$ is the cosmological redshift, $\mathcal{R}$ is the intrinsic merger rate (a function of the redshift), $p$ is the probability of a binary forming and merging given a set of intrinsic source parameters $\vec{\lambda}=\vec{\lambda}_{D}$ (discussed in Sec.~\ref{subsec:wf-pop}), and $dV_{c}/dz$ is a shell of comoving volume $V_{c}$ at redshift $z$.
The quantity $p_{\mbox{\tiny det}}$ is the probability of a binary being detected by a given detector network with some threshold SNR, as discussed in Sec.~\ref{sec:detector_prob}.
The type of detector network affects the quantity $p_{\mbox{\tiny det}}$ only, while the other terms in the integral above depend only on information contained in the population simulation.
For this study, we have used a threshold SNR of 8 for terrestrial and space detections, while for multiband detections we require the terrestrial SNR and the LISA SNR to both be above $8$ independently.
Because of the intrinsic difference in the duration of signals observed by space detectors and terrestrial networks, we treat the calculation of $p_{\mbox{\tiny det}}$ slightly differently between the two cases, as discussed in Sec.~\ref{sec:T_pdet} for terrestrial detectors, and in Sec.~\ref{sec:S_pdet} for space-based detectors.
For all binaries, we evaluate the integral in Eq.~\eqref{eq:rate} through a large population of binary systems that are evolved to the point of becoming BBHs, and are weighted according to the probability that a binary of this type would actually be found in the Universe given some population model.
This probability comprises factors such as the star formation rate (SFR), the cosmological evolution of the metallicity, the distribution of masses for these stellar populations, etc.;
the continuous integral in Eq.~\eqref{eq:rate} then becomes a discrete sum
\begin{equation}\label{eq:rate_sum}
r = \sum_i r_i \; p_{\mbox{\tiny det}}(\vec{\lambda}_i)\,,
\end{equation}
where the index $i$ refers to samples in the simulation, $r_i$ is the intrinsic merger rate, which depends on parameters like the SFR and the mass distribution, and $p_{\mbox{\tiny det}}(\vec{\lambda}_{i})$ is the detection probability evaluated for the source parameters of the particular sample. This detection probability is $p_{\mbox{\tiny det},\mbox{\tiny terr}}$ when considering a terrestrial network only, $p_{\mbox{\tiny det},\mbox{\tiny space}}^{\mbox{\tiny SOBH}}$ when considering multiband events, or $p_{\mbox{\tiny det},\mbox{\tiny space}}^{\mbox{\tiny MBH}}$ when considering MBH binaries detectable only by LISA.
The intrinsic merger rate $r_i$ varies depending on the catalog used.
For the case of the SPOPS simulations, we utilized the original \software{StarTrack} data at the foundation of each SPOPS catalog (cf. Ref.~\cite{Belczynski:2015tba} for details) to construct the intrinsic merger rate in Eq.~\eqref{eq:rate}.
For MBH catalogs, the intrinsic merger rate $r_{i}$
becomes~\cite{Klein:2015hvg}
\begin{equation}\label{eq:MBHB_rate}
r_i = 4 \pi W_{{\rm PS},i} \left(\frac{D_L (z_i)}{1+z_i}\right)^2 \,,
\end{equation}
as outlined in the data release~\cite{Klein:2015hvg,MBH_data_release}.
The parameter $W_{{\rm PS},i}$ is the weight on the Press-Schechter mass function divided by the number of realizations~\cite{Barausse:2012fy}.
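Schematically, the discrete sum of Eq.~\eqref{eq:rate_sum} and the MBH weight of Eq.~\eqref{eq:MBHB_rate} amount to something like the following Python sketch (illustrative names only; the actual bookkeeping of units and catalog formats is omitted).
\begin{verbatim}
import numpy as np

def detection_rate(intrinsic_rates, params, p_det):
    # Discrete detection rate of Eq. (rate_sum): sum_i r_i * p_det(lambda_i).
    return sum(r_i * p_det(lam_i)
               for r_i, lam_i in zip(intrinsic_rates, params))

def mbh_intrinsic_rate(w_ps, d_lum, z):
    # Intrinsic MBH rate weight of Eq. (MBHB_rate).
    return 4.0 * np.pi * w_ps * (d_lum / (1.0 + z)) ** 2
\end{verbatim}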
\subsection{Synthetic Catalog Creation}\label{sec:catalog_creation}
Calculating the BBH detection rate only gets us halfway to our end goal.
Once we have the number of mergers we expect to detect for each network and simulated population, we still need to synthesize BBH catalogs to use for the later Fisher analysis in this paper.
To create these synthetic catalogs, we sample directly from the population simulations, using Monte Carlo rejection sampling.
The probability of accepting a sample is based on the intrinsic merger rate $r_i$ in Eq.~\eqref{eq:rate_sum}, evaluated for a single simulation entry, which comes directly from the simulation data itself.
This gives a distribution of sources that reflects the expected BBH distributions for each evolution prescription.
With a distribution of ``intrinsic'' mergers in this realization of the Universe, we assign any remaining parameters according to reasonable distributions.
For sky-location and orientation, this distribution is uniform in $\alpha$, $\sin\delta$, $\cos \theta_{\rm L}$, and $\phi_{\rm L}$.
For the binary's merger time, we use a uniform distribution in Greenwich mean sidereal time (GMST) for the terrestrial networks, which impacts the orientation of the terrestrial network at the time of merger.
This effect is completely degenerate with the right ascension of the binary, which is itself distributed uniformly in $\alpha$.
We use a similar prescription for MBH binaries, where the signal duration is typically shorter than the observation period.
We employ a uniform distribution in time from $0$ to $T_{\mbox{\tiny obs}}$, which again translates to a uniform distribution in detector orientation (random position of LISA in its orbit).
Candidates for multiband detection are more nuanced.
The signal is typically detectable for much longer than the observation period, and the frequency-time relation is nonlinear because of the familiar chirping behavior of GW signals.
For this class of sources, we randomly assign a signal starting time, which has a power-law relation with the starting frequency: cf. Eq.~\eqref{eq:pn_time}.
In this case, the position of the binary in time not only affects the orientation of LISA, but also the initial and final frequencies of the signal.
This assignment of time is important, as assigning a uniformly random initial frequency would create a bias towards seeing sources close to merger.
Once the full parameter vector has been specified, we proceed to calculate the SNR for the source in question.
Sources meeting the SNR threshold requirements are retained in the final catalog.
This process is repeated as necessary until we have a catalog of sources that matches the number of BBHs predicted by our rate calculations in Sec.~\ref{sec:rates}.
There are some drawbacks to this scheme.
If this process is repeated enough times, sources in the simulation will begin to be reused, as there are a fixed number of possible sources to draw from.
For this study, however, these effects are negligible, as the number of the sources in the simulations is larger than any single catalog we construct.
Furthermore, the effects will be further mitigated by randomly assigning the rest of the parameter vector not coming from the simulation, which will give each source at least slightly different properties, even if one were reused.
\begin{figure*}[t]
\includegraphics[width=\linewidth]{sources_per_year.pdf}
\caption{Properties of detected merger events for various detector networks and population models. The left panels refer to terrestrial-only sources, while MBHs and multiband sources are shown on the right.
The points and thick lines show the mean values, while the shaded regions and error bars encompass the optimistic and pessimistic scenarios.
The assumed detector network is shown on the top x-axis (using the notation of Table~\ref{tab:timeline}), while the corresponding years are shown on the bottom x-axis.
The top panels show the rates of detected mergers for each class of sources; circles refer to the PopIII MBH population.
The middle panels show the cumulative number of observed sources: here the three different multiband scenarios are identical, as the choice of terrestrial network has little impact on the number of multiband sources we can detect~\cite{Gerosa:2019dbe}.
The bottom panels show the average $\log_{10}$SNR. Here the lower (upper) bounds correspond to subtracting (adding) the standard deviation to the mean value of the most pessimistic (optimistic) scenario.}
\label{fig:sources_per_year}
\end{figure*}
\begin{table}[t]
\begin{tabularx}{1\linewidth}{ @{\hspace{4pt}}Y @{\hspace{4pt}}Y @{\hspace{4pt}}Y }
\hline
\hline
\multicolumn{3}{c}{SOBH Rates (yr${}^{-1}$)} \\
\hline
Network & \makecell{SPOPS 0 \\(T, MB)} & \makecell{SPOPS 265 \\(T, MB)} \\
\hline
\hline
HLVKO4 & ($1.43\times 10^{4}$,0) & ($2.90\times10^{2}$,0) \\
HLVKIO5-O7 & ($1.22\times10^{5}$,0) & ($3.43\times10^{3}$,0) \\
HLVKIO8-O9 & ($6.60\times10^{5}$,0) & ($2.48\times10^{4}$,0) \\
\hline
\multicolumn{3}{c}{Scenario 1} \\
\hline
CEKL & ($9.70\times10^{5}$,2.58) & ($3.96\times10^{4}$,0.0854) \\
CEKLext & ($9.70\times10^{5}$,6.24) & ($3.96\times10^{4}$,0.210) \\
CEK & ($9.72\times10^{5}$,0) & ($3.97\times10^{4}$,0) \\
\hline
\multicolumn{3}{c}{Scenario 2} \\
\hline
CVKL & ($8.36\times 10^{5}$,2.58) & ($3.36\times10^{4}$,0.0854) \\
CVKLext & ($8.36\times 10^{5}$,6.24) & ($3.36\times10^{4}$,0.210) \\
CVK & ($9.26\times 10^{5}$,0) & ($3.77\times10^{4}$,0) \\
\hline
\multicolumn{3}{c}{Scenario 3} \\
\hline
HLVKIL & ($6.60\times 10^{5}$,2.58) & ($2.48\times10^{4}$,0.0854) \\
HLVKILext & ($6.60\times 10^{5}$,6.24) & ($2.48\times10^{4}$,0.210) \\
HLVKI+ & ($6.60\times 10^{5}$,0) & ($2.48\times10^{4}$,0) \\
\hline
\multicolumn{3}{c}{} \\
\hline
\hline
\multicolumn{3}{c}{MBH Rates (yr${}^{-1}$)} \\
\hline
Network & PopIII & Q3 (delay,nodelay) \\
\hline
\hline
LISA & 62.5 & (8.11,119.1) \\
\hline
\hline
\end{tabularx}
\caption{Detection rates for the detector networks and population models examined in this study. For SOBH populations, the first number in the parentheses is the detection rate for the terrestrial-only network (neglecting LISA), while the second number is the detection rate for multiband events seen in both the terrestrial network and LISA. For MBH populations, we show the detection rate for LISA for the PopIII, light-seeding scenario, as well as for the Q3, heavy-seeding scenario. In the case of Q3, the first number in parentheses corresponds to delayed mergers (Q3delays) and the second number to the nondelayed version (Q3nodelays).}
\label{tab:rates}
\end{table}
To recap, our process can be broken down into the following steps:
\begin{enumerate}
\item Perform rejection sampling on the simulation entries according to the probability of merging, neglecting detector selection effects.
\item Keep the ``successful'' events, and randomly draw the rest of the requisite parameters according to their individual distributions.
\item Calculate the SNR for the given detector network. If the binary meets the threshold requirements, keep the source in the final catalog.
\end{enumerate}
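A minimal Python sketch of these three steps is shown below. The helper functions (\texttt{draw\_extrinsic}, \texttt{snr\_of}) and the array layout are placeholders for the machinery described above, not our actual implementation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def draw_catalog(entries, rates, n_detected, snr_threshold,
                 snr_of, draw_extrinsic):
    # Build a synthetic detected catalog from simulation entries and rates r_i.
    max_rate = max(rates)
    catalog = []
    while len(catalog) < n_detected:
        # Step 1: accept an entry with probability proportional to its rate.
        idx = rng.integers(len(entries))
        if rng.random() > rates[idx] / max_rate:
            continue
        # Step 2: assign the remaining extrinsic parameters randomly.
        extrinsic = draw_extrinsic()
        # Step 3: keep the source only if it clears the SNR threshold.
        if snr_of(entries[idx], extrinsic) >= snr_threshold:
            catalog.append((entries[idx], extrinsic))
    return catalog
\end{verbatim}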
The source properties of the various \emph{detected} catalogs are shown in Fig.~\ref{fig:source_properties} for the SOBH populations, and in Fig.~\ref{fig:source_properties_SPACE} for the MBH populations targeted by LISA.
Both figures show the distributions of the redshifted total mass $M_z$, the mass ratio $q=m_2/m_1<1$, the redshift $z$, and the SNR $\rho$ of the detected populations of sources for different detector configurations and population models.
For the SOBH sources shown in Fig.~\ref{fig:source_properties}, the y-axis labels correspond to different detector combinations, while the upper (grey) and lower (green) histograms correspond to the two different kick magnitudes ($\sigma=265$\,km/s and $\sigma=0$\,km/s) chosen to bracket SOBH population models.
In the LISA MBH case of Fig.~\ref{fig:source_properties_SPACE}, the same properties are plotted for the three population models and for a four-year and a ten-year LISA mission.
Note that the y-axis label now corresponds to different population models, and each half of the violin plot corresponds to a different mission duration: the upper (grey) half corresponds to the ``nominal'' four-year LISA mission, and the lower (green) half to an extended ten-year mission.
The detection rates, cumulative detected sources, and average SNR for each class of sources are shown in Fig.~\ref{fig:sources_per_year}, where sources are broken down into four distinct categories:
\begin{itemize}
\setlength\itemsep{0.1em}
\item [(i)] ``SOBH - TERR'': SOBH candidates detected only by a terrestrial network;
\item [(ii)] ``SOBH - MB'': SOBH candidates detected by both a terrestrial network and LISA (multiband);
\item [(iii)] ``MBH - PopIII'': MBH sources from the PopIII model (light seeds);
\item [(iv)] ``MBH - Q3'': MBH sources from both Q3 (heavy seeds) models, with shaded bands indicating the range of uncertainty on delays between galaxy mergers and BH mergers.
\end{itemize}
The year is shown across the bottom x-axis, while the detector network timeline is shown across the top x-axis using the acronyms defined in Table~\ref{tab:timeline}.
The solid lines and markers represent the mean values of the different quantities when considering each population model and optimistic/pessimistic detector configurations.
The error bars and shaded regions represent the most optimistic and most pessimistic scenarios, except in the case of the SNR in the third panel, where the upper and lower bounds are the optimistic (pessimistic) average plus (minus) the standard deviation of the optimistic (pessimistic) distribution.
There is no error for the PopIII model, as we only have one iteration of this model and only one noise curve for LISA.
The detection rates for SOBHs and MBHs in the different scenarios are also listed in Table~\ref{tab:rates}.
Roughly speaking, the power of a detector network to reveal new physics comes from a combination of (i) the number of sources the network can detect, and (ii) the typical quality of each signal (as measured by the SNR).
Figure~\ref{fig:sources_per_year} attempts to capture the zeroth-order difference between each detector configuration and population model in these two aspects. The punchline is that although LISA will be able, on average, to see events with much larger SNR, these are just a few compared to the abundant number of sources that ground-based detectors will observe (albeit at typically lower SNR). The precision of GR tests scales as $\rho^{-1}$ for a single event and improves approximately as $\sqrt{N}$ when stacking $N$ events~\cite{Berti:2011jz}, so it is not immediately obvious which set of observations will be best at testing GR. With our catalogs this question can be answered quantitatively. As we discuss below, ground-based and space-based detectors are complementary to each other.
\section{Parameter Estimation}\label{sec:stat}
In this section we describe the statistical methods we will use to carry out projections on the strength of tests of GR in the future, as well as our waveform model and the numerical implementation.
\subsection{Basics of Fisher Analysis}
The backbone of this work is built on the estimation of the posterior distributions that might be inferred based on our synthetic signals. Given a loud signal with a large enough SNR, the likelihood of the data, i.e., the probability that one would see a data set $d$ given a model with parameters $\vec{\theta}$, can be expanded about the maximum likelihood (ML) parameters $\vec{\theta}_{\mbox{\tiny ML}}$. This expansion taken out to second order results in the following approximate likelihood function (where we focus on a single detector for the moment)~\cite{Poisson:1995ef,Berti:2004bd}:
\begin{equation}\label{eq:approx_likelihood}
\mathcal{L} \propto \exp \left[ -\frac{1}{2} \Gamma_{ij} \Delta \theta^i \Delta \theta^j \right]\,,
\end{equation}
where $\Delta \theta^i = \theta^i_{\mbox{\tiny ML}} - \theta^i$ are deviations from the ML values, and $\Gamma_{ij}$ is the Fisher information matrix
\begin{equation}
\Gamma_{ij} = \left( \partial_{i} h | \partial_{j} h \right) |_{\mbox{\tiny ML}} \,.
\end{equation}
As before, $h$ is the template response function, and the noise-weighted inner product is given by
\begin{equation}\label{eq:inner_product}
(A|B) = 4 \Re \left[ \int \frac{ \tilde{A} \; \tilde{B}^{\ast}}{S_n (f)} df \right]\,,
\end{equation}
with $S_{n}(f)$ the noise power spectral density. By truncating the expansion at second order, we have effectively represented our posterior probability distribution as a multidimensional Gaussian with a covariance matrix given by $\Sigma^{ij} = \left(\Gamma^{-1}\right)^{ij}$. The variances of individual parameters can then be read off to be $\sigma^i = \sqrt{\Sigma^{ii}}$, where index summation is not implied.
\begin{table*}[t]
\begin{tabular*}{15 cm}{c|c|c|c|c|c|c}
\hline \hline
\makecell{Theory or \\ physical process} & \makecell{Physical \\ modification} & G/P & \makecell{PN \\ order} & $\beta$ & \makecell{Theory \\ parameter}& $b$ \\ \hline \hline
\makecell{Generic dipole \\ radiation} & \makecell{Dipole \\ radiation} & G & -1 &\eqref{eq:dipole_beta} & $\delta\dot{E}$ &-7 \\ \hline
\makecell{Einstein-dilaton\\Gauss-Bonnet}& \makecell{Dipole \\ radiation} & G & -1 &\eqref{eq:EdGB_beta}&$\sqrt{\alpha_{\mbox{\tiny EdGB}}} $&-7 \\ \hline
\makecell{Black Hole \\Evaporation} & \makecell{Extra \\ dimensions}& G & -4 & \eqref{eq:BHE_beta} & $\dot{M}$ & -13 \\ \hline
\makecell{Time varying $G$} & LPI & G & -4 & \eqref{eq:Gdot_beta}&$\dot{G}$ & -13 \\ \hline
\makecell{Massive \\ Graviton} & \makecell{Nonzero \\ graviton mass} & P & 1 &\eqref{eq:MG_beta}&$m_g$ &-3 \\ \hline
\makecell{dynamical \\Chern-Simons} & \makecell{Parity \\ violation} &G & 2 & \makecell{ \eqref{eq:dCS_beta}} &$\sqrt{\alpha_{\mbox{\tiny dCS}}} $ &-1 \\ \hline
\makecell{Noncommutative\\gravity} & \makecell{Lorentz \\ violation} & G & 2 & \eqref{eq:NC_beta} &$\sqrt{\Lambda}$ &-1 \\
\hline \hline
\end{tabular*}
\caption{
A summary of the theories examined in this work (adapted and updated from~\cite{Chamberlain:2017fjl,Carson:2019kkh}).
The columns (in order) list the theory in question (unless a generic deviation is being examined), the physical interpretation of the modification, the way the modification is introduced into the waveform, the PN order at which the modification is introduced, the equation specifying the ppE-theory mapping, the theory parameter being constrained, and the $b$ parameter in the ppE framework.
The practical difference between ``generation'' and ``propagation'' effects relates to how the modification is introduced into the waveform, as explained in Appendix~\ref{sec:imr_vs_ins}.
}
\label{tab:theory}
\end{table*}
In an attempt to capture the hard boundaries on the spin components (the dimensionless spin magnitudes $|\chi_i|$ and the in-plane spin component $\chi_p$ in GR should not exceed 1), we incorporate a Gaussian prior of width $1$ on these parameters. We do so by adding to the Fisher matrix diagonal terms of the form~\cite{Cutler:1994ys,Poisson:1995ef,Berti:2004bd}
\begin{equation}
\Gamma_{ij} \to \Gamma_{ij} + \Gamma_{ij}^0 \,,
\end{equation}
where $\Gamma_{ij}^0$ represents our prior distribution and is given by
\begin{equation}
\Gamma^0_{ij} = \delta_{i \chi_1} \delta_{j \chi_1} + \delta_{i \chi_2} \delta_{j \chi_2} + \delta_{i \chi_p} \delta_{j \chi_p}\,,
\end{equation}
where $\delta_{i \chi_a}$ is nonzero only when the index $i$ labels the parameter $\chi_a$.
In the case of multiple observations for a single source, we simply generalize the above results through sums. For example, the likelihood for a single event observed with $N$ detectors can be expanded quadratically via
\begin{equation}
\mathcal{L} \propto \exp \left[ -\frac{1}{2} \Delta \theta^i \Delta \theta^j \sum_k^N \Gamma_{ij,k} \right]\,,
\end{equation}
where the subscript $k$ labels the $k$-th detector, and we have assumed that the parameters $\vec{\theta}$ are globally defined.
This gives the final covariance matrix
\begin{equation}\label{eq:total_cov}
\Sigma^{ij} = \left( \bigl(\sum_k^N \Gamma_k + \Gamma^0\bigr)^{-1}\right)^{ij}\,.
\end{equation}
To improve readability, additional details on the calculation of the Fisher matrix are given in Appendix~\ref{app:Fisher}.
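To make the bookkeeping explicit, a short Python sketch of how the per-detector Fisher matrices, the spin prior, and the final covariance of Eq.~\eqref{eq:total_cov} could be combined is given below (our actual pipeline is written in \software{C++}; the names here are illustrative).
\begin{verbatim}
import numpy as np

def network_covariance(fishers, prior_indices, prior_width=1.0):
    # Sum the per-detector Fisher matrices Gamma_k for one event, add the
    # Gaussian prior on chi_1, chi_2, chi_p, and invert, as in Eq. (total_cov).
    gamma = np.sum(fishers, axis=0)
    gamma_prior = np.zeros_like(gamma)
    for i in prior_indices:
        gamma_prior[i, i] += 1.0 / prior_width ** 2
    sigma = np.linalg.inv(gamma + gamma_prior)
    one_sigma = np.sqrt(np.diag(sigma))  # 1-sigma error on each parameter
    return sigma, one_sigma
\end{verbatim}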
\subsection{Waveform Model for the Fisher Analysis}
For the Fisher studies carried out in this paper, we model binary merger waveforms using the phenomenological waveform model \software{IMRPhenomPv2}~\cite{Hannam:2013oca,Khan:2015jqa,Husa:2015iqa}, which allows us to capture certain spin precessional effects from inspiral until merger. The software used in this work was predominantly written from scratch, but the software library \software{LALSuite}~\cite{lalsuite} was used for comparison and to verify our implementation. For the actual parameter estimation calculation with LISA, we rescale the sensitivity curve to remove the sky-averaging numerical factor, and we account for the geometric factor of $\sqrt{3}/2$ manually in the LISA response function (``LISA -- non-sky-averaged'' in Fig.~\ref{fig:SN}), following Ref.~\cite{Tanay:2019knc}.
To fully specify the waveform produced by the \software{IMRPhenomPv2} template in GR, we need a 13-dimensional vector of parameters:
\begin{equation}
\vec{\lambda}_{\text{Pv2},\text{GR}} = \left[\alpha,\delta,\theta_{\rm L},\phi_{\rm L}, \phi_{\text{ref}}, t_{c,\text{ref}}, D_L, \mathcal{M}, \eta, \chi_1, \chi_2, \chi_\text{p}, \phi_\text{p}\right].
\end{equation}
The first 11 parameters are the same as those introduced for the \software{IMRPhenomD} model in Sec.~\ref{subsec:wf-pop}. The parameters $\chi_\text{p}$ and $\phi_\text{p}$ define the magnitude and direction of the in-plane component of the spin, defined as~\cite{LIGOScientific:2018mvr}
\begin{equation}\label{eq:chip}
\chi_\text{p} = \frac{1}{B_1 m_1^2} \text{max}\left( B_1 S_{1 \perp}, B_2 S_{2 \perp} \right) \,,
\end{equation}
where $B_1 = 2 + 3 q / 2$, $B_2 = 2+ 3/ ( 2q )$, $q = m_2/m_1 < 1$ is the mass ratio, and $S_{i\perp}$ is the projection of the spin of BH $i$ on the plane orthogonal to the orbital angular momentum $\mathbf{L}$.
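For reference, Eq.~\eqref{eq:chip} translates into the following few lines of Python (illustrative only; the inputs are assumed to satisfy $m_2 \leq m_1$, with the in-plane spin magnitudes $S_{i\perp}$ in the same units as $m_1^2$).
\begin{verbatim}
def chi_p(m1, m2, s1_perp, s2_perp):
    # In-plane spin parameter chi_p of Eq. (chip), with q = m2/m1 < 1.
    q = m2 / m1
    b1 = 2.0 + 1.5 * q
    b2 = 2.0 + 1.5 / q
    return max(b1 * s1_perp, b2 * s2_perp) / (b1 * m1 ** 2)
\end{verbatim}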
This \software{IMRPhenomPv2} model is then deformed through parameterized post-Einsteinian corrections to model generic, theory-independent modifications to GR~\cite{Yunes:2009ke,Cornish:2011ys,Sampson:2013lpa,Chatziioannou:2012rf}. We work with deformations of two types:
\begin{align}
\tilde{h}_{\text{gen}}(\vec{\lambda}_{\text{Pv2}},\beta ) &=
\begin{cases}
\tilde{h}_{\text{GR}} e^{i \beta \left(\mathcal{M} \pi f \right)^{b/3}} & f<0.018/m \\
\tilde{h}_{\text{GR}} & 0.018/m \leq f \,,\\
\end{cases}
\label{eq:ppE1}
\\
\tilde{h}_{\text{prop}}(\vec{\lambda}_{\text{Pv2}},\beta ) &=
\tilde{h}_{\text{GR}} e^{i \beta \left(\mathcal{M} \pi f \right)^{b/3}}\,,
\label{eq:ppE2}
\end{align}
where the first waveform $h_{\text{gen}}$ represents deviations from GR caused by modified generation mechanisms, and $h_{\text{prop}}$ represents deviations from GR caused by modified propagation mechanisms.
Details (including the motivation for these implementations, and the disparity of the results between the two types of deviations) are discussed in Appendix~\ref{sec:imr_vs_ins}.
As outlined there, differences are minor, and therefore from now on we will focus on the propagation mechanism, unless otherwise specified.
The parameter $\beta$ controls the magnitude of the deformation, and $b$ controls the type of deformation considered.
The ppE version of the \software{IMRPhenomPv2} model is then controlled by the parameters
\begin{equation}
\vec{\lambda}_{\text{Pv2},\mbox{\tiny ppE}} = \vec{\lambda}_{\text{Pv2},\text{GR}} \cup \{\beta\}.
\end{equation}
Recall that, in PN language~\cite{Blanchet:2013haa}, a term in the phase that is proportional to $\left( \pi \mathcal{M} f\right)^{b/3}$ is said to be of $(b+5)/2$ PN order. The waveform model above is identical to the \software{gIMR} model implemented in LAL and used by the LVC when performing parameterized PN tests of GR on GW data.
The main power of the ppE approach is its ability to map the ppE deformations to known theories of gravity. Table~\ref{tab:theory} presents the mapping between $(\beta,\,b)$ and the coupling constants in various theories of gravity (see Appendix~\ref{sec:theories} for a more detailed review of these mappings).
This table makes it clear that ppE deformations are not false degrees of freedom, in the language of Ref.~\cite{Chua:2020oxn}. Once a constraint is placed on $\beta$, one can easily map it to a constraint on the coupling constants of a given theory through Table~\ref{tab:theory}. This reparameterization is typically computationally trivial, and therefore it saves significant resources by reusing generic results, instead of repeating the analysis for every individual theory.
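As an illustration of how lightweight the ppE deformation is in practice, the sketch below applies the propagation-type correction of Eq.~\eqref{eq:ppE2} to a precomputed frequency-domain GR waveform and converts a PN order into the corresponding $b$ (Python, illustrative names; the chirp mass is assumed to be in geometric units of seconds).
\begin{verbatim}
import numpy as np

def ppe_propagation(h_gr, freqs, chirp_mass, beta, b):
    # Propagation-type ppE deformation of Eq. (ppE2) applied to h_GR(f).
    return h_gr * np.exp(1j * beta * (np.pi * chirp_mass * freqs) ** (b / 3.0))

def pn_order_to_b(n):
    # A phase term of n-th PN order corresponds to b = 2n - 5.
    return 2 * n - 5
\end{verbatim}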
\subsection{Numerical Implementation}
Common methods for calculating the requisite derivatives for the Fisher matrices typically involve either symbolic manipulation software, such as \software{Mathematica}~\cite{Mathematica}, or the use of numerical differentiation based on a finite difference scheme.
The calculation of the derivatives is always followed by some sort of numerical integration, which can be based on a fairly simple method such as Simpson's rule, or some more advanced integration algorithm that might appear prepackaged in \software{Mathematica}.
All of these methods have their respective benefits: symbolic manipulation and complex integration algorithms provide the most accuracy, while numerical differentiation and simpler integration schemes are typically much faster.
All methods also come with their respective drawbacks.
The maximally accurate method of adaptive integration and symbolic differentiation in \software{Mathematica} can be computationally taxing, while the fully numerical approach can be prone to large errors if the step sizes are not tuned correctly, both for the differentiation with respect to the source parameters $\vec{\theta}$ and for the frequency spacing in the Fisher matrix integrals.
On top of these aspects, using a program like \software{Mathematica} can be cumbersome at times, as interfacing with lower-level (or even scripting) languages adds an extra layer of complexity.
A combination of the two extremes implemented in one low-level language would be ideal, and it is the route chosen for this work.
While symbolic manipulation is not available in the language that we chose (\software{C++}), we instead implemented an automatic differentiation (AD) software package natively written in \software{C}/\software{C++}: \software{ADOL-C}~\cite{10.1145/229473.229474}.
The basic premise of AD (as implemented in \software{ADOL-C}) is to use operator overloading to apply the chain rule directly to the program itself.
By hard-coding a select number of derivatives on basic mathematical functions and operations (such as trigonometric functions, exponentials, addition, multiplication, etc.) and tracing out all the operations performed on an input parameter as it is transformed into an output parameter, \software{ADOL-C} can stitch together the derivative of the original function.
This results in derivatives that are exact to numerical precision.
As no final mathematical expression is output, this does not exactly constitute symbolic differentiation, but it perfectly fulfills our requirements.
To complete the Fisher calculation, we take our derivatives (exact up to floating-point error) and integrate them with a Gaussian quadrature scheme based on Gauss-Legendre polynomials, as in Ref.~\cite{Berti:2004bd}.
To calculate the weighting factors and the evaluation points, we have implemented a modified version of the algorithm found in Ref.~\cite{10.5555/1403886}.
While calculating the weights and abscissas typically incurs a high computational cost, we mitigate this cost by performing the calculation only once and reusing the results for each Fisher matrix.
This results in integration errors orders of magnitude lower than a typical ``Simpson's rule'' scheme, with the same computational speed per data point.
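A schematic Python version of such a quadrature rule for the noise-weighted inner product of Eq.~\eqref{eq:inner_product} is shown below (the nodes and weights would be computed once and cached; the waveform derivatives, PSD, and names are placeholders).
\begin{verbatim}
import numpy as np

def inner_product(a_of_f, b_of_f, psd_of_f, f_low, f_high, n_nodes=200):
    # Gauss-Legendre evaluation of 4 Re int a b* / S_n df, Eq. (inner_product).
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    freqs = 0.5 * (f_high - f_low) * nodes + 0.5 * (f_high + f_low)
    jacobian = 0.5 * (f_high - f_low)
    integrand = a_of_f(freqs) * np.conj(b_of_f(freqs)) / psd_of_f(freqs)
    return 4.0 * jacobian * np.real(np.sum(weights * integrand))
\end{verbatim}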
\section{Tests of General Relativity}
\label{sec:results}
In this section we summarize the main results of the analysis described above.
We begin with the constraints on generic modifications as a function of time for each population and network (Sec.~\ref{sec:general_mod}).
Next, we translate these into constraints on specific theories (Sec.~\ref{sec:specific_theories}, and in particular Table~\ref{tab:theory}).
\subsection{Constraints on Generic Modifications}\label{sec:general_mod}
Let us begin by showing in Fig.~\ref{fig:SMBH_time} the projected strength of constraints on modifications at various PN orders (shown in different panels) as a function of time.
Detector scenarios are labeled at the top, and the various astrophysical population classes are separated to facilitate visual comparisons.
Recall from Sec.~\ref{sec:detector_networks} that we consider three detector scenarios (S1, S2, and S3) bracketing funding uncertainties in the development of the future detector network.
The source classes include the following:
\begin{itemize}
\setlength\itemsep{0.1em}
\item[(i)] SOBH - TERR: SOBH populations as seen by only terrestrial networks;
\item[(ii)] SOBH - MB: SOBH events observed by both terrestrial networks and LISA;
\item[(iii)] MBHs: heavy-seed (Q3) and light-seed (PopIII) scenarios as seen by LISA.
\end{itemize}
When relevant, the error estimates shown in the figures below come from the different versions of the population model (i.e. SPOPS 265 vs SPOPS 0 and Q3delays vs Q3nodelays), as well as marginalization over the different estimates of the noise curves (i.e. the ``high'' and ``low'' sensitivity curve for Virgo and the ``128Mpc'' and ``80Mpc'' curves for KAGRA).
The uncertainties correspond to the minimum and maximum bounds from all the combinations we studied at that point in the timeline.
Figure~\ref{fig:SMBH_time} is one of the main results of this paper. It allows us to draw many conclusions, itemized below for ease of reading\footnote{Throughout this analysis, the $0^{\text{th}}$ PN order in the GW phase refers to the first (often called ``Newtonian'') term in the GR series, which is proportional to $v^{-5} \propto f^{-5/3}$. Consistently, negative (positive) PN orders identify modifications entering in at lower (higher) powers of $v$, relative to this leading-order term.}:
\begin{itemize}
\item [(i)] {\bf{Multiband sources yield the best constraints at negative PN orders.}} This is expected from previous work~\cite{Barausse:2016eii,Chamberlain:2017fjl}: the long, early (almost monochromatic) inspiral signals coming from LISA observations stringently constrain deviations at low frequencies.
\item [(ii)] {\bf{LISA MBH observations do better than terrestrial SOBH observations at negative PN orders.}} Constraints coming from the large-SNR MBH populations outperform the terrestrial networks at negative PN order, despite the large number of expected SOBH sources in the terrestrial network.
\item [(iii)] {\bf{Terrestrial SOBH observations can do slightly better than LISA MBH observations at positive PN orders.}} Positive PN order effects can be constrained better when the merger is in band. The terrestrial networks begin to benefit from the millions of sources in the SOBH catalogs, but the extremely high-SNR sources in the MBH catalogs mean that LISA constraints are still competitive with terrestrial constraints.
\item [(iv)] {\bf{Terrestrial network improvements make a big difference at negative PN orders.}} The different terrestrial network scenarios are widely separated for the negative PN effects, with the most optimistic S1 scenario vastly outperforming the S2 and S3 scenarios. This conclusion is robust with respect to astrophysical uncertainties in the population models.
\item [(v)] {\bf{Network improvements are less relevant at higher PN order.}} In this case the three different scenarios overlap considerably (but the S1 scenario maintains a clear edge over the other two).
\end{itemize}
\begin{figure*}
\includegraphics[width=\textwidth]{constraint_vs_time_SMBH_MBH.pdf}
\caption{
Constraints on modifications to GR at various PN orders as a function of time.
The colors represent different classes of populations (including SOBH terrestrial-only sources, SOBH multiband sources, MBH sources from the Q3 heavy-seed scenario, and MBH sources from the light-seed PopIII scenario). The bands in all of these scenarios -- except for PopIII -- correspond to astrophysical uncertainties: kick velocities $\sigma=265$\,km/s and $\sigma=0$\,km/s give the upper and lower bounds for SOBHs, while the inclusion of delays affects Q3 scenarios.
Greyscale patches at the top of each panel correspond to the observation period for each network, labeled across the top.
Multiband sources and MBHs yield strong constraints at negative PN orders.
Terrestrial-only SOBH sources begin to contribute substantially at positive PN orders for all detector networks, with the optimistic scenario S1 yielding the best constraints.
We overlay as horizontal lines the most stringent current bounds, where available and competitive, from pulsars~\cite{Nair:2020ggs} and LVC observations of GWs~\cite{LIGOScientific:2019fpa}.
}\label{fig:SMBH_time}
\end{figure*}
To understand some of these features, it can be illuminating to model the scaling behavior of bounds at different PN orders with respect to various source parameters.
Below we consider an analytical approximation that can reproduce most of the observed features. We first model constraints on individual sources, and then fold in the enhancement achieved by stacking multiple events.
\subsubsection{Analytical scaling: individual sources}\label{sec:ind_scaling}
A good first approximation is to ignore any covariances between parameters by treating the Fisher matrix as approximately diagonal, so that the bound on the generic ppE parameter $\beta$ is roughly
\begin{equation}
\sigma_{\beta\beta} \approx \left(\frac{1}{\Gamma_{\beta \beta}}\right)^{1/2} = \left[ 4 \Re \int_{f_{\text{low}}}^{f_{\text{high}}} \frac{(\pi \mathcal{M} f)^{2b/3} \left|\tilde{h}\right|^{2}}{S_n(f)} df\right]^{-1/2}\,,
\end{equation}
where $f_{\text{low}}$ and $f_{\text{high}}$ are the lower and upper bounds of integration. This expression can be simplified further by assuming white noise, so that $S_n(f)=S_0$ is constant, and by ignoring PN corrections to the amplitude, i.e. $|\tilde h|=A f^{-7/6}$, where $A\propto \mathcal{M}^{5/6}/D_{L}$ is an overall amplitude (see e.g.~\cite{Maggiore:1900zz}). This leads to
\begin{equation}
\sigma_{\beta\beta} \approx \left[ \frac{6 A^2}{S_0} \frac{\left( f_{\text{low}}^{2(b-2)/3}- f_{\text{high}}^{2(b-2)/3}\right) \left(\pi \mathcal{M} \right)^{2 b /3}}{2-b} \right]^{-1/2}\,,
\end{equation}
as long as $b \neq 2$.
We can further simplify the expression for $\sigma_{\beta \beta}$ by using the fact that, within the same approximations, the SNR scales like
\begin{align}
\rho^2 = 4 \Re\left[ \int_{f_{\text{low}}}^{f_{\text{high}}} \frac{h h^{\ast}}{S_n(f)} df \right] \approx \frac{3 A^2}{S_0} \left(f_{\text{low}}^{-4/3} - f_{\text{high}}^{-4/3}\right)\,,
\end{align}
which then leads to
\begin{equation}
\sigma_{\beta\beta} \approx \frac{(\pi \mathcal{M})^{-b/3}}{\rho} \left[ \left(1-\frac{b}{2}\right) \frac{f_{\text{low}}^{-4/3} - f_{\text{high}}^{-4/3} }{f_{\text{low}}^{2(b-2)/3} - f_{\text{high}}^{2(b-2)/3} }\right]^{1/2}\,.
\end{equation}
Assuming the higher frequency cutoff to be at the Schwarzschild ISCO, so that $f_{\text{high}} = f_{\rm ISCO} = 6^{-3/2} \eta^{3/5}/(\pi {\cal{M}})$, and expanding to leading order in the small quantity $\pi {\cal{M}} f_{\text{low}} \ll 1$, we finally obtain the approximate scaling
\begin{align}
\label{eq:single_source_scaling}
\sigma_{\beta\beta} &\approx \left[ 6^{b-2} \left(\frac{b}{2}-1\right)\right]^{1/2} \frac{\left(\pi {\cal{M}} f_{\text{low}}\right)^{-2/3}}{\eta^{(b-2)/5}\rho} \,, \quad b > 2\,,
\\
\sigma_{\beta\beta} &\approx \left(1-\frac{b}{2}\right)^{1/2} \frac{\left(\pi {\cal{M}} f_{\text{low}}\right)^{-b/3} }{\rho} \,, \quad b < 2\,.
\label{eq:single_source_scaling2}
\end{align}
The expressions above do not apply to the case $b=2$, as the integration would lead to a logarithmic scaling. Recall that $b>2$ corresponds to PN orders higher than $3.5$.
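As a sanity check of these leading-order expressions, the white-noise, diagonal-Fisher integrals above are simple enough to evaluate directly. The short script below is a minimal sketch, with fiducial values of $\mathcal{M}$, $\eta$, and $f_{\text{low}}$ chosen purely for illustration; it compares the closed-form scalings against a direct quadrature of $\Gamma_{\beta\beta}$ under the same assumptions.
\begin{verbatim}
# Check of the white-noise, diagonal-Fisher scalings above; the chirp mass,
# mass ratio, and f_low below are fiducial values chosen for illustration.
import numpy as np
from scipy.integrate import quad

Msun = 4.925491e-6                       # solar mass in seconds (G = c = 1)
Mc, eta = 30.0 * Msun, 0.25              # fiducial chirp mass and mass ratio
A, S0 = 1.0, 1.0                         # amplitude and flat PSD (cancel below)
f_low = 10.0                             # Hz
f_high = 6.0**-1.5 * eta**0.6 / (np.pi * Mc)   # Schwarzschild ISCO

def fisher_bb(b):
    """Gamma_bb = 4 int (pi Mc f)^(2b/3) |A f^(-7/6)|^2 / S0 df."""
    g = lambda f: (np.pi * Mc * f)**(2*b/3) * A**2 * f**(-7.0/3.0) / S0
    return 4.0 * quad(g, f_low, f_high)[0]

rho = np.sqrt(4.0 * quad(lambda f: A**2 * f**(-7.0/3.0) / S0,
                         f_low, f_high)[0])

def sigma_approx(b):
    """Leading-order expressions quoted in the text for b < 2 and b > 2."""
    if b < 2:
        return np.sqrt(1 - b/2) * (np.pi * Mc * f_low)**(-b/3) / rho
    return (np.sqrt(6.0**(b-2) * (b/2 - 1))
            * (np.pi * Mc * f_low)**(-2.0/3.0) / (eta**((b-2)/5) * rho))

for b in (-13, -7, -1, 3):               # sample ppE exponents
    exact = fisher_bb(b)**-0.5
    print(b, exact, sigma_approx(b), sigma_approx(b) / exact)
\end{verbatim}
The ratio printed in the last column should remain close to unity, quantifying how accurate the leading-order expansion in $\pi\mathcal{M} f_{\text{low}}$ is for the chosen fiducial values.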
As expected, all bounds on generic ppE parameters approximately scale as the inverse of the SNR, regardless of the PN order at which they enter. What is more interesting is that they also scale with the chirp mass as $\mathcal{M}^{-b/3}$ when $b < 2$, or as ${\cal{M}}^{-2/3}$ when $b>2$.
For a single event, we then have the ratio
\begin{align}\label{eq:ratio_scaling}
\frac{\sigma_{\beta\beta}^{\text{TERR}}}{\sigma_{\beta\beta}^{\text{MBH}}} &\approx
\frac{\rho^{\text{MBH}}}{\rho^{\text{TERR}}}
\left(\frac{{\cal{M}}^{\text{TERR}}}{{\cal{M}}^{\text{MBH}}}\right)^{-b/3}
\left(\frac{f_{\text{low}}^{\text{TERR}}}{f_{\text{low}}^{\text{MBH}}}\right)^{-b/3}\,,
\end{align}
for $b < 2$. Since ${\rho^{\text{MBH}}}/{\rho^{\text{TERR}}} \sim 10^{2}$, ${{\cal{M}}^{\text{TERR}}}/{{\cal{M}}^{\text{MBH}}} \sim 10^{-4}$
and ${f_{\text{low}}^{\text{TERR}}}/{f_{\text{low}}^{\text{MBH}}} \sim 10^{5}$,
we conclude that the ratio
${\sigma_{\beta\beta}^{\text{TERR}}}/{\sigma_{\beta\beta}^{\text{MBH}}} \approx 10^{3 - b/3}$.
This ratio is large (favoring MBH sources) when $b$ is negative and large, i.e. at highly negative PN orders, and slowly transitions to favor terrestrial, SOBH sources at positive PN orders, explaining the observations in items (ii) and (iii) above. The ratio degrades by approximately four orders of magnitude between -4 PN and 2 PN, in favor of the terrestrial network, and in agreement with Fig.~\ref{fig:SMBH_time}. This scaling with $b$ holds true regardless of the typical SNRs of the sources, as the ratio of SNRs depends on the ratio of the chirp masses of the sources, but not on the PN order.
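As a quick check of this number, only the $b$-dependent factor matters (the overall offset cancels in the comparison): with the ppE convention used here, $b = 2 n_{\text{PN}} - 5$, so that $b=-13$ at $-4$\,PN and $b=-1$ at $2$\,PN, and
\begin{equation}
\frac{\left.10^{-b/3}\right|_{b=-13}}{\left.10^{-b/3}\right|_{b=-1}} = 10^{13/3-1/3} = 10^{4}\,,
\end{equation}
which is the four-orders-of-magnitude change quoted above.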
Let us now consider the scaling of the bounds with PN order in more detail. Figure~\ref{fig:source_class_scaling} shows an averaged ratio $\sigma_{\beta\beta}^{\text{TERR}} / \sigma_{\beta\beta}^{\text{MBH}}$ computed from the full numerical simulations of Fig.~\ref{fig:SMBH_time} (solid blue line), together with the prediction in Eq.~\eqref{eq:ratio_scaling} that the ratio should scale as $\propto 10^{-b/3}$ (solid black line).
The numerical results (blue line, with an ``uncertainty'' quantified by the shaded blue region) were computed as follows. We first averaged the constraints for each population model at each PN order and for each detector network that concurrently observes with LISA; this allowed us to isolate the effect of the combination of source class and detector, neglecting the sometimes significant contribution from stacking. Ratios of the averaged quantities were then calculated for each combination of SOBH model (SPOPS 0 and SPOPS 265) and heavy-seeding MBH model (Q3delays and Q3nodelays) and for each detector network -- the CEKLext, CVKLext, and HLVKILext (optimistic and pessimistic) configurations -- resulting in $16$ combinations in all at each PN order, assuming an extended ten-year LISA mission duration.
The average of these combinations is shown as the solid blue line in Fig.~\ref{fig:source_class_scaling}, and the region bounded by the minimum and maximum ratios is shown shaded in blue. Observe that the scaling of Eq.~\eqref{eq:ratio_scaling} is consistent with the averaged ratio in the entire domain; the small dip at $b=-5$ (or $0$PN order) is due to degeneracies with the chirp mass, which the scaling relation does not account for.
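For readers who want to reproduce this bookkeeping, the ratio statistics can be assembled with a few lines of array manipulation. The sketch below uses synthetic stand-in numbers in place of the averaged Fisher constraints; the container shapes and the PN grid are assumptions made only for the demo, not the values entering Fig.~\ref{fig:source_class_scaling}.
\begin{verbatim}
# Sketch of the averaging behind the mean ratio and min/max band.
# The dictionary of averaged constraints is filled with synthetic numbers
# here; the real pipeline uses the per-PN-order averaged Fisher bounds.
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
pn_orders   = np.arange(-4.0, 2.5, 0.5)        # PN grid assumed for the demo
networks    = ["CEKLext", "CVKLext", "HLVKILext_opt", "HLVKILext_pess"]
sobh_models = ["SPOPS0", "SPOPS265"]
mbh_models  = ["Q3delays", "Q3nodelays"]

avg_bound = {(cls, net, mod): rng.lognormal(size=pn_orders.size)
             for cls, models in (("SOBH", sobh_models), ("MBH", mbh_models))
             for net, mod in product(networks, models)}

ratios = np.array([avg_bound[("SOBH", net, s)] / avg_bound[("MBH", net, m)]
                   for net, s, m in product(networks, sobh_models, mbh_models)])
mean_ratio = ratios.mean(axis=0)                 # solid line: mean of 16 ratios
band = (ratios.min(axis=0), ratios.max(axis=0))  # shaded region: min and max
print(ratios.shape)                              # (16, number of PN orders)
\end{verbatim}
Replacing the synthetic entries with the averaged constraints from the full pipeline would yield the solid blue line (mean) and shaded band (minimum and maximum) of Fig.~\ref{fig:source_class_scaling}.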
\begin{figure}
\includegraphics[width=\linewidth]{source_class_scaling_mean.pdf}
\caption{
Scaling relations discussed in Sec.~\ref{sec:ind_scaling}.
The ratio $\sigma_{\beta\beta}^{\text{TERR}}/\sigma_{\beta\beta}^{\text{MBH}}$, calculated from the full Fisher simulations including the realistic noise curves shown in Fig.~\ref{fig:SN} and the \software{IMRPhenomPv2} waveform, is shown in blue.
The empirically measured trend is derived from averaging the constraints from each terrestrial network and each population model, then calculating the ratios of every combination of terrestrial network and SOBH model against each MBH heavy-seeding model.
The blue line shows the mean ratio, and the blue shaded region is the area bounded by the maximum and minimum ratios.
The red line and the red shaded region refer instead to the ratio between the terrestrial-only constraints and the multiband constraints, i.e. $\sigma_{\beta\beta}^{\text{TERR}}/\sigma_{\beta\beta}^{\text{MB}}$.
For this class of sources, we calculate the ratio for each population model and detector network, one at a time.
That is, the terrestrial-only constraints from the S1 network derived from the SPOPS 265 model are compared against the multiband constraints from the S1 network and the SPOPS 265 model.
The trends predicted analytically in the text are shown in black and grey for MBH and multiband sources, respectively.
The trend lines we show for our predictions have been shifted along the y-axis to better compare them with the data.
}\label{fig:source_class_scaling}
\end{figure}
A similar scaling argument can be applied to compare multiband sources against the rest of the SOBH sources detected \emph{only} by the terrestrial network.
For these two classes of sources, the masses would be comparable. Let us focus on the impact of the early inspiral observation. The ratio of the SNRs in the LISA band
is of ${\cal{O}}(1)$ for typical sources, so we will neglect it for now.
Typical initial frequencies, however, are quite different: multiband SOBH sources that merge in the terrestrial band within several decades have initial frequencies of about $10^{-2}$\,Hz in the LISA band.
This makes the ratio $f_{\text{low}}^{\text{TERR}}/f_{\text{low}}^{\text{MB}}\sim 10^{3}$, and thus, the constraining power of multiband sources relative to that of terrestrial-only sources is approximately $\sigma_{\beta\beta}^{\text{TERR}}/\sigma_{\beta\beta}^{\text{MB}} \sim 10^{-b}$, which explains the scaling observed in item (i) above.
In Fig.~\ref{fig:source_class_scaling} we show the averaged ratio measured from our full simulations including the noise curves shown in Fig.~\ref{fig:SN} and the \software{IMRPhenomPv2} waveform (solid red line) as well as the $10^{-b}$ scaling derived from Eq.~\eqref{eq:ratio_scaling} (solid gray line). Again, we average the constraints from each population model at each PN order, assuming a ten-year LISA mission duration. However we do not consider every combination of population models and detector networks, but instead compare the multiband constraints from each network and SOBH model
against the terrestrial-only constraints from the same combination of terrestrial network and SOBH model. That is, we compare S1 terrestrial-only constraints derived from the SPOPS 265 model against the multiband constraints with the S1 network and from the SPOPS 265 model, repeating the procedure for each terrestrial network and population model. This yields 8 different combinations of population models and networks. The red line shows the average ratio for all the combinations considered, and the red-shaded region shows the area bounded by the maximum and minimum ratios. The simple analytical scaling reproduces the numerics quite well at negative PN orders, where the contribution to the constraint on the ppE parameter primarily comes from LISA observations. At positive PN orders the scaling relation breaks down for two main reasons: (i) our scaling relation neglects covariances, and (ii) the dominant source of information is no longer LISA's observation of the early inspiral, but the signal from the merger-ringdown seen by the terrestrial network.
\begin{figure}[t]
\includegraphics[width=\linewidth]{effective_N_Neff_spops_0_uniform_time_CEK_o.pdf}
\caption{
Empirically determined values of $N_{\text{eff}}$ for the CEK (Scenario 1) network and the SPOPS 0 catalog, derived from our full Fisher analysis, including the noise curves shown in Fig.~\ref{fig:SN} and the \software{IMRPhenomPv2} waveform.
The parameter $N_{\text{eff}}$ is defined as the number of sources needed from the full catalog in order to achieve a threshold constraint $\sigma_{\beta,\mbox{\tiny thr}}$, using the most constraining sources first.
Here we choose $\log_{10} \sigma_{\beta,\mbox{\tiny thr}} = 0.95 \log_{10} \sigma_{\beta}$, where $\sigma_{\beta}$ is the cumulative bound from the full Fisher analysis for the entire catalog.
The values of the threshold constraint (blue $+$ signs) are shown alongside the full constraint (red $\times$ signs) in the lower panel.
The number of required sources grows exponentially as a function of PN order: large catalogs benefit positive PN orders, but they are not as important for highly negative PN orders.
}\label{fig:Neff_pn}
\end{figure}
\begin{figure*}[t]
\includegraphics[width=\linewidth]{effective_N_SNR_MC_FULL_spops_0_uniform_time_CEK_o_annotated.jpg}
\caption{
Three different distributions in the $\log_{10} \mathcal{M}-\rho$ plane for the CEK network and the SPOPS 0 population model.
The blue heat map shows the distribution of the sources directly in the $\log_{10} \mathcal{M}-\rho$ plane, and it is the same for all PN orders.
The black contours show the constraints from individual sources.
The red scatter plots show the sources needed to obtain a threshold cumulative constraint $\log_{10} \sigma_{\beta,\mbox{\tiny thr}} = 0.95 \log_{10} \sigma_{\beta}$, where the shade of red indicates the strength of the individual bounds (in log base 10).
We utilized a $2\sigma$ Gaussian filter over the data to smooth out the noise and create more easily interpretable contour plots.
In conjunction with Fig.~\ref{fig:Neff_pn}, the growing number of scatter points as a function of PN order illustrates the increasing dependence of the cumulative constraint on the size of the source catalog.
Furthermore, the relation between chirp mass, SNR, and individual bound can be seen to shift significantly between positive and negative PN orders, agreeing with the commonly held intuition that lower-mass sources are better for constraining negative PN effects.
In more detail, negative PN orders benefit greatly from low-mass systems, with only a slight dependence on SNR, while positive PN order effects depend much more strongly on the SNR and only weakly on the chirp mass.
Finally, the range of individual bounds ($\sim 4$ orders of magnitude at negative PN orders and $\sim 2$ orders of magnitude at positive PN orders) helps to explain the different scaling relations between the cumulative bounds and the total number of sources.
}\label{fig:Neff_distribution}
\end{figure*}
\subsubsection{Analytical scaling: multiple sources}\label{sec:multiple_source_scaling}
Our analysis above helps to elucidate some of the trends observed in our numerical simulations by examining individual sources, but it fails to capture the power of combining observations to enhance constraints on modified theories of gravity.
Especially when considering terrestrial networks, this element is critical in predicting future constraints, and it is connected with our observations (iv) and (v) in the previous list.
To fully explore this facet of our predictions, we try to isolate the impact of the total number of sources on the final, cumulative constraint for a given network.
As shown in Eq.~\eqref{eq:PPEcombined} of Appendix~\ref{app:Fisher}, the combined constraint from an ensemble of simulated detections is
\begin{equation}
\sigma_{\beta}^2 = \left( \sum_i^{N} \frac{1}{\sigma_{\beta,i}^2} \right)^{-1}\,,
\end{equation}
where $\sigma_{\beta,i}$ is the variance on $\beta$ of the $i$-th source marginalized over the source-specific parameters, including all detectors and priors, and $N$ is the total number of sources in the ensemble.
The effect of the population on all the different combinations of detector networks and PN orders can be summarized by the distribution in $\sigma_{\beta,i}$, and we find empirically that they all lie somewhere in the spectrum bounded by the following extreme scenarios:
\begin{itemize}
\item [(a)] all the constraints contribute more or less equally,
\item [(b)] the total constraint is dominated by a single (or a few) observations.
\end{itemize}
When the individual variances are all approximately equal, the sum above reduces to $\sigma_{\beta} \approx \sigma_{\beta,i} / \sqrt{N}$, but when one constraint (say $\sigma_{\beta,\text{strongest}}$) dominates the ensemble, the sum reduces to $\sigma_{\beta} \approx \sigma_{\beta,\text{strongest}} $.
Naturally, in the case where all sources are more or less equally important, the power of large catalogs is maximized, and one would expect terrestrial networks observing hundreds of thousands to millions of sources to outperform networks with smaller populations, such as MBHs and multiband sources (everything else being equal).
When one observation dominates the cumulative bound because of loud SNR or source parameters that maximize the constraint, then large catalogs are not as important.
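The two limits are easy to see numerically. The snippet below is a minimal sketch with synthetic per-source bounds; it implements the stacking rule above and prints the combined bound in the two extreme scenarios.
\begin{verbatim}
# Minimal illustration of the stacking rule and its two limiting behaviors,
# using synthetic per-source bounds rather than catalog values.
import numpy as np

def combine(sigmas):
    """Cumulative bound: sigma_beta = (sum_i sigma_i^-2)^(-1/2)."""
    sigmas = np.asarray(sigmas, dtype=float)
    return np.sum(sigmas**-2)**-0.5

N = 10_000
equal = np.full(N, 1.0)                          # (a) comparable sources
dominated = np.concatenate(([1e-3], np.full(N - 1, 1.0)))  # (b) one loud source

print(combine(equal), 1.0 / np.sqrt(N))          # improves as 1/sqrt(N)
print(combine(dominated))                        # ~1e-3, set by the best source
\end{verbatim}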
In an attempt to quantify this effect, we can ask the following question: what is the minimum number of sources we can retain and still achieve a similar constraint on $\beta$?
To answer this question, we take all the variances calculated with our Fisher analysis for a given population model and detector network, and order them according to the strength of the constraint from each individual source.
With some threshold constraint set, we can work our way down the list, calculating the cumulative bound for the ``best'' $N'$ sources at a time.
We define $N_{\text{eff}}$ as the value of $N'$ such that our threshold constraint is achieved.
Comparing the values of $N_{\text{eff}}$ at each PN order for a single population model and network provides useful insights into how generic constraints benefit from the catalog size.
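Concretely, the procedure can be summarized in a few lines of code. The sketch below uses a synthetic stand-in catalog (the real analysis uses the marginalized Fisher bounds); it sorts the individual bounds, accumulates them from strongest to weakest, and returns the first $N'$ at which the threshold is met.
\begin{verbatim}
# Sketch of the N_eff bookkeeping: order the individual bounds from strongest
# to weakest and accumulate until the (log10) threshold is reached. The demo
# catalog is synthetic; the real analysis uses the marginalized Fisher bounds.
import numpy as np

def n_eff(sigmas, frac=0.95):
    sig = np.sort(np.asarray(sigmas, dtype=float))   # strongest first
    cumulative = np.cumsum(sig**-2)**-0.5            # bound from best N' sources
    sigma_full = cumulative[-1]
    # threshold in log10, as in the text (assumes bounds < 1, log10 sigma < 0)
    threshold = 10**(frac * np.log10(sigma_full))
    return int(np.argmax(cumulative <= threshold)) + 1

rng = np.random.default_rng(2)
demo = 1e-3 * rng.lognormal(mean=0.0, sigma=2.0, size=50_000)
print(n_eff(demo))
\end{verbatim}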
The upper panel of Fig.~\ref{fig:Neff_pn} shows the values of $N_{\text{eff}}$ calculated using the results from our full Fisher analysis, including the noise curves shown in Fig.~\ref{fig:SN} and the \software{IMRPhenomPv2} waveform, for the CEK network with the SPOPS 0 population model and a threshold constraint of $\log_{10}\sigma_{\beta,\mbox{\tiny thr}} = 0.95\log_{10}\sigma_{\beta}$. A pronounced trend is evident: positive PN orders require up to $\sim 10^5$ sources to retain a constraint equal to our threshold value, while the most negative PN effects only require a single, highly favorable source to reach the threshold value.
The lower panel of Fig.~\ref{fig:Neff_pn} merely shows the value of the full numerical constraint (red $\times$ signs) compared with our value of the threshold constraint (blue $+$ signs): by our own definition, the threshold constraint captures most (i.e.~95\%) of the full constraint.
Figure~\ref{fig:Neff_distribution} shows several different facets of the data relevant to the analysis of Fig.~\ref{fig:Neff_pn}.
For each PN order, we have plotted three different quantities: (i) a heat map of all the sources in the catalog in the $\log_{10} \mathcal{M}-\rho$ plane (shown in blue), which is the same for all PN orders, (ii) the contours showing the strength of the individual constraints from each source for the entire catalog (in black), and (iii) the subset of sources required to meet the threshold constraint $\sigma_{\beta,\mbox{\tiny thr}}$ (in red), where the shade corresponds to the strength of the individual bounds.
Several interesting conclusions can be drawn from this figure.
First, the relation between the constraint, the SNR, and the chirp mass changes as a function of PN order.
Highly positive PN orders benefit most from loud sources, with only a slight preference (if any) for lower-mass systems, while highly negative PN effects benefit greatly from low-mass systems, with only a slight preference for louder sources.
This agrees with our intuition about low-mass systems being most important for negative PN effects: in Eq.~\eqref{eq:single_source_scaling2} the chirp mass is raised to the $-b/3$ power, significantly enhancing the impact of low-mass systems for negative PN effects, while minimizing their impact for positive PN effects (assuming $b<2$). As these figures are constructed from our fully numerical data, these trends take into account the nonlinear relation between SNR and chirp mass, as these are not independent parameters when considering realistic population models.
Reasonably accurate population models are important in studies of this type, as bounds can be significantly altered by changing the distributions of source properties.
A second observation one can draw from Fig.~\ref{fig:Neff_distribution} relates to the change in the relation between SNR and individual constraints, which explains why the constraining-power gap between the different terrestrial network scenarios closes at positive PN orders (items (iv) and (v) from above).
The relaxation in the SNR-constraint correlation at high positive PN orders means that the huge boost in SNR from utilizing 3g detectors, as compared to a 2g only network, has only a moderate impact on the cumulative bound, \emph{if} the 2g network is sensitive enough to observe a comparable number of sources to the 3g network.
In the case of the Voyager network (HLVKI+), the much lower average SNR (shown in Fig.~\ref{fig:source_properties} and Fig.~\ref{fig:sources_per_year}) hinders the network's capability greatly at negative PN orders, but only minimally at positive PN orders, as compared with the CEK or CVK networks shown in Fig.~\ref{fig:SMBH_time}.
This is because the total number of sources observed in Scenario 3 is comparable to that in the other scenarios, differing by only $\sim 30\%$, which allows HLVKI+ to maintain competitive constraining power through a comparably sized catalog.
A third observation that we can make about Fig.~\ref{fig:Neff_distribution} is that the range in individual bounds is also clearly PN-order dependent. The most negative PN corrections change by $\sim 4$ orders of magnitude, while the most positive PN corrections only change by $\sim 2$ orders of magnitude.
This change in constraint range lends credence to the interpretation outlined above. When constraints are clustered closer together and contribute equally, the cumulative constraint scales strongly with the number of sources. The opposite is true when the clustering is weaker and one constraint dominates over the whole ensemble.
The analysis performed here, coupled with that done in Sec.~\ref{sec:ind_scaling}, further clarifies the trend observed in items (ii) and (iii).
The individual-source scaling, which favors LISA at negative PN orders, is compounded by the significant benefit that terrestrial networks draw from large catalogs at positive PN orders.
\subsection{Specific Theories}\label{sec:specific_theories}
We can now recast the constraints on generic ppE parameters from Sec.~\ref{sec:general_mod} into constraints on relevant quantities in a variety of specific modified gravity theories.
We list and categorize these theories in Table~\ref{tab:theory}.
We will utilize the scaling analysis outlined in the previous section, with the additional step
\begin{equation}
\Gamma_{\text{theory}} = \mathcal{J}^T \cdot \Gamma_{\mbox{\tiny ppE}} \cdot \mathcal{J}\,,
\end{equation}
where $\mathcal{J}$ is the Jacobian $\partial \vec{\theta}_{\mbox{\tiny ppE}}/\partial \vec{\theta}_{\text{theory}}$ of the transformation, and ${(\cdot)}^T$ is the transpose operation.
In our case, the Jacobian is diagonal. This is because the off-diagonal components are all proportional to the theory-specific modifying parameter;
as we inject with GR models, these are always set to zero for any specific beyond-GR theory.
We can then write
\begin{equation}
\Gamma_{\alpha_\text{theory}\alpha_\text{theory}} = \left(\frac{\partial \beta}{\partial \alpha_{\text{theory}}}\right)^2 \Gamma_{\beta\beta}\,,
\end{equation}
where $\beta$ is the generic ppE modification at the corresponding PN order for a given theory, and $\alpha_{\text{theory}}$ is the theory-specific modifying parameter.
The interested reader can find the mappings $\beta(\alpha_{\text{theory}})$ between each theory and the ppE formalism, and more in-depth explanations of their motivations, in Appendix~\ref{sec:theories}.
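Within the same diagonal approximation used throughout, this amounts to a one-dimensional error propagation:
\begin{equation}
\Delta \alpha_{\text{theory}} = \Gamma_{\alpha_\text{theory}\alpha_\text{theory}}^{-1/2} = \left|\frac{\partial \beta}{\partial \alpha_{\text{theory}}}\right|^{-1} \Delta\beta\,,
\end{equation}
so the single-source scaling estimates of Sec.~\ref{sec:ind_scaling} carry over to each theory, multiplied by whatever powers of $\mathcal{M}$, $\eta$, redshift, and spins appear in the corresponding Jacobian.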
This mapping between ppE constraints and theory-specific constraints changes the scaling relations between the theory-specific bound and different source parameters, with many of the conclusions made by examining the generic constraints changing quite drastically. This is because the Jacobian typically depends on source parameters, like $\mathcal{M}$, $\eta$, $\chi_1$, and $\chi_2$, and this can strongly enhance the constraining power of one population of BBHs over another. No general trend can be ascertained across multiple modified theories since each coupling is different, so we will examine each theory in turn. As we will see, constraints on different theory-specific parameters scale differently with SNR, chirp mass, etcetera, impacting how the cumulative bound improves with stacking and how dependent the bound is on small numbers of loud sources.
To examine this in more detail, we will focus on a single detector network (HLVKIO8) with a single population model (SPOPS 0) to try and isolate the pertinent effects for each theory.
\begin{figure}[ht!]
\includegraphics[width=\linewidth]{constraint_vs_time_SMBH_MBH_Dipole.pdf}
\caption{
Projected cumulative constraints on generic dipolar radiation for the detector networks and population models examined in this paper.
The multiband sources outperform all other source classes by at least $\sim 2$ orders of magnitude, with MBH sources and the most optimistic terrestrial scenario performing comparably.
}\label{fig:dipole}
\end{figure}
\begin{figure*}
\includegraphics[width=\linewidth]{scaling_dipole.pdf}
\caption{
Result of the scaling analysis outlined in Sec.~\ref{sec:res_dipole} performed on the data synthesized with the HLVKIO8 network and the SPOPS 0 population.
The left panel shows a heat map of the constraint on $\delta \dot{E}$ versus the SNR of the source.
The solid blue lines correspond to the strongest and weakest single-source constraint, and the area between these two bounds is shown in hatching.
The cumulative bound from the entire catalog is shown as the solid green line.
The power-law fit to the data in the left panel is shown as the solid black curve, and our prediction for the scaling is shown as the solid red curve.
The right panel shows the density of the constraint versus the chirp mass for three distinct slices of the catalog, with SNR ranges of 10 to 11 (blue), 20 to 22 (green), and 50 to 55 (red); these ranges are highlighted in the left panel.
Empirical trends are shown in black and predicted trends in red.
There is a noticeable transition point in the distribution, so low-mass and high-mass systems were analyzed separately.
The powers used in all trend lines are shown in the legend.
For trend lines, the (logarithmic) offset for the predicted scaling relations has been adjusted to coincide with the empirically fit offset, to better compare the slopes of the trends.
Of particular interest is the strong trend relating the SNR and the bound, as well as the tight correlation between chirp mass and constraint for low-mass systems, which seems to taper off for high-mass systems.
}\label{fig:dipole_scaling}
\end{figure*}
\subsubsection{Generic Dipole Radiation}\label{sec:res_dipole}
Dipole radiation is absent in GR, since in Einstein's theory GWs are sourced by the time variation of the quadrupole moment of the stress-energy tensor.
Therefore, any observation of dipole radiation would indicate a departure from GR.
Dipole radiation must be sourced by additional channels of energy loss, due to the presence of new (scalar, vector or tensor) propagating degrees of freedom.
By the balance law, these new channels of energy loss affect the time variation of the binding energy $E$,
and therefore dipole effects generically enter the GW Fourier phase at $-1$ PN (to leading order)~\cite{Chamberlain:2017fjl}.
While many theories predict specific forms of dipole radiation, we can constrain any process leading to dipole radiation by the time rate of change of the binding energy, $\dot{E}$.
As we show in Appendix~\ref{sec:theories}, the Jacobian in this specific class of modifications scales as
\begin{equation}
\left(\frac{\partial \beta}{\partial \delta \dot{E}}\right)^{2} \propto \eta^{4/5}\,,
\end{equation}
where $\delta \dot{E} =\dot{E} - \dot{E}_{\mbox{\tiny GR}}$ is the variation in $\dot{E}$ due to dipole radiation: see Eq.~\eqref{eq:dip_energy}.
This implies that the scaling relations found earlier for generic ppE modifications should not change much when we translate them into constraints on dipole radiation.
These constraints are shown in Fig.~\ref{fig:dipole}.
As dipole radiation is a negative PN effect, multiband sources will contribute significantly, improving bounds by at least two orders of magnitude over any other detector network or population class.
LISA observations of MBH binaries are still highly competitive, outpacing the terrestrial-only network in all cases except the most optimistic detector schedule.
Furthermore, the bounds vary widely among the different terrestrial networks, because their typical SNRs differ substantially.
After thirty years of GW measurements, our models suggest an improvement of 3--9 orders of magnitude over existing constraints, depending on source populations and detector characteristics, but a 9-orders-of-magnitude improvement is only possible with multiband events.
All of these trends are consistent with the analysis presented in Sec.~\ref{sec:ind_scaling}, with constraints on this negative PN order effect benefitting from the low initial frequency and low chirp masses of LISA multiband sources.
This is because dipole radiation approximately scales like a generic ppE modification in terms of SNR and chirp mass, meaning that most of the analysis from above is still valid in this case.
To better understand the numerical results presented in Fig.~\ref{fig:dipole}, we can look at our analytical approximation of $\Delta \delta \dot{E}$ using the methods from the previous section.
After mapping the bound on the generic $\beta$ to $\delta \dot{E}$, expanding in $\epsilon = \mathcal{M} f_{\text{low}}$, and setting the upper frequency to the ISCO frequency, we have the approximation
\begin{equation}
\Delta\delta \dot{E} \approx \frac{ 112 \sqrt{2} }{ \eta^{2/5} } \frac{ \left(\mathcal{M}\pi f_{\text{low}} \right)^{7/3} }{\rho} \,.
\end{equation}
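To make the formula concrete, a single-source evaluation looks as follows; the chirp mass, mass ratio, starting frequency, and SNR are fiducial placeholders, not values drawn from the catalogs.
\begin{verbatim}
# Illustrative single-source evaluation of the approximate dipole bound above;
# the chirp mass, mass ratio, starting frequency, and SNR are placeholders.
import numpy as np

Msun  = 4.925491e-6                  # solar mass in seconds (G = c = 1)
Mc    = 25.0 * Msun                  # fiducial detector-frame chirp mass
eta   = 0.24                         # fiducial symmetric mass ratio
f_low = 10.0                         # Hz
rho   = 25.0                         # fiducial SNR

dEdot = 112.0 * np.sqrt(2.0) / eta**0.4 * (np.pi * Mc * f_low)**(7.0/3.0) / rho
print(f"approximate bound on delta Edot: {dEdot:.2e}")
\end{verbatim}
The steep $(\pi\mathcal{M} f_{\text{low}})^{7/3}$ dependence is what makes multiband sources, with their much smaller $f_{\text{low}}$, so powerful for this effect.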
Results related to this approximation are shown in Fig.~\ref{fig:dipole_scaling}.
The left panel shows a density map of the bounds on $\delta \dot{E}$ versus the SNR of the source, with a numerical fit overlaid showing the SNR scaling trend in black.
Our $1/\rho$ scaling prediction, shown in red, matches the numerics very well.
The right panel shows a density plot of the bound on $\delta \dot{E}$ versus chirp mass.
To isolate the impact of the chirp mass on the attainable bound on $\delta \dot{E}$, we restrict ourselves to thin slices in different ranges of SNR (the ranges are highlighted in the left panel).
This is to insulate our results from the fact that the SNR typically scales with the mass, causing a nonlinear relationship between the mass, SNR, and constraint.
To ensure that the scaling does not change for different ranges of SNR, we have separately analyzed three different ranges.
For lower mass systems, we see good agreement with the analytically predicted $\mathcal{M}^{7/3}$ scaling relationship, but around $\mathcal{M}\sim 30 M_{\odot}$ we see a sharp transition, and our approximations fail.
The impact of these different scaling relations can be seen in the range of constraints and the cumulative constraint shown in Fig.~\ref{fig:dipole_scaling}.
In the left panel, we have plotted the strongest and weakest constraint as solid blue lines, bounding the parameter space of single-source bounds.
The cumulative bound for this one network-population combination is shown as a green line, near the bottom of the panel.
As is evident in the figure, the improvement of the cumulative bound over the most stringent bound is marginal.
This can be explained by the huge range of single-source bounds, covering five orders of magnitude, consistent with the analysis performed in Sec.~\ref{sec:multiple_source_scaling}.
\begin{figure}
\includegraphics[width=\linewidth]{constraint_vs_time_SMBH_MBH_TimeVaryingG.pdf}
\caption{
Projected cumulative constraints on the time derivative of the gravitational constant $\dot{G}$ for the detector networks and population models examined in this paper.
Multiband sources outperform all other source classes by $\sim 1$--$2$ orders of magnitude, with MBH sources performing the next best.
SOBHs observed by the terrestrial network alone perform the worst, with Scenario 1 outperforming Scenarios 2 and 3 due to the higher SNR of its observations.
}\label{fig:varG}
\end{figure}
\begin{figure*}
\includegraphics[width=\linewidth]{scaling_tvg.pdf}
\caption{
Result of the scaling analysis outlined in Sec.~\ref{sec:res_varg} performed on the data synthesized with the HLVKIO8 network and the SPOPS 0 population.
The plotting style is the same as in Fig.~\ref{fig:dipole_scaling}.
The left panel shows a heat map of the constraint on $\dot{G}$ versus the SNR of the source.
The right panel shows the density of the constraint versus $\mathcal{M}$, with empirical trends shown in black and predicted trends shown in red.
Again, the strong trend relating the SNR and the bound agrees well with the prediction, and there seems to be a tight correlation between $\mathcal{M}$ and constraint, well approximated by our analysis in Sec.~\ref{sec:res_varg}.
}\label{fig:TVG_scaling}
\end{figure*}
\subsubsection{Local Position Invariance -- Variable G Theories}\label{sec:res_varg}
If the gravitational constant $G$ were time-dependent, we would observe anomalous acceleration in the inspiral of BBHs~\cite{Yunes:2009bv}.
At leading order, this affects the GW Fourier phase at $-4$ PN.
From the transformation in Appendix~\ref{sec:theories}, the Jacobian to map from the generic ppE modification to the parameter $\dot{G}$ itself is
\begin{equation}
\left(\frac{\partial \beta}{\partial \dot{G}}\right)^{2} \propto \left(\frac{\mathcal{M}}{1+z}\right)^2\,.
\end{equation}
The mapping now includes a chirp mass-dependent factor, which can vary by orders of magnitude between source classes.
From this scaling with chirp mass, and the fact that this modification enters at a highly negative PN order ($-4$PN), we expect that the best sources will be those that are seen at the widest separations (like multi-band sources) and have the largest chirp mass.
Our predictions for the constraints on $\dot{G}$ can be seen in Fig.~\ref{fig:varG}.
Multiband constraints again outperform all other source classes and detector configurations, as expected.
However, because the Jacobian is proportional to $\mathcal{M}^2$, MBH sources seen by LISA are not far behind.
Comparatively, the terrestrial-only bounds trail significantly behind both of these source classes, by as much as three orders of magnitude.
There is also a wide separation between the three different terrestrial-only observation scenarios.
This suggests that the cumulative bound does not benefit too much from large catalogs, but instead is dominated by a small number of favorable observations.
A variable $G$ modification presents the first departure from our analysis on the scaling of generic results. MBH sources receive a sizeable benefit over the SOBH sources due to the Jacobian factor between parameters.
Consequently, constraints on this particular modification benefit greatly from the inclusion of LISA in the GW network, both in the form of multiband and MBH observations.
Even after thirty more years of GW detections with the most ideal networks, our models indicate that the bounds will still fall far short of the current constraints on $\dot{G}$ coming from cosmology. These constraints, however, are qualitatively different from those considered here. Cosmological constraints assume a Newton constant that is linearly dependent on time in the entire cosmological history of the Universe, i.e.~ that $G \to G(t) \sim G_{\rm BBN} + \dot{G}_{\rm BBN} t$, where $t$ is time from the Big Bang until today, and where $G_{\rm BBN}$ and $\dot{G}_{\rm BBN}$ are constants. Our $\dot{G}$ constraints only assume a linear time dependence \emph{near} the BBH merger, i.e.~ that $G \to G(t) \sim G_{t_{c}} + \dot{G}_{t_{c}} (t - t_{c})$ for $t < t_{c}$ where $t_{c}$ is the time of coalescence, $G_{t_{c}}$ and $\dot{G}_{t_{c}}$ are constants, and $G(t)$ relaxes back to $G_{t_{c}}$ in a few horizon light-crossing times. In our stacking analysis, we are implicitly assuming that $\dot{G}_{t_{c}}$ is the same for all sources in all catalogs. Therefore, it is not strictly fair to compare cosmological and GW bounds.
We can again repeat the analysis from Sec.~\ref{sec:general_mod} to better understand the relationship between the bound on $\dot{G}$ and various source parameters.
Making the approximations outlined in Sec.~\ref{sec:ind_scaling}, we can approximately rewrite the constraint on $\dot{G}$ as
\begin{equation}
\Delta \dot{G} \approx \frac{32768}{5} \sqrt{ \frac{6}{5} } \frac{ \left( \pi \mathcal{M} f_{\text{low}} \right)^{13/3}(1+z) }{\mathcal{M} \rho}\,,
\end{equation}
where we obtain the expected extra dependence on the chirp mass from the Jacobian transformation.
Results pertinent to this approximation are shown in Fig.~\ref{fig:TVG_scaling}.
The left panel shows a heat map of the $\dot{G}$ constraints against the SNR for the sources in the HLVKIO8 network and the SPOPS 0 model.
The right panel shows a heat map of the constraint on $\dot{G}$ against the chirp mass, for different slices in the SNR.
Notably, the scaling of the constraint on $\dot{G}$ with respect to the chirp mass matches well with our prediction of $\mathcal{M}^{10/3}$, which differs from the generic constraint by a factor of $\mathcal{M}^{-1}$ due to the Jacobian factor.
Again, we see a large spread in the magnitude of the constraint, ranging over $\sim 6$ orders of magnitude.
This leads to a marginal improvement of the cumulative bound over the strongest bound from a single observation, further hampering the terrestrial-only networks, in agreement with our analysis in Sec.~\ref{sec:multiple_source_scaling}.
After accounting for the modified scaling due to the Jacobian, the scaling relations and techniques from Sec.~\ref{sec:general_mod} generally hold for predicting constraints on variable $G$ theories.
\begin{figure}
\includegraphics[width=\linewidth]{constraint_vs_time_SMBH_MBH_NonComm.pdf}
\caption{
Projected cumulative constraints on $\sqrt{\Lambda}$ for the detector networks and population models examined in this paper.
Terrestrial-only catalogs, with their populations of millions of sources, seem to dominate any future constraint on this particular deviation, with an improvement by 1--2 orders of magnitude over any other source classification.
This conclusion seems independent of the particular terrestrial scenario we pick, with comparable performance from all three.
}\label{fig:noncom}
\end{figure}
\begin{figure*}
\includegraphics[width=\linewidth]{scaling_nc.pdf}
\caption{
Result of the scaling analysis outlined in Sec.~\ref{sec:res_LV} performed on the data synthesized with the HLVKIO8 network and the SPOPS 0 population.
The plotting style is the same as in Fig.~\ref{fig:dipole_scaling}.
The left panel shows a heat map of the constraint on $\sqrt{\Lambda}$ versus the SNR of the source.
The right panel shows the density of the constraint versus the chirp mass, with empirical trends shown in black and predicted trends shown in red.
The small range of constraints from the catalog lead to considerable enhancements of the cumulative bound when stacking observations, and the weak scaling with chirp mass and moderate scaling with SNR further benefit SOBH sources over other source classes.
}\label{fig:NC_scaling}
\end{figure*}
\subsubsection{Lorentz Violation -- Noncommutative Gravity}\label{sec:res_LV}
If a commutation relation is enforced between momentum and position, as in quantum mechanics, the leading-order effect on the GW phase occurs at 2PN.
Predictions for the constraints on the scale of the noncommutative relation are shown in Fig.~\ref{fig:noncom}.
The Jacobian of the transformation found in Appendix~\ref{sec:theories} is given by
\begin{equation}
\left(\frac{\partial \beta}{\partial \Lambda^2 }\right)^{2} \propto \eta^{-4/5} ( 2 \eta - 1) \,.
\end{equation}
The Jacobian only introduces source-dependent terms of $\order{1}$, and as such, bounds on $\Lambda^2$ should generally follow the scaling trends found in Sec.~\ref{sec:general_mod}.
Given that this modification comes at 2PN, we would expect the terrestrial-only source catalogs to constrain non-commutative gravity the strongest: the power of large catalogs is enhanced, and the effect of LISA observations of the early inspiral is less relevant for positive PN effects.
As expected, Fig.~\ref{fig:noncom} shows that the terrestrial networks contribute the most to any future bound on non-commutative gravity.
Even when just considering the three terrestrial-only scenarios, the differences are minimal.
Furthermore, the other source classes (MBH and multiband) perform almost identically.
All of these trends further solidify our conclusion that the key to future constraints on this particular modification is large catalogs of observations, as opposed to single, favorable sources.
Future constraints from all source classes should improve by 1--3 orders of magnitude over present constraints.
Continuing our analysis to explore the more subtle trends we are seeing, we can repeat the analysis outlined in Sec.~\ref{sec:general_mod}.
This gives us the following approximation for the variance on $\sqrt{\Lambda}$:
\begin{equation}
\Delta \sqrt{\Lambda} \approx \left( \frac{32768}{1875} \right)^{1/8} \frac{\eta^{1/5} \left(\pi \mathcal{M} f_{\text{low}}\right)^{1/12}}{ \left( 1-2 \eta \right)^{1/4} \rho^{1/4}}\,.
\end{equation}
Although the bound on $\Lambda^2$ scales as expected from Sec.~\ref{sec:general_mod}, approximating our bound on $\sqrt{\Lambda}$ given our constraint on $\Lambda^2$ introduces modifications to the trends we would not have expected from a straightforward extrapolation from constraints on generic modifications.
Namely, we see that the bound should generically scale with the SNR as $\rho^{-1/4}$, and the constraint should scale with the chirp mass as $\mathcal{M}^{1/12}$.
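These powers follow directly from the approximations of Sec.~\ref{sec:ind_scaling}: a 2PN modification corresponds to $b=-1$, so the generic bound scales as $\Delta\beta \propto (\pi\mathcal{M} f_{\text{low}})^{1/3}/\rho$; since the Jacobian is of $\order{1}$, $\Delta\Lambda^2$ inherits this scaling, and the fourth root needed to convert it into a bound on $\sqrt{\Lambda}$ gives
\begin{equation}
\Delta\sqrt{\Lambda} \sim \left(\Delta\Lambda^2\right)^{1/4} \propto \frac{\left(\pi\mathcal{M} f_{\text{low}}\right)^{1/12}}{\rho^{1/4}}\,.
\end{equation}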
Pertinent trends related to this approximation are shown in Fig.~\ref{fig:NC_scaling}, where the HLVKIO8 network and the SPOPS 0 model were used to do the analysis.
The left panel shows a heat map in the constraint-SNR plane, with the extremal, single source bounds shown as solid blue lines.
The cumulative bound for only this network-population combination is shown as the solid green line.
Our predicted trend for the constraint with respect to the SNR is shown in red, while the empirically determined trend is shown in black.
The right panel shows a heat map in the constraint-chirp mass plane, where we have separately analyzed three different slices of sources with specific SNRs, denoted by the colors red, blue, and green.
In the left panel of Fig.~\ref{fig:NC_scaling}, we can see that our approximation for the relation between the constraint and the SNR does fairly well relative to the empirically determined trend.
Furthermore, we see that the range of constraints is considerably tighter than even the generic constraints at 2PN.
The largest and smallest bound for non-commutative gravity are separated by one order of magnitude, leading to a significant improvement of the cumulative bound over the tightest single-observation bound.
This feature also helps to explain the discrepancy between LISA sources and terrestrial-only sources in Fig.~\ref{fig:noncom}.
In the right panel of Fig.~\ref{fig:NC_scaling}, we see much wider distributions in the constraint-chirp mass plane, as compared to the previously analyzed modifications.
Our predicted trends are moderately accurate, although with noticeably lower accuracy.
This is consistent with the fact that the constraint scales very weakly with chirp mass ($\mathcal{M}^{1/12}$), so other correlations widen the distribution and complicate the relation.
\begin{figure}
\includegraphics[width=\linewidth]{constraint_vs_time_SMBH_MBH_dCS.pdf}
\caption{
Projected cumulative constraints on $\sqrt{\alpha_{\mbox{\tiny dCS}}}$ for the detector networks and population models examined in this paper.
Terrestrial-only catalogs, with their populations of millions of sources, dominate any future constraint on this particular deviation, with an improvement of 2--5 orders of magnitude over the other source classifications.
This conclusion is independent of the terrestrial scenario we pick, with comparable performance from all three.
Multiband sources, with their low chirp masses, seem to perform the next best.
}\label{fig:dcs}
\end{figure}
\begin{figure*}
\includegraphics[width=\linewidth]{scaling_dcs.pdf}
\caption{
Result of the scaling analysis outlined in Sec.~\ref{sec:res_PV} performed on the data synthesized with the HLVKIO8 network and the SPOPS 0 population.
The plotting style is the same as in Fig.~\ref{fig:dipole_scaling}.
The left panel shows a heat map of the constraint on $ \sqrt{\alpha_{\mbox{\tiny dCS}}}$ versus the SNR of the source.
The right panel shows the density of the constraint versus the chirp mass, with empirical trends shown in black and predicted trends shown in red.
Our prediction for the SNR scaling is considerably less accurate than for previous theories, presumably from covariances with other source parameters and competing scaling trends with the chirp mass.
The tight range of constraints and large improvement of the cumulative bound over all other single source constraints, seen in the left panel, indicate strong dependence on the total number of sources in the catalog.
}\label{fig:dCS_scaling}
\end{figure*}
\subsubsection{Parity Violation -- Dynamical Chern Simons}\label{sec:res_PV}
One of the fundamental tenets of GR is the parity invariance of the gravitational action.
Dynamical Chern-Simons (dCS) gravity includes a parity-odd, second-order curvature term in the action, known as the Pontryagin density, coupled to a scalar field through a dimensionful parameter $\alpha_{\mbox{\tiny dCS}}$.
The fact that the Pontryagin density is parity-odd necessarily restricts the scalar field to also be odd in vacuum, making it an axial field.
The leading-order effect in the GW phase sourced by these deviations enters at 2PN order.
In Appendix~\ref{sec:theories} we recall that the following mapping holds:
\begin{equation}\label{eq:dcs_jac}
\left(\frac{\partial \beta}{\partial \alpha_{\mbox{\tiny dCS}}^2}\right)^{2} \propto \frac{\left[ \hat{m}_1 s_2^{\mbox{\tiny dCS}} - \hat{m}_2 s_1^{\mbox{\tiny dCS}} \right]^4 \eta^{8/5} }{(1+z)^{-8} \mathcal{M}^8 } \,,
\end{equation}
where $s_i^{\mbox{\tiny dCS}}$ is the BH sensitivity, defined in Eq.~\eqref{eq:dCS_sens}, and $\hat{m}_i = m_i/\mathcal{M} = \eta^{-3/5}(1\pm \sqrt{1-4\eta}) / 2$ for the larger ($+$) and smaller ($-$) mass.
Here, we have only shown the Jacobian to leading order in spin, and we have transformed the mass components to explicitly show the chirp mass dependence.
As the mass ratio and spin factors are bounded to a magnitude of $\order{1}$, the dependence of the Jacobian on $\mathcal{M}^{-8}$ should have the most significant effect on $\Delta\alpha_{\mbox{\tiny dCS}}$ and \emph{strongly} favor low-mass systems, suggesting that SOBHs would be considerably more effective than MBHs.
Furthermore, as this is a positive PN modification, we would expect to see a sizeable benefit from large catalogs, given the analysis in Sec.~\ref{sec:multiple_source_scaling}, and the impact of LISA observations of the early inspiral should be considerably less important.
All of these factors point to the terrestrial-observation only scenarios outperforming LISA detections of MBH sources and LISA-terrestrial joint detections of multiband sources.
Our predictions for the constraints on the strength of this coupling are shown in Fig.~\ref{fig:dcs}.
Indeed, terrestrial-only detections perform the best at constraining the dCS modification, with bounds up to $\sim 2$ orders of magnitude tighter than those from multiband sources and $\sim 4$--$5$ orders of magnitude better than those from MBH sources.
As expected, MBH sources detected by LISA are severely inhibited by the particular Jacobian for this specific modification.
Furthermore, we also see little variation between the three terrestrial scenarios, indicating that a significant weight lies with the size of the catalogs, as opposed to the source properties of a select minority of favorable observations.
As the power of constraining this particular modification to GR benefits strongly from large numbers of sources, we can expect to slowly push the current bound down by $\sim 3$ orders of magnitude, with minimal dependence on the actual detector schedule, over the course of the next thirty years.
Further analysis using the techniques in Sec.~\ref{sec:ind_scaling} leads to the following approximate form of the variance:
\begin{align}\nonumber
\Delta \sqrt{\alpha_{\mbox{\tiny dCS}}} &\approx \left(\frac{3584 \sqrt{6}}{5 \pi}\right)^{1/4} \frac{ \left( \pi \mathcal{M} f_{\text{low}} \right)^{1/12} \mathcal{M} }{(1+z)\eta^{1/5} \rho^{1/4} } \\ \nonumber
\times &\left| 3015 \chi_2^2 \hat{m}_1^2 - 5250 \chi_1 \chi_2 \hat{m}_1 \hat{m}_2 + 3015 \chi_1^2 \hat{m}_2^2 \right. \\
&\left. - 14 \left(\hat{m}_2 s^{\mbox{\tiny dCS}}_1 - \hat{m}_1 s^{\mbox{\tiny dCS}}_2\right)^2 \right|^{-1/4}\,.
\end{align}
Beyond the additional terms coming from the Jacobian of the parameter transformation, we now see additional deviations from our analysis on generic modifications in Sec.~\ref{sec:general_mod}.
Raising the bound on $\alpha_{\mbox{\tiny dCS}}^2$ to the one-fourth power to obtain our further approximated bound on $\sqrt{\alpha_{\mbox{\tiny dCS}}}$ has introduced new dependence of the constraint on all the source parameters of interest.
Namely, the dependence on $\rho$ has been amended to scale as $\rho^{-1/4}$, and the dependence on the chirp mass is now $\mathcal{M}^{13/12}$.
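The counting parallels the non-commutative case, with the Jacobian supplying the extra chirp mass dependence: at 2PN ($b=-1$) the generic bound scales as $(\pi\mathcal{M} f_{\text{low}})^{1/3}/\rho$, Eq.~\eqref{eq:dcs_jac} implies $\Delta\alpha_{\mbox{\tiny dCS}}^2 \propto \mathcal{M}^{4}\,\Delta\beta$ up to spin and mass-ratio factors, and the fourth root then gives $\Delta\sqrt{\alpha_{\mbox{\tiny dCS}}} \propto \mathcal{M}^{(4+1/3)/4}\,\rho^{-1/4} = \mathcal{M}^{13/12}\,\rho^{-1/4}$, as quoted above.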
Results related to this analysis are shown in Fig.~\ref{fig:dCS_scaling}, derived from data produced with the HLVKIO8 network and the SPOPS 0 model.
The left panel shows a heat map of the sources in the catalog in the $\Delta \alpha_{\mbox{\tiny dCS}}$--SNR plane, with the extremal bounds shown in blue, and the cumulative bound (for this single catalog) shown in green.
The right panel shows a heat map of the sources in the $\Delta \alpha_{\mbox{\tiny dCS}}$--$\mathcal{M}$ plane for three slices in SNR-range (in red, blue, and green).
The trends we have predicted are shown in red, while the empirically determined trends are shown in black, for both panels.
Starting in the left panel, the range in single-observation constraints on $\sqrt{\alpha_{\mbox{\tiny dCS}}}$ is quite small.
The tight range of the constraints (just 1--2 orders of magnitude between the strongest and weakest constraints) helps to explain the enhanced effectiveness of the terrestrial networks at constraining this modification, as the constraint scales favorably with large numbers of observations.
This is explicitly seen by the sizable improvement of the cumulative constraint over the constraint coming from the strongest single observation.
Furthermore, in the left panel, we see that our prediction for the SNR trend does not accurately reflect what we observe in the synthetic data.
This is in stark contrast with non-commutative gravity, where the modification enters at the same PN order and predicts identical scaling with respect to the SNR.
Notably, this deviation also occurs in EdGB gravity, detailed below, which has a similarly complicated Jacobian.
The primary differences between the modifications introduced by dCS and non-commutative gravity are (i) the scaling of the constraint with respect to the chirp mass, and (ii) covariances between the modified gravity coupling constant and all other source parameters (such as the spins and mass ratio).
\begin{figure}
\includegraphics[width=\linewidth]{quad_grav_spin_terms.pdf}
\caption{
Histogram of spin-related terms contributing to the relevant Fisher element for dCS and EdGB.
The sources were taken from the catalog derived from the HLVKIO8 network and SPOPS 0 population model.
For dCS, this only includes the term to first order in spin.
The wide range of magnitudes that this term can take (5--6 orders of magnitude) helps to explain the breakdown of our ability to predict trends concerning the constraints on these theories.
From Fig.~\ref{fig:dCS_scaling} we see that the SNR and chirp mass only span a range of 1 or 2 orders of magnitude, and as such, the trends we would expect to see for these parameters could be completely washed out by this additional spin-dependent term, which we have neglected in our simple analysis.
}\label{fig:quad_grav_hist}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{constraint_vs_time_SMBH_MBH_EdGB.pdf}
\caption{
Projected cumulative constraints on $\sqrt{\alpha_{\mbox{\tiny EdGB}}}$ for the detector networks and population models examined in this paper.
Terrestrial-only catalogs, with their populations of millions of sources, seem to most efficiently constrain EdGB, but multiband sources are not far behind.
The modified scaling of the constraint with SNR and chirp mass work in favor of terrestrial networks, but the fact that EdGB produces a negative PN modification to leading order benefits multiband sources.
MBHs are not effective at constraining EdGB, and will not contribute much to future bounds on this theory.
}\label{fig:edgb}
\end{figure}
\begin{figure*}
\includegraphics[width=\linewidth]{scaling_edgb.pdf}
\caption{
Result of the scaling analysis outlined in Sec.~\ref{sec:res_edgb} performed on the data synthesized with the HLVKIO8 network and the SPOPS 0 population.
The plotting style is the same as in Fig.~\ref{fig:dipole_scaling}.
The left panel shows a heat map of the constraint on $ \sqrt{\alpha_{\mbox{\tiny EdGB}}}$ versus the SNR of the source.
The right panel shows the density of the constraint versus the chirp mass, with empirical trends shown in black and predicted trends shown in red.
Because of the small range in single-observation constraints (about 1--2 orders of magnitude), the cumulative bound greatly benefits from large numbers of observations, despite this being a negative PN effect that would typically be dominated by a small cadre of favorable sources.
The predicted trend for the constraint-SNR relationship fails, presumably due to covariances introduced through the Jacobian.
The predicted trend for the constraint-$\mathcal{M}$ relationships performs fairly well, as the correlation is enhanced through the Jacobian.
}\label{fig:EdGB_scaling}
\end{figure*}
For difference (i), we can examine the right panel of Fig.~\ref{fig:dCS_scaling}, where we see moderate agreement with our predicted scaling trend for the chirp mass and much tighter correlations for dCS than for non-commutative gravity.
Not only is the trend more accurately predicted, but the scaling with chirp mass in dCS, as compared with non-commutative gravity, is considerably stronger ($\mathcal{M}^{13/12}$ as opposed to $\mathcal{M}^{1/12}$).
Considering there is a negative correlation between the constraint and the SNR, a positive correlation between the constraint and the chirp mass, and a positive correlation between the SNR and chirp mass, a shift in the different trends as significant as that found in dCS may lead to the observed deterioration in our predictions.
For difference (ii), the mild agreement of the chirp mass scaling in the right panel suggests that covariances between parameters are degrading the accuracy of all of our approximations, not just the SNR.
To further explore this idea, we can look at the typical range of values that the other source-dependent terms from the Jacobian in Eq.~\eqref{eq:dcs_jac} can take.
For the final bound from a given source, the magnitude of these additional terms in an absolute sense is important, but in terms of the trends we expect to see, the range of values these terms can take is the quantity of interest.
If certain sources with comparable SNR and chirp mass have Jacobian transformations that span several orders of magnitude because of these additional terms, our simple analytical approximations cannot be expected to accurately match the synthetic data.
A histogram of the spin- and mass-ratio-dependent terms for both dCS and EdGB is shown in Fig.~\ref{fig:quad_grav_hist}, where we do indeed see a non-negligible range of values.
Figure~\ref{fig:dCS_scaling} shows that the SNR and chirp mass both span approximately 1--2 orders of magnitude for this particular catalog, while the complicated Jacobian factors that we have neglected in our analysis span approximately 4--5 orders of magnitude.
A range this large can easily erase any structure we would hope to see with our simple approximations, and helps to explain why our simple analytical approximation fails for dCS (and for EdGB, as we will discuss below).
Between these two factors, our ability to predict scaling trends of the constraint on $\sqrt{\alpha_{\mbox{\tiny dCS}}}$ as a function of source parameters has moderate success with regards to the chirp mass, but is definitely degraded in general when compared with the same analysis for general modifications. The dCS example provides direct evidence that conclusions derived from generic constraints may be highly misleading when focusing on a particular modified theory.
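The spread itself is easy to visualize. The sketch below histograms the spin/mass-ratio bracket appearing in the approximate expression for $\Delta\sqrt{\alpha_{\mbox{\tiny dCS}}}$ above over a synthetic population; the spin and mass-ratio distributions and the stand-in ``sensitivities'' are crude placeholders for illustration, not the SPOPS catalog values or the Appendix definitions.
\begin{verbatim}
# Sketch of the bookkeeping behind Fig. quad_grav_hist: histogram the
# spin/mass-ratio bracket from the dCS expression above over a synthetic
# population. Spins, mass ratios, and the "sensitivities" s1, s2 are crude
# placeholders, not the SPOPS catalog values or the Appendix definitions.
import numpy as np

rng  = np.random.default_rng(3)
n    = 100_000
eta  = rng.uniform(0.08, 0.25, n)                # synthetic mass ratios
chi1 = rng.uniform(-1.0, 1.0, n)                 # synthetic spins
chi2 = rng.uniform(-1.0, 1.0, n)
m1h  = eta**-0.6 * (1 + np.sqrt(1 - 4*eta)) / 2  # m_1 / chirp mass
m2h  = eta**-0.6 * (1 - np.sqrt(1 - 4*eta)) / 2  # m_2 / chirp mass
s1, s2 = 0.5 * chi1**2, 0.5 * chi2**2            # placeholder sensitivities

bracket = np.abs(3015*chi2**2*m1h**2 - 5250*chi1*chi2*m1h*m2h
                 + 3015*chi1**2*m2h**2 - 14*(m2h*s1 - m1h*s2)**2)
counts, edges = np.histogram(np.log10(bracket + 1e-30), bins=60)
print(edges[0], edges[-1])        # the spanned range in log10 is the point
\end{verbatim}
Even with these crude inputs, the bracket spans several orders of magnitude, illustrating how such Jacobian factors can wash out the simple $\mathcal{M}$ and $\rho$ trends.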
\subsubsection{Quadratic Gravity -- Einstein-dilaton-Gauss-Bonnet}\label{sec:res_edgb}
Similar to dCS, Einstein-dilaton-Gauss-Bonnet (EdGB) gravity is also quadratic in curvature at the level of the action.
In this case, a scalar field is coupled to the Gauss-Bonnet invariant through a dimensionful coupling constant $\alpha_{\mbox{\tiny EdGB}}$.
In contrast to dCS, the scalar field in EdGB is parity-even in vacuum (because the Gauss-Bonnet invariant is also parity-even), and the leading order correction to the GW phase comes at $-1$PN order, because the dominant modification to the generation of GWs is the introduction of dipolar radiation.
The Jacobian for this particular theory is
\begin{equation}
\left(\frac{\partial \beta}{\partial \alpha_{\mbox{\tiny EdGB}}^2}\right)^{2} \propto \frac{\left[ \hat{m}_2^2 s_1^{\mbox{\tiny EdGB}} - \hat{m}_1^2 s_2^{\mbox{\tiny EdGB}} \right]^4 \eta^{12/5} }{(1+z)^{-8} \mathcal{M}^8 } \,,
\end{equation}
where $s_i^{\mbox{\tiny EdGB}}$ is the BH sensitivity defined in Eq.~\eqref{eq:EdGB_sen}, and we again use the mass parameters $\hat{m}_i = m_i/\mathcal{M} = \eta^{-3/5}(1\pm \sqrt{1-4\eta}) / 2$ for the larger ($+$) and smaller ($-$) mass.
Given the new dependencies on source parameters introduced by the Jacobian, we would expect to see SOBH sources receive a sizeable boost due to the chirp mass scaling.
Furthermore, this is a negative PN effect, which already tends to favor small chirp masses (cf. Sec.~\ref{sec:ind_scaling}).
Both of these considerations imply that multiband and terrestrial networks should outperform LISA MBH sources.
Constraints on $\sqrt{\alpha_{\mbox{\tiny EdGB}}}$ are shown in Fig.~\ref{fig:edgb}.
Indeed, we see SOBH sources of all kinds outperforming MBH sources.
Within the SOBH source classes, terrestrial networks outperform multiband sources by 1--2 orders of magnitude.
While multiband sources benefit from long early-inspiral observations with LISA, which encode much information about a negative PN effect, the large terrestrial-only catalogs are enhanced by the modified dependence on the SNR, discussed below.
As a further consequence of the adjusted SNR dependence, we also see fairly minor variations between the three terrestrial network scenarios.
After approximately thirty years of observations, our models indicate that we could see $\sim 2$--$4$ orders of magnitude improvement on previous constraints on $\sqrt{\alpha_{\mbox{\tiny EdGB}}}$. This conclusion is fairly robust under variations of the terrestrial network.
Analyzing the constraints on $\sqrt{\alpha_{\mbox{\tiny EdGB}}}$ with the machinery of Sec.~\ref{sec:general_mod}, we obtain the following approximation on the variance of the coupling parameter:
\begin{align}\nonumber
\Delta \sqrt{\alpha_{\mbox{\tiny EdGB}}} &\approx \left(\frac{903168}{25\pi^6 }\right)^{1/8} \frac{ \left( \pi \mathcal{M} f_{\text{low}} \right)^{7/12} \mathcal{M}}{(1+z) \eta^{3/10} \rho^{1/4}} \\
& \times \left( \hat{m}_2^2 s_1^{\mbox{\tiny EdGB}} - \hat{m}_1^2 s_2^{\mbox{\tiny EdGB}} \right)^{-1/2} \,.
\end{align}
We now see additional modifications to the dependencies on source parameters, beyond the Jacobian shown above.
Just as in the cases of dCS and non-commutative gravity, we must transform from $\alpha_{\mbox{\tiny EdGB}}^2$ to $\sqrt{\alpha_{\mbox{\tiny EdGB}}}$, which forces the constraint to scale with $\rho^{-1/4}$ and $\mathcal{M}^{19/12}$.
Trends related to this approximation are shown in Fig.~\ref{fig:EdGB_scaling}, produced from our simulations based on HLVKIO8 and SPOPS 0.
The left panel shows a heat map of all the sources in the $\Delta \sqrt{\alpha_{\mbox{\tiny EdGB}}}$-SNR plane, with extremal single-source constraints shown in blue, and the cumulative constraint for this catalog shown in green.
The right panel shows a heat map in the $\Delta \sqrt{\alpha_{\mbox{\tiny EdGB}}}$-$\mathcal{M}$ plane, for three different slices of SNR, shown as blue, green, and red.
In the left panel, we again see that our prediction for the SNR scaling is not accurate.
Just as in dCS gravity, this discrepancy lies in covariances complicating the relationships beyond the point where our simple approximations are valid.
For comparison, we can examine what we found for generic dipole radiation constraints in Sec.~\ref{sec:res_dipole}, where we saw a much better agreement with our predictions for the constraint-SNR relationship.
Referring again to the histogram in Fig.~\ref{fig:quad_grav_hist}, we see that the terms related to the BH sensitivity in EdGB span several decades, washing out the trends we would expect to see from the analysis of Sec.~\ref{sec:general_mod}.
As a by-product, these complications lead to a tight range in single-observation constraints, spanning 1--2 orders of magnitude.
This in turn leads to a large enhancement for terrestrial networks: cumulative bounds from tightly grouped populations of constraints benefit from large numbers of sources, which is not typically expected from a modification at $-1$PN.
In the right panel, we see moderate agreement between our prediction for the $\Delta \sqrt{\alpha_{\mbox{\tiny EdGB}}}$--$\mathcal{M}$ relationship, but again, covariances seem to degrade the quality of simple analytical scaling relationships between the constraint and the source parameters. In contrast, for generic dipole radiation we see a much tighter correlation between the constraint and the chirp mass.
The difference between the two trends further confirms our explanation: more complex Jacobians tend to complicate the source parameter-constraint relation we identified in Sec.~\ref{sec:general_mod}.
\begin{figure}
\includegraphics[width=\linewidth]{constraint_vs_time_SMBH_MBH_BHEvaporation.pdf}
\caption{
Projected cumulative constraints on the rate of black hole evaporation $\dot{M}$, for the detector networks and population models examined in this paper.
Our models predict multiband sources to perform the best from the three classes of sources examined in this paper, followed next by MBH observations by LISA.
Terrestrial-only observations from the most optimistic scenario are competitive with LISA's MBH sources, but the other two scenarios considered in this work trail behind by 2--3 orders of magnitude.
}\label{fig:BHE}
\end{figure}
\begin{figure*}
\includegraphics[width=\linewidth]{scaling_bhe.pdf}
\caption{
Result of the scaling analysis outlined in Sec.~\ref{sec:res_BHE} performed on the data synthesized with the HLVKIO8 network and the SPOPS 0 population.
The plotting style is the same as in Fig.~\ref{fig:dipole_scaling}.
The left panel shows a heat map of the constraint on $\dot{M}$ versus the SNR of the source.
The right panel shows the density of the constraint versus the chirp mass, with empirical trends shown in black and predicted trends shown in red.
The wide distribution of constraints in this catalog indicates that the benefit of large catalogs is minimal, and the total bound is dominated by a select few, highly favorable observations.
The distribution of the sources in the $\Delta \dot{M}$-$\mathcal{M}$ plane is to a very good approximation linear, showing a tight correlation between the two quantities.
The $\Delta \dot{M}$-SNR relationship also agrees fairly well with our predictions.
}\label{fig:BHE_scaling}
\end{figure*}
\subsubsection{Black Hole Evaporation}\label{sec:res_BHE}
In the case of BH evaporation, the modification first enters the GW phase at $-4$PN order.
The Jacobian from the ppE parameter to this particular process, as shown in Appendix~\ref{sec:theories}, is given by
\begin{equation}\label{eq:bhe_jac}
\left(\frac{\partial \beta}{\partial \dot{M}}\right)^{2} \propto \left[\frac{3 - 26 \eta + 34\eta^2}{ \eta^{2/5} ( 1 - 2 \eta)}\right]^2 \,.
\end{equation}
As the Jacobian only depends on the system parameters through the symmetric mass ratio (bounded to $(0,0.25]$), no parameters specific to a given system will induce large changes in the attainable bound.
This fact leads us to the conclusion that the driving factors in the constraint magnitude will be the chirp mass (benefitting SOBH sources) and the SNR (benefitting LISA MBH sources and the most sensitive ground-based detector networks).
Furthermore, as this modification also enters at a highly negative PN order, multiband sources can also be expected to perform competitively.
Constraints on the rate of BH evaporation are shown in Fig.~\ref{fig:BHE}.
As expected, multiband sources constrain BH evaporation the tightest, with MBH sources from LISA's catalog trailing by 4--6 orders of magnitude.
The most sensitive terrestrial network scenario examined in this paper is also competitive with the LISA MBH sources, but the other two scenarios we have considered fall behind by 2--3 orders of magnitude.
By using the machinery of Sec.~\ref{sec:general_mod}, we obtain the following approximate form of the bound on $\dot{M}$:
\begin{equation}
\Delta \dot{M} \approx \frac{425984}{5}\sqrt{\frac{6}{5}}\frac{\left(f_{\text{low}} \pi \mathcal{M} \right)^{13/3}\eta^{2/5} }{\rho}\left|\frac{1-2\eta}{3-26\eta + 34 \eta^2}\right|\,.
\end{equation}
The Jacobian does not depend on the total mass and the phase modification scales linearly with the modifying parameter, so we see a scaling relation as expected from Sec.~\ref{sec:general_mod}.
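As a rough numerical illustration (a minimal sketch of our own; the chirp mass is expressed in geometric units and overall normalization constants are irrelevant here, so only ratios between sources are meaningful), the following snippet evaluates the approximation and confirms that, at fixed $\eta$, SNR, and $f_{\text{low}}$, the bound degrades as $\mathcal{M}^{13/3}$:
\begin{verbatim}
import numpy as np

def delta_mdot(mchirp, eta, snr, f_low):
    """Approximate bound on Mdot; mchirp in seconds (geometric units)."""
    pref = (425984.0 / 5.0) * np.sqrt(6.0 / 5.0)
    jac = np.abs((1.0 - 2.0 * eta) / (3.0 - 26.0 * eta + 34.0 * eta ** 2))
    return pref * (np.pi * f_low * mchirp) ** (13.0 / 3.0) * eta ** 0.4 * jac / snr

MSUN_SEC = 4.925491e-6                     # one solar mass in seconds
a = delta_mdot(10 * MSUN_SEC, 0.22, 20.0, 10.0)
b = delta_mdot(30 * MSUN_SEC, 0.22, 20.0, 10.0)
print(b / a, 3.0 ** (13.0 / 3.0))          # both ~117: bound degrades steeply with chirp mass
\end{verbatim}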
Results related to this approximation are shown in Fig.~\ref{fig:BHE_scaling}.
The left panel depicts a heat map of the sources in the HLVKIO8 network and the SPOPS 0 population model in the $\Delta \dot{M}$--SNR plane.
The solid blue lines correspond to the strongest and weakest constraints coming from single observations, while the green line represents the cumulative bound for the entire catalog.
The right panel shows a heat map in the $\Delta \dot{M}$--$\mathcal{M}$ plane for different slices of SNR (in red, blue, and green).
The empirically determined scaling trends are shown in black, while our predictions for the trends are shown in red.
The left panel of Fig.~\ref{fig:BHE_scaling} shows good agreement between the trends predicted by our simple, analytic calculations and the data from our fully numerical treatment.
The wide distribution in constraints coming from single sources in the catalog indicates weak scaling with the size of the catalog, giving a relative boost in power to the smaller source populations in the MBH LISA and MB catalogs.
This conclusion is supported by the very modest improvement of the cumulative bound for the catalog over the strongest single-source constraint.
In the right panel, we see good agreement with our predicted chirp mass scaling relation.
The correlation between the chirp mass and the constraint is quite tight for this particular modification, due to the strong scaling and the highly negative PN order (reducing correlations that widen the distribution).
\begin{figure}
\includegraphics[width=\linewidth]{constraint_vs_time_SMBH_MBH_ModDispersion.pdf}
\caption{
Projected cumulative constraints on the mass of the graviton, $m_g$, for the detector networks and population models examined in this paper.
Our models show that MBH sources observed by LISA will perform the best at constraining this modification, but only slightly better than the terrestrial-only sources.
Multiband sources perform the worst, as they receive no benefit from the Jacobian and already perform only moderately well for positive PN order effects.
}\label{fig:mg}
\end{figure}
\begin{figure*}
\includegraphics[width=\linewidth]{scaling_moddispersion.pdf}
\caption{
Result of the scaling analysis outlined in Sec.~\ref{sec:res_mg} performed on the data synthesized with the HLVKIO8 network and the SPOPS 0 population.
The plotting style is the same as in Fig.~\ref{fig:dipole_scaling}.
The left panel shows a heat map of the constraint on $m_g$ versus the SNR of the source.
The right panel shows the density of the constraint versus the redshift $z$, with empirical trends shown in black and predicted trends shown in red.
Because of the narrow range of constraints in the catalog and the large enhancement of the cumulative bound over the strongest single observation, stacking observations is quite efficient for this modification.
The right panel shows that there is indeed a trend in the $\Delta m_g$-$z$ relation (although the distributions are moderately wide)
which would favor sources far from Earth, and would primarily benefit MBH sources.
}\label{fig:mg_scaling}
\end{figure*}
\subsubsection{Modified Dispersion -- Massive Graviton}\label{sec:res_mg}
If the graviton were massive, contrary to what is predicted when considering GR as the classical limit of a quantum theory of gravity, the leading order effect would enter the GW phase at 1PN.
The Jacobian of the transformation from the ppE framework to this particular modification is
\begin{equation}
\left(\frac{\partial \beta}{\partial m_g^2}\right)^{2} \propto \left( \frac{\mathcal{M} D_0}{1+z}\right)^2 \,,
\end{equation}
where the quantity $D_0$ is a new cosmological distance defined in Appendix~\ref{sec:theories}.
We get modified scaling with the chirp mass, and similarly to the variable-$G$ mapping, this Jacobian causes the constraint to inversely scale with the mass.
As a result, this new mass factor will benefit MBHs over SOBHs.
Furthermore, we now have strong dependence on the distance to the source, $D_0$, where constraints from farther sources will be enhanced as compared to those sources closer to Earth (see e.g.~\cite{Berti:2004bd}).
These facts benefit LISA MBH sources, which therefore should provide the best constraints.
This is confirmed in Fig.~\ref{fig:mg}.
The MBH sources observed by LISA do indeed perform the best, but only marginally.
The effectiveness of stacking is seen to still be quite high for this particular modification, as the three terrestrial scenarios all perform comparably.
Furthermore, as this is a positive PN effect, terrestrial networks receive a boost from the generic scaling effects discussed in Sec.~\ref{sec:ind_scaling}.
Multiband sources perform the worst, as they receive little benefit from early inspiral observation, they typically have low mass, and are located at low redshifts.
Ultimately, we can expect to improve on the current bound on $m_g$ by 2--3 orders of magnitude over the next thirty years, and this conclusion is robust under variations of the terrestrial detector schedule.
This improvement will be insufficient to rule out a massive graviton as a possible explanation of the late-time acceleration of the Universe: in a cosmological context, the graviton would need a mass of the order of the inverse of the Hubble constant, $H_0^{-1}$, which is of the order of $10^{-30}$ eV, much smaller than our predicted final constraints.
To explore these relations deeper, we can apply our approximation from Sec.~\ref{sec:general_mod}, giving us the following approximation for the constraint on $m_g$:
\begin{equation}
\Delta m_g \approx \frac{h}{\pi}\left( \frac{5}{2} \right)^{1/4} \sqrt{ \frac{(1+z)}{D_0} \frac{ \pi f_{\text{low}}}{\rho}}\,.
\end{equation}
This approximation has produced a notably different scaling relation than what has been seen previously.
Namely, the constraint no longer scales with the chirp mass, as the Jacobian factor has cancelled the chirp mass dependence from the generic ppE scaling.
While this final form of the constraint does not explicitly benefit MBH systems, generic constraints scale linearly with the chirp mass $\mathcal{M}$.
The removal of this chirp mass dependence benefits MBH sources much more than SOBH sources.
Also different from previous constraints, we have strong scaling with the distance to the source.
For low redshifts, the distance parameter $D_0 \approx z/H_0$ to lowest order in redshift.
Extending this expansion to the constraint, the leading-order term should scale as $z^{-1/2}$ for low-redshift sources.
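The following minimal sketch (ours; overall constants such as the Planck constant and $H_0$ are dropped, so only relative scalings are meaningful) illustrates this expected $z^{-1/2}$ behavior under the assumption $D_0 \propto z$ at low redshift:
\begin{verbatim}
import numpy as np

def delta_mg_rel(z, snr, f_low):
    """Relative scaling of the m_g bound; constants (h, H_0) dropped."""
    d0 = z                                  # D_0 proportional to z at low redshift
    return np.sqrt((1.0 + z) * f_low / (d0 * snr))

r = delta_mg_rel(0.04, 20.0, 10.0) / delta_mg_rel(0.01, 20.0, 10.0)
print(r, (0.04 / 0.01) ** -0.5)             # ~0.51 vs 0.50: roughly z^{-1/2}
\end{verbatim}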
The results related to this approximation are shown in Fig.~\ref{fig:mg_scaling}.
The left panel shows a heat map of the sources in the catalog created from the HLVKIO8 network and SPOPS 0 population model in the $\Delta m_g$-SNR plane, with the solid blue lines denoting the extremal, single observation constraints.
The solid green line represents the cumulative bound from this particular catalog.
We see good agreement between our predicted scaling for the SNR, after accounting for the Jacobian above.
There is a narrow range for the constraints, only spanning one order of magnitude between all sources.
This leads to sizeable benefits for large catalogs, also evident from the overlap between the different terrestrial network scenarios.
The right panel shows a heat map of the sources in the $\Delta m_g$-redshift plane.
We do indeed see a trend in this particular relationship, although the distributions are moderately wide.
Our prediction for the scaling relation agrees fairly well with the synthetic data.
\subsection{Effect of Precession on the Constraints}\label{sec:precession}
The differences between the two SOBH population models go beyond the size of the catalogs, which has been our focus so far.
An aspect differentiating the SPOPS 0 and SPOPS 265 catalogs, that could have a large impact on our analysis, is the typical magnitude of the in-plane component of the binary's spins, which is the cause of relativistic precession.
The question we now address is whether the stronger constraints coming from the SPOPS 0 catalog over the SPOPS 265 catalog are entirely due to the larger catalog sizes, or if the difference in source parameter distributions also impacts the cumulative bounds attainable through GWs.
Previous work has shown that the inclusion of precessional effects can break degeneracies in various source parameters when considering a full MCMC analysis, allowing for significantly tighter constraints on various source properties~\cite{Chatziioannou:2014bma}.
To determine if this effect can be seen in our data, in Fig.~\ref{fig:spops_comp} we show histograms of the individual source constraints on dCS and EdGB, using the two different catalogs (SPOPS 0 and SPOPS 265) and the CEK network.
These two theories in particular were chosen because conventional thinking would suggest that they would be the most sensitive to precessional effects, due to the dependence of the ppE parameter on spins.
\begin{figure}
\includegraphics[width=\linewidth]{SPOPS_265_vs_0_comparison.pdf}
\caption{
Distributions of single-source constraints on the GR-modifying parameters $\sqrt{\alpha_{\mbox{\tiny dCS}}}$ (top) and $\sqrt{\alpha_{\mbox{\tiny EdGB}}}$ (bottom) from the two population models SPOPS 0 (blue) and SPOPS 265 (orange) as detected by the CEK network.
The histograms are normalized to provide a comparison of the shapes of the distributions, as opposed to the raw numbers of sources.
We see that the distributions only diverge slightly, towards the larger-constraint side of the spectrum.
This suggests that the larger precessional effects seen in the SPOPS 265 catalog do not significantly modify the typical constraints attainable by individual sources, or that any effect we may have seen was washed out by the differences in the distributions of other source parameters, such as the total mass and mass ratio.
This lack of difference could also be an artifact of our waveform model (\software{IMRPhenomPv2}), which is not the most up-to-date waveform available, or of the Fisher approximation, which could be improved upon by a full MCMC analysis.
}\label{fig:spops_comp}
\end{figure}
The figure shows little deviation between the two population models for these theories.
The distribution changes slightly on the larger-constraint side of the histogram, but the difference is negligible when considering cumulative constraints.
Furthermore, these small deviations in the distributions of constraints cannot be solely attributed to precessional effects, as the parameter distributions shown in Fig.~\ref{fig:source_properties} are all modified as well.
To explore the impact of precession on generic modifications in a more controlled environment, we did a direct comparison between systems with zero precession and ``maximal'' precession (in a sense to be defined shortly), but which are otherwise identical.
The results of this analysis are shown in Fig.~\ref{fig:precession_investigation}.
The methodology we implemented to produce Fig.~\ref{fig:precession_investigation} began with a fixed grid in the total mass, ranging from 5\,$M_{\odot}$ to 20\,$M_{\odot}$, the mass ratio in the range [0.05,\,1], and aligned-spin components for each BH ranging from $-0.8$ to $0.8$.
With this grid of intrinsic source parameters, we populated the other extrinsic parameters using randomly generated numbers in the conventional ranges.
The range of the luminosity distance was chosen such that the SNRs would range from $\sim 20$ to $150$.
Once a set of full parameter vectors had been created, we calculated one set of Fishers for a fixed detector network with the in-plane component of the spin, $\chi_p$, set to 0.
Then, without changing any other parameters, the in-plane spin component was increased to $\chi_p = \sqrt{1 - \chi_1^2}$, which is approximately the maximal spin one can achieve while still maintaining a total spin magnitude less than 1.
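A minimal sketch of this grid construction is given below (our own illustration; the grid resolutions and parameter names are ours, and the Fisher-matrix computation itself is only indicated by a comment):
\begin{verbatim}
import itertools
import numpy as np

total_mass = np.linspace(5.0, 20.0, 4)        # solar masses
mass_ratio = np.linspace(0.05, 1.0, 4)
chi_aligned = np.linspace(-0.8, 0.8, 5)       # aligned spin component of each BH

rng = np.random.default_rng(0)
sources = []
for M, q, chi1, chi2 in itertools.product(total_mass, mass_ratio,
                                           chi_aligned, chi_aligned):
    extrinsic = {                             # random extrinsic parameters, conventional ranges
        "ra": rng.uniform(0.0, 2.0 * np.pi),
        "dec": np.arcsin(rng.uniform(-1.0, 1.0)),
        "psi": rng.uniform(0.0, np.pi),
        "iota": np.arccos(rng.uniform(-1.0, 1.0)),
    }
    # the luminosity distance would be chosen so that the SNR falls in roughly 20--150
    chi_p_max = np.sqrt(1.0 - chi1 ** 2)      # "maximally" precessing in-plane spin
    sources.append(dict(M=M, q=q, chi1=chi1, chi2=chi2,
                        chi_p_cases=(0.0, chi_p_max), **extrinsic))

# For each entry, Fisher matrices would be computed twice (chi_p = 0 and chi_p = chi_p_max)
# and the resulting bounds on beta compared source by source.
\end{verbatim}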
The top panel shows the mean constraint for both configurations as a solid line, with the $1\sigma$ interval of the distribution of constraints shown as the shaded region.
In the bottom panel we compare the constraints from each configuration (precessing and non-precessing) for each individual source.
The mean of this ratio is then plotted as a solid line, and the $1\sigma$ region is shown as the shaded region.
The conclusion from Fig.~\ref{fig:precession_investigation} is that precession seems to have a moderate influence, but one that could be easily washed out by other physical effects.
In the most favorable scenario where the binary is maximally precessing, our analysis suggests an improvement of at most a factor of $\sim2$.
Given previous work (see e.g.~\cite{Chatziioannou:2014bma}), one may expect more significant improvements when considering even mild precession.
While we do predict improvements from the use of precessing templates, our more restrained conclusions could be the result of two facets of our analysis.
Our use of a more rudimentary statistical model, the Fisher matrix, does not capture all the more nuanced artifacts in the posterior surface, like a full MCMC analysis would.
Furthermore, we here use the \software{IMRPhenomPv2} waveform, which is in some ways more limited in modeling precession with respect to the waveforms used in Ref.~\cite{Chatziioannou:2014bma}.
Future studies of precession could focus on these two areas in particular.
\begin{figure}
\includegraphics[width=\linewidth]{precession_comparison.pdf}
\caption{
To create the data involved in this figure, we have created a fixed grid in parameter space with total mass ranging from 5\,$M_{\odot}$ to 20\,$M_{\odot}$, mass ratio in the range 0.05 to 1, and aligned-spin components for each BH ranging from $-0.8$ to $0.8$.
The rest of the parameters were populated with random numbers in the usual ranges, and the luminosity distance was set such that the typical SNRs ranged from $\sim 20$--$150$.
We computed Fisher matrices for each set of parameters, with the in-plane component of the spin set to 0, and then we recomputed them setting the in-plane component of the spin $\chi_p = \sqrt{1 - \chi_1^2}$, so that the binary is approximately ``maximally'' precessing.
The top panel shows the distribution of the bounds for the two binary subsets -- precessing (blue) and non-precessing (green) -- as a function of PN order.
The solid line denotes the average of the synthetic catalog, while the shaded region denotes the $1\sigma$ interval.
The lower panel shows the ratio $\sigma_{\beta,\,\rm p}/\sigma_{\beta,\,\rm nonp}$.
Each ratio is calculated for a single parameter set, and the mean of these ratios is shown as a solid black line, with the $1\sigma$ spread shown by the shading.
Even in this more extreme comparison, the improvement in constraint as the result of larger precession effects only amounts to a factor of $\sim2$.
However, more drastic differences may be possible if we performed a full MCMC analysis, or if we used different waveform models.
}\label{fig:precession_investigation}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
In this work, we have constructed forecasts of what constraints can be placed on a variety of modifications to GR, both generic and theory-specific, using astrophysical population models and the most current projections for detector development over the next thirty years.
Our analysis spans several topics of interest to the GW community concerned with tests of GR.
We investigate what fundamental physics can be done with a variety of source populations (heavy-seed MBHs, light-seed MBHs, terrestrially observed SOBHs, and multiband SOBHs) and plans for detector development.
All of these aspects are connected to what fundamental science is achievable. Ours is the first robust study of this breadth and scope that is capable of quantifying the effects of detector development choices and astrophysical uncertainties.
We identify trends and scaling relationships of constraints for individual GW observations, studying how they evolve with PN order and how they depend on the target source class (MBHs, terrestrially observed SOBHs, and multiband SOBHs).
We also quantify the effect of combining constraints from a full, synthetic catalog, appropriately informed by robust population models.
We find that the effectiveness of stacking observations is a PN-dependent conclusion. The techniques developed here have important implications for the future of GW-based tests of GR, especially in the era of 3g detectors.
The two components of our analysis (individual scaling and studies of the stacking of multiple observations) combine to create a full picture of some of the most important aspects involved in testing GR with GWs.
We hope that this information will be valuable in driving design choices for future detector development.
We map our generic constraints to theory-specific constraints, where we analyze specific parameters in viable, interesting theories.
Repeating some of the scaling analysis done in previous sections leads, in some cases, to a reversal of the conclusions drawn for generic modifications.
This reinforces the need to incorporate theory-specific waveforms in future analyses, when available.
This work opens up several new avenues of research.
We focused on BBH systems, neglecting future contributions from neutron star-neutron star and neutron star-BH binaries.
These binaries have much longer inspiral signals relative to typical BH mergers observed by the LVC, and they could provide crucial information concerning early inspiral, negative PN effects.
Beyond the signal length, neutron stars are sometimes treated on unequal footing in the context of specific theories, such as scalar-tensor gravity, EdGB and dCS.
This could provide other insights into specific theories that do not affect BBH mergers.
Because of the scale of the catalogs involved, we used simple Fisher matrix forecasts, running $\sim 10^8$ Fisher matrix calculations.
A more thorough analysis using MCMC, or other more robust data analysis techniques, could provide more information about some of the trends we have identified.
An MCMC population study on the scale of this work is currently intractable, but even an analysis of a subset of sources could be enlightening.
Our work has focused on estimating only a single PN modification at a time, but any modified theory of gravity will correct the waveform at all orders in a PN expansion. Recent work studied how constraints are affected when one attempts to simultaneously constrain ppE deformations that enter at multiple orders~\cite{Gupta:2020lxa}. Here we have chosen to limit ourselves to a single parameter at a time for the following reasons. While allowing for multiple parameters to vary in a completely independent way at several PN orders is a more robust and general framework, this treatment is probably overly pessimistic. Past work~\cite{Sampson:2013lpa,Arun:2006yw} showed that, indeed, varying multiple generic parameters simultaneously drastically lowers our ability to constrain them. However, in the context of a given, physically motivated theory there should be some relation between the different ppE modifications. Any PN expansion should converge in the appropriate domains, ensuring a hierarchy on the size of the modifications. Moreover, the modification at each PN order should at least depend on the coupling parameters of the theory, ensuring that no two PN orders are totally independent from each other. These criteria suggest that the overall bound on a given modification, in the context of a physically motivated theory, should not be significantly weakened by the inclusion of higher-order corrections (except in the most unfortunate of fine-tuning scenarios). Therefore our conclusions should be robust under the inclusion of higher-order PN corrections to the waveform.
Our investigation of the effects of precession on modified GR constraints could be improved in at least three ways.
While we did include a full inspiral/merger/ringdown model of precession by implementing \software{IMRPhenomPv2}~\cite{Hannam:2013oca,Khan:2015jqa,Husa:2015iqa}, more recent and complex waveform models (such as \software{IMRPhenomPv3}~\cite{Khan:2018fmp}, \software{IMRPhenomXPHM}~\cite{Pratten:2020ceb} or \software{SEOBNRv4PHM}~\cite{Ossokine:2020kjp}) could encode more information in the signal, helping to break degeneracies.
A more robust statistical analysis, such as a full MCMC, could explore the posterior space more thoroughly, shedding light on the effects of precession.
Last but not least, the astrophysical SOBH models considered here only allow for isolated field formation under restrictive assumptions. Dynamical formation generally predicts a larger fraction of precessing systems~\cite{Rodriguez:2016vmx}, and it is important to consider other pathways for producing BBHs with large misaligned spins even within the isolated formation channel~\cite{Gerosa:2018wbw,Steinle:2020xej}.
\acknowledgments
We thank Vishal Baibhav, Davide Gerosa, Gabriela Gonz\'alez, Bangalore Sathyaprakash, Sashwat Tanay and Kaze Wong for many useful discussions on various aspects of this work.
N.Y. acknowledges support from NSF Grants No. PHY-1759615, PHY-1949838 and NASA ATP Grant No. 17-ATP17-0225.
E.B. is supported by NSF Grants No. PHY-1912550 and AST-2006538, NASA ATP Grants No. 17-ATP17-0225 and 19-ATP19-0051, and NSF-XSEDE Grant No. PHY-090003. This work has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 690904. The authors would like to acknowledge networking support by the GWverse COST Action CA16104, ``Black holes, gravitational waves and fundamental physics.''
This work made use of the Illinois Campus Cluster, a computing resource that is operated by the Illinois Campus Cluster Program (ICCP) in conjunction with the National Center for Supercomputing Applications (NCSA) and which is supported by funds from the University of Illinois at Urbana-Champaign. It also used computational resources at the Maryland Advanced Research Computing Center (MARCC).
The following software libraries were used at various stages in the analysis for this work, in addition to the packages explicitly mentioned above: \software{GSL}~\cite{gough2009gnu}, \software{numpy}, \software{scipy}, \software{filltex}~\cite{2017JOSS....2..222G}.
\section{Introduction}
\label{Introduction}
Exchange rates are attributed to various theories, such as purchasing power parity (PPP) (\citealp{Cassel_1918}) and interest rate parity (IRP) (\citealp{Keynes_1923}).\footnote{
In addition to PPP and IRP, the flexible-price monetary model (\citealp{Frenkel_1976}) and sticky-price monetary model (\citealp{Dornbusch_1976}) are used.
Appendix A provides additional information on the theory of exchange rate determination.
}
These theories hold on different time scales; PPP is a long-term theory, whereas IRP is a short-term theory. If PPP holds, then deviations of the exchange rate from the PPP level are adjusted within several years (see \citealp{Rogoff_1996}). \cite{Ito_1997} and \cite{Taylor_2002} apply a unit root test to real exchange rates. \cite{Hasan_2006} applies a unit root test and a cointegration test to the real exchange rates of Australia and Canada. Each study shows that the deviation tends to take several years to adjust to the PPP level, depending on the data. \cite{Enders_1994}, \cite{Sarno_1997}, \cite{Ogawa_2008}, and \cite{Mishra_2010} analyze long-term equilibrium values of PPP for multiple real exchange rates.
However, the degree to which the exchange rate theory holds may differ between periods. In other words, the exchange rate adjusts to the equilibrium value indicated by the theory in a certain period but does not during other periods. Thus, this study quantifies the degree to which the exchange rate theory holds in each period. We focus on exchange rate synchronization on the basis of PPP.\footnote{
IRP analysis requires high-frequency tick data and will be a future task due to data constraints.
}
Synchronization is a phenomenon in which a pair of fluctuations adjust their rhythms when interacting with each other; that is, the phase difference between the two fluctuations remains constant during a certain time interval. Roughly speaking, the phase denotes a specific position in $(-\pi, \pi]$ within one cycle of an oscillation. The synchronization analysis quantifies the degree of rhythm adjustment between two time series. The following studies apply the synchronization concept to economic analysis. \cite{Flood_2010}, \cite{Ikeda_2013}, and \cite{Esashi_2018} apply synchronization to study business cycles (see \citealp{Onozaki_2018}). \cite{Vodenska_2016} perform a synchronization analysis to examine the interaction and lead-lag relationship between the stock market and the foreign exchange market. \cite{Walti_2011} investigates the relationship between stock market co-movements and monetary integration.
When two numeraire-denominated exchange rates synchronize around the PPP level, we consider that the ratio of the two data adjusts to the original exchange rate’s PPP level. Therefore, the degree of synchronization measures the impact of PPP on the original exchange rate. The impact of PPP denotes the strength at which the exchange rate adjusts to the PPP level. If factors other than PPP significantly impact exchange rates, then the impact of PPP can be small.
We analyze the U.S. dollar (USD)/euro (EUR) and USD/Japanese yen (JPY) exchange rates. A monetary authority’s foreign exchange intervention hinders the establishment of the theory, and mutual influence between the currencies is necessary for this analysis. The USD, EUR, and JPY are the three most traded currencies and have floating exchange rate systems.
The linkages between the exchange rate and international monetary policy or the degree of openness of international capital markets are now summarized. Linkages between exchange rates may occur even for currencies of countries with less freedom of capital movement and less degree of openness in their international capital markets. However, such linkages are not the subject of our analysis. For example, we consider a country with a fixed exchange rate system (dollar pegged) in which the degree of freedom of capital movement and the openness of international capital markets are low to maintain the currency system. The regulation of capital transactions hinders the establishment of PPP; however, a strong linkage occurs between the country's currency and the dollar. However, this linkage is the result of government intervention and does not underlie the PPP that is the subject of this analysis. Therefore, the currencies in our analysis should be from countries with less government regulation or exchange rate intervention because such a linkage between exchange rates suggests a theoretical background, such as PPP. The United States, the euro area, and Japan fall into this category because of their freedom of capital movement and open international capital markets.
Our analysis shows that the degree of synchronization is stably high between the USD and EUR and between the USD and JPY during certain periods.\footnote{
More precisely, it is the synchronization between the deviations of the numeraire-denominated USD and EUR (or USD and JPY) exchange rates from PPP. See Section 4.3.
}
This result suggests that the USD/EUR and USD/JPY exchange rates adjust to PPP during the period. During other periods, the degree of synchronization does not maintain a high level for either the USD/EUR or USD/JPY. This result suggests that certain factors other than PPP affect the exchange rate during a period.
The remainder of this paper is structured as follows. Section \ref{Synchronization} defines synchronization, and Section \ref{Data} explains the data used in this analysis. Section \ref{PPP and frequency band} provides calculations of exchange rate deviation from PPP and introduces a frequency-based filter. Section \ref{Synchronization analysis using the Hilbert transform} conducts a synchronization analysis using the Hilbert transform. Section \ref{Results and interpretation} provides the results of the synchronization analysis. Section \ref{Comparison with the correlation coefficient} discusses the difference between a synchronization analysis and the correlation coefficient. Finally, Section \ref{Conclusions} concludes our paper.
\section{Synchronization}
\label{Synchronization}
\begin{figure}
\begin{center}
\subfloat{({\bf a}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{f1a.eps}}
\subfloat{({\bf b}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{f1b.eps}}\\
\end{center}
\begin{spacing}{1.1}
Figure 1. (a) Two phase-synchronized time series. Note: “$\bullet$” (orange) represents $\sin(2\pi t)+5$, and “$\times$” (blue) represents $2\sin(2\pi(t-0.25))+5$. The amplitudes of two time series are different. Although the phases of two time series are also different from each other, the phase difference between two time series is kept constant.
(b) The ratio of the two synchronized time series. Note: “$\bullet$” (purple) represents $\frac{2\sin(2\pi(t-0.25))+5}{\sin(2\pi t)+5}$. This ratio oscillates around 1.
\end{spacing}
\label{fig:f1}
\end{figure}
Synchronization is a phenomenon in which a pair of fluctuations adjust their rhythms through mutual interactions. Rhythm adjustment indicates that the phase difference between two time series adjusts to remain constant for a certain time interval. We use the notion of phase to capture the degree of synchronization. The synchronization concept is explained using the simple oscillations $f(t)=\sin(2\pi t)+5$ and $g(t)=2\sin(2\pi(t-0.25))+5$ [Figure 1(a)]. The phase represents a specific position in $(-\pi,\pi]$ within one cycle of the oscillation data.\footnote{
For details, see Section \ref{Hilbert transform and instantaneous phase} for the definition of the phase.
}
The phases of $f(t)$ and $g(t)$ are $2\pi t$ and $2\pi(t-0.25)$, respectively, where each phase is converted into a value in $(-\pi,\pi]$. Two time series, namely, $f(t)$ and $g(t)$, are synchronized if the phase difference is constant in time.
This synchronization is called phase synchronization. In this example, the phase difference is $0.5\pi$ for all $t$, so the two time series are synchronized. Figure 1(b) illustrates the ratio of the two synchronized time series, $\frac{2\sin(2\pi(t-0.25))+5}{\sin(2\pi t)+5}$, which fluctuates around a reference value of approximately 1. In other words, this ratio has the property of returning to the reference value. This property is used to quantify the strength of the exchange rate returning to the PPP level.
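A minimal numerical version of this toy example (ours) is given below; it generates the two sinusoids, checks that their phase difference is constant, and confirms that their ratio fluctuates around 1:
\begin{verbatim}
import numpy as np

t = np.linspace(0.0, 3.0, 600)
f = np.sin(2 * np.pi * t) + 5
g = 2 * np.sin(2 * np.pi * (t - 0.25)) + 5

phase_f = 2 * np.pi * t                      # known phases of the two oscillations
phase_g = 2 * np.pi * (t - 0.25)
print(np.allclose(phase_f - phase_g, 0.5 * np.pi))  # constant phase difference of pi/2

ratio = g / f                                # ratio of the two synchronized series
print(ratio.mean())                          # close to the reference value of 1
\end{verbatim}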
\section{Data}
\label{Data}
Monthly average nominal exchange rate data for the USD/EUR, USD/JPY, Australian dollar (AUD)/USD, AUD/EUR, AUD/JPY, New Zealand dollar (NZD)/USD, NZD/EUR, and NZD/JPY during 1999:01--2017:12, 1987:01--2017:12, 1999:01--2017:12, 1999:01--2017:12, 1987:01--2017:12, 1999:01--2017:12, 1999:01--2017:12, and 1987:01--2017:12, respectively, are used. These data are obtained from \textit{Datastream}. Monthly producer price index data are used as the price level data of the United States, the euro area, and Japan, for which 2010 data are normalized to 100. These data are obtained from the \textit{International Financial Statistics} of the International Monetary Fund (IMF) website. Current account balances (percent of GDP) of the United States, the euro area, and Japan are yearly data obtained from the \textit{World Economic Outlook} of the IMF website.
\section{PPP and frequency band}
\label{PPP and frequency band}
\subsection{PPP}\label{PPP}
The PPP level is calculated as
\begin{equation}
\rho_t^{USDj}=S_{base}^{USDj}\frac{P_t^j/P_{base}^j}{P_t^{USD}/P_{base}^{USD}},
\label{eq:PPP}
\end{equation}
where $S_t^{USDj}$ denotes the USD/currency $j$ ($j=$EUR, JPY) exchange rate at time $t$, $S_{base}^{USDj}$ denotes the USD/currency $j$ ($j=$EUR, JPY) exchange rate at a base time, $P_t^j$ denotes the country of currency $j$'s price index at time $t$, and $P_{base}^j$ denotes the country of currency $j$'s price index at the base time. To calculate PPP, the base time is selected as a time when the current account balance is close to zero: 2010 for $\rho_t^{USDEUR}$ and 1991 for $\rho_t^{USDJPY}$. Exchange rate fluctuations around PPP are calculated as
\begin{equation}
\xi_t^{USDj}:=\frac{S_t^{USDj}}{\rho_t^{USDj}}.
\label{eq:FPPP}
\end{equation}
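The following short Python sketch (ours; the series shown are illustrative placeholders, not the actual data) implements Eqs.~\eqref{eq:PPP} and \eqref{eq:FPPP}:
\begin{verbatim}
import numpy as np

def ppp_level(s_base, p_j, p_usd, base_idx):
    """PPP level rho_t = S_base * (P_t^j / P_base^j) / (P_t^USD / P_base^USD)."""
    return s_base * (p_j / p_j[base_idx]) / (p_usd / p_usd[base_idx])

def ppp_deviation(s, rho):
    """Deviation xi_t = S_t / rho_t of the exchange rate around PPP."""
    return s / rho

# illustrative placeholder series (monthly), not the actual data
s_usd_eur = np.array([0.95, 0.90, 0.88, 0.92])   # USD/EUR exchange rate
p_eur = np.array([100.0, 101.0, 102.0, 103.0])   # euro-area producer price index
p_usd = np.array([100.0, 102.0, 104.0, 105.0])   # U.S. producer price index
rho = ppp_level(s_base=s_usd_eur[0], p_j=p_eur, p_usd=p_usd, base_idx=0)
xi = ppp_deviation(s_usd_eur, rho)               # fluctuations around PPP
\end{verbatim}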
Figure 2(a) represents the USD/EUR exchange rate and PPP level. When the USD/EUR exchange rate deviates from the PPP level, it returns to its level in approximately one to five years. Since 2015, the exchange rate has deviated from the PPP level. Figure 2(b) represents the divergence of the USD/EUR exchange rate from PPP, which fluctuates around 1. Similarly, Figure 2(c) represents the USD/JPY exchange rate and PPP level. The USD/JPY exchange rate returns to the PPP level approximately every two to five years. In addition, the return time is longer than that of the USD/EUR exchange rate. Since 2013, the exchange rate has deviated from the PPP level. Figure 2(d) represents the divergence of the USD/JPY exchange rate from the PPP level, which fluctuates around 1.
\begin{figure}
\begin{center}
\subfloat{({\bf a}) }{\includegraphics[clip, width=0.435\columnwidth,height=0.31\columnwidth]{f2a_s2.eps}}
\subfloat{({\bf b}) }{\includegraphics[clip, width=0.435\columnwidth,height=0.31\columnwidth]{f2b.eps}}\\
\vspace{2mm}
\subfloat{({\bf c}) }{\includegraphics[clip, width=0.435\columnwidth,height=0.31\columnwidth]{f2c_s2.eps}}
\subfloat{({\bf d}) }{\includegraphics[clip, width=0.435\columnwidth,height=0.31\columnwidth]{f2d.eps}}
\end{center}
\begin{spacing}{1.1}
Figure 2. (a) USD/EUR exchange rate and PPP level ($\rho_t^{USDEUR}$). Note: “$\bullet$” (orange) represents the USD/EUR exchange rate, and “$\times$” (blue) represents the PPP level. When the USD/EUR exchange rate deviates from the PPP level, it returns to its level in approximately one to five years.
(b) The exchange rate fluctuations around PPP ($\xi_t^{USDEUR}$).
Note: “$\bullet$" (purple) represents the divergence of the USD/EUR exchange rate from PPP. The value fluctuates around 1.
(c) USD/JPY exchange rate and PPP level ($\rho_t^{USDJPY}$). Note: “$\bullet$” (orange) represents the USD/JPY exchange rate, and “$\times$” (blue) represents the PPP level. The USD/JPY exchange rate returns to the PPP level approximately every two to five years, a longer return time than for the USD/EUR exchange rate.
(d) The exchange rate fluctuations around PPP ($\xi_t^{USDJPY}$). Note: “$\bullet$” (purple) represents the divergence of the USD/JPY exchange rate from the PPP level. The value fluctuates around 1.
\end{spacing}
\label{fig:f2}
\end{figure}
\subsection{Power spectrum}
\label{Power spectrum}
Exchange rates have many determinants other than PPP. If the data contain fluctuations of various sizes, then the appropriate phase cannot be clearly defined. Therefore, we need to identify the frequency band in which PPP holds and to generate data restricted to that band. The power spectrum is used to identify the frequency band of the exchange rate fluctuation around PPP.
See Appendix B Eq.~(\ref{eq:A9}) for the definition of the power spectrum.
Figure 3(a) shows the log power of $ \xi_t^{USDEUR}$, where the horizontal axis represents the monthly frequency. For example, the leftmost part of the figure indicates that a frequency component with a period of 228 months has the highest power. The arrow represents the band used in this analysis. Figure 3(b) shows the log power of $ \xi_t^{USDJPY}$.
The power spectrum implies that the USD/EUR exchange rate fluctuates around PPP in frequency ranges of approximately 32.6--228.0 and 38.0--228.0 months. Then, the time series is extracted within that frequency band using the lower cutoff frequency of $k_0=1$ and upper cutoff frequency of $k_1=7$, corresponding to 228.0 $(\approx228/k_0)$ and 32.6 $(\approx228/k_1)$ months, respectively. (See Appendix B for the definition of $k_0$ and $k_1$.) Thus, if PPP holds, then deviations of the USD/EUR exchange rate from PPP vanish within 16.3 (32.6/2) to 114.0 (228.0/2) months. Therefore, in the USD/EUR analysis, we focus on the frequency bands of 32.6--228.0 and 38.0--228.0 months.
Similarly, the corresponding frequency bands of 37.2--186.0 or 41.3--186.0 months are expected for the USD/JPY exchange rate. Thus, if PPP holds, then deviations of the USD/JPY exchange rate from PPP vanish within 18.6 (37.2/2) to 93.0 (186.0/2) months. We focus on 37.2--186.0 and 41.3--186.0 months as the frequency bands in the USD/JPY analysis. Movements in these bands are considered to represent price adjustments through trade and productivity adjustments.
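A minimal sketch of this computation (ours; the exact normalization of the power spectrum follows Appendix B and is irrelevant for locating the dominant band) is as follows:
\begin{verbatim}
import numpy as np

def log_power(x):
    """Log power of the Fourier harmonics of x (zero-frequency term dropped)."""
    x = np.asarray(x, dtype=float)
    spec = np.fft.rfft(x - x.mean())
    return np.log(np.abs(spec[1:]) ** 2)

def periods_in_months(n_months):
    """Periods n/k (in months) associated with harmonics k = 1, 2, ..."""
    k = np.arange(1, n_months // 2 + 1)
    return n_months / k

# e.g. for the 228-month USD/EUR sample, harmonics k = 1,...,7 correspond to
# periods of 228/k, i.e. roughly 228.0 down to 32.6 months -- the band used in the text
\end{verbatim}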
\begin{figure}
\begin{center}
\subfloat{({\bf a}) }{\includegraphics[clip, width=0.435\columnwidth,height=0.31\columnwidth]{f3a.eps}}
\subfloat{({\bf b}) }{\includegraphics[clip, width=0.435\columnwidth,height=0.31\columnwidth]{f3b.eps}}\\
\end{center}
\begin{spacing}{1.1}
Figure 3. (a) The log power of the exchange rate fluctuations around PPP ($\xi_t^{USDEUR}$). Note: The horizontal axis represents monthly frequency. The arrow represents the band used in this analysis. The exchange rate fluctuations around PPP ($\xi_t^{USDEUR}$) oscillate in the frequency band represented by the arrows. Therefore, we expect that the USD/EUR exchange rate fluctuates around PPP in approximate frequency ranges of 32.6--228.0 and 38.0--228.0 months.
(b) The log power of the exchange rate fluctuations around PPP ($\xi_t^{USDJPY}$). Note: We can expect that the USD/JPY exchange rate fluctuates around PPP ($\xi_t^{USDJPY}$) in the approximate frequency ranges of 37.2--186.0 or 41.3--186.0 months.
\end{spacing}
\label{fig:f3}
\end{figure}
\subsection{Numeraire}
\label{Numeraire}
We analyze the synchronization between fluctuations in the USD and currency $j$ of $\xi_t^{USDj}$ using the numeraire to divide $\xi_t^{USDj}$ into USD and currency $j$ parts. Then, the synchronization between these two time series is studied. \cite{Frankel_1994} and \cite{McKinnon_2004} adopted this method to analyze the linkage between multiple exchange rates using numeraire-denominated exchange rates.
$\xi_t^{USDj}$ can be written using the AUD numeraire as\footnote{
Numeraire often uses the currency of the floating exchange rate system. The Swiss franc (CHF) and NZD are also often used for numeraire. We do not use the CHF because Swiss National Bank sets minimum exchange rate at CHF 1.20 per EUR from 2011 to 2015. Changing the NZD to numeraire does not affect the result. See Appendix C for details.
}
\begin{equation}
\xi_t^{USDj}\approx\frac{\xi_t^{AUDj}}{\xi_t^{AUDUSD}}=\frac{\frac{S_t^{AUDj}}{S_{base}^{AUDj}({P_t^j}/{P_{base}^j})}}{\frac{S_t^{AUDUSD}}{S_{base}^{AUDUSD}({P_t^{USD}}/{P_{base}^{USD}})}}=\frac{\xi^{\prime~AUDj}_t}{\xi^{\prime~AUDUSD}_t},
\label{eq:FPPP2}
\end{equation}
where
$$\xi^{\prime~AUDUSD}_t:=\frac{S_t^{AUDUSD}}{S_{base}^{AUDUSD}({P_t^{USD}}/{P_{base}^{USD}})},$$
and
$$\xi^{\prime~AUDj}_t:=\frac{S_t^{AUDj}}{S_{base}^{AUDj}({P_t^j}/{P_{base}^j})}.$$ $\xi^{\prime~AUDUSD}_t$
\footnote{$\xi^\prime_t$ is not PPP in a strict sense because it excludes the inflation rate of numeraire. However, we use $ \xi^\prime_t$ because of the small sample size of the PPI data of Australia. As shown in
Eq.~\eqref{eq:FPPP2}, this condition does not affect the analysis because the $\xi^\prime_t$ ratio is the same as that of $ \xi_t$.
}
and $\xi^{\prime~AUDj}_t$ exclude the AUD inflation rate from $ \xi_t^{AUDUSD}$ and $\xi_t^{AUDj}$. When $\xi^{\prime~AUDUSD}_t$ and $\xi^{\prime~AUDj}_t$ have the same fluctuation, $\xi_t^{USDj}$ fluctuates around 1 (see Figure 1), thus suggesting the establishment of PPP.
\subsection{Band-pass filter}
\label{Band-pass filter}
We extract a frequency band of a numeraire-denominated exchange rate using PPP as previously described. Time series data are generated with a frequency estimated from each $\xi^{\prime~AUDUSD}_t$ and $\xi^{\prime~AUDj}_t$ using a band-pass filter.\footnote{
See Appendix B for the band-pass filter details. \cite{Rodriguez_1999} and \cite{Varela_2001} perform a synchronization analysis using band-pass-filtered data for brain science research. Business cycle studies often use band-pass-filtered economic time series data \citep{Baxter_1999, Calderon_2007}.
}
Figure 4(a) shows $ \eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$ with a 32.6--228.0-month band-pass filter applied to $\xi^{\prime~AUDUSD}_t$ and $\xi^{\prime~AUDEUR}_t$.
The two time series fluctuation rhythms do not adjust around 2015 but do so during other periods. Figure 4(b) indicates $ \eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$ with a 38.0--228.0-month band-pass filter applied to $\xi^{\prime~AUDUSD}_t$ and $\xi^{\prime~AUDEUR}_t$. The fluctuation
rhythms of the two time series do not adjust around 2006 but do so during other periods.
Figure 4(c) shows $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$ with a 37.2--186.0-month band-pass filter applied to $\xi^{\prime~AUDUSD}_t$ and $\xi^{\prime~AUDJPY}_t$. The two time series fluctuation rhythms adjust around 2001 but do not do so during other periods. Figure 4(d) shows $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$ with a 41.3--186.0-month band-pass filter applied to $\xi^{\prime~AUDUSD}_t$ and $\xi^{\prime~AUDJPY}_t$. The two time series fluctuation rhythms adjust around 2001 but do not do so during other periods.
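A simple FFT-based implementation consistent with these cutoff harmonics is sketched below (ours; the precise filter used in the analysis is specified in Appendix B):
\begin{verbatim}
import numpy as np

def band_pass(x, k0, k1):
    """Keep only Fourier harmonics k0 <= k <= k1 of the real series x."""
    x = np.asarray(x, dtype=float)
    spec = np.fft.rfft(x - x.mean())
    mask = np.zeros(len(spec))
    mask[k0:k1 + 1] = 1.0
    return np.fft.irfft(spec * mask, n=len(x))

# e.g. eta_t = band_pass(xi_t, k0=1, k1=7) retains periods between n/7 and n months,
# i.e. 32.6--228.0 months for the 228-month USD/EUR sample
\end{verbatim}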
\begin{figure}
\begin{center}
\subfloat{({\bf a}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{f4a_s.eps}}
\subfloat{({\bf b}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{f4b_s.eps}}\\
\vspace{2mm}
\subfloat{({\bf c}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{f4c_s.eps}}
\subfloat{({\bf d}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{f4d_s.eps}}
\end{center}
\begin{spacing}{1.1}
Figure 4. (a) The 32.6--228.0-month band-pass filtered data $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$ of the exchange rate fluctuations around PPP ($\xi^{\prime~AUDUSD}_t$ and $\xi^{\prime~AUDEUR}_t$). Note: The two time series fluctuation rhythms do not adjust around 2015 but do so during other periods.
(b) The 38.0--228.0-month band-pass filtered data $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$ of the exchange rate fluctuations around PPP ($\xi^{\prime~AUDUSD}_t$ and $\xi^{\prime~AUDEUR}_t$). Note: The two time series fluctuation rhythms do not adjust around 2006 but do so during other periods.
(c) The 37.2--186.0-month band-pass filtered data $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$ of the exchange rate fluctuations around PPP ($\xi^{\prime~AUDUSD}_t$ and $\xi^{\prime~AUDJPY}_t$). Note: The two time series fluctuation rhythms adjust around 2001 but not during other periods.
(d) The 41.3--186.0-month band-pass filtered data $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$ of the exchange rate fluctuations around PPP ($\xi^{\prime~AUDUSD}_t$ and $\xi^{\prime~AUDJPY}_t$). Note: The two time series fluctuation rhythms adjust around 2001 but not during other periods.
\end{spacing}
\label{fig:f4}
\end{figure}
\section{Synchronization analysis using the Hilbert transform}
\label{Synchronization analysis using the Hilbert transform}
\subsection{Hilbert transform and instantaneous phase}
\label{Hilbert transform and instantaneous phase}
In this paper, the Hilbert transform is used in our synchronization analysis to define a phase at each time. Some previous studies have applied the Hilbert transform to the analysis of macroeconomic data. \cite{Ikeda_2013} analyze the business cycle in Japan using Indices of Industrial Production (IIP) data for 16 industrial sectors. They use the Hilbert transform to calculate phases from the data and find a partial phase synchronization of Japan's business cycles. \cite{Vodenska_2016} study the interactions between global equity and foreign exchange markets by analyzing daily price data for foreign exchange markets and major stock indices for 48 countries. They use the complex Hilbert principal component analysis (CHPCA), which is a complex version of principal component analysis that uses Hilbert-transformed values for the imaginary part of the complex time series. They show that the information on lead-lag relationships in financial markets and exchange rates obtained by CHPCA can serve as an early warning system (EWS) against systemic risk contagion. The Hilbert transform is also applied to analyses in the finance field. \cite{Fusai_2016} propose a Wiener-Hopf factorization of complex functions that can be applied to the pricing of barrier and look-back options when monitoring is discrete. This method uses the Hilbert and z-transforms. \cite{Phelan_2018} extend the discretely monitored option pricing method of \cite{Fusai_2016} to the case of continuous monitoring. In addition, they examine the truncation error of the sinc-based Hilbert transform to investigate the error of the pricing method. \cite{Phelan_2019} improve the option pricing method using the sinc-based Hilbert transform with a spectral filter. This method can be applied to, for example, the pricing of barrier options when monitoring is discrete. \cite{Phelan_2020} present new pricing methods for Bermudan, American, and $\alpha$-quantile options that use the Hilbert transform, which have the advantage of small errors and fast CPU times.
We define a phase to measure the phase difference and assume that $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDj}_t$ are obtained from the real part of the complex variable data. The imaginary part of the complex data can be generated using the Hilbert transform value of the real part. The Hilbert transform can be realized by an ideal filter, for which the amplitude and phase responses are unity and a constant $\pi/2$ lag at all Fourier frequencies (\cite{Pikovsky_2001}, pp. 362-363). The Hilbert transform is expressed as
\begin{equation}
s_t^H=\frac{1}{\pi}P.V.\int_{-\infty}^{\infty}{\frac{s_\tau}{t-\tau}d\tau,}
\label{eq:Hilbert}
\end{equation}
where $s_t$ denotes the time series data at time $t$ and $P.V.$ denotes the Cauchy principal value of the integral.
A complex valued time series ${\hat{s}}_t$ is constructed whose real part is actual data $s_t$, and the imaginary part $s_t^H$ is generated from $s_t$ using the Hilbert transform:
\begin{equation}
{\hat{s}}_t=s_t+s_t^Hi.
\label{eq:complex}
\end{equation}
The frequency spectrum of the complex number is as follows:
\begin{equation}
\hat{S}\left(f\right)=
\begin{cases}
2S(f) & f>0\\
S(f) & f=0\\
0 & f<0
\end{cases}
\label{eq:complex_f}
\end{equation}
where $\hat{S}\left(f\right)$ is the Fourier transform of ${\hat{s}}_t$, and $S\left(f\right)$ is the Fourier transform of $s_t$. We compute ${\hat{s}}_t$ using forward and inverse Fourier transforms as follows:
\begin{equation}
{\hat{s}}_t=F^{-1}\left(S\left(f\right)2U\right)=s_t+s_t^Hi
\label{eq:complex_c}
\end{equation}
where $F^{-1}$ is an inverse Fourier transform, and $U$ is the unit step function.\footnote{
We use the python program “scipy.signal.Hilbert” to obtain the Hilbert transform value from this calculation. This approach can be further refined with a sinc functions expansion yielding exponential rather than polynomial convergence of the error on the grid size, as explained in \cite{Fusai_2016}.
}
This calculation provides the Hilbert transform value $s_t^H$.
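A minimal implementation of Eq.~\eqref{eq:complex_c} (ours) is sketched below; it is equivalent, up to numerical details, to the Python routine scipy.signal.hilbert mentioned in the footnote:
\begin{verbatim}
import numpy as np
from scipy.signal import hilbert

def analytic_signal(x):
    """s_t + i s_t^H via forward/inverse FFT, following the unit-step weighting above."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    spec = np.fft.fft(x)
    step = np.zeros(n)
    step[0] = 1.0                       # f = 0 component kept once
    step[1:(n + 1) // 2] = 2.0          # positive frequencies doubled
    if n % 2 == 0:
        step[n // 2] = 1.0              # Nyquist bin for even n
    return np.fft.ifft(spec * step)

x = np.sin(2 * np.pi * np.arange(400) / 40.0)
assert np.allclose(analytic_signal(x), hilbert(x))   # matches scipy.signal.hilbert
\end{verbatim}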
Phase is defined by the angle $\phi_t$ formed by the horizontal axis and complex variable data
\begin{equation}
\phi_t=
\begin{cases}
\tan^{-1}\left(\frac{s_t^H}{s_t}\right) & (s_t>0)\\
\tan^{-1}\left(\frac{s_t^H}{s_t}\right)+\pi & (s_t<0)
\end{cases}
.
\label{eq:Phase}
\end{equation}
Phase at a certain time is called an instantaneous phase. The value can be discontinuous over time because it ranges from $-\pi$ to $\pi$. Figure 5(a) shows $s_t=\sin(2\pi t)$ and its Hilbert transform value $s_t^H=\sin(2\pi(t-0.25))$; that is, $s_t^H$ lags $s_t$ by a phase of $\pi/2$. Figure 5(b) shows the behavior of $(s_t, s_t^H)$ in the complex plane.\footnote{An outlier occurs at both ends of the Hilbert transform values when numerical computations are performed. Thus, certain complex variable
data near $(0,-1)$ in the complex plane deviate slightly from the
unit circle. Therefore, we exclude 10 data points from both ends in the following analysis.
}
Phase is identified by the angle $\phi_t$ formed by the real axis and complex variable data.
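The instantaneous phase of Eq.~\eqref{eq:Phase} can then be computed from the analytic signal as sketched below (ours; the branch handling of numpy's angle function is equivalent to Eq.~\eqref{eq:Phase} up to multiples of $2\pi$, the 10 points dropped at each end follow the footnote above, and the unwrapped phase used below for the phase difference is obtained with numpy.unwrap; the series names eta_usd and eta_jpy are placeholders):
\begin{verbatim}
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase(x, trim=10):
    """Wrapped and unwrapped instantaneous phase of a band-passed series."""
    z = hilbert(np.asarray(x, dtype=float))   # s_t + i s_t^H
    phi = np.angle(z)                         # wrapped phase in (-pi, pi]
    phi_unwrapped = np.unwrap(phi)            # made continuous in time
    return phi[trim:-trim], phi_unwrapped[trim:-trim]   # drop 10 points at each end

# the phase difference psi_t defined below is then
# psi = instantaneous_phase(eta_usd)[1] - instantaneous_phase(eta_jpy)[1]
\end{verbatim}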
\begin{figure}
\begin{center}
\subfloat{({\bf a}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{f5a.eps}}
\subfloat{({\bf b}) }{\includegraphics[clip, width=0.28\columnwidth,height=0.28\columnwidth]{f5b.eps}}\\
\end{center}
\begin{spacing}{1.1}
Figure 5. (a) $s_t=\sin(2\pi t)$ and its Hilbert transform value $s_t^H=\sin(2\pi(t-0.25))$. Note: “$\times$” (blue) represents $s_t^H=\sin(2\pi(t-0.25))$ delayed by $\pi/2$ from $s_t=\sin(2\pi t)$ and is created by the Hilbert transform. (b) Behavior of $(s_t,\ s_t^H)$ on a complex plane. Note: The phase is defined by the angle $\phi_t$ formed by the horizontal axis and the complex variable data $(s_t,\ s_t^H)$.
\end{spacing}
\label{fig:f5}
\end{figure}
Figures 6(a) and (b) show the behavior of $(s_t, s_t^H)$, where $s_t=\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$ with 32.6--228.0 months, respectively. Figures 6(c) and (d) show the behavior of $(s_t, s_t^H)$, where $s_t=\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$ with 37.2--186.0 months, respectively.
\begin{figure}
\begin{center}
\subfloat{({\bf a}) }{\includegraphics[clip, width=0.28\columnwidth,height=0.28\columnwidth]{f6a.eps}}
\subfloat{({\bf b}) }{\includegraphics[clip, width=0.28\columnwidth,height=0.28\columnwidth]{f6b.eps}}\\
\vspace{2mm}
\subfloat{({\bf c}) }{\includegraphics[clip, width=0.28\columnwidth,height=0.28\columnwidth]{f6c.eps}}
\subfloat{({\bf d}) }{\includegraphics[clip, width=0.28\columnwidth,height=0.28\columnwidth]{f6d.eps}}
\end{center}
\begin{spacing}{1.1}
Figure 6. (a) Behavior of $(s_t,\ s_t^H)$, where $s_t=\eta^{\prime~AUDUSD}_t$ with 32.6--228.0 months. (b) Behavior of $(s_t,\ s_t^H)$, where $s_t=\eta^{\prime~AUDEUR}_t$ with 32.6--228.0 months. (c) Behavior of $(s_t,\ s_t^H)$, where $s_t=\eta^{\prime~AUDUSD}_t$ with 37.2--186.0 months. (d) Behavior of $(s_t,\ s_t^H)$, where $s_t=\eta^{\prime~AUDJPY}_t$ with 37.2--186.0 months. Note: When defining the phase, the complex data should rotate around the origin. The phase can jump when the complex data move quite close to the origin. From the figure, the complex data seem to rotate around the origin, although a few small circles are mixed in.
\end{spacing}
\label{fig:f6}
\end{figure}
Figures 7(a)--(d) show instantaneous phases of 32.6--228.0, 38.0--228.0, 37.2--186.0, and 41.3--186.0 months for $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$, $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$, $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$, and $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$, respectively. The phase differences in Figures 7(a)--(d) are not constant around 2006 (a), 2006 (b), 2005 and 2012 (c), and 2005 (d), respectively; however, all of them are almost constant in other periods.
\begin{figure}
\begin{center}
\subfloat{({\bf a}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{f7a_s.eps}}
\subfloat{({\bf b}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{f7b_s.eps}}\\
\vspace{2mm}
\subfloat{({\bf c}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{f7c_s.eps}}
\subfloat{({\bf d}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{f7d_s.eps}}
\end{center}
\begin{spacing}{1.1}
Figure 7. (a) The instantaneous phase of 32.6--228.0 months $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$. (b) The instantaneous phase of 38.0--228.0 months $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$. (c) The instantaneous phase of 37.2--186.0 months $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$. (d) The instantaneous phase of 41.3--186.0 months $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$. Note: The phase differences in Figures 7(a)--(d) are not constant around (a) 2006, (b) 2006, (c) 2005 and 2012, and (d) 2005, respectively; however, all are almost constant during other periods.
\end{spacing}
\label{fig:f7}
\end{figure}
Figure 7 distinguishes between periods in which the phase difference is constant and periods in which it fluctuates. Discontinuity in the instantaneous phase affects the analysis of the phase difference. Therefore, an unwrapped instantaneous phase, defined so that the instantaneous phase changes continuously over time, is used.
The phase difference is expressed as
\begin{equation}
\psi_t={\hat{\phi}}_t^{AUDUSD}-{\hat{\phi}}_t^{AUDj},
\label{eq:Phase_d}
\end{equation}
where ${\hat{\phi}}_t^{AUDUSD}$ and ${\hat{\phi}}_t^{AUDj}$ denote unwrapped instantaneous phases of $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDj}_t$ at time $t$, respectively. We say that $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDj}_t$ synchronize in the time interval $[t_0, t_1]$ if there exists a constant $d$ and a sufficiently small positive constant $\varepsilon$, such that:
\begin{equation}
\left|\psi_t-d\right|<\varepsilon,
\label{eq:syncro_e}
\end{equation}
for $t_0\le{{}^\forall}t\le t_1$.
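A minimal sketch of this definition in code (our own; the input series \texttt{x} and \texttt{y} stand for two band-pass-filtered series, and the tolerance \texttt{eps} is a placeholder) might read:
\begin{verbatim}
import numpy as np
from scipy.signal import hilbert

def unwrapped_phase(x):
    """Unwrapped instantaneous phase of a band-pass-filtered series."""
    return np.unwrap(np.angle(hilbert(np.asarray(x, dtype=float))))

def is_synchronized(x, y, eps=0.3):
    """Require |psi_t - d| < eps over the whole interval."""
    psi = unwrapped_phase(x) - unwrapped_phase(y)   # phase difference psi_t
    d = psi.mean()                                  # constant d
    return bool(np.all(np.abs(psi - d) < eps))
\end{verbatim}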
\subsection{Synchronization index}
\label{Synchronization index}
The synchronization index $\gamma^2$ (\cite{Rosenblum_2001}) is employed for the time interval $1\le i\le W$ using the phase difference $\psi_i$ at time $i$ to measure the degree of synchronization between two time series,
\begin{equation}
\gamma^2=\left(\frac{1}{W}\sum_{i=1}^{W}{\cos\psi_i}\right)^2+\left(\frac{1}{W}\sum_{i=1}^{W}{\sin\psi_i}\right)^2.
\label{eq:Rosenblum1}
\end{equation}
The index $\gamma^2$ ranges from 0 to 1. When the phase difference between two time series is constant over time, $\psi_i$ takes a constant value. Thus, $\gamma^2$ takes a value close to 1 because $(\cos\psi_i,\mathrm{\ \sin}\psi_i)$ stays in the vicinity of one point on the unit circle. Therefore, if $\gamma^2$ is close to 1, then the two time series have a high degree of synchronization. In contrast, if $\gamma^2$ is close to 0, then the degree of synchronization is low.
Figure 8 shows the expected value of the synchronization index $\gamma^2$ with respect to $W$. When $W = 13$ is selected, the expected value of $\gamma^2$ is 0.077 and the standard deviation of $\gamma^2$ is 0.074. Therefore, if the values of $\psi_i$ are given at random, then the synchronization index takes a value close to 0.077, which serves as a reference value for judging the randomness of $\psi_i$.
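The reference value quoted above can be reproduced with a short Monte Carlo experiment (our own sketch; for uniformly random phases the expected value of $\gamma^2$ is $1/W$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
W, n_cases = 13, 10000
psi = rng.uniform(0.0, 2.0 * np.pi, size=(n_cases, W))
gamma2 = np.cos(psi).mean(axis=1) ** 2 + np.sin(psi).mean(axis=1) ** 2
print(gamma2.mean(), gamma2.std())   # about 0.077 and 0.074, as in Figure 8
\end{verbatim}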
\begin{figure}
\begin{center}
\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{f8.eps}
\end{center}
\begin{spacing}{1.1}
Figure 8. The expected value of the synchronization index $\gamma^2$ with respect to $W$ calculated from 10,000 cases. $\psi_i$ is chosen from a uniform distribution on $[0,\ 2\pi]$. When we select $W = 13$, the expected value of $\gamma^2$ is 0.077 and the standard deviation of $\gamma^2$ is 0.074. With $W=13$, as used in our analysis, the synchronization index expected for random phase differences is thus small. Therefore, if the synchronization index is near 1, the two time series are considered to be highly synchronized.
\end{spacing}
\label{fig:f8}
\end{figure}
We measure the synchronization index for the time interval $W$ at each time $t$,
\begin{equation}
\gamma_t^2=\left(\frac{1}{W}\sum_{i=t-p}^{t+p}{\cos\psi_i}\right)^2+\left(\frac{1}{W}\sum_{i=t-p}^{t+p}{\sin\psi_i}\right)^2,
\label{eq:Rosenblum2}
\end{equation}
where $p=(W-1)/2$ and $0<p<t$. In the following analysis, a window size of 13 (approximately one-year period) is set.
\footnote{
Changing the window size does not affect the result. The synchronization analysis is performed using four window sizes: 7 (approximately 0.5 year), 13 (approximately one year), 19 (approximately 1.5 years), and 25 (approximately two years). Although the amplitude magnitude is different, periods for which the synchronization index peaks do not change. In addition, periods for which the synchronization index maintains a high value do not change.
}
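Eq. (\ref{eq:Rosenblum2}) translates directly into a centered moving window over the phase-difference series (a sketch; \texttt{psi} denotes the unwrapped phase difference of Eq. (\ref{eq:Phase_d})):
\begin{verbatim}
import numpy as np

def sync_index(psi, W=13):
    """Moving-window synchronization index gamma_t^2."""
    psi = np.asarray(psi, dtype=float)
    p = (W - 1) // 2
    gamma2 = np.full(len(psi), np.nan)
    for t in range(p, len(psi) - p):
        w = psi[t - p:t + p + 1]          # W points centered at t
        gamma2[t] = np.cos(w).mean() ** 2 + np.sin(w).mean() ** 2
    return gamma2
\end{verbatim}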
\section{Results and interpretation}
\label{Results and interpretation}
\subsection{Summary of the results}
\label{Result summary}
Figure 9 shows the synchronization index. If this index is stably high, the degree of synchronization between $\eta^{\prime~AUDUSD}_t$ (the AUD/USD exchange rate band-pass-filtered fluctuation around PPP, excluding the AUD inflation rate) and $\eta^{\prime~AUDj}_t$ is high. We call the synchronization index stably high when it maintains a high value for a certain period, suggesting that the USD/currency $j$ exchange rate fluctuates around the PPP level and confirming the establishment of PPP (see Figure 1). Conversely, if the synchronization index does not maintain a high level, the degree of synchronization between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDj}_t$ is low. Even if the synchronization index is temporarily high, the two instantaneous phase rhythms may have matched merely by chance. Therefore, the degree of synchronization in such cases is not necessarily high. A low degree of synchronization suggests that the USD/currency $j$ exchange rate fluctuated during this period because of factors other than PPP. For example, interest rate differences affect exchange rates. If the exchange rate fluctuates due to interest rate differences, then PPP may not hold. Therefore, the two time series have a low degree of synchronization in such periods.
Figure 9(a) shows the time development of the synchronization index ${\hat{\gamma}}_p^2$ between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$. The degree of synchronization is stably high during 2000:05--2005:08 and 2007:09--2013:04. This finding suggests that the USD/EUR exchange rate fluctuates around the PPP level in these periods. From 2002:04 to 2002:09, the degree of synchronization decreases slightly in the short term. Therefore, we consider that this period belongs to a stably high period. Similarly, from 2013:05 to 2014:07, the degree of synchronization decreases slightly. However, when we replace the numeraire, the degree of synchronization is significantly reduced (see Figure 10 in Appendix C). Therefore, this period is not stably high.
Figure 9(b) shows the time development of the synchronization index $\hat{\gamma}_p^2$ between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$. The degree of synchronization is stably high during 1991:08--1997:12, 1999:04--2003:10, and 2007:09--2010:10, suggesting that the USD/JPY exchange rate fluctuates around the PPP level during these periods. From these results, the degree of synchronization between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$ and between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$ is, for the most part, high. Therefore, PPP holds in the frequency band used in this analysis. The degree of synchronization is low in certain periods for two reasons. First, an economic event may have an asymmetric effect on each $\eta^\prime_t$. Second, factors other than PPP affect exchange rates. For example, exchange rates change when the interest rate difference between two countries increases.
\begin{figure}
\begin{center}
\subfloat{({\bf a}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{f9a_s.eps}}
\subfloat{({\bf b}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{f9b_s.eps}}\\
\end{center}
\begin{spacing}{1.1}
Figure 9. (a) The synchronization index between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$. Note: The gray-colored interval indicates that the synchronization index from 32.6 to 228.0 months between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$ is the synchronization index $\gamma^2\geq\gamma^{2\ast}(\approx0.92)$, where $\gamma^{2\ast}$ denotes $\text{(the\ average\ value)}+0.25(\text{standard\ deviation})$. (b) The synchronization index from 37.2 to 186.0 months between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$. Note: The gray-colored interval indicates that the synchronization index between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$ is $\gamma^2\geq\gamma^{2\ast}(\approx0.85)$.
\end{spacing}
\label{fig:f9}
\end{figure}
\subsection{Interpretation of the results}
\label{Results interpretation}
This section discusses the relationship between fluctuations in the synchronization indices in the previous section and economic events. First, the synchronization index between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$ is stably high in many periods, suggesting the establishment of PPP. However, the degree of synchronization did not remain high during 2005:09--2007:08 and after 2013:05. During 2005--2006, a housing bubble and subsequent housing market collapse occurred in the United States, and housing prices frequently fluctuated during this period. Therefore, PPP cannot explain the price index fluctuation in the United States during this period. Because an asymmetric event occurred, the degree of synchronization did not maintain a high level during 2005:09--2007:08. Since approximately 2014, the EUR has depreciated against the USD. In addition, the exchange rate from 2015 to the first half of 2017 was approximately 1 USD = 0.9 EUR and continued to diverge from the PPP level. The exchange rate moved toward the PPP level in the second half of 2017 (Figure 2(a)). The EUR has depreciated since 2014 because of interest rate reductions, the introduction of negative interest rates, and quantitative easing by the European Central Bank. PPP may not explain these exchange rate fluctuations. The period during which this exchange rate diverged from the PPP level overlaps with the period after 2013:05 during which the degree of synchronization did not maintain a high level. The Lehman Brothers’ bankruptcy in 2008:09 marked the beginning of a worldwide recession. During the economic crisis, countries’ economic variables tended to move in the same direction. Therefore, the degree of synchronization was high during 2008:09 and might not have been affected by PPP.
Second, the synchronization index between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$ is stably high in several periods, suggesting the establishment of PPP. However, the degree of synchronization does not remain high from 1988:05 to 1991:07, approximately during 1999, from 2003:11 to 2007:08, and after 2010:11. From 1986 to 1989, Japan experienced an economic bubble, and stocks and real estate prices soared. In addition, a technology bubble occurred around 1999. These events may have hindered the establishment of PPP from 1988:06 to 1991:07 and approximately during 1999. During 2005--2006, the United States experienced a housing bubble and subsequent housing market collapse. The event period overlaps with the period during which the degree of synchronization was not high between 2003:11 and 2007:08. The Great East Japan Earthquake in 2011:03 caused violent JPY fluctuations. During this period, the JPY appreciated because of JPY purchases for insurance claim payments following the earthquake. In addition, investors may have bought JPY in anticipation of the currency strengthening. During 2011, the degree of synchronization was low, which may have resulted from JPY fluctuations. Moreover, since 2013, the JPY has depreciated against the USD, and the exchange rate has remained away from the PPP level (see Figure 2(c)), because of the Bank of Japan’s quantitative easing and the rebound from the record-high JPY value reached in 2011. PPP may not explain this exchange rate fluctuation. Therefore, the degree of synchronization did not remain high after 2010:11. The Lehman Brothers collapse in 2008:09 initiated a worldwide recession. Countries’ economic variables during the economic crisis tended to move in the same direction. Therefore, the degree of synchronization was high during 2008:09 and might not have been affected by PPP.
\section{Comparison with correlation coefficient}
\label{Comparison with the correlation coefficient}
A correlation coefficient is often used in economic studies to measure the strength of the relationship between the movements of two time series. However, the time difference in phase between two synchronized time series affects correlation coefficients. In contrast, the synchronization index is not affected by the time difference in phase. Figure 10(a) shows the synchronization index and correlation coefficient between $\sin(2\pi t)$ and $\sin(2\pi(t+\mathrm{\Delta}t))$ as a function of $\mathrm{\Delta}t$. The synchronization index is 1 regardless of the existence of the time difference in phase. Depending on the size of $\mathrm{\Delta}t$, the correlation coefficient can take any value between $-1$ and 1. The synchronization index is useful for measuring the synchronization of two time series with the time difference in phase.
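The behavior summarized in Figure 10(a) is easy to reproduce numerically (our own sketch): for a pure phase shift the synchronization index stays at 1, while the correlation coefficient follows $\cos(2\pi\mathrm{\Delta}t)$.
\begin{verbatim}
import numpy as np
from scipy.signal import hilbert

t = np.arange(0.0, 10.0, 0.01)
for dt in (0.0, 0.125, 0.25):
    x, y = np.sin(2 * np.pi * t), np.sin(2 * np.pi * (t + dt))
    psi = np.unwrap(np.angle(hilbert(x))) - np.unwrap(np.angle(hilbert(y)))
    gamma2 = np.cos(psi).mean() ** 2 + np.sin(psi).mean() ** 2
    r = np.corrcoef(x, y)[0, 1]
    print(dt, round(gamma2, 3), round(r, 3))
# gamma2 is 1 in every case; r is about 1, 0.71 and 0, i.e. cos(2*pi*dt).
\end{verbatim}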
\begin{figure}
\begin{center}
\subfloat{({\bf a})}{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{f10a.eps}}\\
\vspace{2mm}
\subfloat{({\bf b}) }{\includegraphics[clip, width=0.95\columnwidth,height=0.28\columnwidth]{f10b_s.eps}}\\
\vspace{2mm}
\subfloat{({\bf c}) }{\includegraphics[clip, width=0.95\columnwidth,height=0.28\columnwidth]{f10c_s.eps}}
\end{center}
\begin{spacing}{1.1}
Figure 10. (a) The synchronization index and correlation coefficient between two time series $\sin(2\pi t)$ and $\sin(2\pi(t+\mathrm{\Delta t}))$. (b) The synchronization index and correlation coefficient from 32.6 to 228.0 months between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$. Note: “$\bullet$” (orange), “$\times$” (blue), and “$\blacktriangle$” (green) represent the synchronization index, absolute value of the correlation coefficient, and absolute value of the correlation coefficient from 32.6 to 228.0 months with the time difference, respectively, between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$. (c) The synchronization index and correlation coefficient from 37.2 to 186.0 months between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$. Note: “$\bullet$” (orange), “$\times$” (blue), and “$\blacktriangle$” (green) represent the synchronization index, absolute value of the correlation coefficient, and absolute value of the correlation coefficient from 37.2 to 186.0 months with the time difference, respectively, between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$.
\end{spacing}
\label{fig:f10}
\end{figure}
If the time difference in phase of two time series does not significantly change over time or is clearly known in each period, then the correlation coefficient can be used by shifting the time series with the time difference in phase. However, an appropriate time difference is difficult to determine from our data. We compare the synchronization index and correlation coefficient with the time difference in phase between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDj}_t$. Figures 10(b) and (c) show the time series of the synchronization index, the absolute value of the correlation coefficient, and the absolute value of the correlation coefficient with the time difference in phase.\footnote{
See Appendix D for the calculation method of the correlation coefficient with the time difference in phase.
}
The correlation coefficient is calculated using the same moving window (window size $W$ = 13) as that of the synchronization index. In addition, the time difference correlation coefficient considers the time difference in phase between two time series. Figure 10(b) shows the synchronization index and correlation coefficient between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$ from 32.6 to 228.0 months. Figure 10(c) shows the synchronization index and correlation coefficient between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$ from 37.2 to 186.0 months. As these figures show, when the synchronization index greatly decreases during a period, the correlation coefficient also decreases. The time series of the synchronization index and the correlation coefficient sometimes behave differently when the synchronization index is stably high. An appropriate time difference in phase is difficult to identify from our data, in which the lead-lag relationship varies over the short term. Thus, we employ the synchronization index.
In addition to the correlation coefficient, many other methods exist to analyze the relationship between two time series. For example, cross-correlation is often used. The differences between each of these methods and the synchronization index are as follows. Using the cross-correlation function:
\begin{equation}
R_\tau^{xy} = \int_{-\infty}^{\infty} x_t y_{t-\tau}~dt,
\label{eq:cross-correlation}
\end{equation}
we can measure the similarity between two time series $x, y$ with time difference $\tau$. However, we consider that the economic data used in our analysis are likely to vary in structure with economic shocks and changes in economic conditions. Therefore, we can expect that the time lag of the relationship between two time series can change over time. Identifying the time lag in advance is difficult and is the same problem as that of the correlation coefficient with the previously explained time differences. The synchronization method has an advantage in this aspect.
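For reference, a discrete counterpart of Eq. (\ref{eq:cross-correlation}) can be written as follows (a sketch; normalization and lag conventions differ across libraries):
\begin{verbatim}
import numpy as np

def cross_correlation(x, y):
    """Discrete analogue of R_tau = sum_t x_t y_{t-tau}, for all lags tau."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    lags = np.arange(-(len(y) - 1), len(x))
    return lags, np.correlate(x, y, mode="full")
\end{verbatim}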
\section{Concluding remarks}
\label{Conclusions}
We determine that the degrees of synchronization between $\eta^{\prime~AUDUSD}_t$ (band-pass-filtered fluctuation around the PPP of the AUD/USD exchange rate, excluding the AUD inflation rate) and $\eta^{\prime~AUDEUR}_t$ and between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$ are high most of the time. This result suggests that PPP holds in the long term and that the PPP level is the long-term equilibrium value of the USD/EUR and USD/JPY exchange rates. However, a high degree of synchronization is not maintained in several periods. This can be attributed to the occurrence of asymmetric economic events and of factors other than PPP that affect the exchange rate, such as the U.S. real estate bubble and the Great East Japan Earthquake.
Correlation coefficients are inappropriate in this study because the time difference in phase between the two time series changes frequently. In contrast, we use the synchronization index in our analysis to measure the degree of synchronization without identifying the value of the time difference in phase at each time. Therefore, synchronization analysis is suitable for our study.
We are also interested in which currency drives the synchronization in each period. However, because the synchronization index cannot identify this, methods such as Granger causality are needed; this will be addressed in future work.
\section*{Acknowledgements}
\label{Acknowledgements}
The authors are grateful to Prof.~Eiji Ogawa and Prof.~Masao Kumamoto for their insightful comments and helpful suggestions.
They would like to thank the anonymous referees for helpful comments.
This work was partly supported by JST PRESTO (JPMJPR16E5), JSPS KAKENHI (17K05360, 19K01593, 19KK0067 and 21K18584), Tokio Marine Kagami Memorial Foundation, and Asset Management One Co., Ltd.
\section*{Appendix A. Theories of Exchange Rate Determination}
\label{Appendix A. Theories of Exchange Rate Determination}
Many factors affect exchange rates. Typical examples include price levels, interest rates, and balances of payment. Here, we describe exchange rate fluctuations on the basis of these factors.
As a result of international commodity arbitrage, the law of one price is established.
\begin{equation}
S_t^{xy}P_t^x=P_t^y,
\label{eq:LOP}
\end{equation}
where $S_t^{xy}$ denotes the $x/y$ exchange rate at time $t$, $P_t^x$ denotes the price level of the country of currency $x$ at time $t$, and $P_t^y$ denotes the price level of the country of currency $y$ at time $t$.
The absolute PPP determines the exchange rate level from the ratio of the price levels between the two countries.
\begin{equation}
S_t^{xy}=\frac{P_t^y}{P_t^x}.
\label{eq:APPP}
\end{equation}
The relative PPP determines the rate of change in the exchange rate from the difference in inflation rates between the two countries.
\begin{equation}
\frac{S_t^{xy}-S_{t-1}^{xy}}{S_{t-1}^{xy}}=\dot{P}_t^y-\dot{P}_t^x,
\label{eq:RPPP}
\end{equation}
where $\dot{P}_t^x$ denotes the inflation rate of the country of currency $x$ at time $t$, and $\dot{P}_t^y$ denotes the inflation rate of the country of currency $y$ at time $t$.
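As a simple numerical illustration (our own numbers), if the inflation rates are $\dot{P}_t^y=0.03$ and $\dot{P}_t^x=0.01$, relative PPP implies $\frac{S_t^{xy}-S_{t-1}^{xy}}{S_{t-1}^{xy}}=0.03-0.01=0.02$, that is, the $x/y$ exchange rate changes by about $2\%$ over the period.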
The uncovered IRP determines the rate of change in the exchange rate from the difference in interest rates between the two countries.
\begin{equation}
\frac{S_t^{xy}-S_{t-1}^{xy}}{S_{t-1}^{xy}}=i_t^y-i_t^x,
\label{eq:UIRP}
\end{equation}
where $i_t^x$ denotes the interest rate of the country of currency $x$ at time $t$, and $i_t^y$ denotes the interest rate of the country of currency $y$ at time $t$.
Imbalances in the balance of payments affect exchange rate fluctuations. For example, a current account surplus in the home country increases the demand for the home currency, causing the home currency to appreciate. Conversely, a home country's current account deficit increases the supply of its currency, causing the home currency to depreciate.
This paper focuses on exchange rate deviations on the basis of the relative PPP. The deviations are expected to be influenced by long-run adjustments in price levels and the productivity between the two countries. Therefore, our analysis focuses on the long-term movement of exchange rates on the basis of PPP.
Interest rates and balance of payments are also considered to affect the long-term movement of exchange rates. We follow two steps to focus mainly on the relationship between PPP and the exchange rate. First, in our analysis, we use data on exchange rate deviations from PPP. Second, we estimate the frequency band that corresponds to adjustments in the exchange rate to PPP and extract the frequency band from the data. The analysis including other factors, such as interest rates and balance of payments, is left for future work.
\section*{Appendix B. Fourier band-pass filter and power spectrum}
\label{Appendix B. Fourier band-pass filter and power spectrum}
We focus on recurrent patterns in a specific time scale to measure the degree of synchronization between two time series data. We employ a band-pass filter using a Fourier series representation and briefly review the Fourier series of function $f$. For simplicity, let $f$ be a real-valued continuous periodic function on $[0, L)$. The function $f$ can be represented as a Fourier series in Eq. (\ref{eq:A1}) as
\begin{equation}
f(x)=\frac{a_0}{2}+\sum_{k=1}^{\infty}\left(a_k\cos\left(\frac{2\pi kx}{L}\right)+b_k\sin\left(\frac{2\pi kx}{L}\right)\right),
\label{eq:A1}
\end{equation}
where
\begin{equation}
a_k=\frac{2}{L}\int_{0}^{L}{f(x)\cos\left(\frac{2\pi kx}{L}\right)dx\mathrm{\ }(k=0, 1, 2, 3, \cdots)},
\label{eq:A2}
\end{equation}
\begin{equation}
b_k=\frac{2}{L}\int_{0}^{L}{f(x)\sin\left(\frac{2\pi kx}{L}\right)dx\mathrm{\ } (k=1, 2, 3, \cdots)}.
\label{eq:A3}
\end{equation}
We can consider the Fourier series for more general functions (e.g., Korner 2008). By taking a partial sum in Eq. (\ref{eq:A1}), we can create a band-pass-filtered periodic function $\widetilde{f}$ of a given function $f$ using bands $k$ for $1\le k_0\le k\le k_1$:
\begin{equation}
\widetilde{f}(x)=\sum_{k=k_0}^{k_1}{\left(a_k\cos\left(\frac{2\pi kx}{L}\right)+b_k\sin\left(\frac{2\pi kx}{L}\right)\right)\mathrm{\ } .}
\label{eq:A4}
\end{equation}
The transformation procedure from discrete non-periodic time series data $g_n=g(x_0+n\mathrm{\Delta}x)\mathrm{\ }(n=0,\cdots, N-1)$ to band-pass-filtered discrete periodic time series data ${\widetilde{f}}_n=\widetilde{f}(x_0+n\mathrm{\Delta}x)\mathrm{\ }(n=0,\cdots, N-1)$ is as follows.
\begin{enumerate}
\item Using the linear transformation determined by $g_0$ at $x_0$ and $g_{N-1}$ at $x_{N-1}$, convert a given set of uniformly discretized $N$ time series data ${(g_n)}_{n=0,\cdots,N-1}$ into periodic data ${(f_n)}_{n=0,\cdots,N-1}$ such that $f_0=f_{N-1}(=g_0)$.
\item Compute Fourier coefficients $a_k, b_k$ for ${(f_n)}_{n=0,\cdots,N-1}$.
\item Construct a band-pass-filtered periodic time series data ${({\widetilde{f}}_n)}_{n=0,\cdots,N-1}$ using $a_k, b_k$ for $k_0, k_1(1\le k_0\le k\le k_1)$.
\end{enumerate}
Step 1
\begin{equation}
f_n=g_n-s\left(x_n-x_0\right)~~~~(n=0,...,N-1),
\label{eq:A5}
\end{equation}
where $s={(g_{N-1}-g_0)}/{(x_{N-1}-x_0)},\mathrm{\ } x_n=x_0+n\mathrm{\Delta}x$ and $\mathrm{\Delta}x={(x_{N-1}-x_0)}/{(N-1)}$.
\\
\noindent Step 2
\noindent Compute
\begin{equation}
a_k=\frac{2}{L}\sum_{n=0}^{N-1}{f_n\cos\left(\frac{2\pi k x_n}{L}\right)}\mathrm{\Delta}x~~~~(k=0,\cdots, K),
\label{eq:A6}
\end{equation}
\begin{equation}
b_k=\frac{2}{L}\sum_{n=0}^{N-1}{f_n\sin\left(\frac{2\pi k x_n}{L}\right)}\mathrm{\Delta}x~~~~(k=1,\cdots, K).
\label{eq:A7}
\end{equation}
\\
\noindent Step 3
\begin{equation}
{\widetilde{f}}_n=\sum_{k=k_0}^{k_1}{\left(a_k\cos\left(\frac{2\pi k x_n}{L}\right)+b_k\sin\left(\frac{2\pi k x_n}{L}\right)\right).}
\label{eq:A8}
\end{equation}
\medskip
\noindent {\bf Power spectrum}: We can compute power spectrum $E(k)$ for each $k$,
\begin{equation}
E(k)=\frac{a_k^2+b_k^2}{2}.
\label{eq:A9}
\end{equation}
If $f$ is a $C^l$ function, then $k^l\left|a_k\right|, k^l\left|b_k\right|<\infty$, implying that the Fourier coefficients decrease at least as fast as $k^{-l}$ as $k$ increases. Notably, we plot $\sqrt{E(k)}$ and not $E(k)$ (Figure 3).
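For completeness, a compact numerical sketch of Steps 1--3 and of the power spectrum (our own implementation, working in sample-index units with $x_0=0$ and $\mathrm{\Delta}x=1$; \texttt{k0} and \texttt{k1} must lie below the Nyquist index) is:
\begin{verbatim}
import numpy as np

def bandpass_filter(g, k0, k1):
    """Fourier band-pass filter following Steps 1-3 of this appendix."""
    g = np.asarray(g, dtype=float)
    N = len(g)
    # Step 1: remove the linear trend so that f_0 = f_{N-1} (= g_0).
    f = g - (g[-1] - g[0]) * np.arange(N) / (N - 1)
    # Step 2: Fourier coefficients via the real FFT; the a_k, b_k above
    # are recovered as a_k = 2 Re F_k / N and b_k = -2 Im F_k / N.
    F = np.fft.rfft(f)
    a, b = 2.0 * F.real / N, -2.0 * F.imag / N
    # Step 3: keep only the modes k0 <= k <= k1.
    keep = np.zeros_like(F)
    keep[k0:k1 + 1] = F[k0:k1 + 1]
    f_tilde = np.fft.irfft(keep, n=N)
    # Power spectrum E(k) = (a_k^2 + b_k^2) / 2.
    E = (a ** 2 + b ** 2) / 2.0
    return f_tilde, E
\end{verbatim}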
\section*{Appendix C. Robustness check of numeraire}
\label{Appendix C. Robustness check of numeraire}
A third country’s currency is introduced as a numeraire to analyze the synchronization between two currencies (e.g., USD and EUR). We use the AUD as the numeraire in the main body. We confirm the same results by using the NZD as the numeraire in this appendix.
Figure 11(a) shows the time development of the synchronization index from 32.6 to 228.0 months between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$ and between $\eta^{\prime~NZDUSD}_t$ and $\eta^{\prime~NZDEUR}_t$. The time series of the two synchronization indices behave similarly, except during 2014. Figure 11(b) shows the time development of the synchronization index from 37.2 to 186.0 months between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$ and between $\eta^{\prime~NZDUSD}_t$ and $\eta^{\prime~NZDJPY}_t$. The time series of the two synchronization indices behave similarly except during 1989:05.
\begin{figure}
\begin{center}
\subfloat{({\bf a}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{f11a.eps}}
\subfloat{({\bf b}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{f11b.eps}}\\
\end{center}
\begin{spacing}{1.1}
Figure 11. (a) The synchronization index between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$ and between $\eta^{\prime~NZDUSD}_t$ and $\eta^{\prime~NZDEUR}_t$. Note: “$\bullet$” (orange) and “$\times$” (blue) represent the synchronization index from 32.6 to 228.0 months between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$ and between $\eta^{\prime~NZDUSD}_t$ and $\eta^{\prime~NZDEUR}_t$, respectively. The gray-colored interval indicates that the synchronization index from 32.6 to 228.0 months between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDEUR}_t$ is the synchronization index $\gamma^2\geq\gamma^{2\ast}(\approx0.92)$, where $\gamma^{2\ast}$ denotes $\text{(the\ average\ value)}+0.25(\text{standard\ deviation})$. (b) Synchronization index between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$ and between $\eta^{\prime~NZDUSD}_t$ and $\eta^{\prime~NZDJPY}_t$. Note: “$\bullet$” (orange) and “$\times$” (blue) represent the synchronization index from 37.2 to 186.0 months between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$ and between $\eta^{\prime~NZDUSD}_t$ and $\eta^{\prime~NZDJPY}_t$, respectively. The gray-colored interval indicates that the synchronization index from 37.2 to 186.0 months between $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDJPY}_t$ is $\gamma^2\geq\gamma^{2\ast}(\approx0.85)$.
\end{spacing}
\label{fig:f11}
\end{figure}
\section*{Appendix D. Calculation method for correlation coefficient with time difference in phase}
\label{Appendix D. Calculation method for a correlation coefficient with time difference in phase}
The time difference in phase between two time series affects the correlation coefficient. Thus, the correlation coefficient is not useful when the time difference is not sufficiently small. However, the correlation coefficient may show a result similar to that of the synchronization index when the time difference in phase is considered. The correlation coefficient with the time difference in phase is calculated as follows.
\begin{enumerate}
\item Identify the leading time series, and calculate the phase difference at each time. Find $m_t(={\hat{m}}_t)$ and $q_t(={\hat{q}}_t)$ that satisfy $\displaystyle \min_{q_t\in\mathbb{Z}}\left(\left|m_t\right|\right)$, where $m_t={\hat{\phi}}_t^{AUDUSD}-{\hat{\phi}}_t^{AUDj}+2q_t\pi$. The variables ${\hat{\phi}}_t^{AUDUSD}$ and ${\hat{\phi}}_t^{AUDj}$ denote unwrapped instantaneous phases of $\eta^{\prime~AUDUSD}_t$ and $\eta^{\prime~AUDj}_t$, respectively, both at time $t$.
\item Calculate the time difference in phase at each time and ${\hat{l}}_t=l-1$ that satisfies
\begin{equation}
\begin{cases}
\displaystyle \max_{l\in N, |l|\leq L}(\hat{\phi}_t^{AUDUSD}-\hat{\phi}_{t+l}^{AUDj}+2\hat{q}_t\pi)<0 & (\hat{m}_t>0)\\
\displaystyle \min_{l\in N, |l|\leq L}(\hat{\phi}_{t+l}^{AUDUSD}-\hat{\phi}_t^{AUDj}+2\hat{q}_t\pi)>0 & (\hat{m}_t<0)
\end{cases}
,
\label{eq:C1}
\end{equation}
where $L$ is chosen depending on the data. The time difference in phase at each time is expressed as
\begin{equation}
\delta_t=
\begin{cases}
\hat{l}_t+\frac{\phi_t^{AUDUSD}-\phi_{t+\hat{l}_t}^{AUDj}}{\phi_{t+\hat{l}_t+1}^{AUDj}-\phi_{t+\hat{l}_t}^{AUDj}} & (\hat{m}_t>0)\\
\hat{l}_t+\frac{\phi_t^{AUDj}-\phi_{t+\hat{l}_t}^{AUDUSD}}{\phi_{t+\hat{l}_t+1}^{AUDUSD}-\phi_{t+\hat{l}_t}^{AUDUSD}} & (\hat{m}_t<0)
\end{cases}
,
\label{eq:C2}
\end{equation}
\item Compute the absolute value of the correlation coefficient with the time difference in phase
\begin{equation}
\hat{r}_p^2=
\begin{cases}
\left|\frac{\displaystyle \displaystyle \sum_{i=t}^{t^\prime} \Bigl(\eta_i^{\prime AUDUSD}-\bar{\eta}^{\prime AUDUSD} \Bigr)\Bigl(\eta_{i+\bar{\delta}_t}^{\prime AUDj}-\bar{\eta}^{\prime AUDj}\Bigr)}{\sqrt{\displaystyle \sum_{i=t}^{t^\prime}\Bigl(\eta_i^{\prime AUDUSD}-\bar{\eta}^{\prime AUDUSD}\Bigr)^2}\sqrt{\displaystyle \sum_{i=t}^{t^\prime}\Bigl(\eta_{i+\bar{\delta}_t}^{\prime AUDj}-\bar{\eta}^{\prime AUDj}\Bigr)^2}}\right| & (\hat{m}_t>0)\\
\\
\left|\frac{\displaystyle \displaystyle \sum_{i=t}^{t^\prime} \Bigl(\eta_{i+\bar{\delta}_t}^{\prime AUDUSD}-\bar{\eta}^{\prime AUDUSD} \Bigr)\Bigl(\eta_i^{\prime AUDj}-\bar{\eta}^{\prime AUDj}\Bigr)}{\sqrt{\displaystyle \sum_{i=t}^{t^\prime}\Bigl(\eta_{i+\bar{\delta}_t}^{\prime AUDUSD}-\bar{\eta}^{\prime AUDUSD}\Bigr)^2}\sqrt{\displaystyle \sum_{i=t}^{t^\prime}\Bigl(\eta_i^{\prime AUDj}-\bar{\eta}^{\prime AUDj}\Bigr)^2}}\right| & (\hat{m}_t<0)
\end{cases}
.
\label{eq:C3}
\end{equation}
where $t^\prime=t+W+1$, $p=t+\frac{W+1}{2}$ and ${\bar{\delta}}_t=\frac{1}{W}\sum\limits_{i=t}^{t+W-1}{\delta_i}$. $W$ denotes the window size.
\end{enumerate}
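A heavily simplified sketch of step 3 (our own; the per-time delays \texttt{delta} are assumed to have been obtained from the unwrapped phases as in Eqs. (\ref{eq:C1})--(\ref{eq:C2}), the fractional part of the shift is rounded to the nearest sample, and only the case $\hat{m}_t>0$ is shown, the opposite case shifting $x$ instead):
\begin{verbatim}
import numpy as np

def corr_with_phase_delay(x, y, delta, t, W=13):
    """Windowed |correlation| after shifting y by the window-averaged
    phase-based delay, a simplified stand-in for step 3."""
    d = int(round(np.mean(delta[t:t + W])))   # rounded average delay
    xs = x[t:t + W]
    ys = y[t + d:t + d + W]                   # assumes the shift stays in range
    return abs(np.corrcoef(xs, ys)[0, 1])
\end{verbatim}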
\section{Introduction}
In recent years, a large number of heavy baryon states, charmonium-like states and bottomonium-like states have been observed, and have attracted intensive attentions and have revitalized many works on the singly-heavy, doubly-heavy, triply-heavy and quadruply-heavy hadron spectroscopy \cite{PDG}.
In 2017, the LHCb collaboration observed the doubly-charmed baryon state $\Xi_{cc}^{++}$ in the $\Lambda_c^+ K^- \pi^+\pi^+$ mass spectrum for the first time \cite{LHCb-Xicc}, while
the doubly-charmed baryon states $\Xi_{cc}^{+}$ and $\Omega_{cc}^{+}$ are still unobserved.
In 2020, the LHCb collaboration studied the $J/\psi J/\psi$ invariant mass distribution using $pp$ collision data at center-of-mass energies of $\sqrt{s}=$7, 8 and 13 TeV, and observed a narrow resonance structure $X(6900)$ around $6.9\, \rm{GeV}$ and a broad structure just above the $J/\psi J/\psi$ mass with global significances of more than $5\sigma$ \cite{LHCb-cccc-2006}. They are good candidates for the fully-charmed tetraquark states, and are also the first fully-heavy exotic multiquark candidates claimed experimentally to date. If they are really fully-charmed tetraquark states, we have observed doubly-charmed and quadruply-charmed hadrons, but there is still no experimental evidence for the triply-charmed hadrons. The observation of the $\Xi_{cc}^{++}$ and $X(6900)$ provides some crucial experimental inputs on the strong correlation between the two charm quarks, which may shed light on the
spectroscopy of the doubly-heavy, triply-heavy baryon states, doubly-heavy, triply-heavy, quadruply-heavy tetraquark states and pentaquark states.
On the other hand, the spectrum of the triply-heavy baryon states has been studied extensively via different theoretical approaches, such as the lattice QCD \cite{LQCD-1,LQCD-2,LQCD-3,LQCD-4,LQCD-5,LQCD-6,LQCD-7,LQCD-8}, the QCD sum rules \cite{QCDSR-1,QCDSR-2,QCDSR-3,QCDSR-4}, various potential quark models \cite{PQM-1,PQM-2,PQM-3,PQM-4,PQM-5,PQM-6,PQM-7,PQM-8,PQM-9,PQM-10,PQM-11,PQM-12,PQM-13,PQM-14,PQM-15,PQM-16}, the Faddeev equation \cite{Faddeev-1,Faddeev-2,Faddeev-3,Faddeev-4}, the Regge trajectories \cite{Regge-1,Regge-2}, etc. The predicted triply-heavy baryon masses vary in a rather large range; more theoretical work is still needed to obtain more precise inputs for comparison with the experimental data in the future.
The QCD sum rules approach is a powerful theoretical tool in studying the mass spectrum of the heavy flavor hadrons, and plays an important role in assigning the new baryon states, exotic tetraquark (molecular) states and pentaquark (molecular) states. The ground state triply-heavy baryon states $QQQ$ and $QQQ^\prime$ have been studied with the QCD sum rules by taking into account the perturbative contributions and the gluon condensate contributions in performing the operator product expansion \cite{QCDSR-1,QCDSR-2,QCDSR-3,QCDSR-4}. In calculations, the constant or universal heavy-quark pole masses or $\overline{MS}$ masses are chosen in all the channels \cite{QCDSR-1,QCDSR-2,QCDSR-3,QCDSR-4}. In this article, we restudy
the mass spectrum of the ground state triply-heavy baryon states by taking into account the three-gluon condensates; this is the first time that the three-gluon condensates have been taken into account in the QCD sum rules for the triply-heavy baryon states. Furthermore, we pay special attention to the heavy quark masses, and choose the values which work well in studying the doubly-heavy baryon states \cite{Wang-cc-baryon-penta}, hidden-charm tetraquark states \cite{WangZG-Hiddencharm, WangZG-Vector-tetra}, hidden-bottom tetraquark states \cite{ WangZG-Hiddenbottom}, hidden-charm pentaquark states \cite{WangZG-hiddencharm-penta}, and fully-charmed tetraquark states \cite{WangZG-QQQQ}, and perform a novel analysis.
The article is arranged as follows: we obtain the QCD sum rules for the masses and pole residues of the
triply-heavy baryon states in Sect.2; in Sect.3, we present the numerical results and discussions; and Sect.4 is reserved for our
conclusion.
\section{QCD sum rules for the triply-heavy baryon states}
Firstly, we write down the two-point correlation functions $\Pi(p)$ and $\Pi_{\mu\nu}(p)$ in the QCD sum rules,
\begin{eqnarray}
\Pi(p)&=&i\int d^4x e^{ip \cdot x} \langle0|T\left\{J(x)\bar{J}(0)\right\}|0\rangle \, , \nonumber \\
\Pi_{\mu\nu}(p)&=&i\int d^4x e^{ip \cdot x} \langle0|T\left\{J_\mu(x)\bar{J}_{\nu}(0)\right\}|0\rangle \, ,
\end{eqnarray}
where $J(x)=J^{QQQ'}(x)$, $J_\mu(x)=J^{QQQ'}_\mu(x)$, $J^{QQQ}_\mu(x)$,
\begin{eqnarray}
J^{QQQ'}(x)&=& \varepsilon^{ijk} Q^T_i(x)C\gamma_\mu Q_j(x) \gamma^\mu \gamma_5 Q'_k(x) \, , \nonumber \\
J^{QQQ'}_\mu(x)&=& \varepsilon^{ijk} Q^T_i(x)C\gamma_\mu Q_j(x) Q'_k(x) \, , \nonumber \\
J^{QQQ}_\mu(x)&=& \varepsilon^{ijk} Q^T_i(x)C\gamma_\mu Q_j(x) Q_k(x) \, ,
\end{eqnarray}
where $Q$, $Q'=b$, $c$, $Q\neq Q^\prime$, the $i$, $j$ and $k$ are color indexes, and the $C$ is the charge conjugation
matrix.
We choose the Ioffe-type currents $J(x)$ and $J_\mu(x)$ to interpolate the triply-heavy baryon states with the spin-parity $J^P={\frac{1}{2}}^+$ and ${\frac{3}{2}}^+$, respectively,
\begin{eqnarray}
\langle 0|J(0)|\Omega_{QQQ^\prime,+}(p)\rangle&=&\lambda_{+} U_{+}(p,s)\, , \nonumber\\
\langle 0|J_\mu(0)|\Omega_{QQQ^{(\prime)},+}^*(p)\rangle&=&\lambda_{+} U^{+}_\mu(p,s)\, ,
\end{eqnarray}
where the $\Omega_{QQQ^\prime,+}$ and $\Omega^*_{QQQ^{(\prime)},+}$ represent the triply-heavy baryon states with the spin-parity $J^P={\frac{1}{2}}^+$ and ${\frac{3}{2}}^+$, respectively, the $\lambda_{+}$ are the pole residues, the $U(p,s)$ and $U_\mu(p,s)$ are the Dirac spinors, the subscript or superscript $+$ denotes the parity.
The currents $J(x)$ and $J_\mu(x)$ also couple potentially to the negative-parity triply-heavy baryon states $\Omega_{QQQ^\prime,-}$ and $\Omega^*_{QQQ^{(\prime)},-}$, respectively,
because multiplying the currents $J(x)$ and $J_\mu(x)$ by $i \gamma_{5}$ changes their
parity \cite{QCDSR-4,Wang-cc-baryon-penta,WangZG-hiddencharm-penta,Oka96,WangHbaryon},
\begin{eqnarray}
\langle 0|J(0)|\Omega_{QQQ^\prime,-}(p)\rangle&=&\lambda_{-} i\gamma_5 U_{-}(p,s)\, , \nonumber\\
\langle 0|J_\mu(0)|\Omega_{QQQ^{(\prime)},-}^*(p)\rangle&=&\lambda_{-} i\gamma_5 U^{-}_\mu(p,s)\, ,
\end{eqnarray}
again the subscript or superscript $-$ denotes the parity. On the other hand, we can use the valence quarks and spin-parity to represent the triply-heavy baryon states, for example, $QQQ^\prime({\frac{3}{2}}^+)$. We cannot construct the currents $J^{QQQ}(x)= \varepsilon^{ijk} Q^T_i(x)C\gamma_\mu Q_j(x) \gamma^\mu \gamma_5 Q_k(x) $ to interpolate the triply-heavy baryon states $\Omega_{QQQ,+}$ with the spin-parity $J^{P}={\frac{1}{2}}^+$, because such current operators cannot exist due to the Fermi-Dirac statistics.
We insert a complete set of intermediate triply-heavy baryon states with the same quantum numbers as the current operators $J(x)$, $i\gamma_5 J(x)$,
$J_\mu(x)$ and $i\gamma_5J_\mu(x)$ into the correlation functions $\Pi(p)$ and
$\Pi_{\mu\nu}(p)$ to obtain the hadron representation
\cite{SVZ79,Reinders85}. After isolating the pole terms of the lowest
states of the positive-parity and negative-parity triply-heavy baryon states, we obtain the
results:
\begin{eqnarray}
\Pi(p) & = &\lambda_+^2 {\!\not\!{p} + M_{+} \over M^{2}_+ -p^{2} } + \lambda_{-}^2 {\!\not\!{p} - M_{-} \over M_{-}^{2}-p^{2} } + \cdots \, , \nonumber \\
&=& \Pi_1(p^2) \!\not\!{p} +\Pi_0(p^2) \, ,\nonumber \\
\Pi_{\mu\nu}(p)&=&\Pi(p)\left( -g_{\mu\nu}+\cdots \right)+\cdots \, ,
\end{eqnarray}
we choose the tensor structure $g_{\mu\nu}$ to study the spin $J=\frac{3}{2}$ triply-heavy baryon states.
We can obtain the hadronic spectral densities $\rho^1_H(s)$ and $\rho^0_H(s)$ at the hadron side through dispersion relation,
\begin{eqnarray}
\rho^1_H(s)&=& \frac{{\rm Im} \, \Pi_1(s)}{\pi} \nonumber\\
& = & \lambda_+^2 \, \delta\left(s - M_{+}^2\right)+ \lambda_{-}^{2} \, \delta\left(s - M_{-}^2\right) \, , \nonumber \\
\rho^0_H(s)&=& \frac{{\rm Im} \, \Pi_0(s)}{\pi}\nonumber\\
& = & M_{+}\lambda_+^2 \, \delta\left(s - M_{+}^2\right)-M_{-} \lambda_{-}^{2} \, \delta\left(s - M_{-}^2\right) \, ,
\end{eqnarray}
then introduce the weight function $\exp\left(-\frac{s}{T^2} \right)$, and obtain the QCD sum rules at the hadron side,
\begin{eqnarray}
2M_{+}\lambda_{+}^2 \exp\left( -\frac{M_{+}^2}{T^2}\right) & = & \int_{\Delta^2}^{s_0}ds \left[\sqrt{s}\rho^1_H(s)+ \rho^0_H(s)\right] \exp\left(-\frac{s}{T^2} \right)\ \, ,
\end{eqnarray}
where the thresholds $\Delta^2=(2m_Q+m_{Q'})^2$ or $9m^2_Q$, the $T^2$ are the Borel parameters, and the $s_0$ are the continuum threshold parameters.
The combinations $\sqrt{s}\rho^1_H(s)+ \rho^0_H(s)$ and $\sqrt{s}\rho^1_H(s)- \rho^0_H(s)$ contain the
contributions from the positive-parity and negative-parity triply-heavy baryon states, respectively.
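For orientation, once the QCD side is matched to the sum rule above below the continuum threshold, the ground-state mass can be isolated in the standard way by differentiating both sides with respect to $-\frac{1}{T^2}$ and taking the ratio, which eliminates the pole residue,
\begin{eqnarray}
M_{+}^2 &=& \frac{\int_{\Delta^2}^{s_0}ds\, s\left[\sqrt{s}\rho^1_H(s)+ \rho^0_H(s)\right] \exp\left(-\frac{s}{T^2} \right)}{\int_{\Delta^2}^{s_0}ds \left[\sqrt{s}\rho^1_H(s)+ \rho^0_H(s)\right] \exp\left(-\frac{s}{T^2} \right)}\, , \nonumber
\end{eqnarray}
where in practice the hadronic spectral densities are replaced by their QCD counterparts after the matching carried out below.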
Now we briefly outline the operator product expansion performed in the deep Euclidean region $p^2 \ll 0$.
We contract the heavy quark fields in the correlation functions $\Pi(p)$ and $\Pi_{\mu\nu}(p)$ with
Wick theorem, substitute the full heavy quark propagators $S_{ij}(x)$ into the
correlation functions $\Pi(p)$ and $\Pi_{\mu\nu}(p)$ firstly,
\begin{eqnarray}
S_{ij}(x)&=&\frac{i}{(2\pi)^4}\int d^4k e^{-ik \cdot x} \left\{
\frac{\delta_{ij}}{\!\not\!{k}-m_Q}
-\frac{g_sG^n_{\alpha\beta}t^n_{ij}}{4}\frac{\sigma^{\alpha\beta}(\!\not\!{k}+m_Q)+(\!\not\!{k}+m_Q)
\sigma^{\alpha\beta}}{(k^2-m_Q^2)^2}\right.\nonumber\\
&& -\frac{g_s^2 (t^at^b)_{ij} G^a_{\alpha\beta}G^b_{\mu\nu}(f^{\alpha\beta\mu\nu}+f^{\alpha\mu\beta\nu}+f^{\alpha\mu\nu\beta}) }{4(k^2-m_Q^2)^5}\nonumber\\
&&\left.+\frac{\langle g_s^3GGG\rangle}{48}\frac{(\!\not\!{k}+m_Q)\left[\!\not\!{k}(k^2-3m_Q^2)+2m_Q(2k^2-m_Q^2) \right](\!\not\!{k}+m_Q)}{(k^2-m_Q^2)^6}+\cdots\right\}\, ,\nonumber\\
f^{\alpha\beta\mu\nu}&=&(\!\not\!{k}+m_Q)\gamma^\alpha(\!\not\!{k}+m_Q)\gamma^\beta(\!\not\!{k}+m_Q)\gamma^\mu(\!\not\!{k}+m_Q)\gamma^\nu(\!\not\!{k}+m_Q)\, ,
\end{eqnarray}
where $\langle g_s^3GGG\rangle=\langle g_s^3f_{abc}G^a_{\mu\nu}G_b^{\nu\alpha}G^c_{\alpha}{}^\mu\rangle$, $t^n=\frac{\lambda^n}{2}$, the $\lambda^n$ are the Gell-Mann matrices, the $i$, $j$ are the color indexes \cite{Reinders85},
then complete the integrals in the coordinate space and momentum
space sequentially to obtain the correlation functions $\Pi(p)$ and $\Pi_{\mu\nu}(p)$ at the quark-gluon level, finally we obtain
the corresponding QCD spectral densities through dispersion relation,
\begin{eqnarray}
\rho^1_{QCD}(s)&=& \frac{{\rm Im} \, \Pi_1(s)}{\pi} \, , \nonumber \\
\rho^0_{QCD}(s)&=& \frac{{\rm Im} \, \Pi_0(s)}{\pi} \, .
\end{eqnarray}
We match the hadron side with the QCD side of the correlation functions $\Pi_1(p^2)$ and $\Pi_0(p^2)$ below the continuum thresholds $s_0$, introduce the weight function $\exp\left(-\frac{s}{T^2} \right)$, and obtain the QCD sum rules,
\begin{eqnarray}\label{QCDSR}
2M_{+}\lambda_{+}^2 \exp\left( -\frac{M_{+}^2}{T^2}\right) & = & \int_{\Delta^2}^{s_0}ds \left[\sqrt{s}\rho^1_H(s)+ \rho^0_H(s)\right] \exp\left(-\frac{s}{T^2} \right)\ \, , \nonumber\\
& = & \int_{\Delta^2}^{s_0}ds \left[\sqrt{s}\rho^1_{QCD}(s)+ \rho^0_{QCD}(s)\right] \exp\left(-\frac{s}{T^2} \right)\ \, ,
\end{eqnarray}
where $\rho^1_{QCD}(s)=\rho^1_{QQQ,\frac{3}{2}}(s)$, $\rho^1_{QQQ^\prime,\frac{3}{2}}(s)$, $\rho^1_{QQQ^\prime,\frac{1}{2}}(s)$,
$\rho^0_{QCD}(s)=m_Q\rho^0_{QQQ,\frac{3}{2}}(s)$, $m_{Q^\prime}\rho^0_{QQQ^\prime,\frac{3}{2}}(s)$, $m_{Q^\prime}\rho^0_{QQQ^\prime,\frac{1}{2}}(s)$,
\begin{eqnarray}
\rho^1_{QQQ,\frac{3}{2}}(s)&=&\frac{3}{64\pi^4}\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, yz(1-y-z) (s-\widetilde{m}_{Q}^2)(11s-5\widetilde{m}_{Q}^2) \nonumber\\
&&+\frac{15m_Q^2}{32\pi^4}\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \,y\, (s-\widetilde{m}_Q^2)\nonumber\\
&&-\frac{m_Q^2}{32\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{z(1-y-z)}{y^{2}} \left(1+\frac{3s}{2T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&-\frac{5m_Q^4}{192\pi^2T^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{y^{2}} \delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&-\frac{5m_Q^4}{96\pi^2T^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{z}{y^{3}} \delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&+\frac{5m_Q^2}{32\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{z}{y^{2}} \delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&-\frac{25}{384\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, (1-y-z)\left[ 1+\frac{7s}{25}\delta\left(s-\widetilde{m}_{Q}^2\right)\right]\nonumber\\
&&-\frac{5m_Q^2}{192\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{z} \delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&-\frac{m_Q^2\langle g_s^3GGG\rangle}{512\pi^4T^2}\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{z(1-y-z)}{y^{3}}\left( 1-\frac{3s}{T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&+ \frac{5m_Q^4\langle g_s^3GGG\rangle}{768\pi^4T^4}\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{z}{y^{4}}\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&+\frac{5m_Q^4\langle g_s^3GGG\rangle}{1536\pi^4T^4} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{y^{3}}\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&- \frac{\langle g_s^3GGG\rangle}{1024\pi^4}\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{z(1-y-z)}{y^{2}}\left( 2+\frac{3s}{T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&- \frac{5m_Q^2\langle g_s^3GGG\rangle}{256\pi^4T^2} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{z}{y^{3}}\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&- \frac{23m_Q^2\langle g_s^3GGG\rangle}{4608\pi^4T^2} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{y^{2}}\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&+\frac{11m_Q^2\langle g_s^3GGG\rangle}{4608\pi^4T^2}\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1-y-z}{y^{2}}\left( 1+\frac{7s}{11T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&+ \frac{m_Q^2\langle g_s^3GGG\rangle}{2304\pi^4T^2}\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{y}\left(1+\frac{s}{T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&+ \frac{m_Q^2\langle g_s^3GGG\rangle}{768\pi^4T^2} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1-y-z}{zy^{2}}\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&- \frac{m_Q^2\langle g_s^3GGG\rangle}{384\pi^4T^2} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1-y-z}{zy}
\left( 1-\frac{s}{3T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&+ \frac{m_Q^4\langle g_s^3GGG \rangle}{1536\pi^4T^4} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{y^{3}}\delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber
\end{eqnarray}
\begin{eqnarray}
&&+ \frac{5m_Q^4\langle g_s^3 GGG\rangle}{4608\pi^4T^4} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{zy^{2}}\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&+ \frac{m_Q^4\langle g_s^3 GGG\rangle}{1536\pi^4T^4} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1-y-z}{zy^{3}}\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&- \frac{\langle g_s^3 GGG \rangle}{512\pi^4} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1-y-z}{y}\left( 3+\frac{7s}{6T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&- \frac{5m_Q^2\langle g_s^3GGG\rangle}{3072\pi^4T^2} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{zy}\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&- \frac{m_Q^2\langle g_s^3GGG\rangle}{512\pi^4T^2} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1-y-z}{zy^{2}}\delta\left(s-\widetilde{m}_{Q}^2\right)\, ,
\end{eqnarray}
\begin{eqnarray}
\rho^0_{QQQ,\frac{3}{2}}(s)&=&\frac{3}{32\pi^4}\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, yz \, (s-\widetilde{m}_{Q}^2)(8s-3\widetilde{m}_{Q}^2) + \frac{9 m_Q^2}{32\pi^4}\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, (s-\widetilde{m}_Q^2)\nonumber\\
&&-\frac{m_Q^2}{192\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{z(1+y-z)}{y^{3}} \left( 1+\frac{5s}{T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&-\frac{3m_Q^4}{64\pi^2T^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{y^{3}} \delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&+\frac{3}{32\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{z(1-y-z)}{y^{2}} \left[ 1+\frac{5s}{6}\delta\left(s-\widetilde{m}_{Q}^2\right)\right]\nonumber\\
&&+\frac{9m_Q^2}{64\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{y^{2}}\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&-\frac{3}{64\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \left[ 1+\frac{7s}{18}\delta\left(s-\widetilde{m}_{Q}^2\right)\right]\nonumber\\
&&-\frac{1}{32\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1-y-z}{z}\left[ 1+\frac{5s}{6}\delta\left(s-\widetilde{m}_{Q}^2\right)\right]\nonumber\\
&&-\frac{m_Q^2}{64\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{zy}\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&- \frac{m_Q^2\langle g_s^3GGG\rangle}{384\pi^4T^2}\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{z(1+y-z)}{y^{4}}\left( 1-\frac{5s}{4T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&+ \frac{3m_Q^4\langle g_s^3GGG\rangle}{512\pi^4T^4} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{y^{4}}\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&- \frac{\langle g_s^3GGG\rangle}{1536\pi^4}\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{z(3-2y-3z)}{y^{3}}\left( 1+\frac{5s}{T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&- \frac{9m_Q^2\langle g_s^3GGG\rangle}{512\pi^4T^2} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{y^{3}}\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&+\frac{\langle g_s^3GGG\rangle}{4608\pi^4} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1-y-z}{y^{2}}\left(1+\frac{2ys}{T^2} \right)\left( 1+\frac{s}{T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&+ \frac{m_Q^2\langle g_s^3GGG\rangle}{1152\pi^4T^2}\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{y^{2}}\left( 1+\frac{7s}{4T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber
\end{eqnarray}
\begin{eqnarray}
&&- \frac{m_Q^2\langle g_s^3GGG\rangle}{2304\pi^4T^2}\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1-y-z}{y^{3}}\left( 1-\frac{3s}{2T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&- \frac{m_Q^2\langle g_s^3GGG\rangle}{1152\pi^4T^2} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1-y-z}{zy^{2}}\left( 1-\frac{5s}{4T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&+ \frac{m_Q^2\langle g_s^3GGG\rangle}{576\pi^4T^2} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1-y}{zy^{2}}\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&+\frac{7m_Q^4\langle g_s^3GGG\rangle }{4608\pi^4T^4} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{zy^{3}}\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&-\frac{11\langle g_s^3GGG\rangle}{3072\pi^4} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{y}\left( 1+\frac{7s}{11T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&-\frac{\langle g_s^3GGG\rangle}{1024\pi^4}\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1-y-z}{y^{2}}\left( 1+\frac{11s}{3T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&- \frac{\langle g_s^3GGG\rangle}{3072\pi^4} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1-y-z}{zy}\left( 1+\frac{5s}{T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right)\nonumber\\
&&- \frac{5m_Q^2\langle g_s^3GGG\rangle}{768\pi^4T^2} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{zy^{2}}\delta\left(s-\widetilde{m}_{Q}^2\right)\, ,
\end{eqnarray}
\begin{eqnarray}
\rho^1_{QQQ^\prime,\frac{3}{2}}(s)&=&\frac{3}{16\pi^4}\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, yz(1-y-z) (s-\widetilde{m}_{Q}^2)(2s-\widetilde{m}_{Q}^2) \nonumber\\
&&+ \frac{3m_Q^2}{16\pi^4}\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \,z\, (s-\widetilde{m}_Q^2) \nonumber\\
&&-\frac{m_Q^2}{48\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{z(1-y-z)}{y^{2}} \left( 1+\frac{s}{T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber\\
&&-\frac{m_Q^4}{48\pi^2T^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{z}{y^{3}} \delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber\\
&&+ \frac{m_Q^2}{16\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{z}{y^{2}}\delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber\\
&&-\frac{m_{Q^\prime}^2}{96\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{y(1-y-z)}{z^{2}} \left( 1+\frac{s}{T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber\\
&&-\frac{m_{Q^\prime}^2m_Q^2}{96\pi^2T^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{z^{2}} \delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber\\
&&-\frac{1}{48\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, z \left[ 1+\frac{s}{4}\delta\left(s-\widetilde{m}_{Q}^2\right)\right] \, ,
\end{eqnarray}
\begin{eqnarray}
\rho^0_{QQQ^\prime,\frac{3}{2}}(s)&=&\frac{3}{32\pi^4}\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, y(1-y-z) (s-\widetilde{m}_{Q}^2)(3s-\widetilde{m}_{Q}^2) \nonumber\\
&&+\frac{3m_Q^2}{16\pi^4} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, (s-\widetilde{m}_Q^2) \nonumber\\
&&-\frac{m_Q^2}{48\pi^2T^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{(1-y-z)}{y^{2}} \,s\, \delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber
\end{eqnarray}
\begin{eqnarray}
&&- \frac{m_Q^4}{48\pi^2T^2} \langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{y^{3}} \delta\left(s-\widetilde{m}_{Q}^2\right)
\nonumber\\
&&+\frac{m_Q^2}{16\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{y^{2}} \delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber\\
&&-\frac{m_{Q^\prime}^2}{96\pi^2T^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{y(1-y-z)}{z^{3}} \,s\, \delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber\\
&&- \frac{m_{Q^\prime}^2m_Q^2}{96\pi^2T^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{z^{3}} \delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber\\
&&+\frac{1}{32\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{y(1-y-z)}{z^{2}} \Big[ 1+s\delta\left(s-\widetilde{m}_{Q}^2\right)\Big] \nonumber\\
&&+\frac{m_Q^2}{32\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{z^{2}} \delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber\\
&&-\frac{1}{64\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \left[ 1+\frac{s}{3}\delta\left(s-\widetilde{m}_{Q}^2\right)\right] \, ,
\end{eqnarray}
\begin{eqnarray}
\rho^1_{QQQ^\prime,\frac{1}{2}}(s)&=&\frac{3}{8\pi^4}\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, yz(1-y-z) (s-\widetilde{m}_{Q}^2)(5s-3\widetilde{m}_{Q}^2) \nonumber\\
&&+\frac{3m_Q^2}{8\pi^4} \int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \,z\, (s-\widetilde{m}_Q^2) \nonumber\\
&&-\frac{m_Q^2}{6\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{z(1-y-z)}{y^{2}} \left( 1+\frac{s}{2T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber\\
&&-\frac{m_Q^4}{24\pi^2T^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{z}{y^{3}} \delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber\\
&&+\frac{m_Q^2}{8\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{z}{y^{2}} \delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber\\
&&-\frac{m_{Q^\prime}^2}{12\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{y(1-y-z)}{z^{2}} \left( 1+\frac{s}{2T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber\\
&&-\frac{m_{Q^\prime}^2m_Q^2}{48\pi^2T^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{z^{2}} \delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber\\
&&+\frac{3}{16\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, (1-y-z) \left[ 1+\frac{s}{3}\delta\left(s-\widetilde{m}_{Q}^2\right)\right] \nonumber\\
&&+\frac{m_Q^2}{16\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{y} \delta\left(s-\widetilde{m}_{Q}^2\right) \, ,
\end{eqnarray}
\begin{eqnarray}
\rho^0_{QQQ^\prime,\frac{1}{2}}(s)&=&\frac{3}{8\pi^4}\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, y(1-y-z) (s-\widetilde{m}_{Q}^2)(2s-\widetilde{m}_{Q}^2) \nonumber\\
&&+\frac{3m_Q^2}{4\pi^4}\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, (s-\widetilde{m}_Q^2) \nonumber\\
&&-\frac{m_Q^2}{24\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{(1-y-z)}{y^{2}} \left( 1+\frac{s}{T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber
\end{eqnarray}
\begin{eqnarray}
&&-\frac{m_Q^4}{12\pi^2T^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{y^{3}} \delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber\\
&&+ \frac{m_Q^2}{4\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{y^{2}}\delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber\\
&&-\frac{m_{Q^\prime}^2}{48\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{y(1-y-z)}{z^{3}} \left( 1+\frac{s}{T^2}\right)\delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber\\
&&-\frac{m_{Q^\prime}^2m_Q^2}{24\pi^2T^2} \langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{z^{3}} \delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber\\
&&+\frac{1}{8\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{y(1-y-z)}{z^{2}} \left[ 1+\frac{s}{2}\delta\left(s-\widetilde{m}_{Q}^2\right)\right] \nonumber\\
&&+\frac{m_Q^2}{8\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{z^{2}} \delta\left(s-\widetilde{m}_{Q}^2\right) \nonumber\\
&&-\frac{1}{16\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \left[ 1+\frac{s}{2}\delta\left(s-\widetilde{m}_{Q}^2\right)\right] \nonumber\\
&&+\frac{1}{8\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \,\frac{(1-y-z)}{z} \left[ 1+\frac{s}{2}\delta\left(s-\widetilde{m}_{Q}^2\right)\right] \nonumber\\
&&+\frac{m_Q^2}{8\pi^2}\langle\frac{\alpha_{s}GG}{\pi}\rangle\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \, \frac{1}{zy} \delta\left(s-\widetilde{m}_{Q}^2\right) \, ,
\end{eqnarray}
where $\widetilde{m}_Q^2=\frac{m_Q^2}{y}+\frac{m_Q^2}{1-y-z}+\frac{m_{Q^\prime}^2}{z}$,
\begin{eqnarray}
z_{i/f}&=&\frac{s+m_{Q^\prime}^2-4m_Q^2\mp\sqrt{(s+m_{Q^\prime}^2-4m_Q^2)^2-4sm_{Q^\prime}^2}}{2s}\, ,\nonumber\\
y_{i/f}&=&\frac{1-z \mp \sqrt{(1-z)^2-4z(1-z)m_Q^2/(zs-m_{Q^\prime}^2)}}{2}\, ,
\end{eqnarray}
in the case of the $QQQ$ baryon states, we set $m_{Q^\prime}^2=m_{Q}^2$. When the $\delta\left(s-\widetilde{m}_{Q}^2\right)$ functions appear, $\int_{z_i}^{z_f}dz \int_{y_i}^{y_f}dy \to \int_{0}^{1}dz \int^{1-z}_{0}dy $.
We differentiate Eq.\eqref{QCDSR} with respect to $\tau=\frac{1}{T^2}$, then eliminate the
pole residues $\lambda_{+}$ and obtain the QCD sum rules for the masses of the triply-heavy baryon states,
\begin{eqnarray}\label{QCDSR-mass}
M^2_{+} &=& \frac{-\frac{d}{d\tau}\int_{\Delta^2}^{s_0}ds \,\left[\sqrt{s}\rho^1_{QCD}(s)+ \rho^0_{QCD}(s)\right]\exp\left( -s\tau\right)}{\int_{\Delta^2}^{s_0}ds\, \left[\sqrt{s}\rho^1_{QCD}(s)+ \rho^0_{QCD}(s)\right]\exp\left( -s\tau\right)}\, .
\end{eqnarray}
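In practice, the mass sum rule above amounts to forming two Borel-transformed moments of the spectral density and taking their ratio. The following minimal numerical sketch (not the code used in this work) only illustrates the procedure; here \texttt{rho} stands for the combination $\sqrt{s}\rho^1_{QCD}(s)+\rho^0_{QCD}(s)$, and the toy density and toy parameters are placeholders for the expressions and values given below.
\begin{verbatim}
import numpy as np

def mass_squared(rho, delta2, s0, T2, n=2000):
    # M^2 = [int s*rho(s)*exp(-s/T^2) ds] / [int rho(s)*exp(-s/T^2) ds],
    # integrated from the lower limit Delta^2 to the continuum threshold s0
    s = np.linspace(delta2, s0, n)
    w = rho(s) * np.exp(-s / T2)
    return np.trapz(s * w, s) / np.trapz(w, s)

# toy spectral density and toy parameters, for illustration only
rho_toy = lambda s: (s - 16.0) ** 2
print(np.sqrt(mass_squared(rho_toy, 16.0, 5.35 ** 2, 3.5)))
\end{verbatim}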
\section{Numerical results and discussions}
We take the standard values of the gluon condensates
$\langle \frac{\alpha_sGG}{\pi}\rangle=0.012\pm0.004\,\rm{GeV}^4$ and the three-gluon condensates $\langle g_s^3GGG\rangle=0.045\pm0.014\,\rm{GeV}^6$
\cite{SVZ79,Reinders85,ColangeloReview}, and take the $\overline{MS}$ masses of the heavy quarks $m_{c}(m_c)=(1.275\pm0.025)\,\rm{GeV}$
and $m_{b}(m_b)=(4.18\pm0.03)\,\rm{GeV}$
from the Particle Data Group \cite{PDG}, which work well in studying the doubly-heavy baryon states \cite{Wang-cc-baryon-penta}, hidden-charm tetraquark states \cite{WangZG-Hiddencharm, WangZG-Vector-tetra}, hidden-bottom tetraquark states \cite{ WangZG-Hiddenbottom}, hidden-charm pentaquark states \cite{WangZG-hiddencharm-penta}, fully-charmed tetraquark states \cite{WangZG-QQQQ}, and perform a new analysis.
Furthermore, we take into account
the energy-scale dependence of the $\overline{MS}$ masses according to the renormalization group equation,
\begin{eqnarray}
m_c(\mu)&=&m_c(m_c)\left[\frac{\alpha_{s}(\mu)}{\alpha_{s}(m_c)}\right]^{\frac{12}{33-2n_f}} \, ,\nonumber\\
m_b(\mu)&=&m_b(m_b)\left[\frac{\alpha_{s}(\mu)}{\alpha_{s}(m_b)}\right]^{\frac{12}{33-2n_f}} \, ,\nonumber\\
\alpha_s(\mu)&=&\frac{1}{b_0t}\left[1-\frac{b_1}{b_0^2}\frac{\log t}{t} +\frac{b_1^2(\log^2{t}-\log{t}-1)+b_0b_2}{b_0^4t^2}\right]\, ,
\end{eqnarray}
where $t=\log \frac{\mu^2}{\Lambda^2}$, $b_0=\frac{33-2n_f}{12\pi}$, $b_1=\frac{153-19n_f}{24\pi^2}$, $b_2=\frac{2857-\frac{5033}{9}n_f+\frac{325}{27}n_f^2}{128\pi^3}$, $\Lambda=210\,\rm{MeV}$, $292\,\rm{MeV}$ and $332\,\rm{MeV}$ for the flavors $n_f=5$, $4$ and $3$, respectively \cite{PDG}.
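For definiteness, the running coupling and the running $\overline{MS}$ masses quoted above can be evaluated as in the following short sketch; it simply transcribes the formulas and the values of $\Lambda$ given in the text and is only illustrative.
\begin{verbatim}
import numpy as np

def alpha_s(mu, nf, Lam):
    b0 = (33 - 2 * nf) / (12 * np.pi)
    b1 = (153 - 19 * nf) / (24 * np.pi ** 2)
    b2 = (2857 - 5033 * nf / 9 + 325 * nf ** 2 / 27) / (128 * np.pi ** 3)
    t = np.log(mu ** 2 / Lam ** 2)
    L = np.log(t)
    return (1 - b1 * L / (b0 ** 2 * t)
            + (b1 ** 2 * (L ** 2 - L - 1) + b0 * b2) / (b0 ** 4 * t ** 2)) / (b0 * t)

def m_run(m_at_m, mu, nf, Lam):
    # m_Q(mu) = m_Q(m_Q) * [alpha_s(mu)/alpha_s(m_Q)]^(12/(33-2*nf))
    return m_at_m * (alpha_s(mu, nf, Lam) / alpha_s(m_at_m, nf, Lam)) ** (12 / (33 - 2 * nf))

# e.g. the charm mass at the optimal scale mu = 1.2 GeV of the ccc channel (nf = 4)
print(m_run(1.275, 1.2, 4, 0.292))
\end{verbatim}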
For the $ccb$, $bbc$ and $bbb$ baryon states we choose the flavor number $n_f=5$, while for the $ccc$ baryon state we choose $n_f=4$, and we choose the optimal energy scales $\mu$ to obtain stable QCD sum rules in the different channels and to enhance the pole contributions in a consistent way; in previous works, the heavy quark masses were simply treated as mass parameters with constant values in all the QCD sum rules \cite{QCDSR-1,QCDSR-2,QCDSR-3,QCDSR-4}.
As far as the continuum threshold parameters $s_0$ are concerned, we should choose suitable values to avoid contamination from the first radial excited states of the triply-heavy baryon states, and we can borrow some ideas from the experimental data on the conventional charmonium (bottomonium) states and the charmonium-like states.
The energy gaps between the ground states and the first radial excited states are $M_{\psi^\prime}-M_{J/\psi}=589\,\rm{MeV}$ and $M_{\Upsilon^\prime}-M_{\Upsilon}=563\,\rm{MeV}$ from the Particle Data Group \cite{PDG},
$M_{B_c^{*\prime}}-M_{B^*_c}=567 \,\rm{MeV}$ from the CMS collaboration \cite{CMS-Bc-1902}, $M_{B_c^{*\prime}}-M_{B^*_c}=566 \,\rm{MeV}$ from the LHCb collaboration \cite{LHCb-Bc-1904}, $M_{Z_c(4430)}-M_{Z_c(3900)}=591\,\rm{MeV}$, $M_{X(4500)}-M_{X(3915)}=588\,\rm{MeV}$ from the Particle Data Group \cite{PDG},
$M_{Z_c(4600)}-M_{Z_c(4020)}=576\,\rm{MeV}$ from the LHCb collaboration \cite{LHCb-Z4600}.
We usually assign the $Z_c^\pm(3900)$ and $Z_c^\pm(4430)$ to be the ground state and the first radial excited state of the axialvector tetraquark states respectively \cite{Maiani-Z4430-1405},
assign the $X(3915)$ and $X(4500)$ to be the ground state and the first radial excited state of the scalar tetraquark states respectively \cite{Lebed-X3915,WangZG-X4500}, assign the $Z_c^\pm(4020)$ and $Z_c^\pm(4600)$ to be the ground state and the first radial excited state of the axialvector tetraquark states respectively with different quark structures from that of the $Z_c^\pm(3900)$ and $Z_c^\pm(4430)$ \cite{WangZG-Hiddencharm,ChenHX-Z4600-A}.
In the present work, we take the experimental data from the Particle Data Group, CMS and LHCb collaborations as input and impose the constraint $\sqrt{s_0}=M_{\Omega/\Omega^*}+(0.50\sim0.55)\,\rm{GeV}$ on the continuum threshold parameters when studying the triply-heavy baryon states with the QCD sum rules; furthermore, we add an uncertainty $\delta\sqrt{s_0}=\pm0.1\,\rm{GeV}$, as we usually do when estimating the uncertainties from the continuum threshold parameters in the QCD sum rules.
We vary the energy scales of the QCD spectral densities, the continuum threshold parameters and the Borel parameters to satisfy
the two basic criteria of the QCD sum rules, i.e. the ground state dominance at the hadron side and the operator product expansion convergence at the QCD side.
After trial and error, we obtain the ideal energy scales of the QCD spectral densities, the Borel parameters $T^2$ and the continuum threshold parameters $s_0$, and therefore the pole contributions of the ground-state triply-heavy baryon states; see Table \ref{Borel-mass}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}\hline\hline
&$T^2(\rm{GeV}^2)$ &$\sqrt{s_0}(\rm{GeV})$ &$\mu(\rm{GeV})$ &pole &$M(\rm{GeV})$ &$\lambda(10^{-1}\rm{GeV}^3)$ \\ \hline
$ccc({\frac{3}{2}}^+)$ &$3.1-4.1$ &$5.35\pm0.10$ &$1.2$ &$(56-83)\%$ &$4.81\pm0.10$ &$(2.08\pm0.31)$ \\ \hline
$ccb({\frac{3}{2}}^+)$ &$4.7-5.7$ &$8.55\pm0.10$ &$2.1$ &$(65-85)\%$ &$8.03\pm0.08$ &$(2.25\pm0.25)$ \\ \hline
$ccb({\frac{1}{2}}^+)$ &$4.9-5.9$ &$8.55\pm0.10$ &$2.0$ &$(64-84)\%$ &$8.02\pm0.08$ &$(4.30\pm0.47)$ \\ \hline
$bbc({\frac{3}{2}}^+)$ &$6.4-7.4$ &$11.75\pm0.10$ &$2.2$ &$(65-83)\%$ &$11.23\pm0.08$ &$(3.24\pm0.46)$ \\ \hline
$bbc({\frac{1}{2}}^+)$ &$6.3-7.3$ &$11.75\pm0.10$ &$2.2$ &$(65-84)\%$ &$11.22\pm0.08$ &$(5.65\pm0.81)$ \\ \hline
$bbb({\frac{3}{2}}^+)$ &$8.6-9.6$ &$14.95\pm0.10$ &$2.5$ &$(66-83)\%$ &$14.43\pm0.09$ &$(9.42\pm1.39)$ \\ \hline\hline
\end{tabular}
\end{center}
\caption{ The Borel windows, continuum threshold parameters, energy scales of the QCD spectral densities, pole contributions, masses and pole residues for the
triply-heavy baryon states. } \label{Borel-mass}
\end{table}
In the Borel windows, the pole contributions are about $(60-80)\%$, so the pole dominance is well satisfied. In Fig.\ref{fr-fig}, we plot the contributions of the perturbative terms, the gluon condensates and the three-gluon condensates with variations of the Borel parameters for the central values of the continuum threshold parameters shown in Table \ref{Borel-mass} in the QCD sum rules for the $ccc$ and $bbb$ baryon states. From the figure, we can see that the main contributions come from the perturbative terms, the gluon condensates play a less important role, and the three-gluon condensates play a tiny role in the Borel windows. The Borel parameters have the relation $T_{bbb}^2> T^2_{bbc}>T^2_{ccb}>T^2_{ccc}$, where we add the subscripts $bbb$, $bbc$, $ccb$ and $ccc$ to denote the corresponding QCD sum rules. From Fig.\ref{fr-fig}, we can see that the contributions from the three-gluon condensates decrease quickly with increasing Borel parameter; in the region $T^2\geq 1.5\, \rm{GeV}^2$ in the $ccc$ channel and in the region $T^2\geq 3.0\, \rm{GeV}^2$ in the $bbb$ channel, the contributions from the three-gluon condensates reach zero and can be neglected safely.
On the other hand, the Borel windows satisfy $T^2_{ccc}>3.0\,\rm{GeV}^2$, so we can also neglect the three-gluon condensates in the QCD sum rules for the $ccb$ and $bbc$ baryon states without impairing the predictive ability. The operator product expansion is well convergent. Although the three-gluon condensates play a tiny role in the Borel windows and can be neglected there, we must take them into account in order to establish the bounds $T^2\geq 1.5\, \rm{GeV}^2$ and $T^2\geq 3.0\, \rm{GeV}^2$; the corresponding calculations are non-trivial.
Now we take into account all uncertainties of the input parameters, and obtain the values of the masses and pole residues of
the triply-heavy baryon states, which are shown explicitly in Table \ref{Borel-mass} and Fig.\ref{mass-fig}.
In Fig.\ref{mass-fig}, we plot the masses of the triply-heavy baryon states with variations of the Borel parameters $T^2$ in much larger ranges than the Borel windows. From the figure, we can see that platforms indeed appear in the Borel windows; the uncertainties originating from the Borel parameters are very small, so it is reliable to extract the triply-heavy baryon masses.
\begin{figure}
\centering
\includegraphics[totalheight=5cm,width=7cm]{ccc-32-fr.EPS}
\includegraphics[totalheight=5cm,width=7cm]{bbb-32-fr.EPS}
\caption{ The contributions of the vacuum condensates with variations of the Borel parameters $T^2$, where the $A$ and $B$ denote the baryon states $ccc({\frac{3}{2}}^+)$ and $bbb({\frac{3}{2}}^+)$, respectively, the $n=0$, $4$, $6$ denotes the dimensions of the vacuum condensates. }\label{fr-fig}
\end{figure}
\begin{figure}
\centering
\includegraphics[totalheight=5cm,width=7cm]{mass-ccc-32.EPS}
\includegraphics[totalheight=5cm,width=7cm]{mass-ccb-32.EPS}
\includegraphics[totalheight=5cm,width=7cm]{mass-ccb-12.EPS}
\includegraphics[totalheight=5cm,width=7cm]{mass-bbc-32.EPS}
\includegraphics[totalheight=5cm,width=7cm]{mass-bbc-12.EPS}
\includegraphics[totalheight=5cm,width=7cm]{mass-bbb-32.EPS}
\caption{ The masses of the triply-heavy baryon states with variations of the Borel parameters $T^2$, where the $A$, $B$, $C$, $D$, $E$ and $F$ denote the baryon states $ccc({\frac{3}{2}}^+)$, $ccb({\frac{3}{2}}^+)$, $ccb({\frac{1}{2}}^+)$, $bbc({\frac{3}{2}}^+)$, $bbc({\frac{1}{2}}^+)$ and
$bbb({\frac{3}{2}}^+)$, respectively. }\label{mass-fig}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}\hline\hline
&$ccc({\frac{3}{2}}^+)$ &$ccb({\frac{3}{2}}^+)$ &$ccb({\frac{1}{2}}^+)$ &$bbc({\frac{3}{2}}^+)$ &$bbc({\frac{1}{2}}^+)$ &$bbb({\frac{3}{2}}^+)$ \\ \hline
This Work &$4.81\pm0.10$ &$8.03\pm0.08$ &$8.02\pm0.08$ &$11.23\pm0.08$ &$11.22\pm0.08$ &$14.43\pm0.09$ \\ \hline
\cite{LQCD-1} &$4.763$ & & & & & \\ \hline
\cite{LQCD-2} &$4.796$ &$8.037$ &$8.007$ &$11.229$ &$11.195$ &$14.366$ \\ \hline
\cite{LQCD-3} &$4.769$ & & & & & \\ \hline
\cite{LQCD-4} &$4.789$ & & & & & \\ \hline
\cite{LQCD-5} &$4.761$ & & & & & \\ \hline
\cite{LQCD-6} &$4.734$ & & & & & \\ \hline
\cite{LQCD-7} & & & & & &$14.371$ \\ \hline
\cite{LQCD-8} & &$8.026$ &$8.005$ &$11.211$ &$11.194$ & \\ \hline
\cite{QCDSR-1} &$4.67\pm 0.15$ &$7.45\pm 0.16$ &$7.41\pm 0.13$ &$10.54\pm 0.11$ &$10.30\pm 0.10$ &$13.28\pm 0.10$ \\ \hline
\cite{QCDSR-2} &$4.72\pm0.12$ &$8.07\pm0.10$ & &$11.35\pm0.15$ & &$14.30\pm0.20$ \\ \hline
\cite{QCDSR-3} & & &$8.50\pm 0.12$ & &$11.73\pm 0.16$ & \\ \hline
\cite{QCDSR-4} &$4.99 \pm 0.14$ &$8.23\pm 0.13$ &$8.23 \pm 0.13$ &$11.49 \pm 0.11$ &$11.50\pm 0.11$ &$14.83\pm 0.10$ \\ \hline
\cite{PQM-1} &$4.965$ &$8.265$ &$8.245$ &$11.554$ &$11.535$ &$14.834$ \\ \hline
\cite{PQM-2} &$4.798$ &$8.023$ &$8.004$ &$11.221$ &$11.200$ &$14.396$ \\ \hline
\cite{PQM-3} &$4.763$ & & & & &$14.371$ \\ \hline
\cite{PQM-4} &$4.760$ &$8.032$ &$7.999$ &$11.287$ &$11.274$ &$14.370$ \\ \hline
\cite{PQM-5} &$4.799$ &$8.019$ & &$11.217$ & &$14.398$ \\ \hline
\cite{PQM-6} &$4.76$ &$7.98$ &$7.98$ &$11.19$ &$11.19$ &$14.37$ \\ \hline
\cite{PQM-7} &$4.777$ &$8.005$ &$7.984$ &$11.163$ &$11.139$ &$14.276$ \\ \hline
\cite{PQM-8} &$4.79$ &$8.03$ & &$11.20$ & &$14.30$ \\ \hline
\cite{PQM-9} &$4.803$ &$8.025$ &$8.018$ &$11.287$ &$11.280$ &$14.569$ \\ \hline
\cite{PQM-10} &$4.806$ & & & & &$14.496$ \\ \hline
\cite{PQM-11} &$4.897$ &$8.273$ &$8.262$ &$11.589$ &$11.546$ &$14.688$ \\ \hline
\cite{PQM-12} &$4.773$ & & & & & \\ \hline
\cite{PQM-13} &$4.828$ & & & & &$14.432$ \\ \hline
\cite{PQM-14} &$4.900$ &$8.140$ & &$10.890$ & &$14.500$ \\ \hline
\cite{PQM-15} &$4.799$ &$8.046$ &$8.018$ &$11.245$ &$11.214$ &$14.398$ \\ \hline
\cite{PQM-16} &$4.798$ & &$8.018$ & &$11.215$ &$14.398$ \\ \hline
\cite{Faddeev-1} &$4.760$ &$7.963$ &$7.867$ &$11.167$ &$11.077$ &$14.370$ \\ \hline
\cite{Faddeev-2} &$4.799$ & & & & &$14.244$ \\ \hline
\cite{Faddeev-3} &$5.00$ & &$8.19$ & & &$14.57$ \\ \hline
\cite{Faddeev-4} &$4.93$ &$8.03$ &$8.01$ &$11.12$ &$11.09$ &$14.23$ \\ \hline
\cite{Regge-1} &$4.834$ & & & & & \\ \hline
\cite{Regge-2} & & & & & &$14.788$ \\ \hline
\hline\hline
\end{tabular}
\end{center}
\caption{ The masses of the triply-heavy baryon states from different theoretical approaches, where the unit is GeV. } \label{QQQ-mass}
\end{table}
In Table \ref{QQQ-mass}, we also present the predictions of the triply-heavy baryon masses from lattice QCD \cite{LQCD-1,LQCD-2,LQCD-3,LQCD-4,LQCD-5,LQCD-6,LQCD-7,LQCD-8}, the QCD sum rules \cite{QCDSR-1,QCDSR-2,QCDSR-3,QCDSR-4}, various potential quark models \cite{PQM-1,PQM-2,PQM-3,PQM-4,PQM-5,PQM-6,PQM-7,PQM-8,PQM-9,PQM-10,PQM-11,PQM-12,PQM-13,PQM-14,PQM-15,PQM-16}, the Faddeev equation \cite{Faddeev-1,Faddeev-2,Faddeev-3,Faddeev-4} and Regge trajectories \cite{Regge-1,Regge-2}. From the table, we can see that the masses predicted in previous works lie in the ranges $M_{ccc,{\frac{3}{2}}^+}=(4.7\sim5.0)\,\rm{GeV}$, $M_{ccb,{\frac{3}{2}}^+}=(7.5\sim8.3)\,\rm{GeV}$, $M_{ccb,{\frac{1}{2}}^+}=(7.4\sim8.5)\,\rm{GeV}$,
$M_{bbc,{\frac{3}{2}}^+}=(10.5\sim11.6)\,\rm{GeV}$, $M_{bbc,{\frac{1}{2}}^+}=(10.3\sim11.7)\,\rm{GeV}$ and $M_{bbb,{\frac{3}{2}}^+}=(13.3\sim14.8)\,\rm{GeV}$, while the present predictions are $M_{ccc,{\frac{3}{2}}^+}=(4.81\pm0.10)\,\rm{GeV}$, $M_{ccb,{\frac{3}{2}}^+}=(8.03\pm0.08)\,\rm{GeV}$, $M_{ccb,{\frac{1}{2}}^+}=(8.02\pm0.08)\,\rm{GeV}$,
$M_{bbc,{\frac{3}{2}}^+}=(11.23\pm0.08)\,\rm{GeV}$, $M_{bbc,{\frac{1}{2}}^+}=(11.22\pm0.08)\,\rm{GeV}$ and $M_{bbb,{\frac{3}{2}}^+}=(14.43\pm0.09)\,\rm{GeV}$, which are compatible with them but refined and more robust compared to the previous calculations based on the QCD sum rules \cite{QCDSR-1,QCDSR-2,QCDSR-3,QCDSR-4}.
\section{Conclusion}
In this article, we restudy the ground-state triply-heavy baryon states with the QCD sum rules by carrying out the operator product expansion up to the vacuum condensates of dimension 6 in a consistent way and performing a novel analysis. It is the first time that the three-gluon condensates have been taken into account in the QCD sum rules for the triply-heavy baryon states. In the calculations, we choose the $\overline{MS}$ masses of the heavy quarks, which work well in studying the doubly-heavy baryon states, hidden-charm tetraquark states, hidden-bottom tetraquark states, hidden-charm pentaquark states and fully-charmed tetraquark states, and we vary the energy scales to select the optimal values so as to obtain more stable QCD sum rules and enhance the pole contributions. The present predictions of the triply-heavy baryon masses are compatible with the existing theoretical calculations but are refined and more robust compared to the previous QCD sum rule calculations; they can be confronted with the experimental data in the future and contribute to mapping out the mass spectrum of the heavy baryon states.
\section*{Acknowledgements}
This work is supported by National Natural Science Foundation, Grant Number 11775079.
\section{Introduction}
A generalized random matrix model with additional interactions \cite{Alam-Muttalib-Wang-Yadav20}, called the $\gamma$-ensembles, was introduced recently as a solvable toy model for three-dimensional (3D) disordered conductors. The joint probability distribution (jpd) of the $N$ non-negative eigenvalues $x_i$ for these $\gamma$-ensembles has the form
\begin{equation}
\begin{aligned}
&p(\{x_i\};\theta,\gamma) \propto \prod_{i=1}^Nw(x_i)\prod_{i<j}|x_i-x_j||x_i^{\theta}-x_j^{\theta}|^{\gamma},\\
&0< \gamma, \;\;\; 1 < \theta < \infty.
\label{gamma_ensemble_jpd}
\end{aligned}
\end{equation}
Here we assume the convention $w(x) = e^{-NV(x)}$, so that the empirical distribution of the particles (a.k.a. the equilibrium measure) converges as $N \to \infty$. In \cite{Alam-Muttalib-Wang-Yadav20}, the parameter $\gamma$ was restricted to $ 0 < \gamma \le 1$, but the method developed there allows the evaluation of the density of eigenvalues of the $\gamma$-ensembles for any $\gamma > 0$, $\theta>1$ and for any well-behaved $V(x)$. In particular, it was shown that the jpd for the $\gamma$-ensembles can be mapped onto the Muttalib-Borodin (MB) ensembles \cite{Muttalib95,Borodin99,Forrester-Wang15,Zhang15,Kuijlaars-Molag19,Molag20,Wang-Zhang21} (which have the same jpd as Eq. (\ref{gamma_ensemble_jpd}), with $\gamma=1$), by replacing the external potential $V(x)$ with a $\gamma$-dependent effective potential $V_{\eff}(x;\gamma)$. This effective potential was calculated explicitly for $\theta=2$ by numerically solving the Riemann-Hilbert (RH) problem associated with the jpd of the $\gamma$-ensembles. This allowed the calculation of the corresponding exact density of the eigenvalues $\sigma(x)$, which can be used to calculate the average conductance of a disordered conductor.
In terms of the variables in Eq. (\ref{gamma_ensemble_jpd}), the average dimensionless conductance per channel $g_{channel}$ of a disordered conductor (in units of the quantum conductance $e^2/\hbar$) is given by \cite{note,Muttalib-Pichard-Stone87}
\begin{equation}
g_{channel}=\int_0^{\infty} \frac{\sigma(x)}{\cosh^2\sqrt{x}}dx.
\label{g}
\end{equation}
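Given a numerically tabulated density $\sigma(x)$ on its support, Eq. (\ref{g}) is a simple one-dimensional quadrature. A minimal sketch follows; the density used there is an arbitrary placeholder (assumed normalized to unity, as for the equilibrium measure used here) and is not one of the densities computed below.
\begin{verbatim}
import numpy as np

def g_channel(x, sigma):
    # average conductance per channel: integral of sigma(x)/cosh^2(sqrt(x))
    return np.trapz(sigma / np.cosh(np.sqrt(x)) ** 2, x)

# placeholder density peaked near the origin ("metallic-like"), illustration only
x = np.linspace(1e-6, 25.0, 4000)
sigma = np.exp(-x) / np.sqrt(x)
sigma /= np.trapz(sigma, x)   # normalize the equilibrium measure to 1
print(g_channel(x, sigma))
\end{verbatim}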
Clearly, a large peak in the density near the origin corresponds to a large conductance, or a metal, while a density which is small near the origin and spread out at large values of $x$ will correspond to a small conductance, or an insulator.
As shown in \cite{Alam-Muttalib-Wang-Yadav20}, while the exact solution of the density for Eq. (\ref{gamma_ensemble_jpd}) for $\theta=2$ shows a significant change in the density as a function of the two-particle interaction parameter $\gamma$, the change in density is not large enough to affect the conductance $g$ significantly. Thus the question arises: What is the role of the parameter $\gamma$ in the transition from metallic to insulating behavior of a disordered quantum conductor?
In this paper we address this question in three steps:
First, we show that if we allow $1 < \theta < 2$, then the effective potential near the origin becomes non-monotonic for $\gamma < 1$, where the degree of non-monotonicity increases with decreasing $\gamma$. This is significant because such a non-monotonic effective potential can in principle give rise to a transition in the density from hard-edge to soft-edge, which means a transition from a diverging to a non-diverging density near the origin, as shown by Claeys and Romano (CR) \cite{Claeys-Romano14}. As a bonus, we find that for Laguerre $\beta$-ensembles, the eigenvalue density for all values of $\beta\ge 1$ can be obtained by considering the $\theta\to 1$ limit of the $\gamma$-ensembles, with $\beta=\gamma+1$, as shown in Appendix A.
Second, while the CR model (which belongs to the MB-ensembles) shows a transition from a diverging to a non-diverging density near the origin by changing the non-monotonicity parameter $\rho$ of the single-particle potential $V(x)=x^2-\rho x$, we show that for a fixed value of $\rho$, a similar transition occurs as a function of the two-particle interaction parameter $\gamma$. This shows that the role of the parameter $\gamma$ in the $\gamma$-ensembles is qualitatively similar to a non-monotonicity parameter in the single-particle potential.
Third, we consider a realistic phenomenological single-particle potential for a disordered conductor of the form $V(x)=\Gamma x - (1/2)\ln \sinh^2\sqrt{x}$ where the logarithmic term arises naturally as a Jacobian factor \cite{Markos-Muttalib-Wolfle05} and $\Gamma$ is also a function of $\gamma$. This model produces a transition in the density from a peak near the origin to a density with a gap near the origin as $\gamma$ is reduced systematically from 1, the gap increasing with decreasing $\gamma$. This change in the density is sufficient to result in a transition from a metallic to an insulating conductance.
While such a toy model is clearly not sufficient to describe metal-to-insulator transition in actual physical systems, the results suggest that the $\gamma$-ensembles with appropriate single-particle potentials can be used as a possible framework to study the distribution of conductances across the metal-insulator transition.
The paper is organized as follows. In Sec. \ref{sec:2} we give a brief outline of the numerical solution to the RH problem for the $\gamma$-ensembles. The equilibrium density can be obtained by replacing the external potential $V(x)$ with the $\gamma$-dependent effective potential $V_{\eff}(x;\gamma)$. In Secs. \ref{sec:3}, \ref{sec:4} and \ref{sec:5} we follow the three steps mentioned above and systematically explore the role of the parameter $\gamma$.
We summarize our results in Sec. \ref{sec:6}. Results obtained as a bonus for the well-known
$\beta$-ensembles as a $\theta \to 1$ limit of the $\gamma$-ensembles are discussed in Appendix A. Some mathematical details are given in Appendix B.
\section{The equilibrium problem for $\gamma$ ensemble} \label{sec:2}
Here we give a brief overview of the solution to the RH problem for the $\gamma$-ensembles and of the computation of their eigenvalue density. The complete analysis can be found in \cite{Alam-Muttalib-Wang-Yadav20}. Consider the $\gamma$-ensembles defined by the jpd in Eq. (\ref{gamma_ensemble_jpd}).
The unique equilibrium measure $\mu$ that minimizes the energy functional
\begin{equation}
\begin{aligned}
&\frac{1}{2} \iint \ln \frac{1}{\lvert x - y \rvert} d\mu(x) d\mu(y) + \frac{\gamma}{2} \iint \ln \frac{1}{\lvert x^{\theta} - y^{\theta} \rvert} d\mu(x) d\mu(y) \\
&+ \int V(x) d\mu(x),
\end{aligned}
\end{equation}
satisfies the Euler-Lagrange (EL) equation
\begin{equation}
\int \ln| x - y| d\mu(y) +\gamma\int \ln| x^{\theta}-y^{\theta}|d\mu(y)
- V(x)= \ell
\label{euler-lagrange}
\end{equation}
if $x$ lies inside the support of the density, with the equality sign replaced by $<$ if $x$ lies outside the support. Here $\ell$ is some constant. In this section we give the formalism for hard-edge support, where we assume that the eigenvalue density lies on the support $[0,b]$ for some $b>0$. The analogous formalism for the soft-edge case, in which the density lies on a support $[a,b]$ with $b>a>0$, is given in Appendix B. In formulating the RH problem from the above EL equations, a crucial role is played by the Joukowsky transformation (JT) for the hard edge,
\begin{equation}
\begin{aligned}
J_c(s) = {}& c(s+1)(\frac{s+1}{s})^{\frac{1}{\theta}}, \\
\end{aligned}
\label{joukowsky}
\end{equation}
where $s$ is a complex variable. The points in the complex plane that are mapped by the JT onto the real line form a contour $\nu$ given by
\begin{equation}
r(\phi)= \left. \tan \left( \frac{\phi}{1+\theta} \right) \middle/ \left[ \sin\phi-\cos\phi\tan \left( \frac{\phi}{1+\theta} \right) \right] \right.,
\label{contour_hard_edge}
\end{equation}
where $0<\phi<2\pi$ is the argument of $s$ in the complex plane. The schematic Fig. \ref{mapping_schematic} shows the mapping of all points on the contour $\nu$ to two different regions in the complex plane by the JT $J_c(s)$.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{figures/mapping_sch.pdf}
\end{center}
\caption{
(Color online)
Schematic figure for the mapping of JT for a hard-edge problem. Here $D$ is the region inside the contour $\nu_1,\nu_2$ ($\bar{D}$ is the region outside). $\mathbb{H}_\theta$ is the angular region at the top right between the lines [5],[6]. $\mathbb{C}$ denotes the complex plane.}
\label{mapping_schematic}
\end{figure}
By defining complex transforms
\begin{equation}
\begin{aligned}
g(z) \equiv {}& \int_0^b \log(z-x)d\mu(x), && z \in \mathbb{C}\backslash(-\infty,b], \\
\tilde{g}(z) \equiv {}& \int_0^b \log(z^{\theta}-x^{\theta})d\mu(x), && z \in \mathbb{H}_\theta\backslash(0,b],
\end{aligned}
\label{complex_transforms}
\end{equation}
with their derivatives $G(s)\equiv g^{\prime}(s)$, $\tilde{G}(s)\equiv \tilde{g}^{\prime}(s)$ and the function $M(s)$ as,
\begin{equation}
M(s)\equiv
\begin{cases}
G[J_c(s)], & \text{for } s\in\mathbb{C}\backslash \bar{D}, \\
\tilde{G}[J_c(s)], & \text{for } s\in D\backslash[-1,0],
\end{cases}
\label{M_def}
\end{equation}
the sum and difference of the EL equations can be written as
\begin{equation}
\begin{split}
M_+(s_1)+\gamma M_-(s_1)+M_-(s_2)+\gamma M_+(s_2)
= {}& 2V^{\prime}[J_c(s)], \\
M_+(s_1)-M_-(s_2)+M_-(s_1)-M_+(s_2)
= {}& 0.
\end{split}
\label{M_EL}
\end{equation}
Here $s_1 \in \nu_1$ and $s_2 \in \nu_2$ (see Fig. \ref{mapping_schematic}). Equation (\ref{M_EL}), together with some of the limits of $M(s)$, forms the RH problem for $M(s)$. The RH problem in terms of $N(s)\equiv M(s)J_c(s)$ is then as follows.
\vspace{0.2cm}
\paragraph*{RH problem for $N$:}
\begin{itemize}
\item
$N$ is analytic in $\mathbb{C} \setminus \nu$.
\item
$N_+(s_1)+\gamma N_-(s_1)+N_-(s_2)+\gamma N_+(s_2)$\\
$= 2V^{\prime}[J_c(s)]J_c(s)$
\begin{equation}
N_+(s_1)-N_-(s_2)+N_-(s_1)-N_+(s_2)
= 0.
\label{Npm}
\end{equation}
\item
$N(0) = \theta$ and $N(s) \to 1$ as $s \to \infty$.
\end{itemize}
We further define a function $f$ such that,
\begin{equation} \label{eq:defn_f}
f[J_c(s)]\equiv N_+(s)+N_-(s).
\end{equation}
This gives the solution to the RH problem for $N(s)$ as
\begin{equation}
N(s)=
\begin{cases}
\frac{-1}{2\pi i}\oint_{\nu}\frac{f[J_c(\xi)]}{\xi -s}\; d\xi +1, & s\in \mathbb{C}\backslash \bar{D}, \cr
\frac{1}{2\pi i}\oint_{\nu}\frac{f[J_c(\xi)]}{\xi -s}\; d\xi -1, & s\in D\backslash [-1,0].
\end{cases}
\label{N_def_contr}
\end{equation}
Also from the RH problem for $N(s)$, the constant $c$ of the JT in Eq. (\ref{joukowsky}) satisfies the equation
\begin{equation}
\label{c_hard_edge}
\frac{1}{2\pi i}{\displaystyle \oint_{\nu}^{}}\frac{f[J_c(s)]}{s}ds=1+\theta.
\end{equation}
Thus the sum equation in the RH problem for $N(s)$ can be rewritten as,
\begin{equation}
(1-\gamma)[N_+(s_1)+N_-(s_2)]+2\gamma f[J_c(s)]
=2V^{\prime}[J_c(s)]J_c(s).
\end{equation}
Defining the inverse mapping of the JT as
\begin{equation}
s=J_c^{-1}(x)=h(x).
\label{inversemap}
\end{equation}
with $(s_1)_+ = h(y)$, $(s_2)_- = \bar{h}(y)$, $s_1 = h(x)$ and $s_2 = \bar{h}(x)$, we substitute for $[N_+(s_1)+N_-(s_2)]$ using Eq. (\ref{N_def_contr}) and the inverse mapping. We finally obtain the integral equation
\begin{equation}
\label{f_integral_eqn}
f(y;\gamma) = \frac{V^{\prime}(y)y}{\gamma}
-\frac{1-\gamma}{\gamma}\bigg[1 + \frac{1}{2\pi}\int_0^b f(x;\gamma)\phi(x,y)dx \bigg],
\end{equation}
where
\begin{equation}
\phi(x,y) = \Im\bigg[ \left( \frac{1}{h(y) - \overline{h}(x)} + \frac{1}{\overline{h}(y) - \overline{h}(x)} \right) \overline{h}^{\prime}(x) \bigg].
\end{equation}
We solve Eq. (\ref{f_integral_eqn}) for $f(y;\gamma)$ and Eq. (\ref{c_hard_edge}) for $c$ numerically, self-consistently. The new effective potential $V_{\eff}(x;\gamma)$ is related to $f(x;\gamma)$ by
\begin{equation}
V'_{\eff}(x;\gamma)=\frac{f(x;\gamma)}{x}.
\label{V-effective}
\end{equation}
The eigenvalue density for this effective potential is given by \cite{Alam-Muttalib-Wang-Yadav20},
\begin{equation}
\label{density_hard_edge}
\begin{split}
\sigma(y;\gamma)= {}& \frac{-1}{2{\pi}^2 \gamma y}\int_{b}^0 xV'_{\eff}(x;\gamma)\chi(x,y) dx,\\
\chi(x,y)= {}& \Re\bigg[ \bigg( \frac{1}{\overline{h}(y) - h(x)}-\frac{1}{h(y) - h(x)} \bigg)h^{\prime}(x)\bigg].
\end{split}
\end{equation}
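In practice, Eqs. (\ref{f_integral_eqn})--(\ref{density_hard_edge}) are solved by a fixed-point iteration for $f(y;\gamma)$, coupled self-consistently to the normalization condition (\ref{c_hard_edge}) that fixes $c$ (and hence $h$ and the endpoint $b$). The following pseudocode-level sketch shows only the structure of the inner iteration; the inverse map $h=J_c^{-1}$, the derivative of $\bar{h}$, the outer loop for $c$, and the principal-value treatment of the kernel are placeholders or are glossed over, and this is not the code used to generate the results below.
\begin{verbatim}
import numpy as np

def solve_f(Vprime, gamma, h, hbar_prime, x, tol=1e-8, max_iter=1000):
    # Inner fixed-point iteration for f(y;gamma):
    #   f(y) = V'(y) y / gamma
    #          - (1-gamma)/gamma * [ 1 + (1/2pi) int_0^b f(x) phi(x,y) dx ]
    hy = h(x)                 # h on the grid (the same grid is used for x and y)
    hb = np.conj(hy)          # hbar on the grid
    hbp = hbar_prime(x)       # hbar'(x), supplied as a placeholder routine
    d1 = 1.0 / (hy[:, None] - hb[None, :])
    d2 = hb[:, None] - hb[None, :]
    np.fill_diagonal(d2, np.inf)   # crude stand-in for the principal value at x = y
    phi = np.imag((d1 + 1.0 / d2) * hbp[None, :])
    f = Vprime(x) * x / gamma      # initial guess: drop the bracket term
    for _ in range(max_iter):
        integral = np.trapz(f[None, :] * phi, x, axis=1)
        f_new = Vprime(x) * x / gamma \
                - (1.0 - gamma) / gamma * (1.0 + integral / (2.0 * np.pi))
        if np.max(np.abs(f_new - f)) < tol:
            return f_new
        f = f_new
    return f
\end{verbatim}
Once $f(x;\gamma)$ has converged, $V^{\prime}_{\eff}(x;\gamma)=f(x;\gamma)/x$ and the density follows from the analogous quadrature with the kernel $\chi(x,y)$ in Eq. (\ref{density_hard_edge}).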
\begin{figure*}
\centering
\subfigure{\includegraphics[width=0.325\textwidth]{figures/potential_theta_comparison_gamma_0pt6_theta_1pt5.eps}}
\subfigure{\includegraphics[width=0.325\textwidth]{figures/potential_theta_comparison_gamma_0pt6_theta_1pt1.eps}}
\subfigure{\includegraphics[width=0.325\textwidth]{figures/potential_theta_comparison_gamma_0pt6_theta_1pt01.eps}}
\subfigure{\includegraphics[width=0.325\textwidth]{figures/potential_theta_comparison_gamma_0pt6_theta_1pt0001.eps}}
\subfigure{\includegraphics[width=0.325\textwidth]{figures/potential_theta_comparison_gamma_0pt6.eps}}
\caption{(Color online)
Effective potentials close to the origin and over the full support, for $\gamma = 0.6$ and different values of $\theta$. Near the origin, the minimum of the non-monotonic effective potential first moves away from the origin and then moves towards the origin as $\theta$ is reduced. Note that the effective potential is monotonic for $\theta =2$ \cite{Alam-Muttalib-Wang-Yadav20}. Also, consistent with the analytical result for $\theta=1$, the non-monotonicity of the effective potential near the origin decreases as $\theta\rightarrow 1$.}
\label{potential_theta_1}
\end{figure*}
In summary, starting with a jpd of the $\gamma$-ensemble with some confining potential $V(x)$, it is possible to map the problem to an MB ensemble ($\gamma=1$), but with an effective potential $V_{eff}(x,\gamma)$ given by Eq. (\ref{V-effective}). Then, the density of the eigenvalues for such an MB ensemble can be obtained using Eq. (\ref{density_hard_edge}).
We will use this prescription in the following sections to obtain the density of eigenvalues for several different toy models. We will show that one effect of the parameter $\gamma$ is to add non-monotonicity to the effective potential.
\section{Nonmonotonic effective potential for $1<\theta < 2$} \label{sec:3}
As a first step towards understanding the role of the parameter $\gamma$ in the $\gamma$-ensembles, we consider a range of the parameter $\theta$, beyond the value $\theta=2$ considered in detail in \cite{Alam-Muttalib-Wang-Yadav20}. The idea is to show first of all that for certain range of $\theta$, the effective potential can become non-monotonic near the origin. Within that range, the goal is then to choose a particular fixed value of $\theta$ that shows a significant non-monotonicity and systematically study the effective potential as well as the eigenvalue density as a function of $\gamma$. This would allow us to focus on the role of $\gamma$ in the $\gamma$-ensembles. We will restrict ourselves to the case $\gamma <1$, which is expected to be relevant for disordered quantum conductors.
Figure \ref{potential_theta_1} shows the effective potentials near the origin for $\gamma=0.6$ and a range of values for $\theta$ between $1$ and $2$. We have shown in \cite{Alam-Muttalib-Wang-Yadav20} that the effective potential for $\theta=2$ monotonically goes to zero at the origin. As $\theta$ is reduced from $2$, the effective potential develops a non-monotonicity. The minimum of the effective potential gradually becomes deeper and moves away from the origin. Then, as $\theta$ moves closer to $1$, the minimum becomes shallower and shifts back towards the origin. Thus, with decreasing non-monotonicity, we expect the effective potential to become linear for $\theta=1$, as predicted by Eq. (\ref{V_eff_theta_1}). We have also verified this expected analytical result for the $\gamma >1$ case.
Figure \ref{potential_theta_1} suggests that even for $\theta$ close enough to $\theta=1$, the effect of $\gamma$ on the non-monotonicity could be observable. We therefore choose $\theta = 1.0001$ and a linear external potential, $V(x)=2x$. Figure \ref{potential_near_origin} shows the effective potential for different values of $\gamma$, where we include $\gamma > 1$ as well to show that the results are qualitatively different.
Note that the limit $\theta=1$ is identical to the well-known $\beta$-ensembles with $\beta=\gamma+1$. Analytical results for such Laguerre $\beta$-ensembles obtained in Appendix A suggest that the non-monotonicity of the effective potential should disappear at $\theta = 1$. The present formalism allows us to consider the $\theta \to 1$ limit and thereby obtain the effective potential as well as the density for $\beta$-ensembles for arbitrary $\beta$, as shown in the Appendix.
\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{figures/potential_near_origin_beta_ensembles_pub2_corrected4.eps}
\end{center}
\caption{
(Color online)
Effective potential near the origin for different $\gamma$, $V(x)=2x$ and $\theta = 1.0001$.}
\label{potential_near_origin}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{figures/potential_near_origin_quadratic.eps}
\end{center}
\caption{
(Color online)
Effective potential near the origin for quadratic potential $V(x)=0.2x^2$, $\gamma=0.7$ and $\theta = 1.0001$.}
\label{potential_quadratic}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.48\textwidth]{figures/potential_transition.eps}
\end{center}
\caption{
(Color online)
Effective potential for $\theta = 1.2$, $V(x)=x^2-2.35x$ and different $\gamma$.}
\label{potential_transition}
\end{figure}
\begin{figure*}
\centering
\subfigure{\includegraphics[width=0.325\textwidth]{figures/density_transition_gamma_0pt8.eps}}
\subfigure{\includegraphics[width=0.325\textwidth]{figures/density_transition_gamma_0pt6.eps}}
\subfigure{\includegraphics[width=0.325\textwidth]{figures/density_transition_gamma_0pt5_hard.eps}}
\subfigure{\includegraphics[width=0.325\textwidth]{figures/density_transition_gamma_0pt4_hard.eps}}
\subfigure{\includegraphics[width=0.325\textwidth]{figures/density_transition_gamma_0pt5_soft.eps}}
\subfigure{\includegraphics[width=0.325\textwidth]{figures/density_transition_gamma_0pt4_soft.eps}}
\caption[]{(Color online)
The eigenvalue density, for $\theta = 1.2$, $V(x)=x^2-2.35x$, and different values of $\gamma$. The inset shows the corresponding density near the origin. For $\gamma = 0.5$ and $0.4$, the hard-edge eigenvalue densities become negative near the origin, implying that the assumption of hard-edge support is wrong and that the true density has soft-edge support. The last two panels (the small kinks in the density are numerical artifacts and go away with a finer grid) show the true eigenvalue density for $\gamma = 0.5$ and $0.4$ with the soft-edge support. }
\label{density_transition}
\end{figure*}
\begin{figure*}
\centering
\subfigure{\includegraphics[width=0.325\textwidth]{figures/density_MMW_sigmoid_fit_gamma_0pt7_ver2_new.eps}}
\subfigure{\includegraphics[width=0.325\textwidth]{figures/density_MMW_sigmoid_fit_gamma_0pt71_ver2_new.eps}}
\subfigure{\includegraphics[width=0.325\textwidth]{figures/density_MMW_sigmoid_fit_gamma_0pt72_ver2_new.eps}}
\subfigure{\includegraphics[width=0.325\textwidth]{figures/density_MMW_sigmoid_fit_gamma_0pt73_ver2_new.eps}}
\subfigure{\includegraphics[width=0.325\textwidth]{figures/density_MMW_sigmoid_fit_gamma_0pt73033_ver2_new.eps}}
\subfigure{\includegraphics[width=0.325\textwidth]{figures/density_MMW_sigmoid_fit_gamma_0pt73066_ver2_new.eps}}
\subfigure{\includegraphics[width=0.325\textwidth]{figures/density_MMW_sigmoid_fit_gamma_0pt731_ver2_new.eps}}
\subfigure{\includegraphics[width=0.325\textwidth]{figures/density_MMW_sigmoid_fit_gamma_0pt73102_ver2_new.eps}}
\subfigure{\includegraphics[width=0.325\textwidth]{figures/density_MMW_sigmoid_fit_gamma_0pt73105_ver2_new.eps}}
\caption{(Color online)
The eigenvalue density, for $\theta = 1.8$, $V(x)=\frac{a\gamma}{1+\ln\frac{1-\gamma}{\gamma}}x-\frac{1}{2}\ln(\sinh 2\sqrt{x})$ with $a=0.01$, and different values of $\gamma$. All densities have soft-edge support.}
\label{density_MMW_sigmoid}
\end{figure*}
To explore how the non-monotonicity changes with the single-particle potential, we consider the $\gamma$-ensemble with a quadratic single-particle potential $V(x)=\alpha x^2$, $\gamma = 0.7$ and $\theta \rightarrow 1$. We choose $\alpha = 0.2$ so that the potential is much weaker near the origin compared to the linear potential. Figure \ref{potential_quadratic} shows that the minimum of the effective potential is shifted significantly further from the origin and is deeper than for the effective potentials in Fig. \ref{potential_near_origin}.
\section{Hard-edge to soft-edge transition for eigenvalue density} \label{sec:4}
In the previous section we showed that the effect of decreasing the exponent $\gamma$ from $1$ in the $\gamma$-ensembles with either a linear or a quadratic single-particle potential is equivalent to adding a non-monotonicity in the effective potential for the corresponding MB ensembles. It has been shown in \cite{Claeys-Romano14} that such a minimum in the confining potential, if deep enough, can produce a transition from a diverging eigenvalue density at the hard edge to a non-diverging density. However, the non-monotonic effective potentials we have computed in these cases for different $\gamma$ and different $\theta$ are not sufficient to produce the hard-edge to soft-edge transition in the eigenvalue density. In this section we show that starting with a given non-monotonic potential of the form $V(x)= x^2-\rho x$, with fixed $\rho=2.35$ for which the density is still diverging near the origin, changing $\gamma$ alone is sufficient to produce such a transition. Note that this is qualitatively different from the CR model \cite{Claeys-Romano14}, where a transition is obtained by changing the non-monotonicity parameter $\rho$ in the single-particle potential; here we keep $\rho$ fixed and change the two-particle interaction parameter $\gamma$, which is expected to be related to the strength of disorder in a three-dimensional quantum conductor.
We choose the interaction parameter $\theta = 1.2$ because the results from Fig. \ref{potential_theta_1} suggest that for a given $\gamma$, the non-monotonicity in the effective potential is qualitatively the largest for $\theta$ between $1.1$ and $1.5$.
For all $\gamma < 1$, we begin with the assumption that the support of the density is hard-edge (i.e., the support starts at the origin) and we use the hard-edge formalism to compute the eigenvalue density. If, for some $\gamma < 1$, our assumption of hard-edge support for the density is wrong and the actual support is soft-edge (i.e., the support starts away from the origin), then the hard-edge formalism gives a negative (unphysical) density near the origin. In that case, we switch to the soft-edge formalism described in Appendix B and compute the non-negative density with soft-edge support. As $\gamma$ decreases from $1$, the effective potential increases (becomes more and more non-monotonic) near the origin, as shown in Fig. \ref{potential_transition}. For some critical value of $\gamma$ between $0.5$ and $0.6$, this added non-monotonicity in the effective potential brings about the hard-edge to soft-edge transition in the density; see Fig. \ref{density_transition}. As $\gamma$ is reduced further, the soft edge of the support of the density moves further and further away from the origin, increasing the gap in the spectrum.
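This edge-selection step can be summarized in a few lines of pseudocode; \texttt{solve\_hard\_edge} and \texttt{solve\_soft\_edge} are placeholders for the solvers based on the formalism of Sec. \ref{sec:2} and of Appendix B, respectively, and the sketch is only meant to indicate the logic.
\begin{verbatim}
def equilibrium_density(V, gamma, theta, solve_hard_edge, solve_soft_edge, tol=1e-10):
    # try the hard-edge ansatz first; fall back to the soft edge if the
    # resulting density turns (unphysically) negative near the origin
    x, sigma = solve_hard_edge(V, gamma, theta)
    if sigma.min() < -tol:
        x, sigma = solve_soft_edge(V, gamma, theta)
    return x, sigma
\end{verbatim}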
\section{Phenomenological model for 3D disordered conductors} \label{sec:5}
In this section we consider a phenomenological model based on results from \cite{Markos-Muttalib-Woelfle-Klauder04, Markos-Muttalib-Wolfle05}. We will restrict ourselves to 3D only; for a brief discussion of how the dimensionality enters the current formulation, see Appendix C. The jpd for the ensemble is given by
\cite{Markos-Muttalib-Woelfle-Klauder04, Markos-Muttalib-Wolfle05, Klauder-Muttalib99, Gopar-Muttalib02, Douglas-Markos-Muttalib14}
\begin{equation}
\begin{aligned}
&p(\{x_i\};\gamma) \propto \prod_{i=1}^Nw(x_i,\gamma)\prod_{i<j}|x_i-x_j||s(x_i)-s(x_j)|^{\gamma},\\
&w(x,\gamma) = e^{-N V(x,\gamma)}.
\end{aligned}
\label{DMPK_3D_MMW}
\end{equation}
where $s(x)=\sinh^2 \sqrt{x}$.
The Joukowsky transformation for the interaction term, $|\sinh^2\sqrt{x_i}-\sinh^2\sqrt{x_j}|$, is not available and hence the explicit numerical solution to the RH problem associated with this jpd cannot be obtained. Fortunately, the $x^{\theta}$ interaction term in the $\gamma$-ensembles with $\theta = 1.8$ and the $\sinh^2\sqrt{x}$ interaction term in Eq. (\ref{DMPK_3D_MMW}) have very similar qualitative behavior over a reasonable range of support for the eigenvalue density. Thus, we can use the $\gamma$-ensemble interaction term with $\theta = 1.8$ as a solvable toy model. The single-particle potential
$V(x,\gamma)$ has a dominant linear dependence on $x$ in the strongly disordered regime, with a strength that depends on the parameter $\gamma$. It also includes a logarithmic part arising from a Jacobian of transformation. In the strong-disorder regime, the total single-particle potential is given by \cite{Markos-Muttalib-Wolfle05}
\begin{equation}
V(x,\gamma)=\Gamma x-\frac{1}{2}\ln(\sinh 2\sqrt{x}),
\label{Vofgamma}
\end{equation}
where the coefficient $\Gamma$ depends on disorder, but its functional relationship with the two-particle interaction parameter $\gamma$ is not known in general. The relationship has been discussed only in the strongly disordered insulating regime \cite{Markos-Muttalib-Wolfle05} where $\Gamma\propto \gamma$, with $\gamma \ll 1$. Starting from the strongly disordered limit, Fig. 7 in Ref.~\cite{Markos-Muttalib-Wolfle05} suggests a sharp sigmoidal increase in $\gamma$ as disorder is decreased; this signals a transition from the strongly disordered insulating regime towards a weakly disordered metallic regime. Finally
in the metallic regime corresponding to $\gamma \sim 1$, the parameter $\Gamma$ is expected to be very large, although there is no numerical guideline on its $\gamma$-dependence.
A simple one-parameter model that incorporates the strongly disordered insulating limit as well as the rapid change at the transition as suggested by the numerical studies is given by $\Gamma= a\gamma/[1+\ln\frac{1-\gamma}{\gamma}]$, where $a$ is a phenomenological parameter that loosely characterizes the transition point.
In the spirit of a toy model, we do not try to fix $a$. Instead, since our numerical results converge progressively more slowly for $\gamma \le 0.5$, we choose $a=0.01$, which generates a transition for $\gamma \sim 0.73$. Starting from the insulating side and systematically increasing $\gamma$, we stop where $\Gamma$ diverges [at $\gamma = e/(1+e)$] and the model therefore reaches the metallic limit. Note that it is easy to construct a model with more parameters to include the weakly disordered (metallic) regime within this formulation, but since our focus is near the transition, which occurs at strong disorder, we will use the simplest one-parameter model discussed above.
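As a quick check of the parameterization, the location of the divergence of $\Gamma$ follows directly from the model and is independent of $a$:
\begin{equation*}
1+\ln\frac{1-\gamma}{\gamma}=0
\;\Longleftrightarrow\;
\frac{1-\gamma}{\gamma}=e^{-1}
\;\Longleftrightarrow\;
\gamma=\frac{1}{1+e^{-1}}=\frac{e}{1+e}\approx 0.7311,
\end{equation*}
which is why the densities below are computed only up to $\gamma=0.73105$.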
The effect of the logarithm in $V(x,\gamma)$ is two-fold: First, it provides a starting non-monotonicity when combined with the dominant linear single-particle potential. Second, it removes any divergence at the origin. Thus unlike the CR model, a metallic regime in this case will correspond to a peak in the density near the origin (instead of a diverging density), while an insulating regime will correspond to zero or exponentially small density (a gap) over a finite range near the origin. The metal-to-insulator transition in this case will therefore correspond to the destruction of the peak in the density of eigenvalues near the origin.
Since there is no divergence at the origin, we use the soft-edge formalism and compute the eigenvalue densities for different values of $\gamma$. Note that in this phenomenological model, both the two-particle interaction term and the single-particle potential change as $\gamma$ is changed.
Figure \ref{density_MMW_sigmoid} shows the change in the density as $\gamma$ is increased systematically. At $\gamma=0.7$ the density has a large gap near the origin and is spread out with no peak. As $\gamma$ increases, the gap becomes smaller and the density starts to develop a peak near the origin. The peak becomes very large at $\gamma=0.73105$, which is the largest value our model allows us to consider. Thus there is a clear ``transition'' in the density from zero near the origin to a large peak.
Clearly, our simplified solvable toy models cannot provide a quantitative description of a three-dimensional disordered system. Nevertheless, the toy model discussed here can provide qualitatively correct behavior for some of the quantities that are not sensitive to the details of the system parameters. Here we use Eq. (\ref{g}) to compute $g_{channel}$, the average conductance per channel (in units of the conductance quantum $e^2/\hbar$).
Figure \ref{conductance_MMW_sigmoid} shows how this quantity changes with $\gamma$.
At $\gamma=0.70$ where the density has a large gap near the origin, the conductance is very small, and it remains small as long as the gap remains appreciable, up to $\gamma=0.72$. Beyond
$\gamma = 0.725$ the gap in the density starts to close and a peak near the origin starts to grow, and the conductance starts to increase rapidly.
It reaches the value $g_{channel} \sim 1$ for $\gamma=0.73105$ which corresponds to the metallic regime. Thus a transition in the density from a peak near the origin to a large gap can be associated with a metal-to-insulator transition in the conductance.
\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{figures/conductance_MMW_sigmoid_fit_new.eps}
\end{center}
\caption{
(Color online)
The average conductance $g_{channel}$, computed from eigenvalue densities for different $\gamma$ from Figure \ref{density_MMW_sigmoid}.}
\label{conductance_MMW_sigmoid}
\end{figure}
\section{Summary and conclusion} \label{sec:6}
The eigenvalue density of $\gamma$-ensembles has previously been computed by solving the corresponding Riemann-Hilbert problem.
In this paper we use the same method to explore the role of the parameter $\gamma$ by considering various solvable toy models. First, we show that
for different values of $\theta$ between $1$ and $2$, the effective potentials for linear as well as quadratic single-particle potentials can become non-monotonic near the origin for $\gamma < 1$. The minimum of the effective potential shifts further away from the origin as $\gamma$ is decreased systematically. Second, we show that in a CR-type model with a fixed non-monotonicity, reducing $\gamma$ can give rise to a transition from a diverging to a non-diverging density. Finally, we show that in a toy model that includes a linear as well as a logarithmic single-particle potential, as suggested for three-dimensional disordered conductors, $\gamma\sim 1$ gives a conductance $g_{channel}\sim 1$, while $\gamma \ll 1$ corresponds to $g_{channel} \ll 1$. For our particular choice of the model, it also shows a rapid change in the conductance in the transition region between the two limits. While this by itself cannot describe a true metal-to-insulator transition, it provides a framework within which one should in principle be able to study the full distribution of conductances $P(g)$ across a metal-insulator transition.
This is because given a jpd $p(\{x_a\})$ of the eigenvalues, the distribution of conductances $P(g)$ can be expressed as \cite{Markos-Muttalib-Woelfle-Klauder04}
\begin{equation}
P(g)=\int \; \prod_a^N dx_a p(\{x_a\})\delta \left(g-\sum_a \frac{1}{\cosh^2 \sqrt{x_a}}\right).
\end{equation}
Considering the transition in terms of the full distribution rather than in terms of the average (or typical) conductance is particularly important. This is because even in quasi-one dimension, where $\gamma=1$ for all disorder \cite{Beenakker97} and therefore no transition exists \cite{Abrahams79}, $P(g)$ has a highly asymmetric ``one-sided log-normal distribution'' at the crossover point \cite{Muttalib-Woelfle99}, which is expected to remain qualitatively valid in three dimensions near the metal-insulator transition that happens at a critical value $\gamma=\gamma_c < 1$. It is also known from numerical studies in three dimensions that at strong disorder, $P(g)$ has a large variance as well as a finite skewness \cite{Douglas-Muttalib09}. The solvable $\gamma$-ensembles with appropriate single-particle potentials provide a possible framework to analytically study a broad and highly asymmetric distribution of conductances across a transition.
As a by-product, we find that the limit $\theta \rightarrow 1$ also corresponds to the Laguerre $\beta$-ensembles. This allows us to use the model to numerically compute the eigenvalue density for Laguerre $\beta$-ensembles for all $\beta > 1$. The results agree with various expected analytical expressions including the ones from the exact analytical solution to the RH problem for $\theta=1$. This shows the applicability of our method for general $\gamma$-ensembles with different values of $\theta >1$ and $\gamma > 0$.
\section{Acknowledgment}
DW acknowledges support from Ministry of Education, Singapore AcRF Tier 1 Grant No. R-146-000-262-114 and National Natural Science Foundation of China (NSFC) Grant No. 11871425.
\section{Review of abelian varieties of CM-type}
\begin{plain}
\label{r1}A complex abelian variety is said to be of CM-type if $\End^{0}(A)$
contains a CM-algebra\footnote{That is, a product of CM-fields.} $E$ such that
$H^{1}(A,\mathbb{Q}{})$ is free of rank $1$ as an $E$-module. Let
$S=\Hom(E,\mathbb{C}{})$, and let $H^{1}(A)=H^{1}(A,\mathbb{C}{})$. The
\[
H^{1}(A)\simeq H^{1}(A,\mathbb{Q}{})\otimes\mathbb{C}{}=\bigoplus_{s\in
S}H^{1}(A)_{s},\quad H^{1}(A)_{s}\overset{\smash{\lower.12em\hbox{\textup{\tiny def}}}}{=}H^{1}(A)\otimes
_{E,s}\mathbb{C}{}\text{.
\]
Here $H^{1}(A)_{s}$ is the (one-dimensional) subspace of $H^{1}(A)$ on which
$E$ acts through $s$. We hav
\[
H^{1,0}(A)=\bigoplus_{s\in\Phi}H^{1}(A)_{s},\quad H^{0,1}(A)=\bigoplus
_{s\in\bar{\Phi}}H^{1}(A)_{s}\text{,
\]
where $\Phi$ is a CM-type on $E$, i.e., a subset of $S$ such that
$S=\Phi\sqcup\bar{\Phi}$. Every pair $(E,\Phi)$ consisting of a CM-algebra $E$
and a CM-type $\Phi$ on $E$ arises in this way from an abelian variety.
Sometimes we identify a CM-type with its characteristic function $\phi\colon
S\rightarrow\{0,1\}$.
\end{plain}
\begin{plain}
\label{r4}Let $A$ be a complex abelian variety of CM-type, and let $E$ be a
CM-subalgebra of $\End^{0}(A)$ such that $H^{1}(A,\mathbb{Q}{})$ is a free
$E$-module of rank $1$. Let $F$ be a Galois extension of $\mathbb{Q}$ in
$\mathbb{C}{}$ splitting\footnote{That is, such that $E\otimes_{\mathbb{Q}}F$
is isomorphic to a product of copies of $F$.} $E$, and let $S=\Hom(E,F)$. We
regard the CM-type $\Phi$ of $A$ as a subset of $S$. Let $H^{r}(A)=H^{r}(A,F)$. Then
\[
H^{1}(A)\simeq H^{1}(A,\mathbb{Q}{})\otimes_{\mathbb{Q}{}}F=\bigoplus
\nolimits_{s\in S}H^{1}(A)_{s},\quad
\]
where $H^{1}(A)_{s}\overset{\smash{\lower.12em\hbox{\textup{\tiny def}}}}{=}H^{1}(A,\mathbb{Q}{})\otimes_{E,s}F$ is the
(one-dimensional) $F$-subspace of $H^{1}(A)$ on which $E$ acts through $s$.
\begin{enumerate}
\item We have
\[
H^{r}(A)\simeq\bigwedge\nolimits_{F}^{r}H^{1}(A)=\bigoplus\nolimits_{\Delta
}H^{r}(A)_{\Delta}\quad\quad\text{(}F\text{-vector spaces}),
\]
where $\Delta$ runs over the subsets of $S$ of size $|\Delta|=r$ and
$H^{r}(A)_{\Delta}\overset{\smash{\lower.12em\hbox{\textup{\tiny def}}}}{=}\bigotimes_{s\in\Delta}H^{1}(A)_{s}$ is the
(one-dimensional) subspace on which $a\in E$ acts as $\prod\nolimits_{s\in
\Delta}s(a)$.
\item Let $H^{1,0}=\bigoplus\nolimits_{s\in\Phi}H^{1}(A)_{s}$ and
$H^{0,1}=\bigoplus\nolimits_{s\in\bar{\Phi}}H^{1}(A)_{s}$. Then
\[
H^{p,q}\overset{\smash{\lower.12em\hbox{\textup{\tiny def}}}}{=}\bigwedge\nolimits^{p}H^{1,0}\otimes\bigwedge
\nolimits^{q}H^{0,1}=\bigoplus\nolimits_{\Delta}H^{p+q}(A)_{\Delta}\quad
\quad\text{(}F\text{-vector spaces}),
\]
where $\Delta$ runs over the subsets of $S$ with $|\Delta\cap\Phi|=p$ and
$|\Delta\cap\bar{\Phi}|=q$.
\item (\cite{pohlmann1968}, Theorem 1.) Let $B^{p}=H^{2p}(A,\mathbb{Q}{})\cap
H^{p,p}$ ($\mathbb{Q}{}$-vector space of Hodge classes of degree $p$ on $A$).
Then
\[
B^{p}\otimes F=\bigoplus\nolimits_{\Delta}H^{2p}(A)_{\Delta}\text{,}
\]
where $\Delta$ runs over the subsets of $S$ with
\begin{equation}
|(t\circ\Delta)\cap\Phi|=p=|(t\circ\Delta)\cap\bar{\Phi}|\text{ for all }
t\in\Gal(F/\mathbb{Q}{}). \label{eq2}
\end{equation}
\end{enumerate}
\end{plain}
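For orientation, here is the simplest illustration of condition (\ref{eq2}); it plays no role in what follows. Suppose $E$ is an imaginary quadratic field, so that $A$ is an elliptic curve with complex multiplication, $S=\{s,\bar{s}\}$ and $\Phi=\{s\}$. For $p=1$ the only subset $\Delta\subset S$ with $|\Delta|=2$ is $\Delta=S$, and it satisfies (\ref{eq2}) because $t\circ S=S$ and $|S\cap\Phi|=1=|S\cap\bar{\Phi}|$ for every $t$. Accordingly $B^{1}\otimes F=H^{2}(A)_{S}$ is one-dimensional, in agreement with the fact that, for an elliptic curve, $H^{2}(A,\mathbb{Q}{})$ is one-dimensional and spanned by the class of a point, which is a Hodge class.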
\section{Review of Weil classes}
\begin{plain}
\label{r2}Let $A$ be a complex abelian variety and $\nu$ a homomorphism from a
CM-field $E$ into $\End^{0}(A)$. The pair $(A,\nu)$ is said to be of Weil type
if $H^{1,0}(A)$ is a free $E\otimes_{\mathbb{Q}{}}\mathbb{C}{}$-module. In
this case, $d\overset{\smash{\lower.12em\hbox{\textup{\tiny def}}}}{=}\dim_{E}H^{1}(A,\mathbb{Q}{})$ is even and the
subspace $W_{E}(A)\overset{\smash{\lower.12em\hbox{\textup{\tiny def}}}}{=}\bigwedge\nolimits_{E}^{d}H^{1}
(A,\mathbb{Q}{})$ of $H^{d}(A,\mathbb{Q}{})$ consists of Hodge classes
(\cite{deligne1982}, 4.4). When $E$ has degree $2$ over $\mathbb{Q}$, these
Hodge classes were studied by Weil (1977)\nocite{weil1977}, and for this
reason are called Weil classes. A polarization of $(A,\nu)$ is a polarization
$\lambda$ of $A$ whose Rosati involution stabilizes $\nu(E)$ and acts on it as
complex conjugation. The Riemann form of such a polarization can be written
\[
(x,y)\mapsto\Tr_{E/\mathbb{Q}{}}(f\phi(x,y))
\]
for some totally imaginary element $f$ of $E$ and $E$-hermitian form $\phi$ on
$H_{1}(A,\mathbb{Q}{})$. If $\lambda$ can be chosen so that $\phi$ is split
(i.e., admits a totally isotropic subspace of dimension $d/2$), then $(A,\nu)$
is said to be of split Weil type.
\end{plain}
\begin{plain}
\label{r3}(Deligne 1982, \S 5.) Let $F$ be a CM-algebra, let $\phi_{1}
,\ldots,\phi_{2p}$ be CM-types on $F$, and let $A=\prod\nolimits_{i}A_{i}$,
where $A_{i}$ is an abelian variety of CM-type $(F,\phi_{i})$. If $\sum
_{i}\phi_{i}(s)=p$ for all $s\in T\overset{\smash{\lower.12em\hbox{\textup{\tiny def}}}}{=}\Hom(F,\mathbb{Q}
{}^{\mathrm{al}})$, then $A$, equipped with the diagonal action of $F$, is of
split Weil type. Let $I=\{1,\ldots,2p\}$ and $H^{r}(A)=H^{r}(A,\mathbb{Q}
{}^{\mathrm{al}})$. In this case, there is a diagram
\[
\begin{tikzcd}[column sep=tiny+]
W_{F}(A)\otimes\mathbb{\mathbb{Q}}^{\mathrm{al}}\arrow[equals]{r}{\text{def}}
& \Big(\dstyle\bigwedge\nolimits_{F}^{2p}H^{1}(A,\mathbb{Q}{})\Big)\otimes_{\mathbb{Q}}\mathbb{Q}
^{\mathrm{al}}\arrow[equals]{d}\arrow[hook]{r}
&\Big(\dstyle\bigwedge\nolimits_{\mathbb{Q}}^{2p}H^{1}(A,\mathbb{Q}{})\Big)\otimes_{\mathbb{Q}{}}\mathbb{Q}^{\mathrm{al}}
\arrow[equals]{d}\arrow[equals]{r}
&H^{2p}(A)\\
&\dstyle\bigoplus_{t\in T}\Big(\bigotimes_{i\in I}H^{1}(A_{i}
)_{t}\Big)\arrow[hook]{r}
&\dstyle\bigoplus_{\substack{J\subset I\times T\\|J|=2p\\}}\Big(\bigotimes_{(i,t)\in J}H^{1}(A_{i})_{t}\Big)
\end{tikzcd}
\]
\end{plain}
\section{Theorem 1.}
\begin{theorem}
[\cite{andre1992}]\label{r0} Let $A$ be a complex abelian variety of CM-type.
There exist abelian varieties $A_{\Delta}$ and homomorphisms $f_{\Delta}\colon
A\rightarrow A_{\Delta}$ such that every Hodge class $t$ on $A$ can be written
as a sum $t=\sum f_{\Delta}^{\ast}(t_{\Delta})$ with $t_{\Delta}$ a Weil class
on $A_{\Delta}$.
\end{theorem}
\begin{proof}
Let $A$ be of CM-type and let $p\in\mathbb{N}{}$. We may suppose that $A$ is a
product of simple abelian varieties $A_{i}$ and let $E=\prod_{i}\End^{0}
(A_{i})$. Then $E$ is a CM-algebra, and $A$ is of CM-type $(E,\phi)$ for some
CM-type $\phi$ on $E$. Let $F$ be a CM subfield of $\mathbb{C}{}$, Galois over
$\mathbb{Q}$, splitting the centre of $\End^{0}(A)$. Then $F$ splits $E$. We
shall show that Theorem 1 holds with each $A_{\Delta}$ of split Weil type
relative to $F$. Let $T=\Hom(F,\mathbb{Q}{}^{\mathrm{al}})$, where
$\mathbb{Q}{}^{\mathrm{al}}$ is the algebraic closure of $\mathbb{Q}{}$ in
$\mathbb{C}{}$. As $F\subset\mathbb{Q}{}^{\mathrm{al}}$, we can identify $T$
with $\Gal(F/\mathbb{Q}{})$.
Fix a subset $\Delta$ of $S\overset{\smash{\lower.12em\hbox{\textup{\tiny def}}}}{=}\Hom(E,F)$ satisfying (\ref{eq2}).
For $s\in\Delta$, let $A_{s}=A\otimes_{E,s}F$. Then $A_{s}$ is an abelian
variety of CM type $(F,\phi_{s})$, where $\phi_{s}(t)=\phi(t\circ s)$ for
$t\in T$. Because $\Delta$ satisfies (\ref{eq2}),
\[
\sum\nolimits_{s\in\Delta}\phi_{s}(t)\overset{\smash{\lower.12em\hbox{\textup{\tiny def}}}}{=}\sum\nolimits_{s\in
\Delta}\phi(t\circ s)=p\text{, all }t\in T,
\]
and so we can apply \ref{r3}: the abelian variety $A_{\Delta}\overset{\smash{\lower.12em\hbox{\textup{\tiny def}}}}{=}
\prod\nolimits_{s\in\Delta}A_{s}$ equipped with the diagonal action of $F$ is
of split Weil type. There is a homomorphism $f_{\Delta}\colon A\rightarrow
A_{\Delta}$ such that
\[
f_{\Delta\ast}\colon H_{1}(A,\mathbb{Q}{})\rightarrow H_{1}(A_{\Delta
},\mathbb{Q}{})\simeq H_{1}(A,\mathbb{Q})\otimes_{E}F^{\Delta}
\]
is $x\mapsto x\otimes1$. Here $F^{\Delta}$ is a product of copies of $F$
indexed by $\Delta$. The map $f_{\Delta}^{\ast}\colon H^{1}(A_{\Delta
},\mathbb{Q}{})\rightarrow H^{1}(A,\mathbb{Q}{})$ is the $E$-linear dual of
$f_{\Delta\ast}$.
Note that $A_{\Delta}$ has complex multiplication by $F^{\Delta}$. According
to \ref{r4}(a),
\[
H^{2p}(A_{\Delta})\overset{\smash{\lower.12em\hbox{\textup{\tiny def}}}}{=}H^{2p}(A_{\Delta},\mathbb{Q}{}^{\mathrm{al}
})=\bigoplus\nolimits_{J}H^{2p}(A_{\Delta})_{J},\quad H^{2p}(A_{\Delta}
)_{J}\overset{\smash{\lower.12em\hbox{\textup{\tiny def}}}}{=}\bigotimes\nolimits_{(s,t)\in J}H^{1}(A_{s})_{t},
\]
where $J$ runs over the subsets of $\Delta\times T$ of size $2p$. Let
$W_{F}(A_{\Delta})\subset H^{2p}(A_{\Delta},\mathbb{Q}{})$ be the space of
Weil classes on $A_{\Delta}$. Then $W_{F}(A_{\Delta})\otimes\mathbb{Q}
{}^{\mathrm{al}}=\bigoplus\nolimits_{t\in T}H^{2p}(A_{\Delta})_{\Delta
\times\{t\}}$. Note that $a\in E$ acts on $H^{2p}(A_{\Delta})_{\Delta
\times\{t\}}\overset{\smash{\lower.12em\hbox{\textup{\tiny def}}}}{=}\bigotimes\nolimits_{s\in\Delta}H^{1}(A_{s})_{t}$
as multiplication by $\prod\nolimits_{s\in\Delta}(t\circ s)(a)$. Therefore,
$f_{\Delta}^{\ast}\otimes1\colon H^{2p}(A_{\Delta})\rightarrow H^{2p}(A)$ maps
$H^{2p}(A_{\Delta})_{\Delta\times\{t\}}$ into $H^{2p}(A)_{t\circ\Delta}\subset
B^{p}(A)\otimes\mathbb{Q}{}^{\mathrm{al}}$.
In summary: for every subset $\Delta$ of $S$ satisfying (\ref{eq2}), we have a
homomorphism\linebreak$f_{\Delta}\colon A\rightarrow A_{\Delta}$ from $A$ into
an abelian variety $A_{\Delta}$ of split Weil type relative to $F$; moreover,
$f_{\Delta}^{\ast}(W_{F}(A_{\Delta}))\otimes\mathbb{Q}^{\mathrm{al}}$ is
contained in $B^{p}(A)\otimes\mathbb{Q}{}^{\mathrm{al}}$ and contains
$H^{2p}(A)_{\Delta}$. As the subspaces $H^{2p}(A)_{\Delta}$ span $B^{p}
\otimes\mathbb{Q}{}^{\mathrm{al}}$ (see \ref{r4}(c)), this implies that the
subspaces $f_{\Delta}^{\ast}(W_{F}(A_{\Delta}))$ span $B^{p}$.\footnote{Let
$W$ and $W^{\prime}$ be subspaces of a $k$-vector space $V$, and let $K$ be a
field containing $k$. If $W\otimes_{k}K\subset W^{\prime}\otimes_{k}K$, then
$W\subset W^{\prime}$. Indeed, if $W$ is not contained in $W^{\prime}$, then
$(W+W^{\prime})/W^{\prime}\neq0$; but then $(W\otimes K+W^{\prime}\otimes
K)/W\otimes K\neq0$, which implies that $W\otimes K$ is not contained in
$W^{\prime}\otimes K$.}
\end{proof}
\section{Deligne's original version of Theorem 1.}
Let $E$ be a CM-field Galois over $\mathbb{Q}{}$, and let $\mathcal{S}{}$ be
the set of CM-types on $E$. For each $\Phi\in\mathcal{S}{}$ choose an abelian
variety $A_{\Phi}$ of CM-type $(E,\Phi)$, and let $A_{\mathcal{S}{}}
=\prod\nolimits_{\Phi\in\mathcal{S}{}}A_{\Phi}$. Define $G^{H}$ (resp. $G^{W}
$) to be the algebraic subgroup of $\GL_{H^{1}(A_{\mathcal{S}{}},\mathbb{Q}
{})}$ fixing all Hodge classes (resp. divisor classes and split Weil classes)
on all products of powers of the $A_{\Phi}$, $\Phi\in\mathcal{S}{}$.
\begin{theorem}
\label{t2}The algebraic groups $G^{H}$ and $G^{W}$ are equal.
\end{theorem}
\begin{proof}
As divisor classes and Weil classes are Hodge classes, certainly $G^{H}\subset
G^{W}$. On the other hand, the graphs of homomorphisms of abelian varieties
are in the $\mathbb{Q}{}$-algebra generated by divisor classes
(\cite{milne1999lc}, 5.6), and so, with the notation of Theorem 1, $G^{W}$
fixes the elements of $f_{\Delta}^{\ast}(W_{F}(A_{\Delta}))$. As these span
the Hodge classes, we deduce that $G^{W}\subset G^{H}$.
\end{proof}
\begin{remark}
\label{r7}Theorem \ref{t2} is Deligne's original theorem (1982, \S 5) except
that, instead of requiring $G^{W}$ to fix all divisor classes, he requires it
to fix certain specific homomorphisms.
\end{remark}
\section{Applications}
\begin{plain}
\label{r10}Call a rational cohomology class $c$ on a smooth projective complex
variety $X$ \emph{accessible }if it belongs to the smallest family of rational
cohomology classes such that,
\begin{enumerate}
\item the cohomology class of every algebraic cycle is accessible;
\item the pull-back by a map of varieties of an accessible class is accessible;
\item if $(X_{s})_{s\in S}$ is an algebraic family of smooth projective
varieties with $S$ connected and smooth and $(t_{s})_{s\in S}$ is a family of
rational classes (i.e., a global section of $R^{r}f_{\ast}\mathbb{Q}{}\ldots)$
such that $t_{s}$ is accessible for one $s$, then $t_{s}$ is accessible for
all $s$.
\end{enumerate}
\noindent Accessible classes are automatically Hodge, even absolutely Hodge
(\cite{deligne1982}, \S \S 2,3).
\end{plain}
\begin{theorem}
\label{t3}For abelian varieties, every Hodge class is accessible.
\end{theorem}
\begin{proof}
This is proved in \cite{deligne1982} (see its Introduction) except that the
statement there includes an extra \textquotedblleft
tannakian\textquotedblright\ condition on the accessible classes (ibid.,
p.~10, (c)). However, this condition is used only in the proof that Hodge
classes on CM abelian varieties are accessible. Conditions (a) and (c) of
\ref{r10} imply that split Weil classes are accessible (ibid., 4.8), and so
this follows from Theorem 1 (using \ref{r10}(b)).
\end{proof}
\begin{remark}
\label{r5}In particular, we see that the Hodge conjecture holds for abelian
varieties if algebraic classes satisfy the variational Hodge conjecture (i.e.,
condition \ref{r10}(c)).
\end{remark}
\begin{remark}
\label{r6}For Theorem 3, it suffices to assume that \ref{r10}(c) holds for
families of abelian varieties over a \emph{complete smooth curve }$S$. Indeed,
(c) is used in the proof of the theorem only for families of abelian varieties
$(A_{s})_{s\in S}$ with additional structure over a locally symmetric
variety $S$. More precisely, there is a semisimple algebraic group $G$ over
$\mathbb{Q}{}$, a bounded symmetric domain $X$ on which $G(\mathbb{R}{})$ acts
transitively with finite kernel, and a congruence subgroup $\Gamma\subset
G(\mathbb{Q}{})$ such that $S(\mathbb{C}{})=\Gamma\backslash X$
(\cite{deligne1982}, proofs of 4.8, 6.1). For $s\in S(\mathbb{C}{})$, the
points $s^{\prime}$ of the orbit $G(\mathbb{Q}{})\cdot s$ are dense in $S$ and
each abelian variety $A_{s^{\prime}}$ is isogenous to $A_{s}$. The boundary of
$S$ in its minimal (Baily-Borel) compactification has codimension $\geq2$.
After Bertini, for any pair of points $s_{1},s_{2}\in S(\mathbb{C}{})$, we can
find a smooth linear section of $S$ meeting both orbits $G(\mathbb{Q}{})\cdot
s_{1}$ and $G(\mathbb{Q}{})\cdot s_{2}$ but not meeting the boundary. This
proves what we want. Cf. \cite{andre1996}, p.~32.
\end{remark}
Let $X$ be an algebraic variety of dimension $d$, and let $L\colon H^{\ast
}(X,\mathbb{Q}{})\rightarrow H^{\ast+2}(X,\mathbb{Q}{})$ be the Lefschetz
operator defined by a hyperplane section of $X$. The strong Lefschetz theorem
says that $L^{d-i}\colon H^{i}(X,\mathbb{Q})\rightarrow H^{2d-i}
(X,\mathbb{Q}{})$ is an isomorphism for all $i\leq d$. Let $a\!H^{2i}
(X,\mathbb{Q}{})$ denote the $\mathbb{Q}$-subspace of $H^{2i}(X,\mathbb{Q}{})$
spanned by the algebraic classes. Then $L^{d-2i}$ induces an injective map
$L^{d-2i}\colon a\!H^{2i}(X,\mathbb{Q})\rightarrow a\!H^{2d-2i}(X,\mathbb{Q}
{})$. The standard conjecture of Lefschetz type asserts that this map is
surjective for all $i\leq d$. It is known to be true for abelian varieties.
\begin{proposition}
[\cite{abdulali1994}, \textnormal{p.~1122}]\label{p1}Let $f\colon A\rightarrow S$
be an abelian scheme over a smooth complete complex variety $S$. Assume that
the Lefschetz standard conjecture holds for $A$. Let $t$ be a global section
of the sheaf $R^{2r}f_{\ast}\mathbb{Q}(r)$; if $t_{s}\in H^{2r}(A_{s}
,\mathbb{Q}{}(r))$ is algebraic for one $s\in S(\mathbb{C}{})$, then it is
algebraic for all $s$.
\end{proposition}
\begin{proof}
For $n\in\mathbb{N}{}$, let $\theta_{n}$ denote the endomorphism of $A/S$
acting as multiplication by $n$ on the fibres. By a standard argument
(\cite{kleiman1968}, p.~374), $\theta_{n}^{\ast}$ acts as $n^{j}$ on
$R^{j}f_{\ast}\mathbb{Q}{}$. As $\theta_{n}^{\ast}$ commutes with the
differentials $d_{2}$ of the Leray spectral sequence $H^{i}(S,R^{j}f_{\ast
}\mathbb{Q}{})\implies H^{i+j}(A,\mathbb{Q}{})$, we see that the spectral
sequence degenerates at the $E_{2}$-term and
\[
H^{r}(A,\mathbb{Q}{})\simeq\bigoplus\nolimits_{i+j=r}H^{i}(S,R^{j}f_{\ast
}\mathbb{Q}{})
\]
with $H^{i}(S,R^{j}f_{\ast}\mathbb{Q}{})$ the subspace of $H^{i+j}
(A,\mathbb{Q}{})$ on which $\theta_{n}$ acts as $n^{j}$. Let $s\in
S(\mathbb{C}{})$ and $\pi=\pi_{1}(S,s)$. The inclusion $j_{s}\colon
A_{s}\hookrightarrow A$ induces an isomorphism $j_{s}^{\ast}\colon
H^{0}(S,R^{2r}f_{\ast}\mathbb{Q}{})\hookrightarrow H^{2r}(A_{s},\mathbb{Q}
{})^{\pi}$ preserving algebraic classes, and so
\begin{equation}
\dim a\!H^{0}(S,R^{2r}f_{\ast}\mathbb{Q}{})\leq\dim a\!H^{2r}(A_{s}
,\mathbb{Q}{})^{\pi}. \label{e2}
\end{equation}
Similarly, the Gysin map $j_{s\ast}\colon H^{2d-2r}(A_{s},\mathbb{Q}
{})\rightarrow{}H^{2d-2r+2m}(A,\mathbb{Q}{})$, where $m=\dim(S)$ and
$d=\dim(A/S)$, induces a map $H^{2d-2r}(A_{s},\mathbb{Q}{})^{\pi}\rightarrow
H^{2m}(S,R^{2d-2r}f_{\ast}\mathbb{Q})$ preserving algebraic classes, and so
\begin{equation}
\dim a\!H^{2d-2r}(A_{s},\mathbb{Q}{})^{\pi}\leq\dim a\!H^{2m}(S,R^{2d-2r}
f_{\ast}\mathbb{Q})\text{.} \label{e3}
\end{equation}
Because the Lefschetz standard conjecture holds for $A_{s}$,
\begin{equation}
\dim a\!H^{2r}(A_{s},\mathbb{Q}{})^{\pi}=\dim a\!H^{2d-2r}(A_{s},\mathbb{Q}
{})^{\pi}\text{.} \label{e4}
\end{equation}
Hence,
\begin{align*}
\dim a\!H^{0}(S,R^{2r}f_{\ast}\mathbb{Q}{}) & \overset{(\text{\ref{e2}
})}{\leq}\dim a\!H^{2r}(A_{s}{},\mathbb{Q}{})^{\pi}\overset{(\text{\ref{e4}
)}}{=}\dim a\!H^{2d-2r}(A_{s},\mathbb{Q}{})^{\pi}\\
& \overset{(\text{\ref{e3}})}{\leq}\dim a\!H^{2m}(S,R^{2d-2r}f_{\ast
}\mathbb{Q})\text{.}
\end{align*}
The Lefschetz standard conjecture for $A$ implies that
\[
\dim a\!H^{0}(S,R^{2r}f_{\ast}\mathbb{Q}{})=\dim a\!H^{2m}(S,R^{2d-2r}f_{\ast
}\mathbb{Q}),
\]
and so the inequalities are equalities. Thus
\[
a\!H^{2r}(A_{s},\mathbb{Q}{})^{\pi}=a\!H^{0}(S,R^{2r}f_{\ast}\mathbb{Q}),
\]
which is independent of $s$.
\end{proof}
\begin{theorem}
[Abdulali, Andr\'{e}]\label{t4}The Lefschetz standard conjecture for algebraic
varieties over $\mathbb{C}$ implies the Hodge conjecture for abelian varieties.
\end{theorem}
\begin{proof}
By Theorem \ref{t3}, it suffices to show that algebraic classes are
accessible. They obviously satisfy conditions (a) and (b) of \ref{r10}, and it
suffices to check (c) with $S$ a complete smooth curve (Remark \ref{r6}). This
Proposition \ref{p1} does.
\end{proof}
\begin{remark}
\label{r8}Proposition 1 applies also to absolute Hodge classes and motivated
classes. As these satisfy the Lefschetz standard conjecture, we deduce that
Hodge classes on abelian varieties are absolutely Hodge and motivated.
\end{remark}
\begin{remark}
\label{r9}Let $H_{B}$ denote the Betti cohomology theory and $\Mot_{H}
(\mathbb{C}{})$ the category of motives over $\mathbb{C}{}$ for homological
equivalence generated by the algebraic varieties for which the K\"{u}nneth
projectors are algebraic. If the Betti fibre functor $\omega_{B}$ on
$\Mot_{H}(\mathbb{C})$ is conservative, then the Lefschetz standard conjecture
holds for the varieties in question.
Indeed, as $L^{d-2i}\colon H_{B}^{2i}(X)(i)\rightarrow H_{B}^{2d-2i}
(X,\mathbb{Q}{})(d-i)$ is an isomorphism (strong Lefschetz theorem), so also
is $l^{d-2i}\colon h^{2i}(X)(i)\rightarrow h^{2d-2i}(X)(d-i)$ by our
assumption on $\omega_{B}$. When we apply the functor $\Hom({1\mkern-7mu1},-)$ to this
last isomorphism, it becomes $L^{d-2i}\colon a\!H_{B}^{2i}(X)(i)\rightarrow
a\!H_{B}^{2d-2i}(X)(d-i)$, which is therefore an isomorphism. Thus the
standard conjecture $A(X,L)$ is true, and $A(X\times X,L\otimes1+1\otimes L)$
implies $B(X).$
\end{remark}
\bibliographystyle{cbe}
\section{Introduction}
In recent years, advances in deep learning have presented new opportunities to assist and improve clinical diagnosis involving different medical imaging modalities such as magnetic resonance imaging (MRI), X-ray, Ultrasound, computed tomography (CT), and positron emission tomography (PET) \cite{ahmad2018semantic, yan2018weakly, yap2020breast, tomita2018deep, liu2018deep}. Chest X-rays (CXR) are commonly used as an important imaging tool to screen patients for a number of diseases.
In recent studies, deep learning has provided end-to-end proof of concept for models achieving radiologist-level performance in the detection of different clinical findings on CXRs \cite{rajpurkar2017chexnet, wang2017chestx}. In doing so, deep learning has helped physicians to prioritize urgent medical cases and focus their attention during the course of clinical diagnosis.
Pneumoperitoneum is a critical clinical finding that requires immediate surgical attention \cite{stapakis1992diagnosis, woodring1995detection}. Although abdominal radiographs and CT scans are standard modalities for the detection of pneumoperitoneum, CXRs are often an initial exam that is ordered in the emergency room setting. Therefore, pneumoperitoneum is often detected on initial CXRs, before additional imaging, such as CT exams, are ordered. Free air in the abdomen is most visible on CXRs of patients in the standing position. Because gas ascends to the highest point in the abdomen, free air accumulates beneath the domes of the diaphragm in the standing or upright position. Therefore, CXR is one of the most sensitive modalities to detect pneumoperitoneum \cite{chen2002ultrasonography}. Solis et al. showed that performing abdominal CT exams can delay surgery, without providing any measurable benefit over a CXR for the diagnosis of pneumoperitoneum \cite{solis2014free}.
Despite the recent success of deep learning models in detecting disease on CXRs, it has been found that these models can be highly sensitive to the types of systems used for the training dataset. For instance, Marcus et al. \cite{marcus_little_2019} argued a deep learning model trained on standard CXR images captured by a particular imaging system in a fixed location may not perform as well on portable CXR images. This is because the trained deep learning model has to deal with variabilities in patterns and characteristics found in CXR images across different imaging systems, rather than variability and differences in chest anatomy and morphology intrinsic to the disease itself. In this study, we developed state-of-the-art deep learning models to detect pneumoperitoneum on CXR images and evaluated the sensitivity and specificity of these models on a diverse dataset assembled from different types of X-ray imaging systems from various hospitals to demonstrate the generalizability of our approach.
In a hemodynamically stable patient with acute, severe, or generalized abdominal pain, multidetector CT scan is the preferred imaging test and it provides invaluable diagnostic information in the diagnostic workup of such patients \cite{paolantonio2016multidetector}. However, in unstable patients, the emergency room staff, or clinical house staff, need timely information. In these cases, usually, clinicians first obtain an upright CXR to exclude a critical finding of free air when gastrointestinal perforation is suspected. Once the patient is stabilized, a CT scan is also usually obtained, but only after an initial plain radiograph of an upright CXR is performed to quickly look for pneumoperitoneum. In this study, we focused on the CXR exam, since it is one of the most common initial radiographs to be performed in hospital settings, including emergency rooms and inpatient and outpatient settings for patients with acute, severe, and generalized abdominal pain. The purpose of the deep learning tool is to assist radiologist readers with prioritizing interpretations of the most urgent exams and help them to reach a prompt, correct diagnosis.
In this study, CXRs were performed with fixed imaging, which is performed in the X-ray department, as well as with portable imaging, which is performed with mobile X-ray units. The portable CXR is useful for diagnosis and monitoring patients at their bedside in the emergency room, and in the intensive care unit or inpatient setting, which are often utilized when a patient cannot be transferred to the hospital radiology department. Despite the advantages of portable CXR, the image quality of a portable bedside CXR can be limited, and the image interpretation and appropriate clinical action can be affected. Therefore, this study aims to determine if there are differences between portable CXRs and fixed CXRs performed in a radiology department in terms of the detection of pneumoperitoneum. Additionally, given the image variabilities due to differences in scanners from different manufacturers, such as GE, Philips, and Siemens, that are used to obtain CXRs, this study aims to determine if there was a difference in detection of pneumoperitoneum on CXRs from different manufacturers of X-ray machines.
\begin{figure*}
\centering
\includegraphics[scale=.75]{IncExc.png}
\caption{Details of inclusion and exclusion criteria of this study.}
\label{fig:InEx}
\end{figure*}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.7]{PositiveCases.png}
\caption{Four examples of pneumoperitoneum positive cases in chest X-ray images in our dataset. The yellow arrows indicate the presence of pneumoperitoneum.}
\label{fig:xrayd}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.52]{ManfIm.png}
\caption{Number of CXRs stratified in our dataset by their corresponding imaging system manufacturer.}
\label{fig:ImgSys}
\end{figure}
\section{Pneumoperitoneum Dataset}
The pneumoperitoneum dataset consisted of 1,287 CXR images (from 1,124 patients) and was collected using Montage (Montage Healthcare Solutions, Philadelphia, PA) search functionality from the database of a tertiary academic hospital and several community hospitals serving rural populations between March 2011 and September 2019. The inclusion and exclusion criteria for this study is demonstrated in Fig. \ref{fig:InEx}. This dataset is nearly balanced with 634 pneumoperitoneum positive cases and 673 pneumoperitoneum negative cases. The pneumoperitoneum negative cases consist of both normal and other conditions (such as pneumothorax, pneumonia, atelectasis, etc.). A brief description of this dataset is presented in Table \ref{my-label2}. All CXR images in this dataset were retrieved in DICOM format. The resolution of CXR images in our dataset ranges from 1728x1645 pixels to 4280x3520 pixels. A few examples of positive pneumoperitoneum CXR images are shown in Fig. \ref{fig:xrayd}. All CXR images from the academic hospital were taken with Philips imaging systems, whereas CXR images from other community hospitals were taken with imaging systems from various manufacturers (Philips, Fujifilm, Siemens, Kodak, Konica Minolta). Further details of the total number of CXR images specified by the imaging system manufacturer are shown in Fig. \ref{fig:ImgSys}.
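For concreteness, the following minimal Python sketch illustrates how a DICOM CXR image of this kind can be read, normalized, and resized before being fed to a model; the file path, target size, and normalization choices shown here are placeholders for illustration and not a verbatim copy of our preprocessing pipeline.
\begin{verbatim}
import numpy as np
import pydicom
from PIL import Image

def load_cxr(dicom_path, size=(224, 224)):
    """Read a DICOM chest X-ray and return a resized array scaled to [0, 1]."""
    ds = pydicom.dcmread(dicom_path)
    img = ds.pixel_array.astype(np.float32)
    # Some CXRs store inverted intensities (MONOCHROME1); flip them if so.
    if getattr(ds, "PhotometricInterpretation", "") == "MONOCHROME1":
        img = img.max() - img
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)
    img = Image.fromarray((img * 255).astype(np.uint8)).resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0

# Example usage with a placeholder path:
# x = load_cxr("example_cxr.dcm")
\end{verbatim}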
\begin{table*}[]
\centering
\addtolength{\tabcolsep}{2pt}
\renewcommand{\arraystretch}{1.5}
\caption{Characteristics of our dataset stratified by pneumoperitoneum positive and negative cases. CXRs: chest X-rays; SD: standard deviation; AP: anteroposterior; PA: posteroanterior; AH: academic hospital.}
\label{my-label2}
\scalebox{1}
{
\begin{tabular}{llll}\hline
Characteristic & Full Dataset & Positive Cases & Negative Cases \\\hline \hline
No. of CXRs & 1,287 & 634 & 653 \\\hline
\multirow{2}{*}{Sex} & Male - 697 & Male - 344 & Male -353 \\
& Female -590 & Female - 290 & Female -303 \\\hline
\multirow{4}{*}{Age (SD)} & Male & Male & Male \\
& 61.44+/-17.62 & 61.06+/-17.44 & 61.82+/-17.79 \\
& Female & Female & Female \\
& 62.03+/-17.96 & 65.09+/-16.09 & 58.98+/-19.89 \\\hline
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Technique \\ (AP/ PA)\end{tabular}} & AP - 554 & AP - 251 & AP - 304 \\
& PA - 733 & PA - 383 & PA - 429 \\\hline
Imaging System & Fixed - 969 & Fixed - 451 & Fixed - 518 \\
Type & Portable - 318 & Portable - 183 & Portable - 135 \\\hline
\multirow{2}{*}{Hospital} & AH - 1061 & AH - 545 & AH - 516 \\
& Others - 226 & Others - 89 & Others - 137 \\\hline
\multirow{2}{*}{Manufacturer} & Philips - 1145 & Philips - 576 & Philips -569 \\
& Others - 142 & Others - 58 & Others - 84 \\\hline
\end{tabular}}
\end{table*}
Although several images from the same patient were included, they were not identical in terms of positioning and appearance. Since our goal in this study is to detect pneumoperitoneum per exam, multiple images for a patient do not alter the findings. Furthermore, we kept all the images from the same patient in one partition (training, validation, testing). This dataset is nearly balanced, with 634 pneumoperitoneum positive cases and 673 pneumoperitoneum negative cases. Among the pneumoperitoneum positive cases, 182 post-operative pneumoperitoneum cases were included in this study.
\subsection{Expert Annotations}
We needed high-quality expert annotations indicating ground truth pneumoperitoneum labels (i.e., positive or negative) for each CXR image in our dataset to develop and evaluate our model. The expert annotations in our study were generated by four radiologists from the main academic hospital campus. To produce the ground truth labels, the CXR images were equally divided among two radiologists for annotation. Then, the other two radiologists independently reviewed all the ground truth labels generated by the previous radiologists for accuracy. Any disagreements among annotators were resolved by further review and discussion among all radiologists.
We tested the consistency of expert annotation between two radiologists on 177 randomly selected test cases consisting of 45 positive and 132 negative cases. Out of 177 tested cases, there was one on which the radiologists disagreed about the presence of pneumoperitoneum. That case was negative for pneumoperitoneum, and the disagreement was resolved after further discussion among the radiologists. Radiologist 1 has over 30 years of general radiology experience; Radiologist 2 has over 7 years of general radiology experience; Radiologist 3 has 20 years of abdominal imaging experience; and Radiologist 4 is a 4th-year radiology resident.
\section{Methodology}
This section describes our proposed technique for the recognition of pneumoperitoneum in chest radiographs. The preparation of the dataset and the deep learning approaches used for binary classification of pneumoperitoneum are detailed below.
\begin{figure}
\centering
\includegraphics[scale=.35]{TVT.png}
\caption{Number of CXR images in the training, validation, and test datasets.}
\label{fig:TrVTe}
\end{figure}
\subsection{Training, Validation and Test Dataset Split}
Our study has two objectives: 1) to train and evaluate the performance of common deep learning architectures on our CXR image dataset for classification of pneumoperitoneum status, and 2) to analyse the sensitivity and specificity of these models based on different characteristics of the radiographs. Therefore, as shown in Table \ref{my-label2}, we partitioned this dataset into training, validation, and test datasets. For the training and validation datasets, we only used the CXR images with the most common characteristics in the dataset, i.e., images taken by fixed X-ray machines at the main academic hospital (420 positive cases and 465 negative cases; Fig. \ref{fig:TrVTe}). The training dataset consisted of 750 CXR images (375 positive and 375 negative), whereas the validation dataset consisted of 135 CXR images (45 positive and 90 negative). The cases in the training and validation datasets were randomly selected. In contrast, our test dataset consisted of 402 CXR images (214 positive and 188 negative) with images from different manufacturers and with both fixed and portable characteristics. Therefore, our test dataset was suitable to perform sensitivity and specificity analysis for the different deep learning models. Of note, in our data split, we ensured that CXR images from the same patient stayed in the same partition (training, validation, and testing datasets) to avoid any biases.
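The patient-level grouping described above can be enforced with a group-aware split. The sketch below (Python, scikit-learn) is illustrative only: \verb|image_paths|, \verb|y|, and \verb|patient_ids| are placeholder arrays, and the split ratio shown is not the exact one used in our experiments.
\begin{verbatim}
from sklearn.model_selection import GroupShuffleSplit

def patient_level_split(images, labels, patient_ids, held_out_size=0.15, seed=0):
    """Split so that all images from one patient land in the same partition."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=held_out_size,
                                 random_state=seed)
    train_idx, held_out_idx = next(splitter.split(images, labels,
                                                  groups=patient_ids))
    return train_idx, held_out_idx

# train_idx, val_idx = patient_level_split(image_paths, y, patient_ids)
\end{verbatim}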
\subsection{Deep Learning Methods}
We utilized four different state-of-the-art deep learning architectures (ResNet50, DenseNet161, InceptionV3, ResNeXt101) for the detection of pneumoperitoneum on CXR images \cite{he2016deep, iandola2014densenet, chen2017dual, szegedy2016rethinking}. We used pre-trained models on the ImageNet dataset \cite{deng2009imagenet} for each architecture to benefit from transfer learning in our training process. Utilizing transfer learning is critical for the optimization of deep learning models on a limited number of images, such as in our training dataset. In our training, we did not freeze any of the convolutional layers to fine-tune the CNN weights for extraction of pneumoperitoneum-related features.
The CXR images are resized according to the required input size of different deep learning models, i.e., 299x299 pixels for InceptionV3 and 224x224 pixels for the rest of the models. All deep learning models were trained on a PyTorch framework \cite{paszke2019pytorch} using an NVIDIA Quadro RTX graphics processing unit with 48 GB memory. We experimented with different hyper-parameters such as learning rate, number of epochs, and data augmentation options for each model to minimize both training and validation losses. For the final models, we spent 100 epochs for training, which we found sufficient for the convergence of our optimization process on the dataset. We also tried different learning rates (1e-2 to 5e-4) for training the models in our study. Data-augmentation (horizontal flip, vertical flip, and random rotations from -15$^{\circ}$ to 15$^{\circ}$) was performed on the fly during training. In this training, we used binary cross-entropy as the loss function, a stochastic gradient descent optimizer, a batch size of 256, and a momentum value of 0.9. We reduced the learning rate by a factor of 0.1 after every 25 epochs. The final model was selected based on minimum validation loss during training.
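A condensed PyTorch-style sketch of this fine-tuning setup is shown below. The data loader, the binary head, and the loss wiring are placeholders chosen to match the hyper-parameters stated above (SGD, momentum 0.9, step decay by 0.1 every 25 epochs, 100 epochs); the snippet is an illustrative sketch rather than a verbatim copy of our training code.
\begin{verbatim}
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.densenet161(pretrained=True)        # ImageNet weights
model.classifier = nn.Linear(model.classifier.in_features, 1)  # binary output head

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=25, gamma=0.1)

for epoch in range(100):
    for images, targets in train_loader:   # train_loader: placeholder DataLoader
        optimizer.zero_grad()
        logits = model(images).squeeze(1)
        loss = criterion(logits, targets.float())
        loss.backward()
        optimizer.step()
    scheduler.step()
\end{verbatim}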
\begin{table*}[]
\centering
\addtolength{\tabcolsep}{2pt}
\renewcommand{\arraystretch}{1.5}
\caption{The performance measures of various deep learning models for binary classification of pneumoperitoneum.}
\label{result2s}
\scalebox{0.85}
{
\begin{tabular}{lllllll} \hline
Method & Sensitivity & Specificity & Accuracy & Precision & F-1 Score & AUC \\\hline\hline
InceptionV3 & 0.841 & 0.931 & 0.883 & 0.932 & 0.884 & 0.938 \\
ResNet101 & 0.873 & 0.936 & 0.902 & 0.937 & 0.906 & 0.946 \\
ResNeXt101 & 0.865 & 0.952 & 0.905 & 0.953 & 0.907 & 0.951 \\
DenseNet161 & 0.916 & 0.899 & 0.908 & 0.911 & 0.913 & 0.957\\ \hline
\end{tabular}}
\end{table*}
\begin{figure}
\centering
\includegraphics[scale=.5]{ROC.jpg}
\caption{ROC curves for all deep learning models evaluated on the test dataset.}
\label{fig:ROCM}
\end{figure}
\section{Results}
We evaluated the different deep learning models for the binary classification of pneumoperitoneum using our test dataset. In Table \ref{result2s}, we report sensitivity, specificity, accuracy, F1-score, and area under the receiver operating characteristic curve (AUC) as our evaluation metrics. These metrics are considered reliable measures for assessing the quality of machine learning models.
All deep learning models, particularly DenseNet161 and ResNeXt101, performed well for the binary classification of pneumoperitoneum. DenseNet161 achieved the highest accuracy of 0.908, whereas ResNeXt101 (0.905), ResNet101 (0.902), and InceptionV3 (0.883) performed slightly worse. For sensitivity, DenseNet161 (0.916) again outperformed ResNet101, InceptionV3, and ResNeXt101 by margins of 0.043, 0.075, and 0.051, respectively. In contrast, DenseNet161 achieved the lowest score of 0.899 for specificity, whereas ResNeXt101 performed best in this category, with a score of 0.952. The AUC score is considered to be a stable performance metric for evaluating machine learning approaches. ResNeXt101 and DenseNet161 achieved 0.951 and 0.957, respectively, for AUC. The ROC curves for all deep learning models are shown in Fig. \ref{fig:ROCM}.
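For reference, the reported measures can be computed from model outputs as in the short sketch below (Python, scikit-learn); \verb|y_true| and \verb|y_score| stand for the test labels and predicted probabilities and are placeholders.
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def summarize(y_true, y_score, threshold=0.5):
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "precision": precision, "f1": f1,
            "auc": roc_auc_score(y_true, y_score)}
\end{verbatim}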
\begin{table*}[]
\centering
\addtolength{\tabcolsep}{2pt}
\renewcommand{\arraystretch}{1.5}
\caption{Stratification of the test dataset (402 CXR images: 214 positive and 188 negative) according to the CXR image characteristics. CXRs: chest X-rays; Pos: positive cases; Neg: negative cases; MFR.: manufacturer.}
\label{strat}
\scalebox{1}
{
\begin{tabular}{lll} \hline
Characteristic - Type & Portable & Fixed \\\hline
No. of CXRs & 318 CXRs & 84 CXRs \\
Pos/Neg Cases & Pos - 183 \& Neg - 135 & Pos -31 \& Neg - 53 \\\hline
Characteristic – MFR. & Philips & Others \\\hline
No. of CXRs & 260 CXRs & 142 CXRs \\
Pos/Neg Cases & Pos - 156 \& Neg - 135 & Pos - 58\& Neg - 84 \\ \hline
\end{tabular}}
\end{table*}
\begin{table*}[]
\centering
\addtolength{\tabcolsep}{2pt}
\renewcommand{\arraystretch}{1.5}
\caption{The performance metrics for DenseNet161 and ResNeXt101 on the stratified test dataset according to portable/fixed and Philips/other manufacturer characteristics.}
\label{resultst}
\scalebox{0.75}
{
\begin{tabular}{llllllll} \hline
Dataset & Method & Sensitivity & Specificity & Accuracy & Precision & F-1 Score & AUC \\\hline\hline
\multirow{2}{*}{Portable} & ResNeXt101 & 0.863 & 0.948 & 0.899 & 0.958 & 0.908 & 0.950 \\
& DenseNet161 & 0.907 & 0.903 & 0.905 & 0.927 & 0.917 & 0.956 \\\hline
\multirow{2}{*}{Fixed} & ResNeXt101 & 0.871 & 0.962 & 0.929 & 0.931 & 0.900 & 0.974 \\
& DenseNet161 & 0.968 & 0.887 & 0.917 & 0.833 & 0.896 & 0.958 \\\hline
\multirow{2}{*}{Philips} & ResNeXt101 & 0.865 & 0.952 & 0.900 & 0.964 & 0.912 & 0.938 \\
& DenseNet161 & 0.917 & 0.885 & 0.904 & 0.923 & 0.920 & 0.946 \\\hline
\multirow{2}{*}{Others} & ResNeXt101 & 0.862 & 0.952 & 0.915 & 0.926 & 0.893 & 0.951 \\
& DenseNet161 & 0.914 & 0.917 & 0.915 & 0.883 & 0.898 & 0.957\\\hline
\end{tabular}}
\end{table*}
\begin{figure}
\centering
\includegraphics[scale=.52]{ROCs.png}
\caption{ROC curves for DenseNet161 and ResNeXt101 for the stratified test dataset based on (a) Portable/Fixed and (b) Philips/Other Manufacturer.}
\label{fig:ROCS}
\end{figure}
\subsection{Stratified Test Results}
As shown in Table \ref{strat}, we stratified the test dataset by image characteristics, such as different manufacturers and fixed/portable imaging systems, to further evaluate the effect of these characteristics on our best performing deep learning models (DenseNet161 and ResNeXt101). We found that the performance of both of these models on the stratified test dataset was comparable to their performance on the full test dataset, as shown in Table \ref{resultst}. However, there were a few exceptions. For instance, DenseNet161 achieved a higher sensitivity of 0.968, and ResNeXt101 also showed a higher AUC score of 0.974 on images in the fixed imaging system subgroup of the test dataset. This is likely due to a small sample size of 84 CXR images in this subset.
\begin{figure}
\centering
\includegraphics[scale=0.55]{Vis.png}
\caption{Examples of Grad-CAM Activation of true-positive (1 and 2) and false-positive (3 and 4) cases by DenseNet161: the left image of each case is the original image, whereas the right image is an activation by the Grad-CAM method. The red coloring indicates a highly weighted region of interest. In the top true positive case (1), the pneumoperitoneum (free air) is beneath both the right and left diaphragm, and the heat map correctly marks both sides. In the bottom case (2), free air is only under the right diaphragm, and the heat map identifies it on the correct side. The false positive cases (3 and 4) in these examples are due to air in the bowel, below the left diaphragm (in the upper images), and air in the stomach, below the left diaphragm (in the lower images), without pneumoperitoneum in either case.}
\label{fig:Vis}
\end{figure}
\subsection{Model Visualization and Error Analysis}
We used the Grad-CAM algorithm \cite{selvaraju2017grad}, which uses pneumoperitoneum specific gradient information flowing into the final convolutional layer of the DenseNet161 deep learning model to mark the regions of interest on the CXR images that heavily influenced the outcomes of our model. Examples of Grad-CAM activations on randomly selected true positive cases of pneumoperitoneum are shown in Fig. \ref{fig:Vis}. This visualization produces localization maps of the regions of interest for CXR images and can provide an explanation for the final diagnostic decisions of the deep learning models. The red coloring indicates the most important regions for the ultimate decision of the model on CXR images. We also applied the Grad-CAM algorithm on randomly selected false-positive cases of pneumoperitoneum, which were incorrectly identified by our DenseNet161 model. We found that false-positive cases, CXRs without pneumoperitoneum, were most frequently due to air in the stomach or small bowel below the left diaphragm, or lucency in the lungs above the diaphragm, as shown in Fig. \ref{fig:Vis}.
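A minimal hook-based sketch of the Grad-CAM computation behind these visualizations is given below (PyTorch). The model, the target layer, and the input tensor are placeholders, the snippet assumes a single-image batch and a single-logit output, and the final colormap/overlay step is omitted.
\begin{verbatim}
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer):
    """Grad-CAM heat map for a one-image batch x and a single-logit model."""
    feats, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.append(go[0]))
    try:
        score = model(x).squeeze()      # scalar logit for the positive class
        model.zero_grad()
        score.backward()
        weights = grads[0].mean(dim=(2, 3), keepdim=True)  # pooled gradients
        cam = F.relu((weights * feats[0]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                            align_corners=False)
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    finally:
        h1.remove()
        h2.remove()
    return cam

# cam = grad_cam(model, image_tensor, model.features[-1])  # placeholder layer
\end{verbatim}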
\section{Discussion}
In this study, we used various common deep learning architectures to develop a model for binary classification of pneumoperitoneum on chest X-ray exams. Pneumoperitoneum, also known as free air, is the abnormal presence of air in the peritoneal cavity. In this experiment, we developed our deep learning models using training and validation datasets that only consisted of CXR images from Philips fixed imaging systems, whereas, in the test dataset, we used CXR images from portable imaging systems or from other imaging system manufacturers (Siemens, Kodak, etc.). Our experiment showed that deep learning models trained on data from a fixed imaging system from a single institution performed well on heterogeneous data from other institutions. Particularly, our deep learning models in this study achieved a high specificity and sensitivity on our diverse test dataset overall.
In our stratified test dataset, the two best performing models (DenseNet161 and ResNeXt101) showed consistent performance regardless of various image characteristics, such as fixed, portable, Philips, or other manufacturer. ResNet is a residual network that uses skip or shortcut connections to allow one to pass the information over convolutional blocks to extract abstract features. In contrast, DenseNet is a densely connected convolutional network that simplifies and refines the connection between convolutional blocks to ensure the flow of maximum information and gradients through feature reuse, i.e., features extracted by very early layers are directly used by deeper layers.
The models showed slightly higher performance in the fixed imaging system subgroup. This may be attributed to a small number of CXR images (84) in this subgroup. Furthermore, the Grad-CAM algorithm showed that our models accurately identify the correct anatomic area and features on the CXR images for the detection of pneumoperitoneum. Radiologists can generally identify and interpret the urgency of the findings based on chart review of patients' medical history. However, the goal of this deep learning approach for pneumoperitoneum detection is to identify and triage possible urgent cases for interpretation rather than replacing the need for radiologist interpretation. A major application for our deep learning algorithm is to screen and triage all imaging with critical findings for expedited interpretation and patient care. Particularly, when the reading list is long, such deep learning approaches can assist with prioritizing urgent exams, especially when the finding is not tagged as STAT (immediate) priority. In addition, when there are many other findings in chest X-rays, subtle pneumoperitoneum cases can be missed. The proposed deep learning model can help radiologists by drawing attention to those cases.
\paragraph{This study has several limitations} First, our study would benefit from further validation on a larger external test dataset and a prospective clinical trial, which we will pursue as future work. Second, our pipeline is focused on distinguishing pneumoperitoneum negative and positive cases, and does not recognize other urgent or critical findings such as pneumothorax or pneumonia on chest X-rays, and such findings could require patients to seek immediate medical or surgical attention. As future work, we plan to include other urgent findings in the next version of our model and will evaluate it in a multi-class classification setting. Finally, although we used Grad-CAM in this study to visualize the regions of interest in our classification, we plan to develop and evaluate a precise detection and a segmentation model to localize pneumoperitoneum on CXR images.
\section{Conclusion}
In summary, this study evaluated the generalizability of deep learning models across different image characteristics for the detection of pneumoperitoneum on CXR images. Our results showed that end-to-end deep learning models performed well in detecting pneumoperitoneum on CXR images from different types of imaging systems at various institutions. If clinically validated, this system could assist radiologists as a pre-screening tool to help prioritize chest X-rays with emergent findings, or offer a second opinion for the presence of pneumoperitoneum on CXR images. For future study, we plan to expand our training dataset to a large multi-institutional dataset to further improve the performance of various deep learning models selected for this task. Also, we plan to expand our test dataset and run prospective clinical trials for further validation of our models. Finally, we plan to expand our study to include other imaging modalities, such as CT scans, to assist with the detection of other urgent and critical findings detected on radiology exams.
\section*{Ethical Considerations}
The use of human subject data in this study was approved by the Dartmouth Institutional Review Board (IRB) with a waiver of informed consent.
\section*{Conflicts of interest}
None Declared
\section*{Funding}
This research was supported in part by National Institute of Health grants R01LM012837 and R01CA249758.
\section{Introduction}\label{sec.Intro}
Suppose that $d\in{\mathbb N}$, let ${\mathcal L}$ be a unimodular lattice in ${\mathbb R}^d$, and define ${\mathbb T}^d={\mathbb R}^d/{\mathcal L}$. For each $\vec\alpha\in{\mathbb R}^d$ and $N\in{\mathbb N}$ let $S_N=S_N(\vec\alpha,{\mathcal L})$ denote the $d$-dimensional Kronecker sequence defined by
\begin{equation*}
S_N=\{n\vec\alpha +{\mathcal L} : 1\le n\le N\}\subseteq{\mathbb T}^d.
\end{equation*}
Given a metric $\mathrm{d}$ on ${\mathbb R}^d$ we define, for each $1\le n\le N$,
\begin{equation}\label{eqn.DeltaDef}
\delta^\mathrm{d}_{n,N}=\min\{\mathrm{d}(n\vec\alpha,m\vec\alpha+\vec\ell)>0:1\le m\le N,~\vec\ell\in{\mathcal L}\}.
\end{equation}
The quantity $\delta^\mathrm{d}_{n,N}$ is the smallest positive distance in ${\mathbb R}^d$ from $n\vec\alpha$ to an element of the set $S_N+{\mathcal L}$. As a natural generalization of the well known three distance theorem \cite{Sos1957,Sos1958,Sura1958,Swie1959}, we are interested in understanding, for each $\vec\alpha$ and $N$, the number
\begin{equation*}
g_N^\mathrm{d}=g_N^\mathrm{d}(\vec\alpha,{\mathcal L})=|\{\delta^\mathrm{d}_{n,N}:1\le n\le N\}|
\end{equation*}
of distinct values taken by $\delta^\mathrm{d}_{n,N}$, for $1\le n\le N$. We will focus our discussion on two metrics: the Euclidean metric (for which we will write $\delta^\mathrm{d}_{n,N}=\delta_{n,N}$ and $g_N^\mathrm{d}=g_N$), and the maximum metric (for which we will write $\delta^\mathrm{d}_{n,N}=\delta^*_{n,N}$ and $g_N^\mathrm{d}=g_N^*$). To be clear, by the maximum metric on ${\mathbb R}^d$ we mean the metric defined by
\begin{equation*}
\mathrm{d}(\vec x,\vec y)=\max_{1\le i\le d}|x_i-y_i|.
\end{equation*}
For the case of the Euclidean metric it is known that, for any ${\mathcal L}, \vec\alpha,$ and $N$,
\begin{equation*}
g_N(\vec\alpha,{\mathcal L}) \le
\begin{cases}
3&\text{if}~d=1,\\
5&\text{if}~d=2,\\
\sigma_d+1 &\text{if}~d\ge 3 ,
\end{cases}
\end{equation*}
where $\sigma_d$ is the maximum number of non-overlapping spheres of radius one in ${\mathbb R}^d$ which can be arranged so that they all touch the unit sphere in exactly one point ($\sigma_d$ is also known as the \textit{kissing number} for ${\mathbb R}^d$). The bound for $d=1$, in which case the Euclidean and maximum metrics coincide, is a slightly modified version of the three distance theorem (the classical three distance theorem considers the number of `one-sided' gaps, which can in general be greater than $g_N$). The bounds for $d\ge 2$ were recently established in \cite{HaynMark2020b}. For $d=1$ and $2$ there are examples of ${\mathcal L}, \vec\alpha,$ and $N$ for which the upper bounds above are actually obtained (see the introduction of \cite{HaynMark2020b}), therefore those bounds are best possible. For $d\ge 3$ the upper bounds above are 13, 25, 46, 79, 135, 241, 365, 555, etc. These are probably far from best possible, but improving them substantially may require new ideas.
In this paper, motivated both by historical precedent and by questions which were asked of us after the publication of \cite{HaynMark2020b}, we will show how the machinery from that paper can be used to easily bound the corresponding quantity $g_N^*$ for the maximum metric, and even to obtain the best possible bounds in dimensions $d=2$ and $3$. To our knowledge, the only known result about this problem is due to Chevallier \cite[Corollaire 1.2]{Chev1996}, who showed that $g_N^*\le 5$ when $d=2$ and ${\mathcal L}={\mathbb Z}^2$. Chevallier also gave an example in this case (see remark at end of \cite[Section 1]{Chev1996}) for which $g_N^*=4$. We will prove the following theorem.
\begin{thm}\label{thm.GapsBd1}
For any $d,{\mathcal L}, \vec\alpha,$ and $N$, we have that
\begin{equation*}
g_N^*(\vec\alpha,{\mathcal L})\le 2^d+1.
\end{equation*}
Furthermore, when $d=2$ or $3$ this bound is, in general, best possible.
\end{thm}
To prove Theorem \ref{thm.GapsBd1} we will first realize the quantity $g_N^*$ as the value of a function ${\mathcal G}$ defined on the space $\operatorname{SL} (d+1,{\mathbb Z})\backslash\operatorname{SL} (d+1,{\mathbb R})$ of unimodular lattices in ${\mathbb R}^{d+1}$. This part of the proof, carried out in Section \ref{sec.Latt}, is exactly analogous to the development in \cite{HaynMark2020} and \cite{HaynMark2020b}, which in turn is an extension of ideas originally presented by Marklof and Str\"{o}mbergsson in \cite{MarkStro2017}. In Section \ref{sec.Proofs} we will use a simple geometric argument to bound ${\mathcal G}(M)$, when $M$ is an arbitrary unimodular lattice in ${\mathbb R}^d$, and for $d=2$ and $3$ we will give examples of ${\mathcal L}, \vec\alpha,$ and $N$ for which our upper bounds are attained. Such examples, especially when $d=3$, appear to be quite difficult to find.
Finally we remark that for $d=2$ the conclusions of Theorem \ref{thm.GapsBd1} also hold for the Manhattan metric (i.e. the $\ell^1$ metric on ${\mathbb R}^d$). To see this, observe that the unit ball for this metric is a rotated and homothetically scaled copy of the unit ball for the maximum metric. It follows that, if $d=2$ and if $\mathrm{d}$ is the Manhattan metric on ${\mathbb R}^2$, then there is a matrix $R\in\operatorname{SO} (2,{\mathbb R})$ (rotation by $\pi/4$) with the property that, for every ${\mathcal L}, \vec\alpha,$ and $N$,
\begin{equation*}
g_N^{\mathrm{d}}(\vec\alpha,{\mathcal L})=g_N^*\left(R\vec\alpha,R{\mathcal L}\right).
\end{equation*}
Therefore $g_N^{\mathrm{d}}\le 5$ for this metric also, and this bound is best possible. \vspace*{10bp}
\noindent Acknowledgments: We would like to thank Jens Marklof and Nicolas Chevallier for bringing this problem to our attention, and for helpful comments. We also thank the referee for their feedback, and for carefully reading our paper. This project is part of the second author's undergraduate senior research project at the University of Houston.
\section{Lattice formulation of the problem}\label{sec.Latt}
As mentioned above, the observations in this section are very similar to those in \cite[Section 2]{HaynMark2020b}. Therefore we will omit some of the details, which are explained in full in that paper.
Let $|\cdot|_\infty$ denote the maximum norm on ${\mathbb R}^d$. By a linear change of variables in the definition \eqref{eqn.DeltaDef}, we have that
\begin{equation*}
\begin{split}
\delta_{n,N}^* = \min\{| k\vec\alpha + \vec\ell |_\infty>0 : -n< k< N_+-n,~ \vec\ell\in{\mathcal L} \} ,
\end{split}
\end{equation*}
where $N_+:=N+\tfrac12$.
Choose $M_0\in\operatorname{SL}(d,{\mathbb R})$ so that ${\mathcal L}={\mathbb Z}^d M_0$, and let
\begin{equation*}
A_N(\vec\alpha)=A_N(\vec\alpha,{\mathcal L})=\begin{pmatrix} 1 & 0 \\ 0 & M_0 \end{pmatrix} \begin{pmatrix} 1 & \vec{\alpha} \\ 0 & \bm{1}_d \end{pmatrix} \begin{pmatrix} N^{-1} & 0 \\ 0 & N^{1/d}\bm{1}_d\end{pmatrix}.
\end{equation*}
Then, for all $1\leq n\leq N$, we have that
\begin{multline*}
\delta_{n,N}^* = N_+^{-1/d} \min\bigg\{| \vec v |_\infty>0 : (u,\vec v)\in{\mathbb Z}^{d+1} A_{N_+}(\vec\alpha),~ -\frac{n}{N_+}< u < 1-\frac{n}{N_+} \bigg\} .
\end{multline*}
Now write $G=\operatorname{SL}(d+1,{\mathbb R})$ and $\Gamma=\operatorname{SL}(d+1,{\mathbb Z})$ and, for $M\in G$ and $t\in (0,1)$, define
\begin{equation*}
F(M,t) = \min\big\{| \vec v |_\infty>0 : (u,\vec v)\in{\mathbb Z}^{d+1} M, ~-t< u < 1-t \big\} .
\end{equation*}
It follows from the proof of \cite[Proposition 1]{HaynMark2020b} that $F$ is well-defined as a function from $\Gamma\backslash G\times (0,1)$ to ${\mathbb R}_{>0}$. It is also clear that
\begin{equation*}
\delta_{n,N}^*= N_+^{-1/d} F\left(A_{N_+}(\vec\alpha),\frac{n}{N_+} \right) .
\end{equation*}
Given $M\in G$, a bounded region of ${\mathbb R}^{d+1}$ can contain only finitely many points of the lattice ${\mathbb Z}^{d+1}M$. This implies, after a short argument, that the function $F(M,t)$ can only take finitely many values as $t$ varies over $(0,1)$. We denote this finite number by
\begin{equation*}
{\mathcal G}(M)=|\{ F(M,t) \mid 0<t<1\}|,
\end{equation*}
and for $N\in{\mathbb N}$ we also write
\begin{equation*}
{\mathcal G}_{N}(M)=|\{ F(M,\tfrac{n}{N_+}) \mid 1\leq n \leq N\}|.
\end{equation*}
It follows from the definitions above that
\begin{equation}\label{eqn.g_NBnd}
g_N^* = {\mathcal G}_{N}(A_{N_+}(\vec\alpha)) \leq {\mathcal G}(A_{N_+}(\vec\alpha)).
\end{equation}
This is the key connection which we will use in our proof of Theorem \ref{thm.GapsBd1}.
\section{Proof of Theorem \ref{thm.GapsBd1}}\label{sec.Proofs}
To prove the first part of Theorem \ref{thm.GapsBd1}, in light of \eqref{eqn.g_NBnd} it is sufficient to show that, for any $M\in\Gamma\backslash G,$
\begin{equation}\label{eqn.G(M)Bd}
{\mathcal G}(M)\le 2^d+1.
\end{equation}
Suppose that $M\in\Gamma\backslash G$ and choose vectors $(u_1,\vec{v}_1),\ldots,(u_K,\vec{v}_K)\in{\mathbb Z}^{d+1}M$, with $K={\mathcal G}(M)$, so that the following conditions hold:\vspace*{3bp}
\begin{itemize}
\item $0<|\vec{v}_1|_\infty<|\vec{v}_2|_\infty<\cdots <|\vec{v}_K|_\infty$.\vspace*{3bp}
\item For each $t\in (0,1)$, there exists an $1\le i\le K$ such that $|\vec{v}_i|_\infty=F(M,t)$.\vspace*{3bp}
\item For each $1\le i\le K$, there exists a $t\in(0,1)$ such that $-t<u_i<1-t$ and $|\vec v_i|_\infty=F(M,t)$.\vspace*{3bp}
\end{itemize}
Note that each $u_i$ lies in the interval $(-1,1)$. We make the following basic observation.
\begin{prop}\label{prop.SmallT}
If $(u,\vec{v})\in{\mathbb Z}^{d+1}M$ satisfies $|u|<1/2$, then
\begin{equation*}
F(M,t)\le |\vec v|_\infty,
\end{equation*}
for all $0<t<1$.
\end{prop}
\begin{proof}
If $u\in[0,1/2)$ then for any $0<t<1-u$, we have that $F(M,t)\le|\vec{v}|_\infty$. Noting that $(-u,-\vec v)\in{\mathbb Z}^{d+1}M$, we see that this inequality also holds for any $t$ satisfying $u<t<1$. Since $u<1/2$, we conclude that $F(M,t)\le |\vec{v}|_\infty$ for all $0<t<1$. The case when $u\in(-1/2,0]$ follows from the same argument.
\end{proof}
Next we use geometric information to place restrictions on the vectors $(u_i,\vec v_i)$. This is where we will use the fact that we are working with the maximum norm.
\begin{prop}\label{prop.NoSameOrth}
For $1\le i<j\le K$, if $\operatorname{sgn}(u_i)\vec v_i$ and $\operatorname{sgn}(u_j)\vec v_j$ lie in the same orthant of ${\mathbb R}^d$, then $j=K$.
\end{prop}
\begin{proof}
If $|u_j|<1/2$ then by the previous proposition we have that $F(M,t)\le|\vec v_j|_\infty$ for all $0<t<1$, which implies that $j=K$. Therefore suppose that $|u_j|\ge 1/2$. Then, by Proposition \ref{prop.SmallT} again, this forces $|u_i|\ge 1/2$.
If $\operatorname{sgn}(u_i)=\operatorname{sgn}(u_j)$ and if $\vec v_i$ and $\vec v_j$ lie in the same orthant, then $|u_i-u_j|<1/2$, and
\begin{equation*}
0<|\vec v_i-\vec v_j|_\infty\le \max\left\{|\vec v_i|_\infty,|\vec v_j|_\infty\right\}=|\vec v_j|_\infty.
\end{equation*}
Since $(u_i-u_j,\vec v_i-\vec v_j)\in{\mathbb Z}^{d+1}M$, it follows from Proposition \ref{prop.SmallT} that
\begin{equation}\label{eqn.SameOrthBd}
F(M,t)\le |\vec v_i-\vec v_j|_\infty\le |\vec v_j|_\infty
\end{equation}
for all $0<t<1$. Therefore we conclude that $j=K.$
Similarly, if $\operatorname{sgn}(u_i)=-\operatorname{sgn}(u_j)$ and if $\vec v_i$ and $-\vec v_j$ lie in the same orthant, then
$|u_i+u_j|<1/2$, and
\begin{equation*}
0<|\vec v_i+\vec v_j|_\infty\le \max\left\{|\vec v_i|_\infty,|\vec v_j|_\infty\right\}=|\vec v_j|_\infty.
\end{equation*}
Since $(u_i+u_j,\vec v_i+\vec v_j)\in{\mathbb Z}^{d+1}M$, this again implies that \eqref{eqn.SameOrthBd} holds, and we conclude that $j=K$.
\end{proof}
By Proposition \ref{prop.NoSameOrth}, each of the vectors $\operatorname{sgn}(u_i)\vec v_i$, for $1\le i\le K-1$, must lie in a different orthant of ${\mathbb R}^d$. This immediately gives the bound in \eqref{eqn.G(M)Bd}, and therefore completes the proof of the first part of Theorem \ref{thm.GapsBd1}.
Finally, we give examples with $d=2$ and 3 for which the bound in Theorem \ref{thm.GapsBd1} is attained. In what follows, for $\vec x\in{\mathbb R}^d$ we write
\begin{equation*}
\|\vec x\|=\min\{|\vec x-\vec \ell|_\infty : \ell\in{\mathcal L}\}.
\end{equation*}
For $d=2$ take ${\mathcal L}={\mathbb Z}^2,~\vec\alpha=(157/500,-23/200),$ and $N=11$. Then we have that
\begin{align*}
\delta^*_{1,N}=\|10\vec\alpha\|=\frac{3}{20},\quad\delta^*_{2,N}=\|9\vec\alpha\|=\frac{87}{500},\quad\delta^*_{4,N}=\|7\vec\alpha\|=\frac{99}{500},
\end{align*}
\begin{align*}
\delta^*_{5,N}=\|6\vec\alpha\|=\frac{31}{100},\quad\text{and}\quad\delta^*_{6,N}=\|\vec\alpha\|=\frac{157}{500},
\end{align*}
therefore $g^*_N(\alpha,{\mathcal L})=5$.
For $d=3$ take ${\mathcal L}={\mathbb Z}^3,~\vec\alpha=(-157/10000, -742/3125, -23/400),$ and $N=73$. Then we have that
\begin{align*}
\delta^*_{1,N}=\|72\vec\alpha\|=\frac{7}{50},\quad\delta^*_{2,N}=\|71\vec\alpha\|=\frac{443}{3125},\quad\delta^*_{5,N}=\|68\vec\alpha\|=\frac{456}{3125},
\end{align*}
\begin{align*}
\delta^*_{6,N}=\|67\vec\alpha\|=\frac{59}{400},\quad\delta^*_{18,N}=\|55\vec\alpha\|=\frac{13}{80},\quad \delta^*_{19,N}=\|54\vec\alpha\|=\frac{557}{3125},
\end{align*}
\begin{align*}
\delta^*_{22,N}=\|51\vec\alpha\|=\frac{1993}{10000},\quad\delta^*_{23,N}=\|50\vec\alpha\|=\frac{43}{200},\quad\text{and}\quad\delta^*_{24,N}=\|4\vec\alpha\|=\frac{23}{100},
\end{align*}
therefore $g^*_N(\alpha,{\mathcal L})=9$.
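The values above are easy to check by brute force directly from the definition. The
following short Python sketch (included only as an aid to the reader; it uses exact
rational arithmetic and assumes ${\mathcal L}={\mathbb Z}^d$) computes the quantities
$\delta^*_{n,N}$ and $g_N^*$ for rational $\vec\alpha$.
\begin{verbatim}
from fractions import Fraction

def dist_to_Z(x):
    """Exact distance from a rational x to the nearest integer."""
    f = x % 1
    return min(f, 1 - f)

def g_star(alpha, N):
    """Number of distinct delta*_{n,N} for the lattice Z^d."""
    def gap(k):  # max-metric distance from k*alpha to the nearest point of Z^d
        return max(dist_to_Z(k * a) for a in alpha)
    # The m = n term contributes |l|_inf >= 1 for nonzero l in Z^d and never
    # attains the minimum in these examples, so it is omitted here.
    deltas = set()
    for n in range(1, N + 1):
        vals = [gap(n - m) for m in range(1, N + 1) if m != n]
        deltas.add(min(v for v in vals if v > 0))
    return len(deltas), sorted(deltas)

alpha2 = (Fraction(157, 500), Fraction(-23, 200))
print(g_star(alpha2, 11))  # 5 values: 3/20, 87/500, 99/500, 31/100, 157/500
\end{verbatim}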
This completes the proof of our main result. We conclude with a couple of remarks. First of all, we note that the proof that we have given here (in particular, establishing the optimal bounds in dimensions 2 and 3) is simpler than the proof of the corresponding result for the Euclidean metric given in \cite{HaynMark2020b}. For the proof of the optimal bounds in dimensions 2 and 3 here, we only needed Proposition \ref{prop.NoSameOrth}, which plays a similar role for the maximum metric as \cite[Proposition 2]{HaynMark2020b} does for the Euclidean metric. However, for the Euclidean metric, \cite[Proposition 2]{HaynMark2020b} is not enough to establish the optimal bound in dimension 2, which is the reason for the additional geometric arguments given in \cite[Section 5]{HaynMark2020b}. Of course, a less precise explanation for this is that cube packing is easier and more efficient than sphere packing.
Finally, for $d\ge 4,$ it seems likely that the upper bound of Theorem \ref{thm.GapsBd1} is too large. In fact, for $d=4$, we have not found any examples so far with $g_N^*>9$. Establishing optimal upper bounds in these cases is an interesting open problem.
\section{Complexity measures}
We now introduce a number of complexity measures used in the proofs of our main results.
Throughout this section, let $f : \{0, 1\}^n \to \mathbb{R}$ be a function on the Boolean domain. We identify elements of $\{0, 1\}^n$ with subsets of $[n]$. Given two inputs $x, y \in \{0, 1\}^n$ we denote by $x \lor y$ their union and by $x \land y$ their intersection.
\subsection{Decision trees}
We assume familiarity with the standard notion of a \textit{decision tree}.
Our primary interest is in a variant of decision trees called \textit{AND decision trees},
which strengthens decision trees by allowing queries of the form $\land_{i \in S} z_i$
for arbitrary $S \subseteq [n]$. Let $\andDT{f}$ denote the depth of the shallowest
AND decision tree computing $f$. The following simple connection
to the communication complexity of $f \circ \land$ motivates
our interest in this model:
\begin{claim}
Let $f: \{0, 1\}^n \to \{0, 1\}$ and $F = f \circ \land$.
Then $\PCC(F) \le 2 \andDT{f}$.
\end{claim}
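For intuition, here is a small illustrative Python sketch of the protocol behind this claim (the tree encoding and the example function are ours, not part of the formal development): every AND query costs two bits of communication, since $\bigwedge_{i \in S}(x_i \land y_i) = (\bigwedge_{i \in S} x_i) \land (\bigwedge_{i \in S} y_i)$.

\begin{verbatim}
# Sketch: simulating an AND decision tree by a protocol for F = f o AND.
# A tree is either a leaf (an output bit) or a tuple (S, subtree0, subtree1).
def and_query(bits, S):
    return int(all(bits[i] for i in S))

def simulate_protocol(tree, x, y):
    node, cost = tree, 0
    while not isinstance(node, int):
        S, if0, if1 = node
        a = and_query(x, S)              # Alice's bit for this query
        b = and_query(y, S)              # Bob's bit for this query
        cost += 2
        node = if1 if (a and b) else if0 # AND over S of (x_i AND y_i)
    return node, cost

# toy ADT of depth 2 for f(z) = z0 AND (z1 OR z2)
tree = ([0, 1], ([0, 2], 0, 1), 1)
print(simulate_protocol(tree, [1, 0, 1], [1, 1, 1]))   # (1, 4)
\end{verbatim}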
A related complexity measure --- called the \text{$0$-decision tree complexity} of $f$
and denoted by $\zeroDT{f}$ --- is defined as follows. The \textit{$0$-depth} of a
(standard) decision tree $\mathcal{T}$ is the largest number of $0$-edges encountered on a
root-to-leaf path in $\mathcal{T}$. The $0$-decision tree complexity of $f$ is the
smallest $0$-depth over all trees $\mathcal{T}$ computing $f$. The following relationship
between AND decision trees and $0$-decision tree complexity is
from~\cite{mukhopadhyay2019lifting}:
\begin{claim}\label{claim:0DT_ADT}
For any Boolean function $f: \{0, 1\}^n \to \{0, 1\}$,
$\andDT{f} \le \zeroDT{f} \lceil \log (n+1) \rceil$.
\end{claim}
For completeness, we include the short proof.
\begin{proof}
Let $\mathcal{T}$ be a decision tree computing $f$ with $0$-depth $d$. Consider the
subtree which is truncated after the first $0$ is read. We can compute which leaf in
the subtree is reached by doing a binary search on the at most $n+1$ options, which can
be implemented using $\lceil \log(n+1) \rceil$ computations of ANDs. Then, the same
process continues on the tree rooted at the node reached, which has $0$-depth at most
$d - 1$.
\end{proof}
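To illustrate the binary search step (an illustrative sketch only; the input and variable order below are hypothetical), the key primitive is locating the position of the first $0$ among the variables queried along the all-ones path, using AND queries on prefixes:

\begin{verbatim}
# Locate the first 0 along a fixed order of variables using AND queries
# on prefixes; this uses about log(m+1) queries for m variables.
def and_query(z, S):
    return int(all(z[i] for i in S))

def first_zero_position(z, order):
    lo, hi = 0, len(order)       # the answer lies in [lo, hi]; hi = "no zero"
    queries = 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if and_query(z, order[:mid + 1]) == 1:
            lo = mid + 1         # no zero among the first mid+1 variables
        else:
            hi = mid             # some zero occurs at position <= mid
    return lo, queries

z = [1, 1, 0, 1, 0, 1, 1, 1]
print(first_zero_position(z, list(range(len(z)))))  # (2, 3): first zero at index 2
\end{verbatim}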
We note that this inequality is tight, by considering the function $f$ which outputs
$x_{i+1}$, where $x_i$ is the first bit in $x = (x_1, \dots, x_n)$ set to $0$.
\subsection{Monotone block sensitivity}
Next, we consider a variant of the standard notion of \textit{block sensitivity}
from~\cite{nisan1994degree} which we call \textit{monotone block sensitivity}. In a
nutshell, this is a `directed' restriction of block sensitivity, where we can only change an
input by flipping $0$'s to $1$'s. Say that two inputs $x,y$ are disjoint if $x \land y = 0^n$;
namely, their corresponding sets are disjoint.
\begin{definition}[Monotone block sensitivity]
The monotone block sensitivity of a function $f : \{0, 1\}^n \to \{0, 1\}$ at an input
$z \in \{0, 1\}^n$, denoted $\MBS(f, z)$, is the maximal number $k$ such that there
exist pairwise disjoint inputs $w_1, \dots, w_k \in \{0, 1\}^n$ that satisfy
$f(z) \ne f(z \lor w_i)$ for all $i \in [k]$. We denote $\MBS(f) = \max_z \MBS(f,z)$.
\end{definition}
\shachar{
We can extend this definition to $f:\{0, 1\}^n \to R$ for any range $R$; many of
the following results still hold, in particular connecting FMBS and MBS. I think we
should add this, to explain what parts rely on the output being Boolean and which ones do not
}
Observe that we may assume without loss of generality in the definition of $\MBS(f,z)$
that $w_1, \dots, w_k$ are disjoint from $z$, as otherwise we can replace $w_i$ with
$w_i \setminus z$. In this case it coincides with the standard definition of block
sensitivity, where we restrict each block $w_i$ to be disjoint from the support of $z$.
For two motivating examples, observe that for the $n$-bit AND and OR functions we have
$\MBS(\text{AND}) = 1$ and $\MBS(\text{OR}) = n$, respectively.
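For concreteness, $\MBS$ can be computed by brute force for small $n$; the following Python sketch (exponential time, for illustration only) reproduces the two values above.

\begin{verbatim}
# Brute-force MBS(f) for small n: for each z, find the largest family of
# pairwise disjoint w's (disjoint from z) with f(z OR w) != f(z).
from itertools import combinations, product

def disjoint(u, v):
    return not any(a and b for a, b in zip(u, v))

def mbs(f, n):
    inputs = list(product([0, 1], repeat=n))
    best = 0
    for z in inputs:
        sens = [w for w in inputs if any(w) and disjoint(w, z)
                and f(tuple(a | b for a, b in zip(z, w))) != f(z)]
        for k in range(len(sens), best, -1):
            if any(all(disjoint(u, v) for u, v in combinations(fam, 2))
                   for fam in combinations(sens, k)):
                best = k
                break
    return best

AND = lambda z: int(all(z))
OR = lambda z: int(any(z))
print(mbs(AND, 3), mbs(OR, 3))   # 1 3
\end{verbatim}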
We now consider a fractional variant of monotone block sensitivity. In this case,
instead of asking for a set of disjoint sensitive inputs, we ask for a distribution
over sensitive inputs that satisfies a particular set of `packing' constraints.
\begin{definition}[Smooth distribution]
A distribution $\mathcal{D}$ over $\{0, 1\}^n$ is said to be $p$-smooth (equivalently, $p$-biased) if for any
$i \in [n]$ it holds that $\Pr_{w \sim \mathcal{D}}[w_i = 1] \le p$.
\end{definition}
\begin{definition}[Fractional monotone block sensitivity]
The fractional monotone block sensitivity of a function
$f : \{0, 1\}^n \to \{0, 1\}$ at an input $z \in \{0, 1\}^n$, denoted $\FMBS(f,z)$,
is equal to $1/p$, where $p>0$ is the smallest number such that the following holds.
There exists a distribution $\mathcal{D}$ over inputs $w \in \{0, 1\}^n$ such that:
\begin{enumerate}
\item $f(z) \ne f(z \lor w)$ for all $w$ in the support of $\mathcal{D}$.
\item $\mathcal{D}$ is $p$-smooth.
\end{enumerate}
We denote $\FMBS(f) = \max_z \FMBS(f,z)$.
\end{definition}
We will see in the next section that this fractional variant is captured by a simple linear
program (and that the integral variant is captured by a simple integer linear program),
but, for the time being, we won't make direct use of this formulation. Note that also
here we may assume without loss of generality that the distribution $\mathcal{D}$ is over inputs
$w$ disjoint from $z$. It is straightforward to see that $\FMBS$ upper bounds $\MBS$:
\begin{claim}
For any Boolean function $f : \{0, 1\}^n \to \{0, 1\}$ and input $z \in \{0, 1\}^n$,
$\FMBS(f,z) \ge \MBS(f,z)$. In particular, $\FMBS(f) \ge \MBS(f)$.
\end{claim}
\begin{proof}
Assume that $\MBS(f,z)=k$ is witnessed by pairwise disjoint $w_1,\ldots,w_k$.
Define $\mathcal{D}$ to be the uniform distribution over $w_1,\ldots,w_k$. Since the $w_i$ are pairwise disjoint, every coordinate belongs to at most one of them, so $\mathcal{D}$ is
$(1/k)$-smooth, and hence $\FMBS(f, z) \ge k$.
\end{proof}
Later on, we will see that a converse of this inequality holds up to polynomial factors.
We remark that, in order to avoid introducing a dependence on $n$, such a converse must
hold with respect to the `global' quantities $\FMBS(f)$ and $\MBS(f)$, rather than the
`local' quantities $\FMBS(f, z)$ and $\MBS(f, z)$ for particular $z \in \{0, 1\}^n$.
\sam{I'm forgetting the counter-example at the moment but will fill it in soon}
\subsection{Hitting set complexity}
It is straightforward to see that $\MBS(f, z)$ is equal to the value of the following
\textit{set packing} integer linear program:
\begin{align*}
\text{maximize } & \sum_{w} a_w \\
\text{subject to } & \sum_{w: w_i = 1} a_w \le 1 \quad \text{ for all } i \in [n]\\
& a_w \in \{0, 1\} \quad \text{ for all } w \text{ with } f(z \lor w) \neq f(z).
\end{align*}
By taking its dual, we obtain our next complexity measure, which we refer to as the
\textit{hitting set complexity}.
\begin{definition}[Hitting set complexity]
The hitting set complexity of a function $f: \{0, 1\}^n \to \{0, 1\}$
at an input $z \in \{0, 1\}^n$, denoted $\HSC(f, z)$, is the minimal $k$
so that there are $k$ indices $i_1, ..., i_k \in [n]$ for which the following holds.
For every $w \in \{0, 1\}^n$ such that $f(z) \neq f(z \vee w)$, there is some
$j \in [k]$ so that $w_{i_j} = 1$. We denote
$\HSC(f) = \max_z \HSC(f, z)$.
\end{definition}
One can readily verify that $\HSC(f, z)$ is in fact equal to the value of the following
\textit{set covering} integer linear program (the dual of the LP defining $\MBS$):
\begin{align*}
\text{minimize} &\ \sum_{i \in [n]} b_i \\
\text{subject to} &\ \sum_{i \in [n]: w_i = 1} b_i \ge 1 \quad \text{ for all } w, \ f(z \lor w) \neq f(z)\\
&\ b_i \in \{0, 1\} \quad \text{ for all } i \in [n]
\end{align*}
We record the following simple observation:
\begin{claim}
For any $f: \{0, 1\}^n \to \{0, 1\}$, $\HSC(f, 0^n) \le \zeroDT{f}$.
\end{claim}
\begin{proof}
Suppose $\zeroDT{f} = d$, witnessed by some decision tree $\mathcal{T}$. Let $S \subseteq [n]$
be the set of $0$'s queried by $\mathcal{T}$ when simulated on $0^n$ and note that $|S| \le d$.
We claim that every $w \in \{0, 1\}^n$ for which $f(0^n) \neq f(w)$ has some $i \in S$
with $w_i = 1$. Otherwise, some such $w$ is $0$ on all of $S$. But in this case, $0^n$ and
$w$ follow the same root-to-leaf path in $\mathcal{T}$, so $\mathcal{T}$ outputs the same value on both,
contradicting $f(0^n) \neq f(w)$.
\end{proof}
...
\begin{definition}[Fractional hitting set complexity]
\label{def:FHSC}
The fractional hitting set complexity of a function $f:\{0, 1\}^n \to \{0, 1\}$ at an input
$z \in \{0, 1\}^n$, denoted $\FHSC(f, z)$, is the minimal number $k$ such that there
exists a distribution $\mathcal{D}$ over coordinates $i \in [n]$ for which the following holds.
For every $w \in \{0, 1\}^n$ such that $f(z) \neq f(z \vee w)$, it holds that
$\Pr_{i \sim \mathcal{D}}[w_i =1 ] \ge 1 / k$. We denote $\FHSC(f) = \max_z \FHSC(f, z)$.
\end{definition}
Also here, it is sufficient to consider $w$ disjoint from $z$.
Linear programming duality yields an equivalence of $\FHSC$ and $\FMBS$:
\begin{claim}
For any Boolean function $f: \{0, 1\}^n \to \{0, 1\}$ and input $z \in \{0, 1\}^n$,
\[
\MBS(f, z) \le \FMBS(f,z) = \FHSC(f,z) \le \HSC(f, z)
\]
\end{claim}
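The equality $\FMBS = \FHSC$ is plain LP duality; as an illustrative numerical sanity check (assuming \texttt{numpy} and \texttt{scipy} are available; exponential in $n$, toy scale only), one can solve both linear programs for a small function and observe that the optimal values coincide.

\begin{verbatim}
# Solve the fractional packing LP (value FMBS(f,z)) and the fractional
# covering LP (value FHSC(f,z)) for a small f and check that they agree.
from itertools import product
import numpy as np
from scipy.optimize import linprog

def fractional_values(f, z, n):
    inputs = list(product([0, 1], repeat=n))
    sens = [w for w in inputs
            if f(tuple(a | b for a, b in zip(z, w))) != f(z)]
    A = np.array(sens)                  # rows: sensitive w, columns: coordinates
    # packing: max sum(a) s.t. for each i, sum_{w: w_i=1} a_w <= 1, a >= 0
    pack = linprog(-np.ones(len(sens)), A_ub=A.T, b_ub=np.ones(n),
                   bounds=(0, None), method="highs")
    # covering: min sum(b) s.t. for each sensitive w, sum_{i: w_i=1} b_i >= 1
    cover = linprog(np.ones(n), A_ub=-A, b_ub=-np.ones(len(sens)),
                    bounds=(0, None), method="highs")
    return -pack.fun, cover.fun

OR = lambda v: int(any(v))
print(fractional_values(OR, (0, 0, 0, 0), 4))   # (4.0, 4.0)
\end{verbatim}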
\subsection{Sparsity as a multilinear polynomial}
Any Boolean function $f: \{0, 1\}^n \to \{0, 1\}$ can be written uniquely as a multilinear real-valued polynomial $f(x) = \sum_{S \subseteq [n]} \alpha_S \prod_{i \in S} x_i$. The \emph{sparsity} of $f$, denoted $\spar(f)$, is the number of nonzero monomials in $f$'s multilinear representation.
Let $F = f \circ \wedge$ denote the AND function corresponding to $f$, given by $F(x,y) = f(x \wedge y)$. The sparsity of $f$ characterizes the communication complexity of $F$:
\begin{claim}\label{claim:rank_sparsity}
Let $f: \{0, 1\}^n \to \{0, 1\}$ be a Boolean function. Let $M_{F}$ denote the communication matrix for $F = f \circ \wedge$. Then $\spar(f) = \rank(M_F)$.
\end{claim}
\begin{proof}[Proof sketch]
Write $f(z) = \sum_{S} \alpha_S \prod_{i \in S} z_i$. Then $M_F = \sum_S \alpha_S v_S v_S^{\top}$, where $v_S \in \{0, 1\}^{2^n}$ is given by $(v_S)_x = \prod_{i \in S} x_i$. The vectors $v_S$ are linearly independent, and hence the rank of $M_F$ equals the number of nonzero coefficients.
\end{proof}
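As an illustrative sanity check of \Cref{claim:rank_sparsity} (toy scale; the majority function below is just an example of ours, and \texttt{numpy} is assumed), one can recover the multilinear coefficients by M\"obius inversion, $\alpha_S = \sum_{T \subseteq S} (-1)^{|S| - |T|} f(T)$, and compare the number of nonzero coefficients with the rank of the communication matrix of $F = f \circ \wedge$.

\begin{verbatim}
# Compute the multilinear coefficients of f by Mobius inversion and compare
# sparsity with rank(M_F) for F = f o AND (brute force, small n only).
from itertools import product
import numpy as np

def coefficients(f, n):
    coeffs = {}
    for S in product([0, 1], repeat=n):
        choices = [[0] if s == 0 else [0, 1] for s in S]   # enumerate T subseteq S
        total = sum((-1) ** (sum(S) - sum(T)) * f(T) for T in product(*choices))
        if total != 0:
            coeffs[S] = total
    return coeffs

def and_matrix(f, n):
    inputs = list(product([0, 1], repeat=n))
    return np.array([[f(tuple(a & b for a, b in zip(x, y))) for y in inputs]
                     for x in inputs])

maj3 = lambda z: int(sum(z) >= 2)          # x1x2 + x1x3 + x2x3 - 2x1x2x3
c = coefficients(maj3, 3)
print(len(c), np.linalg.matrix_rank(and_matrix(maj3, 3)))   # 4 4
\end{verbatim}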
\begin{claim}
Let $f: \{0, 1\}^n \to \{0, 1\}$ be a Boolean function
and let $\mathcal{F} = \{S \neq \emptyset : \alpha_S \neq 0\}$. Then if $\HSC(f, 0^n) = c$,
$\mathcal{F}$ has a hitting set of size $c$.
\end{claim}
\begin{corollary}
Let $f: \{0, 1\}^n \to \{0, 1\}$ be a Boolean function
and let $\mathcal{F} = \{S \neq \emptyset : \alpha_S \neq 0\}$. Then if $\zeroDT{f} = c$,
$\mathcal{F}$ has a hitting set of size $c$.
\end{corollary}
\section{Upper bounding monotone block sensitivity}
In this section we prove two inequalities which serve as the starting point in the proofs of the main theorems. Both of these inequalities provide upper bounds on the monotone block sensitivity in terms of two `communication-type' measures: the communication complexity of $F = f \circ \land$ and the sparsity of $f$.
\subsection{Connection to communication complexity}
If $f$ has large monotone block sensitivity, then its AND function embeds unique disjointness
as a sub-function. The unique disjointness function on $k$ bits $\UDISJ_k$ takes two inputs,
$a, b \in \{0, 1\}^k$, and is defined as the partial function:
\[
\UDISJ_k(a,b) =
\begin{cases}
0 & \text{if } |a \land b| = 1 \\
1 & \text{if } |a \land b| = 0 \\
\text{undefined} & \text{otherwise},
\end{cases}
\]
where $|\cdot|$ is the Hamming weight.
\begin{claim}\label{claim:MBS_UDISJ}
Let $f:\{0, 1\}^n \to \{0, 1\}$ be a Boolean function with $\MBS(f)=k$. Then $F =f \circ \wedge$ contains as a sub-matrix $\UDISJ_k$. That is, there are
maps $\mathbf{x}, \mathbf{y} : \{0, 1\}^k \to \{0, 1\}^n$ and $c \in \{0, 1\}$ such that the following holds.
For any
$a, b \in \{0, 1\}^k$ which satisfy that $|a \land b| \in \{0, 1\}$, it holds that
\[
\UDISJ_k(a,b) = F(\mathbf{x}(a), \mathbf{y}(b)) \oplus c.
\]
\end{claim}
\begin{proof}
Let $z,w_1, \dots, w_k \in \{0, 1\}^n$ be pairwise disjoint such that $f(z) \ne f(z \lor w_i)$ for all
$i \in [k]$. We may assume without loss of generality that $f(z)=1$, otherwise replace
$f$ with its negation, and set $c=1$.
Assume that Alice and Bob want to solve unique-disjointness on inputs $a, b \in \{0, 1\}^k$,
which we identify with subsets of $[k]$. Define
\[
\mathbf{x}(a) = z \lor \bigvee_{i \in a} w_i, \qquad
\mathbf{y}(b) = z \lor \bigvee_{j \in b} w_j.
\]
Observe that
\[
\mathbf{x}(a) \land \mathbf{y}(b) =
\begin{cases}
z & \text{if } a \land b = \emptyset\\
z \lor w_i & \text{if } a \land b = \{i\}.
\end{cases}
\]
Thus, since $f(z)=1$, we get that $\UDISJ_k(a,b) = f(\mathbf{x}(a) \land \mathbf{y}(b)) = F(\mathbf{x}(a), \mathbf{y}(b))$ for all $a,b$ with $|a \land b| \in \{0,1\}$, as required.
\end{proof}
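The maps $\mathbf{x}(\cdot), \mathbf{y}(\cdot)$ are easy to implement; the following illustrative Python sketch (the choice of $f$, $z$ and the blocks $w_i$ is a toy example of ours) builds the embedded sub-matrix and evaluates it on a disjoint pair and on a uniquely intersecting pair.

\begin{verbatim}
# Build F(x(a), y(b)) from z and pairwise disjoint blocks w_1,...,w_k.
def vor(u, v):
    return tuple(a | b for a, b in zip(u, v))

def embedded_F(f, z, ws):
    def x_of(a):                       # x(a) = z OR (union of w_i with a_i = 1)
        out = z
        for i, bit in enumerate(a):
            if bit:
                out = vor(out, ws[i])
        return out
    return lambda a, b: f(tuple(p & q for p, q in zip(x_of(a), x_of(b))))

# toy example: f = OR on 4 bits, z = 0^4, w_i = e_i, so MBS(f) = 4 and
# F(x(a), y(b)) equals UDISJ_4(a, b) XOR 1 on inputs with |a AND b| <= 1
OR = lambda v: int(any(v))
ws = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
F = embedded_F(OR, (0, 0, 0, 0), ws)
print(F((1, 0, 1, 0), (0, 1, 0, 0)))   # a, b disjoint  -> f(z) = 0
print(F((1, 0, 1, 0), (0, 0, 1, 0)))   # |a AND b| = 1  -> f(z OR w_3) = 1
\end{verbatim}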
It is well known that UDISJ is hard in several communication complexity measures: deterministic, randomized and nondeterministic.
\begin{theorem}[\cite{goos2018landscape, razborov1992distributional}]\label{thm:disj_lbs}
For any communication complexity measure $\Delta \in\{ \PCC, \mathrm{BPP}^{\text{cc}}, \mathrm{NP}^{\text{cc}}\}$,
\[
\Delta(\UDISJ_k)=\Omega(k).
\]
\end{theorem}
We immediately get the following corollary:
\begin{corollary}\label{corollary:PCC_MBS}
Let $f:\{0, 1\}^n \to \{0, 1\}$ be a Boolean function. Then for $F=f \circ \wedge$ and any communication complexity measure $\Delta \in\{ \PCC, \mathrm{BPP}^{\text{cc}}, \mathrm{NP}^{\text{cc}}\}$, if $\Delta(F) = C$ then $\MBS(f) = O(C)$.
\end{corollary}
\begin{proof}
Assume that $\Delta(F)=C$ and $\MBS(f)=k$. \Cref{claim:MBS_UDISJ} shows that any protocol for $F$ also solves $\UDISJ_k$. Hence by \Cref{thm:disj_lbs} we have $k=O(C)$.
\end{proof}
\subsection{Connection to sparsity}
Next, we prove the following relationship between monotone block sensitivity
and sparsity.
\begin{claim}\label{claim:MBS_sparsity}
For any Boolean function $f: \{0, 1\}^n \to \{0, 1\}$,
$\MBS(f) = O(\log(\spar(f))^2)$.
\end{claim}
Since the log-rank of the communication matrix lower bounds the communication complexity, this claim immediately implies
\[
\MBS(f) = O(\log(\spar(f))^2) = O(\log(\rank(M_F))^2) = O(\PCC(F)^2).
\]
This is worse than the bound obtained in the previous section by a quadratic factor but allows us to upper bound $\MBS$ using a weaker assumption. In the case of the log-rank conjecture, the ability to use this weaker assumption is crucial.
The proof of \Cref{claim:MBS_sparsity} uses the following well-known relationship between
the degree and the sensitivity of Boolean functions. The \textit{sensitivity} $S(f)$ of a Boolean function $f$ is the largest $s$ so that there exists an input $z$ and $s$ coordinates $\{i_1, ..., i_s\}$ so that $f(z) \neq f(z \oplus e_{i_j})$ for all $j \in [s]$.
\begin{claim}[\cite{nisan1994degree}]
For any Boolean function $f: \{0, 1\}^n \to \{0, 1\}$, $S(f) \le O(\deg(f)^2)$.
\end{claim}
\begin{proof}[Proof of \Cref{claim:MBS_sparsity}]
Suppose $\MBS(f) = k$, witnessed by pairwise disjoint $z,w_1, ..., w_k \subseteq [n]$. Let
$g: \{0, 1\}^k \to \{0, 1\}$ denote the function obtained from $f$ by identifying variables
in each $w_i$ and setting all variables not occurring in any $w_i$ to the corresponding bit
in $z$. That is, $g(x) = f(z + \sum x_i w_i)$.
Note that $S(g) = k$ and $\spar(g) \le \spar(f)$.
Let $r = \spar(f)$. We will reduce the degree of $g$ to $d=O(\log r)$ by repeating the following
process $k/2$ times: set to zero the coordinate which appears in the largest number of monomials of degree $\ge d$.
Let $M_i$ denote the number of monomials of degree $\ge d$ remaining after the $i$-th step. Initially $M_0 \le r$. Next, note that if $M_i > 0$, then there is a variable that occurs in
at least a $d/k$ fraction of the monomials of degree $\ge d$. We therefore obtain the recurrence
$M_{i+1} \le (1 - d/k)M_i$. After $k/2$ steps, $M_{k/2} \le (1-d/k)^{k/2}r \le \exp(-d/2)r < 1$
for $d = O(\log r)$. As $M_{k/2}$ is a nonnegative integer, it follows that $M_{k/2} = 0$.
Let $h$ denote the function obtained by this restriction process. Since $M_{k/2} = 0$,
$\deg(h) < d$. Moreover, since $g$ had full sensitivity and we restricted only $k / 2$
coordinates, $S(h) \ge k/2$. Finishing up, we have
$k/2 \le S(h) \le O(\deg(h)^2) \le O((\log r)^2)$, completing the proof.
\end{proof}
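The restriction process in the proof is a simple greedy procedure on the monomials; the following toy Python sketch (our own illustration, with a hypothetical list of monomials) carries it out until no monomial of degree at least $d$ survives.

\begin{verbatim}
# Greedy restriction: repeatedly set to 0 the variable occurring in the most
# monomials of degree >= d, which removes all monomials containing it.
from collections import Counter

def reduce_degree(monomials, d):
    monomials = [frozenset(m) for m in monomials]
    zeroed = []
    while True:
        heavy = [m for m in monomials if len(m) >= d]
        if not heavy:
            return zeroed, monomials
        counts = Counter(v for m in heavy for v in m)
        v = counts.most_common(1)[0][0]
        zeroed.append(v)
        monomials = [m for m in monomials if v not in m]   # x_v = 0

# hypothetical monomials of a 6-variable function, target degree d = 2
mons = [{0, 1, 2}, {0, 3, 4}, {0, 5}, {1, 3}, {2, 4, 5}]
print(reduce_degree(mons, d=2))
\end{verbatim}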
\section{From $\MBS$ to $\FMBS$}
In this section we show that the fractional monotone block sensitivity can be at most cubic in the monotone block sensitivity.
\begin{lemma}\label{lemma:FMBS_MBS}
For any Boolean function $f:\{0, 1\}^n \to \{0, 1\}$, $\FMBS(f) = O(\MBS(f)^3)$.
\end{lemma}
We first need the following claim.
\begin{claim}\label{claim:bias_restrict}
Let $f:\{0, 1\}^n \to \{0, 1\}$ be a Boolean function.
Let $z \in \{0, 1\}^n$ and $\mathcal{D}$ be a $q$-biased distribution. Then
\[
\Pr_{w \sim D}[f(z) \ne f(z \lor w)] \le q \cdot \FMBS(f).
\]
\end{claim}
\begin{proof}
Assume $\FMBS(f)=1/p$. We may assume $q < p$ as otherwise the claim is trivial.
Let $\delta = \Pr_{w \sim D}[f(z) \ne f(z \lor w)]$.
Let $\mathcal{D}'$ be the distribution $\mathcal{D}$ restricted to inputs $w$ such that $f(z) \ne f(z \lor w)$.
Observe that $\mathcal{D}'$ is $(q / \delta)$-biased, and is supported on inputs $w$ such that $f(z) \ne f(z \lor w)$. As $\FMBS(f)=1/p$ we have $q/\delta \ge p$ which implies the claim.
\end{proof}
\begin{proof}[Proof of \Cref{lemma:FMBS_MBS}]
Let $\FMBS(f)=1/p$. Let $z \in \{0, 1\}^n$ and $\mathcal{D}$ be a distribution as in the definition of
fractional monotone block sensitivity, such that $\mathcal{D}$ is $p$-biased.
Let $k$ be a parameter to be determined later, and sample inputs
$w_1,\ldots,w_k \sim \mathcal{D}$ independently. Let $u$ denote the set of elements that appear in at least two of the $w_i$, namely
\[
u = \bigvee_{i \ne j} \left( w_i \bigwedge w_j \right).
\]
The main observation is that $u$ is $q$-biased for $q=(pk)^2$. This holds since for every $\ell \in [n]$ we have
\[
\Pr[u_\ell=1] \le \sum_{i \ne j} \Pr[(w_i)_{\ell}=1, (w_j)_{\ell}=1] \le k^2 p^2.
\]
Define the following ``bad'' events:
\[
E_0 = [f(z) \ne f(z \vee u)], \quad E_t = [f(z \vee w_t) \ne f(z \vee w_t \vee u)] \text{ for } t=1,\ldots,k.
\]
We claim that $\Pr[E_t] \le q/p = p k^2$ for all $t=0,\ldots,k$. The proof for $E_0$ follows directly from \Cref{claim:bias_restrict}. To see why it holds for $E_t$ for $t=1,\ldots,k$, define $u_t$ to be the elements that appear in two sets $w_i$, excluding $w_t$, namely
\[
u_t = \bigvee_{i \ne j, \; i,j \ne t} \left( w_i \bigwedge w_j \right).
\]
Observe that $w_t,u_t$ are independent, that $u_t$ is $(pk)^2$-biased and that $w_t \vee u = w_t \vee u_t$. Thus \Cref{claim:bias_restrict} gives that, for any fixing of $w_t$, we have
\[
\Pr_{u_t}[f(z \vee w_t) \ne f(z \vee w_t \vee u_t) \; | \; w_t] \le q/p = pk^2.
\]
The claim for $E_t$ follows by averaging over $w_t$.
Next, let us choose $k$ so that $pk^2 < 1/(k+1)$. Then with positive probability, none of the events $E_t$ hold. Fix $w_1,\ldots,w_k$ for which this happens.
In such a case, we argue that we can find a witness that $\MBS(f) \ge k$. Define $z' = z \vee u$ and $w'_i = w_i \setminus u$. As none of the events $E_t$ hold, we have
\[
f(z') = f(z), \qquad f(z' \vee w'_i) = f(z \vee w_i) \; \forall i \in [k].
\]
Thus $f(z') \ne f(z' \vee w'_i)$ for all $i \in [k]$.
Moreover, $w'_1,\ldots,w'_k$ are pairwise disjoint. Hence $\MBS(f) \ge k$. As we can choose $k=\Omega((1/p)^{1/3})$, the lemma follows.
\end{proof}
\section{Towards log-rank and deterministic lifting
for AND functions}
In this section, we prove the following relationship
between $\log(\rank(M_F))$ and the AND-decision tree
complexity of $f: \{0, 1\}^n \to \{0, 1\}$:
\begin{theorem}[Main theorem]
\label{thm:AND_DT}
Let $f: \{0, 1\}^n \to \{0, 1\}$ be a Boolean function and set
$F=f \circ \wedge$. Assume that $\log(\rank(M_F))=C$. Then there
exists an AND decision tree computing $f$ of depth
$D = O(C^7 \log n)$.\shachar{%
to complete the proof, we need to show that when $f$ cannot
be embedded on fewer bits that we can bound
$\log n \le \text{poly}(C)$
}
\end{theorem}
As corollaries, we get within a multiplicative $\log n$
factor of the log-rank conjecture and deterministic lifting
for AND functions.
\begin{corollary}
The log-rank conjecture holds for AND functions up to the $\log n$ factor, since an AND decision
tree for $f$ of depth $D$ implies a deterministic protocol for
$f \circ \wedge$ of complexity $2D$, as the players can simply simulate
the computation in the AND decision tree.
\end{corollary}
\begin{corollary}
\label{corollary:lifting}
Deterministic lifting holds for AND functions up to the $\log n$ factor, since if $F$ has a deterministic
protocol of complexity $C$ then in particular $\log(\rank(M_F)) \le C$ which
satisfies the assumption of \Cref{thm:AND_DT}.
\end{corollary}
Note that this implies the log-rank
conjecture and deterministic lifting whenever
$\log n \le (\log (\spar(f)))^{O(1)}$, which
is the case for many natural examples of Boolean functions
but is not true in general. We will discuss this case
in more detail later. For the remainder of this section, we turn to proving \Cref{thm:AND_DT}. Fix a Boolean function $f$ and $F = f \circ \wedge$ throughout.
The following inequality allows us to upper bound the $0$-decision tree complexity in terms of $\FMBS$ and sparsity:
\begin{claim}\label{claim:FMBS+sparsity_0DT}
For a Boolean function $f: \{0, 1\}^n \to \{0, 1\}$, $\zeroDT{f} \le \FMBS(f) \cdot \log(\spar(f))$.
\end{claim}
Before proving it, we deduce \cref{thm:AND_DT}:
\begin{proof}[Proof of \Cref{thm:AND_DT}]
Let $f$ be a Boolean function such that $\log(\rank(M_F))=C$ for $F=f \circ \wedge$. By \Cref{claim:rank_sparsity} we have $\log(\spar(f))=C$, and by \Cref{claim:MBS_sparsity} we have $\MBS(f)=O(C^2)$. \Cref{lemma:FMBS_MBS} gives that $\FMBS(f)=O(C^6)$ and \Cref{claim:FMBS+sparsity_0DT} then gives $\zeroDT{f} = O(C^7)$. This implies by \Cref{claim:0DT_ADT} that
$\andDT{f} = O(C^7 \log n)$.
\end{proof}
\begin{proof}[Proof of \Cref{claim:FMBS+sparsity_0DT}]
We'll show that $\zeroDT{f} \le \FHSC(f) \cdot \log(\spar(f))$, from
which the claim follows by linear programming duality.
We build a decision tree that computes $f$ and bound its $0$-depth. We start by choosing the first variable $x_i$ to query. It will be convenient to denote by $\spar_0(f)$ the sparsity of $f$, excluding the constant term, if there is one. Note that $\spar_0(f) \in \{\spar(f), \spar(f)-1\}$.
Let $f(x) = \sum \alpha_S \prod_{i \in S} x_i$ be the polynomial decomposition of $f$, and let $M=\{S: S \ne \emptyset, \alpha_S \ne 0\}$ be the non-constant monomials occurring in it, so that $|M|=\spar_0(f)$. Let $M_{\min} \subset M$ be the set of minimal monomials with respect to containment. Observe that if $S \in M_{\min}$ then $f(S) \ne f(0)$ since $f(S) = \alpha_0 + \alpha_S$ whereas $f(0)=\alpha_0$.
Let $k=\FHSC(f, 0) \le \FHSC(f)$.
By definition, there is a distribution $\mathcal{D}$ over indices $i \in [n]$ such that $\Pr[i \in w] \ge 1/k$ for any input $w$ such that $f(w) \ne f(0)$. In particular, $\Pr[i \in S] \ge 1/k$ for any $S \in M_{\min}$, and hence also for all $S \in M$, as any $S \in M$ contains some $S' \in M_{\min}$.
Thus, there exists $i \in [n]$ that belongs to at least a $(1/k)$-fraction of the non-constant monomials of $f$. Query the variable $x_i$ and let $b_i \in \{0, 1\}$ be the outcome.
Let $f':\{0, 1\}^{n} \to \{0, 1\}$ be the function $f$ restricted to $x_i=b_i$.
First, consider the sparsity of $f'$:
\begin{itemize}
\item If $x_i=0$ then $\spar_0(f') \le (1-1/k) \cdot \spar_0(f)$, as setting $x_i=0$ kills a $(1/k)$-fraction of the non-constant monomials.
\item If $x_i=1$ then $\spar_0(f') \le \spar_0(f)$, since fixing variables to constants cannot increase the number of non-constant monomials.
\end{itemize}
Next, observe that $\FHSC(f') \le \FHSC(f)$. This is since for every input $z$ consistent with the restriction (namely, with $z_i=b_i$), we have
$\FHSC(f',z) \le \FHSC(f,z)$. Thus, we can repeat the process on $f'$, and keep going until we reach a restricted function which is constant, at which point we terminate and output this constant value.
To conclude the proof, we bound the $0$-depth of the obtained decision tree. Assume that for some input $x$ the decision tree queries $r$ zeros. The sparsity of the corresponding restricted function (excluding the constant term) is at most $(1-1/\FHSC(f))^r \cdot \spar_0(f)$. In particular, for $r= \FHSC(f) \cdot \log(\spar_0(f))$ this is below $1$, and hence the restricted function must be constant.
\end{proof}
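To make the construction concrete, the following illustrative Python sketch (a greedy variant of the argument above, run on a toy function of our choosing) evaluates $f$, given by its monomial expansion, on a single input by repeatedly querying the variable occurring in the most non-constant monomials, and reports how many of the answers were $0$.

\begin{verbatim}
# Evaluate f (given as {monomial frozenset: coefficient}) on input x by the
# greedy strategy: query the most frequent variable, restrict, and repeat;
# count how many queried variables were 0.
def evaluate_counting_zeros(coeffs, x):
    coeffs = dict(coeffs)
    zeros = 0
    while any(S for S in coeffs if S):             # a non-constant monomial remains
        occ = {}
        for S in coeffs:
            for v in S:
                occ[v] = occ.get(v, 0) + 1
        v = max(occ, key=occ.get)                  # most frequent variable
        if x[v] == 1:                              # substitute x_v = 1
            new = {}
            for S, c in coeffs.items():
                T = S - {v}
                new[T] = new.get(T, 0) + c
            coeffs = {S: c for S, c in new.items() if c != 0}
        else:                                      # substitute x_v = 0
            zeros += 1
            coeffs = {S: c for S, c in coeffs.items() if v not in S}
    return coeffs.get(frozenset(), 0), zeros

# toy example: f(x) = x0 x1 + x2 - x0 x1 x2, i.e. OR(x0 AND x1, x2)
f = {frozenset({0, 1}): 1, frozenset({2}): 1, frozenset({0, 1, 2}): -1}
print(evaluate_counting_zeros(f, [1, 0, 1]))       # value 1, at most one 0 queried
\end{verbatim}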
\section{Failure of co-nondeterministic lifting}
\shachar{I think we switched from co-nondeterministic to nondeterministic, since UDISJ outputs 1 on disjoint inputs, which means that it's nondeterministic complexity is large. I think this should be integrated in this section as well}
Since $\mathrm{coNP}^{\text{cc}}(f \circ \land) = C$ implies that $\FMBS(f) = O(C^2)$, it seems reasonable to ask whether one could prove that $\mathrm{coNP}^{\text{cc}}(f \circ \land)$ is a polynomial upper bound on $\andDT{f}$. \shachar{we can also look on monotone certificate complexity of $f$} The immediate obstruction in the proof is that the assumption that $\mathrm{coNP}^{\text{cc}}(f \circ \land) = C$ does not imply sparsity, which we needed as a progress measure in \Cref{claim:FMBS+sparsity_0DT}.
We observe that such a lifting theorem is impossible:
\begin{theorem}
There exists a function $f: \{0, 1\}^{2n} \to \{0, 1\}$ so that
\begin{enumerate}
\item $\mathrm{coNP}^{\text{cc}}(f \circ \land) = O(\log n)$
\item $\zeroDT{f} = \Omega(n)$
\end{enumerate}
\end{theorem}
\begin{proof}
Let $f = \text{AND}(\text{OR}(z_1, z_2), \dots, \text{OR}(z_{2n-1}, z_{2n}))$. Then
\begin{enumerate}
\item To certify that the input $z = x \land y$ lies in $f^{-1}(0)$, Alice and Bob guess a clause index $i \in [n]$
(using $\log n$ bits) and then check that
$\text{OR}(x_{2i-1} \land y_{2i-1}, x_{2i} \land y_{2i}) = 0$ using a constant amount of communication.
Such a clause always exists when $z \in f^{-1}(0)$, by definition of AND.
\item We use an adversary argument. Name each OR $C_1, \dots, C_n$. Whenever the decision tree
$\mathcal{T}$ queries $z_i \in C_j$, we do the following:
\begin{itemize}
\item If $\mathcal{T}$ has not queried $C_j$ before, answer $0$.
\item If $\mathcal{T}$ has queried $C_j$ before, answer $1$ unless $C_j$ is the last unset
clause, in which case answer $0$.
\end{itemize}
It is easy to see that this strategy forces any decision tree to look at $n$ $0$'s.
\end{enumerate}
\end{proof}
\section{Structure}
\begin{theorem}
Let $f:\{0,1\}^n \to \mathbb{R}$ be a multilinear polynomial. For any $m$ one of the following must hold:
\begin{itemize}
\item There is a set of $m$ variables that intersects all monomials of $f$.
\item Or, there is a restriction of $f$ with $k$ disjoint monomials, where $k \ge \text{poly}(m)$.
\end{itemize}
\end{theorem}
\begin{theorem}
Let $f:\{0,1\}^n \to R$ where $|R|=k$ be a multilinear polynomial with sparsity $r$. Then $f$ has a hitting set of size $O(k^2 (\log r)^7)$.
\end{theorem}
get log-rank, lifting as direct corollaries...
\section{Monotone}
Let $f:\{0,1\}^n \to \{0,1\}$ be monotone, $f(x)=\sum_T f_T x^T$. Inversion formula:
$$
f_T = \sum_{S \subseteq T} f(S) (-1)^{|T|-|S|}.
$$
Let $S_1,\ldots,S_m$ be the minimal monomials. Assume $T$ is \emph{not} a union of them. Let $\{S_i: i \in I\}$ be the minimal monomials contained in $T$, and let $T'$ be their union, so $T' \subsetneq T$. Fix $t \in T \setminus T'$. For any $R \subseteq T \setminus \{t\}$ we have $f(R)=f(R \cup \{t\})$. Pairing $R$ with $R \cup \{t\}$ in the inversion formula then gives $f_T=0$.
\section{Discussion}
\label{sec:discussion}
\subsection{Ruling out the \texorpdfstring{$\log n$}{log n} factor}
Both results about communication complexities of AND-functions
(\Cref{thm:lifting,thm:logrank}) are not ``tight'' in the sense that
both of them have a $\log n$ factor on the right-hand side of the inequality.
Unfortunately, $n$ can be exponential in sparsity (see \Cref{example:redundant-indexing}).
If the $\log n$ factor is truly necessary in these
theorems, then we are very close to refuting the log-rank conjecture. Hence,
we believe that a ``tighter'' version of the log-rank theorem
(\Cref{thm:logrank}) is true.
\begin{conjecture}
Let $f : \{0, 1\}^n \to \{0, 1\}$ be a boolean function, where $\spar[f] = r$.
Then
\begin{equation*}
\PandDT[f] \le \mathrm{poly}(\log r).
\end{equation*}
\end{conjecture}
Note that this conjecture would imply a ``tighter'' version of the
lifting theorem as well.
\subsection{Randomized complexity}
The main results of this paper are concerned with the deterministic communication
complexity of AND-functions. However, \Cref{corollary:PCC_MBS} says that the randomized communication
complexity of an AND-function is lower bounded by its monotone block sensitivity.
The relation between randomized communication complexity and sparsity remains
unclear. We conjecture that the relation between these two measures is the same as
the proved relation (\Cref{thm:lifting}) between sparsity and \emph{deterministic}
communication complexity.
\begin{conjecture}
\label{conj:randomized}
Let $f : \{0, 1\}^n \to \{0, 1\}$ be a boolean function. Suppose that
$\BPPCC(\andFunction{f}) = C$. Then
\begin{equation*}
\log(\spar[f]) \le \mathrm{poly}(C) \cdot \log n.
\end{equation*}
In particular, $f$ can be computed by an AND-decision tree of depth
\begin{equation*}
\PandDT[f] \le \mathrm{poly}(C) \cdot \log n.
\end{equation*}
\end{conjecture}
Observe that \Cref{conj:randomized} implies that randomness does not significantly help
to compute AND-functions. Concretely, it implies that
\[
\PCC(\andFunction{f}) \le \mathrm{poly} \left( \BPPCC(\andFunction{f}) \right) \cdot \log n.
\]
Interestingly, the $\log n$ factor in this conjecture is necessary as shown
by the following example.
\begin{example}[Threshold Functions]
Let $f : \{0, 1\}^n \to \{0, 1\}$ be the threshold function such that
\[
f(x) = 1 \iff |x| \ge n - 1.
\]
It is clear that $\spar[f] = n + 1$; however, $\BPPCC(\andFunction{f}) = O(1)$. Indeed, let
us consider the following randomized AND-decision tree for $f$: it samples a
subset $S \subseteq [n]$ uniformly at random, then outputs the value of
\[
q_S(x) = \left( \bigwedge_{i \in S} x_i \right) \lor
\left( \bigwedge_{i \notin S} x_i \right).
\]
Note that if $|x| \ge n-1$ then $q_S(x)=1$ with probability $1$. If $|x| \le n - 2$,
let $i,j$ be such that $x_i=x_j=0$. With probability $1/2$ we have
$i \in S$, $j \notin S$ or $i \notin S$, $j \in S$, in both cases $q_S(x) = 0$.
In order to reduce the error, repeat this for a few random sets $S$.
\end{example}
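A direct simulation of this randomized AND-decision tree is immediate; the following illustrative Python sketch (the number of rounds is an arbitrary choice of ours) never errs on inputs with $|x| \ge n-1$ and detects $|x| \le n-2$ with probability at least $1 - 2^{-\text{rounds}}$.

\begin{verbatim}
# Randomized AND-decision tree for f(x) = 1 iff |x| >= n-1: each round draws
# a random S and evaluates q_S(x) with two AND queries.
import random

def q_S(x, S):
    n = len(x)
    return int(all(x[i] for i in S) or all(x[i] for i in range(n) if i not in S))

def randomized_adt(x, rounds=20):
    n = len(x)
    for _ in range(rounds):
        S = {i for i in range(n) if random.random() < 0.5}
        if q_S(x, S) == 0:
            return 0      # two zero coordinates were separated by S
    return 1

x = [1] * 10
x[3] = x[7] = 0           # |x| = n - 2, so f(x) = 0
print(randomized_adt(x))  # 0 with probability at least 1 - 2^(-20)
\end{verbatim}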
\subsection{Sparsity vs coefficients size}
Let $f:\{0, 1\}^n \to \{0, 1\}$ and consider the multi-linear polynomial computing $f$, namely
$f(x) = \sum_s f_s \prod_{i \in s} x_i$. It is well known that the
coefficients $f_s$ take integer values. In particular, if we denote by
$\|f\|_1 = \sum |f_s|$ the $L_1$ norm of the coefficients, then we get the
obvious inequality
\[
\spar[f] \le \|f\|_1.
\]
We note the following corollary of \Cref{thm:logrank}, which shows that $\|f\|_1$
cannot be much larger than $\spar[f]$.
\begin{claim}
Let $f : \{0, 1\}^n \to \{0, 1\}$ and assume that $\spar[f] = r$.
Then $\|f\|_1 \le n^{O(\log r)^5}$.
\end{claim}
\begin{proof}
By \Cref{thm:logrank} we have $\PandDT[f] = d$ for $d = O((\log r)^5 \log n)$.
By a similar proof to \Cref{claim:adt_exp_sparsity}, any function $f$ computed
by an AND-decision tree of depth $d$ has $\|f\|_1 \le 3^d$. The claim follows.
\end{proof}
We conjecture that the gap between sparsity and $L_1$ is at most polynomial.
\begin{conjecture}
For any boolean function $f$, $\|f\|_1 \le \mathrm{poly}(\spar[f])$.
\end{conjecture}
\section{Introduction}
Communication complexity has seen rapid development in the last
couple of decades. However, most of the celebrated results in the field are
about the communication complexity of important \emph{concrete functions},
such as set disjointness~\cite{razborov1992distributional} and gap Hamming
distance~\cite{chakrabarti2012hamming}. Unfortunately, the understanding of
communication complexity of \emph{arbitrary functions} is still lacking.
Probably the most famous problem of this type is the log-rank
conjecture~\cite{lovasz1988logrank}.
It speculates that given any total boolean communication problem, its
deterministic communication complexity is polynomially related to the
logarithm of the real rank of its associated communication matrix. Currently,
there is an exponential gap between the lower and upper bounds relating to
the log-rank conjecture. The best known upper bound~\cite{lovett2016rootrank}
states that the communication complexity of a boolean function $F$ is at most
$O(\sqrt{\rank[F]} \log \rank[F])$, where $\rank[F]$ denotes the real rank
of the communication matrix of $F$. On the other hand, the best known lower
bound~\cite{goos18detpart} states that there exists a boolean function $F$
with communication complexity $\Omega(\log^2(\rank[F]))$.
Given this exponential gap and lack of progress for general communication
problems, many works~\cite{goos20019PNP,goos2020autoamisationCP,%
goos2017BPPlifting,zhang2009communication,mukhopadhyay2019lifting,%
pitassi2020hierarchies,chattopadhyay2019approxlogrank,goos18detpart,%
rezende2020liftingsimplegadgets,cChattopadhyay2019BPPlifting,%
mande2020paritylogrank,hatami2018xorfunctions,tsang2013fourier,%
montanaro2009communication,leung2011tight,zhang2014efficient,%
yao2015parity,hatami2017unbounded,sanyal2015fourieranddimension}
focused on the communication complexity
of functions with some restricted structure. In particular, the study of composed
functions was especially successful, and produced the so-called lifting method,
which connects query complexity measures of boolean functions with communication
complexity measures of their corresponding communication problems.
Concretely, given a boolean function $f:\{0, 1\}^n \to \{0, 1\}$ and a \emph{gadget}
$g : \{0, 1\}^\ell \times \{0, 1\}^m \to \{0, 1\}$, the corresponding lifted function is
the following communication problem: Alice gets as input $x \in (\{0, 1\}^\ell)^n$,
Bob gets as input $y \in (\{0, 1\}^m)^n$, and their goal is to compute the composed
function $f \circ g^n$, defined as
$(f \circ g^n)(x, y) = f(g(x_1, y_1), \dots, g(x_n, y_n))$. Lifting theorems allow
one to connect query complexity measures of the underlying boolean function $f$ with
communication complexity measures of the composed function. \Cref{figure:lifting}
lists some notable examples.
\begin{figure}[ht]
\centering
\begin{tabular}{l l l l l}
\toprule
Gadget & Query Model & Communication Model & Total Functions & Reference\\
\midrule
\multirow{2}{*}{$\Ind{m}$} & $\PDT$ & $\PCC$ & No & \cite{raz1999separationNC} \\
& $\BPPDT$ & $\BPPCC$ & No & \cite{goos2017BPPlifting} \\
\multirow{2}{*}{$\IP{\log m}$} & $\PDT$ & $\PCC$ & No & \cite{chattopadhyay2019lowdiscrep} \\
& $\BPPDT$ & $\BPPCC$ & No & \cite{cChattopadhyay2019BPPlifting} \\
$\EQ{\log m}$ & $\PandDT$ & $\PCC$ & No & \cite{mukhopadhyay2019lifting}
\vspace{10pt} \\
$\oplus$ & $\PxorDT$ & $\PCC$ & Yes & \cite{hatami2018xorfunctions} \\
$g$ & $\deg$ & $\rank$ & Yes & \cite{sherstov2010quantumclassical} \\
\bottomrule
\end{tabular}
\caption{%
Query-to-communication lifting theorems.
The parameter $m$ is polynomial in $n$;
$g$ in the last line is any function that has as sub-functions both an AND and an OR.
$\PCC$ denotes deterministic communication complexity, $\PDT$ denotes decision tree
complexity, $\BPPDT$ denotes the probabilistic decision tree complexity with bounded
error, $\BPPCC$ denotes the probabilistic communication complexity with bounded error,
$\PandDT$ denotes AND-decision tree complexity, $\deg$ denotes the real degree,
and $\rank$ denotes the real rank.
}
\label{figure:lifting}
\end{figure}
Of particular interest to us are lifting theorems with very simple gadgets. The reason
for that is twofold. First, using complex gadgets (such as inner product or indexing)
yields sub-optimal bounds in applications. A second and perhaps more important reason is
that the study of composed functions with complex gadgets does not bring us any closer
towards the understanding of general communication problems. This is because the
corresponding lifting theorems connect the communication complexity of the lifted function
to well-studied query measures of the underlying boolean function (such as decision tree
complexity, or degree as a real polynomial), and hence do not shed new light on general
communication problems.
Thus, in this paper we consider gadgets which are as simple as they could be --- one-bit
gadgets. In fact, there are only two non-equivalent one-bit gadgets: one-bit XOR, which
yields XOR-functions; and one-bit AND, which yields AND-functions. As we shortly discuss,
they naturally correspond to query models which extend the standard ones: parity-decision
trees and AND-decision trees.
\paragraph{XOR-functions.}
XOR-functions have been studied in several works~\cite{%
mande2020paritylogrank,%
hatami2018xorfunctions,tsang2013fourier,%
zhang2009communication,montanaro2009communication,%
leung2011tight,zhang2014efficient,yao2015parity,hatami2017unbounded,%
sanyal2015fourieranddimension}.
Given a boolean function $f:\{0, 1\}^n \to \{0, 1\}$,
its corresponding XOR-function is $\xorFunction{f} = f \circ \oplus^n$, defined as
$\xorFunction{f}(x,y)=f(x \oplus y)$. A natural query measure corresponding to the
communication complexity of XOR-functions is the \emph{Parity-Decision Tree} (PDT) model.
This model is an extension of the standard decision tree model, where nodes can query an
arbitrary parity of the bits. To see the connection, note that if $f$ can be computed by
a PDT of depth $d$ (denoted by $\PxorDT[f] = d$), then $\xorFunction{f}$ has a communication
protocol of complexity $2d$. This is by simulating the computation in the PDT: whenever
the PDT needs to compute the parity of $x \oplus y$ on some set $S$ of coordinates, each
player computes the corresponding parity on their input, and then they exchange the
answers, which allows them to compute the corresponding parity on $x \oplus y$ as well, and
continue to traverse the tree. Thus we have $\PCC[\xorFunction{f}] \le 2 \PxorDT[f]$.
In the other direction, \cite{hatami2018xorfunctions} proved that $\PxorDT[f]$ is at most
a polynomial in the communication complexity of $\xorFunction{f}$. That is,
$\PxorDT[f] \le \text{poly}\left(\PCC[\xorFunction{f}] \right)$. Thus, the two measures
are equivalent, up to polynomial factors.
If one considers the log-rank conjecture for XOR-functions, then a simple
observation~\cite{tsang2013fourier} is that the rank of the communication matrix of
$\xorFunction{f}$ is equal to the Fourier sparsity of $f$. Thus, in order to prove the
log-rank conjecture for XOR-functions it is sufficient to show that $\PxorDT[f]$ is at
most a polynomial in the log of the Fourier sparsity of $f$. Unfortunately, the latter
relation is currently unknown.
\paragraph{AND-functions.}
The goal of this paper is to develop an analogous theory of AND-functions. Let
$f : \{0, 1\}^n \to \{0, 1\}$ be a boolean function. Its corresponding AND-function
is $\andFunction{f} = f \circ \land^n$, defined as
$\andFunction{f}(x, y) = f(x \land y)$. Similar to the case of XOR-functions, there is
a corresponding natural query model, \emph{AND-Decision Tree} (ADT), where each node
in the decision tree can query an arbitrary AND of the input bits. We denote by
$\PandDT[f]$ the minimal depth of an ADT computing $f$. Also here, efficient ADTs for
$f$ imply efficient communication protocols for $\andFunction{f}$, where
$\PCC[\andFunction{f}] \le 2 \PandDT[f]$. Our main focus in this work is
\begin{enumerate}[(i)]
\item lifting theorems for AND-functions, and
\item the log-rank conjecture for AND-functions.
\end{enumerate}
Concretely, we will show that if $\andFunction{f}$ has either (i) an
efficient deterministic communication protocol or (ii) low rank, then $f$ has an
efficient ADT. As we will shortly see, understanding both questions is directly related
to understanding the monomial structure of polynomials computing boolean functions.
\subsection{Main results}
Let $f : \{0, 1\}^n \to \{0, 1\}$ be a boolean function. It is computed by a unique multi-linear
polynomial over the reals. That is, $f(x) = \sum_s f_s \prod_{i \in s} x_i$, where
$s \subseteq [n]$ and $f_s \in \mathbb{R}$ are real-valued coefficients. The sparsity of $f$,
denoted $\spar[f]$, is the number of nonzero coefficients in the decomposition.
This is related
to AND-functions, as a simple observation (\Cref{claim:rank_sparsity})
is that this also equals the rank of its communication matrix, namely
$\rank[\andFunction{f}] = \spar[f]$.
Before describing our results, we need one more definition. Let $\mathcal{F}$ be
a set system (family of sets). A set $H$ is a \emph{hitting set} for $\mathcal{F}$ if
it intersects all the sets in $\mathcal{F}$. Of particular interest to us are set systems
that correspond to the monomials of boolean functions. Given a boolean function $f$,
define $\mon[f] = \set{s \ :\ f_s \neq 0, s \neq 0}$
to be the set system of the non-constant monomials of $f$. We exclude the constant term as
it is irrelevant for the purpose of constructing hitting sets, and it simplifies some of
the later arguments. Note that $|\mon[f]| \in \set{\spar[f], \spar[f] - 1}$.
Our main combinatorial result is that set systems corresponding to the monomials of
boolean functions have small hitting sets.
\begin{restatable}{theorem}{mainthm}
\label{theorem:sparsity-to-hitting}
Let $f : \{0, 1\}^n \to \{0, 1\}$ be a boolean function with sparsity $\spar(f) = r$.
Then there exists a hitting set $H$ for $\mon[f]$ of size $|H| = O((\log r)^5)$.
\end{restatable}
This result can be seen as an analog of a similar result for union-closed families.
A set system $\mathcal{F}$ is union-closed if it is closed under taking unions; namely,
if $S_1, S_2 \in \mathcal{F}$ then also $S_1 \cup S_2 \in \mathcal{F}$. A famous
conjecture of Frankl~\cite{frankl1983tracefinitesets} is that in any union-closed
family $\mathcal{F}$ there is an element which belongs to at least half the sets in
the set system. Assume $|\mathcal{F}| = r$; the best known result in this direction is
that $\mathcal{F}$ has a hitting set of size $\log(r)$~\cite{knill1994graph}, which
implies that one of its elements belongs to a $1 / \log(r)$ fraction of sets in the
set system. We view \cref{theorem:sparsity-to-hitting} as a qualitative extension
of this result to more general set systems.
Our main application of \Cref{theorem:sparsity-to-hitting} is a near-resolution of the
log-rank conjecture for AND-functions. Our bounds nearly match the conjectured bounds
(poly-log in the rank), except for an extra $\log(n)$ factor that we are currently
unable to eliminate.
\begin{restatable}[Log-rank Theorem for AND-functions]{theorem}{logrank}
\label{thm:logrank}
Let $f : \{0, 1\}^n \to \{0, 1\}$ be a boolean function.
Let $r = \spar[f] = \rank[\andFunction{f}]$. Then $f$ can be computed by an
AND-decision tree of depth
\[
\PandDT[f] = O((\log r)^5 \cdot \log n).
\]
In particular, the deterministic communication complexity of $\andFunction{f}$ is
bounded by
\[
\PCC[\andFunction{f}] = O((\log r)^5 \cdot \log n).
\]
\end{restatable}
Note that if $f : \{0, 1\}^n \to \{0, 1\}$ is a function of sparsity at least
$n^{0.1}$, say, then \Cref{thm:logrank} proves the log-rank conjecture for its
corresponding AND-function. Thus, the only remaining obstacle is to extend the result
to very sparse functions.
Observe that \Cref{thm:logrank} implies a lifting theorem for AND-functions. Assume
that $\andFunction{f}$ has deterministic communication complexity $C$. The rank of
the associated communication matrix is then at most $2^C$, which by \Cref{thm:logrank}
gives an ADT for $f$ of depth $O(C^5 \log n)$. We can improve the exponent $5$ to $3$
by directly exploiting the existence of a communication protocol.
\begin{restatable}[Lifting Theorem for AND-functions]{theorem}{lifting}
\label{thm:lifting}
Let $f: \{0, 1\}^n \to \{0, 1\}$ be a boolean
function. Let $C = \PCC[\andFunction{f}]$
denote the deterministic communication complexity of its corresponding AND-function.
Then $f$ can be computed by an AND-decision tree of depth
\[
\PandDT[f] = O(C^3 \cdot \log n).
\]
\end{restatable}
\subsection{Proof overview}
We first discuss how our combinatorial theorem (\Cref{theorem:sparsity-to-hitting})
implies the log-rank theorem (\Cref{thm:logrank}). It relies on showing that sparse
boolean functions have efficient AND-decision trees (ADTs).
Let $f$ be a boolean function with $\spar[f] = r$. Our goal is to construct an ADT for $f$
of depth $\mathrm{poly}(\log r) \cdot \log(n)$. This directly implies \Cref{thm:logrank}, as the
sparsity of $f$ equals the rank of its AND-function $\andFunction{f}$, and an ADT for $f$
of depth $d$ implies a protocol for $\andFunction{f}$ which sends $2d$ bits.
It will be convenient to first consider another model of decision trees, called
\emph{zero decision trees}. A (standard) decision tree computing $f$ has $0$-depth $d$
if any path from root to leaf in it queries at most $d$
variables which evaluate to $0$. We denote by $\PzeroDT[f]$ the minimal such $d$ over all
decision trees that compute $f$. It is shown in~\cite{mukhopadhyay2019lifting}
(see also \Cref{claim:0DT_ADT}) that ADT complexity and zero DT complexity are tightly
connected. Concretely, for any boolean function $f$ they show that
\[
\PzeroDT[f] \le \PandDT[f] \le \PzeroDT[f] \cdot \lceil \log(n+1) \rceil.
\]
Thus, we will show that $\PzeroDT[f] \le \mathrm{poly}(\log r)$, which implies our target bound of
$\PandDT[f]$.
\Cref{theorem:sparsity-to-hitting} gives that there is a hitting set size $h = \mathrm{poly}(\log r)$
which intersects all the monomials of $f$. In particular, there is a variable $x_i$ that
intersects at least a $1 / h$ fraction of the monomials of $f$. The decision tree will first
query $x_i$, and then branch depending on whether $x_i = 0$ or $x_i = 1$. We use the simple
fact that the sparsity of $f$ cannot increase when variables are fixed, and continue this
process, until the value of the function is determined. Observe that every time that we
query a variable and get $0$, we eliminate a $1 / h$ fraction of the monomials. If we get
a $1$ the number of monomials can either stay the same or decrease, but it cannot increase.
So, as $f$ starts with $r$ monomials, we get that the maximal number of $0$s queried before
all monomials are eliminated is at most $h \cdot \log(r)$. Hence
$\PzeroDT[f] \le h \cdot \log(r) = \mathrm{poly}(\log r)$, as claimed.
Thus, from now on we focus on proving \Cref{theorem:sparsity-to-hitting}. Let $f$ be a
boolean function of sparsity $r$, and let $\mon[f]$ denote the set system of its monomials.
We consider four complexity measures associated with it:
\begin{enumerate}
\item The hitting set complexity ($\HSC$) is the minimal size of a hitting set for it.
This is what we are trying to bound, and can be phrased as a covering integer program.
\item The fractional hitting set complexity ($\FHSC$) is the fractional relaxation for
$\HSC$. Here, we want a distribution over variables that hits every monomial with high
probability, which can be phrased as a fractional covering linear program.
\item The fractional monotone block sensitivity ($\FMBS$) is the dual linear program. The
reason for the name would become clear soon. It can be phrased as a fractional
packing linear program.
\item The monotone block sensitivity ($\MBS$) is the integral version of $\FMBS$. It equals
the maximal number of pairwise disjoint monomials in $f$. Equivalently, it is the block
sensitivity of $f$ at $0^n$. It can be phrased as a packing integer program.
\end{enumerate}
More generally, given $s \subseteq [n]$, let $f_s$ denote the restriction of $f$ given by
setting $x_i = 1$ for all $i \in s$. It will be convenient to identify $s$ with its indicator
vector $1_s \in \{0, 1\}^n$. Thus, for $z \in \{0, 1\}^n$, we denote by $f_z$ the restriction of
$f$ to the $1$s in $z$. Define $\HSC[f, z]$, $\FHSC[f, z]$, $\FMBS[f, z]$, $\MBS[f, z]$ to be the
above four measures for the monomials of $f_z$. It is simple to observe
(see \Cref{claim:FMBS_eq_FHSC}) that for each $z$ we have:
\[
\MBS[f, z] \le \FMBS[f, z] = \FHSC[f, z] \le \HSC[f, z].
\]
As a first step, we use existing techniques in boolean function analysis to bound
$\MBS[f, z]$ in terms of the sparsity of $f$. We show in \Cref{lemma:MBS_sparsity} that
\[
\MBS[f, z] \le
O((\log \spar[f_z])^2) \le
O((\log r)^2).
\]
Thus, to complete the picture, we would like to show that if $\MBS[f, z]$ is low then so is
$\HSC[f, z]$. This, however, is false if one compares them pointwise (for a single $z$).
Nevertheless, we show that the measures are equivalent (up to polynomial factors) if instead
we consider their maximal value over all $z$. Define
\[
\MBS[f] = \max_{z \in \{0, 1\}^n} \MBS[f, z]
\]
and similarly define $\FMBS[f],\FHSC[f],\HSC[f]$.
We show in \Cref{lemma:FMBS_MBS} that
\[
\FMBS[f] = O(\MBS[f]^2),
\]
linear programming duality gives $\FHSC(f) = \FMBS(f)$, and we show in \Cref{lemma:HS_sparsity}
that
\[
\HSC(f) = O(\FHSC(f) \cdot \log r).
\]
This completes the proof of \Cref{theorem:sparsity-to-hitting}.
We also briefly discuss \Cref{thm:lifting}. The improved exponent is obtained by using the bound
$\MBS(f) = O(\PCC[\andFunction{f}])$, which we prove in \Cref{corollary:PCC_MBS}. Its proof is based
on the observation that if $\MBS(f)=b$ then $\andFunction{f}$ embeds unique
disjointness on $b$ bits as a sub-function, combined with known lower bounds on the communication complexity of
unique disjointness.
\subsection{Generalizations}
Several of our definitions and techniques readily extend to non-boolean functions,
namely to functions $f : \{0, 1\}^n \to \mathbb{R}$. We refer the reader to \Cref{sec:prelim}
for the relevant definitions and \Cref{sec:generalization} for a detailed discussion
of the generalized results. Here, we briefly state some of the results.
\begin{restatable}{theorem}{generalhsc}
\label{thm:gen_hsc}
Let $f : \{0, 1\}^n \to \mathbb{R}$ be a multilinear polynomial
with sparsity $r$. Suppose $\MBS[f] = m$. Then the hitting set complexity of $f$ is
bounded by
\begin{equation*}
\HSC[f] = O(m^2\log r).
\end{equation*}
\end{restatable}
\begin{restatable}{theorem}{maingeneral}
\label{thm:maingeneral}
Let $f: \{0, 1\}^n \to S$ for $S \subset \mathbb{R}$. Assume that $\spar[f] = r$ and
$|S| = s$. Then the hitting set complexity of $f$ is bounded by
\begin{equation*}
\HSC[f] = O(s^4 (\log r)^5).
\end{equation*}
\end{restatable}
\begin{restatable}{theorem}{generalsetsystem}
\label{thm:generalsetsystem}
Let $\mathcal{F}=\{S_1,\cdots,S_r\}$ be a set system. Then for any
$m \ge 1$, at least one of the following holds:
\begin{enumerate}
\item $\mathcal{F}$ has a hitting set of size $h=O(m^2\log r)$.
\item There exists a subset $T\subset [n]$ so that
$\mathcal{F}_T = \{S_1\setminus T,\cdots,S_r\setminus T\}$
contains $m$ pairwise disjoint sets.
\end{enumerate}
\end{restatable}
\paragraph*{Acknowledgements.} S.L. thanks Kaave Hosseini, who was involved
in early stages of this work. S.M. thanks Russell Impagliazzo for useful
discussions throughout the course of this work.
\section{Corollaries in communication complexity}
\label{sec:communication}
\subsection{Preliminaries}
Fix a \textit{boolean} function $f: \{0, 1\}^n \to \{0, 1\}$.
Let $\andFunction{f} = f \circ \wedge$ denote the AND function corresponding to $f$, given by
$\andFunction{f}(x,y) = f(x \wedge y)$. The sparsity of $f$ characterizes the rank of $\andFunction{f}$.
\begin{claim}
\label{claim:rank_sparsity}
Let $f: \{0, 1\}^n \to \{0, 1\}$ be a boolean function.
Then $\spar(f) =\rank[\andFunction{f}]$.
\end{claim}
\begin{proof}[Proof]
Let $f(z) = \sum_s f_s \prod_{i \in s} z_i$ be the multilinear polynomial computing $f$.
Then $f(x \land y)$, expanded as a multilinear polynomial, equals
\[
f(x \land y) = \sum_s f_s \left(\prod_{i \in s} x_i\right) \left( \prod_{i \in s} y_i\right).
\]
Hence we can write the $2^n \times 2^n$ communication matrix of $\andFunction{f}(x, y) = f(x \land y)$ as
\[
M = \sum_s f_s v_s v_s^{\top}
\]
where $v_s \in \{0, 1\}^{2^n}$ is given by $(v_s)_x = \prod_{i \in s} x_i$. The $v_s$'s
are linearly independent and therefore $M$ has rank equal to the number of nonzero
coefficients $f_s$ in the sum.
\end{proof}
We assume familiarity with the standard notion of a \textit{decision tree}.
Our primary interest is in a variant of decision trees called
\textit{AND decision trees}, which strengthens decision trees by allowing
queries of the conjunction of an arbitrary subset of the variables, namely
queries of the form $\land_{i \in S} z_i$ for arbitrary $S \subseteq [n]$.
Let $\PandDT[f]$ denote the smallest depth of an AND decision tree computing
$f$. The following simple connection to the communication complexity of
$\andFunction{f}$ motivates our interest in this model:
\begin{claim}
\label{claim:cc_adt}
Let $f : \{0, 1\}^n \to \{0, 1\}$. Then $\PCC(\andFunction{f}) \le 2 \PandDT[f]$.
\end{claim}
\begin{proof}[Proof]
Whenever the AND-decision tree queries a set $S \subseteq [n]$, Alice and Bob
privately evaluate $a = \land_{i \in S} x_i$ and $b = \land_{j \in S} y_j$, exchange them
and continue the evaluation on the sub-tree obtained by following the edge
labelled $a \land b$. If the decision tree height is $d$, this protocol uses
$2d$ bits of communication. Correctness follows from the observation that
$\bigwedge_{i \in S} (x_i \land y_i)
= (\bigwedge_{i \in S} x_i) \land (\bigwedge_{j \in S} y_j)$.
\end{proof}
There is also a simple connection between AND-decision trees
and sparsity:
\begin{claim}
\label{claim:adt_exp_sparsity}
Let $f: \{0, 1\}^n \to \{0, 1\}$ with $d = \PandDT[f]$. Then $\spar[f] \leq 3^d$.
\end{claim}
\begin{proof}
Assume that $f$ is computed by a depth-$d$ AND decision tree, where the first query
is $\land_{i \in S} z_i$, and where $f_1,f_2$ are the functions computed by the left
and right subtrees, respectively. Note that both are computed by AND decision trees
of depth $d-1$. We have
\[
f(z) = \prod_{i \in S} z_i \cdot f_1(z) + \left(1 - \prod_{i \in S} z_i\right) f_2(z).
\]
Thus
\[
\spar[f] \le \spar[f_1] + 2 \cdot \spar[f_2].
\]
The claim follows, since in the base case, functions computed by an AND-decision
tree of depth $1$ have sparsity at most $2$.
\end{proof}
A related complexity measure introduced in \cite{mukhopadhyay2019lifting}, called
the \text{$0$-decision tree complexity} of $f$, is defined as follows. The
\textit{$0$-depth} of a (standard) decision tree $\mathcal{T}$ is largest number
of $0$-edges encountered on a root-to-leaf path in $\mathcal{T}$. The $0$-decision
tree complexity of $f$, denoted $\PzeroDT[f]$, is the smallest $0$-depth over all
trees $\mathcal{T}$ computing $f$. The following relationship between AND decision
trees and $0$-decision tree complexity is from~\cite{mukhopadhyay2019lifting}:
\begin{claim}[\cite{mukhopadhyay2019lifting}]\label{claim:0DT_ADT}
For any boolean function $f: \{0, 1\}^n \to \{0, 1\}$,
\[
\PzeroDT[f] \le \PandDT[f] \le \PzeroDT[f] \ceil{\log (n + 1)}.
\]
\end{claim}
For completeness, we include the short proof.
\begin{proof}
The first inequality follows since an AND query can be simulated by querying
the bits in it one at a time, until the first $0$ is queried, or until they
are all queried to be $1$. In particular, at most a single $0$ query is made.
This implies that an AND decision tree of depth $d$ can be simulated by a
standard decision tree of $0$-depth $d$.
For the second inequality, let $\mathcal{T}$ be a decision tree computing $f$
with $0$-depth $d$. Consider the subtree which is truncated after the first $0$
is read. We can compute which leaf in the subtree is reached by doing a binary
search on the at most $n + 1$ options, which can be implemented using
$\ceil{\log(n + 1)}$ computations of ANDs. Then, the same process continues on
the tree rooted at the node reached, which has $0$-depth at most $d - 1$.
\end{proof}
The following example shows that this gap of $\log n$ cannot be avoided.
\begin{example}
For $z \in \{0, 1\}^n$, let $\text{ind}(z) \in [n]$ denote the
first index $i$ for which $z_i = 0$. Let
\[
f(z) =
\begin{cases}
1 & \text{ if $z = 1^n$ or $z=1^{n-1} 0$} \\
z_{\text{ind}(z) + 1} &\text{ otherwise}
\end{cases}
\]
The decision tree that queries $z_1, z_2, \dots$ in order until the first $0$ is found,
and then queries $z_{\text{ind}(z) + 1}$, computes $f$ and queries at most two zeroes
on any input, and hence
$\PzeroDT[f] \leq 2$. However, a direct calculation shows that
$\spar(f) = \Omega(n)$ and therefore, by
\Cref{claim:adt_exp_sparsity}, $\PandDT[f] = \Omega(\log n)$.
\end{example}
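The sparsity in this example can be checked by brute force for small $n$. The following Python sketch (a naive computation of ours, using $0$-based indices) recovers the multilinear coefficients by M\"{o}bius inversion and prints $\spar[f]$:
\begin{verbatim}
from itertools import product

def multilinear_coeffs(f, n):
    """Return {S: alpha_S} with f(z) = sum_S alpha_S * prod_{i in S} z_i."""
    coeffs = {}
    for S in product([0, 1], repeat=n):
        idx = [i for i in range(n) if S[i]]
        alpha = 0
        for T_bits in product([0, 1], repeat=len(idx)):   # all T <= S
            T = [0] * n
            for i, b in zip(idx, T_bits):
                T[i] = b
            alpha += (-1) ** (len(idx) - sum(T_bits)) * f(tuple(T))
        if alpha != 0:
            coeffs[S] = alpha
    return coeffs

def example_f(zz):   # the function from the example, with 0-based indices
    if all(zz) or (all(zz[:-1]) and zz[-1] == 0):
        return 1
    ind = zz.index(0)              # first index with z_i = 0
    return zz[ind + 1]

for n in range(3, 8):
    print(n, len(multilinear_coeffs(example_f, n)))   # n and spar(f)
\end{verbatim}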
We also use a lemma closely related to \Cref{lemma:HS_sparsity}.
\begin{lemma}
\label{lemma:0DT_spars}
Let $f: \{0, 1\}^n \to \{0, 1\}$ be an arbitrary boolean function.
Then
\[
\PzeroDT[f] = O( \FMBS[f] \cdot \log \spar[f] ).
\]
\end{lemma}
\begin{proof}
Let $k = \FHSC[f] = \FMBS[f]$. By \Cref{claim:transfer}
(applied with $\FHSC[f, 0^n] \le k$), there is an $i \in [n]$ that belongs to at least a
$(1 / k)$-fraction of $\mon[f]$. Query the
variable $x_i$ and let $b_i \in \{0, 1\}$ be the outcome.
Let $f' : \{0, 1\}^n \to \{0, 1\}$ be the function $f$ restricted to
$x_i = b_i$. Consider the sparsity of $f'$:
\begin{itemize}
\item If $x_i = 0$ then $|\mon[f']| \le (1 - 1 / k) |\mon[f]|$,
as setting $x_i = 0$ kills a $(1 / k)$-fraction of the
non-constant monomials. Thus, as long as $f$ is not a
constant function, $|\mon[f]| \ge 1$ and we have
\[
\spar[f'] \le
\spar(f) - |\mon[f]|/k \le
\spar[f] (1 - 1 / 2k).
\]
\item If $x_i = 1$ then $\spar(f') \le \spar[f]$, since fixing
variables to constants cannot increase the number monomials.
\end{itemize}
We repeat this process on the restricted function; by \Cref{claim:FHSC-is-nonincreasing},
every restriction encountered still has fractional hitting set complexity at most $k$, so
\Cref{claim:transfer} applies at each step, and we stop as soon as the restricted
function is constant. Let $t$ be the number of $0$'s queried along some path of the
obtained decision tree. Each $0$-answer multiplies the sparsity by a factor of at most
$(1 - 1/2k)$, while $1$-answers do not increase it, so the corresponding restriction has
sparsity at most $(1 - 1/2k)^t \spar[f] \leq e^{-t / 2k} \spar[f]$. Once this quantity is
less than $1$, the restriction is identically zero and hence constant. It follows that
every path queries at most $O(k \cdot \log \spar[f])$ zeroes, as required.
\end{proof}
\subsection{The log-rank conjecture}
A weak version of the log-rank conjecture for AND-functions, which
includes an additional $\log n$ factor, now follows quite readily
from the tools we have developed.
\logrank*
\begin{proof}
By \Cref{lemma:MBS_sparsity},
$\MBS(f) = O((\log r)^2)$.
By \Cref{lemma:FMBS_MBS}, $\FMBS(f) = O((\log r)^4)$.
By \Cref{lemma:0DT_spars}, $\PzeroDT[f] = O((\log r)^5)$.
By \Cref{claim:0DT_ADT} this gives us
an AND-decision tree of height $O((\log r)^5 \cdot \log n)$.
Finally, we convert the AND-decision tree for $f$ into a protocol for
$\andFunction{f}$ using \Cref{claim:cc_adt} with complexity
$O((\log r)^5 \cdot \log n)$.
\end{proof}
In particular, the log-rank conjecture for AND-functions is true for any
$f$ with $\spar[f] \geq n^c$ for any constant $c>0$. In some sense this is an extremely mild condition,
which random $f$ will satisfy with exceedingly high probability. On the other
hand, the log-rank conjecture is about \emph{structured} functions; rank and
communication complexity are both maximal for random functions, whereas we
are interested in low-complexity functions/low-rank matrices. It could very
well be the case that the ultra-sparse regime of $\spar(f) = n^{o(1)}$ is precisely
where the log-rank conjecture \emph{fails}. We therefore see removing
the $\log n$ factor as an essential problem left open by this work.
See \Cref{sec:discussion} for additional discussion.
\subsection{Lifting AND-functions}
\label{subsection:lifting}
Since $\log(\spar(f))$ lower bounds the deterministic communication
of $\andFunction{f}$, the log-rank result from the previous section
immediately implies a new upper bound on the AND decision tree complexity
of $f$. We can prove a better upper bound by making use of our stronger
assumption: instead of only assuming $\log(\spar(f))$ is small, we assume
that $\PCC[\andFunction{f}]$ is small.
If $f$ has large monotone block sensitivity, then its AND-function embeds
unique disjointness as a sub-function. The unique disjointness function
on $k$ bits, denoted $\UDISJ{k}$, takes two inputs $a, b \in \{0, 1\}^k$, and is defined
as the partial function:
\[
\UDISJ{k}(a,b) =
\begin{cases}
0 & \text{if } |a \land b| = 1 \\
1 & \text{if } |a \land b| = 0 \\
\text{undefined} & \text{otherwise}
\end{cases}\;,
\]
where $|\cdot|$ is the Hamming weight.
\begin{claim}\label{claim:MBS_UDISJ}
Let $f : \{0, 1\}^n \to \{0, 1\}$ be a boolean function with $\MBS(f) = k$.
Then $\andFunction{f}$ contains as a sub-matrix $\UDISJ{k}$. That is,
there are maps $\mathbf{x}, \mathbf{y} : \{0, 1\}^k \to \{0, 1\}^n$ and $c \in \{0, 1\}$
such that the following holds.
For any
$a, b \in \{0, 1\}^k$ which satisfy that $|a \land b| \in \{0, 1\}$, it
holds that
\[
\UDISJ{k}(a, b) = \andFunction{f}(\mathbf{x}(a), \mathbf{y}(b)) \oplus c.
\]
\end{claim}
\begin{proof}
Let $z,w_1, \dots, w_k \in \{0, 1\}^n$ be pairwise disjoint such that
$f(z) \ne f(z \lor w_i)$ for all $i \in [k]$. We may assume without loss of
generality that $f(z)=1$, otherwise replace $f$ with its negation, and set
$c = 1$.
Assume that Alice and Bob want to solve unique-disjointness on inputs
$a, b \in \{0, 1\}^k$,
which we identify with subsets of $[k]$. Define
\[
\mathbf{x}(a) = z \lor \bigvee_{i \in a} w_i, \qquad
\mathbf{y}(b) = z \lor \bigvee_{j \in b} w_j.
\]
Observe that
\[
\mathbf{x}(a) \land \mathbf{y}(b) =
\begin{cases}
z & \text{if } a \land b = \emptyset\\
z \lor w_i & \text{if } a \land b = \{i\}.
\end{cases}
\]
Thus, for all $a, b$ with $|a \land b| \le 1$, we get
$\UDISJ{k}(a,b) = f(\mathbf{x}(a) \land \mathbf{y}(b)) \oplus c
= \andFunction{f}(\mathbf{x}(a), \mathbf{y}(b)) \oplus c$, as claimed.
\end{proof}
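For a concrete instance, take $f = \mathrm{OR}_n$, whose monotone block sensitivity is $n$, witnessed at $z = 0^n$ by the singletons $w_i = e_i$. Here $f(z) = 0$, so the construction is applied to the negation and $c = 1$, and the maps become simply $\mathbf{x}(a) = a$ and $\mathbf{y}(b) = b$: indeed, $\mathrm{OR}(a \land b) \oplus 1$ equals $1$ precisely when $a$ and $b$ are disjoint, which agrees with $\UDISJ{n}(a, b)$ on its domain.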
It is well known that $\UDISJ{k}$ is hard with respect to several communication
complexity measures such as deterministic, randomized and nondeterministic.
\begin{theorem}[\cite{goos2018landscape, razborov1992distributional}]\label{thm:disj_lbs}
For any communication complexity measure $\Delta \in \set{\PCC, \mathrm{BPP}^{\text{cc}}, \mathrm{NP}^{\text{cc}}}$,
\[
\Delta(\UDISJ{k}) = \Omega(k).
\]
\end{theorem}
We immediately get the following corollary:
\begin{corollary}\label{corollary:PCC_MBS}
Let $f : \{0, 1\}^n \to \{0, 1\}$ be a boolean function and
$\Delta \in \set{ \PCC, \mathrm{BPP}^{\text{cc}}, \mathrm{NP}^{\text{cc}}}$ be a communication complexity
measure. Then $\MBS(f) = O(\Delta(\andFunction{f}))$.
\end{corollary}
\begin{proof}
Assume that $\MBS[f] = k$. \Cref{claim:MBS_UDISJ} shows that
any protocol for $\andFunction{f}$ also solves $\UDISJ{k}$. Hence by
\Cref{thm:disj_lbs} we have $k = O(\Delta(\andFunction{f}))$.
\end{proof}
Taking $\Delta = \PCC$, we obtain the main theorem of this section:
\lifting*
\begin{proof}
\Cref{claim:rank_sparsity} gives that
$\log \spar[f] = \log \rank[\andFunction{f}] \leq C$.
By \Cref{corollary:PCC_MBS}, $\MBS(f) = O(C)$.
By \Cref{lemma:FMBS_MBS}, $\FMBS(f) = O(C^2)$.
Combining this upper bound on $\FMBS$ with
the fact that $\log \spar[f] \leq C$, we see,
by \Cref{lemma:0DT_spars}, that $\PzeroDT[f] = O(C^3)$.
Finally, by \Cref{claim:0DT_ADT}, we get that
$\PandDT[f] = O(C^3 \cdot \log n)$.
\end{proof}
\section{Generalizations to non-boolean functions}
\label{sec:generalization}
In this section, we extend our conclusion to general multilinear polynomials
and set systems. The main observation is that all measures introduced in
\Cref{sec:prelim} are defined for general real-valued functions. In addition,
both \Cref{lemma:FMBS_MBS} and \Cref{lemma:HS_sparsity} are established for
real-valued functions. The following theorem holds true as the joint result
of these two lemmas.
\generalhsc*
\begin{proof}
By \Cref{lemma:FMBS_MBS}, together with $\FHSC = \FMBS$ (\Cref{claim:FMBS_eq_FHSC}), $\FHSC(f)=O(m^2)$. Then by \Cref{lemma:HS_sparsity},
we obtain the claimed bound.
\end{proof}
\subsection{Finite-range functions}
\Cref{lemma:MBS_sparsity} is not true for general multilinear polynomials.
Nevertheless, if we make the assumption that the multilinear polynomial's range
is finite, denote its size by $s$, then we can bound the monotone block
sensitivity by a polynomial of log-sparsity and $s$.
\begin{lemma}
Let $f: \{0, 1\}^n \to S$ be a multilinear polynomial
where $\spar[f] = r$ and $|S| = s$. Then $\MBS(f) = O(s^2 \log^2r)$.
\end{lemma}
\begin{proof}
Suppose $\MBS(f) = \MBS(f, z) = k$ for $z \in \{0, 1\}^n$, and let $a = f(z) \in S$.
Define a univariate real polynomial $p$ such that $p(a)=1$ and $p(b)=0$ for
$b \in S \setminus \set{a}$. There exists such a polynomial of degree
$\deg(p) = |S| - 1$, namely the Lagrange interpolation polynomial through these points.
Define a boolean function $g:\{0, 1\}^n \to \{0, 1\}$ by
$g(x) = p(f(x))$. Note that $\MBS(g, z) \ge k$ and $\spar[g] \le s \cdot r^{s - 1}$.
Then by \Cref{lemma:MBS_sparsity}, we have
$k = O(\log^2(\spar[g])) = O(s^2 \log^2r)$.
\end{proof}
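For instance, if $S = \{0, 1, 2\}$ and $a = 1$, one may take $p(t) = t(2 - t)$, which satisfies $p(1) = 1$, $p(0) = p(2) = 0$ and $\deg(p) = |S| - 1 = 2$.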
Combining it with \Cref{thm:gen_hsc},
one can bound the hitting set complexity of finite-range functions.
\maingeneral*
The following example shows that a polynomial dependency on the range size is necessary
in \Cref{thm:maingeneral}.
\begin{example}
Let $f(x)=x_1 + \dots + x_s$. Then $\spar[f] = s$, the range of $f$ has size
$s + 1$, and $\HSC[f] = s$.
\end{example}
\subsection{Set systems}
\Cref{thm:gen_hsc} can also be interpreted in the language of set systems.
\generalsetsystem*
\begin{proof}
Let $f(x)=\sum_{i=1}^r \prod_{j \in S_i} x_j$. Fix $m \ge 1$, and consider first
the case that $\MBS(f)<m$. In this case, by \Cref{thm:gen_hsc},
$\HSC[f] = O(m^2 \log r)$. Note that by construction, if $H$ is a hitting set
for the monomials of $f$ then $H$ is a hitting set for $\mathcal{F}$.
The other case is that $\MBS(f) \ge m$. Let $z \in \{0, 1\}^n$ be such that
$\MBS[f, z] \ge m$. By definition, this implies that $f_z$ has $m$ minimal
pairwise disjoint sets, which by \Cref{claim:minimal_monomials} implies that the
polynomial computing $f_z$ contains $m$ pairwise disjoint monomials. Each such
monomial is of the form $S_i \setminus T$ for $T = \set{j \ :\ z_j = 1}$.
\end{proof}
\section{Proof of \texorpdfstring{\Cref{theorem:sparsity-to-hitting}}{Theorem \ref{theorem:sparsity-to-hitting}}}
We recall the statement of \Cref{theorem:sparsity-to-hitting}.
\mainthm*
The proof relies on three lemmas which provide various relationships
between $\spar[f]$, $\MBS[f]$ and $\HSC[f]$, as well as their
fractional variants. In this subsection, we will state the lemmas and
show how \Cref{theorem:sparsity-to-hitting} follows as a consequence.
Then, in the following subsections, we prove the lemmas.
The first gives an upper bound on the monotone block sensitivity
of a \textit{boolean-valued} $f$ in terms of its sparsity.
\begin{restatable}{lemma}{lemmaone}
\label{lemma:MBS_sparsity}
For any boolean function $f : \{0, 1\}^n \to \{0, 1\}$,
$\MBS[f] = O(\log(\spar[f])^2)$.
\end{restatable}
\noindent We stress that this only holds for boolean-valued functions.
To some extent, we will be able to relax this condition when we
consider generalizations in \Cref{sec:generalization}. Additionally,
we note that this inequality can be very far from tight: \Cref{example:and-or}
gives a function with constant MBS but exponential sparsity.
The second lemma shows that $\FMBS$ and $\MBS$ are equivalent
up to a polynomial factor.
Unlike \Cref{lemma:MBS_sparsity}, this holds for any real-valued
function.
\begin{restatable}{lemma}{lemmatwo}
\label{lemma:FMBS_MBS}
For any function $f:\{0, 1\}^n \to \mathbb{R}$, $\FMBS[f] = O(\MBS[f]^2)$.
\end{restatable}
The third lemma, which also holds for any real-valued function,
upper bounds the hitting set complexity of $f$ in terms of $\FMBS(f)$
and $\spar[f]$.
\begin{restatable}{lemma}{lemmathree}
\label{lemma:HS_sparsity}
For any function $f : \{0, 1\}^n \to \mathbb{R}$,
$\HSC[f] \le \FMBS[f] \cdot \log(\spar[f])$.
\end{restatable}
\Cref{theorem:sparsity-to-hitting} now follows quite readily
from the three lemmas.
\begin{proof}[Proof of \Cref{theorem:sparsity-to-hitting}]
Fix a boolean function $f$ with sparsity $r$ as in the theorem statement.
By \Cref{lemma:MBS_sparsity}, $\MBS[f] = O((\log r)^2)$.
By \Cref{lemma:FMBS_MBS}, $\FMBS[f] = O((\log r)^4)$.
Finally, by \Cref{lemma:HS_sparsity}, $\HSC[f] \leq O((\log r)^5)$,
as desired.
\end{proof}
\subsection{MBS from sparsity}
We begin by proving \Cref{lemma:MBS_sparsity}.
\lemmaone*
The proof uses a
well-known relationship between the degree and the sensitivity
of boolean functions \cite{nisan1994degree}. The \textit{sensitivity} $S(f)$ of a
boolean function $f$ is the largest $s$ so that there exists
an input $z$ and $s$ coordinates $\set{i_1, \dots, i_s}$ so that
$f(z) \neq f(z \oplus e_{i_j})$ for all $j \in [s]$.
\begin{claim}[Nisan-Szegedy, \cite{nisan1994degree}]
For any boolean function $f: \{0, 1\}^n \to \{0, 1\}$, $S(f) = O(\deg(f)^2)$.
\end{claim}
\begin{proof}[Proof of \Cref{lemma:MBS_sparsity}]
Suppose $\MBS[f] = k$, witnessed by pairwise disjoint
$z, w_1, \dots, w_k \subseteq [n]$. Namely,
$f(z) \ne f(z \lor w_i)$ for $i \in [k]$.
Let $g : \{0, 1\}^k \to \{0, 1\}$ denote the function obtained from
$f$ by identifying variables in each $w_i$ and setting all
variables not occurring in any $w_i$ to the corresponding
bit in $z$. That is, $g(x) = f(z \lor \bigvee_{i : x_i = 1} w_i)$. Note
that $S(g) = k$, since $g(0) \ne g(e_i)$ for $i \in [k]$,
and $\spar[g] \le \spar[f]$.
Let $r = \spar[f]$. We will reduce the degree of $g$ to
$d = O(\log r)$ by repeating the following process $k / 2$ times:
set to zero the coordinate which appears in the largest number
of monomials of degree at least $d$.
Let $M_i$ denote the number of monomials of degree at least $d$
remaining after the $i$-th step. Initially $M_0 \le r$. Next,
note that if $M_i > 0$, then there is a variable that occurs in
at least a $d/k$ fraction of the monomials of degree $\ge d$.
We therefore obtain the recurrence $M_{i + 1} \le (1 - d / k) M_i$.
After $k / 2$ steps,
$M_{k / 2} \le (1 - d / k)^{k / 2} r \le \exp(-d / 2) r < 1$
for $d = O(\log r)$. As $M_{k / 2}$ is an integer, we obtain that
$M_{k / 2}$ is zero.
Let $h$ denote the function obtained by this restriction
process. Since $M_{k / 2} = 0$ we have $\deg(h) < d$. Moreover, since
$g$ had full sensitivity at $0^k$ and we restricted only $k / 2$
coordinates, $S(h) \ge k/2$. Finishing up, we have
$k/2 \le S(h) = O(\deg(h)^2) = O((\log r)^2)$, completing
the proof.
\end{proof}
\subsection{Fractional vs. integral solutions for MBS}
This subsection proves \Cref{lemma:FMBS_MBS}, restated here:
\lemmatwo*
We first need the following claim, which states
that any function $f: \{0, 1\}^n \to \mathbb{R}$
is not too sensitive to noise which
is $q$-smooth for $q \ll 1/\FMBS[f]$.
\begin{claim}\label{claim:bias_restrict}
Let $f:\{0, 1\}^n \to \mathbb{R}$, $z \in \{0, 1\}^n$ and $\mathcal{D}$ a distribution on
$\{0, 1\}^{[n] \setminus z}$. Assume that $\mathcal{D}$ is $q$-smooth for some
$q \in (0, 1]$. Then
\[
\Pr_{w \sim D}[f(z) \neq f(z \lor w)] \le q \cdot \FMBS[f, z].
\]
\end{claim}
\begin{proof}
Assume $\FMBS[f, z] = 1 / p$. We may assume $q < p$ as otherwise the
claim is trivial. Let $\delta = \Pr_{w \sim \mathcal{D}}[f(z) \neq f(z \lor w)]$; if
$\delta = 0$ the claim holds, so assume $\delta > 0$.
Let $\mathcal{D}'$ be the distribution $\mathcal{D}$ restricted to inputs $w$ such that
$f(z) \neq f(z \lor w)$. Observe that $\mathcal{D}'$ is $(q / \delta)$-smooth,
and is supported on inputs $w$ such that $f(z) \neq f(z \lor w)$. As
$\FMBS(f,z) = 1 / p$ we have $q / \delta \ge p$ which implies the claim.
\end{proof}
\begin{proof}[Proof of \Cref{lemma:FMBS_MBS}]
Let $\FMBS(f) = 1/p$. Let $z \in \{0, 1\}^n$ be such that $\FMBS(f,z) = 1/p$, and let $\mathcal{D}$ be a $p$-smooth distribution supported on $\mathcal{W}(f,z)$.
Fix $k$ to be determined later, and sample inputs
$w_1, \dots,w_k \sim \mathcal{D}$ independently. Let $u$ denote all the elements
that appear at least in two of the $w_i$, namely
\[
u = \bigvee_{i \neq j} \left( w_i \bigwedge w_j \right).
\]
The main observation is that $u$ is $q$-smooth for $q=(pk)^2$. This holds
since for every $\ell \in [n]$ we have
\[
\Pr[u_\ell=1] \le
\sum_{i \neq j} \Pr[(w_i)_\ell = 1, (w_j)_\ell = 1] \le
k^2 p^2.
\]
Define the following ``bad'' events:
\[
E_0 = [f(z) \neq f(z \lor u)], \quad E_t = [f(z \lor w_t) \neq
f(z \lor w_t \lor u)] \text{ for } t \in [k].
\]
We claim that $\Pr[E_t] \le q/p = p k^2$ for all $t=0, \dots,k$. The proof
for $E_0$ follows directly from \Cref{claim:bias_restrict}. To see why it
holds for $E_t$ for $t=1, \dots,k$, define $u_t$ to be the elements that
appear in two sets $w_i$, excluding $w_t$, namely
\[
u_t = \bigvee_{i \neq j, \; i,j \neq t} \left( w_i \bigwedge w_j \right).
\]
Observe that $w_t,u_t$ are independent, that $u_t$ is $(pk)^2$-smooth and
that $w_t \vee u = w_t \vee u_t$. Thus \Cref{claim:bias_restrict} gives that,
for any fixing of $w_t$, we have
\[
\Pr_{u_t}[f(z \vee w_t) \neq f(z \vee w_t \vee u_t) \; | \; w_t] \le
q \cdot \FMBS(f,z \vee w_t) \le q \cdot \FMBS(f) = q / p = pk^2.
\]
The claim for $E_t$ follows by averaging over $w_t$.
Pick $k = 1/(2\sqrt{p})$, meaning
$E_t$ occurs with probability at most $1/4$
for each $0 \leq t \leq k$. Then conditioning on
$\neg E_0$ will increase the probability
of any event by a factor of at most $1/(1 - 1/4) = 4/3$.
In particular, because $\Pr[E_t] \leq pk^2 = 1/4$ for any $t$,
we have $\Pr[E_t | \neg E_0] \leq 1/3$ for any $t \neq 0$.
This means that we can sample the $w_t$'s conditioned on
$\neg E_0$, and still
be sure that every $\neg E_t$ occurs with probability at least $2/3$.
Averaging, some setting of the $\{w_t\}$ satisfies $\neg E_0$
and at least $2/3$ of $\neg E_t$ for $1 \leq t \leq k$. Fix these $\{w_t\}$.
Define $z' = z \vee u$ and $w'_t = w_t \setminus u$.
For every $1 \leq t \leq k$ for which $\neg E_t$ holds, we have
\[
f(z') = f(z), \qquad
f(z' \vee w'_t) = f(z \vee w_t).
\]
Thus $f(z') \neq f(z' \vee w'_t)$ for at least
$2k/3$ choices of $w'_t$.
Moreover, $z',w'_1, \dots, w'_k$ are pairwise disjoint.
Hence $\MBS(f) \geq 2k/3$. This completes the proof,
by recalling that $k = 1/(2\sqrt{p})$ with $\FMBS(f) = 1/p$.
\end{proof}
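Tracking the constants, the argument shows (up to rounding $k$ down to an integer) that $\MBS(f) \ge \tfrac{2}{3} k = \tfrac{1}{3\sqrt{p}}$, that is, $\FMBS(f) = 1/p \le 9 \, \MBS(f)^2$.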
A notable feature of this proof is that we need to employ upper
bounds on the fractional block sensitivity for more than one choice
of input. This is actually necessary; there is a function $f$ based
on the projective plane for
which $\MBS(f, z) = 1$ and $\FMBS(f, z) \sim \sqrt{n}$ at a point $z$. See
\Cref{example:projective_plane} for details.
\subsection{Hitting sets from sparsity}
Our final lemma is an upper bound on the hitting set complexity of
any $f: \{0, 1\}^n \to \mathbb{R}$ in terms of $\FMBS(f)$ and $\log(\spar[f])$.
Recall that $\FMBS$ and $\FHSC$ are equal, so such an upper
bound implies that $\FHSC$ and $\HSC$ are polynomially related for
sparse boolean functions.
\lemmathree*
Before proving it, we need two straightforward
claims which we will use again later on.
The first allows us to find (non-uniformly) indices $i \in [n]$
which hit a large fraction of $\mon[f]$, given
that $f$ has small $\FMBS / \FHSC$ at $0^n$.
\begin{claim}
\label{claim:transfer}
Suppose $\FMBS[f, 0^n] = \FHSC[f, 0^n] = k$ and this is witnessed by a
distribution $\mathcal{D}$ over $[n]$. Then
\begin{enumerate}
\item $\Pr_{i \sim \mathcal{D}}[i \in w] \geq 1/k$
for every $w \in \mon[f]$. That is, $\mathcal{D}$ is also a fractional
hitting set for the \emph{monomials} of $f$.
\item There is some $i$ in the support of $\mathcal{D}$ which hits
a $1 / k$-fraction of $\mon[f]$.
\end{enumerate}
\end{claim}
\begin{proof}
Note that the second part of the claim follows from the first
by an averaging argument, so it suffices to prove the
first part of the claim.
Let $\mathcal{D}$, $\FHSC(f, 0) = k$ be as stated,
so that $\Pr_{i \sim \mathcal{D}}[i \in w] \geq 1/k$
for all $w \in \mathcal{W}(f, 0^n)$. By \Cref{claim:minimal_monomials},
it is the case that $\Pr_{i \sim \mathcal{D}}[i \in w] \geq 1/k$
for any minimal monomial $w$. Since every monomial in $\mon[f]$ contains a minimal
monomial, and $\Pr_{i \sim \mathcal{D}}[i \in w]$ is non-decreasing with respect to
taking supersets, we conclude that
$\Pr_{i \sim \mathcal{D}}[i \in w] \geq 1/k$ for every monomial $w \in \mon[f]$.
\end{proof}
The second claim says that $\FHSC(f)$ is non-increasing
under restrictions. For simplicity, we only consider
restrictions which set a single bit (which can be extended
to more bits by induction).
\begin{claim}
\label{claim:FHSC-is-nonincreasing}
Let $f : \{0, 1\}^n \to \mathbb{R}$ be a function, $i \in [n]$ and $b \in \{0, 1\}$.
Let $f':\{0, 1\}^{[n] \setminus \set{i}} \to \mathbb{R}$ be the function obtained
by restricting to inputs with $x_i = b$. Then
\[
\FHSC[f'] \le \FHSC[f].
\]
\end{claim}
\begin{proof}
Fix $z \in \{0, 1\}^{[n] \setminus \set{i}}$. We will show that
$\FHSC[f', z] \le \FHSC[f, z^*]$ where $z^* = z$ if $b = 0$ and
$z^* = z \cup \set{i}$ if $b = 1$. In either case,
$\FHSC[f',z] \le \FHSC(f)$ and hence $\FHSC(f') \le \FHSC(f)$.
Consider first the case of $b=0$, and assume that
$\FHSC[f, z] = 1 / p$. Recall that $f_z$ is the restriction of $f$
to inputs $x \ge z$, and that $\FHSC[f, z] = \FHSC[f_z, 0]$.
By definition, there is a distribution $\mathcal{D}$ over $[n]$ such that
for every $w \in \mon[f_z]$ we have $\Pr_{i \sim D}[w_i = 1] \ge p$.
Observe that $\mon[f'_z] \subset \mon[f_z]$ since setting a variable
to $0$ can only remove monomials. Thus we get $\FHSC[f', z] \le \FHSC[f, z]$.
Next, consider the case of $b=1$. Note that $f'_z = f_{z \cup \set{i}}$
and hence $\FHSC[f', z] = \FHSC[f, z \cup \set{i}]$.
\end{proof}
\begin{proof}[Proof of \Cref{lemma:HS_sparsity}]
Let $k = \FHSC(f) = \FMBS(f)$, $S_0 = \emptyset$, $f_0 = f$ and
perform the following iterative process. At time $t \geq 1$, let
$S_t = S_{t-1} \cup \{i_t\}$ where $i_t \in [n]$ is an index which
hits a $1/k$-fraction of $\mon[f_{t-1}]$; such an index is guaranteed to exist by
\Cref{claim:transfer}, since by \Cref{claim:FHSC-is-nonincreasing} we have
$\FHSC(f_{t-1}, 0) \le \FHSC(f_{t-1}) \le k$. Let $f_{t} = f_{t-1}|_{z_{i_t} = 0}$. At each
step, the restriction $z_{i_t} = 0$ sets every monomial containing
$i_t$ to zero, so the number of non-constant monomials shrinks to at most a
$(1 - 1/k)$-fraction of its previous value. Let $r_t = |\mon[f_t]|$. Since $S_t$
is a hitting set for $\mon[f]$ when $f_t$ has no non-zero monomials,
this process terminates with a hitting set when
\[
r_t \leq (1 - 1/k)^t r_0 \leq e^{- t/k} r_0 < 1.
\]
Therefore, taking $t = k \log r_0 \le k \log \spar[f]$ suffices, and so
$\HSC[f, 0^n] \le \FMBS[f] \cdot \log \spar[f]$. For a general input $z$, the same
argument applied to $f_z$ (which satisfies $\spar[f_z] \le \spar[f]$ and, by
\Cref{claim:FHSC-is-nonincreasing}, $\FHSC[f_z] \le \FHSC[f]$) gives the same bound on
$\HSC[f, z]$, completing the proof.
\end{proof}
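The iterative process above is a simple greedy procedure, which the following Python sketch implements directly; the input below, the minimal monomials of the $4$-bit Majority function at $0^n$, is an illustrative choice of ours.
\begin{verbatim}
from itertools import combinations

def greedy_hitting_set(monomials):
    """Greedily hit a family of monomials (non-empty subsets of variables)."""
    remaining = [m for m in monomials if m]
    H = set()
    while remaining:
        counts = {}
        for m in remaining:
            for i in m:
                counts[i] = counts.get(i, 0) + 1
        i_star = max(counts, key=counts.get)     # most frequent variable
        H.add(i_star)
        # "restricting x_{i_star} = 0" removes every monomial containing it
        remaining = [m for m in remaining if i_star not in m]
    return H

# Minimal monomials of the 4-bit Majority function at 0^n: all 2-element sets.
mons = [frozenset(c) for c in combinations(range(4), 2)]
print(sorted(greedy_hitting_set(mons)))   # a hitting set of size 3 = n/2 + 1
\end{verbatim}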
\section{Preliminaries}
\label{sec:prelim}
This section introduces a number of complexity measures used in the proofs
of our main results. We start by collecting some simple definitions, proceed
to define the complexity measures, and then provide some examples which
clarify some aspects of these definitions.
Throughout this section, fix a function $f : \{0, 1\}^n \to \mathbb{R}$ on the boolean cube.
We identify elements of $\{0, 1\}^n$ with subsets of $[n]$. Namely, we identify
$z \in \{0, 1\}^n$ with the set $\set{i \ :\ z_i=1}$, and shorthand
$[n] \setminus z = \set{i \ :\ z_i = 0}$.
Given two inputs $z, w \in \{0, 1\}^n$ we denote by $z \lor w$ their union and by
$z \land w$ their intersection. The partial order on $\{0, 1\}^n$ is defined by
the relation $z \leq w$, satisfied precisely when $z$ is a subset of $w$.
Define $f_z : \{0, 1\}^{[n] \setminus z} \to \mathbb{R}$ to be the restriction of $f$ to
inputs which are consistent with the $1$s in $z$; namely $f_z(w) = f(z \lor w)$.
Define
$\mathcal{W}(f, z) = \set{ w \in \{0, 1\}^{[n] \setminus z} \ :\ f(z) \neq f(z \lor w) }$
and note that it can be equivalently defined as
$\mathcal{W}(f, z) = \set{ w \in \{0, 1\}^{[n] \setminus z} \ :\ f_z(w) \neq f_z(0) }$.
Recall also the notation from the proof overview.
Any $f : \{0, 1\}^n \to \mathbb{R}$
can be written uniquely
as a multilinear real-valued polynomial
$f(x) = \sum_{s \subseteq [n]} \alpha_s \prod_{i \in s} x_i$. The \emph{sparsity}
of $f$, denoted $\spar(f)$, is the number of nonzero coefficients in the
polynomial expansion of $f$. Next, let
$\mon[f] = \set{s \subseteq [n] \ :\ \alpha_s \neq 0,\ s \neq \emptyset}$ denote the
set system of non-zero, non-constant monomials in $f$ when written as a multilinear
polynomial. We emphasize that the coefficient $\alpha_{\emptyset}$ is not included
in $\mon[f]$; $\alpha_{\emptyset}$ is inessential, since we are interested in hitting
sets for monomials and $\emptyset$ is trivially hit by any set. Observe that
$|\mon[f]| \in \set{\spar[f], \spar[f] - 1}$.
For any set system $\mathcal{F}$ over $[n]$, an element $z \in \mathcal{F}$
is \textit{minimal} if there does not exist $w \in \mathcal{F}$ with $w<z$.
\begin{claim}
\label{claim:minimal_monomials}
Fix $f: \{0, 1\}^n \to \mathbb{R}$, $z \in \{0, 1\}^n$
and $\mathcal{W}(f, z)$, $\mon[f_z]$ as above. Then, for
any $w \in \{0, 1\}^n$, $w$ is a minimal
element in $\mathcal{W}(f, z)$ if and only if $w$ is a minimal
element in $\mon[f_z]$.
\end{claim}
\begin{proof}
We assume for simplicity that $z = \emptyset$ so that $f_z(w) = f(w)$,
$f(\emptyset) = \alpha_\emptyset$
and write $\mathcal{W} = \mathcal{W}(f, \emptyset)$.
Suppose $w \in \mon[f]$ is a minimal element. Writing
$f$ as a multilinear polynomial, we get
$f(w) = \sum_{u \leq w} \alpha_u$.
Since $w$ is minimal in $\mon[f]$, we have $\alpha_u = 0$ for every $\emptyset \neq u < w$, so $f(w) = \alpha_\emptyset + \alpha_w$
and so $f(w) \neq f(\emptyset)$ and $w \in \mathcal{W}$. Additionally, $w$ is minimal
in $\mathcal{W}$ because if $w'<w$ then the non-constant terms of
$f(w') = \sum_{u \leq w'} \alpha_u$ are all $0$, hence $f(w')=f(0)$ and
$w' \not\in \mathcal{W}$.
In the other direction, suppose $w \in \mathcal{W}$ is a minimal
element. Assume there is $w'<w$ in $\mon[f]$; choosing such a minimal $w'$,
we would get $f(w') \ne f(0)$ which violates the minimality of $w$. Similarly,
if $w \not\in \mon[f]$ then we get $f(w) = \sum_{u \leq w} \alpha_u = f(0)$,
which violates the assumption that $w \in \mathcal{W}$. Thus $w$ is a minimal element
in $\mon[f]$.
\end{proof}
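As a small example, take $f(x) = x_1 \lor x_2 = x_1 + x_2 - x_1 x_2$ and $z = \emptyset$. The minimal elements of $\mathcal{W}(f, \emptyset)$ are $\{1\}$ and $\{2\}$, and these are exactly the minimal monomials of $f$; the monomial $\{1, 2\}$ lies in $\mon[f]$ and in $\mathcal{W}(f, \emptyset)$, but it is minimal in neither.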
\subsection{Monotone block sensitivity}
First, we consider \textit{monotone
block sensitivity}, a variant of the standard notion of \textit{block sensitivity}
due to Nisan and Szegedy~\cite{nisan1994degree}. In a nutshell, this is a
``directed'' restriction of block sensitivity, where we can only change an
input by flipping $0$'s to $1$'s. We also define MBS (and all other complexity
measures introduced later in this section) with respect to real-valued functions
over $\{0, 1\}^n$. This differs from block sensitivity, which
is usually (though not always) studied in the context of boolean-valued functions.
The generalization to real-valued $f$ will be immaterial to some of our proofs,
permitting us to draw more general conclusions regarding the monomial structure
of multilinear polynomials; see \Cref{sec:generalization} for more details.
Say that two inputs $z,w$ are disjoint if $z \land w = 0^n$;
namely, their corresponding sets are disjoint.
\begin{definition}[Monotone block sensitivity]
For $f: \{0, 1\}^n \to \mathbb{R}$ and $z \in \{0, 1\}^n$,
the \textit{monotone block sensitivity of $f$ at $z$},
denoted $\MBS(f, z)$, is the largest integer $k$
such that there exist $k$ pairwise disjoint inputs
$w_1, \dots, w_k \in \mathcal{W}(f, z)$. We denote $\MBS[f] = \max_z \MBS[f, z]$.
\end{definition}
For two motivating examples, observe that for the $n$-bit AND and OR functions
we have $\MBS[\text{AND}] = 1$ and $\MBS[\text{OR}] = n$, respectively.
\begin{remark}
We emphasize that $\mathcal{W}(f, z) \subseteq \{0, 1\}^{[n] \setminus z}$, so each $w_i$
is disjoint from $z$. This corresponds to the standard definition of block
sensitivity where we restrict each block $w_i$ to be disjoint from the support
of $z$.
\end{remark}
\begin{remark}
Suppose $w_1, \dots, w_k$ are minimal witnesses that $\MBS[f, z] = k$
in the sense that for any $i \in [k]$ there is no $w'_i < w_i$
so that $w'_i \in \mathcal{W}(f, z)$. Then by \Cref{claim:minimal_monomials},
each $w_i$ is a minimal element in $\mon[f_z]$.
\end{remark}
As alluded to in the proof overview, $\MBS[f, z]$
can be phrased as the value of a particular
set packing linear program (LP). Fixing $z$,
write $\mathcal{W} = \mathcal{W}(f, z)$. The program optimizes
over variables $a_w$ for each $w \in \mathcal{W}$.
\begin{align*}
\text{maximize } & \sum_{w \in \mathcal{W}} a_w \\
\text{subject to } & \sum_{w \in \mathcal{W}: w_i = 1} a_w \le 1 \quad \text{ for all } i \in [n]\\
& a_w \in \{0, 1\} \quad \text{for all } w \in \mathcal{W}
\end{align*}
\emph{Fractional monotone block sensitivity} ($\FMBS$) is obtained
by relaxing the constraints in the above LP, allowing variables
$a_w$ to assume non-integral values in $[0, 1]$. We use an alternative
formulation of $\FMBS$ whose equivalence to the LP formulation is simple to verify.
\begin{definition}[Smooth distribution]
A distribution $\mathcal{D}$ over $\{0, 1\}^n$ is said to be $p$-smooth if for any
$i \in [n]$ it holds that $\Pr_{w \sim \mathcal{D}}[w_i = 1] \le p$.
\end{definition}
\begin{definition}[Fractional monotone block sensitivity]
\label{def:FMBS}
The fractional monotone block sensitivity of a function
$f : \{0, 1\}^n \to \mathbb{R}$ at an input $z \in \{0, 1\}^n$, denoted
$\FMBS(f,z)$,
is equal to $1/p$, where $p > 0$ is the smallest number for which
there exists a $p$-smooth distribution $\mathcal{D}$ supported on a subset of
$\mathcal{W}(f, z)$. We denote $\FMBS[f] = \max_z \FMBS[f, z]$.
\end{definition}
\begin{remark}
To see the equivalence between this definition of $\FMBS$ and
the LP formulation, notice that a solution
$\set{a_w \ :\ w \in \mathcal{W}(f,z)}$ to the LP with $s = \sum a_w$ gives
rise to a $1/s$-smooth distribution $\mathcal{D}$ over $\mathcal{W}(f,z)$ via
$\mathcal{D}(w) = a_w / s$.
\end{remark}
\begin{remark}
Clearly, any solution to the integral program for $\MBS$
is a solution to the fractional program for $\FMBS$. Hence, both being
\emph{maximization} problems, $\FMBS$ upper bounds $\MBS$. Later, we
prove in \Cref{lemma:FMBS_MBS} that the converse of this inequality
holds in the sense that $\FMBS[f]$ is upper bounded by a polynomial
in $\MBS[f]$.
\end{remark}
\begin{remark}
Fractional block sensitivity (the non-monotone variant) was considered
by Tal in~\cite{tal2013properties}. Tal mentions explicitly the problem
of finding separations between fractional block sensitivity and sensitivity.
\end{remark}
\subsection{Hitting set complexity}
Next, we consider \emph{hitting set complexity}. This can be viewed
as a variant of \emph{certificate complexity},
a commonly-studied quantity in standard query complexity.
\begin{definition}[Hitting set complexity]
The hitting set complexity of a function $f : \{0, 1\}^n \to \mathbb{R}$
at an input $z \in \{0, 1\}^n$, denoted $\HSC[f, z]$, is the minimal
size of a set $H \subseteq [n]$ which intersects all sets in $\mathcal{W}(f,z)$.
In other words, for every $w \in \mathcal{W}(f, z)$ there is some
$i \in H$ so that $w_i = 1$.
We denote $\HSC[f] = \max_z \HSC[f, z]$.
\end{definition}
Similarly to $\MBS$, it is simple to see that the $n$-bit AND and OR functions
have $\HSC[\text{AND}_n] = 1$ and $\HSC[\text{OR}_n] = n$, respectively.
\begin{remark}
It suffices to consider $H \subseteq [n]$ which have non-empty
intersection with any \emph{minimal} element of $\mathcal{W}(f, z)$.
This is simply because if $H$ hits an element $w$
then it also hits every superset of $w$.
\end{remark}
\begin{remark}
Suppose $H \subseteq [n]$ with $|H| = b$ witnesses $\HSC[f, 0^n] = b$.
By the previous remark and \Cref{claim:minimal_monomials}, one can see
that $H$ is a hitting set for $\mon[f]$.
\end{remark}
We can also phrase $\HSC[f, z]$
as the value of a certain set covering LP. Putting
$\mathcal{W} = \mathcal{W}(f, z)$,
the LP optimizes over the variables $\set{b_i \ :\ i \in [n]}$ as follows:
\begin{align*}
\text{minimize} &\ \sum_{i \in [n]} b_i \\
\text{subject to} &\ \sum_{i \in [n]: w_i = 1} b_i \ge 1 \quad \text{ for all } w \in \mathcal{W}\\
&\ b_i \in \{0, 1\} \quad \text{ for all } i \in [n]
\end{align*}
One can easily verify that the fractional relaxation of this program is dual to the
fractional relaxation of the program defining monotone block sensitivity.
Fractional hitting set complexity is obtained from hitting
set complexity by relaxing each constraint $b_i \in \{0, 1\}$
to $b_i \in [0, 1]$. We give an alternative definition, equivalent
to the LP formulation:
\begin{definition}[Fractional hitting set complexity]
\label{def:FHSC}
The fractional hitting set complexity of a function
$f : \{0, 1\}^n \to \mathbb{R}$ at an input $z \in \{0, 1\}^n$,
denoted $\FHSC[f, z]$, is $1/p$, where $p > 0$ is the smallest
number for which there
exists a distribution $\mathcal{D}$ of indices $i \in [n]$ with the
property that $\Pr_{i \sim \mathcal{D}}[w_i =1 ] \ge p$
for each $w \in \mathcal{W}(f, z)$.
We denote $\FHSC[f] = \max_z \FHSC[f, z]$.
\end{definition}
\begin{remark}
The same reasoning as the $\FMBS$ case can be used to show that this definition is equivalent to the LP definition.
Also by analogous reasoning, $\FHSC[f, z] \leq \HSC[f, z]$ (recalling that $\FHSC$ is a minimization problem).
\end{remark}
The LPs defining $\FHSC$ and $\FMBS$ are dual, so
linear programming duality yields $\FHSC[f, z] = \FMBS[f, z]$. Combined
with the remarked-upon relationships between $\MBS / \FMBS$ and $\HSC / \FHSC$,
we conclude the following:
\begin{claim}
\label{claim:FMBS_eq_FHSC}
For any function $f: \{0, 1\}^n \to \mathbb{R}$ and input $z \in \{0, 1\}^n$,
\[
\MBS[f, z] \le \FMBS[f,z] = \FHSC[f, z] \le \HSC[f, z].
\]
\end{claim}
\subsection{Some informative examples}
To digest the definitions, some examples are in order.
We start by noting that there are large gaps in the inequalities
from \Cref{claim:FMBS_eq_FHSC} for \textit{fixed} $z$. These correspond
to integrality gaps for the set cover and hitting set linear programs (of
which $\FHSC[f, z]$ and $\FMBS[f, z]$ are a special case),
which are central to combinatorial optimization.
The first example gives a separation between $\FMBS[f, z]$ and $\MBS[f, z]$.
\begin{example}[Projective plane]
\label{example:projective_plane}
For a prime power $m$, let $P$ be the set of 1-dimensional
subspaces of $\mathbb{F}_m^3$ and $L$ the set of 2-dimensional
subspaces of $\mathbb{F}_m^3$. $P$ is the set of \emph{points}
and $L$ is the set of \emph{lines}.
Note that $|P| = |L| = m^2 + m + 1$.
It is well-known that $P$ and $L$ form a \emph{projective plane},
in that they enjoy the following relationship:
\begin{enumerate}
\item Any two points in $P$ are contained in exactly one
line in $L$. Moreover, each point is contained in $m + 1$ lines.
\item Any two lines in $L$ intersect at exactly one point in $P$.
Moreover, each line contains $m + 1$ points.
\item There are 4 points, no 3 of which lie on the same line.
\end{enumerate}
For more background on finite geometry,
see, for example, \cite{ball2011introduction}.
Let $n = m^2 + m + 1$, thinking of each $i \in [n]$ as corresponding
to a point $p_i \in P$. For lines $\ell \in L$, let
$S_\ell = \set{ i \in [n] \ :\ p_i \text{ is contained in } \ell}$ be the
set of (indices of) points incident to $\ell$ and define $f: \{0, 1\}^n \to \{0, 1\}$ as
\[
f(z) = \bigvee_{\ell \in L} \Big(\bigwedge_{i \in S_\ell} z_i\Big).
\]
Since any two lines intersect at a point, any $\ell_1, \ell_2 \in L$
have $S_{\ell_1} \cap S_{\ell_2} \neq \emptyset$. This implies
$\MBS(f, 0^n) = 1$. On the other hand, each line contains $m+1$ points, so the uniform
distribution over the inputs $\set{S_\ell \ :\ \ell \in L} \subseteq \mathcal{W}(f, 0^n)$
is $p$-smooth for $p = (m+1)/(m^2 + m + 1)$, and
therefore $\FMBS(f, 0^n) \ge (m^2 + m + 1)/(m + 1) \approx m \approx \sqrt{n}$.
\end{example}
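For the smallest case $m = 2$ (the Fano plane, $n = 7$), the incidence properties used above can be verified by brute force, as in the following Python sketch (the construction from $\mathbb{F}_2^3$ and all names are ours):
\begin{verbatim}
from itertools import product, combinations

points = [p for p in product([0, 1], repeat=3) if any(p)]   # 7 nonzero vectors
lines = []
for a in product([0, 1], repeat=3):
    if not any(a):
        continue
    line = frozenset(i for i, p in enumerate(points)
                     if sum(u * v for u, v in zip(a, p)) % 2 == 0)
    if line not in lines:
        lines.append(line)

assert len(points) == len(lines) == 7
assert all(len(l) == 3 for l in lines)                 # each line has m+1 = 3 points
assert all(len(l1 & l2) == 1 for l1, l2 in combinations(lines, 2))
assert all(sum(1 for l in lines if i in l) == 3 for i in range(7))

# The uniform distribution over the lines is therefore 3/7-smooth, giving
# FMBS(f, 0^n) >= 7/3, while MBS(f, 0^n) = 1 since any two lines intersect.
\end{verbatim}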
The next example gives a similar separation
between $\FHSC(f, z)$ and $\HSC(f, z)$.
\begin{example}[Majority]
\label{example:maj}
For $n$ even, let $f(z) = \mathbf{1}[\sum_i z_i \ge n/2]$ be the Majority function.
The minimal elements of $\mon[f]$ consist of sets $s$ with $n/2$ members.
Any set $H$ of size at most $n/2$ is disjoint from some set of size exactly $n/2$
(contained in $[n] \setminus H$), and hence fails to hit that minimal monomial.
Therefore any hitting set for the monomials of $f$, namely for $\mon[f]$, has size more than
$n/2$. In particular, $\HSC(f, 0^n) = n / 2 + 1$ (any $n/2 + 1$ indices intersect every
set of size $n/2$, so $n/2 + 1$ clearly suffices).
On the other hand, the uniform distribution over $[n]$ satisfies
$\Pr[i \in s] = 1/2$ for any minimal monomial $s \in \mon[f]$. Hence
$\FHSC[f, 0^n] \le 2$; in fact $\FHSC[f, 0^n] = 2$, since $\FHSC[f, 0^n] \ge \MBS(f, 0^n) = 2$.
\end{example}
\noindent These two examples show that it will be necessary to utilize
the fact that $\MBS$ and $\HSC$ are defined as the maximum over
all inputs.
The next example shows that $\HSC[f]$ and $\MBS[f]$ can be constant while
$\spar(f)$ grows exponentially.
\begin{example}[AND-OR]
\label{example:and-or}
Consider a string $z \in \{0, 1\}^{2n}$
written as $z = xy$ for $x,y \in \{0, 1\}^n$.
Define
\[
f(x,y) = \bigwedge_{j \in [n]} \left(x_j \lor y_j\right).
\]
One can verify that $\HSC[f] = \MBS[f] = 2$.
On the other hand, writing $f$ as a multilinear polynomial
yields
\[
f(x,y) = \prod_{j \in [n]} (x_j + y_j - x_j \cdot y_j),
\]
which has sparsity exactly $3^n$: each factor contributes one of three distinct monomials on its own pair of variables, so all $3^n$ products are distinct and no cancellations occur.
\end{example}
\noindent Note that this holds for the \textit{global} (i.e. maximizing
over $\{0, 1\}^n$) definitions of $\MBS$ and $\HSC$. To see the
significance of this example, recall from the proof overview that we
are interested in eventually showing $\MBS(f) \leq O((\log \spar(f))^2)$.
This example shows that this latter inequality can be very far from
tight; we are able to make up for this discrepancy by using the
low-sparsity assumption multiple times.
Finally, we include an example which will become relevant to our
applications to communication complexity in \Cref{sec:communication}.
\begin{example}[Redundant indexing]
\label{example:redundant-indexing}
Let $k \ge 1$, and
consider two sets of variables $\set{x_S}_{S \subseteq [k]}$ and
$\set{y_i}_{i \in [k]}$ of sizes $2^k$ and $k$, respectively.
Let $n = 2^k + k$ and define
\[
f(x, y) = \sum_{i \in [k]}
\left(\prod_{S \ :\ i \in S} x_S\right)
(1 - y_i) \left(\prod_{j \neq i} y_j\right).
\]
In words, $f(x, y) = 1$ when $y$ has weight exactly $k-1$ with $y_i = 0$
and $x_S = 1$ for every $S$ containing $i$.
By the multilinear representation, one can see that
the sparsity of $f$ is $2k \sim \log n$.
Moreover, $\HSC(f) \le 2$. To see why,
consider an input $z = (a, b)$ and note that $f$ restricted
to inputs $w = (x, y) \geq z$ becomes
\[
f'(x, y) = \sum_{i : b_i = 0}
\left(\prod_{S: i \in S, a_S = 0} x_S\right)
(1 - y_i)\left(\prod_{j: j \neq i, b_j = 0} y_j\right).
\]
In particular, if at least two of the variables $y_i, y_j$ remain free (that is,
$b_i = b_j = 0$), then $\set{y_i, y_j}$ hits all the monomials. Otherwise at most one
variable $y_i$ remains free; then either some $x_S$ with $i \in S$ and $a_S = 0$ remains
free, in which case that single variable hits all the monomials, or else the single
variable $y_i$ does (and if no $y_i$ remains free then $f'$ is constant and there is
nothing to hit).
\end{example}
We view this as an important example in understanding the $\log n$
factor currently present in the statements of \Cref{thm:logrank}
and \Cref{thm:lifting}. This connection will be discussed in
more detail in \Cref{sec:discussion}.
\section*{Supplementary Material for the manuscript ``New Insights into the Nature of Nonlinear Gravitational Waves''}
\subsection*{Deriving the Ricci Tensor}
The Ricci tensor can be expressed in terms of Christoffel symbols:
\begin{equation}
R_{\mu \nu}=\partial_{\rho} \Gamma^{\rho}_{\mu \nu}-\partial_{\nu} \Gamma^{\rho}_{\rho \mu}+ \Gamma^{\rho}_{\rho \lambda} \Gamma^{\lambda}_{\nu \mu}- \Gamma^{\rho}_{\nu \lambda} \Gamma^{\lambda}_{\rho \mu},
\end{equation}
which in turn can be expressed in terms of the metric of the spacetime we wish to describe:
\begin{equation}
\Gamma^{\rho}_{\mu \nu}=\frac{1}{2}g^{\rho \lambda} ( \partial_{\mu}g_{\lambda \nu} + \partial_{\nu}g_{ \mu \lambda} -\partial_{\lambda}g_{\mu \nu} ),
\end{equation}
\begin{equation}
\Gamma^{\rho}_{\rho \mu }=\frac{1}{2}g^{\rho \lambda} ( \partial_{\rho}g_{\lambda \mu} + \partial_{\mu}g_{ \rho \lambda} -\partial_{\lambda}g_{\rho \mu} ),
\end{equation}
\begin{equation}
\Gamma^{\rho}_{\rho \lambda }=\frac{1}{2}g^{\rho \alpha} ( \partial_{\rho}g_{\alpha \lambda} + \partial_{\lambda}g_{ \rho \alpha} -\partial_{\alpha}g_{\rho \lambda} ),
\end{equation}
\begin{equation}
\Gamma^{\lambda}_{\mu \nu}=\frac{1}{2}g^{ \lambda \beta} ( \partial_{\mu}g_{\beta \nu} + \partial_{\nu}g_{ \mu \beta} -\partial_{\beta}g_{\mu \nu} ),
\end{equation}
\begin{equation}
\Gamma^{\rho}_{\nu \lambda }=\frac{1}{2}g^{\rho \alpha} ( \partial_{\nu}g_{\alpha \lambda} + \partial_{\lambda}g_{ \nu \alpha} -\partial_{\alpha}g_{\nu \lambda} ),
\end{equation}
\begin{equation}
\Gamma^{\lambda}_{\rho \mu }=\frac{1}{2}g^{\lambda \beta} ( \partial_{\rho}g_{\beta \mu} + \partial_{\mu}g_{ \rho \beta} -\partial_{\beta}g_{\rho \mu} ).
\end{equation}
We can substitute our metric definitions into the Christoffel symbols:
\begin{align}
\Gamma^{\rho}_{\mu \nu}=\frac{1}{2}(\tilde{\eta}^{\rho \lambda}- h^{\rho \lambda}) ( \partial_{\mu}h_{\lambda \nu} + \partial_{\nu}h_{ \mu \lambda} -\partial_{\lambda}h_{\mu \nu} )
\end{align}
Note that
\begin{equation}
\Gamma^{\rho}_{\mu \nu}=\Gamma^{\rho}_{\nu \mu}
\end{equation}
and all metrics are symmetric. From here, we can find a definition of $R_{\mu \nu}$ up to order $h^2$:
\begin{align}
R_{\mu \nu} = \partial_{\rho} \bigg[ \frac{1}{2}(\tilde{\eta}^{\rho \lambda}- h^{\rho \lambda}) ( \partial_{\mu}h_{\lambda \nu} + \partial_{\nu}h_{ \mu \lambda} -\partial_{\lambda}h_{\mu \nu} ) \bigg]\nonumber \\ - \partial_{\nu} \bigg[ \frac{1}{2}(\tilde{\eta}^{\rho \lambda}- h^{\rho \lambda}) ( \partial_{\mu}h_{\lambda \rho} + \partial_{\rho}h_{ \mu \lambda} -\partial_{\lambda}h_{\mu \rho} ) \bigg]\nonumber \\ +\frac{1}{4} \bigg[ \tilde{\eta}^{\rho \alpha} ( \partial_{\rho}h_{\alpha \lambda} + \partial_{\lambda}h_{ \rho \alpha} -\partial_{\alpha}h_{\rho \lambda} )\tilde{\eta}^{ \lambda \beta} ( \partial_{\mu}h_{\beta \nu} + \partial_{\nu}h_{ \mu \beta} -\partial_{\beta}h_{\mu \nu} ) \bigg] \nonumber \\ - \frac{1}{4} \bigg[ \tilde{\eta}^{\rho \alpha} ( \partial_{\nu}h_{\alpha \lambda} + \partial_{\lambda}h_{ \nu \alpha} -\partial_{\alpha}h_{\nu \lambda} )\tilde{\eta}^{\lambda \beta} ( \partial_{\rho}h_{\beta \mu} + \partial_{\mu}h_{ \rho \beta} -\partial_{\beta}h_{\rho \mu} ) \bigg].
\end{align}
Raising indices for second order terms:
\begin{align}
R_{\mu \nu} = \partial_{\rho} \bigg[ \frac{1}{2}(\tilde{\eta}^{\rho \lambda}- h^{\rho \lambda}) ( \partial_{\mu}h_{\lambda \nu} + \partial_{\nu}h_{ \mu \lambda} -\partial_{\lambda}h_{\mu \nu} ) \bigg]&\nonumber \\ - \partial_{\nu} \bigg[ \frac{1}{2}(\tilde{\eta}^{\rho \lambda}- h^{\rho \lambda}) ( \partial_{\mu}h_{\lambda \rho} + \partial_{\rho}h_{ \mu \lambda} -\partial_{\lambda}h_{\mu \rho} ) \bigg]&\nonumber \\ +\frac{1}{4} \bigg[ ( \partial_{\rho}h_{\lambda}^{\rho} + \partial_{\lambda}h_{ \rho }^{\rho} -\partial^{\rho}h_{\rho \lambda} ) ( \partial_{\mu}h^{\lambda}_{ \nu} + \partial_{\nu}h_{ \mu }^{\lambda} -\partial^{\lambda} h_{\mu \nu} ) \bigg]& \nonumber \\ - \frac{1}{4} \bigg[ ( \partial_{\nu}h_{\lambda}^{\rho} + \partial_{\lambda}h_{ \nu }^{\rho} -\partial^{\rho} h_{\nu \lambda} )( \partial_{\rho}h_{ \mu}^{\lambda} + \partial_{\mu}h_{ \rho}^{\lambda} -\partial^{\lambda} h_{\rho \mu} ) \bigg] &.
\end{align}
Raising indices for first order terms:
\begin{align}
R_{\mu \nu} = \partial_{\rho} \bigg[ \frac{1}{2}( \partial_{\mu}h_{ \nu}^{\rho}+ h_{\lambda \nu}\partial_{\mu}h^{\rho \lambda}+ \partial_{\nu}h_{ \mu }^{\rho}+ h_{\mu \lambda }\partial_{\nu}h^{\rho \lambda} -\partial^{\rho}h_{\mu \nu} ) \bigg] &
\nonumber \\ - \partial_{\nu} \bigg[ \frac{1}{2}( \partial_{\mu}h_{ \rho}^{\rho}+ h_{\lambda \rho}\partial_{\mu}h^{\rho \lambda}+ \partial_{\rho}h_{ \mu }^{\rho}+ h_{\mu \lambda }\partial_{\rho}h^{\rho \lambda} -\partial^{\rho}h_{\mu \rho} )\bigg]&
\nonumber \\ +\frac{1}{4} \bigg[ ( \partial_{\rho}h_{\lambda}^{\rho} + \partial_{\lambda}h_{ \rho }^{\rho} -\partial^{\rho}h_{\rho \lambda} ) ( \partial_{\mu}h^{\lambda}_{ \nu} + \partial_{\nu}h_{ \mu }^{\lambda} -\partial^{\lambda} h_{\mu \nu} ) \bigg] &\nonumber \\ - \frac{1}{4} \bigg[ ( \partial_{\nu}h_{\lambda}^{\rho} + \partial_{\lambda}h_{ \nu }^{\rho} -\partial^{\rho} h_{\nu \lambda} )( \partial_{\rho}h_{ \mu}^{\lambda} + \partial_{\mu}h_{ \rho}^{\lambda} -\partial^{\lambda} h_{\rho \mu} ) \bigg] &.
\end{align}
Here, we have used the product rule together with $\partial_{\mu}g^{\rho \lambda} = -\partial_{\mu}h^{\rho \lambda}$ (valid to the order considered): $g^{\rho \lambda}\partial_{\mu}h_{\lambda \nu} = \partial_{\mu}(g^{\rho \lambda}h_{\lambda \nu}) - h_{\lambda \nu} \partial_{\mu}g^{\rho \lambda} = \partial_{\mu}(g^{\rho \lambda}h_{\lambda \nu}) + h_{\lambda \nu} \partial_{\mu}h^{\rho \lambda}$.
Finally, our expression for the Ricci curvature tensor simplifies to:
\begin{align} \label{riccifinal}
R_{\mu \nu} = \frac{1}{2}\bigg[ \partial_{\rho}\partial_{\mu}h_{ \nu}^{\rho}- \partial_{\rho}\partial^{\rho}h_{\mu \nu} -\partial_{\nu} \partial_{\mu}h_{ \rho}^{\rho} + \partial_{\nu}\partial^{\rho}h_{\mu \rho} \bigg] &\nonumber \\ +\frac{1}{2} \bigg[ \partial_{\rho} (h_{\lambda \nu}\partial_{\mu}h^{\rho \lambda}+h_{\mu \lambda }\partial_{\nu}h^{\rho \lambda} ) -\partial_{\nu}(h_{\lambda \rho}\partial_{\mu}h^{\rho \lambda} + h_{\mu \lambda }\partial_{\rho}h^{\rho \lambda} )\bigg] &\nonumber \\ +\frac{1}{4} \bigg[ ( \partial_{\rho}h_{\lambda}^{\rho} + \partial_{\lambda}h_{ \rho }^{\rho} -\partial^{\rho}h_{\rho \lambda} ) ( \partial_{\mu}h^{\lambda}_{ \nu} + \partial_{\nu}h_{ \mu }^{\lambda} -\partial^{\lambda} h_{\mu \nu} ) \bigg] &\nonumber \\ - \frac{1}{4} \bigg[ ( \partial_{\nu}h_{\lambda}^{\rho} + \partial_{\lambda}h_{ \nu }^{\rho} -\partial^{\rho} h_{\nu \lambda} )( \partial_{\rho}h_{ \mu}^{\lambda} + \partial_{\mu}h_{ \rho}^{\lambda} -\partial^{\lambda} h_{\rho \mu} ) \bigg] &.
\end{align}
As the indices are raised and lowered by the Minkowski metric for second order terms, we can recognise that some of the terms will cancel:
\begin{align}
R_{\mu \nu} = \frac{1}{2}\bigg[ \partial_{\rho}\partial_{\mu}h_{ \nu}^{\rho}- \partial_{\rho}\partial^{\rho}h_{\mu \nu} -\partial_{\nu} \partial_{\mu}h_{ \rho}^{\rho} + \partial_{\nu}\partial^{\rho}h_{\mu \rho} \bigg] &\nonumber \\ +\frac{1}{2} \bigg[ \partial_{\rho} (h_{\lambda \nu}\partial_{\mu}h^{\rho \lambda}+h_{\mu \lambda }\partial_{\nu}h^{\rho \lambda} ) -\partial_{\nu}(h_{\lambda \rho}\partial_{\mu}h^{\rho \lambda} + h_{\mu \lambda }\partial_{\rho}h^{\rho \lambda} )\bigg] &\nonumber \\ +\frac{1}{4} \bigg[ \partial_{\lambda}h_{ \rho }^{\rho} ( \partial_{\mu}h^{\lambda}_{ \nu} + \partial_{\nu}h_{ \mu }^{\lambda} -\partial^{\lambda} h_{\mu \nu} ) \bigg] &\nonumber \\ - \frac{1}{4} \bigg[ ( \partial_{\nu}h_{\lambda}^{\rho} + \partial_{\lambda}h_{ \nu }^{\rho} -\partial^{\rho} h_{\nu \lambda} )( \partial_{\rho}h_{ \mu}^{\lambda} + \partial_{\mu}h_{ \rho}^{\lambda} -\partial^{\lambda} h_{\rho \mu} ) \bigg]&,
\end{align}
Expanding the final bracket, by raising, lowering, and relabelling indices, we can show:
\begin{align}
( \partial_{\nu}h_{\lambda}^{\rho} + \partial_{\lambda}h_{ \nu }^{\rho} -\partial^{\rho} h_{\nu \lambda} )( \partial_{\rho}h_{ \mu}^{\lambda} + \partial_{\mu}h_{ \rho}^{\lambda} -\partial^{\lambda} h_{\rho \mu} ) = &\partial_{\nu}h_{\lambda}^{\rho} \partial_{\rho}h_{ \mu}^{\lambda} +\partial_{\nu}h_{\lambda}^{\rho} \partial_{\mu}h_{ \rho}^{\lambda} - \partial_{\nu}h_{\lambda}^{\rho} \partial^{\lambda} h_{\rho \mu} \nonumber \\ +& \partial_{\lambda}h_{ \nu }^{\rho} \partial_{\rho}h_{ \mu}^{\lambda} + \partial_{\lambda}h_{ \nu }^{\rho} \partial_{\mu}h_{ \rho}^{\lambda} - \partial_{\lambda}h_{ \nu }^{\rho} \partial^{\lambda} h_{\rho \mu} \nonumber \\ - & \partial^{\rho} h_{\nu \lambda}\partial_{\rho}h_{ \mu}^{\lambda} -\partial^{\rho} h_{\nu \lambda} \partial_{\mu}h_{ \rho}^{\lambda} + \partial^{\rho} h_{\nu \lambda}\partial^{\lambda} h_{\rho \mu} \nonumber \\ \nonumber \\
= &\partial_{\nu}h^{\rho \lambda} \partial_{\rho}h_{ \mu \lambda} +\partial_{\nu}h^{\lambda \rho} \partial_{\mu}h_{ \rho \lambda} - \partial_{\nu}h^{\rho \lambda} \partial_{\lambda} h_{\rho \mu} \nonumber \\ + &\partial^{\lambda}h_{ \nu \rho} \partial^{\rho}h_{ \mu \lambda} + \partial_{\lambda}h_{ \nu \rho} \partial_{\mu}h^{\rho \lambda} - \partial_{\lambda}h_{ \nu }^{\rho} \partial^{\lambda} h_{\rho \mu} \nonumber \\ -& \partial^{\rho} h_{\nu \lambda}\partial_{\rho}h_{ \mu}^{\lambda} -\partial_{\rho} h_{\nu \lambda} \partial_{\mu}h^{ \rho \lambda} + \partial^{\rho} h_{\nu \lambda}\partial^{\lambda} h_{\rho \mu} \nonumber \\ \nonumber \\
= &\partial_{\nu}h^{\lambda \rho} \partial_{\mu}h_{ \rho \lambda} +2 \partial^{\lambda}h_{ \nu \rho} \partial^{\rho}h_{ \mu \lambda} - 2\partial_{\lambda}h_{ \nu }^{\rho} \partial^{\lambda} h_{\rho \mu} .
\end{align}
From here, we find our final form of the Ricci tensor:
\begin{align} \label{riccifinal2}
R_{\mu \nu} & = \frac{1}{2}\bigg[ \partial_{\rho}\partial_{\mu}h_{ \nu}^{\rho}- \partial_{\rho}\partial^{\rho}h_{\mu \nu} -\partial_{\nu} \partial_{\mu}h_{ \rho}^{\rho} + \partial_{\nu}\partial^{\rho}h_{\mu \rho} \bigg] \\ \nonumber &+\frac{1}{2} \bigg[ \partial_{\rho} (h_{\lambda \nu}\partial_{\mu}h^{\rho \lambda}+h_{\mu \lambda }\partial_{\nu}h^{\rho \lambda} ) -\partial_{\nu}(h_{\lambda \rho}\partial_{\mu}h^{\rho \lambda} + h_{\mu \lambda }\partial_{\rho}h^{\rho \lambda} )\bigg] \\ \nonumber &+\frac{1}{4} \bigg[ \partial_{\lambda}h_{ \rho }^{\rho} ( \partial_{\mu}h^{\lambda}_{ \nu} + \partial_{\nu}h_{ \mu }^{\lambda} -\partial^{\lambda} h_{\mu \nu} ) \bigg] \nonumber \\ & - \frac{1}{4} \bigg[ \partial_{\nu} h_{\rho \lambda} \partial_{\mu}h^{\rho \lambda} - 2 \, \partial^{\rho} h_{\lambda \nu} \partial_{\rho} h^{\lambda}_{\mu} \, + 2 \, \partial^{\rho} h_{\lambda \nu} \partial^{\lambda} h_{\rho \mu} \bigg]. \nonumber
\end{align}
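To first order in $h$, the first bracket above reproduces the familiar linearized Ricci tensor, while the remaining brackets collect the contributions quadratic in $h$. As a consistency check, the following short \texttt{sympy} sketch computes the exact Ricci tensor of the metric $g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}$ for a transverse--traceless plane wave $h_{xx} = -h_{yy} = \epsilon \cos(t - z)$ and verifies that every component vanishes at first order in $\epsilon$, so that only terms quadratic in $h$ survive for such a wave; the choice of wave, the function names and the brute-force implementation are ours.
\begin{verbatim}
import sympy as sp

t, x, y, z, eps = sp.symbols('t x y z epsilon')
coords = [t, x, y, z]

# Transverse-traceless plane wave travelling in the z direction (+ polarization).
hp = eps * sp.cos(t - z)
h = sp.zeros(4, 4)
h[1, 1], h[2, 2] = hp, -hp

eta = sp.diag(-1, 1, 1, 1)
g = eta + h
ginv = g.inv()

# Christoffel symbols Gamma[rho][mu][nu] = Gamma^rho_{mu nu} of the full metric.
Gamma = [[[sp.simplify(sum(sp.Rational(1, 2) * ginv[rho, lam] *
                           (sp.diff(g[lam, nu], coords[mu]) +
                            sp.diff(g[mu, lam], coords[nu]) -
                            sp.diff(g[mu, nu], coords[lam]))
                           for lam in range(4)))
           for nu in range(4)] for mu in range(4)] for rho in range(4)]

# Exact Ricci tensor built from the Christoffel symbols as defined above.
R = sp.zeros(4, 4)
for mu in range(4):
    for nu in range(4):
        expr = 0
        for rho in range(4):
            expr += sp.diff(Gamma[rho][mu][nu], coords[rho])
            expr -= sp.diff(Gamma[rho][rho][mu], coords[nu])
            for lam in range(4):
                expr += Gamma[rho][rho][lam] * Gamma[lam][nu][mu]
                expr -= Gamma[rho][nu][lam] * Gamma[lam][rho][mu]
        R[mu, nu] = sp.simplify(expr)

# Every component should vanish at first order in epsilon (linearized vacuum).
for mu in range(4):
    for nu in range(4):
        assert sp.simplify(sp.series(R[mu, nu], eps, 0, 2).removeO()) == 0
\end{verbatim}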
\newpage
\subsection*{The Harmonic Gauge}
The full harmonic gauge can be expressed as
\begin{equation}
g^{\mu \nu}\Gamma_{\mu \nu}^{\rho}=0,
\end{equation}
which we can expand to find
\begin{equation}
\frac{1}{2}g^{\mu \nu}g^{\rho \lambda} ( \partial_{\mu}g_{\lambda \nu} + \partial_{\nu}g_{ \mu \lambda} -\partial_{\lambda}g_{\mu \nu} )=0.
\end{equation}
We can raise indices to find
\begin{equation}
g^{\rho \lambda}\partial^{\nu}g_{\lambda \nu} + g^{\rho \lambda}\partial^{\mu}g_{ \mu \lambda} -g^{\mu \nu}\partial^{\rho}g_{\mu \nu} =0,
\end{equation}
which we can simplify to
\begin{equation}
g^{\rho \lambda}\partial^{\nu}g_{\lambda \nu} - \frac{1}{2}g^{\mu \nu}\partial^{\rho}g_{\mu \nu} =0.
\end{equation}
The Minkowski metric is constant, so its derivatives vanish and we can write
\begin{equation}
g^{\rho \lambda}\partial^{\nu}h_{\lambda \nu} - \frac{1}{2}g^{\mu \nu}\partial^{\rho}h_{\mu \nu} =0.
\end{equation}
We can show that $\partial^{\rho}(g^{\mu \nu}h_{\mu \nu} ) = g^{\mu \nu}\partial^{\rho}h_{\mu \nu} - h_{\mu \nu}\partial^{\rho}h^{\mu \nu}$ , and therefore find
\begin{equation}
g^{\rho \lambda}\partial^{\nu}h_{\lambda \nu} - \frac{1}{2}\partial^{\rho}h - \frac{1}{2}h_{\mu \nu}\partial^{\rho}h^{\mu \nu} =0.
\end{equation}
For now, we shall leave the harmonic gauge in this form.
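We note that, to first order in $h$, this condition reduces to the familiar linearized harmonic (de Donder) gauge $\partial^{\nu}h^{\rho}_{\nu} = \frac{1}{2}\partial^{\rho}h$; the remaining terms are the second-order corrections that will be carried through the next subsection.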
\subsection*{Ricci Tensor in the Harmonic Gauge}
Earlier, we showed our Ricci tensor had the form
\begin{align} \label{}
R_{\mu \nu} = \frac{1}{2}\bigg[ \partial_{\rho}\partial_{\mu}h_{ \nu}^{\rho}- \partial_{\rho}\partial^{\rho}h_{\mu \nu} -\partial_{\nu} \partial_{\mu}h_{ \rho}^{\rho} + \partial_{\nu}\partial^{\rho}h_{\mu \rho} \bigg] &\\ \nonumber +\frac{1}{2} \bigg[ \partial_{\rho} (h_{\lambda \nu}\partial_{\mu}h^{\rho \lambda}+h_{\mu \lambda }\partial_{\nu}h^{\rho \lambda} ) -\partial_{\nu}(h_{\lambda \rho}\partial_{\mu}h^{\rho \lambda} + h_{\mu \lambda }\partial_{\rho}h^{\rho \lambda} )\bigg]& \\ \nonumber+\frac{1}{4} \bigg[ \partial_{\lambda}h_{ \rho }^{\rho} ( \partial_{\mu}h^{\lambda}_{ \nu} + \partial_{\nu}h_{ \mu }^{\lambda} -\partial^{\lambda} h_{\mu \nu} ) \bigg] &\nonumber \\ - \frac{1}{4} \bigg[ \partial_{\nu} h_{\rho \lambda} \partial_{\mu}h^{\rho \lambda} - 2 \, \partial^{\rho} h_{\lambda \nu} \partial_{\rho} h^{\lambda}_{\mu} \, + 2 \, \partial^{\rho} h_{\lambda \nu} \partial^{\lambda} h_{\rho \mu} \bigg]&. \nonumber
\end{align}
In what follows, we isolate the two terms relevant to the harmonic gauge, and rearrange them to a form which will allow us to make the substitution:
\begin{align} \label{}
R_{\mu \nu} = \frac{1}{2}\bigg[ \partial_{\mu}( \partial_{\rho}h_{ \nu}^{\rho} ) - \partial_{\rho}\partial^{\rho}h_{\mu \nu} -\partial_{\nu} \partial_{\mu}h_{ \rho}^{\rho} + \partial_{\nu} (\partial^{\rho}h_{\mu \rho} ) \bigg]& \\ \nonumber +\frac{1}{2} \bigg[ \partial_{\rho} (h_{\lambda \nu}\partial_{\mu}h^{\rho \lambda}+h_{\mu \lambda }\partial_{\nu}h^{\rho \lambda} ) -\partial_{\nu}(h_{\lambda \rho}\partial_{\mu}h^{\rho \lambda} + h_{\mu \lambda }\partial_{\rho}h^{\rho \lambda} )\bigg]& \\ \nonumber+\frac{1}{4} \bigg[ \partial_{\lambda}h_{ \rho }^{\rho} ( \partial_{\mu}h^{\lambda}_{ \nu} + \partial_{\nu}h_{ \mu }^{\lambda} -\partial^{\lambda} h_{\mu \nu} ) \bigg] &\nonumber \\ - \frac{1}{4} \bigg[ \partial_{\nu} h_{\rho \lambda} \partial_{\mu}h^{\rho \lambda} - 2 \, \partial^{\rho} h_{\lambda \nu} \partial_{\rho} h^{\lambda}_{\mu} \, + 2 \, \partial^{\rho} h_{\lambda \nu} \partial^{\lambda} h_{\rho \mu} \bigg]&,\nonumber
\end{align}
\begin{equation}
\partial^{\rho}h_{\mu \rho} = \partial_{\lambda}(g^{\rho \lambda}h_{\mu \rho}) - h_{\mu \rho} \partial_{\lambda} g^{\rho \lambda} = \partial_{\lambda}h^{\lambda}_{\mu} + h_{\mu \rho} \partial_{\lambda} h^{\rho \lambda},
\end{equation}
\begin{align} \label{}
R_{\mu \nu} = \frac{1}{2}\bigg[ \partial_{\mu}( \partial_{\rho}h_{ \nu}^{\rho} ) - \partial_{\rho}\partial^{\rho}h_{\mu \nu} -\partial_{\nu} \partial_{\mu}h_{ \rho}^{\rho} + \partial_{\nu} (\partial_{\rho}h^{\rho}_{\mu} + h_{\mu \lambda} \partial_{\rho} h^{\rho \lambda}) \bigg] & \\ \nonumber +\frac{1}{2} \bigg[ \partial_{\rho} (h_{\lambda \nu}\partial_{\mu}h^{\rho \lambda}+h_{\mu \lambda }\partial_{\nu}h^{\rho \lambda} ) -\partial_{\nu}(h_{\lambda \rho}\partial_{\mu}h^{\rho \lambda} + h_{\mu \lambda }\partial_{\rho}h^{\rho \lambda} )\bigg] &\\ \nonumber+\frac{1}{4} \bigg[ \partial_{\lambda}h_{ \rho }^{\rho} ( \partial_{\mu}h^{\lambda}_{ \nu} + \partial_{\nu}h_{ \mu }^{\lambda} -\partial^{\lambda} h_{\mu \nu} ) \bigg] &\nonumber \\ - \frac{1}{4} \bigg[ \partial_{\nu} h_{\rho \lambda} \partial_{\mu}h^{\rho \lambda} - 2 \, \partial^{\rho} h_{\lambda \nu} \partial_{\rho} h^{\lambda}_{\mu} \, + 2 \, \partial^{\rho} h_{\lambda \nu} \partial^{\lambda} h_{\rho \mu} \bigg] &,\nonumber
\end{align}
\begin{align}
\partial_{\rho}h_{ \nu}^{\rho}= g_{\lambda \nu} \partial_{\rho} h^{\rho \lambda} +h^{\rho \lambda}\partial_{\rho} h_{\lambda \nu},
\end{align}
\begin{align} \label{}
R_{\mu \nu} = \frac{1}{2}\bigg[ \partial_{\mu}( g_{\lambda \nu} \partial_{\rho} h^{\rho \lambda} +h^{\rho \lambda}\partial_{\rho} h_{\lambda \nu} ) - \partial_{\rho}\partial^{\rho}h_{\mu \nu} -\partial_{\nu} \partial_{\mu}h_{ \rho}^{\rho} \\+ \partial_{\nu} (g_{\lambda \mu} \partial_{\rho} h^{\rho \lambda} +h^{\rho \lambda}\partial_{\rho} h_{\lambda \mu} + h_{\mu \lambda} \partial_{\rho} h^{\rho \lambda}) \bigg] &\nonumber \\ \nonumber +\frac{1}{2} \bigg[ \partial_{\rho} (h_{\lambda \nu}\partial_{\mu}h^{\rho \lambda}+h_{\mu \lambda }\partial_{\nu}h^{\rho \lambda} ) -\partial_{\nu}(h_{\lambda \rho}\partial_{\mu}h^{\rho \lambda} + h_{\mu \lambda }\partial_{\rho}h^{\rho \lambda} )\bigg] &\\ \nonumber+\frac{1}{4} \bigg[ \partial_{\lambda}h_{ \rho }^{\rho} ( \partial_{\mu}h^{\lambda}_{ \nu} + \partial_{\nu}h_{ \mu }^{\lambda} -\partial^{\lambda} h_{\mu \nu} ) \bigg] &\nonumber \\ - \frac{1}{4} \bigg[ \partial_{\nu} h_{\rho \lambda} \partial_{\mu}h^{\rho \lambda} - 2 \, \partial^{\rho} h_{\lambda \nu} \partial_{\rho} h^{\lambda}_{\mu} \, + 2 \, \partial^{\rho} h_{\lambda \nu} \partial^{\lambda} h_{\rho \mu} \bigg] &. \nonumber
\end{align}
Now turning back to the harmonic gauge, we can manipulate the equation, such that it can be substituted into the tensor:
\begin{equation}
g^{\rho \lambda}\partial^{\nu}h_{\lambda \nu} - \frac{1}{2}\partial^{\rho}h - \frac{1}{2}h_{\mu \nu}\partial^{\rho}h^{\mu \nu} =0,
\end{equation}
changing the first term using the product rule:
\begin{equation}
\partial^{\nu}h^{\rho}_{\nu} +h_{\lambda \nu}\partial^{\nu}h^{\rho \lambda} - \frac{1}{2}\partial^{\rho}h - \frac{1}{2}h_{\mu \nu}\partial^{\rho}h^{\mu \nu} =0.
\end{equation}
Using the product rule again:
\begin{equation}
\partial_{\nu}h^{\rho \nu} +h^{\rho \lambda}\partial^{\nu}h_{\nu \lambda}+h_{\lambda \nu}\partial^{\nu}h^{\rho \lambda} - \frac{1}{2}\partial^{\rho}h - \frac{1}{2}h_{\mu \nu}\partial^{\rho}h^{\mu \nu} =0.
\end{equation}
We can rearrange to find
\begin{equation}
\partial_{\rho}h^{\rho \lambda} =-h^{\lambda \alpha}\partial^{\rho}h_{\rho \alpha}-h_{\rho \alpha}\partial^{\rho}h^{\alpha \lambda} + \frac{1}{2}\partial^{\lambda}h + \frac{1}{2}h_{\alpha \rho}\partial^{\lambda}h^{\alpha \rho}
\end{equation}
which when contracted with $g_{\lambda \nu}$ will give
\begin{equation}
g_{\lambda \nu} \partial_{\rho}h^{\rho \lambda} =-h_{\nu}^{ \alpha}\partial^{\rho}h_{\rho \alpha}-h_{\rho \alpha}\partial^{\rho}h^{\alpha}_{\nu} + \frac{1}{2}\partial_{\nu}h + \frac{1}{2}h_{\alpha \rho}\partial_{\nu}h^{\alpha \rho} .
\end{equation}
We can substitute the above into the first part of the Ricci tensor and simplify:
\begin{align}
\partial_{\mu}( -h_{\nu}^{ \alpha}\partial^{\rho}h_{\rho \alpha}-h_{\rho \alpha}\partial^{\rho}h^{\alpha}_{\nu} + \frac{1}{2}\partial_{\nu}h + \frac{1}{2}h_{\alpha \rho}\partial_{\nu}h^{\alpha \rho} +h^{\rho \lambda}\partial_{\rho} h_{\lambda \nu} ) - \partial_{\rho}\partial^{\rho}h_{\mu \nu} -\partial_{\nu} \partial_{\mu}h_{ \rho}^{\rho} \\+ \partial_{\nu} (-h_{\mu}^{ \alpha}\partial^{\rho}h_{\rho \alpha}-h_{\rho \alpha}\partial^{\rho}h^{\alpha}_{\mu} + \frac{1}{2}\partial_{\mu}h + \frac{1}{2}h_{\alpha \rho}\partial_{\mu}h^{\alpha \rho} +h^{\rho \lambda}\partial_{\rho} h_{\lambda \mu} + h_{\mu \lambda} \partial_{\rho} h^{\rho \lambda}) ,
\end{align}
\begin{align}
\partial_{\mu}( -h_{\nu}^{ \alpha}\partial^{\rho}h_{\rho \alpha} + \frac{1}{2}\partial_{\nu}h + \frac{1}{2}h_{\alpha \rho}\partial_{\nu}h^{\alpha \rho} ) - \partial_{\rho}\partial^{\rho}h_{\mu \nu} -\partial_{\nu} \partial_{\mu}h_{ \rho}^{\rho} \\+ \partial_{\nu} ( \frac{1}{2}\partial_{\mu}h + \frac{1}{2}h_{\alpha \rho}\partial_{\mu}h^{\alpha \rho} ) ,
\end{align}
\begin{align}
\partial_{\mu}( -h_{\nu}^{ \alpha}\partial^{\rho}h_{\rho \alpha} + \frac{1}{2}h_{\alpha \rho}\partial_{\nu}h^{\alpha \rho} ) - \partial_{\rho}\partial^{\rho}h_{\mu \nu} + \frac{1}{2}\partial_{\nu}(h_{\alpha \rho}\partial_{\mu}h^{\alpha \rho} ).
\end{align}
We can now look again at the full tensor. It should be clear that some of the second order terms will now cancel.
\begin{align} \label{}
R_{\mu \nu} = \frac{1}{2}\bigg[ \partial_{\mu}( -h_{\nu}^{ \alpha}\partial^{\rho}h_{\rho \alpha} + \frac{1}{2}h_{\alpha \rho}\partial_{\nu}h^{\alpha \rho} ) - \partial_{\rho}\partial^{\rho}h_{\mu \nu} + \frac{1}{2}\partial_{\nu}(h_{\alpha \rho}\partial_{\mu}h^{\alpha \rho} ) \bigg] &\\ \nonumber +\frac{1}{2} \bigg[ \partial_{\rho} (h_{\lambda \nu}\partial_{\mu}h^{\rho \lambda}+h_{\mu \lambda }\partial_{\nu}h^{\rho \lambda} ) -\partial_{\nu}(h_{\lambda \rho}\partial_{\mu}h^{\rho \lambda} + h_{\mu \lambda }\partial_{\rho}h^{\rho \lambda} )\bigg] & \\ \nonumber+\frac{1}{4} \bigg[ \partial_{\lambda}h_{ \rho }^{\rho} ( \partial_{\mu}h^{\lambda}_{ \nu} + \partial_{\nu}h_{ \mu }^{\lambda} -\partial^{\lambda} h_{\mu \nu} ) \bigg] &\nonumber \\ - \frac{1}{4} \bigg[ \partial_{\nu} h_{\rho \lambda} \partial_{\mu}h^{\rho \lambda} - 2 \, \partial^{\rho} h_{\lambda \nu} \partial_{\rho} h^{\lambda}_{\mu} \, + 2 \, \partial^{\rho} h_{\lambda \nu} \partial^{\lambda} h_{\rho \mu} \bigg]&.\nonumber
\end{align}
To second order, $\partial_{\mu} (h_{\alpha \rho}\partial_{\nu}h^{\alpha \rho} ) \approx \partial_{\nu} (h_{\alpha \rho}\partial_{\mu}h^{\alpha \rho} )$, which allows the terms to cancel:
\begin{align} \label{}
R_{\mu \nu} = \frac{1}{2}&\bigg[ \partial_{\mu}( -h_{\nu}^{ \alpha}\partial^{\rho}h_{\rho \alpha} ) - \partial_{\rho}\partial^{\rho}h_{\mu \nu} \bigg] \\ \nonumber +\frac{1}{2} &\bigg[ \partial_{\rho} (h_{\lambda \nu}\partial_{\mu}h^{\rho \lambda}+h_{\mu \lambda }\partial_{\nu}h^{\rho \lambda} ) -\partial_{\nu}( h_{\mu \lambda }\partial_{\rho}h^{\rho \lambda} )\bigg] \\ \nonumber+\frac{1}{4}& \bigg[ \partial_{\lambda}h_{ \rho }^{\rho} ( \partial_{\mu}h^{\lambda}_{ \nu} + \partial_{\nu}h_{ \mu }^{\lambda} -\partial^{\lambda} h_{\mu \nu} ) \bigg] \nonumber \\ - \frac{1}{4} &\bigg[ \partial_{\nu} h_{\rho \lambda} \partial_{\mu}h^{\rho \lambda} - 2 \, \partial^{\rho} h_{\lambda \nu} \partial_{\rho} h^{\lambda}_{\mu} \, + 2 \, \partial^{\rho} h_{\lambda \nu} \partial^{\lambda} h_{\rho \mu} \bigg],\nonumber
\end{align}
collecting the other second order terms:
\begin{align} \label{}
R_{\mu \nu} = \frac{1}{2}&\bigg[ - \partial_{\rho}\partial^{\rho}h_{\mu \nu} \bigg] \\ \nonumber +\frac{1}{2} &\bigg[ \partial_{\rho} (h_{\lambda \nu}\partial_{\mu}h^{\rho \lambda}+h_{\mu \lambda }\partial_{\nu}h^{\rho \lambda} ) - \partial_{\mu}( h_{\nu \lambda}\partial_{\rho}h^{\rho \lambda} ) -\partial_{\nu}( h_{\mu \lambda }\partial_{\rho}h^{\rho \lambda} )\bigg] \\ \nonumber+\frac{1}{4} &\bigg[ \partial_{\lambda}h_{ \rho }^{\rho} ( \partial_{\mu}h^{\lambda}_{ \nu} + \partial_{\nu}h_{ \mu }^{\lambda} -\partial^{\lambda} h_{\mu \nu} ) \bigg] \nonumber \\ - \frac{1}{4} &\bigg[ \partial_{\nu} h_{\rho \lambda} \partial_{\mu}h^{\rho \lambda} - 2 \, \partial^{\rho} h_{\lambda \nu} \partial_{\rho} h^{\lambda}_{\mu} \, + 2 \, \partial^{\rho} h_{\lambda \nu} \partial^{\lambda} h_{\rho \mu} \bigg],\nonumber
\end{align}
and expanding and cancelling, we find
\begin{align} \label{final}
R_{\mu \nu} = \frac{1}{2} & \bigg[ - \partial_{\rho}\partial^{\rho}h_{\mu \nu} \bigg] \\ \nonumber +\frac{1}{2} & \bigg[ \partial_{\rho} h_{\lambda \nu}\partial_{\mu}h^{\rho \lambda}+\partial_{\rho}h_{\mu \lambda }\partial_{\nu}h^{\rho \lambda} - \partial_{\mu} h_{\nu \lambda}\partial_{\rho}h^{\rho \lambda} -\partial_{\nu} h_{\mu \lambda }\partial_{\rho}h^{\rho \lambda} \bigg] \\ \nonumber+\frac{1}{4}& \bigg[ \partial_{\lambda}h_{ \rho }^{\rho} ( \partial_{\mu}h^{\lambda}_{ \nu} + \partial_{\nu}h_{ \mu }^{\lambda} -\partial^{\lambda} h_{\mu \nu} ) \bigg] \nonumber \\ - \frac{1}{4}& \bigg[ \partial_{\nu} h_{\rho \lambda} \partial_{\mu}h^{\rho \lambda} - 2 \, \partial^{\rho} h_{\lambda \nu} \partial_{\rho} h^{\lambda}_{\mu} \, + 2 \, \partial^{\rho} h_{\lambda \nu} \partial^{\lambda} h_{\rho \mu} \bigg].\nonumber
\end{align}
We leave this as the final form of our Ricci tensor in the harmonic gauge.
\subsection*{Gauge Invariance to Second Order}
To move to a new reference frame, the standard formula is
\begin{equation}
g'^{\mu \nu}= \frac{\partial x'^{\mu}}{\partial x^{\rho}} \frac{\partial x'^{\nu}}{\partial x^{\lambda}} g^{\rho \lambda}
\end{equation}
where
\begin{equation}
x'^{\mu}=x^{\mu} + \xi^{\mu} (x).
\end{equation}
This means the transformation is
\begin{equation}
\frac{\partial x'^{\mu}}{\partial x^{\rho}} = \delta^{\mu}_{\rho}+ \frac{\partial \xi^{\mu}(x)}{\partial x^{\rho}}.
\end{equation}
We can infer that the inverse transformation will take the form
\begin{equation}
\frac{\partial x^{\nu}} {\partial x'^{\mu}}= \delta^{\nu}_{\mu}- \frac{\partial \xi^{\nu}(x)}{\partial x^{\mu}} +\mathcal{O}(\xi^2)
\end{equation}
from the requirement that, as for the metric, $\Lambda^{\mu'}_{\nu}\Lambda^{\nu}_{\mu'}=4$, where $\Lambda^{\mu'}_{\nu}= \frac{\partial x'^{\mu}}{\partial x^{\nu}}$. The perturbation $\xi^{\mu}$ is typically assumed to be of the same order of magnitude as the perturbation $h_{\mu \nu}$. Hence, when calculating the effect of coordinate shifts on the metric, we must continue to second order, as in our other derivations. If we assume that the inverse transformation takes the form
\begin{equation}
\frac{\partial x^{\nu}} {\partial x'^{\mu}}= \delta^{\nu}_{\mu}- \frac{\partial \xi^{\nu}(x)}{\partial x^{\mu}} + \partial_{\mu}\xi^{\rho}(x)\partial_{\rho}\xi^{\nu}(x) +\mathcal{O}(\xi^3),
\end{equation}
then the condition,
\begin{equation}
\frac{\partial x'^{\mu}} {\partial x^{\nu}} \frac{\partial x^{\nu}} {\partial x'^{\mu}}= 4 - \delta^{\mu}_{\nu} \partial_{\mu}\xi^{\nu}(x) + \delta_{\mu}^{\nu} \partial_{\nu}\xi^{\mu}(x) +\delta^{\mu}_{\nu} \partial_{\mu}\xi^{\rho}(x)\partial_{\rho}\xi^{\nu}(x) - \partial_{\mu}\xi^{\nu}(x) \partial_{\nu}\xi^{\mu}(x) + \mathcal{O}(\xi^3) \approx 4,
\end{equation}
is satisfied to second order. We can now derive how the metric will change for a change in coordinates:
\begin{align}
g'_{\mu \nu}= \frac{\partial x^{\rho}}{\partial x'^{\mu}} \frac{\partial x^{\lambda}}{\partial x'^{\nu}} g_{\rho \lambda} = g_{\mu \nu} -g_{\rho \nu} \partial_{\mu} \xi^{\rho} -g_{\mu \lambda} \partial_{\nu} \xi^{\lambda} + \partial_{\nu}\xi^{\rho} \partial_{\mu} \xi_{\rho} + \partial_{\nu}\xi^{\rho}\partial_{\rho}\xi_{\mu} + \partial_{\mu}\xi^{\rho}\partial_{\rho}\xi_{\nu}
\end{align}
\begin{align}
g'^{\mu \nu}= g^{\mu \nu} + \partial^{\mu} \xi^{\nu} + \partial^{\nu} \xi^{\mu} + \partial_{\lambda} \xi^{\nu} \partial^{\lambda} \xi^{\mu}
\end{align}
If we insert the definition of the covariant and contravariant metric into the above expressions, we can find how our perturbation $h_{\mu \nu}$ will change under coordinate transformations:
\begin{align}
\eta_{\mu \nu} + h'_{\mu \nu}= \eta_{\mu \nu} + h_{\mu \nu} -g_{\rho \nu} \partial_{\mu} \xi^{\rho} -g_{\mu \lambda} \partial_{\nu} \xi^{\lambda} + \partial_{\nu}\xi^{\rho} \partial_{\mu} \xi_{\rho} + \partial_{\nu}\xi^{\rho}\partial_{\rho}\xi_{\mu} + \partial_{\mu}\xi^{\rho}\partial_{\rho}\xi_{\nu},
\end{align}
which simplifies to
\begin{align}
h'_{\mu \nu}= h_{\mu \nu} -g_{\rho \nu} \partial_{\mu} \xi^{\rho} -g_{\mu \lambda} \partial_{\nu} \xi^{\lambda} + \partial_{\nu}\xi^{\rho} \partial_{\mu} \xi_{\rho} + \partial_{\nu}\xi^{\rho}\partial_{\rho}\xi_{\mu} + \partial_{\mu}\xi^{\rho}\partial_{\rho}\xi_{\nu}.
\end{align}
For the contravariant metric:
\begin{align}
\tilde{\eta}^{\mu \nu} - h'^{\mu \nu} + h'^{\mu \lambda} h'^{\nu}_{\lambda}= \tilde{\eta} ^{\mu \nu} - h^{\mu \nu} + h^{\mu \lambda} h^{\nu}_{\lambda} + \partial^{\mu} \xi^{\nu} + \partial^{\nu} \xi^{\mu} + \partial_{\lambda} \xi^{\nu} \partial^{\lambda} \xi^{\mu},
\end{align}
\begin{align}
h'^{\mu \nu} - h'^{\mu \lambda} h'^{\nu}_{\lambda}= h^{\mu \nu} - h^{\mu \lambda} h^{\nu}_{\lambda} - \partial^{\mu} \xi^{\nu} - \partial^{\nu} \xi^{\mu} - \partial_{\lambda} \xi^{\nu} \partial^{\lambda} \xi^{\mu}.
\end{align}
We can examine the second order term on the left-hand side to find
\begin{align}
h'^{\mu \lambda} h'^{\nu}_{\lambda} = (h^{\mu \lambda} - \partial^{\mu} \xi^{\lambda} - \partial^{\lambda} \xi^{\mu})(h^{\nu}_{\lambda} - \partial_{\lambda} \xi^{\nu} - \partial^{\nu} \xi_{\lambda}) \\ = h^{\mu \lambda} h^{\nu}_{\lambda} - (\partial^{\mu} \xi^{\lambda} + \partial^{\lambda} \xi^{\mu})h^{\nu}_{\lambda} - (\partial_{\lambda} \xi^{\nu} + \partial^{\nu} \xi_{\lambda})h^{\mu \lambda},
\end{align}
which gives us an overall expression for the change in $h^{\mu \nu}$:
\begin{align}
h'^{\mu \nu} = h^{\mu \nu} - \partial^{\mu} \xi^{\nu} - \partial^{\nu} \xi^{\mu} - \partial_{\lambda} \xi^{\nu} \partial^{\lambda} \xi^{\mu} - (\partial^{\mu} \xi^{\lambda} + \partial^{\lambda} \xi^{\mu})h^{\nu}_{\lambda} -(\partial_{\lambda} \xi^{\nu} + \partial^{\nu} \xi_{\lambda})h^{\mu \lambda}.
\end{align}
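As a consistency check, at first order in $h_{\mu \nu}$ and $\xi^{\mu}$ these results reduce to the familiar linearised gauge transformations
\begin{equation}
h'_{\mu \nu} \simeq h_{\mu \nu} - \partial_{\mu}\xi_{\nu} - \partial_{\nu}\xi_{\mu}\,, \qquad h'^{\mu \nu} \simeq h^{\mu \nu} - \partial^{\mu}\xi^{\nu} - \partial^{\nu}\xi^{\mu}\,,
\end{equation}
with the remaining terms encoding the second-order corrections.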
\subsection*{Slowly Varying}
We can assume that the amplitudes of the gravitational wave are slowly varying, so that the derivatives of these amplitudes can be treated as small. In this case, higher-order terms containing multiple derivatives can be approximated by treating the perturbation as a plane wave. For a derivative of the perturbation appearing in a second-order term:
\begin{equation}
\partial_{\mu}h_{\rho \nu}(x) \approx i k_{\mu} [ a_{\rho \nu}(x) e^{i k_{\lambda}x^{\lambda}} - a^*_{\rho \nu}(x) e^{-i k_{\lambda}x^{\lambda}} ] = i k_{\mu} \bar{h}_{\rho \nu}(x).
\end{equation}
This gives us the following replacements for second order terms:
\begin{equation}
\partial_{\lambda} h^{\rho \lambda}\partial_{\mu}h_{\rho \nu} \approx -k_{\lambda} k_{\mu} \bar{h}^{\rho \lambda}\bar{h}_{\rho \nu},
\end{equation}
\begin{equation}
h_{\lambda \nu}\partial_{\rho}\partial_{\mu}h^{\rho \lambda} \approx -k_{\rho} k_{\mu} h_{\lambda \nu}h^{\rho \lambda}.
\end{equation}
We can input these replacements into our final equation,
\begin{align} \label{ricciharm2}
\Box_{\eta}h_{ \mu \nu} = -& \bigg[ \partial_{\rho} h^{\rho \lambda} \partial_{\lambda}h_{ \mu \nu} + h^{\rho \lambda} \partial_{\rho} \partial_{\lambda}h_{ \mu \nu} \bigg] \\ \nonumber + & \bigg[ \partial_{\rho} h_{\lambda \nu}\partial_{\mu}h^{\rho \lambda}+\partial_{\rho}h_{\mu \lambda }\partial_{\nu}h^{\rho \lambda} - \partial_{\mu} h_{\nu \lambda}\partial_{\rho}h^{\rho \lambda} -\partial_{\nu} h_{\mu \lambda }\partial_{\rho}h^{\rho \lambda} \bigg] \\ \nonumber+\frac{1}{2}& \bigg[ \partial_{\lambda}h_{ \rho }^{\rho} ( \partial_{\mu}h^{\lambda}_{ \nu} + \partial_{\nu}h_{ \mu }^{\lambda} -\partial^{\lambda} h_{\mu \nu} ) \bigg] \nonumber \\ - \frac{1}{2}& \bigg[ \partial_{\nu} h_{\rho \lambda} \partial_{\mu}h^{\rho \lambda} - 2 \, \partial^{\rho} h_{\lambda \nu} \partial_{\rho} h^{\lambda}_{\mu} \, + 2 \, \partial^{\rho} h_{\lambda \nu} \partial^{\lambda} h_{\rho \mu} \bigg], \nonumber
\end{align}
to find
\begin{align}
\Box_{\eta}h_{ \mu \nu} = -& \bigg[ -k_{\rho} k_{\lambda} \bar{h}^{\rho \lambda}\bar{h}_{ \mu \nu} -k_{\rho} k_{\lambda} h^{\rho \lambda}h_{ \mu \nu} \bigg] \\ \nonumber +\frac{1}{2} &\bigg[ -k_{\lambda}k_{\mu} \bar{h}_{ \rho }^{\rho} h^{\lambda}_{ \nu} -k_\lambda k_{\nu}\bar{h}\bar{h}_{ \mu }^{\lambda} \bigg] \nonumber \\ - \frac{1}{2} & \bigg[ -k_{\nu}k_{\mu} \bar{h}_{\rho \lambda} \bar{h}^{\rho \lambda} \, - 2 \, k^{\rho} k^{\lambda} \bar{h}_{\lambda \nu} \bar{h}_{\rho \mu} \bigg]. \nonumber
\end{align}
As discussed in the paper, these terms vanish for longitudinal waves. For transverse waves the left-hand side cannot match the right-hand side, which suggests the approximation is too extreme and that higher-order terms involving derivatives must be kept (as is standard when the leading terms vanish). A solution of the above equation can be found if gravitational waves are taken to be a mixture of transverse and longitudinal modes, as discussed in the paper; for accuracy, however, one should still account for the vanishing terms, otherwise the approximation is not valid.
\end{document}
\section{Introduction}
General relativity is a theory of the action of gravity in space and
time. The dynamics of the gravitational field is constrained by
Einstein's classical field equations. These are non-linear tensorial
equations, owing to the self-interaction of the gravitational field,
and they are notoriously difficult to solve. It is therefore important to develop
efficient methods for studying gravity in various regimes.
General relativity can be embedded in quantum theory where the gravitational
force results from the exchange of a quantized massless spin-2 graviton
field~\cite{tHooft:1974toh,Veltman:1975vx,DeWitt:1967yk,DeWitt:1967ub,DeWitt:1967uc}. One
can
then consider the Einstein-Hilbert term as the first term of a low-energy
effective action containing an infinite number of higher derivative operators~\cite{Donoghue:1994dn}.
The classical limit $\hbar\to0$ has been studied by Duff
in~\cite{Duff:1973zz} where he showed how to reproduce the classical Schwarzschild
metric in four dimensions from quantum tree graphs up to
the second order $O(G_N^2)$ in Newton's constant.
The relation between the quantum theory of gravity and the classical
Einstein's theory of general relativity has received a new
interpretation with the understanding~\cite{Iwasaki:1971vb,BjerrumBohr:2002ks,Holstein:2004dn,Donoghue:1996mt,Bjerrum-Bohr:2018xdl,Kosower:2018adc} that an appropriate (and
subtle) $\hbar\to0$ limit of quantum multi-loop
scattering gravitational amplitudes leads to higher $G_N$-order
classical gravity contributions. Considering the importance of such an
approach for the evaluation of the post-Minkowskian expansion of
the gravitational two-body scattering~\cite{Cheung:2018wkq, Bern:2019nnu,Bern:2019crd,Chung:2019duq,Kalin:2019rwq,Cheung:2020gyp,1821624}, we use the procedure given
in~\cite{Bjerrum-Bohr:2018xdl} for extracting the classical
contributions from the multi-loop vertex function of a graviton emission from a
massive scalar field to recover the Schwarzschild-Tangherlini metric in
various dimensions.
The scattering amplitude approach works in general
dimensions~\cite{Collado:2018isu,KoemansCollado:2019ggb,Cristofoli:2020uzm,Jakobsen:2020ksu}
and gives the opportunity to explore general relativity in higher dimensions~\cite{Emparan:2008eg,Emparan:2013moa}. At tree-level and one-loop our results agree
with the general dimension results
in~\cite{Collado:2018isu,Jakobsen:2020ksu}. We show how to
reconstruct the metric up to the fourth order $O(G_N^4)$ in Newton's
constant by evaluating the scattering amplitudes up to three-loop order.
Using the procedure designed in~\cite{Bjerrum-Bohr:2018xdl} we argue,
in section~\ref{sec:class-contr}, that the classical contribution at
$l$-loop order is given by the two-point $l$-loop massless sunset
graphs. We verify this explicitly by
evaluating the classical limit of the quantum scattering amplitudes up to
three-loop order.
The scattering amplitudes develop ultraviolet divergences. In section~\ref{sec:nonmin}, we show how
to recover the finite static Schwarzschild-Tangherlini metric by the addition of non-minimal
couplings given schematically by (see~\eqref{e:Sctn}
for a precise expression)
\begin{equation}
\delta^{(n)}S^{\rm ct.} \sim (G_N m)^{2n\over d-2} \int d^{d+1}x \sqrt{-g} \,
\nabla^{2(n-1)} \mathcal R_{\mu\nu} \partial^\mu \phi \partial^\nu\phi\,.
\end{equation}
In four dimensions the non-minimal couplings $\delta^{(1)}S^{\rm ct.}$ have been introduced
in~\cite{Goldberger:2004jt} for the analysis up to the third
post-Minkowskian order in the context of the world-line formalism. The
relation between the world-line formalism and the amplitude
approach is detailed in~\cite{1821624}.
Higher-derivative couplings with $n\geq2$ would be needed in four
dimensions from the fifth post-Minkowskian order, but they appear at
lowest order in higher dimensions.
Indeed, we show that in five
dimensions one needs to consider higher orders of the non-minimal
couplings: $\delta^{(2)}S^{\rm ct.}$ at the third post-Minkowskian
order and $\delta^{(3)}S^{\rm ct.}$ at the fourth post-Minkowskian order. Interestingly, the metric
components are finite in space-time dimensions greater than or equal to six, although the stress-tensor
develops ultraviolet divergences from one-loop order in odd dimensions and
from two-loop order in even dimensions. These
divergences are cancelled by the non-minimal couplings
$\delta^{(n)}S^{\rm ct.}$.
Actually, we expect
that an all-order computation in perturbation theory will require an
infinite set of such non-minimal couplings.
We show that the effects of the non-minimal couplings can be
reabsorbed by a coordinate transformation, and they do not affect the
Schwarzschild-Tangherlini space-time geometry. Since we work in
the fixed de Donder gauge, we give the coordinate transformation
for extracting the classical space-time metric from the scattering amplitudes in
that gauge. Although general
relativity is coordinate system invariant, our analysis shows that
there is a preferred coordinate system when extracting the classical
geometry from scattering amplitudes in the de Donder gauge. The
lowest-order $n=1$ non-minimal couplings have been shown to arise from
the gauge fixing in~\cite{BjerrumBohr:2006mz,Jakobsen:2020ksu,1821624}. We will not address the question of the gauge dependence,
but we remark that the choice of coordinate system (or gauge) can be
critical for finding solutions to Einstein's
equations~\cite{Fromholz:2013hka}.
Since ``black hole formation is a
robust prediction of the general theory of
relativity''~\cite{nobelpenrose}, it is satisfying to be able to
embed such classical solutions in the new understanding of the
relation between general relativity and the quantum theory of gravity.
The paper is organised as follows. In section~\ref{sec:schw} we set up
the connection between the perturbative expansion of the vertex function
for the emission of a graviton from a massive scalar field and the
post-Minkowskian expansion of the static metric in $d+1$ dimensions.
In section~\ref{sec:class-contr} we
show that the classical contribution from the
multi-loop amplitudes is given by the massless sunset multi-loop
integrals in $d$ dimensions. In
section~\ref{sec:mast-integr-class} we evaluate the master integrals. In section~\ref{sec:class-metr-pert} we
derive the metric component up to the order $O(G_N^4)$ by computing
the relevant amplitudes up to three-loop order in $d+1$ dimensions.
In section~\ref{sec:nonmin} we compute the non-minimal couplings
required for cancelling the ultraviolet divergences in the amplitude computation.
In
section~\ref{sec:deDonder} we solve Einstein's equations in
four ($d=3$), five ($d=4$) and six ($d=5$) dimensions in the de Donder gauge, and we show
in section~\ref{sec:matching-amplitude} how these results match the results derived from the amplitude computations.
In
section~\ref{sec:discussion} we give
an interpretation of the results in this paper. The appendix~\ref{sec:FT}
contains formul\ae{} for the Fourier transforms used in the text,
and appendix~\ref{sec:vertices} the vertices for the scattering
amplitude computations.
\section{The Schwarzschild-Tangherlini metric from scalar field amplitudes}
\label{sec:schw}
The Schwarzschild-Tangherlini metric is obtained from the gravitational scattering
of a scalar field of mass $m$, with action
\begin{equation}
\mathcal{S}=\int d^{d+1}x \sqrt{-g}\left({R\over 16\pi G_N}+
\frac{1}{2} g^{\mu\nu}\partial_{\mu}\phi \partial_{\nu}\phi-\frac{1}{2}m^2\phi^2\right)\,.
\end{equation}
For further reference Newton's constant has length dimensions
$[G_N]=(length)^{d-1}$, the scalar field has dimension
$[\phi]=(length)^{1-d}$ and the mass $[m]=(length)^{-1}$. We work with
the mostly negative signature $(+,-,\cdots,-)$ metric.
The graviton emission from a scalar particle of mass $m$, with $p_1^2=p_2^2=m^2$,
is given by the three-point vertex function
\begin{equation}\label{e:M3pt}
\mathcal M_3(p_1,q) =\qquad
\begin{gathered}
\begin{fmffile}{gravemission}
\begin{fmfgraph*}(100,100)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmftop{v3}
\fmfbottom{v4}
\fmfrpolyn{smooth,filled=30}{G}{3}
\fmf{fermion,tension=2.5,label=$p_1$}{i1,G1}
\fmf{fermion,tension=2.5,label=$p_2$}{G2,i2}
\fmf{dbl_wiggly,tension=.5,label=${q}\qquad$}{G3,o1}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}\quad .
\end{equation}
At each loop order we extract the $l$-loop contribution to the
transition density of the stress-energy tensor
$\langle T_{\mu\nu}(q^2)\rangle=\sum_{l\geq0} \langle T^{(l)}_{\mu\nu}(q^2)\rangle$
\begin{equation}\label{e:MtoT}
i\mathcal M^{ (l )}_3(p_1,q) =-{i\sqrt{32\pi G_N}\over2}
\langle T^{(l)\, \mu\nu}(q^2) \rangle\epsilon_{\mu\nu}
\end{equation}
where $\epsilon^{\mu\nu}$ is the polarisation of the graviton and
$q=p_1-p_2$ is the momentum transfer.
The scattering amplitude computation is not done in the harmonic gauge
coordinates $g^{\mu\nu}\Gamma^\lambda_{\mu\nu}(g)=0$ but in the \textit{de
Donder gauge}
coordinate system~\cite{Veltman:1975vx,Goldberger:2004jt,Cheung:2020gyp,Collado:2018isu,Jakobsen:2020ksu}
\begin{equation}\label{e:deDonderGauge}
\eta^{\mu\nu}\Gamma^\lambda_{\mu\nu}(g)= \eta^{\mu\nu}g^{\lambda\rho}\left({\partial g_{\rho\mu}\over \partial x^\nu}+ {\partial g_{\rho\nu}\over \partial x^\mu}-{\partial g_{\mu\nu}\over \partial x^\rho}\right)=0\,,
\end{equation}
the metric perturbations $g_{\mu\nu}=\eta_{\mu\nu} +\sum_{n\geq1}
h^{(n)}_{\mu\nu}$ satisfy\footnote{The harmonic gauge linearized
at the first order in perturbation gives~(\ref{e:hndD}) with
$n=1$. The higher-order expansions of the harmonic
gauge differ from these conditions.}
\begin{equation}\label{e:hndD}
{\partial\over
\partial x^{\lambda}} h^{\lambda (n)}_{\nu}-\frac{1}{2} {\partial\over
\partial x^{\nu}} h^{(n)} =0\,.
\end{equation}
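As a check of the lowest-order condition, linearising~\eqref{e:deDonderGauge} with $g_{\mu\nu}\simeq\eta_{\mu\nu}+h^{(1)}_{\mu\nu}$ and $g^{\lambda\rho}\simeq\eta^{\lambda\rho}$ gives
\begin{equation}
\eta^{\mu\nu}\eta^{\lambda\rho}\left({\partial h^{(1)}_{\rho\mu}\over \partial x^\nu}+ {\partial h^{(1)}_{\rho\nu}\over \partial x^\mu}-{\partial h^{(1)}_{\mu\nu}\over \partial x^\rho}\right)= 2\,{\partial\over \partial x^{\mu}}\, h^{(1)\,\lambda\mu}-\partial^{\lambda} h^{(1)} =0\,,
\end{equation}
which is~\eqref{e:hndD} for $n=1$.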
The de Donder gauge relation between the
metric perturbation and the stress-energy tensor reads
\begin{equation}\label{e:TtohAmplitudedeDonder}
h^{(l+1)}_{\mu\nu}(\vec x) = -16\pi G_N\int {d^d{\vec q}\over(2\pi)^d} e^{i\vec q\cdot \vec x} {1\over
\vec q^2} \left( \langle T_{\mu\nu}^{(l)}\rangle^{\rm
class.}(q^2)-\frac{1}{d-1}\eta_{\mu\nu}\langle T^{(l)}\rangle^{\rm class.}(q^2)\right)\,.
\end{equation}
In this relation enters the classical contribution at $l$ loop order $\langle T^{(l)}_{\mu\nu}\rangle^{\rm class.}(q^2) $ defined
by the classical limit of the quantum scattering
amplitude~\cite{Holstein:2004dn,Bjerrum-Bohr:2018xdl,Kosower:2018adc}.
From now on, we drop the superscript class and just use the
notation $\langle T^{(l)}_{\mu\nu}\rangle(q^2) $ for the classical contribution.
\subsection{The classical contribution of the amplitude}
\label{sec:class-contr}
In this section we derive the generic form
of the classical contribution of the gravity amplitudes~(\ref{e:M3pt}) in the static limit where $q=(0,\vec
q)$ and $\vec q^2\ll m^2$. The classical limit is obtained by taking
$\hbar\to0$ with the momentum transfer $q/\hbar$ held fixed~\cite{Kosower:2018adc}.
At the $l$-loop order we have to consider the
graphs
\begin{equation}\label{e:M3quantum}
\mathcal M^{(l)}_3(p_1,q) =
\begin{gathered}
\begin{fmffile}{gravemissionlloop}
\begin{fmfgraph*}(100,100)
\fmfstraight
\fmfleftn{i}{6}
\fmfrightn{o}{1}
\fmfrpolyn{smooth,label={tree},filled=30}{G}{5}
\fmf{dbl_wiggly}{G1,i2}
\fmf{dbl_wiggly}{G2,i3}
\fmf{dbl_wiggly}{G3,i4}
\fmf{dbl_wiggly}{G4,i5}
\fmf{dbl_wiggly,tension=3,label={q}}{G5,o1}
\fmf{plain,tension=2.5}{i1,i2,i3,i4,i5,i6}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}\,.
\end{equation}
The classical contribution emerges as
a particular $\hbar \to 0$ limit of the amplitude
in~\cite{Iwasaki:1971vb,Holstein:2004dn,Donoghue:1996mt,Kosower:2018adc,Bjerrum-Bohr:2018xdl}.
The classical limit results in cutting the massive lines,
projecting on the contribution from localised sources at different
positions in space~\cite{PlanteThesis,Galusha,Bjerrum-Bohr:2018xdl},
pictorially represented by shaded blobs
\begin{equation}\label{e:M3classical}
\mathcal M^{(l)~\rm class.}_3(p_1,q) =\qquad
\begin{gathered}
\begin{fmffile}{gravemissionlloopclassical}
\begin{fmfgraph*}(100,100)
\fmfstraight
\fmfleftn{i}{6}
\fmfrightn{o}{1}
\fmfrpolyn{smooth,label={tree},filled=30}{G}{5}
\fmf{dbl_wiggly}{G1,i2}
\fmf{dbl_wiggly}{G2,i3}
\fmf{dbl_wiggly}{G3,i4}
\fmf{dbl_wiggly}{G4,i5}
\fmf{dbl_wiggly,tension=3,label={q}}{G5,o1}
\fmfblob{.5cm}{i2}
\fmfblob{.5cm}{i3}
\fmfblob{.5cm}{i4}
\fmfblob{.5cm}{i5}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}\Bigg|_{\textrm{leading}~q^2}\,.
\end{equation}
In this process one keeps only the leading $q^2$ contribution from the
multi-graviton tree-level amplitudes.
The quantum tree-level graphs that were considered
in~\cite{Duff:1973zz} arise from the classical limit of the scattering
amplitude up to two-loop order.
In the rest of this section, we derive the
generic features of the classical limit to all orders in perturbation.
We then explicitly evaluate the classical limit up to three-loop order in perturbation.
\medskip
The quantum amplitude in~\eqref{e:M3quantum} is
an $(l+2)$-graviton amplitude with $l+1$ gravitons attached to the massive scalar
line
\begin{align}
&\mathcal L_{\mu_1\nu_1,\dots,\mu_{l+1}\nu_{l+1}}(p_1,p_2,\ell_1,\dots,\ell_{l+1}) =
\begin{gathered}
\begin{fmffile}{linegraviton}
\begin{fmfgraph*}(100,100)
\fmfstraight
\fmfleftn{i}{6}
\fmfrightn{o}{6}
\fmf{plain,tension=2.5}{i1,i2,i3,i4,i5,i6}
\fmf{phantom,tension=2.5}{o1,o2,o3,o4,o5,o6}
\fmf{dbl_wiggly}{i2,o2}
\fmf{dbl_wiggly}{i3,o3}
\fmf{dbl_wiggly}{i4,o4}
\fmf{dbl_wiggly}{i5,o5}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}\\
&={(-i\sqrt{8\pi G_N})^{l+1}\tau_{\mu_1\nu_1}(p_1,p_1-\ell_1)\tau_{\mu_2\nu_2}(p_1-\ell_1,p_1-\ell_1-\ell_2)\cdots\tau_{\mu_{l+1}\nu_{l+1}}(p_1-\ell_1-\cdots-\ell_{l},p_2)\over \prod_{i=1}^l \left((p_1-\sum_{j=1}^i\ell_j)^2-m^2+i\epsilon\right)}\,,
\end{align}
with the momentum conservation condition
$\ell_1+\cdots+\ell_{l+1}=q=p_1-p_2$ and the vertex for emitting a graviton from a scalar field\footnote{The vertices are given in
appendix~\ref{sec:vertices}. We have stripped off a factor of
$i\sqrt{8\pi G_N}$ from their normalisation.}
\begin{equation}\label{e:vertex2s1g}
\tau^{\mu\nu}(p_1,p_2) =p_1^\mu p_2^{\nu}
+p_1^\nu p_2^{\mu} +\frac12 \eta^{\mu\nu}\,(p_1-p_2)^2\,.
\end{equation}
This line is
attached to an $(l+2)$-graviton tree-level amplitude
\begin{equation}
\mathcal M^{\mu_1\nu_1,\dots,\mu_{l+1}\nu_{l+1}}(\ell_1,\dots,\ell_{l+1},q) =
\begin{gathered}
\begin{fmffile}{gravtreen}
\begin{fmfgraph*}(100,100)
\fmfstraight
\fmfleftn{i}{6}
\fmfrightn{o}{1}
\fmf{phantom,tension=2.5}{i1,i2,i3,i4,i5,i6}
\fmfrpolyn{smooth,label={tree},filled=30}{G}{5}
\fmf{dbl_wiggly}{G1,i2}
\fmf{dbl_wiggly}{G2,i3}
\fmf{dbl_wiggly}{G3,i4}
\fmf{dbl_wiggly}{G4,i5}
\fmf{dbl_wiggly,tension=3,label={q}}{G5,o1}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}\,.
\end{equation}
We have to sum over all the permutations of the graviton
lines attached to the scalar line. Because the gravity amplitude is
invariant under permutations of the graviton lines
we have
\begin{multline}\label{e:Ml}
i\mathcal M^{(l)}_3(p_1,q)=\frac{1}{\sqrt{4
E_1E_2}}\int \prod_{n=1}^l {d^{d+1}\ell_n\over(2\pi)^D}\left( \sum_{\sigma\in\mathfrak S_{l+1}}
\mathcal
L_{\mu_1\nu_1,\dots,\mu_{l+1}\nu_{l+1}}(p_1,p_2,\ell_{\sigma(1)},\dots,\ell_{\sigma(l+1)}) \right)
\cr
\times\prod_{i=1}^{l+1} {i\mathcal P^{\mu_i\nu_i,\rho_i\sigma_i}\over \ell_i^2+i\epsilon}\mathcal M_{\rho_1\sigma_1,\dots,\rho_{l+1}\sigma_{l+1}} (\ell_1,\dots,\ell_{l+1},q)
\end{multline}
where $\mathfrak S_{l+1}$ is the group of permutations of $l+1$ elements.
In the static limit the vertex~(\ref{e:vertex2s1g})
becomes
\begin{equation}
\tau_{\mu\nu}(p_1,p_1-\ell)\simeq -2 m^2
\delta^0_\mu \delta^0_\nu\,,
\end{equation}
therefore the scalar line approximates to
\begin{equation}
\mathcal L(p_1,p_2,\ell_1,\dots,\ell_{l+1})\simeq{ \prod_{i=1}^{l+1} i \sqrt{32\pi
G_N}m^2\delta^0_{\mu_i}\delta^0_{\nu_i}\over\prod_{i=1}^l\left(
(p_1-\sum_{j=1}^i \ell_j)^2-m^2+i\epsilon\right)}\,.
\end{equation}
In the static limit $(p_1-L)^2-m^2+i\epsilon= L^2-2 p_1\cdot
L+i\epsilon\simeq L_0^2-\vec L^2-2mL_0+i\epsilon$.
In the limit where the mass $m$ is large compared to the graviton loop
momenta $|L|\ll m$ we have
\begin{multline}
L_0^2-\vec L^2-2mL_0+i\epsilon= \left(L_0-m-\sqrt{\vec
L^2+m^2-i\epsilon}\right)
\left(L_0-m+\sqrt{\vec
L^2+m^2-i\epsilon}\right)\cr
\simeq \left(L_0-2m-{\vec L^2\over
2m}+i\epsilon\right)
\left(L_0+{\vec L^2\over 2m}-i\epsilon\right)\simeq -2m \left(L_0-i\epsilon\right) \,.
\end{multline}
%
Therefore we have
\begin{equation}
\mathcal L(p_1,p_2,\ell_1,\dots,\ell_{l+1})\simeq i\sqrt{32\pi G_N}m^2
\delta^0_{\mu_{l+1}}\delta^0_{\nu_{l+1}}\prod_{i=1}^{l} {-i 2\sqrt{2\pi
G_N}m\delta^0_{\mu_i}\delta^0_{\nu_i}\over
\sum_{j=1}^i \ell^0_j-i\epsilon}\,.
\end{equation}
Using momentum conservation $\ell_1+\cdots+\ell_{l+1}=p_1-p_2$ and
that in the static limit $p_1^0-p_2^0\simeq0$ we have
\begin{equation}
\mathcal L(p_1,p_2,\ell_1,\dots,\ell_{l+1})\simeq 2m i\epsilon
\prod_{i=1}^{l+1} {-i 2\sqrt{2\pi
G_N}m\delta^0_{\mu_i}\delta^0_{\nu_i}\over
\sum_{j=1}^i \ell^0_j-i\epsilon}\,.
\end{equation}
Using the identity\footnote{This was proven in the appendix of~\cite{Levy:1969cr}. We give here an
alternative proof using recursion. For
$l=1$ we have $\Sigma(2)={1\over x_1(x_1+x_2)}+{1\over x_2(x_1+x_2)}={1\over
x_1x_2}$. Assuming that~\eqref{e:Sn} is true at the order $l$, then
at the order $l+1$ we have
\begin{equation}
\Sigma(l+1)= \sum_{\sigma\in\mathfrak S_{l+1}}
\prod_{i=1}^{l+1} {1\over \sum_{j=1}^i x_{\sigma(j)}}
= {1\over x_1+\cdots +x_{l+1}}\sum_{i=1}^{l+1}
\sum_{\sigma\in\mathfrak S_l} \prod_{k=1}^l {1\over \sum_{j=1}^k
\hat x_{\sigma(j)}}
\end{equation}
where $\sigma(l+1)=i$ and the $\{\hat x_1,\dots,\hat
x_l\}=\{x_1,\dots,x_{l+1}\}\backslash \{x_i\}$.
By recursion hypothesis we can use the expression for $\Sigma(l)$
\begin{equation}
\Sigma(l+1)
= {1\over x_1+\cdots +x_{l+1}}\sum_{i=1}^{l+1}
\prod_{k=1}^{l} {1\over \hat x_k}= {1\over x_1+\cdots +x_{l+1}}\sum_{i=1}^{l+1}
x_i\prod_{k=1}^{l+1} {1\over x_k}= \prod_{i=1}^{l+1}{1\over x_i}\,.
\end{equation}
}
\begin{equation}\label{e:Sn}
\sum_{\sigma\in\mathfrak S_{l+1}}
\prod_{i=1}^{l+1} {1\over \sum_{j=1}^i x_{\sigma(j)}} =
\prod_{i=1}^{l+1} {1\over
x_{i}}\,.
\end{equation}
In the limit $\epsilon\to0$ the expression vanishes unless some
of the $\ell_j^0$ vanish at the same time.
This means that one needs
to pick the residues at $\ell_j^0=i\epsilon$ for $j=1,\dots,l$ to
have a non-vanishing answer.
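As an aside, the combinatorial identity~\eqref{e:Sn} is easy to verify numerically; the short Python script below is purely illustrative (it is not part of the original argument) and compares the permutation sum with the product of inverses for a random set of positive $x_i$:
\begin{verbatim}
# Minimal numerical check of the permutation identity: the sum over
# permutations s of prod_{i=1}^{l+1} 1/(x_{s(1)}+...+x_{s(i)}) equals
# prod_{i=1}^{l+1} 1/x_i.
from itertools import permutations
import random

def permutation_sum(xs):
    total = 0.0
    for perm in permutations(xs):
        partial, term = 0.0, 1.0
        for x in perm:
            partial += x          # running sum x_{s(1)}+...+x_{s(i)}
            term /= partial       # accumulate the product of inverses
        total += term
    return total

def product_of_inverses(xs):
    result = 1.0
    for x in xs:
        result /= x
    return result

xs = [random.uniform(0.5, 2.0) for _ in range(4)]   # l+1 = 4 variables
print(permutation_sum(xs), product_of_inverses(xs)) # the two values agree
\end{verbatim}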
Picking these residues implies that the amplitude~(\ref{e:Ml}) reduces to
\begin{multline}\label{e:Mlapprox}
i\mathcal M^{(l)}_3(p_1,q)\simeq -i^l \left(2\sqrt{2\pi G_N}
m\right)^{l+1}\cr\times\int \prod_{n=1}^{l}
{d^d\vec\ell_n\over(2\pi)^d}\prod_{i=1}^{l+1}{\mathcal
P^{00,\rho_i\sigma_i} \over \prod_{i=1}^{l+1}(\ell_i^2+i\epsilon)}
\mathcal
M_{\rho_1\sigma_1,\dots,\rho_{l+1}\sigma_{l+1}} (\ell_1,\dots,\ell_{l+1},q)\Big|_{\ell_i^0=0}
\end{multline}
with $\ell_1+\cdots+\ell_{l+1}=q$.
We recall that
\begin{equation}
\mathcal P^{00,\rho\sigma}=
\delta^\rho_0\delta^\sigma_0-{\eta^{\rho\sigma}\over D-2} \,.
\end{equation}
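This is the $00$-component of the numerator of the graviton propagator: assuming the standard de Donder gauge form
\begin{equation}
\mathcal P^{\mu\nu,\rho\sigma}= \frac{1}{2}\left(\eta^{\mu\rho}\eta^{\nu\sigma}+\eta^{\mu\sigma}\eta^{\nu\rho}\right)-{1\over D-2}\,\eta^{\mu\nu}\eta^{\rho\sigma}\,,
\end{equation}
setting $\mu=\nu=0$ and using the mostly negative signature indeed gives $\mathcal P^{00,\rho\sigma}=\delta^\rho_0\delta^\sigma_0-\eta^{\rho\sigma}/(D-2)$.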
The amplitude~(\ref{e:Mlapprox}) corresponds to the graph where the scalar line has been collapsed to a point
\begin{equation}
\mathcal M^{(l)}_3(p_1,q)\simeq
\begin{gathered}
\begin{fmffile}{gravtreencol}
\begin{fmfgraph*}(180,100)
\fmfsurroundn{i}{8}
\fmf{phantom,tension=10}{i6,v2,v3,v4,i4}
\fmf{dbl_wiggly}{i5,v2}
\fmf{dbl_wiggly}{i5,v3}
\fmf{dbl_wiggly}{i5,v4}
\fmfrpolyn{smooth,label={tree},filled=30,tension=.8}{G}{4}
\fmf{dbl_wiggly}{G1,v2}
\fmf{dbl_wiggly}{G2,v3}
\fmf{dbl_wiggly}{G3,v4}
\fmf{dbl_wiggly,tension=3}{G4,i1}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}\,.
\end{equation}
In the static limit with $q=(0,\vec q)$, $|\vec q|\ll m$, the $(l+2)$-graviton tree-level
amplitude has the leading behaviour
\begin{equation}
\prod_{i=1}^{l+1}\mathcal P^{00,\rho_i\sigma_i} \mathcal
M_{\rho_1\sigma_1,\cdots,\rho_{l+1}\sigma_{l+1}}(\ell_1,\dots,\ell_{l+1},q)\propto
{\sqrt{G_N}}^l q^2\,,
\end{equation}
and higher powers of $\vec q^2$ contribute to higher powers of $\hslash$
and are sub-leading
quantum corrections (see section~\ref{sec:tree} for more about this).
Therefore, the classical contribution to the stress-tensor
in~\eqref{e:MtoT} is given by\footnote{We have checked this explicitly to three-loop order
using the {\tt LiteRed} code~\cite{Lee:2012cn,Lee:2013mka}.}
\begin{equation}\label{e:Tstatic}
\langle T^{(l)}_{\mu\nu}\rangle=\pi^l(G_Nm)^{l} m\Big(c^{(l)}_1(d)\delta_{\mu}^0\delta_{\nu}^0
+c^{(l)}_2(d)\big({q_{\mu}q_{\nu}\over q^2} -\eta_{\mu\nu}\big) \Big)\, J_{(l)}(q^2)\,,
\end{equation}
where $c^{(l)}_1(d)$ and $c^{(l)}_2(d)$ are rational functions of the dimension
$d$ and $J_{(n)}(q^2)$ is the massless $n$-loop sunset graph
\begin{equation}\label{e:Jmastersunset}
J_{(n)}(\vec q^2)=
\begin{gathered}\tikzpicture[scale=1.7]
\scope[xshift=-5cm,yshift=-0.4cm,decoration={
markings,
mark=at position 0.5 with {\arrow{>}}}]
\draw(0.0,0) node{};
\draw (1,0.0) node{$\bullet$} ;
\draw (2,0) node{$\bullet$} ;
\draw (0.5,0) node[left]{$q$};
\draw[postaction={decorate}] (0.5,0) -- (1,0);
\draw[dashed] (1,0) -- (2,0) ;
\draw[postaction={decorate}] (2,0) -- (2.5,0);
\draw (2.5,0) node[right]{$q$};
\draw[dashed] (1.5,0.0) ellipse (0.5 and 0.1);
\draw[dashed] (1.5,0.0) ellipse (0.5 and 0.2);
\draw[dashed] (1.5,0.0) ellipse (0.5 and 0.3);
\draw[dashed] (1.5,0.0) ellipse (0.5 and 0.4);
\endscope
\endtikzpicture
\end{gathered}
=\int {\vec
q^2\over \prod_{i=1}^n \vec l_i^2 \, (\vec l_1+\cdots+\vec
l_n+\vec q)^2}\prod_{i=1}^n{ d^d{\vec l}_i\over (2\pi)^{d}}
\,.
\end{equation}
\subsection{The master integrals for the classical limit}
\label{sec:mast-integr-class}
The master integrals~(\ref{e:Jmastersunset}) can be evaluated
straightforwardly with the
parametric representation of the $n$-loop sunset in $D$ dimensions
(see~\cite{Vanhove:2014wqa})
\begin{equation}
J_{(n)}(\vec q^2) ={ (\vec q^2)^{n(d-2)\over2} \over
(4\pi)^{nd\over2}}\Gamma\left(n+1 -{n d\over 2} \right)
\int_{x_i\geq0} \left({1\over x_1}+\cdots +{1\over x_n}+1\right)^{(n+1)(2-d)\over2}
\prod_{i=1}^{n} {dx_i \over x_i^{d\over 2}}
\end{equation}
%
since the first Symanzik polynomial is $U_{n+1}=
\left( \sum_{i=1}^{n+1}{1\over x_i}\right)\left(\prod_{i=1}^{n+1} x_i\right)$
and the second Symanzik polynomial is $F_{n+1}= -q^2 x_1\cdots
x_{n+1}=\vec q^2 x_1\cdots x_{n+1}$.
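For instance, at one loop ($n=1$) these definitions give $U_2=\left({1\over x_1}+{1\over x_2}\right)x_1x_2=x_1+x_2$ and $F_2=\vec q^2\, x_1x_2$, the familiar Symanzik polynomials of the massless bubble.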
Changing variables to $y_i=1/x_i$ we have
\begin{equation}
J_{(n)}(\vec q^2) ={ (\vec q^2)^{n(d-2)\over2} \over
(4\pi)^{nd\over2}}\Gamma\left(n+1 -{n d\over 2} \right)
\int_{y_i\geq0} \left(y_1+\cdots + y_n+1\right)^{(n+1)(2-d)\over2}
\prod_{i=1}^{n} {dy_i \over y_i^{4-d\over 2}}\,.
\end{equation}
Using the expression for Euler's beta-function
\begin{equation}
\int_0^\infty (x+a)^{\alpha}
{dx \over x^{1-\beta}}= a^{\alpha+\beta}
{\Gamma(-\beta-\alpha)\Gamma(\beta)\over \Gamma(-\alpha)},
\end{equation}
the master integral is readily evaluated to be
\begin{equation}\label{e:Jnresult}
J_{(n)}(\vec q^2) ={ (\vec q^2)^{n(d-2)\over2} \over
(4\pi)^{nd\over2}}
{\Gamma\left(n+1 -{n d\over 2} \right) \Gamma\left(d-2\over2\right)^{n+1}\over \Gamma\left((n+1)(d-2)\over2\right)}\,.
\end{equation}
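As a quick consistency check, setting $n=1$ and $d=3$ in~\eqref{e:Jnresult} gives
\begin{equation}
J_{(1)}(\vec q^2)\Big|_{d=3} ={ \sqrt{\vec q^2} \over (4\pi)^{3\over2}}\,\Gamma\left(\frac12\right)^3 ={|\vec q|\over 8}\,,
\end{equation}
which reproduces the well-known value of the massless bubble integral $\int {d^3\vec l\over(2\pi)^3}\,{1\over \vec l^2(\vec l+\vec q)^2}={1\over 8|\vec q|}$ multiplied by $\vec q^2$.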
The master integrals develop ultraviolet poles at certain loop orders,
inducing divergences in the stress-energy tensor. We
will show in section~\ref{sec:nonmin} how to renormalise these divergences with the
introduction of higher-derivative couplings.
\section{The metric perturbation from graviton emission}
\label{sec:class-metr-pert}
Using the relation~(\ref{e:TtohAmplitudedeDonder}) between the metric
perturbation and the stress-energy tensor, together with the expression~(\ref{e:Tstatic}) for the stress-energy tensor in $d$ dimensions in the
static limit, we have
\begin{multline}
h_{\mu\nu}^{(l+1)}(\vec q)= -8
\left(c_1^{(l)}(d)(2\delta^0_\mu\delta^0_\nu-\eta_{\mu\nu})+c_2^{(l)}(d)\left(2{q_\mu
q_\nu\over q^2}+(d-2)\eta_{\mu\nu}\right)\right)\cr
\times { (\pi G_Nm)^{l+1}J_{(l)}(\vec q^2) \over
\vec q^2}\,.
\end{multline}
The static space-time components are obtained by computing the Fourier
transform in $d$ dimensions
\begin{equation}
h^{(l+1)}_{\mu\nu}(\vec x) = \int_{\mathbb R^d} h_{\mu\nu}^{(l+1)}(\vec q)e^{i\vec q\cdot
\vec x} {d^d{\vec q}\over(2\pi)^d} \,.
\end{equation}
Using the Fourier transformations given in appendix~\ref{sec:FT}, and
setting $r=|\vec
x|$, the Fourier transform of the master integrals are given by
\begin{equation}\label{e:Jnr}
\int_{\mathbb R^d} {J_{(l)}(\vec q^2)\over \vec q^2} e^{i\vec
q\cdot \vec x} {d^d\vec q\over (2\pi)^d}
=\left({\Gamma\left(d-2\over2\right)\over
4\pi^{d\over2}} {1\over r^{d-2}}\right)^{l+1}
\end{equation}
which is finite to all loop orders. The
ultraviolet divergences in the momentum space representation
in~\eqref{e:Jnresult} have been cancelled by the Fourier transform.\footnote{This fact had been noticed by L. Plant\'e
in his PhD thesis~\cite{PlanteThesis}.}
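For orientation, the $l=0$, $d=3$ case of~\eqref{e:Jnr} (with $J_{(0)}=1$ from the definition~\eqref{e:Jmastersunset}) is just the Fourier transform of the Newtonian propagator,
\begin{equation}
\int_{\mathbb R^3} {e^{i\vec q\cdot \vec x}\over \vec q^2}\, {d^3\vec q\over (2\pi)^3}= {1\over 4\pi r}= {\Gamma\left(\frac12\right)\over 4\pi^{3\over2}}\,{1\over r}\,.
\end{equation}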
The tensorial Fourier transform
\begin{equation}\label{e:Jnrxx}
\int_{\mathbb R^d} {q_iq_j\over \vec q^2} {J_{(l)}(\vec q^2)\over \vec q^2} e^{i\vec
q\cdot \vec x} {d^d\vec q\over (2\pi)^d}
=\left({\Gamma\left(d-2\over2\right)\over
4\pi^{d\over2}} {1\over r^{d-2}}\right)^{l+1}{1\over2-l(d-2)}\left(
- \delta_{ij}+ (l+1)(d-2) {x_ix_j\over r^2}\right)
\end{equation}
diverges for $l=1$ and $d=4$ and for $l=2$ and $d=3$, and is
otherwise finite.
By spherical symmetry we parameterise the metric in $d+1$ dimensions
\begin{equation}
ds^2=h_0(r,d) dt^2- h_1(r,d) d\vec x^2-h_2(r,d) {(\vec x\cdot
d\vec x)^2\over \vec x^2}\,,
\end{equation}
so that
\begin{equation}
h_{i}(\vec x)=h_i^{(0)}+\sum_{l\geq1} h_i^{(l)}(\vec x)\,,
\end{equation}
with $h_i^{(0)}=1,1,0$ for $i=0,1,2$. The post-Minkowskian expansion
of the metric components reads
\begin{align}\label{e:h0amp}
h^{(l+1)}_{0}(r,d)&=-\frac{16}{d-1} \left((d-2)c^{(l)}_1(d)+ c^{(l)}_2(d) \right)
\left(\rho(r,d)\over 4\right)^{l+1} ,\\
h^{(l+1)}_{1}(r,d)&=\frac{16}{d-1}
\left(c^{(l)}_1(d)-\left(1+{d-1\over 2-l(d-2)}\right)c^{(l)}_2(d)\right) \left(\rho(r,d)\over 4\right)^{l+1} ,
\cr
\nonumber h^{(l+1)}_2(r,d)&= 16 {(d-2)(l+1)\over2-l(d-2)} c^{(l)}_2(d)
\left(\rho(r,d)\over 4\right)^{l+1} \,.
\end{align}
We have introduced the radial parameter
\begin{equation}\label{e:rhodef}
\rho(r,d)={\Gamma\left(d-2\over2\right)\over \pi^{d-2\over 2} } {
G_N m \over r^{d-2}}\,,
\end{equation}
which is our post-Minkowskian expansion parameter. Recall that in
$d+1$ dimensions the length dimension of $[G_N m]= (length)^{d-2}$
and $\rho(r,d)$ is dimensionless.
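For instance, in four space-time dimensions ($d=3$) the parameter reduces to
\begin{equation}
\rho(r,3)={\Gamma\left(\frac12\right)\over \pi^{1\over2}}\,{G_N m\over r}={G_N m\over r}\,,
\end{equation}
the usual post-Minkowskian expansion parameter.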
The metric components present poles in four dimensions ($d=3$) from
two-loop order and in five
dimensions ($d=4$) from one-loop order. Such divergences will be
removed by the non-minimal coupling contributions in section~\ref{sec:nonmin}.
\subsection{Tree-level amplitude}\label{sec:tree}
At tree-level, the only contributing diagram is
\begin{equation}
\mathcal M^{(0)}_3(p_1,q) =
\begin{gathered}\begin{fmffile}{gravtree}
\begin{fmfgraph*}(80,100)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{fermion,tension=2.5,label=$p_1$}{i1,v1}
\fmf{fermion,tension=2.5,label=$p_2$}{v1,i2}
\fmf{dbl_wiggly,tension=1, label.dist=10,label=$q$}{o1,v1}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}\,,
\end{equation}
the emission of a graviton
from the scattering of two massive scalars of momenta $p_1$ and $p_2$,
with $p_1^2=p_2^2=m^2$ and momentum transfer
$q=p_1-p_2$. The scattering amplitude is given by the
2-scalar-1-graviton vertex $\tau^{\mu\nu}(p_1,p_2)$ in~(\ref{e:tau1})
\begin{equation}
i \mathcal M ^{(0)}_3(p_1,q)=-{i\sqrt{32\pi G_N}\over2\sqrt{4E_1E_2}} \epsilon^{\mu\nu}
\tau_{\mu\nu} =- {i\sqrt{32\pi G_N}\over2} \epsilon^{\mu\nu}
\big(p_{1\mu}p_{2\nu}+p_{2\mu}p_{1\nu}-\eta_{\mu\nu}(p_1\cdot p_2-m^2)\big)\,.
\end{equation}
Using that $P=(p_1+p_2)/2$ and $q=p_1-p_2$ we have that
\begin{equation}
i\mathcal M ^{(0)}_3(p_1,q)=- {i\sqrt{32\pi G_N}\over2\sqrt{4E_1E_2}} \epsilon^{\mu\nu}
\big(2 P_{\mu}P_{\nu}-\frac12 (q_\mu q_\nu-\eta_{\mu\nu}q^2)\big)\,.
\end{equation}
In the static limit $q=p_1-p_2\simeq(0,\vec q)$, $E_1\simeq E_2\simeq m$ and $|\vec q|\ll m$
we have
\begin{equation}
\label{eq:1}
\langle T^{(0)}_{\mu\nu}(q^2)\rangle \simeq m
\delta^0_\mu\delta^0_\nu +\left({q_iq_j\over
2\vec q^2}\eta^i_\mu\eta^j_\nu+\frac12\eta_{\mu\nu}\right) \vec q^2\,.
\end{equation}
The $\vec q^2$ term in this expression is the contact term which has
a higher power of $\hslash$ and does not contribute to the classical
limit~\cite{Bjerrum-Bohr:2018xdl,Cristofoli:2019neg}.
The coefficients of the classical contribution to the stress-tensor at
tree-level are given by
\begin{align}\label{e:F1F2treeAmp}
c_1^{(0)}(d)&= 1,\cr
c_2^{(0)}(d)&= 0\,.
\end{align}
From this we deduce the metric components in $d+1$ dimensions
using~(\ref{e:h0amp})
\begin{align}\label{e:htree}
h_{0}^{(1)}(r,d)&= -4{d-2\over d-1} \rho(r,d),\cr
h_{1}^{(1)}(r,d)&={4\over d-1}\rho(r,d), \cr
h_{2}^{(1)}(r,d)&=0\,,
\end{align}
where $\rho(r,d)$ is defined in~(\ref{e:rhodef}). This reproduces
the expression given in~\cite{Collado:2018isu,Jakobsen:2020ksu}.
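As a simple check, in $d=3$ the components~\eqref{e:htree} reduce to
\begin{equation}
h_0^{(1)}(r,3)=-2\,{G_Nm\over r}\,,\qquad h_1^{(1)}(r,3)=2\,{G_Nm\over r}\,,\qquad h_2^{(1)}(r,3)=0\,,
\end{equation}
which is the linearised Schwarzschild metric in these coordinates.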
\subsection{One-loop amplitude}\label{sec:oneloop}
At one-loop the only contributing diagram to the classical limit
is
\begin{equation}
i\mathcal M^{(1)}_3(p_1,q) =\begin{gathered}\begin{fmffile}{oneloopgraph}
\begin{fmfgraph*}(100,100)
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{fermion,label=$p_1$}{i1,v1}
\fmf{fermion,label=$p_2$}{v2,i2}
\fmf{dbl_wiggly,label=$q$}{o1,v3}
\fmf{plain,tension=.1}{v1,v2}
\fmf{dbl_wiggly,tension=.3}{v3,v1}
\fmf{dbl_wiggly,tension=.3}{v3,v2}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}=-{i\sqrt{32\pi G_N}\over2} \epsilon_{\mu\nu}
T^{(1)\ \mu\nu}(q^2)\,,
\end{equation}
from which we extract the one-loop contribution to the stress-energy
tensor in $d+1$ dimensions
\begin{equation}
T^{(1)\ \mu\nu}(q^2)=\frac{i 8\pi G_N}{\sqrt{4
E_1E_2}}\int\frac{d^{d+1}l}{(2\pi)^D}\frac{\tau^{\sigma\rho}(p_1,l+p_1) \tau^{\mu\nu}_{(3)\sigma\rho,\kappa\delta}(l,q)
\tau^{\kappa\delta} (p_2,l+p_1)
}{(l^2+i\epsilon)
((l+q)^2+i\epsilon) ((l+p_1)^2-m^2+i\epsilon)},
\end{equation}
where $\tau^{\mu\nu}_{(3)\
\pi\rho,\sigma\tau}(p_1,p_2)$ is the three graviton vertex and $\tau^{\mu\nu}(p_1,p_2)$ the
vertex for the emission of a graviton from two scalars with momenta
$p_1$ and $p_2$. We refer to appendix~\ref{sec:vertices} for
definitions and normalisation of our vertices.
In the static limit, $\vec{q}^2\ll m^2$, the classical contribution
coming from the two-scalar one-graviton vertex is
\begin{equation}\label{e:tau1limit}
\tau_{\alpha\beta}\approx 2 m^2\delta^0_{\alpha}\delta^0_{\beta},
\end{equation}
using that $p_1^2=p_2^2=m^2$.
This gives for the stress-energy tensor
\begin{equation}
T^{(1)\
\mu\nu}(q^2)=i 16 \pi G_N m^3\int\frac{d^{d+1}l}{(2\pi)^D}\frac{ \tau^{\mu\nu}_{(3)00,00}(l,q)}{(l^2+i\epsilon)
((l+q)^2+i\epsilon) ((l+p_1)^2-m^2+i\epsilon)}.
\end{equation}
At this point, we want to focus on the computation of the classical contribution in the static limit. Thus, we employ a trick which will prove useful at higher loops: we symmetrise the diagram
\begin{multline}
T^{(1)\
\mu\nu}(q^2)=i 8 \pi G_N m^3\int\frac{d^{d+1}l}{(2\pi)^D}\frac{ \tau^{\mu\nu}_{(3)00,00}(l,q)}{(l^2+i\epsilon)
((l+q)^2+i\epsilon)}\cr
\times\Big[\frac{1}{(l+p_1)^2-m^2+i\epsilon}+\frac{1}{(l-p_2)^2-m^2+i\epsilon}\Big]\,.
\end{multline}
In the approximation $l^2\ll m^2$ we have $(l+p_i)^2-m^2=l^2+2l\cdot
p_1=l^2+2l_{0}E-\vec{l}\cdot\vec{q}\simeq l_0^2 +2ml_0$ and the
amplitude reduces at leading order
\begin{multline}
T^{(1)\ \mu\nu}(q^2)\simeq i8\pi G_N
m^3\int\frac{d^{d+1}l}{(2\pi)^D}\frac{ \tau^{\mu\nu}_{(3)00,00}(l,q)}{(l^2+i\epsilon)((l+q)^2+i\epsilon)}\cr
\times\Big[\frac{1}{l_0^2+2ml_0+i\epsilon}+\frac{1}{l_0^2-2ml_0+i\epsilon}\Big].
\end{multline}
At order $\mathcal{O}(\epsilon^0)$ we get a zero
contribution at leading order in $1/m$, since $l_0\ll m$. Thus, we can
compute the leading contribution of the integral over $l_0$ via
\textit{Cauchy's theorem}, by taking the residue at
$2ml_0=i\epsilon$ and closing the contour of integration in the upper
half-plane\footnote{One could have taken the residue at
$2ml_0=-i\epsilon$ and closing the contour in the lower half-plane
with the same result.}
\begin{equation}
T^{(1)\ \mu\nu}(q^2)=4\pi G_N m^2\int\frac{d^d{\vec l}}{(2\pi)^{d}}\frac{ \tau^{\mu\nu}_{(3)00,00}(l,q)}{(\vec{l}^2-i\epsilon)((\vec{l}+\vec{q})^2-i\epsilon)}\Bigg\vert_{l_0=0}\,,
\end{equation}
with
\begin{align}
\tau^{\mu\nu}_{(3)00,00}(l,q)=\frac{1}{d-1}\bigg(&(d-2)\big(l^{\mu}l^{\nu}+(l+q)^{\mu}(l+q)^{\nu}+q^{\mu}q^{\nu}+\frac{3}{2}\eta^{\mu\nu}\vec{q}^2\big) \nonumber \\
&-2(d-2)\big(\vec{l_1}^2+(\vec{l_1}+\vec{q})^2\big)(\delta^{\mu}_0\delta^{\nu}_0-\frac{\eta^{\mu\nu}}{4})-2(d-3) \vec{q}^2\delta^{\mu}_0\delta^{\nu}_0\bigg)\,.
\end{align}
The components of the stress-tensor are proportional to the one-loop master
integral $J_{(1)}(\vec q^2)$ as expected from the general discussion
of section~\ref{sec:mast-integr-class}
\begin{equation}
\label{e:Toneloop}
\langle T_{\mu\nu}^{(1)}\rangle= \pi G_Nm^2
\left(c_1^{(1)}(d) \delta_\mu^0\delta_\nu^0+ c_2^{(1)}(d)
\left({q_\mu q_\nu\over q^2}-\eta_{\mu\nu}\right)\right)\, J_{(1)}(q^2)\,,
\end{equation}
with the master integral
\begin{equation}
\label{e:J1}
J_{(1)}(q^2)=\frac{ \Gamma \left(4-d\over2\right) \Gamma
\left(\frac{d-2}{2}\right)^2}{2^d\pi^{d\over2}\Gamma (d-2)}\,
\left(\vec q^2\right)^{\frac{d-2}{2}}\,,
\end{equation}
and the coefficients
\begin{align}
c_1^{(1)}(d)&=-\frac{2(4d^2-15d+10)}{(d-1)^2},\cr
c_2^{(1)}(d)&=-\frac{2(d-2)(3d-2)}{(d-1)^2}\,.
\end{align}
\subsubsection{The one-loop contribution to the metric components}
\label{sec:one-loop-metric}
Using~\eqref{e:h0amp} we get for the metric components in $d+1$ dimensions
\begin{align}\label{e:honeloop}
h_{0}^{(2)}(r,d)&={8(d-2)^2\over (d-1)^2}\rho(r,d)^2,\cr
h_1^{(2)}(r,d)&=-\frac{4(2d^2-9d+14)}{(d-4) (d-1)^2}\rho(r,d)^2,\cr
h_2^{(2)}(r,d)&=\frac{4 (d-2)^2(3d-2)}{(d-4) (d-1)^2}\rho(r,d)^2 \,,
\end{align}
where $\rho(r,d)$ is defined in~(\ref{e:rhodef}).
This reproduces the expression given in~\cite{Collado:2018isu} and the
expression in~\cite[eq.~(22)]{Jakobsen:2020ksu} for $\alpha=0$.
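Note the $1/(d-4)$ poles in $h_1^{(2)}$ and $h_2^{(2)}$, which are the five-dimensional divergences mentioned above. In $d=3$ the expressions~\eqref{e:honeloop} are finite and reduce to $h_0^{(2)}(r,3)=2\rho^2$, $h_1^{(2)}(r,3)=5\rho^2$ and $h_2^{(2)}(r,3)=-7\rho^2$ with $\rho=G_Nm/r$; in particular the time component is consistent with the $2(G_Nm/r)^2$ coefficient familiar from the expansion of the Schwarzschild metric in harmonic-type coordinates.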
\subsection{Two-loop amplitude}\label{sec:twoloop}
The diagrams contributing to the classical corrections to the metric at third
post-Minkowskian order are the two-loop graphs
\begin{equation}
i \mathcal M^{(2)}_3(p_1,q)= -\sqrt{32 \pi G_N} T^{(2)\, \mu\nu} \epsilon_{\mu\nu},
\end{equation}
where there are four contributions:
\begin{align}
T_{(a)}^{(2)\mu\nu}&=
\begin{gathered}
\begin{fmffile}{twolooptriangle4pt12}
\begin{fmfgraph}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain,tension=10}{i1,v1,v4,v2,i2}
\fmffreeze
\fmf{dbl_wiggly,tension=3}{v3,v5,v2}
\fmf{dbl_wiggly}{v3,v1}
\fmf{dbl_wiggly,tension=.1}{v5,v4}
\fmf{dbl_wiggly,tension=5}{v3,o1}
\end{fmfgraph}
\end{fmffile}
\end{gathered},\qquad
\nonumber T_{(b)}^{(2)\mu\nu}=
\begin{gathered}
\begin{fmffile}{twolooptriangle4pt21}
\begin{fmfgraph}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain,tension=10}{i1,v1,v4,v2,i2}
\fmffreeze
\fmf{dbl_wiggly,tension=3}{v3,v5,v1}
\fmf{dbl_wiggly}{v3,v2}
\fmf{dbl_wiggly,tension=.1}{v5,v4}
\fmf{dbl_wiggly,tension=5}{o1,v3}
\end{fmfgraph}
\end{fmffile}
\end{gathered},\\
\nonumber
T_{(c)}^{(2)\mu\nu}&=
\begin{gathered}
\begin{fmffile}{twolooptriangle3}
\begin{fmfgraph}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain}{i1,v1,vph1a,vph1b,vph1c,v4,vph2,v2,i2}
\fmffreeze
\fmf{dbl_wiggly}{v3,v1}
\fmf{dbl_wiggly}{v3,v2}
\fmf{dbl_wiggly,tension=3}{o1,v5,v3}
\fmf{dbl_wiggly,left=.5,tension=.1}{v5,v4}
\end{fmfgraph}
\end{fmffile}
\end{gathered},\qquad
T_{(d)}^{(2)\mu\nu}=\begin{gathered}
\begin{fmffile}{twolooptriangle4pt}
\begin{fmfgraph}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain,tension=10}{i1,ov1,ovph1,ovph2,ov3,ovph3,ovph4,ov2,i2}
\fmffreeze
\fmf{dbl_wiggly,tension=.3}{v3,ov1}
\fmf{dbl_wiggly,tension=.3}{v3,ov2}
\fmf{dbl_wiggly,tension=.3}{v3,ov3}
\fmf{dbl_wiggly,tension=1}{o1,v3}
\end{fmfgraph}
\end{fmffile}
\end{gathered}\,.
\end{align}
\subsubsection{The diagrams $(a)$, $(b)$, $(c)$}
\label{sec:diagrams-abc}
The sum of the contributions from the diagrams $(a)$, $(b)$, $(c)$
after appropriate labelling of the momenta, can be expressed as
\begin{multline}
\sum_{i=a}^c T_{(i)}^{(2)\,\mu\nu}=-{16 G_N^2\pi^2\over m}\int
\prod_{n=1}^3\frac{d^{d+1}l_n}{(2\pi)^{2d}}\delta(l_1+l_2+l_3+q)\cr
\times\frac{\tau^{\gamma\delta} (p_1,l_1+p_1)
\tau ^{\sigma\tau} (l_1+p_1,-l_2+p_1)
\tau ^{\iota\theta} (l_2-p_2,-p_2)
\tau^{\phi\chi}_{(3)\iota\theta,\sigma\tau}(-l_2,l_1+q)\cdot\mathcal{P}^{\alpha\beta}_{\phi\chi}\cdot
\tau_{(3)\alpha\beta,\gamma\delta}^{\mu\nu}(l_1+q,q)}{l_1^2l_2^2l_3^2(l_1+q)^2}\cr
\times\Bigg(\frac{1}{(l_1+p_1)^2-m^2}\frac{1}{(l_2-p_2)^2-m^2}+\frac{1}{(l_3+p_1)^2-m^2}\frac{1}{(l_1-p_2)^2-m^2}\cr+\frac{1}{(l_3+p_1)^2-m^2}\frac{1}{(l_2-p_2)^2-m^2} \Bigg).
\end{multline}
Using the approximate form of the two-scalar one-graviton vertex
in~(\ref{e:tau1limit}) together with $(l_1+p_1)^2-m^2\approx 2ml_1^0$, and taking the residues at $2ml_i^0= i\epsilon$ (the remaining residues give a vanishing contribution at order $\mathcal{O}(\epsilon^0)$), we get
\begin{equation}
\sum_{i=a}^c T_{(i)}^{(2)\,\mu\nu}=32\pi^2G_N^2 m^3\int \prod_{n=1}^2\frac{d^{d+1}l_n}{(2\pi)^{2d}}\frac{\tau_{(3) \alpha\beta,00}^{\mu\nu}(l_1+q,q)\cdot\mathcal{P}^{\alpha\beta}_{\phi\chi}\cdot\tau_{(3)\, 00,00}^{\phi\chi}(-l_2,l_1+q)}{(\vec{l_1})^{^2}(\vec{l_2})^{^2}(\vec{l_3})^{^2}(\vec{l_1}+\vec{q})^{^2}}\Bigg\vert_{l_1^0=l_2^0=0},
\end{equation}
with
\begin{multline}
\eta_{\mu\nu} \tau_{(3) \phi\chi,00}^{\mu\nu}(l_1+q,q)=\big(l^{\mu}l^{\nu}-(l+q)^{\mu}(l+q)^{\nu}-q^{\mu}q^{\nu}\big)-\frac{3}{2}\eta^{\mu\nu}\vec{q}^2\big(\eta^{\mu\nu}-(d-1)\delta^{\mu}_0\delta^{\nu}_0\big) \cr
+\frac{\eta^{\mu\nu}}{2}\big(\vec{l_1}^2-(\vec{l_1}+\vec{q})^2\big)-\frac{5-d}{2}\delta^{\mu}_0\delta^{\nu}_0\big(\vec{l_1}^2+(\vec{l_1}+\vec{q})^2\big),
\end{multline}
and
\begin{multline}
\delta_{\mu}^0\delta_{\nu}^0\tau_{(3) \phi\chi,00}^{\mu\nu}(l_1+q,q)=\frac{1}{d-1}\bigg((d-3)\big((l+q)^{\mu}(l+q)^{\nu}+q^{\mu}q^{\nu}\big)+(d-1)\big(l_1^{\mu}l_1^{\nu}-\frac{\vec{l_1}^2}{2}(3\delta^{\mu}_0\delta^{\nu}_0-\eta^{\mu\nu}) \big) \cr
+\frac{\eta^{\mu\nu}-\delta^{\mu}_0\delta^{\nu}_0}{2}\big(\vec{q}^2(d-5)+(3d-7)(\vec{l_1}+\vec{q})^2\big)\bigg),
\end{multline}
and
\begin{multline}
\tau^{\mu\nu}_{(3)00,00}(l,q)=\frac{1}{d-1}\bigg((d-2)\big(l^{\mu}l^{\nu}+(l+q)^{\mu}(l+q)^{\nu}+q^{\mu}q^{\nu}+\frac{3}{2}\eta^{\mu\nu}\vec{q}^2\big) \cr
-2(d-2)\big(\vec{l_1}^2+(\vec{l_1}+\vec{q})^2\big)(\delta^{\mu}_0\delta^{\nu}_0-\frac{\eta^{\mu\nu}}{4})-2(d-3) \vec{q}^2\delta^{\mu}_0\delta^{\nu}_0\bigg).
\end{multline}
Using the {\tt LiteRed} code~\cite{Lee:2012cn,Lee:2013mka} in $d$ dimensions, we
find that all the contributions are proportional to the master
integral as expected from the general discussion of
section~\ref{sec:mast-integr-class}
\begin{align}
J_{(2)}(\vec q^2)&= \int \prod_{i=1}^2 {d^d{\vec l}_i\over(2\pi)^d} {\vec
q^2\over \prod_{i=1}^2 \vec l_i^2 (\vec l_1+\vec l_2+\vec q)^2}\cr
&=
-{\vec q^2\over 32\pi^2(d-3)}-{\vec q^2\over 32\pi^2}\left(-3+\gamma_E-\log(4\pi)+\log(\vec
q^2)\right)+O(d-3)\,,
\end{align}
where $\gamma_E=0.57721\cdots$ is the Euler-Mascheroni constant~\cite{Lagarias}.
We find for the 00-component
\begin{equation}
\sum_{i=a}^cT_{(i)}^{(2)\,00}=\frac{32\pi^2G_N^2m^3}{3}\frac{6d^3-45d^2+134d-160}{(d-4)(d-1)^2} J_{(2)}(\vec q^2)\,,
\end{equation}
and for the trace part
\begin{equation}
\sum_{i=a}^cT_{(i)}^{(2)\,\mu\nu}\eta_{\mu\nu}=-\frac{32\pi^2
G_N^2m^3}{3}\frac{10d^3-63d^2+123d-86}{(d-1)^2}J_{(2)}(\vec q^2)\,.
\end{equation}
\subsubsection{The diagram $(d)$}
\label{sec:diagrams-d}
The diagram $(d)$ after symmetrisation over the massive scalar legs reads
\begin{multline}
T_{(d)}^{(2)\,\mu\nu}=-\frac{ 32 G_N^2\pi^2}{3m}\int \prod_{n=1}^3\frac{d^{d+1}l_n}{(2\pi)^{2d}}\frac{\delta(l_1+l_2+l_3+q)}{l_1^2l_2^2l_3^2}
\Bigg(\frac{1}{(l_1+p_1)^2-m^2+i\epsilon}\frac{1}{(l_2-p_2)^2-m^2+i\epsilon}\cr+\frac{1}{(l_3+p_1)^2-m^2+i\epsilon}\frac{1}{(l_1-p_2)^2-m^2+i\epsilon}+\frac{1}{(l_3+p_1)^2-m^2+i\epsilon}\frac{1}{(l_2-p_2)^2-m^2+i\epsilon}
\Bigg)\cr
\times \tau ^{\gamma\delta} (p_1,l_1+p_1)\tau ^{\sigma\tau} (l_1+p_1,-l_2+p_1)\tau ^{\iota\theta} (l_2-p_2,-p_2)\tau_{(4)\gamma\delta,\sigma\tau,\iota\theta}^{\mu\nu}(q,l_1,l_2,l_3),
\end{multline}
and leads to the contribution
\begin{equation}
T_{(d)}^{(2)\,\mu\nu}=-\frac{64\pi^2G_N^2 m^3}{3}\int
\prod_{n=1}^2\frac{d^{d+1}l_n}{(2\pi)^{d}}\frac{\tau_{(4)\,00,00,00}^{\mu\nu}(q,l_1,l_2,-l_1-l_2-q)}{(\vec{l_1})^{^2}(\vec{l_2})^{^2}(\vec{l_1}+\vec{l_2}+\vec
{q})^{^2}}\Bigg\vert_{l_1^0=l_2^0=0}
\end{equation}
with the vertex
\begin{multline}
\tau_{(4)\,00,00,00}^{\mu\nu}(q,l_1,l_2,l_3)=\frac{1}{(d-1)^2}\Bigg(\vec{q}^2\frac{\delta^{\mu}_0\delta^{\nu}_0}{2}
(7d^2-45d+70)-\vec{q}^2\frac{\eta^{\mu\nu}}{2}(d-2)(6d-23) \cr
+(d-2)\bigg((9-2d)q^{\mu}q^{\nu}+(7-2d)\big(l_1^{\mu}l_1^{\nu}+l_2^{\mu}l_2^{\nu}+l_3^{\mu}l_3^{\nu}\big)\bigg) \cr
+\frac{d-2}{2}\big(\vec{l_1}^2+\vec{l_2}^2+\vec{l_3}^2\big) \big(\delta^{\mu}_0\delta^{\nu}_0(7d-23)-\eta^{\mu\nu}(2d-9)\big) \Bigg)\,.
\end{multline}
Evaluating these integrals we find, for the $00$-component,
\begin{equation}
T_{(d)}^{(2)\,00}=-\frac{32\pi^2G_N^2m^3}{3} \frac{(4-d)(6-d)}{(d-1)^2}J_{(2)}(\vec q^2)\,,
\end{equation}
and for the trace part
\begin{equation}
T_{(d)}^{(2)\,\mu\nu}\eta_{\mu\nu}=\frac{64\pi^2G_N^2m^3}{3} \frac{3d^3-20d^2+41d-30}{(d-1)^2}J_{(2)}(\vec
q^2)\,.
\end{equation}
\subsubsection{The two-loop contribution to the metric components}
Summing up all the contributions the two-loop stress-tensor is given by
\begin{equation}\label{e:Tstatictwoloop}
\langle T^{(2)}_{\mu\nu}\rangle=\pi^2 G_N^2 m^3\Big(c^{(2)}_1(d)\delta_{\mu}^0\delta_{\nu}^0
+c^{(2)}_2(d)\big({q_{\mu}q_{\nu}\over q^2} -\eta_{\mu\nu}\big) \Big)\, J_{(2)}(q^2)\,,
\end{equation}
with the coefficients given by
\begin{align}\label{e:c1c22loop}
c_1^{(2)}(d) &={32\over3(d-4)(d-1)^3}\left(9d^4-70d^3+203d^2-254d+104 \right),\cr
c_2^{(2)}(d) &={64(d-2)\over3(d-4)(d-1)^3}\left(2d^3-13d^2+25d-10 \right)\,,
\end{align}
and the expression for the master integral
\begin{equation}\label{e:J2}
J_{(2)}(\vec q^2)=\frac{\Gamma (3-d) \Gamma
\left(\frac{d-2}{2}\right)^3}{(4\pi)^d\Gamma \left(\frac{3
(d-2)}{2}\right)}\left(\vec q^2\right)^{d-2}\,.
\end{equation}
From this we extract the metric components using the relations~(\ref{e:h0amp}) (with the
definition of $\rho(r,d)$ in~(\ref{e:rhodef}))
\begin{align}\label{e:htwoloopdiv}
h_{0}^{(3)}(r,d)& =-{8(3d-7)(d-2)^3\over (d-4)(d-1)^3}\rho(r,d)^3,\cr
h_1^{(3)}(r,d)&=\frac{8 (7d^4-63d^3+214d^2-334d+212)}{3
(d-3)(d-4)(d-1)^3}\rho(r,d)^3,\cr
h_{2}^{(3)}(r,d)&=-\frac{8(d-2)^2(2d^3-13d^2+25d-10)}{
(d-3)(d-4)(d-1)^3}\rho(r,d)^3\,.
\end{align}
\subsection{Three-loop amplitude}\label{sec:threeloop}
The diagrams contributing to the classical corrections at third post-Minkowskian
order of the metric at the two-loop graphs
\begin{equation}
i \mathcal M^{(3)}_3(p_1,q)=- \sqrt{32 \pi G_N} T^{(3)\, \mu\nu} \epsilon_{\mu\nu},
\end{equation}
where the three-loop stress-tensor is given by five distinct diagrams
\begin{align}
T_{(a)}^{(3)\mu\nu}&=
\begin{gathered}
\begin{fmffile}{threeloopDa}
\begin{fmfgraph}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain,tension=10}{i1,v1,v2,v3,v4,i2}
\fmffreeze
\fmf{dbl_wiggly}{v3,v6,v4}
\fmf{dbl_wiggly}{v2,v5,v1}
\fmf{dbl_wiggly,tension=2}{v5,v7,v6}
\fmf{dbl_wiggly,tension=5}{v7,o1}
\end{fmfgraph}
\end{fmffile}
\end{gathered},\qquad
\nonumber T_{(b)}^{(3)\mu\nu}=
\begin{gathered}
\begin{fmffile}{threeloopDb}
\begin{fmfgraph}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain,tension=10}{i1,v1,v2,v3,v4,i2}
\fmffreeze
\fmf{dbl_wiggly}{v2,v5,v1}
\fmf{dbl_wiggly}{v3,v6,v5}
\fmf{dbl_wiggly,tension=1}{v6,v7,v4}
\fmf{dbl_wiggly,tension=5}{v7,o1}
\end{fmfgraph}
\end{fmffile}
\end{gathered},\\
\nonumber
T_{(c)}^{(3)\mu\nu}&=
\begin{gathered}
\begin{fmffile}{threeloopDc}
\begin{fmfgraph}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain,tension=10}{i1,v1,v2,v3,v4,i2}
\fmffreeze
\fmf{dbl_wiggly}{v5,v1}
\fmf{dbl_wiggly}{v5,v2}
\fmf{dbl_wiggly}{v5,v3}
\fmf{dbl_wiggly,tension=2}{v4,v6,v5}
\fmf{dbl_wiggly,tension=5}{v6,o1}
\end{fmfgraph}
\end{fmffile}
\end{gathered},\qquad
T_{(d)}^{(3)\mu\nu}=\begin{gathered}
\begin{fmffile}{threeloopDd}
\begin{fmfgraph}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain,tension=10}{i1,v1,v2,v3,v4,i2}
\fmffreeze
\fmf{dbl_wiggly}{v6,v1}
\fmf{dbl_wiggly}{v6,v4}
\fmf{dbl_wiggly}{v6,v5}
\fmf{dbl_wiggly,tension=2}{v3,v5,v2}
\fmf{dbl_wiggly,tension=5}{v6,o1}
\end{fmfgraph}
\end{fmffile}
\end{gathered},\\
\nonumber
T_{(e)}^{(3)\mu\nu}&=
\begin{gathered}
\begin{fmffile}{threeloopDe}
\begin{fmfgraph}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain,tension=10}{i1,v1,v2,v3,v4,i2}
\fmffreeze
\fmf{dbl_wiggly}{v5,v1}
\fmf{dbl_wiggly}{v5,v2}
\fmf{dbl_wiggly}{v5,v3}
\fmf{dbl_wiggly}{v5,v4}
\fmf{dbl_wiggly,tension=5}{v5,o1}
\end{fmfgraph}
\end{fmffile}
\end{gathered}\,.
\end{align}
As before, we permute the internal momenta such that by taking
the residue at $2ml_i^0=i\epsilon$ from the massive propagators, we
extract the non-analytic terms which contribute to the classical
metric in the static limit.
After taking the residues and including the symmetry factors
\begin{align}
& T_{(a)}^{(3)\,\mu\nu}=64\pi^3 G_N^3 m^4\int \prod_{n=1}^3\frac{d^d{\vec l}_n}{(2\pi)^{d}}\frac{\tau_{(3) \pi\rho, \sigma\tau}^{\mu\nu}(l_1+l_2,q)\tau_{(3)}^{\pi\rho}(-l_1,l_1+l_2)\tau_{(3)}^{\sigma\tau}(-l_3,l_3+l_4)}{(\vec{l}_1)^{^2}(\vec{l}_2)^{^2}(\vec{l}_3)^{^2}(\vec{l}_4)^{^2}(\vec{l}_1+\vec{l}_2)^{^2}(\vec{l}_3+\vec{l}_4)^{^2}}\Bigg\vert_{l_1^0=l_2^0=l_3^0=0},\cr
& T_{(b)}^{(3)\,\mu\nu}=256 \pi^3 G_N^3 m^4\int \prod_{n=1}^3\frac{d^d{\vec l}_n}{(2\pi)^{d}}\frac{\tau_{(3) \sigma\tau,00}^{\mu\nu}(l_1+q,q)\tau_{(3)}^{\pi\rho}(-l_3,l_3+l_4)\tau_{(3) 00,\pi\rho}^{\sigma\tau}(-l_2,l_1+q)}{(\vec{l}_1)^{^2}(\vec{l}_2)^{^2}(\vec{l}_3)^{^2}(\vec{l}_4)^{^2}(\vec{l}_1+\vec{q})^{^2}(\vec{l}_3+\vec{l}_4)^{^2}}\Bigg\vert_{l_1^0=l_2^0=l_3^0=0},\cr
& T_{(c)}^{(3)\,\mu\nu}=-\frac{512 \pi^3 G_N^3m^4}{3}\int \prod_{n=1}^3\frac{d^d{\vec l}_n}{(2\pi)^{d}}\frac{\tau_{(3) \alpha\beta,00}^{\mu\nu}(l_1+q,q)\tau_{(4)00,00,00}^{\alpha\beta}(l_1+q,l_2,l_3,l_4)}{(\vec{l}_1)^{^2}(\vec{l}_2)^{^2}(\vec{l}_3)^{^2}(\vec{l}_4)^{^2}(\vec{l}_1+\vec{q})^{^2}}\Bigg\vert_{l_1^0=l_2^0=l_3^0=0},\cr
& T_{(d)}^{(3)\,\mu\nu}=-256\pi^3 G_N^3m^4\int \prod_{n=1}^3\frac{d^d{\vec l}_n}{(2\pi)^{d}}\frac{\tau_{(3)}^{\gamma\delta}(-l_3,l_3+l_4)\tau_{(4)\gamma\delta,00,00}^{\mu\nu}(q,l_1,l_2,l_3+l_4)}{(\vec{l}_1)^{^2}(\vec{l}_2)^{^2}(\vec{l}_3)^{^2}(\vec{l}_4)^{^2}(\vec{l}_3+\vec{l}_4)^{^2}}\Bigg\vert_{l_1^0=l_2^0=l_3^0=0},\cr
&T_{(e)}^{(3)\,\mu\nu}=\frac{256\pi^3G_N^3m^4}{3}\int \prod_{n=1}^3\frac{d^d{\vec l}_n}{(2\pi)^{d}}\frac{\tau_{(5) 00,00,00,00}^{\mu\nu}(q,l_1,l_2,l_3,l_4)}{(\vec{l_1})^{^2}(\vec{l_2})^{^2}(\vec{l_3})^{^2}(\vec{l_4})^{^2}}\Bigg\vert_{l_1^0=l_2^0=l_3^0=0},
\end{align}
with the five-graviton vertex contribution
\begin{align}\label{e:tau5}
& \tau^{\mu\nu}_{(5)\, 00,00,00,00}(k_1,k_2,k_3,k_4,k_5):=\tilde
\tau^{\mu\nu}_{(5)\, \alpha\beta, \gamma\delta, \epsilon\eta, \kappa\lambda}(k_1,k_2,k_3,k_4,k_5) \mathcal P^{\alpha\beta}_{00}
\mathcal P^{\gamma\delta}_{00} \mathcal
P^{\epsilon\eta}_{00}P^{\kappa\lambda}_{00} \nonumber \\
&=\frac{1}{4(d-1)^3}\Bigg(
4\delta_{\mu}^0\delta_{\nu}^0\bigg(4(2d^3-18d^2+57d-61) k_1^2+(d-2)(8d^2-47d+79)\sum_{i=2}^5 k_i^2\bigg)\nonumber \\
&-(d-2)\eta_{\mu\nu}\bigg((29d^2-191d+362) k_1^2+(7d^2-61d+142)\sum_{i=2}^5 k_i^2 \bigg)\nonumber \\
&+2(d-2)\bigg((11d^2-73d+150) k_{1\mu}k_{1\nu}+(7d^2-53d+102)(k_{2\mu}k_{2\nu}+k_{3\mu}k_{3\nu}+k_{4\mu}k_{4\nu}+k_{5\mu}k_{5\nu})\bigg)\Bigg)\,.
\end{align}
The vertex $\tau^{\mu\nu}_{(5)\, \alpha\beta, \gamma\delta,
\epsilon\eta, \kappa\lambda}(k_1,k_2,k_3,k_4,k_5)$
has been derived using the results of~\cite{Prinz:2020nru}.
The integral reduction is done using the {\tt LiteRed}
code~\cite{Lee:2012cn,Lee:2013mka} in $d$ dimensions.
In agreement with the
general analysis of section~\ref{sec:mast-integr-class},
we find that the classical contribution is proportional to the single master integral
\begin{equation}\label{e:J4master}
J_{(3)}(\vec q^2)=\int\frac{d^d{\vec l}_1d^d{\vec l}_2d^d{\vec l}_3}{(2\pi)^{3d}}\frac{\vec{q}^2}{\vec{l_1}^2\vec{l_2}^2\vec{l_3}^2(\vec{l_1}+\vec{l_2}+\vec{l_3}+\vec{q})^2}\,.
\end{equation}
\subsubsection{The $\mu=\nu=0$ component}
\begin{align}
&T_{(a)}^{(3)\,00}=-\frac{32 \pi^3 G_N^3m^4}{3}\frac{3d^5-169d^4+1378d^3-4592d^2+7256d-4752}{(d-4)^2(d-1)^3} J_{(3)}(\vec q^2),\cr
&T_{(b)}^{(3)\,00}=-\frac{128 \pi^3 G_N^3m^4}{3}\frac{68d^6-1003d^5+6211d^4-20820d^3+40020d^2-41584d+17824}{(d-4)(d-3)(3d-4)(d-1)^3} J_{(3)}(\vec q^2),\cr
&T_{(c)}^{(3)\,00}=\frac{64 \pi^3 G_N^3m^4}{3}\frac{37d^5-502d^4+2731d^3-7486d^2+10164d-5256}{(d-3)(3d-4)(d-1)^3} J_{(3)}(\vec q^2),\cr
&T_{(d)}^{(3)\,00}=\frac{32 \pi^3 G_N^3m^4}{3}\frac{53d^4-615d^3+2690d^2-5572d+4840}{(d-4)(d-1)^3}J_{(3)}(\vec q^2),\cr
&T_{(e)}^{(3)\,00}=64 \pi^3 G_N^3m^4\frac{(6-d)(d^2-7d+14)}{(d-1)^3}J_{(3)}(\vec q^2).
\end{align}
\subsubsection{Contraction with $\eta_{\mu\nu}$}
\begin{align}
&T_{(a)}^{(3)\,\mu\nu}\eta_{\mu\nu}=\frac{32 \pi^3 G_N^3m^4}{3}\frac{85d^6-1126d^5+6307d^4-19114d^3+32944d^2-30472d+11952}{(d-4)^2(d-1)^3} J_{(3)}(\vec q^2),\cr
&T_{(b)}^{(3)\,\mu\nu}\eta_{\mu\nu}=\frac{128 \pi^3 G_N^3m^4}{3}\frac{168d^6-2231d^5+12319d^4-35796d^3+57396d^2-48304d+16736}{(d-4)(3d-4)(d-1)^3}J_{(3)}(\vec q^2),\cr
&T_{(c)}^{(3)\,\mu\nu}\eta_{\mu\nu}=-\frac{64 \pi^3 G_N^3m^4}{3}\frac{147d^6-1801d^5+8727d^4-21555d^3+28942d^2-20148d+5688}{(3d-4)(d-1)^4} J_{(3)}(\vec q^2),\cr
&T_{(d)}^{(3)\,\mu\nu}\eta_{\mu\nu}=-\frac{32 \pi^3 G_N^3m^4}{3} \frac{179 d^5 - 2146 d^4 + 10305 d^3 - 24614 d^2 + 28972 d - 13704}{(d-4)(d-1)^3} J_{(3)}(\vec q^2),\cr
&T_{(e)}^{(3)\,\mu\nu}\eta_{\mu\nu}=\frac{64 \pi^3 G_N^3m^4}{3}\frac{29d^4-274d^3+973d^2-1484d+852}{(d-1)^3}J_{(3)}(\vec q^2).
\end{align}
\subsubsection{The classical three-loop contribution to the stress-tensor}
Summing up all the contributions, we get for the three-loop
stress-tensor
\begin{equation}\label{e:Tstaticthreeloop}
\langle T^{(3)}_{\mu\nu}\rangle=\pi^3 G_N^3 m^4\Big(c^{(3)}_1(d)\delta_{\mu}^0\delta_{\nu}^0
+c^{(3)}_2(d)\big({q_{\mu}q_{\nu}\over q^2} -\eta_{\mu\nu}\big) \Big)\, J_{(3)}(q^2)\,,
\end{equation}
with the master integral
\begin{equation}
J_{(3)}(q^2)=\frac{ \Gamma \left(8-3 d\over 2\right) \Gamma
\left(\frac{d-2}{2}\right)^4}{8^d\pi^{3d\over2}\Gamma (2 (d-2))}\,
|\vec q|^{3(d-2)}\,,
\end{equation}
and the three-loop coefficients are given by
\begin{align}
c^{(3)}_1(d) &=-\frac{64}{3
(d-3)(d-4)^2(d-1)^4}\times\Big(56d^7-889d^6+5868d^5\cr
&-20907d^4+43434d^3-52498d^2+33888d-8760\Big),\cr
c^{(3)}_2(d) &=-\frac{64}{3
(d-3)(d-4)^2(d-1)^4}\times\Big(45d^7-670d^6+4167d^5\cr
&-14016d^4+27430d^3-30916d^2+18104d-3952\Big).
\end{align}
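As an illustrative check of these expressions, at $d=3$ the master integral is finite,
\begin{equation}
J_{(3)}(\vec q^2)\Big|_{d=3}={\Gamma\left(-\frac12\right)\Gamma\left(\frac12\right)^4\over 8^3\pi^{9\over2}\,\Gamma(2)}\,|\vec q|^{3}=-{|\vec q|^{3}\over 256\pi^2}\,,
\end{equation}
so that the $1/(d-3)$ pole of the three-loop stress-tensor analysed in section~\ref{sec:renorD4} comes entirely from the coefficients $c^{(3)}_i(d)$, whereas $\Gamma\left(8-3d\over2\right)$ has a pole at $d=4$ which adds to the poles of the coefficients.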
Using the relations~(\ref{e:h0amp}) we obtain
the three-loop contribution to the metric from the classical
stress-tensor in~(\ref{e:Tstaticthreeloop}) (using the notation for $\rho$ in~\eqref{e:rhodef})
\begin{align}\label{e:hthreeloopdiv}
h_{0}^{(4)}(r,d)&=\frac{16(d-2)^3(14d^3-85d^2+165d-106)}{3(d-3)(d-4)(d-1)^4}\rho(r,d)^4,\nonumber \\
h_{1}^{(4)}(r,d)&=-\frac{8(39 d^7 -
691 d^6 + 5155 d^5 - 21077 d^4+ 51216 d^3- 74346 d^2+ 60168 d-21208 )}{3(d-3)(d-4)^2(d-1)^4(3d-8)}\rho(r,d)^4,\nonumber \\
h_2^{(4)}(r,d) &=\frac{16(d-2)^2( 45 d^6- 580 d^5 + 3007 d^4- 8002 d^3+ 11426 d^2- 8064 d+1976 )}{3(d-3)(d-4)^2(d-1)^4(3d-8)}\rho(r,d)^4.
\end{align}
\section{Non-minimal couplings and renormalised metric}\label{sec:nonmin}
The stress-tensor and the metric components have ultraviolet
divergences.
These divergences can be removed by the addition of the non-minimal
couplings made from the powers of the covariant derivative $\nabla_\mu$ acting
on a single power of the Riemann tensor and its contractions.
The Bianchi identity on the Riemann tensor $\nabla_{\mu}
R_{\nu\rho\sigma\lambda}+\nabla_{\nu}
R_{\rho\mu\sigma\lambda}+\nabla_{\rho}
R_{\mu\nu\sigma\lambda}=0$, implies that
\begin{equation}
\nabla_\mu R^\mu{}_{\rho\sigma\lambda}=\nabla_{\sigma}
R_{\rho\lambda}- \nabla_{\lambda} R_{\rho\sigma}, \qquad \nabla_\mu
R^\mu{}_\nu=\frac12 \nabla_\nu R\,.
\end{equation}
The counter-terms are therefore powers of the covariant derivative acting on a
single power of the Ricci tensor or of the Ricci scalar,
and are given by the following non-minimal couplings
\begin{multline}\label{e:Sctn}
\delta^{(n)}S^{\rm ct.} = (G_N m)^{2n\over d-2} \int d^{d+1}x \sqrt{-g} \Big(\alpha^{(n)}(d)
(\nabla^2)^{n-1} R \partial_\mu\phi \partial^\mu\phi \cr+
\left(\beta^{(n)}_0(d) \nabla_\mu\nabla_\nu (\nabla^2)^{n-2} R+
\beta^{(n)}_1(d) (\nabla^2)^{n-1} R_{\mu\nu} \right) \partial^\mu \phi \partial^\nu\phi\Big)\,,
\end{multline}
where $\alpha^{(n)}(d)$, $\beta_0^{(n)}(d)$ and $\beta_1^{(n)}(d)$ are dimensionless coefficients
depending on the space-time dimension. The power of $G_Nm$ is determined by dimensional analysis, and gives the
correct order of $G_N$ in all dimensions.
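Explicitly, the combination $G_Nm/r^{d-2}$ entering the Newtonian potential is dimensionless, so $G_Nm$ has the dimension of $({\rm length})^{d-2}$ and $(G_Nm)^{2n\over d-2}$ carries the dimension $({\rm length})^{2n}$ required to compensate the $2n$ additional derivatives of $\delta^{(n)}S^{\rm ct.}$ relative to the minimal coupling $\partial_\mu\phi\partial^\mu\phi$.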
The first non-minimal coupling with $n=1$ is given by
\begin{equation}\label{e:Sctn1}
\delta^{(1)} S^{\rm ct.}= (G_N m)^{2\over d-2}\int d^{d+1}x \sqrt{-g} \,
\left( \alpha^{(1)}(d) R \partial_\mu\phi \partial^\mu\phi +\beta^{(1)}(d) R^{\mu\nu} \partial_\mu \phi \partial_\nu \phi\right)\,.
\end{equation}
This non-minimal coupling has been introduced in~\cite{Goldberger:2004jt} in four
dimensions and~\cite{Jakobsen:2020ksu} in five
dimensions. We will see that up to three-loop order the renormalisation of the static metric components
only requires the counter-term $\alpha^{(1)}(d)
R\partial_\mu\phi\partial^\mu \phi$, whereas both
couplings are needed for the cancellation of the stress-tensor
divergences. This coupling is
induced by the harmonic gauge condition~\cite{Jakobsen:2020ksu,1821624}
and the value of its coefficient depends on the choice of gauge. In
our gauge, the de Donder gauge, this corresponds to $\alpha=0$ in the
work of~\cite{Jakobsen:2020ksu} and $\xi=\frac14$ in the work
of~\cite{1821624}. Since we are working in a fixed gauge we will not
discuss further the gauge dependence of the higher-order non-minimal coupling
coefficients, but we expect that the gauge dependence of these
coefficients will be an extension of the discussion in~\cite[app.~B]{Jakobsen:2020ksu}.
The power of the Newton constant in~(\ref{e:Sctn1}) is an integer only
in four dimensions with $d=3$ and five dimensions $d=4$. Therefore
this counter-term will not appear in dimensions $D\geq 6$.
In four dimensions, from five-loop order, or the sixth post-Minkowskian order $O(G_N^6)$,
one expects that higher derivative non-minimal couplings will be needed
to get finite stress-tensor components.
In dimensions five and six, the higher-derivative non-minimal couplings arise at lower loop order.
In five dimensions one needs to consider higher-derivative non-minimal
couplings $\delta^{(n)}S^{\rm ct.}$ with $n\geq2$ for removing the divergences in
the stress-tensor.
The non-minimal coupling at this order is then given by
\begin{multline}\label{e:Sctn2}
\delta^{(2)} S^{ct.}= (G_Nm)^{4\over d-2} \int d^{d+1}x\sqrt{-g} \Big(
\alpha^{(2)}(d) \Box R \partial_\mu\phi \partial^\mu\phi
\cr + \left(\beta^{(2)}_0(d) \nabla_\mu\nabla_\nu R+ \beta^{(2)}_1(d) \Box R_{\mu\nu}\right)
\partial^\mu \phi\partial^\nu\phi \Big)\,.
\end{multline}
We will need the non-minimal coupling
\begin{multline}\label{e:Sctn3}
\delta^{(3)} S^{ct.}= (G_Nm)^{6\over d-2} \int d^{d+1}x\sqrt{-g} \Big(
\alpha^{(3)}(d) (\nabla^2)^2 R \partial_\mu\phi \partial^\mu\phi
\cr + \left(\beta^{(3)}_0(d) \nabla_\mu\nabla_\nu \nabla^2 R+ \beta^{(3)}_1(d) (\nabla^2)^2 R_{\mu\nu}\right)
\partial^\mu \phi\partial^\nu\phi \Big)\,,
\end{multline}
for removing the two-loop divergence in the
stress-tensor in six ($d=5$) dimensions and the three-loop divergence
in five ($d=4$) dimensions. In five dimensions ($d=4$) the metric, up to order $G_N^4$, is renormalised using only the
$n=1$ coupling, and the metric is finite to all orders in six dimensions ($d=5$).
The higher-order non-minimal couplings
$\delta^{(n)}S^{\rm ct.}$ with $n\geq2$ will not contribute to the
classical limit when inserted into graphs with loops, because they
contribute to higher powers in the momentum transfer $\vec q$,
and are sub-leading with respect to the classical contributions.
Their tree-level insertions will contribute to the renormalisation of
the stress-tensor but thanks to the properties of the Fourier
transform they will not contribute to the metric components.
\subsection{Tree-level insertions}\label{sec:tree-level-nonmin}
We give the contribution of the insertions of the
non-minimal counter-terms with $n=1$ in~(\ref{e:Sctn1}), with $n=2$
in~(\ref{e:Sctn2}) and with $n=3$ in~(\ref{e:Sctn3}) in the tree-level graph.
\subsubsection{Insertion of $\delta^{(1)}S^{\rm ct.}$}\label{e:delta1tree}
The insertion of the non-minimal couplings $\delta^{(1)}S^{\rm ct.}$ in~\eqref{e:Sctn1} into the tree-level diagram
\begin{equation}
\delta^{(1)}\mathcal M^{(0)}(p_1,q) =
\begin{gathered}\begin{fmffile}{delta1tree}
\begin{fmfgraph*}(80,100)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{fermion,tension=2.5,label=$p_1$}{i1,v1}
\fmf{fermion,tension=2.5,label=$p_2$}{v1,i2}
\fmf{dbl_wiggly,tension=1, label.dist=10,label=$q$}{o1,v1}
\fmfv{decor.shape=square,decor.filled=empty, decor.size=5thick,label=1,label.dist=-2}{v1}
\end{fmfgraph*}
\end{fmffile}
\end{gathered},
\end{equation}
leads to the stress-tensor contribution in $d+1$ dimensions
\begin{equation}\label{e:T1nonmin}
\delta^{(1)} \langle T_{\mu\nu}^{(0)}\rangle=
- \vec{q}^{2} (G_Nm)^{2\over d-2} m \left( -{\beta^{(1)}(d)}\delta_\mu^0\delta_\nu^0+ 2{\alpha^{(1)}(d)}\left(
{q_\mu q_\nu\over q^2}-\eta_{\mu\nu}\right)\right)\,,
\end{equation}
and using~\eqref{e:TtohAmplitudedeDonder} this contributes to the metric components
\begin{align}\label{e:hnonmin0}
\delta^{(1)} h_{0}^{(1)}(r,d)&=0,\\
\delta^{(1)} h_1^{(1)}(r,d)&={16 \alpha^{(1)}(d)
\Gamma\left(d\over2\right)\over
\pi^{d-2\over2} }\left((G_Nm)^{1\over d-2}\over r\right)^d,\cr
\delta^{(1)} h_{2}^{(1)}(r,d)&=- {32 \alpha^{(1)}(d)
\Gamma\left(d+2\over2\right)\over
\pi^{d-2\over2} }\left((G_Nm)^{1\over d-2}\over r\right)^d\,.
\end{align}
Thanks to the properties of the Fourier transformation (see
appendix~\ref{sec:FT}) only the coefficient $\alpha^{(1)}(d)$ contributes to
the static metric perturbation.
\subsubsection{Insertion of $\delta^{(2)}S^{\rm
ct.}$}\label{e:delta2tree}
The insertion of the non-minimal couplings $\delta^{(2)}S^{\rm ct.}$ in~\eqref{e:Sctn2} into the tree-level diagram
\begin{equation}
\delta^{(2)}\mathcal M^{(0)}(p_1,q) =
\begin{gathered}\begin{fmffile}{delta2tree}
\begin{fmfgraph*}(80,100)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{fermion,tension=2.5,label=$p_1$}{i1,v1}
\fmf{fermion,tension=2.5,label=$p_2$}{v1,i2}
\fmf{dbl_wiggly,tension=1, label.dist=10,label=$q$}{o1,v1}
\fmfv{decor.shape=square,decor.filled=empty, decor.size=5thick,label=2,label.dist=-2}{v1}
\end{fmfgraph*}
\end{fmffile}
\end{gathered},
\end{equation}
leads to the stress-tensor contribution in $d+1$ dimensions
\begin{equation}\label{e:T2nonmin}
\delta^{(2)} \langle T_{\mu\nu}^{(0)}\rangle=
|\vec{q}|^{4} (G_Nm)^{4\over d-2} m \left( -{\beta_1^{(2)}(d)}\delta_\mu^0\delta_\nu^0+ 2\left(\alpha^{(2)}(d)+\frac12\beta_0^{(2)}(d)\right)\left(
{q_\mu q_\nu\over q^2}-\eta_{\mu\nu}\right)\right)\,.
\end{equation}
Because of the vanishing of the Fourier transforms
\begin{equation}
\int_{\mathbb R^d} |\vec q|^2 e^{i\vec q\cdot\vec x} {d^d\vec q\over (2\pi)^d}=0,\qquad
\int_{\mathbb R^d} {q_iq_j\over |\vec q|^{2}} |\vec q|^2 e^{i\vec q\cdot\vec x}
{d^d\vec q\over(2\pi)^d}=0\,,
\end{equation}
this extra contribution to the stress-tensor does not affect the
metric components
\begin{align}\label{e:delta2hnonmin0}
\delta^{(2)} h_{0}^{(1)}(r,d)&=0,\\
\delta^{(2)} h_1^{(1)}(r,d)&=0,\cr
\delta^{(2)} h_{2}^{(1)}(r,d)&=0\,.
\end{align}
\subsubsection{Insertion of $\delta^{(3)}S^{\rm
ct.}$}\label{e:delta3tree}
The insertion of the non-minimal couplings $\delta^{(3)}S^{\rm ct.}$ in~\eqref{e:Sctn3} into the tree-level diagram
\begin{equation}
\delta^{(3)}\mathcal M^{(0)}(p_1,q) =
\begin{gathered}\begin{fmffile}{delta3tree}
\begin{fmfgraph*}(80,100)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{fermion,tension=2.5,label=$p_1$}{i1,v1}
\fmf{fermion,tension=2.5,label=$p_2$}{v1,i2}
\fmf{dbl_wiggly,tension=1, label.dist=10,label=$q$}{o1,v1}
\fmfv{decor.shape=square,decor.filled=empty, decor.size=5thick,label=3,label.dist=-2}{v1}
\end{fmfgraph*}
\end{fmffile}
\end{gathered},
\end{equation}
leads to the stress-tensor contribution in six dimensions ($d=5$)
\begin{equation}\label{e:T3nonmin}
\delta^{(3)} \langle T_{\mu\nu}^{(0)}\rangle=
- |\vec{q}|^{6} (G_Nm)^{6\over d-2} m \left( -{\beta_1^{(3)}(d)}\delta_\mu^0\delta_\nu^0+ 2\left(\alpha^{(3)}(d)+\frac14\beta_0^{(3)}(d)\right)\left(
{q_\mu q_\nu\over q^2}-\eta_{\mu\nu}\right)\right)\,.
\end{equation}
Because of the vanishing of the Fourier transforms
\begin{equation}
\int_{\mathbb R^d} |\vec q|^4 e^{i\vec q\cdot\vec x} {d^d\vec q\over (2\pi)^d}=0,\qquad
\int_{\mathbb R^d} {q_iq_j\over |\vec q|^{2}} |\vec q|^4 e^{i\vec q\cdot\vec x}
{d^d\vec q\over(2\pi)^d}=0\,,
\end{equation}
this extra contribution to the stress-tensor does not affect the
metric components
\begin{align}\label{e:delta3hnonmin0}
\delta^{(3)} h_{0}^{(1)}(r,d)&=0,\\
\delta^{(3)} h_1^{(1)}(r,d)&=0,\cr
\delta^{(3)} h_{2}^{(1)}(r,d)&=0\,.
\end{align}
\subsection{One-loop insertions}
\label{sec:one-loop-nonmin}
We give the contribution of the insertions of the
counter-term~(\ref{e:Sctn}) with $n=1$, given in~(\ref{e:Sctn1}), in the one-loop graph.
\subsubsection{Insertion of $\delta^{(1)}S^{\rm ct.}$}\label{e:delta1oneloop}
The insertion of the non-minimal coupling in~(\ref{e:Sctn1}) in
the one-loop graph
\begin{equation}
\delta^{(1)} \mathcal M^{(1)}(p_1,q)=
\begin{gathered}
\begin{fmffile}{delta1oneloop1}
\begin{fmfgraph*}(100,100)
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{fermion,label=$p_1$}{i1,v1}
\fmf{fermion,label=$p_2$}{v2,i2}
\fmf{dbl_wiggly,label=$q$}{o1,v3}
\fmf{plain,tension=.1}{v1,v2}
\fmf{dbl_wiggly,tension=.3}{v3,v1}
\fmf{dbl_wiggly,tension=.3}{v3,v2}
\fmfv{decor.shape=square,decor.filled=empty, decor.size=5thick,label=1,label.dist=-3}{v1}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
+
\begin{gathered}
\begin{fmffile}{delta1oneloop2}
\begin{fmfgraph*}(100,100)
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{fermion,label=$p_1$}{i1,v1}
\fmf{fermion,label=$p_2$}{v2,i2}
\fmf{dbl_wiggly,label=$q$}{o1,v3}
\fmf{plain,tension=.1}{v1,v2}
\fmf{dbl_wiggly,tension=.3}{v3,v1}
\fmf{dbl_wiggly,tension=.3}{v3,v2}
\fmfv{decor.shape=square,decor.filled=empty, decor.size=5thick,label=1,label.dist=-3}{v2}
\end{fmfgraph*}
\end{fmffile}
\end{gathered},
\end{equation}
leads to the stress-tensor contribution
\begin{align}\label{e:T3loopct}
\delta^{(1)}\langle T_{\mu\nu}^{(1)}\rangle&=32i\alpha^{(1)}(d) \pi (G_Nm)^{d\over d-2} m^2\cr
&\times\int\frac{d^d{\vec l}}{(2\pi)^d}\frac{{\tau}^{\mu\nu}_{10\ \alpha\beta,\gamma\delta}(l,q)\mathcal{P}^{\alpha\beta}_{00}l^{\gamma}l^{\delta}}{l^2(l+q)^2}\Big[\frac{1}{(l+p_1)^2-m^2+i\epsilon}+\frac{1}{(l-p_2)^2-m^2+i\epsilon}\Big] \cr
&=8\pi\alpha^{(1)}(d) (G_Nm)^{d\over d-2}m\vec{q}^2\frac{d-2}{(d-1)^2}\Big(d\delta^0_{\mu}\delta^0_{\nu}+\frac{q_{\mu}q_{\nu}}{q^2}-\eta_{\mu\nu}\Big) J_{(1)}(\vec{q}^2)\,,
\end{align}
where we used that
\begin{equation}
\eta_{\mu\nu}{\tau}^{\mu\nu}_{10\ 00,\gamma\delta}(l,q)l^{\gamma}l^{\delta}=\frac{\vec{l_1}^2}{2}\bigg(\vec{q}^2+(\vec{l_1}+\vec{q})^2-\vec{l_1}^2\bigg),
\end{equation}
and
\begin{multline}
\delta_{\mu}^0\delta_{\nu}^0 {\tau}^{\mu\nu}_{10\ 00,\gamma\delta}(l,q)l^{\gamma}l^{\delta}=
\frac{1}{2(d-1)}\bigg((d-2)\vec{q}^4+(d-2)(\vec{l_1}+\vec{q})^2\big((\vec{l_1}+\vec{q})^2-2\vec{q}^2\big)-\vec{l_1}^4\cr-(d-3)\vec{l_1}^2\big((\vec{l_1}+\vec{q})^2+\vec{q}^2\big)\bigg)\,.
\end{multline}
Using the Fourier transforms
\begin{align}
\int_{\mathbb R^d} J_{(1)}(\vec q^2) e^{i\vec
q\cdot \vec x} {d^d\vec q\over (2\pi)^d}
&=-{\Gamma\left(d\over2\right)^2\over 2 \pi^d r^{2(d-1)}},\cr
\int_{\mathbb R^d} {q_iq_j\over \vec q^2} J_{(1)}(\vec q^2) e^{i\vec
q\cdot \vec x} {d^d\vec q\over (2\pi)^d}
&={\Gamma\left(d-2\over2\right) \Gamma\left(d\over2\right)\over
4\pi^d r^{2(d-1)}} \left(
\delta_{ij}-2 (d-1) {x_ix_j\over r^2}\right)\,,
\end{align}
and the relation between the stress-tensor and the metric components
in~\eqref{e:TtohAmplitudedeDonder} we obtain the following contribution to the metric components
\begin{align}\label{e:hnonmin1}
\delta^{(1)} h_{0}^{(2)}(r,d)&=64\alpha^{(1)}(d)\frac{(d-2)\Gamma({d\over2})^2}{(d-1)\pi^{d-2}}\left((G_Nm)^{1\over d-2}\over r\right)^{2(d-1)},\cr
\delta^{(1)} h_1^{(2)}(r,d)&=-64\alpha^{(1)}(d)\frac{\Gamma({d\over2})^2}{(d-1)\pi^{d-2}}\left((G_Nm)^{1\over d-2}\over r\right)^{2(d-1)},\cr
\delta^{(1)} h_{2}^{(2)}(r,d)&=128\alpha^{(1)}(d)\frac{\Gamma({d\over2})^2}{(d-1)\pi^{d-2}}\left((G_Nm)^{1\over d-2}\over r\right)^{2(d-1)}\,.
\end{align}
\subsubsection{Two insertions of $\delta^{(1)}S^{\rm ct.}$}\label{e:delta1delta1oneloop}
Two insertions of the non-minimal coupling $\delta^{(1)}S^{\rm ct.}$
in~\eqref{e:Sctn1} in the one-loop graph
\begin{equation}
(\delta^{(1)} )^2\mathcal M^{(1)}(p_1,q)=
\begin{gathered}
\begin{fmffile}{delta1delta1oneloop1}
\begin{fmfgraph*}(100,100)
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{fermion,label=$p_1$}{i1,v1}
\fmf{fermion,label=$p_2$}{v2,i2}
\fmf{dbl_wiggly,label=$q$}{o1,v3}
\fmf{plain,tension=.1}{v1,v2}
\fmf{dbl_wiggly,tension=.3}{v3,v1}
\fmf{dbl_wiggly,tension=.3}{v3,v2}
\fmfv{decor.shape=square,decor.filled=empty,
decor.size=5thick,label=1,label.dist=-3}{v1}
\fmfv{decor.shape=square,decor.filled=empty, decor.size=5thick,label=1,label.dist=-3}{v2}
\end{fmfgraph*}
\end{fmffile}
\end{gathered},
\end{equation}
leads to the stress-tensor contribution
\begin{equation}\label{abc}
(\delta^1)^2\langle T_{\mu\nu}^{(1)}\rangle=\frac{2(\alpha^{(1)}(d))^2(G_Nm)^{d+2\over{d-2}}\pi m \vec{q}^4}{d-1}\Bigg(\delta_{\mu}^0\delta_{\nu}^0-(d-2)\big({q_{\mu}q_{\nu}\over q^2}-\eta_{\mu\nu}\big)\Bigg)\, J_{(1)}(\vec{q}^2),
\end{equation}
and the metric contributions
\begin{align}
&(\delta^1)^2 h^{(2)}_0(r,d)=0,\cr
&(\delta^1)^2 h^{(2)}_1(r,d)={64 (\alpha^{(1)}(d))^2 \over \pi^{d-2}}\Gamma\left({d\over2}\right)^2\left((G_Nm)^{1\over d-2}\over r\right)^{2d},\cr
&(\delta^1)^2 h^{(2)}_2={64 d(d-2) (\alpha^{(1)}(d))^2 \over \pi^{d-2}}\Gamma\left({d\over2}\right)^2\left((G_Nm)^{1\over d-2}\over r\right)^{2d}.
\end{align}
\subsection{Two-loop insertions}
{\centering
\begin{table}
\begin{tabular}{ccc}
\begin{fmffile}{delta1twolooptriangle4pt12a}
\begin{fmfgraph*}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain,tension=10}{i1,v1,v4,v2,i2}
\fmffreeze
\fmf{dbl_wiggly,tension=3}{v3,v5,v2}
\fmf{dbl_wiggly}{v3,v1}
\fmf{dbl_wiggly,tension=.1}{v5,v4}
\fmf{dbl_wiggly,tension=5}{v3,o1}
\fmfv{decor.shape=square,decor.filled=empty, decor.size=5thick,label=1,label.dist=0}{v1}
\end{fmfgraph*}
\end{fmffile},&
\begin{fmffile}{delta1twolooptriangle4pt12b}
\begin{fmfgraph*}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain,tension=10}{i1,v1,v4,v2,i2}
\fmffreeze
\fmf{dbl_wiggly,tension=3}{v3,v5,v2}
\fmf{dbl_wiggly}{v3,v1}
\fmf{dbl_wiggly,tension=.1}{v5,v4}
\fmf{dbl_wiggly,tension=5}{v3,o1}
\fmfv{decor.shape=square,decor.filled=empty, decor.size=5thick,label=1,label.dist=-2}{v4}
\end{fmfgraph*}
\end{fmffile},&
\begin{fmffile}{delta1twolooptriangle4pt12c}
\begin{fmfgraph*}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain,tension=10}{i1,v1,v4,v2,i2}
\fmffreeze
\fmf{dbl_wiggly,tension=3}{v3,v5,v2}
\fmf{dbl_wiggly}{v3,v1}
\fmf{dbl_wiggly,tension=.1}{v5,v4}
\fmf{dbl_wiggly,tension=5}{v3,o1}
\fmfv{decor.shape=square,decor.filled=empty, decor.size=5thick,label=1,label.dist=-2}{v2}
\end{fmfgraph*}
\end{fmffile},\\
\begin{fmffile}{delta1twolooptriangle4pt21a}
\begin{fmfgraph*}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain,tension=10}{i1,v1,v4,v2,i2}
\fmffreeze
\fmf{dbl_wiggly,tension=3}{v3,v5,v1}
\fmf{dbl_wiggly}{v3,v2}
\fmf{dbl_wiggly,tension=.1}{v5,v4}
\fmf{dbl_wiggly,tension=5}{o1,v3}
\fmfv{decor.shape=square,decor.filled=empty, decor.size=5thick,label=1,label.dist=-2}{v1}
\end{fmfgraph*}
\end{fmffile},&
\begin{fmffile}{delta1twolooptriangle4pt21b}
\begin{fmfgraph*}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain,tension=10}{i1,v1,v4,v2,i2}
\fmffreeze
\fmf{dbl_wiggly,tension=3}{v3,v5,v1}
\fmf{dbl_wiggly}{v3,v2}
\fmf{dbl_wiggly,tension=.1}{v5,v4}
\fmf{dbl_wiggly,tension=5}{o1,v3}
\fmfv{decor.shape=square,decor.filled=empty, decor.size=5thick,label=1,label.dist=-2}{v4}
\end{fmfgraph*}
\end{fmffile},&
\begin{fmffile}{delta1twolooptriangle4pt21c}
\begin{fmfgraph*}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain,tension=10}{i1,v1,v4,v2,i2}
\fmffreeze
\fmf{dbl_wiggly,tension=3}{v3,v5,v1}
\fmf{dbl_wiggly}{v3,v2}
\fmf{dbl_wiggly,tension=.1}{v5,v4}
\fmf{dbl_wiggly,tension=5}{o1,v3}
\fmfv{decor.shape=square,decor.filled=empty, decor.size=5thick,label=1,label.dist=-2}{v2}
\end{fmfgraph*}
\end{fmffile},\\
\begin{fmffile}{delta1twolooptriangle3a}
\begin{fmfgraph*}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain}{i1,v1,vph1a,vph1b,vph1c,v4,vph2,v2,i2}
\fmffreeze
\fmf{dbl_wiggly}{v3,v1}
\fmf{dbl_wiggly}{v3,v2}
\fmf{dbl_wiggly,tension=3}{o1,v5,v3}
\fmf{dbl_wiggly,left=.5,tension=.1}{v5,v4}
\fmfv{decor.shape=square,decor.filled=empty, decor.size=5thick,label=1,label.dist=-2}{v1}
\end{fmfgraph*}
\end{fmffile},&
\begin{fmffile}{delta1twolooptriangle3b}
\begin{fmfgraph*}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain}{i1,v1,vph1a,vph1b,vph1c,v4,vph2,v2,i2}
\fmffreeze
\fmf{dbl_wiggly}{v3,v1}
\fmf{dbl_wiggly}{v3,v2}
\fmf{dbl_wiggly,tension=3}{o1,v5,v3}
\fmf{dbl_wiggly,left=.5,tension=.1}{v5,v4}
\fmfv{decor.shape=square,decor.filled=empty, decor.size=5thick,label=1,label.dist=-2}{v4}
\end{fmfgraph*}
\end{fmffile},&
\begin{fmffile}{delta1twolooptriangle3c}
\begin{fmfgraph*}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain}{i1,v1,vph1a,vph1b,vph1c,v4,vph2,v2,i2}
\fmffreeze
\fmf{dbl_wiggly}{v3,v1}
\fmf{dbl_wiggly}{v3,v2}
\fmf{dbl_wiggly,tension=3}{o1,v5,v3}
\fmf{dbl_wiggly,left=.5,tension=.1}{v5,v4}
\fmfv{decor.shape=square,decor.filled=empty, decor.size=5thick,label=1,label.dist=-2}{v2}
\end{fmfgraph*}
\end{fmffile},\\
\begin{fmffile}{delta1twolooptriangle4pta}
\begin{fmfgraph*}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain,tension=10}{i1,ov1,ovph1,ovph2,ov3,ovph3,ovph4,ov2,i2}
\fmffreeze
\fmf{dbl_wiggly,tension=.3}{v3,ov1}
\fmf{dbl_wiggly,tension=.3}{v3,ov2}
\fmf{dbl_wiggly,tension=.3}{v3,ov3}
\fmf{dbl_wiggly,tension=1}{o1,v3}
\fmfv{decor.shape=square,decor.filled=empty, decor.size=5thick,label=1,label.dist=-2}{ov1}
\end{fmfgraph*}
\end{fmffile},&
\begin{fmffile}{delta1twolooptriangle4ptb}
\begin{fmfgraph*}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain,tension=10}{i1,ov1,ovph1,ovph2,ov3,ovph3,ovph4,ov2,i2}
\fmffreeze
\fmf{dbl_wiggly,tension=.3}{v3,ov1}
\fmf{dbl_wiggly,tension=.3}{v3,ov2}
\fmf{dbl_wiggly,tension=.3}{v3,ov3}
\fmf{dbl_wiggly,tension=1}{o1,v3}
\fmfv{decor.shape=square,decor.filled=empty, decor.size=5thick,label=1,label.dist=-2}{ov3}
\end{fmfgraph*}
\end{fmffile},&
\begin{fmffile}{delta1twolooptriangle4ptc}
\begin{fmfgraph*}(100,50)
\fmfstraight
\fmfleftn{i}{2}
\fmfrightn{o}{1}
\fmf{plain,tension=10}{i1,ov1,ovph1,ovph2,ov3,ovph3,ovph4,ov2,i2}
\fmffreeze
\fmf{dbl_wiggly,tension=.3}{v3,ov1}
\fmf{dbl_wiggly,tension=.3}{v3,ov2}
\fmf{dbl_wiggly,tension=.3}{v3,ov3}
\fmf{dbl_wiggly,tension=1}{o1,v3}
\fmfv{decor.shape=square,decor.filled=empty, decor.size=5thick,label=1,label.dist=-2}{ov2}
\end{fmfgraph*}
\end{fmffile} .
\end{tabular}
\caption{Insertion of the non-minimal coupling in the two-loop graph}
\label{fig:2loopinsertions}
\end{table}
}
For the insertion of the non-minimal coupling $\delta^{(1)}S^{\rm ct.}$
in~\eqref{e:Sctn1} in the two-loop graph one needs to sum over all the
contributions in table~\ref{fig:2loopinsertions}.
The classical limit of the sum of all these graphs leads to the
following contribution to the stress-tensor
\begin{multline}\label{e:delta1T2}
\delta^{(1)}\langle T_{\mu\nu}^{(2)}\rangle =-{128\pi^2(d-2)\alpha^{(1)}(d)\over 3(d-4)(3d-4)(d-1)^2}
(G_Nm)^{2(d-1)\over d-2}m \vec{q}^2\Bigg((3d^3-19d^2+28d-10)\delta_{\mu}^0\delta_{\nu}^0\cr+ (3d^3-15d^2+18d-4)\big({q_{\mu}q_{\nu}\over q^2}-\eta_{\mu\nu}\big)\Bigg) J_{(2)}(\vec{q}^2),
\end{multline}
which leads to the following contributions to the metric components
\begin{align}
\delta^{(1)} h_{0}^{(3)}(r,d)&=-{512\alpha^{(1)}(d)\over d-1}\frac{\Gamma({d\over2})^3}{\pi^{{3\over2}(d-2)}}\left((G_Nm)^{1\over d-2}\over r\right)^{3d-4},\\
\delta^{(1)} h_1^{(3)}(r,d)&=\frac{256 \alpha^{(1)}(d)\left(3 d^3-23 d^2+46 d-28\right)}{(d-4) (d-2) (d-1)^2 (3 d-4)}\frac{\Gamma({d\over2})^3}{\pi^{{3\over2}(d-2)}}\left((G_Nm)^{1\over d-2}\over r\right)^{3d-4},\cr
\nonumber \delta^{(1)} h_{2}^{(3)}(r,d)&=-\frac{256\alpha^{(1)}(d) \left(3 d^3-15 d^2+18 d-4\right)}{(d-4) (d-2) (d-1)^2}\frac{\Gamma({d\over2})^3}{\pi^{{3\over2}(d-2)}}\left((G_Nm)^{1\over d-2}\over r\right)^{3d-4}.
\end{align}
\subsection{The renormalised metric in four
dimensions}\label{sec:renorD4}
The metric components have ultraviolet poles in four dimensions from
two-loop order. We show how the addition of the non-minimal couplings
leads to finite renormalised metric components.
\subsubsection{The two-loop renormalisation}
\label{sec:two-loop-form}
The two-loop metric components in~\eqref{e:htwoloopdiv} have a divergence in
four dimensions ($d=3$)
\begin{align}\label{e:htwoloopdivpole4D}
h_{0}^{(3)}(r,d)& =O(1),\cr
h_1^{(3)}(r,d)&=-{2\over 3(d-3)}\left(G_Nm\over r\right)^3+O(1),\cr
h_{2}^{(3)}(r,d)&={2\over d-3}\left(G_Nm\over r\right)^3+O(1)\,.
\end{align}
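These residues follow directly from the general-$d$ expressions in~\eqref{e:htwoloopdiv}: at $d=3$ the denominator of $h_0^{(3)}$ contains no $(d-3)$ factor, while the numerator polynomials of $h_1^{(3)}$ and $h_{2}^{(3)}$ reduce to $\left(7d^4-63d^3+214d^2-334d+212\right)\big|_{d=3}=2$ and $(d-2)^2\left(2d^3-13d^2+25d-10\right)\big|_{d=3}=2$, with $\rho(r,3)=G_Nm/r$.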
This divergence is cancelled by adding the metric contribution from
the non-minimal coupling in~(\ref{e:hnonmin0})
\begin{equation}
h_i^{\rm renor.~(3)}(r,d):= h_i^{(3)}(r,d)+ \delta^{(1)} h_i^{(1)}(r,d), \qquad i=0,1,2\,
\end{equation}
and setting the $\alpha^{(1)}(d)$ coefficient to be
\begin{equation}\label{e:alpha4d}
\alpha^{(1)}(d)={1\over 12(d-3)}+a^{(1)}(3)-{\log(2)\over6}+O(d-3)\,.
\end{equation}
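Indeed, expanding the counter-term contributions~(\ref{e:hnonmin0}) near $d=3$ with this choice of $\alpha^{(1)}(d)$ gives, at the level of the poles,
\begin{equation}
\delta^{(1)} h_1^{(1)}(r,d)={2\over 3(d-3)}\left(G_Nm\over r\right)^3+O(1),\qquad
\delta^{(1)} h_{2}^{(1)}(r,d)=-{2\over d-3}\left(G_Nm\over r\right)^3+O(1)\,,
\end{equation}
which cancel the divergences in~\eqref{e:htwoloopdivpole4D}.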
The resulting renormalised two-loop metric reads
\begin{align}\label{e:htwolooprenorm}
h_{0}^{\rm renor.~(3)}(r,d)&= 2\left(G_Nm\over r\right)^3+O(d-3),\cr
h_1^{\rm renor.~(3)}(r,d)&= {4\over3} \left(-{1\over2}+6a^{(1)}(3)+\log\left(r C_E\over G_Nm\right)
\right) \left(G_N m \over r\right)^3 +O(d-3),\cr
h_{2}^{\rm renor.~(3)}(r,d)&= 4 \left({1\over3}-6a^{(1)}(3)- \log\left(rC_E\over G_Nm\right)\right) \left(G_N m \over r\right)^3+O(d-3)\,,
\end{align}
where we have introduced the following combination of the
Euler-Mascheroni constant~\cite{Lagarias} and $\pi$
\begin{equation}
\label{e:Cedef}
C_E:= \sqrt{\pi} e^{ \gamma_E\over2} \,.
\end{equation}
The divergence in the two-loop stress-tensor
in~\eqref{e:Tstatictwoloop}
\begin{equation}\label{e:Ttwoloopdiv}
\langle T_{\mu\nu}^{(2)}\rangle=
{G_N^2 \vec q^2m^3\over 6(d-3)} \Bigg(2\delta_\mu^0 \delta_\nu^0
+ \left({q_\mu
q_\nu\over q^2}-\eta_{\mu\nu}\right)\Bigg)+O(1)\,,
\end{equation}
is cancelled by adding the contribution in~\eqref{e:T1nonmin} from
the non-minimal coupling with the following choice of $\beta^{(1)}(d)$ coefficient
\begin{equation}\label{e:beta4d}
\beta^{(1)}(d)=-{1\over3(d-3)}+O(1)\,.
\end{equation}
Notice that this computation does not determine the finite parts of
$\alpha^{(1)}(d)$ and $\beta^{(1)}(d)$: they are free scales in the
logarithms. We will show in section~\ref{sec:matching-amplitude} that
this freedom is completely reabsorbed in the change of coordinates, so that the
Schwarzschild-Tangherlini metric does not have any ambiguity.
\subsubsection{The three-loop renormalisation}
The three-loop metric components in~\eqref{e:hthreeloopdiv} have a
divergence in four dimensions ($d=3$) given by
\begin{align}\label{e:hthreeloopdivexp}
h_{0}^{(4)}(r,d)&=-{2\over 3(d-3)}\left(G_N m\over r\right)^4+O(1),\cr
h_{1}^{(4)}(r,d)&={2\over 3(d-3)}\left(G_N m\over r\right)^4+O(1),\cr
h_2^{(4)}(r,d) &=-{4\over 3(d-3)}\left(G_N m\over r\right)^4+O(1)\,.
\end{align}
Adding to this the contribution~(\ref{e:hnonmin1}) from the insertion
of the non-minimal coupling at one loop, and using the value of
$\alpha^{(1)}(d)$ determined in~\eqref{e:alpha4d}, we obtain the
renormalised three-loop metric
\begin{align}\label{e:hthreelooprenorm}
h_{0}^{\rm renorm. (4)}(r)&=\left(-{32\over3}+8a^{(1)}(3)+{4\over3}\log\left(rC_E\over G_Nm \right) \right)\left(G_N m\over r\right)^4+O(d-3),\cr
h_{1}^{\rm renorm. (4)}(r)&=\left(10-8a^{(1)}(3)-{4\over3}\log\left(rC_E\over G_Nm\right)\right)\left(G_N m\over r\right)^4+O(d-3),\cr
h_2^{\rm renorm. (4)}(r)&=\left(-{86\over3}+16a^{(1)}(3)+{8\over 3}\log\left(rC_E\over G_Nm\right)\right) \left(G_N m\over r\right)^4+O(d-3)\,.
\end{align}
The classical three-loop contribution to the stress-tensor has an
ultraviolet divergence
\begin{equation}\label{e:T3loopDiv}
\langle T_{\mu\nu}^{(3)}(\vec q)\rangle=-{\pi
G_N^3m^4 |\vec q|^{3}\over 48(d-3)}
\Bigg(3\delta_\mu^0 \delta_\nu^0
+
\left({q_\mu q_\nu\over q^2}-\eta_{\mu\nu}\right)\Bigg) +O(1)\,,
\end{equation}
which is cancelled by the addition of the contribution
in~\eqref{e:T3loopct} from the non-minimal coupling and the choice of
$\alpha^{(1)}(d)$ in~\eqref{e:alpha4d}.
\subsection{The renormalised metric in five
dimensions}\label{sec:renorD5}
The metric components have ultraviolet divergences in five dimensions from
one-loop order. We show how the addition of the non-minimal couplings
leads to finite renormalised metric components.
\subsubsection{The one-loop renormalisation}\label{sec:D5ctoneloop}
The metric components in~(\ref{e:honeloop}) have a divergence in
five dimensions ($d=4$) given by
\begin{align}\label{e:honeloopdiv}
h_{0}^{(2)}(r,d)&=O(1),\cr
h_1^{(2)}(r,d)&=-\frac{40 }{9 (d-4)}\left(G_N m\over \pi r^2\right)^2+O(1),\cr
h_2^{(2)}(r,d)&=\frac{160 }{9(d-4) }\left(G_N m\over \pi r^2\right)^2+O(1) \,.
\end{align}
The divergences in the metric components~\eqref{e:honeloopdiv} are
cancelled for the choice
\begin{equation}\label{e:alpha5d}
\alpha^{(1)}(d)= {5\over18\pi (d-4)}+a^{(1)}(5)+O(d-4)\,,
\end{equation}
so that the renormalised metric components
\begin{equation}
h_i^{\rm renor.~(2)}(r,d):= h_i^{(2)}(r,d)+ \delta^{(1)} h_i^{(1)}(r,d), \qquad i=0,1,2\,,
\end{equation}
have a finite expansion near $d=4$
\begin{align}\label{e:honelooprenorm5D}
h_{0}^{\rm renor.~(2)}(r,d)&={32\over9}\left(G_Nm\over \pi r^2\right)^2+O(d-4),\\
h_1^{\rm renor.~(2)}(r,d)&={20\over9} \left({14\over15}+{36 a^{(1)}(5)\pi\over5}+ \log\left(r^2C_E^2\over G_Nm\right)
\right) \left(G_N m \over\pi r^2\right)^2 +O(d-4),\cr
\nonumber h_{2}^{\rm renor.~(2)}(r,d)&=-{80\over9} \left({7\over30}+{36a^{(1)}(5)\pi\over5}+ \log\left(r^2C_E^2\over G_Nm\right)\right) \left(G_N m \over \pi r^2\right)^2+O(d-4)\,,
\end{align}
where $C_E$ is defined in~\eqref{e:Cedef}.
Thanks to the properties of the Fourier transform, only the
coefficient $\alpha^{(1)}(d)$ enters the counter-term contribution to the
metric components.
To determine as well the coefficient $\beta^{(1)}(d)$ in~\eqref{e:Sctn1} one
needs to look at the divergences of the stress-tensor
\begin{equation}\label{afe}
\langle T_{\mu\nu}^{(1)}\rangle= {G_N m^2 \vec q^2\over
18\pi (d-4)} \left(7\delta_\mu^0\delta_\nu^0+10 \left( {q_\mu
q_\nu\over q^2}-\eta_{\mu\nu}\right)\right)+O(1)
\end{equation}
The cancellation of the pole fixes the pole part of $\beta^{(1)}(d)$ near
five dimensions
\begin{equation}
\beta^{(1)}(d)=-\frac{7}{18\pi (d-4)}+O(1)\,.
\end{equation}
\subsubsection{The two-loop renormalisation}
The two-loop metric components in~(\ref{e:htwoloopdiv}) have a divergence in
five dimensions ($d=4$)
\begin{align}\label{e:htwoloopdivpole5D}
h_{0}^{(3)}(r,d)& =-{320\over 27(d-4)}\left(G_Nm\over \pi r^2\right)^3 +O(1),\cr
h_1^{(3)}(r,d)&={160\over 27(d-4)}\left(G_Nm\over \pi r^2\right)^3+O(1),\cr
h_{2}^{(3)}(r,d)&=-{320\over 27(d-4)}\left(G_Nm\over \pi r^2\right)^3+O(1)\,.
\end{align}
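As in four dimensions, these residues follow from evaluating~\eqref{e:htwoloopdiv} at $d=4$, where the numerator factors reduce to $(3d-7)(d-2)^3\big|_{d=4}=40$, $\left(7d^4-63d^3+214d^2-334d+212\right)\big|_{d=4}=60$ and $(d-2)^2\left(2d^3-13d^2+25d-10\right)\big|_{d=4}=40$.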
The divergences in the metric components~\eqref{e:htwoloopdiv} are
cancelled for the choice made at one-loop in~\eqref{e:alpha5d},
so that the renormalised metric components
\begin{equation}
h_i^{\rm renor.~(3)}(r,d):= h_i^{(3)}(r,d)+ \delta^{(1)} h_i^{(2)}(r,d), \qquad i=0,1,2\,,
\end{equation}
have a finite expansion near $d=4$
\begin{align}\label{e:htwolooprenorm5D}
h_{0}^{\rm renor.~(3)}(r,d)&= {160\over27}\left({2\over15}+{36 a^{(1)}(5)\pi\over5}+ \log\left(r^2C_E^2\over G_Nm\right)
\right) \left(G_Nm\over \pi r^2\right)^3+O(d-4),\cr
h_1^{\rm renor.~(3)}(r,d)&=-{80\over27} \left({7\over15}+{36 a^{(1)}(5)\pi\over5}+ \log\left(r^2C_E^2\over G_Nm\right)
\right) \left(G_N m \over\pi r^2\right)^3 +O(d-4),\cr
h_{2}^{\rm renor (3)}(r,d)&= {160\over27} \left(-{1\over15}+{36a^{(1)}(5)\pi\over5}+ \log\left(r^2C_E^2\over G_Nm\right)\right) \left(G_N m \over \pi r^2\right)^3+O(d-4)\,.
\end{align}
The two-loop stress-tensor in~\eqref{e:Tstatictwoloop} is not finite
in $d=4$ as it diverges like
\begin{multline}
\langle T_{\mu\nu}^{(2)}\rangle= {5 G_N^2 m^3 |\vec
q|^4\over 162\pi^2 (d-4)^2} \left(4\delta_\mu^0\delta_\nu^0+ {q_\mu
q_\nu\over q^2}-\eta_{\mu\nu}\right)\cr
+ {5 G_N^2m^3|\vec q|^4\over 162
\pi^2 (d-4)} \Big(\left(4 \log \left(\vec q^2\over 4\pi\right)+4 \gamma_E -{183\over20}\right)\delta_\mu^0\delta_\nu^0 \cr+\left(\log \left(\vec q^2\over4\pi\right)+ \gamma_E -{41\over20}\right)\left( {q_\mu
q_\nu\over q^2}-\eta_{\mu\nu}\right)\Big)+O(1)\,.
\end{multline}
The addition of the counter-term in~\eqref{e:T3loopct} from the
non-minimal couplings in~(\ref{e:Sctn1}) is not enough for making the
stress-tensor finite in $d=4$
\begin{multline}\label{e:T2deltaT1}
\langle T_{\mu\nu}^{(2)}\rangle+\delta^{(1)}\langle
T_{\mu\nu}^{(1)}\rangle= - {5 G_N^2 m^3 |\vec
q|^4\over 162\pi^2 (d-4)^2} \left(4\delta_\mu^0\delta_\nu^0+ {q_\mu
q_\nu\over q^2}-\eta_{\mu\nu}\right)\cr
+{5G_N^2m^3|\vec q|^4\over 162
\pi^2 (d-4)} \Bigg(\left(4\log \left( G_Nm\right)-{144\pi a^{(1)}(5)\over5}-{109\over60} \right)\delta_\mu^0\delta_\nu^0 \cr
+\left(\left(\log \left(G_Nm\right) +{17\over60}-{36\over5} \pi a^{(1)}(5)\right)\left( {q_\mu
q_\nu\over q^2}-\eta_{\mu\nu}\right)\right)\Bigg)+O(1)\,.
\end{multline}
We need to consider the addition of the counter-term from the
insertion of $\delta^{(2)}S^{\rm ct.}$ evaluated in
section~\ref{e:delta2tree}, with the values of the coefficients near $d=4$ given by
\begin{align}
\beta_1^{(2)}(d)&= {1\over \pi^2}\left({10\over 81(d-4)^2 }+{109 +
1728 \pi a^{(1)}(5)\over 1944(d-4)}
+a^{(2)}(5)+O(d-4)\right),\cr
\alpha^{(2)}(d)+\frac12\beta_0^{(2)}(d)&=-{1\over 2\pi^2}\left({5\over 162(d-4)^2 }+\frac{432 \pi a^{(1)}(5)-17}{1944(d-4)}+b^{(2)}(5)+O(d-4)\right) \,,
\end{align}
which, plugged into~\eqref{e:T2nonmin}, cancel the divergences in~\eqref{e:T2deltaT1}
\begin{equation}
\langle T_{\mu\nu}^{(2)}\rangle+\delta^{(1)}\langle
T_{\mu\nu}^{(1)}\rangle+\delta^{(2)}\langle T^{(0)}_{\mu\nu}\rangle =O(1)\,.
\end{equation}
\subsubsection{The three-loop renormalisation}
The three-loop metric components in~(\ref{e:hthreeloopdiv}) have a divergence in
five dimensions ($d=4$)
\begin{align}\label{e:hthreeloopdivpole5D}
h_{0}^{(4)}(r,d)& ={1280\over 27(d-4)}\left(G_Nm\over \pi r^2\right)^4 +O(1),\cr
h_1^{(4)}(r,d)&=\left({400\over 81(d-4)^2}-\frac{20\left(101+120\log\left(r^2C_E^2\right)\right)}{243(d-4)}\right)\left(G_Nm\over \pi r^2\right)^4+O(1),\cr
h_{2}^{(4)}(r,d)&=\left({3200\over 81(d-4)^2}+\frac{160\left(187-120\log \left(r^2C_E^2\right)\right)}{243(d-4)}\right)\left(G_Nm\over \pi r^2\right)^4+O(1)\,.
\end{align}
The divergences in the metric components~\eqref{e:hthreeloopdiv} are
cancelled for the choice made at one-loop in~\eqref{e:alpha5d},
so that the renormalised metric components
\begin{equation}
h_i^{\rm renor.~(4)}(r,d):= h_i^{(4)}(r,d)+ \delta^{(1)} h_i^{(3)}(r,d)+(\delta^1)^2 h^{(2)}_i(r,d), \qquad i=0,1,2\,,
\end{equation}
have a finite expansion near $d=4$
\begin{align}\label{e:hthreelooprenorm5D}
h_{0}^{\rm renor.~(4)}(r,d)=& -{128\over 243}\left(23+324 a^{(1)}(5)\pi+45 \log\left(r^2C_E^2\over G_Nm\right)
\right) \left(G_Nm\over \pi r^2\right)^4+O(d-4),\cr
h_1^{\rm renor.~(4)}(r,d)=&{100\over81} \Bigg(\left({36 a^{(1)}(5)\pi\over5}+ \log\left(r^2C_E^2\over G_Nm\right)\right)\left({161\over30}+{36\over5} a^{(1)}(5)\pi+\log\left(r^2C_E^2\over G_Nm\right) \right)
\cr
& +{7085\over1800}\Bigg)\times \left(G_N m \over\pi r^2\right)^4 +O(d-4),\cr
h_{2}^{\rm renor.~(4)}(r,d)=&-{800\over81} \Bigg(\left({36 a^{(1)}(5)\pi\over5}+ \log\left(r^2C_E^2\over G_Nm\right)\right)\left({41\over15}-{36\over5} a^{(1)}(5)\pi-\log\left(r^2C_E^2\over G_Nm\right)\right)\cr
& +{2381\over900}\Bigg) \times \left(G_N m \over\pi r^2\right)^4 +O(d-4).
\end{align}
The three-loop stress-tensor in~\eqref{e:Tstaticthreeloop} is not finite
in $d=4$ as it diverges like
\begin{multline}
\langle T_{\mu\nu}^{(3)}\rangle= {25 G_N^3 m^4 |\vec
q|^6\over 5832\pi^3 (d-4)^3} \left(-{1\over2}\delta_\mu^0\delta_\nu^0+ {q_\mu
q_\nu\over q^2}-\eta_{\mu\nu}\right)\cr
+ {25 G_N^3m^4|\vec q|^6\over 3888
\pi^3 (d-4)^2} \Big(-{1\over2}\left( \log \left(\vec q^2\over 4\pi\right)+ \gamma_E -{41\over6}\right)\delta_\mu^0\delta_\nu^0 \cr+\left(\log \left(\vec q^2\over4\pi\right)+ \gamma_E -{17\over10}\right)\left( {q_\mu
q_\nu\over q^2}-\eta_{\mu\nu}\right)\Big)\cr
+ {225 G_N^3m^4|\vec q|^6\over 839808
\pi^3 (d-4)}
\Big({1\over2}\left({70939\over450}+\pi^2-18\left(\log \left(\vec q^2\over 4\pi\right)+ \gamma_E -{41\over6}\right)^2\right)\delta_\mu^0\delta_\nu^0 \cr
+\left({4769\over450}-\pi^2+18\left(\log \left(\vec q^2\over 4\pi\right)+ \gamma_E -{17\over10}\right)^2\right)\left( {q_\mu
q_\nu\over q^2}-\eta_{\mu\nu}\right)\Big)+O(1)\,.
\end{multline}
The addition of the counter-terms in $(\delta^1)^2\langle
T_{\mu\nu}^{(1)}\rangle$ in~\eqref{abc}, and $\delta^{(1)}\langle
T_{\mu\nu}^{(2)}\rangle$ in~\eqref{e:delta1T2} from the
non-minimal couplings in~(\ref{e:Sctn1}) is not enough for making the
stress-tensor finite in $d=4$
\begin{multline}
\langle T_{\mu\nu}^{(3)}\rangle+(\delta^1)^2\langle T_{\mu\nu}^{(1)}\rangle+\delta^{(1)}\langle T_{\mu\nu}^{(2)}\rangle
= {25 G_N^3 m^4 |\vec
q|^6\over 5832\pi^3 (d-4)^3} \left(-{1\over2}\delta_\mu^0\delta_\nu^0+ {q_\mu
q_\nu\over q^2}-\eta_{\mu\nu}\right)\cr
+ {25 G_N^3m^4|\vec q|^6\over 3888
\pi^3 (d-4)^2} \Bigg(-{1\over2}\left( {25\over12}+{36\over5} a^{(1)}(5)\pi -\log\left(G_Nm\right)\right)\delta_\mu^0\delta_\nu^0 \cr+\left({1\over60}+{36\over5} a^{(1)}(5)\pi -\log\left(G_Nm\right)\right)\left( {q_\mu
q_\nu\over q^2}-\eta_{\mu\nu}\right)\Bigg)\cr
- {25 G_N^3m^4|\vec q|^6\over 5184
\pi^3 (d-4)} \times\cr
\times \Bigg(\left({27487\over48600}+ a^{(1)}(5)\pi \left(1+{288\over25}a^{(1)}(5)\pi\right) -{\log\left(G_Nm\right)\over3}\left({7\over2}+\log\left(G_Nm\right)+{72\over5}a^{(1)}(5)\pi \right)\right)\delta_\mu^0\delta_\nu^0 \cr
+ \left({6749\over16200}- {6a^{(1)}(5)\pi\over25} \left(1+144a^{(1)}(5)\pi\right) -{\log\left(G_Nm\right)}\left({19\over30}+\log\left(G_Nm\right)-{72\over5}a^{(1)}(5)\pi \right)\right)\left( {q_\mu
q_\nu\over q^2}-\eta_{\mu\nu}\right)\Bigg)\cr
+O(1)\,.
\end{multline}
We need to consider the addition of the counter-term from the
insertion of $\delta^{(3)}S^{\rm ct.}$ evaluated in
section~\ref{e:delta3tree}, with the values of the coefficients near $d=4$ given by
\begin{align}
\beta_1^{(3)}(d)&=\frac{25}{11664 \pi ^3
(d-4)^3}+ \frac{5 (432 \pi a^{(1)} (5)+125)}{93312 \pi ^3 (d-4)^2}\cr
&+\frac{559872 (\pi a^{(1)}(5))^2+486000 \pi a^{(1)} (5)+27487}{6718464 \pi ^3 (d-4)}+O(1),\cr
\alpha^{(3)}(d)+\frac14\beta_0^{(3)}(d)&=\frac{25}{11664 \pi ^3
(d-4)^3}+\frac{2160 \pi a^{(1)}(5)+5}{93312 \pi ^3 (d-4)^2}\cr
&+\frac{559872 (\pi a^{(1)}(5))^2+3888 \pi a^{(1)}(5)-6749}{6718464 \pi ^3 (d-4)}+O(1)\,,
\end{align}
which, plugged into~\eqref{e:T3nonmin}, cancel the remaining divergences
\begin{equation}
\langle T_{\mu\nu}^{(3)}\rangle+\delta^{(1)}\langle
T_{\mu\nu}^{(2)}\rangle+(\delta^{(1)})^2\langle T_{\mu\nu}^{(1)}\rangle+\delta^{(3)}\langle
T_{\mu\nu}^{(0)}\rangle =O(1)\,.
\end{equation}
\subsection{The renormalised stress-tensor in six
dimensions}\label{sec:renorD6}
In six dimensions, the metric components are finite to all orders in
perturbation, but the two-loop
stress-tensor in~\eqref{e:Tstatictwoloop} presents an ultraviolet
divergence in six dimensions ($d=5$)
\begin{equation}
\langle T_{\mu\nu}^{(2)}\rangle= -{G_N^2 m^3 |\vec
q|^6\over 40320\pi^2 (d-5)} \left(49\delta_\mu^0\delta_\nu^0+ 15\left({q_\mu
q_\nu\over q^2}-\eta_{\mu\nu}\right)\right)+O(1)\,,
\end{equation}
which is cancelled by the addition of the insertion of the non-minimal
coupling $\delta^{(3)}S^{\rm ct.}$ at tree-level in~\eqref{e:T3nonmin}
with the choice of the coefficients
\begin{align}
\alpha^{(3)}(d)+\frac14\beta^{(3)}_0(d)&=-{15\over80640\pi^2(d-5)}+O(1),\cr
\beta_1^{(3)}(d)&= -{49\over40320\pi^2(d-5)}+O(1)\,.
\end{align}
\section{The Schwarzschild-Tangherlini metric in de Donder gauge in
four, five and six dimensions }\label{sec:deDonder}
The Schwarzschild-Tangherlini~\cite{Tangherlini:1963bw} space-time metric in $d+1$ dimensions is given by
the Tangherlini solution, using $\rho(r,d)$ defined in~(\ref{e:rhodef}),\footnote{In
spherical coordinate the metric reads
\begin{equation}
ds^2 = \left(1-{\mu\over r^{d-2}}\right)dt^2-{dr^2\over
 1-{\mu\over r^{d-2}}}-r^2 d\Omega_{d-1}^2
\end{equation}
with $\mu={16\pi G_Nm\over (d-1)\Omega_{d-1}}$ and
$\Omega_{d-1}={2\pi^{d\over2}\over \Gamma\left(d\over2\right)}$
is the area of the unit $(d-1)$-sphere.
}
\begin{equation}
ds^2_{\rm Schw}= \left(1- 4{d-2\over d-1} \rho(r,d) \right)dt^2-
d\vec x^2- {4{d-2\over d-1} \rho(r,d) \over 1- 4{d-2\over d-1} \rho(r,d)}{(\vec x\cdot
d\vec x)^2\over r^2}\,.
\end{equation}
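One can check that this is the Tangherlini solution of the footnote: with the definition of $\rho(r,d)$ in~(\ref{e:rhodef}) one has $4{d-2\over d-1}\rho(r,d)={\mu\over r^{d-2}}$, and using $d\vec x^2=dr^2+r^2d\Omega_{d-1}^2$ together with $(\vec x\cdot d\vec x)^2/r^2=dr^2$, the spatial part combines into $-dr^2/(1-\mu/r^{d-2})-r^2d\Omega_{d-1}^2$.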
As explained in section~\ref{sec:schw} the amplitude computation
selects the de Donder gauge in~(\ref{e:deDonderGauge}).
We make the coordinate transformation $(t,\vec x)\to (t, f(r)\vec x)$
so that the Schwarzschild metric reads
\begin{equation}\label{e:SchwaF}
ds^2=h_0(r) dt^2- h_1(r) d\vec x^2-h_2(r) {(\vec x\cdot
d\vec x)^2\over r^2},
\end{equation}
with $r=|\vec x|$ and
\begin{align}
\label{e:hfinitedef} h_0(r)&:=1-4{d-2\over d-1}\, {\rho(r,d)\over f(r)^{d-2}}, \\
h_1(r)&:= f(r)^2, \cr
\nonumber h_2(r)&:=-f(r)^2+f(r)^{d-2} {(f(r)+r{df(r)\over dr})^2\over
f(r)^{d-2}-4{d-2\over d-1}\rho(r,d) }\,.
\end{align}
The de Donder gauge condition~(\ref{e:deDonderGauge}) then reads
\begin{equation}\label{e:dDf}
2(d-1)h_2(r)= r {d\over dr}\left(h_0(r)+(d-2) h_1(r)-h_2(r)\right)\,.
\end{equation}
We will be solving the de Donder gauge
condition~(\ref{e:deDonderGauge}) in four dimensions ($d=3$), five
dimensions ($d=4$) and six dimensions ($d=5$), using the post-Minkowskian
expansion
\begin{equation}\label{e:fpostN}
f(r)= 1+ \sum_{n\geq1} f_n(r) \rho(r,d)^n
\end{equation}
with the condition at each order that
\begin{equation}\label{e:bdy}
\lim_{r\to +\infty} f_n(r)/r^n =0\,.
\end{equation}
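For instance, at the first order in $\rho(r,d)$ the gauge condition can be solved for arbitrary $d$: inserting $f(r)=1+f_1\rho(r,d)$ in~\eqref{e:hfinitedef} gives $h_2(r)=\big(4{d-2\over d-1}-2(d-2)f_1\big)\rho(r,d)+O(\rho^2)$, and~(\ref{e:dDf}) then fixes the constant $f_1={2\over d-1}$, which reproduces the leading terms of the explicit solutions in four, five and six dimensions below.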
\subsection{The metric in the de Donder gauge in four dimensions}\label{sec:soldeDonder}
The de Donder gauge condition~(\ref{e:deDonderGauge}) in $d=3$ reads
\begin{equation}\label{e:dDfd3}
4h_2(r)= r {d\over dr}\left(h_0(r)+h_1(r)-h_2(r)\right)\,,
\end{equation}
supplemented with the asymptotic boundary condition
\begin{equation}\label{e:finf4d}
\lim_{r\to\infty} f(r)=1\,.
\end{equation}
The gauge condition~(\ref{e:dDfd3}) implies either that $f(r)=C /r$, which does
not satisfy the boundary condition~(\ref{e:finf4d}), or that $f(r)$
satisfies, with $x=G_N m/r$, the differential equation
\begin{multline}\label{e:diffGauge}
x f(x)^3(2x-f(x)) {d^{2}f(x)\over dx^2}+ \left(x f \left( x \right)
\right) ^{2} \left(df(x)\over dx \right) ^{2}\cr
+2\, f \left( x \right)^3 (f(x)-3x ){df(x)\over dx}-3\, \left( f \left( x \right) \right) ^{4}+8\,
\left( f \left( x \right) \right) ^{3}x+ \left( f \left( x \right)
\right) ^{2}-4\,f \left( x \right) x+4\,{x}^{2}
=0.
\end{multline}
We solve equation~(\ref{e:diffGauge}) order by order in
$G_N m$, using the expansion~\eqref{e:fpostN} and the boundary condition~\eqref{e:bdy}.
The result to the order $(G_N m)^7$ is given by
\begin{multline} \label{e:ffinite}
f(r)=1+{G_Nm\over r}+2\left( G_N m\over r\right)^2+ {2\over3} \log
\left(\frac{r C_3}{G_N m}\right)\left(G_N m\over
r\right)^3\cr
+ \left( {2\over3}-{4\over3}
\log \left(\frac{r C_3}{G_N m}\right)\right)\left(G_N m\over r\right)^4
+ \left(
-\frac{21}{25}+{32\over15} \log \left(\frac{rC_3}{G_N m}\right)\right)\left(G_N
m\over r\right)^5\cr
+ \left(\frac{112}{75}-{28\over15} \log\left(r C_3\over G_N
m\right)\right) \left(G_N
m\over r\right)^6\cr
+\left(\frac{50023}{34300}
+\frac{1139 }{2205} \log\left(r C_3\over G_N
m\right)+ {2\over 7}\log\left(r C_3\over G_N
m\right)^2\right)\left(G_N
m\over r\right)^7
+O(G_N^8)\,.
\end{multline}
This solution is finite and has $\log(r)$ terms from the order
$G_N^3$. The solution has a single constant of integration $C_3$
associated with the scale of the logarithm.
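For the reader who wants to reproduce the lowest orders of this expansion, a minimal computer-algebra sketch is given below (written in Python with the {\tt sympy} package; the package and the variable names are ours and are not part of the derivation). The polynomial ansatz used there is only consistent below the order $x^3$ at which the logarithms of~\eqref{e:ffinite} first appear.
\begin{verbatim}
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
# polynomial ansatz: log terms only enter at order x^3
f   = 1 + c1*x + c2*x**2
fp  = sp.diff(f, x)
fpp = sp.diff(f, x, 2)

# left-hand side of the de Donder gauge equation, with x = G_N m / r
lhs = sp.expand(x*f**3*(2*x - f)*fpp + (x*f)**2*fp**2
                + 2*f**3*(f - 3*x)*fp
                - 3*f**4 + 8*f**3*x + f**2 - 4*f*x + 4*x**2)

# impose the equation order by order in x
eqs = [sp.Eq(lhs.coeff(x, n), 0) for n in range(2)]
print(sp.solve(eqs, [c1, c2], dict=True))   # [{c1: 1, c2: 2}]
\end{verbatim}
Running this sketch returns $c_1=1$ and $c_2=2$, in agreement with the first two orders of~\eqref{e:ffinite}; beyond this order the ansatz must be supplemented by the $\log\left(rC_3/G_Nm\right)$ terms.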
\subsubsection{The metric perturbation}
In $d=3$ we derive components of the metric in perturbation by
plugging the expression for $f(r)$ in~\eqref{e:ffinite}
in~\eqref{e:hfinitedef}.
We obtain for the time component
\begin{multline}\label{e:h0finite}
h^{\rm dD}_0(r)=1-2\frac{ G_N m}{r}+2\left(\frac{ G_N
m}{r}\right)^2+2\left(\frac{ G_N m}{r}\right)^3+\left(\frac43 \log
\left(\frac{r C_3}{G_N m}\right)-6\right) \left(G_N m\over
r\right)^4\cr
+\left(-\frac{16}3 \log
\left(\frac{rC_3}{G_N m}\right)+{10\over3}\right) \left(G_N m\over
r\right)^5 +\left(\frac{124}{15} \log
\left(\frac{rC_3}{G_N m}\right)+\frac{424}{75}\right) \left(G_N m\over
r\right)^6 \cr
+\Bigg(-\frac{8}{9} \log
\left(\frac{rC_3}{G_N m}\right)^2+\frac{16}{15} \log
\left(\frac{rC_3}{G_N m}\right)
-\frac{674}{75}\Bigg) \left(G_N m\over
r\right)^7
+O (G_N^8),
\end{multline}
and for the spatial components
\begin{multline}\label{e:h1finite}
h^{\rm dD}_1(r)=1+2\frac{ G_N m}{r}+5\left(\frac{ G_N m}{r}\right)^2+
\left({4\over3} \log
\left(\frac{rC_3}{G_N m}\right)+ 4\right)\left(G_N m\over
r\right)^3\cr
+
\left(-\frac43 \log
\left(\frac{rC_3}{G_N m}\right) +\frac{16}{3}\right)\left(G_N m\over r\right)^4
+ \left(\frac{64}{15} \log
\left(\frac{rC_3}{G_N m}\right) -\frac{26}{75}\right)\left(G_N m\over r\right)^5\cr
+\Bigg(\frac{4}{9} \log
\left(\frac{rC_3}{G_N m}\right)^2-\frac{24}{5}\log
\left(\frac{rC_3}{G_N m}\right)
+\frac{298}{75}\Bigg) \left(G_N m\over
r\right)^6+O(G_N ^7),
\end{multline}
and
\begin{multline}\label{e:h2finite}
h^{\rm dD}_2(r)=-7\left(\frac{ G_N m}{r}\right)^2-\left(4 \log
\left(\frac{rC_3}{G_N m}\right)+{38\over3}\right)
\left(G_N m\over r\right)^3
+ \left(\frac83 \log \left(\frac{rC_3}{G_N
m}\right)-\frac{58}{3}\right)\left(G_N m\over r\right)^4\cr
- \left(\frac{16}{3} \log
\left(\frac{rC_3}{G_N m}\right) -\frac{32}{3}\right)\left(G_N m\over r\right)^5\cr
+\Bigg(\frac{4}{3} \log
\left(\frac{rC_3}{G_N m}\right)^2+\frac{508}{45} \log
\left(\frac{rC_3}{G_N m}\right)
+\frac{7378}{225}\Bigg) \left(G_N m\over
r\right)^6+O(G_N^7).
\end{multline}
Notice the appearance of the $\log(r)^2$ terms at the sixth
post-Minkowskian order, $G_N^6$, in the spatial components of the metric, which is
one order earlier than in the time component. The same
phenomenon happens for the $\log(r)$ contribution, which appears
one order earlier in the spatial components than in the time component.
\subsection{The metric in the de Donder gauge in five
dimensions}\label{sec:soldeDonder5d}
The de Donder gauge condition~(\ref{e:deDonderGauge}) in $d=4$ reads
\begin{equation}\label{e:dDfd4}
6h_2(r)= r {d\over dr}\left(h_0(r)+2h_1(r)-h_2(r)\right)\,,
\end{equation}
supplemented with the asymptotic boundary condition
\begin{equation}\label{e:finf5d}
\lim_{r\to\infty} f(r)=1\,.
\end{equation}
The gauge condition~(\ref{e:dDfd4}) implies either that $f(r)=C/r$, which does
not satisfy the boundary condition~(\ref{e:finf5d}), or that $f(r)$
satisfies, setting $x=G_N m/(\pi r^2)$, the differential equation
\begin{multline}\label{e:diffGauged4}
xf(x)^5 \left(8x-3 f(x)^2\right) {d^2f(x)\over dx^2}+8f(x)^4x^2 \left(d
f(x)\over dx\right)^2+f(x)^5 \left(3f(x)^2-16x\right) {df(x)\over
dx}\cr
-4f(x)^6+(16x+2)f(x)^4-{32\over3}x f(x)^2+{128x^2\over9}=0\,.
\end{multline}
We solve equation~(\ref{e:diffGauged4}) order by order in
$G_N m$, using the expansion~\eqref{e:fpostN} and the boundary condition~\eqref{e:bdy}.
The result to the order $(G_N m)^7$ is given by
\begin{multline}\label{e:fsol5d}
f(r)=1+\frac23 {G_N m\over \pi r^2}+\frac{10}9 \log\left(r^2C_2\over G_Nm\right) \left(G_N m\over \pi
r^2\right)^2 -\frac{4}{81} \left(-8+45 \log\left(r^2C_2\over G_Nm\right)\right)\left(G_N m\over \pi r^2\right)^3
\cr
+\frac{67+3780 \log\left(r^2C_2\over G_Nm\right)}{972}
\left(G_N m\over \pi r^2\right)^4-\frac{32963+156420 \log\left(r^2C_2\over G_Nm\right)-43200
\log\left(r^2C_2\over G_Nm\right)^2}{21870} \left(G_N m\over \pi r^2\right)^5 \cr
+\frac{ 409303+1620270 \log\left(r^2C_2\over G_Nm\right)-1087200 \log\left(r^2C_2\over G_Nm\right)^2}{131220}\left(G_N
m\over \pi r^2\right)^6\cr
-\frac{
11148022313+37508666370 \log\left(r^2C_2\over G_Nm\right)-64367301600 \log\left(r^2C_2\over G_Nm\right)^2}{2362944150}\left(G_N m\over
\pi r^2\right)^7\cr
-\frac{4939200000
}{2362944150}\log\left(r^2C_2\over G_Nm\right)^3\left(G_N m\over \pi r^2\right)^7+O(G_N^8).
\end{multline}
Again there is a single constant of integration $C_2$, the
scale of the $\log(r)$ terms appearing from the order $G_N^2$.
\subsubsection{The metric perturbation}\label{sec:metpert5D}
In $d=4$ we derive components of the metric in perturbation by
plugging the expression for $f(r)$ in~\eqref{e:fsol5d}
in~\eqref{e:hfinitedef}.
We obtain for the time component
\begin{multline}\label{e:h0finite5D}
h^{\rm dD}_0(r)=1-{8\over3} \frac{ G_N m}{ \pi r^2}+\frac{32}{9 }\left(\frac{ G_N m}{ \pi r^2}\right)^2+\frac{32 \left(-3+5 \log\left(r^2C_2\over G_N m\right)\right) }{27
}\left(\frac{ G_N m}{ \pi r^2}\right)^3\cr
-\frac{640 \left(-2+9 \log\left(r^2C_2\over G_N m\right)\right)} {243 } \left(\frac{ G_N m}{ \pi r^2}\right)^4+O(G_N^5)\,,
\end{multline}
and for the spatial components
\begin{multline}\label{e:h1finite5D}
h^{\rm dD}_1(r)=1+\frac{4 }{3 } \frac{ G_N m}{ \pi
r^2}+\frac{4\left(1+5 \log\left(r^2C_2\over G_N m\right)\right)
}{9 } \left(\frac{ G_N m}{ \pi r^2}\right)^2+\frac{\left(64-240 \log\left(r^2C_2\over G_N m\right)\right) }{81
}\left(\frac{ G_N m}{ \pi r^2}\right)^3\cr
+\frac{\left(323+2340 \log\left(r^2C_2\over G_N m\right)+600 \log
^2\left(r^2C_2\over G_Nm\right)\right)}{486 }\left(\frac{ G_N m}{ \pi r^2}\right)^4+O(G_N^5)\,,
\end{multline}
and
\begin{multline}\label{e:h2finite5D}
h^{\rm dD}_2(r)=\frac{40 \left(1-2 \log\left(r^2C_2\over G_N
m\right)\right) }{9 } \left(\frac{ G_N m}{ \pi r^2}\right)^2+\frac{32 \left(-4+5 \log\left(r^2C_2\over G_N m\right)\right) }{27 }\left(\frac{ G_N m}{ \pi r^2}\right)^3\cr
+\frac{8 \left(-31-1260 \log\left(r^2C_2\over G_N m\right)+300 \log\left(r^2C_2\over G_N m\right)^2\right) }{243 }\left(\frac{ G_N m}{ \pi r^2}\right)^4+O(G_N^5)\,.
\end{multline}
\subsection{The metric in the de Donder gauge in six dimensions}\label{sec:soldeDonder6d}
The de Donder gauge condition~(\ref{e:deDonderGauge}) in $d=5$ reads
\begin{equation}\label{e:dDfd5}
8h_2(r)= r {d\over dr}\left(h_0(r)+3h_1(r)-h_2(r)\right)\,,
\end{equation}
supplemented with the asymptotic boundary condition
\begin{equation}\label{e:finf6d}
\lim_{r\to\infty} f(r)=1\,.
\end{equation}
The gauge condition~(\ref{e:dDfd5}) implies either that $f(r)= C /r$, which does
not satisfy the boundary condition~(\ref{e:finf6d}), or that $f(r)$
satisfies, with $x=G_Nm/(\pi r^3)$, the differential equation
\begin{multline}\label{e:diffGauged5}
xf(x)^7 \left(6x-4 f(x)^3\right) {d^2f(x)\over dx^2}+9f(x)^6x^2 \left(d
f(x)\over dx\right)^2+f(x)^7 \left({8\over3}f(x)^3-10x\right) {df(x)\over
dx}\cr
-{5\over3}f(x)^8+f(x)^6+4xf(x)^5-3x f(x)^3+{9x^2\over4}=0\,.
\end{multline}
We solve equation~(\ref{e:diffGauged5}) order by order in
$G_N$, using the expansion~\eqref{e:fpostN} and the boundary condition~\eqref{e:bdy}.
Asking for an expression with only integer powers of $G_N$, the result to the order $G_N^7$ is given by
\begin{multline}\label{e:fsol6d}
f(r)=1+\frac{G_N m}{4 \pi r^3}-\frac{5 }{8 }\left(\frac{G_N m}{\pi r^3}\right)^2+\frac{2 }{3 }\left(\frac{G_N m}{\pi r^3}\right)^3-\frac{775}{1344}\left(\frac{G_N m}{\pi r^3}\right)^4+\frac{545977 }{537600 }\left(\frac{G_N m}{\pi r^3}\right)^5\cr
-\frac{15194099 }{10483200 }\left(\frac{G_N m}{\pi r^3}\right)^6+\frac{4421000509
}{1878589440}\left(\frac{G_N m}{\pi r^3}\right)^7+O(G_N^8)\,.
\end{multline}
The expression is uniquely determined and finite.
\subsubsection{The metric perturbation}\label{sec:metpert6D}
In $d=5$ we derive components of the metric in perturbation by
plugging the expression for $f(r)$ in~\eqref{e:fsol6d}
in~\eqref{e:hfinitedef}.
We obtain for the metric components
\begin{align}
h^{\rm dD}_0(r)&=1-\frac{3 G_N m}{2 \pi r^3}+\frac{9}{8 }\left(\frac{G_N m}{ \pi r^3}\right)^2-\frac{27 }{8 }\left(\frac{G_N m}{ \pi r^3}\right)^3+\frac{387 }{64 }\left(\frac{G_N m}{ \pi r^3}\right)^4+O(G_N^5)\,,\cr
h^{\rm dD}_1(r)&=1+\frac{G_N m}{2 \pi r^3}-\frac{19 }{16 }\left(\frac{G_N m}{ \pi r^3}\right)^2+\frac{49 }{48 }\left(\frac{G_N m}{ \pi r^3}\right)^3-\frac{577}{1344}\left(\frac{G_N m}{ \pi r^3}\right)^4+O(G_N^5)\,,\cr
\label{e:h2finite6D}
h^{\rm dD}_2(r)&=\frac{117}{16 }\left(\frac{G_N m}{ \pi r^3}\right)^2-\frac{45}{16}\left(\frac{G_N m}{ \pi r^3}\right)^3+\frac{1599 }{112}\left(\frac{G_N m}{ \pi r^3}\right)^4+O(G_N^5)\,.
\end{align}
\section{Recovering the Schwarzschild-Tangherlini metric from the amplitude computations}
\label{sec:matching-amplitude}
In this section we show how the amplitude computations match the Schwarzschild-Tangherlini metric in
four, five and six dimensions in the de Donder gauge of the previous section.
\subsection{The Schwarzschild metric in four dimensions}
\subsubsection{The first post-Minkowskian contribution $O(G_N)$}
Setting $d=3$ in the expressions for the metric perturbation from the tree-level amplitude
in~\eqref{e:htree} matches the de Donder gauge first post-Minkowskian order
in four dimension $(d=3)$ in~(\ref{e:h0finite})--(\ref{e:h2finite}).
\subsubsection{The second post-Minkowskian contribution $O(G_N^2)$}
At the order $G_N^2$, setting $d=3$ in the metric perturbation from the one-loop amplitude
in~\eqref{e:honeloop} matches the metric in the de Donder gauge
in four dimensions $(d=3)$ in~(\ref{e:h0finite})--(\ref{e:h2finite}).
\subsubsection{The third post-Minkowskian contributions $O(G_N^3)$}
At this order the components of the metric in the de Donder gauge in
four dimensions ($d=3$)
from~(\ref{e:h0finite})--(\ref{e:h2finite}) match the
metric components from the renormalised two-loop
amplitude computation in~(\ref{e:htwolooprenorm}) for the value of the constant
of integration
\begin{equation}\label{e:C3}
\log C_3= \log C_E-{7\over2}+6a^{(1)}(3)\,,
\end{equation}
where $ C_E$ is given in~\eqref{e:Cedef}.
With this identification we recover the results
of~\cite{Goldberger:2004jt} for the renormalisation of the metric
divergences and the coordinate change from the de Donder gauge to the
harmonic gauge from the world-line approach.
Substituting this value of $C_3$ in the solution~(\ref{e:ffinite})
completely determines the solution to the de Donder gauge in four dimensions
and the coordinate change in~\eqref{e:ffinite} to the Schwarzschild
metric in~(\ref{e:SchwaF}) in four dimensions. The parameter
$a^{(1)}(3)$ is a free parameter, which corresponds to the running
coupling in~\cite{Goldberger:2004jt}.
\subsubsection{The fourth post-Minkowskian contribution $O(G_N^4)$}
At the fourth post-Minkowskian order, we again obtain a diverging metric from the amplitude computation.
The finite metric components at this order in the de Donder gauge in four dimensions
$(d=3)$, obtained
from~(\ref{e:h0finite})--(\ref{e:h2finite}) using the value of the
constant of integration $C_3$ determined in~(\ref{e:C3}), are given by
\begin{align}
h^{\rm dD (4)}_{0}&= \left(-{32\over3}+8 a^{(1)}(3)+{4\over3} \log
\left(\frac{rC_E}{G_N m}\right)\right) \left(G_Nm\over r\right)^4,\\
h^{\rm dD(4)}_1&=
\left(10-8a^{(1)}(3)-{4\over3} \log
\left(\frac{rC_E}{G_N m}\right)\right)\left(G_N m\over r\right)^4,\cr
\nonumber h^{\rm dD(4)}_{2}&= \left(-{86\over 3}+16a^{(1)}(3)+ {8\over3} \log \left(\frac{rC_E}{G_N
m}\right)\right)
\left(G_N m\over r\right)^4\,.
\end{align}
This matches exactly the renormalised metric components from the three-loop
amplitude computation obtained in~(\ref{e:hthreelooprenorm}) with $d=3$.
\subsection{The Schwarzschild-Tangherlini metric in five dimensions}
\subsubsection{The first post-Minkowskian contribution $O(G_N)$}
Setting $d=4$ in the expressions for the metric perturbation from the tree-level amplitude
in~\eqref{e:htree} matches the de Donder gauge first post-Minkowskian order
in
five dimensions $(d=4)$ in~(\ref{e:h0finite5D})--(\ref{e:h2finite5D}).
\subsubsection{The second post-Minkowskian contribution $O(G_N^2)$}
The renormalised one-loop computation in~\eqref{e:honelooprenorm5D}
matches the expression at order $O(G_N^2)$ from the de Donder gauge
in~(\ref{e:h0finite5D})--(\ref{e:h2finite5D}) for the choice of the constant of
integration
\begin{equation}\label{e:C2}
\log C_2={11\over15} +2\log C_E + {36\pi\over 5} a^{(1)}(5)\,.
\end{equation}
Again there is a free parameter $a^{(1)}(5)$ which can be associated with a running coupling constant.
\subsubsection{The third post-Minkowskian contributions $O(G_N^3)$}
At this order in perturbation, the two-loop amplitude computation
had divergences that had to be renormalised to give~(\ref{e:htwolooprenorm5D}).
This matches exactly the finite component metric in the de Donder gauge in five dimensions
$(d=4)$ in~(\ref{e:h0finite5D})--(\ref{e:h2finite5D}), using the value of the
constant of integration $C_2$ determined in~(\ref{e:C2}), given by
\begin{align}\label{e:hhthreeloopdD5}
h^{\rm dD (3)}_{0}&= {160\over27}\left({2\over15}+{36 a^{(1)}(5)\pi\over5}+ \log\left(r^2C_E^2\over G_Nm\right)
\right) \left(G_Nm\over \pi r^2\right)^3+O(d-4),\\
h^{\rm dD(3)}_1&= -{80\over27} \left({7\over15}+{36 a^{(1)}(5)\pi\over5}+ \log\left(r^2C_E^2\over G_Nm\right)
\right) \left(G_N m \over\pi r^2\right)^3 +O(d-4),\cr
\nonumber h^{\rm dD(3)}_{2}&= {160\over27} \left(-{1\over15}+{36a^{(1)}(5)\pi\over5}+ \log\left(r^2C_E^2\over G_Nm\right)\right) \left(G_N m \over \pi r^2\right)^3+O(d-4)\,.
\end{align}
\subsubsection{The fourth post-Minkowskian contribution $O(G_N^4)$}
The three-loop amplitude computation diverges and the finite metric
component at the fourth post-Minkowskian order was obtained after
renormalisation in~(\ref{e:hthreelooprenorm5D}).
This matches exactly the finite component metric in the de Donder gauge in five dimensions
$(d=4)$ in~(\ref{e:h0finite5D})--(\ref{e:h2finite5D}), using the value of the
constant of integration $C_2$ determined in~(\ref{e:C2}), given by
\begin{align}\label{e:hhthreeloopdD}
h^{\rm dD (4)}_{0}&=- {128\over 243}\left(23+324 a^{(1)}(5)\pi+ 45\log\left(r^2C_E^2\over G_Nm\right)
\right) \left(G_Nm\over \pi r^2\right)^4+O(d-4),\cr
h^{\rm dD(4)}_1&= \Bigg({7085+69552 \pi
a^{(1)}(5)+93312(\pi a^{(1)}(5))^2\over1458} +{10\over 243} (161+432 \pi a^{(1)}(5)) \log\left(r^2C_E^2\over
G_Nm\right) \cr
&+ {100\over 81}\log\left(r^2C_E^2\over
G_Nm\right)^2
\Bigg) \left(G_N m \over\pi r^2\right)^4 +O(d-4),\cr
h^{\rm dD(4)}_{2}&= \Bigg({-19048-141696 \pi a^{(1)}(5)+373248 (\pi a^{(1)}(5))^2\over 729} +{160\over 243} (-41+216 \pi a^{(1)}(5)) \log\left(r^2C_E^2\over
G_Nm\right) \cr
&+ {800\over 81}\log\left(r^2C_E^2\over
G_Nm\right)^2
\Bigg) \left(G_N m \over\pi r^2\right)^4 +O(d-4)\,.
\end{align}
\subsection{The Schwarzschild-Tangherlini metric in six dimensions}
The metric components in six dimensions ($d=5$) are finite. They are
given up to the order $O(G_N^4)$
in~\eqref{e:h2finite6D} and are reproduced by the sum of the
contributions of the
tree-level amplitude in~(\ref{e:htree}), one-loop amplitude
in~(\ref{e:honeloop}), two-loop amplitude in~(\ref{e:htwoloopdiv}) and
three-loop amplitude in~(\ref{e:hthreeloopdiv}), after setting $d=5$ in
these expressions.
\section{Discussion} \label{sec:discussion}
General relativity can be considered in space-times of various
dimensions. It is therefore important to validate our current
understanding of the connection between scattering amplitudes and
classical general relativity in general
dimensions~\cite{KoemansCollado:2019ggb,Cristofoli:2020uzm}.
We have shown how to reconstruct the classical
Schwarzschild-Tangherlini metric from scattering amplitudes in four,
five and six dimensions. We have
extracted the classical contribution as defined
in~\cite{Bjerrum-Bohr:2018xdl} from the vertex function for the
emission of a graviton from a massive scalar field. For such a static metric, the classical contribution is
obtained by taking appropriate residues on the time components of the
loop momenta. These residues project the quantum scattering amplitude
onto contributions similar to the quantum tree graphs considered
in~\cite{Duff:1973zz}, by cutting the massive propagators.
The amplitudes develop ultraviolet
divergences which are renormalised by introducing higher-derivative non-minimal
couplings in~(\ref{e:Sctn}). The non-minimal coupling removes the
ultraviolet divergences in the stress-tensor and the metric
components. For the static solution the higher $n\geq2$ non-minimal
couplings only contribute through insertions in tree-level graphs.
Interestingly, in six dimensions the metric components are finite but
the stress-tensor has ultraviolet divergences. These divergences are
removed by adding counter-terms from non-minimal couplings. These
counter-terms do not induce any contribution to the metric components.
From the presence of ultraviolet poles in the master integrals
$J_{(l)}(\vec q^2)$ in~\eqref{e:Jnresult}, we conclude that in all dimensions one needs
to introduce an infinite set of higher-derivative non-minimal operators for
removing the ultraviolet divergences from the scattering
amplitude.
These counter-terms do not affect the
space-time geometry because their effect is reabsorbed by the change of coordinate from the
de Donder coordinate system to the Schwarzschild-Tangherlini coordinate
system.
The scattering amplitude approach presented in this work can be applied to
any effective field theory of gravity coupled to matter fields.
The amplitude computations, being performed in general
dimensions, lead to results that have an analytic dependence on the
space-time dimension. As black-hole solutions develop non-trivial properties in general
dimensions~\cite{Emparan:2008eg,Emparan:2013moa}, it is interesting to
apply the method of this paper to other black-hole metrics. The Kerr-Newman and Reissner-Nordstr\"om metrics in four dimensions have
been obtained
in~\cite{Donoghue:2001qc,BjerrumBohr:2002ks,Guevara:2018wpp,Chung:2018kqs,Moynihan:2019bor,Chung:2019yfs,Guevara:2019fsj,Cristofoli:2020hnk} by
considering the tree-level and one-loop vertex functions for the
emission of a graviton from a massive particle of spin $s$. The
higher order post-Minkowskian contributions should be obtained from
higher-loop amplitudes in a direct application of the methods used in
this work.
\acknowledgments
We would like to thank Emil Bjerrum-Bohr, Poul Damgaard, Paolo di Vecchia and Ludovic Plant\'e for discussions
and comments. The research of P. Vanhove has received funding from
the ANR grant ``Amplitudes'' ANR-17-CE31-0001-01, and is partially
supported by the Laboratory of Mirror Symmetry NRU HSE, RF Government
grant, ag. N$^\circ$ 14.641.31.0001. P.V. is grateful to the
I.H.E.S. for allowing him to use their computer resources.
\section{Introduction}
At the heart of what drives the bulk of innovation and activity in Silicon Valley and elsewhere is {\it scalability}. Scalability is that most-important, desirable attribute that “connotes the ability of a system to accommodate an increasing number of elements or objects, to process growing volumes of work gracefully, and/or be susceptible to enlargement” \cite[p.~195]{Bondi2000}. It means that a small start-up is capable of growing into a multinational corporation, serving a global audience in a short period of time. This unwavering commitment to scalability -- to identifying strategies for efficient growth -- is at the heart of what we refer to as “scale thinking.” Whether people are aware of it or not, scale thinking is all-encompassing. It is not just an attribute of one’s product, service, or company, but frames how one thinks about the world (what constitutes it and how it can be observed and measured), its problems (what is a problem worth solving versus not), and the possible technological fixes for those problems.
We argue that scale thinking represents a very particular form of growth, a form that has become the default and predominant approach within the current technology sector. Scale thinking presumes that everything can be made more efficient -- that products and services can be supplied and consumed at faster and faster speeds, for more and more people. Such efficiency is possible when production of the good or service can be expanded without significantly modifying the inputs (e.g., the type of raw material used, the type of labor required) or infrastructure necessary. Such growth is indisputably a good thing. For developers and investors, it means capturing greater market shares with lower levels of investment. Even those committed to social good over financial gain tend to believe that the efficient use of resources and the rapid expansion of potential solutions would help deal with the world’s ills faster and more effectively -- they are able to do more good with fewer resources.
This paper examines different facets of scale thinking and its implication on how we view technology and collaborative work. We argue that technological solutions grounded in scale thinking are unlikely to be as liberatory or effective at deep, systemic change as their purveyors imagine. As we will discuss, the logics that drive scale thinking are antithetical to constructing solutions which can produce systemic, equity-driven social change. Rather, solutions which {\it resist} scale thinking are necessary to undo the social structures which lie at the heart of social inequality. We draw on recent work on mutual aid networks and propose questions to ask of collaborative work systems as a means to evaluate technological solutions and identify resistances to scale thinking.
\section{Prior Literature on Scale \& What is Scale Thinking?}
“Scale thinking” is an approach that centers on and prioritizes {\it scalability}\footnote{Our intervention borrows heavily from Lilly Irani's \cite{irani2018} critique of "design thinking," which characterizes the concept as a type of racialized Western expertise that resists the imaginary mechanistic reasoning of a looming Asian labor threat. It also borrows from Ruha Benjamin's critique of design as a "colonizing project" which "submerg[es] so much heterogeneity" under its name \cite[pp. 175-76]{benjamin2019}.}. Scalability refers to the ability of a system to expand without having to change itself in substantive ways or rethink its constitutive elements \cite{Tsing2012}. Scalability goes hand-in-hand with other macro-level processes of making modernity legible, including standardization \cite{timmermans2010}, classification \cite{bowkerStar2000}, and colonial trade \cite{Tsing2012}. Investors seek technological start-ups which are designed to grow quickly -- businesses that are structured to meet any level of demand at a moment’s notice. In an essay titled "Startup = Growth," Y Combinator founder Paul Graham discusses how the distinguishing feature of startups is their ability to grow, stating that "not every newly founded company is a startup" and that most newly founded companies are service businesses. The startup is an instantiation of scale thinking which requires immediate growth. "A barbershop doesn't scale," he quips. In scale thinking, this feature of any given system is assumed to be the most relevant for success and longevity. As Werner Vogels, chief technology officer of Amazon, wrote, “scalability cannot be an after-thought. It requires applications and platforms to be designed with scaling in mind…” \cite{Vogels2006}.
By centering scalability in the design and development process, scale thinking revolves around three key tenets: (1) Scalability is a morally good quality of a system; (2) Quantification is a necessary part of designing scalable systems; and (3) Scalability is achieved by identifying and manipulating quantifiable, core elements or attributes of a system.
First, in scale thinking, scalability is taken from being a desirable trait to a {\it morally good} trait. Insofar as capitalism values the maximization of production and the minimization of waste -- whether it is unused labor or raw materials and goods -- scalability is imbued with a moral goodness because it centers on designing a system that is able to serve a greater number of people with fewer resources over time. When a system is scalable, growth -- which itself is fundamentally a positive and good thing -- can take place without the loss of time, resources, or effort. The absence of scalability is equated with wastefulness: opportunities to meet the needs of an ever-growing body of people are lost and precious resources (money, time, raw materials) are utilized ineffectively. Especially at a time when natural resources are dwindling, while populations are growing, wastefulness is marked as evil and morally corrupt. It becomes taken-for-granted that to be a “good” company or person, one should {\it obviously} seek scalability.
Moreover, scalability is a prerequisite to technical implementation; accordingly, solutions which don't scale are morally abject. Large tech firms spend much of their time hiring developers who can envision solutions which can be implemented algorithmically. Code and algorithms which scale poorly are seen as undesirable and inefficient. Many of the most groundbreaking infrastructural developments in big tech have been those which increase scalability, such as Google File System (and subsequently the MapReduce computing schema) and distributed and federated machine learning models. Large-scale technical systems operate much like the markets that preceded them, and are considered by liberals to be in and of themselves a moralizing and virtuous force \cite{FourcadeHealy}.
Given the importance of growth to scalability, the measurement of growth and efficiency is paramount to measuring the worth and value of the scalable system. The greater the growth, the more efficient the use of inputs and resources, and the greater the capture of demand, the better and more valuable the system. In scale thinking, developing tactics for measuring such growth is as important as designing for scalability itself (because if no one sees the growth, did it really happen?). Quantification, or the production and act of transforming human observations and experiences into quantities based on a common metric \cite{EspelandStevens} is an important procedural aspect of scalability.
Scalability is achieved when a system is able to expand {\it without rethinking basic elements} \cite{Tsing2012}. The system is designed in such a way that can accommodate new inputs without changing its fundamental framework. To create such a system, thus, requires an understanding of what those core elements or attributes of the system are -- of what aspects can remain unchanged and what elements can be changed to accommodate a growing system. Improving efficiency also means eliminating excess and removing non-essential elements to ensure the system runs smoothly. To engage in scale thinking, then, is to try and reduce a complex process or interaction to its most elemental and simplistic exchange of input to output.
\section{Implications of scale thinking}
The implications of scale thinking are threefold. First, scale thinking requires units of work to be interchangeable, abstract, and universal. Units themselves can be servers, offices, databases, or individual workers. Next, scale thinking requires users to be of the same kind, or within a narrow set of predefined bounds, which is most often to the detriment of people "at the margins," marked by systems of racism, transphobia, and ableism. Lastly, scale thinking has implications for the legibility of users, with a natural endpoint in data gathering and datafication of individuals.
Scale thinking demands a standardization of inputs and outputs. Global supply chains demand modularity: shipping containers require only a pickup point and destination \cite{Posner2018}. As {\it The Atlantic} contributor Alexis Madrigal has suggested, containers are the "embodiment...of global capitalism." \cite{Madrigal2017}. What's significant is how readily that embodiment has translated to technological work.
The container metaphor readily extends to the realm of software development and deployment. Docker, the first widely used "container" service, allows developers to develop "standard unit[s] of software that package up code" and run them in different environments. Docker's logo is a whale with a set of shipping containers set on top of it. Kubernetes, likewise, is a software package used to manage "deployment, scaling, and management of containerized applications." Accordingly, its logo is a ship's wheel, steering containers to where they need to go.
\begin{figure}
\includegraphics[scale=0.5]{docker.png}
\includegraphics[scale=0.5]{kubernetes.png}
\caption{The Docker and Kubernetes logos, respectively.}
\end{figure}
The ubiquity of the container metaphor highlights the embeddedness of scale thinking in modern computing and infrastructural development; the metaphor extends to how developers think about work units, software teams, and technological organizations. If modern software companies are supposed to be "flat," then the work teams within them operate as standardized containers that take as input design and requirement documents, and output code, processes, and product. These organizations scale with the addition of more containerized work teams. At the base unit of this operation, the individual tech worker needs to have such standardized inputs and outputs. As such, scaling has a fractal quality within tech development.
While scale thinking permeates the organization of the production side of tech development, it sharply determines its consumptive side as well. Scale thinking requires a sameness of user, or a user who falls within a tightly bound constraint of imagination. In a containerized world, interchangeability is critical for the operation to work, and users operate in a standardized manner. The desire of the startup is to ensure that users fall within the bounds of the universal. Heterogeneity becomes antithetical to scalability, because the same product/service can no longer be duplicated to sufficiently serve a diffuse audience. A varied user base means that many different solutions are needed, rather than a scalable solution. Despite Graham's cry to "do things that don't scale," \cite{Graham2012} a startup's outputs need to be constrained insofar as they do just that.
Failures for users who fall outside the universality of scaled solutions abound. Media scholar Safiya Noble recounts in stark detail the racist and sexist failures of web search for queries for Black, Latina, and Asian women and girls \cite{Noble2018}. In the book's conclusion, she details how Kandis, the owner of a Black-serving salon in a college town of a predominantly white institution, had to contend with Yelp and the erasure of her business: "I quickly realized that Internet/Yelp told people that I did not exist" \cite[p.~175]{Noble2018}. Kandis goes on to detail the extensive hoops she has to go through to get listed on the service, and the hesitance of her primarily Black clientele to "check in" using Yelp's functionality, because of their sense of already being over-surveilled.
Datafication, then, becomes the endpoint of scale thinking. Much has been made of the impulse of datafication of the individual \cite{VanDjick, CheneyLippold} and its anti-Black, carceral dimensions \cite{Browne, benjamin2019}. Datafication's move -- and therefore scale thinking's move -- is to find ways to rationalize the individual into legible data points. A single data point is rarely useful on its own -- data points only matter insofar as they accumulate and move through the world as a new form of capital \cite{Sadowski}. This can only be made possible via creation of systems undergirded by scale thinking, and the building of systems which operate as massive accrual machines.
\section{Resistances to Scale Thinking}
In April 2020, at the height of the Coronavirus pandemic in New York City, George Farcasiu, the "lead automation engineer" of the mutual aid network Bed-Stuy Strong told VICE's technology-focused publication {\it Motherboard} that “We’re making sure we’re building tools that are about organizing people to interact with neighbors, not treating volunteers as boxes that do work.” He continued, “We’re not looking for a system that scales. We’re looking at the social system we have and augmenting and enabling that” \cite{Rose}.
On the flip side of this, tech companies continue to struggle with diversity and inclusion among their employees, despite years of investment and highly publicized efforts. One early strategy adopted by many tech companies was the installation of “people analytics” teams within human resources divisions to provide “data-driven,” scalable solutions \cite{Bersin}. Efforts centered on scalable tactics -- focusing on hiring percentages and inclusion metrics, implementing bias workshops, developing employee resource groups -- with little to no impact on the composition of the workforce \cite{RooneyKhorram} or the experiences of marginalized people within the company \cite{Tiku}.
These two examples illustrate two opposite, but active resistances to scale thinking. In the former, the resistance is centered on the hopeful and local in mutual aid networks. In the latter, however, we find resistance anchored in the pessimistic and anti-social. To conclude this piece, we draw on recent writings on mutual aid to discuss the ways in which scale limits -- and actively inhibits -- participation in tech and society. Indeed, scale thinking forces particular types of participation to operate as extractive or exploitative labor \cite{Sloane2020}. We ask readers to consider what potential resistances to scale thinking may look like, and invite them to think through what kinds of technology encourage collaborative work in ways which don't replicate its logics.
One such framework is that of mutual aid. Mutual aid allows a possible way to think of collaborative work more fruitfully. Critical legal scholar Dean Spade defines mutual aid as:
\begin{quote}
a form of political participation in which people take responsibility for caring for one another and changing political conditions, not just through symbolic acts or putting pressure on their representatives in government but by actually building new social relations that are more survivable. \cite[p.~136]{Spade}
\end{quote}
The concept of mutual aid has been around for years (many trace the concept to the anarchist theorist Peter Kropotkin) and the practice for much longer. It is a particular tradition of relationship building and has many local instantiations, including Black mutual aid traditions in the diaspora and the Black Panthers breakfast program in Oakland to name a few \cite{Harris, Heynen, Mochama}. Unlike other social assistance programs, mutual aid is oriented around collective problem-solving to develop strategies and obtain resources to help a community of people meet one another’s needs while simultaneously organizing towards different social and political relations, both among community members and between the community and other structures of power. It is the latter that distinguishes mutual aid from forms of charity or assistance: assistance provided between group members is not only about sharing the resources on hand, but changing what resources are made available and how they are distributed in general; it is about shifting dynamics of power.
For example, for the Black Panther Party, the Breakfast Program was an important mutual aid program that was integral to their overall political strategy to transform U.S. politics. In addition to providing necessary support and aid for the communities they served, it also functioned as a model that the federal government could use to establish a national school breakfast program \cite{Heynen}. The urgency of the COVID pandemic has revived interest and networks around mutual aid. In the US, with the lack of federal and state intervention resulting in mass infections and death, mutual aid groups have emerged as a means to provide material relief for vulnerable populations, including Black and brown, elderly, low-income, disabled, and queer and transgender people.
Mutual aid networks, by their nature, are not intended to “scale”. While scale thinking emphasizes abstraction and modularity, mutual aid networks encourage concretization and connection. Mutual aid is intended to operate as a mode of radical collective care \cite[p.~131]{Spade} in which individuals in the network have their direct material needs met, regardless of whether those receiving aid fall into a set of datafied categories, such as "deserving or undeserving." While scale thinking encourages top-down coordination, mutual aid considers building skills for "collaboration, participation, and decision making" \cite[p.~137]{Spade}. Mutual aid focuses on "how to organize human activity without coercion" \cite{Spade}. Mutual aid also encourages the building of solidarities between people with different needs. While mutual aid is not the only framework through which we can consider a move away from scale thinking-based collaborative work arrangements, we find it to be a fruitful one to theorize and pursue.
The Bed-Stuy neighborhood in New York City’s Brooklyn borough reflects gentrification trends. It is a mix of newly arrived residents who are white, more financially-secure, technologically-oriented, both professionally and personally, and long-time residents who are Black or Latinx, older, and more likely to be negatively harmed by COVID-19 and its related economic impacts \cite{NYSComptroller}.\footnote{It is also a neighborhood with a rich history of localized Black activism that opposed its designation as the “largest ghetto” \cite{Woodsworth}.} And, given the disparate impacts of COVID-19 on Black and Brown communities and lower-income communities, the desire to make the aid distribution system nimble enough to grow quickly and across more neighborhoods, if not entire cities and regions, is a well-intentioned one. Like many other “tech for social good” projects, it is not unreasonable to fixate on the end goal of the project - in this case, distributing money and food to as many people as possible - in the hopes of assisting the most number of people.
Yet, the developers who volunteered to build out the tech-enabled aspect of the work resisted building a system focused on scalability, explicitly focusing on developing a system that centers both the recipients of the aid and the volunteers who distribute the aid. Instead, they looked to using technology to augment the relationships formed between volunteers and recipients. As noted by Alyssa Dizon, a civic technologist and a volunteer with Bed-Stuy Strong:
\begin{quote}
The most important piece of our operation is our relationship to the neighborhood, our sensitivity to the community, our approach, our values, our culture as an organization. The tech is a layer of it, and it’s not the most important layer of it at all. \cite{McKenzie}
\end{quote}
The priority for developing the technological infrastructure for Bed-Stuy Strong is focused less on building a system that can expand quickly to serve the most people possible. Instead, it is focused on easing and strengthening social connections between the people involved in doing the mutual aid: the volunteers, the donors, and the recipients. The technology layer is built with the understanding that those involved, rather than being easily replaceable units, are not homogenous. Organizers presume that the people and their relationships with one another will change, including their roles within the mutual aid network. A volunteer may be a donor one week, and due to changes in circumstances, may become a recipient the next as well. People who receive aid may also volunteer their time to provide aid to others.
Furthermore, there is an expectation that the users of the mutual aid system are different and have different needs. Rather than building for a homogenous base of users or clients, the tech-oriented developers within Bed-Stuy Strong oriented themselves towards finding technology that could help them identify differences between users, such as those who are elderly, immuno-compromised, living with children, or in need of immediate assistance. By doing so, they can render assistance in a way that is most functional and appropriate for the recipient. Rather than trying to identify users who fit, or could be made to fit, into the system, they adapted the work of volunteers and donors to meet the needs of a vast array of recipients.
We conclude this section with a few considerations around how to organize collaborative work in technological systems. Spade proposes four questions in assessing reforms and tactics which we use as our jump-off point:
\begin{quote}
Does [the reform or tactic] provide material relief? Does it leave out an especially marginalized part of the affected group (e.g., people with criminal records, people without immigration status)? Does it legitimize or expand a system we are trying to dismantle? Does it mobilize people, especially those most directly impacted, for ongoing struggle? \cite[p.~133]{Spade}
\end{quote}
We add three more related questions for technical collaborative work systems: Does the technological system centralize power (either through coordination, data extraction, or authority) or distribute it between developers and users? Does the technological system treat the contributions and experiences of individuals as interchangeable or as uniquely essential? Lastly, does it open up avenues for participation, and are those avenues of participation mobilizing or demobilizing?
\section{Conclusion}
In this piece, we identify "scale thinking" as a mode of thinking prevalent in Silicon Valley and other technological development centers. We discuss the dimensions and implications of scale thinking, and how this type of thinking is universally valorized within technological development. We then introduce resistances to scale thinking, namely mutual aid, as an alternative mode of arranging social relations, and offer several provocations and questions for designers and theorists of cooperative and collaborative work systems.
Pandemics and disasters are sites of immense loss of life, property, and livelihood. They also starkly highlight what elements, at their root, are not working for most people and contribute to interlocking systems of inequality. Failures under disasters are often failures of technological scale and the attendant scale thinking which undergirds them. We hope that this set of provocations introduces a mode of evaluating whether systems created by designers and other tech workers are working for those who need them most.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
The last two decades have seen remarkable progress in the study of random interface growth, interacting
particle systems and random polymers within the Kardar-Parisi-Zhang (KPZ) universality class through the
identification of deep connections between probability, combinatorics, symmetric functions, queueing theory, random matrices and quantum integrable systems. The greatest progress has been made with narrow-wedge initial data
(for example see
\cite{baik_deift_johansson, baryshnikov, borodin_corwin_macdonald, cosz, johansson_2, o_connell2012, prahofer_spohn, tracy_widom_asep}) and
there are substantial differences in the case of flat initial data, see \cite{baik_rains, bisi_zygouras, bfps, ferrari, remenik_nguyen, sasamoto}.
The purpose of this paper is to prove multi-dimensional identities in law
between different models in the KPZ universality class with flat initial data.
These are closely related to identities involving reflected Brownian motions and point-to-line last passage percolation with exponential data proved recently in \cite{FW}. The results of this paper, together with \cite{FW}, suggest
the possibility that there may be more identities of this form and deeper algebraic reasons for why they hold.
On the one hand, these identities involve an interacting particle system called PushASEP (introduced in \cite{borodin2008}) in the presence of an additional wall at the origin. This is a continuous-time Markov chain $(Y_1(t), \ldots, Y_n(t))_{t \geq 0}$
taking values in $W^n_{\geq 0} = \{(y_1, \ldots, y_n) : 0 \leq y_1 \leq \ldots \leq y_n \text{ and } y_i \in \mathbb{Z}\}$ and
with the following evolution depending on $2n$ independent exponential clocks. Throughout we refer to the $i$-th co-ordinate as the $i$-th particle.
At rate $v_i$, the right-clock of the $i$-th particle rings and the $i$-th particle jumps to the right. All particles which have
(before the jump of the $i$-th particle) a position equal to the $i$-th particle position and an index greater than or equal to $i$ are
\emph{pushed} by one step to the right. At rate $v_i^{-1}$ the left-clock of the $i$-th particle rings and if the $i$-th particle has a position strictly larger than both the $(i-1)$-th particle and zero then
the $i$-th particle jumps by one step to the left; if not this jump is suppressed. In summary, particles \emph{push} particles with higher indices and are \emph{blocked} by particles with lower indices and a wall at the origin.
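For concreteness, these dynamics can be simulated directly. The following Python sketch (illustrative only; the function name, the use of NumPy and the flat initial condition $(0,\ldots,0)$ are our choices) runs the $2n$ exponential clocks and applies the push and block rules described above.
\begin{verbatim}
import numpy as np

def pushasep_wall(v, T, rng=np.random.default_rng(0)):
    """Sample (Y_1(T), ..., Y_n(T)) from the flat initial condition 0."""
    v = np.asarray(v, dtype=float)
    n = len(v)
    y = np.zeros(n, dtype=int)
    rates = np.concatenate([v, 1.0 / v])      # right clocks, then left clocks
    total = rates.sum()
    t = 0.0
    while True:
        t += rng.exponential(1.0 / total)     # time to the next clock ring
        if t > T:
            return y
        k = rng.choice(2 * n, p=rates / total)
        i = k % n
        if k < n:                             # right jump of particle i
            y[i] += 1
            for j in range(i + 1, n):         # push higher-indexed particles
                if y[j] < y[j - 1]:
                    y[j] = y[j - 1]
        else:                                 # left jump of particle i
            lower = y[i - 1] if i > 0 else 0  # blocked by the particle below or the wall
            if y[i] > lower:
                y[i] -= 1
\end{verbatim}
Calling \texttt{pushasep\_wall(v, T)} for large $T$ gives an approximate sample from the invariant measure discussed below.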
A second viewpoint is to relate the top particle in PushASEP with a wall
to the top particle in an ordered (or non-colliding) process, see Proposition \ref{two_sided_intertwining} and related statements in \cite{baryshnikov, biane2005, o_connell2012, warren}.
Let $(Z_1^{(v_n)}(t), \ldots, Z_n^{(v_1)}(t))_{t \geq 0}$ be a multi-dimensional continuous-time random walk where $Z_i^{(v_{n-i+1})}$ jumps to the right with rate $v_{n-i+1}$ and to the left with
rate $v_{n-i+1}^{-1}$. We construct from this an ordered process
$(Z^{\dagger}_1(t), \ldots, Z^{\dagger}_n(t))_{t \geq 0}$ by a Doob $h$-transform, see Section \ref{Harmonic_Functions}.
In the case $0 < v_n < \ldots < v_1$,
this is given by conditioning $(Z_1^{(v_n)}, \ldots, Z_n^{(v_1)})$
on the event of positive probability that $Z_1^{(v_n)} \leq \ldots \leq Z_n^{(v_1)}$.
The other side of the identities we prove, involve point-to-line last passage percolation times.
Let $\Pi_n^{\text{flat}}(k, l)$ denote the set of all directed (up and right) nearest neighbour paths from the point $(k, l)$
to the line $\{(i, j): i+j = n+1\}$ and
let
\begin{equation}
\label{lpp_defn}
G(k, l) = \max_{\pi \in \Pi_n^{\text{flat}}(k, l)} \sum_{(i, j) \in \pi} g_{ij}
\end{equation}
where $g_{ij}$ are an independent collection of geometric random variables with parameter $1- v_i v_{n-j+1}$
indexed by
$\{(i, j) : i, j \in \mathbb{Z}_{\geq 1} \text{ and } i + j \leq n+1\}$ and
with $0 < v_i < 1$ for each $i = 1, \ldots, n$.
The geometric random variables are defined as $P(g_{ij} = k) = (1-v_i v_{n-j+1}) (v_i v_{n-j+1})^{k}$ for all $k \geq 0$.
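Since a path from $(i,j)$ with $i+j<n+1$ must continue through $(i+1,j)$ or $(i,j+1)$, the point-to-line times satisfy $G(i, j) = g_{ij} + \max(G(i+1, j), G(i, j+1))$, with the convention that $G$ vanishes outside the triangle $\{i+j \leq n+1\}$. The following sketch (an illustration we add; the sampler and names are our choices) draws the environment and computes all the $G(i,j)$ by this recursion.
\begin{verbatim}
import numpy as np

def point_to_line_lpp(v, rng=np.random.default_rng(0)):
    """Return a dictionary {(i, j): G(i, j)} for i + j <= n + 1."""
    n = len(v)
    G = {}
    for i in range(n, 0, -1):
        for j in range(n + 1 - i, 0, -1):
            q = v[i - 1] * v[n - j]            # q = v_i v_{n-j+1}
            g = rng.geometric(1 - q) - 1       # P(g = k) = (1 - q) q^k, k >= 0
            G[i, j] = g + max(G.get((i + 1, j), 0), G.get((i, j + 1), 0))
    return G
\end{verbatim}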
\begin{theorem}
\label{top_particle_theorem}
Let $n \geq 1$ and suppose $0 < v_1, \ldots, v_n < 1$ and let $Y_n^*$ be distributed according to the top particle of PushASEP with a wall
in its invariant measure, let $Z_n^{\dagger}$ be the top particle in the ordered random walk above, see also \eqref{non_colliding_defn}, and $G(1, 1)$ be the point-to-line last passage percolation time defined by \eqref{lpp_defn}. Then
\begin{equation*}
Y_n^* \stackrel{d}{=} \sup_{t \geq 0} Z_n^{\dagger}(t) \stackrel{d}{=} G(1, 1).
\end{equation*}
\end{theorem}
The first identity in Theorem \ref{top_particle_theorem} follows from two representations for $Y_n^*$
and $\sup_{t \geq 0} Z_n^{\dagger}(t)$ as point-to-line last passage percolation times in a random
environment constructed from Poisson point processes. The equality in law then follows from
a time reversal argument.
The main content of Theorem
\ref{top_particle_theorem} is that either of these random variables is equal in distribution to a point-to-line last
passage percolation time.
This can be proven in two ways. The first method is to calculate the distribution function of
$\sup_{t \geq 0} Z_n^{\dagger}(t)$
by relating the problem to conditioning a multi-dimensional random walk to stay in a Weyl chamber of type C given that
it remains in a Weyl chamber of type A. This gives the distribution function of $\sup_{t \geq 0} Z_n^{\dagger}(t)$ as proportional to a symplectic Schur function divided by a Schur function. This can be identified as a known expression
for the distribution function of point-to-line last passage percolation in a geometric environment from \cite{Bisi_Zygouras2}. This proof
of Theorem \ref{top_particle_theorem} is given in Section \ref{Harmonic_Functions}.
The second method of proof is to view Theorem \ref{top_particle_theorem} as an equality of the marginal distributions of the largest co-ordinates in a multi-dimensional identity in law relating the whole invariant measure of PushASEP with a wall
to a vector of point-to-line last passage percolation times. This leads to our main result.
\begin{theorem}
\label{main_theorem}
Let $n \geq 1$ and suppose $0 < v_1, \ldots, v_n < 1$.
Let $(Y_1^*, \ldots, Y_n^*)$ be distributed according to the invariant measure of PushASEP with a wall and let
$(G(1, n), \ldots, G(1, 1))$ be a vector of point-to-line last passage percolation times defined in \eqref{lpp_defn}.
Then
\begin{equation*}
(Y_1^*, \ldots, Y_n^*) \stackrel{d}{=} (G(1, n), \ldots, G(1, 1)).
\end{equation*}
\end{theorem}
We give two proofs of Theorem \ref{main_theorem}. In the first proof, we prove in Section \ref{push_asep} a formula for the transition probability of PushASEP
with a wall, following the method of \cite{borodin2008}. From this we obtain an expression for the
probability mass function of
$(Y_1^*, \ldots, Y_n^*)$ in Proposition \ref{Invariant_measure}.
In Section \ref{point_to_line}, we use an interpretation of last passage percolation as a
discrete-time Markov chain, with a sequential update rule for particle positions, which has explicit
determinantal transition probabilities given in \cite{dieker2008}. In order to find the distribution of a vector of \emph{point-to-line last passage percolation times}, we
use the update rule of this discrete-time Markov chain while adding in a new particle at the origin after each time step.
In such a way we can find an explicit probability mass function for $(G(1, n), \ldots, G(1, 1))$ which agrees with that of
$(Y_1^*, \ldots, Y_n^*)$ and gives our first proof of Theorem \ref{main_theorem}.
The second proof of Theorem \ref{main_theorem} is to obtain this multi-dimensional equality in law as a marginal equality of a larger identity in law. We give this proof in Section \ref{dynamic_reversibility}.
In particular, we construct a multi-dimensional Markov process involving pushing and blocking
interaction which has (i) an invariant measure given by $\{G(i, j) : i + j \leq n+1\}$ and (ii)
a certain marginal given by PushASEP with a wall.
Moreover, the process we construct is \emph{dynamically reversible}. This notion has appeared in the queueing literature \cite{kelly} and
means that a process started in stationarity has the same distribution when run forwards and backwards in time
\emph{up to a
relabelling of the co-ordinates}. Dynamical reversibility leads to a convenient way of finding an invariant measure and can be used to deduce further properties of PushASEP with a wall. In particular, when started in stationarity the top particle of PushASEP with a wall evolves as a non-Markovian process with the same distribution when run forwards and backwards
in time. This is a property shared by the $\text{Airy}_1$ process and it is natural to expect that the
top particle in PushASEP with a wall
run in stationarity converges to the $\text{Airy}_1$ process.
We end the introduction by comparing with the results on PushASEP in Borodin and Ferrari \cite{borodin2008}.
When started from a step or periodic initial condition, the authors of \cite{borodin2008}
prove that
the associated height function converges to the $\text{Airy}_2$ or $\text{Airy}_1$ process respectively
(see also the seminal work \cite{bfps, sasamoto}). The choice of a periodic initial condition thus gives one way of accessing the KPZ universality class started from a flat interface. In this paper we
instead impose a wall at the origin and consider the invariant measure of PushASEP with a wall. This makes a substantial difference to the analysis and unveils different connections within the KPZ universality class with flat initial data.
\section{Proof of Theorem \ref{top_particle_theorem}}
\label{Harmonic_Functions}
\subsection{The all-time supremum of a non-colliding process}
\label{suprema_non_colliding}
We start by defining Schur and symplectic Schur functions. It will be sufficient for our purposes to define them according to their Weyl character formulas and we only remark that they can also be defined as a sum over weighted
Gelfand Tsetlin patterns and have a representation theoretic significance, see \cite{Fulton_Harris}.
Let $W^n = \{(x_1, \ldots, x_n) \in \mathbb{Z}^n: x_1 \leq \ldots \leq x_n\}$,
$W^n_{\geq 0} = \{(x_1, \ldots, x_n) \in \mathbb{Z}^n: 0 \leq x_1 \leq \ldots \leq x_n\}$
and
$W^n_{\leq 0} = \{(x_1, \ldots, x_n) \in \mathbb{Z}^n: x_1 \leq \ldots \leq x_n \leq 0\}$.
For $x \in W^n$ we define the Schur function $S_x : \mathbb{R}^n \rightarrow \mathbb{R}$ by
\begin{equation}
\label{schur_defn}
S_{x}(v) = \frac{\text{det}(v_i^{x_j + j - 1})_{i, j = 1}^n}{\text{det}(v_i^{j-1})_{i, j = 1}^n}
\end{equation}
and for $x \in W^n_{\geq 0}$ we define the symplectic Schur function $\text{Sp}_x : \mathbb{R}^n_{> 0} \rightarrow \mathbb{R}$
by
\begin{equation}
\label{symplectic_schur_defn}
\text{Sp}_{x}(v) = \frac{\text{det}\left(v_i^{x_j + j} - v_i^{-(x_j + j)}\right)_{i, j = 1}^n}{\text{det}(v_i^{j} - v_i^{-j})_{i, j = 1}^n}.
\end{equation}
Let $(Z_1^{(v_n)}(t), \ldots, Z_n^{(v_1)}(t))_{t \geq 0}$ denote a multi-dimensional continuous-time random walk
started from $(x_1, \ldots, x_n)$ where each component is independent and $Z_i^{(v_{n-i+1})}$ jumps to the right at rate $v_{n-i+1}$ and to the left with
rate $v_{n-i+1}^{-1}$. We define an ordered random walk
$(Z^{\dagger}_1(t), \ldots, Z^{\dagger}_n(t))_{t \geq 0}$ started from $x \in W^n$ as having a $Q$-matrix
given by a Doob $h$-transform: for $x \in W^n$
and $i = 1, \ldots, n$,
\begin{equation}
\label{non_colliding_defn}
Q_{Z^{\dagger}}(x, x \pm e_i) = \frac{S_{x \pm e_i}(v)}{S_{x}(v)} 1_{\{x \pm e_i \in W^n\}}.
\end{equation}
This is a version of $(Z_1^{(v_n)}(t), \ldots, Z_n^{(v_1)}(t))_{t \geq 0}$ with components conditioned to remain ordered as $Z_1 \leq \ldots \leq Z_n$.
It is related to a non-colliding random walk with components conditioned to remain strictly ordered by a co-ordinate change; for more information on non-colliding random walks we refer to \cite{konig2005, konig2002, o_connell_2003}.
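To make the $h$-transform concrete, the following sketch (illustrative only; it evaluates the determinant ratio in floating point, so it is intended only for small $n$, short time horizons and distinct $v_i$) computes $S_x(v)$ from~\eqref{schur_defn} and simulates the ordered walk with the jump rates~\eqref{non_colliding_defn}.
\begin{verbatim}
import numpy as np

def schur(x, v):
    """Weyl character formula (schur_defn); x weakly increasing, v distinct."""
    n = len(x)
    num = [[float(vi) ** (x[j] + j) for j in range(n)] for vi in v]
    den = [[float(vi) ** j for j in range(n)] for vi in v]
    return np.linalg.det(num) / np.linalg.det(den)

def ordered_walk(v, T, rng=np.random.default_rng(0)):
    """Simulate Z^dagger on W^n with rates S_{x +- e_i}(v)/S_x(v), from the origin."""
    n = len(v)
    x = [0] * n
    t = 0.0
    while True:
        moves, rates = [], []
        for i in range(n):
            for s in (1, -1):
                y = x.copy()
                y[i] += s
                if all(y[k] <= y[k + 1] for k in range(n - 1)):   # stay in W^n
                    moves.append(y)
                    rates.append(schur(y, v) / schur(x, v))
        total = sum(rates)
        t += rng.exponential(1.0 / total)
        if t > T:
            return x
        x = moves[rng.choice(len(moves), p=np.array(rates) / total)]
\end{verbatim}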
Define
$h_A : W^n \rightarrow \mathbb{R}$ by
$h_A(x) = \prod_{i = 1}^n v_{n-i+1}^{-x_i} S_x(v)$
and define
$h_C : W^n_{\leq 0} \rightarrow \mathbb{R}$ by
$h_C(x_1, \ldots, x_n) = \prod_{i = 1}^n v_{n-i+1}^{-x_i} \text{Sp}_{(-x_n, \ldots, -x_1)}(v)$.
\begin{proposition}
\label{two_sided_intertwining}
\begin{enumerate}[(i)]
\item $Q_{Z^\dagger}$ is a conservative $Q$-matrix.
Equivalently, $h_A$ is harmonic for $(Z_1^{(v_n)}(t), \ldots, Z_n^{(v_1)}(t))_{t \geq 0}$ killed when it leaves
$W^n$.
\item We have that
\begin{equation}
(Z_n^{\dagger}(t))_{t \geq 0} \stackrel{d}{=}
\left(\sup_{0 = t_0 \leq t_1 \leq \ldots \leq t_n = t} \sum_{i=1}^n (Z_i^{(v_{n-i+1})}(t_i) - Z_i^{(v_{n-i+1})}(t_{i-1}))\right)_{t \geq 0}.
\end{equation}
\end{enumerate}
\end{proposition}
This is a consequence of Theorem 5.10 in \cite{biane2005} and is proved by multidimensional versions of
Pitman's transformation. It is also closely related to the analysis in \cite{borodin2008}. In the case that only rightward jumps in $Z_i$ are present, this corresponds to a construction of a process on a Gelfand-Tsetlin pattern with pushing and blocking interactions \cite{warren_windridge}.
The statement above can also be proved as a consequence of push-block dynamics by minor modifications of the proof of Theorem 2.1 in \cite{warren_windridge}
and we describe these modifications in Section \ref{Intertwining}. The construction of a corresponding process
on a symplectic Gelfand Tsetlin pattern in \cite{warren_windridge} leads to the following.
\begin{lemma}[Theorem 2.3 of \cite{warren_windridge}]
\label{h_c_harmonic}
$h_C$ is harmonic for
$(Z^{(v_n)}(t), \ldots, Z^{(v_1)}(t))$ killed when it
leaves $W^n_{\leq 0}$.
\end{lemma}
This is a reflection through the origin of the result in \cite{warren_windridge} which considers
a process killed when it
leaves $W^n_{\geq 0}$.
\begin{proposition}[Corollary 7.7 of \cite{lecouvey}]
Suppose $0 < v_n < \ldots < v_1 < 1$.
\begin{enumerate}[(i)]
\item Let $T_A = \inf \{t \geq 0: (Z^{(v_n)}(t), \ldots, Z^{(v_1)}(t)) \notin W^n\}$.
Then for $x \in W^n$, we have $P_x(T_A = \infty) = \kappa_A h_A(x)$ where
\begin{equation*}
\kappa_A = \prod_{i < j} (v_i - v_j) \prod_{j=1}^{n-1} v_{j}^{-(n-j)}.
\end{equation*}
\item Let $T_C = \inf \{t \geq 0: (Z^{(v_n)}(t), \ldots, Z^{(v_1)}(t)) \notin W^n_{\leq 0}\}$. Then for $x \in W^n_{\leq 0}$,
we have $P_x(T_C = \infty) = \kappa_C h_C(x)$
where
\begin{equation*}
\kappa_C = \prod_{1 \leq i \leq j \leq n} (1 - v_i v_j ) \prod_{i < j} (v_i - v_j) \prod_{j=1}^{n-1} v_j^{-(n-j)}.
\end{equation*}
\end{enumerate}
\end{proposition}
The probability that a random walk remains within a Weyl chamber for all time is considered in a general setting in \cite{lecouvey}. In our setting, we give a direct proof using Proposition \ref{two_sided_intertwining} and Lemma \ref{h_c_harmonic}.
\begin{proof}
Proposition \ref{two_sided_intertwining} and Lemma \ref{h_c_harmonic} show that $h_A$ and $h_C$ are harmonic functions for
$(Z^{(v_n)}(t), \ldots, Z^{(v_1)}(t))$ killed when it leaves $W^n$ and
$W^n_{\leq 0}$ respectively.
We now check that $\kappa_A h_A$ and $\kappa_C h_C$ have the correct boundary behaviour.
Let $\lvert x \rvert = \sum_{i=1}^d x_i$ for $x \in \mathbb{R}^d$ and define $\partial W^n = \{x \notin W^n : \exists x' \in W^n \text{ with } \lvert x - x' \rvert = 1\}$ and $\partial W^n_{\leq 0} = \{x \notin W^n_{\leq 0} : \exists x' \in W^n_{\leq 0} \text{ with } \lvert x - x' \rvert = 1\}$. Then we can observe from \eqref{schur_defn} that $S_x(v) = 0$ for all $x \in \partial W^n$ because two columns in the determinant in the numerator of \eqref{schur_defn} coincide if $x_i = x_{i+1} + 1$ for some $i = 1, \ldots, n-1$. In a similar manner, $h_C(x_1, \ldots, x_n) = \prod_{i=1}^n v_{n-i+1}^{-x_i} \text{Sp}_{(-x_n, \ldots, -x_1)}(v) = 0$ for all $x \in \partial W^n_{\leq 0}$ due to the above observation
and that $h_C(x) = 0$ when $x_n = 1$.
We now consider the behaviour at infinity. For $h_A$, it is easy to see from the Weyl character formula \eqref{schur_defn} that
\begin{equation*}
\kappa_A \lim_{x_i - x_{i+1} \rightarrow -\infty} \prod_{i=1}^n v_{n-i+1}^{-x_i} S_x(v) = \kappa_A \frac{\prod_{j=1}^{n-1} v_{j}^{n-j}}{\prod_{i < j} (v_i - v_j)} = 1
\end{equation*}
where we use the limit above to mean $x_1, \ldots, x_n \rightarrow -\infty$ and $x_i - x_{i+1} \rightarrow -\infty$ for
each $i = 1, \ldots, n-1$.
For the symplectic Schur function we find \begin{equation*}
\lim_{x_{i} - x_{i+1} \rightarrow -\infty} \prod_{i=1}^n v_{n-i+1}^{-x_i} \text{Sp}_{(-x_n, \ldots, -x_1)}(v) =
\frac{(-1)^{n} \prod_{j = 1}^n v_j^{-j}}{\text{det}(v_i^j - v_i^{-j})_{i, j = 1}^n}
\end{equation*}
and use Eq. 24.17 from \cite{Fulton_Harris} to give a more explicit expression for the limiting constant
\begin{equation}
\label{fulton_harris_eqn}
\text{det}(v_i^j - v_i^{-j})_{i, j = 1}^n = (-1)^{n} \prod_{i < j} (v_i - v_j)
\prod_{1 \leq i \leq j \leq n}(1 - v_i v_j ) \prod_{j=1}^n v_j^{-n}.
\end{equation}
We conclude that
\begin{equation*}
\lim_{x_{i} - x_{i+1} \rightarrow -\infty} \kappa_C \prod_{i=1}^n v_{n-i+1}^{-x_i} \text{Sp}_{(-x_n, \ldots, -x_1)}(v) = 1.
\end{equation*}
In the case $0 < v_n < \ldots < v_1 < 1$ the process $(Z^{(v_n)}(t), \ldots, Z^{(v_1)}(t))$ almost surely has $Z_i^{(v_i)} \rightarrow -\infty$ for each $i = 1, \ldots, n$ and $Z_i^{(v_i)} - Z_{i+1}^{(v_{i+1})} \rightarrow -\infty$ for $i=1, \ldots, n-1$.
Therefore the above specifies the boundary behaviour of $\kappa_A h_A$ and $\kappa_C h_C$.
Suppose that $(h, T)$ either equals $(\kappa_A h_A, T_A)$ or $(\kappa_C h_C, T_C)$ and
let $Z_t^*$ denote $(Z^{(v_n)}(t), \ldots, Z^{(v_1)}(t))$ killed at the instant it leaves $W^n$ or $W^n_{\leq 0}$.
Then $(h(Z_t^*))_{t \geq 0}$ is a bounded martingale and
converges almost surely and in $L^1$ to a random variable $\mathcal{Y}$. From the boundary behaviour specified above, $\mathcal{Y}$ equals $1$ if $T = \infty$
and equals zero otherwise almost surely. Using this in the $L^1$ convergence shows that $h(x) = \lim_{t \rightarrow \infty} E_x(h(Z_t^*)) = P_x(T = \infty).$
\end{proof}
From this we can prove the second equality in law in Theorem \ref{top_particle_theorem} for a particular choice of rates.
Suppose $0 < v_n < \ldots < v_1 < 1$ which ensures that all of the following events have strictly positive probabilities,
and let $x \in W^n_{\leq 0}$. Then
\begin{IEEEeqnarray*}{rCl}
P_{(x_1, \ldots, x_n)}\left(\sup_{t \geq 0} Z_n^{\dagger} < 0\right)
& = &
\frac{P_{(x_1, \ldots, x_n)}\left(T_C = \infty \right)}{P_{(x_1, \ldots, x_n)}(T_A = \infty)} \\
& = & \frac{\kappa_C h_C(x)}{\kappa_A h_A(x)} \\
& = & \prod_{1 \leq i \leq j \leq n} (1 - v_i v_j) \frac{ \text{Sp}_{(-x_n, \ldots, -x_1)}(v)}{\text{S}_x(v)}.
\end{IEEEeqnarray*}
Let $(x_1, \ldots, x_n) \rightarrow (-\eta, \ldots, -\eta)$ and shift co-ordinates by $\eta$. Then
\begin{equation}
\label{largest_particle_non_colliding_sup}
P_{0}\left(\sup_{t \geq 0} Z_n^{\dagger} \leq \eta \right) =
P_{-\eta}\left(\sup_{t \geq 0} Z_n^{\dagger} \leq 0 \right)
= \prod_{1 \leq i \leq j \leq n} (1 - v_i v_j) \prod_{i = 1}^n v_{i}^{\eta}\text{Sp}_{\eta^{(n)}}(v)
\end{equation}
by using that $S_{(x_1, \ldots, x_n)}(v) \rightarrow \prod_{i=1}^n v_i^{-\eta}$ and the notation $\eta^{(n)} = (\eta, \ldots, \eta)$.
We compare this to Corollary 4.2 of \cite{Bisi_Zygouras2} which in our notation states that
\begin{equation}
\label{lpp_formula}
P(G(1, 1) \leq \eta) = \prod_{1 \leq i \leq j \leq n} (1 - v_i v_j) \prod_{i = 1}^n v_i^{\eta}
\text{Sp}_{\eta^{(n)}}(v).
\end{equation}
Equation \eqref{largest_particle_non_colliding_sup} and \eqref{lpp_formula} prove the second equality in law in Theorem \ref{top_particle_theorem} for $0 < v_n < \ldots < v_1 < 1$.
This can be extended to all distinct rates with $v_i < 1$ for each $i = 1, \ldots, n$ by observing that the law of the process $(Z_n^{\dagger}(t))_{t \geq 0}$
is invariant under permutations of the $v_i$. In particular, this holds for $\sup_{t \geq 0} Z_n^{\dagger}(t)$ and
also holds for $G(1, 1)$ from \eqref{lpp_formula}.
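As a numerical sanity check (not part of the proof; the snippet is self-contained, assumes NumPy, and the rate values are chosen arbitrarily), one can evaluate the right hand side of~\eqref{lpp_formula} and compare it with an empirical estimate of $P(G(1,1) \leq \eta)$ obtained by sampling the geometric environment and using the recursion $G(i,j) = g_{ij} + \max(G(i+1,j), G(i,j+1))$.
\begin{verbatim}
import numpy as np

def cdf_formula(eta, v):
    """Right hand side of (lpp_formula) for P(G(1,1) <= eta)."""
    n = len(v)
    num = [[vi ** (eta + j + 1) - vi ** (-(eta + j + 1)) for j in range(n)] for vi in v]
    den = [[vi ** (j + 1) - vi ** (-(j + 1)) for j in range(n)] for vi in v]
    sp_eta = np.linalg.det(num) / np.linalg.det(den)      # Sp_{(eta,...,eta)}(v)
    pref = np.prod([1 - v[i] * v[j] for i in range(n) for j in range(i, n)])
    return pref * np.prod(v) ** eta * sp_eta

def sample_G11(v, rng):
    """One sample of G(1,1) from the geometric environment of (lpp_defn)."""
    n = len(v)
    G = {}
    for i in range(n, 0, -1):
        for j in range(n + 1 - i, 0, -1):
            q = v[i - 1] * v[n - j]
            g = rng.geometric(1 - q) - 1
            G[i, j] = g + max(G.get((i + 1, j), 0), G.get((i, j + 1), 0))
    return G[1, 1]

rng = np.random.default_rng(1)
v = [0.3, 0.45, 0.6]                      # arbitrary distinct rates in (0, 1)
samples = np.array([sample_G11(v, rng) for _ in range(20000)])
for eta in (2, 5, 10):
    print(eta, cdf_formula(eta, v), (samples <= eta).mean())
\end{verbatim}
If~\eqref{lpp_formula} holds, the exact and empirical values printed for each $\eta$ should agree up to Monte Carlo error.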
\subsection{Time reversal}
We now prove that PushASEP with a wall started from $(0, \ldots, 0)$ has an interpretation as semi-discrete last passage percolation times in a
environment constructed from $2n$ Poisson point processes.
In particular,
\begin{equation}
\label{semi_discrete_lpp}
(Y_k(t))_{k=1}^n = \left(\sup_{0 \leq t_0 \leq t_1 \leq \ldots \leq t_k = t} \sum_{i=1}^k (Z_i^{(v_i)}(t_i) - Z_i^{(v_i)}(t_{i-1}))\right)_{k=1}^n
\end{equation}
where the $Z_i^{(v_i)}$ are a difference of two Poisson point processes.
In the proof of \eqref{semi_discrete_lpp} we will denote
the right hand side of \eqref{semi_discrete_lpp} by $(U_k(t))_{k = 1}^n$. We check that the evolution of this process is PushASEP with a wall. When $n = 1$, $U_1(t) = \sup_{0 \leq t_0 \leq t} (Z_1^{(v_1)}(t) - Z_1^{(v_1)}(t_0))$
and this evolves as PushASEP with a wall with one particle started from zero.
For the inductive step we note that adding in the $n$-th particle to $(U_k(t))_{k = 1}^n$ does not affect the
evolution of the first $(n-1)$ particles. Therefore we only need to consider the $n$-th particle given by
\begin{equation}
\label{poisson_skorokhod}
U_n(t) = \sup_{0 \leq s \leq t} (Z_n^{(v_n)}(t) - Z_n^{(v_n)}(s) + Y_{n-1}(s))
\end{equation}
where $Y_{n-1}$ is the $(n-1)$-th particle in PushASEP with a wall.
If $U_{n} > Y_{n-1}$ then the supremum in \eqref{poisson_skorokhod} is attained with a choice $s < t$
and $U_n$ jumps right or left whenever $Z_n^{(v_n)}$ does. If $U_n = Y_{n-1}$ then at least one of the
(possibly non-unique) maximisers of the supremum in \eqref{poisson_skorokhod} involves $s = t$.
This means that if $Z_n^{(v_n)}$ jumps to the right then $U_n$ jumps to the right;
if $Y_{n-1}$ jumps to the right then $U_n$ jumps to the right (this is the \emph{pushing} interaction); and if
$Z_n^{(v_n)}$ jumps to the left then $U_n$ is unchanged (this is the \emph{blocking} interaction).
Therefore $U_n$ defined by \eqref{poisson_skorokhod} follows the dynamics of the $n$-th particle in PushASEP
with a wall started from the origin.
Therefore \eqref{semi_discrete_lpp} follows inductively.
Equation \eqref{semi_discrete_lpp} has a similar form to Proposition \ref{two_sided_intertwining} part (ii) and
this, along with time reversal, establishes the following connection, see \cite{five_author, FW} for a similar
argument in a
Brownian context.
\begin{proposition}
\label{push_asep_non_colliding}
Let $Y_n^*$ be distributed as the top particle in PushASEP with a wall in its invariant measure and
$Z_n^{\dagger}$ be the top particle in the ordered random walk with $Q$-matrix given by \eqref{non_colliding_defn} and started from the origin. Then
\begin{equation*}
Y_n^* \stackrel{d}{=} \sup_{t \geq 0} Z_n^{\dagger}(t).
\end{equation*}
\end{proposition}
\begin{proof}
For any fixed $t$, we let $t - u_i = t_{k-i}$ and use time reversal of continuous-time random walks
$(Z_i^{(v_i)}(t) - Z_i^{(v_i)}(t - s))_{s \geq 0} \stackrel{d}{=} (Z_{n-i+1}^{(v_i)}(s))_{s \geq 0}$ to establish that
\begin{IEEEeqnarray}{rCl}
\label{time_reversal}
(Y_k(t))_{k=1}^n & = & \left(\sup_{0 \leq t_0 \leq \ldots \leq t_k = t} \sum_{i=1}^k (Z_i^{(v_i)}(t_i) - Z_i^{(v_i)}(t_{i-1}))\right)_{k=1}^n \IEEEnonumber\\
& = & \left(\sup_{0 = u_0 \leq \ldots \leq u_k \leq t} \sum_{i=1}^k (Z_i^{(v_i)}(t- u_{k-i}) - Z_i^{(v_i)}(t - u_{k-i+1}))
\right)_{k=1}^n \IEEEnonumber\\
& \stackrel{d}{=} & \left(\sup_{0 = u_0 \leq \ldots \leq u_k \leq t} \sum_{i=1}^k (Z_{n-i+1}^{(v_i)}(u_{k-i+1}) - Z_{n-i+1}^{(v_i)}(u_{k-i})) \right)_{k=1}^n
\end{IEEEeqnarray}
Taking the largest co-ordinate in this equality in law, relabelling the summation index from $i$ to $n-i+1$, and comparing with Proposition \ref{two_sided_intertwining} part (ii) shows that $Y_n(t) \stackrel{d}{=} \sup_{0 \leq s \leq t} Z_n^{\dagger}(s)$. Letting $t \rightarrow \infty$ completes the proof.
\end{proof}
\begin{lemma}
\label{continuity_invariant_measure}
The distribution of $(Y_1^*, \ldots, Y_n^*)$ is continuous in $(v_1, \ldots, v_n)$ on the set $(0, 1)^n$.
\end{lemma}
\begin{proof}
We will use the representation for $(Y_1^*, \ldots, Y_n^*)$ obtained by relabelling the sum $i$ to $k-i+1$ in \eqref{time_reversal}
and letting
$t \rightarrow \infty$,
\begin{equation*}
(Y_k^*)_{k=1}^n \stackrel{d}{=} \left(\sup_{0 = t_0 \leq \ldots \leq t_k < \infty} \sum_{i=1}^k (Z_{n-k+i}^{(v_{k-i+1})}(t_{k-i+1}) - Z_{n-k+i}^{(v_{k-i+1})}(t_{k-i})) \right)_{k=1}^n.
\end{equation*}
We fix $\epsilon > 0$ and construct realisations of $Z_i^{(v_{n-i+1})}$ for all $\epsilon < v_i < 1 - \epsilon$ on the same probability space.
To achieve this we define $2n$ independent marked Poisson point processes $R_1, \ldots, R_n$ and $L_1, \ldots, L_n$ on
$\mathbb{R}_{\geq 0} \times [0, 1]$ which will dictate the rightward and leftward jumps respectively of
$Z_i^{(v_{n-i+1})}$.
For each $i = 1, \ldots, n$, the marked Poisson point processes $R_i$ and $L_i$ consist of points $(t_k, w_k)_{k \geq 1}$ and $(\bar{t}_k, \bar{w}_k)_{k \geq 1}$, where $(t_k)_{k \geq 1}$ and $(\bar{t}_k)_{k \geq 1}$ are the points
of Poisson point processes of rates $1$ and $1/\epsilon$ respectively on $\mathbb{R}_{\geq 0}$. The $w_k$ and
$\bar{w}_k$ are uniform random variables on the interval $[0, 1]$ which are independent of each other and of $(t_k)_{k \geq 1}, (\bar{t}_k)_{k \geq 1}$.
We define $R_i^{(v)}$ to be the subset of $(t_k, w_k)_{k \geq 1}$ with $w_k > 1-v$ and $L_i^{(1/v)}$ to be the subset of $(\bar{t}_k, \bar{w}_k)_{k \geq 1}$ with $\bar{w}_k > 1 - \epsilon/v$.
The projections onto the first co-ordinate of $R_i^{(v)}$ and $L_i^{(1/v)}$ give independent Poisson point processes of rates $v$ and $1/v$ respectively, and these define coupled realisations of the $Z_i^{(v)}$ for any choice of
$\epsilon < v < 1-\epsilon$.
Almost surely, the suprema on the right hand side of the above representation all stabilise (after some random time which is uniform over $(v_1, \ldots, v_n) \in (\epsilon, 1- \epsilon)^n$). For any realisation of the marked Poisson point processes, the right hand side of this representation is continuous
in $(v_1, \ldots, v_n)$ except at the points $(1-w_k)_{k \geq 1}$ and $(\epsilon/(1-\bar{w}_k))_{k \geq 1}$.
Therefore the distribution of the right hand side is continuous
in $(v_1, \ldots, v_n)$ on the set $(\epsilon, 1-\epsilon)^n$, and hence so is the distribution of $(Y_1^*, \ldots, Y_n^*)$. As $\epsilon$ is arbitrary this completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{top_particle_theorem}]
Proposition \ref{push_asep_non_colliding} is the first equality in law.
At the end of Section \ref{suprema_non_colliding} we proved the second equality for distinct $0 < v_1, \ldots, v_n < 1$. Lemma \ref{continuity_invariant_measure} allows us to remove the constraint that the $v_i$ are distinct.
\end{proof}
\section{PushASEP with a wall}
\label{push_asep}
\subsection{Transition probabilities}
We give a more explicit definition of PushASEP with a wall at the origin as a continuous-time Markov chain $(Y(t))_{t \geq 0} = (Y_1(t), \ldots, Y_n(t))_{t \geq 0}$ taking values in $W^n_{\geq 0} = \{(y_1, \ldots, y_n) : y_i \in \mathbb{Z} \text{ and } 0 \leq y_1 \leq y_2 \leq \ldots \leq y_n\}$. We use $e_i$ to denote the vector taking value $1$ in position $i$ and zero otherwise.
The transition rates of $Y$ are defined for
$y, y + e_i + \ldots + e_j \in W^n_{\geq 0}$ and $i \leq j$ by
\begin{equation}
\label{defn_PushASEP1}
q(y, y+e_i + e_{i+1} + \ldots + e_j) = v_i 1_{\{ y_i =
y_{i+1} = \ldots = y_j < y_{j+1}\}}
\end{equation}
with the notation $y_{n+1} = \infty$
and for $y \in W^n_{\geq 0}$ by
\begin{equation}
\label{defn_PushASEP2}
q(y, y-e_i) = v_i^{-1} 1_{\{ y - e_i \in W^n_{\geq 0}\}}.
\end{equation}
All other transition rates equal zero.
We note that in \cite{borodin2008} the particles were strictly ordered, whereas it is convenient for us to
consider a weakly ordered system; these systems can be related by a co-ordinate change
$x_j \rightarrow x_j + j -1$.
To describe the transition probabilities we first introduce the operators acting on functions $f: \mathbb{Z} \rightarrow \mathbb{R}$ with $v > 0$,
\begin{equation*}
D^{(v)}f(u) = f(u) - vf(u-1), \qquad J^{(v)}f(u) = \sum_{j = u}^{\infty} v^{u - j} f(j),
\end{equation*}
where we will always apply $J^{(v)}$ to functions with superexponential decay at infinity.
We use $D^{(v_1, \ldots, v_n)} = D^{(v_1)} \ldots D^{(v_n)}$ and
$J^{(v_1, \ldots, v_n)} = J^{(v_1)} \ldots J^{(v_n)}$ as notation for concatenated operators
and $D_u^{(v)}, J_u^{(v)}$ to specify a variable $u$ on which the operators act.
We recall Siegmund duality for birth-death processes, see for example \cite{clifford_sudbury, cox_rosler}.
Let $(X_t)_{t \geq 0}$ denote a birth-death process on the state space $\mathbb{Z}_{\geq 0}$ with transition rates:
\begin{equation*}
i \rightarrow i+1 \text{ rate } \lambda_i \text{ for } i \geq 0, \qquad i \rightarrow i - 1 \text{ rate } \mu_i \text{ for } i \geq 1.
\end{equation*}
Let $(X^*_t)_{t \geq 0}$ denote a birth-death process on the state space $\mathbb{Z}_{\geq -1}$
with transition rates
\begin{equation*}
i \rightarrow i+1 \text{ rate } \mu_{i+1} \text{ for } i \geq 0, \qquad i \rightarrow i-1 \text{ rate } \lambda_i
\text{ for } i \geq 0.
\end{equation*}
The process $X$ has a reflecting boundary at zero while $X^*$ is absorbed at $-1$.
Under suitable conditions on the rates (see \cite{cox_rosler}), which hold in the case
of interest to us, namely $\lambda_i = v_1$ for $i \geq 0$
and $\mu_i = v_1^{-1}$ for $i \geq 1$, Siegmund duality states that
\begin{equation*}
P_{x}(X_t \leq y) = P_{y}(X_t^* \geq x) \quad \text{ for all } t \geq 0, \quad x, y \in \mathbb{Z}_{\geq 0}.
\end{equation*}
We can find the transition probabilities for $X^*$ by solving the Kolmogorov forward equation.
We define for any $t \geq 0$ and $x, y \in \mathbb{Z}$,
\begin{equation*}
\psi_t(x, y) = \frac{1}{2\pi i} \oint_{\Gamma_0} \frac{dz}{z} (z^{y-x} - z^{x + y + 2} ) e^{t(z + 1/z)}
\end{equation*}
where $\Gamma_0$ denotes the unit circle oriented anticlockwise. The transition probabilities of $X^*$ are given for $x \in \mathbb{Z}_{\geq 0}$ and $t \geq 0$ by $P_{x}(X^*_t = y)
= v_1^{x - y} e^{-t(v_1 + 1/v_1)} \psi_t(x, y)$ for
$y \in \mathbb{Z}_{\geq 0}$ and $P_x(X^*_t = -1) = 1 - \sum_{y \geq 0} v_1^{x - y} e^{-t(v_1+1/v_1)} \psi_t(x, y)$.
By using Siegmund duality, the transition probabilities of PushASEP with a wall with a single particle (which is an M/M/1 queue)
are given by
\begin{equation*}
v_1^{y - x} e^{-t(v _1+ 1/v_1)} D_{y}^{(1/v_1)} J_{x}^{(v_1)} \psi_t(x, y) \quad \text{ for all } t \geq 0, \quad x, y \in \mathbb{Z}_{\geq 0}.
\end{equation*}
The purpose of the above is that this now provides a form which is convenient to generalise
to $n$ particles.
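As a sanity check (not part of the argument), the single-particle formula can be compared with simulation. The contour integral defining $\psi_t$ can be evaluated in terms of modified Bessel functions, $\psi_t(x, y) = I_{y-x}(2t) - I_{x+y+2}(2t)$, and the short Python sketch below (an illustration only, with an arbitrary rate, time horizon and starting point, a finite truncation of the infinite sum in $J^{(v_1)}$, and SciPy used for the Bessel functions) compares the resulting transition probabilities with a Monte Carlo simulation of the single particle reflected at the wall.
\begin{verbatim}
import numpy as np
from scipy.special import iv             # modified Bessel function I_k

rng = np.random.default_rng(1)
v1, t, x0 = 0.6, 1.5, 2                  # rate v_1 to the right, 1/v_1 to the left; start at x0

def psi(a, b):                           # psi_t(a, b) = I_{b-a}(2t) - I_{a+b+2}(2t)
    return iv(b - a, 2 * t) - iv(a + b + 2, 2 * t)

def transition(x, y, cutoff=80):
    # v_1^{y-x} e^{-t(v_1+1/v_1)} D_y^{(1/v_1)} J_x^{(v_1)} psi_t(x, y), J truncated after `cutoff' terms
    J = lambda a, b: sum(v1 ** (a - m) * psi(m, b) for m in range(a, a + cutoff))
    return v1 ** (y - x) * np.exp(-t * (v1 + 1.0 / v1)) * (J(x, y) - J(x, y - 1) / v1)

counts, trials = np.zeros(200, dtype=int), 200_000
for _ in range(trials):
    pos, s = x0, rng.exponential(1.0 / (v1 + 1.0 / v1))
    while s <= t:
        if rng.random() < v1 / (v1 + 1.0 / v1):
            pos += 1                     # right jump
        elif pos > 0:
            pos -= 1                     # left jump, suppressed at the wall
        s += rng.exponential(1.0 / (v1 + 1.0 / v1))
    counts[pos] += 1

for y in range(5):
    print(y, transition(x0, y), counts[y] / trials)
\end{verbatim}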
Define for all $t \geq 0$ and $x, y \in \mathbb{Z}^n$,
\begin{equation*}
r_t(x, y) = \prod_{k=1}^n v_k^{y_k - x_k} e^{-t(v_k + 1/v_k)}\text{det}(F_{ij}(t; x_i+i-1, y_j+j-1))_{i, j = 1}^n
\end{equation*}
with
\begin{equation*}
F_{ij}(t; x_i + i - 1, y_j + j - 1) = D_{y_j}^{(1/v_1 \ldots 1/v_j)} J_{x_i}^{(v_1 \ldots v_i)} \psi_t(x_i + i - 1, y_j + j - 1).
\end{equation*}
\begin{proposition}
The transition probabilities of $(Y_1(t), \ldots, Y_n(t))_{t \geq 0}$ are given by $r_t(x, y)$ for $x, y \in W^n_{\geq 0}$.
\end{proposition}
The transition probabilities for PushASEP in the absence of a wall were found in \cite{borodin2008} and related
examples have been found in \cite{schutz, warren}. Our proof follows the ideas in \cite{borodin2008}.
\begin{proof}
Observe that for all $u, w \in \mathbb{Z}$,
\begin{equation*}
\frac{d}{dt}\psi_t(u, w) = \psi_t(u, w+1) + \psi_t(u, w-1)
\end{equation*}
and therefore for all $x, y \in W^n$,
\begin{equation}
\label{push_ASEP_free_eqn}
\frac{d}{dt} r_t(x, y) = \sum_{k=1}^n \left( v_k^{-1} r_t(x, y + e_k) + v_k r_t(x, y-e_k) -(v_k + v_k^{-1})r_t(x, y) \right).
\end{equation}
We note that $y \pm e_k$ may be outside of the set $W^n_{\geq 0}$ but that $r$ has been defined for
all $x, y \in \mathbb{Z}^n$. The proof will involve showing that the terms involving $y \pm e_k \notin W^n_{\geq 0}$ in \eqref{push_ASEP_free_eqn} can be replaced, using identities for $r$, by
terms corresponding to the desired pushing and blocking interactions.
An important role is played by the identity, that if $y_j = y_{j+1}$ then
\begin{equation}
\label{push_ASEP_identity}
v_j^{-1} r_t(x, y + e_j) = v_{j+1}^{-1} r_t(x, y).
\end{equation}
This can be proved by showing that the difference of the two sides is equal to
\begin{equation*}
\prod_{k = 1}^n v_k^{y_k - x_k} e^{-t(v_k + 1/v_k)} \text{det}(A_{ij})_{i, j = 1}^n
\end{equation*}
where the relevant columns of $A$ are the $j$-th and $(j+1)$-th, which have entries for each $i = 1, \ldots, n$
given by
\begin{equation*}
A_{ij} = D_{y_{j+1}}^{(1/v_{j+1})} F_{ij}(t; x_i + i - 1, y_{j+1}+j), \qquad
A_{i, j+1} = F_{i, j+1}(t; x_i + i - 1, y_{j+1} + j).
\end{equation*}
These two columns are equal which proves \eqref{push_ASEP_identity}.
We first consider the terms in \eqref{push_ASEP_free_eqn} with $y - e_k \notin W^n_{\geq 0}$
and $y_k > 0$
which corresponds to right jumps with a pushing interaction.
Denote by $m(k)$ the minimal index such that $y_{m(k)} = y_{m(k)+1} = \ldots = y_k$.
Then by iteratively applying the identity \eqref{push_ASEP_identity} we obtain
\begin{IEEEeqnarray*}{rCl}
v_k r_t(x, y-e_k) & = & v_{k-1} r_t(x, y - e_{k-1} -
e_{k}) = \ldots = v_{m(k)} r_t(x, y - e_{m(k)} - \ldots
- e_{k-1} - e_{k}).
\end{IEEEeqnarray*}
This shows that,
\begin{multline}
\label{push_ASEP_push}
\sum_{k=1}^n v_k r_t(x, y - e_k) 1_{\{y_{m(k)-1} < y_{m(k)} = \ldots = y_k\}} - v_k r_t(x, y)
\\
= \sum_{k=1}^n v_{m(k)} r_t(x, y- e_{m(k)} - \ldots
- e_{k-1} - e_{k}) 1_{\{y_{m(k)-1} < y_{m(k)} = \ldots = y_k\}} - v_k r_t(x, y)
\end{multline}
where $y_{0} := 0$. We note that
$y-e_{m(k)} - \ldots
-e_{k-1} - e_{k} \in W^n_{\geq 0}$ whenever $y_{m(k)-1} < y_{m(k)} = \ldots = y_k$ holds.
We next consider the terms in \eqref{push_ASEP_free_eqn} with $y + e_k \notin W^n_{\geq 0}$ which
will correspond to blocking interactions.
This means that $y_k = y_{k+1}$ and using \eqref{push_ASEP_identity} shows that
\begin{equation*}
\sum_{k=1}^{n-1} v_k^{-1} 1_{\{y_k = y_{k+1}\}} r_t(x, y + e_k)
= \sum_{k=2}^{n} v_k^{-1} 1_{\{y_{k-1} = y_{k}\}} r_t(x, y).
\end{equation*}
Therefore
\begin{multline}
\label{push_ASEP_block}
\sum_{k=1}^n v_k^{-1} r_t(x, y + e_k) -v_k^{-1}r_t(x, y)
= -v_1^{-1} r_t(x, y) - \sum_{k=2}^n v_k^{-1}(1 - 1_{\{y_{k-1} = y_k\}} )
r_t(x, y) \\
+ v_n^{-1} r_t(x, y+ e_n) + \sum_{k=1}^{n-1} v_k^{-1} (1- 1_{\{y_k = y_{k+1}\}})r_t(x, y+ e_k).
\end{multline}
We note that $y+ e_k \in W^n_{\geq 0}$ whenever $(1- 1_{\{y_k = y_{k+1}\}}) \neq 0$.
The final terms we need to consider in \eqref{push_ASEP_free_eqn} are those with $y - e_k \notin W^n_{\geq 0}$
and $y_k = 0$ which correspond to left jumps which are suppressed by the wall.
If $y_1 = \ldots = y_k = 0$ for some $k > 1$, then
\begin{equation*}
r_t(x, y-e_k) = v_k^{-1} \prod_{j=1}^n v_j^{y_j - x_j} e^{-t(v_j + 1/v_j)} \text{det}(B_{ij})_{i, j = 1}^n
\end{equation*}
for a matrix $B$ where the relevant entries of $B$ are the columns indexed by $1, \ldots, k$.
The first column has entries $B_{i1} = J_{x_i}^{(v_1 \ldots v_i)}
(\psi_t(x_i + i - 1, 0) - v_1^{-1} \psi_t(x_i + i - 1, -1))$ which simplifies to $B_{i1} = J_{x_i}^{(v_1 \ldots v_i)}
\psi_t(x_i + i - 1, 0)$ by the fact that $\psi_t(\cdot, -1) = 0$.
The columns indexed by $j = 2, \ldots, k-1$ can be simplified to
$B_{ij} = J_{x_i}^{(v_1 \ldots v_i)} \psi_t(x_i + i - 1, j-1)$ by using $\psi_t(\cdot, -1) = 0$ and column operations.
Using this argument for the $k$-th column and that we consider the vector $y - e_k$,
we observe that the $k$-th column is a linear combination of columns $1, \ldots, k-1$ and hence
$r_t(x, y - e_k) = 0$ if $y_k = 0$ for any $k > 1$.
The remaining case is when $0 = y_1 < y_2$ and we show that
\begin{equation*}
v_1 r_t(x, y-e_1) - v_1^{-1} r_t(x, y) = 0.
\end{equation*}
This follows from multilinearity of the determinants involved in the definition of $r$ and using $\psi(\cdot, - 1) = 0$,
\begin{equation*}
D^{(1/v_1)} \psi_t(x_i, -1) - v_1^{-1} D^{(1/v_1)} \psi_t(x_i, 0) =
-v_1^{-1}\psi_t(x_i, -2) - v_1^{-1} \psi_t(x_i, 0) = 0.
\end{equation*}
Therefore
\begin{equation}
\label{push_ASEP_wall}
\sum_{k=1}^n v_k r_t(x, y-e_k) 1_{\{0 = y_1 = \ldots = y_k\}} = v_1^{-1} 1_{\{y_1 = 0\}} r_t(x, y).
\end{equation}
We combine \eqref{push_ASEP_free_eqn}, \eqref{push_ASEP_push}, \eqref{push_ASEP_block}
and \eqref{push_ASEP_wall} to obtain that
\begin{IEEEeqnarray}{rCl}
\label{forward_eq1}
\frac{d}{dt} r_t(x, y) & = & \sum_{k=1}^n v_{m(k)} r_t(x, y-e_{m(k)} - \ldots
-e_{k-1} - e_{k})1_{\{y_{m(k)-1} < y_{m(k)} = \ldots = y_k\}} - \sum_{k=1}^n v_k r_t(x, y) \IEEEnonumber
\\ & &
-v_1^{-1}(1- 1_{\{y_1 = 0\}}) r_t(x, y)
- \sum_{k=2}^n v_k^{-1}(1 - 1_{\{y_{k-1} = y_k\}} )
r_t(x, y) \IEEEnonumber \\
& &
+ v_n^{-1} r_t(x, y+ e_n) + \sum_{k=1}^{n-1} v_k^{-1} (1- 1_{\{y_k = y_{k+1}\}})r_t(x, y+ e_k).
\end{IEEEeqnarray}
We now consider the initial condition.
\begin{equation}
\label{r_0_eqn}
r_0(x, y) = \prod_{k=1}^n v_k^{y_k - x_k} \text{det}(F_{ij}(0; x_i+i-1, y_j+j-1))_{i, j = 1}^n
\end{equation}
where
\begin{equation*}
F_{ij}(0; x_i+i-1, y_j+j-1) = D_{y_j}^{(1/v_1 \ldots 1/v_j)} J_{x_i}^{(v_1 \ldots v_i)} \psi_0(x_i+i-1, y_j+j-1)
\end{equation*}
and $\psi_0(u, w) = 1_{\{w - u = 0\}}$ for $u, w \geq 0$. This depends only on the difference $w-u$
and we will view it as a function of $w-u$.
For any function $f : \mathbb{Z} \rightarrow \mathbb{R}$ and $u, w \in \mathbb{Z}$, $r > 0$
\begin{equation}
\label{JD_translation_invariance}
D_w^{(1/r)} J_u^{(r)} f(w - u) = D_w^{(1/r)} \left(\sum_{k = u}^{\infty} r^{u-k} f(w - k)\right) = f(w - u).
\end{equation}
Therefore the top-left entry in the matrix defining $r_0$ equals $1_{\{y_1 = x_1\}}$.
Suppose $y_1 > x_1$ and observe that if a function $g$ has $g(u) = 0$ for $u > 0$,
then for any $j=1, \ldots, n$ we have
$D^{(1/v_2 \ldots 1/v_j)}_u g(u+j-1) = 0$ for $u > 0$. This shows that when $y_1 > x_1$ the top row of the matrix defining $r_0$ equals zero.
In a similar manner, when $y_1 < x_1$ the first column in the matrix defining $r_0$ is zero.
Therefore
\begin{equation}
\label{r_0_inductive}
r_0(x, y) = 1_{\{x_1 = y_1\}} \prod_{k=2}^n v_k^{y_k - x_k} \text{det}(F_{ij}(0; x_i+i-1, y_j+j-1))_{i, j = 2}^n
\end{equation}
and using \eqref{JD_translation_invariance} the entries of the matrix in \eqref{r_0_inductive}
have the same form as the entries of the matrix in \eqref{r_0_eqn} but with $n-1$ particles. Continuing inductively,
\begin{equation}
\label{forward_eq2}
r_0(x, y) = \prod_{k = 1}^n 1_{\{x_k = y_k\}}.
\end{equation}
Therefore $r_t(x, y)$ satisfies the Kolmogorov forward equations \eqref{forward_eq1} and \eqref{forward_eq2} corresponding to
the process $(Y(t))_{t \geq 0}$. These equations have a unique solution given by the transition probabilities of
$(Y(t))_{t \geq 0}$ because the process does
not explode.
\end{proof}
\begin{lemma}
\label{ibp_lemma}
Let $(f_i)_{i =1}^n$ and $(g_j)_{j = 1}^n$ be functions $\mathbb{Z}_{\geq - 1} \rightarrow \mathbb{R}$ such that $g_j$ decays superexponentially while $f_i$ grows at most exponentially at infinity.
\begin{enumerate}[(i)]
\item Suppose further that $f_i(-1) = 0$ for each $i = 1, \ldots, n$. Then
\begin{multline*}
\sum_{x \in W^n_{\geq 0}} \text{det}(D^{(1/v_1 \ldots 1/v_j)} f_i(x_j + j -1 ))_{i, j = 1}^n
\text{det}(J^{(v_1 \ldots v_i)} g_j(x_i + i -1 ))_{i, j = 1}^n \\ =
\text{det}\left(\sum_{u \geq 0} f_i(u) g_j(u)\right)_{i, j = 1}^n.
\end{multline*}
\item With no extra conditions and the notation $D^{\emptyset} = \text{Id}$,
\begin{multline*}
\sum_{x \in W^n_{\geq 0}} \text{det}(D^{(1/v_2 \ldots 1/v_j)} f_i(x_j + j -1 ))_{i, j = 1}^n
\text{det}(J^{(v_2 \ldots v_i)} g_j(x_i + i -1 ))_{i, j = 1}^n \\ =
\text{det}\left(\sum_{u \geq 0} f_i(u) g_j(u)\right)_{i, j = 1}^n.
\end{multline*}
\end{enumerate}
\end{lemma}
\begin{proof}
The proof is similar to Lemma 2 in \cite{FW} and so we give
a description of the proof and
refer to \cite{FW} which carries out some of the steps more explicitly. We prove (i) first and (ii) is almost identical.
We first observe that
\begin{equation}
\label{ibp_identity}
\sum_{u = a}^b (D^{(1/v)}f)(u+1) (J^{(v)} g)(u+1) = \sum_{u = a}^b f(u) g(u) + f(b+1) (J^{(v)}g)(b+1) - f(a) (J^{(v)}g)(a).
\end{equation}
We apply \eqref{ibp_identity} repeatedly to show that
\begin{multline}
\label{ibp}
\sum_{x \in W^n_{\geq 0}} \text{det}(D^{(1/v_1 \ldots 1/v_j)} f_i(x_j + j -1 ))_{i, j = 1}^n
\text{det}(J^{(v_1 \ldots v_i)} g_j(x_i + i -1 ))_{i, j = 1}^n \\
= \sum_{x \in W^n_{\geq 0}} \text{det}(f_i(x_j-1 ))_{i, j = 1}^n
\text{det}( g_j(x_i-1 ))_{i, j = 1}^n.
\end{multline}
The general procedure is to use a Laplace expansion of the determinants on the left hand side, apply \eqref{ibp_identity} with a particular choice of variable and parameter, and then reconstruct the
result as a sum of three determinants. A key property is that all of the boundary terms in \eqref{ibp_identity}
will end up contributing zero.
The first application of this procedure is with the parameter $v_n$, variable $x_n$ and summing $x_n$
from $x_{n-1}$ to infinity.
This shows that the left hand side of \eqref{ibp} equals a sum of three terms which all take the form:
\begin{equation*}
\sum_{\Sigma} \text{det}(A_{ij})_{i, j = 1}^n \text{det}(B_{ij})_{i, j = 1}^n.
\end{equation*}
In the first term, $\Sigma = \{(x_1, \ldots, x_n) \in W^n_{\geq 0}\}$. The $A_{ij}$ are given by the entries of the first matrix on the left hand side of \eqref{ibp} except with the
application of $D^{(1/v_n)}$ in the $n$-th column removed and the argument $x_n + n - 1$ replaced by
$x_n + n - 2$. The $B_{ij}$ are given by the entries of the second matrix on the left hand side of \eqref{ibp} except with the
application of $J^{(v_n)}$ in the $n$-th row removed and the argument $x_n + n - 1$ replaced by
$x_n + n - 2$.
There are two boundary terms which have $\Sigma = \{(x_1, \ldots, x_{n-1}) \in W^{n-1}_{\geq 0}\}$ and are
evaluated at $x_n = x_{n-1}$ and $x_{n} = \infty$. These terms are both zero:
when evaluated at $x_n = x_{n-1}$
two columns in $A_{ij}$ are equal, and the boundary term at infinity vanishes due to the growth and decay conditions
imposed on $f$ and $g$.
We continue this process of using \eqref{ibp_identity} with the following orders of parameters and variables:
$(x_n, v_n), (x_{n-1}, v_{n-1}), \ldots, (x_1, v_1), (x_{n-1}, v_n), (x_{n-2}, v_{n-1}), \ldots, (x_2, v_1),
(x_n, v_1)$. For each $j=2, \ldots, n-1$ the sum in \eqref{ibp_identity} when applied to the $x_j$ variable is from $x_{j-1}$ to $x_{j+1}$
and all boundary terms are zero. In the generic case, the boundary term corresponding to the upper limit of summation
is evaluated at $x_j = x_{j+1} + 1$ and is zero because two rows in the determinant of $B_{ij}$ are equal. When \eqref{ibp_identity} is applied to the $x_1$ variable
the sum is from $0$ to $x_2$ and the boundary term at zero vanishes by the condition that $f_i(-1) = 0$
for each $i = 1, \ldots, n$.
This proves \eqref{ibp} and we apply the Cauchy-Binet (or Andr\'eief) identity to the right hand side of
\eqref{ibp}
to complete the proof of part (i).
Part (ii) is identical except that we do not apply \eqref{ibp_identity} to the $x_1$ variable. Thus the condition
$f_i(-1) = 0$ for each $i=1, \ldots, n$ can be omitted.
\end{proof}
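The identity in part (i) of Lemma \ref{ibp_lemma} can also be tested numerically for small $n$. The Python sketch below is an illustration only: it takes $n = 2$, arbitrary positive parameters, and ad hoc choices of $f_i$ and $g_j$ satisfying the stated boundary, growth and decay conditions, with the infinite sums truncated, and evaluates both sides of part (i).
\begin{verbatim}
import numpy as np

v, n, CUT = [0.7, 0.4], 2, 30            # CUT truncates the sums; g decays superexponentially
f = [lambda u: (u + 1) * 0.9 ** u,       # f_i(-1) = 0, at most exponential growth
     lambda u: (u + 1) * 1.1 ** u]
g = [lambda u: 1.3 ** u * np.exp(-0.30 * u * u),
     lambda u: 0.7 ** u * np.exp(-0.25 * u * u)]

def D(params, h, u):                     # composition of D^{(r)} h(u) = h(u) - r h(u-1)
    if not params:
        return h(u)
    return D(params[:-1], h, u) - params[-1] * D(params[:-1], h, u - 1)

def J(params, h, u):                     # composition of J^{(r)} h(u) = sum_{m >= u} r^{u-m} h(m)
    if not params:
        return h(u)
    return sum(params[-1] ** (u - m) * J(params[:-1], h, m) for m in range(u, u + CUT))

lhs = 0.0
for x2 in range(CUT):
    for x1 in range(x2 + 1):             # (x1, x2) in W^2_{>= 0}
        x = [x1, x2]
        A = [[D([1 / v[l] for l in range(j + 1)], f[i], x[j] + j)
              for j in range(n)] for i in range(n)]
        B = [[J([v[l] for l in range(i + 1)], g[j], x[i] + i)
              for j in range(n)] for i in range(n)]
        lhs += np.linalg.det(A) * np.linalg.det(B)

rhs = np.linalg.det([[sum(f[i](u) * g[j](u) for u in range(CUT))
                      for j in range(n)] for i in range(n)])
print(lhs, rhs)
\end{verbatim}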
\subsection{Invariant measure}
\begin{proposition}
\label{Invariant_measure}
Let $(Y_1^*, \ldots, Y_n^*)$ be distributed according to the
invariant measure of PushASEP with a wall and suppose that
the rates $0 < v_1, \ldots, v_n < 1$ are distinct. Then the probability mass function of $(Y_1^*, \ldots, Y_n^*)$
is given by
\begin{equation*}
\pi(x_1, \ldots, x_n) = c_n\prod_{k=1}^n v_k^{x_k} \text{det}( D^{(1/v_1 \ldots 1/v_j)} \phi_i(x_j + j - 1))_{i, j = 1}^n
\end{equation*}
where $\phi_i(x) = v_i^{-(x+1)} - v_i^{x+1}$ and
$c_n = \prod_{1 \leq i < j \leq n} \frac{1}{(v_i - v_j)} \prod_{j = 1}^n v_j^n$.
\end{proposition}
We note that the Markov chain is irreducible, does not explode and
the invariant measure is unique when normalised.
\begin{proof}
We use Lemma \ref{ibp_lemma}, noting that $\phi_i(-1) = 0$ for each $i$ and that the conditions at infinity are satisfied, to find
\begin{equation*}
\sum_{x \in W^n_{\geq 0}} \pi(x) r_t(x, y)
= c_n \prod_{k=1}^n v_k^{y_k} e^{-t(v_k + 1/v_k)} \text{det}\left(D_{y_j}^{(1/v_1 \ldots 1/v_j)} \sum_{u \geq 0}
\phi_i(u) \psi_t(u, y_j + j -1 )\right)_{i, j = 1}^n.
\end{equation*}
We recall that $\psi$ is related to the transition probabilities
of a process $(X^*_t)_{t \geq 0}$ defined through two independent Poisson point processes $N_t^{(1)}$ and $N_t^{(2)}$ of rate
$1$ as $X_t^* = N_t^{(1)} - N_t^{(2)}$ for all $0 \leq t \leq \tau_{-1}$ and $X_t^* = -1$ for all $t > \tau_{-1}$, where
$\tau_{-1} = \inf\{t\geq 0: N_t^{(2)} = N_t^{(1)}+1\}$.
The transition probabilities of $X^*$ are given for $\xi \geq 0$ by $P_{\xi}(X_t^* = \eta) = e^{-2t} \psi_t(\xi, \eta)$ for $\eta \geq 0$ and
$P_{\xi}(X_t^* = -1) = 1 - \sum_{\eta \geq 0} e^{-2t} \psi_t(\xi, \eta)$. On the other hand,
$(v^{-(X^*_t + 1)} - v^{X_t^*+1}) e^{-(v + 1/v)t + 2t}$ is a martingale for $X^*$ for any $v > 0$.
In particular,
\begin{equation*}
\sum_{u \geq 0} \psi_t(y_j + j -1, u)\phi_i(u) e^{-t(v_i + 1/v_i)} = \phi_i(y_j + j - 1).
\end{equation*}
Using this and the fact that $\psi$ is symmetric in the right hand side of the first displayed equation in this proof
shows that
\begin{equation*}
\sum_{x \in W^n_{\geq 0}} \pi(x) r_t(x, y) = c_n \prod_{k=1}^n v_k^{y_k} \text{det}(D^{(1/v_1 \ldots 1/v_j)} \phi_i(y_j + j - 1))_{i, j = 1}^n = \pi(y).
\end{equation*}
We defer the proof that $\pi$ is positive and the identification of the normalisation constant. These two
properties will follow by identifying $\pi$ as
the probability mass function for a vector of last passage percolations times in the proof of Theorem \ref{main_theorem}.
\end{proof}
\section{Point-to-line last passage percolation}
\label{point_to_line}
Point-to-line last passage percolation can be interpreted as an interacting particle system,
where at each time step a new particle is added at the origin and particles interact by pushing particles to the right of them.
We define a discrete-time Markov chain
denoted $(\mathbf{G}^{pl}(k))_{1 \leq k \leq n}$ where $\mathbf{G}^{pl}(k) = (G_1^{pl}(k), \ldots, G_k^{pl}(k))$.
The particles are updated between time $k-1$ and time $k$
by sequentially defining $G_1^{pl}(k), \ldots, G_k^{pl}(k)$
starting with
$G_1^{pl}(k) = g_{n-k+1, k}$ and then applying the update rule
\begin{equation}
\label{update_rule_lpp}
G_j^{pl}(k) = \max(G_{j-1}^{pl}(k-1), G_{j-1}^{pl}(k)) + g_{n-k+1, k-j+1}, \qquad \text{ for } 2 \leq j \leq k
\end{equation}
where $(g_{jk})_{j, k \geq 1, j+k\leq n+1}$ is an independent collection of geometrically distributed random variables, with $g_{jk}$ having parameter $1 - v_j v_{n-k+1}$, where $0 < v_j < 1$ for each $j = 1, \ldots, n$.
The geometric random variables are defined as
$P(g_{jk} = u) = (1-v_j v_{n-k+1}) (v_j v_{n-k+1})^{u}$ for all $u \geq 0$.
The initial state is $G_1^{pl} = g_{n1}$.
The connection to point-to-line last passage percolation is that the largest particle at time $n$ has the representation
\begin{equation*}
G_n^{pl}(n) = \max_{\pi \in \Pi_{n}^{\text{flat}}} \sum_{(i, j) \in \pi} g_{ij}
\end{equation*}
where $\Pi_{n}^{\text{flat}}$ is the set of directed nearest-neighbour up-right paths from $(1, 1)$ to the line
$\{(i, j):i+j = n+1\}$.
Moreover, $\mathbf{G}^{pl}(n)$ is the vector on the right hand side of Theorem \ref{main_theorem}.
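For concreteness, the following Python sketch (an illustration with arbitrarily chosen parameters and a small value of $n$) builds the Markov chain $(\mathbf{G}^{pl}(k))_{1 \leq k \leq n}$ from a sampled array of geometric weights and checks that $\mathbf{G}^{pl}(n)$ agrees with $(G(1, n), \ldots, G(1, 1))$ computed directly by the standard backwards recursion $G(i, j) = g_{ij} + \max(G(i+1, j), G(i, j+1))$ for point-to-line last passage percolation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 5
v = rng.uniform(0.2, 0.9, size=n)

# Geometric weights g_{ij} on {i, j >= 1, i + j <= n + 1} with parameter 1 - v_i v_{n-j+1}.
g = np.zeros((n + 2, n + 2), dtype=int)
for i in range(1, n + 1):
    for j in range(1, n + 2 - i):
        g[i, j] = rng.geometric(1 - v[i - 1] * v[n - j]) - 1

# Point-to-line values G(i, j) by the backwards recursion along anti-diagonals.
G = np.zeros((n + 2, n + 2), dtype=int)
for s in range(n + 1, 1, -1):
    for i in range(1, s):
        G[i, s - i] = g[i, s - i] + max(G[i + 1, s - i], G[i, s - i + 1])

# The interacting particle system (G^{pl}_1(k), ..., G^{pl}_k(k)).
Gp = [g[n, 1]]
for k in range(2, n + 1):
    new = [g[n - k + 1, k]]                         # new particle added at the origin
    for j in range(2, k + 1):
        new.append(max(Gp[j - 2], new[j - 2]) + g[n - k + 1, k - j + 1])
    Gp = new

assert np.array_equal(Gp, [G[1, n - j] for j in range(n)])   # (G(1, n), ..., G(1, 1))
print([int(u) for u in Gp])
\end{verbatim}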
The advantage of this interpretation is that the transition probabilities of $(\mathbf{G}^{pl}(k))_{1 \leq k \leq n}$ have a determinantal
form and this can be used to find the probability mass function of $\mathbf{G}^{pl}(n)$ as a determinant.
In the context of point-to-point last passage percolation the transition kernel of a Markov chain analogous to the above
is given in Theorem 1 of \cite{dieker2008}. This can be used to describe the update rule of
$\mathbf{G}^{pl}(n-1)$ to $\mathbf{G}^{pl}(n)$ from time $n-1$ to time $n$ by viewing $\mathbf{G}^{pl}(n-1)$ as being extended to an $n$-dimensional
vector with zero as the leftmost position.
We first define: for functions $f : \mathbb{Z} \rightarrow \mathbb{R}$ with $f(u) = 0$ for all $u < 0$,
\begin{equation*}
D^{(v)}f(u) = f(u) - vf(u-1), \qquad I^{(v)}f(u) = \sum_{j=0}^u v^{u-j} f(j) \text{ for } u \geq 0
\end{equation*}
and $I^{(v)} f(u) = 0$ for $u < 0$.
Let $p^{-1}$ denote $(1/p_1, \ldots, 1/p_n)$ and,
for a function $g : \mathbb{Z} \rightarrow \mathbb{R}$ with $g(u) = 0$ for $u < 0$, define
\begin{equation}
\label{function_integral_derivatives}
g_{p^{-1}}^{(ij)}(u) = \begin{cases}
D^{(1/p_{i+1} \ldots 1/p_j)} g(u) & \text{ for } j > i \\
I^{(1/p_{j+1} \ldots 1/p_i)} g(u) & \text{ for } j < i \\
g(u) & \text{ for } j = i.
\end{cases}
\end{equation}
\begin{lemma}[Dieker, Warren \cite{dieker2008}]
\label{lpp_transition_density}
As above, suppose the geometric random variables $g_{1, n-j+1}$ used in the update rule \eqref{update_rule_lpp} from $\mathbf{G}^{\text{pl}}(n-1)$
to $\mathbf{G}^{\text{pl}}(n)$ have parameters $1-v_1 v_j$ for $j = 1, \ldots, n$.
Then
\begin{multline*}
P(\mathbf{G}^{pl}(n) = (y_1, \ldots, y_n) \vert \mathbf{G}^{pl}(n-1) = (x_2, \ldots, x_n)) \\
= \prod_{k=1}^n (1- v_1 v_k) (v_1 v_k)^{y_k - x_k} \text{det}(w_{1, (1/(v_1 v_k))_{k=1}^n}^{(ij)}(y_j - x_i + j - i))_{i, j = 1}^n
\end{multline*}
where $x_1 := 0$ and $w_1(u) = 1_{\{u \geq 0\}}$.
\end{lemma}
The proof uses the RSK correspondence; a more direct proof is given in the case with all parameters equal in \cite{johansson2010} and with the geometric replaced by exponential data in \cite{FW}.
We will iteratively apply these one-step updates and use the following lemma to find the probability
mass function for $\mathbf{G}^{pl}(n)$ as a single determinant.
\begin{lemma}
\label{ibp_lpp}
Suppose that $p = (p_1, \ldots, p_n)$ and $p_i > 0$ for $i = 1, \ldots, n$.
Let $(f_i)_{i = 1}^n$ be a collection of functions from $\mathbb{Z}_{\geq 0} \rightarrow \mathbb{R}$
and $g : \mathbb{Z} \rightarrow \mathbb{R}$ with $g(u) = 0$ for all $u < 0$. Then
\begin{multline*}
\sum_{x \in W^n_{\geq 0}} \text{det}(D^{(1/p_2 \ldots 1/p_j)} f_i(x_j + j - 1))_{i, j = 1}^n
\text{det}( g_{1, p^{-1}}^{(ij)}(y_j - x_i + j-i))_{i, j = 1}^n \\
= \text{det}\left( D^{(1/p_2 \ldots 1/p_j)} \sum_{u \geq 0}
f_i(u) g_1(y_j + j - 1 - u)\right)_{i, j = 1}^n.
\end{multline*}
\end{lemma}
\begin{proof}
We have $g(u) = 0$ for all $u < 0$, which means that applying $J^{(p)}$ in a variable $u$ to the function $u \mapsto g(z - u)$ gives $(I^{(1/p)} g)(z - u)$.
We apply Lemma \ref{ibp_lemma} part (ii) with the functions $g_j(\cdot) = D^{(1/p_2 \ldots 1/p_j)} g(y_j + j - 1 - \cdot)$
where $D^{\emptyset} = \text{Id}$. We note that as $g$ is zero in a neighbourhood of infinity the condition
on the growth of $f$ can be omitted.
\end{proof}
\subsection{Proof of Theorem \ref{main_theorem}}
\begin{proof}
We prove by induction on $n$ that the probability mass function for $\mathbf{G}^{\text{pl}}(n)$ is given by $\pi(x_1, \ldots, x_n)$ and note that the case $n=1$ holds.
The proposed probability mass function for $\mathbf{G}^{pl}(n-1)$
is \begin{equation*}
\pi(x_2, \ldots, x_n) = c_{n-1} \prod_{k=2}^n v_k^{x_k} \text{det}(D_{x_j}^{(1/v_2 \ldots 1/v_j)} \phi_i(x_j + j - 2))_{i, j = 2}^n
\end{equation*}
where $\phi_i(u) = v_i^{-(u+1)} - v_i^{u+1}$ for each $i = 2, \ldots, n$
and $c_{n-1} = (\prod_{2 \leq i < j \leq n} (v_i - v_j))^{-1} \prod_{j = 2}^n v_j^{n-1}$.
We define
\begin{equation}
\label{pi_hat_defn}
\hat{\pi}(x_1, \ldots, x_n) = c_{n-1} \prod_{k=2}^n v_k^{x_k} \text{det}(D_{x_j}^{(1/v_2 \ldots 1/v_j)} \hat{\phi}_i(x_j + j - 2))_{i, j = 1}^n
\end{equation}
where $D^{\emptyset} = \text{Id}$, $\hat{\phi}_1(u) = 1_{\{u = -1\}}$ and $\hat{\phi}_i = \phi_i$ for each $i = 2, \ldots, n$.
We first show that for $(x_1, \ldots, x_n) \in W^n_{\geq 0}$,
\begin{equation}
\label{pi_hat_equality}
\hat{\pi}(x_1, \ldots, x_n) = 1_{\{x_1 = 0\}} \pi(x_2, \ldots, x_n).
\end{equation}
Consider a Laplace expansion of $\hat{\pi}$ where the summation is indexed by a permutation $\sigma$.
If $\sigma(1) = 1$ then the top-left entry in the matrix defining $\hat{\pi}$ is given by $\hat{\phi}_1(x_1 - 1)$ which equals $1$ if $x_1=0$ and $0$ otherwise. Therefore the terms in the Laplace expansion with $\sigma(1) = 1$ will give the desired expression for $\hat{\pi}$
and we need to show the remaining terms in the Laplace expansion of $\hat{\pi}$ are zero.
Let $\sigma(1) = j$ for some $2 \leq j \leq n$ and $\sigma(i) = 1$ for some $2 \leq i \leq n$.
For any $2 \leq j \leq n$, the $(1, j)$ entry in the matrix in \eqref{pi_hat_defn} is only non-zero if $x_j = 0$ by using the definition of $\hat{\phi}_1$.
On the other hand, the $(i, 1)$ entry in the matrix in \eqref{pi_hat_defn}
is given by $\hat{\phi}_i(x_1 - 1) = v_i^{-x_1} - v_i^{x_1} = 0$ if $x_1 = 0$. Therefore as $x_1 \leq x_j$ all terms
in the Laplace expansion of $\hat{\pi}$ with $\sigma(1) \neq 1$ are zero. This proves $\eqref{pi_hat_equality}$.
We use \eqref{pi_hat_defn}, \eqref{pi_hat_equality} and the update rule in Lemma \ref{lpp_transition_density} to
find the probability mass function for $\mathbf{G}^{pl}(n)$ as
\begin{multline}
\label{G_density_formula1}
c_{n-1} \sum_{(x_2, \ldots, x_n) \in W^{n-1}_{\geq 0}} \prod_{k=2}^n v_k^{x_k} \text{det}(D_{x_j}^{(1/v_2 \ldots 1/v_j)} \hat{\phi}_i(x_j + j - 2))_{i, j = 1}^n\\
\cdot \prod_{k=1}^n (1- v_1 v_k) (v_1 v_k)^{y_k - x_k} \text{det}(w_{1, (1/(v_1 v_k))_{k=1}^n}^{(ij)}(y_j - x_i +j - i))_{i,j=1}^n
\end{multline}
where $x_1:=0$.
We use the identities:
\begin{equation*}
D^{(\alpha)}f(u) = \beta^u D^{(\alpha/\beta)}(f(u) \beta^{-u}), \qquad \quad I^{(\alpha)}f(u) = \beta^u I^{(\alpha/\beta)}(f(u) \beta^{-u})
\end{equation*}
to show that with $u = y_j - x_i + j - i$,
\begin{gather}
\label{change_parameters1}
v_1^{u} I^{(1/(v_1 v_{j+1}), \ldots, 1/(v_1 v_i))}
(w_1(u)) = I^{(1/v_{j+1}, \ldots, 1/v_i)}(
w_1(u) v_1^{u}) \text{ for } i > j \\
\label{change_parameters2}
v_1^{u} D^{(1/(v_1 v_{i+1}), \ldots, 1/(v_1 v_j))} (w_1(u)) = D^{(1/v_{i+1}, \ldots, 1/v_j)}(w_1(u) v_1^{u})
\text{ for } j > i.
\end{gather}
We use \eqref{change_parameters1} and \eqref{change_parameters2} in
\eqref{G_density_formula1} to obtain the probability mass function for $\mathbf{G}^{pl}(n)$
as
\begin{multline*}
c_{n-1} \sum_{(x_2, \ldots, x_n) \in W^{n-1}_{\geq 0}} \prod_{k=1}^n v_k^{y_k} \text{det}\left(D_{x_j}^{(1/v_2 \ldots 1/v_j)} \hat{\phi}_i(x_j + j - 2)\right)_{i, j = 1}^n \\
\cdot \prod_{k=1}^n (1- v_1 v_k) \text{det}(\hat{w}_{1, v^{-1}}^{(ij)}(y_j - x_i +j - i))_{i,j=1}^n
\end{multline*}
where $\hat{w}_1(u) = w_1(u) v_1^u$ and $\hat{w}_{1, v^{-1}}^{(ij)}$ is defined by \eqref{function_integral_derivatives}.
We apply Lemma \ref{ibp_lpp} to show this equals
\begin{equation}
\label{G_formula_2}
c_{n-1} \prod_{k=1}^n (1- v_1 v_k) v_k^{y_k} \text{det}\left(D_{y_j}^{(1/v_2 \ldots 1/v_j)}
\sum_{u = 0}^{y_j + j -1} \hat{\phi}_i(u-1) \hat{w}_1(y_j - u + j - 1) \right)_{i, j = 1}^n.
\end{equation}
In the case $i = 1$, recall $\phi_1(x) = v_1^{-(x+1)} - v_1^{x+1}$ and observe that
\begin{equation}
\label{row_1_simplifications}
\sum_{u = 0}^{y_j + j -1} \hat{\phi}_1(u-1) v_1^{y_j - u + j - 1} = v_1^{y_j + j - 1} = \frac{v_1}{1 - v_1^2} D^{(1/v_1)} \phi_1(y_j + j - 1).
\end{equation}
In the case $2 \leq i \leq n$ observe that
\begin{IEEEeqnarray}{rCl}
\label{row_2_simplifications}
\sum_{u = 0}^{y_j + j - 1} \phi_i(u - 1) v_1^{y_j - u + j - 1} & = & \frac{v_i^{y_j + j}}{v_1 - v_i} -
\frac{v_1^{y_j + j - 1}}{1-v_i/v_1}
+ \frac{v_1^{y_j + j -1}}{1 - 1/(v_1 v_i)} - \frac{v_i^{-y_j - j + 1}}{v_1 v_i - 1} \IEEEnonumber \\
& = & \frac{v_i v_1}{(1-v_1 v_i)(v_1 - v_i)} D^{(1/v_1)} \phi_i(y_j + j - 1) + C v_1^{y_j}
\end{IEEEeqnarray}
where $C$ is independent of $y_j$.
Using \eqref{row_1_simplifications} and \eqref{row_2_simplifications} in \eqref{G_formula_2} and removing the terms $C v_1^{y_j}$
using row operations we find
\begin{equation*}
c_{n-1}\frac{v_1}{1 - v_1^2} \prod_{j = 2}^n \frac{v_1 v_j}{(1 - v_1 v_j )(v_1 - v_j)} \prod_{k = 1}^n (1- v_k v_1) v_k^{y_k} \text{det}(D_{y_j}^{(1/v_1 \ldots 1/v_j)} \phi_i(y_j + j - 1))_{i, j = 1}^n.
\end{equation*}
The prefactor equals $c_n$ where $c_{n} = \prod_{1 \leq i < j \leq n}\frac{1}{(v_i - v_j)} \prod_{j = 1}^n v_j^n$ and so we establish inductively that the
probability mass function of $\mathbf{G}^{pl}(n)$
is given by
\begin{equation}
\label{lpp_prob_mass}
\pi(y_1, \ldots, y_n) = c_n \prod_{k=1}^n v_k^{y_k} \text{det}( D_{y_j}^{(1/v_1 \ldots 1/v_j)} \phi_i(y_j + j - 1))_{i, j = 1}^n.
\end{equation}
We recall that in Proposition \ref{Invariant_measure}, we deferred the proof of positivity of $\pi$ and the
normalisation constant. This is now proven as
we have identified $\pi$ as the probability mass function of $\mathbf{G}^{pl}(n)$.
Moreover, Equation \eqref{lpp_prob_mass} and Proposition \ref{Invariant_measure} prove Theorem \ref{main_theorem} when $v_1, \ldots, v_n$ are distinct. The distribution of
$(Y_1^*, \ldots, Y_n^*)$ is continuous in $(v_1, \ldots, v_n)$ on the set $(0, 1)^n$ by Lemma \ref{continuity_invariant_measure}, and the distribution of $(G(1, n), \ldots, G(1, 1))$ is continuous in $(v_1, \ldots, v_n)$ on the same set, since it is obtained by applying finitely many summation and maximum operations to geometric random
variables. This
completes the proof of Theorem \ref{main_theorem}.
\end{proof}
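As an illustration (not part of the proof), the probability mass function \eqref{lpp_prob_mass} can be compared with a Monte Carlo estimate in a small example. The Python sketch below takes $n = 2$ with arbitrarily chosen distinct rates and an arbitrary target state, evaluates $\pi$ from the determinant formula, and estimates the probability that $(G(1, 2), G(1, 1))$ equals the target state.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n, v = 2, np.array([0.6, 0.35])
target = (1, 2)                           # a state (y_1, y_2) in W^2_{>= 0}

def phi(i, x):                            # phi_i(x) = v_i^{-(x+1)} - v_i^{x+1}
    return v[i - 1] ** (-(x + 1)) - v[i - 1] ** (x + 1)

def D(params, h, u):                      # composition of D^{(r)} h(u) = h(u) - r h(u-1)
    if not params:
        return h(u)
    return D(params[:-1], h, u) - params[-1] * D(params[:-1], h, u - 1)

def pmf(y):                               # c_n prod_k v_k^{y_k} det(D^{(1/v_1..1/v_j)} phi_i(y_j + j - 1))
    c = np.prod([1.0 / (v[i] - v[j]) for i in range(n) for j in range(i + 1, n)]) * np.prod(v) ** n
    M = [[D([1.0 / v[l] for l in range(j + 1)], lambda u, i=i: phi(i + 1, u), y[j] + j)
          for j in range(n)] for i in range(n)]
    return c * np.prod([v[k] ** y[k] for k in range(n)]) * np.linalg.det(np.array(M))

trials, hits = 300_000, 0
for _ in range(trials):
    G = np.zeros((n + 2, n + 2))
    for s in range(n + 1, 1, -1):
        for i in range(1, s):
            j = s - i
            G[i, j] = rng.geometric(1 - v[i - 1] * v[n - j]) - 1 + max(G[i + 1, j], G[i, j + 1])
    hits += all(G[1, n - j] == target[j] for j in range(n))
print(pmf(target), hits / trials)
\end{verbatim}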
\subsection{The largest particle}
\begin{proposition}
Let $F(\eta) = P(Y_n^* \leq \eta) = P(G(1, 1) \leq \eta).$
For distinct $0 < v_1, \ldots, v_n < 1$,
\begin{equation*}
F(\eta) = \prod_{1 \leq i \leq j \leq n} (1 - v_i v_j) \left(\prod_{j = 1}^n v_j\right)^{\eta} \text{Sp}_{\eta^{(n)}}(v_1, \ldots, v_n)
\end{equation*}
where $\text{Sp}$ denotes the symplectic Schur function from \eqref{symplectic_schur_defn}, and $\eta^{(n)}$ denotes an $n$-dimensional vector $(\eta, \ldots, \eta)$.
\end{proposition}
For point-to-line last passage percolation this was proven in \cite{Bisi_Zygouras2} and related to earlier formulas
for point-to-line last passage percolation in \cite{baik_rains} and \cite{bisi_zygouras}. We could appeal to this and Theorem \ref{top_particle_theorem}
to prove the same expression for the distribution function of $Y_n^*$. We now show that it follows
quickly from Proposition \ref{Invariant_measure}.
\begin{proof}
From Proposition \ref{Invariant_measure} we have
\begin{equation*}
F(\eta) = c_n \sum_{y_1, \ldots, y_n \leq \eta, y \in W^n_{\geq 0}} \prod_{k=1}^n v_k^{y_k}
\text{det}(D^{(1/v_1 \ldots 1/v_j)} \phi_i(y_j + j - 1))_{i, j = 1}^n.
\end{equation*}
We perform the summation in $y_n$ from $y_{n-1}$ to $\eta$ which replaces the last column by
$v_n^{\eta} D^{(1/v_1 \ldots 1/v_{n-1})} \phi_i(\eta +n - 1) - v_n^{y_{n-1} -1}
D^{(1/v_1 \ldots 1/v_{n-1})} \phi_i(y_{n-1} +n - 2)$. The second term differs from the penultimate column
by a factor of $v_{n}^{y_{n-1} -1} v_{n-1}^{-y_{n-1}}$ which is non-zero and independent of $i$. Therefore the second term
can be removed from the last column by column operations. We now apply this procedure inductively in the order $y_{n-1}, \ldots, y_1$ to obtain
\begin{equation*}
F(\eta) = c_n \left(\prod_{k=1}^n v_k \right)^{\eta} \text{det}(D^{(1/v_1 \ldots 1/v_{j-1})} \phi_i(\eta + j - 1))_{i, j = 1}^n
\end{equation*}
where $D^{\emptyset} = \text{Id}$.
We relate this to a symplectic Schur function by using column operations.
The entries in the first column are $\phi_i(\eta) = v_i^{-(\eta + 1)} - v_i^{\eta + 1}$. In the second column,
the entries are $D^{(1/v_1)} \phi_i(\eta + 1) = (v_{i}^{-(\eta + 2)} - v_i^{\eta + 2}) - v_1^{-1} (v_i^{-(\eta+1)}
- v_i^{\eta + 1})$ and the second bracketed term can be removed by column operations. This can be continued inductively and leads to
\begin{IEEEeqnarray*}{rCl}
F(\eta) & = & c_n (-1)^n \left(\prod_{k=1}^n v_k\right)^{\eta} \text{det}(v_i^{\eta + j} - v_i^{-(\eta + j)})_{i, j = 1}^n \\
& = & c_n \left(\prod_{k=1}^n v_k\right)^{\eta} Sp_{\eta^{(n)}}(v_1, \ldots, v_n) \text{det}(v_i^j - v_i^{-j})_{i, j = 1}^n.
\end{IEEEeqnarray*}
The proof is now completed by using \eqref{fulton_harris_eqn} to equate the normalisation constants.
\end{proof}
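This formula can again be checked numerically. The Python sketch below (arbitrary parameters) evaluates $F(\eta)$ using the Weyl-type determinant expression for the symplectic Schur function,
$\text{Sp}_{\lambda}(v) = \det(v_i^{-(\lambda_j + n - j + 1)} - v_i^{\lambda_j + n - j + 1})_{i,j=1}^n / \det(v_i^{-(n - j + 1)} - v_i^{n - j + 1})_{i,j=1}^n$,
which is assumed here to agree with the definition in \eqref{symplectic_schur_defn}, and compares it with a Monte Carlo estimate of $P(G(1, 1) \leq \eta)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n, eta, trials = 3, 4, 200_000
v = np.array([0.35, 0.55, 0.75])

ex = eta + np.arange(n, 0, -1)            # exponents lambda_j + n - j + 1 with lambda = (eta, ..., eta)
num = np.linalg.det(v[:, None] ** (-ex[None, :]) - v[:, None] ** ex[None, :])
ex0 = np.arange(n, 0, -1)                 # exponents n - j + 1 in the denominator
den = np.linalg.det(v[:, None] ** (-ex0[None, :]) - v[:, None] ** ex0[None, :])
prefactor = np.prod([1 - v[i] * v[j] for i in range(n) for j in range(i, n)])
F = prefactor * np.prod(v) ** eta * num / den

hits = 0
for _ in range(trials):
    G = np.zeros((n + 2, n + 2))
    for s in range(n + 1, 1, -1):
        for i in range(1, s):
            j = s - i
            G[i, j] = rng.geometric(1 - v[i - 1] * v[n - j]) - 1 + max(G[i + 1, j], G[i, j + 1])
    hits += G[1, 1] <= eta
print(F, hits / trials)
\end{verbatim}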
\section{A dynamically reversible process}
\label{dynamic_reversibility}
We will suppose throughout that $0 < v_1, \ldots, v_n < 1$.
Let $e_{ij}$ denote the vector taking value $1$ in position $(i, j)$ and $0$ otherwise. Let $S = \{(i, j): i, j \in \mathbb{Z}_{\geq 1}, i + j \leq n+1\}$ and
\begin{equation*}
\mathcal{X} = \{ (x_{ij})_{(i, j ) \in S} : x_{ij} \in \mathbb{Z}_{\geq 0}, x_{i+1, j} \leq x_{ij} \text{ and } x_{i, j+1} \leq x_{ij} \}.
\end{equation*}
We define a continuous-time Markov process $(X_{ij}(t) : i+j \leq n+1, t \geq 0)$ taking values in $\mathcal{X}$ by specifying its transition rates.
For $x, x + e_{ij} + e_{i j-1} + \ldots + e_{ik} \in \mathcal{X}$, $(i, j), (i, k) \in S$ and $k \leq j$ define
\begin{equation}
\label{X_defn1}
q(x, x + e_{ij} + e_{i j-1} + \ldots + e_{ik}) = v_{n-j + 1} (v_{n-j + 1} v_{i-1})^{-1_{\{x_{ij} \geq x_{i-1, j+1}\}}} 1_{\{x_{ij} = x_{i j-1} = \ldots = x_{ik}\}}
\end{equation}
with the notation that $x_{0, j} = \infty$ for $j = 2, \ldots, n+1$.
For $x, x - e_{ij} - e_{i+1 j} - \ldots - e_{lj} \in \mathcal{X}, (i, j), (l, j) \in S$ and $l \geq i$ define
\begin{equation}
\label{X_defn2}
q(x, x - e_{ij} - e_{i+1 j} - \ldots - e_{lj}) = v_{n - j +1}^{-1} (v_{i-1} v_{n - j + 1})^{1_{\{x_{ij} > x_{i - 1, j+1}\}}} 1_{\{x_{ij} = x_{i+1 j} = \ldots = x_{lj}\}}.
\end{equation}
All other transition rates are zero. The fact that $q(x, x') \neq 0$ only
if $x, x' \in \mathcal{X}$ corresponds to blocking interactions.
This defines a multi-dimensional Markov chain with interactions shown in Figure \ref{fig_Xarray}, where the arrows in Figure \ref{fig_Xarray} correspond to the following interactions.
\begin{enumerate}[(i)]
\item A push-block interaction denoted $A \rightarrow B$. If $A = B$ and $A$ jumps to the right by one then $B$ also jumps to the right by one (this may then cause further right jumps if $B \rightarrow C$). If $A = B$ and $B$ jumps left then this jump is suppressed.
\item A push-block interaction denoted $A \downarrow B$ where $A$ is at the base of the arrow and $B$ at the head of the arrow in Figure \ref{fig_Xarray}. If $A = B$ and $A$ jumps to the left by one then $B$ also jumps to the left by one (this may then cause further left jumps if $B \downarrow C$). If $A = B$ and $B$ jumps right then this jump is suppressed.
\item An interaction $A \leadsto B$ in which the rates of right and left jumps experienced by $B$ depend on its location relative to $A$. The particular form is given in \eqref{X_defn1} and \eqref{X_defn2} and is chosen such that
$(X_{ij}(t):i+j \leq n+1, t \geq 0)$ is dynamically reversible, see part (ii) of Theorem \ref{dynamical_reversibility_theorem}.
\item An interaction with a wall in which all left jumps below zero are suppressed. This is depicted by the diagonal
line on the left side of Figure \ref{fig_Xarray}.
\end{enumerate}
\begin{figure}
\centering
\begin{tikzpicture}[scale = 1.2]
\node at (4, 4) {$X_{11}$};
\draw[->] (3.3, 4) -- (3.7, 4);
\draw[->] (4, 3.7) -- (4, 3.3);
\node at (3, 4) {$X_{12}$};
\draw[->] (2.3, 4) -- (2.7, 4);
\draw[->] (4, 2.7) -- (4, 2.3);
\node at (2, 4) {$X_{13}$};
\draw[->] (1.3, 4) -- (1.7, 4);
\draw[->] (4, 1.7) -- (4, 1.3);
\node at (1, 4) {$X_{14}$};
\draw[->] (3.3, 3) -- (3.7, 3);
\draw[->] (3, 3.7) -- (3, 3.3);
\node at (4, 3) {$X_{21}$};
\draw[->] (2.3, 3) -- (2.7, 3);
\draw[->] (3, 2.7) -- (3, 2.3);
\node at (3, 3) {$X_{22}$};
\draw[->] (3.3, 2) -- (3.7, 2);
\node at (2, 3) {$X_{23}$};
\draw[->] (2, 3.7) -- (2, 3.3);
\node at (4, 2) {$X_{31}$};
\node at (3, 2) {$X_{32}$};
\node at (4, 1) {$X_{41}$};
\draw (0, 4) -- (4, 0);
\path[draw = black, ->, snake it] (3.3, 1.7) -- (3.7,1.3);
\path[draw = black, ->, snake it] (2.3, 2.7) -- (2.7,2.3);
\path[draw = black, ->, snake it] (1.3,3.7) -- (1.7, 3.3);
\path[draw = black, ->, snake it] (3.3,2.7) -- (3.7, 2.3);
\path[draw = black, ->, snake it] (3.3,3.7) -- (3.7, 3.3);
\path[draw = black, ->, snake it] (2.3,3.7) -- (2.7, 3.3);
\draw[->] (0.6, 3.6) -- (0.8, 3.8);
\draw[->] (1.6, 2.6) -- (1.8, 2.8);
\draw[->] (2.6, 1.6) -- (2.8, 1.8);
\draw[->] (3.6, 0.6) -- (3.8, 0.8);
\end{tikzpicture}
\caption{The interactions in the system $\{X_{ij} : i + j \leq n+1\}$.}
\label{fig_Xarray}
\end{figure}
To find the invariant measure of $X$ we use a result which has found applications in the queueing theory literature.
\begin{lemma}[Theorem 1.13, Kelly \cite{kelly}]
\label{Kelly}
Let $X$ be a stationary Markov process with state space $E$ and transition rates $(q(j, k))_{j, k \in E}$. Suppose we can find positive sequences
$(\hat{q}(j, k))_{j, k \in E}$ and $(\pi(j))_{j \in E}$ with $\sum_{j \in E} \pi(j) = 1$ such that:
\begin{enumerate}[(i)]
\item $q(j) = \hat{q}(j)$ where $q(j) = \sum_{k \neq j} q(j, k)$ and $\hat{q}(j) = \sum_{k \neq j} \hat{q}(j, k)$.
\item $\pi(j) q(j, k) = \pi(k) \hat{q}(k, j)$.
\end{enumerate}
Then $\pi$ is the invariant measure for $X$ and $\hat{q}$ are the transition rates of the time reversal of $X$ in stationarity.
\end{lemma}
The proof is straightforward: using (ii) then (i)
\begin{equation*}
\sum_{j \in E} \pi(j) q(j, k) = \sum_{j \in E} \pi(k) \hat{q}(k, j) = \pi(k) \hat{q}(k) = \pi(k) q(k).
\end{equation*}
Nonetheless this gives a convenient way of verifying an invariant measure \emph{if} we can guess
the transition rates of the time reversed process. In general, this is an intractable problem. However, in this case we can
make the choice that the invariant measure is a field of point-to-line last passage percolation times and the reversed transition
probabilities are given by \emph{reversing the direction of all interactions} between particles in Figure \ref{fig_Xarray}
and changing the order of the parameters $(v_1, \ldots, v_n) \rightarrow (v_n, \ldots, v_1)$ (the interactions
with the wall remain unchanged). This is motivated by the construction of \cite{FW}.
More precisely, we define the reversed transition rates as follows.
For $x, x + e_{ij} + e_{i j -1} + \ldots + e_{ik} \in \mathcal{X}$ for $(i, j), (i, k) \in S$ and $k \leq j$ define
\begin{equation*}
\hat{q}( x + e_{ij} + e_{i j -1} + \ldots + e_{ik}, x) = v_i^{-1} (v_{n-k+2} v_i)^{1_{\{ x_{ik} \geq x_{i+1, k-1}\}}} 1_{\{x_{ij} = x_{i j - 1} = \ldots = x_{ik}\}}
\end{equation*}
with the notation that $x_{j0} = \infty$ for $j = 2, \ldots, n+1$.
For $x, x -e_{ij} - e_{i+1, j} - \ldots - e_{lj} \in \mathcal{X}$, $(i, j), (l, j) \in S$ and $l \geq i$, define
\begin{equation*}
\hat{q}(x -e_{ij} - e_{i+1, j} - \ldots - e_{lj}, x) = v_l (v_{n-j+2} v_l)^{-1_{\{x_{lj} > x_{l+1, j-1} \}}} 1_{\{x_{ij} = x_{i+1 j} = \ldots = x_{lj}\}}.
\end{equation*}
Our proposed invariant measure is the probability mass function of $(G(i, j): i+j \leq n+1)$. This has an explicit form:
\begin{equation*}
\pi(x) = \prod_{\{i + j < n+1\}} (1- v_i v_{n-j+1}) (v_i v_{n-j+1})^{x_{ij} - \max(x_{i+1 j}, x_{i j+1})}
\prod_{i=1}^n (1 - v_i^2) v_i^{2 x_{i n-i+1}}
\end{equation*}
\begin{theorem}
\label{dynamical_reversibility_theorem}
Suppose that $0 < v_1, \ldots, v_n <1$.
Let $(X_{ij}^{(v_1 \ldots v_n)}(t) : i + j \leq n+1, t \geq 0)$ be the continuous-time Markov process with transition rates given
by \eqref{X_defn1} and \eqref{X_defn2}. This process has a unique invariant measure $(X_{ij}^*: i + j \leq n+1)$
which satisfies
\begin{equation*}
(X_{ij}^*: i + j \leq n+1) \stackrel{d}{=} (G(i, j) : i+j \leq n+1).
\end{equation*}
When run in stationarity,
\begin{equation*}
(X_{ij}^{(v_1, \ldots, v_n)}(t))_{t \in \mathbb{R}, i+j \leq n+1} \stackrel{d}{=} (X_{ji}^{(v_n, \ldots, v_1)}(-t))_{t \in \mathbb{R}, i+j \leq n+1}.
\end{equation*}
\end{theorem}
The process $(X_{ij}(t): i + j \leq n+1, t \geq 0)$ is irreducible, does not explode and
has a unique invariant measure.
The second statement says that, when run in stationarity, $(X_{ij}(t): i + j \leq n+1, t \geq 0)$ is dynamically reversible.
\begin{proof}
We will use Lemma \ref{Kelly}.
We first prove that for all $x, x' \in \mathcal{X}$,
\begin{equation}
\label{dynamical_rev_eq1}
\pi(x) q(x, x') = \pi(x') \hat{q}(x', x).
\end{equation}
First consider the case when $x' = x + e_{ij} + e_{i j -1} + \ldots + e_{ik}$. Both sides are zero unless
$x_{ij} = x_{i j-1} = \ldots = x_{ik}$.
When these equalities hold, $\max(x_{i p}, x_{i-1, p+1}) = x_{i-1, p+1}$ for each $p = j-1, \ldots, k$ and $\max(x_{i+1, p-1}, x_{i p}) = x_{i p}$ for each $p = j, \ldots, k+1$.
Therefore when $x_{ij} = x_{i j-1} = \ldots = x_{ik}$,
\begin{equation*}
\pi(x) = \bar{\pi}_1 (v_i v_{n-j+1})^{x_{ij}} (v_{i-1} v_{n-j+1})^{-x_{ij} 1_{\{x_{ij} \geq x_{i-1, j+1}\}}} (v_i v_{n-k+2})^{-x_{ik} 1_{\{ x_{ik} \geq x_{i+1, k-1}\}}}
\end{equation*}
where $\bar{\pi}_1$ does not depend on $x_{ij}, x_{i j-1}, \ldots, x_{ik}$.
In particular, with $x' = x + e_{ij} + e_{i j -1} + \ldots + e_{ik}$,
\begin{equation*}
\frac{\pi(x')}{\pi(x)} = v_i v_{n-j+1} (v_{i-1} v_{n-j+1})^{-1_{\{x_{ij} \geq x_{i-1, j+1} \}}} (v_i v_{n-k+2})^{-1_{\{ x_{ik} \geq x_{i+1, k-1}\}}}.
\end{equation*}
We compare to the ratio
\begin{equation*}
\frac{q(x, x')}{\hat{q}(x', x)} =
v_i v_{n-j+1} (v_{i-1} v_{n-j+1})^{-1_{\{x_{ij} \geq x_{i-1, j+1} \}}} (v_i v_{n-k+2})^{-1_{\{ x_{ik} \geq x_{i+1, k-1}\}}}.
\end{equation*}
Combining the above two equations proves \eqref{dynamical_rev_eq1} in the case $x' = x + e_{ij} + e_{i j -1} + \ldots + e_{ik}$.
The second case is when $x' = x -e_{ij} - e_{i+1, j} - \ldots - e_{lj}$ and proceeds in a similar manner.
Both sides are zero unless $x_{ij} = x_{i+1 j} = \ldots = x_{lj}$. When these equalities hold, then $\max(x_{p-1 j}, x_{p, j-1}) = x_{p, j-1}$ for $p = l, \ldots, i+1$ and $\max(x_{p-1, j+1}, x_{p, j}) = x_{p j}$ for $p = l, \ldots, i+1$.
Therefore
\begin{equation*}
\pi(x) = \bar{\pi}_2 (v_l v_{n-j+1})^{x_{lj}} (v_l v_{n-j+2})^{-x_{lj} 1_{\{x_{lj} > x_{l+1, j-1}\}}}
(v_{i-1} v_{n-j+1})^{-x_{ij} 1_{\{x_{ij} > x_{i-1, j+1}\}}}
\end{equation*}
where $\bar{\pi}_2$ does not depend on $x_{ij}, x_{i+1, j}, \ldots, x_{lj}$.
Therefore letting
$x' = x-e_{ij} - e_{i+1, j} - \ldots - e_{lj}$ we have
\begin{equation*}
\frac{\pi(x)}{\pi(x')} = (v_l v_{n-j +1})(v_l v_{n-j+2})^{-1_{\{x_{lj} > x_{l+1, j-1} \}}} (v_{i-1} v_{n- j +1})^{-1_{ \{ x_{ij} > x_{i-1, j+1} \}}}
\end{equation*}
and
\begin{equation*}
\frac{\hat{q}(x', x)}{q(x, x')}
= (v_l v_{n-j +1})(v_l v_{n-j+2})^{-1_{\{x_{lj} > x_{l+1, j-1} \}}} (v_{i-1} v_{n-j +1})^{-1_{ \{ x_{ij} > x_{i-1, j+1} \}}}.
\end{equation*}
The two above equations prove \eqref{dynamical_rev_eq1} in the case when $x' = x -e_{ij} - e_{i+1, j} - \ldots - e_{lj}$.
Both sides of \eqref{dynamical_rev_eq1} are zero in all other cases and so we have proven \eqref{dynamical_rev_eq1}.
We now show that for all $x \in \mathcal{X}$ we have $q(x) = \hat{q}(x)$.
This follows from comparing,
\begin{multline}
\label{q_rate}
q(x) = v_1 + v_1^{-1} 1_{\{x_{1n} > 0\}} + \sum_{k=1}^{n-1} v_{n-k+1} + v_{n-k+1}^{-1} 1_{\{x_{1 k} > x_{1 k+1}\}} \\
+ \sum_{\{i \neq 1, i+j \leq n+1\}} v_{n-j+1} (v_{n-j+1} v_{i-1})^{-1_{\{x_{ij} \geq x_{i-1, j+1}\} }} 1_{\{x_{ij} < x_{i-1, j}\}} \\ +
\sum_{\{i \neq 1, i+j < n+1\}} v_{n-j+1}^{-1} (v_{n-j+1} v_{i-1})^{1_{\{x_{ij} > x_{i-1, j+1} \}}} 1_{\{x_{ij} > x_{i j+1}\}} \\
+ \sum_{\{i \neq 1, i+j = n+1\}} v_{n-j+1}^{-1} (v_{n-j+1} v_{i-1})^{1_{\{x_{ij} > x_{i-1, j+1} \}}} 1_{\{x_{ij} > 0\}}
\end{multline}
and
\begin{multline}
\label{q_hat_rate}
\hat{q}(x) = \sum_{k=1}^{n-1} v_k + v_k^{-1} 1_{\{x_{k1} > x_{k+1, 1}\}} + v_n + v_n^{-1} 1_{\{x_{n1} > 0 \}} \\
+ \sum_{\{i \neq 1, i+j \leq n+1\}}
v_{i-1}(v_{i-1} v_{n-j+1})^{-1_{\{x_{i-1, j+1} \geq x_{ij}\}}} 1_{\{x_{i-1, j+1} < x_{i-1, j}\}} \\ + \sum_{\{i \neq 1, i+j < n+1\}}
v_{i-1}^{-1}(v_{i-1} v_{n-j+1})^{1_{\{x_{i-1, j+1} > x_{ij}\}}} 1_{\{x_{i-1, j+1} > x_{i, j+1}\}}
\\ +\sum_{\{i \neq 1, i+j = n+1\}}
v_{i-1}^{-1}(v_{i-1} v_{n-j+1})^{1_{\{x_{i-1, j+1} > x_{ij}\}}} 1_{\{x_{i-1, j+1} > 0\}}.
\end{multline}
One way to check that $q(x) = \hat{q}(x)$ is to first verify the equality in the case when the inequalities
$x_{ij} < x_{i-1, j}$ for each $i \neq 1$, $x_{ij} < x_{i, j-1}$
for each $j \neq 1$, and $x_{i, n-i+1} > 0$ for each $i = 1, \ldots, n$ all hold. This case can be seen directly from \eqref{q_rate} and \eqref{q_hat_rate}. We now consider the rates of jumps which are suppressed in each case when these inequalities no longer hold:
\begin{enumerate}[(i)]
\item If $x_{i, n-i+1} = 0$ then both forwards and backwards in time a jump of rate $v_{i}^{-1}$ is suppressed by the wall.
\item If $x_{ij} = x_{i, j-1}$ then forwards in time the left jump of the $(i, j-1)$ particle is suppressed and the suppressed jump has rate
$v_{n-j+2}^{-1}$ because $x_{i, j - 1} = x_{ij} \leq x_{i-1, j}$.
Backwards in time, the right jump of the $(i, j)$ particle is suppressed and the suppressed jump has rate $v_{n-j+2}^{-1}$
because $x_{i, j} = x_{i, j-1} \geq x_{i+1, j-1}$.
\item If $x_{ij} = x_{i-1 j}$ then forwards in time the right jump of the $(i, j)$ particle is suppressed and the suppressed jump has rate
$v_{i-1}^{-1}$ because $x_{ij} = x_{i-1, j} \geq x_{i-1, j+1}$.
Backwards in time, the left jump of the $(i-1, j)$ particle is suppressed and the suppressed jump has rate $v_{i-1}^{-1}$
because $x_{i-1 j} = x_{ij} \leq x_{i j-1}$.
\end{enumerate}
Using Lemma \ref{Kelly}, we have now established that $\pi$ is the invariant measure and $\hat{q}$ are the reversed
transition rates in stationarity of $(X_{ij}(t):i+j \leq n+1, t \geq 0)$. The second statement in the Theorem follows from comparing $q$ and $\hat{q}$ and observing that they are identical after
the swap $x_{ij} \rightarrow x_{ji}$ and $(v_1, \ldots, v_n) \rightarrow (v_n, \ldots, v_1)$.
\end{proof}
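The relations used in the proof are also simple to test numerically. The Python sketch below is illustrative only: with arbitrary parameters it checks the detailed relation $\pi(x) q(x, x') = \pi(x') \hat{q}(x', x)$ on randomly sampled configurations, restricted for simplicity to the single-site transitions (the moves with $k = j$ in \eqref{X_defn1} and $l = i$ in \eqref{X_defn2}); the weight $\pi$ is used in unnormalised form since the constants cancel in the ratios.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
n = 4
v = rng.uniform(0.2, 0.9, size=n)
S = [(i, j) for i in range(1, n + 1) for j in range(1, n + 2 - i)]
vv = lambda i: v[i - 1]                                # v_i with 1-based index

def log_pi(x):                                         # unnormalised log-weight of pi
    s = 0.0
    for (i, j) in S:
        if i + j < n + 1:
            s += (x[i, j] - max(x[i + 1, j], x[i, j + 1])) * np.log(vv(i) * vv(n - j + 1))
        else:
            s += 2 * x[i, j] * np.log(vv(i))
    return s

def random_state():                                    # a configuration in the state space
    x = {}
    for d in range(n + 1, 1, -1):
        for i in range(1, d):
            j = d - i
            w = rng.geometric(1 - vv(i) * vv(n - j + 1)) - 1
            x[i, j] = w + max(x.get((i + 1, j), 0), x.get((i, j + 1), 0))
    return x

up = lambda x, i, j: np.inf if i == 1 else x[i - 1, j + 1]   # x_{i-1, j+1}, with x_{0, .} = +inf
dn = lambda x, i, j: np.inf if j == 1 else x[i + 1, j - 1]   # x_{i+1, j-1}, with x_{., 0} = +inf

for _ in range(1000):
    x = random_state()
    for (i, j) in S:
        # single-site right jump x -> x + e_{ij}
        if (j == 1 or x[i, j] < x[i, j - 1]) and (i == 1 or x[i, j] < x[i - 1, j]):
            q = vv(n - j + 1) / ((vv(n - j + 1) * vv(i - 1)) if x[i, j] >= up(x, i, j) else 1.0)
            qh = ((vv(n - j + 2) * vv(i)) if x[i, j] >= dn(x, i, j) else 1.0) / vv(i)
            xp = dict(x); xp[i, j] += 1
            assert abs(log_pi(xp) - log_pi(x) - np.log(q / qh)) < 1e-8
        # single-site left jump x -> x - e_{ij}
        ok = x[i, j] >= 1 and all(x[i, j] - 1 >= x[a, b]
                                  for (a, b) in ((i + 1, j), (i, j + 1)) if (a, b) in x)
        if ok:
            q = ((vv(i - 1) * vv(n - j + 1)) if x[i, j] > up(x, i, j) else 1.0) / vv(n - j + 1)
            qh = vv(i) / ((vv(n - j + 2) * vv(i)) if x[i, j] > dn(x, i, j) else 1.0)
            xp = dict(x); xp[i, j] -= 1
            assert abs(log_pi(xp) - log_pi(x) - np.log(q / qh)) < 1e-8
print("detailed relation verified on sampled single-site transitions")
\end{verbatim}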
We end by discussing two further properties of the process $X$. These properties can both be proved by running the process $(X_{ij}(t) : i + j \leq n+1, t \geq 0)$ in stationarity, forwards and backwards in time, and follow in exactly the same way as Section 5 of \cite{FW} as they depend on the structural properties of the $X$ array rather than the exact dynamics.
\begin{enumerate}[(i)]
\item The marginal distribution of any row $(X_{i, n-i+1}, \ldots, X_{i, 1})$ run forwards in time is PushASEP with a wall
with rate vector $(v_i, \ldots, v_n)$. The marginal distribution of any column $(X_{n-j+1, j}, \ldots, X_{1, j})$ run backwards in time is PushASEP with
a wall with rate vector $(v_{n-j+1}, \ldots, v_1)$.
\item Let $Q^n_t$ denote the transition semigroup for PushASEP with a wall with $n$ particles.
Let $P_{n-1 \rightarrow n}$ denote the transition kernel for the update of the Markov chain $\mathbf{G}^{\text{pl}}$ defined in
Section \ref{point_to_line} from time $n-1$ to $n$. There is an intertwining
between $Q^{n-1}_t$ and $Q^n_t$ with intertwining kernel given by $P_{n-1 \rightarrow n}$.
In operator notation,
\begin{equation*}
Q_t^{n-1} P_{n-1 \rightarrow n} = P_{n-1 \rightarrow n} Q_t^n.
\end{equation*}
\end{enumerate}
\section{Push-block dynamics and Proposition \ref{two_sided_intertwining}}
\label{Intertwining}
The aim of this Section is to describe how Proposition \ref{two_sided_intertwining}
can be obtained by a construction of an interacting particle system with pushing and blocking interactions. This section is adapting the proof of Theorem 2.1 in \cite{warren_windridge} with a
different intertwining \eqref{intertwining} replacing Equation 3.3 from \cite{warren_windridge}.
We follow the set-up and notation of \cite{warren_windridge}.
For each $n \geq 1$, let $(\mathfrak{X}(t) : t \geq 0)$ be a continuous-time Markov process
$\mathfrak{X}(t) = (\mathfrak{X}^j_i(t))_{1\leq i \leq j \leq n}$ taking values in
$\mathbb{K}_n = \{(x^j_i)_{1 \leq i \leq j \leq n} \text{ with } x^{j+1}_i \leq x^j_i \leq x^{j+1}_{i+1}\}.$
We use $x^j$ to denote the vector $x^j = (x^j_1, \ldots, x^j_j)$ and describe $\mathfrak{X}^j$
as the positions of the particles in the $j$-th level of $\mathfrak{X}$. Let $v_i > 0$ for each $i \geq 1$.
The dynamics of $\mathfrak{X}$ is governed by $n(n+1)$ independent exponential clocks, where each particle
in the $j$-th level has two independent exponential clocks with rates $v_j$ and $v_{j}^{-1}$ corresponding to its right and left jumps
respectively. When the clock of a particle in the $j$-th level rings, that particle attempts to jump to the right or left but
will experience a pushing and a blocking interaction which ensures that $\mathfrak{X}$ remains within $\mathbb{K}_n$. In summary, a particle at the $j$-th level \emph{pushes} particles at levels $k > j$ and is \emph{blocked}
by particles at levels $k < j$.
More precisely, suppose the right clock of $\mathfrak{X}^j_i$ rings.
\begin{enumerate}[(i)]
\item If $\mathfrak{X}^j_i = \mathfrak{X}^{j-1}_i$ then the right jump is suppressed.
\item If $\mathfrak{X}^j_i < \mathfrak{X}^{j-1}_i$
and $\mathfrak{X}^j_i = \mathfrak{X}^{j+1}_{i+1}$ then $\mathfrak{X}^j_i$ jumps right by one and \emph{pushes}
$\mathfrak{X}^{j+1}_{i+1}$ to the right by one. The right jump of $\mathfrak{X}^{j+1}_{i+1}$ may then cause further right jumps in the same way.
\item In all other cases $\mathfrak{X}^j_i$ jumps to the right by one and all other
particles are unchanged.
\end{enumerate}
If the left clock of $\mathfrak{X}^j_i$ rings then we have the same trichotomy of cases: (i) if $\mathfrak{X}^j_i = \mathfrak{X}^{j-1}_{i-1}$ then the left
jump of $\mathfrak{X}^j_i$ is suppressed; (ii) if $\mathfrak{X}^j_i > \mathfrak{X}^{j-1}_{i-1}$ and $\mathfrak{X}^j_i = \mathfrak{X}^{j+1}_i$ then $\mathfrak{X}^j_i$ jumps to the left and \emph{pushes} $\mathfrak{X}^{j+1}_i$ to the left by one which may then push further particles to the left; and (iii) in all other cases $\mathfrak{X}^j_i$ jumps to the left
by one and all other
particles are unchanged.
Let $n \geq 1$ and for $x \in \mathbb{K}_n$ let $w_v(x) = \prod_{i=1}^n v_i^{\lvert x^i \rvert - \lvert x^{i-1} \rvert}$ where $\lvert x \rvert = \sum_{j=1}^d x_j$
for $x \in \mathbb{R}^d$ and $\lvert x^0 \rvert = 0$.
For any $z \in W^n$ we define $\mathbb{K}_n(z) = \{(x^j_i)_{1 \leq i \leq j \leq n} \in \mathbb{K}_n : x^n = z\}$ and a probability measure on $\mathbb{K}_n(z)$
by $M_z(x) = w_v(x)/S_z(v)$ for all $x \in \mathbb{K}_n(z)$.
\begin{proposition}
\label{GT_process}
Suppose $z \in W^n$ and that $(\mathfrak{X}(t): t \geq 0)$ has initial distribution $M_z(\cdot)$.
Then $(\mathfrak{X}^n(t) : t \geq 0)$ is a Markov process with conservative $Q$-matrix, given for $x \in W^n$
by
\begin{equation*}
Q(x, x \pm e_i) = \frac{S_{x \pm e_i}(v)}{S_x(v)} 1_{\{x \pm e_i \in W^n\}}, \text{ for } i = 1, \ldots, n
\end{equation*}
with $Q(x, x) = -\sum_{i=1}^n (v_i^{-1} + v_i)$.
\end{proposition}
We prove this Proposition inductively in $n$ by analysing the two consecutive bottom layers of $\mathfrak{X}$
and include the statement that $Q$ is conservative as part of the induction argument.
For $x \in W^n$ and $y \in W^{n+1}$ we will write $x \preceq y$ to mean that
$y_1 \leq x_1 \leq y_2 \leq \ldots \leq x_n \leq y_{n+1}$ and define
$W^{n, n+1} = \{(x, y) : x \in W^n,\, y \in W^{n+1},\, x \preceq y\}$.
By the inductive hypothesis, the marginal distribution of the two consecutive bottom layers of $\mathfrak{X}$ is a continuous-time
Markov process $(X(t), Y(t))_{t \geq 0}= (X_1(t), \ldots, X_n(t), Y_1(t), \ldots, Y_{n+1}(t))_{t \geq 0}$ taking values in $W^{n, n+1}$
and with $Q$-matrix given by the off-diagonal entries: for $(x, y), (x', y') \in W^{n, n+1}$,
\begin{equation*}
\mathcal{A}((x, y), (x', y')) = \begin{cases}
Q_X(x, x + e_i) & \text{ if } (x', y') = (x + e_i, y) \text{ and } x_i < y_{i+1}, \\
Q_X(x, x + e_i) & \text{ if } (x', y') = (x + e_i, y + e_{i+1}) \text{ and } x_i = y_{i+1}, \\
Q_X(x, x - e_i) & \text { if } (x', y') = (x - e_i, y) \text{ and } x_i > y_i, \\
Q_X(x, x - e_i) & \text{ if } (x', y') = (x - e_i, y - e_i) \text{ and } x_i = y_i, \\
v_{n+1}^{\pm} & \text{ if } (x', y') = (x, y \pm e_i)
\end{cases}
\end{equation*}
with $Q_X$ given by the $Q$-matrix from Proposition \ref{GT_process}.
All other off-diagonal entries are zero and
the diagonal entries $\mathcal{A}((x', y'), (x', y'))$ equal the negative of
\begin{equation}
\label{diagonal_entries}
\sum_{i=1}^n v_i + \sum_{i=1}^n v_i^{-1} + \sum_{i=1}^n v_{n+1} 1_{\{y_i' < x_i'\}}
+ \sum_{i=1}^n v_{n+1}^{-1} 1_{\{y_{i+1}' > x_i'\}} + v_{n+1} + v_{n+1}^{-1}.
\end{equation}
Here the two indicator sums record which jumps of the $Y$-particles are not blocked by $X$: for $i \leq n$ the particle $Y_i$ may jump to the right only if $y_i' < x_i'$, and $Y_{i+1}$ may jump to the left only if $y_{i+1}' > x_i'$, while the right jump of $Y_{n+1}$ and the left jump of $Y_1$, which give the final two terms, are never blocked. The inductive hypothesis that $Q_X$ is conservative means that $\mathcal{A}$ is conservative.
We define the function \begin{equation*}
m(x, y) = v_{n+1}^{\lvert y \rvert - \lvert x \rvert} \frac{S_x(v)}{S_y(v)}, \text{ for } (x, y) \in W^{n, n+1}
\end{equation*}
and an intertwining kernel given by $\Lambda : W^{n+1} \rightarrow W^{n, n+1}$
\begin{equation*}
\Lambda(y, (x', y')) = m(x', y') 1_{\{y = y'\}}.
\end{equation*}
The key step in proving Proposition \ref{GT_process} is to prove the intertwining
$Q_Y \Lambda = \Lambda \mathcal{A}$ where $Q_Y$ is the desired $Q$-matrix from
Proposition \ref{GT_process} with $n+1$ particles. This is equivalent to the statement that
\begin{equation}
\label{intertwining}
Q_{Y}(y, y') = \sum_{x \preceq y} \frac{m(x, y)}{m(x', y')} \mathcal{A}((x, y), (x', y')), \text{ for all } y \in W^{n+1} \text{ and } (x', y') \in W^{n, n+1}.
\end{equation}
Once this is established it follows from general theory \cite{rogers1981} that $Y$ is an autonomous
Markov process
with the desired $Q$-matrix and this $Q$-matrix is conservative; therefore Proposition \ref{GT_process} follows inductively.
For a more detailed argument we refer to \cite{warren_windridge}: we are replacing Equation 3.3 from \cite{warren_windridge}
with equation \eqref{intertwining} and the rest of the argument is unchanged.
It remains to show \eqref{intertwining}.
We first show \eqref{intertwining} in the case $y' = y$. The right hand side of \eqref{intertwining} equals
\begin{equation*}
\sum_{x \preceq y} v_{n+1}^{\lvert x' \rvert - \lvert x \rvert} \frac{S_x(v)}{S_{x'}(v)} \mathcal{A}((x, y'), (x', y'))
\end{equation*}
and $\mathcal{A}((x, y'), (x', y'))$ can only be non-zero if $x = x'$ or $x = x' \pm e_i$.
If $x = x'$ then $\mathcal{A}((x', y'), (x', y'))$ equals the negative of \eqref{diagonal_entries}.
If $x = x' - e_i$ then $\mathcal{A}((x, y'), (x', y')) = Q_X(x'-e_i, x') 1_{\{y_{i}' < x_{i}'\}}$ and
if $x = x' + e_i$ then $\mathcal{A}((x, y'), (x', y')) = Q_X(x'+e_i, x') 1_{\{y_{i+1}' > x_{i}'\}}$.
Using these expressions we find that the right hand side of \eqref{intertwining}
is equal to
\begin{multline*}
-\left(\sum_{i=1}^n v_i + \sum_{i=1}^n v_i^{-1} + \sum_{i=1}^n v_{n+1} 1_{\{ y_i' < x_i' \}}
+ \sum_{i=1}^n v_{n+1}^{-1} 1_{\{y_{i+1}' > x_i'\}} + v_{n+1} + v_{n+1}^{-1} \right)\\
+ \sum_{i=1}^n v_{n+1} \frac{S_{x' - e_i}(v)}{S_{x'}(v)} \frac{S_{x'}(v)}{S_{x' - e_i}(v)} 1_{\{y_i' < x_i'\}}
+ \sum_{i=1}^n v_{n+1}^{-1} \frac{S_{x' + e_i}(v)}{S_{x'}(v)} \frac{S_{x'}(v)}{S_{x' + e_i}(v)} 1_{\{y_{i+1}' > x_{i}'\}}
\end{multline*}
This equals $-\sum_{i=1}^{n+1} v_i - \sum_{i=1}^{n+1} v_i^{-1} = Q_Y(y', y')$ and proves \eqref{intertwining} for
$y' = y$.
Suppose that $y' = y+e_i$ and consider two cases, depending on whether or not a pushing interaction is involved.
If $i = 1$, or $i > 1$ and $x_{i-1}' < y_i'$, then the right hand side of \eqref{intertwining} equals
\begin{equation*}
\frac{v_{n+1}^{\lvert y' - e_i \rvert - \lvert x' \rvert}}{v_{n+1}^{\lvert y' \rvert - \lvert x' \rvert}}
\frac{S_{x'}(v)}{S_{y' - e_i}(v)} \frac{S_{y'}(v)}{S_{x'}(v)} v_{n+1} = Q_Y(y' - e_i, y').
\end{equation*}
The second case is if $i > 1$ and $x_{i-1}' = y_i'$; the contribution then comes from $x = x' - e_{i-1}$ via a pushing move, and the right hand side of \eqref{intertwining} equals
\begin{equation*}
\frac{v_{n+1}^{\lvert y' - e_i \rvert - \lvert x' - e_{i-1}\rvert}}{v_{n+1}^{\lvert y' \rvert - \lvert x' \rvert}} \frac{S_{x'-e_{i-1}}(v)}{S_{y' - e_i}(v)}
\frac{S_{y'}(v)}{S_{x'}(v)}
\frac{S_{x'}(v)}{S_{x' - e_{i-1}}(v)} = Q_Y(y' - e_i, y').
\end{equation*}
Finally suppose that $y' = y - e_i$ and split again into two cases. If $i = n+1$, or $i \leq n$ and $y_i' < x_i'$, then the right hand side of \eqref{intertwining} equals
\begin{equation*}
\frac{v_{n+1}^{\lvert y' + e_i \rvert - \lvert x' \rvert}}{v_{n+1}^{\lvert y' \rvert - \lvert x' \rvert}}
\frac{S_{x'}(v)}{S_{y' + e_i}(v)} \frac{S_{y'}(v)}{S_{x'}(v)} v_{n+1}^{-1} = Q_Y(y'+e_i, y')
\end{equation*}
If $i \leq n$ and $y_i' = x_i'$
then the right hand side
of \eqref{intertwining} equals
\begin{equation*}
\frac{v_{n+1}^{\lvert y' + e_i \rvert - \lvert x' + e_i\rvert}}{v_{n+1}^{\lvert y' \rvert - \lvert x' \rvert}} \frac{S_{x' + e_i}(v)}{S_{y' + e_i}(v)}
\frac{S_{y'}(v)}{S_{x'}(v)}
\frac{S_{x'}(v)}{S_{x' + e_i}(v)} = Q_Y(y' + e_i, y').
\end{equation*}
This completes the proof of \eqref{intertwining} and as described above completes the proof of Proposition \ref{GT_process}.
\begin{proof}[Proof of Proposition \ref{two_sided_intertwining}]
We construct the process $\mathfrak{X}$ in
Proposition \ref{GT_process} started from the origin with right and left jump rates on level $j$ given by $v_{n-j+1}$
and $v_{n-j+1}^{-1}$ respectively.
Part (i) is an immediate consequence.
Part (ii)
follows from the fact that $(\mathfrak{X}^n_n(t))_{t \geq 0}$ is the
largest particle in two different Markov processes $(\mathfrak{X}^n_1(t), \mathfrak{X}^n_2(t), \ldots, \mathfrak{X}^n_n(t))_{t \geq 0}$ which has $Q$-matrix given in Proposition \ref{GT_process}
and $(\mathfrak{X}^1_1(t), \mathfrak{X}^2_2(t), \ldots, \mathfrak{X}^n_n(t))_{t \geq 0}$ which
is PushASEP (without a wall) where the $i$-th particle has right jump rate $v_{n-i+1}$ and left jump rate $v_{n-i+1}^{-1}$.
The top particle of PushASEP (without a wall) is equal in distribution, as a process, to the right hand side of Proposition \ref{two_sided_intertwining}
by the argument used to prove equation \eqref{semi_discrete_lpp}.
\end{proof}
\paragraph{Acknowledgements.} I am very grateful to Jon Warren for helpful and stimulating discussions and to Neil O'Connell
for suggesting the approach in Section 2.1. I am grateful for the financial support of the Royal Society Enhancement Award `Log-correlated Gaussian fields and symmetry classes in random matrix theory RGF\textbackslash EA\textbackslash 181085.'
\section{Introduction}
Two of the most basic matrix parameters are the \emph{determinant}
and the \emph{permanent}: for an $n\times n$ matrix $M=(x_{i,j})_{i,j}$,
define
\begin{equation}
\det(M)=\sum_{\pi\in S_{n}}\operatorname{sign}(\pi)\prod_{i=1}^{n}x_{i,\pi(i)}\quad\textnormal{and}\quad\per(M)=\sum_{\pi\in S_{n}}\prod_{i=1}^{n}x_{i,\pi(i)}.\label{eq:det-per}
\end{equation}
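Both quantities can be evaluated directly from \cref{eq:det-per} for small matrices; the following short Python sketch (purely illustrative, with function names of our own choosing) does exactly that by summing over all $n!$ permutations.
\begin{verbatim}
from itertools import permutations

def perm_sign(pi):
    # Sign of a permutation given as a tuple of 0-based images:
    # each cycle of even length contributes a factor of -1.
    visited, s = [False] * len(pi), 1
    for start in range(len(pi)):
        if not visited[start]:
            j, length = start, 0
            while not visited[j]:
                visited[j] = True
                j = pi[j]
                length += 1
            if length % 2 == 0:
                s = -s
    return s

def det_and_per(M):
    # Evaluate the defining sums over all n! permutations (small n only).
    n = len(M)
    d = p = 0
    for pi in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= M[i][pi[i]]
        d += perm_sign(pi) * prod
        p += prod
    return d, p

print(det_and_per([[1, 1], [1, -1]]))   # prints (-2, 0)
\end{verbatim}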
A central direction of research in probabilistic combinatorics and random matrix
theory is to understand the determinant and permanent of different
types of random matrices. For example, let $A_{n}$ be an $n\times n$ matrix whose entries are i.i.d.\ Rademacher-distributed
random variables, taking values $\pm1$ with probability $1/2$ each (this is often called a \emph{random
Bernoulli matrix}).
A classical theorem of Koml\'os~\cite{Kom67} (perhaps the foundational
theorem in discrete random matrix theory) is that $\Pr(\det A_{n}=0)=o(1)$
as $n\to\infty$. That is to say, $A_n$ is \emph{asymptotically
almost surely} nonsingular. Since then, there has been intensive effort
to refine our understanding of the singularity probability
(see \cite{KKS95,TV06,TV07,RV08,BVW10}), culminating in a recent breakthrough of Tikhomirov~\cite{Tik20} proving that $\Pr(\det A_{n}=0)=2^{-n+o(n)}$.
The problem of estimating the order of magnitude of $\det A_{n}$ has also received significant attention:
Tao and Vu~\cite{TV06} proved that with probability $1-o(1)$
we have $\left|\det A_{n}\right|=n^{n/2+o(n)}$, and later
Nguyen and Vu~\cite{NV14} proved a central limit theorem for $\log |\det A_{n}|$ (see also \cite{Gir79,Gir97}).
Most of the above-mentioned results generalise readily to more general types
of random matrices, where the entries are independently sampled from
any subgaussian distribution. There has also been intensive
interest in random matrices with dependence between the entries. Perhaps the most prominent examples are \emph{symmetric} random matrices. Let $M_{n}$ be the random matrix whose entries on and above the diagonal are independent Rademacher random variables, and the entries below the diagonal are chosen to make the matrix symmetric (equivalently, we can choose a random matrix $A_n$ as above and condition on the event that $A_{n}$ is symmetric). The study of random symmetric matrices has necessitated the development of new tools, but by now there is a fairly complete understanding of the determinant of a random symmetric matrix with Rademacher entries. The fact that $\Pr(\det M_{n}=0)=o(1)$ was first proved by Costello, Tao and Vu~\cite{CTV06} (see also \cite{Fer20}), and stronger
estimates on $\Pr(\det M_{n}=0)$ were obtained by several
authors~\cite{Ngu12b,Ver14,FJ19,CMMM19}. It is also known that with probability $1-o(1)$ we have $|\det M_{n}|=n^{n/2+o(n)}$
(this follows from work on the least singular value of $M_{n}$ due
to Nguyen~\cite{Ngu12a} and Vershynin~\cite{Ver14}, together with Wigner's celebrated
semicircle law~\cite{Wig55,Wig58}), and a central limit theorem for $\log |\det M_{n}|$
was proved by Bourgade and Mody~\cite{BM19} (see also \cite{TV12}).
It is widely believed that for all the above-mentioned theorems concerning determinants of random matrices (symmetric or not), there should be analogous theorems for permanents. However, the permanent appears to be a much more challenging parameter to study. For example, while the determinant encodes information about linear dependence and can be interpreted as the (signed) volume of a certain parallelepiped, it only seems possible to attack the permanent from a ``combinatorial'' point of view, directly considering the formal definition in \cref{eq:det-per}. The fact that the permanent is harder to study is maybe not surprising, since (in contrast to the determinant) the permanent of a matrix is $\#$P-hard to compute (as was famously proved by Valiant~\cite{Val79}). Even the analogue of the singularity problem, to show that $\Pr(\per A_n=0)=o(1)$, was open for a long time. In 2009, Tao and Vu~\cite{TV09} finally resolved this problem and also estimated the typical magnitude of $\per A_{n}$: namely, they proved that asymptotically almost surely $\left|\per A_{n}\right|=n^{n/2+o(n)}$. Perhaps surprisingly, permanents of random matrices turn out to be of interest in quantum computing: Aaronson and Arkhipov~\cite{AA13} proved that quantum computers cannot be efficiently simulated by classical computers, conditional on a conjecture which strengthens the aforementioned Tao--Vu permanent theorem (see also \cite{Aar10,EM18,LM19}).
In this paper we study the permanent of random \emph{symmetric} matrices. More precisely, we estimate the typical magnitude of the permanent of a random symmetric matrix with Rademacher entries.
\begin{thm}\label{thm:per-magnitude}
\label{thm:main-0}Asymptotically almost surely, $|\per M_n|=n^{n/2+o(n)}$.
\end{thm}
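As a rough numerical illustration of \cref{thm:per-magnitude} (not used anywhere in the proofs), one can sample small random symmetric Rademacher matrices and compare $\log|\per M_n|$ with $\tfrac{n}{2}\log n$. The sketch below, with helper names of our own choosing, does this by brute force and is only feasible for small $n$.
\begin{verbatim}
import math, random
from itertools import permutations

def per(M):
    # Permanent via the defining sum; fine for the small n used here.
    n = len(M)
    return sum(math.prod(M[i][pi[i]] for i in range(n))
               for pi in permutations(range(n)))

def random_symmetric_rademacher(n):
    # Independent +-1 entries on and above the diagonal, mirrored below.
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            M[i][j] = M[j][i] = random.choice((-1, 1))
    return M

n, trials, ratios = 8, 100, []
for _ in range(trials):
    p = abs(per(random_symmetric_rademacher(n)))
    if p > 0:
        ratios.append(math.log(p) / (0.5 * n * math.log(n)))
# the average ratio should drift towards 1 (slowly) as n grows
print(sum(ratios) / len(ratios) if ratios else "all permanents were zero")
\end{verbatim}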
The study of the permanent of a random symmetric matrix seems to have first been explicitly suggested by Tao and Vu (see \cite[Remark~1.6]{TV09}). They observed that their arguments for the permanent of a (not necessarily symmetric) random matrix ``do not seem to easily yield any non-trivial result for the permanent of a random \emph{symmetric} Bernoulli matrix''. The statement of \cref{thm:per-magnitude} has been conjectured by Vu in 2009 (see \cite{Vu09}). He also mentioned the conjecture in a recent survey \cite[Conjecture 6.11]{Vu20}, and described it as ``the still missing piece of the picture'' regarding determinants and permanents of random discrete matrices.
\cref{thm:per-magnitude} is actually a combination of two different results. First, the following proposition gives an upper bound on $|\per M_n|$.
\begin{prop}\label{thm:var}
For any $\varepsilon>0$, if $n$ is sufficiently large with respect to $\varepsilon$, we have
\[\Pr\left(|\per M_{n}|\ge n^{n/2+\varepsilon n}\right)\le n^{-\varepsilon n}.\]
\end{prop}
\cref{thm:var} is easily proved using an estimate for $\E[ (\per M_n)^2]$ and Markov's inequality; see \cref{sec:var}. The main role of this paper is to prove the following lower bound on $|\per M_n|$.
\begin{thm}
\label{thm:main}There is a positive constant $c>0$ such that for any $\varepsilon>0$ the following holds. If $n$ is sufficiently large with respect to $\eps$, we have
\[\Pr\left(|\per M_{n}|\le n^{n/2-\varepsilon n}\right)\le n^{-c}.\]
\end{thm}
We remark that the constant $c$ in \cref{thm:main} can be made explicit. For example, $c=1/150$ certainly suffices, though this can be improved substantially simply by optimising constants throughout the proof. However, without new ideas our methods do not seem to be capable of proving the statement of \cref{thm:main} with any $c\ge 1/2$. This state of affairs is essentially the same as for the non-symmetric case previously considered by Tao and Vu~\cite{TV09}.
Of course, \cref{thm:main} also shows that the probability of having $\per M_n=0$ is polynomially small. Before this paper no nontrivial bounds for this probability were known, except when $n=2^m-1$ with $m\in \NN$, where for elementary number-theoretic reasons it is actually impossible to have $\per M_n=0$ (see \cite{SS83}\footnote{Actually, this result is part of an extensive body of research concerning permanents of (non-random) matrices with $\pm1$ entries; see for example \cite{Wan74,BG19,BGT15,BGT17,SS83,Wan05,KS83,Sei84,KS84}.}).
Finally, we remark that the methods used to prove \cref{thm:main-0} are quite robust, and analogous theorems can be proved for much more general distributions. For example, consider any fixed real probability distributions $\mu$ and $\nu$, and let $M_n^{\mu,\nu}$ be the random symmetric matrix whose diagonal entries have distribution $\nu$ and whose off-diagonal entries have distribution $\mu$ (and whose entries on and above the diagonal are mutually independent). With some fairly mild assumptions on $\mu$ and $\nu$ it is a routine matter to prove an analogue of \cref{thm:var} for $M_n^{\mu,\nu}$, and in \cref{sec:concluding} we sketch how to make some minor adaptations to the proof of \cref{thm:main} to obtain a version that holds for very general distributions (we only require that $\mu$ has nontrivial support). In particular, one can prove an analogue of \cref{thm:main-0} for random symmetric Gaussian matrices such as the Gaussian Orthogonal Ensemble (GOE).
\textbf{Notation.} In this paper, we use the notation $\NN=\{1,2,\dots\}$ for the positive integers. All logarithms are to base $e$. For functions $f:\NN\to \RR$ and $g:\NN\to \RR_{>0}$, we write $f=o(g)$ if $f(n)/g(n)\to 0$ as $n\to\infty$.
\section{Proof of \texorpdfstring{\cref{thm:var}}{Proposition~\ref{thm:var}}: the second moment of the permanent}\label{sec:var}
In this section we provide the simple proof of \cref{thm:var}. It will be an immediate consequence of Markov's inequality and the following lemma.
\begin{lem}\label{lem:var}
$\E[ (\per M_n)^2]\le n^{n+o(n)}$.
\end{lem}
\begin{proof}
Write $x_{i,j}$ for the $(i,j)$ entry of $M_{n}$. For a permutation
$\pi\in S_n$, let $X_{\pi}=\prod_{i=1}^n x_{i,\pi(i)}$,
so that $\per M_{n}=\sum_{\pi}X_{\pi}$. It will not be necessary for the proof, but we remark that
\[
\E X_{\pi}=\begin{cases}
1 & \text{if }\pi\text{ consists only of 2-cycles,}\\
0 & \text{otherwise}.
\end{cases}
\]
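(Indeed, if $\pi$ consists only of $2$-cycles, then the factors pair up as $x_{i,\pi(i)}x_{\pi(i),i}=x_{i,\pi(i)}^{2}=1$. If $\pi$ has a fixed point $i$, the diagonal entry $x_{i,i}$ appears exactly once in the product, and if $\pi$ has a cycle of length at least $3$, then for any $i$ in such a cycle the entry $x_{i,\pi(i)}$ is not matched by its transpose $x_{\pi(i),i}$ and so also appears exactly once; in both cases the expectation vanishes since the entries are independent with mean zero.)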
Furthermore, let $I_{\pi}\subseteq\{1,\dots,n\}$
be the set of indices which appear in 2-cycles of $\pi$, and let
$F_{\pi}$ be the family of sets $\{i,\pi(i)\}$, for
$i\notin I_{\pi}$. Then for two permutations $\pi,\pi'\in S_n$,
we have
\[
\E [X_{\pi}X_{\pi'}]=\begin{cases}
1 & \text{if }(I_{\pi},F_{\pi})=(I_{\pi'},F_{\pi'}),\\
0 & \text{otherwise}.
\end{cases}
\]
For $k=0,\dots,n$, let $\mathcal{Q}_{k}$ be the set of all permutations $\pi\in S_n$
satisfying $|I_{\pi}|=k$, and note that $|\mathcal{Q}_{k}|\le \binom{n}{k}k^{k/2}(n-k)!\le 2^nk^{k/2}n^{n-k}$. Indeed, for any choice of a set $I\subseteq \{1,\dots,n\}$ of size $k$, there are at most $k^{k/2}$ ways to partition $I$ into 2-cycles, and at most $(n-k)!$ ways to choose a permutation of $\{1,\dots,n\}\setminus I$.
Now, for any $0\leq k\leq n$ and any $\pi\in\mathcal{Q}_{k}$, there are at most $2^{n-k}k^{k/2}$
choices of $\pi'$ satisfying $(I_{\pi},F_{\pi})=(I_{\pi'},F_{\pi'})$.
Indeed, for such $\pi'$, the restriction of $\pi'$ to $I_{\pi}$
must be a permutation of $I_\pi$ consisting only of 2-cycles (so there are at most $k^{k/2}$ ways to choose this restriction), and for each $i\notin I_{\pi}$ we
must have $\pi'(i)\in\{\pi(i),\pi^{-1}(i)\}$. It follows that
\[
\E[(\per M_{n})^2]=\sum_{\pi,\pi'\in S_n}\E X_{\pi}X_{\pi'} \le\sum_{k=0}^n|\mathcal{Q}_{k}|\cdot2^{n-k}k^{k/2}\le\sum_{k=0}^n 4^n k^k n^{n-k} \le n^{n+o(n)},
\]
as claimed.
\end{proof}
\begin{proof}[Proof of \cref{thm:var}]
Let $\eps>0$, and suppose that $n$ is sufficiently large such that the $o(n)$-term in \cref{lem:var} is at most $\eps n$. Then we have $\E[ (\per M_n)^2]\le n^{n+\eps n}$ and consequently by Markov's inequality
\[\Pr\left(\vert\per M_n\vert\geq n^{n/2+\eps n}\right)=\Pr\left((\per M_n)^2\geq n^{n+2\eps n}\right)\leq \frac{\E[ (\per M_n)^2]}{n^{n+2\eps n}}\leq n^{-\eps n},\]
as desired.
\end{proof}
\section{Structure of the proof of \texorpdfstring{\cref{thm:main}}{Theorem~\ref{thm:main}}}
The rest of this paper is devoted to the proof of \cref{thm:main}. In this section we outline the high-level strategy of the proof, stating two key lemmas and deducing \cref{thm:main} from them.
We couple the distributions of the matrices $M_n$ for all $n\in \NN$ by viewing each $M_{n}$ as containing the first $n$ rows and columns
of an infinite random symmetric matrix (with Rademacher entries). Say that subsets of a given ground
set are \emph{complement-disjoint} if their complements are disjoint.
For $A,B\su \{1,\dots,n\}$, let $M_{n}[A,B]$ be the submatrix of $M_{n}$ consisting
of the rows in $A$ and the columns in $B$. For $\lambda\geq 0$, we say that a matrix is
\emph{$\lambda$-heavy} if its permanent has absolute value at least
$\lambda$.
The following lemma shows that with high probability there exists a heavy submatrix of $M_n$ consisting of almost all the rows and columns of $M_n$, and moreover we have some control over which rows and columns are not included. Roughly speaking, it is proved by studying how permanents of submatrices evolve in the sequence of random matrices $M_1,\dots,M_n$.
\begin{lem}
\label{lem:grow-single-minor}There is a positive constant $c>0$ such that for any $\varepsilon>0$ the following holds. Let $n\in \NN$ be sufficiently large with respect to $\eps$, and let $L=\lfloor (\log n)/10\rfloor$. Let $X$ and $Y$ be disjoint subsets of $\{ 1,\dots,n\}$ with sizes $\vert X\vert=L$ and $\vert Y\vert=3L$. Then with probability at least $1-(1/4)\cdot n^{-c}$ there is a set $B$ satisfying $|B|=n-L$ and $\{ 1,\dots,n\} \setminus Y\su B\su \{ 1,\dots,n\}$, such that $M_{n}[\{ 1,\dots,n\} \setminus X,B]$ is $n^{(1-\eps)n/2}$-heavy.
\end{lem}
By applying \cref{lem:grow-single-minor} with various different choices of $X$ and $Y$, we can obtain many heavy submatrices $M_n[A_1,B_1],\dots, M_n[A_m,B_m]$ such that the sets $A_{1},\dots,A_{m},B_{1},\dots,B_{m}$ are complement-disjoint. The next lemma states that in such a situation, if we sample $M_{n+1}$ by adding a random row and column to $M_n$, then a large proportion of our submatrices can be transformed into larger submatrices without losing much of their heaviness. We will apply this lemma repeatedly, each step decreasing by 1 the number of rows and columns that our submatrices are missing.
\begin{lem}
\label{lem:endgame-step}
Let $m\in \NN$ be sufficiently large. Let $\lambda>0$, let $1\leq L< n$ be integers, and let $A_{1},\dots,A_{m},B_{1},\dots,B_{m}$
be complement-disjoint subsets of $\{ 1,\dots,n\} $ of
size $n-L$. Let us condition on an outcome of $M_{n}$ such that all the submatrices $M_{n}[A_{\ell},B_{\ell}]$, for $\ell=1,\dots,m$, are $\lambda$-heavy.
Then, with
probability at least $1-m^{-1/24}$, for $m'=\ceil{m/36}$ there are complement-disjoint subsets $A_{1}',\dots,A_{m'}',B_{1}',\dots,B_{m'}'\subseteq\{ 1,\dots,n+1\} $
of size $n-L+2$, such that for all $\ell=1,\dots,m'$ the submatrices $M_{n+1}[A_{\ell}',B_{\ell}']$ are $\lambda/(4n^{4})$-heavy.
\end{lem}
We now show how to deduce \cref{thm:main} from \cref{lem:grow-single-minor,lem:endgame-step}.
\begin{proof}[Proof of \cref{thm:main}]
Choose an absolute constant $0<c<1/50$, such that the statement in \cref{lem:grow-single-minor} is satisfied. Fix $\eps>0$.
Let $L=\lfloor (\log n)/10\rfloor$ and $m=\lfloor (n-L)/(4L)\rfloor$, and consider disjoint sets $X_{1},\dots,X_{m},Y_{1},\dots,Y_{m}\subseteq\{ 1,\dots,n-L\} $ with $\vert X_1\vert=\dots=\vert X_m\vert=L$ and $\vert Y_1\vert=\dots=\vert Y_m\vert=3L$.
For each $\ell=1,\dots, m$, we apply \cref{lem:grow-single-minor} to the subsets $X_{\ell},Y_{\ell}\subseteq \{ 1,\dots,n-L\}$. Each application fails with probability at most $n^{-c}/4$, so it follows from Markov's inequality (see for example \cref{lem:Markov}) that with probability at least $1-(1/2)\cdot n^{-c}$, at least $m/2$ applications succeed. That is to say, with $m'= \ceil{m/2}$ and $\lambda=(n-L)^{(1-\eps)(n-L)/2}$, we obtain complement-disjoint sets $A_{1},\dots,A_{m'},B_{1},\dots,B_{m'}\subseteq\{ 1,\dots,n-L\} $ of size $n-2L$ such that for all $\ell=1,\dots, m'$, the matrices $M_{n-L}[A_{\ell},B_{\ell}]$ are $\lambda$-heavy. Note that if $n$ is sufficiently large with respect to $\eps$, then $m'\ge n^{9/10}$ and $\lambda\geq n^{n/2-(3/4)\eps n}$.
Now, we wish to iteratively apply \cref{lem:endgame-step}, $L$ times in total. After each application, $m'$ decreases by a factor of $36< e^4$, so after $L=\lfloor (\log n)/10\rfloor$ steps the value of $m'$ will still be at least $\sqrt{n}$. Each of the $L$ applications of \cref{lem:endgame-step} succeeds with probability at least $1-n^{-1/48}$. Thus, with probability at least $1-L\cdot n^{-1/48}\geq 1-(1/2)\cdot n^{-c}$ (for sufficiently large $n$) we can indeed apply the lemma $L$ times. In the end we obtain subsets $A',B'\subseteq \{ 1,\dots,n\}$ of size $n$ such that the matrix $M_n[A',B']$ is $\lambda'$-heavy, where
\[\lambda'=\frac{\lambda}{4(n-L)^{4}\cdot 4(n-L+1)^{4}\dotsm 4(n-1)^{4}}\geq \frac{\lambda}{(4n^{4})^L}\geq \frac{\lambda}{n^{5L}}\geq n^{n/2-(3/4)\eps n-\log n/2}\geq n^{n/2-\eps n}\]
(again assuming that $n$ is sufficiently large with respect to $\eps$). But note that we must have $A'=B'=\{ 1,\dots,n\}$, so this means that $M_n$ itself is $\lambda'$-heavy. In summary, with probability at least $1-n^{-c}$ we have $\vert \per M_n\vert\geq \lambda'\geq n^{n/2-\eps n}$, as desired.
\end{proof}
We remark that the overall structure of our proof is similar to the work of Tao and Vu~\cite{TV09} on permanents of (not necessarily symmetric) random matrices. Indeed, Tao and Vu's proof can also be broken up into two parts analogous to \cref{lem:grow-single-minor,lem:endgame-step}. However, in Tao and Vu's setting, all entries of the random matrix are independent, allowing them to expose the entries row by row. After exposing $k$ rows, they consider $k\times k$ submatrices that consist of all the $k$ exposed rows (and of $k$ of the $n$ columns). When exposing the $k$-th row, the permanent of any such $k\times k$ submatrix can be described as a linear polynomial in some of the entries of the new row, where the coefficients are given by the permanents of certain $(k-1)\times (k-1)$ submatrices in the first $k-1$ rows. In contrast, in our setting with the random symmetric matrix $M_n$, we are forced to expose the entries of our matrix in a different way: at the $k$-th step we reveal the entries in $M_k$ that are not present in $M_{k-1}$ (that is, we add a new random row and column, with equal entries, to the matrix considered so far\footnote{This type of exposure is standard in the study of symmetric random matrices (see for example \cite{CTV06}).}). Since there is only one $k\times k$ submatrix in $M_k$ (namely $M_k$ itself), in our setting we also need to consider the permanents of (substantially) smaller submatrices of $M_k$.
This more intricate strategy introduces significant challenges. Most notably, the permanents of the submatrices of $M_k$ are described by \emph{quadratic} polynomials in the new matrix entries, where the coefficients depend on the permanents of certain submatrices of $M_{k-1}$ (this is in contrast to Tao and Vu's setting, where the permanents are described by linear polynomials in the entries of the new row). This necessitates the use of some more sophisticated probabilistic tools. Furthermore, there can be certain types of cancellations within these quadratic polynomials, which are not possible for the linear polynomials in the Tao--Vu setting. For example, even if all submatrices of $M_{k-1}$ have non-zero permanent, it can happen that the polynomial describing the permanent of some submatrix of $M_k$ has only very few nonzero coefficients. Handling these types of cancellations requires key new ideas.
\textbf{Organization of the rest of the paper.} \cref{lem:grow-single-minor} will be proved in \cref{sec:grow-single-minor}, and \cref{lem:endgame-step} will be proved in \cref{sec:endgame-step}. As preparation, in Section 4 we collect some probabilistic tools that we will use in the proofs, and in Section 5 we collect some lemmas that can be obtained by studying permanent expansion formulas.
\section{Probabilistic tools}\label{sec:tools}
This section collects some theorems and simple facts that will be needed for proving \cref{lem:grow-single-minor,lem:endgame-step}. We start with some basic anti-concentration estimates for linear forms. The first of these is the famous Erd\H os--Littlewood--Offord inequality (see for example \cite[Corollary~7.8]{TV10}).
\begin{thm}
\label{lem:LO}Let $t\ge 1$ and $r>0$ be real numbers, and let $f$ be a linear
polynomial in $n$ variables, in which at least $m$ degree-$1$ coefficients have absolute value at least $r$. Then for uniformly random $\boldsymbol{\xi}\in\{ -1,1\} ^{n}$ we have
\[
\Pr\left(|f(\boldsymbol{\xi})|\le t\cdot r\right)\leq (\lceil t\rceil +1)\cdot \binom{m}{\lfloor m/2\rfloor}\cdot 2^{-m}\leq \frac {3t}{\sqrt{m}}.
\]
\end{thm}
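For example, if $f(\boldsymbol{\xi})=\xi_{1}+\dots+\xi_{m}$ with $m$ even and $r=t=1$, then $|f(\boldsymbol{\xi})|\le 1$ holds precisely when $f(\boldsymbol{\xi})=0$, which has probability $\binom{m}{m/2}2^{-m}=\Theta(1/\sqrt{m})$; so the $1/\sqrt{m}$ order in \cref{lem:LO} cannot be improved in general.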
We will also need the following very easy fact.
\begin{fact}
\label{fact:non-degenerate-linear}Let $f$ be a linear polynomial in
$n$ variables, which has at least one coefficient
with absolute value at least $r$. Then for uniformly random $\boldsymbol{\xi}\in\{ -1,1\} ^{n}$
we have
\[
\Pr\left(|f(\boldsymbol{\xi})|<r\right)\le\frac12.
\]
\end{fact}
\begin{proof}
First, suppose that the constant coefficient of $f$ has absolute value at least $r$, and suppose without loss of generality that $f(\boldsymbol{\xi})=a_1\xi_1+\dots+a_n\xi_n+c$ with $c\geq r$. Then, by symmetry we have that $\Pr(a_1\xi_1+\dots+a_n\xi_n< 0)\leq 1/2$ and therefore $\Pr(f(\boldsymbol{\xi})<r)\leq 1/2$.
Otherwise, for some $i\in \{1,\dots,n\}$, the coefficient of $\xi_i$ in $f(\boldsymbol{\xi})$ has absolute value at least $r$, and suppose without loss of generality that $i=1$. Condition on any outcomes of the variables $\xi_2,\dots,\xi_{n}$, and observe that then we can have $|f(\boldsymbol{\xi})|<r$ for at most one of the two possible outcomes of $\xi_1$.
\end{proof}
We will also need counterparts of both the above statements for quadratic polynomials. The quadratic counterpart of \cref{fact:non-degenerate-linear} is again easy to prove.
\begin{fact}
\label{fact:non-degenerate}Let $f$ be a quadratic polynomial in
$n$ variables, which has at least one multilinear degree-2 coefficient
with absolute value at least $r$. Then for uniformly random $\boldsymbol{\xi}\in\{ -1,1\} ^{n}$
we have
\[
\Pr(|f(\boldsymbol{\xi})|<r)\le\frac34.
\]
\end{fact}
\begin{proof}
We may assume that $f$ is multilinear (every term of the form $\xi_i^2$ can be replaced by the constant $1$ without changing the behaviour of $f(\boldsymbol \xi)$).
Suppose without loss of generality that the coefficient $a_{12}$ of $\xi_{1}\xi_{2}$ satisfies $\vert a_{12}\vert \geq r$, and write $f(\xi_{1},\dots,\xi_{n})=\xi_{1}\cdot (a_{12} \xi_{2}+g(\xi_{3},\dots,\xi_{n}))+h(\xi_{2},\dots,\xi_{n})$.
Conditioning on any outcomes of $\xi_{3},\dots,\xi_{n}$, with probability
at least $1/2$ we have $|a_{12}\xi_{2}+g(\xi_{3},\dots,\xi_{n})|\ge r$.
Then, conditioning on such an outcome of $\xi_{2}$, we have $\vert f(\boldsymbol{\xi})\vert \ge r$
with probability at least $1/2$.
\end{proof}
It is more delicate to generalise the Erd\H os--Littlewood--Offord inequality to quadratic polynomials. For a multilinear quadratic polynomial $f$ in the variables $x_1,\dots,x_n$ and for a real number $r> 0$, let $G^{(r)}(f)$
be the graph with vertex set $\{1,\dots,n\}$ having an edge $ij$ whenever the coefficient of $x_ix_j$ in $f$ has absolute value at least $r$. Let $\nu(G)$ be the matching number\footnote{The matching number of a graph $G$ is the largest number $\nu$ such that one can find $\nu$ disjoint edges in $G$.} of a graph $G$. The following is a special case of a theorem proved by Meka, Nguyen and Vu~\cite[Theorem 1.6]{MNV16}.
\begin{thm}
\label{lem:MNV}Let $r>0$, let $f$ be a multilinear quadratic polynomial in $n$ variables, and let $\nu=\nu(G^{(r)}(f))\geq 3$. Then for uniformly random $\boldsymbol{\xi}\in\{ -1,1\} ^{n}$
we have
\[
\Pr(|f(\boldsymbol{\xi})|\le r)\le \frac{(\log \nu)^C}{\nu^{1/2}},
\]
where $C$ is an absolute constant.
\end{thm}
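To illustrate the statement, take $f(\boldsymbol{\xi})=\sum_{1\le i<j\le n}\xi_{i}\xi_{j}$ and $r=1$: every multilinear degree-2 coefficient equals $1$, so $G^{(1)}(f)$ is the complete graph on $\{1,\dots,n\}$ and $\nu=\lfloor n/2\rfloor$, and for $n\ge 6$ \cref{lem:MNV} bounds $\Pr(|f(\boldsymbol{\xi})|\le 1)$ by $(\log\nu)^{C}\nu^{-1/2}$. Since $f(\boldsymbol{\xi})=\tfrac{1}{2}\big((\xi_{1}+\dots+\xi_{n})^{2}-n\big)$, this is consistent with the square-root-scale anti-concentration of the linear sum $\xi_{1}+\dots+\xi_{n}$.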
The following concentration inequality is a special case of the Azuma--Hoeffding martingale concentration inequality, and is sometimes known as McDiarmid's inequality (see for example \cite[Lemma~1.34]{TV10}).
\begin{lem}
\label{lem:AH}Let $c>0$ and let $X$ be a random variable defined
in terms of independent random variables $\xi_{1},\dots,\xi_{n}$,
having the property that varying any individual $\xi_{i}$ affects
the value of $X$ by at most $c$. Then for any $t\geq 0$ we have
\[
\Pr\left(|X-\E X|\ge t\right)\le 2e^{-t^{2}/(2nc^{2})}.
\]
\end{lem}
The next inequality is a one-sided version of the Azuma--Hoeffding inequality for supermartingales (see \cite[Lemma~2.3]{TV09}).
\begin{lem}
\label{lem:AH2}Let $c>0$. In a probability space, let $Z_1,\dots,Z_n$ be a sequence of random objects, and let $W_1,\dots,W_n$ be a sequence of random variables, such that for each $k$, all of $Z_1,\dots,Z_{k},W_1,\dots,W_k$ are fully determined by $Z_k$, and such that $|W_{k+1}-W_{k}|\le c$ for all $k=1,\dots,n-1$. Suppose that the supermartingale property $\E[W_{k+1}|Z_k]\le W_k$ is satisfied for $k=1,\dots,n-1$. Then for any $t>0$ we have
\[
\Pr\left(W_n-W_1\ge t\right)\le e^{-t^{2}/(2nc^{2})}.
\]
\end{lem}
Recall that for $0<p<1$, a Bernoulli random variable $\chi\sim \Ber(p)$ is a random variable taking values $0$ and $1$ with $\Pr(\chi=1)=p$ and $\Pr(\chi=0)=1-p$. The following lemma is a version of the Chernoff concentration bound for sums of Bernoulli random variables (see for example \cite[Theorem~A.1.4]{AS}).
\begin{lem}
\label{lem:Chernoff}Let $\chi_1,\dots,\chi_m$ be independent Bernoulli random variables, where for each $i=1,\dots, m$ we have $\chi_i\sim \Ber(p_i)$ for some $0<p_i<1$. Then for any $t>0$ the sum $X=\chi_1+\dots+\chi_m$ satisfies
\[
\Pr\left(X-\E X> t\right)< e^{-2t^{2}/m}\quad \text{and}\quad \Pr\left(X-\E X< -t\right)< e^{-2t^{2}/m}.
\]
\end{lem}
Sometimes we will encounter random variables that \emph{stochastically dominate} a sum of Bernoulli random variables (we say that a random variable $X$ stochastically dominates another random variable $Y$ if there is a coupling of $X$ and $Y$ such that we always have $X\ge Y$). For example, consider a random process with $n$ steps where each step satisfies a certain property with probability at least $1/2$, even when conditioning on any outcome of the previous steps. Then the number $X$ of steps with this property stochastically dominates a sum of $n$ independent $\Ber(1/2)$ random variables. Denoting this sum by $Y$, we therefore have $\Pr\left(X-(n/2)< -t\right)\leq \Pr\left(Y-(n/2)< -t\right)< e^{-2t^{2}/n}$ for any $t>0$ by \cref{lem:Chernoff}. Hence we can use \cref{lem:Chernoff} to show that a random variable is very likely reasonably large if it stochastically dominates a sum of Bernoulli random variables.
Finally, the following lemma is an easy consequence of Markov's inequality (see, for example, \cite[Lemma~2.1]{TV09}).
\begin{lem}
\label{lem:Markov}Let $1>p>q>0$, and let $E_1,\dots,E_m$ be events (not necessarily independent), each of which occurs with probability at least $p$. Then the probability that at least $qm$ of the events $E_1,\dots,E_m$ occur simultaneously is at least $(p-q)/(1-q)$.
\end{lem}
\section{Permanent expansion formulas}
Just as for the determinant, it is possible to expand the permanent of a matrix in terms of permanents of submatrices.
Below we record two such expansions, which we will use in the proofs of \cref{lem:grow-single-minor,lem:endgame-step}.
\begin{fact}
\label{fact:per-expansion}Let $M$ be an $n\times n$ matrix. Add
a new row $(x_{1},\dots,x_{n})$ to obtain an $(n+1)\times n$
matrix $M'$. Then for any subsets $A,B\subseteq\{ 1,\dots,n\} $
with $|B|=|A|+1$, we have
\[
\per M'[A\cup\{ n+1\} ,B]=\sum_{i\in B}x_{i}\per M[A,B\setminus\{ i\} ].
\]
\end{fact}
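For instance, if $n=2$, $A=\{1\}$ and $B=\{1,2\}$, writing $m_{ij}$ for the entries of $M$, the formula reads
\[
\per M'[\{1,3\},\{1,2\}]=x_{1}\per M[\{1\},\{2\}]+x_{2}\per M[\{1\},\{1\}]=x_{1}m_{12}+x_{2}m_{11},
\]
which agrees with expanding $\per\begin{pmatrix}m_{11}&m_{12}\\x_{1}&x_{2}\end{pmatrix}=m_{11}x_{2}+m_{12}x_{1}$ directly.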
For a matrix $M$, let $M^{(i,j)}$ be the submatrix of
$M$ obtained by removing row $i$ and column $j$.
\begin{fact}
\label{fact:per-double-expansion}Let $M$ be an $n\times n$ matrix.
Add a new row $(x_{1},\dots,x_{n},z)$ and a new column
$(y_{1},\dots,y_{n},z)$ to obtain an $(n+1)\times(n+1)$
matrix $M'$. Then for any subsets $A,B\subseteq\{ 1,\dots,n\} $ with $|A|=|B|$, we have
\[
\per M'[A\cup\{ n+1\} ,B\cup\{ n+1\} ]=z\per M[A,B]+\sum_{i\in A,j\in B}x_{j}y_{i}\per M[A,B]^{(i,j)}.
\]
\end{fact}
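To illustrate, for $n=2$ and $A=B=\{1,2\}$ the formula states that
\[
\per\begin{pmatrix}m_{11}&m_{12}&y_{1}\\m_{21}&m_{22}&y_{2}\\x_{1}&x_{2}&z\end{pmatrix}
=z\per M+x_{1}y_{1}m_{22}+x_{2}y_{1}m_{21}+x_{1}y_{2}m_{12}+x_{2}y_{2}m_{11},
\]
where $m_{ij}$ are the entries of $M$; this can be checked directly by expanding along the last row.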
We will use \cref{fact:per-expansion} in combination with the linear anti-concentration inequalities in \cref{fact:non-degenerate-linear} and \cref{lem:LO}, and we will use \cref{fact:per-double-expansion} in combination with the quadratic anti-concentration inequalities in \cref{fact:non-degenerate} and \cref{lem:MNV}.
Observe in particular that the formula in \cref{fact:per-double-expansion} gives an expression for
the permanent of a $(k+1)\times(k+1)$ matrix
in terms of permanents of $(k-1)\times(k-1)$
submatrices (and one $k\times k$ submatrix). This means that, for example, when we add a new row and
column to a matrix, the size of the largest submatrix with nonzero permanent can
increase by two. This observation will be crucial for the proof of \cref{lem:endgame-step}.
In the proof of \cref{lem:endgame-step}, we will apply \cref{fact:per-double-expansion} with the symmetric matrices $M=M_{n}$
and $M'=M_{n+1}$. That is to say, $(x_{1},\dots,x_{n},z)=(y_{1},\dots,y_{n},z)$, so the formula in \cref{fact:per-double-expansion} can be interpreted as a quadratic polynomial in $x_{1},\dots,x_{n}$ (after conditioning on the value of $z$). In order to apply \cref{fact:non-degenerate} to this polynomial, we need this polynomial to have a multilinear degree-2 coefficient with large absolute value. For this, it suffices that $\per M[A,B]^{(i,j)}+\per M[A,B]^{(j,i)}$ has large absolute value for some $i\ne j$ (with $i,j\in A\cap B$). The following lemma will be useful for ensuring this condition.
\begin{lem}
\label{lem:per-no-cancel}Let $M$ be an $n\times n$ matrix and let $A,B\su \{1,\dots,n\}$ be subsets with $|A|=|B|$ such that $M[A,B]$
is $\lambda$-heavy. Suppose we are given an element $a\in B\setminus A$ and distinct elements $b_{1},b_{2}\in A\setminus B$. Then there are distinct $i,j\in\{ a,b_{1},b_{2}\} $
such that
\[
\left|\per M[A',B']^{(i,j)}+\per M[A',B']^{(j,i)}\right|\ge\lambda/2,
\]
where $A'=A\cup\{ a\} $ and $B'=(B\setminus\{ a\})\cup\{ i,j\} $.
\end{lem}
\begin{proof}
Suppose without loss of generality that $\per M[A,B]\ge\lambda$.
If we have
\[\per M[(A\setminus\{ b_{s}\} )\cup\{ a\} ,(B\setminus\{ a\} )\cup\{ b_{s}\} ]\ge-\lambda/2\]
for some $s\in\{ 1,2\}$, then we can take $i=b_{s}$ and
$j=a$. Indeed, then we have $A'=A\cup\{a\}$ and $B'=B\cup \{b_s\}$, and obtain $\per M[A',B']^{(i,j)}+\per M[A',B']^{(j,i)}\geq (-\lambda/2)+\lambda=\lambda/2$.
Otherwise, if there is no such $s\in\{ 1,2\}$, we can take $i=b_{1}$ and $j=b_{2}$. Then we have $A'=A\cup\{a\}$ and $B'=(B\setminus\{ a\})\cup \{b_1,b_2\}$, and obtain $\per M[A',B']^{(i,j)}+\per M[A',B']^{(j,i)}< (-\lambda/2)+(-\lambda/2)=-\lambda$.
\end{proof}
We end this section with two simple lemmas that illustrate how to apply \cref{fact:per-expansion,fact:per-double-expansion,lem:per-no-cancel} to ``grow'' heavy minors in a random symmetric matrix. Recall that $M_n$ is a random symmetric matrix, and that $M_{n-1}$ contains the first $n-1$ rows and columns of $M_n$.
\begin{lem}
\label{lem:simple-augment}Consider $A,B\su \{1,\dots,n-1\}$ with $\vert A\vert=\vert B\vert$, and fix any nonempty $I\subseteq\{ 1,\dots,n-1\} \setminus B$. Consider any outcome $M$ of $M_{n-1}$ such that $M[A,B]$ is $\lambda$-heavy, for some $\lambda>0$. Then
\[\Pr\left(\text{$M_{n}[A\cup\{ n\}, B\cup\{i\}]$ is $\lambda$-heavy for some $i\in I$}\,\big\vert \,M_{n-1}=M\right)\ge 1-2^{-|I|}.\]
\end{lem}
\begin{proof}
We condition on $M_{n-1}=M$. Let $x_{1},\dots,x_{n}$ be the entries in the last row of $M_{n}$.
Let us also condition on any outcome of the variables $x_{b}$ for $b\in B$. Now, by \cref{fact:per-expansion}, for each $i\in I$ we have
\[\per M_{n}[A\cup\{ n\} ,B\cup\{i\}] =x_i\per M_{n-1}[A ,B]+\sum_{b\in B}x_b\per M_{n-1}[A ,(B\cup\{i\})\setminus\{b\}].\]
Since $\vert \per M_{n-1}[A ,B]\vert\geq \lambda$, each $i\in I$ satisfies the desired condition $\vert\per M_{n}[A\cup\{ n\} ,B\cup\{i\}]\vert \geq \lambda$ with probability at least $1/2$, and (by our conditioning on the variables $x_{b}$ for $b\in B$) this happens independently for all $i\in I$.
\end{proof}
\begin{lem}
\label{lem:corner-might-work}Consider $A,B\su \{1,\dots,n-1\}$ with $\vert A\vert=\vert B\vert$, and consider an outcome $M$ of $M_{n-1}$ such that $M[A,B]$ is $\lambda$-heavy, for some $\lambda>0$. Then for any $a\in B\setminus A$ and any distinct $b_{1},b_{2}\in A\setminus B$,
we can choose distinct $i,j\in\{ a,b_{1},b_{2}\}$ such that
\[
\Pr\left(M_{n}[A\cup\{ a,n\} ,(B\setminus\{ a\} )\cup\{ i,j,n\}]\text{ is }(\lambda/2)\text{-heavy}\,\big\vert\,M_{n-1}=M\right)\ge\frac14.
\]
\end{lem}
\begin{proof}Let us condition on $M_{n-1}=M$. By \cref{lem:per-no-cancel}, we can choose distinct $i,j\in\{ a,b_{1},b_{2}\}$ such that
\begin{equation}\label{eq-proof-corner-might-work}
\left|\per M_{n-1}[A',B']^{(i,j)}+\per M_{n-1}[A',B']^{(j,i)}\right|\ge\lambda/2,
\end{equation}
where $A'=A\cup\{ a\} $ and $B'=(B\setminus\{ a\} )\cup\{ i,j\}$. Note that $\{i,j\}\su \{ a,b_{1},b_{2}\}\su A'$ and that clearly $\{ i,j\}\su B'$.
Now, $\per M_{n}[A\cup\{ a,n\} ,(B\setminus\{ a\} )\cup\{ i,j,n\}]=\per M_n[A'\cup\{n\}, B'\cup\{n\}]$, so it suffices to show that with probability at least $1/4$ we have $\vert \per M_n[A'\cup\{n\}, B'\cup\{n\}]\vert\geq \lambda/2$.
Let $(x_1,\dots,x_{n-1},z)$ be the random entries of the last row (and the last column) of $M_n$. By \cref{fact:per-double-expansion}, we have
\[\per M_n[A'\cup\{n\}, B'\cup\{n\}]=z\per M_{n-1}[A',B']+\sum_{k\in A', \ell\in B'}x_kx_{\ell}\per M_{n-1}[A',B']^{(k,\ell)}.\]
Note that this is a quadratic polynomial in the variables $x_1,\dots,x_{n-1},z$, and the coefficient of $x_ix_j$ is precisely $\per M_{n-1}[A',B']^{(i,j)}+\per M_{n-1}[A',B']^{(j,i)}$. Recalling $i\neq j$ and \cref{eq-proof-corner-might-work}, \cref{fact:non-degenerate} now implies that $\Pr(\vert \per M_n[A'\cup\{n\}, B'\cup\{n\}]\vert< \lambda/2)\leq 3/4$. This finishes the proof of \cref{lem:corner-might-work}.
\end{proof}
\section{Proof of \texorpdfstring{\cref{lem:grow-single-minor}}{Lemma~\ref{lem:grow-single-minor}}: growing a single heavy submatrix}\label{sec:grow-single-minor}
In this section we prove \cref{lem:grow-single-minor}. Recall that $L=\lfloor (\log n)/10\rfloor$ and that $X,Y\su \{1,\dots,n\}$ are disjoint subsets with $\vert X\vert=L$ and $\vert Y\vert=3L$. By reordering the rows and columns, we can assume without loss of generality that $X=\{ 1,\dots,L\}$ and $Y=\{ n-3L+1,\dots,n\}$.
\cref{lem:grow-single-minor} will be a consequence of the following two lemmas. The first of these lemmas is itself a weaker version of \cref{lem:grow-single-minor} (it also produces a heavy submatrix, but with less control over where it lies, and not with dimensions as close to $n\times n$).
\begin{lem}
\label{lem:grow-single-minor-weak}For any fixed $0<\delta<1/16$, the following holds for all integers $n\in \NN$ that are sufficiently large with respect to $\delta$. Let $\lambda=n^{(1/2-8\delta)n}$ and suppose that $R\in \NN$ satisfies $\delta n\le R\le 2\delta n$. Then with probability at least $1-e^{-\delta^2 n}$ there is a subset $B\subseteq\{ 1,\dots,n\} $ of size $n-R$ such that $M_{n}[\{ R+1,\dots,n\} ,B]$ is $\lambda$-heavy.
\end{lem}
To prove \cref{lem:grow-single-minor-weak} we adapt an argument in Tao and Vu's work \cite{TV09}, simultaneously tracking the propagation and growth of many heavy submatrices.
Our second lemma takes a heavy submatrix of a certain form with dimensions reasonably close to $n\times n$, and produces a slightly less heavy submatrix with dimensions much closer to $n\times n$, whose row and column sets satisfy the desired conditions in \cref{lem:grow-single-minor} (recall that we are assuming that $X=\{ 1,\dots,L\}$ and $Y=\{ n-3L+1,\dots,n\}$). To be more precise, we actually start with a submatrix contained inside the $n'\times n'$ matrix $M_{n'}$ for some $n'$ slightly smaller than $n$, and, conditioning on the outcome of $M_{n'}$, we only use the randomness from the additional rows and columns exposed when extending $M_{n'}$ to $M_n$.
\begin{lem}
\label{lem:grow-single-minor-end}There is an absolute constant $c>0$ such that the following holds for all sufficiently large integers $n\in \NN$. Consider $\lambda>0$ and integers $L$ and $R$ satisfying $(\log n)/20<L<L^2<R< (n-5L^2-3L)/9$, and let $n'=n-8R-5L^2-3L$ and $\lambda'=\lambda/2^{R-L}$. Condition
on an outcome of $M_{n'}$ for which there is a subset $B\subseteq\{ 1,\dots,n'\} $
of size $n'-R$ such that $M_{n'}[\{ R+1,\dots,n'\} ,B]$
is $\lambda$-heavy. Then with probability at least $1-(1/8)\cdot n^{-c}$,
there is a set $B'$ of size $n-L$ with $\{ 1,\dots,n-3L\} \subseteq B'\subseteq \{ 1,\dots,n\}$, such
that $M_{n}[\{ L+1,\dots,n\} ,B']$ is $\lambda'$-heavy.
\end{lem}
It is now easy to deduce \cref{lem:grow-single-minor} from the two lemmas above.
\begin{proof}[Proof of \cref{lem:grow-single-minor}]
Let $c>0$ be the constant in \cref{lem:grow-single-minor-end}. Recall that we are considering some $\eps>0$ and that we are assuming that $n$ is sufficiently large with respect to $\eps$. Then in particular $L=\lfloor (\log n)/10\rfloor> (\log n)/20$. As mentioned at the beginning of this section, we may assume that $X=\{ 1,\dots,L\}$ and $Y=\{ n-3L+1,\dots,n\}$.
Let $\delta=\eps/32$, and $R=\lceil \delta n\rceil $, and note that by our assumption that $n$ is large with respect to $\eps$ we have $L<L^2<R<(n-5L^2-3L)/9$. Now let $n'=n-8R-5L^2-3L\geq (1-9\delta)n$ and $\lambda=(n')^{(1/2-8\delta)n'}\geq n^{(1/2-15\delta)n}$ (again recalling that we assume $n$ to be large with respect to $\eps$). Note that then $\delta n'\leq R\leq 2\delta n'$.
By \cref{lem:grow-single-minor-weak}, with probability at least $1-e^{-\delta^2 n'}\geq 1-(1/8)\cdot n^{-c}$ (for $n$ sufficiently large with respect to $\eps$) there is a subset $B\subseteq\{ 1,\dots,n'\} $ of size $n'-R$ such that $M_{n'}[\{ R+1,\dots,n'\} ,B]$ is $\lambda$-heavy. Then by \cref{lem:grow-single-minor-end} and our choice of $c$, with probability at least $1-(1/8)\cdot n^{-c}$ there is a set $B'$ of size $n-L$ with $\{ 1,\dots,n-3L\} \subseteq B'\subseteq \{ 1,\dots,n\}$ such that $M_{n}[\{ L+1,\dots,n\} ,B']$ is $\lambda'$-heavy, where $\lambda'=\lambda/2^{R-L}\ge n^{(1/2-15\delta)n}/2^{\delta n}\ge n^{(1/2-16\delta)n}=n^{(1-\eps)n/2}$. Thus, the total probability that such a set $B'$ exists is at least $1-(1/4)\cdot n^{-c}$, as desired.
\end{proof}
\cref{lem:grow-single-minor-weak} will be proved in \cref{sec:grow-single-minor-weak} and \cref{lem:grow-single-minor-end} will be proved in \cref{sec:grow-single-minor-end}.
\subsection{Proof of \texorpdfstring{\cref{lem:grow-single-minor-weak}}{Lemma~\ref{lem:grow-single-minor-weak}}: propagation of heavy submatrices}\label{sec:grow-single-minor-weak}
In this subsection we prove \cref{lem:grow-single-minor-weak}, adapting an argument from \cite{TV09} to simultaneously track the propagation and growth of heavy submatrices as we expose more rows and columns of our random matrix. Roughly speaking, at each step we track submatrices of a certain form (with dimension growing by 1 at each step). At a given step, if we are guaranteed that many of our submatrices under consideration are heavy, then it is extremely likely that at the next step there will also be reasonably many heavy submatrices of the desired form. Moreover, depending on the structure of our random matrix at this step, one of the following two statements holds, and each of them is likely to improve our situation: either we have a good chance of dramatically increasing the number of heavy submatrices in the next step, or we have a (very) good chance of obtaining many submatrices in the next step which are much heavier than before. \cref{lem:grow-large-minors} below makes this precise.
After having proved \cref{lem:grow-large-minors}, we will deduce \cref{lem:grow-single-minor-weak} by iteratively applying \cref{lem:grow-large-minors}, adding a new row and a new column to our random matrix at every step. Most likely, there will be many steps where our situation improves in one of the two ways described above. However, there is an upper bound for the number of heavy submatrices that we can have at the end of the process (simply by counting the total number of submatrices of the form that we consider). Hence the first type of improvement, which significantly increases the number of heavy submatrices, cannot occur too many times. So, among the two ways we can ``improve the situation'', the second type of improvement must happen most of the time. This means that during our process we get submatrices that are more and more heavy, and at the end we find a reasonably large number of very heavy submatrices in our final matrix $M_n$ (in fact, we only need one such very heavy submatrix).
\begin{lem}
\label{lem:grow-large-minors}Fix $R, n\in \NN$. For $k\in \NN$ and real numbers $N>0$ and $\lambda>0$, let $E(k,N,\lambda)$ denote the event that there are at least $N$ different subsets $B\subseteq\{ 1,\dots,k+R\} $ with $\vert B\vert=k$ such that the matrix $M_{k+R}[\{ R+1,\dots,k+R\} ,B]$ is $\lambda$-heavy.
Then for any $k\in \NN$ with $k+R\le n$, and any real numbers $0<\delta<1/2$ as well as $K>1$, $\lambda>0$ and $N>0$, there is a partition $E(k,N,\lambda)=E'(k,N,\lambda)\cup E''(k,N,\lambda)$ of the event $E(k,N,\lambda)$ such that the following holds. Let $N^{+}=RN/(8K)$, $N^{-}=RN/(8n)$ and $\lambda^{+}=K^{1/2-\delta}\lambda$, and let $M,M',M''$ be any possible outcomes of $M_{k+R}$ satisfying $E(k,N,\lambda)$, $E'(k,N,\lambda)$ and $E''(k,N,\lambda)$ respectively. Then
\begin{align}
\Pr\left(E(k+1,N^{-},\lambda)\cond M_{k+R}=M\right) &\ge 1-2e^{-R/8}.\label{eq:survive}\\
\Pr\left(E(k+1,N^{+},\lambda)\cond M_{k+R}=M'\right) &\ge1/3.\label{eq:breed}\\
\Pr\left(E(k+1,N^{-},\lambda^{+})\cond M_{k+R}=M''\right) &\ge 1-4K^{-\delta}.
\label{eq:grow}
\end{align}
\end{lem}
\begin{proof}
We may assume without loss of generality that $N>0$ is an integer (indeed, otherwise we can replace $N$ by $\ceil{N}$, noting that the statement for $\ceil{N}$ implies the statement for $N$).
Let $x_{1},\dots,x_{k+R+1}$ be the entries in the last row of $M_{k+R+1}$.
For subsets $B\subseteq B'\subseteq\{1,\dots,k+R\}$ with sizes $k$ and $k+1$ respectively, we say that $B$ is a \emph{parent} of $B'$ and that $B'$ is a \emph{child} of $B$. For a subset $B'\subseteq\{1,\dots,k+R\}$ of size $k+1$, note that by \cref{fact:per-expansion} we have \begin{equation}\label{eq-expansion-parents}
\per M_{k+R+1}[\{ R+1,\dots,k+R+1\} ,B']=\sum_B \per M_{k+R}[\{ R+1,\dots,k+R\} ,B]\cdot x_{B'\setminus B},
\end{equation}
where the sum is over all parents $B$ of $B'$ (here, with slight abuse of notation we write $x_{\{i\}}$ instead of $x_i$ for $i\in \{1,\dots,k+R\}$).
For each outcome of $M_{k+R}$ such that $E(k,N,\lambda)$ holds, let us fix subsets $B_{1},\dots,B_{N}$ as in the definition of $E(k,N,\lambda)$. Note that we always have $\vert \per M_{k+R}[\{ R+1,\dots,k+R\} ,B_i]\vert\geq \lambda$ for $i=1,\dots,N$.
Furthermore, for each outcome of $M_{k+R}$ satisfying $E(k,N,\lambda)$, let $S_{q}$ denote the collection of subsets
of $\{ 1,\dots,k+R\} $ of size $k+1$ which have exactly $q$ parents
among the sets $B_{1},\dots,B_{N}$, and let $S=S_{1}\cup\dots\cup S_{n}$ be
the collection of all such subsets which have at least one parent among $B_{1},\dots,B_{N}$. Furthermore, let $S_{\geq K}=S_{\lceil K\rceil }\cup\dots\cup S_{n}$ be the collection of all such subsets which have at least $K$ parents among $B_{1},\dots,B_{N}$. We say that $B'\in S$ is \emph{$\lambda'$}-heavy for some $\lambda'>0$ if $M_{k+R+1}[\{ R+1,\dots,k+R+1\} ,B']$
is $\lambda'$-heavy.
Since each of the sets $B_{1},\dots,B_N$ is a parent of exactly $R$ different sets $B'\in S$,
a double-counting argument shows that we have
\[
\sum_{q=1}^{n}q|S_{q}|= RN
\]
for each outcome of $M_{k+R}$ such that $E(k,N,\lambda)$ holds.
Now, let $E'(k,N,\lambda)\subseteq E(k, N,\lambda)$ be the event that $\sum_{q< K}q|S_{q}|\ge RN/2$, and condition on any outcome $M'$ of $M_{k+R}$ satisfying $E'(k,N,\lambda)$. Note that we then have $|S|\ge \sum_{q< K}|S_{q}|> RN/(2K)$. Furthermore note that for each $B'\in S$, at least one of the terms $\per M_{k+R}[\{ R+1,\dots,k+R\} ,B]$ on the right-hand side of \cref{eq-expansion-parents} has absolute value at least $\lambda$ (since $B'$ has at least one parent among $B_1,\dots,B_N$). Hence, by \cref{fact:non-degenerate-linear}, each $B'\in S$ is $\lambda$-heavy with probability at least $1/2$, and \cref{eq:breed} follows from Markov's inequality (to be precise, it follows from \cref{lem:Markov} applied with $p=1/2$ and $q=1/4$).
On the other hand, let $E''(k,N,\lambda)=E(k, N,\lambda)\setminus E'(k,N,\lambda)$ be the complementary event to $E'(k,N,\lambda)$ within $E(k, N,\lambda)$, i.e.\ the event that $\sum_{q\geq K}q|S_{q}|> RN/2$. Condition on any outcome $M''$ of $M_{k+R}$ satisfying $E''(k,N,\lambda)$, and note that then $|S_{\geq K}|\ge RN/(2n)$. Also note that for each $B'\in S_{\geq K}$, at least $K$ of the terms $\per M_{k+R}[\{ R+1,\dots,k+R\} ,B]$ on the right-hand side of \cref{eq-expansion-parents} have absolute value at least $\lambda$. Hence, by the Erd\H os--Littlewood--Offord inequality (specifically, \cref{lem:LO}, applied with $m=K$, $r=\lambda$ and $t=K^{1/2-\delta}$), each $B'\in S_{\geq K}$ is $\lambda^{+}$-heavy with probability at least $1-3K^{-\delta}$. Then, \cref{eq:grow} follows from Markov's inequality (specifically, we apply \cref{lem:Markov} with $p=1-3K^{-\delta}$ and $q=1/4$).
Finally, to prove \cref{eq:survive}, let us condition on any outcome $M$ of $M_{k+R}$ satisfying $E(k,N,\lambda)$. Say that for $i=1,\dots,N$, the set $B_{i}$ is \emph{good} if at least $R/4$ of its $R$ children
$B'\in S$ are $\lambda$-heavy. We claim that each $B_i$ is good with probability at least $1-e^{-R/8}$. Indeed, consider some fixed $i\in \{1,\dots,N\}$, and condition on any outcome of
the variables $x_{b}$ for $b\in B_{i}$. Now for each child $B'\in S$ of $B_i$, the sum in \cref{eq-expansion-parents} depends only on the outcome of $x_{B'\setminus B_i}$ (since for all other elements $b\in B'$ the corresponding variable $x_b$ has already been fixed). Since $\vert \per M_{k+R}[\{R+1,\dots,k+R\},B_i]\vert\geq \lambda$, each child $B'\in S$ of $B_i$ is $\lambda$-heavy with probability at least $1/2$, independently for all children $B'\in S$. So, by the Chernoff bound (\cref{lem:Chernoff}), the set $B_{i}$ is indeed good with probability at least $1-e^{-2(R/4)^2/R}=1-e^{-R/8}$, as claimed.
Now, by Markov's inequality (specifically, \cref{lem:Markov}, applied with $p=1-e^{-R/8}$ and $q=1/2$), with probability at least $1-2e^{-R/8}$ at least $N/2$ of the sets $B_1,\dots,B_N$ are good. Whenever this is the case, there are at least $(N/2)\cdot (R/4)/n=RN/(8n)$ different $\lambda$-heavy sets $B'\in S$ (since each such set $B'\in S$ is a child of at most $k+1\leq n$ different sets $B_i$). This proves \cref{eq:survive}.
\end{proof}
Now we deduce \cref{lem:grow-single-minor-weak}.
\begin{proof}[Proof of \cref{lem:grow-single-minor-weak}] As in the lemma statement, let $0<\delta<1/16$ and assume that $n\in \NN$ is sufficiently large with respect to $\delta$ (sufficiently large to satisfy certain inequalities later in the proof). Let $R\in \NN$ be an integer satisfying $\delta n\leq R\leq 2\delta n$, and let $K=n^{1-\delta}$. Furthermore, recall the notation from the statement of \cref{lem:grow-large-minors}. We define random sequences $N_{1},\dots,N_{n-R}$
and $\lambda_{1},\dots,\lambda_{n-R}$ of positive real numbers by an iterative process. Let $N_{1}=\lambda_{1}=1$ and for each $1\leq k\leq n-R-1$ define $N_{k+1}$ and $\lambda_{k+1}$ as follows:
\begin{itemize}
\item[(i)] if $E'(k,N_k,\lambda_{k})$ and $E(k+1,N_{k}^{+},\lambda_{k})$
both hold, then let $N_{k+1}=N_{k}^{+}$ and $\lambda_{k+1}=\lambda_{k}$;
\item[(ii)] if $E''(k,N_k,\lambda_{k})$ and $E(k+1,N_{k}^{-},\lambda_{k}^{+})$
both hold, then let $N_{k+1}=N_{k}^{-}$ and $\lambda_{k+1}=\lambda_{k}^{+}$;
\item[(iii)] if neither (i) nor (ii) holds, but $E(k,N_{k},\lambda_{k})$ and $E(k+1,N_{k}^{-},\lambda_{k})$ both hold, then let $N_{k+1}=N_{k}^{-}$ and $\lambda_{k+1}=\lambda_{k}$;
\item[(iv)] otherwise, abort (and then our sequences are not well-defined).
\end{itemize}
Note that the event $E(1,N_1,\lambda_1)$ always holds. If we do not abort at any point in the above process, then $E(k,N_k,\lambda_k)$ holds for each $k$, and in particular there is a subset $B\subseteq\{1,\dots,n\}$ of size $\vert B\vert=n-R$ such that $M_n[\{R+1,\dots,n\},B]$ is $\lambda_{n-R}$-heavy. Thus, in order for the desired event in \cref{lem:grow-single-minor-weak} to hold, it is sufficient that the process does not abort and that $\lambda_{n-R}\ge n^{(1/2-8\delta)n}$. We will show that this happens with probability at least $1-e^{-\delta^2 n}$.
The main observation is that case (i) cannot occur too many times, simply because it is not possible for $N_k$ to ever be larger than $2^n$. Roughly speaking, it will follow from this observation and \cref{eq:breed} that $E'(k,N_k,\lambda_{k})$ is unlikely to occur too many times. This will then imply that case (ii) is likely to occur many times, meaning that $\lambda_{n-R}$ is large.
\begin{claim}\label{claim:dangerous-case}
Case (i) in the above process occurs for fewer than $\delta n$ different values of $k$.
\end{claim}
\begin{proof}
Note that whenever (i) holds, we have $N_{k+1}/N_k=N_k^+/N_k=R/(8K)\geq \delta n/(8n^{1-\delta})=(\delta/8)\, n^\delta$. On the other hand, whenever (ii) or (iii) holds, we have $N_{k+1}/N_k=N_k^-/N_k=R/(8n)\geq \delta n/(8n)=\delta/8$. Now suppose for the purpose of contradiction that (i) holds for at least $\delta n$ different $k$, and let us define $m=k+1$ for the last such value $k$. Note that then we have
\[N_{m}\geq \big((\delta/8)\, n^\delta\big)^{\delta n}\cdot (\delta/8)^{m-\delta n}= n^{\delta^2 n}\cdot (\delta/8)^m\geq n^{\delta^2 n}\cdot (\delta/8)^n>2^n\]
for sufficiently large $n$. On the other hand, by our choice of $m$, case (i) holds for $k=m-1$, and so in particular the event $E(k+1,N_k^+,\lambda_k)=E(m,N_m,\lambda_m)$ holds. This means that there are at least $N_m>2^n$ different subsets $B\subseteq \{1,\dots,m+R\}\subseteq \{1,\dots,n\}$ satisfying the conditions in the definition of the event $E(m,N_m,\lambda_m)$. But this is clearly a contradiction, since the total number of subsets of $\{1,\dots,n\}$ is only $2^n$.
\end{proof}
The next observation is that if case (ii) occurs many times, then we are done.
\begin{claim}
If we do not abort at any point, and case (ii) occurs for at least $n-12\delta n$ different values of $k$, then $\lambda_{n-R}\ge n^{(1/2-8\delta)n}$.
\end{claim}
\begin{proof}
Whenever (ii) holds, we have $\lambda_{k+1}/\lambda_k=\lambda_k^+/\lambda_k=K^{1/2-\delta}$. On the other hand, whenever (i) or (iii) holds, we have $\lambda_{k+1}=\lambda_k$. So, if case (ii) occurs for at least $n-12\delta n$ values of $k$, then
\[\lambda_{n-R}\geq (K^{1/2-\delta})^{n-12\delta n} \geq n^{(1-\delta)\cdot (1/2-\delta)\cdot (n-12\delta n)}\geq n^{(1/2-2\delta)\cdot (n-12\delta n)}\geq n^{(1/2-8\delta)n}.\tag*{\qedhere}\]
\end{proof}
It now suffices to show that with probability at least $1-e^{-\delta^2 n}$, we do not abort and case (ii) occurs at least $n-12\delta n$ times. To this end, we define an auxiliary random process $W_1,\dots,W_{n-R}$ that evolves in parallel with $N_{1},\dots,N_{n-R}$ and $\lambda_{1},\dots,\lambda_{n-R}$. Namely, let $W_1=0$, and for $1\le k\le n-R-1$ let
\[W_{k+1}=W_k+(1-\delta)-\begin{cases}
3&\text{in case (i),}\\
1&\text{in case (ii),}\\
0&\text{in case (iii) or (iv).}
\end{cases}\]
Furthermore, if case (iv) occurs then let $W_{k+2}=W_{k+3}=\dots=W_{n-R}$ all be equal to the value of $W_{k+1}$ just defined (that is to say, we ``freeze'' the value of $W_k$ after the process aborts).
Note that $W_1,\dots,W_k$ are fully determined by the random matrix $M_{k+R}$ (which also determines its submatrices $M_{R+1},\dots,M_{k+R}$). Moreover, this defines a supermartingale, in the sense that $\E[W_{k+1}|M_{k+R}]\le W_k$ for each $k$ (provided $n$ is sufficiently large). To see this, consider any outcome of $M_{k+R}$ for which we have not yet aborted (meaning in particular that the event $E(k,N_k,\lambda_k)=E'(k,N_k,\lambda_k)\cup E''(k,N_k,\lambda_k)$ holds). If $E'(k,N_k,\lambda_k)$ holds, then $\E[W_{k+1}-W_{k}|M_{k+R}]\le (1-\delta)-(1/3)\cdot 3\le-\delta$ by \cref{eq:breed}. On the other hand, if $E''(k,N_k,\lambda_k)$ holds, then $\E[W_{k+1}-W_{k}|M_{k+R}]\le (1-\delta)-(1-4K^{-\delta})=-\delta+4K^{-\delta}\le 0$ for sufficiently large $n$, by \cref{eq:grow}. In addition, observe that $|W_i-W_{i-1}|\le 3$ for each $1<i\le n-R$.
By \cref{lem:AH2} (with $Z_k=M_{k+R}$ for $k=1,\dots,n-R$, and $c=3$) we have $W_{n-R}\le 5\delta n$ with probability at least $1-e^{-(25/18)\delta^2 n}\ge 1-(1/2)e^{-\delta^2 n}$. Also, by \cref{eq:survive} and the union bound, the probability that we ever abort is bounded by $(n-R)\cdot 2e^{-R/8}\leq n\cdot 2e^{-\delta n/8}\leq (1/2)e^{-\delta^2 n}$. But note that if we never abort, then
\[W_{n-R}=(n-R-1)(1-\delta)-3X_{\text{(i)}}-X_{\text{(ii)}},\]
where $X_{\text{(i)}}$ is the number of times that case (i) occurs, and $X_{\text{(ii)}}$ is the number of times that case (ii) occurs. Recall that $X_{\text{(i)}}\le \delta n$ by \cref{claim:dangerous-case}. Hence, if $W_{n-R}\le 5\delta n$ and the process does not abort, then case (ii) occurs $X_{\text{(ii)}}\ge (n-R-1)(1-\delta)-3\delta n-5\delta n\ge n-12\delta n$ times, which by the second claim above implies that $\lambda_{n-R}\geq n^{(1/2-8\delta)n}$. Thus, we have indeed shown that with probability at least $1-e^{-\delta^2 n}$ the process does not abort and we have $\lambda_{n-R}\geq n^{(1/2-8\delta)n}$.
\end{proof}
\subsection{Proof of \texorpdfstring{\cref{lem:grow-single-minor-end}: ``filling out''}{Lemma~\ref{lem:grow-single-minor-end}: "filling out"} a single heavy submatrix}\label{sec:grow-single-minor-end}
In this subsection we prove \cref{lem:grow-single-minor-end}. It will be a consequence
of the following two lemmas, which (in two slightly different ways) ``grow'' a heavy submatrix by exposing a few additional rows and columns.
\begin{lem}
\label{lem:iterative-cover}Let $1\leq S<n$ and $\lambda>0$, and condition on an outcome of $M_{n}$ for which there is a subset $B\subseteq \{1,\dots,n\}$ of size $n-S$ such that $M_{n}[\{ S+1,\dots,n\} ,B]$ is $\lambda$-heavy.
Then with probability at least $1-3S\cdot 2^{-S}-e^{-S/6}$, there is a set $B'$ of size $n+2S$ with $\{ 1,\dots,n\}\subseteq B'\subseteq \{ 1,\dots,n+3S\}$ such that the matrix $M_{n+3S}[\{ S+1,\dots,n+3S\} ,B']$ is $\lambda$-heavy.
\end{lem}
\begin{lem}
\label{lem:iterative-growth}Let $2\leq T<S<n$ and $\lambda>0$, and condition on an outcome of $M_{n}$ for which there is a set $B$ of size $n-S$ with $\{1,\dots,S\}\subseteq B\subseteq \{1,\dots,n\}$ such that $M_{n}[\{ S+1,\dots,n\} ,B]$ is $\lambda$-heavy. Then with probability at least $1-5S\cdot 2^{-T}-e^{-S/40}$, there is a set $B'$ of size $n+5S-T$ with $\{1,\dots,T\}\subseteq B'\subseteq \{1,\dots,n+5S\}$ such that $M_{n+5S}[\{ T+1,\dots,n+5S\} ,B']$ is $\lambda/2^{S-T}$-heavy.
\end{lem}
Before proving \cref{lem:iterative-cover,lem:iterative-growth}, we deduce \cref{lem:grow-single-minor-end}.
\begin{proof}[Proof of \cref{lem:grow-single-minor-end}]
Recall that $n'=n-8R-5L^2-3L$, and that we are conditioning on an outcome of $M_{n'}$ for which there is a subset $B\subseteq\{ 1,\dots,n'\} $ of size $n'-R$ such that $M_{n'}[\{ R+1,\dots,n'\} ,B]$ is $\lambda$-heavy.
First, by \cref{lem:iterative-cover} (applied with $S=R<n'$), with probability at least $1-3R\cdot 2^{-R}-e^{-R/6}$, there is a set $B_1$ of size $n'+2R$ with $\{ 1,\dots,R\}\subseteq \{ 1,\dots,n'\}\subseteq B_1\subseteq \{ 1,\dots,n'+3R\}$ such that $M_{n'+3R}[\{ R+1,\dots,n'+3R\} ,B_1]$ is $\lambda$-heavy. Let us now condition on such an outcome for $M_{n'+3R}$.
Then, by \cref{lem:iterative-growth} (applied with $T=L^2$ and $S=R<n'+3R$), we obtain that with probability at least $1-5R\cdot 2^{-L^2}-e^{-R/40}$
there is a set $B_2$ of size $n'+8R-L^2$ with $\{1,\dots,L^2\}\subseteq B_2\subseteq \{1,\dots,n'+8R\}$ such that the matrix $M_{n'+8R}[\{ L^2+1,\dots,n'+8R\} ,B_2]$ is $(\lambda/2^{R-L^2})$-heavy. Let us condition on such an outcome for $M_{n'+8R}$.
Applying \cref{lem:iterative-growth} again (this time with $T=L$ and $S=L^2<n'+8R$), we now get that with probability at least $1-5L^2\cdot 2^{-L}-e^{-L^2/40}$, there is a set $B_3$ of size $n'+8R+5L^2-L$ with $\{1,\dots,L\}\subseteq B_3\subseteq \{1,\dots,n'+8R+5L^2\}$ such that $M_{n'+8R+5L^2}[\{ L+1,\dots,n'+8R+5L^2\} ,B_3]$ is $\lambda'$-heavy, where $\lambda'=(\lambda/2^{R-L^2})/2^{L^2-L}=\lambda/2^{R-L}$. Let us condition on such an outcome for $M_{n'+8R+5L^2}$.
Finally, by \cref{lem:iterative-cover} (applied with $S=L<n'+8R+5L^2$), with probability at least $1-3L\cdot 2^{-L}-e^{-L/6}$, there is a set $B'$ of size $n'+8R+5L^2+2L=n-L$ with
\[\{ 1,\dots,n-3L\}=\{ 1,\dots,n'+8R+5L^2\}\subseteq B'\subseteq \{ 1,\dots,n'+8R+5L^2+3L\}=\{ 1,\dots,n\}\]
such that $M_{n}[\{ L+1,\dots,n\} ,B']$ is $\lambda'$-heavy.
The probability that all four steps succeed is at least
\[1-(3R\cdot 2^{-R}+e^{-R/6})-(5R\cdot 2^{-L^2}+e^{-R/40})-(5L^2\cdot 2^{-L}+e^{-L^2/40})-(3L\cdot 2^{-L}+e^{-L/6})\ge 1-(1/8)\cdot n^{-c}\]
for some small constant $c>0$ (recall that $(\log n)/20<L<L^2<R<n$ and that $n$ is sufficiently large).
\end{proof}
We now prove \cref{lem:iterative-cover}.
\begin{proof}[Proof of \cref{lem:iterative-cover}]For any $m\in \NN$ with $n\leq m\leq n+3S$, define the random variable $Q_{m}$ to be the minimum value of $|\{ 1,\dots,n\} \setminus B'|$ among all subsets $B'\subseteq\{1,\dots,m\}$ of size $m-S$ such that $M_{m}[\{ S+1,\dots,m\} ,B']$
is $\lambda$-heavy. If no such subset $B'$ exists, let $Q_{m}=\infty$.
Recall that in \cref{lem:iterative-cover} we are conditioning on an outcome of $M_n$ for which there is a subset $B\subseteq \{1,\dots,n\}$ of size $n-S$ such that $M_{n}[\{ S+1,\dots,n\} ,B]$ is $\lambda$-heavy. This means that $Q_n\leq |\{ 1,\dots,n\} \setminus B|= S$.
For $n<m\le n+3S$, we say that step $m$ is a \emph{failure} if $Q_{m}>Q_{m-1}$. We say that step $m$ is \emph{progress} if $Q_{m}<Q_{m-1}$ or if $Q_{m}=Q_{m-1}=0$
or if $Q_{m-1}=\infty$.
For any $n<m\le n+3S$, when conditioning on any outcome of $M_{m-1}$, we claim that step $m$ is a failure with probability at most $2^{-S}$. Indeed, if $Q_{m-1}<\infty$, let $B'\subseteq\{1,\dots,m-1\}$ be a subset of size $m-1-S$ with $|\{ 1,\dots,n\} \setminus B'|=Q_{m-1}$ such that $M_{m-1}[\{ S+1,\dots,m-1\} ,B']$ is $\lambda$-heavy. By applying \cref{lem:simple-augment} with $I=\{ 1,\dots,m-1\} \setminus B'$, we see that with probability at least $1-2^{-S}$ there exists some $i\in \{ 1,\dots,m-1\} \setminus B'$ such that $M_{m}[\{ S+1,\dots,m\}, B'\cup\{i\}]$ is $\lambda$-heavy (which in particular implies $Q_m\leq |\{ 1,\dots,n\} \setminus (B'\cup\{i\})|\leq Q_{m-1}$). On the other hand, if $Q_{m-1}=\infty$, then step $m$ cannot be a failure. So this indeed shows that in any case (when conditioning on any outcome of $M_{m-1}$), step $m$ is a failure with probability at most $2^{-S}$.
Furthermore, for any $n<m\le n+3S$, when conditioning on any outcome of $M_{m-1}$, we claim that step $m$ is progress with probability at least $1/2$. Indeed, if $Q_{m-1}\notin\{ 0,\infty\}$, let $B'\subseteq\{1,\dots,m-1\}$ be a subset of size $m-1-S$ with $|\{ 1,\dots,n\} \setminus B'|=Q_{m-1}$ such that $M_{m-1}[\{ S+1,\dots,m-1\} ,B']$ is $\lambda$-heavy. We can then apply \cref{lem:simple-augment} with $I=\{ 1,\dots,n\} \setminus B'$, to see that with probability at least $1/2$ there exists some $i\in \{ 1,\dots,n\} \setminus B'$ such that $M_{m}[\{ S+1,\dots,m\}, B'\cup\{i\}]$ is $\lambda$-heavy (which in particular implies $Q_m\leq |\{ 1,\dots,n\} \setminus (B'\cup\{i\})|<Q_{m-1}$). If $Q_{m-1}=\infty$, by definition step $m$ is always progress. If $Q_{m-1}=0$, then step $m$ is progress if and only if it is not failure, and we already showed that it is failure with probability at most $2^{-S}\leq 1/2$. This shows that in any case (when conditioning on any outcome of $M_{m-1}$), step $m$ is progress with probability at least $1/2$.
Hence the number of progress steps among the $3S$ steps $m\in \{n+1,\dots,n+3S\}$ stochastically dominates a sum of $3S$ independent $\Ber(1/2)$ random variables. By the Chernoff bound (\cref{lem:Chernoff}) such a sum is at least $S$ with probability at least $1-e^{-2(S/2)^2/(3S)}=1-e^{-S/6}$. Thus, the number of progress steps is also at least $S$ with probability at least $1-e^{-S/6}$. Furthermore, note that by the union bound, with probability at least $1-3S\cdot 2^{-S}$ none of the $3S$ steps $m\in \{n+1,\dots,n+3S\}$ is a failure.
If there are no failures and at least $S$ progress steps, then we must have $Q_{n+3S}=0$. Hence, with probability at least $1-3S\cdot 2^{-S}-e^{-S/6}$ there exists a subset $B'\subseteq\{1,\dots,n+3S\}$ of size $n+2S$ such that $M_{n+3S}[\{ S+1,\dots,n+3S\} ,B']$ is $\lambda$-heavy and $|\{ 1,\dots,n\} \setminus B'|=0$ (meaning that $\{ 1,\dots,n\}\su B'$).
\end{proof}
The proof of \cref{lem:iterative-growth} follows a similar strategy as the proof of \cref{lem:iterative-cover} above.
\begin{proof}[Proof of \cref{lem:iterative-growth}]
For any $m\in \NN$ with $n\leq m\leq n+5S$, define the random variable $Q_{m}$ to be the minimal number $Q\in \{T, T+1, T+2,\dots\}$ such that there is a set $B'$ of size $m-Q$ with $\{ 1,\dots,Q\}\su B'\su \{1,\dots,m\}$ such that the matrix $M_{m}[\{ Q+1,\dots,m\} ,B']$ is $(\lambda/2^{S-Q})$-heavy. If there is no $Q\in \{T, T+1, T+2,\dots\}$ for which such a set $B'$ exists, we define $Q_m=\infty$.
Recall that in \cref{lem:iterative-growth}, we are conditioning on an outcome of $M_{n}$ for which there is a set $B$ of size $n-S$ with $\{1,\dots,S\}\subseteq B\subseteq \{1,\dots,n\}$ such that
that $M_{n}[\{ S+1,\dots,n\} ,B]$ is $\lambda$-heavy. This means that $Q_n\leq S$ (recall that $S>T$).
For $n<m\le n+5S$, we say that step $m$ is a \emph{failure} if $Q_{m}>Q_{m-1}$. We say that step $m$ is \emph{progress} if $Q_{m}<Q_{m-1}$ or if $Q_{m}=Q_{m-1}=T$ or if $Q_{m-1}=\infty$.
For any $n<m\leq n+5S$, when conditioning on any outcome of $M_{m-1}$, we claim that step $m$ is a failure with probability at most $2^{-T}$. Indeed, if $Q_{m-1}<\infty$, let $B'$ be a set of size $m-1-Q_{m-1}$ with $\{ 1,\dots,Q_{m-1}\}\su B'\su \{1,\dots,m-1\}$ such that the matrix $M_{m-1}[\{ Q_{m-1}+1,\dots,m-1\} ,B']$ is $(\lambda/2^{S-Q_{m-1}})$-heavy. By applying \cref{lem:simple-augment} with $I=\{ 1,\dots,m-1\} \setminus B'$, we obtain that with probability at least $1-2^{-Q_{m-1}}\geq 1-2^{-T}$ there exists some $i\in \{ 1,\dots,m-1\} \setminus B'$ such that the matrix $M_{m}[\{ Q_{m-1}+1,\dots,m\} ,B'\cup \{i\}]$ is $(\lambda/2^{S-Q_{m-1}})$-heavy (which in particular implies that $Q_m\leq Q_{m-1}$). On the other hand, if $Q_{m-1}=\infty$, then step $m$ cannot be a failure.
We furthermore claim that for any $n<m\leq n+5S$, when conditioning on any outcome of $M_{m-1}$, step $m$ is progress with probability at least $1/4$. First assume that $Q_{m-1}\not\in \{T,\infty\}$, and let $B'$ be a set of size $m-1-Q_{m-1}$ with $\{ 1,\dots,Q_{m-1}\}\su B'\su \{1,\dots,m-1\}$ such that $M_{m-1}[\{ Q_{m-1}+1,\dots,m-1\} ,B']$ is $(\lambda/2^{S-Q_{m-1}})$-heavy. We can then apply \cref{lem:corner-might-work} to the sets $\{ Q_{m-1}+1,\dots,m-1\}$ and $B'$. Since $\{ 1,\dots,Q_{m-1}\}\su B'$ and $\vert B'\vert =m-1-Q_{m-1}$, we have $\vert \{ Q_{m-1}+1,\dots,m-1\}\setminus B'\vert=Q_{m-1}\geq T\geq 2$. So we can find two distinct elements $b_1,b_2\in \{ Q_{m-1}+1,\dots,m-1\}\setminus B'$. Let us furthermore take $a=Q_{m-1}\in B'\setminus \{ Q_{m-1}+1,\dots,m-1\}$. By \cref{lem:corner-might-work}, there exist distinct elements $i,j\in \{a,b_1,b_2\}$ such that the set $B^*=(B'\setminus\{a\})\cup \{i,j,m\}$ has the property that the matrix $M_m[\{ Q_{m-1},\dots,m\} ,B^*]$ is $(\lambda/2^{S-Q_{m-1}+1})$-heavy with probability at least $1/4$. Also note that $\vert B^*\vert=m-(Q_{m-1}-1)$ and $\{ 1,\dots,Q_{m-1}-1\}\su B^*\su \{1,\dots,m\}$. So we can conclude that with probability at least $1/4$ we have $Q_m\leq Q_{m-1}-1$, meaning that step $m$ is progress.
In order to finish proving the claim that for any $n<m\leq n+5S$ (when conditioning on any outcome of $M_{m-1}$), step $m$ is progress with probability at least $1/4$, it only remains to consider the cases $Q_{m-1}=\infty$ and $Q_{m-1}=T$. If $Q_{m-1}=\infty$, then step $m$ is always progress. If $Q_{m-1}=T$, then step $m$ is progress if and only if it is not failure, and we already proved that step $m$ is failure with probability at most $2^{-T}\leq 1/4$, so in this case step $m$ is progress with probability at least $3/4\geq 1/4$.
Having proved this claim, we can now conclude that the number of progress steps among the $5S$ steps $m\in \{n+1,\dots,n+5S\}$ stochastically dominates a sum of $5S$ independent $\Ber(1/4)$ random variables. By the Chernoff bound (\cref{lem:Chernoff}) such a sum is at least $S$ with probability at least $1-e^{-2(S/4)^2/(5S)}=1-e^{-S/40}$. Thus, the number of progress steps is also at least $S$ with probability at least $1-e^{-S/40}$. Furthermore, note that by the union bound, with probability at least $1-5S\cdot 2^{-T}$ none of the $5S$ steps $m\in \{n+1,\dots,n+5S\}$ is a failure.
If there are no failures and at least $S\geq S-T$ progress steps, then we must have $Q_{n+5S}=T$. Hence, with probability at least $1-5S\cdot 2^{-T}-e^{-S/40}$ there is a set $B'$ of size $n+5S-T$ with $\{ 1,\dots,T\}\su B'\su \{1,\dots,n+5S\}$ such that the matrix $M_{n+5S}[\{ T+1,\dots,n+5S\} ,B']$ is $(\lambda/2^{S-T})$-heavy.
\end{proof}
\section{Proof of \texorpdfstring{\cref{lem:endgame-step}}{Lemma~\ref{lem:endgame-step}}: survival of heavy submatrices}\label{sec:endgame-step}
In this section we prove \cref{lem:endgame-step}. Recall that we call subsets $S_1,\dots,S_m$ of some ground set $S$ complement-disjoint, if their complements $S\setminus S_1,\dots,S\setminus S_m$ are disjoint (and note that this condition is in particular satisfied if $S_1=\dots=S_m=S$).
As in the lemma statement, let $\lambda>0$ and let $A_1,\dots,A_m,B_1,\dots,B_m$ be complement-disjoint subsets of $\{1,\dots,n\}$ of size $n-L$. Recall that we are conditioning on an outcome of the matrix $M_n$ such that we have $\vert \per M_n[A_\ell,B_\ell]\vert\geq \lambda$ for $\ell=1,\dots,m$. Also recall that we are assuming that $m$ is large.
Let $x_{1},\dots,x_{n},z$ be the entries of the new row and column in $M_{n+1}$, and let us condition on any fixed outcome of $z$ (which we no longer view as being random).
First, starting from the complement-disjoint subsets $A_1,\dots,A_m,B_1,\dots,B_m\su \{1,\dots,n\}$ of size $n-L$, we will construct certain complement-disjoint subsets $A_1^*,\dots,A_m^*,B_1^*,\dots,B_m^*\su \{1,\dots,n\}$ of size $n-L+1$. The plan is then to choose the desired subsets $A_1',\dots,A_{m'}',B_1',\dots,B_{m'}'$ in \cref{lem:endgame-step} to each be of the form $A_i^*\cup \{n+1\}$ or $B_i^*\cup \{n+1\}$, for suitably chosen $i\in \{1,\dots,m\}$.
\begin{claim}\label{claim-sets-A-B-prime}
We can find quadruples $(A^*_\ell,B^*_\ell,i_\ell,j_\ell)$ for $\ell\in\{ 1,\dots,m\}$, satisfying the following conditions.
\begin{itemize}
\item For each $\ell\in\{ 1,\dots,m\}$, we have $A_{\ell}^*,B_{\ell}^*\subseteq\{ 1,\dots,n\} $ and $\vert A^*_\ell\vert=\vert B^*_\ell\vert=n-L+1$, and furthermore $i_\ell,j_\ell\in A_\ell^*\cap B_\ell^*$.
\item The elements $i_{1},j_{1}, \dots, i_{m},j_{m}\in \{1,\dots,n\}$ are distinct.
\item The sets $A_{1}^*,B_{1}^*,\dots,A_{m}^*,B_{m}^*$ are complement-disjoint (over the ground set $\{ 1,\dots,n\}$).
\item For each $\ell\in\{ 1,\dots,m\}$, if we view $\per M_{n+1}[A_{\ell}^*\cup\{ n+1\} ,B_{\ell}^*\cup\{ n+1\} ]$ as a polynomial in $x_{1},\dots,x_{n}$, then the coefficient of $x_{i_{\ell}}x_{j_{\ell}}$ has absolute value at least $\lambda/2$.
\end{itemize}
\end{claim}
\begin{proof}
First consider the case $L=1$. For every $\ell\in \{1,\dots,m\}$, let us take $A_{\ell}^*=B_{\ell}^*=\{ 1,\dots,n\}$, let $i_{\ell}$ be the single element of $\{ 1,\dots,n\}\setminus A_\ell$, and let $j_{\ell}$ be the single element of $\{ 1,\dots,n\}\setminus B_\ell$. Note that the second condition is satisfied by the complement-disjointness of the sets $A_{1},B_{1},\dots,A_{m},B_{m}$. For the last condition, observe that by \cref{fact:per-double-expansion} and the symmetry of our matrices, the coefficient of $x_{i_{\ell}}x_{j_{\ell}}$ in $\per M_{n+1}$ equals $\per M_n[A_{\ell},B_{\ell}]+\per M_n[B_{\ell},A_{\ell}]=2\per M_n[A_{\ell},B_{\ell}]$, which has absolute value at least $2\lambda\ge\lambda/2$.
Now, we consider the case $L\geq 2$. For each $\ell\in \{1,\dots,m\}$, choose $a_\ell\in B_\ell\setminus A_{\ell}$ and distinct $b_{\ell},b_{\ell}'\in A_\ell\setminus B_{\ell}$ (this is possible since $A_\ell$ and $B_\ell$ are complement-disjoint and have size at most $n-2$). Now let $i_{\ell},j_{\ell}\in \{a_\ell,b_\ell,b_\ell'\}$ be
as in \cref{lem:per-no-cancel}, and let $A_{\ell}^*=A_{\ell}\cup\{ a_\ell\} $ and $B_{\ell}^*=(B_{\ell}\setminus\{ a_\ell\} )\cup\{ i_{\ell},j_{\ell}\}$. For the last condition, note that by \cref{fact:per-double-expansion} the coefficient of $x_{i_{\ell}}x_{j_{\ell}}$ in $\per M_{n+1}[A_{\ell}^*\cup\{ n+1\} ,B_{\ell}^*\cup\{ n+1\}]$ equals $\per M_n[A_{\ell}^*,B_{\ell}^*]^{(i_\ell,j_\ell)}+\per M_n[B_{\ell}^*,A_{\ell}^*]^{(j_\ell,i_\ell)}$, which has absolute value at least $\lambda/2$ by the conclusion of \cref{lem:per-no-cancel}.
\end{proof}
Fix quadruples $(A^*_\ell,B^*_\ell,i_\ell,j_\ell)$ for $\ell\in\{ 1,\dots,m\}$ as in \cref{claim-sets-A-B-prime}. Let $I=\{i_{1},j_{1}, \dots, i_{m},j_{m}\}\su \{1,\dots,n\}$, and let us condition on any outcome for all the variables $x_i$ with $i\notin I$ (which we will no longer view as being random). For every $\ell=1,\dots,m$, define $P_{\ell}=\per M_{n+1}[A_{\ell}^*\cup\{ n+1\} ,B_{\ell}^*\cup\{ n+1\}]$, viewed as a polynomial in the variables $x_i$ for $i\in I$, but with all quadratic terms $x_i^2$ replaced by $1$ (recall that our variables $x_i$ are chosen in $\{-1,1\}$). Then $P_\ell$ is a multilinear quadratic polynomial and the coefficient of $x_{i_{\ell}}x_{j_{\ell}}$ has absolute value at least $\lambda/2$.
Now, after our conditioning, the only remaining randomness comes from the $2m$ variables $x_i$ for $i\in I$. It suffices to show that for sufficiently large $m$ we have
\begin{equation}\label{eq-many-l-satisfy}
\Pr\left(|P_{\ell}|\ge\lambda/(4n^4)\text{ for at least }m/36\text{ indices }\ell\in \{1,\dots,m\}\right)\geq 1-m^{-1/24}.
\end{equation}
Indeed, if the event in \cref{eq-many-l-satisfy} holds, then we can take the sets $A_1',\dots,A_{m'}', B_1',\dots,B_{m'}'$ in \cref{lem:endgame-step} to be the sets $A_\ell^*\cup\{n+1\}$ and $B_\ell^*\cup\{n+1\}$ for $m'=\ceil{m/36}$ different indices $\ell\in \{1,\dots,m\}$ for which we have $|P_{\ell}|=\vert \per M_{n+1}[A_{\ell}^*\cup\{ n+1\} ,B_{\ell}^*\cup\{ n+1\}]\vert\ge\lambda/(4n^{4})$.
Let $\sigma=\lambda/(4n^2)$ and $\tau=\lambda/(4n^4)$. Using the notation introduced above \cref{lem:MNV}, for each $\ell\in \{1,\dots,m\}$ we consider the graph $G_\ell=G^{(\tau)}(P_\ell)$ on the vertex set $I$ whose edges correspond to the coefficients of the polynomial $P_\ell$ of absolute value at least $\tau$. We say that the index $\ell\in \{1,\dots,m\}$ is \emph{easy} if the graph $G_\ell$ has matching number $\nu(G_\ell)\geq m^{1/6}$. If there are many easy indices, then we can prove \cref{eq-many-l-satisfy} using the Meka--Nguyen--Vu polynomial anti-concentration inequality (\cref{lem:MNV}), as follows.
\begin{claim}
If there are at least $m/3$ easy indices $\ell\in \{1,\dots,m\}$, then \cref{eq-many-l-satisfy} holds.
\end{claim}
\begin{proof}
Recall that for each easy index $\ell$, we have $\nu(G^{(\tau)}(P_\ell))=\nu(G_\ell)\geq m^{1/6}$. Hence by \cref{lem:MNV} we have that
\[\Pr(\vert P_\ell\vert\geq \tau)\geq 1-\frac{(\log \nu(G_\ell))^{C}}{\nu(G_\ell)^{1/2}}\geq 1-\nu(G_\ell)^{-1/3}\geq 1-m^{-1/18}\]
for sufficiently large $m$ (where $C$ is the absolute constant appearing in the statement of \cref{lem:MNV}). Hence by Markov's inequality (specifically \cref{lem:Markov} applied with $p=1-m^{-1/18}$ and $q=1/2$), with probability at least $1-2m^{-1/18}\geq 1-m^{-1/24}$ we have $\vert P_\ell\vert\geq \tau=\lambda/(4n^{4})$ for at least $(1/2)\cdot (m/3)=m/6$ easy indices $\ell\in \{1,\dots,m\}$ (again assuming that $m$ is sufficiently large). This in particular proves \cref{eq-many-l-satisfy}.
\end{proof}
Let us from now on assume that there are at least $2m/3$ indices $\ell\in \{1,\dots,m\}$ which are not easy. For each of these non-easy $\ell$, since $G_\ell$ has no large matching it must have a small vertex cover, as follows.
\begin{claim}\label{claim:vertex-cover}
For every non-easy index $\ell\in \{1,\dots,m\}$, there is a subset $S_\ell\su I$ of size $\vert S_\ell\vert \leq 2m^{1/6}$ such that each edge of the graph $G_\ell$ contains at least one vertex in $S_\ell$ (in other words, $S_\ell$ is a vertex cover of the graph $G_\ell$).
\end{claim}
\begin{proof}
Let us take a maximal collection of disjoint edges in $G_\ell$ (this collection consists of at most $\nu(G_\ell)< m^{1/6}$ edges), and let $S_\ell$ consist of all the vertices contained in one of these edges. Then by the maximality of the chosen edge collection, each edge of $G_\ell$ must contain at least one vertex in $S_\ell$.
\end{proof}
For each non-easy $\ell$, fix a subset $S_\ell\su I$ as in \cref{claim:vertex-cover}. We now briefly describe the strategy of the remainder of the proof. The idea is that all the degree-2 terms of $P_\ell$ whose coefficient has large absolute value must contain a variable whose index is in $S_\ell$. So if we condition on outcomes of $x_i$ for $i\in S_\ell$, then $P_\ell$ ``essentially'' becomes a linear polynomial (apart from some small terms that we can ignore). If this linear polynomial has many coefficients with large absolute value, then we can apply the Erd\H os--Littlewood--Offord inequality to show that $|P_\ell|$ is typically quite large. However, it is possible that for most of the non-easy $\ell$, we end up with linear polynomials which have few coefficients of large absolute value (in which case we will not be able to use such an argument). It turns out that this is unlikely to happen unless for many $\ell$ the polynomial $P_\ell$ essentially depends on only a few of the variables $x_i$ (we will call such indices $\ell$ \emph{short}). We will be able to handle the case that there are many such indices $\ell$ using the Azuma--Hoeffding inequality.
Let us say that a variable $x_i$ with $i\in I$ is \emph{bad} if we have $i\in S_\ell$ for at least $m^{1/3}$ non-easy indices $\ell\in \{1,\dots,m\}$. Note that by a simple counting argument, there are at most $m\cdot 2m^{1/6}/m^{1/3}=2m^{5/6}$ bad variables. We say that a variable $x_i$ with $i\in I$ is \emph{good} if it is not bad. Let $I_{\text{good}}\su I$ be the set of all $i\in I$ such that $x_i$ is good (and note that $\vert I_{\text{good}}\vert\leq \vert I\vert =2m$).
In addition to all of our previous conditioning, let us now also condition on any fixed outcome of all bad variables $x_i$. This means that at this point the only remaining randomness comes from the variables $x_i$ with $i\in I_{\text{good}}$. After fixing the outcomes for the bad variables, we can interpret each polynomial $P_\ell$ (for $\ell\in \{1,\dots,m\}$) as a polynomial in the variables $x_i$ with $i\in I_{\text{good}}$. It suffices to prove \cref{eq-many-l-satisfy} with this additional conditioning on the outcomes of the bad variables.
Note that for each $\ell\in \{1,\dots,m\}$ which is not easy, for any distinct $i,j\in I_{\text{good}}\setminus S_\ell$ the coefficient of $x_ix_j$ in $P_\ell$ has absolute value less than $\tau$. Indeed, this follows from the definition of the graph $G_\ell=G^{(\tau)}(P_\ell)$ and the fact that every edge of $G_{\ell}$ contains at least one vertex in $S_\ell$.
Recall that $\sigma=\lambda/(4n^2)$. We say that an index $\ell\in \{1,\dots,m\}$ is \emph{short} if there are at most $6m^{1/6}$ good variables $x_i$ such that $x_i$ appears in a term of $P_\ell$ whose coefficient has absolute value at least $\sigma$.
\begin{claim}
If there are at least $m/3$ short indices $\ell\in \{1,\dots,m\}$, then \cref{eq-many-l-satisfy} holds.
\end{claim}
\begin{proof}
Recall that we are viewing each $P_\ell$ as a polynomial in the variables $x_i$ with $i\in I_{\text{good}}$. Now, let $P_\ell'$ be the polynomial obtained from $P_\ell$ by deleting all terms whose coefficients have absolute value less than $\sigma$. Note that for each short index $\ell\in \{1,\dots,m\}$, the polynomial $P_\ell'$ contains at most $6m^{1/6}$ different variables $x_i$. Also note that we always have $\vert P_\ell-P_\ell'\vert< \sigma n^2=\lambda/4$ for any outcomes of the good variables $x_i\in \{-1,1\}$ (this is because at most $n^2$ terms get deleted in $P_\ell'$, each with absolute value less than $\sigma$).
We say that a good variable is \emph{short-popular} if it appears in the polynomial $P_\ell'$ for at least $m^{1/3}$ different short indices $\ell\in \{1,\dots,m\}$. Note that there are at most $m\cdot 6m^{1/6}/m^{1/3}=6m^{5/6}$ short-popular variables. For the remainder of the proof of this claim, let us now also condition on any fixed outcomes for all short-popular variables $x_i$, which we no longer view as being random.
Since there are at most $2m^{5/6}$ bad variables, and at most $6m^{5/6}$ short-popular variables, there are at least $m/3-8m^{5/6}\geq m/4$ short indices $\ell\in \{1,\dots,m\}$ for which both of the variables $x_{i_\ell}$ and $x_{j_\ell}$ are good and not short-popular (for the inequality here, we are assuming that $m$ is sufficiently large). For any such index $\ell$, the coefficient of $x_{i_\ell}x_{j_\ell}$ in the polynomial $P_\ell'$ has absolute value at least $\lambda/2$ (since this is also the case for $P_\ell$, and $\lambda/2>\sigma$). Hence we can apply \cref{fact:non-degenerate} to find that $\Pr(|P_{\ell}'|\ge\lambda/2)\ge 1/4$.
So, if $Y$ is the number of short indices $\ell$ with $|P_{\ell}'|\ge\lambda/2$, then $\E Y\ge(m/4)\cdot (1/4)=m/16$. Recall that we already conditioned on outcomes of all the short-popular variables. So each of the remaining random variables occur in $P_\ell'$ for at most $m^{1/3}$ short indices $\ell$, and hence varying each individual variable affects the value of $Y$ by at most $m^{1/3}$. Therefore, by \cref{lem:AH} we have $Y\ge m/32$ with probability at least
\[1-2\exp\left(-\frac{(m/32)^{2}}{2\cdot 2m\cdot m^{2/3}}\right)=1-2\exp\left(-\frac{m^{1/3}}{2^{12}}\right)\geq 1-m^{-1/24},\]
assuming that $m$ is sufficiently large. Hence with probability at least $1-m^{-1/24}$ there are at least $m/32$ short indices $\ell$ with $|P_{\ell}'|\ge\lambda/2$, which implies that $|P_{\ell}|\ge\lambda/2-\lambda/4\geq \lambda/(4n^4)$. This in particular proves \cref{eq-many-l-satisfy}.
\end{proof}
We may from now on assume that there are at least $m/3$ indices $\ell\in \{1,\dots,m\}$ which are not easy and not short. Let us call such indices \emph{interesting}.
For every interesting index $\ell\in \{1,\dots,m\}$, recall that $P_\ell$ is a multilinear polynomial in the variables $x_i$ with $i\in I_{\text{good}}$. Furthermore, recall that for any distinct $i,j\in I_{\text{good}}\setminus S_\ell$ the coefficient of $x_ix_j$ in $P_\ell$ has absolute value less than $\tau$. Let $P_\ell^*$ be the polynomial obtained from $P_\ell$ by deleting all terms of the form $x_ix_j$ for $i,j\in I_{\text{good}}\setminus S_\ell$. Note that we always have $\vert P_\ell-P_\ell^*\vert\leq \tau n(n-1)/2\leq \tau (n^2-1)=\sigma-\tau$ (for all outcomes of the $x_i$).
For every interesting $\ell\in \{1,\dots,m\}$, there are at least $6m^{1/6}$ good variables which appear in a term of $P_\ell$ whose coefficient has absolute value at least $\sigma$. Since only terms with coefficient less than $\tau<\sigma$ get deleted in $P_\ell^*$, this means that there are also at least $6m^{1/6}$ good variables which appear in a term of $P_\ell^*$ whose coefficient has absolute value at least $\sigma$.
Now, for every interesting $\ell\in \{1,\dots,m\}$, let us interpret $P_\ell^*\in \mathbb{R}[x_i, i\in I_{\text{good}}]$ as a polynomial $Q_\ell\in \mathbb{R}[x_i, i\in S_\ell][x_i, i\in I_{\text{good}}\setminus S_\ell]$, i.e.\ as a polynomial in the variables $x_i$ for $i\in I_{\text{good}}\setminus S_\ell$ whose coefficients are polynomials in the variables $x_i$ for $i\in S_\ell$. Then $Q_\ell$ is a linear polynomial (in the variables $x_i$ for $i\in I_{\text{good}}\setminus S_\ell$). Its constant coefficient is a quadratic polynomial, and its other coefficients are linear polynomials (in the variables $x_i$ for $i\in S_\ell$).
Let $T_\ell$ be the number of degree-$1$ coefficients of the linear polynomial $Q_\ell$ (in the variables $x_i$ for $i\in I_{\text{good}}\setminus S_\ell$) which have absolute value at least $\sigma$. This is a random variable depending on the outcomes of the $x_i$ with $i\in S_\ell$.
\begin{claim}\label{claim-T-ell-large}
For each interesting index $\ell\in \{1,\dots,m\}$, we have $\Pr\left(T_\ell\geq m^{1/6}\right)\geq 1/3$.
\end{claim}
\begin{proof}
Fix an interesting $\ell\in \{1,\dots,m\}$. Recall that there are at least $6m^{1/6}$ good variables $x_i$ which appear in a term of $P_\ell^*$ whose coefficient has absolute value at least $\sigma$. Since $\vert S_\ell\vert\leq 2m^{1/6}$, at least $4m^{1/6}$ of these variables satisfy $i\in I_{\text{good}}\setminus S_\ell$. For each such $i$, the coefficient of $x_i$ in $Q_\ell$ is a linear polynomial in the variables $x_j$ for $j\in S_{\ell}$, with at least one coefficient of absolute value at least $\sigma$. Thus, by \cref{fact:non-degenerate-linear}, with probability at least $1/2$ the coefficient of $x_i$ in $Q_\ell$ has absolute value at least $\sigma$. By Markov's inequality (specifically \cref{lem:Markov} applied with $p=1/2$ and $q=1/4$), with probability at least $1/3$ there are at least $(1/4)\cdot 4m^{1/6}=m^{1/6}$ different $i\in I_{\text{good}}\setminus S_\ell$ such that the coefficient of $x_i$ in $Q_\ell$ has absolute value at least $\sigma$. In other words, with probability at least $1/3$, we have $T_\ell\geq m^{1/6}$.
\end{proof}
\begin{claim}\label{claim-T-ell-large-and-poly-small}
For each interesting index $\ell\in \{1,\dots,m\}$, we have $\Pr\left(T_\ell\geq m^{1/6}\text{ and }\vert P_\ell^*\vert<\sigma\right)\leq 3m^{-1/12}$.
\end{claim}
\begin{proof}
Fix an interesting $\ell\in \{1,\dots,m\}$. Recall that $T_\ell$ depends only on the variables $x_i$ with $i\in S_\ell$. So, for the proof of this claim, let us condition on some outcome for the variables $x_i$ with $i\in S_\ell$ such that we have $T_\ell\geq m^{1/6}$. Under this conditioning the polynomial $P_\ell^*$ becomes precisely the polynomial $Q_\ell$, which is a linear polynomial in the remaining variables $x_i$ for $i\in I_{\text{good}}\setminus S_\ell$, having $T_\ell\geq m^{1/6}$ degree-$1$ coefficients with absolute value at least $\sigma$. Now, by the Erd\H os--Littlewood--Offord inequality (\cref{lem:LO}, applied with $t=1$) we indeed have $\Pr\left(\vert P_\ell^*\vert<\sigma\right)\leq 3/T_\ell^{1/2}\leq 3m^{-1/12}$.
\end{proof}
Next, define the random variable $X$ as the number of interesting $\ell\in \{1,\dots,m\}$ such that $T_\ell\geq m^{1/6}$. Furthermore, define the random variable $Y$ as the number of interesting $\ell\in \{1,\dots,m\}$ such that $T_\ell\geq m^{1/6}$ and $\vert P_\ell^*\vert<\sigma$.
\begin{claim}
With probability at least $1-m^{-1/12}$ we have $X\geq m/18$.
\end{claim}
\begin{proof}
First, note that by \cref{claim-T-ell-large} we have $\E X\geq (m/3)\cdot (1/3)=m/9$. Also recall that for each interesting $\ell$, the event $T_\ell\geq m^{1/6}$ only depends on the outcomes of $x_i$ for $i\in S_\ell$. This means that changing the outcome of any of the good random variables $x_i$ can affect $X$ by at most $m^{1/3}$ (recall that for each good $x_i$, we have $i\in S_\ell$ for at most $m^{1/3}$ different $\ell$). Hence by \cref{lem:AH}, we have $X\geq m/18$ with probability at least
\[1-2\exp\left(-\frac{(m/18)^{2}}{2\cdot 2m\cdot m^{2/3}}\right)=1-2\exp\left(-\frac{m^{1/3}}{36^2}\right)\geq 1-m^{-1/12}\]
(for sufficiently large $m$), as desired.
\end{proof}
\begin{claim}
With probability at least $1-108m^{-1/12}$ we have $Y\leq m/36$.
\end{claim}
\begin{proof}
By \cref{claim-T-ell-large-and-poly-small} we have $\E Y\leq m\cdot 3m^{-1/12}=3m^{11/12}$. Hence, by Markov's inequality we have $Y\geq m/36$ with probability at most $108m^{-1/12}$.
\end{proof}
From the previous two claims, we conclude that if $m$ is sufficiently large, then with probability at least $1-109m^{-1/12}\geq 1-m^{-1/24}$ we have $X-Y\geq m/36$. But note that whenever $X-Y\geq m/36$, there are at least $m/36$ interesting $\ell\in \{1,\dots,m\}$ such that $\vert P_\ell^*\vert\geq \sigma$, which implies $\vert P_\ell\vert\geq \sigma-(\sigma-\tau)= \tau=\lambda/(4n^4)$. This proves \cref{eq-many-l-satisfy}, and finishes the proof of \cref{lem:endgame-step}.
\section{Concluding remarks}\label{sec:concluding}
We have proved that the permanent of a random symmetric $\pm 1$ matrix typically has magnitude $n^{n/2+o(n)}$. This encapsulates the upper bound in \cref{thm:var} and the lower bound in \cref{thm:main}.
The upper bound and lower bound both permit some fairly immediate generalisations. For example, in the setting of \cref{thm:main} we actually have $\Pr(\per M_{n}=a)\le \Pr(|\per M_{n}-a|\le n^{n/2-\varepsilon n})\le n^{-c}$ for all $a$ (not just $a=0$). To see this, recall that the proof of \cref{thm:main} concludes by repeatedly applying \cref{lem:endgame-step}, where the final application shows that, conditional on a typical outcome of $\per M_{n-1}$, it is very likely that $|\per M_{n}|\le n^{n/2-\varepsilon n}$. In this final application we can instead apply a slight generalisation of \cref{lem:endgame-step} (proved in the same way), showing that for any $a$ in fact it is very likely that $|\per M_{n}-a|\le n^{n/2-\varepsilon n}$.
We can also permit the entries of our random matrix to take more general distributions. As defined in the introduction, consider any real probability distributions $\mu$ and $\nu$, and let $M_n^{\mu,\nu}$ be the random symmetric matrix whose diagonal entries have distribution $\nu$ and whose off-diagonal entries have distribution $\mu$ (and whose entries on and above the diagonal are mutually independent). If $\mu$ and $\nu$ are fixed (not depending on $n$), $\nu$ has finite second moment and $\mu$ has vanishing first moment and finite fourth moment, then essentially the same proof as for \cref{thm:var} shows that $\Pr\left(|\per M^{\mu,\nu}_{n}|\ge n^{n/2+\varepsilon n}\right)\le n^{-\varepsilon n}$ for $n$ sufficiently large with respect to $\eps$, $\mu$ and $\nu$.
The conclusion of \cref{thm:main} can be even more freely generalised to any fixed distributions $\mu$ and $\nu$ such that $\mu$ is supported on at least two points (not requiring any moment assumptions at all). In the case where $\mu$ is supported on two points, any quadratic polynomial in independent $\mu$-distributed random variables can be rewritten as a multilinear polynomial. One can then use essentially the same proof as for \cref{thm:main} (only changing the various constants in the lemma statements) to prove that there is a constant $c$ (depending on $\mu$ but not $\nu$) such that for all $\eps>0$ and all $n$ sufficiently large with respect to $\eps$, we have $\Pr(|\per M^{\mu,\nu}_n|\le n^{n/2-\varepsilon n})\le n^{-c}$. Note that changing $\nu$ has no effect on our arguments, because we never actually use the randomness of the diagonal entries. One needs some slight generalisations of the anti-concentration lemmas in \cref{sec:tools} for $\mu$-distributed random variables, but these are indeed available; see \cite[Theorem~A.1]{BVW10} and \cite[Theorem~1.8]{MNV16}.
In the case where $\mu$ is supported on more than two points, then it is necessary to make slightly more involved changes to the proof of \cref{thm:main}, but the same result does hold. To give a brief sketch: in this case, the main issue is that in the proof of \cref{lem:endgame-step} we are no longer able to assume that the relevant quadratic polynomials are multilinear, so we must treat the square terms $x_i^2$ in basically the same way we treat the linear terms $x_i$. To be specific, in the proof of \cref{lem:endgame-step}, we must allow the polynomials $Q_\ell$ to contain square terms $x_i^2$ in addition to linear terms. There are several aspects to this. First, we need to generalise \cref{fact:non-degenerate} and \cref{lem:MNV} to quadratic polynomials of independent $\mu$-distributed random variables. In particular, we need a generalisation of \cref{lem:MNV} for quadratic polynomials that are not necessarily multilinear, still giving a bound in terms of the graph matching number $\nu(G^{(r)}(f))$ (where we ignore the square terms in $f$ for the purpose of constructing the graph $G^{(r)}(f)$). Suitable generalisations of \cref{fact:non-degenerate} and \cref{lem:MNV} can be proved with the methods in \cite[Section~4]{MNV16} (in fact, in this section the authors prove \cite[Theorem~1.8]{MNV16}, which is essentially the required generalisation of \cref{lem:MNV} but with slightly different assumptions).
Second, we need a generalisation of \cref{fact:non-degenerate-linear} for $\mu$-distributed random variables (for which we can make fairly trivial changes to the proof of \cref{fact:non-degenerate-linear}). And lastly, we need a generalisation of the Erd\H os--Littlewood--Offord theorem (\cref{lem:LO}) for polynomials of independent $\mu$-distributed random variables (when $\mu$ is supported on at least three values) which applies not just to linear polynomials, but also to quadratic polynomials consisting of both linear and square terms (having no multilinear degree-2 terms). Such polynomials can be interpreted as linear polynomials in independent (but not identically distributed) random variables, and therefore the appropriate generalisation of \cref{lem:LO} follows from the Doeblin--L\'evy--Kolmogorov--Rogozin inequality~\cite{Rog61} (or alternatively the method in \cite[Section~4]{MNV16}), together with a basic single-variable quadratic anti-concentration bound. Namely, we need the fact that for any real random variable $x$ supported on at least 3 values, and any $r>0$, there are $s>0$ and $p>0$ such that we have $\Pr( |f(x)| < s ) < 1-p$ for any one-variable quadratic polynomial $f$ whose linear or quadratic coefficient has absolute value at least $r$.
So, for example, both \cref{thm:var} and \cref{thm:main}, and therefore \cref{thm:main-0}, can be generalised to the case where $\mu$ is a centred Gaussian distribution (as long as $\mu$ and $\nu$ do not depend on $n$, and $\nu$ has finite second moment). The Gaussian Orthogonal Ensemble (GOE) is an important special case. Another case that may be of particular interest is the case where the support of $\mu$ is $\{0,1\}$, and $\nu$ is the trivial distribution always taking the value zero (\cref{thm:var} does not hold in this case, but \cref{thm:main} does). In this case $M^{\mu,\nu}_n$ can be interpreted as the adjacency matrix of a random graph. However, we note that in this case the statement in \cref{thm:main} can be proved in a much simpler way: we can take advantage of the fact that changing any off-diagonal entry in $M^{\mu,\nu}_n$ from $0$ to $1$ typically causes a large increase in the value of $\per M^{\mu,\nu}_n$, and apply a more general anti-concentration inequality for functions with this property \cite[Theorem~1.2]{FKS}. Actually, in this setting we suspect that it is possible to prove a limit law for $\per M^{\mu,\nu}_n$, using the ideas in \cite{Jan94}.
Regarding further directions for research, it would be very interesting to prove stronger upper bounds for the concentration probabilities of the permanent, in both the i.i.d.\ and the symmetric case. It is currently only known that $\Pr(\per M_{n}=a)$ and $\Pr(\per A_{n}=a)$ are bounded by $n^{-c}$ for some constant $c$. It would be very interesting to prove bounds of the form $n^{-\omega(1)}$, where $\omega(1)$ is any function going to infinity with $n$. Actually, Vu conjectured (see \cite[Conjecture~6.12]{Vu20}) that $\Pr(\per A_{n}=0)$
is of the form $\omega(1)^{-n}$ (in contrast to the situation for the determinant, where we have $\Pr(\det A_{n}=0)=2^{-n+o(n)}$). It seems reasonable to conjecture that in fact all probabilities of the form $\Pr(\per M_{n}=a)$ or $\Pr(\per A_{n}=a)$ are upper-bounded by $n^{-cn}$, for some constant $c$. However, essentially all known tools for studying permanents of random matrices also apply to determinants, so significant new ideas would be required to prove such strong results.
\textbf{Acknowledgements.} We would like to thank Asaf Ferber for insightful discussions. We are also grateful to the referee for their careful reading of the paper, and their useful comments and suggestions.
\bibliographystyle{amsplain_initials_nobysame_nomr}
\section{Introduction}
\label{sec-intro}
The gender gap in science has proved remarkably resilient. While girls perform at least as well as boys in secondary school STEM classes~\cite{stoet2018gender}, a gender gap appears in higher education and widens as women decide to pursue careers in science and technology. Women receive 35\% of the bachelor's degrees in STEM~\cite{de2019status}, but only about 20\% of the doctorates.
Although some progress has been made in recent decades in increasing the representation of women in STEM doctoral programs and entry-level academic positions, such as a lecturer or an assistant professor, representation in senior professorial positions has remained much more stagnant. And despite the progress that has been made, women still represent less than a quarter of the STEM workforce~\cite{hill2010so}. The problem is especially acute in physics, where among faculty in four-year colleges and universities, women represent 23\% of assistant professors, 18\% of associate professors and just 10\% of full professors~\cite{porter2019women}.
This shrinking participation of women in STEM is known as the ``leaky pipeline,'' and fixing it is essential for the future of scientific innovation. Retaining talented women will increase the diversity of the scientific workforce, a factor associated with higher creativity and productivity in research~\cite{page, smith2017diversity, Woolley2010}. Increasing representation of women, especially in senior academic positions, will also create a self-perpetuating cycle by inspiring more young women to pursue careers in science, and promote gender equality that makes it easier for even more women to ascend academic ranks.
Researchers have identified some of the sources of leaks in the STEM pipeline. The self-confidence gap leads girls to rate their abilities in science and math lower than boys, despite performing similarly well on these subjects in high school~\cite{hill2010so}. As a result, fewer women choose to major in science in college. In fact, low-achieving boys who perform in the 1st percentile on high school math and science assessment tests are just as likely to choose a science major in college as high-achieving girls who score in the 80th percentile~\cite{cimpian2020understanding}.
After they enter the academic workforce, women continue to face more challenges than men, including balancing family responsibilities, an unwelcoming work environment, reduced funding opportunities, and explicit bias~\cite{goulden2009staying,wenneraas2000chair,way2016gender}. The self-confidence gap seen in early education is not just internal; women must meet higher expectations to receive the same opportunities as their male colleagues. One study found that female applicants for an academic position had to be significantly more productive than male applicants to receive a similar rating~\cite{wenneraas1997sexism}. These harms accumulate. Studies have determined that women are more likely than their male colleagues to leave academia earlier and change career paths, despite having equal research productivity and commitment to their careers~\cite{huang2020historical,xu2008gender}. Thus, the loss of women from academia and differences in their accomplishments cannot be justified by their gender or familial responsibilities alone. Female researchers face a multitude of barriers and biases that inhibit their ability to pursue the same opportunities as their male colleagues.
In this paper we identify another source of gender disparity in physics, namely a gender gap in physics publishing. We analyze data provided by the American Physical Society (APS), a leading publisher of physics research. APS publishes specialized disciplinary journals, such as Physical Review A (atomic physics) and E (statistical physics), as well as journals aimed at a broader audience, such as its prestigious Physical Review Letters (PRL) and Reviews of Modern Physics.
We infer the genders of authors based on their names using state-of-the-art methods. We show that the share of women among all authors with identified genders lags far behind that of men. Moreover, the share of women drops with the journal impact factor, and it is lowest for the most prestigious PRL and RMP. Not only do these journals have lower acceptance rates---PRL published only a third of all submitted papers---but, critically, the editors play a decisive role in deciding which papers are initially reviewed. In contrast, the proportion of female authors is highest in the Physical Review Physics Education Research journal.
Our findings demonstrate a ``leaky pipeline'' in physics publishing, where the proportion of women authors gradually shrinks with each additional stage of review. Although women publish a representative share of physics research in the less selective APS journals, they are less likely to publish in high-impact APS journals, potentially with negative impact to their careers.
The additional decisions made by editors and reviewers of high impact journals create additional opportunities for implicit bias to creep in. By quantifying sources of bias, we hope to provide the tools for mitigating them to increase representation of women in physics.
\section{Data and Methods}
\label{sec-methods}
The \textbf{APS data} is provided by the American Physical Society (APS), a leading publisher of physics research. The journals published by the APS include the prestigious Physical Review Letters (PRL) and Reviews of Modern Physics (RMP), as well as specialized journals on topics ranging from atomic physics (PRA) and nuclear physics (PRC), to statistical physics (PRE), fluid mechanics (Fluids) and physics education research (PRPER). The APS dataset contains information about more than $450,000$ articles published by the APS journals between the years 1893 and 2018. The data is available at https://journals.aps.org/datasets. The article metadata include full names of authors, their affiliations, and the year and journal in which the article was published.
To infer an author's gender from their name we use the state-of-the-art service \textit{genderize.io}~\cite{santamaria2018comparison}. Given a name, it returns a gender and a confidence score between 0.5 and 1. The uncertainty is greater for Asian names, which often are not gender-specific. We filter out authors with a confidence score below 0.8, about 19\% of the names, composed predominantly of Chinese and Korean names. In addition, some authors in the APS dataset are identified by their initials only, which makes it impossible to infer their gender. After excluding these authors, we applied general pre-processing, such as normalization, to the remaining names in order to obtain the most accurate gender labels for this dataset.
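For concreteness, the inference step can be sketched as follows (in Python). This is an illustrative sketch rather than our exact pipeline: the endpoint URL and response fields follow the publicly documented genderize.io API, while the helper names and the crude first-name normalization are assumptions made for this example; only the 0.8 confidence threshold and the exclusion of initials-only names mirror the procedure described above.
\begin{verbatim}
# Sketch: query genderize.io for a first name and keep only
# predictions with confidence >= 0.8; initials-only names are skipped.
import requests

API_URL = "https://api.genderize.io"   # public endpoint (assumed)
THRESHOLD = 0.8

def first_name_of(author_name):
    """Crude normalization: take the first token, skip bare initials."""
    tokens = author_name.strip().split()
    if not tokens:
        return None
    token = tokens[0].rstrip(".")
    return token if len(token) > 1 else None

def infer_gender(author_name):
    """Return 'male', 'female', or None when the prediction is uncertain."""
    name = first_name_of(author_name)
    if name is None:
        return None
    data = requests.get(API_URL, params={"name": name}).json()
    if data.get("gender") is None or data.get("probability", 0.0) < THRESHOLD:
        return None
    return data["gender"]
\end{verbatim}
In practice one would batch and cache these queries, since many distinct authors share the same first name.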
\section{Results}
\label{sec-results}
The number of authors publishing in APS journals has grown greatly over the past decades,
but despite the growth, women are still a small fraction of all APS authors with identified genders.
Over the last 50 years, the share of female authors has increased from 4.4\% in 1970 to 14.8\% in 2018. These numbers closely mirror the proportion of women physics faculty.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figs/a,c,prl,rmp,x.png}
\caption{Proportion of female authors over time for selected journals shows systematically lower representation among the authors publishing in Physical Review Letters, Physical Review X and Reviews of Modern Physics
}
\label{fig:journals-years}
\end{figure}
When data is disaggregated by journal, the trends remain remarkably consistent over time. Figure~\ref{fig:journals-years} shows that the proportion of female authors for selected journals grows over time, but those for PRL, PRX and RMP are systematically lower than for other journals.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figs/prop_journal_2008-2018.png}
\caption{Proportion of female authors publishing in APS journals during the time period 2008--2018. }
\label{fig:journals}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figs/impact_factor_noEdu.png}
\\
\includegraphics[width=0.9\linewidth]{figs/hIndex_noEdu.png}
\caption{Proportion of female authors as a function of journal impact factor (left) and journal h-index (right) for papers published between 2008 and 2018. The outlier with the high fraction of women is the physics education journal. The line shows a linear fit, and the inset shows the average proportion of female authors for journals binned by impact factor, without the outlier. The bars show standard errors.}
\label{fig:impact-factor}
\end{figure}
There is large variation in the proportion of female authors when disaggregating data by journal, even after restricting the timescale to the latest decade (Fig.~\ref{fig:journals}). The proportion of female authors is lowest for Reviews of Modern Physics (8.1\%), Physical Review Accelerators and Beams, Physical Review Letters and Physical Review X. At the other end, the proportion of women authors is highest for the Physical Review Physics Education Research journal (38.9\%).
More prestigious journals publish fewer women authors. Figure~\ref{fig:impact-factor} shows the fraction of women authors publishing in a journal during the decade 2008--2018 as a function of its impact factor and h-index, two common measures of journal prestige. There is a negative correlation between the share of women authors and both the journal impact factor (Spearman correlation $r=-0.462$, $p=0.11$) and the journal h-index (Spearman $r=-0.687$, $p=0.01$). Reviews of Modern Physics has the highest impact factor, i.e., the average number of citations per paper, followed by PRX and PRL, while Physics Education Research has one of the lowest (impact factor = 1.811). The differences remain after journals are combined into three groups (after omitting the education outlier): the highest-impact tertile, which includes the most prestigious journals, has significantly fewer ($p=0.02$) women authors (inset in Fig.~\ref{fig:impact-factor}).
Journal importance is slightly different when judged by h-index (Fig.~\ref{fig:impact-factor}(right)), which measures the number of papers published in the journal with at least that many citations. Here PRL is the most important, followed by PRB, PRD, and then RMP.
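The correlation analysis reported above can be reproduced along the following lines (a Python sketch using pandas and SciPy). The column names of the two input tables are assumptions made for this example, and the impact factors and h-indices come from an external journal-level table rather than from the APS metadata itself.
\begin{verbatim}
# Sketch: per-journal share of female authors (2008-2018) and its
# Spearman correlation with journal impact factor and h-index.
import pandas as pd
from scipy.stats import spearmanr

def journal_shares(authors):
    # authors: one row per (article, author) with an inferred gender;
    # assumed columns: journal, year, gender ('female' or 'male')
    recent = authors[(authors["year"] >= 2008) & (authors["year"] <= 2018)]
    return recent.groupby("journal")["gender"].apply(
        lambda g: (g == "female").mean()).rename("female_share")

def prestige_correlations(authors, prestige):
    # prestige: assumed columns journal, impact_factor, h_index
    merged = prestige.join(journal_shares(authors), on="journal").dropna()
    r_if, p_if = spearmanr(merged["female_share"], merged["impact_factor"])
    r_h, p_h = spearmanr(merged["female_share"], merged["h_index"])
    return (r_if, p_if), (r_h, p_h)
\end{verbatim}
The tertile comparison in the inset of Fig.~\ref{fig:impact-factor} can be obtained from the same merged table, for instance by binning impact factors with \texttt{pandas.qcut} before averaging the shares.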
\section{Discussion}
Women have made quick gains against the historic discrimination they have faced in higher education.
Although they were not admitted to elite universities until the late 1960s, by 1980 they surpassed men in the number of bachelor's degrees awarded. However, gender equity among STEM degree recipients, as well as among faculty, has remained an elusive goal.
In this study, we have identified a potential factor for this gender gap by demonstrating that a leaky pipeline exists in physics publishing, in which the proportion of female authors publishing physics papers decreases for more prestigious journals.
We propose that this gender gap may arise from differences in editorial procedures, as our data mirrors the official editorial guidelines for APS journals.
In our study, we found that Reviews of Modern Physics has the lowest proportion of female authors. Accordingly, it is the only journal that invites contributions and considers few unsolicited manuscripts. For the next two most impactful journals, PRL and PRX, editors play a major role in deciding whether the paper is sent to referees, who are asked to ``comment critically on the validity and importance of the manuscript,'' according to the APS editorial guidelines. For all the other journals, editors ``select expert(s) in the field to comment on manuscripts that are sent out to review,'' and are not bound to a minimum of two referees. The guidelines for these journals are thus more relaxed and involve fewer people reviewing the paper. Our data shows that the proportion of female authors in these journals is higher than that of the first three, and more accurately reflects the fraction of women physicists.
We suggest two potential reasons for this effect. First, reviewers are subject to bias, which has been responsible for gender gaps across many platforms. Methods to counteract gender biases have historically involved switching to blind review. In one example, the shift to blind auditions among orchestras resulted in a 30\% increase in new female hires~\cite{goldin2000orchestrating}. Each additional stage of the review process adds yet another opportunity for implicit bias to creep in and affect the final decision. Therefore, because more prestigious journals involve more individual reviewers, a leaky pipeline is created.
Another reason for the gender disparities among APS journals may be the norms for rebuttal, and the unwillingness of women to fight for their own work.
Men may be more likely to appeal a rejection and ultimately see more papers published. In contrast, women are hesitant to engage in self-promotion~\cite{moss2010disruptions} and as a result, are less likely to appeal an adverse editorial decision. An examination of appeals is needed to shed more light on this question.
The leaky publishing pipeline hurts women's careers, and publishers need to take steps to identify and ameliorate gender-based leaks in the research publication pipeline. If women become more likely to publish in prestigious journals, they will be more likely to receive promotions and less likely to drop out of academia. Higher numbers of women faculty members will inspire more young women to choose careers in science, increasing diversity and improving innovation.
\bibliographystyle{plain}
\section{Introduction}
The {\it IJCAI--21 Proceedings} will be printed from electronic
manuscripts submitted by the authors. These must be PDF ({\em Portable
Document Format}) files formatted for 8-1/2$''$ $\times$ 11$''$ paper.
\subsection{Length of Papers}
All paper {\em submissions} must have a maximum of six pages, plus at most one for references. The seventh page cannot contain {\bf anything} other than references.
The length rules may change for final camera-ready versions of accepted papers and will differ between tracks. Some tracks may include only references in the last page, whereas others allow for any content in all pages. Similarly, some tracks allow you to buy a few extra pages should you want to, whereas others don't.
If your paper is accepted, please carefully read the notifications you receive, and check the proceedings submission information website\footnote{\url{https://proceedings.ijcai.org/info}} to know how many pages you can finally use. That website holds the most up-to-date information regarding paper length limits at all times. Please notice that if your track allows for a special references-only page, the {\bf references-only page(s) cannot contain anything else than references} (i.e.: do not write your acknowledgments on that page or you will be charged for it).
\subsection{Word Processing Software}
As detailed below, IJCAI has prepared and made available a set of
\LaTeX{} macros and a Microsoft Word template for use in formatting
your paper. If you are using some other word processing software, please follow the format instructions given below and ensure that your final paper looks as much like this sample as possible.
\section{Style and Format}
\LaTeX{} and Word style files that implement these instructions
can be retrieved electronically. (See Appendix~\ref{stylefiles} for
instructions on how to obtain these files.)
\subsection{Layout}
Print manuscripts two columns to a page, in the manner in which these
instructions are printed. The exact dimensions for pages are:
\begin{itemize}
\item left and right margins: .75$''$
\item column width: 3.375$''$
\item gap between columns: .25$''$
\item top margin---first page: 1.375$''$
\item top margin---other pages: .75$''$
\item bottom margin: 1.25$''$
\item column height---first page: 6.625$''$
\item column height---other pages: 9$''$
\end{itemize}
All measurements assume an 8-1/2$''$ $\times$ 11$''$ page size. For
A4-size paper, use the given top and left margins, column width,
height, and gap, and modify the bottom and right margins as necessary.
\subsection{Format of Electronic Manuscript}
For the production of the electronic manuscript, you must use Adobe's
{\em Portable Document Format} (PDF). A PDF file can be generated, for
instance, on Unix systems using {\tt ps2pdf} or on Windows systems
using Adobe's Distiller. There is also a website with free software
and conversion services: \url{http://www.ps2pdf.com}. For reasons of
uniformity, use of Adobe's {\em Times Roman} font is strongly suggested.
In \LaTeX2e{} this is accomplished by writing
\begin{quote}
\mbox{\tt $\backslash$usepackage\{times\}}
\end{quote}
in the preamble.\footnote{You may want also to use the package {\tt
latexsym}, which defines all symbols known from the old \LaTeX{}
version.}
Additionally, it is of utmost importance to specify the {\bf
letter} format (corresponding to 8-1/2$''$ $\times$ 11$''$) when
formatting the paper. When working with {\tt dvips}, for instance, one
should specify {\tt -t letter}.
\subsection{Title and Author Information}
Center the title on the entire width of the page in a 14-point bold
font. The title must be capitalized using Title Case. Below it, center author name(s) in 12-point bold font. On the following line(s) place the affiliations, each affiliation on its own line using 12-point regular font. Matching between authors and affiliations can be done using numeric superindices. Optionally, a comma-separated list of email addresses follows the affiliation(s) line(s), using 12-point regular font.
\subsubsection{Blind Review}
In order to make blind reviewing possible, authors must omit their
names and affiliations when submitting the paper for review. In place
of names and affiliations, provide a list of content areas. When
referring to one's own work, use the third person rather than the
first person. For example, say, ``Previously,
Gottlob~\shortcite{gottlob:nonmon} has shown that\ldots'', rather
than, ``In our previous work~\cite{gottlob:nonmon}, we have shown
that\ldots'' Try to avoid including any information in the body of the
paper or references that would identify the authors or their
institutions. Such information can be added to the final camera-ready
version for publication.
\subsection{Abstract}
Place the abstract at the beginning of the first column 3$''$ from the
top of the page, unless that does not leave enough room for the title
and author information. Use a slightly smaller width than in the body
of the paper. Head the abstract with ``Abstract'' centered above the
body of the abstract in a 12-point bold font. The body of the abstract
should be in the same font as the body of the paper.
The abstract should be a concise, one-paragraph summary describing the
general thesis and conclusion of your paper. A reader should be able
to learn the purpose of the paper and the reason for its importance
from the abstract. The abstract should be no more than 200 words long.
\subsection{Text}
The main body of the text immediately follows the abstract. Use
10-point type in a clear, readable font with 1-point leading (10 on
11).
Indent when starting a new paragraph, except after major headings.
\subsection{Headings and Sections}
When necessary, headings should be used to separate major sections of
your paper. (These instructions use many headings to demonstrate their
appearance; your paper should have fewer headings.) All headings should be capitalized using Title Case.
\subsubsection{Section Headings}
Print section headings in 12-point bold type in the style shown in
these instructions. Leave a blank space of approximately 10 points
above and 4 points below section headings. Number sections with
arabic numerals.
\subsubsection{Subsection Headings}
Print subsection headings in 11-point bold type. Leave a blank space
of approximately 8 points above and 3 points below subsection
headings. Number subsections with the section number and the
subsection number (in arabic numerals) separated by a
period.
\subsubsection{Subsubsection Headings}
Print subsubsection headings in 10-point bold type. Leave a blank
space of approximately 6 points above subsubsection headings. Do not
number subsubsections.
\paragraph{Titled paragraphs.} You should use titled paragraphs if and
only if the title covers exactly one paragraph. Such paragraphs should be
separated from the preceding content by at least 3pt, and no more than
6pt. The title should be in 10pt bold font and ended with a period.
After that, a 1em horizontal space should follow the title before
the paragraph's text.
In \LaTeX{} titled paragraphs should be typeset using
\begin{quote}
{\tt \textbackslash{}paragraph\{Title.\} text} .
\end{quote}
\subsubsection{Acknowledgements}
You may include an unnumbered acknowledgments section, including
acknowledgments of help from colleagues, financial support, and
permission to publish. If present, acknowledgements must be in a dedicated,
unnumbered section appearing after all regular sections but before any
appendices or references.
Use
\begin{quote}
{\tt \textbackslash{}section*\{Acknowledgements\}}
\end{quote}
to typeset the acknowledgements section in \LaTeX{}.
\subsubsection{Appendices}
Any appendices directly follow the text and look like sections, except
that they are numbered with capital letters instead of arabic
numerals. See this document for an example.
\subsubsection{References}
The references section is headed ``References'', printed in the same
style as a section heading but without a number. A sample list of
references is given at the end of these instructions. Use a consistent
format for references. The reference list should not include publicly unavailable work.
\subsection{Citations}
Citations within the text should include the author's last name and
the year of publication, for example~\cite{gottlob:nonmon}. Append
lowercase letters to the year in cases of ambiguity. Treat multiple
authors as in the following examples:~\cite{abelson-et-al:scheme}
or~\cite{bgf:Lixto} (for more than two authors) and
\cite{brachman-schmolze:kl-one} (for two authors). If the author
portion of a citation is obvious, omit it, e.g.,
Nebel~\shortcite{nebel:jair-2000}. Collapse multiple citations as
follows:~\cite{gls:hypertrees,levesque:functional-foundations}.
\nocite{abelson-et-al:scheme}
\nocite{bgf:Lixto}
\nocite{brachman-schmolze:kl-one}
\nocite{gottlob:nonmon}
\nocite{gls:hypertrees}
\nocite{levesque:functional-foundations}
\nocite{levesque:belief}
\nocite{nebel:jair-2000}
\subsection{Footnotes}
Place footnotes at the bottom of the page in a 9-point font. Refer to
them with superscript numbers.\footnote{This is how your footnotes
should appear.} Separate them from the text by a short
line.\footnote{Note the line separating these footnotes from the
text.} Avoid footnotes as much as possible; they interrupt the flow of
the text.
\section{Illustrations}
Place all illustrations (figures, drawings, tables, and photographs)
throughout the paper at the places where they are first discussed,
rather than at the end of the paper.
They should be floated to the top (preferred) or bottom of the page,
unless they are an integral part
of your narrative flow. When placed at the bottom or top of
a page, illustrations may run across both columns, but not when they
appear inline.
Illustrations must be rendered electronically or scanned and placed
directly in your document. They should be cropped outside \LaTeX{}; otherwise portions of the image could reappear during the post-processing of your paper. All illustrations should be understandable when printed in black and
white, albeit you can use colors to enhance them. Line weights should
be 1/2-point or thicker. Avoid screens and superimposing type on
patterns, as these effects may not reproduce well.
Number illustrations sequentially. Use references of the following
form: Figure 1, Table 2, etc. Place illustration numbers and captions
under illustrations. Leave a margin of 1/4-inch around the area
covered by the illustration and caption. Use 9-point type for
captions, labels, and other text in illustrations. Captions should always appear below the illustration.
\section{Tables}
Tables are considered illustrations containing data. Therefore, they should also appear floated to the top (preferably) or bottom of the page, and with the captions below them.
\begin{table}
\centering
\begin{tabular}{lll}
\hline
Scenario & $\delta$ & Runtime \\
\hline
Paris & 0.1s & 13.65ms \\
Paris & 0.2s & 0.01ms \\
New York & 0.1s & 92.50ms \\
Singapore & 0.1s & 33.33ms \\
Singapore & 0.2s & 23.01ms \\
\hline
\end{tabular}
\caption{Latex default table}
\label{tab:plain}
\end{table}
\begin{table}
\centering
\begin{tabular}{lrr}
\toprule
Scenario & $\delta$ (s) & Runtime (ms) \\
\midrule
Paris & 0.1 & 13.65 \\
& 0.2 & 0.01 \\
New York & 0.1 & 92.50 \\
Singapore & 0.1 & 33.33 \\
& 0.2 & 23.01 \\
\bottomrule
\end{tabular}
\caption{Booktabs table}
\label{tab:booktabs}
\end{table}
If you are using \LaTeX, you should use the {\tt booktabs} package, because it produces better tables than the standard ones. Compare Tables \ref{tab:plain} and~\ref{tab:booktabs}. The latter is clearly more readable for three reasons:
\begin{enumerate}
\item The styling is better thanks to using the {\tt booktabs} rulers instead of the default ones.
\item Numeric columns are right-aligned, making it easier to compare the numbers. Make sure to also right-align the corresponding headers, and to use the same precision for all numbers.
\item We avoid unnecessary repetition, both between lines (no need to repeat the scenario name in this case) as well as in the content (units can be shown in the column header).
\end{enumerate}
\section{Formulas}
IJCAI's two-column format makes it difficult to typeset long formulas. A usual temptation is to reduce the size of the formula by using the {\tt small} or {\tt tiny} sizes. This doesn't work correctly with the current \LaTeX{} versions, breaking the line spacing of the preceding paragraphs and title, as well as the equation number sizes. The following equation demonstrates the effects (notice that this entire paragraph looks badly formatted):
\begin{tiny}
\begin{equation}
x = \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i
\end{equation}
\end{tiny}%
Reducing formula sizes this way is strictly forbidden. We {\bf strongly} recommend that authors split formulas in multiple lines when they don't fit in a single line. This is the easiest approach to typeset those formulas and provides the most readable output:%
\begin{align}
x =& \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \nonumber\\
+ & \prod_{i=1}^n \sum_{j=1}^n j_i
\end{align}%
If a line is just slightly longer than the column width, you may use the {\tt resizebox} environment on that equation. The result looks better and doesn't interfere with the paragraph's line spacing: %
\begin{equation}
\resizebox{.91\linewidth}{!}{$
\displaystyle
x = \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i
$}
\end{equation}%
This last solution may have to be adapted if you use different equation environments, but it can generally be made to work. Please notice that in any case:
\begin{itemize}
\item Equation numbers must be in the same font and size as the main text (10pt).
\item Your formula's main symbols should not be smaller than {\small small} text (9pt).
\end{itemize}
For instance, the formula
\begin{equation}
\resizebox{.91\linewidth}{!}{$
\displaystyle
x = \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j
$}
\end{equation}
would not be acceptable because the text is too small.
\section{Examples, Definitions, Theorems and Similar}
Examples, definitions, theorems, corollaries and similar must be written in their own paragraph. The paragraph must be separated by at least 2pt and no more than 5pt from the preceding and succeeding paragraphs. They must begin with the kind of item written in 10pt bold font followed by their number (e.g.: Theorem 1), optionally followed by a title/summary between parentheses in non-bold font and ended with a period. After that the main body of the item follows, written in 10 pt italics font (see below for examples).
In \LaTeX{}, we strongly recommend defining environments for your examples, definitions, propositions, lemmas, corollaries and similar. This can be done in your \LaTeX{} preamble using \texttt{\textbackslash{}newtheorem} -- see the source of this document for examples. Numbering for these items must be global, not per-section (e.g.: Theorem 1 instead of Theorem 6.1).
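For instance, a minimal preamble sketch along the following lines defines globally numbered environments (the environment names shown are only illustrative):
\begin{quote}
{\tt \textbackslash{}newtheorem\{theorem\}\{Theorem\}}\\
{\tt \textbackslash{}newtheorem\{definition\}\{Definition\}}\\
{\tt \textbackslash{}newtheorem\{example\}\{Example\}}
\end{quote}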
\begin{example}[How to write an example]
Examples should be written using the example environment defined in this template.
\end{example}
\begin{theorem}
This is an example of an untitled theorem.
\end{theorem}
You may also include a title or description using these environments as shown in the following theorem.
\begin{theorem}[A titled theorem]
This is an example of a titled theorem.
\end{theorem}
\section{Proofs}
Proofs must be written in their own paragraph separated by at least 2pt and no more than 5pt from the preceding and succeeding paragraphs. Proof paragraphs should start with the keyword ``Proof." in 10pt italics font. After that the proof follows in regular 10pt font. At the end of the proof, an unfilled square symbol (qed) marks the end of the proof.
In \LaTeX{}, proofs should be typeset using the \texttt{\textbackslash{}proof} environment.
\begin{proof}
This paragraph is an example of what a proof looks like using the \texttt{\textbackslash{}proof} environment.
\end{proof}
\section{Algorithms and Listings}
Algorithms and listings are a special kind of figures. Like all illustrations, they should appear floated to the top (preferably) or bottom of the page. However, their caption should appear in the header, left-justified and enclosed between horizontal lines, as shown in Algorithm~\ref{alg:algorithm}. The algorithm body should be terminated with another horizontal line. It is up to the authors to decide whether to show line numbers or not, how to format comments, etc.
In \LaTeX{} algorithms may be typeset using the {\tt algorithm} and {\tt algorithmic} packages, but you can also use one of the many other packages for the task.
\begin{algorithm}[tb]
\caption{Example algorithm}
\label{alg:algorithm}
\textbf{Input}: Your algorithm's input\\
\textbf{Parameter}: Optional list of parameters\\
\textbf{Output}: Your algorithm's output
\begin{algorithmic}[1]
\STATE Let $t=0$.
\WHILE{condition}
\STATE Do some action.
\IF {conditional}
\STATE Perform task A.
\ELSE
\STATE Perform task B.
\ENDIF
\ENDWHILE
\STATE \textbf{return} solution
\end{algorithmic}
\end{algorithm}
\section*{Acknowledgments}
The preparation of these instructions and the \LaTeX{} and Bib\TeX{}
files that implement them was supported by Schlumberger Palo Alto
Research, AT\&T Bell Laboratories, and Morgan Kaufmann Publishers.
Preparation of the Microsoft Word file was supported by IJCAI. An
early version of this document was created by Shirley Jowell and Peter
F. Patel-Schneider. It was subsequently modified by Jennifer
Ballentine and Thomas Dean, Bernhard Nebel, Daniel Pagenstecher,
Kurt Steinkraus, Toby Walsh and Carles Sierra. The current version
has been prepared by Marc Pujol-Gonzalez and Francisco Cruz-Mencia.
\section{Introduction}
Modern planning systems have successfully solved a variety of individual and important tasks, such as logistics
\citep{planningForLogistics}
and chemical synthesis \citep{planningForChemicalSynthesis}.
However, while such tasks have immediate practical importance and economic value, planning must be made much more efficient if it is to form the core computational process of a generally intelligent agent. In particular, generally intelligent agents must maintain world models that are \textit{open-scope}: rich enough to describe any task the agent may be asked to solve, thus necessarily including large amounts of information irrelevant to any \textit{individual} task \citep{GeorgeNecessity}. For instance, an agent that may be asked to solve any task in Minecraft must possess a model containing information necessary to craft various weapons to kill enemies, build shelter, and obtain and cook food items; however, when confronted with the specific, immediate goal of crafting a bed, information about shelter and food is simply irrelevant. When confronted with the different goal of cooking a stew, a completely different set of information is rendered irrelevant.
Recent work \citep{VallatiPlanningRobustness,TomAndRohanPLOI} has shown that many state-of-the-art planning engines---even Fast Downward \citep{Helmert:2006:FDP:1622559.1622565}, which attempts to prune the problem's state-action space before search---suffer significant reductions in performance when irrelevant objects, fluents or actions are included in domain descriptions. For example, the state-of-the-art ENHSP-2020 planner \citep{ENHSP} takes over $57$ minutes to find an optimal plan for a particular task within the Minecraft domain pictured in Figure \ref{FrontPage}, in large part because it must search through approximately $10^{135}$ states; simply deleting model components irrelevant to this specific task shrinks the state space by over $100$ orders of magnitude, allowing it to be solved in just over $3.5$ minutes. Generally-intelligent agents with rich, open-scope world models will therefore likely require a means of ignoring the vast majority of information in those models on a task-specific basis.
\begin{figure}
\centering
\includegraphics[width=\linewidth,trim={0 5cm 0 5cm},clip]{img/MinecraftBedmakingUnscoped.png}~\\%
\includegraphics[width=\linewidth,trim={0 5cm 0 5cm},clip]{img/MinecraftBedmakingScopedDye.png}
\caption{A Minecraft environment that supports a wide range of tasks. If asked to craft blue dye by picking blue flowers, the majority of the objects in the environment are irrelevant; scoping removes these objects from the representation (visualized here as graying them out), reducing planning time by an order of magnitude.}
\label{FrontPage}
\end{figure}
We identify and characterize different types of task-irrelevance and propose a novel algorithm, \textit{task scoping}, that exploits knowledge of a problem's
initial condition, goal condition, and transition system
to prune particular types of irrelevant fluents and actions without compromising the existence of optimal plans. This pruning process operates at the level of the problem definition instead of the concrete state-action space, so that pruning followed by planning is often substantially faster than planning in the original space. We prove that the resulting abstraction is sound and complete: all valid plans in the scoped domain are valid in the original and pass through the same abstract states, and
all optimal plans in the original domain can be translated into an optimal plan in the scoped domain.
Additionally, we show that task scoping can have better worst-case computational complexity than planning for problems with particular features and structure. We argue that many planning domains of interest possess such characteristics, and thus, scoping followed by planning will often be more computationally efficient than planning directly. Task scoping can thus be viewed as a planner-agnostic pre-processing step.
We empirically demonstrate task scoping's efficacy on several domains from past iterations of the International Planning Competition (IPC) and on novel domains that we developed. Our results demonstrate that task scoping can substantially reduce the state-action spaces of various planning tasks specified in PDDL 2.1 or SAS+ \citep{Helmert:2006:FDP:1622559.1622565}, and that the resulting abstraction varies appropriately depending on the task. Moreover, using task scoping as a pre-processing step often significantly reduces the time taken by the state-of-the-art ENHSP-2020 and Fast Downward planners to solve these large problems, especially for our open-scope Minecraft domain.
\section{Background}
We assume a grounded planning formalism equivalent to PDDL 2.1 level 2 \citep{PDDL2.1} without conditional effects. This formalism subsumes SAS+ problems without axioms \citep{Helmert:2006:FDP:1622559.1622565}. We define a planning problem as a tuple $PP = (S[\mathcal{J}], \mathcal{A}, \mathcal{T}, s_{0}, G)$, where $S[\mathcal{J}]$ is the state space, $\mathcal{J}$ is a set of grounded fluents\footnote{All fluents considered in this work are grounded, so we will henceforth simply use the term 'fluents'.} where each element is either numeric or propositional, $\mathcal{A}$ is a set of actions, $\mathcal{T}$ represents the transition system, $s_{0}$ is an initial state, and $G$ is a goal condition.
\begin{itemize}
\item $S = \prod_{j \in \mathcal{J}} S[j]$, where $S[j]$ is the set of values that fluent $j$ can take. We consider a specific state $s \in S$ as a vector $\langle s_1, \ldots, s_{|\mathcal{J}|}\rangle$ with a value assigned to each fluent (i.e., a rational number to every numeric fluent, and a truth value to every propositional fluent).
If $J \subseteq \mathcal{J}$, then $S[J] \coloneqq \prod_{j \in J} S[j]$ refers to the projection abstraction \citep{cegarjair,PatternDatabases} including only the fluents mentioned in $J$, meaning that all \textit{other} (non $J$) fluents' values are ignored.
\item $\mathcal{T}$ is a set of ({\sl pre-Condition; Action; Effect}) (or \textit{CAE}) triples,
where
\begin{itemize}
\item {\sl pre-Condition} is a predicate over $S[J]$ for some $J \subseteq \mathcal{J}$. We assume these preconditions are specified such that they are mutually exclusive for any pair of CAEs with the same action $a \in \mathcal{A}$. This choice ensures that only \textit{one} CAE triple's effects can occur at any time step,
\item {\sl Action} $\in$ $\mathcal{A}$,
\item {\sl Effect} is a list of deterministic functions $\langle F: S[j] \to S[j] \rangle_{j \in \mathcal{J}}$ describing how actions change the values of each of the $J$ fluents.
\end{itemize}
\item $s_{0}$ is an initial state.
\item $G$ is a conjunction of predicates over $S$.
\end{itemize}
An action is applicable in a state $s$ where the pre-condition of the corresponding CAE triple is satisfied. Applying an action in $s$ results in a new state $s'$ where the effect of the corresponding CAE triple has been applied to $s$. Applying a sequence of successively-applicable actions $<a_0,a_1,...,a_n>$ from the initial state $s_{0}$ results in a final state $s_{n+1}$. If the goal condition $G$ is true in this final state, then the action sequence (or \textit{trace}) is a \textit{valid plan}. Furthermore, we assume each action has some non-negative cost. Thus, we can define an \textit{optimal plan} as a valid plan with the minimum cost.
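To make this formalism concrete, the following Python sketch shows one possible encoding of CAE triples, planning problems, and action application; all class and function names here are ours and purely illustrative, not part of any planner's API.
\begin{verbatim}
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

State = Dict[str, object]  # fluent name -> value (truth value or number)

@dataclass
class CAE:
    """A (pre-Condition; Action; Effect) triple."""
    precondition: Callable[[State], bool]
    action: str
    effects: Dict[str, Callable[[object], object]]  # fluent -> update function

@dataclass
class PlanningProblem:
    fluents: List[str]                # J
    actions: List[str]                # A
    transitions: List[CAE]            # T
    initial_state: State              # s_0
    goal: Callable[[State], bool]     # G

def step(state: State, action: str, transitions: List[CAE]) -> Optional[State]:
    """Apply `action` in `state`. Preconditions of CAEs sharing an action are
    assumed mutually exclusive, so at most one triple fires."""
    for cae in transitions:
        if cae.action == action and cae.precondition(state):
            successor = dict(state)
            for fluent, update in cae.effects.items():
                successor[fluent] = update(state[fluent])
            return successor
    return None  # the action is not applicable in this state
\end{verbatim}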
\section{Task Scoping}
\noindent Before describing our approach and characterizing the problems for which it is useful, we first introduce a type of abstraction for a planning problem that we call a 'reduced planning problem'. Next, we introduce definitions for task-irrelevance with respect to an optimal plan and use these to prove sufficient conditions for a reduced planning problem to be sound and optimality-complete. Finally, we introduce an algorithm that efficiently computes such abstractions and characterize its computational complexity.
\subsection{Reduced Planning Problems}
Let $PP$ be a planning problem with fluents $\mathcal{J}$. Suppose some subset of these fluents $J^c \subseteq \mathcal{J}$ are \textit{deleted}; there is now a unique \textit{reduced planning problem} $PP_{r}[J]$ induced by this subset $J \subseteq \mathcal{J}$ where $J = \mathcal{J} \setminus J^c$.
$PP_{r}[J]$ is a projection abstraction.
We denote the states, actions, transition system, initial state and goal conditions for the reduced problem as $PP_{r}[J] = (S[J], \mathcal{A}[J], \mathcal{T}[J], s_{0}[J], G[J])$.
The initial and goal conditions remain the same except that clauses mentioning fluents in $J^c$ are deleted since they are no longer meaningful.\footnote{If $J^c = \emptyset$, the RPP is the same as the original planning problem.}
$\mathcal{A}[J]$ is the set of actions used by $\mathcal{T}[J]$.
The transition system $\mathcal{T}[J]$ is induced by these $J$ fluents as described below.
\paragraph{Induced Transition System.}
Two CAE triples $x, x' \in \mathcal{T}$ are \textit{equivalent with respect to} $J$, denoted $x \simeq_J x'$, if, for each fluent in $J$, $x$ and $x'$ have the same effect: $x.$effects$[J] = x'.$effects$[J]$.
In the RPP induced by $J$, we do not distinguish CAEs with identical effects on $J$. We obtain the transition system on the RPP induced by $J$, denoted $\mathcal{T}[J]$, by discarding all CAEs with no effect on $J$, partitioning the remaining set of CAEs according to $\simeq_J$, and then creating from each class $X$ a new \textit{quotient CAE} $\overline{x}$:
\begin{equation*}
\overline{x} = \left(\bigvee_{x' \in X} x'.\mathrm{\text{precondition}}, \; \bigcup_{x' \in X} \{x'.\mathrm{\text{action}}\}, \; x.\mathrm{\text{effects}}[J]\right),
\end{equation*}
where $x$ is any representative of $X$ (all members of $X$ have the same effects on $J$ by definition of $\simeq_J$).
When discussing the task scoping algorithm in Section \ref{subsec:Scoping_algo_sec}, we will also refer to $\overline{x}.\mathrm{\text{sideeffects}}$---the set of fluents \textit{not} in $J$ that may be affected when executing $\overline{x}$:
\begin{equation*}
\overline{x}.\mathrm{\text{sideeffects}} = \bigcup_{x' \in X} \mathrm{vars}\left(x'.\mathrm{\text{effects}}[J^c]\right).
\end{equation*}
Note that the side effects are \textit{not} included in the returned RPP; they are simply a book-keeping tool used in the task scoping algorithm. When writing out a quotient CAE, the side effects may be included as the fourth component.
As a running example, consider a version of the continuous playroom domain from \citet{playroompaper}. An agent controls 3 effectors (an eye, a hand, and a marker) to interact with 7 objects (a thermostat, a light switch, a red button, a green button, a ball, a bell, and a monkey). The domain can be discretized into a grid where the agent can take an action to move its effectors in the cardinal directions. To interact with the thermostat, light switch or buttons, the agent's eye and hand effectors must be at the same grid cell as the relevant object. The thermostat can be set to 1 of 5 different temperatures that do not impact other dynamics of the problem whatsoever. The light switch can be turned on and off to toggle the playroom's lights, and, when the lights are on, the green button can be pushed to turn on music, while the red button can be pushed to turn off music. Once the music is on, regardless of the state of the lights, the agent can move its eye and hand effectors to the ball and its marker effector to the bell to throw the ball at the bell and frighten the monkey. While that is the original goal, it is possible to specify other goals such as turning the music on, or turning the lights off.
Consider two different CAE triples within this example: one that moves the agent's \texttt{(agent1)} hand north and one that does the same while also flicking the thermostat \texttt{(thermo1)} if it is in the way.
\begin{itemize}
\item \texttt{((hand-y agent1) != (thermostat-y thermo1); move\_north\_hand; (hand-y agent1)++)}
\item \texttt{((hand-y agent1) == (thermostat-y thermo1); move\_and\_flick; (hand-y agent1)++ and (temperature)++)}
\end{itemize}
Suppose we want to compute the quotient CAE induced by these two triples, where \texttt{(hand-y agent1)} is within the subset $J$ while \texttt{(temperature)} is not. Since these two triples differ \textit{only} in their effects on variables in $J^c$, they would be merged to produce a single triple with a \textit{side effect} on temperature.\footnote{'(x) ++' is short for (increase (x) 1) \citep{PDDL2.1}.}
\begin{itemize}
\item \texttt{(True; move\_north; (hand-y agent1)++; temperature)}
\end{itemize}
The precondition of this CAE triple is simply \textit{True}, as a result of having taken the disjunction of the previous triples' preconditions. Both triples affect the agent's hand in the same way, regardless of whether it is at the thermostat.
Pseudocode for ReduceTransitions, a procedure that computes this induced transition system, is given below.
\begin{algorithm}
\caption{Full pseudocode for the ReduceTransitions algorithm}
\label{ReduceTransitions}
\begin{algorithmic}[1]
\Procedure{ReduceTransitions}{$\mathcal{T}$ , $J$}
\State $U \gets $ Partition of $\mathcal{T}$ based on $\simeq_J$
\State Discard from $U$ each part that does not affect $S[J]$
\State $\mathcal{T}[J] \gets \{\}$
\For{$X \in U$}
\State \begin{multline}
\overline{x} \gets \biggl(\bigvee_{x' \in X} x'.\text{prec}, \; \bigcup_{x' \in X} \{x'.\text{action}\},\\ x.\text{effects}[J], \; \bigcup_{x' \in X} \mathrm{vars}(x'.\text{effects}[J^c])\biggr)
\end{multline}
\State $\mathcal{T}[J]$.insert($\overline{x}$)
\EndFor
\State\Return $\mathcal{T}[J]$
\EndProcedure
\end{algorithmic}
\end{algorithm}
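As a complement to the pseudocode, the following Python sketch mirrors Algorithm~\ref{ReduceTransitions} under a simplified symbolic representation (preconditions as sets of clause strings, effects as (fluent, expression) pairs); all names are ours and illustrative.
\begin{verbatim}
from dataclasses import dataclass
from typing import Dict, FrozenSet, List, Set, Tuple

@dataclass(frozen=True)
class CAE:
    precondition: FrozenSet[str]             # conjunction of clause strings
    action: str
    effects: Tuple[Tuple[str, str], ...]     # (fluent, effect expression) pairs

@dataclass
class QuotientCAE:
    preconditions: List[FrozenSet[str]]      # disjunction over the class members
    actions: Set[str]
    effects: Dict[str, str]                  # effects restricted to J
    side_effects: Set[str]                   # fluents outside J that may change

def reduce_transitions(transitions: List[CAE], J: Set[str]) -> List[QuotientCAE]:
    # Partition the CAEs by their effects restricted to J (the relation ~_J),
    # discarding triples that have no effect on J at all.
    parts: Dict[Tuple[Tuple[str, str], ...], List[CAE]] = {}
    for cae in transitions:
        key = tuple(sorted((f, e) for f, e in cae.effects if f in J))
        if not key:
            continue                          # no effect on J: discard
        parts.setdefault(key, []).append(cae)

    # Build one quotient CAE per equivalence class.
    reduced = []
    for key, members in parts.items():
        reduced.append(QuotientCAE(
            preconditions=[m.precondition for m in members],
            actions={m.action for m in members},
            effects=dict(key),
            side_effects={f for m in members
                          for f, _ in m.effects if f not in J},
        ))
    return reduced
\end{verbatim}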
\paragraph{Sound and Optimality-Complete RPPs.}
\label{subsec:SCRPP}
It is possible to construct an RPP by deleting \textit{any} combination of fluents from \textit{any} planning problem. Intuitively, not all such RPPs will be useful. We wish to construct RPPs that are optimality-complete
and sound.
\theoremstyle{definition}
\begin{definition}
An RPP is \textit{optimality-complete} if an optimal solution to the original planning problem can be obtained as a solution to the reduced planning problem, or the original planning problem has no solutions.
\end{definition}
\theoremstyle{definition}
\begin{definition}
An RPP is \textit{sound} if each plan in the RPP gives the same trace with respect to the abstract state space (i.e., the $J$ fluents) when executed in the original planning problem.
\end{definition}
We call a sound and optimality-complete RPP an SC-RPP.
To find sufficient conditions for an RPP to be sound and optimality-complete, we first define what it means for a fluent to be irrelevant with respect to an optimal plan and distinguish between a few types of irrelevance.
\theoremstyle{definition}
\begin{definition}
A fluent is \textit{irrelevant} for a specific planning problem if projecting it away results in a planning problem whose optimal plans are also optimal for the original problem.
\end{definition}
\theoremstyle{definition}
\begin{definition}
A fluent is \textit{trivially irrelevant} if it is irrelevant for any initial state.
\end{definition}
\theoremstyle{definition}
\begin{definition}
A fluent is \textit{conditionally irrelevant} if it is irrelevant for the specified initial state, but may be relevant for a different initial state.
\end{definition}
In our running example, the room's temperature is a trivially irrelevant fluent because the optimal plan is the same regardless of the value of the \texttt{(temperature)} for any initial state. Additionally, the lights and music are conditionally irrelevant to the goal of frightening the monkey, because they can be projected away without changing the optimal plan (but only if the music is already on in the initial state).
\begin{definition} A \textit{causally-linked} fluent maintains its original value throughout all optimal plans, and arbitrarily changing its value while executing a valid trace may increase the cost of optimal plans. This term is adapted from \texttt{UCPOP} \citep{UCPOP_Original}.
\end{definition}
\begin{definition}
A fluent is \textit{causally-masked} if there is a set of causally-linked fluents such that, as long as each of these causally-linked fluents maintains its initial value throughout the trace, the causally-masked fluent can vary arbitrarily without impacting the quality of optimal plans.
\end{definition}
By \textit{vary arbitrarily}, we mean that the fluent's value may change without the agent taking an action.
In our running continuous playroom example, the status of the lights is causally-masked by the music being on, which is itself causally-linked.
Given these definitions, we can now define and describe a \textit{scoped} RPP.
\theoremstyle{definition}
\begin{definition}
For a given RPP, translate the preconditions in $\mathcal{T}[J]$ to conjunctive normal form (CNF), and let $\Phi$ be the set of clauses appearing in any of these preconditions. If, for each $\phi \in \Phi$, either:
\begin{enumerate}
\item $\phi$ is defined over $S[J]$, or
\item $\phi$ is true in $s_0$, and none of the fluents mentioned in $\phi$ are in the side effects of $\mathcal{T}[J]$,
\end{enumerate}
then the RPP is \textit{scoped}.
\end{definition}
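The two conditions above can be checked directly. The following Python sketch assumes a symbolic clause representation in which each CNF clause records the fluents it mentions and whether the initial state entails it; all names are ours and illustrative.
\begin{verbatim}
from dataclasses import dataclass
from typing import FrozenSet, List, Set

@dataclass(frozen=True)
class Clause:
    variables: FrozenSet[str]  # fluents mentioned by the CNF clause
    holds_in_s0: bool          # whether s_0 entails the clause

@dataclass
class QuotientCAE:
    clauses: List[Clause]      # CNF of the quotient precondition
    side_effects: Set[str]     # fluents outside J it may change

def is_scoped(reduced: List[QuotientCAE], J: Set[str]) -> bool:
    """Check the two conditions of a scoped RPP for every precondition clause."""
    side_effected = {f for x in reduced for f in x.side_effects}
    for x in reduced:
        for phi in x.clauses:
            defined_over_J = phi.variables <= J                       # condition (1)
            causally_linked = (phi.holds_in_s0
                               and not (phi.variables & side_effected))  # condition (2)
            if not (defined_over_J or causally_linked):
                return False
    return True
\end{verbatim}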
Suppose we are given a scoped RPP and the $J \subseteq \mathcal{J}$ fluents used to induce it. $\mathcal{J}$ can be partitioned into 3 distinct, non-overlapping sets:
\begin{itemize}
\item $J = J_{\mathit{rel}}$, the fluents satisfying (1) above. We call these fluents \textit{relevant}.
\item $J_{\mathit{CL}}$, the fluents mentioned in any $\phi$ satisfying (2) above. These are \textit{causally linked}.
\item $J_{\mathit{irrel}}$, all other fluents.
These fluents are either trivially irrelevant or causally-masked.
\end{itemize}
By the above definitions, the preconditions of the CAE triples within $\mathcal{T}[J]$ only mention fluents within $S[J_{\mathrm{\it rel}}] \cup S[J_{\mathit{CL}}]$. For any precondition $C$ of a CAE triple in a scoped RPP, we can decompose $C$ as the conjunction of a clause defined over $S[J_{\mathrm{\it rel}}]$ and a clause defined over $S[J_{\mathit{CL}}]$:
$$C = C[J_{\mathrm{\it rel}}] \wedge C[J_{\mathit{CL}}],$$
where $s_{0}[J_{\mathit{CL}}] \implies C[J_{\mathit{CL}}]$.
Given this definition, we can show that scoped RPPs are SC-RPPs under a certain condition.
Below, we sketch our reasoning by stating the theorems that prove this fact (full proofs are in the supplement).
\begin{theorem}
A scoped RPP is sound: an initial state and a sequence of $\overline{x}$ from a scoped RPP can be lifted to a sequence of $x$ from the original PP, and these two sequences induce the same sequence of states in $S[J_{\mathrm{\it rel}}]$.
\end{theorem}
\begin{lemma}
Given a scoped RPP, there is no
CAE
triple in $\mathcal{T}[J]$ that affects both $J_{\mathrm{\it rel}}$ and $J_{\mathit{CL}}$.
\end{lemma}
\begin{theorem}
A scoped RPP is optimality-complete if each goal fluent is contained in $J_{\mathit{rel}} \cup J_{\mathit{CL}}$.
\end{theorem}
\subsection{The Task Scoping Algorithm}
\label{subsec:Scoping_algo_sec}
We can now define an algorithm that produces a scoped RPP given a planning problem and thereby provably supports optimal planning. Algorithm~\ref{alg:Scoping_algo} contains the full pseudo-code for our task scoping algorithm. Intuitively, the algorithm begins by assuming that the only relevant fluents are the goal fluents that are not causally linked. It then calls ReduceTransitions to create an RPP that contains \textit{only} these fluents. If this RPP is not a scoped RPP, then there must exist at least one fluent mentioned in the preconditions of the reduced CAE triples that is not causally linked. The algorithm adds all such fluents to the set of relevant fluents ($J_{\mathrm{\it rel}}$) and continues this process until a scoped RPP is returned. Note that while ReduceTransitions distinguishes between CAEs with different effects on the same fluent, the details of those effects are ignored by both ReduceTransitions and Scope Task.
\begin{algorithm} [H]
\caption{Task Scoping. Note that $g$ is used as a `dummy' goal fluent, similar to \texttt{UCPOP}, to simplify the main loop. The supplement includes a proof that this algorithm returns a scoped RPP.
}
\label{alg:Scoping_algo}
\begin{algorithmic}[1]
\Procedure{Scope Task}{$S[\mathcal{J}], \mathcal{A}, \mathcal{T}, s_{0}, G$}
\State $\mathcal{J} \gets \mathcal{J} \cup \{g\}$
\State $\mathcal{T} \gets \mathcal{T} \cup \{(G$, doGoal, $g \gets$ True)$\}$
\State $J_{\mathrm{\it rel}} \gets \{g\}$
\State $T[J_{\mathrm{\it rel}}] \gets$ \Call{ReduceTransitions}{$\mathcal{T}$, $J_{\mathrm{\it rel}}$}
\While{($S[J_{\mathrm{\it rel}}], \mathcal{A}, T[J_{\mathrm{\it rel}}] , s_{0}, G$) is not a scoped RPP}
\State $J_{\mathrm{\it aff}} \gets J_{\mathrm{\it rel}} \cup \bigcup_{\overline{x} \in T[J_{\mathrm{\it rel}}]} \text{vars}(\overline{x}.\text{sideeffects})$
\For{$\overline{x} \in T[J_{\mathrm{\it rel}}]$}
\For{$\phi \in $ CNF$(\overline{x}.\text{precondition})$}
\If{$(s_{0} \centernot\implies \phi)\vee (\text{vars}(\phi) \cap J_{\mathrm{\it aff}} \neq \{\})$}
\State $J_{\mathrm{\it rel}} \gets J_{\mathrm{\it rel}} \cup \text{vars}(\phi)$
\EndIf
\EndFor
\EndFor
\State $T[J_{\mathrm{\it rel}}] \gets$ \Call{ReduceTransitions}{$\mathcal{T}$, $J_{\mathrm{\it rel}}$}
\EndWhile
\State \Return ($S[J_{\mathrm{\it rel}}], \mathcal{A}, T[J_{\mathrm{\it rel}}] , s_{0}, G$)
\EndProcedure
\end{algorithmic}
\end{algorithm}
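The following Python sketch is a compact rendering of the fixed point computed by Algorithm~\ref{alg:Scoping_algo}; the small Clause and QuotientCAE records from the scopedness sketch above are redeclared so that it stands alone, \texttt{reduce\_transitions} is assumed to behave like ReduceTransitions, and all names are illustrative. The loop terminates because each pass either adds at least one fluent to $J_{\mathrm{\it rel}}$ or returns.
\begin{verbatim}
from dataclasses import dataclass
from typing import Callable, FrozenSet, List, Set

@dataclass(frozen=True)
class Clause:
    variables: FrozenSet[str]   # fluents mentioned by the CNF clause
    holds_in_s0: bool           # whether s_0 entails the clause

@dataclass
class QuotientCAE:
    clauses: List[Clause]       # CNF of the quotient precondition
    side_effects: Set[str]      # fluents outside J_rel it may change

def scope_task(goal_fluents: Set[str],
               reduce_transitions: Callable[[Set[str]], List[QuotientCAE]]
               ) -> Set[str]:
    """Grow J_rel from the goal fluents until the induced RPP is scoped.
    `reduce_transitions` maps a fluent set J to the quotient CAEs it induces."""
    J_rel = set(goal_fluents)
    while True:
        reduced = reduce_transitions(J_rel)
        affected = J_rel | {f for x in reduced for f in x.side_effects}
        to_add: Set[str] = set()
        for x in reduced:
            for phi in x.clauses:
                if (not phi.holds_in_s0) or (phi.variables & affected):
                    to_add |= phi.variables
        to_add -= J_rel
        if not to_add:  # every clause already satisfies condition (1) or (2)
            return J_rel
        J_rel |= to_add
\end{verbatim}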
\paragraph{Computational Complexity.}
For a given planning problem, suppose the transition system $\mathcal{T}$ is composed of $t$ CAE triples and that there are $n_{j}$ fluents. The worst-case complexity of ReduceTransitions is $O(t)$. Given this, and inspecting the loop in Algorithm~\ref{alg:Scoping_algo}, the worst-case computational complexity is:
\begin{multline*}
O( n_{j} \cdot (t + t \cdot O( \text{convert to CNF} ) \cdot
\lvert \text{clauses in CNF} \rvert \cdot \\
(O(\text{check } s_{0} \centernot\implies \phi))
) + O(t) ).
\end{multline*}
The worst-case complexity of converting a clause to CNF is exponential. However, in practice, this conversion does not significantly impact the algorithm's runtime since most preconditions contain few clauses. Complexity is thus dominated by the $n_{j} \times t$ term.
Crucially, note that this time complexity does not depend on the concrete state-action space of the original task, since task scoping does not reason about the specific values that fluents can take. Thus, it can be much more efficient than searching the concrete state-action space, as is done in a naive reachability analysis, for problems composed of few fluents with large domains (such as integer fluents). Additionally, task scoping will tend to terminate more quickly for problems whose transition systems exhibit less 'coupling' (the term 'coupling' is defined in \citep{AccidentalComplexityHaslum}).
\section{Experimental Evaluation}
To evaluate task scoping's performance and utility empirically, we implemented the task scoping algorithm along with PDDL 2.1 and SAS+ \citep{Helmert:2006:FDP:1622559.1622565} domain parsers capable of extracting the information necessary for task scoping from planning problems written in either of these formats.\footnote{The details of this translation process and our implementation are discussed in the supplementary material.} All experiments were conducted on an Intel Core i7-8550U CPU with 2 quad-core 4GHz processors and 16GB of RAM.
\begin{figure}[tp]
\includegraphics[width=\linewidth]{img/planning_times_log.png}
\caption{Planning time using ENHSP-2020 with and without task scoping, as the number of irrelevant fluents are increased in the Multi-Switch Continuous Playroom Domain. Shading indicates standard deviation across 5 independent runs.}
\label{fig:MonkeyResults}
\end{figure}
\subsection{IPC Domains with Fast Downward}
\label{expers:FD_Experiments}
Perhaps the best-known pruning algorithm for classical planning problems is the translator component of the Fast Downward planning system (FD) \citep{Helmert:2006:FDP:1622559.1622565}. To empirically compare task scoping's utility against this existing approach, we selected 5 benchmark domains (Logistics, DriverLog, Satellite, Zenotravel and Gripper) from the optimal track of several previous iterations of the International Planning Competition (IPC) \citep{IPC_2002,IPC2014,IPC2009,IPC2000}. Since these domains do not generally contain enough irrelevance (especially conditional irrelevance) for pruning techniques to make a significant difference to search time \citep{AIPlanningPerspectiveOnAbstraction}, we modified 3 problem files from 4 out of 5 domains (Logistics, DriverLog, Satellite and Zenotravel) to contain some states and actions that are either trivially or conditionally irrelevant. We translated each problem to SAS+ using FD's translator, ran task scoping to prune this SAS+ file, then ran the FD planner in the \texttt{seq-opt-lmcut} configuration on each of these problems. We inspected the original and scoped SAS+ files to compare the relative sizes of their state-action spaces and measured the end-to-end wall-clock time for FD to plan with and without task scoping. Results are in Table \ref{table:experiment_times}.
These results reveal that task scoping is able to prune some of these problems more finely than FD's translator alone, and that scoping can reduce FD's search time by more than the time taken by scoping itself. Task scoping reduces the size of the state-action space significantly more than FD's translator alone for the $4$ domains that we added irrelevance to. By inspecting the SAS+ files, we observed that this difference in reduction was mostly because FD's translator was unable to prune any conditionally irrelevant states or actions. Additionally, for almost all problems within these 4 domains, running task scoping was able to significantly speed up FD's search time. This is especially evident in the Satellite domain, where the total time taken to find a plan was between $2\times$ and $6\times$ faster with task scoping than without. We also observed that task scoping was unable to prune any irrelevance in Gripper, which is unsurprising because it is a very simple domain that features neither trivial nor conditional irrelevance. However, the total planning time was approximately the same both with task scoping and without.
\begin{table*}[t]
\caption{Results for all of our experiments. All times are in seconds, and shown $\pm$ standard deviation across $5$ independent runs.}
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}lllllll@{}}
\toprule
problem & state $\times$ action size & scoped state $\times$ action size & scoping time & planning time (after scoping) & scoping + planning time & planning (no scoping) \\ \midrule
Logistics prob15 & $(2.29 \times 10^{21}) \times 650$ & $\mathbf{(1.14 \times 10^{9}) \times 250}$ & $1.55 \pm 0.02$ & $6.03 \pm 0.02$ & $\mathbf{7.58 \pm 0.02}$ & $23.8 \pm 0.5$ \\
Logistics prob20 & $(2.29 \times 10^{21}) \times 650$ & $\mathbf{(1.14 \times 10^{9}) \times 250}$ & $1.57 \pm 0.07$ & $10.8 \pm 0.04$ & $\mathbf{12.4 \pm 0.09}$ & $53.5 \pm 2$ \\
Logistics prob25 & $(2.29 \times 10^{21}) \times 650$ & $\mathbf{(1.93 \times 10^{10}) \times 290}$ & $1.70 \pm 0.03$ & $64.4 \pm 1$ & $\mathbf{66.1 \pm 1}$ & $257 \pm 7$ \\
DriverLog prob15 & $(2.97 \times 10^{21}) \times 2592$ & $\mathbf{(2.83 \times 10^{15}) \times 2112}$ & $6.71 \pm 0.08$ & $6.69 \pm 0.04$ & $13.4 \pm 0.1$ & $\mathbf{10.6 \pm 0.09}$ \\
DriverLog prob16 & $(2.85 \times 10^{27}) \times 4890$ & $\mathbf{(5.57 \times 10^{15}) \times 3540}$ & $14.7 \pm 0.1$ & $18.9 \pm 0.3$ & $\mathbf{33.7 \pm 0.2}$ & $47.1 \pm 0.7$ \\
DriverLog prob17 & $(8.69 \times 10^{35}) \times 6170$ & $\mathbf{(1.28 \times 10^{16}) \times 3770}$ & $17.2 \pm 0.4$ & $18.5 \pm 0.3$ & $\mathbf{35.7 \pm 0.2}$ & $48.3 \pm 0.2$ \\
Satellite prob05 & $(2.10 \times 10^{12}) \times 339$ & $\mathbf{(1.14 \times 10^{9}) \times 250}$ & $1.54 \pm 0.02$ & $1.04 \pm 0.008$ & $\mathbf{2.58 \pm 0.02}$ & $4.19 \pm 0.09$ \\
Satellite prob06 & $(1.32 \times 10^{9}) \times 582$ & $\mathbf{(1.09 \times 10^{7}) \times 362}$ & $1.35 \pm 0.02$ & $3.19 \pm 0.02$ & $\mathbf{4.54 \pm 0.03}$ & $12.5 \pm 0.06$ \\
Satellite prob07 & $(3.76 \times 10^{13}) \times 983$ & $\mathbf{(2.17 \times 10^{10}) \times 587}$ & $2.14 \pm 0.02$ & $103.0 \pm 4$ & $\mathbf{105.0 \pm 4.0}$ & $689.0 \pm 30.0$ \\
Zenotravel prob10 & $(7.19 \times 10^{11}) \times 1155$ & $\mathbf{(1.12 \times 10^{10}) \times 1095}$ & $4.48 \pm 0.02$ & $66.5 \pm 0.6$ & $\mathbf{71.0 \pm 0.6}$ & $77.0 \pm 2.0$ \\
Zenotravel prob12 & $(1.55 \times 10^{16}) \times 3375$ & $\mathbf{(7.47 \times 10^{11}) \times 3159}$ & $13.2 \pm 0.09$ & $91.0 \pm 1.0$ & $\mathbf{104 \pm 2.0}$ & $107 \pm 0.9$ \\
Zenotravel prob14 & $(6.46 \times 10^{19}) \times 6700$ & $\mathbf{(8.51 \times 10^{13}) \times 6200}$ & $13.0 \pm 0.2$ & $200.0 \pm 3.0$ & $\mathbf{213.0 \pm 3.0}$ & $227.0 \pm 4.0$ \\
Gripper prob04 & $(1.43 \times 10^{7}) \times 82$ & $(1.43 \times 10^{7}) \times 82$ & $0.449 \pm 0.01$ & $1.24 \pm 0.05$ & $1.69 \pm 0.05$ & $\mathbf{1.45 \pm 0.06}$ \\
Gripper prob05 & $(1.80 \times 10^{8}) \times 98$ & $(1.80 \times 10^{8}) \times 98$ & $0.589 \pm 0.09$ & $7.45 \pm 0.2$ & $\mathbf{8.04 \pm 0.2}$ & $8.41 \pm 1.0$ \\
Gripper prob06 & $(2.15 \times 10^{9}) \times 582$ & $(2.15 \times 10^{9}) \times 582$ & $0.500 \pm 0.01$ & $49.0 \pm 3.0$ & $\mathbf{49.0 \pm 3.0}$ & $52.3 \pm 4.0$ \\
& & & & & & \\
Playroom1 & $(16 \times 10^{32}) \times 18$ & $\mathbf{(2 \times 10^{32}) \times 14}$ & $0.276 \pm 0.01$ & $6.64 \pm 0.07$ & $\mathbf{6.91 \pm 0.07}$ & $8.38 \pm 0.3$ \\
Playroom3 & $(10.24 \times 10^{34}) \times 18$ & $\mathbf{(2 \times 10^{32}) \times 14}$ & $0.491 \pm 0.05$ & $6.81 \pm 0.04$ & $\mathbf{7.30 \pm 0.08}$ & $13.0 \pm 0.4$ \\
Playroom5 & $(65.546 \times 10^{35}) \times 18$ & $\mathbf{(2 \times 10^{32}) \times 14}$ & $0.782 \pm 0.02$ & $6.58 \pm 0.1$ & $\mathbf{7.37 \pm 0.2}$ & $19.2 \pm 0.3$ \\
Playroom7 & $(41.943 \times 10^{37}) \times 18$ & $\mathbf{(2 \times 10^{32}) \times 14}$ & $1.26 \pm 0.01$ & $6.86 \pm 0.05$ & $\mathbf{8.12 \pm 0.05}$ & $27.6 \pm 0.8$ \\
Playroom9 & $(26.844 \times 10^{39}) \times 18$ & $\mathbf{(2 \times 10^{32}) \times 14}$ & $1.94 \pm 0.02$ & $6.59 \pm 0.04$ & $\mathbf{8.53 \pm 0.04}$ & $38.7 \pm 0.7$ \\
Minecraft Wool Dyeing Task & $(4.81 \times 10^{135}) \times 29$ & $\mathbf{(3.53 \times 10^{25}) \times 11}$ & $3.70 \pm 0.05$ & $1.48 \pm 0.03$ & $\mathbf{5.14 \pm 0.07}$ & $168 \pm 5$ \\
Minecraft Plank Crafting Task & $(4.81 \times 10^{135}) \times 29$ & $\mathbf{(3.53 \times 10^{25}) \times 13}$ & $4.74 \pm 0.2$ & $3.44 \pm 0.1$ & $\mathbf{8.18 \pm 0.3}$ & $16.0 \pm 0.4$ \\
Minecraft Bed Making Task & $(4.81 \times 10^{135}) \times 29$ & $\mathbf{(1.02 \times 10^{28}) \times 16}$ & $4.21 \pm 0.07$ & $208 \pm 6$ & $\mathbf{212 \pm 6}$ & $3450 \pm 50$ \\ \bottomrule
\end{tabular}
}
\label{table:experiment_times}
\end{table*}
\subsection{Numeric Domains with ENHSP-2020}
\label{expers:ENHSP_Experiments}
\paragraph{Continuous Playroom.}
To study how task scoping's performance and utility scale with the size of the state-action space within a simple open-scope numeric planning domain, we implemented a version of our running example in PDDL 2.1 (without the trivially irrelevant \texttt{thermostat} object or \texttt{(temperature)} factor). We created progressively larger problems by increasing the number of conditionally irrelevant light switches and buttons. For each of these problems, we ran ENHSP-2020 equipped with its default heuristic both with and without pre-processing using task scoping, and measured the end-to-end wall-clock time to produce a plan (that is, we compared the time to parse and plan for ENHSP with the time taken to parse, ground, scope, save the scoped PDDL domain and problem files, then parse and plan with ENHSP). The results, shown in Figure~\ref{fig:MonkeyResults}, clearly demonstrate that performing task scoping as a pre-processing step can significantly speed up planning by reducing the size of the state-action space. By simply removing the irrelevant switches, buttons, and actions, the state-action space can often be reduced by a large multiplicative factor, as shown in Table~\ref{table:experiment_times}.
\paragraph{Minecraft.}
To examine task scoping's utility on a novel open-scope domain of interest, we created the \textit{Build-a-Bed} domain, which captures simplified dynamics and several tasks from Minecraft and is pictured in Figure~\ref{FrontPage}. The domain's main objective is to construct a blue bed---an important task in the videogame---in a simplified setting. The domain features a number of interactive items---such as diamonds, sticks, and various plants---strewn about the map that the agent can destroy, place elsewhere, or use to craft different items. Thus, the domain supports a large variety of potential tasks.
We wrote PDDL 2.1 files to express the Build-a-Bed domain and 3 specific tasks within it: (1) dye 3 wool blocks blue by plucking 3 blue flowers from a field and using these to craft blue dye, (2) mine wood using a diamond axe and use this to craft wooden planks, and (3) use the results of (1) and (2) to craft a blue bed and place it at a specific location. The agent possesses 3 wool blocks and a diamond axe in the initial state, which does not vary between tasks.
Task scoping is successfully able to recognize and remove a large number of irrelevant fluents and associated actions depending on the task chosen within this domain, as shown in Table~\ref{table:experiment_times}. The state space is reduced by over $10^{100}$ for each of these goals, and this reduction dramatically speeds up planning time for the Wool Dyeing and Bed Making tasks.
\section{Related Work}
Our work is certainly not the first to suggest pruning the state-action space of a planning problem before beginning search. \citet{Helmert:2006:FDP:1622559.1622565}'s widely-used Fast Downward Planner in particular includes a 'translation' phase before planning whereby abstract reachability analysis from the initial state is used to prune and merge various predicates. Recent works have extended this translator to perform more aggressive pruning and merging \citep{fivser2020lifted,fivser2020strengthening}. While the original translator does prune some irrelevant states and actions, we empirically observed (in Section \ref{expers:FD_Experiments}) that it is unable to prune conditionally irrelevant states and actions. Additionally, this approach and its successors perform a reachability analysis from the initial state and specifically attempt to minimize the encoding size of the problem, whereas our approach also accounts for the particular goal conditions and serves a related but different purpose (i.e., computing a projection that is sound and optimality-complete). Furthermore, these previous approaches operate strictly on propositional planning problems whereas task scoping can be applied to numeric problems as well (as demonstrated in Section \ref{expers:ENHSP_Experiments}).
Recent work has attempted to leverage machine learning methods to predict an abstraction that ignores objects \citep{TomAndRohanPLOI} or actions \citep{gnadActionPrediction} that are irrelevant to a goal. Such methods can discern both trivial and conditional irrelevance and drastically improve the search time for state-of-the-art planners like Fast Downward. However, they must be trained on a set of small instances of a planning problem with the same goal to learn to predict irrelevance. Additionally, they provide no guarantees that their predicted abstractions are solution-preserving, but rather incrementally add more objects or actions if planning fails. We view these methods as orthogonal to our approach and consider integrating them an interesting direction for future work.
Counterexample-guided abstraction refinement (CEGAR) \citep{cegarjair,cegar_pattern} generates abstractions through iterative refinement. CEGAR finds an optimal solution to the abstraction and checks whether it is a solution to the original problem. If it is, it returns the solution. If not, it either refines the abstraction and tries again, or returns a planning heuristic based on the abstract solution.
Task scoping differs in that it derives a sound and optimality-complete abstraction, rather than deriving heuristics based on lossy abstractions. CEGAR can use richer families of abstractions like cartesian abstractions, while task scoping currently only uses projections.
Finally, some preliminary ideas behind task scoping were introduced by \citet{kumar2020task} in a different problem setting (Factored Markov Decision Processes). The authors provided no empirical evidence or theoretical guarantees.
\section{Conclusion}
Task scoping is a novel, planner-agnostic abstraction algorithm that can recognize and delete provably task-irrelevant fluents and actions from planning problems. We proved that task scoping always preserves optimal plans and characterized the algorithm's computational complexity. Through experiments on a variety of domains, we showed that performing task scoping as a pre-computation can significantly reduce the state-action space of \textit{open-scope} problems---large planning problems with multiple possible goals---thereby enabling a state-of-the-art planner to solve these problems more quickly.
\clearpage
\section{Supplementary Material}
\subsection{Proofs}
\textbf{Theorem 1.} A scoped RPP is sound: an initial state and a sequence of $\overline{x}$ from a scoped RPP can be lifted to a sequence of $x$ from the original PP, and these two sequences induce the same sequence of states in $S[J_{\mathrm{\it rel}}]$.
\begin{proof}
True by construction.
\end{proof}
\textbf{Lemma 1.} Given a scoped RPP, there is no
CAE triple in $\mathcal{T}[J]$ that affects both $J_{\mathrm{\it rel}}$ and $J_{\mathit{CL}}$.
\begin{proof}
Suppose $J_{\mathrm{\it rel}}$ induces an SC-RPP, and $x \in \mathcal{T}$ affects both $j \in J_{\mathrm{\it rel}}$ and $j' \in J_{\mathit{CL}}$.
Then, $\overline{x} \in \mathcal{T}[J]$ affects $j$ and has a side effect on $j'$, contradicting Condition~2 from the definition of $J_{\mathit{CL}}$ in an SC-RPP.
\end{proof}
\textbf{Theorem 2.} A scoped RPP is optimality-complete if it contains each goal fluent in $J_{\mathit{rel}} \cup J_{\mathit{CL}}$.
\begin{proof}
Suppose we have a scoped RPP. We will map each concrete trace to an abstract trace of equal length that passes through the same states when projected onto $J_{\mathrm{\it rel}}$, and conclude that the RPP is optimality-complete.
Let $\tau$ be a trace in the concrete PP, where $\tau.a = (a_0, a_1, \ldots , a_n)$ is a sequence of actions that can be executed starting from the initial state $s_{0}$ and $\tau.s = (s_0, s_1, \ldots , s_{n+1})$ is the sequence of states obtained by taking $\tau.a$ from $s_0$. Let $\tau.x = (x_0, x_1, \ldots , x_n)$ be the sequence of CAE triples corresponding to taking $a_i$ from $s_i$ for each $i$.
Let the $i^{\mathrm{th}}$ state or action be indicated with a subscript, e.g., $\tau_{i}.a$.
Let \textit{non-affecting action} refer to any action that has no effects on $S[J_{\mathrm{\it rel}}]$.
Let $\tau_{i}.x[J_{\mathrm{\it rel}}]$ be the abstract CAE triple induced by $\tau_{i}.x$ on $J_{\mathrm{\it rel}}$.
Let $\overline{\tau}$ be the trace obtained by replacing each non-affecting action in $\tau$ with a no-op.
We will show that, for each $i$, either $\tau_{i}.x$ is non-affecting or that the effects of $\tau_{i}.x$ and $\overline{\tau}_{i}.x$ on the $J_{\mathrm{\it rel}}$ fluents are the same.
We will then conclude that $\tau.s[J_{\mathrm{\it rel}}] = \overline{\tau}.s[J_{\mathrm{\it rel}}]$.
Suppose now that $\tau_{i}.x$ is the first CAE triple in $\tau$ that affects any of the $J_{\mathrm{\it rel}}$ fluents.
\\ %
Since we only took non-affecting actions prior to $\tau_{i}.x$, we know that $\tau_{i}.s[J_{\mathrm{\it rel}}] = \overline{\tau}_{i}.s[J_{\mathrm{\it rel}}]$. We also know that $\overline{\tau}_i.x$.precondition can be written as $C[J_{\mathrm{\it rel}}] \wedge C[J_{\mathit{CL}}]$ by definition of our abstracted transition dynamics.
\\ %
By Lemma 1, only non-affecting actions can affect the $J_{\mathit{CL}}$ fluents, so replacing non-affecting actions with no-ops to produce $\overline{\tau}$ guarantees that the state
$\overline{\tau}_{i}.s$ has the same values for the $J_{\mathit{CL}}$ fluents as the initial state $s_0 = \overline{s}_0$; that is, $\overline{\tau}_{i}.s[J_{\mathit{CL}}] = s_0[J_{\mathit{CL}}]$, so $C[J_{\mathit{CL}}]$ is true.
Thus, by Condition~2 from the definition of a scoped RPP, $\overline{\tau}_{i}.x$ is executable from both $\tau_{i}.s$ and $\overline{\tau}_{i}.s$, and so the action $\tau_{i}.a$ must have the same effect on $J_{\mathit{rel}}$ in both. Therefore, $\tau_{i+1}.s[J_{\mathrm{\it rel}}] = \overline{\tau}_{i+1}.s[J_{\mathrm{\it rel}}]$.
We have just shown that if we assume $\tau_{i}.a$ is the first affecting action, then all actions up to $\tau_{i}.a$ can be replaced by no-ops and $\tau_{i+1}.s[J_{\mathrm{\it rel}}] = \overline{\tau}_{i+1}.s[J_{\mathrm{\it rel}}]$. Given this, suppose now that the next affecting action is $\tau_{j}.a$, where $j > i$. We can apply the exact same argument to show that $\tau_{j+1}.s[J_{\mathrm{\it rel}}] = \overline{\tau}_{j+1}.s[J_{\mathrm{\it rel}}]$. We can continue to do this for every subsequent affecting action after $j$.
\end{proof}
\begin{theorem}
The task scoping algorithm returns an SC-RPP.
\end{theorem}
\begin{proof}
The algorithm begins by setting $J_{\mathrm{\it rel}} = \{g\}$ and thereafter never deletes a fluent from $J_{\mathrm{\it rel}}$. Therefore, $g \in J_{\mathrm{\it rel}}$.
The algorithm adds at least one fluent to $J_{\mathrm{\it rel}}$ in each iteration of the loop or terminates; since $J_{\mathrm{\it rel}}$ is always a subset of the finite set $\mathcal{J}$, the loop terminates after at most $|\mathcal{J}|$ iterations.
Finally, scoping \textit{cannot} terminate without returning a \textit{scoped} RPP due to the explicit check in line 6. By the previous theorems, a scoped RPP with goal fluents in $J_{\mathrm{\it rel}} \cup J_{\mathrm{\it CL}}$ is sound and optimality-complete.
\end{proof}
\subsection{Implementation Details}
We run task scoping on PDDL 2.1 level 2 domains without conditional effects, or on SAS+\footnote{Our current implementation does not support SAS+ domains that have axioms.} domains obtained by running Fast Downward's translator on PDDL files. The specific process for each of these is described below.
\paragraph{PDDL 2.1 Domains.}
\begin{enumerate}
\item Parse the PDDL domain and problem files.
\item Ground all fluents and actions to objects in the problem files. We replace existential and universal quantifiers with disjunctions and conjunctions, respectively (a sketch of this expansion follows the list).
\item Create grounded CAE triples from the grounded actions.
\item Run the task scoping algorithm on the initial state, the grounded CAE triples and the goal conditions to get a list of relevant and causally linked fluents and relevant actions.
\item Find objects that are never part of a relevant fluent and ungrounded actions that have no relevant groundings.
\item Delete these objects and actions from the PDDL files.
\end{enumerate}
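To make the quantifier handling in step 2 concrete, the following is a minimal, self-contained Python sketch of expanding universal and existential quantifiers over a finite object set into conjunctions and disjunctions. The formula classes below are a toy representation chosen for illustration; they are not the data structures used by our actual parser.
\begin{verbatim}
from dataclasses import dataclass
from typing import List, Union

# Toy formula representation, sufficient to illustrate quantifier expansion.
@dataclass
class Atom:
    predicate: str
    args: List[str]

@dataclass
class And:
    parts: List["Formula"]

@dataclass
class Or:
    parts: List["Formula"]

@dataclass
class Forall:
    var: str
    body: "Formula"

@dataclass
class Exists:
    var: str
    body: "Formula"

Formula = Union[Atom, And, Or, Forall, Exists]

def substitute(f: Formula, var: str, obj: str) -> Formula:
    """Replace every free occurrence of `var` in `f` with the object `obj`."""
    if isinstance(f, Atom):
        return Atom(f.predicate, [obj if a == var else a for a in f.args])
    if isinstance(f, (And, Or)):
        return type(f)([substitute(p, var, obj) for p in f.parts])
    # A quantifier over the same variable shadows it, so stop substituting.
    if f.var == var:
        return f
    return type(f)(f.var, substitute(f.body, var, obj))

def expand_quantifiers(f: Formula, objects: List[str]) -> Formula:
    """Turn forall into a conjunction and exists into a disjunction."""
    if isinstance(f, Atom):
        return f
    if isinstance(f, (And, Or)):
        return type(f)([expand_quantifiers(p, objects) for p in f.parts])
    grounded = [expand_quantifiers(substitute(f.body, f.var, o), objects)
                for o in objects]
    return And(grounded) if isinstance(f, Forall) else Or(grounded)

# (exists ?f (at ?f field)) over two flowers becomes a disjunction of atoms.
goal = Exists("?f", Atom("at", ["?f", "field"]))
print(expand_quantifiers(goal, ["flower1", "flower2"]))
\end{verbatim}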
\paragraph{SAS+ Domains.}
\begin{enumerate}
\item Extract CAE triples from SAS+ grounded operators by simply assuming that all fluents are integers (i.e., ignoring the domain that SAS+ specifies for each fluent); a simplified sketch of this extraction follows the list.
\item Run the task scoping algorithm on the initial state, the grounded CAE triples and the goal conditions to get a list of relevant and causally linked fluents and relevant actions.
\item Delete grounded operators\footnote{Our current implementation does not delete irrelevant fluents from SAS+ tasks, even though this is possible. Rather, it indirectly removes these by removing all irrelevant grounded operators that refer to irrelevant fluents, which effectively decreases the search space for a planner. In our experimental section, we include in the state space calculations only fluents that appear in both the condition of some operator and the effect of some (possibly different) operator.} from the SAS+ domain that only involve irrelevant fluents.
\end{enumerate}
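As a rough illustration of step 1, the sketch below converts a single grounded SAS+ operator into a CAE triple under the integer-fluent assumption described above. The \texttt{SASOperator} record is a simplified stand-in for the operators produced by Fast Downward's translator, and the field names here are ours, chosen for readability.
\begin{verbatim}
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class SASOperator:
    name: str
    prevail: List[Tuple[str, int]]        # (variable, required value)
    pre_post: List[Tuple[str, int, int]]  # (variable, pre or -1, post)

@dataclass
class CAE:
    precondition: Dict[str, int]  # conjunction of variable == value tests
    action: str
    effects: Dict[str, int]       # variable := value assignments

def operator_to_cae(op: SASOperator) -> CAE:
    """Treat every SAS+ variable as an integer fluent, ignoring its domain."""
    precondition = dict(op.prevail)
    effects: Dict[str, int] = {}
    for var, pre, post in op.pre_post:
        if pre != -1:                 # -1 means "no precondition on this variable"
            precondition[var] = pre
        effects[var] = post
    return CAE(precondition, op.name, effects)

# Toy usage: an operator that requires var3 = 1 and changes var0 from 0 to 1.
op = SASOperator("pick-up ball1", prevail=[("var3", 1)],
                 pre_post=[("var0", 0, 1)])
print(operator_to_cae(op))
# CAE(precondition={'var3': 1, 'var0': 0}, action='pick-up ball1',
#     effects={'var0': 1})
\end{verbatim}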
Note that we do not delete causally-linked fluents, since doing so would make some actions ill-defined. We could remedy this by editing the action definitions so that they do not rely on the causally-linked fluents, but we abstained from doing so for the sake of simplicity.
Additionally, note that we implemented our approach entirely in Python, and though our experimental results indicate task scoping is already able to speed up state-of-the-art planners like ENHSP-2020 and Fast Downward, we believe the wall-clock time necessary to run our algorithm could be significantly reduced if we rewrote it in a compiled language like C++, or with Cython.
\bibliographystyle{named}
\section{Introduction}
Modern planning systems have successfully solved a variety of individual and important tasks, such as logistics
\citep{planningForLogistics}
and chemical synthesis \citep{planningForChemicalSynthesis}.
However, while such tasks have immediate practical importance and economic value, planning must be made much more efficient if it is to form the core computational process of a generally intelligent agent. In particular, generally intelligent agents must maintain world models that are \textit{open-scope}: rich enough to describe any task the agent may be asked to solve, thus necessarily including large amounts of information irrelevant to any \textit{individual} task \citep{GeorgeNecessity}. For instance, an agent that may be asked to solve any task in Minecraft must possess a model containing information necessary to craft various weapons to kill enemies, build shelter, and obtain and cook food items; however, when confronted with the specific, immediate goal of crafting a bed, information about shelter and food is simply irrelevant. When confronted with the different goal of cooking a stew, a completely different set of information is rendered irrelevant.
Recent work \citep{VallatiPlanningRobustness,TomAndRohanPLOI} has shown that many state-of-the-art planning engines---even Fast Downward \citep{Helmert:2006:FDP:1622559.1622565}, which attempts to prune the problem's state-action space before search---suffer significant reductions in performance when irrelevant objects, fluents or actions are included in domain descriptions. For example, the state-of-the-art ENHSP-2020 planner \citep{ENHSP} takes over $57$ minutes to find an optimal plan for a particular task within the Minecraft domain pictured in Figure \ref{FrontPage}, in large part because it must search through approximately $10^{135}$ states; simply deleting model components irrelevant to this specific task shrinks the state space by over $100$ orders of magnitude, allowing it to be solved in just over $3.5$ minutes. Generally-intelligent agents with rich, open-scope world models will therefore likely require a means of ignoring the vast majority of information in those models on a task-specific basis.
\begin{figure}
\centering
\includegraphics[width=\linewidth,trim={0 5cm 0 5cm},clip]{img/MinecraftBedmakingUnscoped.png}~\\%
\includegraphics[width=\linewidth,trim={0 5cm 0 5cm},clip]{img/MinecraftBedmakingScopedDye.png}
\caption{A Minecraft environment that supports a wide range of tasks. If asked to craft blue dye by picking blue flowers, the majority of the objects in the environment are irrelevant; scoping removes these objects from the representation (visualized here as graying them out), reducing planning time by an order of magnitude.}
\label{FrontPage}
\end{figure}
We identify and characterize different types of task-irrelevance and propose a novel algorithm, \textit{task scoping}, that exploits knowledge of a problem's
initial condition, goal condition, and transition system
to prune particular types of irrelevant fluents and actions without compromising the existence of optimal plans. This pruning process operates at the level of the problem definition rather than the concrete state-action space, so pruning followed by planning is often substantially faster than planning in the original space. We prove that the resulting abstraction is sound and optimality-complete: all valid plans in the scoped domain are valid in the original and pass through the same abstract states, and
all optimal plans in the original domain can be translated into an optimal plan in the scoped domain.
Additionally, we show that task scoping can have better worst-case computational complexity than planning for problems with particular features and structure. We argue that many planning domains of interest possess such characteristics, and thus, scoping followed by planning will often be more computationally efficient than planning directly. Task scoping can thus be viewed as a planner-agnostic pre-processing step.
We empirically demonstrate task scoping's efficacy on several domains from past iterations of the International Planning Competition (IPC) and on novel domains that we developed. Our results demonstrate that task scoping can substantially reduce the state-action spaces of various planning tasks specified in PDDL 2.1 or SAS+ \citep{Helmert:2006:FDP:1622559.1622565}, and that the resulting abstraction varies appropriately depending on the task. Moreover, using task scoping as a pre-processing step often significantly reduces the time taken by the state-of-the-art ENHSP-2020 and Fast Downward planners to solve these large problems, especially for our open-scope Minecraft domain.
\section{Background}
We assume a grounded planning formalism equivalent to PDDL 2.1 level 2 \citep{PDDL2.1} without conditional effects. This formalism subsumes SAS+ problems without axioms \citep{Helmert:2006:FDP:1622559.1622565}. We define a planning problem as a tuple $PP = (S[\mathcal{J}], \mathcal{A}, \mathcal{T}, s_{0}, G)$, where $S[\mathcal{J}]$ is the state space, $\mathcal{J}$ is a set of grounded fluents\footnote{All fluents considered in this work are grounded, so we will henceforth simply use the term 'fluents'.} where each element is either numeric or propositional, $\mathcal{A}$ is a set of actions, $\mathcal{T}$ represents the transition system, $s_{0}$ is an initial state, and $G$ is a goal condition.
\begin{itemize}
\item $S = \prod_{j \in \mathcal{J}} S[j]$, where $S[j]$ is the set of values that fluent $j$ can take. We consider a specific state $s \in S$ as a vector $\langle s_1,\ldots,s_{|\mathcal{J}|}\rangle$ with a value assigned to each fluent (i.e., a rational number to every numeric fluent, and a truth value to every propositional fluent).
If $J \subseteq \mathcal{J}$, then $S[J] \coloneqq \prod_{j \in J} S[j]$ refers to the projection abstraction \citep{cegarjair,PatternDatabases} including only the fluents mentioned in $J$, meaning that all \textit{other} (non $J$) fluents' values are ignored.
\item $\mathcal{T}$ is a set of ({\sl pre-Condition; Action; Effect}) (or \textit{CAE}) triples,
where
\begin{itemize}
\item {\sl pre-Condition} is a predicate over $S[J]$ for some $J \subseteq \mathcal{J}$. We assume these preconditions are specified such that they are mutually exclusive for any pair of CAEs with the same action $a \in \mathcal{A}$. This choice ensures that only \textit{one} CAE triple's effects can occur at any time step,
\item {\sl Action} $\in$ $\mathcal{A}$,
\item {\sl Effect} is a list of deterministic functions $\langle F: S[j] \to S[j] \rangle_{j \in \mathcal{J}}$ describing how the action changes the value of each fluent in $\mathcal{J}$.
\end{itemize}
\item $s_{0}$ is an initial state.
\item $G$ is a conjunction of predicates over $S$.
\end{itemize}
An action is applicable in a state $s$ where the pre-condition of the corresponding CAE triple is satisfied. Applying an action in $s$ results in a new state $s'$ where the effect of the corresponding CAE triple has been applied to $s$. Applying a sequence of successively-applicable actions $\langle a_0, a_1, \ldots, a_n \rangle$ from the initial state $s_{0}$ results in a final state $s_{n+1}$. If the goal condition $G$ is true in this final state, then the action sequence (or \textit{trace}) is a \textit{valid plan}. Furthermore, we assume each action has some non-negative cost. Thus, we can define an \textit{optimal plan} as a valid plan with the minimum cost.
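To make these definitions concrete, the following minimal Python sketch shows one possible encoding of a grounded CAE triple and of action application; the data structures and names are illustrative only and are not taken from our implementation.
\begin{verbatim}
from dataclasses import dataclass
from typing import Callable, Dict, Union

Value = Union[bool, int, float]
State = Dict[str, Value]  # maps each grounded fluent name to its value

@dataclass
class CAE:
    """A (pre-Condition; Action; Effect) triple."""
    precondition: Callable[[State], bool]         # predicate over some fluents
    action: str                                   # the action name from A
    effects: Dict[str, Callable[[Value], Value]]  # per-fluent update functions

def applicable(cae: CAE, state: State) -> bool:
    return cae.precondition(state)

def apply_cae(cae: CAE, state: State) -> State:
    """Return the successor state obtained by applying the CAE's effects."""
    successor = dict(state)
    for fluent, update in cae.effects.items():
        successor[fluent] = update(state[fluent])
    return successor

# Toy usage: moving the agent's hand north increments its y-coordinate.
move_north = CAE(
    precondition=lambda s: True,
    action="move_north_hand",
    effects={"hand-y agent1": lambda v: v + 1},
)
s0: State = {"hand-y agent1": 0, "temperature": 3}
s1 = apply_cae(move_north, s0) if applicable(move_north, s0) else s0
# s1 == {"hand-y agent1": 1, "temperature": 3}
\end{verbatim}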
\section{Task Scoping}
\noindent Before describing our approach and characterizing the problems for which it is useful, we first introduce a type of abstraction for a planning problem that we call a 'reduced planning problem'. Next, we introduce definitions for task-irrelevance with respect to an optimal plan and use these to prove sufficient conditions for a reduced planning problem to be sound and optimality-complete. Finally, we introduce an algorithm that efficiently computes such abstractions and characterize its computational complexity.
\subsection{Reduced Planning Problems}
Let $PP$ be a planning problem with fluents $\mathcal{J}$. Suppose some subset $J^c \subseteq \mathcal{J}$ of these fluents is \textit{deleted}; there is then a unique \textit{reduced planning problem} $PP_{r}[J]$ induced by the remaining subset $J \subseteq \mathcal{J}$, where $J = \mathcal{J} \setminus J^c$.
$PP_{r}[J]$ is a projection abstraction.
We denote the states, actions, transition system, initial state and goal conditions for the reduced problem as $PP_{r}[J] = (S[J], \mathcal{A}[J], \mathcal{T}[J], s_{0}[J], G[J])$.
The initial and goal conditions remain the same except that clauses mentioning fluents in $J^c$ are deleted since they are no longer meaningful.\footnote{If $J^c = \emptyset$, the RPP is the same as the original planning problem.}
$\mathcal{A}[J]$ is the set of actions used by $\mathcal{T}[J]$.
The transition system $\mathcal{T}[J]$ is induced by these $J$ fluents as described below.
\paragraph{Induced Transition System.}
Two CAE triples $x, x' \in \mathcal{T}$ are \textit{equivalent with respect to} $J$, denoted $x \simeq_J x'$, if, for each fluent in $J$, $x$ and $x'$ have the same effect: $x.$effects$[J] = x'.$effects$[J]$.
In the RPP induced by $J$, we do not distinguish CAEs with identical effects on $J$. We obtain the transition system of the RPP induced by $J$, denoted $\mathcal{T}[J]$, by discarding all CAEs with no effect on $J$, partitioning the remaining set of CAEs according to $\simeq_J$, and then creating from each part $X$ of the partition a new \textit{quotient CAE} $\overline{x}$ (below, $x$ denotes an arbitrary representative of $X$; all members of $X$ have the same effects on $J$):
\begin{equation*}
\overline{x} = \left(\bigvee_{x' \in X} x'.\mathrm{\text{precondition}},\; \bigcup_{x' \in X} \{x'.\mathrm{\text{action}}\},\; x.\mathrm{\text{effects}}[J]\right).
\end{equation*}
When discussing the task scoping algorithm in Section \ref{subsec:Scoping_algo_sec}, we will also refer to $\overline{x}.\mathrm{\text{sideeffects}}$---the set of fluents \textit{not} in $J$ that may be affected when executing $\overline{x}$:
\begin{equation*}
\overline{x}.\mathrm{\text{sideeffects}} = \bigcup_{x' \in X} \mathrm{vars}\left(x'.\mathrm{\text{effects}}[J^c]\right).
\end{equation*}
Note that the side effects are \textit{not} included in the returned RPP; they are simply a book-keeping tool used in the task scoping algorithm. When writing out a quotient CAE, the side effects may be included as the fourth component.
As a running example, consider a version of the continuous playroom domain from \citet{playroompaper}. An agent controls 3 effectors (an eye, a hand, and a marker) to interact with 7 objects (a thermostat, a light switch, a red button, a green button, a ball, a bell, and a monkey). The domain can be discretized into a grid where the agent can take an action to move its effectors in the cardinal directions. To interact with the thermostat, light switch or buttons, the agent's eye and hand effectors must be at the same grid cell as the relevant object. The thermostat can be set to 1 of 5 different temperatures that do not impact other dynamics of the problem whatsoever. The light switch can be turned on and off to toggle the playroom's lights, and, when the lights are on, the green button can be pushed to turn on music, while the red button can be pushed to turn off music. Once the music is on, regardless of the state of the lights, the agent can move its eye and hand effectors to the ball and its marker effector to the bell to throw the ball at the bell and frighten the monkey. While that is the original goal, it is possible to specify other goals such as turning the music on, or turning the lights off.
Consider two different CAE triples within this example: one that moves the agent's \texttt{(agent1)} hand north and one that does the same while also flicking the thermostat \texttt{(thermo1)} if it is in the way.
\begin{itemize}
\item \texttt{((hand-y agent1) != (thermostat-y thermo1); move\_north\_hand; (hand-y agent1)++)}
\item \texttt{((hand-y agent1) == (thermostat-y thermo1); move\_and\_flick; (hand-y agent1)++ and (temperature)++)}
\end{itemize}
Suppose that we want to compute the induced quotient CAE for these two triples, where \texttt{(hand-y agent1)} is within the subset $J$ while \texttt{(temperature)} is not. Since these two triples differ \textit{only} in their effects on variables in $J^c$, they would be merged to produce a single triple with a \textit{side effect} on temperature.\footnote{\texttt{(x)++} is short for \texttt{(increase (x) 1)} \citep{PDDL2.1}.}
\begin{itemize}
\item \texttt{(True; move\_north; (hand-y agent1)++; temperature)}
\end{itemize}
The precondition of this CAE triple is simply \textit{True}, as a result of having taken the disjunction of the previous triples' preconditions. Both triples affect the agent's hand in the same way, regardless of whether it is at the thermostat.
Pseudocode for ReduceTransitions, a procedure that computes this induced transition system, is given below.
\begin{algorithm}
\caption{Full pseudocode for the ReduceTransitions algorithm}
\label{ReduceTransitions}
\begin{algorithmic}[1]
\Procedure{ReduceTransitions}{$\mathcal{T}$ , $J$}
\State $U \gets $ Partition of $\mathcal{T}$ based on $\simeq_J$
\State Discard from $U$ each part that does not affect $S[J]$
\State $\mathcal{T}[J] \gets \{\}$
\For{$X \in U$}
\State Pick an arbitrary representative $x \in X$
\State \begin{multline}
\overline{x} \gets \biggl(\bigvee_{x' \in X} x'.\text{prec}, \bigcup_{x' \in X} \{x'.\text{action}\},\\ x.\text{effects}[J], \bigcup_{x' \in X} \mathrm{vars}(x'.\text{effects}[J^c])\biggr)
\end{multline}
\State $\mathcal{T}[J]$.insert($\overline{x}$)
\EndFor
\State\Return $\mathcal{T}[J]$
\EndProcedure
\end{algorithmic}
\end{algorithm}
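The following is a compact Python rendering of ReduceTransitions. It uses plain strings as a simplified stand-in for preconditions and symbolic effects (our implementation uses a richer representation), and it reproduces the merge from the running example above.
\begin{verbatim}
from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List, Set, Tuple

Effects = Tuple[Tuple[str, str], ...]   # (fluent, symbolic effect) pairs

@dataclass(frozen=True)
class CAE:
    precondition: str   # e.g. "(hand-y agent1) == (thermostat-y thermo1)"
    action: str
    effects: Effects

@dataclass
class QuotientCAE:
    precondition: str      # disjunction of member preconditions (unsimplified)
    actions: Set[str]
    effects: Effects       # effects restricted to J
    sideeffects: Set[str]  # affected fluents outside J

def reduce_transitions(caes: List[CAE], J: Set[str]) -> List[QuotientCAE]:
    # Partition by the effects restricted to J; drop CAEs with no effect on J.
    groups: Dict[Effects, List[CAE]] = defaultdict(list)
    for x in caes:
        key = tuple(sorted((f, e) for f, e in x.effects if f in J))
        if key:
            groups[key].append(x)
    reduced = []
    for effects_on_J, members in groups.items():
        reduced.append(QuotientCAE(
            precondition=" or ".join(f"({m.precondition})" for m in members),
            actions={m.action for m in members},
            effects=effects_on_J,
            sideeffects={f for m in members
                         for f, _ in m.effects if f not in J},
        ))
    return reduced

# The two hand-movement CAEs from the running example merge into one
# quotient CAE with a side effect on 'temperature'.
caes = [
    CAE("(hand-y agent1) != (thermostat-y thermo1)", "move_north_hand",
        (("hand-y agent1", "++"),)),
    CAE("(hand-y agent1) == (thermostat-y thermo1)", "move_and_flick",
        (("hand-y agent1", "++"), ("temperature", "++"))),
]
print(reduce_transitions(caes, J={"hand-y agent1"}))
\end{verbatim}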
\paragraph{Sound and Optimality-Complete RPPs.}
\label{subsec:SCRPP}
It is possible to construct an RPP by deleting \textit{any} combination of fluents from \textit{any} planning problem. Intuitively, not all such RPPs will be useful. We wish to construct RPPs that are optimality-complete
and sound.
\theoremstyle{definition}
\begin{definition}
An RPP is \textit{optimality-complete} if an optimal solution to the original planning problem can be obtained as a solution to the reduced planning problem, or the original planning problem has no solutions.
\end{definition}
\theoremstyle{definition}
\begin{definition}
An RPP is \textit{sound} if each plan in the RPP gives the same trace with respect to the abstract state space (i.e., the $J$ fluents) when executed in the original planning problem.
\end{definition}
We call a sound and optimality-complete RPP an SC-RPP.
To find sufficient conditions for an RPP to be sound and optimality-complete, we first define what it means for a fluent to be irrelevant with respect to an optimal plan and distinguish between a few types of irrelevance.
\theoremstyle{definition}
\begin{definition}
A fluent is \textit{irrelevant} for a specific planning problem if projecting it away results in a planning problem whose optimal plans are also optimal for the original problem.
\end{definition}
\theoremstyle{definition}
\begin{definition}
A fluent is \textit{trivially irrelevant} if it is irrelevant for every initial state.
\end{definition}
\theoremstyle{definition}
\begin{definition}
A fluent is \textit{conditionally irrelevant} if it is irrelevant for the specified initial state, but may be relevant for a different initial state.
\end{definition}
In our running example, the room's temperature is a trivially irrelevant fluent because the optimal plan is the same regardless of the value of the \texttt{(temperature)} for any initial state. Additionally, the lights and music are conditionally irrelevant to the goal of frightening the monkey, because they can be projected away without changing the optimal plan (but only if the music is already on in the initial state).
\begin{definition} A \textit{causally-linked} fluent maintains its original value throughout all optimal plans, and arbitrarily changing its value while executing a valid trace may increase the cost of optimal plans. This term is adapted from \texttt{UCPOP} \citep{UCPOP_Original}.
\end{definition}
\begin{definition}
A fluent is \textit{causally-masked} if there is a set of causally-linked fluents such that, as long as each of these causally-linked fluents maintains its initial value throughout the trace, the causally-masked fluent can vary arbitrarily without impacting the quality of optimal plans.
\end{definition}
By \textit{vary arbitrarily}, we mean that the fluent's value may change without the agent taking an action.
In our running continuous playroom example, the status of the lights is causally-masked by the music being on, which is itself causally-linked.
Given these definitions, we can now define and describe a \textit{scoped} RPP.
\theoremstyle{definition}
\begin{definition}
For a given RPP, translate the preconditions in $\mathcal{T}[J]$ to conjunctive normal form (CNF), and let $\Phi$ be the set of clauses appearing in any of these preconditions. If, for each $\phi \in \Phi$, either:
\begin{enumerate}
\item $\phi$ is defined over $S[J]$, or
\item $\phi$ is true in $s_0$, and none of the fluents mentioned in $\phi$ are in the side effects of $\mathcal{T}[J]$,
\end{enumerate}
then the RPP is \textit{scoped}.
\end{definition}
Suppose we are given a scoped RPP and the $J \subseteq \mathcal{J}$ fluents used to induce it. $\mathcal{J}$ can be partitioned into 3 distinct, non-overlapping sets:
\begin{itemize}
\item $J = J_{\mathit{rel}}$, the fluents satisfying (1) above. We call these fluents \textit{relevant}.
\item $J_{\mathit{CL}}$, the fluents mentioned in any $\phi$ satisfying (2) above. These are \textit{causally linked}.
\item $J_{\mathit{irrel}}$, all other fluents.
These fluents are either trivially irrelevant or causally-masked.
\end{itemize}
By the above definitions, the preconditions of the CAE triples within $\mathcal{T}[J]$ only mention fluents within $S[J_{\mathrm{\it rel}}] \cup S[J_{\mathit{CL}}]$. For any precondition $C$ of a CAE triple in a scoped RPP, we can decompose $C$ as the conjunction of a clause defined over $S[J_{\mathrm{\it rel}}]$ and a clause defined over $S[J_{\mathit{CL}}]$:
$$C = C[J_{\mathrm{\it rel}}] \wedge C[J_{\mathit{CL}}],$$
where $s_{0}[J_{\mathit{CL}}] \implies C[J_{\mathit{CL}}]$.
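The clause conditions above translate directly into a simple check. Below is a self-contained Python sketch that tests whether an RPP induced by $J$ is scoped, given its quotient CAEs; the clause and CAE records are simplified placeholders rather than our implementation's data structures.
\begin{verbatim}
from dataclasses import dataclass
from typing import FrozenSet, List, Set

@dataclass(frozen=True)
class Clause:
    vars: FrozenSet[str]   # fluents mentioned by this CNF clause
    true_in_s0: bool       # whether the clause holds in the initial state

@dataclass
class QuotientCAE:
    precondition_cnf: List[Clause]
    sideeffects: Set[str]  # fluents outside J that may be affected

def is_scoped(quotient_caes: List[QuotientCAE], J: Set[str]) -> bool:
    """Check conditions (1) and (2) from the definition of a scoped RPP."""
    all_sideeffects: Set[str] = set()
    for x in quotient_caes:
        all_sideeffects |= x.sideeffects
    for x in quotient_caes:
        for phi in x.precondition_cnf:
            # condition (1): clause defined over S[J]
            defined_over_J = phi.vars <= J
            # condition (2): true initially and untouched by side effects
            causally_linked = (phi.true_in_s0
                               and not (phi.vars & all_sideeffects))
            if not (defined_over_J or causally_linked):
                return False
    return True

# A precondition clause on 'music-on' (true initially, never a side effect)
# does not block scoping: 'music-on' is causally linked.
x = QuotientCAE(
    precondition_cnf=[Clause(vars=frozenset({"music-on"}), true_in_s0=True)],
    sideeffects={"temperature"},
)
print(is_scoped([x], J={"ball-x", "ball-y"}))  # True
\end{verbatim}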
Given this definition, we can show that scoped RPPs are SC-RPPs under a certain condition.
Below, we sketch our reasoning by stating the theorems that prove this fact (full proofs are in the supplement).
\begin{theorem}
A scoped RPP is sound: an initial state and a sequence of $\overline{x}$ from a scoped RPP can be lifted to a sequence of $x$ from the original PP, and these two sequences induce the same sequence of states in $S[J_{\mathrm{\it rel}}]$.
\end{theorem}
\begin{lemma}
Given a scoped RPP, there is no
CAE
triple in $\mathcal{T}[J]$ that affects both $J_{\mathrm{\it rel}}$ and $J_{\mathit{CL}}$.
\end{lemma}
\begin{theorem}
A scoped RPP is optimality-complete if each goal fluent is contained in $J_{\mathit{rel}} \cup J_{\mathit{CL}}$.
\end{theorem}
\subsection{The Task Scoping Algorithm}
\label{subsec:Scoping_algo_sec}
We can now define an algorithm that produces a scoped RPP given a planning problem and thereby provably supports optimal planning. Algorithm~\ref{alg:Scoping_algo} contains the full pseudo-code for our task scoping algorithm. Intuitively, the algorithm begins by assuming that the only relevant fluents are the goal fluents that are not causally linked. It then calls ReduceTransitions to create an RPP that contains \textit{only} these fluents. If this RPP is not a scoped RPP, then there must exist at least one fluent mentioned in the preconditions of the reduced CAE triples that is not causally linked. The algorithm adds all such fluents to the set of relevant fluents ($J_{\mathrm{\it rel}}$) and continues this process until a scoped RPP is returned. Note that while ReduceTransitions distinguishes between CAEs with different effects on the same fluent, the details of the effect are ignored by both ReduceTransitions and Scope Task.
\begin{algorithm} [H]
\caption{Task Scoping. Note that $g$ is used as a `dummy' goal fluent, similar to \texttt{UCPOP}, to simplify the main loop. The supplement includes a proof that this algorithm returns a scoped RPP.
}
\label{alg:Scoping_algo}
\begin{algorithmic}[1]
\Procedure{Scope Task}{$S[\mathcal{J}], \mathcal{A}, \mathcal{T}, s_{0}, G$}
\State $\mathcal{J} \gets \mathcal{J} \cup \{g\}$
\State $\mathcal{T} \gets \mathcal{T} \cup \{(G$, doGoal, $g \gets$ True)$\}$
\State $J_{\mathrm{\it rel}} \gets \{g\}$
\State $T[J_{\mathrm{\it rel}}] \gets$ \Call{ReduceTransitions}{$\mathcal{T}$, $J_{\mathrm{\it rel}}$}
\While{($S[J_{\mathrm{\it rel}}], \mathcal{A}, T[J_{\mathrm{\it rel}}] , s_{0}, G$) is not a scoped RPP}
\State $J_{\mathrm{\it aff}} \gets J_{\mathrm{\it rel}} \cup \bigcup_{\overline{x} \in T[J_{\mathrm{\it rel}}]} \overline{x}.\text{sideeffects}$
\For{$\overline{x} \in T[J_{\mathrm{\it rel}}]$}
\For{$\phi \in $ CNF$(\overline{x}.\text{precondition})$}
\If{$(s_{0} \centernot\implies \phi)\vee (\text{vars}(\phi) \cap J_{\mathrm{\it aff}} \neq \emptyset)$}
\State $J_{\mathrm{\it rel}} \gets J_{\mathrm{\it rel}} \cup \text{vars}(\phi)$
\EndIf
\EndFor
\EndFor
\State $T[J_{\mathrm{\it rel}}] \gets$ \Call{ReduceTransitions}{$\mathcal{T}$, $J_{\mathrm{\it rel}}$}
\EndWhile
\State \Return ($S[J_{\mathrm{\it rel}}], \mathcal{A}, T[J_{\mathrm{\it rel}}] , s_{0}, G$)
\EndProcedure
\end{algorithmic}
\end{algorithm}
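For readers who prefer code to pseudocode, the fixpoint loop at the heart of Algorithm~\ref{alg:Scoping_algo} can be sketched in self-contained Python as follows. For brevity, the sketch works directly on ground CAE triples rather than quotient CAEs and takes the goal fluents directly rather than introducing the dummy fluent $g$, so it computes a conservative over-approximation of $J_{\mathrm{\it rel}}$; the data structures are illustrative and do not reflect our implementation.
\begin{verbatim}
from dataclasses import dataclass
from typing import FrozenSet, List, Set

@dataclass(frozen=True)
class Clause:
    vars: FrozenSet[str]   # fluents mentioned by this CNF clause
    true_in_s0: bool       # whether the initial state satisfies the clause

@dataclass
class GroundCAE:
    precondition: List[Clause]  # precondition in CNF
    action: str
    affected: FrozenSet[str]    # fluents changed by the effect

def scope_fluents(caes: List[GroundCAE],
                  goal_fluents: Set[str]) -> Set[str]:
    """Return an over-approximation of the relevant fluents J_rel."""
    j_rel: Set[str] = set(goal_fluents)
    while True:
        # CAEs that affect at least one currently-relevant fluent.
        relevant_caes = [x for x in caes if x.affected & j_rel]
        # J_aff: relevant fluents plus side effects of the relevant CAEs.
        j_aff = j_rel | {f for x in relevant_caes for f in x.affected}
        new_rel = set(j_rel)
        for x in relevant_caes:
            for phi in x.precondition:
                if (not phi.true_in_s0) or (phi.vars & j_aff):
                    new_rel |= phi.vars
        if new_rel == j_rel:
            return j_rel   # fixpoint: the induced RPP is scoped
        j_rel = new_rel
\end{verbatim}
On the playroom example, for instance, this loop leaves the music and lights out of $J_{\mathrm{\it rel}}$ when the music is already on in the initial state, mirroring the conditional irrelevance discussed above.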
\paragraph{Computational Complexity.}
For a given planning problem, suppose the transition system $\mathcal{T}$ is composed of $t$ CAE triples, and that there are $n_{j}$ fluents. The worst-case complexity of ReduceTransitions is $O(t)$. Given this, and inspecting the loop in Algorithm~\ref{alg:Scoping_algo}, the worst-case computational complexity is:
\begin{multline*}
O( n_{j} \cdot (t + t \cdot O( \text{convert to CNF} ) \cdot
\lvert \text{clauses in CNF} \rvert \cdot \\
(O(\text{check } s_{0} \centernot\implies \phi))
) + O(t) ).
\end{multline*}
The worst-case complexity of converting a precondition to CNF is exponential. However, in practice, this conversion does not significantly impact the algorithm's runtime since most preconditions contain few clauses. Complexity is thus dominated by the $n_{j} \cdot t$ term.
Crucially, note that this time complexity does not depend on the concrete state-action space of the original task, since task scoping does not reason about the specific values that fluents can take. Thus, it can be much more efficient than searching the concrete state-action space, as is done in a naive reachability analysis, for problems composed of few fluents with large domains (such as integer fluents). Additionally, task scoping terminates more quickly for problems whose transition systems exhibit less 'coupling', in the sense defined by \citet{AccidentalComplexityHaslum}.
\section{Experimental Evaluation}
To evaluate task scoping's performance and utility empirically, we implemented the task scoping algorithm along with PDDL 2.1 and SAS+ \citep{Helmert:2006:FDP:1622559.1622565} domain parsers capable of extracting the information necessary for task scoping from planning problems written in either of these formats.\footnote{The details of this translation process and our implementation are discussed in the supplementary material.} All experiments were conducted on an Intel Core i7-8550U CPU with 2 quad-core 4GHz processors and 16GB of RAM.
\begin{figure}[tp]
\includegraphics[width=\linewidth]{img/planning_times_log.png}
\caption{Planning time using ENHSP-2020 with and without task scoping, as the number of irrelevant fluents is increased in the Multi-Switch Continuous Playroom Domain. Shading indicates standard deviation across 5 independent runs.}
\label{fig:MonkeyResults}
\end{figure}
\subsection{IPC Domains with Fast Downward}
\label{expers:FD_Experiments}
Perhaps the best-known pruning algorithm for classical planning problems is the translator component of the Fast Downward planning system (FD) \citep{Helmert:2006:FDP:1622559.1622565}. To empirically compare task scoping's utility against this existing approach, we selected 5 benchmark domains (Logistics, DriverLog, Satellite, Zenotravel and Gripper) from the optimal track of several previous iterations of the International Planning Competition (IPC) \citep{IPC_2002,IPC2014,IPC2009,IPC2000}. Since these domains do not generally contain enough irrelevance (especially conditional irrelevance) for pruning techniques to make a significant difference to search time \citep{AIPlanningPerspectiveOnAbstraction}, we modified 3 problem files from 4 out of 5 domains (Logistics, DriverLog, Satellite and Zenotravel) to contain some states and actions that are either trivially or conditionally irrelevant. We translated each problem to SAS+ using FD's translator, ran task scoping to prune this SAS+ file, then ran the FD planner in the \texttt{seq-opt-lmcut} configuration on each of these problems. We inspected the original and scoped SAS+ files to compare the relative sizes of their state-action spaces and measured the end-to-end wall-clock time for FD to plan with and without task scoping. Results are in Table \ref{table:experiment_times}.
These results reveal that task scoping is able to prune some of these problems more finely than FD's translator alone, and that scoping can reduce FD's search time by more than the time taken by scoping itself. Task scoping reduces the size of the state-action space significantly more than FD's translator alone for the $4$ domains that we added irrelevance to. By inspecting the SAS+ files, we observed that this difference in reduction was mostly because FD's translator was unable to prune any conditionally irrelevant states or actions. Additionally, for almost all problems within these 4 domains, running task scoping was able to significantly speed up FD's search time. This is especially evident in the Satellite domain, where the total time taken to find a plan was between $2\times$ and $6\times$ faster with task scoping than without. We also observed that task scoping was unable to prune any irrelevance in Gripper, which is unsurprising because it is a very simple domain that features neither trivial nor conditional irrelevance. However, the total planning time was approximately the same both with task scoping and without.
\begin{table*}[t]
\caption{Results for all of our experiments. All times are in seconds, and shown $\pm$ standard deviation across $5$ independent runs.}
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}lllllll@{}}
\toprule
Problem & State $\times$ action size & Scoped state $\times$ action size & Scoping time & Planning time (after scoping) & Scoping + planning time & Planning time (no scoping) \\ \midrule
Logistics prob15 & $(2.29 \times 10^{21}) \times 650$ & $\mathbf{(1.14 \times 10^{9}) \times 250}$ & $1.55 \pm 0.02$ & $6.03 \pm 0.02$ & $\mathbf{7.58 \pm 0.02}$ & $23.8 \pm 0.5$ \\
Logistics prob20 & $(2.29 \times 10^{21}) \times 650$ & $\mathbf{(1.14 \times 10^{9}) \times 250}$ & $1.57 \pm 0.07$ & $10.8 \pm 0.04$ & $\mathbf{12.4 \pm 0.09}$ & $53.5 \pm 2$ \\
Logistics prob25 & $(2.29 \times 10^{21}) \times 650$ & $\mathbf{(1.93 \times 10^{10}) \times 290}$ & $1.70 \pm 0.03$ & $64.4 \pm 1$ & $\mathbf{66.1 \pm 1}$ & $257 \pm 7$ \\
DriverLog prob15 & $(2.97 \times 10^{21}) \times 2592$ & $\mathbf{(2.83 \times 10^{15}) \times 2112}$ & $6.71 \pm 0.08$ & $6.69 \pm 0.04$ & $13.4 \pm 0.1$ & $\mathbf{10.6 \pm 0.09}$ \\
DriverLog prob16 & $(2.85 \times 10^{27}) \times 4890$ & $\mathbf{(5.57 \times 10^{15}) \times 3540}$ & $14.7 \pm 0.1$ & $18.9 \pm 0.3$ & $\mathbf{33.7 \pm 0.2}$ & $47.1 \pm 0.7$ \\
DriverLog prob17 & $(8.69 \times 10^{35}) \times 6170$ & $\mathbf{(1.28 \times 10^{16}) \times 3770}$ & $17.2 \pm 0.4$ & $18.5 \pm 0.3$ & $\mathbf{35.7 \pm 0.2}$ & $48.3 \pm 0.2$ \\
Satellite prob05 & $(2.10 \times 10^{12}) \times 339$ & $\mathbf{(1.14 \times 10^{9}) \times 250}$ & $1.54 \pm 0.02$ & $1.04 \pm 0.008$ & $\mathbf{2.58 \pm 0.02}$ & $4.19 \pm 0.09$ \\
Satellite prob06 & $(1.32 \times 10^{9}) \times 582$ & $\mathbf{(1.09 \times 10^{7}) \times 362}$ & $1.35 \pm 0.02$ & $3.19 \pm 0.02$ & $\mathbf{4.54 \pm 0.03}$ & $12.5 \pm 0.06$ \\
Satellite prob07 & $(3.76 \times 10^{13}) \times 983$ & $\mathbf{(2.17 \times 10^{10}) \times 587}$ & $2.14 \pm 0.02$ & $103.0 \pm 4$ & $\mathbf{105.0 \pm 4.0}$ & $689.0 \pm 30.0$ \\
Zenotravel prob10 & $(7.19 \times 10^{11}) \times 1155$ & $\mathbf{(1.12 \times 10^{10}) \times 1095}$ & $4.48 \pm 0.02$ & $66.5 \pm 0.6$ & $\mathbf{71.0 \pm 0.6}$ & $77.0 \pm 2.0$ \\
Zenotravel prob12 & $(1.55 \times 10^{16}) \times 3375$ & $\mathbf{(7.47 \times 10^{11}) \times 3159}$ & $13.2 \pm 0.09$ & $91.0 \pm 1.0$ & $\mathbf{104 \pm 2.0}$ & $107 \pm 0.9$ \\
Zenotravel prob14 & $(6.46 \times 10^{19}) \times 6700$ & $\mathbf{(8.51 \times 10^{13}) \times 6200}$ & $13.0 \pm 0.2$ & $200.0 \pm 3.0$ & $\mathbf{213.0 \pm 3.0}$ & $227.0 \pm 4.0$ \\
Gripper prob04 & $(1.43 \times 10^{7}) \times 82$ & $(1.43 \times 10^{7}) \times 82$ & $0.449 \pm 0.01$ & $1.24 \pm 0.05$ & $1.69 \pm 0.05$ & $\mathbf{1.45 \pm 0.06}$ \\
Gripper prob05 & $(1.80 \times 10^{8}) \times 98$ & $(1.80 \times 10^{8}) \times 98$ & $0.589 \pm 0.09$ & $7.45 \pm 0.2$ & $\mathbf{8.04 \pm 0.2}$ & $8.41 \pm 1.0$ \\
Gripper prob06 & $(2.15 \times 10^{9}) \times 582$ & $(2.15 \times 10^{9}) \times 582$ & $0.500 \pm 0.01$ & $49.0 \pm 3.0$ & $\mathbf{49.0 \pm 3.0}$ & $52.3 \pm 4.0$ \\
& & & & & & \\
Playroom1 & $(16 \times 10^{32}) \times 18$ & $\mathbf{(2 \times 10^{32}) \times 14}$ & $0.276 \pm 0.01$ & $6.64 \pm 0.07$ & $\mathbf{6.91 \pm 0.07}$ & $8.38 \pm 0.3$ \\
Playroom3 & $(10.24 \times 10^{34}) \times 18$ & $\mathbf{(2 \times 10^{32}) \times 14}$ & $0.491 \pm 0.05$ & $6.81 \pm 0.04$ & $\mathbf{7.30 \pm 0.08}$ & $13.0 \pm 0.4$ \\
Playroom5 & $(65.546 \times 10^{35}) \times 18$ & $\mathbf{(2 \times 10^{32}) \times 14}$ & $0.782 \pm 0.02$ & $6.58 \pm 0.1$ & $\mathbf{7.37 \pm 0.2}$ & $19.2 \pm 0.3$ \\
Playroom7 & $(41.943 \times 10^{37}) \times 18$ & $\mathbf{(2 \times 10^{32}) \times 14}$ & $1.26 \pm 0.01$ & $6.86 \pm 0.05$ & $\mathbf{8.12 \pm 0.05}$ & $27.6 \pm 0.8$ \\
Playroom9 & $(26.844 \times 10^{39}) \times 18$ & $\mathbf{(2 \times 10^{32}) \times 14}$ & $1.94 \pm 0.02$ & $6.59 \pm 0.04$ & $\mathbf{8.53 \pm 0.04}$ & $38.7 \pm 0.7$ \\
Minecraft Wool Dyeing Task & $(4.81 \times 10^{135}) \times 29$ & $\mathbf{(3.53 \times 10^{25}) \times 11}$ & $3.70 \pm 0.05$ & $1.48 \pm 0.03$ & $\mathbf{5.14 \pm 0.07}$ & $168 \pm 5$ \\
Minecraft Plank Crafting Task & $(4.81 \times 10^{135}) \times 29$ & $\mathbf{(3.53 \times 10^{25}) \times 13}$ & $4.74 \pm 0.2$ & $3.44 \pm 0.1$ & $\mathbf{8.18 \pm 0.3}$ & $16.0 \pm 0.4$ \\
Minecraft Bed Making Task & $(4.81 \times 10^{135}) \times 29$ & $\mathbf{(1.02 \times 10^{28}) \times 16}$ & $4.21 \pm 0.07$ & $208 \pm 6$ & $\mathbf{212 \pm 6}$ & $3450 \pm 50$ \\ \bottomrule
\end{tabular}
}
\label{table:experiment_times}
\end{table*}
\subsection{Numeric Domains with ENHSP-2020}
\label{expers:ENHSP_Experiments}
\paragraph{Continuous Playroom.}
To study how task scoping's performance and utility scale with the size of the state-action space within a simple open-scope numeric planning domain, we implemented a version of our running example in PDDL 2.1 (without the trivially irrelevant \texttt{thermostat} object or \texttt{(temperature)} factor). We created progressively larger problems by increasing the number of conditionally irrelevant light switches and buttons. For each of these problems, we ran ENHSP-2020 equipped with its default heuristic both with and without pre-processing using task scoping, and measured the end-to-end wall-clock time to produce a plan (that is, we compared the time to parse and plan for ENHSP with the time taken to parse, ground, scope, save the scoped PDDL domain and problem files, then parse and plan with ENHSP). The results, shown in Figure~\ref{fig:MonkeyResults}, clearly demonstrate that performing task scoping as a pre-processing step can significantly speed up planning by reducing the size of the state-action space. By simply removing the irrelevant switches, buttons, and actions, the state-action space can often be reduced by a large multiplicative factor, as shown in Table~\ref{table:experiment_times}.
\paragraph{Minecraft.}
To examine task scoping's utility on a novel open-scope domain of interest, we created the \textit{Build-a-Bed Domain} containing simplified dynamics and several tasks from Minecraft and pictured in Figure~\ref{FrontPage}. The domain's main objective is to construct a blue bed---an important task in the videogame---in a simplified setting. The domain features a number of interactive items---such as diamonds, sticks and various plants---strewn about the map, that the agent can destroy, place elsewhere or use to craft different items. Thus, the domain supports a large variety of potential tasks.
We wrote PDDL 2.1 files to express the Build-a-Bed domain and 3 specific tasks within it: (1) dye 3 wool blocks blue by plucking 3 blue flowers from a field and using these to craft blue dye, (2) mine wood using a diamond axe and use this to craft wooden planks, and (3) use the results of (1) and (2) to craft a blue bed and place it at a specific location. The agent possesses 3 wool blocks and a diamond axe in the initial state, which does not vary between tasks.
Task scoping is successfully able to recognize and remove a large number of irrelevant fluents and associated actions depending on the task chosen within this domain, as shown in Table~\ref{table:experiment_times}. The state space is reduced by over $10^{100}$ for each of these goals, and this reduction dramatically speeds up planning time for the Wool Dyeing and Bed Making tasks.
\section{Related Work}
Our work is certainly not the first to suggest pruning the state-action space of a planning problem before beginning search. \citet{Helmert:2006:FDP:1622559.1622565}'s widely-used Fast Downward Planner in particular includes a ``translation'' phase before planning whereby abstract reachability analysis from the initial state is used to prune and merge various predicates. Recent works have extended this translator to perform more aggressive pruning and merging \citep{fivser2020lifted,fivser2020strengthening}. While the original translator does prune some irrelevant states and actions, we empirically observed (in Section \ref{expers:FD_Experiments}) that it is unable to prune conditionally irrelevant states and actions. Additionally, this approach and its successors perform a reachability analysis from the initial state and specifically attempt to minimize the encoding size of the problem, whereas our approach also accounts for the particular goal conditions and serves a related, but different purpose (i.e., computing a projection that is sound and optimality-complete). Furthermore, these previous approaches operate strictly on propositional planning problems whereas task scoping can be applied to numeric problems as well (as demonstrated in Section \ref{expers:ENHSP_Experiments}).
Recent work has attempted to leverage machine learning methods to predict an abstraction that ignores objects \citep{TomAndRohanPLOI} or actions \citep{gnadActionPrediction} that are irrelevant to a goal. Such methods can discern both trivial and conditional irrelevance and drastically improve the search time for state-of-the-art planners like Fast Downward. However, they must be trained on a set of small instances of a planning problem with the same goal to learn to predict irrelevance. Additionally, they provide no guarantees that their predicted abstractions are solution-preserving, but rather incrementally add more objects or actions if planning fails. We view these methods as orthogonal to our approach and consider integrating them an interesting direction for future work.
Counterexample-guided abstraction refinement (CEGAR) \citep{cegarjair,cegar_pattern} generates abstractions through iterative refinement. CEGAR finds an optimal solution to the abstraction and checks whether it is a solution to the original problem. If it is, it returns the solution. If not, it either refines the abstraction and tries again, or returns a planning heuristic based on the abstract solution.
Task scoping differs in that it derives a sound and optimality-complete abstraction, rather than deriving heuristics based on lossy abstractions. CEGAR can use richer families of abstractions, such as Cartesian abstractions, while task scoping currently only uses projections.
Finally, some preliminary ideas behind task scoping were introduced by \citet{kumar2020task} in a different problem setting (Factored Markov Decision Processes). The authors provided no empirical evidence or theoretical guarantees.
\section{Conclusion}
Task scoping is a novel, planner-agnostic abstraction algorithm that can recognize and delete provably task-irrelevant fluents and actions from planning problems. We proved that task scoping always preserves optimal plans and characterized the algorithm's computational complexity. Through experiments on a variety of domains, we showed that performing task scoping as a pre-computation can significantly reduce the state-action space of \textit{open-scope} problems---large planning problems with multiple possible goals---thereby enabling a state-of-the-art planner to solve these problems more quickly.
\clearpage
\section{Supplementary Material}
\subsection{Proofs}
\textbf{Theorem 1.} A scoped RPP is sound: an initial state and a sequence of $\overline{x}$ from a scoped RPP can be lifted to a sequence of $x$ from the original PP, and these two sequences induce the same sequence of states in $S[J_{\mathrm{\it rel}}]$.
\begin{proof}
True by construction.
\end{proof}
\textbf{Lemma 1.} Given a scoped RPP, there is no
CAE triple in $\mathcal{T}[J]$ that affects both $J_{\mathrm{\it rel}}$ and $J_{\mathit{CL}}$.
\begin{proof}
Suppose $J_{\mathrm{\it rel}}$ induces an SC-RPP, and $x \in \mathcal{T}$ affects both $j \in J_{\mathrm{\it rel}}$ and $j' \in J_{\mathit{CL}}$.
Then, $\overline{x} \in \mathcal{T}[J]$ affects $j$ and has a side effect on $j'$, contradicting Condition~2 from the definition of $J_{\mathit{CL}}$ in an SC-RPP.
\end{proof}
\textbf{Theorem 2.} A scoped RPP is optimality-complete if it contains each goal fluent in $J_{\mathit{rel}} \cup J_{\mathit{CL}}$.
\begin{proof}
Suppose we have a scoped RPP. We will map each concrete trace to an abstract trace of equal length that passes through the same states when projected onto $J_{rel}$, and conclude that the RPP is complete.
Let $\tau$ be a trace in the concrete PP, with $\tau.a = (a_0, a_1, \ldots , a_n)$ the sequence of actions executed starting from the initial state $s_{0}$ and $\tau.s = (s_0, s_1, \ldots , s_{n+1})$ the sequence of states obtained by taking $\tau.a$ from $s_0$. Let $\tau.x = (x_0, x_1, \ldots , x_n)$ be the sequence of CAE triples corresponding to taking $a_i$ from $s_i$ for each $i$.
Let the $i^{th}$ state or action be indicated with a subscript, e.g., $\tau_{i}.a$.
Let \textit{non-affecting action} refer to any action that has no effects on $S[J_{\mathrm{\it rel}}]$.
Let $\tau_{i}.x[J_{\mathrm{\it rel}}]$ be the abstract CAE triple induced by $\tau_{i}.x$ on $J_{\mathrm{\it rel}}$.
Let $\overline{\tau}$ be the trace obtained by replacing each non-affecting action in $\tau$ with a no-op.
We will show that, for each $i$, either $\tau_{i}.x$ is non-affecting or that the effects of $\tau_{i}.x$ and $\overline{\tau}_{i}.x$ on the $J_{\mathrm{\it rel}}$ fluents are the same.
We will then conclude that $\tau.s[J_{\mathrm{\it rel}}] = \overline{\tau}.s[J_{\mathrm{\it rel}}]$.
Suppose now that $\tau_{i}.x$ is the first CAE triple in $\tau$ that affects any of the $J_{\mathrm{\it rel}}$ fluents.
\\ %
Since we only took non-affecting actions prior to $\tau_{i}.x$, we know that $\tau_{i}.s[J_{\mathrm{\it rel}}] = \overline{\tau}_{i}.s[J_{\mathrm{\it rel}}]$. We also know that $\overline{\tau}_i.x$.precondition can be written as $C[J_{\mathrm{\it rel}}] \wedge C[J_{\mathit{CL}}]$ by definition of our abstracted transition dynamics.
\\ %
By Lemma 1, only non-affecting actions can affect fluents in $J_{\mathit{CL}}$, so replacing non-affecting actions with no-ops to produce $\overline{\tau}$ guarantees that the state
$\overline{\tau}_{i}.s$ has the same values for the $J_{\mathit{CL}}$ fluents as the initial state $s_0 = \overline{s}_0$; hence $\overline{\tau}_{i}.s[J_{\mathit{CL}}] = s_0[J_{\mathit{CL}}]$, and $C[J_{\mathit{CL}}]$ is true.
Thus, by Property 2 of an SC-RPP, $\overline{\tau}_{i}.x$ is executable from both $\tau_{i}.s$ and $\overline{\tau}_{i}.s$ and thus the action $\tau_{i}.a$ must have the same effect on $J_{\mathit{rel}}$. Therefore, $\tau_{i+1}.s[J_{\mathrm{\it rel}}] = \overline{\tau}_{i+1}.s[J_{\mathrm{\it rel}}]$.
We have just shown that if we assume $\tau_{i}.a$ is the first affecting action, then all actions up to $\tau_{i}.a$ can be replaced by no-ops and $\tau_{i+1}.s[J_{\mathrm{\it rel}}] = \overline{\tau}_{i+1}.s[J_{\mathrm{\it rel}}]$. Given this, suppose now that the next affecting action is $\tau_{j}.a$, where $j > i$. We can apply the exact same argument to show that $\tau_{j+1}.s[J_{\mathrm{\it rel}}] = \overline{\tau}_{j+1}.s[J_{\mathrm{\it rel}}]$. We can continue to do this for every subsequent affecting action after $j$.
\end{proof}
\begin{theorem}
The task scoping algorithm returns an SC-RPP.
\end{theorem}
\begin{proof}
The algorithm begins by setting $J_{\mathrm{\it rel}} = \{g\}$ and thereafter never deletes a fluent from $J_{\mathrm{\it rel}}$. Therefore, $g \in J_{\mathrm{\it rel}}$.
Each iteration either adds at least one fluent to $J_{\mathrm{\it rel}}$ or terminates; since $J_{\mathrm{\it rel}}$ is always a subset of the finite set $\mathcal{J}$, the algorithm terminates.
Finally, scoping \textit{cannot} terminate without returning a \textit{scoped} RPP due to the explicit check in line 6. By the previous theorems, a scoped RPP with goal fluents in $J_{\mathrm{\it rel}} \cup J_{\mathrm{\it CL}}$ is sound and optimality-complete.
\end{proof}
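To make the structure used in this proof concrete, the following is a minimal Python sketch of the fixed-point loop; the representation of CAE triples as condition/affected fluent sets is a simplification, the treatment of causally-linked fluents and side-effect checks is omitted, and this illustrates only the iteration structure, not our actual implementation.
\begin{verbatim}
def scope(goal_fluents, triples):
    """Grow the relevant-fluent set from the goal to a fixed point.

    Each CAE triple is simplified here to a pair
    (condition_fluents, affected_fluents) of frozensets.
    """
    rel = set(goal_fluents)        # J_rel starts as the goal fluents
    while True:
        new = set()
        for cond, affected in triples:
            if affected & rel:     # triple affects a relevant fluent,
                new |= cond - rel  # so its condition becomes relevant
        if not new:                # nothing left to add: terminate
            return rel             # always a subset of all fluents
        rel |= new                 # at least one fluent added per pass

# Toy usage: the goal fluent 'g' is affected under condition 'a'.
triples = [(frozenset({"a"}), frozenset({"g"})),
           (frozenset({"b"}), frozenset({"c"}))]
print(scope({"g"}, triples))       # prints a set containing 'g' and 'a'
\end{verbatim}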
\subsection{Implementation Details}
We run task scoping on PDDL 2.1 level 2 domains without conditional effects or on SAS+\footnote{Our current implementation does not support SAS+ domains that have axioms.} domains obtained by running Fast Downward's translator on PDDL files. The specific process for each of these is described below.
\paragraph{PDDL 2.1 Domains.}
\begin{enumerate}
\item Parse the PDDL domain and problem files.
\item Ground all fluents and actions to objects in the problem files. We replace existential and universal quantifiers with disjunctions and conjunctions, respectively.
\item Create grounded CAE triples from the grounded actions.
\item Run the task scoping algorithm on the initial state, the grounded CAE triples and the goal conditions to get a list of relevant and causally linked fluents and relevant actions.
\item Find objects that are never part of a relevant fluent and ungrounded actions that have no relevant groundings.
\item Delete these objects and actions from the PDDL files.
\end{enumerate}
\paragraph{SAS+ Domains}
\begin{enumerate}
\item Extract CAE triples from SAS+ grounded operators by simply assuming that all fluents are integers (i.e., ignore the domain of each fluent that SAS+ specifies).
\item Run the task scoping algorithm on the initial state, the grounded CAE triples and the goal conditions to get a list of relevant and causally linked fluents and relevant actions.
\item Delete grounded operators\footnote{Our current implementation does not delete irrelevant fluents from SAS+ tasks, even though this is possible. Rather, it indirectly removes these by removing all irrelevant grounded operators that refer to irrelevant fluents, which effectively decreases the search space for a planner. In our experimental section, we include in the state space calculations only fluents that appear in both the condition of some operator and the effect of some (possibly different) operator.} from the SAS+ domain that only involve irrelevant fluents.
\end{enumerate}
Note that we do not delete causally-linked fluents, since doing so would make some actions ill-defined. We could remedy this by editing the action definitions so that they do not rely on the causally-linked fluents, but we abstained from doing so for the sake of simplicity.
Additionally, note that we implemented our approach entirely in Python, and though our experimental results indicate task scoping is already able to speed up state-of-the-art planners like ENHSP-2020 and Fast Downward, we believe the wall-clock time necessary to run our algorithm could be significantly improved if we rewrote it in a compiled language like C++ or Cython.
\bibliographystyle{named}
\section{\label{sec:Intro} Introduction}
Nucleons (protons and neutrons) are the building blocks of atomic nuclei, the structure of which provides an excellent
laboratory to advance our understanding of how quantum chromodynamics (QCD) -- the theory of the strong interaction -- works quantitatively in the nonperturbative region, where our current knowledge is rather poor. The proton root-mean-square (rms)
charge radius -- defined as
\begin{equation}
r_{p} \equiv r_{p,rms} \equiv \sqrt{\langle r^{2} \rangle} = \left( -6 \left. \frac{\mathrm{d} G_{E}^{p}(Q^2)}
{\mathrm{d}Q^2} \right|_{Q^{2}=0} \right)^{1/2} ,
\label{eq:eqn_rp}
\end{equation}
with $G_{E}^{p}$ being the proton electric form factor and $Q^{2}$ the four-momentum transfer squared measured in lepton scattering
experiments -- also has a major impact on bound-state quantum electrodynamics calculations of atomic energy levels.
As such the proton charge radius defined in the same way as in lepton scattering experiments~\cite{Miller:2018ybm} can be
determined from hydrogen spectroscopic measurements. However, there are distinct discrepancies in the measurement results,
observed among three types of experiments. The discrepancies mostly arose after 2010, when high-precision muonic
hydrogen ($\mu$H) spectroscopy experiments reported two values of $r_{p}$, being $0.8418 \pm 0.0007$~fm
\cite{Pohl:2010} and $0.8409 \pm 0.0004$~fm \cite{Antognini:1900n}. On the other hand, the world-average value from CODATA-2014
-- $r_{p} = 0.8751 \pm 0.0061$~fm \cite{Mohr:2015ccw} -- determined from atomic hydrogen ($e$H) spectroscopy experiments,
and the results from electron-proton ($e$-$p$) scattering experiments until 2010 mostly agreed with each other.
The challenge stemming from such a difference between the $r_{p}$ values, measured from different
types of the experiments, is known as the {\it proton charge radius puzzle} \cite{Pohl:2013yb,Carlson:2015jba,Hill:2017wzi}.
In the last few years, four more $r_{p}$ measurements from $e$H spectroscopy have been reported. Within experimental
uncertainties, the result of \cite{Fleurbaey:2018} is consistent with the previous $e$H spectroscopy results, while two others \cite{Beyer:2017,Bezginov:2019} support the $\mu$H spectroscopy results. However, the latest result, from \cite{Grinin:2020}, is $r_p = 0.8482 \pm 0.0038$~fm, which exceeds the $\mu$H results by $\sim 1.9\sigma$.
Such an agreement with the $\mu$H spectroscopy results is also observed from the $r_{p}$ measured by the PRad collaboration
at Jefferson Lab \cite{Xiong:2019} -- $r_{p} = 0.831 \pm 0.007_{\rm stat} \pm 0.012_{\rm syst}$~fm -- that used a magnetic-spectrometer-free,
calorimeter-based method in an unpolarized elastic $e$-$p$ scattering experiment at very low $Q^{2}$, down to
$ 2.1\!\times\!10^{-4}$~GeV$^{2}$/c$^{2}$ \cite{Gasparian:2014rna,Peng:2015szv}.
The situation becomes similarly interesting and challenging if we move on to discuss measurements of the rms charge
radius of the deuteron, $r_{d}$, in electron-deuteron ($e$-$d$) scattering experiments as well as in $eD$ and $\mu D$ spectroscopy.
In particular, the CREMA collaboration has reported a deuteron charge radius -- $r_{d} = 2.12562 \pm 0.00078$~fm -- from a
muonic spectroscopy-based measurement of three $2P \rightarrow 2S$ transitions in $\mu D$ atoms \cite{Pohl:2016}, which
is 2.7 times more accurate but 7.5-$\sigma$ smaller than the CODATA-2010 world-average value \cite{Mohr:2012}. The radius from \cite{Pohl:2016} is
also 3.5-$\sigma$ smaller than the $r_{d}$ value, $2.1415 \pm 0.0045$~fm, extracted from an electronic spectroscopy-based measurement \cite{Pohl:2016glp} of $1S \rightarrow 2S$ transitions in $eD$ atoms, these transitions having previously been measured in \cite{Parthey:2010aya}.
Thereby, one also observes discrepancies from $r_{d}$ measurements (like in the case of $r_{p}$) that have given rise to
another challenge, dubbed as the {\it deuteron charge radius puzzle}. The PRad collaboration has proposed a low-$Q^{2}$
unpolarized elastic $e$-$d$ scattering experiment named as DRad -- basically anchored upon PRad's
experimental setup -- for a model-independent extraction of $r_{d}$ with a subpercent $(\leq 0.25\%)$ precision, in order
to address this newly developed puzzle \cite{DRad}.
Thus, given the importance of measuring not only $r_{p}$ but also $r_{d}$, our goal is to show how one can robustly extract
$r_{d}$ and control its uncertainties in a fitting procedure, using four parametrizations of the deuteron charge form factor,
$G_{C}^{d}$ \cite{Abbott:2000ak,Abbott:2000fg,kobushkin1995deuteron,Parker:2020,Sick:1974suq,Zhou:2020}.
In this paper we apply and extend the ansatz used in \cite{Yan:2018bez}, in which a comprehensive and
systematic method is presented for choosing mathematical functions that can robustly extract $r_{p}$ from a broad set of
input functions describing the proton electric form factor, $G_{E}^{p}$.
The rest of the paper is presented as follows. Sec.~\ref{sec:FormFacRad} has a brief discussion on the
deuteron form factors and the radius extraction. In Sec.~\ref{sec:Fitting} we describe the general fitting procedure on how
to extract $r_{d}$ from generated $G_{C}^{d}$ pseudo-data in the DRad kinematics and define some quantities to compare the
properties of different fitters. In Sec.~\ref{sec:RobFitCan} we introduce the pseudo-data generation from the $G_{C}^{d}$
parametrizations and discuss the method for searching for a fitter that will be able to extract $r_{d}$ by using the available
elastic $e$-$d$ scattering data. In Sec.~\ref{sec:Robustness} we show a comprehensive way to estimate the bias for
$r_{d}$ extraction. We conclude and discuss the prospects of this work at the end. Also, in the Appendices we discuss the results of
testing a few theoretical models and provide another robust fitter candidate which is analogous to the one considered in
Sec.~\ref{sec:RobFitCan}.
\section{\label{sec:FormFacRad} Form factors and charge radius from unpolarized elastic electron-deuteron cross section}
The understanding of the electromagnetic properties of the deuteron is of fundamental importance in nuclear physics,
given that the deuteron is the only bound two-nucleon system. It is expected that in the low-$Q^{2}$ region, where relativistic effects and non-nucleonic degrees of freedom are negligible, the deuteron form factors are dominated by the part of its wave function for which the two constituent nucleons are far apart.
Theoretical calculations of $r_{d}$ are considered to be reliable since they are independent of the nucleon-nucleon potential
(for a broad class of potentials), and depend mostly on the binding energy and neutron-proton scattering length \cite{Wong:1994sy}.
This makes $r_{d}$ a perfect observable for a theory-experiment comparison.
So far three experiments have been conducted
for determination of $r_{d}$ from unpolarized elastic $e$-$d$ scattering at low $Q^{2}$ \cite{Berard:1974ev,Simon:1981br,Platchkov:1989ch},
the cross section of which in the one-photon exchange approximation is given by
\begin{equation}
\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}\left( E, \theta \right) = \sigma_{_{\!NS}} \left( A_{d}(Q^{2}) + B_{d}(Q^{2})\,
\tan^{2}{\!\left( \frac{\theta}{2} \right)} \right) ,
\label{eq:eqn_sigma}
\end{equation}
where $\sigma_{_{\!NS}}$ is the differential cross section for the elastic scattering from a pointlike and spinless
particle at a scattering angle $\theta$ and an incident energy $E$. The four-momentum transfer squared carried
by the exchanged virtual photon is defined in terms of the four-momenta of the incident ($k$) and scattered
($k^{\prime}$) electrons: $Q^{2} = -\left( k - k^{\prime} \right)^{2}$. In this case the deuteron structure functions
in Eq.\thinspace(\ref{eq:eqn_sigma}) are related to its charge, $G_{C}^{d}$, magnetic dipole, $G_{M}^{d}$, and
electric quadrupole, $G_{Q}^{d}$, form factors via \cite{Jankus:1997,Gourdin:1963, Mainz}
\begin{eqnarray}
A_{d}(Q^{2}) & = & \left( G_{C}^{d}(Q^{2}) \right)^{2} + \frac{2}{3}\,\tau \left( G_{M}^{d}(Q^{2}) \right)^{2} +
\nonumber \\
& & + \frac{8}{9}\,\tau^{2} \left( G_{Q}^{d}(Q^{2}) \right)^{2} ,
\nonumber \\
B_{d}(Q^{2}) & = & \frac{4}{3}\,\tau (1 + \tau) \left( G_{M}^{d}(Q^{2}) \right)^{2} ,
\label{eq:eqn_deuteronstrucfunc}
\end{eqnarray}
with $\tau = Q^{2}/4M_{d}^{2}$, where $M_{d}$ is the deuteron mass. Also, there are the following additional relations:
\begin{displaymath}
G_{C}^{d}(0) = 1 ,\,\,\,\,\,\,\,\,\frac{G_{Q}^{d}(0)}{\mu_{Q}^{d}} = 1 ,\,\,\,\,\,\,\,\frac{G_{M}^{d}(0)}{\mu_{M}^{d}} = 1 ,
\label{eq:eqn_deuteronstrucfunc2}
\end{displaymath}
with the given deuteron electric quadrupole moment, $\mu_{Q}^{d}$, and magnetic dipole moment,
$\mu_{M}^{d}$\footnote{Throughout the text we use dimensionless $\mu_{M}^{d} \equiv (\mu_{M}^{d}/\mu_{N}$) = 0.8574
and $\mu_{Q}^{d} \equiv (\mu_{Q}^{d}/{\rm fm^{2}})$ = 0.2859 \cite{Garcon:2001sz}.}.
At very low but experimentally accessible $Q^{2}$ such as $\sim 10^{-4}~({\rm GeV/c})^{2}$,
the contributions from $G_{Q}^{d}$ and $G_{M}^{d}$ to the scattering process are negligible. By choosing different
$G_{M}^{d}$ and $G_{Q}^{d}$ form factors \cite{Abbott:2000ak,Abbott:2000fg,kobushkin1995deuteron,Parker:2020,Sick:1974suq,Zhou:2020} from
four data-driven models discussed in Appendix A (and throughout the paper) for extracting $G_{C}^{d}$ from the cross section, the
effects of the choice of the form-factor models on the deuteron radius are found to be $0.03\%$ and $0.009\%$, respectively. Therefore, in order to extract the deuteron rms
charge radius from $e$-$d$ scattering data, one should fit $G_{C}^{d}$ to the experimental data as a function of \(Q^2\), and calculate
the slope of this function at \(Q^2=0\), according to
\begin{equation}
r_{d} \equiv r_{d,rms} \equiv \sqrt{\langle r^{2} \rangle} = \left( -6 \left. \frac{\mathrm{d} G_{C}^{d}(Q^2)}
{\mathrm{d}Q^2} \right|_{Q^{2}=0} \right)^{1/2} ,
\label{eq:eqn_rd}
\end{equation}
in analogy to how $r_{p}$ is obtained.
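As a rough numerical illustration of the smallness of the magnetic and quadrupole contributions quoted above: with $M_{d} \simeq 1.876$~GeV/c$^{2}$, a momentum transfer of $Q^{2} \sim 10^{-4}~({\rm GeV/c})^{2}$ corresponds to
\begin{displaymath}
\tau = \frac{Q^{2}}{4M_{d}^{2}} \simeq \frac{10^{-4}}{4\times(1.876)^{2}} \approx 7\times 10^{-6} ,
\end{displaymath}
so the $G_{M}^{d}$ and $G_{Q}^{d}$ terms in Eq.~(\ref{eq:eqn_deuteronstrucfunc}), which enter with prefactors $\tau$ and $\tau^{2}$, are strongly suppressed relative to $\left( G_{C}^{d} \right)^{2}$.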
\section{\label{sec:Fitting} The fitting procedure and robustness}
\subsection{\label{sec:procedure} The general procedure}
Refs.~\cite{Yan:2018bez,Kraus:2014qua, bernauer2014electric} give a general framework with input form-factor functions
and various fitting functions, for finding functional forms (fitters) that allow for a robust extraction of an input proton radius.
Analogously, we can find robust fitters to extract $r_{d}$ by testing all combinations of available input functions and fitting functions.
From a developed routine\footnote{A C++ coded program library has been created for generating, adding fluctuations to, and fitting the pseudo-data
\cite{Yan:2018bez,Radius_fitting_lib} (see Sec.~\ref{sec:RobFitCan}). The bin-by-bin and overall type fluctuations are assumed to imitate
the binning and random uncertainties of a given set of real data. For fitting purposes the library uses the MINUIT package of CERN ROOT
\cite{Brun:1997,James:1975}.} we generate many sets of $G_{C}^{d}$ pseudo-data values with user-defined fluctuations at given $Q^{2}$ bins
by using some $G_{C}^{d}$ charge form-factor models as input. Then we use various fitting functions to fit the pseudo-data and extrapolate them
to $Q^2=0$, in order to obtain the $r_d$ values according to Eq.~(\ref{eq:eqn_rd}).
When the program library adds bin-by-bin fluctuations to the pseudo-data, it does so according to a user-defined random Gaussian distribution at each bin. Stated otherwise, in order to mimic the bin-by-bin fluctuations
($Q^{2}$-independent) of the data, the pseudo-data should be smeared by shifting the $G_{C}^{d}$ central
value at each $Q^{2}$ bin with a random number following the Gaussian distribution, \(\mathcal{N}(\mu , \sigma^{2}_{g})\), given by
\begin{equation}
\mathcal{N}(\mu,\,\sigma_{g}^{2}) = \frac{1}{\sqrt{2\pi\sigma_{g}^{2}}}\,e^{-\left( G_{C}^{d} - \mu \right)^{2}/
\left( 2\sigma_{g}^{2} \right)} .
\label{eq:eqn_Gaus}
\end{equation}
In this paper we take \(\mu = 0\) and \(\sigma_{g} = \delta G_{C}^{d}\), where \(\delta G_{C}^{d}\) comes from the estimated
statistical and/or systematic uncertainties in the $e$-$d$ (DRad) experiment. The produced tables of $G_{C}^{d}$ vs. $Q^{2}$ with
fluctuations are fitted with a number of fitters for extracting $r_{d}$ (see Fig.~\ref{fig:bias_fitter}).
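As an illustration of this smearing step, a minimal numerical sketch is given below; the toy form-factor model, the $Q^{2}$ grid, and the assumed relative uncertainty are placeholders, and our actual routine is the C++ library mentioned above.
\begin{verbatim}
import numpy as np

def smear_pseudo_data(Q2, gc_model, dGc, rng):
    # Shift each G_C^d central value by a Gaussian random number
    # with sigma equal to its estimated uncertainty (the Gaussian
    # smearing described above).
    gc = gc_model(Q2)                  # central values at the Q^2 bins
    return gc + rng.normal(0.0, dGc)   # bin-by-bin fluctuations

# Placeholder inputs: Q^2 bins in (GeV/c)^2 and a toy dipole-like model.
Q2 = np.linspace(2e-4, 0.05, 67)
toy_model = lambda q2: 1.0 / (1.0 + q2 / 0.1)**2
dGc = 5e-4 * toy_model(Q2)             # ~0.05% relative uncertainty

rng = np.random.default_rng(1)
pseudo = smear_pseudo_data(Q2, toy_model, dGc, rng)
\end{verbatim}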
\begin{figure}[hbt!]
\centering
\includegraphics[width=0.425\textwidth]{Radius_Fitting.pdf}
\caption{(Color online) The upper plot shows an example of one fit using the Abbott1 model (see Sec.~\ref{sec:pseudo-data}) as input and
Rational\,(1,1) (see Sec.~\ref{sec:RMSE}) as the fitting function. The lower plot shows an example of $r_{d}{\rm [fit]}$ distribution
obtained by following the above-mentioned pseudo-data and fitting procedure. A Gaussian function, similar to that in Eq.~(\ref{eq:eqn_Gaus}),
is used to fit the distribution.}
\label{fig:bias_fitter}
\end{figure}
\begin{figure*}[hbt!]
\centering
\includegraphics[scale=0.315]{Four_model_results.pdf}
\caption{(Color online) Five fitters from \cite{Yan:2018bez}, which give the best RMSE values for extraction of $r_{d}$, when they are
fitted with pseudo-data generated by the four $G_{C}^{d}$ parametrizations that we refer to as Abbott1 and Abbott2
\cite{Abbott:2000ak,kobushkin1995deuteron}, as well as Parker \cite{Parker:2020} and SOG \cite{Sick:1974suq,Zhou:2020} models. The error bars
show the statistical uncertainty of the deuteron radius.
}%
\label{fig:DRad_fitter}%
\end{figure*}
\subsection{\label{sec:robustness2} The robustness and goodness of fitters}
In this paper, the robustness of a fit function is determined by its ability to extract
$r_{d}$ from a variety of pseudo-data generated from plausible form-factor parametrizations. Our conviction is that the true and unknown form-factor function is reasonably approximated by the trial functions. As discussed in \cite{Higinbotham:2019jzd}, descriptive functions
(such as high-order polynomials), which precisely match onto the data over a limited $Q^2$ range, are often not the same as predictive functions
(such as low-order rational functions), which are able to extrapolate. Unsurprisingly, the predictive functions are often found to be the most
robust functions for $r_p$ extractions.
In order to determine the robustness of a fitter based upon the general procedure already discussed, one can
compare the size of the bias (${\rm bias} \equiv \delta r_{d} = r_{d}[{\rm mean}] - r_{d}[{\rm input}]$) with the variance $\sigma$
(the rms value of the radius distribution). The bias
comes from the mismatch of the fitting function and the underlying
generation function, which leads to a misprediction of the slope at
$Q^2=0$. The variance reflects the influence of the $G_{C}^{d}$ bin-by-bin uncertainties on the radius. If $\delta r_{d} < \sigma_{stat}$
(statistical variance) for most of the input form-factor models, the given fitter will be considered as sufficiently robust. In the case of an experiment,
the goal of which is to minimize the overall uncertainty, we should also consider the bias and variance together, using the
root-mean-square error
(RMSE) \cite{HTF:2009}\footnote{The RMSE discussed throughout this paper is somewhat different from that discussed in \cite{Yan:2018bez}, where
the authors have considered $\sigma_{stat}$ in the formula of RMSE.}:
\begin{equation}
{\rm RMSE} = \sqrt{\left( \delta r_{d} \right)^{2} + \sigma_{total}^{2}} ,
\label{eq:eqn_RMSE}
\end{equation}
where $\sigma_{total}$ includes both bin-by-bin statistical and systematic uncertainties. The RMSE is a standard way of quantifying goodness of fitters.
The smaller the RMSE is, the better the corresponding fitter is.
Eventually, we need to find a fitter(s) that can extract the deuteron radius precisely, from pseudo-data generated from a range of plausible form
factors, which should be reasonable approximations to the unknown true function to allow for the best possible determination of the radius when the
fitter is applied to $e$-$d$ experimental data. The key point here is that the fitters are determined prior to obtaining the experimental results from the
planned $Q^2$ range and precision of the DRad experiment.
\subsection{\label{sec:RMSE} Initial studies and motivation}
Ref.~\cite{Yan:2018bez} takes into account different reasonable approximations to the unknown true function by using nine different $G_{E}^{p}$
form-factor parametrizations to generate pseudo-data in the PRad $Q^2$ range. The studies show that the two-parameter rational function,
Rational\,(1,1), is robust and the best fitter for extraction of $r_{p}$ for the range and uncertainties of the PRad data, represented by
\begin{eqnarray}\label{R11}
& & f_{\rm Rational\,(1,1)}(Q^{2}) \equiv {\rm Rational\,(1,1)} =
\nonumber \\
& &
~~~~~~~~~~~~~~~ = p_{0}\,G_{E}^{p}(Q^{2}) = p_{0} \frac{1 + p_{1}^{(a)}Q^{2}}{1 + p_{1}^{(b)}Q^{2}} ,
\end{eqnarray}
where \(p_{0}\) is a floating normalization parameter, and \(p_{1}^{(a)}\) and \(p_{1}^{(b)}\) are two free fitting parameters. The radius
is determined by \(r_{p} = \sqrt{6 \left( p_{1}^{(b)} - p_{1}^{(a)} \right)}\). The other two robust fitters are the two-parameter
continued fraction and the second-order polynomial expansion of the so-called Z transformation \cite{Yan:2018bez,Lee:2015jqa}, which can
extract the input proton radius regardless of the input electric form-factor functions.
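For completeness, the quoted relation between the fit parameters and the radius follows directly from the low-$Q^{2}$ expansion of Eq.~(\ref{R11}),
\begin{displaymath}
p_{0}\,\frac{1 + p_{1}^{(a)}Q^{2}}{1 + p_{1}^{(b)}Q^{2}} = p_{0}\left[ 1 + \left( p_{1}^{(a)} - p_{1}^{(b)} \right) Q^{2} + O\!\left( Q^{4} \right) \right] ,
\end{displaymath}
so that, with the normalization $p_{0}$ factored out, the slope at $Q^{2}=0$ is $p_{1}^{(a)} - p_{1}^{(b)}$, and Eq.~(\ref{eq:eqn_rp}) gives \(r_{p} = \sqrt{6 \left( p_{1}^{(b)} - p_{1}^{(a)} \right)}\).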
Eq.~(\ref{R11}) is actually a special case from the class of the multiparameter rational function of $Q^{2}$ given by
\begin{eqnarray}\label{RNM}
& & f_{\rm Rational\,(N,M)}(Q^{2}) \equiv {\rm Rational\,(N,M)} =
\nonumber \\
& &
~~~~~~~~~~ = p_{0}\,G_{E}^{p}(Q^{2}) = p_{0} \frac{1 + \sum\limits_{i=1}^{N} p_{i}^{(a)}Q^{2i}}
{1 + \sum\limits_{j=1}^{M} p_{j}^{(b)}Q^{2j}} ,
\end{eqnarray}
where the orders $N$ and $M$ are defined by the user.
All the fitters studied in \cite{Yan:2018bez} have been tested here by fitting pseudo-data generated using the four $G_{C}^{d}$
parametrizations from \cite{Abbott:2000ak,kobushkin1995deuteron,Parker:2020,Sick:1974suq,Zhou:2020} (see Sec.~\ref{sec:pseudo-data}). For this test
we took the DRad kinematic range of $2 \times 10^{-4}~{\rm (GeV/c)^2} < Q^{2} < 0.05~{\rm (GeV/c)^2}$, using bin-by-bin statistical uncertainties
from $0.02$ to $0.07\%$ and systematic uncertainties from $0.06$ to $0.16\%$. The bias and $\sigma_{stat}$ values of the five best fitters are
shown in Fig.~\ref{fig:DRad_fitter}\footnote{The three-parameter continued fraction [CF\,(3)] and Polynomial\,Z\,(4) are from the classes
of the CF expansion and multiparameter polynomial expansion of $Z$, respectively. For their explicit expressions we refer the reader to
\cite{Yan:2018bez}. The CF\,(3) has the same functional form as Rational\,(1,2).}. Although the four-parameter Polynomial\,Z gives
the smallest bias, it also gives the largest variance and RMSE amongst them. The RMSE value of Rational\,(1,1) is the smallest one, though it gives
larger bias compared to the others.
However, given the limited number of $G_{C}^{d}$ parametrizations, the robustness of the fitters cannot be convincingly determined from these results.
In this case, we cannot mimic different kinds of approximations to the unknown true function as comprehensively as can be done for the proton $G_{E}^{p}$
models. We have also studied some theory-based models (discussed in Appendix~B), and found that those models have large
discrepancies with the experimental data, which show that the testing method of robustness applied to PRad is no longer suitable for the deuteron radius
extraction. Based on our studies, the bias is a non-negligible source of the $r_{d}$ systematic uncertainty estimated for the DRad experiment. This
observation was our motivation for looking into other potentially better fitters for DRad, which might give similar variance but smaller bias as compared
to those of Rational\,(1,1). At the same time, by having limited $G_{C}^{d}$ parametrizations at our disposal, we also need to develop a more comprehensive
method to estimate the bias when using various fitters.
\section{\label{sec:RobFitCan} Searching for a robust fitter candidate}
\subsection{\label{sec:pseudo-data} Pseudo-data generation}
Here we give some specific details on the pseudo-data generation and fitting procedure described in the previous section: \\
(A) Generating pseudo-data:
\begin{itemize}
\item[(i)]Four $G_C^{d}$ parametrizations based on available experimental data (named as Abbott1, Abbott2, Parker and SOG)
are used to generate $G_C^{d}$ values at given \(Q^{2}\) bins. The details of these parametrizations are discussed in Appendix A.
\item[(ii)] 30 $G_{C}^{d}$ pseudo-data points at 1.1~GeV and 37 $G_{C}^{d}$ points at 2.2~GeV are generated
from each of the four deuteron models in step i. Our binning choice for DRad is based on the binning of PRad. There are 30 bins from 0.8$^{\circ}$ to 6.0$^{\circ}$ at 1.1-GeV
beam energy, and 37 bins from 0.7$^{\circ}$ to 6.0$^{\circ}$ at 2.2~GeV \cite{Binningset}.
\end{itemize}
(B) Adding fluctuations to the pseudo-data and fitting:
The following steps are repeated 10000 times, which is sufficient to obtain stable results of the mean value and
rms of the $r_{d}{\rm [fit]}$ distribution to the precision of $10^{-4}$~fm.
\begin{itemize}
\item[(i)] To add statistical fluctuations, the total 67 pseudo-data points generated in step A are smeared by
67 different random numbers according to Eq.~(\ref{eq:eqn_Gaus}).
\item[(ii)] In this step a set of pseudo-data is fitted by a specific fitter $f_{d}(Q^{2})$. The data points at 1.1 and
2.2~GeV are combined and fitted by that fitter with two different floating normalization parameters corresponding to
these two beam energies. The other fitting parameters in the fitter are required to be the same for the two energies (see the sketch after this list).
\item[(iii)] Then the fitted radius is calculated from the fitted function in step ii, with
\begin{equation}\label{eq:eqn_rd2}
r_{d}{\rm [fit]} = \left( -6 \left. \frac{\mathrm{d} f_{d}(Q^{2})}{\mathrm{d}Q^{2}} \right|_{Q^{2}= 0} \right)^{1/2}.
\end{equation}
\end{itemize}
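The following minimal sketch illustrates steps ii and iii for the Rational\,(1,1) case, with two floating normalizations and shared shape parameters; SciPy's least-squares routine is used here only as an illustrative stand-in for the MINUIT-based fits of our actual library, and the array names and starting values are placeholders.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def rational11(Q2, a, b):
    return (1 + a * Q2) / (1 + b * Q2)

def model(x, n1, n2, a, b):
    # x = (Q^2 values, beam flag); one normalization per beam energy,
    # shared shape parameters a, b for both data sets.
    Q2, is_2gev = x
    norm = np.where(is_2gev > 0.5, n2, n1)
    return norm * rational11(Q2, a, b)

def fit_radius(Q2_all, flag_all, gc_all, dgc_all):
    # Combined fit of the 67 pseudo-data points (flag = 0 for 1.1 GeV,
    # 1 for 2.2 GeV), then the radius from the slope at Q^2 = 0.
    popt, _ = curve_fit(model, (Q2_all, flag_all), gc_all,
                        sigma=dgc_all, p0=[1.0, 1.0, 0.0, 20.0])
    n1, n2, a, b = popt
    # Radius in GeV^-1 if Q^2 is in (GeV/c)^2; multiply by
    # hbar*c ~ 0.1973 GeV fm to convert to fm.
    return np.sqrt(6.0 * (b - a))
\end{verbatim}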
\subsection{\label{sec:Datadriven} Data-driven method}
As described in Sec.~\ref{sec:RMSE}, our studies have shown that the bias is an important source of systematic uncertainty in
the extraction of $r_d$. Hence, to better control and/or minimize the bias in the $r_d$ extraction, such as that obtained by the Rational\,(1,1)
fitter, we propose a data-driven approach to search for a new robust fitter candidate.
The Rational\,(1,3) is a function with four free parameters that has been used in \cite{kelly2004simple} to fit $G_{E}^{p}$.
Compared to the Rational\,(1,1), it has
good asymptotic behaviors satisfying not only $G_{C}^d = 1$ at $Q^2=0$
but also $G_{C}^d \rightarrow 0$ at \(Q^{2} \rightarrow \infty\). This fitter function is given by
\begin{eqnarray}\label{R13}
& &
f_{\rm Rational\,(1,3)}(Q^{2}) \equiv {\rm Rational\,(1,3)} =
\nonumber \\
& &
~~ = p_{01}\,G_{C}^{d}(Q^{2}) =
p_{01}\frac{1 + a_{1}Q^{2}}{1 + b_{1}Q^{2} + b_{2}Q^{4} + b_{3}Q^{6}} ,
\end{eqnarray}
where \(a_{1}, b_{1}, b_{2}, b_{3}\) are free parameters, and \(p_{01}\) is a floating normalization parameter.
In order to control the variance of $r_{d}{\rm [fit]}$, we fit this function to the existing experimental data sets in Table~1
of \cite{Abbott:2000ak}, which provides $G_{C}^{d}$ and $\delta G_{C}^{d}$ at fixed $Q^2$ values that are typically higher than
the values of the $Q^2$ range of the proposed DRad experiment. With $\chi^2/{\rm NDF} \simeq 1.25$, we determine \(b_{2} = 0.0416 \pm 0.0152 \)
and \(b_{3} = 0.00474 \pm 0.000892 \). Then fixing these values for fitting the pseudo-data in the (low-$Q^{2}$) DRad range
will render a fitter, which we refer to as fixed Rational\,(1,3) or fRational\,(1,3):
\begin{eqnarray}\label{fixed_Rational}
& & f_{\rm fixed\,Rational\,(1,3)}(Q^{2}) \equiv {\rm fRational\,(1,3)} =
\nonumber \\
& &
~~~~~~ = p_{01}\frac{1 + a_{1}Q^{2}}{1 + b_{1}Q^{2} + b_{2,{\rm fixed}}Q^{4} + b_{3,{\rm fixed}} Q^{6}} ,
\end{eqnarray}
where the uncertainties in the fixed parameters are taken into account when we calculate the bias.
In principle, if some fitter functions have fitting uncertainties in their fixed parameters, those parameters should be smeared
using a Gaussian distribution, with $\sigma_{g}$ taken to be the fitting uncertainty (see Eq.~(\ref{eq:eqn_Gaus})). We repeat this step
in the fitting procedure 10000 times as discussed in Sec.~\ref{sec:pseudo-data}.
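As an illustration of how the fixed parameters and their fit uncertainties enter the procedure, a minimal sketch (with the pseudo-data fit itself elided, and with the parameter units following those used for $Q^{2}$ above) is:
\begin{verbatim}
import numpy as np

def frational13(Q2, p01, a1, b1, b2, b3):
    # fRational(1,3); b2 and b3 are held fixed in each fit.
    return p01 * (1 + a1*Q2) / (1 + b1*Q2 + b2*Q2**2 + b3*Q2**3)

# Central values and fit uncertainties quoted above.
b2_c, db2 = 0.0416, 0.0152
b3_c, db3 = 0.00474, 0.000892

rng = np.random.default_rng(2)
for _ in range(10000):
    b2 = rng.normal(b2_c, db2)   # propagate the uncertainty of the
    b3 = rng.normal(b3_c, db3)   # fixed parameters into the bias
    # ... fit frational13 to one smeared pseudo-data set with
    #     (p01, a1, b1) free and (b2, b3) fixed to the drawn values ...
\end{verbatim}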
To compare the differences between Rational\,(1,1), fRational\,(1,3) and other fitters shown in Fig.~\ref{fig:DRad_fitter}, all the functions
are plotted in the Abbott1/Abbott2 model range [from $Q^{2} = 3 \times 10^{-2}$ to $1.5~{\rm (GeV/c)^{2}}$].
The parameters in these fitters are determined by fitting pseudo-data generated from the Abbott1 model in the DRad $Q^2$ range.
The results from the Abbott2, Parker, and SOG models are very similar, therefore we do not show them here. As shown in Fig.~\ref{fig:R11_vs_FixedR13},
all the fitters describe the data quite well in the low-\(Q^2\) range [$Q^2 < 0.15~{\rm (GeV/c)^2}$], while Polynomial\,Z\,(4) and
CF\,(3) diverge at higher \(Q^{2}\). In the high-\(Q^2\) range, the fRational\,(1,3) describes the data much better than the other fitters, which means that the
fRational\,(1,3) has a better asymptotic behavior at high \(Q^2\). Based on this observation, the fRational\,(1,3) may also have a potential to
describe the data in the low-\(Q^2\) range better than the Rational\,(1,1). Other than the fRational\,(1,3) functional form, we
have also studied another fitter, which has similar properties and is capable of extracting $r_{d}$ robustly. The details on our
studies for this fitter are presented in Appendix~C.
\begin{figure}[hbt!]
\centering
\includegraphics[width=0.425\textwidth]{comparison_new1.pdf}
\includegraphics[width=0.455\textwidth]{comparison_new11.pdf}
\caption{(Color online) The upper plot shows the fRational\,(1,3), Rational\,(1,1),
Rational\,(1,2), Rational\,(2,1), CF\,(3), and Polynomial\,Z\,(4) obtained from fitting the pseudo-data generated by the Abbott1 model \cite{Abbott:2000ak},
which for comparison are overlaid with the black colored data points listed in Table 1 of \cite{Abbott:2000ak}. The color coding is displayed in the
legends, where the CF\,(3) and Rational\,(1,2) are the same and described by the two asymptotic green dotted lines. The lower plot shows the residual
points for the fRational\,(1,3), Rational\,(1,1), and Rational\,(2,1), where ``the residual" means the difference between $G_{C}^{d}[{\rm fit}]$ described
by the fitters and $G_{C}^{d}[{\rm data}]$ from the data.}
\label{fig:R11_vs_FixedR13}
\end{figure}
\section{\label{sec:Robustness} A comprehensive way to estimate the bias in deuteron charge radius extraction}
\subsection{\label{sec:Smearing} Smearing procedure}
After the candidate fitter is found, the robustness for the deuteron radius extraction needs to be tested. Being limited by the number of
$G_{C}^{d}$ parametrizations, in order to reflect various reasonable approximations to the unknown true function, the
parameters in the two Abbott models, as well as in the Parker and SOG models, should be smeared. Once they are smeared, the functional forms describing
the models are different, and are used to perform a variety of extrapolations at low \(Q^2\). Overall, this test is a \(\chi^{2}\) test,
which consists of the following steps.
(A) Smearing of the parameters and calculation of \(\chi^{2}\):
First, we smear all the parameters of a given model by $\pm 10\%$, following a uniform distribution.
Then we use the smeared model to generate the corresponding \(G_{C}^{d~\prime}\) at each \(Q^{2}\) point of the (\(Q^{2}\), \(G_{C}^{d}\), \(\delta G_{C}^{d}\)) data set from Table~1 of \cite{Abbott:2000ak}. Afterwards, we calculate \(\chi^{2}\) by
\begin{equation}\label{Eq:chi2}
\chi^{2} = \sum {\frac{\left( G_{C}^{d} - G_{C}^{d~\prime} \right)^{2}}{\left( \delta G_{C}^{d} \right)^{2}}} .
\end{equation}
(B) Checking of the acceptable region:
The acceptable \(\chi^{2}\) region is defined as follows: the calculated \(\chi^{2}\) (after the parameters are smeared), with its specific number of degrees of freedom, is ``acceptable'' when it lies below the critical value \(\chi^{2}_{0}\) that contains 99.7\% of the $\chi^{2}$ probability distribution.
This requirement restricts the value of \(\chi^{2}\), which means that the smeared model should not be far away from the real
experimental data. For \(\nu\) degrees of freedom, the \(\chi^{2}\) probability distribution is defined as
\begin{equation}\label{chi2pro}
f(\chi^{2}) = \frac{1}{2^{\nu/2}\Gamma(\nu/2)} e^{(-\chi^{2}/2)}(\chi^{2})^{(\nu/2) - 1} .
\end{equation}
Integrating the function in Eq.~(\ref{chi2pro}) from zero to \(\chi^{2}_{0}\) gives the cumulative probability corresponding to \(\chi^{2}_{0}\).
The number of degrees of freedom (NDF) and the critical $\chi^2_0$ value for each of the four smeared
data-based models are shown in Table~\ref{tab:table_chi2first}.
\begin{table}[h!]
\begin{tabularx}{0.425\textwidth}{X X c}
\hline
\hline
Model & \!\!\!NDF & $\chi^2_0$\\
\hline
{Abbott1} & 16 & 35.9 \\
{Abbott2} & 7 & 21.6\\
{Parker} & 16 & 35.9\\
{SOG} & 11 & 28.2\\
\hline
\hline
\end{tabularx}
\caption{The number of degrees of freedom and the critical $\chi^2_0$ value for each of the four smeared data-based models.}
\label{tab:table_chi2first}
\end{table}
If the calculated \(\chi^{2}\) is smaller than the above numbers for each smeared model, then we keep the given smeared
model and go to the next step. For each smeared model there is a new $r_{d}{\rm [input]}$, which is calculated by
Eq.~(\ref{eq:eqn_rd}) with the slope of a smeared model at \(Q^{2} = 0\). If \(\chi^{2}\) is unacceptable, the parameters of the model
are re-smeared and the whole procedure is repeated.
(C) Generating pseudo-data: If the smeared models pass step B, they can be utilized to generate sets of pseudo-data in the DRad $Q^{2}$ range using the binning discussed in Sec.~\ref{sec:RobFitCan}.
(D) Fitting and calculating the bias:
After the pseudo-data are generated, we use the selected fitter to fit and obtain the quantity $r_{d}{\rm [fit]}$.
(E) Repeating and obtaining the relative bias:
In this step the above procedure for each model is repeated 10000 times for obtaining 10000 values of relative bias, which is defined as
$\delta r_{d}/r_{d}{\rm [input]}$.
(F) Finalization:
From each relative bias distribution of the smeared Abbott1, Abbott2, Parker, and SOG models, we select the rms value to
calculate $\delta r_{d}$ in Eq.~(\ref{eq:eqn_RMSE}).
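Schematically, steps A--F can be organized as in the sketch below; the model parametrization, its parameter vector (assumed to be a NumPy array), and the reference data arrays are placeholders for the actual Abbott1, Abbott2, Parker, and SOG inputs, and the multiplicative $\pm 10\%$ smearing is one concrete reading of the uniform smearing described in step A.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def acceptable(params, model, Q2_ref, gc_ref, dgc_ref, ndf):
    # Steps A-B: chi^2 of the smeared model against the reference
    # data, accepted if below the 99.7% critical value chi2_0.
    chisq = np.sum((gc_ref - model(Q2_ref, params))**2 / dgc_ref**2)
    return chisq < chi2.ppf(0.997, ndf)

def rms_relative_bias(model, params0, Q2_ref, gc_ref, dgc_ref, ndf,
                      Q2_bins, dGc_bins, fit_radius, radius_of,
                      rng, n_trials=10000):
    biases = []
    while len(biases) < n_trials:
        params = params0 * rng.uniform(0.9, 1.1, len(params0))
        if not acceptable(params, model, Q2_ref, gc_ref, dgc_ref, ndf):
            continue                                   # re-smear (step B)
        rd_input = radius_of(model, params)            # slope at Q^2 = 0
        gc = model(Q2_bins, params) \
             + rng.normal(0.0, dGc_bins)               # step C
        rd_fit = fit_radius(Q2_bins, gc, dGc_bins)     # step D
        biases.append((rd_fit - rd_input) / rd_input)  # step E
    return np.sqrt(np.mean(np.square(biases)))         # rms (step F)
\end{verbatim}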
\subsection{Proof of the robustness test using the proton form-factor models}
\begin{figure*}[hbt!]
\centering
\includegraphics[scale=0.615]{PRad_models_robustness.pdf}
\caption{(Color online) Seven proton electric form-factor models in which $G_{E}^{p}$ is plotted as a function of \(Q^2\).
The gray bands are the bands generated by each smeared model. The superimposed red points are the PRad 1.1-GeV data; the
blue points are the 2.2-GeV data \cite{Xiong:2019}.
}
\label{fig:PRad_models}
\end{figure*}
The parameter smearing approach for deuteron form-factor models helps us better calculate the bias, by imitating a variety of reasonable approximations to the unknown true function, when the number of models is limited. In order to verify that this approach is valid and
applicable, several proton electric form-factor \(G_{E}^{p}\) models can be tested in turn. Namely, we consider the following parametrization models: Kelly \cite{kelly2004simple}, Arrington1 \cite{Venkat:2010by}, Arrington2
\cite{Arrington:2003qk}, Arrington-Sick \cite{Arrington:2006hm}, Ye \cite{Ye:2017gyb}, Alarcon, and
Bernauer-2014. The Alarcon model is our refit based upon \cite{Alarcon:2017ivh,Alarcon:2017lhg,Alarcon:2018irp},
and the Bernauer-2014 model is our refit of data from \cite{bernauer2014electric}. By smearing the parameters in the
proton $G_{E}^{p}$ models, we determine whether or not the smearing method can mimic the low-$Q^{2}$ extrapolation behavior of
those models.
Following the same steps shown in the previous section, the bias values obtained from fitting the Rational\,(1,1) with pseudo-data
generated by the \(G_{E}^{p}\) models, before and after smearing, have been found and are displayed in Table~\ref{table2}.
The nonsmeared bias in the table is the relative bias obtained by fitting pseudo-data generated from the original models. The smeared
bias is the relative bias obtained by fitting pseudo-data generated from the smeared models following the procedure in the previous
section.
\begin{table}[h!]
\centering
\begin{tabularx}{0.475\textwidth}{X X c}
\hline
\hline
Model & \!\!\!\!\!\!\!\!\!\!Nonsmeared bias (\%) & Smeared bias (\%) \\
\hline
Kelly & ~~~~0.002 & 0.0007 \\
Arrington1 & ~~~~0.005 & 0.003 \\
Arrington2 & ~~~~0.009 & 0.002 \\
Arrington-Sick & ~~~~0.001 & 0.0007 \\
Alarcon & ~~~~0.166 & 0.174 \\
Ye & ~~~~0.476 & 0.081 \\
Bernauer-2014 & ~~~~0.271 & 0.062 \\
\hline
\hline
\end{tabularx}
\caption{The relative bias obtained from fitting the Rational\,(1,1) with pseudo-data generated by nonsmeared and
smeared seven proton $G_{E}^{p}$ models.}
\label{table2}
\end{table}
In Fig.~\ref{fig:PRad_models} we show a band for each model, obtained by smearing all the parameters (again in each model) by
$\pm 10\%$ and restricting the values of \(\chi^{2}\) with respect to their degrees of freedom based on the available data.
One can also see that all the models and the superimposed PRad data are covered by most of the bands except for the
band from the Arrington-Sick model, which means that the smearing method generates the pseudo-data in a reasonable range.
By looking at Table~\ref{table2} we find that the smeared bias is smaller than the nonsmeared bias for most
of the models. This result is expected as the bias calculated from the smearing method gives the most probable value in the
1$\sigma$ range based on the data. By looking at both Table~\ref{table2} and Fig.~\ref{fig:PRad_models}, we conclude that
although the smearing method used with limited models cannot precisely reflect the behavior of other models,
it can exhibit more comprehensively how a fitter controls the bias.
\section*{Conclusions and outlook}
\begin{figure*}[hbt!]
\centering
\includegraphics[scale=0.315]{smeared_model_results_new.pdf}
\caption{(Color online) This figure shows the rms values of the bias for the shown fitters, derived from fitting pseudo-data
generated by the four smeared Abbott1, Abbott2, Parker and SOG models (Sec.~\ref{sec:Smearing}). The error bars reflect the
effects of the bin-by-bin total uncertainties of $G_C^d$ (Sec.~\ref{sec:procedure}).
}%
\label{fig:DRad_fitter_smeared1}%
\end{figure*}
In this section we summarize our findings exhibited in the paper (including both Appendices B and C) and briefly discuss the prospects of this work.
Fig.~\ref{fig:DRad_fitter_smeared1} shows the rms values of the bias for the given five fitters, derived from fitting
pseudo-data generated by the four smeared Abbott1, Abbott2, Parker and SOG models (see Sec.~\ref{sec:Smearing}), along with the bin-by-bin total
uncertainties (see Sec.~\ref{sec:procedure}). According to the definition of the robustness discussed in Sec.~\ref{sec:robustness2}, the five
fitters are all robust ($\rm bias[\rm rms] < \sigma_{stat}$). Although the Rational\,(1,1) and fRational\,(1,3) have larger
bias values compared to those of the other three fitters, they can control the RMSE better because their variances are smaller
than those of the others.
By comparing the bias and variance ($\sigma_{total}$) in that figure, our understanding is that the RMSE (overall
uncertainty) in the DRad experiment will be dominated by the bin-by-bin uncertainties rather than by the bias obtained in the fitting
procedure. Based on our results, we propose to use the fRational\,(1,3) as the primary fitter in the deuteron charge radius extraction
for this planned experiment, noting that it also has a better asymptotic behavior compared to that of Rational\,(1,1). Nonetheless,
the fRational\,(1,3) is determined based on the data-driven method. Since it only has constraints from deuteron charge form-factor data at
high $Q^{2}$, its extrapolation may not be very accurate, when it is used for fitting generated pseudo-data in a lower-$Q^{2}$ range.
Once we have more data at low $Q^2$, we can better determine the fixed parameters in this fitter, in which case we will be able to
extract the $r_{d}$ value more precisely. This might be done, for example, with possible upcoming new data from the A1 Collaboration at
Mainz Microtron (MAMI). On the other hand, if we consider the results shown in Figs.~\ref{fig:R11_vs_FixedR13},~\ref{fig:DRad_fitter_smeared1},~\ref{fig:D1_R11_vs_modified},~and~\ref{fig:DRad_fitter_smeared2} together,
we find that (i) the fRational\,(1,3) and (ii) the modRational\,(1,1) are currently our best fitters for the robust extraction
of $r_{d}$. In addition, we note that the above-mentioned conclusions are anchored upon our studies for the DRad experiment. One should
first account for the trade-off between the bias and variance, then select the best fitter stemming from the latest estimation of experimental
uncertainties. If it turns out that the bin-by-bin uncertainties during the DRad experiment are much smaller (at least ten times) than
what we have already evaluated, we may search for another potentially robust fitter, which can minimize the bias and
simultaneously will also have good asymptotics.
The radius extraction methods discussed so far depend on specific functional forms. In \cite{Craig2020fresh}, a different extraction of the proton charge radius is discussed. The so-called cubic spline method is used to interpolate form-factor data, by which a
smooth function is obtained afterwards. Then the radius could be extracted with an extrapolation using that smooth function. This method
may also be applicable to the robust extraction of the deuteron charge radius in the near future, as an independent way of cross-checking the results obtained from the ansatz provided in this paper.
\section*{Acknowledgments}
This work is supported in part by the U.S. Department of Energy under Grants No. DE-FG02-03ER41231 and
No. DE-AC05-06OR23177, under which the Jefferson Science Associates operates the Thomas Jefferson National
Accelerator Facility. This work is also supported in part by the U.S. National Science Foundation.
\section{Introduction}
\label{sec:intro}
Fracton models \cite{Nandkishore_2019, Pretko_2020} are characterized by the peculiar feature that some of their gapped point excitations are completely localized or are restricted to move only in a lower dimensional sub-manifold. Two large classes of models have been studied extensively, with very different features. The exactly solvable fracton models (see for example \onlinecite{ChamonModel,HaahCode, Sagar16, VijayFracton,Prem_2019, Song2019}),
on the one hand, are gapped and exhibit properties like exponential ground state degeneracy, nontrivial entanglement features, \cite{HermeleEntropy, ShiEntropy, BernevigEntropy, Shirley_2019} and foliation structure. \cite{Shirley_2018} The higher rank continuum gauge theories (see for example \onlinecite{PretkoU1, electromagnetismPretko, Gromov2019, seiberg2020exotic}), on the other hand, host gapless photon excitations, on top of which gapped fracton excitations emerge due to nontrivial forms of symmetry like dipole conservation. The fracton models discovered so far host features that are very similar to those in topological models like fractional quantum Hall and (rank-1) gauge theories, but also generalize the topological framework in nontrivial ways.
One theoretical tool that plays an important role in the study of $2+1$D topological phases is Chern-Simons gauge theory. \cite{WenRigid} In particular, it has been shown that multi-component $U(1)$ gauge theories with a Chern-Simons term give a complete characterization of $2+1$D abelian topological phases. \cite{WenChernSimons} The Lagrangian of the theory is given by
\begin{equation}
\label{eq:lagrangian}
\mathcal{L} = -\frac{1}{4e^2}\sum_i \mathcal{F}^i_{\mu\nu}\mathcal{F}^{i,\mu\nu} + \frac{1}{4\pi} \sum_{ij} K_{ij} \epsilon^{\mu\nu\lambda} \mathcal{A}^i_{\mu}\partial_{\nu}\mathcal{A}^j_{\lambda},
\end{equation}
where $\mu,\nu,\lambda = 0,1,2$, and $i,j$ label the different gauge fields and take values in a finite set $i,j=1,...,N$. The matrix $K$ is an $N\times N$ symmetric integer matrix. The universal topological features are captured in the $K$ matrix, from which one can derive the ground state degeneracy, anyon fusion and braiding statistics, edge states, etc. of the topological phase.\cite{WenChernSimons}
Can we take the number of gauge fields to infinity and extend this formalism to describe $3+1$D fractonic order? In this paper, we call such theories ``iCS'' theories, ``i'' for infinite. This idea comes from the simple observation that if we do take this extension and choose the infinite dimensional $K$ matrix to be simply diagonal (with diagonal entries being, for example, $3$),
\begin{equation}
K=
\begin{pmatrix}
\ddots\\
& 3\\
& & 3\\
& & & 3\\
& & & & \ddots
\end{pmatrix},\label{eq:Kd3}
\end{equation}
then the Lagrangian describes a decoupled stack of $2+1$D fractional quantum Hall states (each with filling fraction $\nu=1/3$ in this example). Such decoupled stacks of $2+1$D topological states, while simple, contain several of the key features of fracton physics: ground state degeneracy that increases exponentially with the height of the stack; anyon excitations that are mobile in 2D planes only and cannot hop vertically; and entanglement entropy of sub-regions that contains a sub-leading term which scales linearly with the height of the region.\cite{HermeleEntropy} Therefore, this simple stack system described by a diagonal infinite $K$ matrix is a fracton model, although a very trivial one. In the following discussion, we will always use the convention that each gauge field has $x$ and $y$ spatial components, but not a $z$ one, and we will try to interpret the $i$ index as a $z$ direction spatial coordinate (we will discuss the condition under which we can safely do so).
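As a simple worked example of this degeneracy counting, the decoupled stack described by the diagonal $K$ matrix of Eq.~(\ref{eq:Kd3}) with $N$ layers has, on a torus,
\begin{displaymath}
{\rm GSD} = |\det K| = 3^{N} ,
\end{displaymath}
growing exponentially with the number of layers, consistent with the fracton-like features listed above.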
Can iCS theories with more complicated $K$ matrices, e.g. quasi-diagonal ones, lead to more interesting types of fractonic behavior? In this paper, we show that this is indeed the case. In section~\ref{sec:gapped_foliated}, we show that some gapped iCS theories have foliated fractonic order, which was first identified in several exactly solvable fracton models. \cite{Shirley_2018, Shirley_2019, Shirley2019twisted} The iCS models cover both twisted and non-twisted foliated fractonic phases and can represent foliated phases without an exactly solvable limit. More interestingly, in section~\ref{sec:gapped_non-foliated}, we present a gapped iCS theory that is qualitatively different from any exactly solvable fracton model we know before. The ground state degeneracy does not follow exactly a simple exponential form, but approaches one in the thermodynamic limit. Quasiparticles move in planes and braiding statistics become ever more fractionalized as system size increases. This example represents a new class of gapped fractonic order. Note that the gapped iCS theories discussed in this paper are ``fractonic'' in the sense that they contain point ``planon'' excitations that move in 2D planes only but not the third direction. There are no true ``fracton'' excitations in these models which are completely localized when on their own. Finally, in section~\ref{sec:gapless}, we discuss an iCS theory which is gapless. On top of the gapless photon excitation, the system has a constant ground state degeneracy and fractional excitations generated by membrane operators. What kind of $3+1$D physics this model describes is an intriguing question. Some of these iCS theories have been studied in the context of three dimensional quantum Hall systems\cite{qiu1989phases,Qiu90-2, Naud00, Naud01} where their unusual properties of braiding statistics, edge states, etc., were first pointed out.
To substantiate the results we obtain from field theory analysis, we present an explicit lattice construction in section~\ref{sec:lattice}. The construction works for any quasi-diagonal infinite dimensional $K$ matrix with bounded entries (defined in section~\ref{sec:lattice}) and demonstrates that the corresponding iCS theory indeed describes the effective low energy physics of an anomaly-free $3+1$D local model. In particular, we write down a lattice Hamiltonian and the lattice form of the string operators for the planons, and calculate the spectrum (from field theory).
Finally, in section~\ref{sec:conclusion}, we summarize our results and discuss various open questions that follow from the initial exploration of iCS theories presented here.
In Appendix~\ref{sec:Km_statistics} we discuss the tangential problem of how to construct the $K$ matrix representation of a $2+1$D abelian topological order if we are given the fusion group and statistics of its anyons. This translates to the math problem of quadratic forms on finite abelian groups and a complete solution is known. \cite{WALL1963281, wang2020abelian} We present the procedure step by step for interested physics readers.
Note that in this paper, we are considering $2+1$D gauge fields with $2+1$D gauge symmetries although the model is a $3+1$D model. It is possible to add a $z$ component to the gauge field and modify the model so that it satisfies $3+1$D gauge symmetries. We find that in most such cases, the model becomes gapless, similar to the case studied in \onlinecite{Levin2009}. We leave these cases out of the scope of this paper.
\section{Gapped foliated theories}
\label{sec:gapped_foliated}
\begin{figure}[tbp]
\hbox{\hspace{-1em}
\includegraphics[width=0.5\textwidth]{foliation.png}}
\caption{Foliated fracton order and its interpretation in terms of $K$ matrix. In Figure~(a) (first line), we start with a system $H(L)$ of size $L$ in the $z$ direction. A finite depth local unitary circuit $U$ is applied to the green region $\{(x,y,z):z_1\le z\le z_2\}$. The result is the same system $H(L-1)$ of size $L-1$ in the $z$ direction and a decoupled $2+1$D gapped system (red layer). In Figure~(b) (second line), we start with a quasi-diagonal $K(N)$ of size $N\propto L$ with periodic boundary condition. Only entries in the blue region can be non-zero. We apply the transformation $K(N)\mapsto WK(N)W^T$, where $W\in\text{GL}(N,\mathbb{Z})$ shown in the dashed box is equal to the identity except in the green block, so the action of $W$ on $K(N)$ is within the green cross in the second figure. The result is the direct sum of the same system $K(N-a)$ of size $N-a$ and a decoupled block $K'$ of size $a=O(1)$ (red block).}
\label{fig:foliation}
\end{figure}
A number of the fracton models discovered so far have a ``foliation structure''.\cite{Shirley_2018, Shirley_2019, Shirley2019twisted} That is, a model with a larger system size can be mapped under a finite depth local unitary circuit $U$ to the same model with a smaller system size together with decoupled layers of $2+1$D gapped states, as shown in Fig.~\ref{fig:foliation}~(a). For example, it was shown \cite{Shirley_2018} that the X-cube model of size $L_x\times L_y \times L_z$ can be mapped to one with size $L_x \times L_y \times (L_z-1)$ together with a $2+1$D toric code. In fact, the same process can be implemented in all $x$, $y$ and $z$ directions, and hence the X-cube model is said to be ``3-foliated''. Other fracton models with a ``foliation structure'' include the semionic X-cube model,\cite{MaLayers, Shirley_2019_excitation} the checkerboard model,\cite{Shirley_2019_checkerboard} and the Majorana checkerboard model.\cite{Wang_2019}
An iCS theory can have a ``foliation structure'' as well and the $K$ matrix formulation provides a particularly simple mathematical framework to study it, as explained in Fig.~\ref{fig:foliation}~(b). Obviously the diagonal $K$ matrix, for example the one in Eq.~\ref{eq:Kd3}, represents a rather trivial 1-foliated fracton model where a model of height $L$ (in the stack direction) is the same as a model of height $L-1$ together with a decoupled 2D layer. Moreover, in Ref.~\onlinecite{Shirley2019twisted}, it was shown that it is possible to represent more nontrivial types of foliated fracton order using an iCS theory. In particular, it was shown that an infinite dimensional $K$ matrix of the form
\begin{equation}
\label{eq:K_F}
\let\quad\;
\def\-{\raisebox{.75pt}{-}}
K_\text{F}=\bordermatrix{
&&e_1&m_1&e_2&m_2&e_3&m_3&e_4& \cr
&\ddots & & & & & & & & \cr
&& 0& 2& \-1& & & & & \cr
&& 2& 0& & & & & & \cr
&& \-1& & 0& 2& \-1& & & \cr
&& & & 2& 0& & & & \cr
&& & & \-1& & 0& 2& \-1& \cr
&& & & & & 2& 0& & \cr
&& & & & &\-1 & & 0& \cr
&& & & & & & & & \ddots
}.
\end{equation}
describes a twisted 1-foliated fracton order. All non-zero entries in the matrix lie within distance $2$ from the main diagonal; the matrix is hence said to be quasi-diagonal. It is translation invariant with a period of $2$: $i\mapsto i+2$, $j\mapsto j+2$. We have added a subscript ``F'' to indicate that it is foliated. The meaning of the column labels will become clear once we take the inverse of this matrix.
To see what kind of physics this $K_\text{F}$ matrix describes, we first notice that the determinant of the $K_\text{F}$ matrix of size $2L$ is given by $\det{K_\text{F}(2L)} = (-4)^L$. Therefore, the ground state degeneracy on a 3D torus is given by
\[\log_2 \text{GSD} = 2L,\]
which takes a simple linear form in $L$. Next, the quasi-particle content can be read from the $K^{-1}_\text{F}$ matrix
\begin{equation}
\let\quad\;
K_\text{F}^{-1}=\frac{1}{4}\bordermatrix{
&& m_0&e_1&m_1&e_2&m_2&e_3&m_3& \cr
&\ddots & & & & & & & & \cr
&& 0 & & 1 & & & & & \cr
&& & 0 & 2 & & & & & \cr
&& 1 & 2 & 0 & & 1 & & & \cr
&& & & & 0 & 2 & & & \cr
&& & & 1 & 2 & 0 & & 1 & \cr
&& & & & & & 0 & 2 & \cr
&& & & & & 1 & 2 & 0 & \cr
&& & & & & & & & \ddots
}. \label{eq:KFi}
\end{equation}
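Concretely, the braiding data can be read off from this inverse in the standard way: an excitation with integer charge vector $l$ has self statistics $\theta=\pi\, l^T K_\text{F}^{-1} l$ and mutual statistics $\theta'=2\pi\, l^T K_\text{F}^{-1} l'$ with another excitation $l'$. For the unit vectors labelled above, this gives, for example,
\begin{equation*}
\theta_{e_i}=\theta_{m_i}=0,\qquad \theta'_{e_i m_i}=2\pi\cdot\tfrac{2}{4}=\pi,\qquad \theta'_{m_i m_{i+1}}=2\pi\cdot\tfrac{1}{4}=\tfrac{\pi}{2}.
\end{equation*}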
The column labels $e_i$ and $m_i$ follow from those in Eq.~\ref{eq:K_F}. We choose these labels because the statistics of $e_i$ and $m_i$ are similar to those in a $\mathbb{Z}_2$ gauge theory, where the $e$ and $m$ excitations are bosons with a mutual $-1$ braiding statistic. However, this $K_\text{F}$ matrix represents more than a decoupled stack of $\mathbb{Z}_2$ gauge theories, because neighboring $m$ excitations have mutual $i$ statistics. Indeed, it was shown in Ref.~\onlinecite{Shirley2019twisted} to describe a twisted 1-foliated fractonic order. That is,
\begin{enumerate}
\item The model is gapped and has fractional excitations that move only in the $xy$ plane, hence a fracton model.
\item The model of height $L$ in the $z$ direction (corresponding to a $K_\text{F}$ matrix of size $4L$) can be mapped to one of height $L-1$ (corresponding to a $K_\text{F}$ matrix of size $4L-4$) together with a $2+1$D topological state layer (a twisted $\mathbb{Z}_2\times \mathbb{Z}_2$ gauge theory in this case).
\item The model is not equivalent to a pure stack of $2+1$D topological layers. Note that the entries in $K_\text{F}^{-1}$ are strictly zero once we move sufficiently far away from the main diagonal.
\end{enumerate}
Comparing this to examples discussed in later sections, we see that this is a hallmark of foliated iCS theories.
The way to see the foliation structure is to apply a local, general linear transformation $W$ of the form
\begin{equation}
\let\quad\;
\def\-{\raisebox{.75pt}{-}}
W=\bordermatrix{
&& \tilde{e}_1&\tilde{m}_1&\tilde{e}^A&\tilde{m}^A&\tilde{e}^B&\tilde{m}^B&\tilde{e}_2&\tilde{m}_2&\tilde{e}_3& \cr
&\ddots & & & & & & & & & & \cr
e_1&& 1 & & & & \-1 & & & \-1 & & \cr
m_1&& & 1 & & & & & & & & \cr
e_2&& & & 1 & & & & & & & \cr
m_2&& & & & 1 & & & & 1 & & \cr
e_3&& & & & & 1 & & & & & \cr
m_3&& & 1 & & & & 1 & & & & \cr
e_4&& & & \-1 & & & & 1 & & & \cr
m_4&& & & & & & & & 1 & & \cr
e_5&& & & & & & & & & 1 & \cr
&& & & & & & & & & & \ddots
}, \label{eq:W}
\end{equation}
so that $K_\text{F}$ is transformed into
\begin{equation*}
\let\quad\;
\def\-{\raisebox{.75pt}{-}}
WK_\text{F}W^T=\bordermatrix{
&& \tilde{e}_1&\tilde{m}_1&\tilde{e}^A&\tilde{m}^A&\tilde{e}^B&\tilde{m}^B&\tilde{e}_2&\tilde{m}_2&\tilde{e}_3& \cr
&\ddots & & & & & & & & & & \cr
&& 0 & 2 & & & & & \-1 & & & \cr
&& 2 & 0 & & & & & & & & \cr
&& & & 0 & 2 & \-1 & 0 & & & & \cr
&& & & 2 & 0 & 0 & 0 & & & & \cr
&& & & \-1 & 0 & 0 & 2 & & & & \cr
&& & & 0 & 0 & 2 & 0 & & & & \cr
&& \-1 & & & & & & 0 & 2 & \-1 & \cr
&& & & & & & & 2 & 0 & & \cr
&& & & & & & & \-1 & & 0 & \cr
&& & & & & & & & & & \ddots
},
\end{equation*}
where the middle $4\times 4$ block is decoupled from the rest of the system and the remaining part of the transformed $K$ matrix is the same as the original one in Eq.~\ref{eq:K_F}, only slightly smaller. Note that, although the $W$ matrix looks quite big, it acts non-trivially only within the finite block shown in Eq.~\ref{eq:W}. Its action is the identity outside. This transformation hence realizes the renormalization group transformation \cite{VidalRG, Shirley_2018} of the 1-foliated fracton model formulated in terms of infinite dimensional $K$ matrices, as shown schematically in Fig.~\ref{fig:foliation}~(b).
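As a quick consistency check, the decoupled $4\times4$ block in the middle has determinant $16$, matching the $16$ anyons expected of a (possibly twisted) $\mathbb{Z}_2\times\mathbb{Z}_2$ gauge theory.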
The iCS theory, and correspondingly the infinite dimensional $K$ matrix, hence provide a convenient formulation for studying the foliation structure in a 1-foliated fracton model. The example discussed above can be generalized to a whole class of 1-foliated models with a similar foliation structure, as discussed in Appendix~\ref{sec:KFnr}.
\begin{comment}
Many of the type~I fracton orders discovered so far have a ``foliation structure''. That is, a model with a larger system size can be mapped under a finite depth local unitary circuit $U$ to the same model with a smaller system size together with decoupled layers of $2+1$D gapped states. A foliation structure naturally relates the same fracton system of different sizes, and since only $2+1$D systems are spit out, we expect the foliation procedure to preserve the $3+1$D physics. Depending on the model, the decoupled $2+1$D layers may lie in one or several directions, e.g. the X-cube model can spit out \xmc{choice of word} $2+1$D toric codes lying orthogonal to the $x$, $y$ or $z$ direction with a suitable choice of $U$. Since the planons in an iCS theory are restricted to planes orthogonal to the $z$ direction, it's natural to ask whether the theory is ``1-foliated'', in the sense that a system $H(L)$ of size $L$ in the $z$ direction can be mapped by $U$ to the same system $H(L-1)$ of size $L-1$ together with some decoupled $2+1$D gapped system $H'$ lying orthogonal to the $z$ direction, where $U$ is a finite depth local unitary circuit with compact support in the $z$ direction. More precisely, we require that $U$ act non-trivially only on sites $(x,y,z)$ where $z_1\le z\le z_2$ (see Fig.~\ref{fig:foliation}). In terms of $K$ matrix, this is if for each $K(N)$ of size $N\propto L$, there exists a matrix $W\in\text{GL}(N,\mathbb{Z})$ that is equal to the identity except in a block of finite size, such that $WK(N)W^T$ is decoupled into $K(N-a)$ and another block $K'$ of size $a=O(1)$.
\xcc{$H(L)$ refers to system size $L\times L \times L$? Does the system size decrease by one in all directions or just one direction?}
\xcc{What is the relation between $N$ and $L$ and $a$ and 1?}
\xcc{What does 1-foliated mean?}
In this section, we give an example of a gapped, foliated iCS theory, namely the twisted 1-foliated model described in (ref). The theory is obtained by condensing bosons in a stack of $\mathbb{Z}_n\times\mathbb{Z}_n$ twisted gauge theories, and has
\[K(N)=
\begin{pmatrix}
\ddots\\
& 0 & n & -r\\
& n & 0\\
& -r & & 0 & n & -r\\
& & & n & 0\\
& & & -r & & 0 & n\\
& & & & & n & 0\\
& & & & & & & \ddots
\end{pmatrix}_{4N\times4N},\]
where $n>0$ and $r$ can be chosen to take values in $\{0,1,...,n-1\}$. To see the foliation structure, take \[W=
\setcounter{MaxMatrixCols}{12}
\begin{pmatrix}
1 & & & & -1 & & & & 1\\
& 1\\
& & 1 & & & & -1\\
& & & 1 & & & & & & & & -1\\
& & & & 1\\
& & & & & 1 & & & & 1\\
& & & & & & 1\\
& & & 1 & & & & 1\\
& & & & 1 & & & & -1\\
& 1 & & & & & & & & -1\\
& & 1 & & & & -1 & & & & 1\\
& & & & & & & & & & & 1
\end{pmatrix},\]
where the action of $W$ outside this $12\times12$ block is the identity. We find that $WK(N)W^T$ decouples into $K(N-2)$ and two copies of $\mathbb{Z}_n\times\mathbb{Z}_n$ twisted gauge theories described by \[K_0=
\begin{pmatrix}
0 & n & -r\\
n & 0\\
-r & & 0 & n\\
& & n & 0
\end{pmatrix}.\]
The fusion group when $n$ and $r$ are coprime is
\begin{equation}
\label{eq:group_foliated}
G=
\begin{cases}
\mathbb{Z}_{n^2}^{2N-2}\times\mathbb{Z}_n^4 & \mbox{if } N \mbox{ is even},\\
\mathbb{Z}_{n^2}^{2N} & \mbox{if } N \mbox{ is odd, } n \mbox{ is odd,}\\
\mathbb{Z}_{n^2}^{2N-2}\times\mathbb{Z}_{n^2/2}^2\times\mathbb{Z}_2^2 & \mbox{if } N \mbox{ is odd, } n \mbox{ is even.}
\end{cases}
\end{equation}
When $n$ and $r$ are not coprime, we can factor out $\text{gcd}(n,r)$ from $K$ and analyze similarly. \xmc{comment on non-triviality}\xmc{unitary version}
\end{comment}
\section{Gapped non-foliated theories}
\label{sec:gapped_non-foliated}
While the iCS formulation is useful in the study of foliated fracton models, a more surprising finding is that iCS theories can also be non-foliated. Among all type~I fracton models -- ones with mobile fractional excitations -- known so far, the abelian ones are all foliated. A non-foliated abelian iCS theory, being an abelian type~I fracton model, hence extends our understanding of what is possible within the realm of fractonic order.
Consider the iCS theory with a simple tridiagonal $K$ matrix
\begin{equation}
\label{eq:KnF}
K_{\text{nF}}=\begin{pmatrix}
3 & 1 & & & 1 \\
1 & 3 & 1 & & \\
& \ddots & \ddots & \ddots & \\
& & 1 & 3 & 1 \\
1 & & & 1 & 3
\end{pmatrix}.
\end{equation}
Note that we have taken periodic boundary conditions in the matrix. The ``nF'' subscript denotes non-foliated. This theory was studied in Refs.~\onlinecite{Qiu90,Qiu90-2,Naud00,Naud01} as an effective theory for coupled fractional quantum Hall layers, where many of its properties were worked out. Here we look at the theory from a fracton perspective, that is, we address the question: is this a fracton model, and if so, what type of fracton model?
A field theory calculation shows that this theory is gapped (see section~\ref{sec:spectrum}). The determinant $D(N)$ of the matrix of size $N$, and hence the ground state degeneracy of the model on a 3D torus of height $N$, follow a rather complicated form
\begin{equation}
\label{eq:det(K)}
D(N) = \left(\frac{3+\sqrt{5}}{2}\right)^N+\left(\frac{3-\sqrt{5}}{2}\right)^N-2(-1)^N.
\end{equation}
The exponential growth of the GSD with system size indicates fractonic order. However, unlike in the foliated case, the GSD does not follow a simple exponential form (with possible pre-factors), but only approaches such a form in the thermodynamic limit $N\to \infty$, with an irrational base $\left(3+\sqrt{5}\right)/2$.
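For instance, the smallest nontrivial case $N=3$ provides a quick check of Eq.~\ref{eq:det(K)}:
\begin{equation*}
D(3)=\det\begin{pmatrix}3 & 1 & 1\\ 1 & 3 & 1\\ 1 & 1 & 3\end{pmatrix}=20,
\end{equation*}
in agreement with $a^3+b^3+2=(a+b)^3-3ab(a+b)+2=27-9+2=20$, where $a=\left(3+\sqrt{5}\right)/2$ and $b=\left(3-\sqrt{5}\right)/2$ satisfy $a+b=3$ and $ab=1$.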
Another way to see that this model is ``weird'' is from the fusion group and statistics of its planons. Such information can be read from the inverse of the matrix, which for size $N=5$ takes the form
\[K_{\text{nF}}^{-1} = \frac{1}{25} \begin{pmatrix} 11 & -4 & 1 & 1 & -4 \\ -4 & 11 & -4 & 1 & 1 \\ 1 & -4 & 11 & -4 & 1 \\ 1 & 1 & -4 & 11 & -4 \\ -4 & 1 & 1 & -4 & 11 \end{pmatrix},\]
and for size $N=7$ takes the form\footnote{The integers 1, 4, 11, 29, etc.\ are every other Lucas number: $1=L_1$, $4=L_3$, $11=L_5$, $29=L_7$, and so on.}
\[K_{\text{nF}}^{-1} = \frac{1}{65} \begin{pmatrix} 29 & -11 & 4 & -1 & -1 & 4 & -11 \\ -11 & 29 & -11 & 4 & -1 & -1 & 4 \\ 4 & -11 & 29 & -11 & 4 & -1 & -1 \\ -1 & 4 & -11 & 29 & -11 & 4 & -1 \\ -1 & -1 & 4 & -11 & 29 & -11 & 4 \\ 4 & -1 & -1 & 4 & -11 & 29 & -11 \\ -11 & 4 & -1 & -1 & 4 & -11 & 29 \end{pmatrix}.\]
Note the difference from the foliated case (for example Eq.~\ref{eq:KFi}). First of all, the magnitudes of the entries decay exponentially away from the main diagonal, but they never become exactly zero. Secondly, each entry varies as the system size increases and approaches an irrational number in the limit of infinite system size:
\begin{equation}
\label{eq:K_nF_inv}
\left(K^{-1}_{\text{nF}}\right)_{ij} \to \frac{(-1)^{i-j}}{\sqrt{5}}\left(\frac{3+\sqrt{5}}{2}\right)^{-|i-j|}.
\end{equation}
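For instance, the diagonal entries $11/25=0.440$ and $29/65\approx0.446$ already approach $1/\sqrt{5}\approx0.447$, while the nearest-neighbor entries $-4/25$ and $-11/65$ approach $-\frac{1}{\sqrt{5}}\left(\frac{3+\sqrt{5}}{2}\right)^{-1}\approx-0.171$.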
In terms of quantum Hall physics, this indicates an irrational amount of charge in layer $j$ attached to a flux inserted in layer $i$.\cite{qiu1989phases} In terms of abelian topological order, this indicates an irrational phase angle in the braiding statistics between the $i$th anyon and the $j$th anyon
\begin{equation*}
\theta_{ij} = 2\pi\frac{(-1)^{i-j}}{\sqrt{5}}\left(\frac{3+\sqrt{5}}{2}\right)^{-|i-j|}.
\end{equation*}
$K_{\text{nF}}$ of size $N$ gives a fusion group $G_N = \mathbb{Z}_{F_N} \times \mathbb{Z}_{5F_N}$, where $F_N$ is the $N$th number in the Fibonacci sequence. Therefore, the fusion group has two generators, one of order $F_N$, the other of order $5F_N$.
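As a consistency check, $|G_N|=F_N\cdot 5F_N=5F_N^2$ agrees with the determinant in Eq.~\ref{eq:det(K)}: since $\left(3\pm\sqrt{5}\right)/2=\varphi^{\pm2}$ with $\varphi$ the golden ratio, $D(N)=L_{2N}-2(-1)^N=5F_N^2$ by a standard Lucas--Fibonacci identity, where $L_{2N}$ is the $2N$th Lucas number. For example, $N=3$ gives $5F_3^2=20=D(3)$, and $N=5$ gives $5F_5^2=125=D(5)$.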
These features preclude a foliation structure in $K_{\text{nF}}$. In both the foliated and the non-foliated cases, the fusion group is exponentially large and correspondingly the ground state degeneracy grows exponentially with system size, but the underlying reasons for this growth are very different. In the foliated models, since the planons come from the hidden 2D layers, they have finite orders and correspondingly rational statistics; at the same time, the fusion group has many generators, a number that grows linearly with the system size. In the non-foliated example, however, the fusion group has only two generators, each of an order that grows exponentially with the system size. Their self and mutual statistics also become more and more fractionalized as the system size grows, eventually approaching an irrational phase angle.
It is therefore straightforward to see that the theory represented by $K_{\text{nF}}$ cannot be foliated; in particular, not every planon that is local in the $z$ direction can be decoupled into an anyon in a foliation layer. First, the ground state degeneracy does not follow a simple formula of the form $ab^N$, with $b$ an integer or a root of an integer, as expected in a foliated model. Secondly, the elementary planon has an order that grows without bound and nontrivial (although exponentially decaying) statistics with planons a large distance away. This cannot happen in a foliated model: when each foliation layer is inserted, we can apply a local unitary transformation to ``integrate'' the layer into the bulk, and the anyons that come from the layers can acquire a different (but still local) profile in the $z$ direction when becoming planons. In particular, if the unitaries have exponentially decaying tails, the profile can have exponential tails, which is not surprising. But it is not possible for the planons to acquire exponentially decaying tails in their statistics, because unitary transformations cannot change statistics. The only other thing we can do when mapping the anyons in the foliation layers into planons is to relabel them, i.e.\ choose a different set and call them elementary; but when combining anyons into a new generating set, it is not possible to combine fractions of them in the form of an exponentially decaying tail. Therefore, the exponentially decaying statistics precludes a foliation structure. Moreover, this ``profile'' of braiding statistics defines an intrinsic length scale in the system along the $z$ direction, determined entirely by the topological order, which cannot be tuned continuously. As we show below, this length scale characterizes the spread of the anyon string operators in the $z$ direction.
Similar phenomena can be found in many other iCS theories, as discussed in Appendix~\ref{sec:det}. In fact, the properties of $K_{\text{nF}}$ are so unusual that one may wonder if it represents a physical $3+1$D theory at all and, if so, whether the planons are indeed point excitations. In Refs.~\onlinecite{Qiu90,Qiu90-2, Naud00, Naud01}, the theory was studied in terms of its related Laughlin wave-function and the corresponding quantum Hall Hamiltonian, which partially addresses this question. In section~\ref{sec:lattice}, we address this question for all iCS theories with quasi-diagonal $K$ matrices through an explicit lattice construction. We show that all such theories are local $3+1$D models and that, in particular for $K_{\text{nF}}$, the elementary excitations are indeed point excitations; they move only in the $xy$ plane and are hence planons.
\section{Gapless theories}
\label{sec:gapless}
If we change the diagonal entries in Eq.~\ref{eq:KnF} from $3$ to $2$,
\begin{equation}
\label{eq:K_gl}
K_{\text{gl}}=\begin{pmatrix}
2 & 1 & & & 1 \\
1 & 2 & 1 & & \\
& \ddots & \ddots & \ddots & \\
& & 1 & 2 & 1 \\
1 & & & 1 & 2
\end{pmatrix},
\end{equation}
we get a very different theory. In particular, the calculation in section~\ref{sec:spectrum} shows that the theory becomes gapless. The ``gl'' subscript denotes ``gapless''. It is not clear what the nature of the gapless phase is. In this section, we will simply list some of the properties of $K_{\text{gl}}$.
The eigenvalues of $K_{\text{gl}}$ form a gapless band with a quadratic dispersion. Therefore, according to the discussion in section~\ref{sec:spectrum}, the photon sector of the theory is gapless with a quadratic dispersion in the $z$ direction. The band touches zero only when the size $N$ of the matrix is even, so the determinant of the matrix vanishes for even $N$; for odd $N$, the determinant is always $4$.
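For instance, for $N=3$,
\begin{equation*}
\det\begin{pmatrix}2 & 1 & 1\\ 1 & 2 & 1\\ 1 & 1 & 2\end{pmatrix}=\prod_{k=0}^{2}\left(2+2\cos\frac{2\pi k}{3}\right)=4\cdot1\cdot1=4.
\end{equation*}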
The inverse of the matrix looks like, for $N=5$,
\begin{equation*}
K_{\text{gl}}^{-1} = \frac{1}{4} \begin{pmatrix} 5 & -3 & 1 & 1 & -3 \\ -3 & 5 & -3 & 1 & 1 \\ 1 & -3 & 5 & -3 & 1\\ 1 & 1 & -3 & 5 & -3 \\ -3 & 1 & 1 & -3 & 5 \end{pmatrix},
\end{equation*}
while for $N=7$,
\begin{equation*}
K_{\text{gl}}^{-1} = \frac{1}{4} \begin{pmatrix} 7 & -5 & 3 & -1 & -1 & 3 & -5 \\ -5 & 7 & -5 & 3 & -1 & -1 & 3 \\ 3 & -5 & 7 & -5 & 3 & -1 & -1\\ -1 & 3 & -5 & 7 & -5 & 3 & -1 \\ -1 & -1 & 3 & -5 & 7 & -5 & 3 \\ 3 & -1 & -1 & 3 & -5 & 7 & -5 \\ -5 & 3 & -1 & -1 & 3 & -5 & 7\end{pmatrix}.
\end{equation*}
The fusion group in this case turns out to be $\mathbb{Z}_4$ and the topological spin of the generating anyon is $\theta=q\pi/4$, where $q=N\text{ mod }8$. Hence, the topological order is that of the $\nu=2q$ fermionic $\mathbb{Z}_2$ gauge theory in Kitaev's 16-fold way.\cite{KitaevAnyons} The magnitudes of the entries in $K_{\text{gl}}^{-1}$ decrease linearly away from the main diagonal. However, unlike for $K_{\text{nF}}$ in Eq.~\ref{eq:KnF}, the statistics do not become more fractional as the system size grows. Instead, the fractional part remains $\pm 1/4$ no matter the distance. Because of this, the fractional excitations are very different from those in $K_{\text{nF}}$. As we will show in section~\ref{sec:lattice}, while the fractional excitations in $K_{\text{nF}}$ have a localized profile in the $z$ direction and can be considered as point excitations, those in $K_{\text{gl}}$ have an extensive profile in the $z$ direction and should be regarded as line excitations (if such consideration is valid at all given the existence of gapless modes in the model).
$K_{\text{gl}}$ is a representative of the class of gapless iCS theories with quasi-diagonal $K$ matrices. In Ref.~\onlinecite{Qiu90-2}, it was mentioned that some of these theories might have an instability towards ``staging'', that is, translation symmetry breaking in the $z$ direction. Whether that always happens, or whether some of these theories might be gapless spin liquids or gapless fracton phases is not clear. We will leave more in-depth study of these gapless phases to future work.
\section{Lattice Construction}
\label{sec:lattice}
In the previous sections, we have presented some interesting and sometimes even surprising properties of the iCS theories without addressing one crucial question: are the iCS theories legitimate $3+1$D models? In particular, can we interpret the $i$ index in Eq.~\ref{eq:lagrangian} as a $z$ direction spatial coordinate? After all, the Chern-Simons gauge fields $\mathcal{A}^i$ are not local degrees of freedom and can have complicated commutation relations between one another. For example, when $e\to \infty$,
\begin{equation*}
\left[\mathcal{A}^i_x,\mathcal{A}^j_{y}\right] \propto K^{-1}_{ij}.
\end{equation*}
The situation is particularly worrisome in the case of $K_{\text{nF}}$ and $K_{\text{gl}}$ where the entries in $K^{-1}$ are all nonzero. It means that if we try to interpret $i$ and $j$ as the $z$ direction spatial coordinate, the gauge field in the $i$th layer $\mathcal{A}^i$ would have nontrivial commutation relation with the gauge field in the $j$th layer even though they are very far away.
This is related to the question of what the fractional excitations look like, in particular whether the ones associated with the unit vectors $(\ldots,0,0,1,0,0,\ldots)$ have a local profile in the $z$ direction. In the CS formulation, this seems to be the case because these excitations are unit gauge charges of the gauge field $\mathcal{A}^i$ and are created simply by string operators of the form (in the $e\to \infty$ limit)
\begin{equation*}
\mathcal{W}^i=\exp\left[-i\int_{\text{path}} dx_\alpha\mathcal{A}^i_\alpha\right],\label{eq:string_field}
\end{equation*}
but this seems to be at odds with the fact that the $i$th excitation has a nontrivial braiding statistic with the $j$th excitation no matter how far away they are.
In this section, we clarify these issues by presenting a lattice construction whose low energy effective theory is described by Eq.~\ref{eq:lagrangian}. Our construction works for any iCS theory with a quasi-diagonal $K$ matrix --- a symmetric integer matrix whose entries are zero beyond a certain distance from the main diagonal and whose nonzero entries are all bounded by some finite number. Therefore, our construction shows that all such iCS theories are legitimate $3+1$D local models. Moreover, we write down the explicit form of the string operators that generate fractional excitations and show that for $K_{\text{F}}$ and $K_{\text{nF}}$, the elementary excitations --- those associated with unit vectors $(\ldots,0,0,1,0,0,\ldots)$ --- are local in the $z$ direction and are hence point excitations. For $K_{\text{gl}}$, however, the elementary excitation is not localized in the $z$ direction and should not be thought of as a point excitation.
\begin{comment}
In this section, we construct a lattice model that realizes an arbitrary, quasi-diagonal $K$ with bounded entries. More precisely, $K$ is quasi-diagonal if all its entries beyond a certain distance from the diagonal are 0; $K$ has bounded entries if all its entries have absolute value at most some $k_{\text{max}}$. We then propose the form of the string operators at the lattice level and check that our proposal gives the correct statistics. This establishes our claim that the iCS field theory (\ref{eq:lagrangian}) is physical, and also allows us to see whether the anyons encoded in $K$ are point particles. \xmc{discuss}
\end{comment}
\subsection{Lattice model}
\label{subsec:model}
We now describe the lattice model that realizes a quasi-diagonal iCS theory. For clarity, we start with a toy example $K=(2)$, a $1\times1$ matrix. Although this $K$ matrix has finite dimension, it contains much of the relevant physics, and will also be revisited in Section~\ref{subsec:string} when we study string operators. We then proceed to the less trivial example of $K_{\text{nF}}$ defined in Eq.~\ref{eq:KnF}. Finally, we present the construction in full generality which works for arbitrary quasi-diagonal $K$ with bounded entries.
\begin{figure}[ht]
\centering
\includegraphics[width=0.231\textwidth]{eg_1.png}
\caption{Lattice model realizing $K=(2)$. The matter content of the system is two IQH layers $\Omega^1$ and $\Omega^2$ (blue lines) with Chern number $C^l=1$. Each layer is coupled with unit charge to a dynamical $U(1)$ gauge field $A$. }
\label{fig:eg_1}
\end{figure}
The $K=(2)$ CS theory can be realized as a chiral spin liquid, as discussed for example in Ref.~\onlinecite{Kalmeyer87}.
Here we present a more complicated construction so that it can be generalized to all iCS theories. We start with two integer quantum Hall (IQH) layers $\Omega^l$, $l=1,2$, with Chern number $C^l=1$. Each layer is a free fermion hopping model in the $xy$ plane. The fermions in each layer carry unit charge under a global charge conservation symmetry, and we can gauge the system by coupling the layers to a dynamical $U(1)$ gauge field $A$. More precisely, we add gauge degrees of freedom $A_{\mathbf{r}\mathbf{r}'}$ on the horizontal links $\left<\mathbf{r}\mathbf{r}'\right>$. As usual, we define the electric field $E_{\mathbf{r}\mathbf{r}'}$ as the conjugate variable to $A_{\mathbf{r}\mathbf{r}'}$, $[A_{\mathbf{r}\mathbf{r}'},E_{\mathbf{r}\mathbf{r}'}]=i$. The Hamiltonian after gauging is
\begin{align}
\label{eq:toy_hamiltonian}
H=&\sum_{l=1,2}\sum_{\left<\mathbf{r}\mathbf{r}'\right>}u_{\mathbf{r}\mathbf{r}'}e^{iA_{\mathbf{r}\mathbf{r}'}}c^{\dagger}_{l,\mathbf{r}'}c_{l,\mathbf{r}}+\sum_{\left<\mathbf{r}\mathbf{r}'\right>}g_E\left(E_{\mathbf{r}\mathbf{r}'}\right)^2\nonumber\\
&-g_B\sum_p\cos B_p+g_Q\sum_\mathbf{r}\left(Q_\mathbf{r}\right)^2,
\end{align}
where $\mathbf{r}$, $\mathbf{r}'$ are 2-component vectors labelling the sites in each layer, $u_{\mathbf{r}\mathbf{r}'}$ is the IQH hopping coefficient, $B_p$ is the flux of $A$ through plaquette $p$, and
\begin{equation}
\label{eq:toy_gauss}
Q_\mathbf{r}=(\nabla\cdot\mathbf{E})_\mathbf{r} -\sum_{l=1,2}c_{l,\mathbf{r}}^{\dagger}c_{l,\mathbf{r}}
\end{equation}
is the Gauss's law term (see Fig.~\ref{fig:B_and_Q}). Note that here Gauss's law is only being imposed as an energetic constraint, not a Hilbert space constraint. Because of this, the resulting theory is fermionic instead of bosonic. More specifically, we will show below that the resulting theory is the $K=2$ bosonic theory together with two decoupled fermionic IQH layers.
\begin{figure}[t]
\centering
\includegraphics[width=0.471\textwidth]{B_and_Q.png}
\caption{The flux and Gauss's law terms in Eq.~\ref{eq:hamiltonian}. Reversing the direction of the edges changes the signs in front of $A$ and $E$. In the context of our first example $K=(2)$, the index $i$ should be ignored and $q^{il}=1$ for $l=1,2$.}
\label{fig:B_and_Q}
\end{figure}
At low energies, the model is described by an effective CS theory (we kept only the topological CS terms and omitted the Maxwell term and source term $A_\mu J^\mu$)
\begin{equation}
\mathcal{L}=-\frac{1}{4\pi}\sum_{l=1,2}C^l \epsilon^{\mu\nu\lambda} a^l_{\mu}\partial_{\nu}a^l_{\lambda}+\frac{1}{2\pi}\sum_{l=1,2}\epsilon^{\mu\nu\lambda}A_\mu\partial_\nu a_\lambda^l,
\label{eq:toy_L}
\end{equation}
whose $K$ matrix with respect to the basis $(a^1,a^2,A)$ is
\begin{equation}
\label{eq:toy_K0}
K_0=
\begin{pmatrix}
-1 & 0 & 1\\
0 & -1 & 1\\
1 & 1 & 0
\end{pmatrix}.
\end{equation}
Note that an IQH layer with Chern number 1 corresponds to a $-1$ in the $K$ matrix. To see how $K_0$ relates to the desired $K=(2)$, we apply the transformation $K_0\mapsto\widetilde{K}_0=WK_0W^T$ with
\begin{equation*}
W=
\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
1 & 1 & 1
\end{pmatrix}.
\end{equation*}
We obtain
\begin{equation*}
\widetilde{K}_0=
\begin{pmatrix}
-1 & 0 & 0\\
0 & -1 & 0\\
0 & 0 & 2
\end{pmatrix}
\end{equation*}
in terms of the new fields
\begin{equation*}
\begin{pmatrix} \tilde{a}^1 \\ \tilde{a}^2 \\ \widetilde{A} \end{pmatrix} = \left(W^{-1}\right)^T\begin{pmatrix} a^1 \\ a^2 \\ A \end{pmatrix} = \begin{pmatrix} a^1-A \\ a^2 -A \\ A \end{pmatrix}.
\end{equation*}
We see that $\widetilde{K}_0$ contains the decoupled block $K=(2)$ in its lower right corner. We also have two decoupled IQH layers in $\widetilde{K}_0$, but these have no anyon content. Therefore, the construction, as written, realizes not exactly the $K=(2)$ theory, but a very close fermionic cousin represented by $\widetilde{K}_0$.
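Note also that the transformation preserves the topological data: $\det\widetilde{K}_0=\det K_0=2$, since $W\in\text{GL}(3,\mathbb{Z})$ has $\det W=1$.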
\begin{figure}[ht]
\centering
\includegraphics[width=0.233\textwidth]{eg_2.png}
\caption{Lattice model realizing $K_{\text{nF}}$. The matter content of the system is infinitely many IQH layers $\Omega^l$ (blue lines) with Chern number $C^l=1$. The layers are coupled with unit charge to infinitely many dynamical $U(1)$ gauge fields $A^i$ in the way indicated by the curly brackets. }
\label{fig:eg_2}
\end{figure}
Next, we consider the example of $K_{\text{nF}}$ defined in Eq.~\ref{eq:KnF}. To realize $K_{\text{nF}}$, we take infinitely many IQH layers $\Omega^l$, $l\in\mathbb{Z}$, each with Chern number $C^l=1$. We couple the layers to infinitely many dynamical $U(1)$ gauge fields $A^i$, $i\in\mathbb{Z}$, as follows: fermions in layers $\Omega^1$, $\Omega^2$, $\Omega^3$ have unit charge under $A^1$, those in layers $\Omega^3$, $\Omega^4$, $\Omega^5$ have unit charge under $A^2$, those in layers $\Omega^5$, $\Omega^6$, $\Omega^7$ have unit charge under $A^3$, etc. All other pairs of $\Omega^l$ and $A^i$ not following this pattern are uncoupled (see Fig.~\ref{fig:eg_2}). This model has a low energy effective CS theory similar to Eq.~\ref{eq:toy_L}, but now with $K$ matrix
\begin{equation*}
K_0=
\begin{pmatrix}
\ddots\\
& -1 & & 1\\
& & -1 & 1\\
& 1 & 1 & 0 & 1\\
& & & 1 & -1 & & 1\\
& & & & & -1 & 1\\
& & & & 1 & 1 & 0 & 1\\
& & & & & & 1 & -1\\
& & & & & & & & \ddots
\end{pmatrix}.
\end{equation*}
Here the basis is $\left(...,a^1,a^2,A^1,a^3,a^4,A^2,a^5,...\right)$. As in the previous example, we apply the transformation $K_0\mapsto\widetilde{K}_0=WK_0W^T$ with
\begin{equation*}
W=
\begin{pmatrix}
\ddots\\
& 1\\
& & 1\\
& 1 & 1 & 1 & 1\\
& & & & 1\\
& & & & & 1\\
& & & & 1 & 1 & 1 & 1\\
& & & & & & & 1\\
& & & & & & & & \ddots
\end{pmatrix}.
\end{equation*}
$W$ is a local transformation in the sense that it can be decomposed into two layers where each layer is a product of non-overlapping general linear transformations that act on three nearest neighbor dimensions. Borrowing the terminology for local unitary transformations, $W$ is a ``finite depth circuit'' of general linear transformations. This is an important point because it shows that locality is preserved when we map from the lattice model to the effective iCS theory. Specifically, $W$ can be decomposed into a finite product of block diagonal integer matrices $W=W_1W_2$, where
\begin{equation*}
W_1=
\begin{pmatrix}
\ddots\\
& 1\\
& & 1\\
& 1 & & 1\\
& & & & 1\\
& & & & & 1\\
& & & & 1 & & 1\\
& & & & & & & 1\\
& & & & & & & & \ddots
\end{pmatrix},
\end{equation*}
\begin{equation*}
W_2=
\begin{pmatrix}
\ddots\\
& 1\\
& & 1\\
& & 1 & 1 & 1\\
& & & & 1\\
& & & & & 1\\
& & & & & 1 & 1 & 1\\
& & & & & & & 1\\
& & & & & & & & \ddots
\end{pmatrix}.
\end{equation*}
The result of this transformation is
\begin{equation*}
\widetilde{K}_0=
\begin{pmatrix}
\ddots\\
& -1\\
& & -1\\
& & & 3 & & & 1\\
& & & & -1\\
& & & & & -1\\
& & & 1 & & & 3\\
& & & & & & & -1\\
& & & & & & & & \ddots
\end{pmatrix},
\end{equation*}
which breaks into $K_{\text{nF}}$ consisting of rows and columns with indices $...,3,6,9,...$ together with decoupled $\nu=1$ IQH layers. Similar to the previous case, we get almost the theory we want except for some extra IQH layers which do not have any impact on the anyon statistics of the theory.
Having discussed two examples, we finally present a construction that works for an arbitrary quasi-diagonal $K$ with bounded entries. Similar to the $K_{\text{nF}}$ example, we start with a stack of IQH layers $\Omega^l$. The Chern number of layer $l$ is $C^l=\pm1$, to be fixed later. We introduce gauge degrees of freedom $A_{\mathbf{r}\mathbf{r}'}^i$ and their conjugate momenta $E_{\mathbf{r}\mathbf{r}'}^i$ on the horizontal links $\left<\mathbf{r}\mathbf{r}'\right>$ and impose the commutation relation $[A_{\mathbf{r}\mathbf{r}'}^j,E_{\mathbf{r}\mathbf{r}'}^k]=i\delta_{jk}$ as usual. We then couple $\Omega^l$ to $A^i$ with charge $q^{il}$, also to be fixed later. The resulting Hamiltonian is
\begin{multline}
\label{eq:hamiltonian}
H=\sum_l\sum_{\left<\mathbf{r}\mathbf{r}'\right>}u_{l,\mathbf{r}\mathbf{r}'}\exp{\left(i\sum_i q^{il}A^i_{\mathbf{r}\mathbf{r}'}\right)}c^{\dagger}_{l,\mathbf{r}'}c_{l,\mathbf{r}}\\
+\sum_i\left[\sum_{\left<\mathbf{r}\mathbf{r}'\right>}g_E\left(E_{\mathbf{r}\mathbf{r}'}^i\right)^2-g_B\sum_p\cos B_p^i\right.\\
\left.+g_Q\sum_\mathbf{r}\left(Q_\mathbf{r}^i\right)^2\right],
\end{multline}
where $u_{l,\mathbf{r}\mathbf{r}'}$ is the IQH hopping coefficient determined by $C^l$, $B_p^i$ is the flux of $A^i$ through plaquette $p$, and
\begin{equation}
\label{eq:lattice_gauss}
Q_\mathbf{r}^i=(\nabla\cdot\mathbf{E})_\mathbf{r}^i -\sum_l q^{il} c_{l,\mathbf{r}}^{\dagger}c_{l,\mathbf{r}}
\end{equation}
is the Gauss's law term (see Fig.~\ref{fig:B_and_Q}). We think of the fermion and gauge field layers as interlaced in the $z$ direction. The interactions are local as long as only a finite number of neighboring layers are charged under each $A^i$, or equivalently, each row and column of $q^{il}$ has bounded support, which turns out to hold with our choice of $q^{il}$ later. The low energy field theory of Eq.~\ref{eq:hamiltonian} is given by
\begin{equation}
\label{eq:general_L}
\mathcal{L}=-\frac{1}{4\pi}\sum_{l}C^l \epsilon^{\mu\nu\lambda} a^l_{\mu}\partial_{\nu}a^l_{\lambda}+\frac{1}{2\pi}\sum_{il}q^{il}\epsilon^{\mu\nu\lambda}A_\mu^i\partial_\nu a_\lambda^l.
\end{equation}
Here we have only kept the CS terms and omitted the Maxwell and source terms.
To realize a particular $K=(K_{ij})$, we need to specify $\Omega^l$, $C^l$ and $q^{il}$. We adopt the following setup:
\begin{enumerate}[leftmargin=*]
\item For each index $i$ of $K$, we have a dynamical $U(1)$ gauge field $A^i$.
\item For each $i$ such that \[\Delta_i:=K_{ii}-\sum_{j\neq i}K_{ij}\neq0,\] we have IQH layers $\Omega^{i,s}_\text{d}$ where $s=1,2,...,|\Delta_i|$ and the subscript ``d'' stands for ``diagonal''. Each $\Omega^{i,s}_\text{d}$ has Chern number $C_\text{d}^i=\text{sgn}(\Delta_i)$ and carries $+1$ charge under $A^i$ only. The emergent gauge field of $\Omega^{i,s}_\text{d}$ is denoted by $a_\text{d}^{i,s}$. If $\Delta_i=0$, no diagonal layer is needed.
\item For each pair $i<j$ such that $K_{ij}\neq0$, we have IQH layers $\Omega^{ij,t}_\text{o}$ where $t=1,2,...,|K_{ij}|$ and the subscript ``o'' stands for ``off-diagonal''. Each $\Omega^{ij,t}_\text{o}$ has Chern number $C_\text{o}^{ij}=\text{sgn}(K_{ij})$ and carries $+1$ charge under $A^i$ and $A^j$ only. The emergent gauge field of $\Omega^{ij,t}_\text{o}$ is denoted by $a_\text{o}^{ij,t}$.
\end{enumerate}
Since $K$ is quasi-diagonal with bounded entries, all these IQH layers $\Omega$ and physical gauge fields $A$ can be interlaced in the $z$-direction in such a way that the interaction is local. We denote by $\mathbfcal{A}$ the collection of emergent and physical gauge fields ordered in this particular way, and $K_0$ the $K$ matrix of the CS theory Eq.~\ref{eq:general_L} with respect to the basis $\mathbfcal{A}$. Next, we apply the local transformation $\widetilde{\mathcal{A}}^i=\sum_j \left(W^{-1}\right)^{ji}\mathcal{A}^j$, $\widetilde{K}_0=WK_0W^T$ defined by
\begin{align*}
\tilde{a}_\text{d}^{i,s}&=a_\text{d}^{i,s}-\text{sgn}(\Delta_i)A^i,\\
\tilde{a}_\text{o}^{ij,t}&=a_\text{o}^{ij,t}-\text{sgn}(K_{ij})\left(A^i+A^j\right),\\
\widetilde{A}^i&=A^i.
\end{align*}
This transformation is local in the sense that $W$ can be decomposed into a finite depth circuit (i.e. a finite product) of local, block diagonal integer matrices. In fact, the circuit has depth 2. The first step of the circuit is to map
\begin{align*}
a_\text{d}^{i,s}&\mapsto a_\text{d}^{i,s}-\text{sgn}(\Delta_i)A^i,\\
a_\text{o}^{ij,t}&\mapsto a_\text{o}^{ij,t}-\text{sgn}(K_{ij})A^i,
\end{align*}
and the second step is to map \[a_\text{o}^{ij,t}\mapsto a_\text{o}^{ij,t}-\text{sgn}(K_{ij})A^j.\] Each step is block diagonal because each $a$ is modified by at most one $A$, and each block is local because we have arranged the degrees of freedom in the $z$ direction such that each $A^i$ is some finite distance away from each $a_\text{d}^{i,s}$ and $a_\text{o}^{ij,t}$. After the transformation, the $\tilde{a}_\text{d}$ and $\tilde{a}_\text{o}$ fields are in decoupled IQH states, and the $\widetilde{A}$ fields have the desired $K$ matrix.
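One can verify that the $\widetilde{A}$ fields indeed carry the desired $K$ matrix: collecting the charge assignments above, the CS coupling of the $\widetilde{A}$ sector is
\begin{equation*}
\widetilde{K}_{ij}=\sum_l C^l q^{il}q^{jl}=\Delta_i\,\delta_{ij}+\sum_{k<l}K_{kl}\left(\delta_{ik}+\delta_{il}\right)\left(\delta_{jk}+\delta_{jl}\right),
\end{equation*}
which equals $\Delta_i+\sum_{j'\neq i}K_{ij'}=K_{ii}$ for $i=j$ and $K_{ij}$ for $i\neq j$.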
We conclude this subsection by relating the general construction to the two examples we gave. For $K=(2)$, we have
\begin{align*}
\mathbfcal{A}&=\left(a^1,a^2,A\right)\nonumber\\
&=\left(a_\text{d}^{1,1},a_\text{d}^{1,2},A^1\right),
\end{align*}
with no ``off-diagonal'' layers. For $K_{\text{nF}}$, we have
\begin{align*}
\mathbfcal{A}&=\left(...,a^1,a^2,A^1,a^3,a^4,A^2,a^5,...\right)\nonumber\\
&=\left(...,a_\text{o}^{01,1},a_\text{d}^{1,1},A^1,a_\text{o}^{12,1},a_\text{d}^{2,1},A^2,a_\text{o}^{23,1},...\right).
\end{align*}
\subsection{Spectrum of iCS theory}
\label{sec:spectrum}
Given an iCS theory, we can calculate its spectrum from its Lagrangian Eq.~\ref{eq:lagrangian}. Note that it is important to include the Maxwell term for this calculation. In the temporal gauge $A_0=0$, the equations of motion are
\begin{align*}
\partial_t^2A_x^i+\partial_x\partial_yA_y^i-\partial_y^2A_x^i+\frac{e^2}{2\pi}K_{ij}\partial_tA_y^j&=0,\\
\partial_t^2A_y^i+\partial_x\partial_yA_x^i-\partial_x^2A_y^i-\frac{e^2}{2\pi}K_{ij}\partial_tA_x^j&=0.
\end{align*}
They are solved by \[A_{x,y}^i=\alpha_{x,y}v^i_q\exp{\left[i(k_xx+k_yy-\omega t)\right],}\]
where $v^i_q$ is an eigenvector of $K$ with eigenvalue $K_q$, labelled by $q$. We find the spectrum \[\omega^2=k_x^2+k_y^2+\left(\frac{e^2}{2\pi}K_q\right)^2.\] If $K$ is invariant under translation along the diagonal, as for $K_{\text{nF}}$ and $K_{\text{gl}}$, then $q$ is the momentum in the $z$ direction. For $K_{\text{nF}}$ we have $K_q=3+2\cos q$, which never vanishes, so the whole spectrum is gapped. For $K_{\text{gl}}$ we have $K_q=2+2\cos q$, which vanishes at $q=\pi$, so the spectrum is gapless with a zero mode at momentum $(k_x,k_y,q) = (0,0,\pi)$.
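In particular, at $k_x=k_y=0$ the photon gap for $K_{\text{nF}}$ is $\frac{e^2}{2\pi}\min_q|K_q|=\frac{e^2}{2\pi}$, attained at $q=\pi$.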
\subsection{String operators}
\label{subsec:string}
We now study the string operators of the fractional excitations in our lattice model Eq.~\ref{eq:hamiltonian}. We work in the limit of $g_E=0$, and will argue later about the case where $g_E$ is nonzero but small. For simplicity, we first consider the example $K=(2)$ studied in section~\ref{subsec:model}, for which we wrote down a lattice model with low energy CS theory given by Eq.~\ref{eq:toy_K0}. This system contains one type of fractional excitation, which is a semion. A charge vector that lies in the semion superselection sector is $Q=(0,0,1)^T$; the general form of a semion charge vector is $(-a,-b,a+b+2c+1)^T$ where $a,b,c\in\mathbb{Z}$. The flux vector attached to $Q$ is
\begin{equation}
\label{eq:toy_attach}
\Phi=-2\pi K_0^{-1}Q=
\begin{pmatrix}
-\pi \\ -\pi \\ -\pi
\end{pmatrix}.
\end{equation}
The $-\pi$ fluxes for the emergent fields $a^1$ and $a^2$ should be interpreted as $-1/2$ fermion charges in each fermion layer. Therefore, the semion consists of a $+1$ \textit{external} charge, a $-\pi$ dynamical flux, a $-1/2$ charge in $\Omega^1$ and a $-1/2$ charge in $\Omega^2$.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{string.png}
\caption{The string operator for the lattice model realizing $K=(2)$. Fermions live in the blue layers $\Omega^1$ and $\Omega^2$, and the gauge field in the middle, green layer. The operators $\mathcal{O}_l$ are generated by hopping operators $c^{\dagger}_{l,\mathbf{r}'}e^{iA_{\mathbf{r}\mathbf{r}'}}c_{l,\mathbf{r}}$. The action of $\mathcal{O}_l$ is non-trivial only near the path (grey region), and is exponentially close to the identity away from the path. The string operator $\mathcal{W}$ consists of $e^{-iA}$ acting on the dashed red line, $e^{-i\pi E}$ acting on the solid red segments and $\mathcal{O}_1$, $\mathcal{O}_2$ acting near the path.}
\label{fig:string}
\end{figure}
The string operator $\mathcal{W}$ consists of three parts, $\mathcal{W}=\mathcal{W}_1\mathcal{W}_2\mathcal{W}_3$, as follows (see Fig.~\ref{fig:string}):
\begin{enumerate}[leftmargin=*]
\item $\mathcal{W}_1=\prod_{\text{path}}e^{-iA}$ acts on the dynamical gauge field along the path and creates a $+1$ external charge at the end of the path (and a $-1$ charge at the start).
\item $\mathcal{W}_2=\prod_{\perp\text{path}} e^{-i\pi E}$ acts on the dynamical gauge field along adjacent links to the right of and perpendicular to the path, and creates a $-\pi$ flux at the end of the path. $\mathcal{W}_2$ acts only on the gauge DOF.
\item $\mathcal{W}_3$ is the quasi-adiabatic response\cite{Hastings2005} of the fermions to the $-\pi$ flux insertion. More precisely, in each gauge field sector $\{A_{\mathbf{r}\mathbf{r}'}\}$ of the Hilbert space, we insert an external $-\pi$ flux adiabatically, which is implemented by an $A$-dependent evolution operator $\mathcal{W}_3[A]$ on the fermion Hilbert space. As the fermion hopping model is not exactly solvable, we do not know the exact expression for $\mathcal{W}_3[A]$ except that it is of the form \[\mathcal{W}_3[A]=\mathcal{O}_1\left[c^{\dagger}_{1,\mathbf{r}'}e^{iA_{\mathbf{r}\mathbf{r}'}}c_{1,\mathbf{r}}\right] \mathcal{O}_2\left[c^{\dagger}_{2,\mathbf{r}'}e^{iA_{\mathbf{r}\mathbf{r}'}}c_{2,\mathbf{r}}\right],\] where $\mathcal{O}_l\left[c^{\dagger}_{l,\mathbf{r}'}e^{iA_{\mathbf{r}\mathbf{r}'}}c_{l,\mathbf{r}}\right]$ is some gauge invariant operator generated by hopping operators $c^{\dagger}_{l,\mathbf{r}'}e^{iA_{\mathbf{r}\mathbf{r}'}}c_{l,\mathbf{r}}$. Nonetheless, properties of quasi-adiabatic evolution ensure that $\mathcal{W}_3[A]$ is local, acting only near the path (grey region in Fig.~\ref{fig:string}). A $-1/2$ charge in $\Omega^1$ and a $-1/2$ charge in $\Omega^2$ are accumulated in the process near the end of the string operator, which correspond to the $-\pi$ fluxes of $a^1$ and $a^2$.
\end{enumerate}
We check the correctness of our string operator by computing the semion braiding phase, which we expect to be $2\pi Q^T K_0^{-1}Q=\pi$. To see this from the string operator, we break the overall commutation relation into the commutations of:
\begin{enumerate}[leftmargin=*]
\item $\mathcal{W}_1$ with $\mathcal{W}_2$. This takes a $+1$ charge counterclockwise around a $-\pi$ flux, giving a phase of $\pi$.
\item $\mathcal{W}_2$ with $\mathcal{W}_1$. This gives a phase of $\pi$ for the same reason.
\item the product $\mathcal{W}_2\mathcal{W}_3$ of one string operator with that of the other. This contributes a phase $-\pi$, which can be understood as the Berry phase obtained from the following actions on the fermions: increasing the (background) flux in the $x$ direction by $\pi$, increasing the flux in the $y$ direction by $\pi$, decreasing the flux in the $x$ direction by $\pi$, and decreasing the flux in the $y$ direction by $\pi$. In each IQH layer, the Berry phase over the entire flux parameter space $[0,2\pi)^2$ is $-2\pi$; the Berry phase over a quarter of the parameter space is therefore $-\pi/2$. As we have two IQH layers, the total phase is $-\pi$.
\item $\mathcal{W}_1$ with itself, $\mathcal{W}_1$ with $\mathcal{W}_3$ and $\mathcal{W}_3$ with $\mathcal{W}_1$. All of these are trivial.
\end{enumerate}
Summing these contributions, we find a total braiding phase $\pi+\pi-\pi=\pi$, as expected. Of course, phases are defined mod $2\pi$, but we have been careful to distinguish, e.g., $-\pi$ from $\pi$, so that this argument extends naturally to general $K$.
So far we have considered the $g_E=0$ limit, where we showed that $\mathcal{W}$ is a string operator for the charge vector $Q=(0,0,1)^T$. In fact, in this limit we could write down many other different string operators for $Q$ which all commute with the Hamiltonian except near the end points. For example, we could have $\mathcal{W}'=\mathcal{W}'_1\mathcal{W}'_2\mathcal{W}'_3$ where $\mathcal{W}'_1=\prod_{\text{path}}e^{-iA}$ as before, $\mathcal{W}'_2=\prod_{\perp\text{path}}e^{i\theta E}$ for arbitrary $\theta$ and $\mathcal{W}'_3$ is the quasi-adiabatic response of the fermions to a $\theta$ flux insertion. To see why we chose the particular $\mathcal{W}$ that satisfies the charge-flux attachment condition Eq.~\ref{eq:toy_attach}, we turn on a small $g_E>0$, much smaller than the other couplings in the Hamiltonian and the Landau level spacing. Now if the string operator creates a $\theta$ flux and hence a $\theta/2\pi$ charge in each IQH layer, then Gauss's law (Eq.~\ref{eq:toy_gauss}) implies \[\nabla\cdot\mathbf{E}=1+\frac{\theta}{\pi}.\] If $\nabla\cdot\mathbf{E}\neq 0$, then we have an electric energy that diverges at least logarithmically. Therefore, we must choose $\theta=-\pi$ so that $\nabla\cdot\mathbf{E}=0$. This way, when $g_E>0$, it is possible to modify $\mathcal{W}$ in a region near the path such that the electric energy is finite. Furthermore, since $g_E$ is small, the gauge field sectors $\{A_{\mathbf{r}\mathbf{r}'}\}$ that are present in the ground state can differ from the flat configuration $B\equiv0$ at most by a small perturbation. Therefore, even with the new hopping coefficients $u_{l,\mathbf{r}\mathbf{r}'}e^{iA_{\mathbf{r}\mathbf{r}'}}$, the fermions are still in a $C^l=1$ IQH state, so the $-\pi$ flux is indeed bound with a $-1/2$ charge in each layer. The exact expression of the new $\mathcal{W}$ is not important, and the braiding statistic remains unchanged as long as the correct amount of external charge, fermion charge and flux are created.
A similar construction of string operators works for iCS theories. When $g_E=0$, the string operator $\mathcal{W}^i$ corresponding to standard basis vector $e_i$ takes the form $\mathcal{W}^i=\mathcal{W}^i_1\mathcal{W}^i_2\mathcal{W}^i_3$. First, $\mathcal{W}_1^i=\prod_{\text{path}}e^{-iA^i}$ creates a $+1$ external $A^i$ charge. Next,
\begin{equation*}
\mathcal{W}^i_2=\prod_{\perp\text{path}}\exp\left[-2\pi i\sum_j\left(K^{-1}\right)^{ij}E^j\right]
\end{equation*}
creates fluxes according to the $i$th row of $K^{-1}$, as required by Gauss's law Eq.~\ref{eq:lattice_gauss} when a small $g_E>0$ is present. The IQH layers then respond quasi-adiabatically, giving an evolution operator $\mathcal{W}^i_3$. The braiding statistic of $\mathcal{W}^i$ and $\mathcal{W}^j$ results from the commutations of $\mathcal{W}^i_1$ with $\mathcal{W}^j_2$, $\mathcal{W}^i_2$ with $\mathcal{W}^j_1$, and $\mathcal{W}^i_2\mathcal{W}^i_3$ with $\mathcal{W}^j_2\mathcal{W}^j_3$. In particular, the commutation of $\mathcal{W}^i_2\mathcal{W}^i_3$ with $\mathcal{W}^j_2\mathcal{W}^j_3$ correspond to the following actions on the fermions: increasing the (background) $A^k$ flux in the $x$ direction by $2\pi\left(K^{-1}\right)^{ik}$ for all $k$, increasing the $A^l$ flux in the $y$ direction by $2\pi\left(K^{-1}\right)^{jl}$ for all $l$, decreasing the $A^k$ flux in the $x$ direction by $2\pi\left(K^{-1}\right)^{ik}$ for all $k$, decreasing the $A^l$ flux in the $y$ direction by $2\pi\left(K^{-1}\right)^{jl}$ for all $l$. A diagonal layer $\Omega_\text{d}^{k,s}$ is coupled to $A^k$ only, and contributes a Berry phase of
\begin{equation*}
\theta_\text{d,ij}^k=-2\pi\,\text{sgn}(\Delta_k)\left(K^{-1}\right)^{ik}\left(K^{-1}\right)^{jk},
\end{equation*}
whereas an off-diagonal layer $\Omega_\text{o}^{kl,t}$, $k<l$, is coupled to $A^k$ and $A^l$, and contributes
\begin{multline*}
\theta_\text{o,ij}^{kl}=-2\pi\,\text{sgn}(K_{kl})\left[\left(K^{-1}\right)^{ik}+\left(K^{-1}\right)^{il}\right]\\
\times\left[\left(K^{-1}\right)^{jk}+\left(K^{-1}\right)^{jl}\right].
\end{multline*}
The braiding phase of $\mathcal{W}_2^i\mathcal{W}_3^i$ with $\mathcal{W}_2^j\mathcal{W}_3^j$ is then
\begin{equation*}
\sum_k |\Delta_k|\theta_\text{d,ij}^k+\sum_{k<l} |K_{kl}|\theta_\text{o,ij}^{kl}=-2\pi\left(K^{-1}\right)^{ij},
\end{equation*}
as can be confirmed by a straightforward calculation. Finally, we find the total braiding phase to be
\begin{equation*}
2\pi \left(K^{-1}\right)^{ij}+2\pi \left(K^{-1}\right)^{ij}-2\pi \left(K^{-1}\right)^{ij}=2\pi \left(K^{-1}\right)^{ij},
\end{equation*}
as expected.
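For completeness, the ``straightforward calculation'' uses the same decomposition of $K$ into diagonal and off-diagonal layer contributions as in the construction of section~\ref{subsec:model}:
\begin{multline*}
\sum_k\Delta_k\left(K^{-1}\right)^{ik}\left(K^{-1}\right)^{jk}+\sum_{k<l}K_{kl}\left[\left(K^{-1}\right)^{ik}+\left(K^{-1}\right)^{il}\right]\left[\left(K^{-1}\right)^{jk}+\left(K^{-1}\right)^{jl}\right]\\
=\sum_{k,l}\left(K^{-1}\right)^{ik}K_{kl}\left(K^{-1}\right)^{lj}=\left(K^{-1}\right)^{ij},
\end{multline*}
which, multiplied by $-2\pi$, reproduces the phase quoted above.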
The string operators allow us to understand the profile of fractional excitations in the $z$ direction, which is determined by the fractional part of $K^{-1}$ (the integral part of $K^{-1}$ corresponds to local fermion and integer flux excitations). In particular, for the example of $K_{\text{nF}}$, the entries of each row of $K_{\text{nF}}^{-1}$ decay exponentially, which means that both $\mathcal{W}^i_2$ and $\mathcal{W}^i_3$ become exponentially close to the identity as we move in the $z$ direction ($\mathcal{W}^i_1$ is always local in the $z$ direction). Therefore, $\mathcal{W}^i$ is local in the $z$ direction with an exponentially decaying tail, and the fractional excitations are localized particles. On the other hand, for $K_{\text{gl}}$, the fractional parts of the entries of $K_{\text{gl}}^{-1}$ do not decay. This means that the fractional excitations in the $K_{\text{gl}}$ theory, if valid at all, are line excitations extended along the $z$ direction.
\section{Discussion}
\label{sec:conclusion}
In this paper, we established the iCS theory as a viable path to study a variety of fractonic phases. First of all, we showed in section~\ref{subsec:model} that iCS theories with a quasi-diagonal $K$ matrix are indeed legitimate local $3+1$D models by giving an explicit lattice realization for the theory. Using the method discussed in section~\ref{sec:spectrum}, we can further determine which iCS theories are gapped and which are gapless. Moreover, we found in section~\ref{subsec:string} the explicit form of the string operators that create the fractional excitations in the model. From the string operators, we can learn about the nature of the fractional excitations (for example when they are localized point excitations versus when they are extensive line excitations). Based on this understanding, we found through examples an interesting variety of fractonic phenomena in iCS theories with quasi-diagonal $K$ matrices. There are 1-foliated fracton models; there are abelian type~I models which do not have a foliation structure --- a feature not present in previously studied models; and there are gapless theories whose nature is not clear. Some of the non-foliated gapped models have been studied previously from the perspective of coupled fractional quantum Hall layers\cite{Qiu90,Qiu90-2,Naud00,Naud01}, which interestingly suggests a route toward experimental realization of these particular fracton phases.
The next step would be to study iCS theories more systematically and address questions such as
\begin{enumerate}[leftmargin=*]
\item For gapped iCS theories, how can foliated theories be distinguished from non-foliated ones?
\item If an iCS theory is foliated, how does one find the RG procedure that extracts 2D layers?
\item How can we understand the non-foliated models, for example in terms of RG $s$-sourcery?\cite{SwingleSSource}
\item What is the nature of the gapless models?
\end{enumerate}
We hope that by addressing these questions, we can have a more complete picture of possible fractonic orders, beyond what we can learn from exactly solvable models or other frameworks.
Of course, the possibilities represented by the iCS theories are limited. The only kind of fractional excitations in these models are planons in the $xy$ plane; they do not even contain fracton excitations, which are completely immobile. But as we have learned from previous studies, planons play an important role in type~I models. Once we have a better understanding of planons, perhaps we can combine them with fractons and other sub-dimensional excitations to achieve a more complete picture.
\begin{acknowledgments}
We are indebted to inspiring discussions with Tina Zhang, Xiao-Gang Wen, Yuan-Ming Lu, Zhenghan Wang, and Kevin Slagle. We also thank Po-Shen Hsin for pointing out that a fermionic abelian topological order can always be decomposed into a bosonic abelian topological order and transparent fermions. X. M, W.S. and X.C. are supported by the National Science Foundation under award number DMR-1654340, the Simons collaboration on ``Ultra-Quantum Matter'' and the Institute for Quantum Information and Matter at Caltech. X.C. is also supported by the Walter Burke Institute for Theoretical Physics at Caltech. M. C. is supported by NSF CAREER (DMR-1846109) and the Alfred P. Sloan foundation. This work was supported in part by
funds provided by the U.S. Department of Energy
(D.O.E.) under cooperative research agreement
DE-SC0009919, by the Simons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons Foundation (651440, XC, ML, JM, XM, WS).
\end{acknowledgments}
\section{Introduction}
\label{sec:intro}
Regression trees (RT) focus on the problems in which the target variables can take continuous values (typically real numbers), such as stock price \cite{patel2015predicting}, in-hospital mortality \cite{fonarow2005risk}, landslide susceptibility \cite{felicisimo2013mapping} and mining capital cost \cite{nourali2020regression}. Apart from the direct application of single regression tree, the performance can be further improved via aggregating more trees, e.g., bootstrap aggregation \cite{breiman1996bagging}, gradient boosted trees \cite{friedman2001greedy} and Xgboost \cite{chen2016xgboost}.
In the single-output regression problem, the induction of regression trees can be divided into two stages: the construction stage and the prediction stage. In the construction stage, a regression tree is learned by recursively splitting the data, so that the samples are eventually divided into several disjoint subsets (i.e., the leaf nodes). In the prediction stage, a value is fitted for each leaf node, and the most popular prediction is the mean of the sample targets in the corresponding leaf node \cite{breiman1984classification}. For decades, several methods have been proposed to replace the sample-mean prediction strategy in single-output RT, e.g., linear regression \cite{quinlan1992learning}, k-Nearest Neighbor \cite{weiss1995rule}, kernel regression \cite{deng1995multiresolution} and a hybrid system of these methods \cite{torgo1997functional}. These methods improve performance on some regression tasks; however, none of them works for all regression tasks. Besides, the improvement achieved by some of these alternative methods comes at the cost of efficiency, whereas fast decision-making based on regression tree predictions is needed in situations such as the stock market. A new approach is therefore needed that stably improves single-output RT prediction across tasks while maintaining high efficiency.
Besides, the existing estimation methods in single-output RT share a common drawback: all of them use only the training samples in a single node, which means that global data information is not well used. Typically, the input-output relation is nonlinear and varies over datasets. Thus, predictive models that use only local data information show limited generalization performance. In recent years, convolutional neural networks (CNNs) have swept many fields \cite{krizhevsky2012imagenet,zhang2018visual}. One main advantage of CNNs is that the receptive field increases as the network goes deeper, which means that deep CNNs take advantage of global information in the whole image. In the multi-output regression problem, a regularization-based method was proposed to exploit output relatedness when estimating models at leaf nodes, which verifies the importance of global effects for improving generalization performance in regression tasks \cite{jeong2020regularization}.
In this paper, we propose a single-output regression tree that exploits global information from all training samples, using the James-Stein (JS) estimator, named the James-Stein regression tree (JSRT). We first use the JS estimator only in the prediction stage of RT, obtaining P-JSRT. P-JSRT estimates values for all leaf nodes simultaneously; the estimated value of each leaf node combines local data information from the specific leaf node and global data information from all the other leaf nodes. Then, we further extend this idea to the construction stage of the regression tree. For clarity, RT with JS estimation only in the construction stage is called C-JSRT and RT with JS estimation in both the construction and prediction stages is called CP-JSRT, while JSRT is the collective name for P-JSRT, C-JSRT and CP-JSRT.
The contributions of this work are as follows:
\begin{itemize}
\item being the first to combine global data information and local data information in the single-output RT;
\item introducing the JS estimator into the construction of RT;
\item proving that the proposed P-JSRT is uniformly better \cite{seshadri1963constructing,shao2006mathematical} than the original CART in terms of mean squared error (MSE).
\end{itemize}
\section{Preliminaries}
\label{sec:Pre}
\subsection{Review of regression trees}
\textbf{Regression tree construction.} The construction process of a decision tree is known as a top-down approach. Given a data set $\mathcal{D}_n = \left\{(\boldsymbol{x}_1,y_1),(\boldsymbol{x}_2,y_2),...,(\boldsymbol{x}_n,y_n)\right\}$, $\boldsymbol{x}_i \in \mathbb{R}^d$ with features $T = \left\{t_1,t_2,...,t_d\right\}$, a regression tree chooses the best splitting feature and value pair to split into two subsets (children nodes). Each child node recursively performs this splitting procedure until the stopping condition is satisfied.
The choice of a best splitting feature and value pair in one node is usually guided by minimizing the MSE. Suppose the node to be split is $N_0$, the two children nodes after splitting are $N_1$ and $N_2$, the feature and value pair chosen to split the set is $(a, v)$. The best splitting feature and value are found by solving
\begin{equation}
\label{eq:split}
\min_{(a, v)}[\min_{c_1}\sum_{\boldsymbol{x}_i\in N_1}(y_i-c_1)^2+\min_{c_2}\sum_{\boldsymbol{x}_i\in N_2}(y_i-c_2)^2],
\end{equation}
where $c_1$ and $c_2$ are the optimal output of children nodes $N_1$ and $N_2$ respectively. When each child node is considered independently, the average of sample targets is usually estimated as the optimal value:
\begin{equation}
\label{eq:split estimate}
\hat{c}_1=\text{ave}(y_i|\boldsymbol{x}_i\in N_1), \qquad
\hat{c}_2=\text{ave}(y_i|\boldsymbol{x}_i\in N_2).
\end{equation}
\textbf{Regression tree prediction.} In the prediction stage, the traditional regression tree takes the sample mean as the predicted value for a new observation, which has been shown to be the maximum likelihood estimate (MLE) \cite{tang2017robust}. Given a new observation $\boldsymbol{x}$, suppose it follows a path to the $i$-th leaf node in the constructed tree. Then the predicted value $\hat{y}$ for $\boldsymbol{x}$ is $\hat{y}=\tilde{y}_i$, where $\tilde{y}_i$ is the mean of the sample targets in the $i$-th leaf node, which can be calculated via Eq.~(\ref{eq:split estimate}). For simplicity, we use MLE to denote the sample-target mean method in the rest of this paper.
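To make the splitting criterion of Eq.~(\ref{eq:split}) and the mean prediction of Eq.~(\ref{eq:split estimate}) concrete, a minimal Python sketch of the exhaustive search for the best $(a, v)$ pair on a single node is given below; the function and variable names are illustrative and no specific toolkit is assumed.
\begin{verbatim}
import numpy as np

def best_split(X, y):
    """Exhaustive search for the (feature, value) pair minimizing the
    summed squared error of the two children (mean prediction in each)."""
    best_a, best_v, best_sse = None, None, np.inf
    for a in range(X.shape[1]):                  # candidate feature
        for v in np.unique(X[:, a]):             # candidate threshold
            left, right = y[X[:, a] <= v], y[X[:, a] > v]
            if left.size == 0 or right.size == 0:
                continue
            sse = ((left - left.mean())**2).sum() + \
                  ((right - right.mean())**2).sum()
            if sse < best_sse:
                best_a, best_v, best_sse = a, v, sse
    return best_a, best_v
\end{verbatim}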
\subsection{Related work}
Some efforts have been made to improve tree leaf approximations by fitting more complex models within each partition.
\textbf{Linear regression} is one of the most used statistical techniques and it has been introduced into regression trees to improve the prediction \cite{quinlan1992learning}. The linear regression trees have the advantage of simplicity while with lackluster performance in most cases.
\textbf{The k-nearest neighbor (KNN)} is also one of the simplest regression methods, in which only the $k$ nearest neighbors are considered when predicting the value for an unknown sample. Also, the KNN has been used in decision trees to give a more accurate classification probability \cite{weiss1995rule}.
Similarly, \textbf{kernel regressors (KR)} obtain the prediction of a new instance $\boldsymbol{x}$ by a weighted average of its neighbors. The weight of each neighbor is calculated by a function of its distance to $\boldsymbol{x}$ (called the kernel function). The kernel regression method has also been integrated with kd-trees \cite{deng1995multiresolution}.
Despite the effectiveness of the KNN and KR methods, the principle of kernel regressors (i.e., involving other instances when making predictions) also brings the problem of low efficiency.
Furthermore, a \textbf{hybrid tree learner (HTL)} was explored for a better balance between accuracy and computational efficiency in regression trees \cite{torgo1997functional}. HTL applied several alternative models in the regression tree leaves and found that kernel methods were a promising choice for the leaves. However, the combination of different models in HTL requires careful tuning.
Apart from the individual drawbacks stated above, all of these methods neglect the global information across leaf nodes, which leads to poor generalization performance on some datasets.
\subsection{Stein's phenomenon}
\label{sec:stein_phenomenon}
In decision theory and estimation theory, Stein's phenomenon is the phenomenon that when three or more parameters are estimated simultaneously, there exist combined estimators more accurate on average (that is, having lower MSE) than any method that handles the parameters separately \cite{stein1956}.
\textbf{James-Stein estimator.} The JS estimator \cite{stein1956,james1961} is the best-known example of Stein's phenomenon; it behaves uniformly better than the ordinary MLE approach for the estimation of the mean of a multivariate normal distribution. Let $\mathbf{Y}$ be a normally distributed $m$-dimensional vector ($m>2$) with unknown mean $\boldsymbol{\mu}= E\mathbf{Y}$ and covariance matrix equal to the identity matrix $\boldsymbol{I}$. Given an observation $\boldsymbol{y}$, the goal is to estimate $\boldsymbol{\mu}$ by an estimator $\hat{\boldsymbol{\mu}}=\phi(\boldsymbol{y})$. Denote the ordinary MLE estimator as $\hat{\boldsymbol{\mu}}_0$ and the JS estimator as $\hat{\boldsymbol{\mu}}_1$, and use the notation $||\boldsymbol{x}||^2 = \boldsymbol{x}'\boldsymbol{x}$. According to Stein's phenomenon, the JS estimator $\hat{\boldsymbol{\mu}}_1$ leads to a lower risk than the MLE estimator $\hat{\boldsymbol{\mu}}_0$ under the MSE metric:
\begin{equation}
\label{eq:risk compare}
R(\boldsymbol{\mu}, \hat{\boldsymbol{\mu}}_1)=E||\boldsymbol{\mu} - \hat{\boldsymbol{\mu}}_1||^2<R(\boldsymbol{\mu}, \hat{\boldsymbol{\mu}}_0)=E||\boldsymbol{\mu} - \hat{\boldsymbol{\mu}}_0||^2, m \ > \ 2.
\end{equation}
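As a quick numerical illustration of Eq.~(\ref{eq:risk compare}), the following Monte Carlo sketch (in Python, with an arbitrarily chosen true mean vector; all parameters are illustrative) compares the empirical risk of the MLE $\hat{\boldsymbol{\mu}}_0=\boldsymbol{y}$ with that of the basic JS estimator $\hat{\boldsymbol{\mu}}_1=\bigl(1-(m-2)/||\boldsymbol{y}||^2\bigr)\boldsymbol{y}$ for unit-variance Gaussian observations.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, trials = 10, 100000
mu = rng.normal(size=m)                      # arbitrary true mean vector
y = mu + rng.normal(size=(trials, m))        # observations y ~ N(mu, I)

shrink = 1.0 - (m - 2) / (y**2).sum(axis=1, keepdims=True)
mu_js = shrink * y                           # basic James-Stein estimator

risk_mle = ((y - mu)**2).sum(axis=1).mean()      # close to m for the MLE
risk_js = ((mu_js - mu)**2).sum(axis=1).mean()   # smaller on average for m > 2
print(risk_mle, risk_js)
\end{verbatim}
With this choice of $m=10$, the empirical risk of the MLE should be close to $m$, while that of the JS estimator should be visibly smaller, in line with Eq.~(\ref{eq:risk compare}).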
\section{JSRT: James-Stein Regression Tree}
\label{sec:method}
In the RT prediction stage, we need to fit an optimal value for each of the leaf nodes. In the RT construction stage, an optimal value is also fitted to each child node when splitting a node. Since the existing estimation of this optimal value only considers the training samples in a single node, we first propose to combine global data information from all the leaf nodes in the RT prediction stage and theoretically prove the lower generalization error of this method. Then, we extend this global-information idea to the RT construction stage.
\subsection{P-JSRT: RT prediction with JS estimator}
\label{subsec:JS_prediction}
Usually, the RT independently predicts a value for each leaf node. However, Stein's phenomenon points out that if we consider the prediction of all leaf nodes as one multivariate expectation estimation problem, it is possible to achieve better results on average. Thus, we propose to exploit JS estimator to simultaneously make predictions for all the leaf nodes in RT.
Following Feldman \textit{et al}. \cite{feldman2012multi} and Shi \textit{et al}. \cite{shi2016improving}, we use the positive-part JS estimator with independent unequal variances \cite{bock1975minimax,casella1985introduction}. Specifically, the positive JS estimator is given by
\begin{equation}
\label{eq:positive_JS}
\hat{\mu}_i^{JS+}= GM + (1-\gamma)^{+}\cdot(\tilde{y_i}-GM), m>3, \end{equation}
where $GM=\frac{1}{m}\sum_{i=1}^m\tilde{y_i}$ is the grand mean, $\gamma=(m-3)(\sum_{i=1}^m\frac{n_i}{\sigma_i^2}(\tilde{y_i}-GM)^2)^{-1}$, and $(\cdot)^{+}=max(0,\cdot)$; $n_i$ is the number of samples in leaf node $S_i$, $\tilde{y_i}$ is the mean of sample targets in leaf node $S_i$, and $\sigma_i^2$ is the standard unbiased estimate of the variance of leaf node $S_i$.
The pseudo-code of RT prediction process with JS estimator is presented in Algorithm~\ref{alg:js prediction}.
\begin{algorithm}
\caption{The prediction process in P-JSRT}
\label{alg:js prediction}
\begin{algorithmic}[1]
\Require
A constructed regression tree $f$.
\Ensure
A regression tree $f^{js}$, which makes prediction of the leaf nodes via JS estimator.
\If {the number of leaf nodes of $f$ is more than 3}
\State Make predictions for all the leaf nodes simultaneously according to the positive JS estimator in Eq. (\ref{eq:positive_JS}).
\Else
\State Use MLE to estimate the value of each leaf node independently according to Eq. (\ref{eq:split estimate}).
\EndIf
\State\Return {A P-JSRT $f^{js}$.}
\end{algorithmic}
\end{algorithm}
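A minimal Python sketch of the prediction step in Algorithm~\ref{alg:js prediction} is given below. It assumes that the per-leaf sample counts, target means and unbiased variance estimates have already been collected from a fitted tree; the function and variable names are illustrative.
\begin{verbatim}
import numpy as np

def pjsrt_leaf_values(counts, means, variances):
    """Positive-part JS estimates for all leaves of a fitted tree.
    counts, means, variances: per-leaf sample counts, target means and
    unbiased variance estimates (one entry per leaf)."""
    counts = np.asarray(counts, float)
    means = np.asarray(means, float)
    variances = np.asarray(variances, float)
    m = means.size
    if m <= 3:                                  # fall back to the MLE values
        return means.copy()
    gm = means.mean()                           # grand mean GM
    gamma = (m - 3) / np.sum(counts / variances * (means - gm)**2)
    return gm + max(0.0, 1.0 - gamma) * (means - gm)
\end{verbatim}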
\textbf{Generalization error analysis of P-JSRT.}
In a regression problem, the model is fitted to the training data $\mathcal{D}_0(X,Y)$, which is a sample set of the true data distribution $\mathcal{D}$. Denote $\boldsymbol{x}$ as an observation in $\mathcal{D}$ and $y$ as the target of $\boldsymbol{x}$. The fitted regression tree model is denoted by $f$, and the output of $f$ for an input $\boldsymbol{x}$ on $\mathcal{D}$ is $f(\boldsymbol{x};\mathcal{D})$. Assuming the square loss, the average generalization error of the regression tree $f$ is
\begin{equation}
\label{eq: risk tree 1}
E(f; \mathcal{D}) = E_\mathcal{D}[(f(\boldsymbol{x}; \mathcal{D}) - y)^2].
\end{equation}
Suppose the probability density of true data distribution is $p(\boldsymbol{x}, y)$, it follows from Eq. (\ref{eq: risk tree 1}) that
\begin{equation}
\label{eq: risk tree 2}
\begin{aligned}
E(f; \mathcal{D}) = \int_\mathcal{D} (f(\boldsymbol{x}; \mathcal{D}) - y)^2 p(\boldsymbol{x}, y) d\boldsymbol{x}dy.
\end{aligned}
\end{equation}
After a regression tree is learned, the feature space is divided into $m$ disjoint subspaces (i.e., $m$ leaf nodes), $S_1, S_2,..., S_m$. Thus we have
\begin{equation}
\label{eq: risk tree 3}
\begin{aligned}
E(f; \mathcal{D}) &= \sum_{i=1}^m \int_{(\boldsymbol{x}, y), \boldsymbol{x} \in S_i} (f(\boldsymbol{x}; \mathcal{D}) - y)^2 p(\boldsymbol{x}, y) d\boldsymbol{x}dy \\
&= \sum_{i=1}^m P_i \int_{(\boldsymbol{x}, y), \boldsymbol{x} \in S_i} (f(\boldsymbol{x}; \mathcal{D}) - y)^2 \frac{p(\boldsymbol{x}, y)}{P_i} d\boldsymbol{x}dy,
\end{aligned}
\end{equation}
where $P_i = \int_{(\boldsymbol{x}, y), \boldsymbol{x} \in S_i} p(\boldsymbol{x}, y) d\boldsymbol{x}dy$.
Assume that the probability mass of $\mathcal{D}$ is divided equally among $S_1, S_2,\ldots, S_m$, \emph{i.e.} $P_i = P$ ($i=1, 2,\ldots, m$). Denote the estimated value for $\boldsymbol{x}$ in node $S_i$ as $\hat{y}_i = f(\boldsymbol{x};\mathcal{D})$, $\boldsymbol{x} \in S_i$, and the probability density function of $(\boldsymbol{x}, y)$, $\boldsymbol{x} \in S_i$, as
\begin{equation}
\label{eq:pi}
p_i(\boldsymbol{x}, y) =
\left\{
\begin{array}{ll}
\displaystyle\frac{p(\boldsymbol{x}, y)}{P_i}, & \boldsymbol{x} \in S_i \\
0, & \text{otherwise}
\end{array}
\right.
.
\end{equation}
Then we have
\begin{equation}
\label{eq: risk tree 4}
\begin{aligned}
&E(f; \mathcal{D}) = \sum_{i=1}^m P \int_{(\boldsymbol{x}, y), \boldsymbol{x} \in S_i} (\hat{y}_i - y)^2 \frac{p(\boldsymbol{x}, y)}{P} d\boldsymbol{x}dy \\
&= P\sum_{i=1}^m \int_{(\boldsymbol{x}, y), \boldsymbol{x} \in S_i} (\hat{y}_i - y)^2 p_i(\boldsymbol{x}, y) d\boldsymbol{x}dy \\
&= P\sum_{i=1}^{m} E_{p_i(\boldsymbol{x}, y)}(y - \hat{y}_i)^2 \\
&= P\sum_{i=1}^{m} E_{p_i(\boldsymbol{x}, y)}(y - \bar{y}_i)^2 + P\sum_{i=1}^{m} E_{p_i(\boldsymbol{x}, y)}(\bar{y}_i - \hat{y}_i)^2, \\
\end{aligned}
\end{equation}
where $\bar{y}_i = E_{p_i(\boldsymbol{x}, y)}(y)$ is the true mean of $y, \boldsymbol{x} \in S_i$. The first term of the right hand side of Eq. (\ref{eq: risk tree 4}) is irrelevant to the estimated value $\hat{y}_i$. The second term of the right hand side of Eq. (\ref{eq: risk tree 4}) can be formulated as
\begin{equation}
P\sum_{i=1}^{m} E_{p_i(\boldsymbol{x}, y)}(\bar{y}_i - \hat{y}_i)^2=P\cdot E||\bar{\boldsymbol{y}} - \hat{\boldsymbol{y}}||^2.
\end{equation}
Suppose that, when $(\boldsymbol{x}, y) \sim p_i(\boldsymbol{x}, y)$, the target satisfies $y \sim N(\mu_i, \sigma^2)$; then, according to Eq.~(\ref{eq:risk compare}), P-JSRT achieves a lower generalization error than the original CART when the number of leaf nodes is larger than 2.
\subsection{C-JSRT: RT construction with JS estimator}
\label{subsec: JS_construction}
Since we achieve a lower generalization error simply by using the JS estimator in the RT prediction stage, we further explore combining global information in each splitting step of the RT construction stage. When splitting a node $N_0$ into two children nodes $N_1$ and $N_2$ in C-JSRT, the best feature and value pair $(a, v)$ is chosen by solving
\begin{equation}
\label{eq:C-JSRT split}
\min_{(a, v)}[\sum_{\boldsymbol{x}_i\in N_1}(y_i-c_1)^2+\sum_{\boldsymbol{x}_i\in N_2}(y_i-c_2)^2],
\end{equation}
where $c_1$ and $c_2$ are estimated via JS estimation. Note that JS estimation is better than MLE on average only when 3 or more parameters are estimated simultaneously. To satisfy this condition, when choosing the best feature and value pair $(a, v)$ to split node $N_0$, we consider the constructed part of a RT after splitting $N_0$ as a whole constructed RT. Therefore, we can use Algorithm~\ref{alg:js prediction} to estimate values of the `leaf nodes', including $c_1$ and $c_2$ for nodes $N_1$ and $N_2$.
However, when the values of the children nodes in C-JSRT are estimated directly according to Eq.~(\ref{eq:positive_JS}), we found that the structure of the tree hardly changes. Denote the square loss reduction caused by splitting with a feature and value pair $(a, v)$ as $\Delta L$. The structure of C-JSRT barely changes because the change in $\Delta L$ for a given $(a, v)$ pair (caused by using JS estimation instead of MLE) is small compared with the gap in $\Delta L$ between different $(a, v)$ pairs. Therefore, the solution of Eq.~(\ref{eq:C-JSRT split}) is the same as that of Eq.~(\ref{eq:split}).
In other words, the chosen best splitting pair $(a, v)$ is not changed by directly using the JS estimator given by Eq.~(\ref{eq:positive_JS}).
\begin{algorithm*}
\caption{The feature selection process in C-JSRT}
\label{alg:js construction}
\begin{algorithmic}[1]
\Require
Node $N_0$ to be split; Optional non-empty feature set $(A, V)$; Current leaf nodes; The number of current leaf nodes $m_{temp}$; Stopping conditions.
\Ensure
The best splitting feature and value pair $(a, v)_{best}$.
\If{node $N_0$ satisfies one or more of the stopping conditions}
\State Mark node $N_0$ as a leaf node; \Return
\Else
\State Initialize the sum of square loss in two children nodes as $L_{min}=\infty$, the best splitting feature and value pair $(a, v)_{best}$;
\For {each feature and value pair $(a, v)$ in $(A, V)$}
\State Use $(a,v)$ to split node $N_0$ into two children nodes $N_1$ and $N_2$;
\If {$m_{temp}\geq 3$}
\State Use the information of current leaf nodes and the scaled JS estimator given in Eq.~(\ref{eq:scaled positive_JS}) to estimate values for children nodes $N_1$ and $N_2$ simultaneously;
\Else
\State Use MLE to estimate values for children nodes $N_1$ and $N_2$ according to Eq.~(\ref{eq:split estimate}) independently;
\EndIf
\State Calculate the sum of square loss $L_{temp}$ in two children nodes $N_1$ and $N_2$;
\If {$L_{temp}<L_{min}$}
\State $L_{min} = L_{temp}$, $(a,v)_{best} = (a,v)$;
\EndIf
\EndFor
\State\Return {$(a,v)_{best}$.}
\EndIf
\end{algorithmic}
\end{algorithm*}
A phenomenon observed in the experimental results of P-JSRT inspired us to solve this problem. This phenomenon is analyzed in detail in Section~\ref{subsec:shrinkage property}; we briefly state the conclusion here: the reduction in MSE ($\%$) is positively correlated with the weight of the grand mean ($GM$) in P-JSRT. Therefore, we introduce a new scale parameter $\lambda$ to control the weight of $GM$ in C-JSRT. The scaled positive-part JS estimator used in our C-JSRT is given by
\begin{equation}
\label{eq:scaled positive_JS}
\hat{\mu}_i^{JS+}= GM + (1-\lambda\cdot\gamma)^{+}\cdot(\tilde{y_i}-GM), m>3, \end{equation}
where the other notations have the same meaning as in Eq.~(\ref{eq:positive_JS}).
The feature selection process in our proposed C-JSRT is summarized in Algorithm~\ref{alg:js construction}.
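To illustrate how the scaled estimator of Eq.~(\ref{eq:scaled positive_JS}) enters the loop of Algorithm~\ref{alg:js construction}, the following Python sketch evaluates the loss of one candidate split: the two tentative children are appended to the statistics of the current leaves, all values are estimated jointly, and the square loss of the two children is accumulated. The helper functions, the data layout and the fallback handling are illustrative assumptions rather than the exact implementation used in our experiments.
\begin{verbatim}
import numpy as np

def scaled_js(counts, means, variances, lam):
    """Scaled positive-part JS estimates for a set of nodes (see text)."""
    counts, means, variances = (np.asarray(a, float)
                                for a in (counts, means, variances))
    m = means.size
    gm = means.mean()
    gamma = (m - 3) / np.sum(counts / variances * (means - gm)**2)
    return gm + max(0.0, 1.0 - lam * gamma) * (means - gm)

def candidate_split_loss(y_left, y_right, leaf_n, leaf_mean, leaf_var, lam):
    """Square loss of one (a, v) candidate in C-JSRT: the two tentative
    children are appended to the current leaves before joint estimation."""
    if len(leaf_n) >= 3:                         # condition used in Algorithm 2
        n = np.append(leaf_n, [y_left.size, y_right.size])
        mu = np.append(leaf_mean, [y_left.mean(), y_right.mean()])
        var = np.append(leaf_var, [y_left.var(ddof=1), y_right.var(ddof=1)])
        est = scaled_js(n, mu, var, lam)
        c1, c2 = est[-2], est[-1]                # shrunk values for N1 and N2
    else:                                        # MLE fallback
        c1, c2 = y_left.mean(), y_right.mean()
    return ((y_left - c1)**2).sum() + ((y_right - c2)**2).sum()
\end{verbatim}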
\section{Experimental Evaluation}
\label{sec:experiment}
\subsection{Experimental setup}
\textbf{Dataset.} We use 15 UCI datasets \cite{Dua:2019} and one Kaggle dataset \cite{rodolfo2018abalone} in our experiments. These datasets cover a wide range of tasks and vary in the number of observations and the number of features, which makes them adequate for evaluating the regression tree algorithm.
Samples with missing values in their features are removed in the experiment implementation.
\textbf{Metrics.} In regression problems, MSE on test data is the most widely used measure of generalization error, and it is adopted to evaluate performance in our experiments.
\textbf{Settings.} To avoid the influence of data manipulation, we use 10-fold cross validation and repeat 10 times in our experiments. The minimum number of samples for one node to continue splitting is set to 20 for all the datasets. Additionally, the minimum number of samples in a leaf node is set to 5.
\subsection{Comparing P-JSRT with other RT prediction methods}
We compare our method with the original CART \cite{breiman1984classification}, KNNRT \cite{weiss1995rule} and KRT \cite{deng1995multiresolution} tree models. The distance metrics used in KRT and KNNRT are both Euclidean distance. The weight of neighbors in the KR model is the reciprocal of the distance. For fair comparison, we use sklearn toolkit in the regression tree construction stage for all methods in this part.
\begin{table}[!htbp]
\caption{MSE comparison of P-JSRT and RT with other estimation methods.}
\label{tab:JS_KR_KNN_MSE}
\centering
\small
\begin{threeparttable}
\begin{tabular}{ p{2.30cm}<{\centering} p{0.80cm}<{\centering} p{0.9cm}<{\centering} p{0.80cm}<{\centering} p{1.05cm}<{\centering}}
\hline
Dataset & CART & KNNRT & KRT & P-JSRT\\
\hline
\specialrule{0em}{1pt}{1pt}
wine &0.5671 & 0.5741 & \underline{\textbf{0.5105}} & \textbf{0.5576} \\
\hline
\specialrule{0em}{1pt}{1pt}
auto\_mpg &10.80 & \textbf{10.80} & 11.05 & \underline{\textbf{10.77}}\\
\hline
\specialrule{0em}{1pt}{1pt}
airfoil & 10.89 & \underline{\textbf{8.66}} & 11.50 & \textbf{10.86}\\
\hline
\specialrule{0em}{1pt}{1pt}
bike & 36.02 & \textbf{18.00} &\underline{\textbf{15.15}} & \textbf{36.02}\\
\hline
\specialrule{0em}{1pt}{1pt}
energy & 3.4461 & 3.4947 &\underline{\textbf{3.1263}} & \textbf{3.4451} \\
\hline
\specialrule{0em}{1pt}{1pt}
concrete & 21.570 & \textbf{18.144} & \underline{\textbf{14.924}} & \textbf{21.556}\\
\hline
\specialrule{0em}{1pt}{1pt}
compressive& 51.55 & \textbf{46.55} & \underline{\textbf{38.75}} & \textbf{51.40}\\
\hline
\specialrule{0em}{1pt}{1pt}
boston & 19.60 & \textbf{18.73} & \underline{\textbf{17.43}} & \textbf{19.54}\\
\hline
\specialrule{0em}{1pt}{1pt}
abalone & 5.9828 & 6.2288 & 6.3795 & \underline{\textbf{5.9053}}\\
\hline
\specialrule{0em}{1pt}{1pt}
skill & 1.2758 & 1.3206 & 1.3728 & \underline{\textbf{1.2622}}\\
\hline
\specialrule{0em}{1pt}{1pt}
communities & 0.0286 & 0.0294 & 0.0296 & \textbf{\underline{0.0284}}\\
\hline
\specialrule{0em}{1pt}{1pt}
electrical($\times 10^{-4}$) & 3.1344 & \textbf{3.0682} & \textbf{\underline{3.0171}} & \textbf{3.1242} \\
\hline
\specialrule{0em}{1pt}{1pt}
diabetes($\times 10^{3}$) & 4.5146 & 4.5639 & 4.5907 & \underline{\textbf{4.4503}}\\
\hline
\specialrule{0em}{1pt}{1pt}
pm25($\times 10^{3}$) & 2.4462 & \textbf{2.2645}& \underline{\textbf{1.9742}} & \textbf{2.4346}\\
\hline
\specialrule{0em}{1pt}{1pt}
geographical($\times 10^{3}$)& 2.8600 & 2.8653 & \textbf{2.8466} & \underline{\textbf{2.8065}}\\
\hline
\specialrule{0em}{1pt}{1pt}
baseball($\times 10^{5}$) & 6.0142 & 6.3447 & 6.3126 & \underline{\textbf{5.9761}}\\
\hline
\end{tabular}
\begin{tablenotes}
\item[*] In the first column, `electrical($\times 10^{-4}$)' means that the reported results for the electrical dataset should be multiplied by $10^{-4}$; the same convention applies to each dataset listed after electrical in the first column.
\end{tablenotes}
\end{threeparttable}
\end{table}
\textbf{Effectiveness comparison.} The experimental results of MSE are displayed in Table~\ref{tab:JS_KR_KNN_MSE}. The results equal to or better than CART are highlighted in bold, and the lowest MSE for each dataset is underlined. Since the standard deviations show no significant difference between the methods, they are not presented due to space limitations. We notice that P-JSRT performs uniformly better \cite{shao2006mathematical} than CART, while both the KNNRT and KRT methods impair the performance on about half of the datasets. Despite the relatively small improvement of P-JSRT, it achieves the best results on 7 datasets. Though the KNNRT and KRT methods do perform best on some datasets, they suffer from instability and inefficiency (this will be discussed later).
\textbf{Efficiency comparison.} Results on test time are recorded in Table~\ref{tab:JS_KR_KNN_time}, in which the datasets are ordered from small to large. The shortest times are highlighted in bold, and the second best results are shown in italics. The original CART and P-JSRT clearly outperform KNNRT and KRT in terms of efficiency. Besides, the prediction speed of P-JSRT is comparable to that of the original CART. In particular, the advantage of CART and P-JSRT becomes more prominent as the size of the dataset grows.
\begin{table}[!htbp]
\caption{The test time (millisecond) of P-JSRT and RT with other estimation methods.}
\label{tab:JS_KR_KNN_time}
\centering
\small
\begin{threeparttable}
\begin{tabular}{ p{1.35cm}<{\centering} p{1.00cm}<{\centering} p{0.58cm}<{\centering} p{0.95cm}<{\centering} p{0.97cm}<{\centering} p{1.05cm}<{\centering}}
\hline
Dataset & Samples & CART & KNNRT & KRT & P-JSRT\\
\hline
\specialrule{0em}{1pt}{1pt}
concrete & 103 & \textbf{0.18} & 3.22 & 3.48 & \textit{0.32} \\
\hline
baseball & 337 & \textbf{0.20} & 11.34 & 12.24 & \textit{0.34}\\
\hline
auto\_mpg & 392 & \textbf{0.19} & 13.25 & 14.25 & \textit{0.34}\\
\hline
diabetes & 442 & \textbf{0.20} & 15.17 & 16.32 & \textit{0.35} \\
\hline
boston & 503 & \textbf{0.20} & 18.61 & 19.93 & \textit{0.36} \\
\hline
energy & 768 & \textbf{0.19} & 26.97 & 28.84 & \textit{0.35} \\
\hline
compressive & 1030 & \textbf{0.21} & 43.30 & 45.78 & \textit{0.37} \\
\hline
geographical & 1059 & \textbf{0.22} & 47.76 & 50.58 & \textit{0.40} \\
\hline
airfoil & 1503 & \textbf{0.21} & 70.90 & 74.76 & \textit{0.40}\\
\hline
communities & 1993 & \textbf{0.26} & 118.01 & 123.54 & \textit{0.47} \\
\hline
skill & 3338 & \textbf{0.25} & 258.29 & 267.30 & \textit{0.51} \\
\hline
abalone & 4177 & \textbf{0.25} & 378.74 & 390.34 & \textit{0.55} \\
\hline
wine & 4898 & \textbf{0.26} & 486.84 & 500.18 & \textit{0.59} \\
\hline
electrical & 10000 & \textbf{0.33} & 1696.70 & 1722.10 & \textit{0.85}\\
\hline
bike & 17379 & \textbf{0.42} & 4802.90 & 4836.50 & \textit{1.10}\\
\hline
pm25 & 41758 & \textbf{0.87} & 32071.00 & 32155.00 & \textit{2.51} \\
\hline
\end{tabular}
\begin{tablenotes}
\item[*] The four methods are tested on the same learned tree and with same CPU E5-2680 v2 @ 2.80GHz for every dataset.
\end{tablenotes}
\end{threeparttable}
\end{table}
Considering the improvement over CART relative to the increase in running time, P-JSRT shows a clear advantage, especially on large datasets.
\subsection{The shrinkage property of JS estimator}
\label{subsec:shrinkage property}
\textbf{A toy experiment.} Simple types of maximum-likelihood and least-squares estimation procedures do not include shrinkage effects. In contrast, shrinkage is explicit in James-Stein-type inference. Our proposed P-JSRT improves the ordinary MLE tree (i.e., CART) at leaf prediction by combining information from other leaf nodes.
\begin{table}[!htbp]
\centering
\begin{threeparttable}
\caption{An example demonstrating $JSE$ shrinks $MLE$ towards the grand mean $GM$.}
\centering
\small
\begin{tabular}{c| p{0.55cm}<{\centering} p{0.55cm}<{\centering} p{0.55cm}<{\centering} p{0.55cm}<{\centering} p{0.55cm}<{\centering} p{0.55cm}<{\centering}}
\hline
Leaf\_ID & 2 & 4 & 5 & 7 & 9 & 10\\
\hline
MLE & 21.23 & 35.29 & 29.87 & 30.57 & 44.50 & 38.89\\
\hline
JSE & 21.41 & 35.26 & 29.93 & 30.61 & 44.33 & 38.80\\
\hline
$|GM\tnote{*}-MLE|$ & 12.16 & 1.90 & 3.52 & 2.82 & 11.11 & 5.50\\
\hline
$|GM\tnote{*}-JSE|$ & \bfseries11.98 & \bfseries1.87 & \bfseries3.46 & \bfseries2.78 & \bfseries10.94 & \bfseries5.41\\ \hline
\end{tabular}
\begin{tablenotes}
\item[*] $GM$ means the grand mean of all leaf nodes on this regression tree, which is 33.39 in this specific example.
\end{tablenotes}
\label{Table:JS_shrinks_MLE}
\end{threeparttable}
\end{table}
We carry out a toy experiment on a small UCI dataset \cite{Dua:2019} to better understand the shrinkage effect in P-JSRT. Results in Table~\ref{Table:JS_shrinks_MLE} verify that the P-JSRT estimate ($JSE$) is closer to the value supplied by the information in other leaf nodes ($GM$) than the original CART estimate ($MLE$).
\textbf{The influence of shrink weight.}
In this part, we delve deeper into the shrinkage property of JS estimation. When $\gamma<1$, Eq.~(\ref{eq:positive_JS}) can be reformulated as
\begin{equation}
\label{eq:shrink weight}
\begin{aligned}
\hat{\mu}_i^{JS}= GM + (1-\gamma)(\tilde{y_i}-GM)
=(1-\gamma)\tilde{y_i}+\gamma\,GM,
\end{aligned}
\end{equation}
where $\gamma$ is the weight of the shrink direction $GM$ in the JS estimator. It is useful to know how the weight of the shrink direction relates to the degree of improvement of P-JSRT. Thus, we record the average weight of the shrink direction in the JS estimation and the corresponding reduction in average MSE (\%) for each dataset in a 10-times 10-fold cross validation.
The experimental results in Fig.~\ref{fig:gamma_reduce} demonstrate that a greater weight of the shrink direction leads to better performance of P-JSRT. A greater weight of the shrink direction means more global information, which can mitigate the overfitting of a single regression tree to some extent. Hence, the shrinkage effect in P-JSRT leads to lower generalization errors: the greater the shrink weight, the better the performance.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.9\linewidth]{fig/js_gamma_reduce.jpg}
\caption{The influence of the shrink weight $\gamma$ on the estimation. The greater the MSE reduction (\%), the better the performance. PCC denotes the Pearson correlation coefficient between the shrink weight and the reduction in MSE (\%).}
\label{fig:gamma_reduce}
\end{figure}
\begin{table}[!htbp]
\caption{Comparison of MSE in ablation experiment.}
\label{tab:ablation experiment}
\centering
\small
\begin{threeparttable}
\begin{tabular}{p{1.70cm}<{\centering} p{0.90cm}<{\centering} p{1.0cm}<{\centering} p{1.05cm}<{\centering} p{1.2cm}<{\centering} p{0.25cm}<{\centering}}
\hline
Dataset & CART & P-JSRT & C-JSRT & CP-JSRT & $\lambda$\\ \hline
\specialrule{0em}{1pt}{1pt}
concrete & 31.7003 & 31.6934 & \textbf{31.1461} & \textit{31.1586} & 30 \\
\hline
baseball($\times 10^{5}$) & 5.2823 & 5.2782 & \textit{5.2652} & \textbf{5.2622} & 25 \\
\hline
auto\_mpg & 11.2340 & 11.2266 & \textit{11.1963} & \textbf{11.1896} & 15 \\
\hline
diabetes($\times 10^{3}$) & 3.8641 & 3.8499 & \textit{3.8238} & \textbf{3.8104} &40
\\
\hline
boston & 26.7308 & \textit{26.7153} & 26.7290 & \textbf{26.7136} & 1 \\
\hline
energy & 3.3012 & 3.3009 & \textit{3.2929} & \textbf{3.2926} & 25\\
\hline
compressive & 63.3120 & 63.2293 & \textit{62.6133} & \textbf{62.5370} & 45 \\
\hline
airfoil & 12.6341 & 12.6113 & \textit{12.5225} & \textbf{12.5000} & 35 \\
\hline
\end{tabular}
\end{threeparttable}
\end{table}
\subsection{Ablation study}
An ablation experiment is implemented to explore the efficacy of considering global information in the regression tree construction stage. We implement this ablation experiment on 8 UCI datasets. The minimum number of samples in a leaf node is set to 10, and 10-times 10-fold cross validation is used as well. We choose the parameter $\lambda$ from $[1, 50]$ with step 5. Since we have to change the construction process of the regression tree, we do not use the sklearn toolkit in this part of the experiment.
The results are shown in Table~\ref{tab:ablation experiment}; the best and second best results are highlighted in bold and italics, respectively. We notice that CP-JSRT performs best on 7 of the 8 datasets and that the performance of C-JSRT is second only to CP-JSRT, which demonstrates the efficacy of employing all training samples in both the construction and prediction of the regression tree. The consistently better performance of C-JSRT/CP-JSRT over CART/P-JSRT further shows that global information should be considered in the regression tree construction stage. Besides, the fact that CP-JSRT performs better than C-JSRT and that P-JSRT performs better than CART illustrates the efficacy of JS estimation in the RT prediction stage. The scale factor $\lambda$, which controls the trade-off between local and global information during the RT construction stage, varies with the dataset.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we took advantage of global information in both the RT construction and prediction stages. Instead of obtaining the predicted value for every leaf node separately, we first introduced the JS estimator to predict values for all leaf nodes simultaneously. Then, we exploited the JS estimator to estimate the values of children nodes in the feature selection process of RT. In this process, a trade-off between global and local data information was achieved. In theory, we proved that our P-JSRT has a lower generalization mean squared error than RT with MLE prediction under certain assumptions. Empirically, we compared the performance of P-JSRT with other estimation methods in RT; the results verified the uniform effectiveness and remarkable efficiency of our proposed P-JSRT algorithm. The ablation experiment further proved the efficacy of using JS estimation in the feature selection process of RT (methods C-JSRT and CP-JSRT). In conclusion, the combination of global and local information via the JS estimator improved the generalization ability of RT.
\section*{Acknowledgment}
This work is supported in part by the National Key Research and Development Program of China under Grant 2018YFB1800204, the National Natural Science Foundation of China under Grant 61771273, the China Postdoctoral Science Foundation under Grant 2019M660645, the R\&D Program of Shenzhen under Grant JCYJ20180508152204044, and the research fund of PCL Future Regional Network Facilities for Large-scale Experiments and Applications (PCL2018KP001).
\bibliographystyle{IEEEtran}
\section{Introduction}
It is well known that the evolution equations\footnote{In this work, we focus on classical systems.} of charged systems subject to an external magnetic field are not invariant under the standard time-reversal transformation defined \emph{via} inversion of the momenta
\begin{equation}
\mathcal{M}_s (\vec{r},\vec{p}) = (\vec{r},-\vec{p}) ~, \quad \forall (\vec{r}, \vec{p}) \doteq \Gamma \in \mathfrak{M}
\label{eq:oldsym}
\end{equation}
combined with the change $t\rightarrow - t$. (In the equation above, $\Gamma$ is a point in the phase space $\mathfrak{M}$ of an $N$-particle system, with positions $\vec{r} = \{\vec{r}_i\}_{i=1}^{N}$ and momenta $\vec{p} = \{\vec{p}_i\}_{i=1}^{N}$.) This fact gave rise to the idea that these systems require special treatment when discussing properties based on time reversibility. In particular, because the currents (and therefore the magnetic field that they generate) are reversed under $\mathcal{M}_s$, classical textbooks~\cite{landau:1980-book,degroot:1984-book} as well as current literature~\cite{barbier:2018a} present statistical relationships in the presence of a magnetic field using pairs of systems with identical interparticle interactions but under magnetic fields of opposite signs. For example,
the Onsager reciprocal relations were adapted by Casimir to relate cross-transport coefficients of systems under opposite magnetic fields~\cite{casimir:1945}. Similarly,
Kubo~\cite{kubo:1966} derived symmetry properties of time-correlation functions of two such systems subject to $\vec{B}$ and $- \vec{B}$. In nonequilibrium statistical mechanics, results for currents, response, and Fluctuation Relations (FRs) are typically presented under the same conditions~\cite{gaspard:2013,barbier:2018a,wang:2014,saito:2008}. All these results are to be contrasted with their ``standard'' counterparts that refer to a single system.
The situation described above is somewhat unsatisfactory for two main reasons. The first is conceptual: the introduction of a second system, while physically correct, blurs the distinction between the system and its external environment. Indeed, in the evolution equations of the system, the magnetic field appears as an external agent whose physical origin (\emph{e.g.} moving charges originating a current) is not associated to active degrees of freedom in the dynamical system. Its inversion then implicitly implies an extension of the system to include the sources of the magnetic field, applying $\mathcal{M}_s$ to the extended system, and then forgetting again about the additional degrees of freedom. The second reason is practical: this commonly adopted approach reduces the predictive power of the statistical relationships mentioned above. For example, within linear response theory, null values of transport coefficients in experiments concerning a single system in a given magnetic field cannot be predicted based on symmetry properties of the time-correlation functions. Similarly, in the context of nonlinear response, which includes the fluctuation relations~\cite{evans:1993,gallavotti:1995b,evans:2002,evans:2005,rondoni:2007}, null cumulants of the dissipation cannot be identified \emph{via} symmetry \cite{barbier:2018a}.
Recently however, it was demonstrated that, for systems in a constant external magnetic field, these difficulties can be overcome, recovering the full predictive power of statistical mechanics~\cite{bonella:2014}. The starting observation for these recent developments is that invariance of the Hamiltonian (and hence of the dynamical system) under Eq.~\eqref{eq:oldsym} is a sufficient but not necessary condition for establishing the properties mentioned above. Following a known approach in nonequilibrium statistical mechanics, alternative time-reversal operators --- that do not necessitate inversion of the magnetic field --- can be introduced~\cite{roberts:1992,bonella:2017a,coretti:2018a,carbone:2020} and used instead of $\mathcal{M}_s$ to reinstate standard proofs. These new symmetries lack the intuitive property of retracing the coordinates in the backward propagation in pairs of trajectories with opposite momenta, but they nonetheless identify pairs of trajectories with opposite values of relevant observables (\emph{e.g.} elements of the diffusion tensor, instantaneous dissipation) and their physical effects can be predicted and measured. This was illustrated numerically for the case of time-correlation functions in Refs.~\cite{bonella:2017a,coretti:2018a} and for fluctuation relations in the presence of orthogonal electric and magnetic fields in Ref.~\cite{coretti:2020b}. The lack of experimental evidence of the violation of the Onsager reciprocal relations~\cite{luo:2020} might also be explained \emph{via} these generalized time-reversal operators.
In this paper we consider again the nonequilibrium behavior of classical charged systems in external magnetic and electric fields and generalize and consolidate previous work. In particular, we consider the case of parallel fields in which net currents and dissipation arise: the situation for which FRs have been developed~\cite{evans:2002,rondoni:2007}. We build on two generalized time-reversal operators introduced in Ref.~\cite{coretti:2018a} to motivate the validity of a transient FR that benefits --- contrary to the existing literature --- from a fully single-system (single magnetic field) description. The analytical results are then illustrated by molecular dynamics simulations of a realistic model of NaCl. The evolution equations are integrated \emph{via} a symplectic and time-reversible algorithm that includes a modified Nos\'e-Hoover thermostat to enforce constant temperature~\cite{mouhat:2013}. These simulations show the odd parity of the dissipation under the proposed generalized time-reversal operators and verify the validity of the transient fluctuation relation for a representative value of the electric field.
\section{Theory}
Let us consider $N$ particles of charge $q_i$ and mass $m_i$ in three dimensions and in the presence of external uniform and static electric and magnetic fields. The Hamiltonian of the system is
\begin{equation}
\begin{aligned}
\label{eq:EM_Ham}
H(\Gamma) &= H_0(\Gamma) - \sum_{i=1}^Nq_i\vec{E}\cdot\vec{r}_i = \\
&= \sum_{i=1}^N\frac{\bigl(\vec{p}_i - q_i\vec{A}(\vec{r}_i)\bigr)^2}{2m_i} + \sum_{i,j<i}^NV(r_{ij}) - \sum_{i=1}^Nq_i\vec{E}\cdot\vec{r}_i
\end{aligned}
\end{equation}
In the equation above, $\vec{A}(\vec{r})$ is the vector potential associated to the magnetic field $\vec{B} = \vec{\nabla}_{\vec{r}} \times \vec{A}(\vec{r})$, $\vec{E}$ is the electric field and $V(r_{ij})$ a pairwise additive interaction potential, depending only on the modulus of the distance between particles: $r_{ij} = |\vec{r}_i - \vec{r}_j|$. We set $\vec{E} = (0, 0, E_z)$ and $\vec{B} = (0, 0, B_z)$, \emph{i.e.} the fields are parallel and oriented along the $z$-axis. In the Coulomb gauge ($\vec{\nabla}_{\vec{r}} \cdot \vec{A}(\vec{r}) = 0$), a valid choice for the vector potential is $\vec{A}(\vec{r}) = B_z/2(-y, x, 0)$.~\footnote{The choice of the gauge does not affect the discussion below since it cannot affect the evolution equations, see also~\cite{carbone:2020}.} This setting, while not completely general, is typically adopted to discuss the time-reversal properties of systems in external magnetic fields~\cite{gaspard:2013,barbier:2018a,jayannavar:2007,poria:2016} and it describes relevant physical situations. In particular, this orientation of the electric and magnetic fields ensures the presence of dissipation in the system and this is the framework in which FRs and their corollaries are generally considered.
To proceed, we couple the system to the Nos\'e-Hoover thermostat. This thermostat is commonly adopted in molecular dynamics simulations and, at equilibrium, it provides a sampling of the canonical ensemble. In particular, we shall consider a modified version of this thermostat that was introduced to account for the presence of magnetic and electric fields~\cite{mouhat:2013}. The corresponding dynamical system is
\begin{equation}
\label{eq:NH-EoM}
\begin{aligned}
\frac{\mathrm{d} x_i}{\mathrm{d} t} &= \frac{p^x_i}{m_i} + \omega_iy_i \\
\frac{\mathrm{d} y_i}{\mathrm{d} t} &= \frac{p^y_i}{m_i} - \omega_ix_i \\
\frac{\mathrm{d} z_i}{\mathrm{d} t} &= \frac{p^z_i}{m_i} \\
\frac{\mathrm{d} \ln s}{\mathrm{d} t} & = \xi
\end{aligned}
\qquad\qquad
\begin{aligned}
\frac{\mathrm{d} p^x_i}{\mathrm{d} t} &= F^x_i + \omega_i(p^y_i - m_i\omega_ix_i) - \xi(p^x_i + m_i\omega_iy_i)\\
\frac{\mathrm{d} p^y_i}{\mathrm{d} t} &= F^y_i - \omega_i(p^x_i + m_i\omega_iy_i) - \xi(p^y_i - m_i\omega_ix_i)\\
\frac{\mathrm{d} p^z_i}{\mathrm{d} t} &= F^z_i + q_iE_z - \xi p^z_i \\
\frac{\mathrm{d} \xi}{\mathrm{d} t} &= \frac{1}{\tau^2_{\mathrm{NH}}}\biggl[\frac{K(\Gamma) - K^*}{K^*}\biggr] = \frac{\delta K(\Gamma)}{\tau^2_{\mathrm{NH}}}
\end{aligned}
\end{equation}
where $\tau_{\mathrm{NH}}$ is the characteristic time of the thermostat and $K(\Gamma)$ is the instantaneous kinetic energy of the physical degrees of freedom. This kinetic energy fluctuates around the target value $K^*$, which is related to the temperature, $T$, of the system \emph{via} $\beta = G/2K^*$ ($\beta=1/k_BT$, $k_B$ the Boltzmann constant and $G$ are the degrees of freedom of the system). As hinted by the notation in Eq.~(\ref{eq:EM_Ham}), in the following we shall consider the effect of the electric field on the system at equilibrium in the presence of the external magnetic field. In Ref.~\cite{mouhat:2013} it was shown that the dynamical system \eqref{eq:NH-EoM} conserves the quantity $H_{\mathrm{NH}}(\Gamma, \xi, s) = H_0(\Gamma) + K^*\bigl[\tau^2_{\mathrm{NH}}\xi^2 + 2\ln s\bigr]$. When no electric field is present, the dynamics samples the equilibrium distribution
\begin{equation}\label{eq:ExtCanonicalDens}
f_0(X) = \mathcal{Z}^{-1}\exp[-\beta H_0(\Gamma)]\exp\biggl[-\frac{G\tau^2_{\mathrm{NH}}\xi^2}{2}\biggr]
\end{equation}
where $\mathcal{Z}$ is the partition function
and $X$ denotes a point in the extended phase space of the system, with $X = (\Gamma, \xi)$. As in standard Nos\'e-Hoover dynamics, the marginal probability obtained integrating Eq.~\eqref{eq:ExtCanonicalDens} with respect to $\xi$ is the canonical density, in magnetic field, for the physical variables.
Let us now discuss the behavior under time reversal of this dynamical systems. Direct inspection shows that, as expected, standard time reversal does not hold even considering a natural extension $\mathcal{M}_s^{\mathrm{ext}}$ which includes the Nos\'e-Hoover auxiliary variables by leaving $s$ unchanged and changing the sign of $\xi$. This is due to the coupling between coordinates and momenta induced by the magnetic field and seems to imply that a standard treatment of equilibrium and nonequilibrium relationships based on time-reversal is indeed impossible. However, the proof of these relationships requires the existence of (at least) \textbf{one} valid time-reversal operator and the even parity of the equilibrium distribution under this operator, but it does not prescribe the specific form of the operator and, in particular, it does not fix it to $\mathcal{M}_s$ or $\mathcal{M}^{\mathrm{ext}}_s$. In fact, generalized time-reversal operators, different from $\mathcal{M}_s$, have already appeared in the literature to investigate the equilibrium and nonequilibrium statistical mechanics of molecular dynamics systems, \emph{i.e.} deterministic particle systems~\cite{EMbook}. To adapt these definitions to our problem, let us denote as $\mathfrak{M}_{\mathrm{ext}}$ the extended phase space spanned by the dynamical system~\eqref{eq:NH-EoM}, and as $\mathcal{U}_t$ the associated time-evolution operator. Generalized time-reversal operators in this extended phase space are defined, in complete analogy with what is done in the physical phase space, as involutions $\mathcal{M}_{\mathrm{ext}}$ satisfying
\begin{equation}
\label{eq:TRI}
\mathcal{U}_{-t} X = \mathcal{M}_{\mathrm{ext}} \mathcal{U}_t \mathcal{M}_{\mathrm{ext}} X \quad
\forall t \in \mathbb{R} \ , \ \forall X \in \mathfrak{M}_{\mathrm{ext}}
\end{equation}
Crucially for the developments presented in the following, two such operators can be defined for~\eqref{eq:NH-EoM}: the dynamical system is in fact invariant under
\begin{subequations}
\label{eq:NH_TRS}
\begin{align}
\mathcal{M}_{\mathrm{ext}}^{(3)}(\Gamma, s, \xi) &= (-x,y,z,p^x,-p^y,-p^z,s,-\xi) \\
\mathcal{M}_{\mathrm{ext}}^{(4)}(\Gamma, s, \xi) &= (x,-y,z,-p^x,p^y,-p^z,s,-\xi)
\end{align}
\end{subequations}
combined with time inversion, and both operators satisfy the definition Eq.~\eqref{eq:TRI}. The notation adopted above reflects the nomenclature introduced in Ref.~\cite{coretti:2018a} where related symmetries --- established in the absence of a thermostat --- were first introduced. Note that the equilibrium density Eq.~\eqref{eq:ExtCanonicalDens} is even under these transformations. As mentioned above, the validity of the time-reversal operators defined in Eq.~\eqref{eq:NH_TRS} and the even parity of the equilibrium probability density of the system is a sufficient condition to reinstate standard proofs of relevant statistical mechanics relationships. We stress that these new time-reversal symmetries touch only the active degrees of freedom in the dynamical system and do not require inversion of the magnetic field. Based on these symmetries, we can then derive interesting results within a \textbf{single-system} (single magnetic field) discussion of the dynamics. For example, following the derivation in Ref.~\cite{coretti:2018a}, it can be shown within linear response theory that the $yz$ and $xz$ components of the diffusion and conductivity tensors of the system must be zero.
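The defining property in Eq.~\eqref{eq:TRI} can also be checked numerically on a minimal example. The Python sketch below (a single particle, no interparticle forces, illustrative parameters in reduced units, and the auxiliary variable $s$ omitted since it decouples from the rest of the dynamics) integrates Eq.~\eqref{eq:NH-EoM} with a simple Runge--Kutta scheme and verifies the equivalent statement $\mathcal{M}_{\mathrm{ext}}^{(3)}\,\mathcal{U}_t\,\mathcal{M}_{\mathrm{ext}}^{(3)}\,\mathcal{U}_t X = X$; the residual is limited only by the integration error.
\begin{verbatim}
import numpy as np

# Illustrative single-particle parameters (reduced units, hypothetical values)
m, q, Bz, Ez, Kstar, tauNH = 1.0, 1.0, 2.0, 0.5, 1.5, 0.3
w = q * Bz / (2.0 * m)                    # omega = q B_z / (2 m)

def rhs(X):
    x, y, z, px, py, pz, xi = X
    vx, vy, vz = px / m + w * y, py / m - w * x, pz / m
    K = 0.5 * m * (vx**2 + vy**2 + vz**2)
    return np.array([vx, vy, vz,
                     w * (py - m * w * x) - xi * (px + m * w * y),
                     -w * (px + m * w * y) - xi * (py - m * w * x),
                     q * Ez - xi * pz,
                     (K - Kstar) / (Kstar * tauNH**2)])

def evolve(X, dt, nsteps):                # classical 4th-order Runge-Kutta
    for _ in range(nsteps):
        k1 = rhs(X); k2 = rhs(X + 0.5 * dt * k1)
        k3 = rhs(X + 0.5 * dt * k2); k4 = rhs(X + dt * k3)
        X = X + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return X

def M3(X):   # (x,y,z,px,py,pz,xi) -> (-x, y, z, px, -py, -pz, -xi)
    return X * np.array([-1, 1, 1, 1, -1, -1, -1])

X0 = np.array([0.4, -0.7, 0.2, 1.1, 0.3, -0.5, 0.05])
Xt = evolve(X0, 1.0e-3, 2000)             # U_t X0
Xcheck = M3(evolve(M3(Xt), 1.0e-3, 2000)) # M3 U_t M3 U_t X0
print(np.max(np.abs(Xcheck - X0)))        # small: limited by integration error
\end{verbatim}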
In the following, we shall consider the implications of the newly introduced time-reversal operators on nonequilibrium properties of the system, focusing in particular on the transient fluctuation relation. The key quantity in this relation is the instantaneous dissipation function that, in the extended phase space, is defined as
\begin{equation}
\label{eq:omegazero_NH_def}
\Omega^{(0)}(X) \doteq - \dot{X} \cdot \nabla_X \ln f_0 - \Lambda(X)
\end{equation}
where $\Lambda(X)= \nabla_{X}\cdot\dot{X}$ is the phase-space expansion rate. Substitution of Eq.~\eqref{eq:ExtCanonicalDens} in the definition above shows, after some algebra~\cite{coretti:2020b}, that $\dot{X}\cdot \nabla_X \ln f_0 = G\xi - \beta \sum^N_{i=1}q_i\dot{\vec{r}}_i\cdot\vec{E} -G\xi\delta K(\Gamma)$. Furthermore, $\Lambda(X) = -G\xi$. Combining these results in Eq.~\eqref{eq:omegazero_NH_def} we obtain
\begin{equation}
\label{eq:omegazero_NH}
\Omega^{(0)}(X) = \mathscr{V} \beta \vec{J}(\Gamma)\cdot\vec{E} + G\xi\delta K(\Gamma)
\end{equation}
where $\mathscr{V}$ is the volume of the system and $\vec{J}(\Gamma)=\mathscr{V}^{-1}\sum^N_{i=1}q_i\dot{\vec{r}}_i$ is the microscopic current. For the Nos\'e-Hoover system examined here then, the dissipation function is given by the sum of two contributions. The first one is the usual dissipative term related to the current induced by the electric field; the second accounts for the dissipation originating from the thermal gradient between the system and the reservoir --- represented in this case by the $(s,\xi)$ variables. The average dissipation over a time-leg $\tau$ is defined as
\begin{equation}
\label{eq:average_Om}
\overline{\Omega^{(0)}}_{0,\tau}(X) \doteq \frac{1}{\tau} \int_{0}^{\tau}\mathrm{d} s\Omega^{(0)}(\mathcal{U}_s X)
\end{equation}
For $\tau \gg \tau_{\mathrm{NH}}$, it can be shown~\cite{coretti:2020b} that the contribution to this average due to the temperature gradient is negligible compared to the term expressing the dissipation associated with the currents driven by the electric field. This is illustrated in Figure~\ref{fig:Omega_thermostat}, where the dissipation function is analyzed for a 25\um{ps} molecular dynamics run of liquid NaCl with realistic interactions (the details of the simulation are provided in the next section). The bottom panel of the figure shows the instantaneous (orange curve) and average (red curve) thermal dissipation (second term on the right hand side of Eq.~\eqref{eq:omegazero_NH} and its time average, respectively). As it can be seen, the instantaneous thermal dissipation is finite, but it oscillates around zero so that it sums to a null contribution in the average. The electric dissipative term on the other hand, shown in the middle panel of Figure~\ref{fig:Omega_thermostat} (first term on the right hand side of Eq.~\eqref{eq:omegazero_NH} --- in cyan --- and its time average --- in blue ---, respectively), has nonzero instantaneous and average value and clearly dominates the total dissipation, upper panel of the figure.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{Pictures/Omega_contributions}
\caption{Different contributions to the dissipation function $\Omega^{(0)}$ computed along 25\um{ps} of evolution. The contribution due to the thermostat $\Omega^{(0)}_{\mathrm{th}}$ is shown in the lower panel, the contribution due to the electric field $\Omega^{(0)}_{\mathrm{el}}$ is in the middle panel, while the upper panel shows the total dissipation.}
\label{fig:Omega_thermostat}
\end{figure}
The analytical result in Ref.~\cite{coretti:2020b} and the numerical results presented here then support the fact that, for sufficiently long averaging windows, the dissipation assumes a form which does not depend on the specific thermostat adopted. A similar effect was observed in Ref.~\cite{evans:2005}, where oscillations of the interparticle potential energy had to average out, in order for the currents contributions to dominate the dissipation function, and for the steady-state FR to hold. In that case, at the small fields relevant for linear response, this was not guaranteed to be the case.
Furthermore, as required, in this system the instantaneous dissipation is odd under Eqs.~\eqref{eq:NH_TRS}. In Figure~\ref{fig:M3} (top panel) we show the behavior of this quantity under $\mathcal{M}_{\mathrm{ext}}^{(3)}$ (the behavior under $\mathcal{M}_{\mathrm{ext}}^{(4)}$ is the same). In the figure, $\Omega^{(0)}(t)$ is computed along a ``forward'' trajectory (in red), and along the ``backward'' trajectory (in blue) identified by $\mathcal{M}_{\mathrm{ext}}^{(3)}$ for the same liquid NaCl simulation previously considered. More in detail, the two curves are obtained as follows: the dynamical system~\eqref{eq:NH-EoM} is evolved for $500\um{fs}$ (``forward'' trajectory) and $\Omega^{(0)}(t)$ is computed along the trajectory. The operator $\mathcal{M}_{\mathrm{ext}}^{(3)}$ is then applied to the phase-space point obtained at the end of the evolution and the system is evolved again \emph{via} Eqs.~\eqref{eq:NH-EoM} for $500\um{fs}$ starting from the transformed point (``backward'' trajectory). Along this trajectory we compute again $\Omega^{(0)}(t)$. The odd parity of the dissipation is apparent from the figure. As a curiosity, in the bottom panel of Figure~\ref{fig:M3}, we show the results for calculations in which the ``backward'' trajectory corresponds to standard time-reversal in the extended phase space. The figure clearly shows the lack of a specific signature for the dissipation under this symmetry.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{Pictures/M3M1}
\caption{Instantaneous dissipation function from Eq.~\eqref{eq:omegazero_NH_def} for 500\um{fs} of the ``forward'' evolution (red curve, open squares) and for 500\um{fs} of the ``backward'' trajectory (blue curve, open triangles) obtained \emph{via} $\mathcal{M}^{(3)}_{\mathrm{ext}}$ (upper panel). The opposite values of the dissipation demonstrate the odd signature under the generalized time-reversal transformation used. The same behavior is not observed when the backward trajectory is obtained applying $\mathcal{M}_s^{\mathrm{ext}}$ (bottom panel) as the presence of the magnetic field breaks the symmetry of the system under this transformation. Results are for the nonequilibrium simulation set-up described in Section~\ref{sec:results} for liquid NaCl.}
\label{fig:M3}
\end{figure}
Having computed and characterized the dissipation function, we now move to the discussion of the associated transient fluctuation relation. This relation provides, in fact, an explicit expression for the ratio of the initial probabilities to find the average dissipation function, $\overline{\Omega^{(0)}}_{0,\tau}$ in a neighborhood of size $\delta$ of the values $A$ and of $-A$. Defining the subset of the phase space where the average dissipation takes values in the interval $(\pm A)_{\delta} = (\pm A-\delta, \pm A+\delta)$ by $\{\overline{\Omega^{(0)}}_{0,\tau}\}_{(\pm A)_{\delta}}$, this expression is given by~\cite{searles:2007,rondoni:2007}
\begin{equation}
\label{eq:t-FR-fin}
\begin{aligned}
\frac{\mu_0(\{\overline{\Omega^{(0)}}_{0,\tau}\}_{(-A)_{\delta}})}{\mu_0(\{\overline{\Omega^{(0)}}_{0,\tau}\}_{(A)_{\delta}})}
= \frac{\int_{\{\overline{\Omega^{(0)}}_{0,\tau}\}_{(-A)_{\delta}}}f_0(X)\mathrm{d} X}{\int_{\{\overline{\Omega^{(0)}}_{0,\tau}\}_{(A)_{\delta}}}f_0(X)\mathrm{d} X}
= \exp\Bigl[-\tau[A + \epsilon(\delta, A, \tau)]\Bigr]
\end{aligned}
\end{equation}
where $\epsilon$ is a correction term obeying $|\epsilon(\delta, A, \tau)| \leq \delta$. Previous discussions of (transient albeit long-time limit) FRs in the presence of aligned static external electric and magnetic fields~\cite{barbier:2018b}, relied on the classical time-reversal and employed averages with respect to equilibrium distributions associated to opposite magnetic fields. The existence of $\mathcal{M}_{\mathrm{ext}}^{(3)}$ and $\mathcal{M}_{\mathrm{ext}}^{(4)}$, however, enables to repeat the proof of the relation in a single-system picture. The proof follows the same steps as in the standard derivation, but invokes the new operators instead of $\mathcal{M}_s$ where appropriate. In the next section, the validity of this single-system relation is illustrated \emph{via} molecular dynamics simulations.
\section{Simulations and Results}
\label{sec:results}
In the following, the theoretical results presented in the previous section are illustrated and further validated \emph{via} molecular dynamics simulations of a realistic model of liquid NaCl. The simulated system consists of 125 Na$^{+}$ and 125 Cl$^{-}$ ions in a cubic box of side 20.9\um{\mbox{\normalfont\AA}}. This corresponds to a physical density $\rho = 1.3793\um{g} \um{cm}^{-3}$ (or ionic number density of 0.0275\um{\mbox{\normalfont\AA}}$^{-3}$). The temperature is set to $T = 1550\um{K}$. Pair interactions are modeled using a generalized Huggins-Mayer potential, with the parameters proposed by Tosi and Fumi in Ref.~\cite{tosi:1964} and ionic charges $q_{\mathrm{Na}}=+1\um{\emph{e}}$ and $q_{\mathrm{Cl}}=-1\um{\emph{e}}$ (with \emph{e} the elementary charge) for sodium and chloride, respectively. The magnetic field, directed along the $z$-axis, is set to the value of $\vec{B} = (0, 0, 50)\um{cu}$ ($\um{cu}$ stands for code units: a detailed description of these units and of the conversion factors used in the code can be found in Ref.~\cite{mouhat:2013}), corresponding to approximately $B_z = 5\cdot10^6\um{T}$. The intensity of the field --- huge on experimental scales --- is not unusual in the context of molecular dynamics simulations of interacting systems in external fields~\cite{ciccotti:1975, ciccotti:1976,mugnai:2009,mouhat:2013,gagliardi:2016} and is dictated by the relative strength of the external to the interparticle forces. In particular, to observe appreciable effects of the external field in a reasonable simulation time, the ratio between the average interparticle forces and the average Lorentz forces has to be around one. The chosen intensity of the magnetic field results in a value of this ratio approximately equal to 0.2. Note that the magnetic field is part of the equilibrium Hamiltonian for our system. In the driven simulations, the electric field --- also directed along the $z$-axis --- is chosen to be $\vec{E} = (0, 0, 10)\um{cu}$, corresponding approximately to $E_z = 1 \cdot 10^9 \um{V}\um{m^{-1}}$.
With this choice of the field, the ratio between the average Lorentz forces and the average electrical drift forces (absolute value) is \emph{circa} 1.
In the simulations, periodic boundary conditions are enforced in all directions. The evolution equations~\eqref{eq:NH-EoM} are integrated \emph{via} a straightforward adaptation to the case of parallel (static and constant) magnetic and electric fields of the symplectic algorithm proposed in Ref.~\cite{mouhat:2013} for the evolution of a thermalized classical charged system in perpendicular fields. The long-range Coulombic interactions are treated using the Ewald summation method with an Ewald smearing parameter $\alpha = 0.1$ in code units. The characteristic time of the generalized Nos\'e-Hoover thermostat is fixed \emph{via} $\tau_{\mathrm{NH}} = \sqrt{Q/Gk_BT}$ with $Q = 0.1\um{cu}$.
A timestep of $\delta t = 0.25\um{fs}$ is chosen for all the simulations, ensuring that the fluctuations of the Nos\'e conserved quantity are essentially zero.
The results discussed in this work are obtained \emph{via} the following simulation scheme. Initial conditions are fixed by placing the ions in a BCC lattice, and sampling initial velocities from the Maxwell-Boltzmann distribution corresponding to the target temperature. A preliminary equilibration run of 25\um{ps} is then executed at null electric field to enforce the target temperature \emph{via} the generalized Nos\'e-Hoover thermostat. Following this, a long equilibrium simulation ($\vec{E} = 0$) is performed to sample the equilibrium probability distribution $f_0$. In this run, phase-space configurations are sampled every 500\um{fs} (a sufficient interval to ensure decorrelation) along a trajectory of total length equal to 25\um{ns}, yielding a sample of $5\cdot10^4$ decorrelated configurations. From each of these configurations, a nonequilibrium run is started where the electric field is switched on to the reference value of $\vec{E} = (0, 0, 10)\um{cu}$. The average dissipation function, Eq.~\eqref{eq:average_Om}, is computed along each driven trajectory for a set of values of $\tau$, ranging from 5 to 500\um{fs}.
Probability distribution functions (PDFs) for the possible values of the average dissipation at different times are then extracted through a histogramming process.
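As a concrete illustration of this post-processing step, the following Python sketch shows one way to build such normalized histograms with NumPy; it is only schematic, and the array \texttt{omega\_bar} of per-run averaged dissipation values is an assumed input rather than part of the actual analysis code.
\begin{verbatim}
import numpy as np

# omega_bar: hypothetical 1D array holding one tau-averaged dissipation
# value per nonequilibrium run (5e4 entries in the setup described above).
def dissipation_pdf(omega_bar, n_bins=100):
    counts, edges = np.histogram(omega_bar, bins=n_bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, counts  # normalized PDF estimate at the bin centers
\end{verbatim}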
Results for the PDFs are presented in Figure~\ref{fig:PDFs-transient}, showing the typical shifting and narrowing around the driven value of the dissipation as the simulation time lengthens.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{Pictures/TFR_PDF.pdf}
\caption{Probability distribution functions (PDFs) estimated from the normalized histogram of the average dissipation function computed in nonequilibrium runs starting from $5\cdot10^4$ decorrelated equilibrium configurations for different values of $\tau$. The main plot shows $\tau$ ranging from $0.005$ to $0.5$\um{ps}, while the inset shows the trend for the low values of $\tau$ ranging from $0.005$ to $0.04$\um{ps}, \emph{i.e.} just after the switching on of the electric field that acts as the dissipative, nonequilibrium force.}
\label{fig:PDFs-transient}
\end{figure}
From the probabilities of opposite values of the average dissipation functions, it is possible to check Eq.~\eqref{eq:t-FR-fin} for the system under investigation.
Results are reported in Figure~\ref{fig:TFR} for $\tau$ ranging from $0.015\um{ps}$ to $0.115\um{ps}$ together with the corresponding theoretical expectations, computed from Eq.~(\ref{eq:t-FR-fin}). As expected, the agreement between the theoretical result (solid curves) and the molecular dynamics calculation suffers as the length of the simulation and the value of $A$ increase. The exponential behavior of the calculated quantities is, however, apparent, and the agreement between the two sets of data is very good. To further quantify this agreement, in Table~\ref{tab:TFR} we show the values of $\tau$ obtained from exponential fits performed on the numerical results and compare them with the exact values. In this case too, the agreement is very good within error bars.
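A minimal sketch of this check, under the assumption that the per-run averaged dissipation values for a fixed $\tau$ are available in a NumPy array \texttt{omega\_bar}, could read as follows; it forms the probability ratio entering Eq.~\eqref{eq:t-FR-fin} and extracts $\tau$ from the slope of its logarithm.
\begin{verbatim}
import numpy as np

def fr_ratio_and_fit(omega_bar, a_values, delta):
    """Ratio P(-A)/P(+A) for each A, and tau from an exponential fit.

    omega_bar: averaged dissipation values (one per run) at fixed tau
    a_values : 1D array of positive values A
    delta    : half-width of the neighbourhoods (A)_delta and (-A)_delta
    """
    ratios = np.array([
        np.mean(np.abs(omega_bar + a) < delta)
        / np.mean(np.abs(omega_bar - a) < delta)
        for a in a_values
    ])
    ok = np.isfinite(ratios) & (ratios > 0)
    slope, _ = np.polyfit(a_values[ok], np.log(ratios[ok]), deg=1)
    return ratios, -slope  # the relation predicts ratios ~ exp(-tau*A)
\end{verbatim}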
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{Pictures/TFR_exp_vs_simul.pdf}
\caption{Simulation results for the transient fluctuation relation from the simulations described in Section~\ref{sec:results}. The points represent the ratio between the probabilities (on the $y$-axis) of obtaining opposite values of the average dissipation ($A$, on the $x$-axis) at different values of $\tau$ from Figure~\ref{fig:PDFs-transient}. The statistical error is obtained by dividing the $5\cdot10^4$ nonequilibrium runs into 50 blocks of $10^3$ runs each and computing the normalized histogram for each of them. The error reported on the plot is obtained from the standard deviation, across the different blocks, of each single histogram bin. Points are the simulation results, while the solid lines represent theoretical expectations. Parameters obtained from exponential fits performed on this set of numerical results are reported in Table~\ref{tab:TFR}. Note the logarithmic scale on the $y$-axis.}
\label{fig:TFR}
\end{figure}
\begin{table}[htb]
\caption{\label{tab:TFR}Comparison between the expected value of $\tau$ and the one obtained from the exponential fits performed on the numerical results in Figure~\ref{fig:TFR}.}
\begin{ruledtabular}
\begin{tabular}{cc}
$\tau_{\mathrm{exp}}$ (\um{ps}) & $\tau_{\mathrm{simul}}$ (\um{ps}) \\ \hline
$0.015$ & $0.0166 \pm 0.0003$ \\
$0.025$ & $0.0252 \pm 0.0003$ \\
$0.035$ & $0.0349 \pm 0.0004$ \\
$0.045$ & $0.0446 \pm 0.0005$ \\
$0.055$ & $0.0564 \pm 0.0009$ \\
$0.065$ & $0.068 \pm 0.001$ \\
$0.075$ & $0.0770 \pm 0.0008$ \\
$0.085$ & $0.086 \pm 0.003$ \\
$0.095$ & $0.097 \pm 0.008$ \\
$0.105$ & $0.104 \pm 0.006$ \\
$0.115$ & $0.112 \pm 0.008$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
\section{Concluding remarks}
In this work, we have demonstrated that, and illustrated how, the transient FR can actually be verified in nonequilibrium molecular dynamics simulations of Nos\'e-Hoover thermostatted particles subject to magnetic and electric fields. The dissipation function, as well as the deterministic thermostat, have been expressed for the case in which the electric and magnetic fields are parallel to each other. Although the applicability of generalized time-reversal symmetries implicitly implies the validity of the (transient) FR, based on the Nos\'e-Hoover canonical initial equilibrium distribution, the actual possibility of verifying it in a concrete, realistic simulation is not obvious. In the first place, to the best of our knowledge, this test has never been performed in the presence of a magnetic field, which substantially modifies the dynamics of the particles. In the second place, a verification of the FR may be hindered by the combination of scarce statistics and noise in the signal. Indeed, while the thermal noise becomes irrelevant at long observation (averaging) times, such long times drastically reduce the statistics of negative dissipations. Our simulations prove that the delicate balance allowing the verification of the FR can be achieved for systems of moderately large size. Future developments will address the steady-state FR, which requires further conditions to be verified, besides the time symmetry of the dynamics and of the initial phase-space probability distribution~\cite{searles:2007,rondoni:2007,searles:2013,evans:2016}.
\section*{Author contributions}
All authors have contributed equally to the conceptualization and development of the methodology and algorithm. A.C. implemented the necessary changes in an in-house code and performed the simulations. All authors contributed equally to the validation of the results and their analysis. S.B. was responsible for writing the original draft, while all authors participated equally in the reviewing and editing of the manuscript.
\begin{acknowledgments}
A.C. and L.R. have been partially supported by Ministero dell'Istruzione, dell'Universit\`{a} e della Ricerca
(MIUR) grant ``Dipartimenti di Eccellenza 2018-2022''.
\end{acknowledgments}
\section{Introduction}
\vspace{-0.5em}
In optical transmission systems, the performance of a given
modulation format is determined by its tolerance to both
linear \gls{ASE} noise, and
\gls{NLI} arising from the Kerr effect.
Designing modulation formats which increase
achievable information rates (AIRs) in the presence of linear
and nonlinear impairments is crucial to achieve higher capacity and longer reach.
NLI depends on the average power of the transmitted signal. One of the most widely used models for NLI is the so-called Gaussian noise (GN) model. The GN model neglects most of the specific
properties of the transmitted signal, including its
underlying modulation format \cite{Poggiolini_JLT2014}.
However, in the vast majority of recent demonstrations, it has been shown that NLI also depends on the modulation format \cite{SerenaECOC2014,GaldinoECOC2016}. A model considering this modulation format dependency was later developed as an improvement to the GN model, known as the enhanced GN (EGN) model \cite{Carena:14,DarJLT2015}.
Signal shaping has recently been widely investigated in optical fibre communications to improve spectral efficiency (SE), and is currently implemented in commercial products via probabilistic shaping (PS) \cite{NokiaPSE-V} and geometric shaping (GS) \cite{FujitsuT600}.
The performance of PS
has been examined in theory, simulations, and experiments \cite{ChoECOC2016,TobiasJLT16,Buchali2016,RennerJLT2017,SillekensOFC2018}.
However, PS suffers from rate loss in practical implementations with finite blocklengths \cite{FehenbergerJLT2020} and also experiences increased NLI \cite{TobiasJLT16,FehenbergerOFC2020}. Multidimensional (MD) geometric shaping based on constant modulus formats is known to offer NLI tolerance in the nonlinear channel \cite{Kojima2017JLT,BinChenJLT2019}.
In so doing, the multidimensional modulation format ``shapes out'' part of the NLI at the expense of losing only some degrees of freedom \cite{DarISIT2014}.
In this paper, we study the recently proposed four-dimensional orthant-symmetric 128-ary (4D-OS128) format \cite{ChenArxiv2020}, which was designed by maximizing the generalized mutual information with the benefit of a significantly reduced optimization search space.
We observe a performance trade-off between linear shaping and nonlinear tolerance; as a result, 4D-OS128 outperforms the corresponding nonlinearity-tolerant geometrically-shaped constellation 7b4D-2A8PSK and PS-16QAM with finite blocklength at the same SE.
\vspace{-0.7em}
\section{Orthant-Symmetric MD Geometric Shaping}
To solve the multi-parameter optimization challenge of MD geometric shaping and also to reduce the transceiver requirements, we proposed to impose an ``orthant symmetry'' constraint on the $N$-dimensional modulation format to be designed \cite{ChenArxiv2020}.
Orthant-symmetric labeled constellations can be generated from any \textit{first-orthant constellation}, where the constellation points
are obtained by folding the first-orthant points to the remaining orthants \cite{ChenArxiv2020}.
\begin{figure}[!b]
\vspace{-2em}
\includegraphics[width=0.23\textwidth]{tikz_compiled0}
\hspace{0em}
\includegraphics[width=0.23\textwidth]{tikz_compiled1}
\vspace{0em}
\caption{First orthant of the 4D-OS128 modulation format and the associated bit mapping $[b_{j5}, b_{j6}, b_{j7}], j\in\{1,2,\dots,8\}$.}
\label{fig:firstOrthant}
\vspace{-1em}
\end{figure}
In this paper, we focus on comparing modulation formats with a \gls{SE} of $m=7$~bit/4D-sym, each consisting of $M=2^m$ four-dimensional ($N=4$) points $\bs_i,i\in\{1,2,\dots,128\}$ labeled by 7 bits $\bb_i= [b_1,b_2,\dots,b_7]$.
For the 4D-OS128 format, each orthant contains $2^{m-N}=8$ constellation points. The 8 constellation points in the first orthant
are denoted by $\boldsymbol{t}_j=\left[t_{j1},t_{j2},t_{j3},t_{j4}\right] \in\mathbb{R}_+^4$ with $j=\{1,2,\dots,8\}$.
The first four bits $[b_{j1},b_{j2},b_{j3},b_{j4}]$ of the label determine one of the sixteen orthants.
The remaining three bits $[b_{j5}, b_{j6}, b_{j7}]$ determine one of the eight points $\boldsymbol{t}_j$ in the corresponding orthant.
The 2D projections of the first orthant of the 4D-OS128 format \cite{ChenArxiv2020} are shown in Fig. \ref{fig:firstOrthant}, where symbols with the same color belong to the same 4D symbol. Due to the structure of the 4D-OS128 format, the 4D symbols have three energy levels $r_0^2+r_k^2, k\in\{1,2,3\}$ (highlighted as three red or blue circle combinations in Fig. \ref{fig:firstOrthant}). The five amplitude values of the 4D-OS128 format in Fig. \ref{fig:firstOrthant}, optimized for an SNR of $9.5$~dB, are $(a_1,a_2,a_3,a_4,a_5)=(0.2875, 0.3834,0.4730,1.1501,1.2460)$.
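As an illustration of this construction, the following Python sketch folds a first-orthant point set into all sixteen orthants by applying every sign pattern; the $8\times 4$ array \texttt{first\_orthant\_pts} is a placeholder for the optimized points $\boldsymbol{t}_j$ and is not taken from the actual design code.
\begin{verbatim}
import numpy as np

def fold_to_all_orthants(first_orthant_pts):
    """Orthant-symmetric 4D constellation from its first-orthant points.

    first_orthant_pts: array of shape (8, 4) with strictly positive entries.
    Returns an array of shape (128, 4)."""
    pts = np.asarray(first_orthant_pts, dtype=float)
    # One sign pattern per orthant: bit k of s selects the sign of dimension k.
    signs = np.array([[1 - 2 * ((s >> k) & 1) for k in range(4)]
                      for s in range(16)])
    return (signs[:, None, :] * pts[None, :, :]).reshape(-1, 4)
\end{verbatim}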
\begin{figure*}[!tb]
\includegraphics[width=1\textwidth]{tikz_compiled2}
\vspace{-1em}
\caption{Probability of energy per 4D symbol for four \gls{SE} = 7 bits/4D-sym modulations. The variation of the transmitted symbols' energy contributes to the nonlinear interference noise (NLIN). Inset: 2D geometric representation of energy per 4D symbol.}
\label{fig:4DEnergy_hist}
\end{figure*}
\vspace{-0.5em}
\section{Probabilistic Amplitude Shaping}
In addition to the orthant-symmetric MD modulation format, we also consider probabilistic amplitude shaping
(PAS) with quadrature
amplitude modulation (QAM) to achieve the same \gls{SE} of 7~bit/4D-sym.
In this paper, PS-16QAM is generated via probabilistically shaping polarization-multiplexed (PM)-16QAM with constant composition distribution matching (CCDM).
To take the finite-length CCDM rate loss into account, the AIR of PAS for bit-metric decoding (BMD) with a finite-length CCDM of length $n$ is computed as \cite{2018Tobias_PBDM},
\vspace{-1em}
\begin{equation}
\vspace{-1em}
\text{AIR}_n=\Bigl[H(\boldsymbol{C})-\sum^{m}_{i=1}H(C_i|Y)\Bigr]-R_{\text{loss},n},
\vspace{-0em}
\end{equation}
where $\boldsymbol{C}=(C_1,\dots,C_m)$ represents the $m$ coded bit levels of the considered modulation format, $H(\cdot)$ denotes entropy, $R_{\text{loss},n}$ indicates the finite-length rate loss \cite[eq. (4)]{Bcherer2017arxiv}, and $Y$ is the channel output.
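To make the quantities in the AIR expression above concrete, the following Python sketch estimates it by Monte Carlo for a memoryless AWGN surrogate channel with a 2D (per-polarization) constellation; this is only illustrative, it is not the EGN-based evaluation used for the results below, and the constellation, labeling, probabilities, noise variance, and rate loss are assumed inputs.
\begin{verbatim}
import numpy as np

def air_bmd_awgn(const, labels, probs, noise_var, rate_loss,
                 n_samples=200000, seed=0):
    """Monte Carlo estimate of H(C) - sum_i H(C_i|Y) - R_loss (bits/sym).

    const  : (M,) complex constellation points
    labels : (M, m) array of bits labeling each point
    probs  : (M,) symbol probabilities (summing to 1)
    """
    rng = np.random.default_rng(seed)
    M, m = labels.shape
    p_nz = probs[probs > 0]
    entropy_c = -np.sum(p_nz * np.log2(p_nz))       # H(C) = H(X)
    idx = rng.choice(M, size=n_samples, p=probs)    # transmitted symbols
    noise = np.sqrt(noise_var / 2) * (rng.standard_normal(n_samples)
                                      + 1j * rng.standard_normal(n_samples))
    y = const[idx] + noise
    # p(x) q(y|x), up to a common factor, for every (sample, symbol) pair
    metric = probs * np.exp(-np.abs(y[:, None] - const[None, :])**2 / noise_var)
    den = metric.sum(axis=1)
    h_cond = 0.0
    for i in range(m):
        sent = labels[idx, i]                       # transmitted bit, level i
        num = np.where(labels[None, :, i] == sent[:, None], metric, 0.0).sum(axis=1)
        h_cond += np.mean(-np.log2(num / den))      # estimate of H(C_i|Y)
    return entropy_c - h_cond - rate_loss
\end{verbatim}
For PS-16QAM the probabilities would follow the distribution targeted by the CCDM and \texttt{rate\_loss} the finite-length value of \cite[eq. (4)]{Bcherer2017arxiv}, while the 4D formats require the straightforward four-dimensional generalization of this per-2D sketch.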
\vspace{-0.5em}
\section{Modulation-dependent NLI}
NLI in the EGN model is effectively considered as additive white Gaussian noise.
It is well understood that low energy variations in the signal reduce NLI \cite{GhazisaeidiJLT2017}.
Therefore, choosing symbols with smaller energy variations, lying close to the envelope of a four-dimensional ball, can lead to less NLI; this can be considered as \textit{nonlinear shaping}.
This is in contrast with Gaussian-shaped constellations, which choose symbols within the multidimensional balls,
an approach referred to as \textit{linear shaping}.
Fig. \ref{fig:4DEnergy_hist} shows examples of \textit{linear shaping} and \textit{nonlinear shaping} by comparing the probability distribution of the 4D symbol energy for three different formats (all normalized to unit energy per polarization): a 4D constant-modulus (CM) constellation, 4D-OS128, and PS-16QAM with blocklengths $n=64$ and $n=128$.
We can observe that nonlinear shaping is at odds with linear shaping, which moves the constellation symbols away from the average energy $E_s$ (see also the 2D geometric representation of the energy per 4D symbol in the inset of Fig. \ref{fig:4DEnergy_hist}).
In addition, the 4D constellation symbol energies spread out around the average normalized energy $E_s=2$ and can be divided into three groups: low-energy symbols, medium-energy symbols, and high-energy symbols. We will show in the following section that this 4D energy distribution can induce NLI fluctuations in the nonlinear fibre channel.
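The energy statistics of Fig.~\ref{fig:4DEnergy_hist} can be reproduced from any transmitted dual-polarization symbol sequence with a few lines of Python; the sketch below uses hypothetical array names, and the $5\%$ tolerance used to split the symbols into the three groups is an arbitrary illustrative choice.
\begin{verbatim}
import numpy as np

def symbol_energies_4d(x_pol, y_pol, rel_tol=0.05):
    """Energy per 4D symbol and a coarse low/medium/high split.

    x_pol, y_pol: complex 2D symbols of the two polarizations (same length).
    """
    e4d = np.abs(x_pol)**2 + np.abs(y_pol)**2
    e_mean = e4d.mean()
    groups = {
        "low":    e4d < (1 - rel_tol) * e_mean,
        "medium": np.abs(e4d - e_mean) <= rel_tol * e_mean,
        "high":   e4d > (1 + rel_tol) * e_mean,
    }
    return e4d, groups
\end{verbatim}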
\input{table.tex}
\vspace{-0.7em}
\section{Numerical Simulations}
Split-step Fourier method simulations with a step size of 0.1~km were performed to compare the modulation formats and predict system performance. The simulation parameters are given in Table 1 for the optical multi-span fibre link under consideration, which comprises multiple standard single-mode fibre spans of $75$~km, amplified at the end of each span by an EDFA with a noise figure of $5$~dB. The encoded bits are mapped according to three modulation formats: PS-16QAM, constant-modulus 7b4D-2A8PSK, and 4D-OS128.
Each of the 11 WDM channels carries independent data, and all channels are assumed to have the same transmitted power. An ideal receiver is used for detection, and chromatic dispersion is digitally compensated.
\begin{figure}[!tb]
\includegraphics[width=0.48\textwidth]{tikz_compiled3}
\caption{$\text{SNR}_{\text{eff}}$ vs. transmission distance at $P_{\text{ch}}=-1$~dBm.}
\label{fig:4D-OS128_symbols_PvsSNR}
\vspace{-0.7em}
\end{figure}
\begin{figure*}[!b]
\vspace{-1em}
\subfigure[Effective SNR vs. $P_{\text{ch}}$ at 7875~km.]{\includegraphics[width=0.32\textwidth]{tikz_compiled4}}
\hspace{-11.5em}
\subfigure[AIR vs. $P_{\text{ch}}$ at 7875~km.]{
\includegraphics[width=0.87\textwidth]{tikz_compiled5}
}
\hspace{-14.5em}
\subfigure[AIR vs. distance at optimal $P_{\text{ch}}$.]{
\includegraphics[width=0.33\textwidth]{tikz_compiled6}}
\caption{Simulation results of multi-span optical fiber transmission with 11 WDM channels for three modulation formats with SE of 7~bit/4D-sym: 7b4D-2A8PSK, 4D-OS128 and PS-16QAM with three different blocklengths.
}
\label{fig:7bit_modulation}
\vspace{-1em}
\end{figure*}
In Fig.~\ref{fig:4D-OS128_symbols_PvsSNR}, the effective SNR ($\text{SNR}_{\text{eff}}$) in dB versus transmission distance is evaluated for the three modulation formats. The $\text{SNR}_{\text{eff}}$ is estimated from the transmitted data $X$ and received symbols $Y$ as $E_s/\sigma^2$, where $\sigma^2=\text{var}(Y-X)$ denotes the noise variance. Fig.~\ref{fig:4D-OS128_symbols_PvsSNR} shows the average SNRs calculated for the three energy levels: low, medium, and high (see Fig.~\ref{fig:4DEnergy_hist}). We observe that the $\text{SNR}_{\text{eff}}$ varies significantly with the 4D symbol energy for 4D-OS128 and PS-16QAM. At a distance of 6750~km, the SNR difference between low- and high-energy symbols of PS-16QAM is $0.7$~dB, while the SNR difference for 4D-OS128 is only $0.34$~dB. The reason for this effect is that the transmitted symbols of 4D-OS128 are closer to constant-modulus sequences than those of PS-16QAM.
This observation indicates that the proposed 4D-OS128 is more robust to fiber nonlinear impairments than PS solutions, and thus gives an intuition about the optimal design and implementation of future MD formats.
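For completeness, the per-group effective SNR described above can be computed along the lines of the following sketch, where \texttt{groups} maps group names to boolean masks over the symbols (for instance those returned by the energy-grouping sketch of the previous section); the per-group normalization of $E_s$ is an assumption of this illustration rather than a statement about the actual evaluation.
\begin{verbatim}
import numpy as np

def effective_snr_per_group(x_tx, x_rx, groups):
    """SNR_eff = E_s / var(Y - X), in dB, overall and per energy group."""
    def snr_db(tx, rx):
        noise_var = np.var(rx - tx)
        return 10 * np.log10(np.mean(np.abs(tx)**2) / noise_var)
    out = {"all": snr_db(x_tx, x_rx)}
    for name, mask in groups.items():
        out[name] = snr_db(x_tx[mask], x_rx[mask])
    return out
\end{verbatim}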
In Fig. \ref{fig:7bit_modulation}, we show the SNR and AIR of 7b4D-2A8PSK, 4D-OS128 and PS-16QAM with DM blocklengths $n = 32,64, 128$. We observe that PS with $n=128$ gives a slightly higher AIR, but the increased rate loss diminishes the efficiency of DM as the blocklength decreases.
With $n = 32$ and $n = 64$, PS-16QAM has even worse performance than 4D-OS128. These losses of PS-16QAM are shown in Fig. \ref{fig:7bit_modulation} (a) and (b),
and are particularly visible in the highly nonlinear regime. {For the considered optical fiber transmission setup, 4D-OS128 can achieve approximately 9.3\% (700~km) of reach increase with respect to 7b4D-2A8PSK and PS-16QAM with blocklength $n=32$ at the same transmission rate.}
In Fig. \ref{fig:7bit_modulation} (a), we also observe that PS with short blocklengths can slightly increase the nonlinear tolerance and, thus, the $\text{SNR}_{\text{eff}}$. This phenomenon has been reported in \cite{Amari2019_IntroducingESSoptics} and, very recently, in \cite{FehenbergerJLT2020}.
\vspace{-0.5em}
\section{Conclusions}
We have studied the performance of various signal shaping schemes in the presence of fibre nonlinearities. All formats have the same spectral efficiency (7~bit/4D-sym); however, they differ greatly in the distribution of their symbol energies. 4D symbol energy considerations showed that constant-modulus constellations reduce the NLI and that probabilistic shaping exhibits large SNR variations across symbols with different energies. The newly introduced 4D-OS128 format was shown to be able to trade off linear and nonlinear tolerance, giving SNR improvements with respect to PS-16QAM. This is achieved by introducing smaller 4D symbol energy variations in the transmitted sequences, which effectively mitigates fiber nonlinearities.
\begin{spacing}{0.8}
{\footnotesize
\linespread{0.8} \textbf{Acknowledgements}:
The work of B. Chen is supported by the National Natural Science Foundation of China (NSFC) under Grant 61701155. C. Okonkwo is supported in part by the Dutch NWO Gravitation Program: Research centre for Integrated Nanophotonics (Grant Number 024.002.033). The work of A. Alvarado is supported by the Netherlands Organisation for Scientific Research (NWO) via the VIDI Grant ICONIC (project number 15685).
}
\end{spacing}
\cleardoublepage
\printbibliography
\end{document}
\section{Introduction}
\label{section:intro}
Affine semigroup rings are objects of many studies in combinatorial
commutative algebra. The goal of this article is to present the
\texttt{SageMath}\xspace library \texttt{stdPairs.spyx}\xspace, which systematizes computations for
monomial ideals in affine semigroup
rings. The algorithms implemented here are based on the notion of
\emph{standard pairs}, introduced for monomial ideals in polynomial
rings by \cite{MR1339920}, and generalized to the semigroup ring case in
\cite{STDPAIR}. Standard pairs are a combinatorial structure that contains
information on primary and irreducible decompositions of monomial
ideals, as well as multiplicities. One of the main contributions of
\cite{STDPAIR} is that standard pairs and the associated algebraic concepts
can be effectively computed over affine semigroup rings.
The \texttt{SageMath}\xspace library \texttt{stdPairs.spyx}\xspace implements the algorithms of
\cite{STDPAIR} to calculate standard pairs for monomial ideals in any
pointed (not necessarily normal) affine semigroup ring. This library can be
regarded as a generalization of the \texttt{standardPairs} function in
\texttt{Macaulay2}\xspace implemented by \cite{MR1949549}. This library can be obtained via \url{https://github.com/byeongsuyu/StdPairs}.
\subsection*{Outline}
\cref{section:classes} provides background on affine semigroup rings,
their monomial ideals, and related combinatorial notions. It also explains their implementation as \texttt{SageMath}\xspace classes. \cref{section:algorithms} presents the implementation of an algorithm finding standard pairs, proposed in~\cite[Section 4]{STDPAIR}. \cref{section:compatibility} shows compatibility with the \texttt{Normaliz}\xspace package by introducing methods to translate objects in \texttt{SageMath}\xspace into objects in \texttt{Macaulay2}\xspace using \texttt{Normaliz}\xspace.
\subsection*{Acknowledgements}
We would like to express our deepest appreciation to Laura Matusevich for conversations and helpful comments on draft versions of this paper. Also, we are grateful to Matthias K{\"o}ppe for advice on using \texttt{zsolve} in \texttt{4ti2}.
\subsection{Notation}
In this paper, the semigroup of nonnegative integers (including 0) is denoted by $\N=\{0,1,2, \cdots \}$. The ring of integers is denoted by $\Z$. The set of all nonnegative real numbers is denoted by $\R_{\geq 0}$. Boldface upper case letters and boldface lower case letters denote matrices and vectors, respectively; for example, we write $\R_{\geq 0}\mathbf{A}$ for the cone generated by a $d \times n$ matrix $\mathbf{A}$ over $\Z$. An arbitrary field is denoted by $\field$.
\section{Affine semigroup, ideal, and proper pair as classes of \texttt{SageMath}\xspace}
\label{section:classes}
\subsection{Mathematical Background}
An \emph{affine semigroup} is a semigroup of $\Z^{d}$ generated by
finitely many vectors $\veca_{1},\cdots, \veca_{n}$ for some $n \in
\N.$ We let $\mathbf{A}$ be a $d \times n$ matrix whose column vectors are
$\veca_{1},\cdots, \veca_{n} \in \Z^{d}$. The set of all nonnegative
integer linear combinations of $\veca_{1},\cdots, \veca_{n}$, denoted
by $\N\mathbf{A}$, is an affine semigroup. These columns $\veca_{1},\cdots,
\veca_{n}$ are the \emph{generators} of the affine semigroup $\N
\mathbf{A}$; the matrix $\mathbf{A}$ is called the \emph{generating
matrix}. Since $\N \mathbf{A}$ contains 0, an affine semigroup is a
commutative monoid. Given a field $\field$, we are concerned with the
\emph{affine semigroup ring} $\field[\N \mathbf{A}]$. A natural first
example is the polynomial ring in $d$-variables; in this case, $\mathbf{A}$
is the $d\times d$ identity matrix. We refer to \cite[Section~7]{CCA}
for more background on this topic. Throughout this article, we assume
that the affine semigroup $\N\genset$ under consideration is
\emph{pointed}, which means that the cone $\R_{\geq 0} \mathbf{A}$ does not contain lines.
An \emph{ideal} of an affine semigroup is a set $I \subset \N \mathbf{A}$
such that $I + \N \mathbf{A} \subseteq I$. There is a one-to-one
correspondence between monomial ideals of $\field[\N \mathbf{A}]$ and
ideals of $\N \mathbf{A}$. Therefore, the definition of prime, irreducible,
and primary ideals of $\field[\N \mathbf{A}]$ can be naturally extended to
the ideals of an affine semigroup. The \emph{standard monomials} of an
ideal $I \subset \N \mathbf{A}$ are all elements of $\N \mathbf{A} \smallsetminus I$. Let $\std(I)$ be a set of all standard monomials with respect to $I$.
A \emph{face} of an affine semigroup $\N\mathbf{A}$ is a subsemigroup
$\N\matF \subseteq \N\mathbf{A}$ such that the complement $\N\mathbf{A} \smallsetminus
\N\matF$ is an ideal of $\N\mathbf{A}$ \cite[Definition 7.8]{CCA}. Equivalently,
it is a subsemigroup $\N\matF$ with the property
that $\veca +\mathbf{b} \in \N \matF$ if and only if $\veca,\mathbf{b} \in \N
\matF$. The faces of an affine semigroup form a lattice which is isomorphic to the face
lattice of a (real) cone over the affine
semigroup~\cite{MR1251956,CCA}. Thus, we may represent a face
$\N\matF$ as a submatrix $\matF$ of $\mathbf{A}$.
A \emph{pair} is a tuple $(\veca, \matF)$ of an element $\veca$ in $\N
\mathbf{A}$ and a face $\matF$ of $\N \mathbf{A}$~\cite{STDPAIR}. A \emph{proper
pair} of an ideal $I$ is a pair $(\veca, \matF)$ such that $\veca +
\N\matF \subseteq \std(I)$. A pair $(\veca, \matF)$ \emph{divides}
$(\mathbf{b},\mathbf{G})$ if there exists $\vecc \in \N \mathbf{A}$ such that $\veca
+ \vecc + \N \matF \subseteq \mathbf{b} + \N \mathbf{G}$~\cite{STDPAIR}. The set
of all proper pairs of an ideal $I$ is partially ordered $\prec$ by
inclusion. In other words, $(\veca, \matF) \prec (\mathbf{b}, \mathbf{G})$ if
$\veca+ \N \matF \subset \mathbf{b}+ \N \mathbf{G}$. The \emph{standard pairs}
of an ideal $I$ are the maximal elements of the set of all proper
pairs of $I$ in this partial order. We denote by $\stdp(I)$ the set of
all standard pairs of an ideal $I$.
We remark that our notation here differs from existing notation for standard pairs over polynomial rings. Over the polynomial ring $\field[x_{1},x_{2},\cdots, x_{n}]$, a pair is a tuple $(x^{\veca},V)$ where $x^{\veca}$ is a monomial $x_{1}^{a_{1}}x_{2}^{a_{2}}\cdots x_{n}^{a_{n}}$ for some integer vector $\veca=(a_{1},a_{2},\cdots, a_{n})$ and $V$ is a set of variables \cite{MR1949549, MR1339920}. From the viewpoint of affine semigroup rings, the polynomial ring is a special case when the underlying affine semigroup is generated by an $n\times n$ identity matrix $\mathbf{I}$. Since the cone $\R_{\geq0}\mathbf{I}$ is a simplicial cone, i.e., every subset of rays form a face, we may interpret $V$ as a face. The following example shows the different notations for the standard pairs of a monomial ideal $I = \langle x^{\left[\begin{smallmatrix} 1\\ 3 \\ 1 \end{smallmatrix}\right]}, x^{\left[\begin{smallmatrix} 1\\ 2 \\ 2 \end{smallmatrix}\right]},x^{\left[\begin{smallmatrix} 0\\ 3 \\ 2 \end{smallmatrix}\right]}, x^{\left[\begin{smallmatrix} 0\\ 2 \\ 3 \end{smallmatrix}\right]} \rangle$ in the polynomial ring $\field[x_{1},x_{2},x_{3}]$.
In \texttt{Macaulay2}\xspace,
\begin{verbatim}
i1 : R = QQ[x,y,z];
i2 : I = monomialIdeal(x*y^3*z, x*y^2*z^2, y^3*z^2, y^2*z^3)
3 2 2 3 2 2 3
o2 = monomialIdeal (x*y z, x*y z , y z , y z )
o2 : MonomialIdeal of R
i3 : standardPairs I
o3 = {{1, {x, z}}, {y, {x, z}}, {1, {x, y}}, {z, {y}},
2 2 2
{y z, {x}}, {y z , {}}}
o3 : List
\end{verbatim}
whereas in the given library \texttt{stdPairs.spyx}\xspace in \texttt{SageMath}\xspace,
\begin{verbatim}
sage: load("~/stdPairs.spyx")
Compiling /Users/byeongsuyu/stdPairs.spyx...
sage: A = matrix(ZZ,[[1,0,0],[0,1,0],[0,0,1]])
sage: Q = affineMonoid(A)
sage: M = matrix(ZZ,[[1,1,0,0],[3,2,3,2],[1,2,2,3]])
sage: I = monomialIdeal(Q,M)
sage: I.standardCover()
{(): [([[0], [2], [2]]^T,[[], [], []])],
(1,): [([[0], [0], [1]]^T,[[0], [1], [0]])],
(0,): [([[0], [2], [1]]^T,[[1], [0], [0]])],
(0, 2): [([[0], [1], [0]]^T,[[1, 0], [0, 0], [0, 1]]),
([[0], [0], [0]]^T,[[1, 0], [0, 0], [0, 1]])],
(0, 1): [([[0], [0], [0]]^T,[[1, 0], [0, 1], [0, 0]])]}
\end{verbatim}
\subsection{Classes in \texttt{stdPairs.spyx}\xspace}
\label{subsec:class_aff_monoid}
\sloppy We implement three classes related to affine semigroups, semigroup ideals, and proper pairs, respectively. This implementation is based on \texttt{SageMath}\xspace 9.1 with \texttt{Python} 3.7.3 and the \texttt{4ti2} package.
\subsubsection{Class \texttt{affineMonoid}}
This class is constructed by using an integer matrix $\mathbf{A}$. The name follows the convention of \texttt{SageMath}\xspace, which distinguishes monoids from semigroups. In \texttt{SageMath}\xspace, $\mathbf{A}$ can be expressed as a 2-dimensional \texttt{Numpy} array or an integer matrix of \texttt{SageMath}\xspace. For example,
\begin{verbatim}
sage: A = matrix(ZZ,[[1,2],[0,2]])
sage: load("stdPairs.spyx")
Compiling ./stdPairs.spyx...
sage: Q = affineMonoid(A)
\end{verbatim}
generates $Q$ as a type of \texttt{affineMonoid}. This class has several member variables and member functions as explained below.
\begin{itemize}[leftmargin=*]
\item \texttt{Q.gens} stores a matrix generating an affine monoid $Q$ as \texttt{Numpy.ndarray} type. This may not be a minimal generating set of $Q$.
\item \texttt{Q.mingens} stores a minimal generating matrix of an affine monoid of $Q$.
\item \texttt{Q.poly} is a real cone $\R_{\geq 0}Q$ represented as a type of \texttt{Polyhedron} in \texttt{SageMath}\xspace. If one generates $Q$ with \texttt{True} parameter, i.e.,
\begin{verbatim}
sage: A = matrix(ZZ,[[1,2],[0,2]])
sage: load("stdPairs.spyx")
Compiling ./stdPairs.spyx...
sage: Q = affineMonoid(A,True)
\end{verbatim}
then \texttt{Q.poly} is of a class of \texttt{Normaliz}\xspace integral polyhedron. This requires \texttt{PyNormaliz} package. See~\cite{NormalizSage} for more details.
\item \texttt{Q.faceLattice} is a finite lattice containing all faces of the affine semigroup. A face in the lattice is saved as a tuple storing column numbers of generators $\mathbf{A}$. \texttt{Q.faceLattice} is of type of \texttt{Finite Lattice Poset} in \texttt{SageMath}\xspace. For example,
\begin{verbatim}
sage: Q.faceLattice
Finite lattice containing 5 elements with distinguished linear
extension
sage: Q.faceLattice.list()
[(-1,), (), (0,), (1,), (0, 1)]
\end{verbatim}
\item \texttt{Q.indexToFace} is a \texttt{dictionary} type whose key is a face as a tuple, and whose item is a face as an element of \texttt{Q.poly}. For example,
\begin{verbatim}
sage: Q.indexToFace
{(-1,): A -1-dimensional face of a Polyhedron in ZZ^2,
(): A 0-dimensional face of a Polyhedron in ZZ^2
defined as the convex hull of 1 vertex,
(0,): A 1-dimensional face of a Polyhedron in ZZ^2
defined as the convex hull of 1 vertex and 1 ray,
(1,): A 1-dimensional face of a Polyhedron in ZZ^2
defined as the convex hull of 1 vertex and 1 ray,
(0,1): A 2-dimensional face of a Polyhedron in ZZ^2
defined as the convex hull of 1 vertex and 2 rays}
\end{verbatim}
\item \texttt{Q.integralSupportVectors} is a \texttt{dictionary} type whose key is a face $\matF$ as a tuple and whose item is a set of the integral support functions of facets containing $\matF$ as a vector form. An \emph{integral support function} $\phi_{\matH}$ of a facet $\matH$ is a linear function $\phi_{\matH}:\R^{d} \to \R$ such that $\phi_{\matH}(\Z^{d}) =\Z$, $\phi_{\matH}(\veca) \geq 0$ for all column vectors $\veca$ of generators $A$, and $\phi_{\matH}(\veca) =0$ if and only if $\veca \in \matH$. By linearity, $\phi_{\matH}(\veca)=\mathbf{b} \cdot \veca$ for some rational vector $\mathbf{b}$. We call $\mathbf{b}$ as an \emph{integral support vector}. Each item of \texttt{Q.integralSupportVectors} returns a matrix whose rows are integral support vectors of facets containing the key.
For example,
\begin{verbatim}
sage: Q.integralSupportVectors
{(): array([[ 0, 1],
[ 1, -1]]),
(0,): array([[0, 1]]),
(1,): array([[ 1, -1]]),
(0, 1): array([], dtype=float64)}
\end{verbatim}
See~\cite[Definition 2.1]{STDPAIR} for the precise definition of a (primitive) integral support function.
\item \texttt{Q.isEmpty()} returns a boolean value indicating whether $Q$ is a trivial affine semigroup or not. A \emph{trivial affine semigroup} is the empty set regarded as an affine semigroup.
\item \texttt{Q.isPointed()} returns a boolean value indicating whether $Q$ is a pointed affine semigroup or not.
\item \texttt{Q.isElement(vector $\mathbf{b}$)} returns nonnegative integral inhomogeneous solutions (minimal integer solutions) of $\mathbf{A} \mathbf{x} = \mathbf{b}$ using \texttt{zsolve} in~\cite{4ti2}. If $\mathbf{b}$ is not an element of an affine semigroup $Q$, then it returns an empty matrix.
\item \texttt{Q.IntersectionOfPairs(vector $\veca$, tuple $F$, vector $\mathbf{b}$, tuple $G$)} gives nonnegative integral inhomogeneous solutions (minimal integer solutions) of $$\left[\begin{smallmatrix} \matF & -\mathbf{G} \end{smallmatrix}\right] \left[\begin{smallmatrix} \mathbf{u} \\ \mathbf{v} \end{smallmatrix}\right] = \mathbf{b}-\veca$$ using \texttt{zsolve} in~\cite{4ti2}, where $\matF$ and $\mathbf{G}$ are submatrices of $A$ corresponding to its tuple forms respectively. This equation has solutions if and only if two pairs $(\veca, \matF)$ and $(\mathbf{b}, \mathbf{G})$ have a common element. This returns an empty matrix if it has no solution.
\item \texttt{Q.Face(tuple index)} returns a face as a submatrix of a generator $\mathbf{A}$ corresponding to a given tuple \texttt{index}. For example,
\begin{verbatim}
sage: Q.Face((1,))
array([[2],
[2]])
\end{verbatim}
\item \texttt{Q.IndFace(matrix face)} returns a face as a tuple of indices of column vectors of a generator $\mathbf{A}$ corresponding to a given submatrix \texttt{face} of $\mathbf{A}$. For example,
\begin{verbatim}
sage: M = matrix(ZZ,[[2],[2]])
sage: Q.IndFace(M)
(1,)
\end{verbatim}
\item \texttt{Q.primeIdeal(tuple face)} returns a prime ideal corresponding to the face represented by \texttt{face} as an object of \texttt{monomialIdeal}.
\begin{verbatim}
sage: Q.primeIdeal((1,))
An ideal whose generating set is
[[1]
[0]]
\end{verbatim}
\item \texttt{Q.save(string path)} saves the given \texttt{affineMonoid} object as a text file. This can be loaded again using \texttt{load\_stdPairs(string path)}, which will be explained in \cref{subsec:global_methods}. If the save is successful, then it returns 1.
\item \texttt{Q.hashstring} is a string unique for the (mathematically) same affine monoid.
\end{itemize}
Moreover, one can directly compare affine semigroups using the equality operator \texttt{==} in \texttt{SageMath}\xspace.
\subsubsection{Class \texttt{monomialIdeal}}
\label{subsec:class_ideal}
This class is constructed by an affine semigroup $Q$ and generators of an ideal as a matrix form, say $\mathbf{M}$, which is a 2-dimensional \texttt{Numpy} array or an integer matrix of \texttt{SageMath}\xspace. For example,
\begin{verbatim}
sage: M = matrix(ZZ,[[4,6],[4,6]])
sage: I = monomialIdeal(Q,M)
sage: I
An ideal whose generating set is
[[4]
[4]]
\end{verbatim}
As shown in the example above, this class stores only minimal generators of the ideal. The member variables and functions are explained below.
\begin{itemize}[leftmargin=*]
\item \texttt{I.gens} shows a (minimal) generators of $I$ as a \texttt{Numpy} array form.
\item \texttt{I.ambientMonoid} stores the ambient affine semigroup of $I$.
\item \texttt{I.isPrincipal()} returns a boolean value indicating whether $I$ is principal or not. Likewise, \texttt{I.isEmpty()}, \texttt{I.isIrreducible()}, \texttt{I.isPrimary()}, \texttt{I.isPrime()}, and \texttt{I.isRadical()} return a boolean value indicating whether $I$ has the properties implied by their name or not.
\item \texttt{I.isElement(vector $\mathbf{b}$)} returns nonnegative integral inhomogeneous solutions (minimal integer solutions) of $\mathbf{A} \mathbf{x} = \mathbf{b}-\veca$ for each generator $\veca$ of $I$ using \texttt{zsolve} in~\cite{4ti2}. If $\mathbf{b}$ is an element of ideal, then it returns a list $[\mathbf{x}, \veca]$ for some generator $\veca$ such that $\veca + \mathbf{A} \mathbf{x}^{T}=\mathbf{b}$. Otherwise, it returns an empty matrix.
\item \texttt{I.isStdMonomial(vector $\mathbf{b}$)} returns a boolean value indicating whether the given vector $\mathbf{b}$ is a standard monomial or not.
\item \texttt{radicalIdeal(I)} returns the radical of $I$ as an \texttt{monomialIdeal} object.
\item \texttt{I.standardCover()} returns the \emph{standard cover} of $I$. The definition of standard cover and algorithms will be given in \cref{subsec:principal_ideal}.
\item \texttt{I.overlapClasses()} returns the overlap classes of $I$. An \emph{overlap class} of an ideal $I$ is a set of standard pairs such that their representing submonoids intersect nontrivially. Moreover, \texttt{I.maximalOverlapClasses()} returns all maximal overlap classes of $I$. See~\cite[Section 3]{STDPAIR} for the detail.
\item \texttt{I.associatedPrimes()} returns all associated prime ideals of $I$ as a \texttt{dictionary} type in Python. In other words, the function returns a dictionary whose keys are faces of the affine semigroup as \text{tuple} form and whose value is a list containing associated prime ideals corresponding to the face in its key.
\item \texttt{I.multiplicity(ideal P or face $\matF$)} returns a multiplicity of an associated prime $P$ over the given ideal $I$. Also, the method takes the face $\matF$ (as a tuple) corresponding to a prime ideal $P$ as input instead.
\item \texttt{I.irreducibleDecomposition()} returns the irredundant irreducible primary decomposition of $I$ as a list. Since it takes a lot of time, this library also provides a method which calculates only one irreducible primary component corresponding to a maximal overlap class. This class is \texttt{I.irreducibleComponent(tuple face, list ov\_class)}, where \texttt{ov\_class} is a maximal overlap class of $I$ and \texttt{face} is a face corresponding to the overlap class \texttt{ov\_class}.
\item \texttt{I.intersect(J)} returns an intersection of two ideals $I$ and $J$ as a \texttt{monomialIdeal} object. Likewise, addition \texttt{+}, multiplication \texttt{$\ast$}, and comparison \texttt{==} are defined between two objects. The following example shows an addition of two monomial ideals in \texttt{SageMath}\xspace.
\begin{verbatim}
sage: I = monomialIdeal(Q,matrix(ZZ,[[4,6],[4,6]]))
sage: J = monomialIdeal(Q,matrix(ZZ,[[5],[0]]))
sage: I.intersect(J)
An ideal whose generating set is
[[9]
[4]]
sage: I+J
An ideal whose generating set is
[[5 4]
[0 4]]
\end{verbatim}
\item \texttt{I.save(string path)} saves the given \texttt{monomialIdeal} object $I$ as a text file. Especially, it saves not only the generators of $I$, but also its standard cover, overlap classes, associated primes, and irreducible primary decompositions if they were calculated. This can be loaded again using \texttt{load\_stdPairs(string path)}, which will be explained in \cref{subsec:global_methods}. If the save is successful, then it returns 1.
\item \texttt{I.hashstring} is a string unique for the (mathematically) same ideal.
\end{itemize}
\subsubsection{Class \texttt{properPair}}
\label{subsec:class_pair}
A proper pair $(\veca, \matF)$ of an ideal $I$ can be declared in \texttt{SageMath}\xspace by specifying an ideal $I$, a standard monomial $\veca$ as a matrix form (or \texttt{Numpy} 2D array), and a face $\matF$ as a tuple. If $(\veca, \matF)$ is not proper, then \texttt{SageMath}\xspace calls a \texttt{ValueError}. The following example shows two ways of defining a proper pair.
\begin{verbatim}
sage: I = monomialIdeal(Q,matrix(ZZ,[[4,6],[4,6]]))
sage: PP = properPair(np.array([2,0])[np.newaxis].T,(0,),I)
sage: PP
([[2], [0]]^T,[[1], [0]])
sage: QQ = properPair(np.array([2,0])[np.newaxis].T,(0,),I, True)
sage: QQ
([[2], [0]]^T,[[1], [0]])
\end{verbatim}
The second line tests whether the pair is a proper pair of the given ideal $I$ before generating \texttt{PP}. However, the fourth line generates \texttt{QQ} without such a test. Set the last parameter to \texttt{True} only if the given pair is known to be proper a priori. In any case, each of \texttt{PP} and \texttt{QQ} denotes the proper pair whose initial monomial is $\left[\begin{smallmatrix}2 \\ 0\end{smallmatrix}\right]$ and whose face is $\left[\begin{smallmatrix}1 \\ 0\end{smallmatrix}\right]$.
The member variables and functions are explained below. We assume that \texttt{PP} denotes a proper pair $(\veca,\matF)$.
\begin{itemize}[leftmargin=*]
\item \texttt{PP.monomial}, \texttt{PP.face}, and \texttt{PP.ambientIdeal} return the initial monomial $\veca$ (as \texttt{Numpy} 2D array), the face $\matF$ (as a \texttt{tuple}), and its ambient ideal (as an object of \texttt{affineMonoid}) respectively.
\item \texttt{PP.isMaximal()} returns a boolean value indicating whether the given pair is maximal with respect to the divisibility of proper pairs of the ambient ideal.
\item \texttt{PP.isElement(vector $\mathbf{b}$)} returns nonnegative integral inhomogeneous solutions (minimal integer solutions) of $\veca +\matF \mathbf{x} = \mathbf{b}$ using \texttt{zsolve} in~\cite{4ti2}. If $\mathbf{b}$ is not an element of the submonoid $\veca + \N\matF$, then it returns an empty matrix.
\item \texttt{divides(pair PP, pair QQ)} returns a matrix whose row $\mathbf{u}$ is a minimal solution of $\veca+\mathbf{A}\mathbf{u}+\N\matF = \mathbf{b}+\N\mathbf{G}$ if $PP = (\veca, \matF)$ and $QQ = (\mathbf{b}, \mathbf{G})$. The returned value is a nonempty matrix if and only if a pair $PP$ divides a pair $QQ$. For example,
\begin{verbatim}
sage: I = monomialIdeal(Q,matrix(ZZ,[[4,6],[4,6]]))
sage: PP = properPair(matrix(ZZ,[[2],[0]]),(0,),I)
sage: PP
([[2], [0]]^T,[[1], [0]])
sage: QQ = properPair(matrix(ZZ,[[2],[0]]),(0,),I,True)
sage: QQ
([[2], [0]]^T,[[1], [0]])
sage: divides(PP,QQ)
[0 0 0]
\end{verbatim}
since $PP=QQ$.
\item Like \texttt{affineMonoid} or \texttt{monomialIdeal}, one can directly compare proper pairs using the equality operator \texttt{==} in \texttt{SageMath}\xspace.
\item \texttt{PP.hashstring} is a string unique for the (mathematically) same proper pair.
\end{itemize}
\subsection{Global methods}
\label{subsec:global_methods}
Global methods are introduced below.
\begin{itemize}[leftmargin=*]
\item \texttt{unique\_monoids(list L)}, \texttt{unique\_ideals(list L)}, \texttt{unique\_pairs(list L)} and \texttt{unique\_np\_arrays(list L)} receive a list of variables whose type is \texttt{affineMonoid}, \texttt{monomialIdeal}, \texttt{properPair} and \texttt{numpy} 2D array respectively, and return a list of (mathematically) distinct objects.
\item \texttt{save\_cover(dict cover, affineMonoid Q, string path)} saves a \emph{cover}, a \texttt{dict} type object whose keys are faces (as \texttt{tuple} objects) of $Q$ and whose values are a list of pairs (as \texttt{properPair} objects) into a file located in \texttt{path}. Note that all pairs are saved as proper pairs under the empty ideal. If the save is successful, then it returns 1.
\item \texttt{load\_stdPairs(string path)} loads an \texttt{affineMonoid} object, an \texttt{monomialIdeal} object, or a cover stored in a file located in \texttt{path}. It is useful for users who want to avoid repeating calculation which was previously done. For example,
\begin{verbatim}
sage: I = load_stdPairs("/Users/byeongsuyu/ex_ideal.txt")
sage: I.ambientMonoid
An affine semigroup whose generating set is
[[1 1 2 3]
[1 2 0 0]]
sage: I
An ideal whose generating set is
[[3 5 6]
[2 1 1]]
sage: I.irreducibleDecomposition()
[An ideal whose generating set is
[[3 4 2 3 5]
[2 0 4 4 0]], An ideal whose generating set is
[[2 3]
[0 0]], An ideal whose generating set is
[[1 1]
[1 2]]]
sage:
\end{verbatim}
shows that the library loads pre-calculated irreducible decomposition of the given ideal. This save file can be obtained via \url{https://github.com/byeongsuyu/StdPairs}.
\end{itemize}
\section{Implementation of an algorithm finding standard pairs}
\label{section:algorithms}
\subsection{Case 1: Principal Ideal}
\label{subsec:principal_ideal}
A \emph{cover} of standard monomials of an ideal $I$ is a set of
proper pairs of $I$ such that the union of all subsemigroup
$\veca+\N\matF$ corresponding to an element $(\veca,\matF)$ of the
cover is equal to the set of all standard monomials. The
\emph{standard cover} of an ideal $I$ is a cover of $I$ whose elements
are standard pairs. The standard cover of a monomial ideal $I$ is
unique by the maximality of standard pairs among all proper pairs of
$I$. A key idea in~\cite[Section 4]{STDPAIR} is to construct covers
containing all standard pairs. Once a cover is obtained, we can then
produce the standard cover.
The following result helps to compute the standard cover in the
special case of a principal ideal.
\begin{theorem}[{{\cite[Theorem 4.1]{STDPAIR}}}]
\label{thm:pair_difference}
Let $\mathbf{b}, \mathbf{b}' \in \N \mathbf{A}$ and let $\mathbf{G}, \mathbf{G}'$ be faces of $A$ such that
$\mathbf{G}\cap \mathbf{G}' = \mathbf{G}$. There exists an algorithm to compute a finite collection $C$ of pairs over faces of $\mathbf{G}$ such that
\[
(\mathbf{b}+\N\mathbf{G}) \smallsetminus (\mathbf{b}'+\N\mathbf{G}') = \cup_{(\veca,\matF) \in C} (\veca+\N\matF).
\]
\end{theorem}
The \emph{pair difference} of the pairs $(\mathbf{b}, \mathbf{G})$ and $(\mathbf{b}', \mathbf{G}')$ is a finite collection of pairs over faces of $\mathbf{G}$ given by \cref{thm:pair_difference}.
\begin{corollary}
\label{cor:principal_ideal_case}
Given a principal ideal $I = \langle \mathbf{b} \rangle$, the pair difference of pairs $(0, \mathbf{A})$ and $(\mathbf{b}, \mathbf{A})$ is the standard cover of $I$.
\end{corollary}
\begin{proof}
\cref{thm:pair_difference} implies that the pair difference is a cover of $I$. To see it is the standard cover, suppose that the ambient affine semigroup is generated by $\mathbf{A} = \left[\begin{smallmatrix} \veca_{1}& \cdots &\veca_{n}\end{smallmatrix}\right].$ Let $(\vecc, F)$ be a proper pair in the pair difference. Without loss of generality, we assume that $F = \left[\begin{smallmatrix} \veca_{1}& \cdots &\veca_{m}\end{smallmatrix}\right]$ for some $m<n$ by renumbering indices. By the proof of \cref{thm:pair_difference} in~\cite{STDPAIR}, $\vecc = A \cdot \mathbf{u}$ where $x^{\mathbf{u}} \in \field[\N^{n}]$ is a standard monomial such that $(x^{\mathbf{u}}, \{ x_{1}, \cdots, x_{m}\})$ is a standard pair of some monomial ideal $J$ in $\field[\N^{n}]$.
Suppose that there exists $(\mathbf{d}, \mathbf{G})$ such that $F \subseteq \mathbf{G}$ and $\mathbf{d} + \mathbf{g} = \vecc$ for some $\mathbf{g} \in G$. Since $\mathbf{d} \in \N \mathbf{A} $, $\mathbf{d}= A \mathbf{w}$ for some $\mathbf{w} \in \N^{n}$. Since $A$ is pointed, $\mathbf{w}$ is coordinatewise less than $\mathbf{u}$. Thus, $(x^{\mathbf{w}}, \{ x_{1}, \cdots, x_{m}\} )$ contains $(x^{\mathbf{u}}, \{ x_{1}, \cdots, x_{m}\})$. Lastly, $(x^{\mathbf{w}}, \{ x_{1}, \cdots, x_{m}\})$ is a proper pair of $J$, otherwise, there exists $x^{\mathbf{v}} \in \field[x_{1}, \cdots, x_{m}] \subseteq \field[\N^{n}]$ such that $x^{\mathbf{w}+\mathbf{v}} \in J$. Then, $x^{\mathbf{g}}x^{\mathbf{w}+\mathbf{v}} \in J \implies x^{\mathbf{u}+ \mathbf{v}} \in J \cap (x^{\mathbf{u}}, \{ x_{1}, \cdots, x_{m}\}) = \emptyset$ leads to a contradiction.
Thus, by maximality of the standard pair, $\mathbf{w} = \mathbf{u}$. This implies $\mathbf{d} = \vecc$. Moreover, $G=F$, otherwise there exists $j \in \{ 1,2,\cdots, n\} \smallsetminus \{ 1,\cdots, m\}$ such that $x^{\mathbf{u}}x_{j}^{l} \not\in J$ for any $l$, which implies that $(x^{\mathbf{u}}, \{ x_{1}, \cdots, x_{m},x_{j}\})$ is a proper pair of $J$ strictly containing a standard pair $(x^{\mathbf{u}}, \{ x_{1}, \cdots, x_{m}\})$ of $J$, a contradiction.
\end{proof}
\cref{thm:pair_difference} is implemented as the method \texttt{pair\_difference(($\mathbf{b},\matF$), ($\mathbf{b}',\matF'$))} in the library \texttt{stdPairs.spyx}\xspace. Both input arguments should be of type \texttt{properPair}. It returns the pair difference of the pairs $(\mathbf{b},\matF)$ and $(\mathbf{b}',\matF')$ as a \texttt{dictionary} type, called a \texttt{Cover}, which classifies pairs by their faces. For example, the code below shows the pair difference of the pairs $(0, \mathbf{A})$ and $((0,2),\mathbf{A})$, which are
$$ \left( 0, \begin{bmatrix} 2 \\ 0 \end{bmatrix}\right), \left( \begin{bmatrix} 0 \\ 1\end{bmatrix}, \begin{bmatrix} 2 \\ 0 \end{bmatrix}\right), \left( \begin{bmatrix} 1 \\ 2\end{bmatrix}, \begin{bmatrix} 2 \\ 0 \end{bmatrix}\right) \text{, and } \left( \begin{bmatrix} 1 \\ 1\end{bmatrix}, \begin{bmatrix} 2 \\ 0 \end{bmatrix}\right)$$
\begin{verbatim}
sage: B = affineMonoid(matrix(ZZ, [[2,0,1],[0,1,1]]))
sage: I = monomialIdeal(B, matrix(ZZ,0))
sage: C= properPair(np.array([[0,0]]).T, (0,1,2), I )
sage: D= properPair(np.array([[0,2]]).T, (0,1,2), I )
sage: print(pair_difference(C,D))
{(0,): [([[1], [1]]^T,[[2], [0]]), ([[1], [2]]^T,[[2], [0]]),
([[0], [1]]^T,[[2], [0]]), ([[0], [0]]^T,[[2], [0]])]}
\end{verbatim}
By \cref{cor:principal_ideal_case}, it is a set of all standard pairs of an ideal $I = \langle (0,2)\rangle$ in an affine semigroup $\N \left[\begin{smallmatrix} 2& 0 & 1 \\ 0 & 1 & 1\end{smallmatrix}\right]$.
\texttt{pair\_difference(($\mathbf{b},\matF$), ($\mathbf{b}',\matF'$))} internally uses the \texttt{standardPairs} function of \texttt{Macaulay2}\xspace, implemented by~\cite{MR1949549}, to find standard pairs over a polynomial ring. Briefly, the method \texttt{pair\_difference(($\mathbf{b},\matF$), ($\mathbf{b}',\matF'$))} calculates the minimal solutions of the integer linear system
$$ \begin{bmatrix} F & -F'\end{bmatrix} \begin{bmatrix} \mathbf{u} \\ \mathbf{v}\end{bmatrix} = \mathbf{b}'-\mathbf{b}$$
using \texttt{zsolve} in \texttt{4ti2}. The solutions give rise to an ideal $J$ of a polynomial ring, as in the proof of \cref{thm:pair_difference}, which is constructed in \texttt{Macaulay2}\xspace. \texttt{standardPairs} then derives the standard pairs of $J$. Lastly, the method \texttt{pair\_difference} constructs proper pairs based on the standard pairs of $J$, classifies these proper pairs by their faces, and returns the pair difference.
\subsection{Case 2: General ideal}
\label{subsec:general_ideal}
\cite[Proposition 4.4]{STDPAIR} gives an algorithm to find the standard cover of non-principal monomial ideals.
\begin{proposition}[{{\cite[Proposition 4.4]{STDPAIR}}}]
\label{prop:cover_to_std_pairs}
Let $I$ be a monomial ideal in $\field[\N \mathbf{A}]$. There is an algorithm whose input is a cover of the standard monomials of $I$, and whose output is the standard cover of $I$.
\end{proposition}
According to the proof of \cite[Proposition 4.4]{STDPAIR}, this is achieved by repeating the procedures below.
\begin{enumerate}[leftmargin=*]
\label{enum:prop4.4}
\item Input: $C_{0}$, an initial cover of $I$.
\item \label{enum:zero_to_one} For each $(\veca, F) \in C_{0}$, find minimal solutions of $(\veca+\R F) \cap \N \mathbf{A}$ using the primitive integral support functions. (See \cite[Lemma 4.2]{STDPAIR} for the detail.)
\begin{itemize}[leftmargin=*]
\item If $\mathbf{b}_{1},\mathbf{b}_{2},\cdots, \mathbf{b}_{m}$ are minimal solutions of $(\veca+\R F) \cap \N \mathbf{A}$, construct pairs such as $(\mathbf{b}_{1},F), (\mathbf{b}_{2},F), \cdots, (\mathbf{b}_{m},F)$ and store them in the variable $C_{1}$.
\end{itemize}
\item\label{enum:one_to_two} For each pair $(\mathbf{b}, F) \in C_{1}$, construct $(\mathbf{b}, G)$ for any face $G$ which is not strictly contained in $F$. If $(\mathbf{b}, G)$ is a proper pair of $I$, save $(\mathbf{b}, G)$ on the variable $C_{2}$.
\item If $C_{0}$ is equal to $C_{2}$, done. Otherwise, set $C_{0}:=C_{2}$ and repeat the above process.
\end{enumerate}
The method \texttt{czero\_to\_cone($C_0$, $I$)} in \texttt{stdPairs.spyx}\xspace implements \cref{enum:zero_to_one} and returns $C_{1}$. It calls \texttt{minimalHoles(vector $\veca$, face F, affine semigroup A)} internally, which is the implementation of Lemma 4.2. The method \texttt{cone\_to\_ctwo($C_1$, $I$)} implements \cref{enum:one_to_two}. Since the constructor of the class \texttt{properPair} checks whether a pair is proper or not, the method \texttt{cone\_to\_ctwo($C_1$, $I$)} tries to construct each candidate proper pair as a \texttt{SageMath}\xspace object and records it if the construction succeeds. Lastly, \texttt{cover\_to\_stdPairs($C,I,$ loopNum=1000)} implements the whole procedure \cref{enum:prop4.4}. Though \cref{prop:cover_to_std_pairs} shows that the given procedure terminates in finitely many steps, we set \texttt{loopNum} for safety; \texttt{loopNum} determines how many iterations the loop may use for finding standard pairs, and its default value is 1000.
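As a purely schematic illustration of how these methods can be chained, a session might look as follows; the outputs are omitted, and since the input cover comes from the principal-ideal example of the previous subsection (for which the pair difference is already the standard cover), the calls mainly demonstrate the interface.
\begin{verbatim}
sage: load("stdPairs.spyx")
sage: B = affineMonoid(matrix(ZZ, [[2,0,1],[0,1,1]]))
sage: J = monomialIdeal(B, matrix(ZZ,0))               # empty ideal
sage: C = properPair(matrix(ZZ,[[0],[0]]), (0,1,2), J)
sage: D = properPair(matrix(ZZ,[[0],[2]]), (0,1,2), J)
sage: C0 = pair_difference(C,D)    # cover of std monomials of <(0,2)>
sage: I = monomialIdeal(B, matrix(ZZ,[[0],[2]]))
sage: C1 = czero_to_cone(C0, I)    # step (2) of the procedure above
sage: C2 = cone_to_ctwo(C1, I)     # step (3) of the procedure above
sage: C  = cover_to_stdPairs(C0, I)  # iterates (2)-(3) until stable
\end{verbatim}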
Now we are ready to find the standard cover of a general ideal $I = \langle \mathbf{b}_{1},\cdots, \mathbf{b}_{n}\rangle$ with minimal generators $\mathbf{b}_{1},\cdots, \mathbf{b}_{n}$; its standard pairs can be found as in \cite[Theorem 4.5]{STDPAIR}, described below.
\begin{enumerate}[leftmargin=*]
\item Find the standard cover $C$ of $\langle \mathbf{b}_{1}\rangle$ using pair difference.
\item For $i=2$ to $n$:
\begin{enumerate}[leftmargin=*]
\item For each pair $(\mathbf{b}, F)$ in $C$, replace it with elements of the pair difference of pairs $(\mathbf{b},F)$ and $(\mathbf{b}_{i}, A)$. After this process $C$ is a cover of an ideal $\langle \mathbf{b}_{1},\mathbf{b}_{2},\cdots, \mathbf{b}_{i}\rangle$.
\item Using an algorithm of \cref{prop:cover_to_std_pairs} find the standard cover $C'$ of $\langle \mathbf{b}_{1},\mathbf{b}_{2},\cdots, \mathbf{b}_{i}\rangle$.
\item Replace $C$ with $C'$.
\end{enumerate}
\item Return $C$.
\end{enumerate}
The returned value $C$ is now the standard cover of $I$.
\texttt{stdPairs.spyx}\xspace implements \cite[Theorem 4.5]{STDPAIR} as a method \texttt{standardPairs($I$)}. This method takes as input an ideal $I$ of type \texttt{monomialIdeal}. It returns a cover of type \texttt{dictionary}, classifying the standard pairs by their faces. For example, the code below shows that the standard cover of the ideal generated by the columns of $\left[\begin{smallmatrix}2 & 2 & 2\\ 0 & 1 & 2 \\ 2 & 2 &2 \end{smallmatrix}\right]$ in the affine semigroup $\N \mathbf{A} =\N \left[\begin{smallmatrix} 0&1&1&0 \\ 0&0&1&1 \\ 1&1&1&1\end{smallmatrix}\right]$ is
$$\left\{ \left( 0, \left[\begin{smallmatrix} 0 &0 \\ 0 & 1 \\ 1 & 1 \end{smallmatrix}\right]\right), \left( \left[\begin{smallmatrix} 1 \\ 1 \\ 1\end{smallmatrix}\right], \left[\begin{smallmatrix} 0 &0 \\ 0 & 1 \\ 1 & 1 \end{smallmatrix}\right]\right), \text{ and }\left( \left[\begin{smallmatrix} 1 \\ 0 \\ 1\end{smallmatrix}\right], \left[\begin{smallmatrix} 0 &0 \\ 0 & 1 \\ 1 & 1 \end{smallmatrix}\right]\right) \right\}.$$
\begin{verbatim}
sage: Q=affineMonoid(matrix(ZZ,[[0,1,1,0],[0,0,1,1],[1,1,1,1]]))
sage: I=monomialIdeal(Q, matrix(ZZ,[[2,2,2],[0,1,2],[2,2,2]]))
sage: I.standardCover()
It takes a few minutes, depending on the system.
Cover for 1 generator was calculated. 2 generators are left.
Cover for 2 generators was calculated. 1 generators are left.
Cover for 3 generators was calculated. 0 generators are left.
{(0, 3): [([[0], [0], [0]]^T,[[0, 0], [0, 1], [1, 1]]),
([[1], [1], [1]]^T,[[0, 0], [0, 1], [1, 1]]),
([[1], [0], [1]]^T,[[0, 0], [0, 1], [1, 1]])]}
\end{verbatim}
\section{Compatibility with \texttt{Normaliz}\xspace package in \texttt{SageMath}\xspace and \texttt{Macaulay2}\xspace}
\label{section:compatibility}
\texttt{Normaliz}\xspace is a package in \texttt{SageMath}\xspace and \texttt{Macaulay2}\xspace for finding Hilbert bases of rational cones and their normal affine monoids \cite{MR2659215}. \texttt{stdPairs.spyx}\xspace has methods translating the classes of \cref{section:classes} into objects of the \texttt{Normaliz}\xspace package. If an affine semigroup $\N \mathbf{A}$ is \emph{normal}, i.e., $\N \mathbf{A} = \Z^{d} \cap \R_{\geq0}\mathbf{A}$, then this translation works well. However, if it is not normal, then the translation yields the saturation of the given affine semigroup described in \cref{section:classes}.
For \texttt{SageMath}\xspace, one can obtain a polyhedron over $\Z$ backed by the \texttt{Normaliz}\xspace package in \texttt{SageMath}\xspace by passing the argument \texttt{True} to the constructor of \texttt{affineMonoid}. For example, the code below gives an \texttt{affineMonoid} class variable $Q$ whose member variable $Q.poly$ is a polyhedron over $\Z$ with \texttt{Normaliz}\xspace.
\begin{verbatim}
sage: load("stdPairs.spyx")
Compiling ./stdPairs.spyx...
sage: Q=affineMonoid(matrix(ZZ, [[0,1,1,0],[0,0,1,1],[1,1,1,1]]),
True)
\end{verbatim}
For \texttt{Macaulay2}\xspace, \texttt{ToMacaulay2( monomialIdeal I)} returns a dictionary storing variables of \texttt{Macaulay2}\xspace computations. This dictionary contains the affine semigroup ring, a list of generators of the ideal, and a list of standard pairs in \texttt{Macaulay2}\xspace. For example,
\begin{verbatim}
sage: load("stdPairs.spyx")
Compiling ./stdPairs.spyx...
sage: Q=affineMonoid(matrix(ZZ,[[0,1,1,0],[0,0,1,1],[1,1,1,1]]))
sage: I=monomialIdeal(Q, matrix(ZZ,[[2,2,2],[0,1,2],[2,2,2]]))
sage: S = ToMacaulay2(I,I.standardCover())
sage: print(S['AffineSemigroupRing'])
MonomialSubalgebra{cache => CacheTable{} }
generators => {c, a*c, a*b*c, b*c}
ring => R
sage: print(S['MonomialIdeal'])
2 2 2 2 2 2 2
{a c , a b*c , a b c }
sage: print(S['StandardCover'])
{{a*c, {c, b*c}}, {a*b*c, {c, b*c}}, {1, {c, b*c}}}
\end{verbatim}
In \texttt{Macaulay2}\xspace, the type \texttt{MonomialSubalgebra} in the \texttt{Normaliz}\xspace package corresponds to an affine semigroup ring. Since \texttt{Normaliz}\xspace has no attribute for a monomial ideal of the type \texttt{MonomialSubalgebra}, the ideal is stored as a list of its generators. The standard cover of $I$ is also sent to \texttt{Macaulay2}\xspace as a nested list, similar to the output of the method \texttt{standardPairs} in \texttt{Macaulay2}\xspace.
Conversely, \texttt{FromMacaulay( Macaulay2 $S$)} translates a \texttt{MonomialSubalgebra} object $S$ of \texttt{Macaulay2}\xspace into an \texttt{affineMonoid} object in \texttt{stdPairs.spyx}\xspace. For example,
\begin{verbatim}
sage: R = macaulay2('ZZ[x,y,z]')
sage: macaulay2("loadPackage Normaliz")
Normaliz
sage: S=macaulay2('createMonomialSubalgebra {x^2*y, x*z, z^3}')
sage: A=FromMacaulay(S)
sage: A
An affine semigroup whose generating set is
[[2 1 0]
[1 0 0]
[0 1 3]]
\end{verbatim}
\bibliographystyle{plain}
\makeatletter
\renewcommand\section{%
\@startsection{section}{1}%
{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\large\bfseries}%
}
\renewcommand\subsection{%
\@startsection{subsection}{2}%
{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1sp}%
{\normalsize\bfseries}%
}
\renewcommand\subsubsection{%
\@startsection{subsubsection}{3}%
{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1sp}%
{\normalfont\normalsize}%
}
\makeatother
\marginparwidth -1pt \oddsidemargin 0pt \evensidemargin 0pt
\topmargin -10pt \textheight 23.5 truecm \textwidth 16.75 truecm
\title{{\LARGE\bf Arboricity games: the core and the nucleolus}\thanks{The Version of Record of this article is published in \href{https://www.springer.com/journal/10107}{Mathematical Programming}, and is available online at \href{https://doi.org/10.1007/s10107-021-01752-w}{https://doi.org/10.1007/s10107-021-01752-w}.}}
\author{Han Xiao\thanks{Corresponding author.}~~and Qizhi Fang}
\affil{School of Mathematical Sciences\\Ocean University of China\\Qingdao, China\\\{hxiao, qfang\}@ouc.edu.cn}
\date{}
\begin{document}
\clearpage\maketitle
\thispagestyle{empty}
\openup 1.2\jot
\begin{abstract}
The arboricity of a graph is the minimum number of forests required to cover all its edges.
In this paper, we examine arboricity from a game-theoretic perspective and investigate cost-sharing in the minimum forest cover problem.
We introduce the arboricity game as a cooperative cost game defined on a graph.
The players are edges, and the cost of each coalition is the arboricity of the subgraph induced by the coalition.
We study properties of the core and propose an efficient algorithm for computing the nucleolus when the core is not empty.
In order to compute the nucleolus in the core, we introduce the prime partition, which is built on the densest subgraph lattice.
The prime partition decomposes the edge set of a graph into a partially ordered set defined from minimal densest minors and their invariant precedence relation.
Moreover, edges from the same part of the prime partition always have the same value in a core allocation.
Consequently, when the core is not empty, the prime partition significantly reduces the number of variables and constraints required in the linear programs of Maschler's scheme and allows us to compute the nucleolus in polynomial time.
Besides, the prime partition provides a graph decomposition analogous to the celebrated core decomposition and the density-friendly decomposition, which may be of independent interest.
\hfill
\noindent\textbf{Keywords:} core, nucleolus, arboricity, density, graph decomposition.
\noindent\textbf{Mathematics Subject Classification:} 05C57, 91A12, 91A43, 91A46.
\end{abstract}
\newpage
\section{Introduction}
The arboricity of a graph is the minimum number of forests required to cover all edges of the graph.
Hence arboricity concerns forest cover, a special case of matroid covering.
Besides, arboricity is a measure of graph density.
A graph with large arboricity always contains a dense subgraph.
By employing the nontrivial interplay between forest cover and graph density under the polyhedral framework, we examine arboricity from a game-theoretic perspective and introduce the so-called arboricity game.
Briefly, the arboricity game is a cooperative cost game defined on a graph,
where the players are edges and the cost of each coalition is the arboricity of the subgraph induced by the coalition.
A central question in cooperative game theory is to distribute the total cost to its participants.
Many solution concepts have been proposed for cost-sharing.
One solution concept is the core, which requires that no coalition benefits by breaking away from the grand coalition.
Another solution concept is the nucleolus, which is the unique solution that lexicographically maximizes the vector of non-decreasingly ordered excess.
Following the definition, Kopelowitz \cite{Kope67} and Maschler et al. \cite{MPS79} proposed a standard procedure to compute the nucleolus by solving a sequence of linear programs.
However, the size of these linear programs may be exponentially large due to the number of constraints corresponding to all possible coalitions.
Hence it is in general unclear how to apply this procedure.
The first polynomial algorithm for computing the nucleolus was proposed by Megiddo \cite{Megi78} for cooperative cost games defined on directed trees.
Later on, a number of polynomial algorithms were developed for,
e.g., bankruptcy games \cite{AM85}, matching games \cite{BKP12, CLZ12, KP03, KPT20, SR94}, standard tree games \cite{GMOZ96}, airport profit games \cite{BITZ06}, flow games \cite{DFS09}, voting games \cite{EP09}, spanning connectivity games \cite{ALPS09}, shortest path games \cite{BB19}, and network strength games \cite{BB20}.
On the negative side, NP-hardness results for computing the nucleolus were shown for, e.g., minimum spanning tree games \cite{FKK98}, threshold games \cite{EGGW07}, $b$-matching games \cite{KTZ21}, flow games and linear production games \cite{DFS09, FZCD02}.
The main contribution of this paper is twofold.
One contribution is concerned with the arboricity game, where cost-sharing in the minimum forest cover problem is considered.
We study properties of the core and propose an efficient algorithm for computing the nucleolus when the core is nonempty.
Our results are in the same spirit as \cite{BB20, KPT20}, but justifications are different.
The other contribution goes to the prime partition, which is a graph decomposition analogous to the celebrated core decomposition \cite{Seid83} and the density-friendly decomposition \cite{Tatt19, TG15}.
The prime partition is inspired by the principle partition of matroids \cite{CGHL92} and by the graph decompositions developed in \cite{ALPS09, BB20}.
For arboricity games, the prime partition dramatically reduces the size of linear programs involved in Maschler's scheme and enables us to compute the nucleolus in polynomial time.
The rest of this paper is organized as follows.
Section \ref{sec:Preliminaries} introduces relevant concepts.
Section \ref{sec:PolyhedralCombinatorics} reviews some polyhedral results on arboricity.
Section \ref{sec:Core} studies properties of the core.
Section \ref{sec:PrimePartition} is devoted to the prime partition, a graph decomposition of independent interest.
Section \ref{sec:Nucleolus} develops an efficient algorithm for computing the nucleolus.
Section \ref{sec:Conclusion} concludes this paper.
\section{Preliminaries}
\label{sec:Preliminaries}
A \emph{cooperative game} $\Gamma=(N,\gamma)$ consists of a player set $N$ and a characteristic function $\gamma:2^N\rightarrow \mathbb{R}$ with convention $\gamma (\emptyset)=0$.
The player set $N$ is called the \emph{grand coalition}.
Any subset $S$ of $N$ is called a \emph{coalition}.
Given a vector $\boldsymbol{x}\in \mathbb{R}^N$,
we use $x(S)$ to denote $\sum_{i\in S}x_i$ for any $S\subseteq N$.
A vector $\boldsymbol{x}\in \mathbb{R}^N_{\geq 0}$ is called an \emph{allocation} of $\Gamma$ if $x(N)=\gamma (N)$.
The \emph{excess} of a coalition $S$ at an allocation $\boldsymbol{x}$ is defined as $e(S,\boldsymbol{x})=\gamma (S)-x(S)$.
The \emph{core} of $\Gamma$, denoted by $\mathcal{C}(\Gamma)$, is the set of allocations where all excesses are nonnegative, i.e.,
\begin{equation*}
\mathcal{C}(\Gamma)=\big\{\boldsymbol{x}\in\mathbb{R}^N_{\geq 0} : x(N)=\gamma (N);\, x(S)\leq \gamma (S), \, \forall S\subseteq N \big\}.
\end{equation*}
The \emph{excess vector} $\theta(\boldsymbol{x})$ of an allocation $\boldsymbol{x}$ is the $2^{\lvert N\rvert}-2$ dimensional vector whose components are
the non-trivial excesses $e(S,\boldsymbol{x})$ for $S\in 2^N\backslash \{\emptyset,N\}$ arranged in a non-decreasing order.
The \emph{nucleolus} \cite{Schm69} is the unique allocation $\boldsymbol{x}$ that lexicographically maximizes the excess vector $\theta(\boldsymbol{x})$.
When the core is nonempty, the nucleolus always exists and lies in the core.
Moreover, the nucleolus can always be computed with a standard procedure of Maschler et al. \cite{Kope67,MPS79} by recursively solving a sequence of linear programs.
\begin{alignat}{3}
\max\quad & \epsilon &{}& \label{eq:Nucleolus_LP1_0}\\
\lplabel[lp1]{$(LP_1)$}\mbox{s.t.}\quad
&x(N) = \gamma (N), &\quad & \label{eq:Nucleolus_LP1_1}\\
&x(S)+\epsilon \leq \gamma (S), &\quad &\forall~ S\in 2^N\backslash \{\emptyset,N\}, \label{eq:Nucleolus_LP1_2}\\
&x_i \geq 0, &\quad &\forall~ i\in N. \label{eq:Nucleolus_LP1_3}
\end{alignat}
To compute the nucleolus with Maschler's scheme, first solve linear program $LP_1$ to maximize the minimum excess among all non-trivial coalitions.
For any constant $\epsilon$, let $P_1(\epsilon)$ denote the set of vectors $\boldsymbol{x}\in \mathbb{R}^N$ such that $(\boldsymbol{x},\epsilon)$ satisfies \eqref{eq:Nucleolus_LP1_1}-\eqref{eq:Nucleolus_LP1_3}, i.e., $P_1(\epsilon)$ is the set of allocations whose minimum excess is no less than $\epsilon$.
It follows that $\mathcal{C}(\Gamma)=P_1(0)$.
Let $\epsilon_1$ be the optimal value of $LP_1$.
Then $P_1(\epsilon_1)$ is the set of optimal solutions of $LP_1$, which is also called the \emph{least core} of $\Gamma$.
Thus $\mathcal{C}(\Gamma)\not=\emptyset$ if and only if $\epsilon_1\geq 0$.
For any polyhedron $P\subseteq \mathbb{R}^N$, let $\text{Fix}(P)$ denote the set of coalitions \emph{fixed} by $P$, i.e.,
\begin{equation*}
\text{Fix}(P)=\big\{S\subseteq N : x(S)=y(S), ~\forall~\boldsymbol{x},\boldsymbol{y}\in P \big\}.
\end{equation*}
After solving linear program $LP_r$, let $\epsilon_r$ be the optimal value and $P_r(\epsilon_r)$ be the set of optimal solutions.
Then solve linear program $LP_{r+1}$ to maximize the minimum excess on coalitions that are not fixed by $P_r(\epsilon_r)$.
\begin{alignat}{3}
\max\quad & \epsilon &{}&\\
\lplabel[lp2]{$(LP_{r+1})$}\mbox{s.t.}\quad
&x(S)+\epsilon \leq \gamma (S), &\quad& \forall~ S\not\in \text{Fix}\big(P_{r}(\epsilon_{r})\big), \label{eq:Nucleolus_LPr_1}\\
&\boldsymbol{x}\in P_{r}(\epsilon_{r}) \label{eq:Nucleolus_LPr_2}.
\end{alignat}
Clearly, $\epsilon_{r+1}\geq \epsilon_{r}$ and $P_{r+1}(\epsilon_{r+1})\subseteq P_{r}(\epsilon_{r})$.
Moreover, the dimension of $P_{r+1} (\epsilon_{r+1})$ strictly decreases in each round until it collapses to zero.
Hence it takes up to $\lvert N\rvert$ rounds before $P_{r+1} (\epsilon_{r+1})$ becomes a singleton which is exactly the nucleolus.
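For a small game whose characteristic function is listed explicitly, the first round of Maschler's scheme can be carried out by brute force. The Python sketch below, based on \texttt{scipy.optimize.linprog}, enumerates all non-trivial coalitions and solves $LP_1$, returning an allocation in the least core together with $\epsilon_1$; it is meant only as an illustration of the scheme.
\begin{verbatim}
from itertools import chain, combinations
from scipy.optimize import linprog

def least_core(players, gamma):
    # players: list of players; gamma: dict mapping frozensets of players to costs
    n = len(players)
    c = [0.0] * n + [-1.0]                 # maximize epsilon = minimize -epsilon
    A_eq = [[1.0] * n + [0.0]]             # x(N) = gamma(N)
    b_eq = [gamma[frozenset(players)]]
    A_ub, b_ub = [], []
    nontrivial = chain.from_iterable(combinations(players, k) for k in range(1, n))
    for S in nontrivial:                   # x(S) + epsilon <= gamma(S)
        A_ub.append([1.0 if p in S else 0.0 for p in players] + [1.0])
        b_ub.append(gamma[frozenset(S)])
    bounds = [(0, None)] * n + [(None, None)]   # x >= 0, epsilon free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]            # an allocation in P_1(eps_1), and eps_1
\end{verbatim}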
However, the linear programs involved in Maschler's scheme are usually of exponential size.
Even if linear programs $LP_{1},\ldots,LP_{r}$ have been successfully solved, it may still be intractable to determine in polynomial time all coalitions not fixed by $P_{r}(\epsilon_{r})$.
Hence it is in general unclear how to apply Maschler's scheme.
For arboricity games,
we show that the number of variables and constraints required in the successive linear programs of Maschler's scheme can be dramatically reduced,
and the nucleolus can always be determined efficiently on the second round of Maschler's scheme.
We assume that the readers have a moderate familiarity with graph theory.
But assumptions, notions and notations used in this paper should be clarified before proceeding.
Throughout this paper, we assume that all graphs are loopless but parallel edges are allowed.
We also assume that loops are always removed during edge contraction.
An \emph{image} is a vertex obtained from edge contraction.
A \emph{minor} is a graph obtained from repeated vertex deletion, edge deletion and edge contraction.
Let $G=(V,E)$ be a graph.
We use $c(G)$ to denote the number of components of $G$.
We use $n(G)$ and $m(G)$ to denote the number of vertices and edges in $G$ respectively.
We write $n$ for $n(G)$ and $m$ for $m(G)$ when no ambiguity occurs.
Let $U\subseteq V$ be a set of vertices.
We write $G-U$ for the graph obtained from $G$ by deleting all vertices in $U$.
Let $F\subseteq E$ be a set of edges.
We write $G\slash F$ for the graph obtained from $G$ by contracting all edges in $F$.
Let $H$ be a subgraph of $G$.
We write $G-H$ for $G-V(H)$ and write $G\slash H$ for $G\slash E(H)$.
If $X$ and $Y$ are two sets of vertices,
we use $(X,Y)$ and $m(X,Y)$ to denote the set and the number of crossing edges between $X$ and $Y$ respectively.
If $X$ and $Y$ are two subgraphs, we write $(X,Y)$ for $\big(V(X),V(Y)\big)$.
\section{Polyhedral results on arboricity}
\label{sec:PolyhedralCombinatorics}
This section reviews some polyhedral results on arboricity.
For more details about polyhedral combinatorics, we refer to \cite{Schr86, Schr03}.
Let $G=(V,E)$ be a graph.
A \emph{forest cover} of $G$ is a set of forests that covers all edges of $G$.
The \emph{arboricity} of $G$, denoted by $a(G)$, is the minimum size of forest covers of $G$.
The arboricity measures how dense a graph is.
The \emph{density} of $G$, denoted by $g(G)$, is the value of $\frac{m(G)}{n(G)-c(G)}$.
Hence $g(G)=\frac{m(G)}{n(G)-1}$ if $G$ is connected.
By convention, the density of a single vertex is zero.
Nash-Williams \cite{NW64} showed that the arboricity of a graph is lower bounded by the maximum density of subgraphs.
\begin{theorem}[Nash-Williams \cite{NW64}]
\label{thm:NW}
The edges of a graph $G$ can be covered by $k$ forests if and only if $\max_{H\subseteq G} g(H) \leq k$.
\end{theorem}
The value of $\max_{H\subseteq G} g(H)$ is called the \emph{fractional arboricity} of $G$ and denoted by $a_f(G)$.
Theorem \ref{thm:NW} implies that $a(G)=\lceil a_f(G)\rceil$.
Notice that the fractional arboricity is necessarily achieved at connected subgraphs.
It follows that the fractional arboricity of $G$ can be computed by
\begin{equation}\label{eq:SimpleFractionalArboricity}
\max_{H\subseteq G} \frac{m(H)}{n(H)-1}.
\end{equation}
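As an illustration of \eqref{eq:SimpleFractionalArboricity} only, the following brute-force Python sketch evaluates the fractional arboricity of a small graph by enumerating connected induced subgraphs (the maximum density is attained at one of them); it is exponential in the number of vertices and is not meant as a substitute for the polynomial-time algorithms discussed later.
\begin{verbatim}
from fractions import Fraction
from itertools import combinations
import networkx as nx

def fractional_arboricity_bruteforce(G):
    # Enumerate connected induced subgraphs and return max m(H)/(n(H)-1)
    # as an exact rational number.
    best = Fraction(0)
    for k in range(2, G.number_of_nodes() + 1):
        for S in combinations(G.nodes(), k):
            H = G.subgraph(S)
            if nx.is_connected(H):
                best = max(best, Fraction(H.number_of_edges(), k - 1))
    return best

# Example: K_4 has density 6/3 = 2, so a(K_4) = ceil(a_f(K_4)) = 2.
print(fractional_arboricity_bruteforce(nx.complete_graph(4)))   # prints 2
\end{verbatim}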
Let $\mathcal{F}$ denote the set of forests in $G$.
Clearly, $\mathcal{F}$ makes a graphic matroid with ground set $E$,
and every forest cover of $G$ is essentially an independent set cover of the graphic matroid.
Additionally, the definitions of density and (fractional) arboricity in the forest cover problem agree with the conventional definitions in the matroid covering problem \cite{Edmo65}.
Hence the forest cover problem is a special case of the matroid covering problem.
Notice that the fractional arboricity of a matroid is always equal to the fractional cover number of independent sets \cite{SU97, Schr03}.
It follows that the value of \eqref{eq:SimpleFractionalArboricity} is equal to the optimal value of linear program \eqref{eq:ForestCover0}-\eqref{eq:ForestCover2}
\begin{alignat}{4}
\min\quad & {\displaystyle\sum_{F\in \mathcal{F}} z_F} &{}&\label{eq:ForestCover0}\\
\mbox{s.t.}\quad
&\sum_{F:e\in F} z_F \geq 1, &\qquad &\forall ~ e &\in E, \label{eq:ForestCover1}\\
&z_F \geq 0, &\qquad &\forall ~ F &\in \mathcal{F}. \label{eq:ForestCover2}
\end{alignat}
and the optimal value of its dual \eqref{eq:ForestCoverDual0}-\eqref{eq:ForestCoverDual2}.
\begin{alignat}{4}
\max\quad & {\displaystyle\sum_{e\in E} x_e} &{}&\label{eq:ForestCoverDual0}\\
\mbox{s.t.}\quad
&\sum_{e:e\in F} x_e \leq 1, &\qquad& \forall ~ F &\in \mathcal{F}, \label{eq:ForestCoverDual1}\\
&x_e \geq 0, &\qquad& \forall ~ e &\in E. \label{eq:ForestCoverDual2}
\end{alignat}
Notice that \eqref{eq:SimpleFractionalArboricity} can be reformulated as
\begin{equation}\label{eq:WeightedFractionalArboricity}
\max_{H\subseteq G} \boldsymbol{1} \cdot \frac{\boldsymbol{\lambda}^{H}}{n(H)-1},
\end{equation}
where $\boldsymbol{1}\in \mathbb{Z}^E$ is an all-one vector and $\boldsymbol{\lambda}^H \in \mathbb{Z}^E$ is the incidence vector of $E(H)$.
Moreover, $\frac{\boldsymbol{\lambda}^{H}}{n(H)-1}$ satisfies \eqref{eq:ForestCoverDual1} and \eqref{eq:ForestCoverDual2} for any $H\subseteq G$.
Consequently, optimal solutions of \eqref{eq:ForestCoverDual0}-\eqref{eq:ForestCoverDual2} are among the vectors $\frac{\boldsymbol{\lambda}^H}{n(H)-1}$ in \eqref{eq:WeightedFractionalArboricity},
which leads to the following corollary that will be used in Section \ref{sec:Core}.
In the remainder of this paper, a \emph{densest subgraph} always refers to a \emph{connected} subgraph with the maximum density,
and a \emph{densest minor} always refers to a \emph{connected} minor the density of which is equal to the fractional arboricity of the graph.
\begin{lemma}
\label{thm:DensestSubgraph_IncidenceVector}
The set of optimal solutions of \eqref{eq:ForestCoverDual0}-\eqref{eq:ForestCoverDual2} is the convex hull of the vectors $\frac{\boldsymbol{\lambda}^H}{n(H)-1}$
for every densest subgraph $H$ of $G$.
\end{lemma}
Lemma \ref{thm:DensestSubgraph_IncidenceVector} suggests that the set of optimal solutions of \eqref{eq:ForestCoverDual0}-\eqref{eq:ForestCoverDual2} is a convex polytope where every extreme point corresponds to a densest subgraph.
By polyhedral theory \cite{Schr86}, all faces of a convex polytope form a partially ordered set under inclusion.
It turns out that all densest subgraphs also form a partially ordered set under inclusion,
which suggests that faces of the optimal solution polytope of \eqref{eq:ForestCoverDual0}-\eqref{eq:ForestCoverDual2} may be related to densest subgraphs.
It is this observation that leads to the graph decomposition in Section \ref{sec:PrimePartition}.
\section{The core and its properties}
\label{sec:Core}
Throughout this paper, we always assume that the underlying graph of arboricity games is connected.
Let $\Gamma_G=(N,\gamma)$ denote the \emph{arboricity game} defined on a graph $G=(V,E)$, where $N=E$ and $\gamma (S)=a(G[S])$ for $S\subseteq N$.
We start with an alternative characterization for the core.
\begin{lemma}
\label{thm:Core}
Let $\Gamma_G=(N,\gamma)$ be an arboricity game and $\mathcal{T}$ be the set of spanning trees in $G$.
Then
\begin{equation}
\label{eq:Core}
\mathcal{C}(\Gamma_G)=\big\{\boldsymbol{x}\in \mathbb{R}^E_{\geq 0} : x(E)=\gamma (E);\, x(T)\leq 1,\, \forall \:T\in \mathcal{T} \big\}.
\end{equation}
\end{lemma}
\begin{proof}
Denote by $\mathcal{C}'(\Gamma_G)$ the right-hand side of \eqref{eq:Core}.
We first show that $ \mathcal{C}(\Gamma_G)\subseteq \mathcal{C}'(\Gamma_G)$.
Let $\boldsymbol{x}\in \mathcal{C}(\Gamma_G)$.
For any $T\in \mathcal{T}$, we have $x(T)\leq \gamma(T)=1$.
It follows that $\boldsymbol{x}\in \mathcal{C}'(\Gamma_G)$.
Now we show that $\mathcal{C}'(\Gamma_G)\subseteq \mathcal{C}(\Gamma_G)$.
Let $\boldsymbol{x}\in \mathcal{C}'(\Gamma_G)$ and $S\in 2^N\backslash \{\emptyset\}$.
Assume that $\gamma (S)=k$ and $G[S]$ can be covered by $k$ forests $F_1,\ldots,F_k$.
Let $T_i$ be a spanning tree containing $F_i$.
It follows that
\begin{equation*}
x(S)=\sum_{i=1}^{k} x(F_i)\leq \sum_{i=1}^{k} x(T_i)\leq k=\gamma (S),
\end{equation*}
which implies that $\boldsymbol{x}\in \mathcal{C}(\Gamma_G)$.
\end{proof}
A necessary and sufficient condition for the core nonemptiness follows immediately.
\begin{theorem}
\label{thm:CoreNonempty}
Let $\Gamma_G=(N,\gamma)$ be an arboricity game.
Then $\mathcal{C}(\Gamma_G)\not=\emptyset$ if and only if $a_f(G)=a(G)$.
\end{theorem}
\begin{proof}
Let $\boldsymbol{x}$ be an optimal solution of \eqref{eq:ForestCoverDual0}-\eqref{eq:ForestCoverDual2}.
It follows that $x(E) = a_f(G)\leq a(G)=\gamma (E)$.
Since $\mathcal{T}\subseteq \mathcal{F}$, Lemma \ref{thm:Core} implies that $\boldsymbol{x}\in \mathcal{C}(\Gamma_G)$ if $a_f(G)=a(G)$.
Let $\boldsymbol{x}\in \mathcal{C}(\Gamma_G)$.
Since every forest is a subgraph of a spanning tree, Lemma \ref{thm:Core} implies that $\boldsymbol{x}$ is a feasible solution of \eqref{eq:ForestCoverDual0}-\eqref{eq:ForestCoverDual2}.
It follows that $x(E)\leq a_f(G)\leq a(G)=\gamma (E)$.
Hence $x(E)=\gamma (E)$ implies $a_f(G)=a(G)$.
\end{proof}
Since the arboricity game is a special case of the covering game,
Theorem \ref{thm:CoreNonempty} respects the universal characterization for the core nonemptiness of covering games \cite{DIN99}.
The corollary below also follows from the results for covering games.
\begin{corollary}
\label{thm:CoreCorollary}
Let $\Gamma_G=(N,\gamma)$ be an arboricity game.
Then the nonemptiness of $\mathcal{C}(\Gamma_G)$ can be determined in polynomial time.
Moreover, we can decide in polynomial time if a vector belongs to $\mathcal{C} (\Gamma_G)$, and if not, find a separating hyperplane.
\end{corollary}
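One concrete way to implement the separation step of Corollary \ref{thm:CoreCorollary} is via the spanning-tree characterization of Lemma \ref{thm:Core}: after checking nonnegativity and efficiency, a single maximum-weight spanning tree computation either certifies $x(T)\leq 1$ for every spanning tree $T$ or exhibits a violated inequality. The Python sketch below, which uses \texttt{networkx} and assumes a connected simple graph, is only an illustration of this idea.
\begin{verbatim}
import networkx as nx

def separate_core(G, x, gamma_E, tol=1e-9):
    # x: dict mapping each edge (u, v) of G to its allocation value;
    # gamma_E: the value gamma(E) = a(G).
    if any(val < -tol for val in x.values()):
        return "nonnegativity violated", None
    if abs(sum(x.values()) - gamma_E) > tol:
        return "efficiency violated", None
    for u, v in G.edges():
        G[u][v]["weight"] = x[(u, v)] if (u, v) in x else x[(v, u)]
    T = nx.maximum_spanning_tree(G, weight="weight")
    if sum(d["weight"] for _, _, d in T.edges(data=True)) > 1 + tol:
        return "violated", list(T.edges())  # x(T) <= 1 is a separating inequality
    return "in the core", None
\end{verbatim}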
Theorem \ref{thm:CoreNonempty} implies that, when the core is nonempty, a vector belongs to the core if and only if it is an optimal solution of \eqref{eq:ForestCoverDual0}-\eqref{eq:ForestCoverDual2}.
Lemma \ref{thm:DensestSubgraph_IncidenceVector} suggests that the nonempty core can be characterized by the incidence vector of densest subgraphs.
Thus we have the following corollary.
\begin{corollary}
\label{thm:CoreCoverHull}
Let $\Gamma_G=(N,\gamma)$ be an arboricity game with a nonempty core.
Then $\mathcal{C}(\Gamma_G)$ is the convex hull of the vectors
$\frac{\boldsymbol{\lambda}^H}{n(H)-1}$
for every densest subgraph $H$ of $G$.
\end{corollary}
Corollary \ref{thm:CoreCoverHull} implies that every core allocation is a convex combination of vectors, each of which is associated with a densest subgraph.
For edges not in any densest subgraph, we have the following corollary.
\begin{corollary}
\label{thm:NonPrimeSet}
Let $\Gamma_G=(N,\gamma)$ be an arboricity game with a nonempty core.
For any $\boldsymbol{x}\in \mathcal{C}(\Gamma_G)$,
$x_e = 0$ if edge $e$ does not belong to any densest subgraph of $G$.
\end{corollary}
It is well known that the nucleolus lies in the core when the core is nonempty.
To compute the nucleolus in the core, we need a better understanding of the core polytope.
Corollary \ref{thm:CoreCoverHull} states that every extreme point of the core polytope is associated with a densest subgraph, which suggests that faces of the core polytope may also be associated with densest subgraphs.
Inspired by the face lattice of convex polytopes \cite{Schr86}, we introduce a graph decomposition built on densest subgraphs, which is crucial for computing the nucleolus in the core of arboricity games.
\section{The prime partition}
\label{sec:PrimePartition}
This section is self-contained and devoted to the prime partition, a graph decomposition analogous to the core decomposition \cite{Seid83} and the density-friendly decomposition \cite{Tatt19, TG15}.
The prime partition is inspired by the face lattice of convex polytopes and built on the densest subgraph lattice where the edge set intersection of any two densest subgraphs is either empty or inducing a densest subgraph again.
By utilizing the uncrossing technique \cite{LRS11} to a chain of subgraphs with the maximum density, we introduce the prime partition.
The \emph{prime partition} decomposes the edge set of a graph into a \emph{non-prime set} and a number of \emph{prime sets}.
The \emph{non-prime set} is the set of edges that are not in any densest subgraph.
The \emph{prime sets} are the incremental edge sets of a chain of subgraphs with the maximum density.
In general, there is more than one chain of subgraphs with the maximum density that defines the prime sets.
A partial order can be defined on the prime sets according to the invariant inclusion relation in any chain of subgraphs defining the prime sets.
There are other graph decompositions \cite{ALPS09, BB20} inspired by the face lattice of convex polytopes.
But they are defined on different discrete structures.
The remainder of this section is organized as follows.
In Subsection \ref{sec:MinimalDensestSubgraph}, we investigate properties of minimal densest subgraphs which are basic ingredients of the prime partition.
In Subsection \ref{sec:PrimeSets}, we define prime sets by levels and introduce the non-prime set as a byproduct.
In Subsection \ref{sec:DensestSubgraphDecomposition}, we show that every densest subgraph admits a unique decomposition with prime sets.
In Subsection \ref{sec:AncestorRelation}, we introduce the ancestor relation of prime sets and define a partial order from the ancestor relation.
Throughout this section, we always assume that the graph $G=(V,E)$ is connected.
\subsection{Minimal densest subgraphs}
\label{sec:MinimalDensestSubgraph}
The following properties of minimal densest subgraphs are useful in defining the prime partition.
\begin{lemma}[Cut-vertex-free property]
\label{thm:MinimalDensestSubgraphCutVertex}
Let $H$ be a minimal densest subgraph of $G$.
Then $H$ has no cut vertex.
\end{lemma}
\begin{proof}
Assume to the contrary that $v$ is a cut vertex in $H$.
Let $H_1$ and $H_2$ be two subgraphs of $H$ such that $H_1\cup H_2 =H$ and $H_1\cap H_2=\{v\}$.
Since $H$ is a minimal densest subgraph, we have
\begin{equation}
g(H_i)=\frac{m(H_i)}{n(H_i)-1} < g(H),
\end{equation}
for $i=1,2$.
It follows that
\begin{equation}
g(H)=\frac{m(H)}{n(H)-1}=\frac{m(H_1)+m(H_2)}{[n(H_1)-1]+[n(H_2)-1]}<g(H),
\end{equation}
which is a contradiction.
Hence $H$ has no cut vertex.
\end{proof}
\begin{lemma}[Noncrossing property]
\label{thm:MDS_Noncrossing}
Let $H$ be a minimal densest subgraph of $G$.
For any densest subgraph $K$ of $G$, either $E(H) \subseteq E(K)$ or $E(H) \cap E(K)=\emptyset$.
\end{lemma}
\begin{proof}
When $E(H) \cap E(K)\not=\emptyset$, assume to the contrary that $E(H)\not\subseteq E(K)$.
Let $X=H\cap K$.
Then $X$ is a proper subgraph of $H$ with $E(X)\not=\emptyset$.
On one hand, we have
\begin{equation}\label{eq:Noncrossing1}
\frac{m(H-X)+m(H-X,X)}{n(H-X)}>g(H).
\end{equation}
Indeed, since otherwise
\begin{equation}
\begin{split}
g(X)&=\frac{m(X)}{n(X)-1}=\frac{m(H)-m(H-X)-m(H-X,X)}{n(H)-n(H-X)-1}\\
& \geq \frac{m(H)-n(H-X)\cdot g(H)}{n(H)-n(H-X)-1}=\frac{m(H)-n(H-X)\cdot \frac{m(H)}{n(H)-1}}{n(H)-n(H-X)-1}=g(H),
\end{split}
\end{equation}
which contradicts the minimality of $H$.
On the other hand, we have
\begin{equation}\label{eq:Noncrossing2}
\frac{m(K-X)+m(K-X,X)+m(X)}{n(K-X)+n(X)-1}=g(K).
\end{equation}
Since $g(H)=g(K)$, \eqref{eq:Noncrossing1} and \eqref{eq:Noncrossing2} imply that
\begin{equation}
g(H\cup K)\geq \frac{[m(H-X)+m(H-X,X)]+[m(K-X)+m(K-X,X)+m(X)]}{n(H-X)+[n(K-X)+n(X)-1]}> g(K),
\end{equation}
which contradicts the maximum density of $K$.
Hence either $E(H) \subseteq E(K)$ or $E(H) \cap E(K)=\emptyset$.
\end{proof}
Lemma \ref{thm:MDS_Noncrossing} implies that any two minimal densest subgraphs share no common edge, which is the key property for defining prime sets.
We also notice that any two minimal densest subgraphs share at most one common vertex.
This observation can be generalized to a ``cycle''-free property for minimal densest subgraphs.
\begin{lemma}[``Cycle''-free property]
\label{thm:MDS_CommonVertex}
Let $H_1,\ldots,H_r$ be minimal densest subgraphs of $G$.
Then $\lvert \{v: v\in V(H_i)\cap V(H_j),~i\not=j \}\rvert < r$.
\end{lemma}
\begin{proof}
Assume to the contrary that $\lvert \{v: v\in V(H_i)\cap V(H_j),~i\not=j \}\rvert \geq r$.
Let $H=\cup_{i=1}^{r} H_i$.
Lemma \ref{thm:MDS_Noncrossing} implies that
\begin{equation}
\begin{split}
g(H)
&\geq \frac{\sum_{i=1}^{r} m(H_i)}{\sum_{i=1}^{r} n(H_i)-\lvert \{v: v\in V(H_i)\cap V(H_j),~ i\not=j \} \rvert -1}\\
&> \frac{\sum_{i=1}^{r} m(H_i)}{\sum_{i=1}^{r} [n(H_i)-1]}=g(H_1),
\end{split}
\end{equation}
which contradicts the maximum density of $H_1$.
\end{proof}
To illustrate the ``cycle''-free property, we introduce an auxiliary graph $\mathcal{H}(G)$.
Every vertex $v_H$ in $\mathcal{H}(G)$ is associated with a minimal densest subgraph $H$ of $G$.
Every edge in $\mathcal{H}(G)$ joins two vertices $v_{H_1}$ and $v_{H_2}$ in $\mathcal{H}(G)$ if $H_1$ and $H_2$ share a common vertex.
Lemma \ref{thm:MDS_CommonVertex} implies that if any three minimal densest subgraphs of $G$ share no common vertex, then $\mathcal{H}(G)$ is acyclic.
The ``cycle''-free property will be used repeatedly in our arguments.
To define the prime partition, we have to determine all minimal densest subgraphs.
Gabow \cite{Gabo98} provided an $O(nm\log \frac{n^2}{m})$ algorithm for computing the fractional arboricity of a graph with $n$ vertices and $m$ edges.
By employing the algorithm of Gabow, the enumeration of all minimal densest subgraphs can be done in polynomial time.
\begin{lemma}
\label{thm:MDSEnumeration}
All minimal densest subgraphs of $G$ can be enumerated in $O(n^3 m \log \frac{n^2}{m})$.
\end{lemma}
\begin{proof}
We first show that a minimal densest subgraph of $G$ can be found in $O(n^2 m \log \frac{n^2}{m})$.
Initially, compute the fractional arboricity of $G$ and let $H=G$.
Compute the fractional arboricity of $H-v$ where $v\in V(H)$.
If $a_f(H-v)=a_f(G)$, then a densest subgraph of $G$ can be found in $H-v$.
Update $H$ with $H-v$ and repeat the process for $H$ until $a_f(H-v)<a_f(G)$ for any $v\in V(H)$.
Then $H$ is a minimal densest subgraph of $G$.
It takes $O(n)$ iterations before achieving a minimal densest subgraph of $G$.
Each iteration,
which involves computing the fractional arboricity of a subgraph of $G$,
can be done in $O(n m \log \frac{n^2}{m})$.
Hence a minimal densest subgraph of $G$ can be found in $O(n^2 m \log \frac{n^2}{m})$.
Now we show that all minimal densest subgraphs of $G$ can be enumerated in $O(n^3 m \log \frac{n^2}{m})$.
Let $H_1,\ldots,H_k$ be minimal densest subgraphs that have been found in $G$.
Let $G_k=G - \cup_{i=1}^{k} E(H_i)$.
Compute the fractional arboricity of $G_k$.
If $a_f(G_k)=a_f(G)$, then a minimal densest subgraph $H_{k+1}$ of $G$ can be found in $G_k$ and let $G_{k+1}=G - \cup_{i=1}^{k+1} E(H_i)$.
Repeat this process for $G_{k+1}$ until $a_f(G_{k+1})<a_f(G)$.
Then no minimal densest subgraph of $G$ remains in $G_{k+1}$.
Lemma \ref{thm:MDS_CommonVertex} implies that any two minimal densest subgraphs share at most one common vertex and there is a ``cycle''-free property among minimal densest subgraphs.
It follows that $\cup_{i=1}^{k+1}H_i$ has at least one more vertex than $\cup_{i=1}^{k}H_i$.
Therefore, there are $O(n)$ minimal densest subgraphs of $G$.
A minimal densest subgraph of $G$ can be found in $O(n^2m \log \frac{n^2}{m})$.
Therefore, all minimal densest subgraphs of $G$ can be enumerated in $O(n^3 m \log \frac{n^2}{m})$.
\end{proof}
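The peeling argument in the proof can be phrased as a short Python sketch, in which \texttt{fractional\_arboricity} is a placeholder for any exact routine computing $a_f$ (for instance, Gabow's algorithm, or the brute force sketched in Section \ref{sec:PolyhedralCombinatorics} for small graphs).
\begin{verbatim}
import networkx as nx

def minimal_densest_subgraph(G, fractional_arboricity):
    # Shrink G to one minimal densest subgraph by deleting vertices whose
    # removal preserves the fractional arboricity, as in the proof above.
    target = fractional_arboricity(G)
    H = G.copy()
    shrinking = True
    while shrinking:
        shrinking = False
        for v in list(H.nodes()):
            K = H.copy()
            K.remove_node(v)
            if K.number_of_nodes() >= 2 and fractional_arboricity(K) == target:
                H = K                # a densest subgraph of G survives in H - v
                shrinking = True
                break
    # now a_f(H - v) < a_f(G) for every vertex v, so H is a minimal densest subgraph
    return H
\end{verbatim}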
\subsection{Defining prime sets by levels}
\label{sec:PrimeSets}
Now we define prime sets in the prime partition.
In short, every prime set is the edge set of a minimal densest minor.
Since edge contractions are involved, we introduce prime sets by levels.
A \emph{prime set of level zero} in $G$ is the edge set of a minimal densest subgraph.
By Lemma \ref{thm:MDS_Noncrossing}, prime sets of level zero are well defined.
Moreover, Lemma \ref{thm:MDSEnumeration} implies that all prime sets of level zero can be enumerated efficiently.
To define prime sets of higher levels, we study properties of densest subgraphs under edge contraction.
\begin{lemma}[Density preserving contraction]
\label{thm:ArboricityPreservingContraction}
Let $H$ be a proper densest subgraph of $G$.
Then $g(G\slash H)\leq g(G)$ and $a_f(G\slash H)\leq a_f(G)$.
Moreover, both equalities hold if $g(G)=a_f(G)$.
\end{lemma}
\begin{proof}
Let $\hat{G}=G\slash H$ and $v_H$ be the image of $H$ in $\hat{G}$.
We first prove that $g(\hat{G})\leq g(G)$ and that equality holds if $g(G)=g(H)$.
Notice that $m(\hat{G}-v_H)=m(G-H)$, $m(\hat{G}-v_H,v_H)=m(G-H,H)$ and $n(G-H)=n(\hat{G}-v_H)$.
Hence $g(G)\leq g(H)$ implies that
\begin{equation}
\label{eq:DensestSubgraphContraction}
\underbrace{\frac{m(\hat{G}-v_H)+m(\hat{G}-v_H,v_H)}{n(\hat{G}-v_H)}}_{=g(\hat{G})}\leq
\underbrace{\frac{[m(G-H)+m(G-H,H)]+m(H)}{n(G-H)+[n(H)-1]}}_{=g(G)}\leq
\underbrace{\frac{m(H)}{n(H)-1}}_{=g(H)}.
\end{equation}
Notice that $g(\hat{G})=\frac{m(G)-m(H)}{n(G)-n(H)}$.
Therefore, $g(\hat{G})=g(G)$ if $g(G)=g(H)$.
Now we prove that $a_f(\hat{G})\leq a_f(G)$.
It suffices to show that $g(\hat{G}')\leq a_f(G)$ for any induced subgraph $\hat{G}'$ of $\hat{G}$.
When $v_H\not\in V(\hat{G}')$, it is trivial that $g(\hat{G}')\leq a_f(G)$.
Now assume that $v_H\in V(\hat{G}')$.
Then there is an induced subgraph $G'$ of $G$ such that $H\subseteq G'$ and $\hat{G}'=G'\slash H$.
Hence \eqref{eq:DensestSubgraphContraction} implies that $g(\hat{G}')\leq g(G')\leq a_f(G)$.
It follows that $a_f(\hat{G})\leq a_f(G)$.
We have seen that $g(G)=g(H)$ implies $g(\hat{G})=g(G)$.
Therefore, $a_f(\hat{G}) = a_f(G)$ if $g(G)=g(H)$.
\end{proof}
Lemma \ref{thm:ArboricityPreservingContraction} implies that contracting a densest subgraph does not change the fractional arboricity if this subgraph is a proper subgraph of another densest subgraph.
By Lemma \ref{thm:MDS_Noncrossing}, minimal densest subgraphs possess an uncrossing property.
Hence minimal densest subgraphs can be contracted simultaneously.
After contracting all minimal densest subgraphs, if the fractional arboricity remains unchanged, densest subgraphs of the resulting graph are densest minors of the original graph; moreover, minimal densest subgraphs of the resulting graph are used to define prime sets of level one.
The procedure can be repeated to define prime sets of higher levels until the fractional arboricity of the resulting graph decreases.
In the following, we formally define prime sets of higher levels.
Let $\hat{G}^{(0)}=G$ and $\hat{G}^{(k+1)}$ be the graph obtained from $\hat{G}^{(k)}$ by contracting all edges in minimal densest subgraphs of $\hat{G}^{(k)}$, where $k\geq 0$.
If $a_f(\hat{G}^{(k+1)})=a_f(G)$, then a \emph{prime set of level $k+1$} is the edge set of a minimal densest subgraph in $\hat{G}^{(k+1)}$.
Otherwise, there is no more prime set and the edge set of $\hat{G}^{(k+1)}$ is the \emph{non-prime set}.
Therefore, every prime set is essentially the edge set of a minimal densest minor, and the non-prime set is the set of edges that are not in any minimal densest minor.
For simplicity, we introduce some notations for the prime partition of $G$.
For a prime set $P$ of level $k$, we use $n(P)$ to denote the number of vertices in its defining minimal densest subgraph $\hat{G}^{(k)}[P]$.
We use $\mathcal{P}_k$ to denote the collection of all prime sets of level $k$,
use $\mathcal{P}=\cup_{k} \mathcal{P}_k$ to denote the collection of all prime sets,
use $E_0$ to denote the non-prime set,
and use $\mathcal{E}=\mathcal{P}\cup \{E_0\}$ to denote the prime partition.
Figure \ref{fig:PrimePartition} provides an example of the prime partition.
The fractional arboricity of $G$ is $2$.
There are $4$ prime sets of level zero, $3$ prime sets of level one, and $2$ prime sets of level two.
The non-prime set is empty.
\begin{figure}[hbt!]
\centering
\includegraphics[width=.8\textwidth]{fig_PrimePartition}
\caption{An example for the prime partition.}
\label{fig:PrimePartition}
\end{figure}
Since the enumeration of minimal densest subgraphs can be done in polynomial time, the prime partition can be computed efficiently.
\begin{theorem}
\label{thm:PrimePartitionComplexity}
The prime partition of $G$ has $O(n)$ prime sets and can be computed in $O(n^4 m \log\frac{n^2}{m})$.
\end{theorem}
\begin{proof}
Lemmas \ref{thm:MDS_Noncrossing} and \ref{thm:MDS_CommonVertex} imply that contractions on different minimal densest subgraphs can be performed simultaneously.
Notice that $n(\hat{G}^{(k+1)})\leq n(\hat{G}^{(k)})-\lvert \mathcal{P}_{k}\rvert$.
Consequently, there are $O(n)$ prime sets in $\mathcal{P}$.
The definition of prime sets naturally yields an efficient algorithm for computing the prime partition of $G$.
Since there are $O(n)$ prime sets, it takes $O(n)$ iterations to compute the prime partition of $G$, and each iteration computes all prime sets of the same level.
Computing all prime sets of the same level is equivalent to enumerating all minimal densest subgraphs, which can be done in $O(n^3 m \log\frac{n^2}{m})$.
Hence the prime partition of $G$ can be computed in $O(n^4 m \log\frac{n^2}{m})$.
\end{proof}
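The level-by-level construction described in this subsection can be sketched as follows; the helpers \texttt{a\_f}, \texttt{minimal\_densest\_edge\_sets} and \texttt{contract} are hypothetical placeholders, where \texttt{contract} contracts each given edge set to a single vertex while keeping the labels of the surviving original edges.
\begin{verbatim}
def prime_partition(G, a_f, minimal_densest_edge_sets, contract):
    # Level-by-level construction of the prime partition; the three helpers
    # are hypothetical placeholders described in the preceding paragraph.
    target = a_f(G)
    levels, H = [], G
    while H.number_of_edges() > 0 and a_f(H) == target:
        level_sets = minimal_densest_edge_sets(H)   # prime sets of the current level
        levels.append(level_sets)
        H = contract(H, level_sets)
    non_prime = set(H.edges())                      # edges lying in no densest minor
    return levels, non_prime
\end{verbatim}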
\subsection{Decomposing densest subgraphs with prime sets}
\label{sec:DensestSubgraphDecomposition}
To show that any densest subgraph admits a decomposition of prime sets,
we first generalize the uncrossing property of minimal densest subgraphs to prime sets.
\begin{lemma}[Generalized noncrossing property]
\label{thm:NonCrossingPrimeSet}
For any prime set $P$ and any densest subgraph $H$ of $G$, either $P\subseteq E(H)$ or $P\cap E(H)=\emptyset$.
\end{lemma}
\begin{proof}
We apply induction on the level of prime set $P$.
When $P\in \mathcal{P}_0$, $G[P]$ is a minimal densest subgraph of $G$.
Lemma \ref{thm:MDS_Noncrossing} implies that either $P\subseteq E(H)$ or $P\cap E(H)=\emptyset$.
Now assume that $P\in \mathcal{P}_l$ where $l\geq 1$
and assume that for any prime set $Q$ of level less than $l$ either $Q\subseteq E(H)$ or $Q\cap E(H)=\emptyset$.
Let $\hat{H}^{(0)}=H$ and $\hat{H}^{(k+1)}$ be the graph obtained from $\hat{H}^{(k)}$ by contracting all edges in prime sets of level $k$, where $k\geq 0$.
Assume that $E(\hat{H}^{(l)})\not=\emptyset$, since otherwise we have $P\cap E(H)=\emptyset$.
By the induction hypothesis and Lemma \ref{thm:ArboricityPreservingContraction},
we have $g(\hat{H}^{(l)})=g(H)=a_f(G)=a_f(\hat{G}^{(l)})$.
Hence $\hat{H}^{(l)}$ is a densest subgraph of $\hat{G}^{(l)}$.
Since $\hat{G}^{(l)}[P]$ is a minimal densest subgraph of $\hat{G}^{(l)}$,
Lemma \ref{thm:MDS_Noncrossing} implies that either $P\subseteq E(\hat{H}^{(l)})$ or $P\cap E(\hat{H}^{(l)})=\emptyset$.
Therefore, either $P\subseteq E(H)$ or $P\cap E(H)=\emptyset$.
\end{proof}
It follows from Lemma \ref{thm:NonCrossingPrimeSet} that every densest subgraph admits a decomposition of prime sets.
\begin{lemma}[Prime set decomposition]
\label{thm:PrimePartition}
For any densest subgraph $H$ of $G$, there are prime sets $P_1,\ldots,P_r$ such that $E(H)=\cup_{i=1}^{r} P_{i}$, where $r\geq 1$.
Moreover, $n(H)=\sum_{i=1}^r [n (P_i)-1]+1$.
\end{lemma}
\begin{proof}
Lemmas \ref{thm:ArboricityPreservingContraction} and \ref{thm:NonCrossingPrimeSet} imply that there are prime sets $P_1,\ldots,P_r$ such that $E(H)=\cup_{i=1}^{r} P_i$.
Assume that $P_1,\dots,P_r$ are arranged in a non-decreasing order of levels.
Let $\hat{H}_0=H$ and $\hat{H}_{k}=\hat{H}_{k-1} \slash P_{k}$ for $k=1,\ldots,r$.
By Lemmas \ref{thm:MDS_Noncrossing} and \ref{thm:MDS_CommonVertex},
we have $n(\hat{H}_{k})=n(\hat{H}_{k-1})-n (P_{k})+1$.
Besides, $n(\hat{H}_r)=1$.
Therefore, $n(H)=\sum_{i=1}^{r} [n(\hat{H}_{i-1})-n(\hat{H}_{i})]+n(\hat{H}_r)=\sum_{i=1}^{r} [n (P_i)-1]+1$.
\end{proof}
Since the non-prime set consists of edges that are not in any densest subgraph,
we have the following corollary.
\begin{lemma}
\label{thm:PrimeComponent}
Let $E_0$ be the non-prime set of $G$.
Then every component of $G-E_0$ is a densest subgraph of $G$.
\end{lemma}
\begin{figure}[hbt!]
\centering
\includegraphics[width=0.9\textwidth]{fig_LaminarStructure.pdf}
\caption{Illustrations for the prime partition of $G$ in Figure \ref{fig:PrimePartition}.}
\label{fig:PrimePartitionIllustration}
\end{figure}
The left Venn diagram in Figure \ref{fig:PrimePartitionIllustration} illustrates the relation of all $11$ densest subgraphs of $G$ in Figure \ref{fig:PrimePartition}.
It shows that the intersection of any two densest subgraphs is either empty or a densest subgraph again.
Hence, all densest subgraphs of $G$, together with $\emptyset$, form a lattice under inclusion.
It also shows that every densest subgraph can be decomposed into prime sets.
\subsection{The ancestor relation}
\label{sec:AncestorRelation}
Lemmas \ref{thm:NonCrossingPrimeSet} and \ref{thm:PrimePartition} suggest that there exists a laminar family of subgraphs with the maximum density that defines the prime sets.
We may determine a chain of subgraphs that defines the prime sets as follows.
Start with $G_0=G-E_0$.
For $k\geq 1$, let $S_k$ be the edge set of a minimal densest subgraph in $G_{k-1}$, $G_{k}$ be the graph obtained from $G_{k-1}$ by contracting edges in $S_k$, and $H_k = G[\cup_{i=1}^{k} S_i]$.
Then $H_l=G-E_0$ for some integer $l$, and each $H_k$ is a subgraph with the maximum density for $k=1,\ldots,l$.
Consequently, $\{H_1,\ldots,H_l\}$ with $H_1\subsetneq \ldots \subsetneq H_l$ is a chain of subgraphs with the maximum density that defines the prime sets.
Furthermore, every prime set is precisely the incremental edge set of consecutive subgraphs in the chain.
The right Venn diagram in Figure \ref{fig:PrimePartitionIllustration} provides a chain of subgraphs with the maximum density that defines the prime sets of $G$ in Figure \ref{fig:PrimePartition}.
It shows that all prime sets of $G$ are precisely incremental edge sets of subgraphs in the chain.
Generally, there is more than one chain of subgraphs with the maximum density that defines the prime sets.
However, some prime sets are always preceded by other prime sets in any chain of subgraphs defining the prime sets, as some minimal densest minors occur only after the contraction of other minimal densest minors.
Therefore, we introduce the notion of ancestor to represent the invariant precedence relation in the prime sets.
A prime set $Q$ is called an \emph{ancestor} of a prime set $P$ if the minimal densest minor defining $P$ occurs only after the contraction of $Q$.
Alternatively, $Q$ is an ancestor of $P$ if $Q$ always precedes $P$ in any chain of subgraphs with the maximum density that defines the prime sets.
Clearly, the ancestor relation is \emph{transitive}.
If $Q$ is an ancestor of $P$ but not an ancestor of any other ancestors of $P$, then $Q$ is called a \emph{parent} of $P$.
To prove the ancestor relation is well defined, it suffices to show that the parent relation is well defined since the ancestor relation is transitive.
Let $P$ be a prime set of level $k\geq 1$.
Let $G'$ denote the graph obtained from $G$ by contracting all non-parent ancestors of $P$ by levels, i.e.,
first contract all non-parent ancestors of level zero and then repeatedly contract all non-parent ancestors of higher levels.
Let $G''$ denote the graph obtained from $G'$ by contracting all parents of $P$.
Let $e_1, e_2$ be edges from $P$ that become incident at some vertex in $G''$.
Let $Q_1,\ldots,Q_r $ be a minimal collection of parents of $P$ whose contraction concatenates $e_1$ and $e_2$.
Clearly, $G'[\cup_{i=1}^{r} Q_i]$ is connected.
Denote by $v_Q$ the image of $\cup_{i=1}^{r} Q_i$ in $G''$.
We show that $Q_1,\ldots,Q_r$ form the \emph{unique} collection of parents of $P$ that concatenates $e_1$ and $e_2$ at $v_Q$ in $G''$.
Assume to the contrary that there exists another collection of parents of $P$, say $Q'_1,\ldots,Q'_s$, such that $G'[\cup_{i=1}^{s} Q'_i]$ is connected and $v_Q$ is the image of $\cup_{i=1}^{s} Q'_i$ in $G''$.
Then $G'$ has a ``cycle'' consisting of minimal densest subgraphs induced by prime sets from $Q_1,\ldots,Q_r,Q'_1,\ldots,Q'_s$,
which contradicts Lemma \ref{thm:MDS_CommonVertex}.
Hence the parent relation is well defined.
Notice that all ancestors of a prime set constitute the minimal collection of prime sets that have to be contracted before arriving at its corresponding minimal densest minor.
Thus if a densest subgraph contains a prime set, it also contains all ancestors of the prime set.
\begin{lemma}
\label{thm:AncestorInclusion}
Let $H$ be a densest subgraph of $G$.
Let $P$ be a prime set of $G$ and $Q$ be an ancestor of $P$.
Then $P\subseteq E(H)$ implies $Q\subseteq E(H)$.
\end{lemma}
Moreover, the ancestor relation can be determined efficiently.
\begin{lemma}
\label{thm:PrimeSetRelation}
Given the prime partition of $G$,
the ancestor relation can be determined in $O(n^2 m)$.
\end{lemma}
\begin{proof}
Let $P$ be a prime set of level $k+1$, where $k\geq 0$.
We show that all ancestors of $P$ can be determined in $O(nm)$.
To determine all ancestors of $P$, it suffices to check every prime set $Q$ of level less than $k+1$.
Let $\hat{G}^{(l+1)}_{-Q}$ denote the graph obtained from $\hat{G}^{(l)}_{-Q}$ by contracting all edges in prime sets of level $l$, where $\hat{G}^{(0)}_{-Q}=G-Q$.
Then $Q$ is an ancestor of $P$ if and only if $n(\hat{G}^{(k+1)}_{-Q}[P])\not=n(\hat{G}^{(k+1)}[P])$.
Since there are $O(n)$ prime sets, all ancestors of $P$ can be determined in $O(nm)$.
Therefore, all ancestors for $O(n)$ prime sets can be determined in $O(n^2 m)$.
\end{proof}
We conclude this section with a partially ordered set defined from the prime sets and the ancestor relation.
Indeed, if we view every prime set as an ancestor of itself, then the ancestor relation naturally yields a partial order on the prime sets.
Write $P\prec Q$ for any two prime sets $P$ and $Q$ if $Q$ is an ancestor of $P$.
Consequently, a partial order $\prec$ is defined on the prime sets from the ancestor relation.
\section{Computing the nucleolus}
\label{sec:Nucleolus}
In this section, we develop an efficient algorithm for computing the nucleolus of arboricity games when the core is not empty.
In Subsection \ref{sec:Reformulation}, we employ the prime partition of the underlying graph to reformulate linear programs involved in Maschler's scheme.
In Subsection \ref{sec:Equivalence}, we prove the correctness of our formulation for Maschler's scheme.
In Subsection \ref{sec:CombinatorialAlgorithm}, we show that Maschler's scheme always terminates on the second round and the nucleolus can be computed in polynomial time.
Throughout this section, in addition to assuming that graph $G=(V,E)$ is connected, we further assume that arboricity game $\Gamma_G=(N,\gamma)$ has a nonempty core.
\subsection{Reformulating Maschler's scheme}
\label{sec:Reformulation}
To compute the nucleolus of $\Gamma_G$, the first round of Maschler's scheme is to solve linear program $LP_1$ \eqref{eq:Nucleolus_LP1_0}-\eqref{eq:Nucleolus_LP1_3} defined from the standard characterization for the core.
By referring to the alternative characterization for the core in Lemma \ref{thm:Core},
we introduce linear program $LP'_1$ \eqref{eq:Nucleolus_LP1_New_0}-\eqref{eq:Nucleolus_LP1_New_3}.
For any constant $\epsilon$, let $P'_1(\epsilon)$ denote the set of vectors $\boldsymbol{x}\in \mathbb{R}^E$ such that $(\boldsymbol{x},\epsilon)$ satisfies \eqref{eq:Nucleolus_LP1_New_1}-\eqref{eq:Nucleolus_LP1_New_3}.
We show that $LP_1$ and $LP'_1$ are equivalent.
\begin{alignat}{3}
\max\quad & \epsilon &{}& \label{eq:Nucleolus_LP1_New_0}\\
\lplabel[lp1_v2]{$(LP'_1)$\quad}\mbox{s.t.}\quad
&x(E) = \gamma (E), &\quad & \label{eq:Nucleolus_LP1_New_1}\\
&x(T) + \epsilon\leq 1, &\quad &\forall~ T\in \mathcal{T}, \label{eq:Nucleolus_LP1_New_2}\\
&x_e \geq 0, &\quad &\forall~ e\in E. \label{eq:Nucleolus_LP1_New_3}
\end{alignat}
\begin{lemma}
\label{thm:LeastCore_Alternative}
Let $\epsilon_1$ and $\epsilon'_1$ be the optimal value of $LP_1$ and $LP'_1$ respectively.
Then $\epsilon_1=\epsilon'_1$ and $P_1(\epsilon_1)=P'_1(\epsilon'_1)$.
\end{lemma}
\begin{proof}
We first show that $\epsilon_1=\epsilon'_1$.
It is easy to see that $\epsilon_1\leq \epsilon'_1$, since $LP'_1$ is a relaxation of $LP_1$.
Let $S\subseteq E$ and $\mathcal{C}_S$ be a minimum forest cover in $G[S]$.
For any $\boldsymbol{x}\in P'_1(\epsilon'_1)$, Lemma \ref{thm:Core} implies that
\begin{equation}
\gamma (S)-x(S) = \sum_{T_S\in \mathcal{C}_S} 1 -\sum_{e\in S} x_e = \sum_{T_S\in \mathcal{C}_S} (1-\sum_{e\in T_S} x_e) \geq \sum_{T_S\in \mathcal{C}_S} \epsilon'_1\geq \epsilon'_1.
\end{equation}
We remark that each forest $T_S$ extends to a spanning tree $T$ of $G$ with $x(T_S)\leq x(T)\leq 1-\epsilon'_1$, and that the last inequality follows from the assumption $\mathcal{C}(\Gamma_G)\not=\emptyset$, which implies $\epsilon'_1\geq\epsilon_1\geq 0$.
By the optimality of $\epsilon_1$, we have $\epsilon_1\geq \epsilon'_1$.
Thus $\epsilon_1=\epsilon'_1$ follows.
Next we show that $P_1(\epsilon_1)=P'_1(\epsilon'_1)$.
Clearly, $P_1(\epsilon_1)\subseteq P'_1 (\epsilon'_1)$, since $\epsilon_1=\epsilon'_1$ and $LP'_1$ is a relaxation of $LP_1$.
Conversely, for any $\boldsymbol{x}\in P'_1 (\epsilon'_1)$, the argument above shows that $\boldsymbol{x}$ also satisfies the constraints of $LP_1$ with value $\epsilon'_1=\epsilon_1$.
Then $\boldsymbol{x}\in P_1 (\epsilon_1)$, which implies that $P'_1 (\epsilon'_1)\subseteq P_1 (\epsilon_1)$.
Thus, $P_1(\epsilon_1)=P'_1(\epsilon'_1)$.
\end{proof}
Before proceeding to the second round of Maschler's scheme, we have to determine the optimal value $\epsilon'_1$ of $LP'_1$.
Clearly, $\epsilon'_1 \geq 0$ as $\mathcal{C}(\Gamma_G)\not=\emptyset$.
Assume that $\gamma(E)=k$ and that $G$ can be covered by $k$ disjoint forests $F_1,\ldots,F_k$.
Let $T_i$ be a spanning tree containing $F_i$ and $\boldsymbol{x}\in \mathcal{C}(\Gamma_G)$.
Clearly, $x(F_i)\leq x(T_i)\leq \gamma(T_i)=1$.
It follows that
$x(E)=\sum_{i=1}^{k} x(F_i)\leq \sum_{i=1}^{k} x(T_i)\leq k=\gamma (E)$.
Then $x(E)=\gamma(E)$ implies $x(F_i)=x(T_i)=1$.
Hence $\epsilon'_1=0$, implying that the core $P'_1(0)$ and the least core $P'_1 (\epsilon'_1)$ coincide.
Consequently, there are spanning trees $T\in \mathcal{T}$ such that $x(T)=1$ for any $\boldsymbol{x}\in \mathcal{C}(\Gamma_G)$, i.e., $T$ is fixed by $P'_1 (\epsilon'_1)$.
Denote by $\mathcal{T}_0$ the set of spanning trees that are fixed by $P'_1 (\epsilon'_1)$.
Let $E_0$ denote the set of edges that are not in any densest subgraph of $G$.
Corollary \ref{thm:NonPrimeSet} implies that $x_e=\epsilon'_1=0$ for any $e\in E_0$.
By Lemma \ref{thm:LeastCore_Alternative}, the second round of Maschler's scheme can be formulated as $LP'_2$ from $LP'_1$.
\begin{alignat}{3}
\max\quad &{\epsilon} &{}& \label{eq:Nucleolus_LP2v2_0}\\
\mbox{s.t.}\quad
&x(T)+\epsilon \leq 1, &\qquad& \forall ~ T \in \mathcal{T}\backslash \mathcal{T}_0, \label{eq:Nucleolus_LP2v2_1}\\
\lplabel[lp2_v2]{$(LP'_2)$\quad\quad}
&x(T) = 1, &\qquad& \forall ~T \in \mathcal{T}_0, \label{eq:Nucleolus_LP2v2_2}\\
&x_e \geq \epsilon, &\qquad& \forall ~ e\in E\backslash E_0, \label{eq:Nucleolus_LP2v2_3}\\
&x_e = 0, &\qquad& \forall ~ e\in E_0. \label{eq:Nucleolus_LP2v2_4}
\end{alignat}
However, $LP'_2$ still has an exponential number of constraints.
We derive an equivalent formulation of $LP'_2$ that has only polynomial size by resorting to the prime partition of $G$.
Notice that $E_0$ in \eqref{eq:Nucleolus_LP2v2_4} is precisely the non-prime set of $G$.
Let $\mathcal{P}=\cup_{k} \mathcal{P}_k$ denote the collection of all prime sets of $G$, where $\mathcal{P}_k$ is the collection of all prime sets of level $k$.
Let $\mathcal{E}=\mathcal{P}\cup \{E_0\}$ denote the prime partition of $G$.
Corollary \ref{thm:NonPrimeSet} states that all edges in the non-prime set have the same value in a core allocation.
It turns out that this property also holds for edges from the same prime set.
\begin{lemma}
\label{thm:PrimeSet_PartialOrder}
Let $\boldsymbol{x}$ be a core allocation of $\Gamma_G$ and $P$ be a prime set of $G$.
Then $x_e=x_f$ for any $e,f\in P$.
\end{lemma}
\begin{proof}
Let $\boldsymbol{x}^H$ be the vector associated with a densest subgraph $H$ of $G$.
By Lemma \ref{thm:NonCrossingPrimeSet}, either $P\subseteq E(H)$ or $P\cap E(H)=\emptyset$.
Thus for any $e,f\in P$, $x^H_e=x^H_f=\frac{1}{n(H)-1}$ if $P\subseteq E(H)$ and $x^H _e=x^H_f=0$ otherwise.
Since any vector $\boldsymbol{x}\in \mathcal{C}(\Gamma_G)$ is a convex combination of vectors associated with a densest subgraph of $G$, we have $x_e=x_f$ for any $e,f\in P$.
\end{proof}
Corollary \ref{thm:NonPrimeSet} and Lemma \ref{thm:PrimeSet_PartialOrder} state that
all edges in the same set of the prime partition have the same value in a core allocation.
Hence every core allocation $\boldsymbol{x}\in \mathbb{R}^{E}$ of $\Gamma_G$ defines a vector $\boldsymbol{y}\in \mathbb{R}^{\mathcal{E}}$ associated with the prime partition of $G$.
Moreover, $LP'_2$ can be reformulated with $\boldsymbol{y}$.
Let $(\mathcal{P},\prec)$ denote the partially ordered set defined on $\mathcal{P}$ from the ancestor relation.
Let $\mathcal{P}_{\min}$ denote the set of minimal prime sets in $(\mathcal{P},\prec)$.
Denote by $LP''_2$ the linear program \eqref{eq:Nucleolus_LP2v3_0}-\eqref{eq:Nucleolus_LP2v3_4} defined on $\boldsymbol{y}$.
In Subsection \ref{sec:Equivalence}, we show that $LP'_2$ and $LP''_2$ are equivalent, i.e., $\boldsymbol{x}$ is feasible to $LP'_2$ if and only if $\boldsymbol{y}$ is feasible to $LP''_2$.
In Subsection \ref{sec:CombinatorialAlgorithm}, we propose a combinatorial algorithm for $LP''_2$ and show that $LP''_2$ has a unique optimal solution which yields the nucleolus of $\Gamma_G$.
\begin{alignat}{3}
\hspace{4.5em}\max\quad &{\epsilon} &{}&\label{eq:Nucleolus_LP2v3_0}\\%[.5em]
\mbox{s.t.}\quad
& y_P +\epsilon\leq y_Q, &\quad& \forall ~ P\prec Q, \label{eq:Nucleolus_LP2v3_1}\\
\lplabel[lp]{$(LP''_2)$\quad~~\,}
&\textstyle\sum_{P\in \mathcal{P}} [n (P)-1] y_P = 1, &\qquad& \label{eq:Nucleolus_LP2v3_2}\\
& y_P \geq \epsilon, &\quad& \forall ~P\in \mathcal{P}_{\min},\label{eq:Nucleolus_LP2v3_3}\\
& y_{E_0}=0. \label{eq:Nucleolus_LP2v3_4}
\end{alignat}
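Since $LP''_2$ has only $O(n)$ variables and $O(n^2)$ constraints, it can be handed to an off-the-shelf LP solver once the prime partition, the values $n(P)$ and the partial order $\prec$ are available. The Python sketch below, based on \texttt{scipy.optimize.linprog}, is only an illustration; the combinatorial algorithm of Subsection \ref{sec:CombinatorialAlgorithm} is the one we analyze. The non-prime set $E_0$ is simply omitted from the variables, which enforces \eqref{eq:Nucleolus_LP2v3_4}.
\begin{verbatim}
from scipy.optimize import linprog

def solve_lp2(primes, n_of, ancestor_pairs, minimal):
    # primes: list of prime sets (hashable, E_0 excluded); n_of[P] = n(P);
    # ancestor_pairs: pairs (P, Q) with Q an ancestor of P, i.e. P < Q;
    # minimal: the minimal prime sets in the partial order.
    idx = {P: i for i, P in enumerate(primes)}
    k = len(primes)
    c = [0.0] * k + [-1.0]                           # maximize epsilon
    A_ub, b_ub = [], []
    for (P, Q) in ancestor_pairs:                    # y_P - y_Q + epsilon <= 0
        row = [0.0] * (k + 1)
        row[idx[P]], row[idx[Q]], row[-1] = 1.0, -1.0, 1.0
        A_ub.append(row)
        b_ub.append(0.0)
    for P in minimal:                                # -y_P + epsilon <= 0
        row = [0.0] * (k + 1)
        row[idx[P]], row[-1] = -1.0, 1.0
        A_ub.append(row)
        b_ub.append(0.0)
    A_eq = [[float(n_of[P] - 1) for P in primes] + [0.0]]   # sum (n(P)-1) y_P = 1
    b_eq = [1.0]
    bounds = [(None, None)] * (k + 1)                # y_P and epsilon are free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return {P: res.x[idx[P]] for P in primes}, res.x[-1]
\end{verbatim}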
\subsection{Equivalence of $LP'_2$ and $LP''_2$}
\label{sec:Equivalence}
Let $\boldsymbol{x}\in \mathcal{C}(\Gamma_G)$ and $\boldsymbol{y}\in \mathbb{R}^{\mathcal{E}}$ be a pair of associated vectors.
We show that $\boldsymbol{x}$ is feasible to $LP'_2$ if and only if $\boldsymbol{y}$ is feasible to $LP''_2$.
By Corollary \ref{thm:NonPrimeSet}, it is trivial that \eqref{eq:Nucleolus_LP2v2_4} and \eqref{eq:Nucleolus_LP2v3_4} are equivalent.
The equivalence of \eqref{eq:Nucleolus_LP2v2_3} and \eqref{eq:Nucleolus_LP2v3_3} follows from Lemma \ref{thm:PrimeSet_PartialOrder} and the observation below.
\begin{lemma}
Let $\boldsymbol{x}$ be a core allocation of $\Gamma_G$ and $P,Q$ be two prime sets of $G$ such that $Q$ is an ancestor of $P$.
Then $x_e \leq x_f$ for any $e\in P$ and any $f\in Q$.
\end{lemma}
\begin{proof}
Let $\boldsymbol{x}^H$ be the vector associated with a densest subgraph $H$ of $G$.
Lemma \ref{thm:AncestorInclusion} implies that $Q\subseteq E(H)$ if $P\subseteq E(H)$.
It follows that for any $e\in P$ and any $f\in Q$, $x^H_e=x^H_f=\frac{1}{n(H)-1}$ if $P\subseteq E(H)$ and $x^H_e=0\leq x^H_f$ otherwise.
Since any vector $\boldsymbol{x}\in \mathcal{C}(\Gamma_G)$ is a convex combination of vectors associated with densest subgraphs of $G$, we have $x_e \leq x_f$ for any $e\in P$ and any $f\in Q$.
\end{proof}
Next, we show the equivalence of \eqref{eq:Nucleolus_LP2v2_2} and \eqref{eq:Nucleolus_LP2v3_2}.
Notice that \eqref{eq:Nucleolus_LP2v2_2} provides a characterization for trees in $\mathcal{T}_0$ with $\boldsymbol{x}\in \mathbb{R}^{E}$.
Thus \eqref{eq:Nucleolus_LP2v3_2} serves the same purpose.
To associate trees in $\mathcal{T}_0$ with $\boldsymbol{y}\in \mathbb{R}^{\mathcal{E}}$,
we introduce the following lemma.
\begin{lemma}
\label{thm:TightSpanningTree}
A spanning tree $T$ belongs to $\mathcal{T}_0$ if and only if for any prime set $P$ we have
\begin{equation}
\label{eq:TightSpanningTree}
\lvert T\cap P\rvert=n(P)-1.
\end{equation}
\end{lemma}
\begin{proof}
$(\Leftarrow)$
Assume that $T$ is a spanning tree satisfying \eqref{eq:TightSpanningTree} for any prime set $P$.
Let $H$ be a densest subgraph of $G$.
The vector $\boldsymbol{x}^H$ associated with $H$ is defined by $x^H_e=\frac{1}{n(H)-1}$ if $e\in E(H)$ and $x^H_e=0$ otherwise.
By Lemma \ref{thm:PrimePartition}, we have $E(H)=\cup_{i=1}^{r} P_i$ and $n(H)=\sum_{i=1}^{r} [n (P_i)-1]+1$, where $P_i\in \mathcal{P}$ for $i=1,\ldots,r$.
Thus
\begin{equation*}
x^H (T)=\frac{\sum_{i=1}^{r} \lvert T\cap P_i\rvert}{n(H)-1}=\frac{\sum_{i=1}^{r} [n (P_i)-1]}{n(H)-1}=\frac{n(H)-1}{n(H)-1}=1.
\end{equation*}
Since every vector in $\mathcal{C}(\Gamma_G)$ is a convex combination of vectors associated with densest subgraphs of $G$, we have $x(T)=1$ for any $\boldsymbol{x} \in \mathcal{C}(\Gamma_G)$, implying that $T\in \mathcal{T}_0$.
$(\Rightarrow)$
Assume that $T\in \mathcal{T}_0$, i.e., $T$ is a spanning tree such that $x(T)=1$ for any $\boldsymbol{x}\in \mathcal{C}(\Gamma_G)$.
We apply induction on the level of the prime set $P\in \mathcal{P}$.
Among all densest subgraphs of $G$ containing $P$, let $H$ be a minimal one.
First assume that $P \in \mathcal{P}_0$.
Hence $P=E(H)$, implying that $H$ is a densest subgraph of $G$.
Then the vector $\boldsymbol{x}^H\in \mathcal{C}(\Gamma_G)$ associated with $H$ is defined by $x^H_e=\frac{1}{n(H)-1}$ for $e\in P$ and $x^H_e=0$ otherwise.
Since $x^H(T)=\frac{1}{n(H)-1} \lvert T\cap P\rvert = 1$, it follows that $\lvert T\cap P\rvert=n(H)-1=n (P)-1$.
Hence \eqref{eq:TightSpanningTree} holds for $P$.
Now assume that $P\in \mathcal{P}_k$, where $k\geq 1$.
Lemma \ref{thm:PrimePartition} implies that there exist prime sets $Q_1,\ldots,Q_r$ such that $E(H)=P\cup (\cup_{i=1}^{r} Q_i )$.
The minimality of $H$ implies that $P\prec Q_i$ for $i=1,\ldots,r$.
By induction hypothesis, we have $\lvert T\cap (\cup_{i=1}^{r} Q_i)\rvert=\sum_{i=1}^{r} [n (Q_i)-1]$.
Then the vector $\boldsymbol{x}^H\in \mathcal{C}(\Gamma_G)$ associated with $H$ is defined by $x^H_e=\frac{1}{n(H)-1}$ if $e\in P\cup (\cup_{i=1}^{r} Q_i)$ and $x^H_e=0$ otherwise.
Since
\begin{equation*}
x^H (T) = \frac{\lvert T\cap [P\cup(\cup_{i=1}^{r} Q_i)]\rvert}{n(H)-1} = \frac{\lvert T\cap P\rvert+\sum_{i=1}^{r} [n (Q_i)-1]}{n(H)-1} =1,
\end{equation*}
Lemma \ref{thm:PrimePartition} implies that $\lvert T\cap P\rvert=n(H)-1-\sum_{i=1}^{r} [n (Q_i)-1]=n (P)-1$.
Hence \eqref{eq:TightSpanningTree} holds for $P\in \mathcal{P}_k$ where $k\geq 1$.
\end{proof}
Now we are ready to prove the equivalence of \eqref{eq:Nucleolus_LP2v2_2} and \eqref{eq:Nucleolus_LP2v3_2}.
\begin{lemma}
Let $\boldsymbol{x}\in \mathcal{C}(\Gamma_G)$ be a vector and $\boldsymbol{y}\in \mathbb{R}^{\mathcal{E}}$ be the vector defined from $\boldsymbol{x}$.
Then $\boldsymbol{x}$ satisfies \eqref{eq:Nucleolus_LP2v2_2} if and only if $\boldsymbol{y}$ satisfies \eqref{eq:Nucleolus_LP2v3_2}.
\end{lemma}
\begin{proof}
Notice that every spanning tree $T\in \mathcal{T}$ admits a decomposition from the prime partition.
Lemma \ref{thm:TightSpanningTree} implies that
\begin{equation}
x(T)=\sum_{e\in T}x_e=\lvert T\cap E_0\rvert y_{E_0} + \sum_{P\in \mathcal{P}} \lvert T\cap P\rvert y_P=\sum_{P\in \mathcal{P}} [n (P)-1] y_P=1.
\end{equation}
Thus $\boldsymbol{x}$ satisfies \eqref{eq:Nucleolus_LP2v2_2} if and only if $\boldsymbol{y}$ satisfies \eqref{eq:Nucleolus_LP2v3_2}.
\end{proof}
Finally, we come to the equivalence of $LP'_2$ and $LP''_2$.
We first show that any vector $\boldsymbol{y}\in \mathbb{R}^{\mathcal{E}}$ defined from a feasible solution $\boldsymbol{x}\in \mathbb{R}^{E}$ of $LP'_2$ satisfies \eqref{eq:Nucleolus_LP2v3_1} and hence is a feasible solution of $LP''_2$.
Our proof is based on the idea that for any two prime sets $P$ and $Q$ with $P\prec Q$,
a specific spanning tree outside $\mathcal{T}_0$ can be constructed from any spanning tree in $\mathcal{T}_0$ by repeatedly performing edge exchanges along a pathway consisting of ancestors of $P$ and ending with $Q$.
To this end, we need the following lemma.
\begin{lemma}
\label{thm:EdgeExchange}
Let $P,Q\in \mathcal{P}$ be a pair of prime sets with $P\prec Q$ in $(\mathcal{P},\prec)$.
For any spanning tree $T\in \mathcal{T}_0$, there exists a spanning tree $T'\in \mathcal{T}\backslash \mathcal{T}_0$ such that
\vspace{-.5em}
\begin{align}
\lvert T'\cap P\rvert&=\lvert T\cap P\rvert +1, \label{eq:EdgeExchange_1}\\
\lvert T'\cap Q\rvert&=\lvert T\cap Q\rvert -1, \label{eq:EdgeExchange_2}\\
\lvert T'\cap R\rvert&=\lvert T\cap R\rvert, \quad\qquad \forall ~R\in \mathcal{P}\backslash \{P,Q\}. \label{eq:EdgeExchange_3}
\end{align}
\end{lemma}
\begin{proof}
Assume that $P\in \mathcal{P}_{l}$ and $Q\in \mathcal{P}_{l-r}$, where $l\geq r\geq 1$.
We claim that there exist
\begin{itemize}
\item[\textendash] a sequence $S_0,\ldots,S_r$ of ancestors of $P$ such that $S_0=P$, $S_r=Q$ and $S_k\in \mathcal{P}_{l-k}$ for $k=0,\ldots,r$;
\item[\textendash] a sequence $T_0,\ldots,T_r$ of spanning trees obtained from $T$ such that $T_0=T$ and
\begin{equation}
\label{eq:SpanningTreeConstruction}
T_{k+1}=T_{k}+e'_{k}-e_{k+1},
\end{equation}
where $e_k, e'_k\in S_k$ for $k=0,\ldots,r$.
\end{itemize}
It follows that
\begin{equation}
T_{k+1}=T_0+e'_0-\sum_{i=1}^{k} (e_i-e'_i)-e_{k+1}.
\end{equation}
Notice that $\lvert T_{k+1} \cap S_0 \rvert=\lvert T_0\cap S_0 \rvert+1$,
$\lvert T_{k+1} \cap S_{k+1} \rvert=\lvert T_0\cap S_{k+1} \rvert-1$,
and $\lvert T_{k+1} \cap S \rvert=\lvert T_0\cap S \rvert$ for any $S\in \mathcal{P}\backslash \{S_0,S_{k+1}\}$.
Lemma \ref{thm:TightSpanningTree} implies that $T_r$ is a spanning tree satisfying \eqref{eq:EdgeExchange_1}-\eqref{eq:EdgeExchange_3} in $\mathcal{T}\backslash \mathcal{T}_0$.
The sequence $S_0,\ldots,S_r$ sets a pathway for edge exchange operations in \eqref{eq:SpanningTreeConstruction}, which can be identified as follows.
Start with $S_r=Q$ and work backwards.
Suppose $S_{k+1}\in \mathcal{P}_{l-k-1}$ has been identified.
If $S_{k+1}$ is a parent of an ancestor $R\in \mathcal{P}_{l-k}$ of $P$, then let $S_{k}=R$.
Otherwise, there exist two ancestors $R_1,R_2\in \mathcal{P}_{l-k}$ of $P$ such that
$\hat{G}^{(l-k-1)}[R_1]$ and $\hat{G}^{(l-k-1)}[R_2]$ share no common vertex but $\hat{G}^{(l-k)}[R_1]$ and $\hat{G}^{(l-k)}[R_2]$ share a common vertex $s_{k+1}$ which is the image of $S_{k+1}$ in $\hat{G}^{(l-k)}$.
Then let $S_k$ be any one of $R_1$ and $R_2$, say $S_k=R_1$.
Repeat this process until $S_0=P$.
Denote by $\mathcal{S}$ the set of $S_0,\ldots,S_r$.
It remains to show how to perform edge exchange operations in \eqref{eq:SpanningTreeConstruction}.
Let $\hat{G}_k$ be the graph obtained from $G$ by contracting all edges in prime sets of level less than that of $S_k$ and all edges in other prime sets of the same level as $S_k$.
Let $s_{k+1}$ denote the image of $S_{k+1}$ in $\hat{G}_{k}$.
Clearly, $s_{k+1}$ is a vertex in $\hat{G}_{k} [S_k]$.
Let $T_k\in \mathcal{T}$ be a spanning tree constructed from $T_{k-1}$ by \eqref{eq:SpanningTreeConstruction}.
It follows that $T_k\cap S_{i}=T\cap S_{i}$ for $i=k+1,\ldots,r$ and $T_k\cap S=T\cap S$ for $S\in \mathcal{P}\backslash \mathcal{S}$.
Let $\hat{G}_{k+1} [T_k]$ denote the edge-induced subgraph of $\hat{G}_{k+1}$ on the common edges of $\hat{G}_{k+1}$ and $T_k$.
Lemma \ref{thm:TightSpanningTree} implies that $\hat{G}_{k+1} [T_k]$ is a spanning tree of $\hat{G}_{k+1}$.
In particular, $\hat{G}_{k+1} [T_k\cap S_{k+1}]$ is a spanning tree of $\hat{G}_{k+1} [S_{k+1}]$.
To construct $T_{k+1}$ from $T_{k}$, we distinguish two cases based on whether $S_{k+1}$ is a parent of $S_{k}$.
First assume that $S_{k+1}$ is a parent of $S_{k}$.
Then there exist edges from $S_{k}$ incident to two distinct vertices $u_1,u_2\in V(\hat{G}_{k+1}[S_{k+1}])$ in $\hat{G}_{k+1}$.
To construct $T_{k+1}$ from $T_{k}$ by \eqref{eq:SpanningTreeConstruction},
we concentrate on $\hat{G}_{k+1}$ and further distinguish two cases on edges in $T_k\cap S_k$.
\begin{figure}[!htb]
\centering
\includegraphics[width=1\textwidth]{fig_EdgeExchange}
\caption{The dashed line denotes the edge $e'_k$ added to $T_k$. The dash-dotted line denotes the path in $T_k$ avoiding edges in $S_k$.}
\label{fig:EdgeExchange}
\end{figure}
\begin{itemize}
\item Edges from $T_{k}\cap S_{k}$ are only incident to one vertex, say $u_1$, of $\hat{G}_{k+1} [S_{k+1}]$ (cf., left graph in Figure \ref{fig:EdgeExchange}).
Then there exists an edge $e'_k\in S_k$ incident to $u_2$.
Since $\hat{G}_{k+1} [T_k]$ is a spanning tree of $\hat{G}_{k+1}$, adding $e'_k$ to $T_k$ creates a cycle involving edges in $\hat{G}_{k+1} [T_k\cap S_{k+1}]$.
Remove an edge $e_{k+1}$ from $\hat{G}_{k+1} [T_k\cap S_{k+1}]$ to break the cycle and denote the new tree by $T_{k+1}$.
Thus we have $T_{k+1}=T_k+e'_k-e_{k+1}$.
\item Edges from $T_k\cap S_k$ are incident to more than one vertex in $\hat{G}_{k+1}[S_{k+1}]$ (cf., middle graph in Figure \ref{fig:EdgeExchange}).
For $i=1,2$,
let $f_i \in T_k\cap S_{k}$ be an edge incident to $u_i\in V(\hat{G}_{k+1}[S_{k+1}])$,
and $v_i$ be the other endpoint of $f_i$.
Since $\hat{G}_{k+1}[T_k \cap S_{k+1}]$ is a tree, $v_1$ and $v_2$ are distinct vertices in $\hat{G}_{k+1} [T_k]$.
For $i=1,2$, let $U_i$ be the set of vertices in $\hat{G}_{k+1}[S_{k}]$ that are connected to $u_i$ with edges in $T_{k}$.
Clearly, $v_1\in U_1$ and $v_2\in U_2$.
Now consider $\hat{G}_{k}$.
Since $\hat{G}_{k+1} [T_{k}]$ is a spanning tree of $\hat{G}_{k+1}$, $\hat{G}_{k}[T_k \cap S_k]$ is a spanning tree of $\hat{G}_{k} [S_k]$.
Notice that $\hat{G}_{k} [T_{k}]$ is a spanning tree of $\hat{G}_{k}[\cup_{i=1}^{k} S_i]$.
It follows that $U_1\cup U_2\cup\{s_{k+1}\}=V(\hat{G}_{k}[S_{k}])$.
Notice that $\hat{G}_{k}[S_{k}]$ is a minimal densest subgraph of $\hat{G}_{k}$.
By Lemma \ref{thm:MinimalDensestSubgraphCutVertex}, there is a crossing edge $e'_k\in S_k$ between $U_1$ and $U_2$ in $\hat{G}_{k} [S_k]$.
Since $\hat{G}_{k+1} [T_k]$ is a spanning tree of $\hat{G}_{k+1}$, adding $e'_{k}$ to $T_{k}$ creates a cycle involving edges in $\hat{G}_{k+1}[T_k\cap S_{k+1}]$.
Remove an edge $e_{k+1}$ from $\hat{G}_{k+1} [T_k\cap S_{k+1}]$ to break the cycle and denote the new tree by $T_{k+1}$.
Thus we have $T_{k+1}=T_k+e'_k-e_{k+1}$.
\end{itemize}
Now assume that $S_{k+1}$ is not a parent of $S_{k}$ where $k\geq 1$ (cf., right graph in Figure \ref{fig:EdgeExchange}).
Then there exists another ancestor $S'_k\in \mathcal{P}_{l-k}$ of $S_0$ such that $\hat{G}^{(l-k-1)}[S_k]$ and $\hat{G}^{(l-k-1)}[S'_k]$ share no common vertex but $\hat{G}^{(l-k)}[S_k]$ and $\hat{G}^{(l-k)}[S'_k]$ share a common vertex $s_{k+1}$ which is the image of $S_{k+1}$.
Now consider $\hat{G}_{k+1}$.
Notice that $\hat{G}_{k+1} [S_{k+1}]$, $\hat{G}_{k+1}[S_k]$ and $\hat{G}_{k+1} [S'_k]$ are all minimal densest subgraphs in $\hat{G}_{k+1}$.
Moreover, $\hat{G}_{k+1} [S_{k+1}]$ shares a common vertex $u$ with $\hat{G}_{k+1}[S_k]$ and shares a common vertex $u'$ with $\hat{G}_{k+1}[S'_k]$ respectively.
Clearly, $u\not=u'$.
Since $\lvert T_k\cap S_k \rvert =\lvert T_0\cap S_k\rvert -1 = n(S_k)-2$, $\hat{G}_{k+1}[T_k \cap S_k]$ is not connected.
Notice that $\hat{G}_{k+1}[T_k]$ is a spanning tree of $\hat{G}_{k+1}$.
Let $U$ and $U'$ be the set of vertices in $\hat{G}_{k+1} [S_k]$ that are connected to $u$ and $u'$ respectively in $\hat{G}_{k+1} [T_k]$.
Hence $U$ and $U'$ form a nontrivial bipartition of $V(\hat{G}_{k+1}[S_k])$.
Then there is a crossing edge $e'_k\in S_k$ between $U$ and $U'$ in $\hat{G}_{k+1}[S_k]$.
Since $\hat{G}_{k+1} [T_k]$ is a spanning tree of $\hat{G}_{k+1}$, adding $e'_{k}$ to $T_{k}$ creates a cycle involving edges in $\hat{G}_{k+1}[T_k\cap S_{k+1}]$.
Remove an edge $e_{k+1}$ from $\hat{G}_{k+1} [T_k\cap S_{k+1}]$ to break the cycle and denote the new tree by $T_{k+1}$.
Thus we have $T_{k+1}=T_k+e'_k-e_{k+1}$.
\end{proof}
\begin{lemma}
\label{thm:EquivalenceSufficiency}
Let $\boldsymbol{x}\in \mathcal{C}(\Gamma_G)$ be a vector satisfying \eqref{eq:Nucleolus_LP2v2_2}-\eqref{eq:Nucleolus_LP2v2_4} and $\boldsymbol{y}\in \mathbb{R}^{\mathcal{E}}$ be the vector defined from $\boldsymbol{x}$.
If $\boldsymbol{x}$ satisfies \eqref{eq:Nucleolus_LP2v2_1},
then $\boldsymbol{y}$ satisfies \eqref{eq:Nucleolus_LP2v3_1}.
\end{lemma}
\begin{proof}
Let $P,Q\in \mathcal{P}$ be a pair of prime sets with $P\prec Q$, and let $T\in \mathcal{T}_0$ be a spanning tree.
Lemma \ref{thm:EdgeExchange} implies that there exists a spanning tree $T'\in \mathcal{T}\backslash \mathcal{T}_0$ such that $\lvert T'\cap P\rvert=\lvert T\cap P\rvert +1$,
$\lvert T'\cap Q\rvert=\lvert T\cap Q\rvert -1$, and
$\lvert T'\cap R\rvert=\lvert T\cap R\rvert$ for any $R\in \mathcal{P}\backslash \{P,Q\}$.
It follows that
\begin{align*}
x(T')
&=\lvert T'\cap P\rvert \cdot y_P +\lvert T'\cap Q\rvert \cdot y_Q + \sum_{R\in \mathcal{P}\backslash \{P,Q\}} \lvert T'\cap R\rvert \cdot y_R\\
&=(\lvert T\cap P\rvert + 1) \cdot y_P + (\lvert T\cap Q\rvert - 1) \cdot y_Q + \sum_{R\in \mathcal{P}\backslash \{P,Q\}} \lvert T\cap R\rvert \cdot y_R\\
&= x(T)+y_P-y_Q.
\end{align*}
Since $T\in \mathcal{T}_0$ and $T'\in \mathcal{T}\backslash \mathcal{T}_0$, we have $x(T)=1$ and $x(T')+\epsilon\leq 1$.
Hence $y_P + \epsilon \leq y_Q$ follows.
\end{proof}
Now we show that if $\boldsymbol{y}\in \mathbb{R}^{\mathcal{E}}$ is a feasible solution of $LP''_2$, then its associated vector $\boldsymbol{x}\in \mathcal{C}(\Gamma_G)$ satisfies \eqref{eq:Nucleolus_LP2v2_1} and hence is a feasible solution of $LP'_2$.
Our proof is based on the idea that a spanning tree in $\mathcal{T}_0$ can be constructed from any spanning tree outside $\mathcal{T}_0$ by repeatedly performing edge exchanges between a prime set and the non-prime set or between a prime set and another prime set of higher level.
To this end, we need the following lemma.
\begin{lemma}
\label{thm:MinimalViolatingPrimeSet}
Let $T$ be a spanning tree in $\mathcal{T}\backslash \mathcal{T}_0$.
Among prime sets that violate \eqref{eq:TightSpanningTree} with $T$,
let $P$ be a prime set of minimum level.
Then we have $\lvert T\cap P\rvert < n (P)-1$.
\end{lemma}
\begin{proof}
\begin{figure}[hbt!]
\centering
\includegraphics[width=0.825\textwidth]{fig_CycleExpansion}
\caption{A cycle $C'$ in $\hat{G}^{(k-1)}[T]$ is constructed from a cycle $C$ in $\hat{G}^{(k)}[T]$.}
\label{fig:CycleExpansion}
\vspace{-.5em}
\end{figure}
Let $T$ be a spanning tree in $\mathcal{T}\backslash \mathcal{T}_0$ and $k$ be the minimum level of prime sets that violate \eqref{eq:TightSpanningTree} with $T$.
For any prime set $P\in \mathcal{P}_{k}$ that violates \eqref{eq:TightSpanningTree} with $T$, we show that $\lvert T\cap P\rvert > n (P)-1$ is impossible.
Assume to the contrary that $\lvert T\cap P\rvert > n (P)-1$.
It follows that $\hat{G}^{(k)} [T\cap P]$ contains a cycle consisting of edges from $T$.
We apply induction on $k$ to show that a cycle in $\hat{G}^{(k)} [T]$ implies a cycle in $T$, which is absurd.
It is trivial for $k=0$, since $\hat{G}^{(0)} [T]=T$.
Now assume that $k\geq 1$ and that all prime sets of level less than $k$ satisfy \eqref{eq:TightSpanningTree} with $T$.
Hence $\hat{G}^{(k-1)} [T]$ is a tree.
Let $C$ be a cycle in $\hat{G}^{(k)} [T]$ (cf. left graph in Figure \ref{fig:CycleExpansion}).
It follows that $C$ contains images of prime sets of level $k-1$.
Let $q_1,\ldots,q_s$ be the images of prime sets of level $k-1$ in $C$ which appear in a clockwise order along $C$.
Denote by $e_i$ and $f_i$ the two edges incident to $q_i$ in $C$ and denote by $Q_i$ the union of prime sets of level $k-1$ with image $q_i$, for $i=1,\ldots,s$.
Hence $e_i$ and $f_i$ are incident to two distinct vertices $u_i$ and $w_i$ in $\hat{G}^{(k-1)} [Q_i]$ respectively.
By assumption, $\lvert T\cap Q\rvert = n (Q)-1$ for any $Q\in \mathcal{P}_{k-1}$.
It follows that $\hat{G}^{(k-1)} [T\cap Q_i]$ is a tree.
Let $p_i$ denote the unique $u_i$-$w_i$ path in $\hat{G}^{(k-1)} [T\cap Q_i]$.
Now inserting the $u_i$-$w_i$ path $p_i$ between $e_i$ and $f_i$ in $C$ for $i=1,\ldots,s$ creates a cycle $C'$ in $\hat{G}^{(k-1)} [T]$ (cf. right graph in Figure \ref{fig:CycleExpansion}).
However, this contradicts the acyclicity of $\hat{G}^{(k-1)} [T]$.
\end{proof}
\begin{lemma}
\label{thm:EquivalenceNecessity}
Let $\boldsymbol{x}\in \mathcal{C}(\Gamma_G)$ be a vector satisfying \eqref{eq:Nucleolus_LP2v2_2}-\eqref{eq:Nucleolus_LP2v2_4} and $\boldsymbol{y}\in \mathbb{R}^{\mathcal{E}}$ be the vector defined from $\boldsymbol{x}$.
If $\boldsymbol{y}$ satisfies \eqref{eq:Nucleolus_LP2v3_1},
then $\boldsymbol{x}$ satisfies \eqref{eq:Nucleolus_LP2v2_1}.
\end{lemma}
\begin{proof}
Let $T\in \mathcal{T}\backslash \mathcal{T}_0$.
Among all prime sets that violate \eqref{eq:TightSpanningTree} with $T$, let $P\in \mathcal{P}_{k}$ be a prime set of minimum level, where $k\geq 0$.
Since every prime set of level less than $k$ satisfies \eqref{eq:TightSpanningTree} with $T$, $\hat{G}^{(k)}[T]$ is a spanning tree of $\hat{G}^{(k)}$.
Lemma \ref{thm:MinimalViolatingPrimeSet} implies that $\lvert T\cap P\rvert<n (P)-1$.
It follows that $\hat{G}^{(k)}[T\cap P]$ is not connected and there exists an edge $e'$ in $P\backslash T$ that joins two components of $\hat{G}^{(k)} [T\cap P]$.
Moreover, $e'$ joins two non-adjacent vertices of $\hat{G}^{(k)} [T]$.
Hence adding $e'$ to $\hat{G}^{(k)}[T]$ creates a cycle $C$.
As we shall see, $C$ involves edges either from the non-prime set $E_0$ or from a prime set of level greater than $k$.
Now we show that a new spanning tree $T'\in \mathcal{T}$ can be constructed from $T$ such that $x(T)\leq x(T')-\epsilon$ with an edge exchange operation.
We distinguish two cases.
\begin{itemize}
\item $C\cap E_0\not=\emptyset$.
Remove an edge $e$ from $C\cap E_0$ to break the cycle $C$ and denote the new tree by $T'$.
Hence $T'=T-e+e'$ where $e\in C\cap E_0$ and $e'\in P\backslash T$.
Since $x_e=0$ and $x_{e'}\geq \epsilon$, it follows that $x(T)=x(T')+x_e-x_{e'}\leq x(T')-\epsilon$.
\item $C\cap E_0=\emptyset$.
It follows that $C$ is a cycle in a component of $\hat{G}^{(k)}-E_0$.
Lemmas \ref{thm:ArboricityPreservingContraction} and \ref{thm:PrimeComponent} imply that every component of $\hat{G}^{(k)}-E_0$ is a densest subgraph of $\hat{G}^{(k)}$.
By Lemma \ref{thm:MDS_CommonVertex},
$C$ involves edges from prime sets of level higher than $k$, since otherwise there are minimal densest subgraphs in $\hat{G}^{(k)}$ which are pairwise connected along the cycle $C$ and the number of common vertices violates Lemma \ref{thm:MDS_CommonVertex}.
Let $Q\in \mathcal{P}_{l}$ where $l>k$ be a prime set of the largest level that intersects $C$.
We claim that $\hat{G}^{(l)}[Q\cap C]$ is a cycle.
To see this, consider $\hat{G}^{(l)}[C]$ which is the edge-induced subgraph of $\hat{G}^{(l)}$ on the common edges of $\hat{G}^{(l)}$ and $C\subseteq \hat{G}^{(k)}$.
If there exists a cycle $C'$ in $\hat{G}^{(l)}[C]$ involving more than one prime set of level $l$, then their defining minimal densest subgraphs are pairwise connected along the cycle $C'$ and the number of common vertices violates Lemma \ref{thm:MDS_CommonVertex}.
Hence the claim follows.
It follows that $Q\prec P$.
To see this, we apply induction on $l-k$.
If $l=k+1$, then two edges in $C\cap Q$ become incident (or share one more common vertex) at the image $v_P$ of $P$ in $\hat{G}^{(l)}$.
Thus $P$ is a parent of $Q$ and $Q\prec P$ follows.
Now assume that $l>k+1$.
Let $C'$ be the cycle in $\hat{G}^{(k+1)}$ consisting of edges from $C$ and involving edges in $Q$.
For any prime set $R\in \mathcal{P}_{k+1}$ that intersects $C'$, $R\prec P$ implies $Q\prec P$ inductively.
Hence assume that $P$ is not a parent of any prime set of level $k+1$ that intersects $C'$.
Let $v_P$ be the image of $P$ in $\hat{G}^{(k+1)}$.
Then $v_P$ is a vertex in $C'$ which concatenates two minimal densest subgraphs of $\hat{G}^{(k+1)}$ involving edges of $C'$.
There exists
a prime set $R\in \mathcal{P}_{r}$ where $k+1<r\leq l$ such that two edges in $R\cap C'$ become incident (or share one more common vertex) in $\hat{G}^{(r)}$,
and a cycle $C''$ in $\hat{G}^{(r)}$ consisting of edges from $C'$ and involving edges from $Q$ and $R$.
Further assume that the prime set $R$ introduced above is of minimum level.
Hence $P$ is a parent of $R$.
If $Q=R$, then $Q\prec P$ follows directly.
Otherwise, $Q\prec R$ follows inductively.
Thus in either case, we have $Q\prec P$.
Remove an edge $e$ in $C\cap Q$ to break the cycle $C$ and denote the new tree by $T'$.
Then $T'=T-e+e'$ where $e\in C\cap Q$ and $e'\in P\backslash T$.
Since $Q\prec P$, we have $y_Q+\epsilon\leq y_P$ which implies $x_{e}+\epsilon\leq x_{e'}$.
It follows that $x(T)=x(T')+x_{e}-x_{e'} \leq x(T')-\epsilon$.
\end{itemize}
Hence a new spanning tree $T'$ can be constructed from $T$ such that $x(T)\leq x(T')-\epsilon$ with an edge exchange operation.
Now we consider $T'$.
If $T'\not\in \mathcal{T}_0$, then among all prime sets that violate \eqref{eq:TightSpanningTree} with $T'$, let $P'$ be one of minimum level.
By Lemma \ref{thm:MinimalViolatingPrimeSet}, $\lvert T'\cap P'\rvert<n (P')-1$ follows again.
Denote $T$ by $T_0$ and $T'$ by $T_1$.
Then repeating the process that constructs $T_1$ from $T_0$ yields a sequence $T_1,\ldots, T_k\in \mathcal{T}$ of spanning trees until the last tree $T_k$ appears in $\mathcal{T}_0$.
And we have $x(T_i)\leq x(T_{i+1})-\epsilon$ for $i=1,\ldots,k-1$.
This sequence ends after finitely many steps because each time an edge exchange operation is performed between a prime set and the non-prime set or between a prime set and another prime set of a higher level.
This sequence ends with a spanning tree in $\mathcal{T}_0$ because each time an edge is added to the prime set of minimum level that violates \eqref{eq:TightSpanningTree}.
Finally, $T_k\in \mathcal{T}_0$ implies that
\begin{equation}
x(T)=x(T_0)\leq x(T_k)-k\epsilon=1-k\epsilon\leq 1-\epsilon,
\end{equation}
where the last inequality follows from the fact that $k\geq 1$ and $\epsilon>0$.
\end{proof}
\subsection{A combinatorial algorithm for $LP''_2$}
\label{sec:CombinatorialAlgorithm}
The following lemma reveals how to solve $LP''_2$.
\begin{lemma}
\label{thm:TightConstraint}
Let $(\boldsymbol{y}^*,\epsilon^*)$ be an optimal solution of $LP''_2$.
Then for each prime set $P\in \mathcal{P}$, either \eqref{eq:Nucleolus_LP2v3_1} or \eqref{eq:Nucleolus_LP2v3_3} is tight for $(\boldsymbol{y}^*,\epsilon^*)$.
\end{lemma}
\begin{proof}
Assume to the contrary that neither \eqref{eq:Nucleolus_LP2v3_1} nor \eqref{eq:Nucleolus_LP2v3_3} is tight for some prime set $P_0 \in \mathcal{P}$.
For a constant $\delta>0$ small enough, define $\boldsymbol{y}^{\star}$ by $y^{\star}_{P_0}=y^*_{P_0}-\delta$ and $y^{\star}_{P}=y^*_{P}$ for any $P\in \mathcal{P}\backslash \{P_0\}$.
Then $(\boldsymbol{y}^{\star},\epsilon^*)$ satisfies \eqref{eq:Nucleolus_LP2v3_1}, \eqref{eq:Nucleolus_LP2v3_3} and \eqref{eq:Nucleolus_LP2v3_4}, but $\sum_{P\in \mathcal{P}} [n (P)-1] y^{\star}_P<1$.
Hence $(\boldsymbol{y}^{\star},\epsilon^*)$ can be scaled up with a constant $\theta>1$ such that $(\theta \boldsymbol{y}^{\star},\theta \epsilon^*)$ satisfies \eqref{eq:Nucleolus_LP2v3_1}-\eqref{eq:Nucleolus_LP2v3_4}.
However, this contradicts the optimality of $(\boldsymbol{y}^*,\epsilon^*)$.
\end{proof}
Based on the lemma above, we derive a combinatorial algorithm for solving $LP''_2$.
\begin{algorithm}[H]
\renewcommand{\thealgorithm}{}
\setstretch{1.15}
\caption{A combinatorial algorithm for $LP''_2$}
\label{alg}
\begin{algorithmic}[1]
\State $k=0$
\While{$\mathcal{P}\not=\emptyset$}
\State $k \gets k+1$
\State $\mathcal{P}_{\min} \gets $ the set of minimal prime sets in $(\mathcal{P},\prec)$
\State $y_P \gets k \epsilon$ for any $P\in \mathcal{P}_{\min}$
\State $\mathcal{P} \gets \mathcal{P}\backslash \mathcal{P}_{\min}$
\EndWhile
\State Since $y_P=k_P \epsilon$ where $k_P$ is an integer for any $P\in \mathcal{P}$, solving for $\epsilon$ in \eqref{eq:Nucleolus_LP2v3_2} gives the unique optimal solution of $LP''_2$.
\end{algorithmic}
\end{algorithm}
The algorithm above implies that $LP''_2$ has a unique optimal solution, which yields the nucleolus of $\Gamma_G$.
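To make the last step explicit, if $k_P$ denotes the iteration in which the prime set $P$ is removed in the algorithm above, then $y_P=k_P\epsilon$ for every $P\in \mathcal{P}$, and \eqref{eq:Nucleolus_LP2v3_2} forces
\begin{equation*}
\epsilon^*=\frac{1}{\sum_{P\in \mathcal{P}} [n (P)-1]\, k_P},\qquad y^*_P=k_P\,\epsilon^*.
\end{equation*}
The following sketch (in Python, for illustration only and not part of the formal description above) assumes that the prime partition and the ancestor relation have already been computed; the input names \texttt{n\_P} (mapping each prime set to $n(P)$) and \texttt{prec} (the set of pairs $(P,Q)$ with $P\prec Q$) are ours and are not notation used elsewhere in this paper.
\begin{verbatim}
# Illustrative sketch of the combinatorial algorithm for LP''_2.
# Assumed inputs (hypothetical names):
#   n_P : dict mapping each prime set P to n(P)
#   prec: set of ordered pairs (P, Q) with P < Q in the poset (P, <)
def solve_lp2(n_P, prec):
    remaining = set(n_P)
    k_of = {}                      # k_of[P] = iteration in which P is removed
    k = 0
    while remaining:
        k += 1
        # minimal prime sets: no remaining Q with Q < P
        minimal = {P for P in remaining
                   if not any((Q, P) in prec for Q in remaining)}
        for P in minimal:
            k_of[P] = k            # y_P = k * eps
        remaining -= minimal
    # constraint (eq:Nucleolus_LP2v3_2): sum_P [n(P) - 1] * k_of[P] * eps = 1
    eps = 1.0 / sum((n_P[P] - 1) * k_of[P] for P in n_P)
    y = {P: k_of[P] * eps for P in n_P}
    return y, eps                  # y_{E_0} = 0 is handled separately
\end{verbatim}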
Now we are ready to present our main result.
\begin{theorem}
\label{thm:Nucleolus}
Let $\Gamma_G=(N,\gamma)$ be an arboricity game with a nonempty core.
The nucleolus of $\Gamma_G$ can be computed in $O(n^4 m \log\frac{n^2}{m})$.
\end{theorem}
\begin{proof}
The prime partition can be computed in $O(n^4 m \log\frac{n^2}{m})$.
The ancestor relation of prime sets can be determined in $O(n^2 m)$.
The algorithm above takes $O(n)$ iterations and each iteration requires $O(n^2)$ time to determine the minimal prime sets in the remaining partially ordered set.
Hence the algorithm above ends in $O(n^3)$.
Notice that the prime partition computation dominates the computing time of all other parts.
Thus the nucleolus can be computed in $O(n^4 m \log\frac{n^2}{m})$.
\end{proof}
\section{Concluding remarks}
\label{sec:Conclusion}
This paper provides an efficient algorithm for computing the nucleolus of arboricity games when the core is not empty.
Notice that a variant of the arboricity game arises when the cost of each coalition is defined by fractional arboricity instead of arboricity.
Despite a new cost function, our algorithm for computing the nucleolus remains valid since the variant always has a nonempty core.
This paper also offers a graph decomposition built on the densest subgraph lattice.
The prime partition decomposes the edge set of a graph into a non-prime set and a number of prime sets, where prime sets correspond to minimal densest minors.
Notice that the non-prime set can be further decomposed following the same procedure for defining prime sets.
Therefore, the prime partition indeed provides a hierarchical graph decomposition analogous to the core decomposition.
\section*{Acknowledgments}
We are grateful to all reviewers for their comments and suggestions, which greatly improved the presentation of this work.
The first author is supported in part by the National Natural Science Foundation of China (No.\,12001507) and the Natural Science Foundation of Shandong (No.\,ZR2020QA024).
The second author is supported in part by the National Natural Science Foundation of China (Nos.\,11871442 and 11971447).
\bibliographystyle{abbrv}
\section*{Introduction}
This article stems from the exploration of the remark of F. Charles in \cite{FC-ISV} that the naive definition of irreducible symplectic varieties in positive characteristic is not satisfactory. We give a concrete example displaying one rather undesirable phenomenon: the resulting class of varieties is \textbf{not} deformation closed. The class of examples we study occurs only in characteristic 2, but we expect to find similar classes of examples in higher dimensions once more of the birational classification of varieties in positive characteristic is understood (cf. Remark \ref{charneq2}).
The naive definition of (irreducible) symplectic varieties in arbitrary characteristic is given as:
\begin{definition}\cite[Definition 3.1]{FuLi} \label{ISV}
A smooth projective variety $X$ over an algebraically closed field $k$ is said to be \textbf{symplectic} if its \'etale fundamental group is zero and it has a nowhere vanishing global two form $\sigma \in H^0(X, \Omega^2)$. A symplectic variety is said to be \textbf{irreducible symplectic} if it is a symplectic variety and the nowhere vanishing global two form $\sigma$ generates $H^0(X, \Omega^2)$, i.e., $\dim H^0(X, \Omega^2) =1 $.
\end{definition}
The existence of a nowhere vanishing global two form $\sigma$ implies that the canonical bundle is trivial and, in case char $k \neq 2$, that the dimension of $X$ is even. This is because, in characteristic 2, a skew-symmetric $2$-form on an odd-dimensional space need not be singular.
\textbf{Question:} Do there exist odd-dimensional irreducible symplectic varieties over a field of characteristic 2?
\begin{remark}
Over $\C$, the analytification of a smooth projective variety gives us a K\"ahler manifold, and then being irreducible symplectic is equivalent to being a Hyperk\"ahler manifold. Indeed, first note that the \'etale fundamental group being trivial is equivalent to the topological fundamental group of the analytification being zero, as
$$
\pi_{et}(X)=\widehat{\pi_{top}(X^{an})},
$$
where $\hat{-}$ denotes profinite completion, see \cite[Remark I.5.1 (c)]{Milne80}. Thus, $\pi_{top}(X^{an}) = 0 \Rightarrow \pi_{et}(X) =0$. For the other side, as our varieties have $c_1(X) = 0$, the Beauville-Bogomolov theorem \cite{BB} implies that we have
$$
0 \ra \Z^k \ra \pi_{top}(X) \ra G \ra 0,
$$
where $G$ is a finite group. Since profinite completion is right exact and $\hat{G} = G$, see \cite{RZ}, we have
$$
\widehat{\Z^k} \ra \pi_{et}(X) \ra G \ra 0.
$$
Now, $\pi_{et}(X) = 0 \Rightarrow G=0$, which implies $\pi_{top}(X)= \Z^k$, but taking the profinite completion implies that $k =0$. Thus $\pi_{top}(X) = 0$. Now the equivalence of the two definitions can be seen using Yau's proof of the Calabi conjecture and the Beauville-Bogomolov decomposition; see \cite{HuyCHM} for details.
\end{remark}
Moreover, over $\C$, the condition $c_1(X) =0$ is a deformation invariant, as are being simply connected and having a non-degenerate two form. The latter two facts are easy to see using Ehresmann's fibration theorem.
The main result of this article is
\begin{proposition}[Propositions \ref{SSEsurface}, \ref{fund}, \ref{HN}]
Let $X$ be a supersingular Enriques surface over an algebraically closed field $k$ of characteristic $2$; then it is an irreducible symplectic variety. Moreover, $Hilb^n(X)$ for $n \geq 2$ is a symplectic variety which is not irreducible symplectic.
\end{proposition}
The article is organized as follows: In Section 1, we recall basic facts about Enriques surfaces and note that in characteristic 2, supersingular Enriques surfaces are indeed irreducible symplectic varieties as defined above in Definition \ref{ISV}, and in Section 2, we begin by recalling the properties of the Hilbert scheme of points on a surface and show that in the case of a supersingular Enriques surface, the Hilbert scheme has the desired properties as claimed above.
\section{Enriques Surface}
We refer to the classification of surfaces by Bombieri-Mumford \cite{BMIII} for details.
\begin{definition}
An \textbf{Enriques surface} is a smooth and proper surface $X$ of finite type over an algebraically closed field $k$ of characteristic $p \geq 0$ such that
$$
\omega_X \sim_{num} \O_X \ \text{and} \ b_2(X) =10,
$$
where $\sim_{num}$ denotes numerical equivalence and $b_i$ denotes the $i$-th \'etale or crystalline Betti number.
\end{definition}
In char $k \neq 2$, these surfaces have the following invariants
\[\begin{matrix}
\omega_X \not\cong \O_X & \omega_X^2 \cong \O_X & p_g= h^0(X, \omega_X)=0 & h^{0,1}= 0\\
\chi(\O_X) = 1 & c_2 = 12 & b_1 = 0 & b_2 = 10.
\end{matrix}\]
And the Hodge diamond is (char $\neq 2$):
\[\begin{matrix}
& & h^{2,2} & &\\
& h^{1,2}& &h^{2,1} &\\
h^{0,2} & & h^{1,1} & &h^{2,0} \\
& h^{0,1}& &h^{1,0} & \\
& & h^{0,0} & &
\end{matrix}
\ := \
\begin{matrix}
& & 1 & &\\
& 0& &0 &\\
0 & & 10 & &0 \\
& 0& &0 & \\
& & 1 & &
\end{matrix}\]
As $\omega_X^2 \cong \O_X$, this defines an \'etale double cover $\tilde{X} \ra X$, where $\tilde{X}$ is a K3 surface. The existence of a K3 double cover implies $\pi_{et}(X) = \Z/2\Z$ as $\pi_{et}(\tilde{X})= 0$.
In case the characteristic of the ground field is 2, we have the following invariants of Enriques surfaces:
\[\begin{matrix}
\omega_X \sim_{num} \O_X & \chi(\O_X) = 1 & c_2 = 12 & b_1 = 0 & b_2 = 10.
\end{matrix}\]
However, in char $k = 2$, Hodge symmetry can fail, i.e., $h^{0,1} = 1$ while $h^{1,0}= 0$. Moreover, $\omega_X \not\cong \O_X$ if and only if $h^{0,1}=0$. In case $h^{0,1}=1$, we can study the action of Frobenius on $H^1(X, \O_X)$, which is either zero or bijective; this leads to the following definition and characterization of Enriques surfaces in characteristic 2:
\begin{definition}
An Enriques surface $X$ is called
\begin{enumerate}
\item \textbf{Classical} if $h^1(\O_X) = 0$, hence $\omega_X \not\cong \O_X$ and $\omega_X^2 \cong \O_X$.
\item \textbf{1-ordinary/Singular} if $h^1(\O_X) = 1$, hence $\omega_X \cong \O_X$, and Frobenius is bijective on $H^1(X, \O_X)$.
\item \textbf{Supersingular} if $h^1(\O_X) = 1$, hence $\omega_X \cong \O_X$, and Frobenius is zero on $H^1(X, \O_X)$.
\end{enumerate}
\end{definition}
The following theorem gives us the fundamental group of the three kinds of Enriques surfaces:
\begin{theorem}[Bombieri-Mumford]
Let $X$ be an Enriques surface with char $k = 2$. Then $X$ admits a canonical covering space $\pi: \tilde{X} \ra X$ of degree $2$, whose structure group is
\begin{enumerate}
\item $\mu_2$ if $X$ is classical $\Rightarrow \pi_{et}(X) = 0$
\item $\Z/2\Z$ if $X$ is 1-ordinary/singular $\Rightarrow \pi_{et}(X) = \Z/2\Z$
\item $\alpha_2$ if $X$ is supersingular $\Rightarrow \pi_{et}(X) = 0$.
\end{enumerate}
\end{theorem}
Now we are ready to note that supersingular Enriques surfaces are irreducible symplectic varieties.
\begin{proposition} \label{SSEsurface}
A supersingular Enriques surface is an irreducible symplectic variety over an algebraically closed field of characteristic $p =2$, and it does not lift to any irreducible symplectic variety in characteristic 0; even more, it lifts to a variety which has $\omega_X \not\cong \O_X$.
\end{proposition}
\begin{proof}
In \cite[Theorem 4.10]{Lie15}, Liedtke has shown that every Enriques surface lifts to characteristic zero, see Remark \ref{Enriqueslift} below. \\
Now, all we have to do is note that for a supersingular Enriques surface we have $\omega_X \cong \O_X$, which is equivalent to having $H^0(X, \Omega_X^2) = k \sigma$, where $\sigma$ is a nowhere degenerate 2-form (since we are in $\dim 2$), and that $\pi_{et}(X)=0$; thus we are done.
\end{proof}
\begin{remark} \label{Enriqueslift}
Note that in \cite[Theorem 4.10]{Lie15} Liedtke constructs the lift of a supersingular Enriques surface only as an algebraic space over a possibly ramified extension of $W(k)$, the Witt ring of $k$, but proves that all such lifts are projective, which implies that they are indeed just schemes; this follows from \cite[II.6.16]{KnutsonAS}. Moreover, the generic fibers of these lifts are Enriques surfaces over a field of characteristic 0: indeed, use the results of \cite[Theorem 11.4]{LiedtkeCAS} or \cite[Section 9]{K-U95} to see that the Kodaira dimension of the generic fiber is zero, the Euler characteristic is 1 and the first Betti number is 0; then the classification of surfaces as in \cite{LiedtkeCAS} allows us to conclude.
\end{remark}
Also, one can deform a supersingular Enriques surface to a classical one; thus, unlike in characteristic zero, the class of varieties with trivial canonical bundle is not deformation closed in characteristic $p > 0$ \cite{AZST}.
\section{Hilbert scheme of Enriques surface}
In this section we prove that the Hilbert scheme of points on a supersingular Enriques surface is a symplectic variety but not irreducible symplectic, by showing that it is simply connected and has a non-degenerate two form, but that $H^0(\Omega^2)$ is not generated by it. We start by recalling a few basic results on the Hilbert scheme of points on a surface, which immediately imply that the Hilbert scheme of points on a supersingular Enriques surface has trivial canonical bundle, and then we compute its fundamental group and $h^{2,0}$. For basic results and proofs about the Hilbert scheme of points of a scheme (in arbitrary characteristic) see \cite[Chapter 7]{BK}.
\subsection{Hilbert schemes of points on a surface}\label{HilbS}
Let $X$ denote a nonsingular projective surface over an algebraically closed field $k$ of characteristic $p \geq 0$ and $Hilb^n(X)$ denote the \textbf{Hilbert scheme of length $n$} on $X$. It is the unique scheme representing the functor
\begin{eqnarray*}
\textbf{Hilb}^n(X): \{Sch / k \} &\xrightarrow{\hspace*{2.5cm}} \{Sets\} \\
S &\mapsto \{ \pi: Z \ra S \ \text{flat}; \ Z \subset X_S \\
&\text{length $n$ closed subscheme}\}.
\end{eqnarray*}
Moreover, it is a smooth projective variety of dimension $2n$, see \cite[Theorem 7.2.3, 7.4.1]{BK}. Note that any point of $Hilb^n(X)$ is a closed subscheme $Z$ of $X$ such that $h^0(\O_Z)=n$, and generic points of $Hilb^n(X)$ are reduced, that is, they are the closed subschemes of $n$ distinct points of $X$. \\
Let $X^{(n)}$ denote the \textbf{$n$-th symmetric product} of $X$, which is defined to be the quotient $X^n/S_n$ of $X^n$ by the natural action of $S_n$ permuting the factors of $X^n$. It is a projective scheme \cite[Lemma 7.1.1]{BK}. Any point of $X^{(n)}$ can be written as a formal sum $\sum_i m_i p_i$, where $\sum_i m_i =n$, all $m _i \in \N$ and $p_i \in X$ are distinct points. There is a natural morphism of schemes from the Hilbert scheme of $n$ points to the $n$-th symmetric product, called the \textbf{Hilbert-Chow morphism}, which at the level of points can be described as follows:
\begin{eqnarray*}
Hilb^n(X) &\xrightarrow{\hspace*{0.7cm} \gamma \hspace*{0.6cm}} X^{(n)}\\
Z &\mapsto \sum_{p \in X} l(\O_{Z,p})p,
\end{eqnarray*}
where $l(-)$ denotes the length of a module and the sum is a formal sum. This morphism is projective and gives a crepant resolution of $X^{(n)}$ \cite[Theorem7.3.1, 7.4.6]{BK}. Thus, we have $\omega_{Hilb^n(X)} \cong \omega_{X^{(n)}}$ and \cite[Lemma 7.1.7]{BK} implies that $\omega_{Hilb^n(X)} \cong (\omega_X)^{(n)}$. Thus, if $X$ is a variety (of dim $\geq 2$) with trivial canonical bundle, then so is $Hilb^n(X)$.
\subsection{The Fundamental group of Hilbert scheme on supersingular Enriques}
We compute the fundamental group by lifting the Hilbert scheme of a supersingular Enriques surface to characteristic zero, computing the fundamental group of the geometric generic fiber, and then using the specialization of fundamental groups to bound the fundamental group of the Hilbert scheme of the supersingular Enriques surface. Since the Hilbert scheme is unirational, its fundamental group has no 2-torsion, while the specialization map on fundamental groups shows that the fundamental group is either a 2-torsion group or zero; hence the fundamental group is zero.
As a supersingular Enriques surface $X$ lifts to characteristic 0 (over a DVR $A$ with residue field $k$ and fraction field $K$) as a (relative) Enriques surface $X_A$ with generic fiber $X_K$ (see Remark \ref{Enriqueslift}), we can construct lifts of $Hilb^n(X)$ as $Hilb^n(X_A)$ with generic fiber $Hilb^n(X_K)$, the Hilbert scheme of points on an Enriques surface over $K$, a field of characteristic zero. Now using the specialization morphism of \'etale fundamental groups \cite[Corollary X.2.3]{SGA1} we have the following surjective map:
\begin{equation} \label{surjection}
\pi_{et}(Hilb^n(X_{\bar{K}})) \ra \pi_{et}(Hilb^n(X)) \ra 0
\end{equation}
and from \cite[Expose XIII, Proposition 4.6]{SGA1}, we have $$
\pi_{et}(Hilb^n(X_{\bar{K}})) \cong \pi_{et}(Hilb^n(X_{\C})),
$$
where $X_{\bar{K}}$ is the geometric generic fiber of the lift $X_A$ of $X$ and $X_{\C}$ is its base change to $\C$. We can always do base change to $\C$ since we are working with varieties of finite type over a field.
The analytic fundamental group of $Hilb^n(X_{\C})$ can be computed as follows (for example using \cite[Proposition 2.1]{OGrady13}, we get the following isomorphisms for $n \geq 2$):
$$
\Z/2\Z \cong \pi_{an}(X_{\C})^{ab} \cong H_1(X_{\C}, \Z) \cong \pi_{an}(Hilb^n(X_{\C})).
$$
And since the groups are finite, we have that $\pi_{et}(Hilb^n(X_{\C})) \cong \Z/2\Z$, see \cite[Remark I.5.1(c)]{Milne80}.
Thus it is just a 2-torsion group and then equation \ref{surjection} implies that $\pi_{et}(Hilb^n(X))$ is a group of order at most 2.
On the other hand, supersingular Enriques surfaces are weakly unirational, as proved by Blass in \cite{Bla}, and this implies that $Hilb^n(X)$ is also weakly unirational, as we can construct a generically surjective rational map from $\mathbb{P}^{2n}$ as follows:
$$
\mathbb{P}^{2n} \dashrightarrow X^{n} \ra X^{(n)} \sim_{bir} Hilb^n(X).
$$
In \cite{Eke} (see also \cite{CL-A03}), Ekedahl has proven that the fundamental group of a weakly unirational variety over an algebraically closed field of characteristic $p >0$ is a finite group of order prime to $p$.
Combining the above observations we have
\begin{proposition}\label{fund}
The Hilbert scheme of $n$ points on a supersingular Enriques surface, $Hilb^n(X)$, is simply connected.
\end{proposition}
\subsection{Computation of $H^0(Hilb^n(X), \Omega_{Hilb^n(X)}^2)$}
The construction of a nowhere vanishing 2-form on $Hilb^n(X)$, where $X$ is a supersingular Enriques surface, using the nowhere vanishing 2-form on $X$ follows verbatim as in the case of varieties over $\C$ in \cite[Section 2.1.2]{OGrady13}. The only thing that needs to be checked is that $X^{(n)}$ is still $\mathbb{Q}$-factorial; this is \cite[Lemma 7.1.9]{BK}.
Now we show that $h^{2,0}(Hilb^n(X)) > 1$. Recall that the Hodge numbers
$$
h^{q,0} = \dim_k H^0(X, \Omega^q)
$$
are birational invariants (see for example, \cite[Ex. II.8.8]{HartAG}). Thus in our case (see section \ref{HilbS}) we have
\begin{equation} \label{birHN}
h^{2,0}(Hilb^n(X)) = h^{2,0}(X^{(n)}).
\end{equation}
Next note that $\Omega_{X^n} \cong \oplus_ip_i^*(\Omega_X)$, where $p_i: X^n \ra X$ is the $i$-th projection from the $n$-fold product of $X$ to $X$, and that we have $\Omega^q_{X^{(n)}}= (\Omega^q_{X^n})^{S_n}$ for $q \geq 0$, so on the level of global sections we get
$$
h^{q,0}(X^{(n)}) = \dim_k \Gamma(X^n, \Omega^q_{X^n})^{S_n}.
$$
In case $q= 2$, we have
$$
\Omega^2_{X^n} = \left(\oplus_{\{i,j; i < j\}} p_i^*\Omega_X \otimes p_j^*\Omega_X\right) \oplus \left(\oplus_{i}\wedge^2(p_i^*\Omega_X)\right).
$$
To write it explicitly in the case of $n=2$, we get
\begin{eqnarray*}
\Omega^2_{X^2} =& p_1^*\Omega_X\otimes p_2^*\Omega_X \oplus \wedge^2p_1^*\Omega_X \oplus \wedge^2p_2^*\Omega_X \\
= &p_1^*\Omega_X\otimes p_2^*\Omega_X \oplus p_1^*\wedge^2\Omega_X \oplus p_2^*\wedge^2\Omega_X.
\end{eqnarray*}
The action of $S_n$ on $\Omega^2_{X^n}$ permutes the corresponding factors of $\Omega_X$. One collection of invariant sections of $\Omega_{X^n}^2$ is given by
$$
\oplus_i \lambda\, p_i^*\sigma,
$$
where $\sigma \in H^0(X, \Omega^2_X)$ is a section, and $\lambda \in k^*$. Next, note that the sections of $\oplus_{\{i,j; i < j\}} p_i^*\Omega_X \otimes p_j^*\Omega_X$ say given as $\oplus_{\{i,j; i < j\}} p_i^*\alpha_i \otimes p_j^*\alpha_j$, under the action of $S_n$ change to $\oplus_{\{i',j'; i' < j'\}}p_{i'}^*\alpha_{i'} \otimes p_{j'}^*\alpha_{j'} \oplus _{\{i',j'; i' > j'\}}(-1)p_{j'}^*\alpha_{j'} \otimes p_{i'}^*\alpha_{i'}$, where the $i'$ (resp. $j'$) is the image of $i$ (resp. $j$) under the action of a permutation of $S_n$. \\
Explicitly in the case of $n=2$, we can see that the only non-trivial permutation $(12)$ of $S_2$ sends $p_1^*\alpha_1 \otimes p_2^*\alpha_2$ to $(-1)p_1^*\alpha_2 \otimes p_2^*\alpha_1$; in particular, $p_1^*\alpha \otimes p_2^*\alpha$ is sent to $(-1)p_1^*\alpha \otimes p_2^*\alpha$. \\
Thus, in case the characteristic of the ground field $k$ is not equal to 2, these parts do not contribute any more $S_n$-invariants; however, in case the characteristic of $k$ is equal to $2$, which implies $-1 = 1$, we will have more $S_n$-invariants of the form $\oplus_{\{i,j; i < j\}} p_i^*\alpha_i \otimes p_j^*\alpha_j$ with $\alpha_i = \alpha_j$. To put it formally, we have shown that
\begin{proposition} \label{HN}
For $X$ a supersingular Enriques surface defined over an algebraically closed field of characteristic 2 and $n \geq 2$, we have $h^{2,0}(Hilb^n(X)) > 1$.
\end{proposition}
Moreover, the above argument also shows why the G\"ottsche-Soergel Hodge number formula \cite[Theorem 2.3.14(3)]{GottscheBook}, proved in \cite{Got-Sor}, fails in characteristic 2: the formula misses many of the invariant sections. \\
Thus, the Hilbert scheme of points on a supersingular Enriques surface gives examples of varieties with trivial canonical class which are symplectic varieties but neither irreducible symplectic nor Calabi-Yau, thereby showing that there are strictly more classes of simply connected varieties with trivial canonical class in characteristic 2 than over $\C$, where they are given by the Beauville-Bogomolov decomposition theorem. There is also a Beauville-Bogomolov type partial decomposition theorem in positive characteristic by Patakfalvi-Zdanowicz \cite{ZMBB}, but it applies only to 1-ordinary varieties, which is equivalent to being $F$-split \cite[Proposition 2.13]{ZMBB}; moreover, $Hilb^n(X)$ is 1-ordinary (resp. $F$-split) if $X$ is so. However, in this case, the Hilbert scheme of a 1-ordinary Enriques surface is not a symplectic variety. One of the better places to look for a definition of irreducible symplectic varieties would be to determine whether more decomposition classes can be identified in the Patakfalvi-Zdanowicz version of the Beauville-Bogomolov theorem.
\begin{remark} \label{charneq2}
The class of examples we study occurs only in characteristic 2. Being in characteristic 2 influenced the example in two ways: it gave us supersingular Enriques surfaces as irreducible symplectic varieties, even though they are not so in any other characteristic, and it led to the failure of G\"ottsche-Soergel's Hodge number counting formula, thereby preventing the Hilbert scheme of a supersingular Enriques surface from being irreducible symplectic.
Since the failure of the Hodge number formula is not to be expected in any other characteristic and there are no further irreducible symplectic surfaces, we do not expect to have further examples of this type.
\end{remark}
\section{Introduction}
\label{Sec:Introduction}
The Internet-of-Medical-Things (IoMT) based healthcare Cyber-Physical System (H-CPS) has made smart healthcare possible with enhanced quality of care and faster diagnosis \cite{Aazam_MCE_2020-Mar, JBHI.2020.2973467, Ghamari_Sensors_2016-Jun}.
Smart healthcare is now omnipresent, in the form of wearables and implantable devices with connectivity for perpetual and continuous body vital monitoring, as it has significant capability to improve the quality of human life \cite{Joshi_MCE.2020.3018775, Hsu_MCE_2020-Jan, MCE.2020.2969202}. It is well understood that the IoMT-based H-CPS that makes smart healthcare possible is even more important in the current pandemic scenario. With the introduction of the IoMT, integration of multiple sensors has enabled applications and devices to work efficiently and effectively. IoMT devices are usually connected to the network either by wired or wireless communication means. Smart healthcare is already effective in fitness tracking and has the potential to revolutionize many aspects, from continuous body health monitoring to performing automatic analysis of electrocardiogram (ECG/EKG) and computed tomography (CT) scans. Smart healthcare can allow doctors to remotely monitor patients based on their fitness tracker data \cite{thomson_nuss_2019, MCE.2019.2956205, Zhu_MCE_2019-Sep}. Wearables such as smartwatches allow users to record ECG/EKG from the wrist and share a copy of the report with doctors for assistance and opinion. Most hospitals around the globe use body vital systems that back up the data in real time in the cloud for doctors to monitor while sitting at home. IoMT-based H-CPS helps in collecting several vitals that describe the health of the person and transmits them to the cloud for post-processing. The introduction of IoMT in wearable devices has unlocked various applications in areas such as self-healthcare, fitness and yoga.
In the world of wearable technology, fitness trackers have become part of our lives. Millions of people around the globe use wrist-worn fitness trackers \cite{Gonzales_2019}. People are using fitness trackers to keep track of their daily activities and fitness routines. Millions use wearable IoMT technology that keeps track of both physical and mental health \cite{Joshi_TCE.2020.3011966, Jain_MCE_2020-Jan, 1932296818768618}. In many cases the use of home-based healthcare monitoring devices requires the user to be at rest while the devices record the vitals. Upcoming technologies such as bio-medical textiles provide devices that can be worn by the user, leaving him/her free to move around. Many such wearable garments are being used by sports teams and athletes to improve their performance by analyzing body musculoskeletal data recorded by the garment \cite{Pandian_2008}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.65\textwidth]{Thematic_Concept_Architecture}
\caption{A thematic overview of the proposed MyWear.}
\label{FIG:Thematic_Concept_Architecture}
\end{figure}
Many fitness enthusiasts are using garments that have embedded sensors to suggest whether workouts or yoga are performed efficiently. These garments also analyze physiological data and are equipped with GPS modules that track the user's location \cite{JSEN.2019.2949608, JERM.2019.2929676}. With body vital data, fitness trainers, yoga instructors and doctors can modify the workout plan to favor a particular individual. The same data can be used by nutritionists and dietitians to analyze the body metabolism and accordingly plan the diet. Wearable garments are being developed not only to monitor ECG but also the activity the user is performing while wearing them \cite{Lee_Young_2009}. Law enforcement personnel and armed forces are using wearable medical devices to analyze the stress and health of soldiers and policemen throughout the day. The data collected provides insights on the lifestyle of the person. In the current work, we envision a wearable called MyWear, depicted in Fig. \ref{FIG:Thematic_Concept_Architecture}, that can continuously monitor stress, heart conditions, muscle activities, and falls. With detailed analysis of the body and its muscle activity, analysts are helping athletes and teams perform and build better.
The rest of the paper is organized as follows: Section \ref{Sec:Contributions} explains the contributions made in this paper. Section \ref{Sec:Related_Research} discusses the existing related research. Section \ref{Sec:System_architecture} provides the system-level architecture. Section \ref{Sec:Stress_Monitoring_Method} discusses the method proposed for automatic heart rate and stress monitoring. Section \ref{Sec:Heart_Arrhythmia_Detection_Method} presents our method for detecting heart arrhythmia. Section \ref{Sec:Fall_Detection_Method} presents our method for automatic detection of falls. Section \ref{Sec:Experimental_Validation} explains the specific design of the proposed MyWear system along with its validation through the results of the proposed MyWear system, and also provides a comparative analysis with state-of-the-art similar wearables. Finally, the paper concludes in Section \ref{Sec:Conclusion} with brief discussions on future directions.
\section{Related Prior Research}
\label{Sec:Related_Research}
Consumer electronics to build smart healthcare is an active research area, as is evident from the fact that increasingly more healthcare features are available in wearables and smartphones. The research on consumer electronics for smart healthcare has been undertaken on many fronts, including stress management \cite{Rachakonda_TCE_2019}, diet management \cite{Rachakonda_TCE_2020-May}, assisting visually impaired individuals \cite{MCE.2018.2797741}, wearables focusing on women's health \cite{Ava_smart_bracelet}, hearing aids \cite{Lin_Lai_2018}, and garments \cite{Foroughi_2016}. A consumer electronic device that can automatically quantify the calorie intake as well as the stress of a user is available \cite{Rachakonda_TCE_2020-May}. Heart rate estimation using a photoplethysmography (PPG) based device was presented in \cite{Puranik_TCE_2020-Jan}, which deployed neural networks. A framework that can automatically monitor the stress level from physical activities was proposed in \cite{Rachakonda_TCE_2019}. A smart watch for continuous monitoring of data in a privacy-assured manner was presented in \cite{Kim_TCE_2019-Aug}. An ECG signal analysis method using the discrete cosine transformation (DCT) has been presented in \cite{Raj_Ray_2018}. A semi-automated IoMT-enabled diet management system was discussed in \cite{Prabha_Saraju_2018}. A framework for elderly fall detection and ECG abnormality detection was presented in \cite{Wang_TCE_2016-May}.
\begin{table*}[htbp]
\centering
\caption{MyWear as compared to similar works in consumer electronics.}
\label{TBL:Product_Comparison}
\begin{tabular}{|p{1.8cm}|p{1.2cm}|p{1.5cm}|p{2.0cm}|p{1.4cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}|p{1.0cm}|}
\hline
Consumer Electronics & Real-Time HRV & Muscle Activity Detection & Abnormal Heartbeat Detection & Stress Level Detection & Fall Detection & Fall Prediction & Built-in Alert & Data Security
\\
\hline
\hline
Puranik et al. \cite{Puranik_TCE_2020-Jan} & Yes & No & No & No & No & No &No & No\\
\hline
Garment in \cite{Hexoskin_2020} &Yes &No &No & Yes & Yes & No &No &No \\
\hline
Raj et al. \cite{Raj_Ray_2018} & Yes & No & Yes & No & No & No & No & No \\
\hline
Garment in \cite{Athos_2018} &No &Yes &No &Yes & Yes & No &No &No \\
\hline
Wang et al. \cite{Wang_TCE_2016-May} & Yes & No & Yes & No & Yes & No & No & No \\
\hline
Farjadian et al. \cite{Farjadian_ICORR_2013} & No & Yes & No & No & No & No & No & No \\
\hline
Pandian et al. \cite{Pandian_2008} & No & No & No & No & No & No & No & No \\
\hline
\textbf{MyWear (Current Paper)} &Yes &Yes &Yes &Yes &Yes &Yes &Yes &Yes\\
\hline
\end{tabular}
\end{table*}
Table \ref{TBL:Product_Comparison} presents a comparative perspective of consumer electronics similar to the proposed MyWear of the current work.
There are a few smart wearable garments that can monitor human body vitals in real time \cite{Foroughi_2016}. The smart wearable garment in \cite{Athos_2018} uses surface electromyography (sEMG) to analyze the intensity of the muscle activity of athletes.
However, there is no HRV analysis observed in the training system to detect stress levels of the user. A consumer product \cite{Hexoskin_2020} analyzes sleep activity and ECG for heart rate variability (HRV) analysis.
However, it does not examine the muscle activity of the user.
The garment in \cite{Farjadian_ICORR_2013} uses electromyography (EMG) to detect muscle activity and assist the user in physical therapy.
There are a few proposed solutions that use accelerometer data to analyze the intensity of exercises; however, accelerometer data cannot be used to calculate individual muscle activity, in contrast to sEMG. Moreover, no accurate measurement of the body orientation of the user is observed in the above-mentioned wearables. There was also no alert system observed in the above-mentioned garments that notifies the user's contacts or paramedic forces in case of an emergency after detecting any abnormalities.
\section{Contributions of Current Paper}
\label{Sec:Contributions}
To the knowledge of the authors, MyWear is the first smart garment to introduce an integrated mechanism for automatic HRV analysis, stress analysis, muscular activity analysis, and an alert system to seek assistance in emergencies while providing data security.
\subsection{Problem Formulation}
We intend to address the following questions through the research and development of MyWear:
\begin{itemize}
\item
How can health conditions be automatically monitored and healthcare providers be notified?
\item
How can heart abnormalities be automatically detected from the ECG signal?
\item
How can stress be automatically detected from the ECG signal?
\item
How can muscle activity be automatically detected?
\item
How can a fall of an individual be automatically detected?
\end{itemize}
\subsection{Proposed Solution of the Current Paper}
MyWear provides the following novel solutions to the research objectives stated above:
\begin{itemize}
\item
An automated continuous health monitoring system that collects body vitals at regular intervals of time, stores them in a secure fashion, and alerts healthcare providers.
\item
A cloud-based platform for users and healthcare providers to monitor previous logs and the current routine in real time.
\item
A novel method to automatically analyze stress levels using the electrocardiogram (ECG) signal in real time.
\item
A novel method to detect the heart abnormality arrhythmia using the electrocardiogram (ECG) signal in real time.
\item
A novel automatic method to determine a fall of the user.
\end{itemize}
\subsection{Novelty and Significance of the Proposed Solution}
The novelty and significance of the proposed solutions include the following:
\begin{itemize}
\item
A novel wearable that can automatically perform heart rate monitoring, heart arrhythmia detection, stress monitoring, and fall detection for a complete healthcare solution.
\item
A secure smart garment with an automated system for logging body vital data to the cloud and a mobile application.
\item
ECG signal analysis using a deep learning model to detect different types of heartbeat abnormalities, which is not observed in any existing smart garment.
\item
Real-time stress detection using Heart Rate Variability (HRV) derived from the ECG, which is not present in most smart garments.
\end{itemize}
\section{System Level Architecture of the Proposed Framework}
\label{Sec:System_architecture}
We present our vision of an Internet-of-Medical-Things (IoMT) based healthcare Cyber-Physical System (H-CPS) with the proposed smart garment MyWear integrated in it. MyWear can be realized with 3 different options depending on where the computation happens and where the intelligence is built in \cite{Rachakonda_TCE_2020-May, MCE.2017.2776462, MCE.2017.2714695}: (1) IoMT-end device in-sensor computing, (2) IoMT-edge/fog computing, and (3) IoMT-cloud computing paradigm. In Option-1, the IoMT-end device has all the sensors and models integrated in it, while the IoMT-cloud stores data and diagnostic results \cite{8719325}. In this option, the IoMT-end device needs a somewhat higher level of computational capability and battery life than the usual sensors in order to present results instantaneously to the user. With the use of light-duty machine learning models (i.e., TinyML), this can be possible even with limited computational and battery resources at the IoMT-end device \cite{Mohanty_VAIBHAV_2020_Panel}. In Option-2, the ML models are part of IoMT-edge devices like edge routers and edge datacenters (EDC) for faster response to the users \cite{8684800}. In Option-3, heavy-duty accurate ML models run in the IoMT-cloud to detect the health conditions. Thus, Option-2 with IoMT-edge is a good trade-off between accuracy and fast response. A more effective option is to have TinyML models in the IoMT-sensor for fast response and, at the same time, a second health condition evaluation at the IoMT-cloud before sending an alarm to healthcare providers.
\subsection{Proposed Healthcare Cyber-Physical System (H-CPS) using our Smart Garment}
The complete overview of the proposed system in the H-CPS framework is depicted in Fig. \ref{FIG:Smart-Garment_Framework_in_H-CPS}. The garment acts as the IoMT-end device and the input point for the mobile application and cloud service. Surface dry electrodes are embedded in the garment, keeping in mind the optimal positions to obtain stable signals. The dry electrodes are connected to the respective ECG and EMG sensors. These sensors extract, amplify and filter the raw signals, thereby removing noise and unwanted artifacts. The filtered data is sampled by the sampling unit along with the data received from the temperature and Inertial Measurement Unit (IMU) sensors. The temperature sensor measures the body temperature, whereas the IMU sensor measures the change in body orientation. Collectively, the data is transmitted to the mobile application and cloud for further analysis using the embedded Bluetooth and Wi-Fi modules, respectively. The vital data is AES128 encrypted and can only be decrypted or accessed in the user's mobile application. This keeps the body vital data safe and secure, readable only by the owner/user. The mobile device displays the ECG in real-time along with the stress level of the user. The mobile application visualizes muscle activity in different muscle regions on a human map pertaining to the individual, along with the body orientation and body temperature. The application allows the user to share data with medical officials for further analysis and assistance. Meanwhile, the ECG data transmitted to the cloud is analyzed to detect any abnormal heartbeats. The proposed deep learning model deployed in the cloud checks for abnormalities and detects the kind of abnormality that occurred. In case of emergency, an alert is sent to medical officials for immediate assistance. Moreover, the body vital data received in the cloud can be monitored by medical officials in real-time.
MyWear gathers body vitals such as heart rate, body temperature, and muscle activity and sends them to the IoMT-edge and IoMT-cloud through the Internet. The smartphone acts as an interface to visualize the data post analysis for the user's information. The deep learning (DL) model in the cloud analyzes the user's ECG and stress level. It also returns a report to the user's smartphone for the user to access.
The \textit{main objectives of MyWear} are the following:
\begin{itemize}
\item
To create an automated health monitoring wearable that analyzes the user's body vitals regularly.
\item
To provide a solution that analyzes a user's stress level based on the electrocardiogram (ECG).
\item
To bridge the communication between the user and medical officials through a real-time user monitoring system that allows doctors and therapists to analyze the user's routines.
\item
To create an alert system to call for help in case of emergency.
\end{itemize}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.85\textwidth]{Smart-Garment_Framework_in_H-CPS}
\caption{A detailed architecture of the proposed MyWear in a Healthcare Cyber-Physical System (H-CPS) framework.}
\label{FIG:Smart-Garment_Framework_in_H-CPS}
\end{figure}
\subsection{Architecture of the Proposed Smart Garment}
Fig. \ref{FIG:Proposed_Smart_Garment_Architecture} shows the architecture of the proposed smart garment. The garment (called MyWear) is embedded with dry electrodes that are connected to ECG and EMG sensors. The sensors extract, amplify and filter the raw signals captured by the dry electrodes. The filtered data is sampled by the sampling unit along with the data received from the temperature and IMU sensors. The data is then transmitted to the smartphone and cloud server for storage and analysis. The received ECG data is used to analyze the user's stress level and heart rate, as discussed later. The analyzed outcome, along with a report, is sent to the smartphone application for the user to comprehend. The filtered muscle data received at the smartphone application end is visualized on the human map. The application also displays the body orientation and temperature of the user. The user can also grant doctors, therapists, or others access to the cloud data for real-time monitoring. In case of emergency, an alert message is sent to contacts and paramedic forces for immediate assistance.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\textwidth]{Proposed_Smart_Garment_Architecture}
\caption{Architecture of the proposed Smart Garment (MyWear).}
\label{FIG:Proposed_Smart_Garment_Architecture}
\end{figure}
\subsection{Electrocardiogram (ECG) Acquisition Unit for Heart Rate}
Electrocardiogram (ECG) is a technique used to measure the electrical activity produced by the heart during diastole and systole (relaxation and contraction). ECG is a reliable method to measure heart rate variability and beats per minute (BPM) \cite{Iskandar_AISP_2019}. For convenient use, a three-electrode system is used. The electrodes are placed following Einthoven's triangle (refer to Fig. \ref{FIG:Einthoven_Triangle}), which depicts the placement of electrodes to obtain a stable ECG \cite{Kanani_2018, Villegas_2019}.
\begin{figure} [htbp]
\centering
\subfigure[Placement of electrodes and Einthoven's Triangle.]{
\centering
\includegraphics[width=0.35\textwidth]{Einthoven_Triangle}
\label{FIG:Einthoven_Triangle}
}
\subfigure[P segment, QRS complex and T segment of Electrocardiogram.]{
\includegraphics[width=0.55\textwidth]{ECG_Segment_PQRST}
\label{FIG:ECG_Segment_PQRST}
}
\caption{Einthoven's Triangle based electrode placement and P segment, QRS complex and T- segment of ECG.}
\end{figure}
\subsection{Euler’s Angle Unit Using Inertial Measurement Unit (IMU) Sensor}
MyWear is embedded with an Inertial Measurement Unit (IMU) sensor, which measures and provides details on the orientation and velocity of a particular entity with the use of accelerometers and gyroscopes. The IMU detects rotational movements and accelerations, from which the Euler angles, also known as yaw, pitch and roll, are derived. The gyroscope measures angular velocity, which defines the rate of change of angle with respect to the $x$, $y$ and $z$ axes. The accelerometer measures the linear acceleration of the device along each axis.
\subsection{Electromyography (EMG) Unit}
Electromyography (EMG) is used to measure the change in electric potential that depicts the force exerted by the muscle. For ease of use, a three-electrode system is used to measure the muscle activity at a particular muscle region: two electrodes measure the muscle signal and the third electrode acts as a ground. Initially, the sensor is tuned by changing the gain, which adjusts the sensitivity of signal acquisition so that stable signals with low noise are captured. EMG helps in understanding the muscle activity and its intensity in a muscle region by measuring the change in electric potential generated at neuromuscular junctions as electric signals, or action potentials, pass through. Clinical settings of EMG use a needle that is inserted into the muscle; for measuring EMG on the go and better ease of use, surface EMG is chosen \cite{Samarawickrama_2018, Alimam_2017}. Surface EMG uses electrodes that measure the overall activity of a large portion of the muscle. The activity is measured in voltage and represents the amount of force exerted by the muscle in real-time. MyWear records muscle activity at the biceps, or biceps brachii, and at the chest, or pectoralis major.
\subsection{Body Orientation Detection Unit}
Upon calibration of the Euler angles to the initial position of the user, the change in angles is recorded and defined for a few positions, such as bending in four directions: right, left, forward, and back. The user is prompted with text in the application that shows the orientation of the user, as shown in Fig. \ref{FIG:Mobile_App_for_User}. Body orientation provides insights into a person's activity.
\subsection{Emergency Alert Unit}
The ECG data is sent to the model to detect any abnormalities. The proposed deep learning model detects abnormalities, if any, and the detected abnormalities are sent as an alert to the user's mobile application and to the medical official. Abnormalities recurring at regular intervals are usually considered a sign of potential heart failure; hence, a prompt is sent to the user's smartphone application and an alarm is triggered. Another alert is sent to medical officials and doctors for immediate assistance.
\subsection{Data Security Unit}
Privacy and security play a major role when handling body vital or health data; health data, when misused or eavesdropped upon, can lead to serious privacy and security violations. In order to keep the user's data safe and secure, the body vitals data is AES128 encrypted. The Advanced Encryption Standard (AES) is an encryption method which uses a 128-bit block with a key size of 128 bits. It is a symmetric algorithm wherein the same key is used to encrypt and decrypt data. Before transmitting data from the Arduino to the application, the collected data from all sensors is encrypted using AES128 with the predefined key. The key is unique to a particular garment and mobile device and is assigned by the mobile application when pairing and registering initially. Any other device with the installed application will not be able to access and read the sensor data. Once the encrypted data is transferred to the assigned application, the data is decrypted with the predefined key.
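For concreteness, the encryption step described above can be sketched as follows. This is a minimal illustration only: the use of Python with the PyCryptodome library, the CBC mode of operation, and the JSON packet format are assumptions made for the sketch and do not represent the exact firmware or mobile-application implementation.
\begin{verbatim}
# Hypothetical sketch of the AES-128 step described above (assumptions:
# PyCryptodome, CBC mode, JSON payload; not the actual firmware code).
import json
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

KEY = get_random_bytes(16)   # 128-bit key assigned at pairing time (assumed)

def encrypt_vitals(sample):
    # Encrypt one sensor sample (dict of vitals) before transmission.
    plaintext = json.dumps(sample).encode()
    iv = get_random_bytes(16)
    cipher = AES.new(KEY, AES.MODE_CBC, iv)
    return iv + cipher.encrypt(pad(plaintext, AES.block_size))

def decrypt_vitals(blob):
    # Decrypt on the paired mobile-application side using the same key.
    iv, ciphertext = blob[:16], blob[16:]
    cipher = AES.new(KEY, AES.MODE_CBC, iv)
    return json.loads(unpad(cipher.decrypt(ciphertext), AES.block_size))

packet = encrypt_vitals({"hr_bpm": 72, "temp_c": 36.8, "emg_mv": 0.42})
print(decrypt_vitals(packet))
\end{verbatim}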
\section{Proposed Methods for Automatic Heart Rate and Stress Monitoring from ECG}
\label{Sec:Stress_Monitoring_Method}
\subsection{Proposed Method to Obtain Heart Rate from ECG}
Fig. \ref{FIG:ECG_Segment_PQRST} shows what an ECG graph looks like and its features. Every beat of the heart corresponds to a P-QRS-T waveform in the graph, and a collection of multiple waveforms depicts the successive beats over a period of time. As seen in Fig. \ref{FIG:ECG_Segment_PQRST}, the `P' wave, the bump that precedes the QRS complex, represents the depolarization of the atria and is followed by a brief isoelectric period, a state of near-zero voltage.
The P wave lasts no more than 0.10 seconds and is no more than 2.4 mm tall. The P wave is followed by the QRS complex, which consists of a rapid succession of three waves, i.e., the `Q' wave, `R' wave and `S' wave. The QRS complex represents the activation of the ventricular muscles that contract the heart. The normal duration of the QRS complex is between 90 milliseconds and 100 milliseconds. The R wave is usually a positive peak, unlike the Q and S waves. The `T' wave follows the QRS complex and indicates ventricular repolarization, which leads to relaxation of the heart. This repeats for every single beat of the heart. The heart rate is measured in beats per minute as follows:
\begin{equation}
\text{Heart Rate } (bpm) = \left( \frac{60}{T_r} \right),
\end{equation}
where $T_r$ = time between two successive $R$ peaks.
To calculate the time between two successive `R' peaks, the times at which the first and second peaks occur are saved. Subtracting the first peak time from the second peak time gives the RR-interval time. Beats per minute are obtained as follows:
\begin{equation}
\text{Heart Rate } (bpm) = \left( \frac{1.0}{RRinterval} \right) \times 60.0 \times 1000,
\end{equation}
where the $RR$ interval is expressed in milliseconds. The $RR$ intervals are also used for Heart Rate Variability (HRV) analysis to detect the stress level of the user.
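A minimal sketch of the computation above, assuming R-peak timestamps are available in milliseconds (the Python form is for illustration only):
\begin{verbatim}
# Sketch: instantaneous heart rate from successive R-peak times (ms).
def heart_rate_bpm(r_peak_times_ms):
    rr = [t2 - t1 for t1, t2 in zip(r_peak_times_ms, r_peak_times_ms[1:])]
    bpm = [(1.0 / r) * 60.0 * 1000 for r in rr]
    return bpm, rr

bpm, rr = heart_rate_bpm([0, 830, 1655, 2490])  # hypothetical R-peak times
print(bpm)  # about 72 bpm for roughly 830 ms RR intervals
\end{verbatim}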
\subsection{Metrics for obtaining Heart Rate Variability (HRV) score}
\subsubsection{Time-domain metrics}
The HRV score can be calculated from the successive RR intervals using the time-domain metrics shown below:
\paragraph{MeanRR}
The average of all $RR$ intervals (the time between two successive `R' peaks) is calculated using the following expression:
\begin{equation}
MeanRR = \left( \frac{\sum\limits ^{n}_{i=1} R_{i}}{n} \right).
\label{eq:1}
\end{equation}
\paragraph{Standard Deviation of RR intervals (SDNN)}
The standard deviation of RR intervals (also known as NN-interval) is calculated by using the following expression:
\begin{equation}
SDNN\ =\ \sqrt{\frac{\sum\limits ^{n}_{i=1}( R_{i} -mean)^{2}}{n}} .
\label{eq:2}
\end{equation}
\paragraph{Root Mean Square of Successive Differences (RMSSD)}
The root mean square of two RR intervals' differences is calculated using the following expression:
\begin{equation}
RMSSD\ \ =\ \sqrt{\frac{\sum\limits ^{n-1}_{i=1}( R_{i} -R_{i+1})^{2}}{n-1}} .
\label{eq:3}
\end{equation}
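A compact sketch of the three time-domain metrics above, assuming the RR intervals are given in milliseconds (illustrative only):
\begin{verbatim}
import math

def mean_rr(rr):                       # MeanRR
    return sum(rr) / len(rr)

def sdnn(rr):                          # standard deviation of RR intervals
    m = mean_rr(rr)
    return math.sqrt(sum((x - m) ** 2 for x in rr) / len(rr))

def rmssd(rr):                         # root mean square of successive differences
    d = [rr[i] - rr[i + 1] for i in range(len(rr) - 1)]
    return math.sqrt(sum(v ** 2 for v in d) / len(d))

rr = [830, 845, 815, 860, 840]         # hypothetical RR series (ms)
print(mean_rr(rr), sdnn(rr), rmssd(rr))
\end{verbatim}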
\subsubsection{Frequency-domain metrics}
The HRV score can also be assessed from the low- and high-frequency components of the heart beat intervals. The Poincare plot is a reliable frequency-domain metric for visualizing HRV: it plots each RR interval against the previous one and depicts how well each RR interval predicts the next.
\subsection{Proposed Method to Automatically Monitor Stress from ECG}
Fig. \ref{FIG:Proposed_Stress_Level_Calculation_Flow} shows the algorithm to calculate the stress level using the heart rate. RMSSD is usually obtained from the ECG and is considered as the HRV score \cite{Wu_IS3C_2016}. Studies have shown that an increase in HRV indicates a reduction in stress levels \cite{Wang_mse_2012} and vice versa. High HRV scores were found in users performing optimal levels of fitness routines. However, during strenuous exercise, low HRV scores were sometimes noted because the body tries to increase the user's heart rate for the activity. A user achieves a high HRV score during sleep as a result of the state of relaxation and low stress levels. The HRV score therefore changes depending on the user's activity. Apart from HRV, the ECG can also be used to detect heart arrhythmia. Table \ref{TBL:HRV_vs_Stress} shows the relation between HRV score and stress levels \cite{mindful_HRV_2020}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.80\textwidth]{Proposed_Stress_Level_Calculation_Flow}
\caption{Proposed approach for calculating stress level from the heart rate.}
\label{FIG:Proposed_Stress_Level_Calculation_Flow}
\end{figure}
\begin{table}[htbp]
\centering
\caption{Relation between HRV score and stress levels \cite{mindful_HRV_2020}.}
\label{TBL:HRV_vs_Stress}
\begin{tabular}{|p{3cm}|p{3cm}|}
\hline
\textbf{HRV Score} & \textbf{Stress Level} \\
\hline
\hline
90+ & Very low \\
\hline
80-90 &Low \\
\hline
71-80 &Moderate \\
\hline
61-70 &Average\\
\hline
$<$60 &High \\
\hline
\end{tabular}
\end{table}
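The mapping from HRV score to stress level in Table \ref{TBL:HRV_vs_Stress} can be sketched as follows; here the RMSSD value is used directly as the HRV score, as described above, and the handling of boundary values is an assumption:
\begin{verbatim}
def stress_level(hrv_score):
    # Thresholds follow the table above; boundary handling is assumed.
    if hrv_score > 90:
        return "Very low"
    if hrv_score > 80:
        return "Low"
    if hrv_score > 70:
        return "Moderate"
    if hrv_score > 60:
        return "Average"
    return "High"

print(stress_level(71.87))  # "Moderate", as for the healthy subject reported later
\end{verbatim}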
\section{Proposed Deep Neural Network (DNN) Model For Automatic Detection of Heart Arrhythmia}
\label{Sec:Heart_Arrhythmia_Detection_Method}
An arrhythmia is a disorder of the heart that affects the rate or rhythm at which the heart beats. It takes place when the electrical impulses that direct and regulate heartbeats do not function properly. As a result, the heart may beat too fast (tachycardia), too slowly (bradycardia), too early (premature contraction), or too erratically (fibrillation). Thus, it is of critical importance to determine such conditions automatically and in real-time. We present a Deep Neural Network (DNN) model based method that is integrated in MyWear for the automatic determination of arrhythmia and a corresponding automatic warning to the healthcare provider.
\subsection{Proposed DNN Model for Heart Arrhythmia}
Fig. \ref{FIG:DNN_Model_Architecture} shows the architecture of the proposed deep learning model. The proposed model consists of 6 one-dimensional convolutional layers with 64 filters each and a stride length of 2, and uses the Rectified Linear Unit (ReLU) activation function. Every convolutional layer is succeeded by a Maxpool layer of pool size 2 and stride 2. The set of convolutional layers is connected to 3 fully connected layers. Finally, a softmax function is used in the output layer to predict the probabilities of the individual classes. The model classifies each heartbeat from the dataset as normal or as one of four categories of abnormal rhythm. The activation function used in every layer is ReLU, which is represented as follows:
\begin{equation}
f(x) = \left\{
\begin{array}{ll}
x & x \geq 0 \\
0 & x < 0
\end{array}
\right.
\end{equation}
The output layer is connected to the last fully connected layer and contains one neuron per class. The predicted classification of heartbeats for a sample $x$ is given by the softmax function \cite{Xiangrui_2020}, defined as the following expression:
\begin{equation}
f_{n}( x) = \left( \frac{e^{( W_{n} h_{x} + b_{n})}}{\sum\limits_{j =1}^{K} e^{( W_{j} h_{x} + b_{j})}} \right), \qquad n = 1, \dots, K,
\end{equation}
where $h_x$ is the feature representation of $x$ extracted from the preceding convolutional layers, and $W_n$ and $b_n$ are the weight vector and bias of the $n$th neuron in the output layer.
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.99\textwidth]{DNN_Model_Architecture}
\caption{Deep Neural Network (DNN) model explored for our MyWear.}
\label{FIG:DNN_Model_Architecture}
\end{figure*}
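A sketch of the described architecture in Keras is given below. The number of convolutional blocks (6), filters (64), stride (2), pooling (size 2, stride 2), the three fully connected layers, the softmax output, and the learning rate of 0.001 follow the description in this section; the kernel size, the sizes of the fully connected layers, the input segment length, the optimizer, and the loss function are assumptions made for the sketch.
\begin{verbatim}
# Sketch of the DNN described above (kernel size, dense sizes, input length,
# optimizer and loss are assumptions, not stated in the text).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(segment_len=187, n_classes=5):
    m = models.Sequential()
    m.add(layers.Conv1D(64, kernel_size=5, strides=2, padding="same",
                        activation="relu", input_shape=(segment_len, 1)))
    m.add(layers.MaxPooling1D(pool_size=2, strides=2, padding="same"))
    for _ in range(5):                      # remaining five convolutional blocks
        m.add(layers.Conv1D(64, kernel_size=5, strides=2,
                            padding="same", activation="relu"))
        m.add(layers.MaxPooling1D(pool_size=2, strides=2, padding="same"))
    m.add(layers.Flatten())
    for units in (128, 64, 32):             # three fully connected layers (assumed sizes)
        m.add(layers.Dense(units, activation="relu"))
    m.add(layers.Dense(n_classes, activation="softmax"))
    m.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return m

model = build_model()
model.summary()
\end{verbatim}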
\subsection{Metrics for evaluating the DNN Model}
The metrics used for evaluating the proposed DNN model are precision, recall, accuracy and loss \cite{Rachakonda_TCE_2020-May}. In order to evaluate these metrics, there are four basic errors that describe the metrics which are to be defined. The errors are calculated as follows:
\begin{itemize}
\item
\textit{True Positive (TP)}: Heart beats that belong to the truth class and are predicted as belonging to that class.
\item
\textit{True Negative (TN)}: Heart beats that do not belong to the truth class and are predicted as not belonging to that class.
\item
\textit{False Positive (FP)}: Heart beats that do not belong to the truth class but are predicted as belonging to that class.
\item
\textit{False Negative (FN)}: Heart beats that belong to the truth class but are predicted as not belonging to that class.
\end{itemize}
These metrics that are used to evaluate the DNN model are the following:
\begin{itemize}
\item
\textit{Precision}: The fraction of heartbeats predicted as belonging to a class that actually belong to that class:
\begin{equation}
P\ =\left[ \ \frac{TP}{TP+FP} \ *\ 100\%\ \right].
\end{equation}
\item
\textit{Recall}: The fraction of heartbeats actually belonging to a class that the model correctly identifies:
\begin{equation}
R\ =\left[ \ \frac{TP}{TP+FN} \ *\ 100\%\ \right].
\end{equation}
\item
\textit{Accuracy}: The ratio of correct predictions made by the model to the total number of predictions by the model:
\begin{equation}
\alpha =\ \left[ \ \frac{TP\ +\ TN}{TP+TN\ +\ FP\ +FN} \ *\ 100\%\ \right].
\end{equation}
\end{itemize}
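These metrics can be computed directly from the four error counts; a minimal sketch (with hypothetical counts for one class, used purely for illustration) is:
\begin{verbatim}
def precision(tp, fp):
    return 100.0 * tp / (tp + fp)

def recall(tp, fn):
    return 100.0 * tp / (tp + fn)

def accuracy(tp, tn, fp, fn):
    return 100.0 * (tp + tn) / (tp + tn + fp + fn)

# hypothetical counts for one heartbeat class
print(precision(970, 27), recall(970, 29), accuracy(970, 20500, 27, 29))
\end{verbatim}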
\subsection{Preparation of Dataset}
In order to detect abnormalities in ECG, a dataset with heartbeats classified into normal, ventricular, and supraventricular beats, along with fusion beats and unknown beats, is considered. The MIT-BIH Arrhythmia Dataset \cite{Moody_2001} is used, which contains 48 half-hour excerpts of two-channel ECG with over 150,000 samples. The proposed model is trained on 100,000 samples and tested on approximately 22,000 samples. The sample information considered in MyWear is described in Table \ref{TBL:Dataset_MyWear}.
\begin{table}[htbp]
\centering
\caption{Different Heartbeat information considered for classification.}
\label{TBL:Dataset_MyWear}
\begin{tabular}{|p{3cm}|p{7.5cm}|}
\hline
\textbf{Characteristics} & \textbf{Specifics} \\
\hline
\hline
Number of samples & 100,000 \\
\hline
Sampling frequency & 125Hz\\
\hline
Classes & Normal, Supraventricular, Ventricular, Fusion, Unknown\\
\hline
\end{tabular}
\end{table}
\section{Proposed Methods for Automatic Muscle Activity Detection, and Fall Detection and Prediction}
\label{Sec:Fall_Detection_Method}
Electromyography (EMG) helps in understanding the muscle activity and its intensity in a muscle region by measuring the change in electric potential generated at neuromuscular junctions as electric signals, or action potentials, pass through. Clinical settings of EMG use a needle that is inserted into the muscle; for measuring EMG on the go and better ease of use, surface EMG is chosen \cite{Alimam_2017, Samarawickrama_2018}. Surface EMG uses electrodes that measure the overall activity of a large portion of the muscle. The activity is measured in voltage and represents the amount of force exerted by the muscle in real-time. MyWear records muscle activity at the biceps, or biceps brachii, and at the chest, or pectoralis major.
\subsection{Proposed Method for Muscle Activity Automatic Detection from Electromyography (EMG)}
As described above, EMG measures the change in electric potential that depicts the force exerted by the muscle. For ease of use, a three-electrode system is used to measure the muscle activity at a particular muscle region. Initially, the sensor is tuned by changing the gain, which adjusts the sensitivity of signal acquisition so that stable signals with low noise are captured. The resulting voltage signal is sampled in real-time and represents the amount of force exerted by the corresponding muscle, recorded at the biceps brachii and the pectoralis major.
\subsection{Proposed Method to Calculate Body Orientation}
In order to determine the body orientation, the sensor is calibrated with the user's initial position and orientation. The increase or decrease in the Euler angles determines the change in body orientation of the user. Upon initializing the sensor, it provides the $X$, $Y$ and $Z$ values; however, these values depend on the sensitivity. The default sensitivity is $-2g$ to $+2g$. To calibrate the sensor, offset values are initialized. The initial position, or offset values, are recorded while the person stands up straight and still. These values are written in the $X$, $Y$ and $Z$ axis offset registers. After calibration, the user's movements are measured as $X_{out}$, $Y_{out}$ and $Z_{out}$. Roll, pitch and yaw values are calculated using the following expressions \cite{husstech_2020}:
\begin{eqnarray}
Roll, \rho = arctan \left(\frac{Y_{out}}{\sqrt{( X_{out})^{2} + ( Z_{out})^{2} \ }}\right)
\\
Pitch, \phi = arctan\ \left(\frac{X_{out}}{\sqrt{( Y_{out})^{2} \ +( Z_{out})^{2} \ }}\right) \\
Yaw, \theta = arctan\ \left(\frac{\sqrt{( Y_{out})^{2} \ +( X_{out})^{2}}}{Z_{out}}\right)
\end{eqnarray}
These values define the change in the $X$, $Y$ and $Z$ values from the calibrated values. The orientation of the user is provided as simple text showing whether the person is bending to the right, forward, or diagonally towards the left foot.
The above expressions form the basis for calculating any movement recorded by the IMU, and computing them is helpful for detecting actions such as a fall and changes in body orientation.
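A sketch of the Euler-angle computation is shown below; it assumes the calibration offsets have already been subtracted and uses atan2, which is equivalent to the arctangent expressions above for positive denominators while avoiding division by zero:
\begin{verbatim}
import math

def euler_angles(x_out, y_out, z_out):
    roll  = math.atan2(y_out, math.sqrt(x_out**2 + z_out**2))
    pitch = math.atan2(x_out, math.sqrt(y_out**2 + z_out**2))
    yaw   = math.atan2(math.sqrt(x_out**2 + y_out**2), z_out)
    return tuple(math.degrees(a) for a in (roll, pitch, yaw))

print(euler_angles(0.10, -0.05, 0.99))  # near-upright posture (values in g)
\end{verbatim}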
\subsection{Proposed Method for Fall Prediction}
In order to detect a sudden fall of the person triggered by an involuntary force, a simple three-layer CNN followed by two MaxPooling layers and an output softmax function is used to predict a probable fall \cite{Zhou_2018}. The model predicts whether the person is about to fall from the change in resultant acceleration obtained from the garment's accelerometer. The resultant acceleration is calculated using the following expression \cite{Fan_2020}:
\begin{equation}
g_{_{i}} = \sqrt{\frac{x^{2}_{i} +y^{2}_{i} +\ z^{2}_{i} \ }{g}},
\end{equation}
where $g$ is the acceleration due to gravity (9.8 m/s$^2$) and $g_i$ is the resultant acceleration at instance $i$; $x_i, y_i, z_i$ are the values of the acceleration at instance $i$ along the $x$, $y$ and $z$ axes, respectively \cite{Fan_2020}. $g_i$ is calculated at every instance of time at which an accelerometer reading is received.
\subsection{Proposed Method for Fall Detection}
After a fall is predicted, if it is found that the resultant acceleration swiftly decreases from $+1g$ to below the threshold of $+0.90g$ and then quickly increases over $+1g$ in less than 0.3 seconds \cite{Mezghan_2017, HEMALATHA_2013}, it can be concluded that the predicted fall occurred and is detected, as seen in Fig. \ref{FIG:Results_Fall_detection_prediction}. Table \ref{TBL:Body_Vitals_during_Fall} shows the change in body vitals during a fall.
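The prediction and detection logic can be sketched as follows. The thresholds ($+0.90g$ and $+1g$ within 0.3 seconds) follow the description above; the sampling period and the interpretation of the resultant acceleration as the acceleration magnitude normalized by $g$ are assumptions made for the sketch.
\begin{verbatim}
import math

G = 9.8  # m/s^2

def resultant_g(ax, ay, az):
    # acceleration magnitude expressed in units of g (interpretation assumed)
    return math.sqrt(ax**2 + ay**2 + az**2) / G

def detect_fall(samples, dt=0.02, window_s=0.3):
    # samples: list of (ax, ay, az) in m/s^2 at a fixed period of dt seconds
    g_series = [resultant_g(*s) for s in samples]
    window = int(window_s / dt)
    for i, g in enumerate(g_series):
        if g < 0.90:                                  # fall predicted
            if any(v > 1.0 for v in g_series[i:i + window]):
                return True                           # fall detected
    return False
\end{verbatim}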
\begin{table}[htbp]
\centering
\caption{Body Vitals during Fall.}
\label{TBL:Body_Vitals_during_Fall}
\begin{tabular}{|p{3cm}|p{7cm}|}
\hline
\textbf{Characteristics} &\textbf{Specifics} \\
\hline
\hline
Fall Prediction &Swift decrease in Resultant Acceleration(g) below the calibrated threshold \\
\hline
Muscle Activity & Quick activation in bicep and Chest Muscle forces\\
\hline
Beats/min & A sudden fall triggers quick increase in Heart Rate\\
\hline
Fall Detection & Increase in g to above +1g within 0.3 seconds \\
\hline
\end{tabular}
\end{table}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.65\textwidth]{Results_Fall_Detection_Prediction}
\caption{Fall detection and Prediction.}
\label{FIG:Results_Fall_detection_prediction}
\end{figure}
\section{Experimental Validation of MyWear}
\label{Sec:Experimental_Validation}
We prototyped MyWear using off-the-shelf components. The prototype was validated using various datasets. This section discusses all the details.
\subsection{A Specific Design of Proposed MyWear}
Fig. \ref{FIG:MyWear_Prototype} shows a photograph of the experimental prototype. The characterization table of the MyWear system is given in Table \ref{TBL:characterization_MyWear_System}. The sensors acquire the body vital data upon initialization. The filtered data is sent to the smartphone application using Bluetooth, and a copy of the data is sent to the cloud for analysis and backup through Wi-Fi connectivity. The application displays the vitals data as shown in Fig. \ref{FIG:Mobile_App_for_User}. The temperature and body orientation are displayed. ECG samples of 4-minute duration are recorded every 20 minutes. In case no heartbeat is detected, the user is prompted to check whether the electrodes are in contact or not. The HRV score is calculated from the ECG as discussed earlier, which helps in determining the stress level of the user. The muscle activity and its intensity are visualized on the human map in the form of colors: the darker the color, the larger the force exerted by the particular muscle.
\begin{table}[htbp]
\centering
\caption{Characterization of MyWear System.}
\label{TBL:characterization_MyWear_System}
\begin{tabular}{|p{3cm}|p{6cm}|}
\hline
\textbf{Characteristics} & \textbf{Specifics} \\
\hline
\hline
Sensor System &EMG, ECG, IMU, Temperature Sensor\\
\hline
Classifier & Deep Neural Network: C25\\
\hline
Input Dataset & MIT-BIH Arrythmia Dataset\\
\hline
Data Acquisition & Mobile Application and Cloud Service \\
\hline
Connectivity & Bluetooth and Wi-Fi\\
\hline
DNN Accuracy & 98.2\% (worst case) \\
\hline
\end{tabular}
\end{table}
\begin{figure} [htbp]
\centering
\subfigure[MyWear Prototype]{
\includegraphics[height=0.55\textwidth]{MyWear_Prototype}
\label{FIG:MyWear_Prototype}
}
\hspace{1cm}
\subfigure[MyWear's mobile application displaying the body vitals]{
\includegraphics[height=0.65\textwidth]{Mobile_App_for_User}
\label{FIG:Mobile_App_for_User}
}
\caption{Prototyping of the MyWear using off-the-shelf components.}
\end{figure}
The vital data is stored in a Firebase database (cloud platform) that is also capable of running a deep learning model. The model detects whether the heartbeat is normal or irregular. The same is repeated for two samples. If an abnormal or irregular beat is detected, a prompt is then sent to the user's application and an alert is sent to the prescribed doctor and medical officials for immediate assistance.
\subsection{Validation of Detecting Muscle Activity}
Overall, 5 tests of 20 minutes each were carried out to study the effect of muscle flexing on the intensity of the recorded activity, and the recorded electrical activity was plotted with respect to time. It was noted that, while flexing, there was an increase in muscle activity depicted by peaks in the graph. The peaks were sharper and taller when the subject flexed a muscle with greater effort, conveying a higher intensity of muscle force. It was concluded that the higher the intensity, the taller the peaks in the graph, suggesting a greater force exerted by the muscle. Fig. \ref{FIG:EMG_Bicep_With_Peaks} and Fig. \ref{FIG:EMG_Chest_With_Peaks} show the plotted graphs depicting instances of increased muscle intensity in the left biceps brachii (bicep) and pectoralis major (chest), respectively.
\begin{figure}[htbp]
\centering
\subfigure[Muscle Activity - Bicep]{
\includegraphics[width=0.70\textwidth]{EMG_Bicep_With_Peaks}
\label{FIG:EMG_Bicep_With_Peaks}
}
\subfigure[Muscle Activity - Chest]{
\includegraphics[width=0.70\textwidth]{EMG_Chest_With_Peaks}
\label{FIG:EMG_Chest_With_Peaks}
}
\caption{Muscle Activity detection using EMG with highlighted peaks taken from Bicep and Chest.}
\end{figure}
\subsection{Validating ECG Classification}
Electrocardiograms were collected from three different subjects wearing the garment, and the ECG was plotted at three different time frames. Fig. \ref{FIG:ECG_Healthy_Subject} shows the electrocardiogram of a healthy subject.
\begin{figure} [htbp]
\centering
\subfigure[ECG of a healthy subject]{
\centering
\includegraphics[width=0.75\textwidth]{ECG_Healthy_Subject}
\label{FIG:ECG_Healthy_Subject}
}
\subfigure[Normal heartbeat of a healthy subject]{
\centering
\includegraphics[width=0.75\textwidth]{Heartbeat_Normal_Healthy_Person}
\label{FIG:Heartbeat_Normal_Healthy_Person}
}
\caption{ECG and Normal Heartbeat of a healthy subject.}
\end{figure}
The healthy subject showed an HRV score of 71.87; the HRV score is taken to be equal to the RMSSD value, as discussed in the existing literature. Fig. \ref{FIG:Heartbeat_Normal_Healthy_Person} depicts a normal heartbeat and Fig. \ref{FIG:Heartbeat_Abnormal_4-Cases} shows different types of abnormal heartbeats extracted from the MIT-BIH Arrhythmia Dataset \cite{Moody_2001}. Fig. \ref{FIG:Heartbeat_Abnormal_4-Cases} shows the different abnormal heartbeats, visualized as the following:
\begin{enumerate}
\item
Supraventricular
\item
Unknown
\item
Ventricular
\item
Fusion
\end{enumerate}
The proposed DNN model classifies the ECG on the basis of the above-mentioned types of abnormal heartbeats.
\begin{figure*}[htbp]
\centering
\subfigure[Supraventricular]{
\centering
\includegraphics[width=0.65\textwidth]{Heartbeat_Abnormal_Case1_Supraventricular}
\label{FIG:Heartbeat_Abnormal_Case1_Supraventricular}
}
\subfigure[Ventricular]{
\centering
\includegraphics[width=0.65\textwidth]{Heartbeat_Abnormal_Case2_Ventricular}
\label{FIG:Heartbeat_Abnormal_Case2_Ventricular}
}\\
\subfigure[Fusion]{
\centering
\includegraphics[width=0.65\textwidth]{Heartbeat_Abnormal_Case3_Fusion}
\label{FIG:Heartbeat_Abnormal_Case3_Fusion}
}
\subfigure[Unknown]{
\centering
\includegraphics[width=0.65\textwidth]{Heartbeat_Abnormal_Case4_Unknown}
\label{FIG:Heartbeat_Abnormal_Case4_Unknown}
}
\caption{Abnormal heartbeats in various forms: a. Supraventricular b. Ventricular c. Fusion d. Unknown.}
\label{FIG:Heartbeat_Abnormal_4-Cases}
\end{figure*}
\subsection{Validation of Stress (HRV) Detection - Time-Domain Metrics}
Table \ref{TBL:Time_domain_metric} shows the time-domain metrics obtained after analyzing a 5-minute ECG sample, and Fig. \ref{FIG:Results_Poincare-Plot-HRV} shows the frequency-domain metric, the Poincare plot. The greater the spread of the values in the plot, the higher the HRV and the lower the stress level of the user. The Poincare plot is a frequency-domain analysis wherein each RR interval is plotted as a function of the previous RR interval; the values of each pair of consecutive RR intervals represent a point in the plot. The plot is summarized by the standard deviations SD1 and SD2, which correspond to the short-term variability (related to the successive differences of RR intervals) and the long-term variability (related to the overall standard deviation of RR intervals), respectively.
\begin{table}[htbp]
\centering
\caption{Time Domain Metric and Values}
\label{TBL:Time_domain_metric}
\begin{tabular}{|p{3cm}|p{3cm}|}
\hline
\textbf{Time Domain Metric} & \textbf{Values} \\
\hline
\hline
Mean RR (ms) & 865.41\\
\hline
STD RR/SDNN (ms) & 66.51\\
\hline
Mean HR (beats/min) & 69.81\\
\hline
STD HR (beats/min) & 6.40 \\
\hline
Min HR (beats/min) & 58.42 \\
\hline
Max HR (beats/min) & 122.95 \\
\hline
RMSSD (ms) & 71.87 \\
\hline
NNxx & 123.00 \\
\hline
pNNxx (\%) & 35.04 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textheight]{Results_Poincare-Plot-HRV}
\caption{Poincare plot of heart rate variability (HRV).}
\label{FIG:Results_Poincare-Plot-HRV}
\end{figure}
\subsection{Validation of the DNN Model for Heart Arrhythmia}
The accuracy obtained for the proposed model is 98.2\%, and the learning rate of the model was 0.001. The pattern of maintaining accuracy and tracking loss in classifying heartbeats on the basis of the defined categories is shown in Fig. \ref{FIG:Results_Accuracy} and Fig. \ref{FIG:Results_Loss}, respectively. The loss represents the rate at which the model is optimized with respect to the number of epochs for which the model is trained. The accuracy curve represents the rate at which the system is trained and has reached its maximum level of predicting the heartbeats.
\begin{figure*} [htbp]
\centering
\subfigure[Loss]{
\includegraphics[width=0.60\textwidth]{Results_Loss}
\label{FIG:Results_Loss}
}
\subfigure[Accuracy]{
\centering
\includegraphics[width=0.60\textwidth]{Results_Accuracy}
\label{FIG:Results_Accuracy}
}
\subfigure[Recall and Precision]{
\centering
\includegraphics[width=0.60\textwidth]{Results_Performance-Recall-Precision}
\label{FIG:Results_Performance-Recall-Precision}
}
\caption{Performance measures of the proposed DNN model.}
\label{FIG:Performance_measure}
\end{figure*}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.65\textwidth]{Results_Confusion-Matrix}
\caption{Confusion matrix for DNN heartbeat classification model validation.}
\label{FIG:Results_Confusion-Matrix}
\end{figure}
Fig. \ref{FIG:Results_Performance-Recall-Precision} shows a plot depicting the performance of the recall and precision metrics against the number of training epochs. The average accuracy of the model is 96.9\%, while the precision is 97.3\% and the recall is 97.1\%.
Table \ref{TBL:Heartbeat_classification_results} shows the comparison of the heartbeat classification results with other state-of-the-art models \cite{Acharya_Raj_Oh_2017, Martis_WSP_2013, Li_Entropy_2016}. Fig. \ref{FIG:Results_Confusion-Matrix} shows the confusion matrix depicting the predicted and truth labels after classification of the target variables.
\begin{table}[htbp]
\centering
\caption{Comparison of heartbeat classification results.}
\label{TBL:Heartbeat_classification_results}
\begin{tabular}{|p{3.6cm}|p{3.4cm}|p{3.3cm}|}
\hline
\textbf{Methodology} & \textbf{Approach} & \textbf{Average Accuracy (\%)} \\
\hline
\hline
Raj et al. \cite{Raj_Ray_2018} &DCST + ABC-SVM & 96.1
\\
\hline
Wang et al. \cite{Wang_TCE_2016-May} &H-Box &95
\\
\hline
Acharya et al. \cite{Acharya_Raj_Oh_2017} &Augmentation + CNN &93.5
\\
\hline
Martis et al. \cite{Martis_WSP_2013} &DWT + SWM &93.8
\\
\hline
Li et al. \cite{Li_Entropy_2016} & DWT + random forest &94.6
\\
\hline
\textbf{MyWear (Current Paper)} & DNN &96.9 \\
\hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\centering
\caption{Comparison of classifying myocardial infarction results.}
\label{TBL:Comparison_results_myocardial infarction}
\begin{tabular}{|p{3.5cm}|p{1.9cm}|p{1.9cm}|p{1.5cm}|}
\hline
\textbf{Methodology} & \textbf{Accuracy (\%)} & \textbf{Precision (\%)} & \textbf{Recall (\%)} \\
\hline
\hline
Raj et al. \cite{Raj_Ray_2018} &96.1 & NA & NA
\\
\hline
Acharya et al. \cite{Acharya_Rajendra_2017} &93.5 &92.8 &93.7
\\
\hline
Kojuri et al. \cite{Kojuri_Javad_2015} &95.6 &97.9 &93.7
\\
\hline
Sharma et al. \cite{Sharma_ITBE_2015} &96 &99 &93
\\
\hline
\textbf{MyWear (Current Paper)} & 98.2 &97.3 &97.1
\\
\hline
\end{tabular}
\end{table}
Table \ref{TBL:Comparison_results_myocardial infarction} shows the comparison of classifying myocardial infarction results with other models \cite{Kojuri_Javad_2015, Sharma_ITBE_2015, Acharya_Raj_Oh_2017}. Fig. \ref{FIG:Mobile_App_for_User} shows the MyWear's mobile application displaying the body vitals such as Heart rate, body temperature, body orientation and HRV score along with plotting ECG data in real-time. The application visualizes the muscle activity on the human map.
\begin{table}[htbp]
\centering
\caption{Comparison of Fall detection results.}
\label{TBL:Comparison_results_Fall_detection}
\begin{tabular}{|p{3.5cm}|p{1.9cm}|p{2.19cm}|p{2.2cm}|}
\hline
\textbf{Methodology} & \textbf{Accuracy (\%)} & \textbf{Sensitivity (\%)} & \textbf{Specificity (\%)} \\
\hline
\hline
Hemalatha et al. \cite{HEMALATHA_2013} &92 & NA & NA
\\
\hline
Mezghani et al. \cite{Mezghan_2017} &98 &97.5 &98.5
\\
\hline
Wang et al. \cite{Wang_TCE_2016-May} &95 & NA & NA
\\
\hline
\textbf{MyWear (Current Paper)} & 98.5 &98 &99.5
\\
\hline
\end{tabular}
\end{table}
\subsection{Validation of Fall Prediction and Detection}
Fig. \ref{FIG:Results_Fall_detection_prediction} depicts the instance at which the model predicts that the person/user is about to experience a fall by recognizing a sudden and quick drop in the resultant acceleration; a fall is then detected when the resultant acceleration drops below $+0.90g$ and quickly increases over $+1g$. Table \ref{TBL:Comparison_results_Fall_detection} shows the comparison of the fall detection results with other models \cite{HEMALATHA_2013, Mezghan_2017, Wang_TCE_2016-May}.
\section{Conclusion and Future Research}
\label{Sec:Conclusion}
Body vitals provide insights into the life and lifestyle of the user, and analyzing them provides information that can improve the health of individuals on a daily basis. The approach presented in this paper helps to enhance the mental and physical state of the user based on the analysis of ECG and EMG data, respectively. The proposed garment is integrated with a deep learning model running in an IoMT-cloud server that helps in detecting any abnormalities in the heartbeat and classifies them into the type of abnormality detected. The average accuracy and precision of the proposed deep learning model were 96.9\% and 97.3\%, respectively. MyWear can also help in the rehabilitation of athletes and sportspersons, since the embedded sensors detect muscle activity and body movement that can guide overall body development.
Further, implementing the deep learning model on edge platforms would reduce computational time and resources, hence giving results more quickly. This can be an extension of the proposed garment and a potential future improvement. First, MyWear can be integrated in the IoMT-edge paradigm with TinyML models to rapidly detect health conditions at the user end \cite{8684800, 9085930, 8922820}. Various security and privacy related challenges arise in the IoMT-driven H-CPS that makes healthcare smart. So, blockchain-based data and device management of MyWear in the IoMT or H-CPS needs serious research \cite{8926457, 8595469, 8662009}.
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:intro}
The primary goal of this paper is to prove two ``master'' convolution
theorems
for determinantal polynomials.
One unavoidable fact of general matrices is that they are two dimensional
objects, containing both a width and a length.
While the dimensions of a matrix tend not to appear explicitly in basic linear
algebra formulas, they seem to have a more direct role in the context of
random matrices.
One obvious example of this is the {\em Wishart ensemble}: let $W = X X^*$
where
$X$ is an $n \times m$ random matrix with independent real Gaussian entries.
Even though $W$ is, itself, an $n \times n$ matrix (no $m$ involved), the joint
eigenvalue density
has the form
\[
\mu_W(\lambda_1, \dots, \lambda_n)
\propto e^{-1/2 \sum_i \lambda_i}
\prod_i \lambda_i^{(m-n-1)/2}
\prod_{i < j} |\lambda_i
- \lambda_j|
\]
where the value of $m$ is considered to be a measure of ``degrees of freedom''
\cite{forrester}.
In this note, we will refer to the value of $m$ in this example as a
{\em local} dimension (since the $m$ disappears after the product is taken) and
to $n$ as a {\em global} dimension (since the product matrix still has $n$ as a
dimension).
Similar to the case of the Wishart ensemble, polynomial convolutions will
depend on both local and global parameters, and
so the aim will be to find methods for computing both in the most general case
possible.
To state the results explicitly, we first introduce some notations: let
$\mathcal{M}_{m,
n}$ denote the collection of $m \times n$ matrices\footnote{All of the
results in this paper will hold true for any base field.
This is not the case for the random matrix ensembles that we will mention at
various points, and so we will state the base field specifically in those
discussions.}.
We will use the standard notation for multivariate polynomials: for $\alpha \in
\mathbb{N}^k$, we write
\[
x^\alpha := \prod_{i=1}^k x_i^{\alpha_i}
\quad \text{and}\quad
\alpha! := \prod_{i=1}^k \alpha_i!
\]
Given two degree $n$ homogeneous polynomials
\[
p(x_1, \dots, x_k) = \sum_{\alpha \in \mathbb{N}^k} p_\alpha x^{\alpha}
\quad \text{and}\quad
q(x_1, \dots, x_k) = \sum_{\alpha \in \mathbb{N}^k} q_\alpha x^{\alpha},
\]
we define the $\star$-convolution of $p$ and $q$ to be\footnote{Note that this is {\em not} the same as the Schur--Hadamard convolution, which is defined by
\[
x^{\alpha} \bullet x^{\beta} = \frac{\delta_{\{\alpha = \beta\}}}{\binom{\eta}{\alpha}}
\]
where $\eta \in \mathbb{N}^k$ contains the maximum degree of each variable in $p$ and $q$ (see \cite{bb}).}
\begin{equation}\label{eq:conv}
[p \star q](x_1, \dots, x_k) = \frac{1}{n!}\sum_{\alpha \in \mathbb{N}^k} p_\alpha q_\alpha \alpha! x^{\alpha}.
\end{equation}
We also define an operator on multivariate polynomials that operates on pairs of variables (all other variables considered fixed).
For integers $i, j, m$, we define
\begin{equation}\label{eq:L}
L_m^{x, y} [ x^i y^j ] =
\begin{cases}
\frac{(m-i)!(m-j)!}{m!(m-i-j)!} x^i y^j & \text{for ${i + j \leq m}$} \\
0 & \text{otherwise}
\end{cases}
\end{equation}
and extend linearly to generic multivariate polynomials.
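Purely as an informal illustration (not used anywhere in the proofs), both operators can be computed symbolically; the following Python/SymPy sketch applies them to small examples with $k = 2$ variables:
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')

def star(p, q, n, variables):
    # star-convolution of two degree-n homogeneous polynomials
    pd = sp.Poly(p, *variables).as_dict()
    qd = sp.Poly(q, *variables).as_dict()
    out = sp.Integer(0)
    for alpha, pc in pd.items():
        term = pc * qd.get(alpha, sp.Integer(0))
        for v, a in zip(variables, alpha):
            term *= sp.factorial(a) * v**a
        out += term
    return sp.expand(out / sp.factorial(n))

def L_op(poly, m, u, v):
    # the operator L_m^{u,v}, applied monomial by monomial
    out = sp.Integer(0)
    for (i, j), c in sp.Poly(poly, u, v).as_dict().items():
        if i + j <= m:
            w = (sp.factorial(m - i) * sp.factorial(m - j)
                 / (sp.factorial(m) * sp.factorial(m - i - j)))
            out += c * w * u**i * v**j
    return sp.expand(out)

p = sp.expand((x + 2*y)**2)               # x^2 + 4*x*y + 4*y^2
q = sp.expand((3*x + y)**2)               # 9*x^2 + 6*x*y + y^2
print(star(p, q, 2, (x, y)))              # -> 9*x**2 + 12*x*y + 4*y**2
print(L_op(x**2 + x*y + y**2, 2, x, y))   # -> x**2 + x*y/2 + y**2
\end{verbatim}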
Our first major theorem shows the effect of averaging over a local dimension:
\begin{theorem}[Local]\label{thm:local}
For integers $d,m$ and variables $x,y$ let
\begin{itemize}
\item $R \in \mathcal{M}_{m, m}$ be a uniformly distributed signed
permutation matrix
\item $A_1, A_2 \in \mathcal{M}_{d, m}$ and $B_1, B_2 \in \mathcal{M}_{m, d}$ and $U \in
\mathcal{M}_{d, d}$ be matrices that are independent from $R$ and do not contain the
variables $x$ and $y$ (but could contain other variables).
\end{itemize}
Then
\[
\expect{R}{ \mydet{U + (x A_1 + y A_2 R)(B_1 + R^T B_2)}}
= L_m^{x, y}
\left\{ \mydet{U + x A_1 B_1 + y A_2 B_2} \right \}
\]
\end{theorem}
The second main theorem then shows the effect of averaging over a global dimension:
\begin{theorem}[Global]\label{thm:global}
For integers $n, d$, let $A_1, \dots, A_n, B_1, \dots, B_n \in \mathcal{M}_{d,d}$ and
set
\[
p(x_1, \dots, x_n) = \mydet{ \sum_i x_i A_i }
\quad \text{and}\quad
q(x_1, \dots, x_n) = \mydet{ \sum_i x_i B_i }.
\]
If $Q \in \mathcal{M}_{d, d}$ is a uniformly distributed signed permutation matrix,
then
\[
\expect{Q}{ \mydet{\sum_i x_i A_i Q B_i Q^T } } = [p \star q](x_1, \dots, x_n)
\]
\end{theorem}
These theorems can then be used iteratively to compute more complicated
convolutions (we give examples of this in
Sections~\ref{sec:nonHermitian}~and~\ref{sec:gsvd}).
The paper will proceed by first reviewing some of the basic combinatorial and
linear algebraic tools that we will need (Section~\ref{sec:prelims}).
We will then introduce a type of random matrix ensemble which we call {\em
minor-orthogonal} and prove some basic properties
(Section~\ref{sec:minor-orthogonal}).
Among these properties will be the fact that a uniformly distributed signed
permutation matrix is minor-orthogonal (Lemma~\ref{lem:sps}).
In Section~\ref{sec:local}, we will give a proof of Theorem~\ref{thm:local} in
the more general context of minor-orthogonal matrices.
Unfortunately, we are not able to prove Theorem~\ref{thm:global} in similar
generality.
Instead, we present a proof of Theorem~\ref{thm:global} specific to signed
permutation matrices in Section~\ref{sec:global}.
We give some examples of applications (reproducing known results) in
Section~\ref{sec:apps} and then give the main application (the introduction of
an additive convolution for generalized singular values) in
Section~\ref{sec:gsvd}.
Finally, we discuss some open problems in Section~\ref{sec:conc}.
\section{The tools}
We start by giving the definitions and constructs that we will use.
\subsection{General}
For a statement $S$, we will use the Dirac delta function
\[
\delta_{ \{ S \} } =
\begin{cases}
1 & \text{if $S$ is true} \\
0 & \text{if $S$ is false}
\end{cases}.
\]
We write $[n]$ to denote the set $\{ 1, \dots, n \}$ and for a set $S$, we write $\binom{S}{k}$ to denote the collection of subsets of $S$ that have exactly $k$ elements.
For example,
\[
\binom{[4]}{2} = \big\{ \{ 1, 2 \}, \{ 1, 3 \}, \{ 1, 4 \}, \{ 2, 3 \}, \{ 2, 4 \}, \{ 3, 4 \} \big\}
\]
When our sets contain integers (which they always will), we will consider the set to be ordered from smallest to largest.
Hence, for example, if $S$ contains the elements $\{ 2, 5, 3 \}$, then we will write
\[
S = \{ s_1, s_2, s_3 \}
\quad
\text{where}
\quad
s_1 = 2, s_2 = 3, s_3 = 5.
\]
Now let $S = \{ s_1, \dots, s_k \} \in \binom{[n]}{k}$.
For a set $W \in \binom{[k]}{j}$ with $j \leq k$, we will write
\[
W(S) = \{ s_i : i \in W \}.
\]
Lastly, for a set of integers $S$, we will write
\[
\| S \|_1 = \sum_{s \in S} s
\]
and note that (as is easy to check)
\[
(-1)^{\| S + T \|_1} = (-1)^{\| S\|_1 + \|T \|_1}.
\]
\begin{example}
For $W = \{ 1, 3 \}$ and $S = \{ 2, 4, 5 \}$ we have
\[
W(S) = \{ 2, 5 \}
\quad \text{and}\quad
\|W\|_1 = 1 + 3 = 4
\quad \text{and}\quad
\|S\|_1 = 2 + 4 + 5 = 11.
\]
\end{example}
\subsection{Matrices}
Given a matrix $A \in \mathcal{M}_{n, n}$ and sets $S \in \binom{[n]}{k}$ and $T \in
\binom{[m]}{k}$, we will write the {\em $(S, T)$-minor of $A$} as
\[
[A]_{S, T} = \mydet{ \{ a_{i, j} \}_{i \in S, j \in T}}
\]
By definition, we will set $[A]_{\varnothing, \varnothing} = 1$.
There are well-known formulas for the minor of a product of matrices
\cite{horn}
as well as the minor of a sum of matrices \cite{mm}:
\begin{theorem}
For integers $m, n, p, k$ and matrices $A \in \mathcal{M}_{m, n}$ and $B \in \mathcal{M}_{n,
p}$, we have
\begin{equation}\label{eq:mult}
[A B]_{S, T} = \sum_{U \in \binom{[n]}{k}}
[A]_{S, U} [B]_{U, T}.
\end{equation}
for any sets $S \in \binom{[m]}{k}$ and $T \in \binom{[p]}{k}$.
\end{theorem}
\begin{theorem}
For integers $n, k$ and matrices $A, B \in \mathcal{M}_{n, n}$, we have
\begin{equation}\label{eq:add}
[A + B]_{S, T} = \sum_i \sum_{U, V \in \binom{[k]}{i}}
(-1)^{\| U(S) + V(T) \|_1}
[A]_{U(S), V(T)}
[B]_{\overline{U}(S), \overline{V}(T)}
\end{equation}
for any sets $S, T \in \binom{[n]}{k}$.
\end{theorem}
We will also make use of the {\em mixed discriminant}: for an integer $n$, let
$X_1, \dots, X_n \in \mathcal{M}_{n, n}$.
The {\em mixed discriminant} of these matrices is then defined as
\[
D(X_1, \dots, X_n) = \frac{1}{n!}\frac{\partial^n}{(\partial_{t_1}) \dots (\partial_{t_n})}~ \mydet{ \sum_i t_i X_i}
\]
Note that $\mydet{ \sum_i t_i X_i}$ is a degree $n$ homogeneous polynomial, so the derivative is precisely the coefficient of $t^\alpha$ where $\alpha = (1, 1, 1, \dots, 1)$.
The mixed discriminant has the following well-known properties (all following directly from the definition):
\begin{lemma}\label{lem:md}
Let $X_1, \dots, X_n, Y \in \mathcal{M}_{n, n}$ and let $\{u_i \}_{i=1}^n, \{ v_i
\}_{i=1}^n$ be vectors of length $n$.
Then
\begin{enumerate}
\item $D(a X_1 + b Y, X_2, \dots, X_n) = a D(X_1, X_2, \dots, X_n) + b D(Y, X_2, \dots, X_n)$ for all scalars $a, b$
\item $D(X_1, X_2, \dots, X_n) = D(X_{\pi(1)}, X_{\pi(2)}, \dots, X_{\pi(n)})$ for all permutations $\pi$
\item
$D(X_1 Y, X_2 Y, \dots, X_n Y)
= D(Y X_1, Y X_2, \dots, Y X_n)
= \mydet{Y} D(X_1, X_2, \dots, X_n)$
\item $D(u_1 v_1^T, u_2 v_2^T, \dots, u_n v_n^T) = \mydet{ u_1~u_2~\dots~u_n} \mydet{v_1~v_2~\dots~v_n}$
\end{enumerate}
\end{lemma}
Note that properties {\em 1.} and {\em 2.} combine to show that the mixed
discriminant is multilinear (that is, it is a linear function with respect to
each of its inputs).
We will leave the discussion of generalized singular values to
Section~\ref{sec:gsvd}.
\label{sec:prelims}
\section{Minor-orthogonality}
\newcommand{\mydel}[1]{\delta_{ \{ #1 \} } }
\newcommand{\minor}[3]{[#1]_{#2, #3}}
\newcommand{\mm}{m}
\newcommand{\nn}{n}
\newcommand{\cc}{c}
\newcommand{\dd}{d}
We will say that a random matrix $R \in \mathcal{M}_{m, n}$ is {\em
minor-orthogonal} if for all integers $k, \ell \leq \min \{ m, n \}$ and
all sets $S, T, U, V$ with $|S| = |T| = k$ and $|U| = |V| = \ell$, we have
\[
\expect{R}{\minor{R}{S}{T}\minor{R^T}{U}{V}}
= \frac{1}{\binom{ \max \{ \mm, \nn \} }{k}} \mydel{S = V}\mydel{T = U}.
\]
Given a minor-orthogonal ensemble $R$ it is easy to see from the definition that
\begin{enumerate}
\item $R^T$ is minor orthogonal
\item any submatrix that preserves the largest dimension of $R$ is minor
orthogonal
\end{enumerate}
\begin{lemma}\label{lem:rot}
If $R$ is minor-orthogonal and $Q$ is a fixed matrix for which $QQ^T = I$, then
$QR$ is minor-orthogonal.
\end{lemma}
\begin{proof}
For any sets $S, T$ with $|S| = |T| = k$, we have
\[
\minor{QR}{S}{T} = \sum_{|W| = k} \minor{Q}{S}{W} \minor{R}{W}{T}
\]
so for $|U| = |V| = \ell$, we have
\begin{align*}
\expect{R}{\minor{QR}{S}{T}\minor{(QR)^T}{U}{V}}
&=
\expect{R}{
\sum_{|W| = k}\sum_{|Z| = \ell}
\minor{Q}{S}{W} \minor{R}{W}{T}
\minor{R^T}{U}{Z} \minor{Q^T}{Z}{V}
}
\\&=
\sum_{|W| = k}\sum_{|Z| = \ell} \minor{Q}{S}{W} \minor{Q^T}{Z}{V}
\frac{1}{\binom{ \max \{ \mm, \nn \} }{k}} \mydel{W = Z} \mydel{T = U}
\\&=
\sum_{|W| = k} \frac{1}{\binom{ \max \{ \mm, \nn \} }{k}} \minor{Q}{S}{W}
\minor{Q^T}{W}{V} \mydel{T = U}
\\&=
\frac{1}{\binom{ \max \{ \mm, \nn \} }{k}}
\mydel{S = V}\mydel{T = U}.
\end{align*}
where the last line comes from the fact that $\minor{I}{S}{V} = \mydel{S = V}$.
\end{proof}
\begin{lemma}\label{lem:sps}
The collection of $n \times n$ signed permutation matrices (under the
uniform distribution) is minor-orthogonal.
\end{lemma}
\begin{proof}
We can write a uniformly random signed permutation matrix $Q$ as $Q = E_\chi
P_\pi$ where $P_\pi$ is a uniformly random permutation matrix and $E_\chi$ is a
uniformly random diagonal matrix with $\{ \pm 1 \}$ on the diagonal (and the
two are independent).
Hence for $|S| = |T| = k$ and $|U| = |V| = \ell$, we have
\begin{align*}
\expect{Q}{\minor{Q}{S}{T}\minor{Q^T}{U}{V}}
&=
\expect{\chi, \pi}{\minor{E_\chi P_\pi}{S}{T} \minor{ P_\pi^T E_\chi}{U}{V}}
\\&=
\sum_{|W| = k}\sum_{|Z| = \ell} \expect{\chi, \pi}{\minor{E_\chi}{S}{W}
\minor{P_\pi}{W}{T} \minor{ P_\pi^T }{U}{Z} \minor{E_\chi}{Z}{V}}.
\\&=
\expect{\chi, \pi}{\minor{E_\chi}{S}{S} \minor{P_\pi}{S}{T} \minor{ P_\pi^T
}{U}{V} \minor{E_\chi}{V}{V}}
\\&=
\expect{\chi}{ \prod_{i \in S} \chi_i \prod_{j \in V} \chi_j}
\expect{\pi}{\minor{P_\pi}{S}{T} \minor{ P_\pi^T }{U}{V}}.
\end{align*}
where the penultimate line uses the fact that a diagonal matrix $X$ satisfies
$\minor{X}{A}{B} = 0$ whenever $A \neq B$.
Now the $\chi_i$ are uniformly distributed $\{ \pm 1 \}$ random variables, so
\[
\expect{\chi}{ \prod_{i \in S} \chi_i \prod_{j \in V} \chi_j} = \mydel{S = V}
\]
and so we have
\begin{align*}
\expect{Q}{\minor{Q}{S}{T}\minor{Q^T}{U}{V}}
&=
\expect{\pi}{\minor{P_\pi}{S}{T} \minor{ P_\pi^T }{U}{V}} \mydel{S = V}
\\&=
\expect{\pi}{\minor{P_\pi}{S}{T} \minor{ P_\pi }{S}{U}} \mydel{S = V}
\end{align*}
Furthermore, $\minor{P_\pi}{S}{T} = 0$ except when $T = \pi(S)$, so in order
for both $\minor{P_\pi}{S}{T}$ and $\minor{ P_\pi }{S}{U}$ to be nonzero
simultaneously requires $U = T$.
In the case that $U = T = \pi(S)$, $\minor{P_\pi}{S}{T} = \pm 1$, and so we have
\begin{align*}
\expect{Q}{\minor{Q}{S}{T}\minor{Q^T}{U}{V}}
&=
\expect{\pi}{\minor{P_\pi}{S}{T}^2} \mydel{S = V} \mydel{T = U}
\\&=
\expect{\pi}{\mydel{\pi(S) = T}} \mydel{S = V} \mydel{T = U}
\end{align*}
But it is an easy exercise to check that the probability that a uniformly random permutation
of $[n]$ maps a set $S$ to a set $T$ with $|S| = |T| = k$ is
\[
\frac{k!(n-k)!}{n!} = \frac{1}{\binom{n}{k}}
\]
and so for $|S| = |T| = k$, we have
\[
\expect{Q}{\minor{Q}{S}{T}\minor{Q^T}{U}{V}}
= \frac{1}{\binom{n}{k}} \mydel{S = V} \mydel{T = U}
\]
as required.
\end{proof}
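As an informal numerical sanity check of Lemma~\ref{lem:sps} (not part of the proof), one can estimate the expectation by Monte Carlo sampling of signed permutation matrices; the sketch below uses NumPy with $n = 4$, $k = 2$, and zero-indexed sets, and should return a value close to $1/\binom{4}{2} = 1/6$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 4

def random_signed_permutation(n):
    # uniformly random signed permutation matrix E_chi * P_pi
    P = np.eye(n)[rng.permutation(n)]
    signs = rng.choice([-1.0, 1.0], size=(n, 1))
    return signs * P

def minor(A, rows, cols):
    return np.linalg.det(A[np.ix_(rows, cols)])

S, T = (0, 1), (1, 2)
U, V = (1, 2), (0, 1)       # here U = T and V = S, so the limit should be 1/6
trials, total = 50000, 0.0
for _ in range(trials):
    Q = random_signed_permutation(n)
    total += minor(Q, S, T) * minor(Q.T, U, V)
print(total / trials, 1.0 / 6.0)
\end{verbatim}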
As mentioned at the end of Section~\ref{sec:intro}, Lemma~\ref{lem:sps} can be
extended to show that any suitably symmetric group containing the signed
permutation matrices is minor-orthogonal.
An example of this is given in Corollary~\ref{cor:mo}:
\begin{corollary}\label{cor:mo}
The collection of $n \times n$ orthogonal matrices (under the
Haar measure) is minor-orthogonal.
\end{corollary}
\begin{proof}
Let $R$ be a Haar distributed random orthogonal matrix.
By definition, $RQ$ is also Haar distributed for any fixed orthogonal matrix
$Q$, and so (in particular) this holds when $Q$ is a signed permutation matrix.
Hence by Lemma~\ref{lem:rot}
\[
\expect{R}{\minor{R}{S}{T}\minor{R^T}{U}{V}}
=
\expect{R}{\minor{R Q}{S}{T}\minor{(RQ)^T}{U}{V}}
\]
and so
\[
\expect{R}{\minor{R}{S}{T}\minor{R^T}{U}{V}}
=
\expect{R, Q}{\minor{R}{S}{T}\minor{R^T}{U}{V}}
=
\expect{R, Q}{\minor{RQ}{S}{T}\minor{(RQ)^T}{U}{V}}
\]
where we are now considering $Q$ to be drawn uniformly and independently from
the
collection of signed permutation matrices.
By Lemma~\ref{lem:sps}, $Q$ is minor-orthogonal and so, for fixed $R$,
Lemma~\ref{lem:rot} implies that $RQ$ is minor-orthogonal.
So
\[
\expect{R}{\expect{Q}{\minor{RQ}{S}{T}\minor{(RQ)^T}{U}{V}}}
=
\expect{R}{ \frac{1}{\binom{n}{k}} \mydel{S = V}\mydel{T = U} }
=
\frac{1}{\binom{n}{k}} \mydel{S = V}\mydel{T = U}
\]
as required.
\end{proof}
\label{sec:minor-orthogonal}
\section{The local theorem}\label{sec:local}
The goal of this section is to prove Theorem~\ref{thm:local}.
In fact we will prove something more general --- that Theorem~\ref{thm:local}
holds when $R$ is {\em any} minor-orthogonal ensemble.
For the remainder of the section, we will assume the following setup:
we are given fixed integers $d,m$ and variables $x, y$ and the following
matrices:
\begin{itemize}
\item $R \in \mathcal{M}_{m, m}$, a random matrix drawn from a minor-orthogonal
ensemble (for example, a uniformly distributed signed permutation matrix)
\item $A_1, A_2 \in \mathcal{M}_{d, m}$ and $B_1, B_2 \in \mathcal{M}_{m, d}$ and $U \in
\mathcal{M}_{d, d}$, all of which are independent of $R$ and do not contain the
variables $x$ and $y$ (but could contain other variables).
\end{itemize}
\begin{lemma} \label{lem:first}
Let $k, \ell \leq d$ be nonnegative integers, and let
$S, T \in \binom{[d]}{k}$ and
$U, V \in \binom{[d]}{\ell}$.
Then
\[
\expect{R}{ [A_1 R B_1]_{S, T} [A_2 R^{T} B_2]_{U, V} }
=
\frac{1}{\binom{m}{k}} [A_1 B_2]_{S, V} [A_2 B_1]_{U, T} ~ \delta_{ \{ k =
\ell \} }
\]
\end{lemma}
\begin{proof}
By (\ref{eq:mult}) we have
\begin{align*}
&\expect{R}{[A_1RB_1]_{S, T} [A_2 R^{T} B_2]_{U, V} } \\
&= \frac{1}{\binom{m}{k}}
\sum_{W, X \in \binom{[m]}{k}}\sum_{Y, Z \in \binom{[m]}{\ell}}
[A_1]_{S, W} [B_1]_{X, T} [A_2]_{U, Y} [B_2]_{Z, V}
~ \delta_{ \{ W = Z \} } \delta_{ \{ X = Y \} }
\\&= \frac{1}{\binom{m}{k}}
\sum_{W, X \in \binom{[m]}{k}} [A_1]_{S, W}
[B_1]_{X, T} [A_2]_{U, X} [B_2]_{W, V} ~ \delta_{ \{ k = \ell \} }
\\&= \frac{1}{\binom{m}{k}} [A_1 B_2]_{S, V} [A_2 B_1]_{U, T}~ \delta_{ \{ k =
\ell \} }
\end{align*}
\end{proof}
\begin{lemma} \label{lem:second}
Let $k \leq d$ be a nonnegative integer, and let $S, T \in \binom{[d]}{k}$ and
consider the polynomials
\[
p(x, y)
= \expect{R}{[ x A_1 R B_1 + y A_2 R^{T} B_2]_{S, T} }
= \sum_{i} p_i x^i y^{k-i}
\]
and
\[
q(x, y) = [ x A_1B_2 + y A_2 B_1 ]_{S, T} = \sum_{i} q_i x^i y^{k-i}.
\]
Then
\[
p_i =
\delta_{ \{ i = k/2 \} } \frac{(-1)^i} {\binom{m}{i}} q_i
\]
\end{lemma}
\begin{proof}
We have by (\ref{eq:add})
\[
[ x A_1 R B_1 + y A_2 R^{T} B_2]_{S, T}
= \sum_i \sum_{W, X \subseteq \binom{[k]}{i}}
(-1)^{\| W + X \|_1} x^{|W|} y^{|\overline{W}|}
[A_1 R B_1]_{W(S), X(T)} [A_2 R^{T} B_2]_{\overline{W}(S), \overline{X}(T)}
\]
so by Lemma~\ref{lem:first}, every term in this sum with $|W| \neq
|\overline{W}|$ has expectation $0$; in particular, the coefficient $p_i$
vanishes unless $i = k - i$, that is, unless $i = k/2$.
To complete the lemma, it therefore remains to show that, for $k = 2t$, we have
$p_{t} = \frac{(-1)^{t}} {\binom{m}{t}} q_{t}$.
Using (\ref{eq:add}) and Lemma~\ref{lem:first} again, we have
\begin{align}
p_t
&= \frac{1}{\binom{m}{t}} \sum_{W, X \subseteq \binom{[k]}{t}}
(-1)^{\| W + X \|_1}
[A_1 B_2]_{W(S), \overline{X}(T)} [A_2 B_1]_{\overline{W}(S), X(T)} \notag
\\&= \frac{1}{\binom{m}{t}} \sum_{W, X \subseteq \binom{[k]}{t}}
(-1)^{\| W + \overline{X} \|_1}
[A_1 B_2]_{W(S), X(T)} [A_2 B_1]_{\overline{W}(S), \overline{X}(T)}
\label{eq:pt}
\end{align}
whereas
\begin{equation}\label{eq:qt}
q_t
= \sum_{W, X \subseteq \binom{[k]}{t}}
(-1)^{\| W + X \|_1}
[A_1 B_2]_{W(S), X(T)} [A_2 B_1]_{\overline{W}(S), \overline{X}(T)}.
\end{equation}
Now using the fact that
\[
\| W \|_1 + \| \overline{W} \|_1 = \sum_{i=1}^k i = \binom{k+1}{2}
\]
we get
$
(-1)^{\| W + X \|_1} = (-1)^{\| W + \overline{X} \|_1} (-1)^{\binom{k+1}{2}}
$
and so (\ref{eq:pt}) and (\ref{eq:qt}) combine to give
\[
p_t = \frac{(-1)^{\binom{k+1}{2}}}{\binom{m}{t}} q_t.
\]
Hence it remains to show $(-1)^{\binom{k+1}{2}} = (-1)^t$.
However, (since $k = 2t$) we have
\[
(-1)^{\binom{k+1}{2}} = (-1)^{t(2t+1)} = (-1)^{2 t^2 + t} = (-1)^t
\]
finishing the proof.
\end{proof}
\begin{corollary}
For any sets $S, T$ with $|S| = |T|$,
\[
\expect{R}{[ x A_1 R B_2 + y A_2 R^{T} B_1]_{S, T} }
= \sum_i (-1)^{i} \frac{(m-i)!}{m! i!} (\partial_w)^{i}(\partial_z)^{i}
[ w x A_1B_1 + y z A_2 B_2 ]_{S, T} \bigg|_{w = z = 0}
\]
\end{corollary}
\begin{corollary}\label{cor:Dat1}
For any sets $S, T$ with $|S| = |T|$,
\[
\expect{R}{[(x A_1 + y A_2 R)(B_1 + R^{T} B_2)]_{S, T}}
= \sum_i (-1)^{i} \frac{(m-i)!}{m! i!} (\partial_w)^{i}(\partial_z)^{i}
[ w x A_1B_1 + y z A_2 B_2 ]_{S, T} \bigg|_{w = z = 1}
\]
\end{corollary}
We are now in a position to prove Theorem~\ref{thm:local}:
\begin{proof}[Proof of Theorem~\ref{thm:local}]
By (\ref{eq:add}), it suffices to show
\begin{equation}\label{eq:toshow}
\expect{R}{[(x A_1 + y A_2 R)(B_1 + R^{T} B_2)]_{S, T}} = L_m^{x, y} \left \{
[x A_1 B_1 + y A_2 B_2]_{S, T} \right \}
\end{equation}
for all $|S| = |T| = k$.
For $k > m$, both sides of (\ref{eq:toshow}) are $0$, so we restrict to the
case $k \leq m$.
Using Corollary~\ref{cor:Dat1}, (\ref{eq:toshow}) is equivalent to
showing
\[
L_m^{x, y} \left \{
[x A_1 B_1 + y A_2 B_2]_{S, T} \right \} =
\sum_i (-1)^{i} \frac{(m-i)!}{m! i!} (\partial_w)^{i}(\partial_z)^{i}
[ w x A_1B_1 + y z A_2 B_2 ]_{S, T} \bigg|_{w = z = 1}.
\]
Using (\ref{eq:add}) again, we have
\[
L_m^{x, y} \left \{
[x A_1 B_1 + y A_2 B_2]_{S, T} \right \}
=
\sum_{i} \sum_{W, X \subseteq \binom{[k]}{i}}
(-1)^{\| W + X \|_1}
L_m^{x, y} \{ x^i y^{k-i} \}
[A_1 B_1]_{W(S), \overline{X}(T)} [A_2 B_2]_{\overline{W}(S), X(T)}
\]
so the coefficient of $x^i y^{k-i}$ is
\[
\sum_{W, X \subseteq \binom{[k]}{i}}
(-1)^{\| W + X \|_1}
\frac{(m-i)!(m-k+i)!}{m!(m-k)!}
[A_1 B_1]_{W(S), \overline{X}(T)} [A_2 B_2]_{\overline{W}(S), X(T)}.
\]
On the other hand, we have
\begin{align*}
&\sum_j (-1)^{j} \frac{(m-j)!}{m! j!} (\partial_w)^{j}(\partial_z)^{j}
[ w x A_1B_1 + y z A_2 B_2 ]_{S, T} \bigg|_{w = z = 1}
\\&=
\sum_{i, j} (-1)^{j} \frac{(m-j)!}{m! j!} (\partial_w)^{j}(\partial_z)^{j}
\sum_{W, X \subseteq \binom{[k]}{i}}
(-1)^{\| W + X \|_1}
(wx)^i (yz)^{k-i}
[A_1 B_1]_{W(S), \overline{X}(T)} [A_2 B_2]_{\overline{W}(S), X(T)}
\\&=
\sum_{i, j} (-1)^{j} \frac{(m-j)!}{m! j!}
\sum_{W, X \subseteq \binom{[k]}{i}}
(-1)^{\| W + X \|_1}
\frac{i!}{(i-j)!}\frac{(k-i)!}{(k-i-j)!} x^i y^{k-i}
[A_1 B_1]_{W(S), \overline{X}(T)} [A_2 B_2]_{\overline{W}(S), X(T)}
\end{align*}
so the coefficient of $x^i y^{k-i}$ is
\[
\sum_{W, X \subseteq \binom{[k]}{i}}
(-1)^{\| W + X \|_1}
[A_1 B_1]_{W(S), \overline{X}(T)} [A_2 B_2]_{\overline{W}(S), X(T)}
\sum_j (-1)^{j} \frac{(m-j)!}{m! j!}
\frac{i!}{(i-j)!}\frac{(k-i)!}{(k-i-j)!}
\]
So it suffices to show
\[
\sum_j (-1)^{j} \frac{(m-j)!}{m! j!}
\frac{i!}{(i-j)!}\frac{(k-i)!}{(k-i-j)!}
=
\frac{(m-i)!(m-k+i)!}{m!(m-k)!}.
\]
or, after substituting $a = i$ and $b = k - i$, to show
\[
\sum_j (-1)^{j} \frac{(m-j)!}{m! j!}
\frac{a!}{(a-j)!}\frac{b!}{(b-j)!}
=
\frac{(m-a)!(m-b)!}{m!(m-a-b)!}.
\]
whenever $a + b \leq m$ (which holds here, since $a + b = k \leq m$).
However this follows directly from standard theorems on generalized binomials:
\begin{align*}
\sum_i (-1)^{i} \frac{(m-i)!}{m! i!}
\frac{a!}{(a-i)!}\frac{b!}{(b-i)!}
&= \frac{b!(m-b)!}{m!} \sum_i (-1)^i \binom{a}{i} \binom{m-i}{b-i}
\\&= \frac{b!(m-b)!}{m!} (-1)^b \sum_i \binom{a}{i} \binom{-(m-b+1)}{b-i}
\\&= \frac{b!(m-b)!}{m!} (-1)^b \binom{-(m-b-a+1)}{b}
\\&= \frac{b!(m-b)!}{m!} \binom{m-a}{b}
\\&= \frac{(m-a)!(m-b)!}{(m-a-b)! m!}
\end{align*}
as required.
\end{proof}
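We remark that the last identity is easy to check directly in small cases: for
instance, when $m = 3$ and $a = b = 1$, the left hand side equals
$1 - \frac{1}{3} = \frac{2}{3}$, which agrees with the right hand side
$\frac{2!\,2!}{3!\,1!} = \frac{2}{3}$.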
\section{The global theorem}\label{sec:global}
The goal of this section is to prove Theorem~\ref{thm:global}.
Using the multilinearity of the mixed discriminant, one can show that Theorem~\ref{thm:global} is equivalent to the following theorem (note that $n! = D(I, \dots, I)$ and so this has the familiar form of zonal spherical polynomials --- see \cite{macd}).
\begin{theorem} \label{lem:separate}
Let $A_1, \dots, A_n, B_1, \dots, B_n, Q \in \mathcal{M}_{n,n}$ where $Q$ is a
uniformly distributed signed permutation matrix.
Then
\begin{equation}\label{eq}
\expect{Q}{D(A_1 Q B_1 Q^T, A_2 Q B_2 Q^T, \dots, A_n Q B_n Q^T)} = \frac{1}{n!} D(A_1, A_2, \dots, A_n)D(B_1, B_2, \dots, B_n).
\end{equation}
\end{theorem}
\begin{proof}
We start by noticing that, by the multilinearity of the mixed discriminant, it suffices to prove the theorem when the $A_i, B_i$ are the basis elements $\{ e_i e_j^T \}_{i, j=1}^n$.
So let $A_i = e_{w_i} e_{x_i}^T$ and $B_i = e_{y_i} e_{z_i}^T$ where $w_i, x_i, y_i, z_i \in [n]$.
Then for each $Q$ we have
\[
D(A_1 Q B_1 Q^T, \dots, A_n Q B_n Q^T)
= \mydet{Q} D(e_{w_1} e_{x_1}^T Q e_{y_1} e_{z_1}^T, \dots, e_{w_n} e_{x_n}^T Q e_{y_n} e_{z_n}^T)
\]
where each $e_{x_i}^T Q e_{y_i}$ is a scalar (and so can be factored out).
Hence
\begin{align*}
D(A_1 Q B_1 Q^T, \dots, A_n Q B_n Q^T)
&= \mydet{Q} \left( \prod_i e_{x_i}^T Q e_{y_i} \right) D(e_{w_1} e_{z_1}^T, \dots, e_{w_n} e_{z_n}^T)
\\&= \mydet{Q} \left( \prod_i e_{x_i}^T Q e_{y_i} \right) \mydet{ e_{w_1}~\dots~e_{w_n}}\mydet{ e_{z_1}~\dots~e_{z_n}}
\end{align*}
On the other hand,
\[
D(A_1, A_2, \dots, A_n) = \mydet{ e_{w_1}~\dots~e_{w_n}} \mydet{ e_{x_1}~\dots~e_{x_n}}
\]
and similarly for $B$, so we find that (\ref{eq}) is equivalent to showing
\begin{equation}\label{eq2}
\expect{Q}{\mydet{Q} \prod_i e_{x_i}^T Q e_{y_i}} = \frac{1}{n!} \mydet{ e_{x_1}~\dots~e_{x_n}}\mydet{ e_{y_1}~\dots~e_{y_n}}.
\end{equation}
We now decompose\footnote{This is where we lose the generality of
minor-orthogonal ensembles.} each $Q$ as $Q = P_\pi E_\chi$ where $P_\pi$ is a
permutation matrix and $E_\chi$ is a diagonal matrix with diagonal entries
$\chi_1, \dots, \chi_n \in \{ \pm 1 \}$.
Hence $\mydet{Q} = \left(\prod_i \chi_i \right) \mydet{P_\pi}$ and
\[
e_{x_i}^T Q e_{y_i}
= e_{x_i}^T P_\pi E_\chi e_{y_i}
= \chi_{y_i} e_{x_i}^T e_{\pi(y_i)}
= \chi_{y_i} \delta_{ \{ x_i = \pi(y_i) \} }
\]
and so
\[
\expect{Q}{\mydet{Q} \left( \prod_i e_{x_i}^T Q e_{y_i} \right)}
= \frac{1}{n!} \sum_{\pi} \mydet{P_\pi} \left(\prod_i \delta_{ \{ x_i = \pi(y_i) \}} \right) \expect{\chi_1, \dots, \chi_n}{ \left( \prod_i \chi_i \chi_{y_i} \right)}.
\]
Now it is easy to see that
\[
\expect{\chi_1, \dots, \chi_n}{ \left( \prod_i \chi_i \chi_{y_i} \right)} = 1
\]
whenever the $y_i$ are distinct (that is, form a permutation of $[n]$) and $0$ otherwise.
For distinct $y_i$, it should then be clear that
\[
\left(\prod_i \delta_{ \{ x_i = \pi(y_i) \}} \right) = 0
\]
unless the $x_i$ are also distinct.
Of course, this also holds for $\mydet{ e_{x_1}~\dots~e_{x_n}}$ and $\mydet{ e_{y_1}~\dots~e_{y_n}}$ and so (\ref{eq2}) is true whenever the $x_i$ or $y_i$ are not distinct (as both sides are $0$).
Thus it remains to consider the case when $x_i = \sigma(i)$ for some $\sigma$ and $y_i = \tau(i)$ for some $\tau$.
That is, we must show
\begin{equation}\label{eq3}
\sum_{\pi} \mydet{P_\pi} \left(\prod_i \delta_{ \{ \sigma(i) = \pi(\tau(i)) \}} \right) = \mydet{P_\sigma} \mydet{P_\tau}
\end{equation}
for all permutations $\tau$ and $\sigma$.
But now it is easy to see that
$\left(\prod_i \delta_{ \{ \sigma(i) = \pi(\tau(i)) \}} \right) = 0$
for all permutations $\pi$ except for one: $\pi = \sigma \circ \tau^{-1}$.
Hence
\[
\sum_{\pi} \mydet{P_\pi} \left(\prod_i \delta_{ \{ \sigma(i) = \pi(\tau(i)) \}} \right)
= \mydet{P_{\sigma \circ \tau^{-1}}}
= \mydet{P_{\sigma}}\mydet{P_{\tau^{-1}}}
\]
where
\[
\mydet{P_{\tau^{-1}}} = \mydet{P_\tau^{-1}} = \mydet{P_\tau^T} = \mydet{P_\tau},
\]
proving (\ref{eq3}), which in turn proves the remaining (nonzero) cases of (\ref{eq2}), and therefore the theorem.
\end{proof}
\section{Applications}\label{sec:apps}
In this section, we list some direct applications of the main theorems.
\subsection{Permanents of Low Rank Matrices}
Our first application is an algorithm for computing permanents of low rank
matrices that was originally discovered by Barvinok \cite{bar} using similar
tools.
Barvinok's algorithm takes advantage of a well known connection between
permanents and mixed discriminants: the permanent of a matrix $M \in \mathcal{M}_{n,
n}$ is the mixed discriminant $D(A_1, \dots, A_n)$ where each $A_i$ is a
diagonal matrix with diagonal matching the $i$th column of $M$.
When working with diagonal matrices, the signed permutation matrices behave in a particularly nice way: the $\pm 1$ entries in $Q$ and $Q^{T}$ cancel, so one can reduce such formulas to an average over (unsigned) permutation matrices.
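Indeed, writing $Q = P_\pi E_\chi$ with $P_\pi$ a permutation matrix and
$E_\chi$ a diagonal $\pm 1$ matrix, one has $E_\chi B E_\chi = B$ whenever $B$
is diagonal, and therefore $A Q B Q^T = A P_\pi B P_\pi^T$ for diagonal $A$
and $B$.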
\begin{corollary}
Let $\{ A_i \}_{i=1}^k$ and $\{ B_i \}_{i=1}^k$ be $n \times n$ diagonal matrices and consider the polynomials
\[
p(x_1, \dots, x_k) = \mydet{ \sum_i x_i A_i }
\quad \text{and}\quad
q(x_1, \dots, x_k) = \mydet{ \sum_i x_i B_i }
\]
Then
\[
\frac{1}{n!} \sum_{P \in \mathcal{P}_n} \mydet{ \sum_i x_i A_i P B_i P^T } = [ p \star q](x_1, \dots, x_k)
\]
\end{corollary}
Given a vector $v$, let $\diag(v)$ denote the diagonal matrix whose diagonal
entries are $v$.
Note that if $A_i = \diag(a_i)$ and $B_i = \diag(b_i)$ for each $i$, then
\[
\sum_{P \in \mathcal{P}_n} \mydet{ \sum_i x_i A_i P B_i P^T } = \perm\left[ \sum_i x_i a_i b_i^T \right].
\]
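As an illustration (and purely as a sanity check, not as part of the algorithm
below), the displayed identity can be verified numerically by brute force for
small $n$; the following sketch assumes the availability of \texttt{numpy}:
\begin{verbatim}
# Check: sum_P det( sum_i x_i A_i P B_i P^T ) = perm( sum_i x_i a_i b_i^T )
# for random small data, where A_i = diag(a_i) and B_i = diag(b_i).
import itertools
import numpy as np

def permanent(M):
    n = M.shape[0]
    return sum(np.prod([M[j, s[j]] for j in range(n)])
               for s in itertools.permutations(range(n)))

rng = np.random.default_rng(0)
n, k = 4, 2
a, b = rng.standard_normal((k, n)), rng.standard_normal((k, n))
x = rng.standard_normal(k)

lhs = 0.0
for sigma in itertools.permutations(range(n)):
    P = np.eye(n)[list(sigma)]
    lhs += np.linalg.det(sum(x[i] * np.diag(a[i]) @ P @ np.diag(b[i]) @ P.T
                             for i in range(k)))

rhs = permanent(sum(x[i] * np.outer(a[i], b[i]) for i in range(k)))
assert np.isclose(lhs, rhs)
\end{verbatim}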
\vspace{0.25cm}
\noindent \textbf{Algorithm to find $\perm\left[ \sum_{i=1}^k a_i b_i^T \right]$} for $a_i, b_i \in \mathbb{R}^n$.
\begin{enumerate}
\item Form $A_i = \diag(a_i)$ and $B_i = \diag(b_i)$.
\item Compute $p(x_1, \dots, x_k) = \mydet{ \sum_{i=1}^k x_i A_i }$ and $q(x_1, \dots, x_k) = \mydet{ \sum_{i=1}^k x_i B_i }$
\item Compute $[p \star q](x_1, \dots, x_k)$
\item $\perm\left[ \sum_i a_i b_i^T \right] = n! [p \star q](1, \dots, 1)$.
\end{enumerate}
The complexity of this algorithm depends primarily on the number of terms in
the polynomials $p$ and $q$, which (in general) will be the
number of nonnegative integer solutions to the equation $\sum_{i=1}^k t_i = n$,
which is known to be $\binom{n+k-1}{k-1}$ (see
\cite{stanley}).
\subsection{Other convolutions}\label{sec:convolutions}
In this section, we show that the standard univariate convolutions defined in
\cite{ffp} can each be derived from the main theorems.
\subsubsection{Additive convolution of eigenvalues}\label{sec:additive}
Given matrices $A, B \in \mathcal{M}_{d, d}$ and polynomials
\[
p(x) = \mydet{xI - A}
\quad \text{and}\quad
q(x) = \mydet{xI - B}
\]
the {\em additive convolution} of $p$ and $q$ can be written as
\[
[p \boxplus q](x) = \expect{Q}{\mydet{xI - A - Q B Q^T}}
\]
where $Q$ can be chosen to be any minor-orthogonal ensemble (see \cite{dui}).
This can be achieved by setting
\[
\hat{p}(x, y, z) = \mydet{x I + y A + z I}
\quad \text{and}\quad
\hat{q}(x, y, z) = \mydet{x I + y I + z B}
\]
and applying Theorem~\ref{thm:global} to get
\[
\hat{p} \star \hat{q} = \expect{R}{\mydet{x I + y A + z R B R^T}}
\]
The formula for $[p \boxplus q]$ follows by setting $y = z = -1$.
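For instance, in the trivial case $d = 1$ with $A = (a)$ and $B = (b)$,
minor-orthogonality gives $\expect{Q}{Q^2} = 1$, and so
$[p \boxplus q](x) = \expect{Q}{x - a - Q^2 b} = x - a - b$: the roots simply
add, as one would expect of an additive convolution.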
\subsubsection{Multiplicative convolution of eigenvalues}
\label{sec:multiplicative}
Given matrices $A, B \in \mathcal{M}_{d, d}$ and polynomials
\[
p(x) = \mydet{xI - A}
\quad \text{and}\quad
q(x) = \mydet{xI - B}
\]
the {\em multiplicative convolution} of $p$ and $q$ can be written as
\[
[p \boxtimes q](x) = \expect{Q}{\mydet{xI - A Q B Q^T}}
\]
where $Q$ can be chosen to be any minor-orthogonal ensemble (see \cite{dui}).
Given matrices $A$ and $B$, this can be achieved by setting
\[
\hat{p}(x, y) = \mydet{x I + y A}
\quad \text{and}\quad
\hat{q}(x, y) = \mydet{x I + y B}
\]
and applying Theorem~\ref{thm:global} to get
\[
\hat{p} \star \hat{q} = \expect{Q}{\mydet{x I + y A Q B Q^T}}.
\]
The formula for $[p \boxtimes q]$ follows by setting $y = -1$.
\subsubsection{Additive convolution of singular values}\label{sec:rectangular}
Given matrices $A, B \in \mathcal{M}_{d, n+d}$ and polynomials
\[
p(x) = \mydet{xI - AA^T}
\quad \text{and}\quad
q(x) = \mydet{xI - BB^T}
\]
the {\em rectangular additive convolution} of $p$ and $q$ can be written as
\[
[p \boxplus_{d}^n q](x)
=
\expect{Q, R}{\mydet{xI - (A + Q B R)(A + Q B R)^T}}
\]
where $Q$ and $R$ can be chosen to be any (independent) minor-orthogonal
ensembles of the appropriate size (see \cite{dui}).
This can be achieved by
setting
\[
\hat{r}(x, y, z) = \mydet{x I + ( y A + z Q B R) (A^T + R^T B^T Q^T) }.
\]
Assuming $Q$ and $R$ are independent, we can do the expectation in $R$ using
Theorem~\ref{thm:local} to get
\[
\expect{R}{\hat{r}(x, y, z)}
= L_m^{y, z} \left\{ \mydet{x I + y AA^T + z Q BB^T Q^T} \right\}.
\]
and then we can compute the remaining expectation
\[
\expect{Q}{\mydet{x I + y AA^T + z Q BB^T Q^T}}
\]
in terms of $p$ and $q$ using the method in Section~\ref{sec:additive}.
\subsubsection{Multiplicative convolution of non-Hermitian eigenvalues}
\label{sec:nonHermitian}
For the purpose of studying the eigenvalues of non-Hermitian matrices, one
could use the polynomial convolutions in the previous sections, but one quickly
realizes that they do not hold as much information as one would like.
This is due in part to the fact that, unlike in the Hermitian case, there can
be nontrivial relations between the left eigenvectors and right eigenvectors of
a non-Hermitian matrix (we refer the interested reader to \cite{benno} where a
multivariate theory is developed).
However it is well known that a non-Hermitian matrix $A$ can be written as $A =
H + i K$ where
\[
H = \frac{A + A^*}{2}
\quad \text{and}\quad
K = \frac{A - A^*}{2i}
\]
are both Hermitian.
One can then consider the multivariate polynomial
\[
p(x, y, z) = \mydet{x I + y H + z K}
\]
for which an additive convolution follows easily from the Hermitian version in
Section~\ref{sec:additive}.
The multiplicative version, however, is more complicated.
Given pairs of Hermitian matrices $(H_1, K_1)$ and $(H_2, K_2)$, one would like
to ``convolve'' these matrices in a way that preserves the dichotomy between
real and imaginary parts.
One such possibility would be the polynomial
\[
r(x, y, z)
= \expect{Q}{\mydet{x I + y (H_1 Q H_2 Q^* - K_1 Q K_2 Q^*) + z(H_1 Q K_2 Q^* +
K_1 Q H_2 Q^*) }}
\]
but it is not clear (a priori) that the coefficients of this polynomial are
functions of the coefficients of the polynomials
\[
p_1(x) = \mydet{x I + y H_1 + z K_1}
\quad \text{and}\quad
p_2(x) = \mydet{x I + y H_2 + z K_2}.
\]
However it is easy to compute $r(x, y, z)$ using Theorem~\ref{thm:global}.
Letting
\begin{align*}
q_1(x, a, b, c, d) &= \mydet{x I + a H_1 + b H_1 + c K_1 + d K_1} \\
q_2(x, a, b, c, d) &= \mydet{x I + a H_2 + b K_2 + c H_2 + d K_2}
\end{align*}
we have that
\[
[q_1 \star q_2](x, a, b, c, d) =
\expect{Q}{\mydet{x I + a H_1 Q H_2 Q^* + b H_1 Q K_2 Q^* + c K_1 Q H_2 Q^* + d
K_1 Q K_2 Q^*}}
\]
and so $r(x, y, z) = [q_1 \star q_2](x, y, z, z, -y)$.
\section{An additive convolution for generalized singular values}
\label{sec:gsvd}
There are three standard ensembles that one studies in random matrix
theory: the Wigner ensemble, Wishart ensemble, and Jacobi ensemble
\cite{forrester}.
All are alike in that they can be derived from matrices with independent
Gaussian entries; the difference between them, as was first noted by Edelman
\cite{edelman}, can be
paralleled to different matrix decompositions.
The Wigner ensemble is Hermitian and the relevant distribution is the
eigenvalue distribution.
The Wishart ensemble is often thought of as a Hermitian ensemble (with an
eigenvalue distribution) but in some sense the more natural way to view it is
as a distribution on singular values (the first step in computing them being
to form a Hermitian matrix).
The Jacobi ensemble, in this ansatz, is most naturally viewed as a distribution
on ``generalized singular values.''
One can attempt to explore this trichotomy further by studying how the
eigenvalues/singular values/generalized singular values of matrices behave with
respect to more general matrix operations (and more general random
matrices).
This is one of the motivations behind the polynomial convolutions mentioned in
Section~\ref{sec:apps}: the convolution in Section~\ref{sec:additive} computes
statistics concerning the eigenvalues of a unitarily invariant sum, whereas the
convolution in Section~\ref{sec:rectangular} does similarly in the case of
singular values.
The purpose of this section is to introduce a polynomial that can be used to
study the final case: a unitarily invariant addition of generalized singular
values.
While the previous two could be accomplished using univariate convolutions, it
will become clear that this is not possible for the generalized singular value
decomposition.
Those interested in other aspects of the GSVD should consult the references
\cite{golub,
gsvd}.
Before jumping into a discussion regarding the generalized singular value
decomposition, it will be useful for us to recall the definition of the
pseudo-inverse of a matrix.
Given any matrix $X \in \mathcal{M}_{m, n}$ with rank $r$, the normal singular value
decomposition of matrices allows us to write $X = U \Sigma V^{T}$ where
\begin{itemize}
\item $U \in \mathcal{M}_{m, r}$ satisfies $U^T U = I$
\item $V \in \mathcal{M}_{n, r}$ satisfies $V^T V = I$
\item $\Sigma \in \mathcal{M}_{r, r}$ is diagonal and invertible.
\end{itemize}
The {\em pseudo-inverse} of $X$ (written $X^{\dagger} \in \mathcal{M}_{n, m}$) is then
defined to be $X^{\dagger} = V \Sigma^{-1} U^{T}$.
The name ``pseudo-inverse'' comes from the fact that
\begin{itemize}
\item $X X^{\dagger} \in \mathcal{M}_{m, m}$ is the projection onto the column space of
$U$ (equivalently, the column space of $X$), and
\item $X^{\dagger} X \in \mathcal{M}_{n, n}$ is the projection onto the column space of
$V$.
\end{itemize}
So, in particular, if $n = r$ then $X^{\dagger} X = I_{n}$ and if $m = n = r$
then $X$ is invertible and $X^{\dagger} = X^{-1}$.
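For example, if $X = e_1 \in \mathcal{M}_{2, 1}$ (the first standard basis vector of
$\mathbb{R}^2$, viewed as a $2 \times 1$ matrix), then $X^{\dagger} = e_1^T$, the
product $X X^{\dagger} = e_1 e_1^T$ is the orthogonal projection onto the first
coordinate axis, and $X^{\dagger} X = I_1$.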
Now fix integers $\ss, \tt, m$ and let $M \in \mathcal{M}_{(\ss + \tt), m}$ have
rank $r$ and block structure
\[
M = \begin{bmatrix}
M_1 \\
M_2
\end{bmatrix}
\begin{array}{c} \ss \\ \tt \end{array}
\]
The {\em generalized singular value decomposition (GSVD)} provides a
decomposition of $M_1$ and $M_2$ as
\[
M_1 = U_1 C H
\quad \text{and}\quad
M_2 = U_2 S H
\]
where
\begin{itemize}
\item $U_1 \in \mathcal{M}_{\ss, r}$ and $U_2 \in \mathcal{M}_{\tt, r}$ satisfy
$U_1^T U_1 = U_2^T U_2 = I_r$
\item $C, S \in \mathcal{M}_{r, r}$ are positive semidefinite diagonal matrices
with $C^T C + S^T S = I$, and
\item $H \in \mathcal{M}_{r, m}$ is some matrix with rank $r$.
\end{itemize}
In particular, the diagonal entries of $C$ and $S$ satisfy $c_i^2 + s_i^2 = 1$,
and as such, the matrices $C$ and $S$ are often referred to as {\em cosine}
and {\em sine} matrices.
Note that when $M_2$ has rank $r$, the matrix $S$ will be invertible and then
\[
M_1 M_2^{\dagger}
= (U_1 C H) (U_2 S H)^{\dagger}
= U_1 C S^{-1} U_2^T
\]
will be the (usual) SVD of $M_1 M_2^{\dagger}$, which is the reason for the
nomenclature ``generalized'' SVD.
When $M$ has rank $m$, there is an easy way to find the generalized singular
values without needing to form the entire decomposition.
Letting $W_1 = M_1^T M_1$ and $W_2 = M_2^T M_2$, the GSVD implies
that
\begin{equation}\label{eq:W}
W = (W_1 + W_2)^{-1/2} W_1 (W_1 + W_2)^{-1/2}
\end{equation}
is a positive semidefinite Hermitian matrix which is unitarily similar to
$C^T C$ (all of whose eigenvalues are in the interval $[0, 1]$).
Thus generalized singular values can be found directly from the characteristic
polynomial
\begin{equation}\label{eq:gsvdpoly}
\mydet{x I - (W_1 + W_2)^{-1/2} W_1 (W_1 + W_2)^{-1/2}}
=
\mydet{(W_1 + W_2)^{-1}}\mydet{(x-1)W_1 + x W_2}
\end{equation}
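Indeed, the matrix inside the determinant on the left can be written as
$(W_1 + W_2)^{-1/2}\left[ x(W_1 + W_2) - W_1 \right](W_1 + W_2)^{-1/2}$, and
since $x(W_1 + W_2) - W_1 = (x-1)W_1 + x W_2$, taking determinants of the
three factors gives (\ref{eq:gsvdpoly}).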
So now assume we are given $M, N \in \mathcal{M}_{(\ss + \tt), m}$ with block
structure
\[
M = \begin{bmatrix}
M_1 \\
M_2
\end{bmatrix}
\begin{array}{c} \ss \\ \tt \end{array}
\quad \text{and}\quad
N = \begin{bmatrix}
N_1 \\
N_2
\end{bmatrix}
\begin{array}{c} \ss \\ \tt \end{array}
\]
and we form the random matrix
\[
P = \begin{bmatrix}
P_1 \\
P_2
\end{bmatrix}
=
\begin{bmatrix}
M_1 + R_1 N_1 Q \\
M_2 + R_2 N_2 Q
\end{bmatrix}
\]
where $R_1, R_2, Q$ are independent signed permutation matrices of the
appropriate sizes.
Then the natural question is: what (if anything) can we say about the
generalized singular values of $P$ given the generalized singular values of $M$
and $N$?
By what we observed in (\ref{eq:gsvdpoly}), this means finding a correspondence
between the polynomials
\[
\mydet{(x-1)M_1 + x M_2}
\quad \text{and}\quad
\mydet{(x-1)N_1 + x N_2}
\quad \text{and}\quad
\mydet{(x-1)P_1 + x P_2}
\]
The obvious first attempt is to consider polynomials of the form
\[
p(x) = \mydet{x A_1^T A_1 + A_2^T A_2}.
\]
However when one starts to perturb $A_1$ and $A_2$ independently, one quickly
realizes that simply knowing the generalized singular values is not enough ---
information about $A_1$ and $A_2$ themselves is needed.
This motivates using a polynomial that keeps $A_1$ and $A_2$ independent (to
some extent), which leads to the following definition:
Given $A \in \mathcal{M}_{(\ss + \tt), m}$ with block
structure
\[
A = \begin{bmatrix}
A_1 \\
A_2
\end{bmatrix}
\begin{array}{c} \ss \\ \tt \end{array}
\]
we define the {\em generalized singular value characteristic polynomial
(GSVCP)} to be
\begin{equation}\label{eq:trivariate}
p_A(x, y, z) =
\mydet{x I + y A_1^T A_1 + z A_2^T A_2}
\end{equation}
The next theorem shows that (\ref{eq:trivariate}) defines a valid convolution
--- that is, one can compute the GSVCP of a unitarily invariant sum of matrices
from the GSVCPs of the summands.
\begin{theorem}\label{thm:gsvd}
Let $M, N, W \in \mathcal{M}_{(\ss + \tt), m}$ with block structure
\[
M = \begin{bmatrix}
M_1 \\
M_2
\end{bmatrix}
\begin{array}{c} \ss \\ \tt \end{array}
\quad \text{and}\quad
N = \begin{bmatrix}
N_1 \\
N_2
\end{bmatrix}
\begin{array}{c} \ss \\ \tt \end{array}
\quad \text{and}\quad
W = \begin{bmatrix}
W_1 \\
W_2
\end{bmatrix}
=
\begin{bmatrix}
M_1 + R_1 N_1 Q \\
M_2 + R_2 N_2 Q
\end{bmatrix}
\]
where $R_1, R_2, Q$ are independent, uniformly distributed, signed permutation
matrices of the
appropriate sizes and let
\begin{alignat*}{4}
p_M(x, y, z)
&=
\mydet{x I + y M_1^T M_1 + z M_2^T M_2}
&=& \sum_{j, k}
\frac{x^{m-j-k}}{(m-j-k)!}
\frac{y^j}{(\ss-j)!}
\frac{z^k}{(\tt-k)!}
p_{jk}
\\
p_N(x, y, z)
&=
\mydet{x I + y N_1^T N_1 + z N_2^T N_2}
&=&
\sum_{j, k}
\frac{x^{m-j-k}}{(m-j-k)!}
\frac{y^j}{(\ss-j)!}
\frac{z^k}{(\tt-k)!}
q_{jk}
\\
p_W(x, y, z)
&=
\mydet{x I + y W_1^T W_1 + z W_2^T W_2}
&=&
\sum_{j, k}
\frac{x^{m-j-k}}{(m-j-k)!}
\frac{y^j}{(\ss-j)!}
\frac{z^k}{(\tt-k)!}
r_{jk}
\end{alignat*}
be their GSVCPs, where each
$r_{jk} = r_{jk}(R_1, R_2, Q)$ is a random variable.
Then
\[
\expect{Q, R_1, R_2}{r_{jk}} =
\begin{cases}
\frac{1}{m!\ss!\tt!}
\sum_{\beta = 0}^{j} \sum_{\delta = 0}^{k}
p_{\beta,\delta} q_{j-\beta, k-\delta}
& \text{for $j \leq \ss, k \leq \tt, j+k \leq m$} \\
0 & \text{otherwise}
\end{cases}
\]
\end{theorem}
\begin{proof}
We start by changing variables to match Theorem~\ref{thm:local}:
let $f(x, s, t, u, v)$ denote the polynomial
\[
\expect{Q, R_1, R_2}{
\mydet{x I
+ (s M_1 + t R_1 N_1 Q)^T (M_1 + R_1 N_1 Q)
+ (u M_2 + v R_2 N_2 Q)^T (M_2 + R_2 N_2 Q)}}.
\]
We now do the expectations separately, starting with $R_2$ and then $R_1$.
By Theorem~\ref{thm:local}, we get
\begin{align*}
f(x, s, t, u, v)
&=
L_{\tt}^{u, v}
\expect{Q, R_1}{
\mydet{x I
+ (s M_1 + t R_1 N_1 Q)^T (M_1 + R_1 N_1 Q)
+ (u M_2^T M_2 + v Q^T N_2^T N_2 Q) }}
\\&=
L_{\ss}^{s, t}L_{\tt}^{u, v} \expect{Q}{
\mydet{x I
+ (s M_1^T M_1 + t Q^T N_1^T N_1 Q)
+ (u M_2^T M_2 + v Q^T N_2^T N_2 Q) }}.
\end{align*}
By Theorem~\ref{thm:global} we have
\[
\expect{Q}{
\mydet{x I
+ (s M_1^T M_1 + t Q^T N_1^T N_1 Q)
+ (u M_2^T M_2 + v Q^T N_2^T N_2 Q }}
=
[g \star h](x, s, t, u, v)
\]
where
\[
g(x, s, t, u, v)
= \mydet{ xI + s M_1^T M_1 + t I + u M_2^T M_2 + v I}
= p_M(x + t + v, s, u)
\]
and
\[
h(x, s, t, u, v)
= \mydet{ xI + s I + t N_1^T N_1 + u I + v N_2^T N_2}
= p_N(x + s + u,t, v)
\]
are each $m$-homogeneous polynomials.
We can now go in the reverse direction to compute $f(x, s, t, u, v)$ from the
expansions in
the hypothesis.
Firstly, we have
\begin{align*}
g(x, s, t, u, v)
&=
\sum_{j,k}
\frac{(x + t + v)^{m-j-k}}{(m-j-k)!}
\frac{s^j}{(\ss-j)!}
\frac{u^k}{(\tt-k)!}
p_{jk}
\\&=
\sum_{j,k} \sum_{a,b}
\frac{x^{m-j-k-a-b}}{(m-j-k-a-b)!}
\frac{t^a}{a!}
\frac{v^b}{b!}
\frac{s^j}{(\ss-j)!}
\frac{u^k}{(\tt-k)!}
p_{jk}
\\&=
\sum_{\alpha + \beta + \gamma + \delta + \sigma = m}
\frac{x^\alpha s^\beta t^{\gamma} u^\delta
v^{\sigma}}{\alpha!(\ss-\beta)!(\tt-\delta)!\gamma!\sigma!}
p_{\beta \delta}
\end{align*}
and similarly
\begin{align*}
h(x, s, t, u, v)
&=
\sum_{j,k}
\frac{(x + s + u)^{m-j-k}}{(m-j-k)!}
\frac{t^j}{(\ss-j)!}
\frac{v^k}{(\tt-k)!}
q_{jk}
\\&=
\sum_{j,k} \sum_{a,b}
\frac{x^{m-j-k-a-b}}{(m-j-k-a-b)!}
\frac{s^a}{a!}
\frac{u^b}{b!}
\frac{t^j}{(\ss-j)!}
\frac{v^k}{(\tt-k)!}
q_{jk}
\\&=
\sum_{\alpha + \beta + \gamma + \delta + \sigma = m}
\frac{x^\alpha s^\beta t^{\gamma} u^\delta
v^{\sigma}}{\alpha!(\ss-\gamma)!(\tt-\sigma)!\beta!\delta!}
q_{\gamma \sigma}
\end{align*}
Hence by definition of the star product, we have
\[
[g \star h](x, s, t, u, v)
= \frac{1}{m!}\sum_{\alpha, \beta, \gamma, \delta, \sigma}
\frac{x^\alpha s^\beta t^{\gamma} u^\delta v^{\sigma}}
{\alpha!(\ss-\beta)!(\ss-\gamma)!(\tt-\delta)!(\tt-\sigma)! }
p_{\beta\delta}q_{\gamma \sigma}
\]
and so
\begin{align*}
f(x, s, t, u, v)
&= L_{\ss}^{s, t}L_{\tt}^{u, v} \{ ~ [g \star h](x, s, t, u, v)~ \}
\\&= \frac{1}{m!\ss!\tt!}
\sum_{\substack{\alpha + \beta + \gamma + \delta + \sigma = m \\ \beta +
\gamma \leq \ss \\ \delta + \sigma \leq \tt}}
\frac{x^\alpha s^\beta t^{\gamma} u^\delta v^{\sigma}}
{\alpha!(\ss-\beta-\gamma)!(\tt-\delta-\sigma)!}
p_{\beta \delta}
q_{\gamma \sigma}
\end{align*}
Putting all of this together, the polynomial we are interested in is
\begin{align*}
\expect{Q, R_1, R_2}{ p_W(x, y, z) }
&= f(x, y, y, z, z)
\\&=
\frac{1}{m!\ss!\tt!}
\sum_{\substack{\alpha + \beta + \gamma + \delta + \sigma = m \\ \beta +
\gamma \leq \ss \\ \delta + \sigma \leq \tt}}
\frac{x^\alpha y^{\beta + \gamma} z^{\delta + \sigma}}
{\alpha!(\ss-\beta-\gamma)!(\tt-\delta-\sigma)!}
p_{\beta \delta}
q_{\gamma \sigma}
\\&=
\frac{1}{m!\ss!\tt!}
\sum_{\substack{j\leq \ss \\ k \leq \tt \\ j+k \leq m}}
\frac{x^{m-j-k}}{(m-j-k)!} \frac{y^j}{ (\ss - j)! }\frac{z^k}
{(\tt-k)!}
\sum_{\beta = 0}^{j} \sum_{\delta = 0}^{k}
p_{\beta,\delta} q_{j-\beta, k-\delta}
\end{align*}
as required.
\end{proof}
Note that while Theorem~\ref{thm:gsvd} considers fixed matrices $M$ and $N$,
one can easily extend it to random matrices using linearity of expectation.
We finish the section by observing that the convolution described by
Theorem~\ref{thm:gsvd} has a remarkably simple form when the polynomials are
expressed in the context of differential operators.
\begin{corollary}\label{cor:gsvd}
Let $p_M, p_N, p_W$ be the polynomials in Theorem~\ref{thm:gsvd} and let $P, Q$
be
bivariate polynomials for which
\[
y^{\ss} z^{\tt} p_M(x, 1/y, 1/z)
= P(\partial_x \partial_y, \partial_x \partial_z) \{
x^m y^{\ss} z^{\tt} \}
\]
and
\[
y^{\ss} z^{\tt} p_N(x, 1/y, 1/z)
= Q(\partial_x \partial_y, \partial_x \partial_z) \{
x^m y^{\ss} z^{\tt} \}.
\]
Then
\[
\expect{R_1, R_2, Q}{y^{\ss} z^{\tt} p_W(x, 1/y, 1/z)} =
P(\partial_x \partial_y, \partial_x \partial_z)Q(\partial_x \partial_y,
\partial_x \partial_z) \{
x^{m} y^{\ss} z^{\tt} \}.
\]
\end{corollary}
Corollary~\ref{cor:gsvd} suggests that if $p_A(x, y, z)$ is the polynomial in
(\ref{eq:trivariate}), then a more reasonable polynomial to consider would be
\[
q_A(x, y, z)
= y^{\ss} z^{\tt} p_A(x, 1/y, 1/z) = y^{\ss-m}z^{\tt-m} \mydet{x y z I + z
A_1^T A_1 + y A_2^T A_2}.
\]
Another advantage to this alternative form is that there is a more direct
matrix model that one can work with, as one can easily check that
\[
\det
\begin{bmatrix}
x I_{m} & A_1^T & A_2^T \\
A_1 & y I_{\ss} & 0 \\
A_2 & 0 & z I_{\tt}
\end{bmatrix}
= y^{\ss - m} z^{\tt-m} \mydet{x y z I - z A_1^T A_1 - y A_2^T
A_2 }.
\]
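One way to verify this identity is to take the Schur complement with respect
to the lower right block: for $y, z \neq 0$ the determinant equals
$y^{\ss} z^{\tt} \mydet{x I_m - \tfrac{1}{y} A_1^T A_1 - \tfrac{1}{z} A_2^T A_2}$,
which equals the right hand side, as can be seen by writing
$x I_m - \tfrac{1}{y} A_1^T A_1 - \tfrac{1}{z} A_2^T A_2 =
\tfrac{1}{yz}\left( x y z I - z A_1^T A_1 - y A_2^T A_2 \right)$.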
Obviously the two are simple transformations of each other; we mention it
because particular applications can be better suited to one versus
the other.
\section{Open Problems}\label{sec:conc}
The proof presented in Section~\ref{sec:local} applies more generally than
Theorem~\ref{thm:local} in the respect that it holds for all minor-orthogonal
ensembles.
We suspect that Theorem~\ref{thm:global} has a similar generalization, but have
not been able to prove it.
This is not much of a hindrance when it comes to theoretical applications: the
majority of the minor-orthogonal ensembles that one comes across in random
matrix theory contain the signed permutation matrices as a subgroup and so
Theorem~\ref{thm:global} can be extended to them by the averaging argument in
Corollary~\ref{cor:mo}.
The one notable situation where this is not the case is that of the uniform
distribution over the standard representation of $S_{n+1}$ (what you get when
you turn the collection of $(n+1) \times (n+1)$ permutation matrices into $n
\times n$ matrices by restricting each one to the orthogonal complement of the constant vector).
This is (as far as the author knows) the minor-orthogonal ensemble with the
smallest support and so is often useful for computational purposes.
Those familiar with the connection between polynomial convolutions and free
probability (see, for example, \cite{ffmain}), might recognize the
convolutions in Sections~\ref{sec:additive}, \ref{sec:multiplicative}, and
\ref{sec:rectangular} as the ``finite free'' versions of the additive,
multiplicative, and rectangular convolutions from classical free probability.
The operators from free probability have known closure properties (they map
distributions on the real line to distributions on the real line) and so one
might hope the same is true for the finite analogues.
This turns out to be true: the convolution in Section~\ref{sec:additive} maps
Hermitian determinantal representations
to Hermitian determinantal representations and
the ones in Sections~\ref{sec:multiplicative} and \ref{sec:rectangular} map
positive semidefinite representations to positive semidefinite representations.
In the multivariate case, one can show (using a powerful theorem of Helton and
Vinnikov \cite{hv}) that the convolution in Section~\ref{sec:gsvd} preserves
positive semidefinite representations as well.
Continuing the analogy, the operator $\star$ would be the natural finite
analogue of the {\em box product} $\boxed{\star}$ from free probability
\cite{nica-speicher}
and so one might hope that it, too, has some sort of closure property.
However this seems to be completely open.
It would therefore be both useful and interesting to understand the conditions
under which the operations in this paper can be shown to preserve some (any)
class of polynomials.
While the expected characteristic polynomial of a random matrix gives you some
information, it will (in general) not be enough to characterize the eigenvalue
distribution of the underlying random matrix.
However, there is a natural way to ``assign'' an eigenvalue distribution to a
convolution --- the one which is uniformly distributed over the roots of the
polynomial (the fact that polynomial convolutions preserve real stability implies
that this will be a valid distribution on the real line).
It is still not known exactly how the uniform-over-roots distributions derived
from polynomial convolutions relate to the actual distributions of the
underlying random matrices.
The one area that seems to show the most striking resemblances to this is that
of free probability, which one can view as the study of the limiting
distributions of
random matrix theory as the dimension approaches $\infty$.
There is some speculation (mostly by this author) that polynomial convolutions
represent the limiting distributions of random matrix theory as some other
parameter (usually referred to as $\beta$) approaches $\infty$.
There is some evidence supporting this idea \cite{vadim}, but in many cases it
is not clear how to even define such a limit formally.
Understanding this relationship better, however, is certainly an interesting
open problem (and one that remains fairly wide open).
\section{Acknowledgements}\label{sec:thanks}
This paper was a direct consequence of the IPAM program in Quantitative Linear Algebra.
Essentially all of the results here were discovered as a result of discussions that
took place during this program.
The author specifically thanks Benno Mirabelli, who (among other things)
pointed out the relationship between the convolution in Theorem~\ref{thm:global}
and the box convolution in free probability.
\section{Introduction and results}\label{Sec_Intro}
\subsection{Ulam floating bodies}
A long-standing open problem, asked by Ulam in \cite{Ulam2} (see also \cite{Ulam1}, Problem 19),
is whether the Euclidean ball is the unique body of uniform density
which floats in a liquid in equilibrium in any direction. We call
such a body {\it Ulam floating body}.
\par
\noindent
A two-dimensional counterexample was found for relative density
$\rho = \frac12$ by Auerbach \cite{Auerbach} and for densities
$\rho \neq \frac12$ by Wegner \cite{Wegner1}. These counterexamples
are not origin symmetric. For higher dimension, Wegner obtained
results for non-convex bodies (holes in the body are allowed) in
\cite{Wegner2}. The problem remains largely open in the class of
convex bodies in higher dimension. In
order to study Ulam floating bodies, we use the notion of the {\it
convex floating body}, which was introduced independently by
B\'ar\'any and Larman \cite{BaranyLarman1988} and by Sch\"utt and
Werner \cite{SW1}. Let $K$ be a convex body in $\mathbb R^n$ and let
$\delta \in \mathbb R$, $0 \le \delta \leq \frac{1}{2}$. Then the convex
floating body $K_\delta$ is defined as
\[
K_\delta =
\bigcap_{u \in S^{n-1}}
H^-_{\delta,u}.
\]
Here $H^+_{\delta,u}$ is the halfspace with outer unit normal vector
$u$, which ``cuts off'' a $\delta$ proportion of $K$,
i.e. ${ \rm vol_n}\left( K\cap H^+_{\delta, u} \right) = \delta \ { \rm vol_n}(K)$.
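For example, if $K = B^n_2$, then by rotational symmetry the bounding
hyperplanes of the halfspaces $H^+_{\delta,u}$ all lie at the same distance
from the origin, and so $K_\delta$ is a Euclidean ball centered at the origin.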
The convex floating body is a natural variation of Dupin's floating
body $K_{[\delta]}$ (see \cite{Dupin}). $K_{[\delta]}$ is the convex body
contained in $K$ such that every one of its supporting hyperplanes cuts off a set of
volume exactly $\delta \ { \rm vol_n}(K)$ from $K$.
In general, $K_{[\delta]}$ need not exist; an example is the simplex in $\mathbb R^n$.
Dupin showed that if it exists, each supporting hyperplane $H$ touches $K_{[\delta]}$ at the centroid of $K \cap H$.
If the floating body $K_{[\delta]}$ exists, it is equal to the convex floating body $K_\delta$.
It was shown in \cite{MR} that for a
symmetric convex body $K$, one has $K_{[\delta]} = K_\delta$.
\par
We recall the relation between the density $\rho$ and the volume $\delta{ \rm vol_n}(K)$ that is cut off.
If the liquid has density $1$ and the body $K$ has unit volume and
density $\rho$, then by Archimedes' law the submerged volume equals
the total mass of the body, i.e. $\rho$, and consequently the floating
part has volume $\delta = 1 -\rho$.
\par
In \cite{HS}, the authors defined the {\em metronoid} $M_\delta(K)$
of a convex body $K$ to be the body whose boundary consists of
centroids of the floating parts of $K$, i.e. $K\cap H^+_{\delta, u}$.
More precisely, denoting
$X_{K,\delta}(u) =
\frac{1}{\delta { \rm vol_n}(K)} \int_{K\cap H_{\delta,u}^+} x\,dx$,
they defined $M_{\delta}(K)$ by
\[
\partial M_\delta(K) =
\left\{
X_{K,\delta}(u) \,:\, u\in S^{n-1}\right\},
\]
and showed that $X_{K,\delta} : S^{n-1} \to \partial M_\delta(K)$ is
the inverse Gauss map of $M_\delta(K)$, i.e. the outer normal to $M_{\delta}(K)$ at
$X_{K,\delta}(u)$ is $u$. Huang, Slomka and Werner showed that $K$
is an Ulam floating body if and only if $M_\delta(K)$ is a ball (see
\cite[Section 2.2]{HSW} for details). We utilize this
characterization in our proof of Theorem \ref{thm1}.
\subsection{Main results}
We present two results concerning Ulam's problem. We first establish
a relation between Ulam floating bodies and a uniform
isotropicity property of sections.
Our second
result provides a short proof of a known answer to Ulam's problem
in the class of symmetric convex bodies with relative density
$\rho = 1/2$.
\par
\noindent
In the next theorem, and elsewhere, we use the notation $g(B)$ for the centroid of a set $B$.
\par
\noindent
\begin{thm}\label{thm1}
Let $\delta \in \left(0, \frac12\right]$ and let $K\subset\mathbb R^n$ be a convex body such
that $K_\delta$ is $C^1$ or $K_\delta = K_{[\delta]}$ reduces to a point.
Then $K$ is an Ulam floating body
if and only if there exists $R>0$ such that for all $u \in S^{n-1}$
and $v \in S^{n-1} \cap u^\perp$,
\begin{equation}\label{eq-isotropic}
\int_{K\cap H_{\delta, u}} \langle x, v \rangle^2 - \langle g(K \cap H_{\delta, u}), v \rangle^2 \,dx = \delta \ { \rm vol_n}(K) R.
\end{equation}
In that case, $M_\delta(K)$ is a ball of radius $R$.
\end{thm}
\par
\noindent
{\bf Remark.}
Note that if $K_\delta $ reduces to a point, which without loss of generality we can assume to be $0$, then the condition (\ref {eq-isotropic}) reduces to
\begin{equation}\label{eq-isotropic point}
\int_{K\cap H_{\delta, u}} \langle x, v \rangle^2 \,dx = \delta \ { \rm vol_n}(K) R.
\end{equation}
\vskip 2mm
\noindent
We use the characterization given in Theorem \ref{thm1} to give
a short proof of the following result which was proved in dimension $3$ by Falconer \cite{Falconer}.
It also follows from a result in \cite{Schneider}.
\par
\noindent
\begin{thm}\label{thm2}
Let $K\subset\mathbb R^n$ be a symmetric convex body of volume $1$ and density $\frac12$.
If $K$ is an Ulam floating body, then $K$ is a ball.
\end{thm}
\section{Background}
We collect some definitions and basic results that we use throughout the paper. For further facts in convex geometry we refer the reader to the books by Gardner \cite{GardnerBook} and Schneider \cite{SchneiderBook}.
\vskip 2mm
\noindent
The radial function $r_{K, p}: S^{n-1} \to \mathbb R^+$ of a convex body $K$ about a point $p\in\mathbb R^n$ is defined by
\[
r_{K,p} (u) = \max\{\lambda \ge 0: \lambda u \in K-p\}.
\]
If $0 \in \text{int}(K)$, the interior of $K$, we simply write $r_K$ instead of $r_{K,0}$.
\newline
Let $K\subset \mathbb R^n$ be a convex body
containing a strictly convex body $D$ in its interior, and let $H$ be
a hyperplane supporting $D$ at a point $p$. If $u$ is the outer unit
normal vector at $p$, we denote the restriction of the radial
function $r_{K\cap H, p}\,$ to $S^{n-1}\cap u^\perp\,$ by
$\,r_{K,D}(u, \cdot)$.
\par
\noindent
We denote by $B^n_2$ the Euclidean unit ball centered at $0$ and by $S^{n-1} = \partial B^n_2$ its boundary.
The spherical Radon transform $R:C(S^{n-1}) \to C(S^{n-1})$ is defined by
\begin{equation} \label{Radon}
R f(u) = \int_{u^\perp \cap S^{n-1}} f(x) dx
\end{equation}
for every $f \in C(S^{n-1})$.
\vskip 2mm
\noindent
\subsection{Some results on floating bodies}
Since $\delta > \frac{1}{2}$ implies $K_\delta = \emptyset$, we restrict our
attention to the range $\delta \in \left[0, \frac{1}{2} \right]$.
It was shown in \cite{SchuettWerner94} that there is $\delta_c$, $0 < \delta_c \leq \frac{1}{2}$ such that $K_{\delta_c}$
reduces to a point. It can happen that $\delta_c < \frac {1}{2}$. An example is the simplex.
\par
\noindent
In fact, Helly's Theorem (and a simple union bound) implies that
$\delta_c > \frac {1}{n+1}$, so we have $\delta_c \in
\left( \frac {1}{n+1}, \frac{1}{2} \right]$.
\par
\noindent
As mentioned above, when Dupin's floating body $K_{[\delta]}$
exists, it coincides with the convex floating body $K_\delta$.
The following lemma states that existence of $K_{[\delta]}$ is also
guaranteed by smoothness of $K_\delta$. We use this for Theorem \ref{thm1}.
\begin{lem}
If $K_\delta$ is $C^1$, then $K_{[\delta]}$ exists and $K_\delta=K_{[\delta]}$.
\end{lem}
\par
\noindent
\begin{proof}
Let $x \in \partial K_\delta$. By \cite{SchuettWerner94},
there is at least one hyperplane $H$ through $x$ that cuts off
exactly $\delta { \rm vol_n}(K)$ from $K$ and this hyperplane touches
$\partial K_\delta$ in the barycenter of $H\cap K$. As $K_\delta$ is
$C^1$, the supporting hyperplane at every boundary point $x \in \partial K_\delta$ is
unique. Thus $K_{[\delta]}$ exists and $K_\delta = K_{[\delta]}$.
\end{proof}
\vskip 3mm
\noindent
We know that $K_{\frac {1}{2}} = \{0\}$ for every centrally symmetric convex body $K$.
The Homothety Conjecture \cite{SchuettWerner94} (see also \cite{Stancu, WernerYe}) states that the
homothety $K_\delta = t(\delta) K$ occurs only when $K$ is an ellipsoid. We treat the
following conjecture, which has a similar flavor.
\begin{conj}
Let $K\subset \mathbb R^n$ be a convex body and let $\delta\in \left( 0, \frac {1}{2}\right)$.
If $K_\delta$ is a Euclidean ball, then $K$ is a Euclidean ball.
\end{conj}
\par
\noindent
We now prove the two dimensional case of the conjecture.
\par
\noindent
\begin{thm}
Let $K\subset \mathbb R^2$ be a convex body and suppose there is $\delta \in \left(0, \frac {1}{2}\right)$ such
that $K_\delta = r \ B^2_2$. Then $K = R \ B^2_2$, for some $R>0$.
\end{thm}
\begin{proof}
We shall prove that the radial function $r_K: S^1 \to \mathbb R$ is
constant. Since $K_\delta = r \, B^2_2 \subseteq K$, we have $r_K \geq r$. If the
continuous function $r_K$ is not constant, it attains every value in some
interval of values greater than $r$; in particular, it attains some value $R>r$
for which the angle $\theta = \arccos\left(\frac{r}{R}\right) \in \left(0,\frac{\pi}{2}\right)$ is
not a rational multiple of $\pi$. Let $u_1\in S^1$ be a point with
$r_K(u_1) = R$, and let $\left\{u_i \right\}_{i=1}^\infty$ be the
arithmetic progression on $S^1$ with difference $2\theta$. We claim
that $r_K$ is constant on $\left\{u_i \right\}$. Indeed, assuming
$R u_i \in \partial K$, we consider the triangle with vertices
$O,\, Ru_i, Ru_{i+1}$ (see the figure below). The edge
$\left[Ru_i, Ru_{i+1}\right]$ is tangent to $K_\delta = r B^2_2$ at its
midpoint $m_i$, and since $K_\delta$ is smooth, the chord on
$\partial K$ containing $Ru_i$ and $m_i$ is bisected by $m_i$, which
implies $Ru_{i+1} \in \partial K$, i.e., $r_K(u_{i+1}) = R$. Since
$\theta$ is not a rational multiple of $\pi$, the sequence
$\left\{u_i \right\}$ is dense in $S^1$. Since $r_K$ is constant
on a dense set and continuous, it is constant on $S^1$, as required.
\begin{center}
\includegraphics[width=0.6\linewidth]{dense-seq.jpg}
\end{center}
\end{proof}
\section{Proof of the main theorems}
\subsection{Proof of Theorem \ref{thm1}}
\begin{proof}
We first treat the case $n=2$, and within this case we first consider the situation when $K_\delta$ reduces to a point. Without loss of generality we can
assume that this point is $0$. Then we have for all $u \in S^1$ that $g(K \cap H_{\delta,u}) = 0$ by Dupin and thus $\langle g(K \cap H_{\delta,u}), v \rangle =0$, for all
$v \in u^\perp \cap S^1$ and the condition reduces to $\int_{K \cap H_{\delta,u}} \langle x, v \rangle^2 dx = C$. This observation is true in all dimensions.
\newline
Let $u \in S^1$. Let $v \in u^\perp \cap S^1 = H_{\delta,u} \cap S^1$. Then, as $H_{\delta,u}= \text{span}\{v\}$, we get for all $v \in S^1$,
\begin{eqnarray}\label{Gleichung1}
\int_{K \cap H_{\delta,u}} \langle x, v \rangle^2 dx =
\int_{-r_K(v) }^{r_K(v)}x^2 dx = \frac{2}{3} r_K(v)^3,
\end{eqnarray}
as $r_K(v)=r_K(-v)$.
Hence, if $\int_{K \cap H_{\delta,u}} \langle x, v \rangle^2 dx=C$ for all $u \in S^1$, then $r_K(v)=C_1$ for all $v \in S^1$, and hence
$K$ is a Euclidean ball and therefore also $M_\delta(K)$ is a Euclidean ball.
\par
For the other direction, we fix $u \in S^1$. We may assume that $u=(1,0)$, i.e., in polar coordinates
$u$ corresponds to the angle $\theta=0$. For $\phi$ small, let $w=(\cos \phi, \sin\phi)$ and define the sets
\[
E_1 = H_{\delta, u}^+ \cap H_{\delta, w}^+ \cap K, \quad
E_2 = H_{\delta, u}^+ \cap H_{\delta, w}^- \cap K, \quad
E_3 = H_{\delta, u}^- \cap H_{\delta, w}^+ \cap K.
\]
In order to compute the derivative of the boundary curve of $M_\delta(K)$ we
write
\begin{eqnarray*}
&&\delta \text{vol}_2(K) \cdot \big[ X_{K,\delta}(w) - X_{K,\delta}(u) \big] \\
&& = \int_{ E_1\cup E_3}x\,dx - \int_{ E_1\cup E_2}x\,dx = \int_{E_3}x\,dx - \int_{E_2}x\,dx\\
&&=\int_{\frac{\pi}{2}}^{\frac{\pi}{2} + \phi}
\int_{0}^{r_{K}(\theta)}
(\cos\theta, \sin \theta) \ r^2 \,dr\,d\theta
-
\int_{-\frac{\pi}{2}}^{-\frac{\pi}{2} +\phi}
\int_{0}^{r_{K}(\theta)}
(\cos\theta, \sin \theta) \ r^2 \,dr\,d\theta \\
&&=2 \int_{\frac{\pi}{2}}^{\frac{\pi}{2} +\phi} \int_{0}^{r_{K}(\theta)}
(\cos\theta, \sin \theta) \ r^2 \,dr\,d\theta =
\frac{2}{3} \int_{\frac{\pi}{2}}^{\frac{\pi}{2} +\phi} r_K(\theta)^3 (\cos\theta, \sin \theta) \ d\theta.
\end{eqnarray*}
Thus
\begin{eqnarray*}
\frac{d}{ d \phi}\left[ X_{K,\delta}(w) - X_{K,\delta}(u) \right]
= \frac{2}{3 \ \delta \ \text{vol}_2(K) } \ r_K\left(\phi +\frac{\pi}{2} \right)^3 (- \sin \phi, \cos\phi)
\end{eqnarray*}
and hence
\begin{eqnarray}\label{Gleichung2}
\left|\frac{d}{ d \phi} \left[ X_{K,\delta}(w) - X_{K,\delta}(u) \right] \right|= \frac{2}{3 \ \delta\ \text{vol}_2(K)} \ r_K\left(\phi +\frac{\pi}{2} \right)^3,
\end{eqnarray}
where $| \cdot |$ denotes the Euclidean norm.
With $z= w^\perp$, we get from (\ref{Gleichung1}) and (\ref{Gleichung2}) that
\begin{eqnarray}\label{Gleichung3}
\left|\frac{d}{ d \phi} \left[ X_{K,\delta}(w) - X_{K,\delta}(u) \right] \right|= \frac{1}{\delta \ \text{vol}_2(K) } \int_{K \cap H_{\delta,w}} \langle x, z \rangle ^2 dx.
\end{eqnarray}
If $M_\delta(K)$ is a Euclidean ball with radius $R$ and center $c$, then, since the outer normal to $\partial M_\delta(K)$ at $X_{K,\delta}(u)$ is $u$, we can write $X_{K,\delta}(\cos \phi, \sin\phi)= c + R \ (\cos \phi, \sin\phi)$,
and we get from (\ref{Gleichung3})
\begin{eqnarray*}
\frac{1}{\delta \ \text{vol}_2(K) } \int_{K \cap H_{\delta,w} } \langle x, z \rangle ^2 dx &=& \left|\frac{d}{ d \phi} \left[ X_{K,\delta}(w) - X_{K,\delta}(u) \right] \right|\\
&=&\left|\frac{d}{ d \phi} \left[ X_{K,\delta}(w) \right] \right| = R.
\end{eqnarray*}
\vskip 3mm
\noindent
To treat the case when $K_\delta$ does not consist of just one point,
we introduce the following
coordinate system for the complement of an open, strictly convex body
$D\subset\mathbb R^2$ with smooth boundary (see also e.g., \cite{YaskinZhang}). Let
$\gamma : [0,2\pi] \to \partial D$ be the inverse Gauss map, and
$T: [0,2\pi] \to S^1$ be the unit tangent vector to the curve at
$\gamma(\theta)$, oriented counterclockwise, i.e.,
\[
n(\theta):=n_D(\gamma(\theta)) =
\left(
\begin{array}{c}
\cos \theta \\
\sin \theta
\end{array}
\right),\qquad
T(\theta) =
\frac{\gamma'(\theta) }{\big| \gamma'(\theta) \big|} =
\left(
\begin{array}{c}
-\sin \theta\\
\cos \theta
\end{array}
\right).
\]
The coordinate system $F: \mathbb R\times [0, 2\pi] \to \mathbb R^2 \setminus D$ is
defined by
\begin{equation}\label{def-coord-sys}
F(r,\theta) = \gamma(\theta) + r T(\theta).
\end{equation}
Since $\frac{\partial F}{\partial r} = T$ and
$\frac{\partial F}{\partial \theta} = \gamma' -r n$, the Jacobian of
$F$ is given by $|r|$.
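Indeed, $\det\left( \frac{\partial F}{\partial r}, \frac{\partial F}{\partial \theta}\right)
= \det\left( T, \gamma' - r n \right) = - r \det(T, n) = r$,
since $\gamma'$ is parallel to $T$ and $\det(T, n) = -1$.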
\par
\noindent
Now we fix $0 < \delta <1/2$ and assume that $K_\delta$ does not just consist of one point. We then set $D = \text{int} (K_\delta)$, which has
smooth boundary by assumption.
Without loss of generality, we can assume that $0 \in \text{int} (K_\delta)$.
It was shown in \cite{SchuettWerner94} that $K_\delta$ is strictly convex.
Let $u\in S^1$, and assume without loss of
generality that $u = n(0)=(1, 0)$.
Let $w = n(\phi)$ for an angle $\phi>0$ small enough, such that
the lines $H_{\delta,u}$ and $H_{\delta,w}$ intersect in the interior
of $K$. Define the sets
\[
E_1 = H_{\delta, u}^+ \cap H_{\delta, w}^+ \cap K, \quad
E_2 = H_{\delta, u}^+ \cap H_{\delta, w}^- \cap K, \quad
E_3 = H_{\delta, u}^- \cap H_{\delta, w}^+ \cap K,
\]
and let $E_4$ be the bounded connected component of
$\left(H_{\delta, u}^- \cap H_{\delta, w}^-\right) \setminus K_\delta$,
see Figure.
\begin{center}
\includegraphics[width=0.7\linewidth]{pictureadd.png}\\
Figure
\end{center}
Again, in order to compute the derivative of the boundary curve of $M_\delta(K)$ we
write
\begin{eqnarray*}
\delta \ \text{vol}_2(K) \cdot [ X_{K,\delta}(w) - X_{K,\delta}(u) ]
&=& \int_{ E_1 \cup E_3}x\,dx - \int_{ E_1 \cup E_2}x\,dx \\
&=& \int_{ E_3\cup E_4}x\,dx - \int_ {E_2\cup E_4}x\,dx .
\end{eqnarray*}
Now we use the above introduced coordinate system. For
$n=n(\theta)$ and $T=T(\theta)$, let $r_{K,K_\delta}(\theta)$ be such that $\gamma(\theta) + r_{K,K_\delta}(\theta) \ T \in \partial K$.
As $K_\delta = K_{[\delta]}$, $\gamma(\theta)$ is the midpoint of $n(\theta)^\perp \cap K$ by Dupin's characterization of $K_{[\delta]}$. Therefore
\begin{eqnarray*}
&&\delta \ \text{vol}_2(K) \cdot [ X_{K,\delta}(w) - X_{K,\delta}(u) ] \\ &=& \int_{0}^{\phi}
\int_{0}^{r_{K,K_\delta}(\theta)}
F(r,\theta)|r| \,dr\,d\theta -
\int_{0}^{\phi}
\int_{-r_{K,K_\delta}(\theta)}^{0}
F(r,\theta)|r|\,dr\,d\theta \\
&=&\int_{0}^{\phi}
\int_{0}^{r_{K,K_\delta}(\theta)}
F(r,\theta)r\,dr \,d\theta
+
\int_{0}^{\phi}
\int_{-r_{K,K_\delta}(\theta)}^{0}
F(r,\theta)r\,dr \,d\theta \\
&=&\int_{0}^{\phi}
\int_{-r_{K,K_\delta}(\theta)}^{r_{K,K_\delta}(\theta)}
F(r,\theta)r\,dr \,d\theta.
\end{eqnarray*}
Now we use the definition of $F$. Thus
\begin{eqnarray*}
X_{K,\delta}(w) - X_{K,\delta}(u)
&=& \frac{1}{\delta \ \text{vol}_2(K)} \int_{0}^{\phi}
\int_{-r_{K,K_\delta}(\theta)}^{r_{K,K_\delta}(\theta)}
r \gamma(\theta) + r^2 T(\theta)\,dr \,d\theta \\
&=& \frac{1}{\delta \ \text{vol}_2(K)} \int_{0}^{\phi}
\frac23 \ r_{K,K_\delta}^3(\theta) \ T(\theta) \,d\theta.
\end{eqnarray*}
As $M_\delta (K)$ is strictly convex and $C^1$ by \cite{HSW}, we can
differentiate with respect to $\phi$
and get,
\begin{equation}\label{eq-MdeltaK-gauss-deriv}
\frac{d X_{K,\delta}(n(\phi))}{d\phi} =
\frac{2\ r_{K,K_\delta}^3(\phi)}{3 \ \delta \ \text{vol}_2(K)} \ T(\phi).
\end{equation}
\par
\noindent
On the other hand, for any $\theta \in [0,2\pi]$,
\begin{eqnarray}\label{eq-section-2nd-moment}
&&\int_{K\cap H_{\delta, n(\theta)}} \langle x, T(\theta) \rangle^2 \,dx =
\int _{-r_{K,K_\delta}( \theta )}
^{ r_{K,K_\delta}( \theta )}
\langle \gamma (\theta) +r T(\theta), T(\theta) \rangle^2 \ dr \nonumber\\
&&= 2 r_{K,K_\delta}(\theta ) \ \langle \gamma (\theta), T(\theta) \rangle^2 + 2
\langle \gamma (\theta), T(\theta) \rangle \int _{-r_{K,K_\delta}( \theta )}
^{ r_{K,K_\delta}( \theta )}
r dr \nonumber + \int _{-r_{K,K_\delta}( \theta )}
^{ r_{K,K_\delta}( \theta )} r^2 dr\\
&& = \langle g(K \cap H_{\delta, n(\theta)}), T(\theta) \rangle ^2 \ \text{vol}_1(K \cap H_{\delta, n(\theta)} ) +
\frac{2\ r_{K,K_\delta}^3(\theta)}{3},
\end{eqnarray}
since $\gamma (\theta)$ is the centroid $g(K \cap H_{\delta, n(\theta)})$ of $K \cap H_{\delta, n(\theta)}$.
Combining \eqref{eq-MdeltaK-gauss-deriv} and \eqref{eq-section-2nd-moment},
we get that for $\theta = \phi$,
\begin{equation*} \label{eq-isotropic1}
\left|\frac{d X_{K,\delta}(n(\phi))}{d\phi} \right|= \frac{1}{\delta \ \text{vol}_2(K)} \ \int_{K\cap H_{\delta, n(\phi)}} \langle x, T(\phi) \rangle^2 -
\langle g(K \cap H_{\delta, n(\phi)}), T(\phi) \rangle^2 \,dx.
\end{equation*}
By \cite{HS, HSW}, the normal to $\partial M_\delta (K)$ at $X_{K,\delta} (n(\phi))$ is $n(\phi) = (\cos \phi, \sin \phi)$ and as $M_\delta (K)$ is strictly convex and $C^1$ by \cite{HSW},
$\xi(\phi) =
X_{K,\delta} (n(\phi))$ is a parametrization of $\partial M_\delta (K)$ with
respect to the angle of the normal. The curvature is given by $\frac{d\phi}{ds}$
where $s$ is the arc length along the curve. Since $\frac{d\xi}{ds}$ is a unit
vector, we get by the chain rule $\frac{d\xi}{ds}=
\frac{d\xi}{d\phi} \frac{d\phi}{ds}$ that the radius of curvature is given by
\[
R(\phi) = \left|\frac{d X_{K,\delta}(n(\phi))}{d\phi} \right| =
\frac{1}{\delta \ \text{vol}_2(K)} \int_{K\cap H_{\delta, n(\phi)}} \langle x, T(\phi) \rangle^2 - \langle g(K \cap H_{\delta, n(\phi)}), T(\phi) \rangle^2 \,dx.
\]
Since $M_\delta (K)$ is a disk if and only if its radius of curvature is constant,
the theorem follows.
\vskip 3mm
\noindent
Let now $n\geq 3$.
\par
\noindent
Let $u \in S^{n-1}$ be arbitrary, but fixed, and let $v \in S^{n-1}\cap u^\perp$. We denote by $W=\mbox{span}\{u,v\}$ the span of $u$ and $v$ and by $W^\perp$ the $(n-2)$-dimensional subspace that is the orthogonal complement of $W$. $\bar K=K|W$ is the orthogonal projection of the convex body $K$ onto the $2$-dimensional subspace $W$.
For a small $\phi$, let $w = \cos \phi \ u + \sin \phi \ v.$
\newline
We define $\bar E_1$, $\bar E_2$ and $\bar E_3$ as follows,
$$
\bar E_1 =(H_{\delta, u}^+ \cap H_{\delta,w}^+) \big | W, \ \ \bar E_2 =(H_{\delta, u}^+\cap H_{\delta,w}^-)\big |W, \ \ \bar E_3 = (H_{\delta, u}^-\cap H_{\delta, w}^+)\big |W
$$
and $\bar E_4$ is the curvilinear triangle enclosed by $H_{\delta, u}|W$, $H_{\delta, w} \big|W$, and the boundary of $\bar K_\delta=K_\delta|W$.
Then the picture is identical to the previous Figure.
We let
$$
E_i = \bar E_i\times W^\perp, \ \ \text{ for} \ \ i=1, 2, 3,4.
$$
When $K_\delta$ reduces to a point, we can assume without loss of generality that $K_\delta =\{0\}$. As noted before, the condition then reduces to
$\int_{K \cap H_{\delta,u}} \langle x, v \rangle^2 dx = C$.
In this case
$\bar E_4 = \emptyset$ and the proof continues along the same lines
as below.
Alternatively, one can replace the coordinate system (\ref{def-coord-sys}) by the usual polar coordinates in $W$ as it was done in the case $n=2$.
\par
\noindent
In the general case we thus have that
\begin{eqnarray*}
&&\delta \ { \rm vol_n}(K) \left [X_{K,\delta}(w)-X_{K,\delta}(u)\right]
=\int_{K\cap H_{\delta, w}^+}x\,dx-\int_{K\cap H_{\delta, u}^+}x\,dx \\
&&=\int_{K\cap ( E_1\cup E_3)}x\,dx-\int_{K\cap ( E_1\cup E_2)}x\,dx
= \int_{K\cap E_3}x\,dx-\int_{K\cap E_2}x\,dx \\
&&= \int_{K\cap (E_3 \cup E_4) }x\,dx-\int_{K\cap ( E_2\cup E_4) }x\,dx.
\end{eqnarray*}
For $x\in W$, we consider the following parallel section function,
\begin{equation} \label{AKW}
A_{K,W} (x) = \mbox{vol}_{n-2} \left(K\cap \{W^\perp+x\}\right)
\end{equation}
and observe that by Fubini,
\begin{equation*}
\delta \ { \rm vol_n}(K) \left [ X_{K,\delta}(w) \right ] = \int_{K \cap H_{\delta, w}^+} z\ dz = \int_{\bar K} \left( \int_{(x+ W^\perp) \cap K \cap H_{\delta, w}^+ } y \ dy \right) dx.
\end{equation*}
We denote by $g(B) = \frac{1}{{\rm vol}(B)} \int_B y \ dy$ the centroid of a set $B$, the volume and the integral being taken with respect to the Hausdorff measure of the appropriate dimension. Then we get
\begin{eqnarray*} \label{}
\delta \ { \rm vol_n}(K)\,(X_{K,\delta}(w))\big|W &= & \left(\int_{K\cap (E_3\cup E_4)}x\,dx \right) \Bigg | W =
\left(\int_{\bar K\cap (\bar E_3 \cup \bar E_4)} \left( \int_{(x+W ^\perp) \cap K} y \ dy \right) \,dx \right)\Bigg | W \\
&=& \left(\int_{\bar K\cap (\bar E_3 \cup \bar E_4)} \ A_{K,W} (x)\ g((x+W^\perp) \cap K) \ dx \right)\Bigg | W\\
&=& \int_{\bar K\cap (\bar E_3 \cup \bar E_4)} \ A_{K,W} (x)\ \left( g((x+W^\perp) \cap K) \right)\big | W \ dx \\
&=& \int_{\bar K\cap (\bar E_3 \cup \bar E_4)} \ A_{K,W} (x)\ x \ dx,
\end{eqnarray*}
and similarly for $\delta \ { \rm vol_n}(K)(X_{K,\delta}(u))\big| W$.
Now we will use the coordinate system $F: \mathbb R\times [0, 2\pi] \to \mathbb R \setminus \text{int}\left(\bar K_\delta\right)$, introduced earlier in (\ref{def-coord-sys}),
\begin{equation*}
F(r,\theta) = \gamma(\theta) + r T(\theta).
\end{equation*}
We can assume that $n(0)=u$. Then $T(0)=v$.
We recall that the Jacobian of $F$ is given by $|r|$.
We abbreviate $n=n(\theta)$ and $T=T(\theta)$. We
let $r_{\bar K,\bar K_\delta}(n(\theta), T(\theta)) = r_{\bar K,\bar K_\delta}(n, T) >0$ be such that $\gamma(\theta) + r_{\bar K,\bar K_\delta}(n,T) \ T(\theta) \in \partial \bar K$,
and $ r_{\bar K,\bar K_\delta}(n, -T) >0$ be such that $\gamma(\theta) + r_{\bar K,\bar K_\delta}(n,-T) \ (-T(\theta) ) \in \partial \bar K$.
We get
\begin{align*}
&\hskip -15mm \delta \ { \rm vol_n}(K) \left(X_{K,\delta}(w) -X_{K,\delta}(u)\right ) \big |W \\
&\hskip -10mm = \int_{0}^{\phi} \int_{0}^{r_{\bar K,\bar K_\delta}(n, T )} F (r,\theta) \ A_{K,W} (F(r,\theta)) \ |r|\,dr\,d\theta \\
&\hskip 25mm -
\int_{0}^{\phi} \int_{- r_{\bar K,\bar K_\delta}(n, -T )}^{0} F(r,\theta) \ A_{K,W} (F(r,\theta)) \ |r|\,dr
\,d\theta \\
& \hskip -10mm =\int_{0}^{\phi}\int_{0}^{r_{\bar K,\bar K_\delta}(n, T )} F (r,\theta) \ A_{K,W} (F(r,\theta)) \ r\,dr \,d\theta \\
&\hskip 25mm + \int_{0}^{\phi}\int_{-r_{\bar K,\bar K_\delta}(n, -T)}^{0} F (r,\theta) \ A_{K,W} (F(r,\theta)) \ r\,dr \,d\theta \\
&\hskip -10mm = \int_{0}^{\phi}\int_{-r_{\bar K,\bar K_\delta}(n, -T )}^{r_{\bar K,\bar K_\delta}(n, T)} F (r,\theta) \ A_{K,W} (F(r,\theta)) \ r\,dr \,d\theta .
\end{align*}
We differentiate with respect to $\phi$,
\begin{eqnarray*}\label{equ2}
&& \delta \ { \rm vol_n}(K) \frac{d}{d\phi}\left((X_{K,\delta}(w)-X_{K,\delta}(u))\big|W\right) = \delta \ { \rm vol_n}(K) \frac{d}{d\phi}\left( (X_{K,\delta}(w))\big|W\right) \\
&&=\int_{-r_{\bar K,\bar K_\delta}(n(\phi), -T(\phi) )}^{r_{\bar K,\bar K_\delta}(n(\phi),T(\phi))} F(r, \phi) A_{K,W} (F(r, \phi)) \ r\,dr.
\end{eqnarray*}
Putting $\phi=0$ results in
\begin{eqnarray}\label{GL1}
&& \delta \ { \rm vol_n}(K) \frac{d}{d\phi}\left( (X_{K,\delta}(w))\big|W\right) \Bigg| _{\phi = 0}
= \int_{-r_{\bar K,\bar K_\delta}(u, -v )}^{r_{\bar K,\bar K_\delta}(u,v)} F(r, 0) A_{K,W} (F(r, 0)) \ r\,dr \nonumber \\
&&= \int_{-r_{\bar K,\bar K_\delta}(u, -v )}^{r_{\bar K,\bar K_\delta}(u,v)} [\gamma(0) +r v] \ A_{K,W} (F(r, 0)) \ r\,dr \nonumber \\
&&= \int_{-r_{\bar K,\bar K_\delta}(u, -v )}^{r_{\bar K,\bar K_\delta}(u,v)} \gamma(0) \ A_{K,W} (F(r, 0)) \ r\,dr + \int_{-r_{\bar K,\bar K_\delta}(u, -v )}^{r_{\bar K,\bar K_\delta}(u,v)} r^2 \ v \ A_{K,W} (F(r, 0)) \,dr \nonumber\\
&&= v \ \int_{-r_{\bar K,\bar K_\delta}(u, -v )}^{r_{\bar K,\bar K_\delta}(u,v)} r^2 \ A_{K,W} (F(r, 0)) \,dr,
\end{eqnarray}
where the last equality holds by Dupin since $H_{\delta, u}\cap K_\delta$ is the centroid of $H_{\delta, u}\cap K$, i.e.
\begin{equation}\label{null}
\int_{-r_{\bar K,\bar K_\delta}(u, -v )}^{r_{\bar K,\bar K_\delta}(u,v)} \gamma(0) \ A_{K,W} (F(r, 0)) \ r\,dr = 0.
\end{equation}
Indeed, in the coordinate system (\ref{def-coord-sys}), the centroid of $H_{\delta, u}\cap K$ is $\gamma (0)$. Thus, with
coordinate system (\ref{def-coord-sys}) we get as above
\begin{eqnarray*}
&&\text{vol}_{n-1} \left(K\cap H_{\delta, u} \right) \langle v, \gamma (0) \rangle = \int _{K\cap H_{\delta, u}} \langle v, x \rangle \ dx \\
&&= \int_{-r_{\bar K,\bar K_\delta}(u, -v )}^{r_{\bar K,\bar K_\delta}(u,v)} \langle \gamma(0) +r v, v\rangle \ A_{K,W} (F(r, 0)) \ dr \\
&&= \langle \gamma(0), v \rangle \int_{-r_{\bar K,\bar K_\delta}(u, -v )}^{r_{\bar K,\bar K_\delta}(u,v)} A_{K,W} (F(r, 0)) \ dr
+ \langle v, v\rangle \int_{-r_{\bar K,\bar K_\delta}(u, -v )}^{r_{\bar K,\bar K_\delta}(u,v)} r \ A_{K,W} (F(r, 0)) \ dr \\
&&= \langle \gamma(0), v \rangle \ \text{vol}_{n-1} \left(K\cap H_{\delta, u} \right) + \int_{-r_{\bar K,\bar K_\delta}(u, -v )}^{r_{\bar K,\bar K_\delta}(u,v)} r \ A_{K,W} (F(r, 0)) \ dr.
\end{eqnarray*}
\par
\noindent
On the other hand, again in the coordinate system (\ref{def-coord-sys}), and also using (\ref{null}),
\begin{eqnarray} \label {GL2}
&&\int_{K\cap H_{\delta, u}} \langle x, v \rangle^2\ dx = \int _{-{r_{\bar K,\bar K_\delta}(u,-v)}} ^{r_{\bar K,\bar K_\delta}(u,v)} \langle \gamma(0) +r v, v\rangle ^2\ A_{K,W}(F(r,0)) dr \nonumber \\
&&= \langle \gamma(0), v\rangle ^2 \int _{-{r_{\bar K,\bar K_\delta}(u,-v)}} ^{r_{\bar K,\bar K_\delta}(u,v)} A_{K,W}(F(r,0)) dr + \int _{-{r_{\bar K,\bar K_\delta}(u,-v)}} ^{r_{\bar K,\bar K_\delta}(u,v)} r^2 \ A_{K,W}(F(r,0)) dr \nonumber \\
&&= \langle \gamma(0), v\rangle ^2 \ \text{vol}_{n-1} \left(K\cap H_{\delta, u} \right) + \int _{-{r_{\bar K,\bar K_\delta}(u,-v)}} ^{r_{\bar K,\bar K_\delta}(u,v)} r^2 \ A_{K,W}(F(r,0)) dr.
\end{eqnarray}
As $w=\cos \phi \ u + \sin \phi \ v$, it follows from (\ref{GL1}) and (\ref{GL2}) that
\begin{eqnarray*}
&&\hskip -4mm \Bigg | \frac{d}{d\phi}\left( X_{K,\delta}(\cos \phi \ u + \sin \phi \ v)\Big| W\right) \Big| _{\phi = 0} \Bigg |=
\frac{1}{ \delta { \rm vol_n}(K)} \ \int_{K\cap H_{\delta, u}} \left(\langle x, v \rangle^2 - \langle \gamma(0), v\rangle ^2 \right)dx \\
&&= \frac{1}{ \delta \ { \rm vol_n}(K)} \ \int_{K\cap H_{\delta, u}} \left(\langle x, v \rangle^2 - \langle g(K\cap H_{\delta, u}), v\rangle ^2 \right)\ dx.
\end{eqnarray*}
Observe that in the case when $K_\delta =\{0\}$, $\langle g(K\cap H_{\delta, u}), v\rangle =0$. We have that
$w= n(\phi)= \cos \phi \ u + \sin \phi \ v \in W$ is the outer unit normal to $M_\delta(K)$ at $X_{K,\delta}(w)$. Therefore,
$w$ is the outer unit normal to $M_\delta(K)\big|W$ at $X_{K,\delta}(w)\big|W$. Again, as $M_\delta(K)$, and therefore $M_\delta(K)\big|W$,
is strictly convex and $C^1$ by \cite{HSW},
$X_{K,\delta}(n(\phi))$ is a parametrization of the boundary of
$M_\delta(K)\big|W$ with respect to the angle of the normal. As in the case $n=2$, the radius of curvature of $M_\delta(K)\big|W$ at the point with outer normal $u$ is therefore given by the right-hand side of the last display.
Thus the curvature of $M_\delta(K) \big|W$ is constant, which implies that $M_\delta(K) \big|W$ is a disk. Since $W$ is arbitrary, we get that every two-dimensional projection of $M_\delta(K)$ is a disk, and it follows that $M_\delta(K)$ is a Euclidean ball (\cite{GardnerBook}, Corollary 3.1.6).
\end{proof}
\vskip 3mm
\noindent
\subsection{Proof of Theorem \ref{thm2}}
\begin{proof}
Since $K$ is symmetric and has volume $1$ and density $\rho=\frac{1}{2}$, we have that $\delta = \frac{1}{2}$, as noted above. Therefore, $K_{\frac12} = K_{[\frac12]} = \{0\}$.
Since $K$ is an Ulam floating body, the remark after Theorem \ref{thm1} implies that for
any $u \in S^{n-1}$ and $v \in u^\perp\cap S^{n-1}$
\begin{equation}\label{intD}
\int_{u^\perp \cap K}\langle x, v\rangle^2\,dx =C,
\end{equation}
for some constant $C$.
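As a quick consistency check (an illustrative computation, not needed for the argument), note that if $K$ is a Euclidean ball $r B^n$ centered at the origin then, passing to polar coordinates in $u^\perp$,
\begin{equation*}
\int_{u^\perp \cap K}\langle x, v\rangle^2\,dx = \int_{u^\perp \cap S^{n-1}} \int_{t=0}^{r} t^{n} \langle \xi, v\rangle^2\, dt\, d\sigma(\xi) = \frac{r^{n+1}}{n+1} \int_{u^\perp \cap S^{n-1}} \langle \xi, v\rangle^2\, d\sigma (\xi),
\end{equation*}
a quantity which, by rotational invariance, is independent of $u \in S^{n-1}$ and of $v \in u^\perp\cap S^{n-1}$; centered Euclidean balls therefore satisfy \eqref{intD}.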
Let $u \in S^{n-1}$ be arbitrary, but fixed. We pass to polar coordinates in $u^\perp$ and get for all $v \in u^\perp\cap S^{n-1}$,
\begin{eqnarray*}
C= \int_{u^\perp \cap S^{n-1}} \int_{t=0}^ {r_K(\xi)} t^n \langle \xi, v\rangle^2 dt \ d\sigma (\xi) = \frac{1}{n+1} \int_{u^\perp \cap S^{n-1}} r_K(\xi)^{n+1}
\langle \xi, v\rangle^2 d\sigma (\xi).
\end{eqnarray*}
Now we integrate over all $v \in u^\perp\cap S^{n-1} = S^{n-2}$ with respect to the normalized Haar measure $\mu$ on $S^{n-2}$. We use that $\int_{S^{n-2} }
\langle \xi, v\rangle^2 d\mu (v) = c_n \|\xi\|^2 = c_n$, where $c_n= 2 \frac{\text{vol}_{n-2} \left(B^{n-2}\right)}{\text{vol}_{n-2} \left(S^{n-2}\right)}$, and get that
\begin{eqnarray*}
\frac{(n+1) C}{c_n}= \int_{u^\perp \cap S^{n-1}} r_K(\xi)^{n+1} d\sigma (\xi) = R \ r_K^{n+1}(u),
\end{eqnarray*}
where $R$ is the spherical Radon transform (\ref{Radon}). We rewrite this equation as
\begin{eqnarray*}
\int_{u^\perp \cap S^{n-1}} d\sigma (\xi) = \int_{u^\perp \cap S^{n-1}} \frac{2 \text{vol}_{n-2} \left(B^{n-2}\right)}{(n+1) C} r_K(\xi)^{n+1} d\sigma (\xi),
\end{eqnarray*}
or
\begin{eqnarray*}
0 = \int_{u^\perp \cap S^{n-1}} \left( \frac{2 \text{vol}_{n-2} \left(B^{n-2}\right)}{(n+1) C} r_K(\xi)^{n+1} - 1 \right) d\sigma (\xi).
\end{eqnarray*}
As $u$ was arbitrary and as $r_K$ is even, it then follows from, e.g., Theorem C.2.4 of \cite{GardnerBook} that $r_K = \text{const.}$ $\sigma$-almost everywhere on $S^{n-1}$,
and, as $r_K$ is continuous, $r_K = \text{const.}$ on all of $S^{n-1}$. Thus $K$ is a ball.
\end{proof}
\begin{document}
\title{``Bootstrap domain of dependence'':
bounds and time decay of solutions of
the wave equation}
\author{Thomas G. Anderson}
\address{Applied \& Comp.\ Mathematics, California Institute of
Technology, Pasadena, CA}
\curraddr{Department of Mathematics, University of Michigan, Ann Arbor, MI}
\email{tganders@umich.edu}
\author{Oscar P. Bruno}
\address{Applied \& Comp.\ Mathematics, California Institute of
Technology, Pasadena, CA}
\email{obruno@caltech.edu}
\thanks{This work was supported by NSF and DARPA through contracts DMS-1714169
and HR00111720035, and the NSSEFF Vannevar Bush Fellowship under
contract number N00014-16-1-2808.}
\subjclass[2010]{Primary 35B40, 35B30, 35L05, 35L20, 45M05}
\date{\today}
\keywords{boundary integral equations, wave equation, decay, Huygens' principle}
\begin{abstract}
This article introduces a novel ``bootstrap domain-of-dependence''
concept, according to which, for all time following a given
illumination period of arbitrary duration, the wave field scattered
by an obstacle is encoded in the history of boundary scattering
events for a time-length equal to the diameter of the obstacle,
measured in time units. Resulting solution bounds provide estimates
on the solution values in terms of a short-time history record, and
they establish superalgebraically fast decay (i.e., decay faster
than any negative power of time) for a wide range of scattering
obstacles---including certain types of ``trapping'' obstacles whose
periodic trapped orbits span a set of positive volumetric measure,
and for which no previous fast-decay theory was available. The
results, which do not rely on consideration of the Lax-Phillips
complex-variables scattering framework and associated resonance-free
regions in the \emph{complex} plane, utilize only \emph{real-valued}
frequencies, and follow from use of Green functions and boundary
integral equation representations in the frequency and time domains,
together with a certain $q$-growth condition on the frequency-domain
operator resolvent.
\end{abstract}
\maketitle
\markright{\MakeUppercase{Bootstrap Domain-of-dependence bounds \& decay for wave
solutions}}
\section{Introduction\label{Introduction}}
We present a novel ``bootstrap domain-of-dependence'' concept
(bootstrap DoD) and associated bounds on solutions of the scattering
problem for the wave equation on an exterior region $\Omega^c$. The
classical domain-of-dependence~\cite{John} concept for a space-time
point $({\mathbf{r}}_0, T_0)$ concerns the initial and boundary values that
determine the free-space solution at that space-time point. The
bootstrap DoD ending at a given time $T_0$, in contrast, involves
field values on the scattering boundary
$\Gamma = \partial \Omega= \partial \Omega^c$ over the length of time
$T_*$ preceding $T_0$, where $T_*$ equals the amount of time that is
sufficient for a wave to traverse the diameter of the obstacle. As
discussed in this paper, in absence of additional illumination after
time $t = T_0 - T_*$, the bootstrap DoD completely determines the
scattered field on $\Gamma$ for all times $t\geq T_0$. Further, the
solution bounds that result from the bootstrap DoD approach provide
estimates on the solution values in terms of a short-time history
record, and they establish superalgebraically fast decay (i.e., decay
faster than any negative power of time) for a wide range of scattering
obstacles---including certain types of trapping obstacles for which no
previous fast-decay theory was available. The results, which do not
rely on consideration of the Lax-Phillips complex-variables scattering
framework and associated resonance-free regions in the \emph{complex}
plane, utilize only \emph{real-valued} frequencies, and follow from
use of Green functions and boundary integral equation representations
in the frequency and time domains, together with a certain $q$-growth
condition on the frequency-domain operator resolvent. As discussed in
\Cref{3d_decay_rmk_iii}, relying on some materials in
\Cref{sec:theory_part_i} and most of the materials in
\Cref{sec:theory_part_ii}, related but less informative
superalgebraically decaying solution bounds can also be obtained by
relying on the complete time-history of the incident field instead of
the bootstrap DoD.
(Estimates based on the {\em classical} concept of domain of
dependence have previously been provided, including, in particular, a
``domain-of-dependence inequality'' for the problem of scattering by
obstacles~\cite[Thm.\ 5.2]{Wilcox:62}. We note however, that this is
not a decay result and, in fact, it plays an important but very
different role: by establishing that at most exponential {\em growth}
can occur, it provides the necessary stability elements in a proof of
existence and uniqueness based on energy considerations.)
In addition to the decay-problem application, the bootstrap
DoD estimates introduced in this paper provide a
valuable tool in connection with the numerical analysis of certain
frequency-time ``hybrid'' numerical methods introduced
recently~\cite{Anderson:20} for the time-domain wave equation, which
proceed by Fourier transformation from the time domain into the
frequency domain. These hybrid solvers evaluate solutions of the wave
equation by partitioning incident wave fields into a sequence of
smooth compactly-supported wave packets and Fourier transformation in
time for each packet, and they incorporate a number of novel
approaches designed to accurately capture all high-frequency
behavior---both in time, for high frequencies, and in frequency, for
large time. The overall solution is then reconstructed as a sum of
many wave equation solutions, each with a distinct center in
time. Unlike the aforementioned complete time-history bounds, the
bootstrap DoD bounds introduced in this paper provide a natural
theoretical basis for efficient truncation of this sum while meeting a
prescribed accuracy tolerance.
Returning to the decay problem we note that, in contrast with previous
approaches, which typically rely on energy arguments and/or on analytic
continuation of the frequency-domain resolvent, the method proposed in
this paper relies on use of boundary integral equations for both
frequency-domain and time-domain problems, as well as on the Fourier
transform that relates them, and it characterizes the
multiple-scattering effects and trapping character of domain
boundaries (which is described in terms of billiard-ball trajectories,
see Section~\ref{sec:detailed_review}) in terms of the growth of the
norm of the resolvent for the frequency-domain problem as the {\em
real} frequency $\omega$ grows without bound. In detail, without
recourse to complex-analytic methods, the new approach to the decay
problem proceeds on the basis of the bootstrap DoD formulation introduced
in Section~\ref{sec:theory_part_i}, which, as mentioned above,
captures the impact of the complete time history up to a given time
$t$ in terms of the history over the time interval, immediately
preceding $t$, of time-length equal to the time $T_*$ required for
propagation across the diameter of the obstacle. The resulting bounds
provide superalgebraically-fast time-decay energy estimates (decaying
faster than any negative power of time) for a wide range of (both
trapping and nontrapping) obstacles satisfying a certain $q$-growth
condition on the resolvent operator as a function of the \emph{real}
frequency $\omega$. In particular, this theory establishes the first
rapid decay estimates for specific types of trapping obstacles which
are: (a)~Not equal to unions of convex obstacles; and (b)~Whose
periodic trapped orbits span a set of positive volumetric measure
(such as those depicted in \Cref{fig:3d_connected_trapping}).
\subsection{Overview}\label{sec:overview}
This paper is organized as follows. A brief but somewhat detailed
overview of previous decay results for the obstacle scattering problem
is presented in \Cref{sec:detailed_review}. Then, after preliminaries
presented in \Cref{sec:prelim}, \Cref{sec:theory_part_i} introduces
the bootstrap domain-of-dependence formulation and it presents
\Cref{3d_decay_thm}, which shows that for obstacles satisfying the
$q$-growth condition, if the Neumann trace is ``small'' on a (slightly
extended) bootstrap DoD time interval, then, absent additional
illumination on and after that interval, it must remain ``permanently
small'', that is, small for all times subsequent to that interval.
\Cref{sec:theory_part_ii} then extends the estimates and it
establishes, in \Cref{3d_decay_thm_ii} and
Corollaries~\ref{decay_corr} and~\ref{decay_corr_energy}, various
superalgebraically fast time decay estimates for solutions
in presence of such obstacles as well
as associated local solution energies.
\subsection{Additional Background on Decay
Theory}\label{sec:detailed_review}
In order to provide relevant background concerning decay estimates we
briefly review the literature on this long-standing problem, and we
note the significant role played in this context by the shape of the
obstacle $\Omega$. Previous studies of the decay problem establish
exponential decay of solutions for certain classes of domain
shapes---including star-shaped domains~\cite{Morawetz:61}, domains
that are ``nontrapping'' with respect to
rays~\cite{Morawetz:77,Melrose:79}, and unions of strictly convex
domains that satisfy certain spacing
criteria~\cite{Ikawa:82,Ikawa:88}. (A domain is nontrapping if each
billiard ball traveling in the exterior of $\Omega$, which bounces off
the boundary $\Gamma = \partial \Omega$ in accordance with the law of
specular reflection, and which starts within a ball $B_R$ of radius $R$ around
$\Omega$, eventually escapes $B_R$ with bounded, $R$-dependent,
trajectory length~\cite{Melrose:79}.)
Early results~\cite{Morawetz:61,Lax:63,Morawetz:66}, obtained on the
basis of energy estimates in the domains of time and frequency,
establish exponential decay for ``star-shaped'' obstacles, that is,
obstacles $\Omega$ which, for a certain $\mathbf{r}_0 \in \Omega$,
contain the line segment connecting $\mathbf{r}_0$ and any other point
$\mathbf{r}\in\Omega$. As noted in~\cite{Lax:63,Lax:67}, exponential
decay generally implies analyticity of the resolvent operator in a
strip around the real $\omega$ frequency axis. A significant
generalization of these results was achieved in~\cite{Morawetz:77}
using a hypothesis somewhat more restrictive than the non-trapping
condition, while~\cite{Melrose:79} established
exponential
local energy decay for all non-trapping obstacles. All of these
results establish exponential decay
\begin{equation}\label{energy_decay}
E(u, D, t) \le C{\mathrm{e}}^{-\alpha t} E(u, \infty, 0), \quad \alpha > 0,
\end{equation}
for the local energy
\begin{equation}\label{local_energy}
E(u, D, t) = \int_D \left|\nabla u({\mathbf{r}}, t)\right|^2 + \left|u_t({\mathbf{r}},
t)\right|^2\,\d V({\mathbf{r}})
\end{equation}
contained in a compact region $D \subset \widebar{\Omega}^c$ in terms
of the energy $E(u, \infty, 0)$ contained in all of
$\widebar{\Omega}^c$---the latter one of which is, of course,
conserved. As reviewed in Remark~\ref{ralston}, a uniform decay estimate of
the form~\eqref{energy_decay} cannot hold for trapping obstacles.
In this connection it is tangentially relevant to consider
reference~\cite{Datchev:12}, which establishes sub-exponential decay
for the problem of scattering by a globally defined smooth
potential. The method utilized in that work relies on use of simple
resolvent manipulations for the differential scattering operator
$P =-\frac{1}{\omega^2}\Delta + V(x)$, where
$V\in C_0^\infty(\mathbb{R}^n)$ denotes the globally defined
potential. Such manipulations are not applicable in the context of the
Green function-based operators for the impenetrable-scattering
problem, for which, in particular, the frequency $\omega$ is featured
in the {\em Green function exponent} of the {\em integral} scattering
operator instead of a {\em linear factor} $\frac{1}{\omega^2}$ in the
corresponding {\em differential} operator.
As mentioned above, a decay estimate of the form~\eqref{energy_decay}
cannot hold for trapping obstacles. However, by relying on analytic
continuation of the frequency-domain resolvent into a strip
surrounding the real-axis in the complex frequency domain $\omega$,
exponential decay (of a different character than expressed
in~\eqref{energy_decay}; see \Cref{ralston}) has been established for
certain trapping geometries~\cite{Ikawa:82,Ikawa:88,Fahry:91}. In
view of previous work leading to results of exponential decay, even
for trapping geometries, the question may arise as to whether the
results of superalgebraically fast decay presented in this paper
could actually be improved to imply, for example, exponentially fast
decay for trapping geometries which merely satisfy a $q$-growth
condition. A definite answer in the negative to such a question is
provided in~\cite[Thm.\ 1]{Ikawa:85}. This contribution exhibits an
example for which, consistent with earlier general
suggestions~\cite[p.\ 158]{Lax:67}, the sequence of imaginary parts of
the pole-locations of the scattering matrix in the complex plane tends
to zero as the corresponding sequence of real parts tends to infinity;
clearly such a domain cannot exhibit exponential decay in view of the
Paley-Wiener theorem~\cite[Thm.\ I]{PaleyWiener:34}. In detail, the
example presented in~\cite{Ikawa:85} concerns a domain $\Omega$
consisting of a union of two disjoint convex obstacles, where the
principal curvature of each connected component vanishes at the
closest point between the obstacles. For such a trapping domain,
reference~\cite{Burq:98} provides the only previously applicable decay
result, namely, the general-obstacle inverse polylogarithmic decay,
based on regions of the complex-$\omega$ plane for which the resolvent
operator provably has no poles. The real-$\omega$ decay analysis
presented in this paper provides significant progress, as it
establishes super-algebraic decay for a wide range of obstacles not
previously treated by classical scattering theory, including the
aforementioned vanishing-curvature example~\cite{Ikawa:85} and the
connected and significantly more strongly trapping structures depicted
in \Cref{fig:3d_connected_trapping}, for which the trapped rays form a
set of positive volumetric measure.
The aforementioned references~\cite{Ikawa:82,Ikawa:88,Fahry:91}
establish exponential decay for wave scattering for certain trapping
structures consisting of unions of disjoint convex obstacles (but see
Remark~\ref{ralston}), and thereby answer in the negative a conjecture
by Lax and Phillips~\cite[p. 158]{Lax:67} according to which
exponential decay could not occur for any trapping structure (in view
of the Lax/Phillips conjectured existence, for all trapping obstacles,
of a sequence of resonances $\lambda_j$ for which
$\mathrm{Im}\,\lambda_j \to 0^-$ as $j \to \infty$). The trapping
structures with exponential decay are taken to equal a disjoint union
of two smooth strictly convex obstacles in~\cite{Ikawa:82}, and
otherwise unions of disjoint convex obstacles in~\cite{Ikawa:88}; in
all cases the geometries considered give rise to sets of trapping rays
spanning three-dimensional point sets of zero volumetric measure:
single rays in~\cite{Ikawa:82}, and countable sets of primitive
trapped rays in~\cite{Ikawa:88} as implied by Assumption (H.2) in that
reference. To the authors' knowledge, these are the only known results
on fast decay of solutions in trapping geometries. Despite these known
exceptions to the Lax-Phillips conjecture, it has been
surmised~\cite{Ikawa:85} that ``it seems very sure that the conjecture
remains to be correct for a great part of trapping
obstacles,''---which would disallow exponential decay for most
trapping obstacles---thus providing an interesting perspective on the
main results presented in this paper---which, in particular, establish
superalgebraic decay for certain obstacles for which the set of
trapped rays span a set of positive measure.
\section{Preliminaries}\label{sec:prelim}
We consider the problem of scattering of an incident field $b$ by a
Lipschitz bounded obstacle $\Omega$, an open set $\Omega\subset \mathbb{R}^3$. More
precisely, letting
$\Box = \frac{\partial^2}{\partial t^2} - c^2\Delta$ denote the
d'Alembertian operator with wavespeed $c > 0$ and given an open set
$\Omega^\textit{inc}$ containing the closure $\widebar{\Omega}$ of
$\Omega$, $\widebar{\Omega} \subset \Omega^\textit{inc}$, we study the
solution $u$ of the initial and boundary value problem
\begin{subequations}\label{eq:w_eq}
\begin{align}
\Box\,u(\mathbf{r}, t)
&= 0,\quad\mbox{for}\quad ({\mathbf{r}}, t) \in \widebar{\Omega}^c \times (0, \infty),\label{eq:w_eq_a}\\
u(\mathbf{r},0) &= \frac{\partial u}{\partial t}(\mathbf{r}, 0)
= 0,\quad\mbox{for}\quad\mathbf{r} \in \Omega^c,\\
u(\mathbf{r}, t) &= -\gamma^+ b(\mathbf{r}, t),
\quad\mbox{for}\quad(\mathbf{r},t)\in\Gamma\times (0,\infty),\label{eq:w_eq_c}
\end{align}
\end{subequations}
on the complement $\widebar{\Omega}^c$ of $\widebar{\Omega}$, for a
given incident-field function
\begin{equation}\label{eq:bdef}
b: \Omega^\textit{inc}\times\mathbb{R} \to \mathbb{R}\;\mbox{satisfying}\;
b \in C^2(\Omega^\textit{inc}\times\mathbb{R})\; \mbox{and}\; \Box b = 0\;\mbox{in}\; \Omega^\textit{inc}\times\mathbb{R},
\end{equation}
(cf.\ \Cref{rem:wave_eq_sob_assump}), where
$\Gamma = \partial\Omega^c= \partial\Omega$ denotes the boundary of
the obstacle $\Omega$, and where $\gamma^+$ (\Cref{trace_def} in
\Cref{sob-boch}) denotes the exterior trace operator. Compatibility
of the boundary values with the initial condition requires
$b({\mathbf{r}}, 0) = b_t({\mathbf{r}}, 0) = 0$ for ${\mathbf{r}} \in \Gamma$, that is to say,
that at $t = 0$ the wave has not as yet impinged on the obstacle.
Additionally, for compatibility with the integral equation
formulation~\eqref{eq:tdie_sl} below, letting
$\mathbb{R}_0^- =\{t\in\mathbb{R}\ : \ t\leq 0 \}$, we assume that
\begin{equation}\label{all_t_b}
\mbox{``The function}\; b \in C^2(\Omega^\textit{inc} \times \mathbb{R})\;\mbox{satisfies}\; b({\mathbf{r}}, t) = 0\;\mbox{for}\; ({\mathbf{r}}, t) \in \widebar{\Omega} \times \mathbb{R}_0^-\mbox{''}.
\end{equation}
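A simple family of admissible incident fields (included here only as an illustration; the specific choice plays no role in what follows) is given by plane-wave pulses
\begin{equation*}
b(\mathbf{r}, t) = g\!\left(t - \mathbf{k}\cdot\mathbf{r}/c - t_0\right),\qquad \mathbf{k}\in\mathbb{R}^3,\quad |\mathbf{k}| = 1,
\end{equation*}
where $g\in C^\infty(\mathbb{R})$ is compactly supported with $g(s) = 0$ for $s \le 0$, and where the delay $t_0$ is taken large enough that $t - \mathbf{k}\cdot\mathbf{r}/c - t_0 \le 0$ for all $\mathbf{r}\in\widebar{\Omega}$ and all $t \le 0$. Indeed, a direct computation gives $\Box b = g'' - c^2\,(|\mathbf{k}|^2/c^2)\, g'' = 0$, and $b$ vanishes on $\widebar{\Omega}\times\mathbb{R}_0^-$ by the choice of $t_0$, so that both~\eqref{eq:bdef} and~\eqref{all_t_b} hold (with $\Omega^\textit{inc} = \mathbb{R}^3$).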
The solution $u$ is the ``scattered'' component of the total field
$u^\textit{tot} = u + b$; with these conventions, we clearly have
$\gamma^+ u^\textit{tot}(\mathbf{r}, t) = 0$ for $\mathbf{r}$ on $\Gamma$.
\begin{rem}\label{rem:w_eq_ivp_ibvp_equiv}
In the case $\Omega^\textit{inc}= \mathbb{R}^3$, the function $v = u^\textit{tot} = u + b$, satisfies
the initial-value problem
\begin{subequations}\label{eq:w_eq_ivp}
\begin{align}
\Box &v(\mathbf{r}, t) = 0,\quad\mbox{for}\quad ({\mathbf{r}}, t) \in
\widebar{\Omega}^c\times (0, \infty),\label{eq:w_eq_ivp_a}\\
v(\mathbf{r},0) &= f({\mathbf{r}}),\, \frac{\partial
v}{\partial t}(\mathbf{r}, 0)
= g({\mathbf{r}}),\quad\mbox{for}\quad\mathbf{r} \in
\Omega^c,\label{eq:w_eq_ivp_b}\\
v(\mathbf{r}, t) &= 0,
\quad\mbox{for}\quad(\mathbf{r},t)\in\Gamma\times (0,\infty),\label{eq:w_eq_ivp_c}
\end{align}
\end{subequations}
in that domain, where $f(\mathbf{r}) = u^\textit{tot}({\mathbf{r}}, 0)$ and
$g(\mathbf{r}) = u^\textit{tot}_t({\mathbf{r}}, 0)$. Conversely, given smooth
data $f$ and $g$ in all of $\Omega^c \cap \Omega^\textit{inc}$, an incident field $b$ can be
obtained by first extending $f$ and $g$ to all of space~\cite[Thm.\
3.10]{Necas:11} and using the extended data as initial data for a
free-space wave equation with solution $b$. In other words, the
problems~\eqref{eq:w_eq} and~\eqref{eq:w_eq_ivp} are equivalent.
\end{rem}
\begin{rem}
The classical literature on decay rates for the wave
equation~\cite{Lax:67} concerns problem~\eqref{eq:w_eq_ivp} with the
additional assumption that $f$ and $g$ are compactly supported. In
view of strong Huygens principle~\cite{John}, which is valid in the
present three-dimensional context, the procedure described in
\Cref{rem:w_eq_ivp_ibvp_equiv} translates the spatial
compact-support condition on $f$ and $g$ into a temporal
compact-support condition for the function $b$
in~\eqref{eq:w_eq}---i.e.\ that the incident field vanishes on the
boundary $\Gamma = \partial \Omega$ after a certain initial
``illumination time period'' has elapsed.
\end{rem}
\begin{rem}\label{rem:wave_eq_sob_assump}
Even though the main problem considered in this paper,
problem~\eqref{eq:w_eq}, is solely driven by the Dirichlet
data~\eqref{eq:w_eq_c}, our analysis relies on assumptions that are
imposed not only on the function $\gamma^+b$, but also on
$\gamma^+\partial_\mathbf{n} b$. Specifically, various results
presented in this paper assume that, for an integer $s\geq 0$
the incident field $b$ satisfies the ``$s$-regularity
conditions''
\begin{equation}\label{eq:gamma_Hs_assump}
\gamma^+b \in H^{s+1}(\mathbb{R};\,L^2(\Gamma))\quad\mbox{and}\quad
\gamma^+\partial_\mathbf{n} b \in H^s(\mathbb{R};L^2(\Gamma)).
\end{equation}
Clearly, for the most relevant incident fields $b$, such as those
mentioned in \Cref{rem:w_eq_ivp_ibvp_equiv}, namely, solutions of
the wave equation that are smooth and compactly supported in time
over any compact subset of $\Omega^{\textit{inc}}$, the
$s$-regularity conditions hold for all non-negative integers $s$.
\end{rem}
The boundary value problem~\cref{eq:w_eq} can be
reduced~\cite{HaDuong:86} to an equivalent time-domain integral
equation formulation, in which a boundary integral density $\psi$
(defined to vanish for all $t < 0$) is sought that satisfies the
boundary integral equation
\begin{equation}\label{eq:tdie_sl}
\left(S \psi \right)(\mathbf{r}, t) = \gamma^+ b(\mathbf{r}, t)\quad\mbox{for}\quad (\mathbf{r},
t) \in \Gamma \times [0, \infty),
\end{equation}
where $\gamma^+$ is the exterior trace operator, and where,
calling
\begin{equation}\label{eq:green_fnct_time}
G(\mathbf{r}, t; \mathbf{r}', t') = \frac{\delta\left((t - t') - |\mathbf{r} -
\mathbf{r}'|/c\right)}{4\pi |\mathbf{r} - \mathbf{r}'|}
\end{equation}
the Green function for the three-dimensional wave equation, $S$
denotes the time-domain single-layer boundary integral operator
\begin{equation}\label{eq:single_layer_op_time}
(S\mu)(\mathbf{r}, t) = \int_{-\infty}^t \int_\Gamma G(\mathbf{r}, t;
\mathbf{r}', t') \mu(\mathbf{r}', t')\,\d\sigma(\mathbf{r}')\,\d
t',\quad{\mathbf{r}}\in\mathbb{R}^3.
\end{equation}
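Carrying out the $t'$ integration against the delta distribution in~\eqref{eq:green_fnct_time}, the operator $S$ takes the familiar retarded-potential form
\begin{equation*}
(S\mu)(\mathbf{r}, t) = \int_\Gamma \frac{\mu\!\left(\mathbf{r}', t - |\mathbf{r} - \mathbf{r}'|/c\right)}{4\pi|\mathbf{r} - \mathbf{r}'|}\,\d\sigma(\mathbf{r}'),\quad{\mathbf{r}}\in\mathbb{R}^3,
\end{equation*}
so that the value $(S\mu)(\mathbf{r}, t)$ only involves values of the density at the retarded times $t - |\mathbf{r} - \mathbf{r}'|/c$, $\mathbf{r}'\in\Gamma$.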
Once $\psi$ has been obtained, the solution $u$ of~\eqref{eq:w_eq} is given by
\begin{equation}\label{eq:kirchhoff_3d_soft}
u(\mathbf{r}, t) = \left(S \psi \right)(\mathbf{r}, t), \quad \mathbf{r} \in
\Omega^c.
\end{equation}
As is well known, further, the solution $\psi$ of~\eqref{eq:tdie_sl} equals the Neumann trace of the solution:
$\psi(\mathbf{r}, t) = \gamma^+ \frac{\partial u^\textit{tot}}{\partial \mathbf{n}}(\mathbf{r}, t)$, $\mathbf{r}\in\Gamma$. In view
of~\eqref{eq:kirchhoff_3d_soft}, spatio-temporal estimates and
temporal decay rates for the density $\psi$ imply similar
properties for the energy $E(u, D, t)$ in~\Cref{local_energy} over a
given open set $D \subset \overline{\Omega}^c$; see \Cref{decay_corr_energy}.
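Since, per~\eqref{eq:green_fnct_time}, the value $(S\mu)(\mathbf{r}, t)$ involves the density only at the retarded times $t - |\mathbf{r} - \mathbf{r}'|/c$, the operator~\eqref{eq:single_layer_op_time} can be evaluated numerically by straightforward surface quadrature. The brief Python sketch below is included purely as an illustration (it is not part of the analysis in this paper nor of the hybrid solvers of~\cite{Anderson:20}); the unit-sphere boundary, the tensor-product quadrature rule, the unit wave speed, and the spatially constant causal density $\mu$ it employs are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

# Illustrative sketch (not part of the analysis): quadrature evaluation of
# the retarded single-layer potential (S mu)(r, t) for a spatially constant,
# causal density mu on the unit-sphere boundary Gamma, with wave speed c = 1.

c = 1.0

def sphere_quadrature(n_theta=40, n_phi=80):
    """Tensor-product quadrature nodes and weights for the unit sphere."""
    u, w_u = np.polynomial.legendre.leggauss(n_theta)   # nodes in cos(theta)
    theta = np.arccos(u)
    phi = 2.0 * np.pi * np.arange(n_phi) / n_phi
    T, P = np.meshgrid(theta, phi, indexing="ij")
    pts = np.stack([np.sin(T) * np.cos(P),
                    np.sin(T) * np.sin(P),
                    np.cos(T)], axis=-1).reshape(-1, 3)
    wts = np.outer(w_u, np.full(n_phi, 2.0 * np.pi / n_phi)).ravel()
    return pts, wts   # sum(wts) equals 4*pi, the area of the unit sphere

def mu(t):
    """Causal density: a smooth bump in time, supported in (0, 2)."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    mask = (t > 0.0) & (t < 2.0)
    out[mask] = np.exp(-1.0 / (t[mask] * (2.0 - t[mask])))
    return out

def single_layer(r, t, pts, wts):
    """Quadrature approximation of (S mu)(r, t) at the point r and time t."""
    dist = np.linalg.norm(r - pts, axis=1)               # |r - r'|
    return np.sum(wts * mu(t - dist / c) / (4.0 * np.pi * dist))

pts, wts = sphere_quadrature()
r_obs = np.array([2.0, 0.0, 0.0])      # observation point in the exterior
for t in (0.5, 1.5, 3.0, 6.0):
    print(f"t = {t:3.1f}  (S mu)(r_obs, t) = {single_layer(r_obs, t, pts, wts):.6f}")
\end{verbatim}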
As mentioned in Section~\ref{Introduction}, this paper presents
estimates on the solutions $\psi$ of the integral equation
problem~\eqref{eq:tdie_sl}, including results for certain classes of
trapping obstacles (cf. the first paragraph of
Section~\ref{Introduction}). Our analysis is based on consideration of
the respective frequency-domain counterparts of problems~\eqref{eq:w_eq}
and~\eqref{eq:tdie_sl}, namely, for each frequency $\omega$, the
Helmholtz equation with wavenumber $\kappa(\omega)=\omega/c$ and the frequency
domain integral equation
\begin{equation}\label{CFIE_direct}
A_\omega \psi^f = \gamma^+ \partial_\mathbf{n} B^f - \i \eta \gamma^+ B^f,
\end{equation}
for the temporal Fourier transform
$\psi^f = \psi^f(\mathbf{r}, \omega)$ of $\psi$, where we have used
the definition~\eqref{eq:fourier_transf} below of the temporal Fourier
transform. (Note that
$\psi^f(\mathbf{r}, \omega) = \mathcal{F}\lbrace \gamma^+ \frac{\partial u^\textit{tot}}{\partial
\mathbf{n}}(\mathbf{r}, \cdot)\rbrace(\omega)$, $\mathbf{r}\in\Gamma$.)
In~\eqref{CFIE_direct} $A_\omega$ denotes the frequency-domain
combined-field scattering operator, which is given explicitly in
\Cref{Aomega_def_eqn}.
\begin{rem}\label{FT_conv}
Throughout this article the superscript $f$ upon an upper case
letter indicates a function of the temporal frequency
$\omega$. Thus, for any function $f(t)$ of the time variable $t$ we
write
\begin{equation}\label{eq:fourier_transf}
F^f(\omega) = \mathcal{F} [f](\omega) = \int_{-\infty}^\infty f(t) {\mathrm{e}}^{-\i\omega t}\,\d t,
\end{equation}
where $f$ (and therefore also $F^f = \mathcal{F} [f]$), is either a
complex-valued scalar function or a function with values in a Banach
space $X$ over the complex numbers. In the latter case the integral on
the right-hand side of~\eqref{eq:fourier_transf} indicates integration
in the sense of Bochner~\cite{Hille,DunfordSchwartz,Weis}.
\end{rem}
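Under the convention~\eqref{eq:fourier_transf}, the inversion formula and the Plancherel identity take the forms
\begin{equation*}
f(t) = \frac{1}{2\pi}\int_{-\infty}^\infty F^f(\omega)\, {\mathrm{e}}^{\i\omega t}\,\d\omega
\quad\mbox{and}\quad
\int_{-\infty}^\infty \left\|f(t)\right\|_X^2\,\d t = \frac{1}{2\pi}\int_{-\infty}^\infty \left\|F^f(\omega)\right\|_X^2\,\d\omega,
\end{equation*}
the latter whenever $f \in L^2(\mathbb{R}; X)$ with $X$ a Hilbert space; these standard relations are used tacitly in the frequency-domain manipulations that follow.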
Our analysis
relates the decay problem to the growth of norms of associated
frequency-domain solution operators as the frequency grows without
bound. Studies of such norm growths, which provide an indicator of the
energy-trapping character of obstacles, have been undertaken over the
last several decades---including growth estimates for geometries that
exhibit a variety of trapping
behavior~\cite{Popov:91,Cardoso:02,ChandlerWilde:09,BetckeChandlerWilde:10,Spence:16,Spence:20}.
\begin{defi}[Combined field operator]\label{Aop_def}
Let $\Omega$ denote a Lipschitz domain with boundary $\Gamma$ and
let $\eta$ be a nonzero real number. Then, letting $G_\omega$
denote the Green function for the Helmholtz equation with wavenumber $\kappa(\omega) = \omega / c$,
$G_\omega(\mathbf{r}, \mathbf{r}') = \frac{{\mathrm{e}}^{\i\frac{\omega}{c}
|\mathbf{r} - \mathbf{r}'|}}{4\pi|\mathbf{r} - \mathbf{r}'|}$,
and letting $S_\omega$ and $K_\omega^*$ denote the single-layer and
adjoint double-layer operators (see e.g.~\cite{ChandlerWilde:12}),
\begin{equation}\label{eq:single_layer_op}
(S_\omega\mu)(\mathbf{r}) = \int_\Gamma G_{\omega}(\mathbf{r}, \mathbf{r'})
\mu(\mathbf{r'})\,\d\sigma(\mathbf{r'}),\quad \mathbf{r} \in \mathbb{R}^3, \quad \mbox{and}
\end{equation}
\begin{equation}\label{eq:adjoint_double_layer_op}
(K^*_\omega\mu)(\mathbf{r}) = \int_\Gamma \frac{\partial G_{\omega}(\mathbf{r},
\mathbf{r'})}{\partial n(\mathbf{r})}
\mu(\mathbf{r'})\,\d\sigma(\mathbf{r'}), \quad \mathbf{r} \in \Gamma,
\end{equation}
the combined-field operator $A_\omega$ is defined by
\begin{equation}\label{Aop_def_eqn}
A_{\omega,\eta} = \frac{1}{2}I + K_\omega^* - \i\eta
S_\omega.
\end{equation}
Letting $\omega_0 > 0$ we also define for $\omega \ge 0$
\begin{equation}\label{Aomega_def_eqn}
A_\omega = A_{\omega,\eta_0} \quad\mbox{where}\quad \eta_0 =
\begin{cases} 1, &~\mbox{if}~0 \le \omega < \omega_0\\
\omega, &~\mbox{if}~\omega \ge \omega_0,
\end{cases}
\end{equation}
where the dependence of $A_\omega$ on $\omega_0$ has been suppressed.
\end{defi}
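We note, for use in the proof of \Cref{psi_Hp} below, that the choice~\eqref{Aomega_def_eqn} implies the elementary bound $|\eta_0| \le (1 + \omega^2)^{1/2}$ for all $\omega \ge 0$, since $\eta_0$ equals either $1$ or $\omega$.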
\begin{rem}\label{A_om_invert}
As is well-known~\cite[Thm. 2.17]{ChandlerWilde:12}, for domains
with Lipschitz boundaries $\Gamma$, the operators $S_\omega$,
$K_\omega^*$ and $A_\omega$ are continuous operators on
$L^2(\Gamma)$ for each $\omega \ge 0$. Further, for
$\mathrm{Re}(\eta) \ne 0$ the operator $A_{\omega,\eta}$ is
invertible for all $\omega \ge 0$, see e.g.~\cite[Thm.\
2.27]{ChandlerWilde:12}.
\end{rem}
\begin{rem}\label{psi_omega_independent}
The function $\psi^f = \psi^f(\mathbf{r}, \omega)$ defined above,
which, by definition, is independent of the real parameter $\eta$
in~\cref{Aop_def_eqn}, is a solution of the $\eta$-dependent
equation
$A_{\omega,\eta} \psi^f = \gamma^+ \partial_\mathbf{n} B^f - \i \eta
\gamma^+ B^f$ for every real value of $\eta$ (see
e.g.~\cite{ChandlerWilde:12}). This fact, which implies that
$\psi^f$ is also a solution of~\eqref{CFIE_direct} for all real
values of $\omega$, is exploited in our analysis of the decay
problem (e.g., in the proofs of Lemmas~\ref{3d_decay_lemma_Ainv_freq_bounds}
and~\ref{3d_decay_thm_recursive_bound_general}).
\end{rem}
This article utilizes a certain ``$q$-growth condition'' which, for a
given $q\in \mathbb{R}$, may or may not be satisfied by a given
obstacle $\Omega$---namely, that a constant $C$ exists such that, for
all $\omega>0$, the relation~\eqref{eq:q-nontrapp} below holds. The
$q$-growth condition is fundamentally a statement on the
high-frequency character of the operator $A_\omega^{-1}$, and it is
indeed a condition on the domain $\Omega$ which is independent of the
constant $\omega_0$ in~\eqref{Aomega_def_eqn}. This can be verified by
noting that, given any compact intervals $I_\eta$ and $I_\omega$,
where $I_\eta$ is bounded away from $0$, the norm
$\norm{A_{\omega,\eta}^{-1}}_{L^2(\Gamma)\to L^2(\Gamma)}$ is
uniformly bounded for $(\eta,\omega)\in I_\eta\times I_\omega$---as it
can be established on the basis of the
theory~\cite{ChandlerWilde:12,McLean} for the combined field operator
on a Lipschitz domain together with a compactness argument based on
Taylor expansions of the oscillatory exponential factor of the Green
function and Neumann series for the resolvent around each
pair $(\eta,\omega)\in I_\eta\times I_\omega$.
\begin{defi}[$q$-growth condition]\label{q-nontrapp} For real $q$
and $\omega_0 >0$ and for an obstacle $\Omega$ with Lipschitz
boundary $\Gamma$, we say that $\Omega$, and $\Gamma$, satisfy a
$q$-\emph{growth} condition iff for all $\omega \ge 0$, the operator
$A_\omega$ in~\eqref{Aomega_def_eqn} satisfies either the bound
\begin{equation}\label{eq:q-nontrapp}
\norm{A_\omega^{-1}}_{L^2(\Gamma)\to L^2(\Gamma)} \le C (1 +
\omega^2)^{q/2},
\end{equation}
for some constant $C > 0$, or, equivalently, the bound
\begin{equation}\label{eq:q-nontrapp-cases}
\left\|A_\omega^{-1}\right\|_{L^2(\Gamma)\to L^2(\Gamma)} \le
\begin{cases} C_1, &~\mbox{if}~\omega \le \omega_0\\
C_2 \omega^q, &~\mbox{if}~\omega > \omega_0
\end{cases}
\end{equation}
for certain constants $C_1 > 0$ and $C_2 > 0$.
\end{defi}
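For definiteness we record the routine equivalence of~\eqref{eq:q-nontrapp} and~\eqref{eq:q-nontrapp-cases} in the case $q \ge 0$ (the case arising in the theorems below): \eqref{eq:q-nontrapp} yields~\eqref{eq:q-nontrapp-cases} with $C_1 = C(1+\omega_0^2)^{q/2}$ and $C_2 = C(1+\omega_0^{-2})^{q/2}$, since
\begin{equation*}
(1+\omega^2)^{q/2} = \omega^q\left(1+\omega^{-2}\right)^{q/2} \le \omega^q\left(1+\omega_0^{-2}\right)^{q/2}\quad\mbox{for}\quad \omega > \omega_0,
\end{equation*}
while, conversely, \eqref{eq:q-nontrapp-cases} yields~\eqref{eq:q-nontrapp} with $C = \max(C_1, C_2)$, in view of the inequalities $1 \le (1+\omega^2)^{q/2}$ and $\omega^q \le (1+\omega^2)^{q/2}$.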
Per~\cite[Lem.\ 6.2]{Spence:20}, polynomially growing bounds on the
norm of this inverse operator in the high-frequency regime can be
obtained on the basis of corresponding polynomially growing bounds as
$\omega\to\infty$ on the norm of the resolvent operator
$(\Delta + \omega^2)^{-1}$. The results of this article thus imply
superalgebraically-fast local energy decay of solutions to
\Cref{eq:w_eq} for any Lipschitz domain for which such real-axis
high-frequency resolvent bounds can be established.
\begin{rem}\label{3d_decay_generalization}
It is known that the $q$-growth condition is satisfied with a variety of
$q$-values for various classes of obstacles. For example, reference~\cite[Thm.\
1.13]{Spence:16} shows that a smooth \emph{nontrapping} obstacle
satisfies a $q$-growth condition with $q = 0$. A related $q = 0$ result is
presented in~\cite{ChandlerWilde:08} for merely Lipschitz domains, but under
the stronger assumption that the obstacle is star-shaped.
Reference~\cite{Spence:20} shows that for ``hyperbolic'' trapping regions (in
which all periodic billiard ball trajectories are unstable),
a merely logarithmic growth in $\omega$ results, while for certain
``parabolic'' trapping regions, stronger ($q = 2$ or $q = 3$) growth takes place. It is
also known that much more strongly-trapping obstacles exist, including
obstacles for which exponentially-large inverse operator norms
$\left\|A_\omega^{-1}\right\|_{L^2(\Gamma)\to L^2(\Gamma)}$
occur~\cite{Cardoso:02, BetckeChandlerWilde:10, Lafontaine:20} (and which,
therefore, do not satisfy the $q$-growth condition for any $q$).
\end{rem}
\begin{rem}\label{connected-trapping}
There exist connected trapping obstacles containing cavities that
satisfy the $q$-growth condition of \Cref{q-nontrapp} for some value
of $q$. Indeed, the obstacle
\[
\Omega = [-1,1]\times[-1,1]\times[-1,1]\setminus [-1/2,1/2]\times[-1/2,1/2]\times[0,1]
\]
(a cube with a smaller cube removed from one of its faces as
displayed in \Cref{fig:3d_connected_trapping}) is an
$(R_0, R_1, a)$ parallel trapping obstacle in the sense
of~\cite[Def.\ 1.9]{Spence:20}, for $R_1 > {\mathrm{e}}^{1/4} R_0$,
$R_0 \ge \sqrt{3/2}$, and $a = 1$. According to~\cite[Cor.\ 1.14
and Rem.\ 1.16]{Spence:20}, $\Omega$ satisfies a $q$-growth
condition with $q=3$. Smoothing of the corners of this obstacle
results in a connected trapping obstacle that satisfies a
$q$-growth condition with $q=2$.
\end{rem}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{connected-trapping-cavity.png}
\includegraphics[width=0.49\textwidth]{elongated-connected-trapping-cavity.png}
\caption[Example of connected trapping obstacles that satisfy a $q$-growth
condition and for which wave equation decay rates are established.]{Examples
of connected trapping obstacles that satisfy a $q$-growth condition ($q = 3$)
and for which superalgebraically-fast wave equation time decay rates are
established in this article. Left: Visualization of the obstacle given in
\Cref{connected-trapping}, and which serves to demonstrate the existence of connected
trapping obstacles satisfying the $q$-growth condition of \Cref{q-nontrapp}. Right: An
elongated cavity trapping obstacle, with a vertical dimension of $12$ units
and rectangular dimensions of $4$ units, that also satisfies a $q$-growth
condition ($q = 3$).}\label{fig:3d_connected_trapping}
\end{figure}
\section{Uniform bootstrap domain-of-dependence boundary density estimates}\label{sec:theory_part_i}
This section presents uniform-in-time domain-of-dependence estimates
(\Cref{3d_decay_thm}) on the solution $\psi$ of the time-domain
boundary integral equation~\cref{eq:tdie_sl}. In particular, these
estimates show that if $\psi$ is ``small'' throughout $\Gamma$ for any
domain-of-dependence time interval of length
\begin{equation}\label{Tstar_def}
T_* \coloneqq \diam(\Gamma) / c = \max_{\mathbf{r},\mathbf{r}' \in \Gamma} |\mathbf{r} - \mathbf{r}'|/c
\end{equation}
after the $\Gamma$-values of the incident field have been turned off,
then $\psi$ will remain small for all time thereafter. At a
fundamental level, the domain-of-dependence analysis presented in this
section exploits an interesting property of
equation~\eqref{eq:tdie_sl} (made precise in
Lemmas~\ref{3d_decay_lemma_2ndkind}
and~\ref{3d_decay_thm_h_equiv}), namely, that, after the incident
field $b(\mathbf{r}, t)$ has been turned off permanently throughout
$\Gamma$, the values of its solution $\psi(\mathbf{r}, t)$ over a
domain-of-dependence time length $T_*$ determine the solution uniquely
for all subsequent times: the boundary integral density over the
interval $I_{T}$ encodes the necessary information to reproduce all
future scattering events and, in particular, it encapsulates the
effect of all previously imposed boundary values, over time periods of
arbitrarily long duration.
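(As a concrete instance: for the trapping obstacle considered in \Cref{connected-trapping}, whose diameter coincides with that of the cube $[-1,1]\times[-1,1]\times[-1,1]$, we have $\diam(\Gamma) = 2\sqrt{3}$ and, therefore, $T_* = 2\sqrt{3}/c$.)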
For technical reasons we utilize domain-of-dependence intervals
$I_{T}$, as detailed in \Cref{domainofdep}, of a length slightly
larger than $T_*$---larger by a small amount $2\tau>0$. Any positive
value of $\tau$ can be used, and the selection only affects the
multiplicative constants and integration domains in the main
domain-of-dependence estimates presented in this paper. The rest of
this section is organized as follows: Definitions~\ref{domainofdep}
through~\ref{def:sob_bochner} lay down the conventions necessary to
subsequently state and prove the main result of the section, namely,
\Cref{3d_decay_thm}. After these definitions, the statement of the
theorem is introduced. Lemmas~\ref{psi_Hp} through~\ref{sob_lemma} then
establish a series of results required in the proof of the theorem,
and then, concluding the section, the theorem's proof is presented.
\begin{defi}[Domain-of-dependence interval]\label{domainofdep}
Using the definition~\eqref{Tstar_def},
$T_* \coloneqq \diam(\Gamma) / c$, for a given Lipschitz boundary
$\Gamma$, and for given real numbers $T$ and $\tau > 0$, the time
interval
\begin{equation}\label{time_interval_def}
I_{T} = I_{T,T_*,\tau}= [T - T_* - 2\tau, T),
\end{equation}
will be referred to as the $\tau$-extended ``domain-of-dependence''
interval ($\tau$DoD) relative to the ``observation'' time $T$. (As
suggested by the notations in~\eqref{time_interval_def}, the
dependence of $I_T$ on the parameters $T_*$ and $\tau$ will not be
made explicit in what follows.)
\end{defi}
\begin{rem}\label{ITvsI0}
The interval $I_T$ plays two important roles in our analysis. On one
hand, this interval, for some given value $T=T_0$, figures
prominently in the statements of the two main theorems, namely,
Theorems~\ref{3d_decay_thm} and~\ref{3d_decay_thm_ii}: the estimates
provided by these theorems are valid for times past the end of the
interval $I_{T_0}$ for any fixed given time $T_0$ under
consideration. To adequately account for the complex interplay of
recentering in time and oscillation rate in the frequency variable
$\omega$ for solutions of the wave equation, on the other hand, the
proof of Theorem~\ref{3d_decay_thm_ii} also relies on use of the
interval $I_T$ and the time $T_0$, but, this time, with
$T = 0 < T_0$. Thus, the recentering idea that is exploited
in~\cite{Anderson:20} to eliminate oscillations induced in frequency
domain by the passage of time, is also employed here as an element
of the proof of Theorem~\ref{3d_decay_thm_ii} (or, more precisely,
in the proof of the preliminary Lemma~\ref{Rderiv_sobolev_bound}) to
usefully exploit the frequency-time isometry inherent in the
Plancherel theorem without incurring the impact of large frequency
derivatives and resulting uncontrolled constants in Sobolev
estimates provided in Lemma~\ref{Rderiv_sobolev_bound}. To make this
possible, a translation in time, and thus the use of intervals $I_T$
with $T\ne T_0$, become necessary.
\end{rem}
\begin{defi}[Time-windowed solutions]\label{timewinddens}
For a given solution $\psi$ to~\eqref{eq:tdie_sl} and a given
$\tau$DoD interval $I_{T}$, using non-negative window functions
\begin{equation}\label{wtau_def}
w_-(t) = \begin{cases} &1\mbox{~for~} t < -\tau\\
&0\mbox{~for~} t \ge 0
\end{cases},\quad
w_+(t) = \begin{cases} &0\mbox{~for~} t < -\tau\\
&1\mbox{~for~} t \ge 0,
\end{cases}
\end{equation}
satisfying the ``Partition-of-unity'' relation $w_- + w_+ = 1$, we
define the auxiliary densities
\begin{equation}\label{psipm_def}
\psi_{-,T}({\mathbf{r}}, t) = w_-(t - T)\psi({\mathbf{r}},
t),\quad\psi_{+,T}({\mathbf{r}}, t) = w_+(t - T)\psi({\mathbf{r}}, t),
\end{equation}
whose temporal supports satisfy
$\supp \psi_{-,T} \subset (-\infty, T]$ and
$\supp \psi_{+,T} \subset [T -\tau, \infty)$, and for which the
relation
\begin{equation}\label{pou_decomp}
\psi(\cdot, t) = \psi_{-,T_0}(\cdot, t) + \psi_{+,T_0}(\cdot, t)
\end{equation}
holds for all time $t$. We further define
\begin{equation}\label{psistar_def}
\psi_{*,T}(\mathbf{r}, t) = w_+(t - T + T_* + \tau)w_-(t - T)\psi(\mathbf{r}, t),
\end{equation}
which is nonzero only in the interior of $I_{T}$
(equation~\eqref{time_interval_def}). Similarly, for a given integer
$p > 0$ we define the functions
\begin{equation}\label{psipm_p_def}
\psi_{p,-,T}({\mathbf{r}}, t) = w_-(t - T)\partial^p \psi({\mathbf{r}},
t),\quad\psi_{p,+,T}({\mathbf{r}}, t) = w_+(t - T)\partial^p \psi({\mathbf{r}}, t)
\end{equation}
and
\begin{equation}\label{psistar_p_def}
\psi_{p,*,T}(\mathbf{r}, t) = w_+(t - T + T_* + \tau)w_-(t - T)\partial^p \psi(\mathbf{r}, t).\qedhere
\end{equation}
\end{defi}
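An explicit admissible choice of such windows (one among many; the particular selection plays no role in what follows) is
\begin{equation*}
w_+(t) = \frac{\varrho(t+\tau)}{\varrho(t+\tau) + \varrho(-t)},\qquad w_-(t) = 1 - w_+(t),\qquad\mbox{where}\quad
\varrho(s) = \begin{cases} {\mathrm{e}}^{-1/s}, & s > 0,\\ 0, & s\le 0,\end{cases}
\end{equation*}
for which $w_\pm$ are non-negative and infinitely differentiable, $w_+(t) = 0$ for $t \le -\tau$, and $w_+(t) = 1$ for $t \ge 0$.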
The smooth temporal decompositions introduced
in \Cref{timewinddens} play central roles in the derivations of the
uniform-in-time bounds and decay estimates presented in this
paper. Noting that $\psi$ is identical to $\psi_{+,T_0}$ for
$t > T_0$, to produce such bounds and estimates we first relate
$\psi_{+,T_0}$ to $\psi_{-,T_0}$ and thereby obtain a norm bound for
$\psi_{+,T_0}$ over the slightly larger time interval
$[T_0 - \tau, \infty)$---which then yields, in particular, the desired
domain-of-dependence estimate on the interval $[T_0 , \infty)$. The
necessary bounds for $\psi_{+,T_0}$ are produced via Fourier
transformation into the frequency domain and subsequent use of
existing wavenumber-explicit
bounds~\cite{ChandlerWilde:08,Spence:14,Spence:16,Spence:20,ChandlerWilde:09}
on the inverses of frequency-domain versions of our integral
scattering operator. A similar approach is followed for the
time-derivatives of the incident field data, leading to bounds in
temporal Sobolev norms of arbitrary orders, and, in particular, via
Sobolev embeddings, to uniform bounds in time. As suggested above,
the approach intertwines the frequency and time domains, and it
thus incorporates frequency-domain estimates while also exploiting the
time domain Huygens' principle, cf.\ for example the relations~\cref{h_equiv} and~\cref{HtoUbound0}.
The main result of this section, \Cref{3d_decay_thm}, which is stated
in what follows, relies on the definition of Sobolev-Bochner spaces in \Cref{sob-boch}.
\begin{theorem}\label{3d_decay_thm}
Let $p$ and $q$ denote non-negative integers, let $T_0 > 0$ and
$\tau > 0$ be given, and assume (i) $\Gamma$ satisfies the
$q$-growth condition (\Cref{q-nontrapp}); (ii) The incident field
satisfies the $s$-regularity
conditions~\eqref{eq:gamma_Hs_assump} with
$s = p + 2q + 1$; as well as, (iii) The incident field
$b = b({\mathbf{r}}, t)$ satisfies \Cref{all_t_b} and it vanishes for
$({\mathbf{r}}, t) \in \widebar{\Omega} \times \left\lbrace I_{T_0} \cup [T_0, \infty)\right\rbrace$, with $I_{T_0}$ as in
\Cref{domainofdep}. Then, the solution $\psi$ of \Cref{eq:tdie_sl}
satisfies both the $H^p$ estimate
\begin{equation}\label{density_Hp_time_bound}
\left\|\psi\right\|_{H^p([T_0, \infty);\,L^2(\Gamma))} \le
C(\Gamma, \tau, p) \left\|\psi\right\|_{H^{p+q+1}(I_{T_0};\,L^2(\Gamma))}
< \infty,
\end{equation}
and, in the case $p=1$ the time-uniform estimate
\begin{equation}\label{density_unif_time_bound}
\left\|\psi(\cdot, t)\right\|_{L^2(\Gamma)} \le
C(\Gamma, \tau)\left\|\psi\right\|_{H^{q+2}(I_{T_0};\,L^2(\Gamma))}\quad \mbox{for}\quad t > T_0.
\end{equation}
\end{theorem}
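As an illustration of the scaling in~\eqref{density_unif_time_bound}: for a smooth nontrapping obstacle, which (as recalled in \Cref{3d_decay_generalization}) satisfies the $q$-growth condition with $q = 0$, the time-uniform estimate becomes
\begin{equation*}
\left\|\psi(\cdot, t)\right\|_{L^2(\Gamma)} \le
C(\Gamma, \tau)\left\|\psi\right\|_{H^{2}(I_{T_0};\,L^2(\Gamma))}\quad \mbox{for}\quad t > T_0,
\end{equation*}
so that, under the assumptions of the theorem, the $L^2(\Gamma)$ size of the Neumann trace at all times past $T_0$ is controlled by its $H^2$-in-time size over the single $\tau$DoD interval $I_{T_0}$.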
\begin{rem}\label{laplace_estimate_remark}
Several results in this article, including \Cref{3d_decay_thm}, \Cref{int_eq_pderiv}, \Cref{3d_decay_thm_ii} and \Cref{decay_estimate_L2},
utilize the (Fourier domain) $q$-growth condition-based estimate of \Cref{psi_Hp} to ensure that $\psi \in H^p(\mathbb{R}; L^2(\Gamma))$ in order to establish the less stringent requirement that
$\psi \in H^p([r, s]; L^2(\Gamma))$, $-\infty < r < s < \infty$. It is interesting to note, however, that the regularity assumptions of all of the aforementioned results can be relaxed if, instead,
Laplace-domain bounds are utilized. In contrast to \Cref{psi_Hp}, on the
basis of Laplace-domain estimates (in particular~\cite[Thm.\ 4.2]{Monk:14}
in conjunction with~\cite[Lem.\ 2]{Chen:10}, cf.\
also~\cite{HaDuong:86,Lubich:94}) such bounds provide that if $b$ is causal
and satisfies
\[
\gamma^+ b \in H^{p+1}([0, T]; L^2(\Gamma))\quad\mbox{and}\quad \gamma^+ \partial_\mathbf{n} b \in H^p([0, T]; L^2(\Gamma)),
\]
then $\psi \in H^p([0, T]; L^2(\Gamma))$. While we eschewed use of these bounds in order that our results be more self-contained,
we note here how the regularity assumptions of the various lemmas and theorems
can be relaxed by application of these auxiliary results. Assumption (ii) of \Cref{3d_decay_thm} can be relaxed to
the $s$-regularity conditions with $s = p + q + 1$, the assumptions of \Cref{int_eq_pderiv} can be relaxed to
the $s$-regularity conditions with $s = p + 1$, assumption (ii) of \Cref{3d_decay_thm_ii} can be relaxed to
$s$-regularity conditions with $s = p + (n+1)(q+1)$, and, finally, the assumptions of \Cref{decay_estimate_L2} can be relaxed to
the $s$-regularity conditions with $s = (n+1)(q+1)$.
Note that none of the estimates are quantitatively affected---these assumptions effectively are only used to ensure that the right-hand side quantities in the desired estimates are finite.
Also note that all of the assumptions mentioned here are immediately satisfied for data $b$ that is smooth in time---see \Cref{rem:wave_eq_sob_assump}.
\end{rem}
The proof of \Cref{3d_decay_thm} is deferred to the end of this
section, following a series of preparatory Lemmas
(Lemmas~\ref{psi_Hp} through \ref{sob_lemma}). The first of these
lemmas relates the size of the solution of equation~\eqref{eq:tdie_sl}
to the size of the imposed incident fields
$\gamma^+ b(\mathbf{r}, t)$ in various norms and under various assumptions on $\Gamma$.
\begin{rem}\label{negative_freq}
Without loss of generality, the incident field $b(\mathbf{r}, t)$ is
assumed to be real-valued. It follows that the solution
$\psi(\mathbf{r}, t)$ of \Cref{eq:tdie_sl} is real-valued, which
implies that its Fourier transform $\psi^f(\mathbf{r}, \omega)$
satisfies the Hermitian symmetry relation
\begin{equation}\label{negative_freq_eq}
\psi^f(\mathbf{r}, -\omega) = \overline{\psi^f(\mathbf{r}, \omega)}.
\end{equation}
Our studies of frequency-domain operator norms are therefore
restricted to the range $\omega \geq 0$, as is often the case in
mathematical frequency-domain scattering
theory~\cite{Spence:20,Spence:16,BetckeChandlerWilde:10}. The same
symmetry relations apply for quantities such as $h_T(\mathbf{r}, t)$,
$\psi_{\pm,T}(\mathbf{r}, t)$, $\psi_{*,T}(\mathbf{r}, t)$, etc., that are
defined in terms of the real-valued $\psi(\mathbf{r}, t)$.
\end{rem}
\begin{lemma}\label{psi_Hp}
Given an integer $q\geq 0$, let $\Gamma$ satisfy a $q$-growth
condition, and let the incident field $b$ satisfy the
$s$-regularity conditions~\eqref{eq:gamma_Hs_assump} with
$s = p + q$. Then \Cref{eq:tdie_sl} admits a unique solution $\psi \in H^p(\mathbb{R};L^2(\Gamma))$ and
\begin{equation}\label{psi_Hp_bound}
\left\|\psi\right\|_{H^p(\mathbb{R};\,L^2(\Gamma))} \le
C(\Gamma)\left( \left\|\gamma^+ \partial_\mathbf{n}
b\right\|_{H^{p+q}(\mathbb{R};\,L^2(\Gamma))}
+ \left\|\gamma^+
b\right\|_{H^{p+q+1}(\mathbb{R};\,L^2(\Gamma))}\right).
\end{equation}
If, additionally, $b$ satisfies
the $s$-regularity conditions~\eqref{eq:gamma_Hs_assump} with
$s = p + q + 1$, then $\psi \in C^p(\mathbb{R};L^2(\Gamma))$.
\end{lemma}
\begin{proof}
Using equation~\cref{CFIE_direct} we obtain
\[
(1 + \omega^2)^{p/2} \psi^f = (1 + \omega^2)^{p/2}
A_\omega^{-1}\left(\gamma^+ \partial_\mathbf{n} B^f - \i \eta \gamma^+B^f \right),
\]
and, thus, in view of \eqref{eq:q-nontrapp} and taking into account
the relation $|\eta| \le (1 + \omega^2)^{1/2}$ that follows easily
from~\eqref{Aomega_def_eqn}, we obtain
\begin{align*}
&\begin{alignedat}{2}(1 + \omega^2)^p\left\|\psi^f(\cdot, \omega)\right\|^2_{L^2(\Gamma)} \le C(1
+ \omega^2)^{p+q}&
\left(\left\|\gamma^+\partial_\mathbf{n} B^f(\cdot,
\omega)\right\|^2_{L^2(\Gamma)}\right.\\
&+ \left.|\eta|^2\left\| \gamma^+B^f(\cdot, \omega)\right\|^2_{L^2(\Gamma)}\right)
\end{alignedat}\\
&\le C\left((1 + \omega^2)^{p+q}\left\|\gamma^+\partial_\mathbf{n} B^f(\cdot,
\omega)\right\|^2_{L^2(\Gamma)}
+ (1 + \omega^2)^{p+q+1}\left\|\gamma^+B^f(\cdot,
\omega)\right\|^2_{L^2(\Gamma)}\right).
\end{align*}
Integrating with respect to $\omega$, the
estimate~\cref{psi_Hp_bound} follows. The fact that
$\psi \in C^p(\mathbb{R};\,L^2(\Gamma))$ results from an application
of~\eqref{psi_Hp_bound} with $p$ substituted by $p+1$ and the use of
the Sobolev embedding \Cref{sob_lemma}.
\end{proof}
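For clarity, the integration step at the end of the preceding proof can be written out using the (equivalent) Fourier-side norm~\eqref{sob_bochner_fourier} on $H^p(\mathbb{R};L^2(\Gamma))$; schematically, with the constants arising from the norm equivalence not tracked,
\begin{equation*}
\left\|\psi\right\|^2_{H^p(\mathbb{R};\,L^2(\Gamma))} \le C \int_{-\infty}^\infty (1 + \omega^2)^{p}\left\|\psi^f(\cdot, \omega)\right\|^2_{L^2(\Gamma)}\,\d\omega \le C\left( \left\|\gamma^+ \partial_\mathbf{n} b\right\|^2_{H^{p+q}(\mathbb{R};\,L^2(\Gamma))} + \left\|\gamma^+ b\right\|^2_{H^{p+q+1}(\mathbb{R};\,L^2(\Gamma))}\right),
\end{equation*}
which is \Cref{psi_Hp_bound} up to the elementary inequality $(x + y)^{1/2} \le x^{1/2} + y^{1/2}$ for $x, y \ge 0$.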
\begin{rem}\label{rem:tilde_1}
It will be necessary in what follows (specifically, in the proofs of
the $H^p$ estimates in \Cref{3d_decay_thm} and
\Cref{3d_decay_thm_ii} for $p > 0$ and associated preparatory
lemmas) to allow for right-hand sides other than the specific wave
solution $b(\mathbf{r}, t)$ appearing in the Dirichlet problem
\Cref{eq:w_eq} and the integral equation formulation
\Cref{eq:tdie_sl}. We thus consider scattering problems with an
incident-field wave solution $\widetilde{b}(\mathbf{r}, t)$, similar
to $b$, for which we have the corresponding integral equation
formulation
\begin{equation}\label{eq:tdie_sl_generic}
(S\widetilde{\psi})(\mathbf{r}, t) = \gamma^+ \widetilde{b}(\mathbf{r}, t)\quad\mbox{for}\quad (\mathbf{r}, t) \in
\Gamma\times\mathbb{R},
\end{equation}
with solution $\widetilde{\psi}$, where $S$ denotes the single layer
integral operator \Cref{eq:single_layer_op_time}. We also define
$\widetilde{\psi}_{-,T}$, $\widetilde{\psi}_{+,T}$, and
$\widetilde{\psi}_{*,T}$ analogously to $\psi_{-,T}$, $\psi_{+,T}$,
and $\psi_{*,T}$, respectively, in \Cref{timewinddens}.
Of course, Lemma~\ref{psi_Hp} also holds in the context of solutions
of the integral equation~\cref{eq:tdie_sl_generic}.
\end{rem}
Analogously to \Cref{all_t_b}, and letting $\mathbb{R}_\alpha^- =\{t\in\mathbb{R}\ : \ t\leq \alpha \}$,
we assume, for some $\alpha \in \mathbb{R}$, that
\begin{equation}\label{all_t_b_generic}
\mbox{``The function}\; \widetilde{b} \in C^2(\Omega^\textit{inc} \times \mathbb{R})\;\mbox{satisfies}\; \widetilde{b}({\mathbf{r}}, t) = 0\;\mbox{for}\; ({\mathbf{r}}, t) \in \widebar{\Omega} \times \mathbb{R}_\alpha^-\mbox{''}.
\end{equation}
In preparation for \Cref{3d_decay_lemma_2ndkind}, letting
\begin{equation}\label{umintilde_def}
\widetilde{u}_{-,T}(\mathbf{r}, t) := (S \widetilde{\psi}_{-,T})(\mathbf{r}, t), \quad (\mathbf{r}, t) \in \mathbb{R}^3 \times \mathbb{R},
\end{equation}
we define the function
\begin{equation}\label{htildedef}
\widetilde{h}_T(\mathbf{r}, t) := \widetilde{b}(\mathbf{r}, t) -
\widetilde{u}_{-,T}(\mathbf{r}, t),\quad (\mathbf{r}, t) \in \Omega^\textit{inc} \times \mathbb{R},
\end{equation}
and its Fourier transform
\begin{equation}\label{Ht_def_generic}
\widetilde{H}^f_T(\mathbf{r}, \omega) = \widetilde{B}^f(\mathbf{r}, \omega) - (S_\omega
\widetilde{\psi}_{-,T}^f)(\mathbf{r}, \omega),\quad (\mathbf{r}, \omega)\in\Omega^\textit{inc} \times \mathbb{R},
\end{equation}
(see~\eqref{eq:single_layer_op}).
\begin{rem}\label{rem:tilde_2}
With reference to Remark~\ref{rem:tilde_1}, we mention that
\Cref{3d_decay_lemma_2ndkind} and subsequent lemmas apply in
particular to the function $\widetilde{b} = b$ in \Cref{eq:tdie_sl};
in that case we will accordingly use untilded notations as in e.g.
\begin{equation}\label{hdef}
h_T(\mathbf{r}, t) := b(\mathbf{r}, t) - u_{-,T}(\mathbf{r}, t)
\end{equation}
instead of~\eqref{htildedef},
\begin{equation}\label{umin_def}
u_{-,T}(\mathbf{r}, t) := (S \psi
_{-,T})(\mathbf{r}, t), \quad (\mathbf{r}, t) \in \mathbb{R}^3 \times \mathbb{R},
\end{equation}
instead of~\eqref{umintilde_def}, and
\begin{equation}\label{Ht_untilded_def}
H^f_T(\mathbf{r}, \omega) = B^f(\mathbf{r}, \omega) - (S_\omega
\psi_{-,T}^f)(\mathbf{r}, \omega),\quad (\mathbf{r}, \omega)\in\Omega^\textit{inc} \times \mathbb{R},
\end{equation}
instead of~\eqref{Ht_def_generic}.
\end{rem}
Relying on the $\tau$DoD intervals $I_{T}$ introduced in
\Cref{domainofdep} as well as the windowed time-dependent boundary
densities $\psi_{-,T}$, $\psi_{+,T}$, and $\psi_{*,T}$ in
\Cref{timewinddens}, and the trace operators $\gamma^\pm$ on $\Gamma$
in \Cref{trace_def}, Lemmas~\ref{3d_decay_lemma_2ndkind}
to~\ref{per_freq_rhs_oper_bounds} below develop estimates related to
the density $\psi_{+,T}$ as well as frequency-domain operator bounds
that are used in the proof of \Cref{3d_decay_thm}. The subsequent
\Cref{sob_lemma} is a Sobolev embedding result for functions taking
values in a Banach space (namely, $L^2(\Gamma)$), used to establish
estimates that are uniform in time.
\begin{lemma}[Direct second-kind integral equations for
$\widetilde{\psi}_{+,T}^f$]\label{3d_decay_lemma_2ndkind}
Assume that the obstacle $\Omega$ satisfies the $q$-growth condition for some integer $q \ge 0$.
Let $\widetilde{b}$ denote a function defined in an open set
$\Omega^\textit{inc}$ containing $\widebar{\Omega}$, and assume that
$\widetilde{b}$ is a solution to the wave equation in
$\Omega^\textit{inc}$ satisfying the $s$-regularity conditions~\eqref{eq:gamma_Hs_assump} with
$s = q$.
Additionally, let $T$ denote a real number with associated
time-windowed solutions $\widetilde{\psi}_{+,T}$ and
$\widetilde{\psi}_{-,T}$ as in \Cref{timewinddens} and
\Cref{rem:tilde_1}, where $\widetilde{\psi}$ is the solution to
\Cref{eq:tdie_sl_generic}. Then $\widetilde{\psi}_{+,T}$ satisfies
the integral equation
\begin{equation}\label{int_eq_lambda}
(S\widetilde{\psi}_{+,T})(\mathbf{r}, t) = \gamma^+ \widetilde{h}_T(\mathbf{r}, t),
\end{equation}
(cf. \Cref{eq:tdie_sl}) where $\widetilde{h}_T$ is given by
\Cref{htildedef}. Additionally, for each fixed $\omega \ge 0$ the
Fourier transform $\widetilde{\psi}_{+,T}^f$ of
$\widetilde{\psi}_{+,T}$ satisfies the second-kind integral
equation
\begin{equation}\label{CFIE_proof_generic}
\left(A_{\omega} \widetilde{\psi}_{+,T}^f \right)(\mathbf{r}, \omega) =
\gamma^-\partial_\mathbf{n} \widetilde{H}^f_T(\mathbf{r}, \omega)
- \i \eta \gamma^- \widetilde{H}^f_T(\mathbf{r}, \omega), \quad\mathbf{r} \in \Gamma,
\end{equation}
with $A_{\omega}$ as in \Cref{Aop_def} and $\widetilde{H}^f_T$ as in \Cref{Ht_def_generic}.
\end{lemma}
\begin{rem}\label{rem:tilde_3}
With reference to \Cref{rem:tilde_1}, we note that, for the
particular case $\widetilde{b} = b$, \Cref{3d_decay_lemma_2ndkind}
tells us that the solution $\psi$ to \Cref{eq:tdie_sl} satisfies
\begin{equation}\label{CFIE_proof}
\left(A_{\omega} \psi_{+,T}^f \right)(\mathbf{r}, \omega) =
\gamma^-\partial_\mathbf{n} H^f_T(\mathbf{r}, \omega)
- \i \eta \gamma^-H^f_T(\mathbf{r}, \omega), \quad\mathbf{r} \in \Gamma,
\end{equation}
where
\begin{equation}\label{Ht_def}
H^f_T(\mathbf{r}, \omega) = B^f(\mathbf{r}, \omega) - \left(S_\omega
\psi_{-,T}^f\right)(\mathbf{r}, \omega).
\end{equation}
\end{rem}
\begin{proof}[Proof of \Cref{3d_decay_lemma_2ndkind}.]
Using the partition of unity decomposition embodied
in~\cref{psipm_def}, equation~\Cref{eq:tdie_sl_generic} may be
re-expressed in the form
\begin{equation}\label{int_eq_lambda_proof}
\int_{-\infty}^t \int_\Gamma \frac{\delta\left(t - t' - |\mathbf{r} -
\mathbf{r}'|/c\right) \widetilde{\psi}_{+,T}(\mathbf{r}', t')}{4\pi |\mathbf{r} -
\mathbf{r}'|}\,\d\sigma(\mathbf{r}')\,\d t' = \gamma^+ \widetilde{h}_T(\mathbf{r}, t),
\end{equation}
where $\widetilde{h}_T$ and $\widetilde{u}_{-,T}$ are given by
\Cref{htildedef} and \Cref{umintilde_def} respectively, and,
thus,~\Cref{int_eq_lambda} follows.
In order to establish~\Cref{CFIE_proof_generic}, in turn, we first
consider the solution $v^+$ of the Dirichlet problem
$\Delta v^+ + \kappa^2(\omega) v^+ = 0$
($\kappa(\omega) = \omega / c$) in
$\mathbb{R}^3 \setminus \widebar{\Omega}$ with Dirichlet data
$\gamma^+ v^+ = -\gamma^+ \widetilde{H}^f_T$, and we set
$v^- = -\widetilde{H}^f_T$, which, in view of~\eqref{Ht_def_generic}
and since $B^f$ is the Fourier transform of the solution $b$ of
the wave equation in $\Omega^\textit{inc} \supset \Omega$,
satisfies the Helmholtz equation
$\Delta v^- + \kappa^2(\omega) v^- = 0$ in $\Omega$ with Dirichlet
data $\gamma^- v^- = -\gamma^+ \widetilde{H}^f_T$. Continuing with
the proof of~\Cref{CFIE_proof_generic}, we now derive, by
application of Green's theorem to the functions $v^+$ and $v^-$, a
single-layer integral equation satisfied by the Fourier transform
$\widetilde{\psi}_{+,T}^f$. In detail,
per~\cite[Lem.\ 3.4]{Costabel:88} we have the representation
formulas
\begin{equation}\label{vs_repr_11}
v^+(\mathbf{r}, \omega) = -\langle \gamma^+ \partial_\mathbf{n} v^+ -
\gamma^- \partial_\mathbf{n} v^-,
G_\omega(\mathbf{r}, \cdot) \rangle,\quad\mathbf{r} \in
\mathbb{R}^3 \setminus \widebar{\Omega},
\end{equation}
and
\begin{equation}\label{vs_repr_12}
v^-(\mathbf{r}, \omega) = -\langle \gamma^+ \partial_\mathbf{n} v^+ -
\gamma^- \partial_\mathbf{n} v^-,
G_\omega(\mathbf{r}, \cdot) \rangle,\quad\mathbf{r} \in
\Omega,
\end{equation}
where $\langle \cdot,\cdot \rangle$ denotes the duality pairing of the
Sobolev spaces $H^s(\Gamma)$ and $H^{-s}(\Gamma)$. But, in view
of~\cite[Cor.\ 2.28]{ChandlerWilde:12} we have $\gamma^+\partial_\mathbf{n} v^+ \in
L^2(\Gamma)$ and $\gamma^-\partial_\mathbf{n} v^- \in L^2(\Gamma)$, so
\Cref{vs_repr_11}
and \Cref{vs_repr_12} may be re-expressed in the form
\begin{equation}\label{vs_repr_21}
v^+(\mathbf{r}, \omega) = -\int_\Gamma \left(\gamma^+\partial_\mathbf{n} v^+ -
\gamma^- \partial_\mathbf{n} v^-\right)
G_\omega(\mathbf{r}, \mathbf{r}')\,\d\sigma(\mathbf{r}'),\quad\mathbf{r} \in
\mathbb{R}^3 \setminus \widebar{\Omega},
\end{equation}
and
\begin{equation}\label{vs_repr_22}
v^-(\mathbf{r}, \omega) = -\int_\Gamma \left(\gamma^+ \partial_\mathbf{n} v^+ -
\gamma^- \partial_\mathbf{n} v^-\right)
G_\omega(\mathbf{r}, \mathbf{r}')\,\d\sigma(\mathbf{r}'),\quad\mathbf{r} \in
\Omega.
\end{equation}
In view of the continuity throughout $\mathbb{R}^3$ of the single-layer potential $S_\omega$ with $L^2(\Gamma)$ density~\cite[Thm.\ 6.11]{McLean}
and setting
\begin{equation}\label{mu_density}
\mu = \gamma^+ \partial_\mathbf{n} v^+ - \gamma^-\partial_\mathbf{n} v^-,
\end{equation}
taking the interior trace
$\gamma^-$ of
\Cref{vs_repr_22} and using the definition that $v^- = -\widetilde{H}^f_T$
tells us that $\mu$ satisfies the integral equation
\begin{equation}\label{single_layer_lambda}
(S_\omega \mu)(\mathbf{r}, \omega) = \gamma^-\widetilde{H}^f_T(\mathbf{r}, \omega), \quad \mathbf{r} \in \Gamma.
\end{equation}
In order to obtain an integral equation which, unlike
\Cref{single_layer_lambda}, is uniquely solvable at all frequencies,
we follow~\cite{Burton:71} and combine \Cref{single_layer_lambda}
with the equation
\begin{equation}\label{psiplus_deriv_int_eq}
\gamma^+ \partial_\mathbf{n} v^+(\mathbf{r}, \omega) =
- \int_\Gamma \mu(\mathbf{r}', \omega) \partial_{\mathbf{n}(\mathbf{r})} G_\omega(\mathbf{r},
\mathbf{r}')\,\d\sigma(\mathbf{r}')
+ \frac{1}{2}\mu({\mathbf{r}}, \omega),\quad\mbox{for}\quad\mathbf{r} \in
\Gamma,
\end{equation}
that results as the outer normal derivative operator on $\Gamma$ is
applied to the representation formula~\cref{vs_repr_21}.
In detail, \Cref{psiplus_deriv_int_eq} follows from the well-known jump
relations~\cite[p. 219]{McLean} that
\begin{equation}
\lim_{\varepsilon \to 0^+} \mathbf{n}(\mathbf{r}) \cdot \nabla \int_\Gamma \mu(\mathbf{r}')
G_\omega(\mathbf{r} + \varepsilon\mathbf{n}(\mathbf{r}),
\mathbf{r}')\,\d\sigma(\mathbf{r}') = \left(-\frac{1}{2}I +
K_\omega^*\right)\mu(\mathbf{r}),\,\mathbf{r} \in \Gamma,
\end{equation}
for
the operator equal to the normal derivative on $\Gamma$ of the single-layer
potential with density $\mu$ in the space $L^2(\Gamma)$
(cf.\ also~\cite[Thm.\ 1 \& Lem.\ 4.1]{Costabel:88}).
Using~\cref{mu_density} to rewrite the
left-hand side of~\cref{psiplus_deriv_int_eq} gives the second-kind
integral equation for $\mu$,
\begin{equation}\label{psiplus_2nd_kind}
\frac{1}{2}\mu(\mathbf{r}, \omega) +
\left(K_\omega^*\mu \right)(\mathbf{r}, \omega) =
\gamma^-\partial_\mathbf{n} \widetilde{H}^f_T(\mathbf{r}, \omega),
\end{equation}
posed for $\mu \in L^2(\Gamma)$~\cite[Thm.\ 1(iv)]{Costabel:88}.
Finally, a linear combination of \cref{single_layer_lambda}
and~\cref{psiplus_2nd_kind} yields the combined field integral
equation
\begin{equation}
\left(A_{\omega} \mu \right)(\mathbf{r}, \omega) =
\gamma^-\partial_\mathbf{n} \widetilde{H}^f_T(\mathbf{r}, \omega)
- \i \eta \gamma^- \widetilde{H}^f_T(\mathbf{r}, \omega), \quad\mathbf{r} \in \Gamma,
\end{equation}
where $A_{\omega} = \frac{1}{2}I + K_\omega^* - \i\eta S_\omega$,
which is a uniquely solvable integral equation, for all $\omega \ge 0$, for the density $\mu$ in \Cref{mu_density}~\cite[Thm.\ 2.27]{ChandlerWilde:12}.
But $\mu$ also solves the integral
equation~\Cref{single_layer_lambda}, so since $\widetilde{H}^f_T$ is
the Fourier transform of $\widetilde{h}_T$ given in~\cref{htildedef},
inverse Fourier transformation of \Cref{single_layer_lambda} yields
\Cref{int_eq_lambda} and so $\mu = \widetilde{\psi}_{+,T}^f$, the
Fourier transform of the unique solution $\widetilde{\psi}_{+,T}$ to
\Cref{int_eq_lambda}. Thus, $\mu = \widetilde{\psi}_{+,T}^f$
satisfies~\Cref{CFIE_proof_generic}, and the proof is complete.
\end{proof}
\begin{lemma}[Frequency $L^2$ bounds on the solution
of~\cref{CFIE_proof_generic}]\label{3d_decay_lemma_Ainv_freq_bounds}
Let $T\in\mathbb{R}$ and $q \geq 0$, let $\Gamma = \partial \Omega$
satisfy the $q$-growth condition, and assume $\widetilde{b}$
satisfies the conditions of \Cref{3d_decay_lemma_2ndkind}. Then
there exists $C_1 = C_1(\Gamma) > 0$ and $C_2 = C_2(\Gamma) > 0$
such that $\widetilde{\psi}_{+,T}$ satisfies
\begin{equation}\label{psiplus_estimate_3}
\begin{split}
\left\|\widetilde{\psi}_{+,T}^f\right\|^2_{L^2(\mathbb{R};\,L^2(\Gamma))}
&\le C_1\int_0^{\omega_0}
\left\|\gamma^-\partial_\mathbf{n} \widetilde{H}^f_T(\cdot,
\omega) - \i \gamma^-\widetilde{H}^f_T(\cdot,
\omega)\right\|^2_{L^2(\Gamma)}\,\d\omega\\
& + C_2\int_{\omega_0}^\infty\omega^{2q}
\left\|\gamma^-\partial_\mathbf{n} \widetilde{H}^f_T(\cdot,
\omega) - \i\omega \gamma^- \widetilde{H}^f_T(\cdot,
\omega)\right\|^2_{L^2(\Gamma)}\,\d\omega,
\end{split}
\end{equation}
where $\widetilde{H}^f_T$ is given by \Cref{Ht_def_generic}.
\end{lemma}
\begin{proof}
As indicated in \Cref{A_om_invert}, $A_\omega$ is invertible for all
$\omega \ge 0$. From~\cref{CFIE_proof_generic} we then obtain
$\widetilde{\psi}_{+,T}^f = A_{\omega}^{-1} \left(
\gamma^-\partial_\mathbf{n} \widetilde{H}^f_T - \i \eta
\gamma^- \widetilde{H}^f_T\right)$, and thus
\begin{equation*}
\begin{split}
\|\widetilde{\psi}_{+,T}^f(\cdot, \omega)\|_{L^2(\Gamma)} \le
\|A_{\omega}^{-1}&\|_{L^2(\Gamma) \to L^2(\Gamma)}
\left\|\gamma^-\partial_\mathbf{n} \widetilde{H}^f_T(\cdot,
\omega) - \i\eta \gamma^-\widetilde{H}^f_T(\cdot,
\omega)\right\|_{L^2(\Gamma)},\label{psiplus_estimate_1}
\end{split}
\end{equation*}
for all $\omega \ge 0$.
In view of the $q$-growth condition~\Cref{eq:q-nontrapp-cases}
we obtain
\begin{equation}
\|\widetilde{\psi}_{+,T}^f(\cdot, \omega)\|_{L^2(\Gamma)} \le
\left(\frac{C_2}{2}\right)^{1/2}\omega^{q}\left\|\gamma^-\partial_\mathbf{n} \widetilde{H}^f_T(\cdot,
\omega) - \i\omega \gamma^-\widetilde{H}^f_T(\cdot,
\omega)\right\|_{L^2(\Gamma)}\quad (\omega > \omega_0)\label{psiplus_estimate_2}
\end{equation}
and
\begin{equation}
\|\widetilde{\psi}_{+,T}^f(\cdot, \omega)\|_{L^2(\Gamma)} \le
\left(\frac{C_1}{2}\right)^{1/2}\left\|\gamma^-\partial_\mathbf{n} \widetilde{H}^f_T(\cdot,
\omega) - \i \gamma^-\widetilde{H}^f_T(\cdot,
\omega)\right\|_{L^2(\Gamma)}\quad (0 \le \omega < \omega_0)\label{psiplus_estimate_2b}
\end{equation}
for certain constants $C_1$ and $C_2$. Noting that by Hermitian
symmetry (see \Cref{negative_freq_eq}) we have
$\left\|\widetilde{\psi}_{+,T}^f(\cdot, \omega)\right\|_{L^2(\Gamma)}
= \left\|\widetilde{\psi}_{+,T}^f(\cdot,
|\omega|)\right\|_{L^2(\Gamma)}$ for $\omega < 0$, it follows that
\begin{equation}
\begin{split}
\left\|\widetilde{\psi}_{+,T}^f\right\|^2_{L^2(\mathbb{R};\,L^2(\Gamma))}
&= 2 \int_0^\infty \left\|\widetilde{\psi}_{+,T}^f(\cdot,
\omega)\right\|^2_{L^2(\Gamma)}\,\d\omega \\
&\le C_1\int_0^{\omega_0}
\left\|\gamma^-\partial_\mathbf{n} \widetilde{H}^f_T(\cdot,
\omega) - \i \gamma^-\widetilde{H}^f_T(\cdot,
\omega)\right\|^2_{L^2(\Gamma)}\,\d\omega\\
& + C_2\int_{\omega_0}^\infty
\omega^{2q}\left\|\gamma^-\partial_\mathbf{n} \widetilde{H}^f_T(\cdot,
\omega) - \i\omega \gamma^-\widetilde{H}^f_T(\cdot,
\omega)\right\|^2_{L^2(\Gamma)}\,\d\omega,\\
\end{split}
\end{equation}
as desired.
\end{proof}
\begin{lemma}[Relating the function $\widetilde{h}_T$ to a limited time-history density]\label{3d_decay_thm_h_equiv}
For an observation time $T$ and associated $\tau$DoD interval
$I_{T}$ (\Cref{domainofdep}), such that
$\widetilde{b}(\mathbf{r}, t)$ vanishes for
$({\mathbf{r}},t) \in \widebar\Omega \times \left\lbrace I_{T}\cup [T, \infty)\right\rbrace$ (that
is, ${\mathbf{r}} \in \widebar\Omega$ and $t \ge T - T_* - 2\tau$), the function $\widetilde{h}_T$ defined in
\Cref{htildedef} satisfies
\begin{equation}\label{h_equiv}
\widetilde{h}_T(\mathbf{r}, t) =
\left\{\begin{aligned}
-\widetilde{u}_{*,T}(\mathbf{r}, t), \quad &t \ge T - \tau\\
0, \quad &t < T - \tau
\end{aligned}\right.\quad\mbox{for}\quad\mathbf{r} \in \widebar{\Omega},
\end{equation}
where, with reference to \Cref{timewinddens} and \Cref{rem:tilde_1},
\begin{equation}\label{ustar_def}
\widetilde{u}_{*,T}(\mathbf{r}, t) = (S \widetilde{\psi}_{*,T})(\mathbf{r}, t).
\end{equation}
Furthermore, the function $\widetilde{h}_T(\mathbf{r}, t)$ has a bounded
temporal support:
\begin{equation}\label{h_compactsupp}
\supp \widetilde{h}_T(\mathbf{r}, \cdot)\subset [T - \tau, T + T_*], \quad\mbox{for}\quad \mathbf{r} \in \widebar{\Omega}.
\end{equation}
\end{lemma}
\begin{rem}\label{rem:h_equiv_rem_1}
It is crucial for the proof of Theorem~\ref{3d_decay_thm_ii} and
associated lemmas that the function
$\widetilde{h}_T(\mathbf{r}, t) = \widetilde{b} -
\widetilde{u}_{-,T}$ has bounded temporal support. It is interesting
to note, further, that, in particular, the right-hand side
$\gamma^+\widetilde{h}_T(\mathbf{r}, t)$ of~\eqref{int_eq_lambda}
vanishes at times for which its solution $\widetilde{\psi}_{+,T}$
does not vanish.
\end{rem}
\begin{rem}\label{rem:h_equiv_rem_2}
In view of~\eqref{psistar_def}, and taking into account
\Cref{rem:tilde_1}, \Cref{3d_decay_thm_h_equiv} tells us that
$\widetilde{h}_T$
is determined by the restriction of $\widetilde{\psi}({\mathbf{r}}, t)$ to
the interval $I_T$, and, thus, in view of~\cref{int_eq_lambda}, the
same is true of $\widetilde{\psi}_{+,T}$. Since,
by~\eqref{psipm_def}, $\widetilde{\psi}({\mathbf{r}}, t)$ coincides with
$\widetilde{\psi}_{+,T}({\mathbf{r}}, t)$ for all $t\geq T$, it follows that,
for such values of $t$ the solution $\widetilde{\psi}({\mathbf{r}}, t)$ is
solely determined by the restriction of $\widetilde{\psi}({\mathbf{r}}, t)$
to the interval $I_T$. In other words, for a given time $T$ and in
absence of illumination for $t\geq T-T_* - \tau$,
$\widetilde{\psi}({\mathbf{r}}, t)$ is determined by the values of the same
solution function $\widetilde{\psi}({\mathbf{r}}, t)$ in the bounded time
interval $I_T$ immediately preceding time $T$. Thus,
\Cref{3d_decay_thm_h_equiv} amounts to a bootstrap domain-of-dependence
relation for solutions of the wave equation.
\end{rem}
\begin{proof}[Proof of \Cref{3d_decay_thm_h_equiv}.]
We first consider the function $\widetilde{u}_{-,T}$ in
\Cref{htildedef}. Equation~\eqref{Tstar_def} tells us that for
$t > T - \tau$, $t' < T - T_* - \tau$,
$\mathbf{r} \in \widebar{\Omega}$ and $\mathbf{r}' \in\Gamma$ we
have $t - t' - |\mathbf{r} - \mathbf{r}'|/c > 0$. In view of
\Cref{umintilde_def}, it follows that, for
$\mathbf{r} \in \widebar{\Omega}$ and all $t > T - \tau$,
\begin{equation}\label{umin_trunc}
\widetilde{u}_{-,T}(\mathbf{r}, t) = \int_{T - T_* - \tau}^t \int_\Gamma \frac{\delta\left(t - t' -
|\mathbf{r} - \mathbf{r}'|/c\right) \widetilde{\psi}_{-,T}(\mathbf{r}', t')}{4\pi |\mathbf{r}
- \mathbf{r}'|}\,\d\sigma(\mathbf{r}')\,\d t'.
\end{equation}
But in view of \Cref{htildedef},
$\widetilde{u}_{-,T}(\mathbf{r}, t) = -\widetilde{h}_T(\mathbf{r}, t)$
for $t > T - T_* - 2\tau$ and $\mathbf{r} \in \widebar{\Omega}$, since by
hypothesis $\widetilde{b}$ vanishes on $\widebar{\Omega}$ for $t \ge T - T_* - 2\tau$. In
sum, using \Cref{timewinddens}
($\widetilde{\psi}_{-,T}(\cdot, t) = w_-(t - T)\widetilde{\psi}(\cdot,
t)$, cf.\ \Cref{rem:tilde_1}), it follows that, for all
${\mathbf{r}} \in \widebar{\Omega}$ and $t > T - \tau$,
\begin{align*}
\widetilde{h}_T(\mathbf{r}, t) &= -\int_{T - T_* - \tau}^t \int_\Gamma \frac{\delta\left(t - t' -
|\mathbf{r} - \mathbf{r}'|/c\right) w_-(t' - T)\widetilde{\psi}(\mathbf{r}', t')}{4\pi |\mathbf{r}
- \mathbf{r}'|}\,\d\sigma(\mathbf{r}')\,\d t',
\end{align*}
and thus, since from \Cref{wtau_def} we have
$w_+(t' - T + T_* + \tau) = 1$ for $t' \in [T - T_* - \tau, \infty)$,
we further obtain, again for all ${\mathbf{r}} \in \widebar{\Omega}$ and
$t > T - \tau$,
\begin{equation*}
\begin{split}
\widetilde{h}_T(\mathbf{r}, t) &= -\int_{T - T_* - \tau}^t \int_\Gamma \frac{\delta\left(t - t' -
|\mathbf{r} - \mathbf{r}'|/c\right)}{4\pi |\mathbf{r}
- \mathbf{r}'|}\times \\
&\quad\quad\quad\quad\quad\quad\quad\times w_+(t' - T + T_* + \tau) w_-(t' - T)\widetilde{\psi}(\mathbf{r}', t')\,\d\sigma(\mathbf{r}')\,\d t'\\
&= -\int_{T - T_* - \tau}^t \int_\Gamma \frac{\delta\left(t - t' -
|\mathbf{r} - \mathbf{r}'|/c\right) \widetilde{\psi}_{*,T}(\mathbf{r}', t')}{4\pi |\mathbf{r}
- \mathbf{r}'|}\,\d\sigma(\mathbf{r}')\,\d t',
\end{split}
\end{equation*}
where the last equality follows from the definition~\Cref{psistar_def} of
$\widetilde{\psi}_{*,T}$ (cf.\
\Cref{rem:tilde_1}).
We have thus shown that $\widetilde{h}_T(\mathbf{r}, t) =
-\widetilde{u}_{*,T}(\mathbf{r}, t)$ for $(\mathbf{r}, t) \in
\widebar{\Omega} \times [T - \tau, \infty)$ with $\widetilde{u}_{*,T}(\mathbf{r}, t)$ as defined in \Cref{ustar_def}.
To establish~\eqref{h_equiv} it remains to show that
$\widetilde{h}_T = 0$ for ${\mathbf{r}} \in \widebar{\Omega}$ and
$t < T - \tau$. To do this we note that, since
$\widetilde{\psi}_{+,T}(\mathbf{r}, t) = 0$ for $t < T - \tau$,
\cref{int_eq_lambda} tells us that
$\gamma^+\widetilde{h}_T(\mathbf{r}, t) = 0$ for
$(\mathbf{r}, t) \in \Gamma \times (-\infty, T - \tau)$. It
follows that $\widetilde{h}_T$ is a solution of the wave equation on
$\Omega$ (by~\cref{htildedef}) with vanishing boundary values for
$t\in (-\infty, T - \tau)$. Thus, $\widetilde{h}_T(\mathbf{r}, t)$
vanishes for
$(\mathbf{r}, t) \in \widebar{\Omega} \times (-\infty, T - \tau)$,
by solution uniqueness, as desired, and thus \Cref{h_equiv}
follows.
In view of \Cref{h_equiv}, in order to establish
\Cref{h_compactsupp} it remains to show that
$\widetilde{h}_T(\mathbf{r}, t) = 0$ for all $t > T + T_*$. From
\Cref{timewinddens} we have
$\supp \widetilde{\psi}_{*,T}(\mathbf{r}, \cdot) \subset I_T = [T -
T_* - 2\tau, T)$. Since, in view of~\eqref{Tstar_def},
$|\mathbf{r} - \mathbf{r}'|/c < T_*$ for all
$\mathbf{r},\mathbf{r}' \in \widebar{\Omega}$,
equation~\eqref{ustar_def}, which can be expressed in the form
$\widetilde{u}_{*,T}(\mathbf{r}, t) = \int_\Gamma
\frac{\widetilde{\psi}_{*,T}(\mathbf{r}', t - |\mathbf{r} -
\mathbf{r}'|/c)}{4\pi |\mathbf{r} -
\mathbf{r}'|}\,\d\sigma(\mathbf{r}')$, tells us that
$\widetilde{h}_T(\mathbf{r}, t) = 0$ for
$(\mathbf{r}, t) \in \widebar{\Omega} \times [T + T_*, \infty)$,
and, thus, \Cref{h_compactsupp} follows.
\end{proof}
\begin{lemma}\label{per_freq_rhs_oper_bounds}
Let $\Gamma$ denote the boundary of a Lipschitz obstacle. Then there
exist constants $C_1 = C_1(\Gamma) > 0$ and $C_2 = C_2(\Gamma) > 0$
such that for all $\omega \ge 0$ the operator norm bounds
\begin{equation}\label{per_freq_rhs_oper_bound_i}
\left\|\left(\gamma^-\partial_\mathbf{n} \pm
\i\omega\gamma^- \right)S_\omega\right\|_{L^2(\Gamma)\to L^2(\Gamma)} \le C_1(1 +
\omega^2)^{1/2}
\end{equation}
and
\begin{equation}\label{per_freq_rhs_oper_bound_ii}
\left\|\left(\gamma^-\partial_\mathbf{n} \pm
\i\gamma^- \right)S_\omega\right\|_{L^2(\Gamma)\to L^2(\Gamma)} \le C_2(1 +
\omega^2)^{1/2}
\end{equation}
hold.
\end{lemma}
\begin{proof}
The relations~\eqref{per_freq_rhs_oper_bound_i}
and~\eqref{per_freq_rhs_oper_bound_ii} follow directly from the bounds
\begin{equation}\label{Somega_Kstaromega_bounds}
\left\|\gamma^-S_\omega\right\|_{L^2(\Gamma) \to L^2(\Gamma)}
\le D_1 ,\quad\mbox{and}\quad
\left\|K_\omega^*\right\|_{L^2(\Gamma) \to L^2(\Gamma)}
\le D_2 \omega + D_3,
\end{equation}
valid for all $\omega \geq 0$ (where $D_1, D_2, D_3 > 0$ are
constants), which are presented in reference~\cite[Thms.\ 3.3 and
3.5]{ChandlerWilde:09}. Indeed, in view of the jump relation
$\gamma^- \partial_\mathbf{n} S_\omega = \frac{1}{2}I + K_\omega^*$
for the normal derivative of the single layer potential as an operator on
$L^2(\Gamma)$~\cite[p. 219]{McLean}, it follows that there exist
constants $\widetilde{D}_1 > 0$ and $\widetilde{D}_2 > 0$ and $C_1$
such that
\begin{equation*}
\begin{split}
\left\|\left(\gamma^-\partial_\mathbf{n} \pm \i\omega\gamma^-\right)S_\omega\right\|_{L^2(\Gamma)\to L^2(\Gamma)} &= \left\|\frac{1}{2}I + K_\omega^* \pm
\i\omega \gamma^-S_\omega\right\|_{L^2(\Gamma)\to L^2(\Gamma)}\\
&\le (\widetilde{D}_1 + \widetilde{D}_2\omega) \le C_1(1 + \omega^2)^{1/2}.
\end{split}
\end{equation*}
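The last inequality above is elementary: since $\max(1, \omega) \le (1 + \omega^2)^{1/2}$ for $\omega \ge 0$, one may take, for instance, $C_1 = \widetilde{D}_1 + \widetilde{D}_2$, so that
\begin{equation*}
\widetilde{D}_1 + \widetilde{D}_2\,\omega \le \left(\widetilde{D}_1 + \widetilde{D}_2\right)\max(1, \omega) \le C_1\,(1 + \omega^2)^{1/2}.
\end{equation*}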
Similarly,
\begin{equation*}
\begin{split}
\left\|\left(\gamma^-\partial_\mathbf{n} \pm \i\gamma^- \right)S_\omega\right\|_{L^2(\Gamma)\to L^2(\Gamma)}
&\le C_2(1 + \omega^2)^{1/2}
\end{split}
\end{equation*}
for some constant $C_2 > 0$, and the result follows.
\end{proof}
\begin{lemma}\label{conv_H_to_psi}
Let $q$ denote a non-negative integer, let $\Gamma = \partial\Omega$
satisfy a $q$-growth condition (\Cref{q-nontrapp}), let $T_0$ be a
given real number such that $\widetilde{b}$ vanishes for
$({\mathbf{r}},t) \in \widebar{\Omega} \times \left\lbrace I_{T_0}\cup [T_0, \infty)\right\rbrace$,
and let $\widetilde{H}_{T_0}^f$ be defined by \Cref{Ht_def_generic}
with $T = T_0$. Then for some constant $C > 0$ independent of $T_0$ and $\widetilde{b}$ we have
\begin{equation}\label{conv_H_to_psi_eq}
\begin{alignedat}{2}
\int_0^{\omega_0} &\, &&
\left\|\left(\gamma^-\partial_\mathbf{n} - \i
\gamma^-\right) \widetilde{H}^f_{T_0}(\cdot,
\omega)\right\|^2_{L^2(\Gamma)}\,\d\omega \\
+\,& &&\int_{\omega_0}^\infty \omega^{2q}
\left\|\left(\gamma^-\partial_\mathbf{n} - \i\omega
\gamma^-\right) \widetilde{H}^f_{T_0}(\cdot,
\omega)\right\|^2_{L^2(\Gamma)}\,\d\omega\\
&\,&&\le C \int_{-\infty}^\infty (1 + \omega^2)^{q+1} \left\|
\widetilde{\psi}_{*,T_0}^f(\cdot, \omega)\right\|_{L^2(\Gamma)}^2\,\d\omega.
\end{alignedat}
\end{equation}
\end{lemma}
\begin{proof}
In order to obtain the desired bound we define the operators
\begin{equation}\label{ST_op_time}
\mathcal{R} = \left(\gamma^-\partial_{\mathbf{n}} - \i\gamma^- \right),
\enspace\mbox{and}\enspace\mathcal{T} = \left(-\i \frac{\partial}{\partial
t}\right)^{q}\left(\gamma^-\partial_{\mathbf{n}} -
\frac{\partial}{\partial t}\gamma^-\right),
\end{equation}
whose Fourier symbols are given by
\begin{equation}\label{ST_op_freq}
\hat{\mathcal{R}} =
\left(\gamma^-\partial_\mathbf{n} - \i \gamma^-\right),\enspace\mbox{and}\enspace \hat{\mathcal{T}} =
\omega^{q}\left(\gamma^-\partial_\mathbf{n} - \i\omega\gamma^- \right).
\end{equation}
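These symbols follow from the differentiation rule $\widehat{\partial g}(\omega) = \i\omega\,\widehat{g}(\omega)$ associated with the Fourier-transform convention~\cref{eq:fourier_transf} (the same rule is invoked in the proof of \Cref{sob_lemma} below); indeed, for a generic sufficiently regular function $g = g(\mathbf{r}, t)$,
\begin{equation*}
\widehat{\mathcal{T} g}\,(\omega) = (-\i)^{q}(\i\omega)^{q}\left(\gamma^-\partial_\mathbf{n} - \i\omega\,\gamma^-\right)\widehat{g}(\omega) = \omega^{q}\left(\gamma^-\partial_\mathbf{n} - \i\omega\,\gamma^-\right)\widehat{g}(\omega) = \hat{\mathcal{T}}\,\widehat{g}(\omega).
\end{equation*}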
Then, denoting by $\mathcal{Q}$ the sum of quantities on the left-hand
side of \Cref{conv_H_to_psi_eq} and using~\eqref{h_equiv} together
with Plancherel's theorem we obtain
\begin{alignat*}{2}
&\begin{alignedat}{2}
\mathcal{Q} =
\int_0^{\omega_0} &\left\|\hat{\mathcal{R}}\widetilde{H}^f_{T_0}(\cdot,
\omega)\right\|^2_{L^2(\Gamma)}\,\d\omega
&+ \int_{\omega_0}^\infty
\left\|\hat{\mathcal{T}}\widetilde{H}^f_{T_0}(\cdot,
\omega)\right\|^2_{L^2(\Gamma)}\,\d\omega
\end{alignedat}\\
&\quad\quad\begin{alignedat}{2}
\le \int_\Gamma \int_{-\infty}^\infty &\left|\hat{\mathcal{R}} \widetilde{H}^f_{T_0}(\mathbf{r},
\omega)\right|^2\,\d\omega\,\d\sigma(\mathbf{r}) + \int_\Gamma
\int_{-\infty}^\infty \left|\hat{\mathcal{T}} \widetilde{H}^f_{T_0}(\mathbf{r},
\omega)\right|^2\,\d\omega\,\d\sigma(\mathbf{r})
\end{alignedat}\\
&\quad\quad\begin{alignedat}{2}
= \int_\Gamma \int_{-\infty}^\infty &\left|\mathcal{R} \widetilde{h}_{T_0}(\mathbf{r},
t')\right|^2\,\d t'\,\d\sigma(\mathbf{r}) + \int_\Gamma
\int_{-\infty}^\infty \left|\mathcal{T} \widetilde{h}_{T_0}(\mathbf{r},
t')\right|^2\,\d t'\,\d\sigma(\mathbf{r})
\end{alignedat}\\
&\quad\quad\begin{alignedat}{2}
= \int_\Gamma \int_{T_0 - \tau}^\infty &\left|\mathcal{R} \widetilde{u}_{*,T_0}(\mathbf{r},
t')\right|^2\,\d t'\,\d\sigma(\mathbf{r}) + \int_\Gamma
\int_{T_0 - \tau}^\infty \left|\mathcal{T} \widetilde{u}_{*,T_0}(\mathbf{r},
t')\right|^2\,\d t'\,\d\sigma(\mathbf{r}),\\
\end{alignedat}
\end{alignat*}
where the last equality follows from~\eqref{h_equiv} together
with the temporal locality of the operators $\mathcal{R}$ and
$\mathcal{T}$. In view of the nonnegativity of the integrands on the
right-hand side of this estimate, it follows that
\begin{equation}
\begin{split}
\mathcal{Q} \le \int_\Gamma
\int_{-\infty}^\infty \left|\mathcal{R} \widetilde{u}_{*,T_0}(\mathbf{r}, t')\right|^2\,\d
t'\,\d\sigma(\mathbf{r}) + \int_\Gamma
\int_{-\infty}^\infty \left|\mathcal{T} \widetilde{u}_{*,T_0}(\mathbf{r}, t')\right|^2\,\d
t'\,\d\sigma(\mathbf{r}).\label{HtoUbound0}
\end{split}
\end{equation}
Using once again Plancherel's theorem, \Cref{HtoUbound0} becomes
\begin{equation}\label{HtoUbound}
\begin{split}
\mathcal{Q} \le
\int_{-\infty}^\infty \left\|\hat{\mathcal{R}} \widetilde{U}_{*,T_0}^f(\cdot,
\omega)\right\|^2_{L^2(\Gamma)}\,\d\omega + \int_{-\infty}^\infty \left\|\hat{\mathcal{T}}
\widetilde{U}_{*,T_0}^f(\cdot, \omega)\right\|^2_{L^2(\Gamma)}\,\d\omega,
\end{split}
\end{equation}
where $\widetilde{U}_{*,T_0}^f$ denotes the temporal Fourier transform of the single layer
potential $\widetilde{u}_{*,T_0}$,
\begin{equation}\label{UstarT0}
\widetilde{U}_{*,T_0}^f({\mathbf{r}}, \omega) = \left(S_\omega \widetilde{\psi}^f_{*,T_0}\right)({\mathbf{r}}, \omega).
\end{equation}
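Per frequency, \Cref{UstarT0} allows the operator bounds of \Cref{per_freq_rhs_oper_bounds} to be applied directly to the integrands in \Cref{HtoUbound}; for instance, for $\omega \ge 0$ and with $C_1$ as in \Cref{per_freq_rhs_oper_bound_i},
\begin{equation*}
\begin{split}
\left\|\hat{\mathcal{T}}\,\widetilde{U}_{*,T_0}^f(\cdot, \omega)\right\|_{L^2(\Gamma)}
&= \omega^{q}\left\|\left(\gamma^-\partial_\mathbf{n} - \i\omega\,\gamma^-\right)S_\omega \widetilde{\psi}^f_{*,T_0}(\cdot, \omega)\right\|_{L^2(\Gamma)}\\
&\le C_1\,\omega^{q}(1 + \omega^2)^{1/2}\left\|\widetilde{\psi}^f_{*,T_0}(\cdot, \omega)\right\|_{L^2(\Gamma)},
\end{split}
\end{equation*}
and analogously for the $\hat{\mathcal{R}}$ term on the basis of \Cref{per_freq_rhs_oper_bound_ii}.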
Using \Cref{ST_op_freq}, \Cref{per_freq_rhs_oper_bound_i} and
\Cref{per_freq_rhs_oper_bound_ii} to bound the integrands on the
right-hand side of \Cref{HtoUbound}, and then using the fact that
$\omega^{2r} \le (1 + \omega^2)^{r}$ for nonnegative $r$, we obtain
\begin{equation*}
\begin{split}
&\begin{alignedat}{2}
\mathcal{Q} \le
C_1 \int_{-\infty}^\infty &\left(1 + \omega^2\right)
\left\|\widetilde{\psi}_{*,T_0}^f(\cdot,
\omega)\right\|^2_{L^2(\Gamma)}\,\d\omega\\
&+ C_2 \int_{-\infty}^\infty \omega^{2q} \left(1 + \omega^2\right)
\left\|\widetilde{\psi}_{*,T_0}^f(\cdot,
\omega)\right\|^2_{L^2(\Gamma)}\,\d\omega
\end{alignedat}\\
&\quad\le C\int_{-\infty}^\infty \left(1 + \omega^2\right)^{q+1}
\left\|\widetilde{\psi}_{*,T_0}^f(\cdot,
\omega)\right\|^2_{L^2(\Gamma)}\,\d\omega
\end{split}
\end{equation*}
for a suitable constant $C$, and the result follows.
\end{proof}
In order to establish the uniform-in-time
estimates~\eqref{density_unif_time_bound}
and~\eqref{density_unif_time_bound_Tn}, we utilize the Bochner-space
version (cf.\ \Cref{def:sob_bochner}), presented below, of the Sobolev embedding
lemma; it is primarily used in the case $\mathcal{S} = \Gamma$ (for its other use cf.\ \Cref{decay_corr}). The proof follows by merely incorporating the Bochner-space
nomenclature into the classical arguments used for the real-valued
case~\cite[Lem.\ 6.5]{Folland}.
\begin{lemma}[Sobolev embedding in Bochner spaces]\label{sob_lemma}
Let $k$ and $n$ denote nonnegative and positive integers respectively, let $q \ge 0$, and let $\mathcal{S} \subset \mathbb{R}^n$. Then for
$p > k + 1/2$, $H^p(\mathbb{R}; H^q(\mathcal{S}))$ is continuously
embedded in $C^k(\mathbb{R}; H^q(\mathcal{S}))$, or in symbols,
$H^p(\mathbb{R}; H^q(\mathcal{S})) \hookrightarrow C^k(\mathbb{R};
H^q(\mathcal{S}))$. In particular, there exists a constant $C$ such that
for all functions $f \in H^p(\mathbb{R}; H^q(\mathcal{S}))$ the bound
\begin{equation}\label{eq:sob_lemma}
\max_{0 \le \ell \le k} \sup_{t \in \mathbb{R}}
\left\|\frac{\partial^\ell}{\partial t^\ell} f(t)\right\|_{H^q(\mathcal{S})} \le C \left\|f\right\|_{H^p(\mathbb{R}; H^q(\mathcal{S}))},
\end{equation}
holds, where $\frac{\partial^\ell}{\partial t^\ell} f(t)$ denotes
the $\ell$-th classical derivative of $f$ with respect to $t$ (see
\Cref{sob-boch}).
\end{lemma}
\begin{proof}
Let $f \in H^p(\mathbb{R}; H^q(\mathcal{S}))$. We first show that
$\widehat{\partial^\ell f} \in L^1(\mathbb{R}; H^q(\mathcal{S}))$ for all
nonnegative integers $\ell \le k$. Indeed, using the relation
$\widehat{\partial^\ell f}(\omega) = (\i \omega)^\ell
\widehat{f}(\omega)$ (in accordance with the Fourier transform
convention~\cref{eq:fourier_transf}) we obtain
\begin{equation}\label{eq:bochner_L1_estimate}
\begin{split}
&\int_{-\infty}^\infty \left\|\widehat{\partial^\ell
f}\right.\left. (\omega)\right\|_{H^q(\mathcal{S})}\,\d\omega =
\int_{-\infty}^\infty \left\|(\i\omega)^\ell
\widehat{f}(\omega)\right\|_{H^q(\mathcal{S})}\,\d\omega\\
&\le \int_{-\infty}^\infty (1 + \omega^2)^{k/2}\left\|\widehat{f}(\omega)\right\|_{H^q(\mathcal{S})}\,\d\omega\\
&= \int_{-\infty}^\infty (1 +
\omega^2)^{(k-p)/2}\left\|\widehat{f}(\omega)\right\|_{H^q(\mathcal{S})} (1 +
\omega^2)^{p/2}\,\d\omega\\
&\le \left(\int_{-\infty}^\infty (1 +
\omega^2)^{k-p}\,\d\omega \right)^{1/2}
\left(\int_{-\infty}^\infty (1 +
\omega^2)^p \left\|\widehat{f}(\omega)\right\|_{H^q(\mathcal{S})}^2\,\d\omega\right)^{1/2}\\
&\le \tilde{\tilde{C}} \left\|f\right\|_{H^p(\mathbb{R};\, H^q(\mathcal{S}))},
\end{split}
\end{equation}
where we used the fact that $p > k + 1/2$ to bound the integral of
$(1 + \omega^2)^{k-p}$ by a finite constant independent
of $f$ (which is absorbed into $\tilde{\tilde{C}}$), and we utilized the equivalent norm displayed
in~\eqref{sob_bochner_fourier} in the space
$H^p(\mathbb{R}; H^q(\mathcal{S}))$. It follows that
$\widehat{\partial^\ell f} \in L^1(\mathbb{R}; H^q(\mathcal{S}))$, as
claimed.
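For completeness, the finiteness of the integral of $(1 + \omega^2)^{k-p}$ used above can be checked directly via the substitution $\omega = \tan\theta$:
\begin{equation*}
\int_{-\infty}^\infty (1 + \omega^2)^{k-p}\,\d\omega = \int_{-\pi/2}^{\pi/2} \left(\cos\theta\right)^{2(p-k)-2}\,\d\theta,
\end{equation*}
which is finite precisely because the exponent $2(p-k) - 2$ exceeds $-1$, that is, because $p > k + 1/2$.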
By the Bochner-specific version of the Riemann-Lebesgue lemma~\cite[Lem.\ 2.4.3]{Weis}
(which ensures that the Fourier transform of an $L^1(\mathbb{R}; H^q(\mathcal{S}))$ function is
continuous), in conjunction with the fact that
$\widehat{\partial^\ell f} \in L^1(\mathbb{R}; H^q(\mathcal{S}))$, it follows
that
\begin{equation}\label{fell_C}
\widehat{\widehat{\partial^\ell f}} \in C(\mathbb{R}; H^q(\mathcal{S})), \quad 0 \le \ell \le k.
\end{equation}
Since by hypothesis $f\in H^p(\mathbb{R}; H^q(\mathcal{S}))$, further, it follows
that for $0 \le \ell \le k$, $\partial^\ell f \in L^2(\mathbb{R}; H^q(\mathcal{S}))$, and
thus by the Bochner Plancherel theorem~\cite[Thm.\ 2.20]{Karunakaran:98} we
have, additionally,
$\widehat{\partial^\ell f} \in L^1(\mathbb{R}; H^q(\mathcal{S})) \cap L^2(\mathbb{R};
H^q(\mathcal{S}))$. Now, applying the Bochner Fourier inversion
theorem~\cite{Karunakaran:98} (see also~\cite[\S 8.4]{Zemanian}),
we obtain
\begin{equation}\label{sob_lemma_partial_ell_f}
\partial^\ell f(t) = \frac{1}{2\pi} \int_{-\infty}^\infty
\widehat{\partial^\ell f}(\omega) {\mathrm{e}}^{\i\omega t}\,\d\omega,\quad 0 \le
\ell \le k,
\end{equation}
which, in view of \Cref{fell_C} shows that
$\partial^\ell f(t) = \frac{1}{2\pi}
\widehat{\widehat{\partial^\ell f}}(-t) \in C(\mathbb{R};
H^q(\mathcal{S}))$ for $0 \le \ell \le k$ and, therefore, that
$H^p(\mathbb{R}; H^q(\mathcal{S})) \subset C^k(\mathbb{R};
H^q(\mathcal{S}))$. Finally, using \Cref{sob_lemma_partial_ell_f} and
then \Cref{eq:bochner_L1_estimate} we see that, for
$0 \le \ell \le k$,
\begin{equation*}
\begin{split}
\sup_{t \in \mathbb{R}} \left\|\frac{\partial^\ell}{\partial t^\ell} f(t)\right\|_{H^q(\mathcal{S})} &= \sup_{t \in
\mathbb{R}} \frac{1}{2\pi} \left\|\int_{-\infty}^\infty
\widehat{\partial^\ell f}(\omega)
{\mathrm{e}}^{\i \omega (-t)}\,\d\omega\right\|_{H^q(\mathcal{S})}\\
&\le \frac{1}{2\pi} \int_{-\infty}^\infty
\left\|\widehat{\partial^\ell f}(\omega) \right\|_{H^q(\mathcal{S})}\,\d\omega \le C
\left\|f\right\|_{H^p(\mathbb{R}; H^q(\mathcal{S}))}.
\end{split}
\end{equation*}
This establishes \Cref{eq:sob_lemma} and concludes the proof of
the lemma.
\end{proof}
\begin{lemma}\label{int_eq_pderiv}
Let $p,q$ denote non-negative integers, assume $\Gamma$ satisfies the $q$-growth condition, and assume $\widetilde{b}$
satisfies the $s$-regularity
conditions~\eqref{eq:gamma_Hs_assump} with $s = p + q + 1$.
Then, letting $\widetilde{\psi}$ denote the solution of
\Cref{eq:tdie_sl_generic} we have
$\widetilde{\psi} \in C^{p}(\mathbb{R}; L^2(\Gamma))$, and
$\partial^p \widetilde{\psi}$ satisfies the integral equation
\begin{equation}\label{eq:tdie_sl_generic_p}
\left(S\partial^p \widetilde{\psi}\right)(\mathbf{r}, t) = \gamma^+ \partial^p \widetilde{b}(\mathbf{r}, t)\quad\mbox{for}\quad (\mathbf{r}, t) \in
\Gamma\times\mathbb{R}.
\end{equation}
\end{lemma}
\begin{proof}
From \Cref{psi_Hp} we have
$\widetilde{\psi} \in H^{p+1}(\mathbb{R};\, L^2(\Gamma))$, and,
thus, by \Cref{sob_lemma} we obtain
$\widetilde{\psi} \in C^{p}(\mathbb{R};\,L^2(\Gamma))$ as claimed.
Similarly, by \Cref{sob_lemma}, $\widetilde{b}$ satisfies
$\gamma^+ \widetilde{b} \in C^{p+q+1}(\mathbb{R};
L^2(\Gamma))\subset C^{p}(\mathbb{R}; L^2(\Gamma))$. The proof is
now completed by differentiating \Cref{eq:tdie_sl_generic} under the
integral sign.
\end{proof}
\noindent
The proof of \Cref{3d_decay_thm} is presented in what follows.
\vspace{0.5cm}
\begin{center}
{\bf Proof of \Cref{3d_decay_thm}}
\end{center}
The proof of~\Cref{density_Hp_time_bound} follows by first deriving an
$L^2$-in-time estimate for the solution $\widetilde{\psi}$ to
\Cref{eq:tdie_sl_generic} with a generic right-hand side
$\widetilde{b}$ in suitable Sobolev-Bochner spaces, and then applying
that estimate to the particular cases $\widetilde{\psi} = \psi$ (for
$\widetilde{b} = b$) and $\widetilde{\psi} = \partial^p \psi$ (for
$\widetilde{b} = \partial^p b$, see~\Cref{def:sob_bochner}).
To derive the necessary estimates for $\widetilde{\psi}$ we develop
$L^2$-in-time estimates for the quantities $\widetilde{\psi}_{+,T_0}$
that are related to the solution $\widetilde{\psi}$ of
\Cref{eq:tdie_sl_generic} via \Cref{timewinddens} and
\Cref{rem:tilde_1}. Since, by hypothesis, both aforementioned
selections $\widetilde{b} = b$ and $\widetilde{b} = \partial^p b$ of
$\widetilde b$ satisfy
$\gamma^+\widetilde{b} \in H^{q+1}(\mathbb{R}; H^1(\Gamma))$ and
$\gamma^+\partial_\mathbf{n} \widetilde{b} \in H^{q}(\mathbb{R};
L^2(\Gamma))$, it follows that the conditions of
\Cref{3d_decay_lemma_2ndkind} are met, and, thus
\Cref{3d_decay_lemma_Ainv_freq_bounds} tells us that
$\widetilde{\psi}_{+,T_0}^f$ satisfies the
estimate~\cref{psiplus_estimate_3}. Using this bound in conjunction
with the relation $\widetilde{\psi} = \widetilde{\psi}_{+,T_0}$ for
$t > T_0$ (see \Cref{psipm_def}) we obtain, using Plancherel's
identity,
\begin{equation}\label{psi_to_psiplus_to_H}
\begin{alignedat}{2}
\left\|\widetilde{\psi}\right.\mkern-24mu &&&\left.\vphantom{\widetilde{\psi}}\right\|^2_{L^2([T_0, \infty);\,L^2(\Gamma))} \le
\left\|\widetilde{\psi}_{+,T_0}\right\|^2_{L^2(\mathbb{R};\,L^2(\Gamma))} =
\left\|\widetilde{\psi}_{+,T_0}^f\right\|^2_{L^2(\mathbb{R};\,L^2(\Gamma))} \\
&\le\,&& C_1\int_0^{\omega_0}
\left\|\gamma^-\partial_\mathbf{n} \widetilde{H}^f_{T_0}(\cdot,
\omega) - \i \gamma^-\widetilde{H}^f_{T_0}(\cdot,
\omega)\right\|^2_{L^2(\Gamma)}\,\d\omega\\
& && + C_2\int_{\omega_0}^\infty \omega^{2q}
\left\|\gamma^-\partial_\mathbf{n} \widetilde{H}^f_{T_0}(\cdot,
\omega) - \i\omega \gamma^-\widetilde{H}^f_{T_0}(\cdot,
\omega)\right\|^2_{L^2(\Gamma)}\,\d\omega.\\
\end{alignedat}
\end{equation}
Then, using the estimate \Cref{conv_H_to_psi_eq} in \Cref{conv_H_to_psi} to bound the integrals on the right-hand side of
\Cref{psi_to_psiplus_to_H}, we obtain
\begin{equation}\label{final_psiplus_estimate}
\left\|\widetilde{\psi}\right\|^2_{L^2([T_0, \infty);\,L^2(\Gamma))} \le \tilde{C} \int_{-\infty}^\infty (1 + \omega^2)^{q+1} \left\|
\widetilde{\psi}_{*,T_0}^f(\cdot, \omega)\right\|_{L^2(\Gamma)}^2\,\d\omega,
\end{equation}
and, in view of the equivalence of the norms~\eqref{sob_bochner_norm}
and~\eqref{sob_bochner_fourier}, we can re-write \Cref{final_psiplus_estimate} as
\begin{equation}\label{final_psiplus_estimate_2}
\left\|\widetilde{\psi}\right\|^2_{L^2([T_0, \infty);\,L^2(\Gamma))} \le \tilde{\tilde{C}}\left(\left\|
\widetilde{\psi}_{*,T_0}\right\|_{L^2(\mathbb{R}; L^2(\Gamma))}^2 + \left\|
\partial^{q+1} \widetilde{\psi}_{*,T_0}\right\|_{L^2(\mathbb{R}; L^2(\Gamma))}^2\right).
\end{equation}
Applying the Leibniz formula to the expression
\[
\partial^{q+1} \widetilde{\psi}_{*,T_0}({\mathbf{r}}, t) = \partial^{q+1} \left[w_+(t - T_0 + T_* + \tau)\,w_-(t - T_0)\,
\widetilde{\psi}({\mathbf{r}}, t)\right]
\]
and noting that for all ${\mathbf{r}} \in \Gamma$ and for $0 \le i \le q+1$ we
have
$\supp \partial^{i} \widetilde{\psi}_{*,T_0}({\mathbf{r}}, \cdot) \subset
I_{T_0}$, using straightforward bounds on the functions $w_-$ and
$w_+$ (that do not depend on $T_0$---see \Cref{timewinddens}), we
obtain
\begin{equation}\label{final_psiplus_estimate_2a}
\left\|\widetilde{\psi}\right\|^2_{L^2([T_0, \infty);\,L^2(\Gamma))} \le \sum_{i=0}^{q+1} C_i\left\|
\partial^i \widetilde{\psi}\right\|_{L^2(I_{T_0}; L^2(\Gamma))}^2, \quad C_i = C_i(\Gamma, \tau).
\end{equation}
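Explicitly, writing $W_{T_0}(t) := w_+(t - T_0 + T_* + \tau)\,w_-(t - T_0)$ (a shorthand used only in this display), the Leibniz expansion invoked above reads
\begin{equation*}
\partial^{q+1} \widetilde{\psi}_{*,T_0}({\mathbf{r}}, t) = \sum_{i=0}^{q+1}\binom{q+1}{i}\left(\partial^{q+1-i} W_{T_0}\right)(t)\,\partial^{i}\widetilde{\psi}({\mathbf{r}}, t);
\end{equation*}
each term on the right-hand side is supported in $I_{T_0}$, and the factors $\partial^{q+1-i} W_{T_0}$ admit bounds independent of $T_0$, which is precisely what is used to obtain \Cref{final_psiplus_estimate_2a}.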
Using the continuity of the inclusion map in Sobolev spaces, it follows that
\begin{equation}\label{psiplus_estimate_L2_nontrapping}
\lVert\widetilde{\psi}\rVert_{L^2([T_0, \infty);\,L^2(\Gamma))} \le
C\left\|\widetilde{\psi}\right\|_{H^{q+1}(I_{T_0};\,L^2(\Gamma))} < \infty,\quad C = C(\Gamma, \tau),
\end{equation}
where the finiteness of the norm over $I_{T_0}$ in
\Cref{psiplus_estimate_L2_nontrapping} follows from \Cref{psi_Hp},
which tells us that
$\widetilde{\psi} \in H^{q+1}(I_{T_0};L^2(\Gamma))$ since, by
hypothesis,
$\gamma^+\widetilde{b} \in H^{2q+2}(\mathbb{R};L^2(\Gamma))$ and
$\gamma^+\partial_\mathbf{n} \widetilde{b} \in
H^{2q+1}(\mathbb{R};L^2(\Gamma))$ for each of $\widetilde{b} = b$ and
$\widetilde{b} = \partial^p b$. Applying
\Cref{psiplus_estimate_L2_nontrapping} with $\widetilde{b} = b$ yields
\Cref{density_Hp_time_bound} in the case $p = 0$. Furthermore,
\Cref{int_eq_pderiv} with $\widetilde{b} = b$ implies that
$\widetilde{\psi} = \psi$ satisfies \Cref{eq:tdie_sl_generic_p}. But
this equation can be expressed in the form \Cref{eq:tdie_sl_generic}
with $\widetilde{b} = \partial^p b$ and
$\widetilde \psi = \partial^p \psi$, for which the estimate
\Cref{psiplus_estimate_L2_nontrapping} becomes
\[
\|\partial^p \psi\|_{L^2([T_0, \infty);\,L^2(\Gamma))} \le
C\left\|\partial^p \psi\right\|_{H^{q+1}(I_{T_0};\,L^2(\Gamma))} < \infty,\quad C = C(\Gamma, \tau),
\]
which, using once again the continuity of the inclusion map in Sobolev spaces,
implies \Cref{density_Hp_time_bound}.
Applying~\cref{density_Hp_time_bound} with $p = 1$ together with \Cref{sob_lemma}
yields
\begin{equation}
\begin{split}
\sup_{t>T_0} \left\|\psi(\cdot, t)\right\|_{L^2(\Gamma)} &\le \widetilde{C}
\left\|\psi\right\|_{H^{1}([T_0, \infty);\,L^2(\Gamma))}\\
&\le C(\Gamma, \tau)
\left\|\psi\right\|_{H^{q+2}(I_{T_0};\,L^2(\Gamma))},
\end{split}
\end{equation}
and, thus, \cref{density_unif_time_bound}. The proof is now
complete. $\qed$
\section{Superalgebraic decay estimates of \\ finite-time-history boundary densities}\label{sec:theory_part_ii}
This section extends the theoretical results of
\Cref{sec:theory_part_i}: it establishes that not only is it possible
to bound the density $\psi$ in the unbounded time interval
$[T_0, \infty)$ by its values on the preceding bounded subinterval
$I_{T_0}$, as shown in \Cref{3d_decay_thm}, but also, in
\Cref{3d_decay_thm_ii} below, the main theorem of this paper, that the
{\em temporal} Sobolev and maximum norms of the solution $\psi$ on
time intervals of the form $[T_0 + t, \infty)$, $t > 0$, each decay
rapidly (e.g., superalgebraically fast for temporally smooth incident
signals) as $t\to \infty$; see also \Cref{3d_decay_rmk_iii} where a
related but somewhat modified decay result and proof strategy are
suggested. The statement and proof of \Cref{3d_decay_thm_ii} rely on
the nomenclature introduced in Sections~\ref{sec:prelim}
and~\ref{sec:theory_part_i}. Two important corollaries to this theorem,
namely Corollaries~\ref{decay_corr} and~\ref{decay_corr_energy},
relate \Cref{3d_decay_thm_ii} to rapid decay and associated local energies
of solutions of the
wave equation. Following these corollaries a brief discussion is presented that lays
out the main lines of the proof of \Cref{3d_decay_thm_ii}; a detailed
proof of \Cref{3d_decay_thm_ii} is presented at the end of this
section, following a sequence of preparatory lemmas.
\begin{theorem}\label{3d_decay_thm_ii}
Let $p$, $q$ and $n$ denote non-negative integers, $n>0$, let
$T_0 > 0$ and $\tau > 0$ be given, and assume (i) $\Gamma$ satisfies
the $q$-growth condition (\Cref{q-nontrapp}); (ii) the incident
field $b({\mathbf{r}}, t)$ satisfies the $s$-regularity
conditions~\eqref{eq:gamma_Hs_assump} with
$s = p + q + (n+1)(q+1)$; and (iii) the
incident field $b = b({\mathbf{r}}, t)$ satisfies \Cref{all_t_b} and
vanishes for $({\mathbf{r}}, t) \in \widebar{\Omega}\times\left \{I_{T_0} \cup [T_0, \infty)\right\}$, with $I_{T_0}$ as in
\Cref{domainofdep}. Then for $t > T_0$ the solution $\psi$ of
\Cref{eq:tdie_sl} satisfies the $t$-decay estimate
\begin{equation}\label{density_Hp_time_bound_Tn}
\left\|\psi\right\|_{H^p([t, \infty);\,L^2(\Gamma))} \le
C(\Gamma, \tau, p, n) (t - T_0)^{1/2-n} \left\|\psi\right\|_{H^{p + (n+1)(q +
1)}(I_{T_0};\,L^2(\Gamma))} < \infty.
\end{equation}
If $p\geq 1$ then $\psi$ additionally satisfies the pointwise
$t$-decay estimate
\begin{equation}\label{density_unif_time_bound_Tn}
\left\|\psi(\cdot, t)\right\|_{L^2(\Gamma)} \le
C(\Gamma, \tau, n) (t - T_0)^{1/2-n} \left\|\psi\right\|_{H^{(n+1)(q + 1) +
1}(I_{T_0};\,L^2(\Gamma))} < \infty
\end{equation}
for all $t > T_0$.
\end{theorem}
\begin{corr}[Boundary-separated pointwise / Full spatially-local-$L^2$ solution decay]\label{decay_corr}
Let $q$ denote a non-negative integer, let $\Gamma$ satisfy the
$q$-growth condition (\Cref{q-nontrapp}), and assume that the data
$b$ for the problem~\cref{eq:w_eq}-\cref{eq:bdef} is such that
\begin{equation}\label{b_smooth}
\gamma^+b \in C^\infty(\mathbb{R}; L^2(\Gamma))\quad\mbox{and}\quad
\gamma^+\partial_\mathbf{n} b \in C^\infty(\mathbb{R};L^2(\Gamma)).
\end{equation}
Further, assume that $b = b({\mathbf{r}}, t)$ satisfies~\eqref{all_t_b}, and
that, for given $T_0 > 0$ and $\tau > 0$, $b({\mathbf{r}}, t)$ vanishes for
$({\mathbf{r}}, t) \in \widebar{\Omega} \times \left\lbrace I_{T_0} \cup [T_0,
\infty)\right\rbrace$, where $I_{T_0} = I_{T_0, T_*, \tau}$ is
defined in \Cref{domainofdep} with $T_* = \diam(\Gamma)/c$ as
indicated in \Cref{Tstar_def}.
Then, letting $R > 0$ be such that $\Omega \subset \left\lbrace |\mathbf{r}| <
R\right\rbrace$, letting $D = \Omega^c \cap \left\lbrace |\mathbf{r}| <
R\right\rbrace$, and defining $r_{\mathrm{max}} = \sup_{\mathbf{r} \in D, \mathbf{r}' \in
\Gamma} |\mathbf{r} - \mathbf{r}'|$, for each pair of integers $n > 0$ and $p \ge 0$ there exists a constant $C
= C(p, n, \tau, R, \Gamma) > 0$ such that the solution $u$ to \Cref{eq:w_eq} satisfies the $t$-decay estimate
\begin{equation}\label{uk_l2_sup_bound_decay}
\left\|\partial_t^p u(\cdot, t)\right\|_{L^2(D)} \le C (t - T_0 -
r_{\mathrm{max}}/c)^{1/2-n} \left\|\psi\right\|_{H^{p + (n+1)(q+1)+1}(I_{T_0};
L^2(\Gamma))} < \infty,
\end{equation}
for all $t \in (T_0 + r_{\max}/c, \infty)$.
Further, $u$ also decays
superalgebraically fast with increasing time $t$ for each point
$\mathbf{r}$ outside $\widebar{\Omega}$: given any compact set
$\mathcal{R} \subset (\widebar{\Omega})^c$ and defining
$r_{\mathrm{max}} = \max_{\mathbf{r} \in \mathcal{R}, \mathbf{r}' \in
\Gamma} |\mathbf{r} - \mathbf{r}'|$, for each pair of integers
$n > 0$ and $p \ge 0$ there exists a constant
$C = C(p, n, \tau, \mathcal{R}, \Gamma) > 0$ such that $u$ satisfies
the $t$-decay estimate
\begin{equation}\label{u_ptwise_bound_decay}
\left|\partial_t^p u(\mathbf{r}, t)\right| \le
C(t - T_0 - r_{\mathrm{max}}/c)^{1/2-n} \left\|\psi\right\|_{H^{p + (n+1)(q+1)+1}(I_{T_0}; L^2(\Gamma))} < \infty,
\end{equation}
for all $(\mathbf{r}, t) \in \mathcal{R}\times (T_0 + r_{\mathrm{max}}/c,
\infty)$.
\end{corr}
\begin{proof}
Except for hypothesis (ii) of \Cref{3d_decay_thm_ii}, all conditions of that Theorem follow immediately from hypotheses of the present corollary. Hypothesis (ii), in turn, follows since
by hypothesis~\eqref{b_smooth} and the assumed compact support of $b$, $b$ satisfies $\gamma^+b \in H^p(\mathbb{R}; L^2(\Gamma))$ and
$\gamma^+ \partial_{\mathbf{n}} b \in H^p(\mathbb{R}; L^2(\Gamma))$ for every integer $p\ge 0$. Thus, the conditions of \Cref{3d_decay_thm_ii} are satisfied for every integer $n > 0$ and every integer $p \ge 0$ and so we have
\begin{equation}\label{psi_decay}
\begin{split}
\left\|\partial_t^p \psi\right\|_{H^1([t', \infty);\,L^2(\Gamma))} &\le
\left\|\psi\right\|_{H^{p+1}([t', \infty);\,L^2(\Gamma))}\\
&\le C_1(p, n, \Gamma, \tau) (t' - T_0)^{1/2-n} \left\|\psi\right\|_{H^{p + (n+1)(q +
1) + 1}(I_{T_0};\,L^2(\Gamma))},
\end{split}
\end{equation}
for $t' > T_0$.
The estimates \Cref{uk_l2_sup_bound_decay} and
\Cref{u_ptwise_bound_decay} are obtained in what follows by relying on
a corresponding estimate on $\partial_t^p u$ in the norm of
$H^1([T_0 + r_{\max}/c, \infty); L^2(\Gamma))$, on one hand, and an
estimate on the norm of $\partial_t^p u(\mathbf{r}, \cdot)$ in
$H^1([t + r_{\max}/c, \infty))$ for each $\mathbf{r} \in \mathcal{R}$,
on the other hand. Estimate~\eqref{uk_l2_sup_bound_decay} is
established by first differentiating $s$ times under the integral sign
in the integral representation~\eqref{eq:kirchhoff_3d_soft} for the
solution $u$ (for certain values of the integer $s$) obtaining
$\partial_t^s u(\mathbf{r}, t) = \left(S \partial_t^s
\psi\right)(\mathbf{r}, t)$, and then applying the Cauchy-Schwarz
inequality to obtain, for all $t'$,
\begin{equation}
\begin{split}
\left\|\partial_t^s u(\cdot, t')\right.&\hspace{-1.5mm}\left.\vphantom{\partial_t^s u(\cdot, t')}\right\|^2_{L^2(D)} = \int_D
\left|\partial_t^s u(\mathbf{r}, t')\right|^2\,\d V(\mathbf{r})\\
&\le \frac{1}{16\pi^2} \int_D \left(\int_\Gamma
\left|\partial_t^s \psi(\mathbf{r}', t' - |\mathbf{r} -
\mathbf{r}'|/c)\right|^2\,\d\sigma(\mathbf{r}')\right)
\left(\int_\Gamma \frac{\d\sigma(\mathbf{r}')}{|\mathbf{r} -
\mathbf{r}'|^2}\right)\d V(\mathbf{r}).
\end{split}
\label{uk_L2_estimate}
\end{equation}
Integrating this bound in time, we obtain
\begin{equation}
\begin{split}
\left\|\partial_t^s u\right\|^2_{L^2([t, \infty); L^2(D))} &\le
\frac{1}{16\pi^2} \int_{D} \left[ \left(\int_t^\infty \int_\Gamma
|\partial_t^s \psi(\mathbf{r}', t' - |\mathbf{r} - \mathbf{r}'|/c)|^2\,
\d\sigma(\mathbf{r}')\,\d t'\right) \times\right.\\
&\left.\quad\quad\quad\quad \times \left(\int_\Gamma \frac{\d\sigma(\mathbf{r}')}{|\mathbf{r} -
\mathbf{r}'|^2}\right)\right]\,\d V(\mathbf{r}).
\end{split}
\label{uk_L2_estimate_integrated}
\end{equation}
Letting $I_{s,1}(\mathbf{r}, t)$ and $I_2(\mathbf{r})$ denote, respectively, the first
and second factors in the integrand of the integral over
$D$,
\begin{equation}\label{I1_I2_decay_corr}
I_{s,1}(\mathbf{r}, t) = \int_t^\infty \int_\Gamma
|\partial_t^s\psi(\mathbf{r}', t' - |\mathbf{r} - \mathbf{r}'|/c)|^2\,
\d\sigma(\mathbf{r}')\,\d t',\;\mbox{and}\; I_2(\mathbf{r}) = \int_\Gamma \frac{\d\sigma(\mathbf{r}')}{|\mathbf{r} -
\mathbf{r}'|^2},
\end{equation}
we seek to bound each of $I_{s,1}(\mathbf{r}, t)$ and $I_2(\mathbf{r})$ by quantities independent of $\mathbf{r}$.
Considering first $I_{s,1}(\mathbf{r}, t)$ for fixed $\mathbf{r} \in D$, a change in integration order and a slight extension of the integration domain in the $t'$ variable yields
\begin{equation}\label{I1_decay_corr_estimate}
I_{s,1}(\mathbf{r}, t) \le \int_\Gamma \int_{t - r_{\max}/c}^\infty |\partial_t^s \psi(\mathbf{r}', t')|^2\,\d t'\,\d\sigma(\mathbf{r}')= \left\|\partial_t^s\psi\right\|^2_{L^2([t - r_{\max}/c, \infty); L^2(\Gamma))}.
\end{equation}
The integral of $I_2(\mathbf{r})$ over $D$, in turn, is bounded by an $R-$ and $\Gamma$-dependent constant since by switching the order of integration
\begin{equation*}
\int_{D} I_2(\mathbf{r}) \,\d V(\mathbf{r}) = \int_\Gamma \left(\int_{D} \frac{1}{|\mathbf{r} - \mathbf{r}'|^2}\,\d V(\mathbf{r})\right)\d\sigma(\mathbf{r}'),
\end{equation*}
and the inner integral is bounded for each $\mathbf{r}'
\in \Gamma$ by an $R-$ and $\Gamma$-dependent constant---a fact which can be
established by use of a spherical coordinate system centered at $\mathbf{r}'
\in \Gamma$ for integration. We have therefore shown that
\begin{equation}
\label{u_l2_to_psi}
\left\| \partial_t^s u\right\|^2_{L^2([t, \infty); L^2(D))} \le C_2\left\|\partial_t^s \psi\right\|^2_{L^2([t - r_{\max}/c, \infty); L^2(\Gamma))}.
\end{equation}
Taking separately $s = p$ and $s = p + 1$ in \Cref{u_l2_to_psi} and adding the results we obtain
\begin{equation}\label{u_h1_to_psi}
\left\| \partial_t^p u\right\|^2_{H^1([t, \infty); L^2(D))} \le C_3\left\|\partial_t^p\psi\right\|^2_{H^1([t - r_{\max}/c, \infty); L^2(\Gamma))}.
\end{equation}
Using \Cref{u_h1_to_psi} in conjunction
with \Cref{psi_decay} and the Sobolev embedding \Cref{sob_lemma} we obtain
\begin{equation*}
\begin{split}
\left\|\partial_t^p u(\cdot, t)\right\|_{L^2(D)} &\le C_4 \left\|\partial_t^p u\right\|_{H^1([t, \infty); L^2(D))} \le \sqrt{C_3} C_4\left\|\partial_t^p\psi\right\|_{H^1([t - r_{\max}/c, \infty); L^2(\Gamma))}\\
&\le C_5(\Gamma, \tau, n) (t - T_0 - r_{\max}/c)^{1/2-n} \left\|\psi\right\|_{H^{p + (n+1)(q +
1) + 1}(I_{T_0};\,L^2(\Gamma))},
\end{split}
\end{equation*}
and, thus, \Cref{uk_l2_sup_bound_decay} follows.
To establish \Cref{u_ptwise_bound_decay}, in turn, we once again
use $s$-times differentiation under the integral sign in the
variable $t$ in the representation~\Cref{eq:kirchhoff_3d_soft} and
the Cauchy-Schwarz inequality to obtain
\begin{equation}\label{u_CS_to_psi_sup}
\begin{split}
|\partial_t^s u(\mathbf{r}, t')|^2 &\le \left( \int_\Gamma \frac{1}{|\mathbf{r} -
\mathbf{r}'|^2}\,\d\sigma(\mathbf{r}')\right) \left(\int_\Gamma
|\partial_t^s \psi(\mathbf{r}', t' - |\mathbf{r} - \mathbf{r}'|/c)|^2
\,\d\sigma(\mathbf{r}')\right)\\
&\le C_6^2 \int_\Gamma
|\partial_t^s \psi(\mathbf{r}', t' - |\mathbf{r} - \mathbf{r}'|/c)|^2
\,\d\sigma(\mathbf{r}'),
\end{split}
\end{equation}
where
$C_6^2 = \sup_{\mathbf{r}\in \mathcal{R}} \int_\Gamma
\frac{1}{|\mathbf{r} - \mathbf{r}'|^2}\,\d\sigma(\mathbf{r}')$.
Integrating in $t'$ and using \Cref{I1_decay_corr_estimate} to estimate
the resulting quantity $I_{s,1}(\mathbf{r}, t)$, for each
$\mathbf{r} \in \mathcal{R}$ we obtain the $L^2$ bound
\begin{equation}\label{u_ptwise_l2_to_psi}
\int_{t}^\infty |\partial_t^s u(\mathbf{r}, t')|^2\,\d t' \le C_6^2 \left\|\partial_t^s \psi\right\|^2_{L^2([t - r_{\max}/c, \infty); L^2(\Gamma))}.
\end{equation}
Taking separately $s = p$ and $s = p + 1$ in \Cref{u_ptwise_l2_to_psi}, adding the results, and using \Cref{psi_decay} in conjunction with the Sobolev embedding \Cref{sob_lemma}, we obtain, again for $\mathbf{r} \in \mathcal{R}$,
\begin{equation*}
\begin{split}
\left|\partial_t^p u(\mathbf{r}, t)\right| &\le C_7 \left\|\partial_t^p u(\mathbf{r}, \cdot)\right\|_{H^1([t, \infty))} \le C_8\left\|\partial_t^p \psi\right\|_{H^1([t - r_{\max}/c, \infty); L^2(\Gamma))}\\
&\le C_9(\Gamma, \tau, n) (t - T_0 - r_{\max}/c)^{1/2-n} \left\|\psi\right\|_{H^{p + (n+1)(q +
1) + 1}(I_{T_0};\,L^2(\Gamma))} < \infty,
\end{split}
\end{equation*}
establishing \Cref{u_ptwise_bound_decay}. The proof is complete.
\end{proof}
While equation~\eqref{u_ptwise_bound_decay} provides a concrete
solution decay estimate at each spatial point $\mathbf{r}$ outside
$\widebar{\Omega}$, the associated constant $C$ does increase without
bound as $\mathbf{r}$ approaches $\Gamma = \partial\Omega$ (cf.\
\Cref{u_CS_to_psi_sup}). The result presented next, in contrast, which
holds for arbitrary bounded subsets of $\widebar{\Omega}^c$,
establishes decay of the local energy expression~\eqref{local_energy}
considered in much of the literature---thus strengthening the previous
estimate~\eqref{uk_l2_sup_bound_decay}. The proof below relies on the
estimates provided in \Cref{decay_corr} in conjunction with standard
elliptic regularity properties.
\begin{corr}[Local energy decay estimates]\label{decay_corr_energy}
Let $\Gamma$ and $b$ satisfy the hypotheses of \Cref{decay_corr}
and let $u$ denote the solution to the problem~\eqref{eq:w_eq}. Then,
the local energy $E$ (equation~\eqref{local_energy}) associated with
the solution $u$ decays superalgebraically fast as
$t\to\infty$. More precisely, letting $R > 0$ be such that $\partial B_R$ does not intersect $\Omega$, defining the
compact region
$D = \Omega^c \cap \left\lbrace |\mathbf{r}| \le R\right\rbrace$,
and letting
$r_{\mathrm{max}} = \sup_{\mathbf{r} \in D, \mathbf{r}' \in \Gamma}
|\mathbf{r} - \mathbf{r}'|$, for each integer $n > 0$ there exists a
constant $C = C(\Gamma, R, \tau, n) > 0$ such that
\begin{equation}\label{local_energy_superalg_decay}
E(u, D, t) \le C (t - T_0 - r_{\max}/c)^{1 - 2n} \left\| \psi \right\|^2_{H^{(n+1)(q+1)+3}(I_{T_0}; L^2(\Gamma))} < \infty,
\end{equation}
for all $t \in (T_0 + r_{\max}/c, \infty)$.
\end{corr}
\begin{proof}
In view of the bound on
$\int_D |u_t(\mathbf{r}, t)|^2\,\d V(\mathbf{r})$ provided by
\Cref{decay_corr}, it suffices to establish a corresponding estimate
for $\int_D |\nabla u(\mathbf{r}, t)|^2\,\d V(\mathbf{r})$ for
$t \in (T_0 + r_{\max}/c, \infty)$. To do this, we use the fact that
$u$ is a solution of the wave equation~\eqref{eq:w_eq_a} as well as
the hypothesis that $\gamma^+ b(\mathbf{r}', t') = 0$ for
$(\mathbf{r}', t') \in \Gamma \times \left\{ I_{T_0} \cup [T_0,
\infty)\right\}$, which together imply that $u$ satisfies, for
each $t' \in I_{T_0} \cup [T_0, \infty)$, the exterior Dirichlet
problem
\begin{subequations}\label{u_ellipt}
\begin{align}
\Delta u(\mathbf{r}, t') &= \frac{1}{c^2} \partial_t^2 u(\mathbf{r}, t') \eqqcolon f(\mathbf{r}, t'), \quad\mbox{for}\quad \mathbf{r} \in \Omega^c,\label{u_ellipt_a}\\
u(\mathbf{r}, t') &= 0,
\quad\mbox{for}\quad \mathbf{r}\in\Gamma = \partial\Omega,\label{u_ellipt_b}
\end{align}
\end{subequations}
for the Poisson equation~\eqref{u_ellipt_a}. To proceed, we
first apply \Cref{decay_corr} so that from
\Cref{uk_l2_sup_bound_decay} with $p = 2$ we have, for
$t > T_0 + r_{\max}/c$,
\begin{equation}\label{f_l2_decay_estimate}
\left\|f(\cdot, t)\right\|_{L^2(D)} \le \frac{C_1}{c^2}(t - T_0 -
r_{\mathrm{max}}/c)^{1/2-n} \left\|\psi\right\|_{H^{(n+1)(q+1)+3}(I_{T_0};
L^2(\Gamma))} < \infty,
\end{equation}
in particular implying
\begin{equation}\label{f_poisson_def}
f(\cdot, t) \in L^2(D) \quad\mbox{for}\quad t \in (T_0 + r_{\max}/c, \infty).
\end{equation}
We also introduce the region $D_\varepsilon = \Omega^c \cap \left\lbrace |\mathbf{r}| \le R + \varepsilon\right\rbrace$ for some $\varepsilon > 0$ as well as the
function $\varphi: D_\varepsilon \to \mathbb{R}$ satisfying $\varphi = u(\cdot, t)$ in a
neighborhood of $|\mathbf{r}| = R + \varepsilon$ and satisfying $\varphi = 0$ in a
neighborhood of $\partial\Omega$ as well as for $|\mathbf{r}| > R + 2\varepsilon$---which can be easily constructed by
multiplication of $u$ by a smooth cutoff function in the radial variable.
Moreover, from the representation~\eqref{eq:kirchhoff_3d_soft} for
$\mathbf{r} \in D_\varepsilon$ bounded away from $\partial \Omega$ we have $\varphi \in
H^1(D_\varepsilon)$. Applying the regularity result~\cite[Thm.\ 8.9]{GilbargTrudinger} to the
problem~\eqref{u_ellipt} on $D_\varepsilon$ with $f(\cdot, t) \in L^2(D_\varepsilon)$ and with boundary values given by $\varphi
\in H^1(D_\varepsilon)$, we have $u(\cdot, t) \in H^1(D_\varepsilon)\cap H^2_{\mathrm{loc}}(D_\varepsilon)$. Since $D \subset D_\varepsilon$ we also have $u(\cdot, t) \in H^1(D)\cap H^2_{\mathrm{loc}}(D)$.
Since $u(\cdot, t) \in H^1(D)$ and $\Delta u(\cdot, t) \in L^2(D)$ we can
apply a version~\cite[Thm.\ 4.4]{McLean} of Green's first identity to obtain, for each $t \in (T_0 + r_{\max}/c, \infty)$,
\begin{equation}\label{gradu_grns_identity}
\int_D \left|\nabla u(\mathbf{r}, t)\right|^2\,\d V(\mathbf{r}) = \left\langle \gamma u(\cdot, t), \gamma \partial_\mathbf{n} u(\cdot, t) \right\rangle_{\partial D} - \int_D u(\mathbf{r}, t) \Delta u(\mathbf{r}, t)\,\d V(\mathbf{r}),
\end{equation}
where $\left\langle \cdot, \cdot \right\rangle$ denotes the duality pairing of $H^{-1/2}(\partial D)$ and $H^{1/2}(\partial D)$ and $\gamma$ denotes the trace operator on $D$. In fact,
since $\gamma u(\cdot, t) = 0$ on $\Gamma$ and $u(\cdot, t)$ is smooth for $\mathbf{r}$ in a neighborhood of $\partial B_R$, we have
$\left\langle \gamma u(\cdot, t), \gamma \partial_\mathbf{n} u(\cdot, t)\right\rangle_{\partial D} = \int_{\partial B_R} u(\mathbf{r}, t) \partial_\mathbf{n} u(\mathbf{r}, t)\,\d\sigma(\mathbf{r})$.
Using \Cref{gradu_grns_identity} we now proceed to derive bounds for
$E(u, D, t)$. Since $u$ satisfies \Cref{u_ellipt}, using
the Cauchy-Schwarz inequality we obtain
\begin{equation}\label{gradu_first_bound}
\begin{split}
\int_D \left|\nabla u(\mathbf{r}, t)\right|^2\,\d & V(\mathbf{r}) = \int_{\partial B_R} u(\mathbf{r}, t) \partial_{\mathbf{n}} u(\mathbf{r}, t) \,\d\sigma(\mathbf{r}) - \int_D u(\mathbf{r}, t) f(\mathbf{r}, t)\,\d V(\mathbf{r})\\
&\le \left\| u(\cdot, t)\right\|_{L^2(\partial B_R)} \left\|\partial_{\mathbf{n}} u(\cdot, t)\right\|_{L^2(\partial B_R)} + \left\|u(\cdot, t)\right\|_{L^2(D)} \left\|f(\cdot, t)\right\|_{L^2(D)},
\end{split}
\end{equation}
where $B_R$ denotes the ball of radius $R$ centered at the origin.
The factor $\left\|u(\cdot, t)\right\|_{L^2(D)}$ appearing in the volume term on the right-hand side of \Cref{gradu_first_bound} can be estimated
using \Cref{decay_corr} with $p = 0$:
\begin{equation}\label{u_volume_decay}
\left\|u(\cdot, t)\right\|_{L^2(D)} \le C_2 (t - T_0 - r_{\max}/c)^{1/2-n} \left\|\psi\right\|_{H^{(n+1)(q+1)+1}(I_{T_0}; L^2(\Gamma))}.
\end{equation}
Then, using \Cref{f_l2_decay_estimate} and the continuity of the
inclusion map in Sobolev spaces, \Cref{u_volume_decay} provides a bound for the second summand on the right-hand side of~\eqref{gradu_first_bound}:
\begin{equation}\label{gradu_volume_bound}
\left\|u(\cdot, t)\right\|_{L^2(D)} \left\|f(\cdot, t)\right\|_{L^2(D)} \le C_3 (t - T_0 - r_{\max}/c)^{1-2n} \left\|\psi\right\|_{H^{(n+1)(q+1)+3}(I_{T_0}; L^2(\Gamma))}^2.
\end{equation}
We now turn to the first summand on the right-hand side
of~\eqref{gradu_first_bound}, and we estimate each one of the
corresponding boundary terms by relying on the representation
formula~\cref{eq:kirchhoff_3d_soft}. Indeed, differentiating that
formula once with respect to the normal $\mathbf{n}$ and $s$-times
($s = 0, 1$) with respect to time for
$(\mathbf{r},t') \in \partial B_R \times \mathbb{R}$, we obtain the
relation
\begin{equation}
\begin{split}
\partial_{\mathbf{n}} \partial_t^s u(\mathbf{r}, t') = -\int_\Gamma &\left( \frac{\hat{\mathbf{r}}\cdot (\mathbf{r} - \mathbf{r}') \partial_t^s \psi(\mathbf{r}', t' - |\mathbf{r} - \mathbf{r}'|/c)}{|\mathbf{r} - \mathbf{r}'|^3}\right.\\&\left.+ \frac{\hat{\mathbf{r}} \cdot (\mathbf{r} - \mathbf{r}') \partial_t^{s+1} \psi(\mathbf{r}', t' - |\mathbf{r} - \mathbf{r}'|/c)}{c|\mathbf{r} - \mathbf{r}'|^2}\right)\,\d\sigma(\mathbf{r}'),
\end{split}
\end{equation}
for which the Cauchy-Schwarz inequality implies that, for $(\mathbf{r},t') \in \partial B_R \times \mathbb{R}$,
\begin{equation}\label{nablau_ptwise_to_l2_bound}
\begin{split}
|\partial_{\mathbf{n}} \partial_t^s u(\mathbf{r}, t')|^2 &\le C_4(\Gamma, R, \varepsilon) \int_\Gamma |\partial_t^s \psi(\mathbf{r}', t' - |\mathbf{r} - \mathbf{r}'|/c)|^2\,\d\sigma(\mathbf{r}')\\
&+ C_5(\Gamma, R, \varepsilon) \int_\Gamma |\partial_t^{s+1} \psi(\mathbf{r}', t' - |\mathbf{r} - \mathbf{r}'|/c)|^2\,\d\sigma(\mathbf{r}').
\end{split}
\end{equation}
The argument we use to bound $\partial_{\mathbf{n}} u$ uniformly in time is similar
to the one used in the proof of \Cref{decay_corr}: we obtain bounds
on $\partial_{\mathbf{n}} u(\mathbf{r}, \cdot)$ in the norm of $H^1([t, \infty))$
by taking $s = 0$ and $s = 1$ in \Cref{nablau_ptwise_to_l2_bound},
summing the result, and integrating this inequality for $t' > t$;
the uniform-in-time bound then follows from the Sobolev
Lemma. Thus, applying \Cref{nablau_ptwise_to_l2_bound} with $s=0$ and $s=1$ and using the definition~\eqref{I1_I2_decay_corr} of
$I_{0, 1}(\mathbf{r}, t)$, $I_{1,1}(\mathbf{r}, t)$ and
$I_{2,1}(\mathbf{r}, t)$, we obtain
\begin{equation}\label{nablau_ptwise_to_l2_bound_integrated}
\begin{split}
\left\| \partial_{\mathbf{n}} u(\mathbf{r}, \cdot) \right\|^2_{H^1([t, \infty))} &\le C_4(\Gamma, R, \varepsilon) I_{0,1}(\mathbf{r}, t) + C_5(\Gamma, R, \varepsilon) I_{2,1}(\mathbf{r}, t)\\
&\quad + (C_4(\Gamma, R, \varepsilon) + C_5(\Gamma, R, \varepsilon))I_{1,1}(\mathbf{r}, t).
\end{split}
\end{equation}
Then, using the bound~\eqref{I1_decay_corr_estimate}, $I_{s,1}(\mathbf{r}, t)
\le \left\|\partial_t^s \psi\right\|^2_{L^2([t - r_{\max}/c, \infty);
L^2(\Gamma))}$, we obtain the relation
\begin{equation}\label{nablau_H1_bound}
\left\|\partial_{\mathbf{n}} u(\mathbf{r}, \cdot)\right\|_{H^1([t, \infty))} \le C_6(\Gamma, R, \varepsilon) \left\|\psi\right\|_{H^2([t - r_{\max}/c, \infty); L^2(\Gamma))}.
\end{equation}
In conjunction with the Sobolev embedding \Cref{sob_lemma} and the bound \Cref{density_Hp_time_bound_Tn} of \Cref{3d_decay_thm_ii}, the bound~\Cref{nablau_H1_bound} implies that
\begin{equation}\label{nablau_ptwise_bound}
\begin{split}
\left|\partial_{\mathbf{n}} u(\mathbf{r}, t)\right| &\le C_7 \left\|\partial_{\mathbf{n}} u(\mathbf{r}, \cdot)\right\|_{H^1([t, \infty))} \le C_7 C_6(\Gamma, R, \varepsilon) \left\|\psi\right\|_{H^2([t - r_{\max}/c, \infty); L^2(\Gamma))}\\
&\le C_8(\Gamma, R, \tau, n, \varepsilon) (t - T_0 - r_{\max}/c)^{1/2 - n} \left\|\psi\right\|_{H^{(n+1)(q+1)+2}(I_{T_0}; L^2(\Gamma))}
\end{split}
\end{equation}
for
$(\mathbf{r}, t) \in \partial B_R \times (T_0 + r_{\max}/c,
\infty)$. Taking $L^2(\partial B_R)$ norms in both
\Cref{u_ptwise_bound_decay} with $p=0$ and in
\Cref{nablau_ptwise_bound} yields the desired estimate for the first
summand on the right-hand side of~\eqref{gradu_first_bound}:
\begin{equation}\label{unablau_ptwise_bound}
\begin{split}
\left\|u(\cdot, t)\right\|_{L^2(\partial B_R)} \left\|\partial_{\mathbf{n}} u(\cdot, t)\right\|_{L^2(\partial B_R)} \le C_9 &(t - T_0 - r_{\max}/c)^{1-2n} \times\\
&\times \left\|\psi\right\|^2_{H^{(n+1)(q+1)+2}(I_{T_0}; L^2(\Gamma))},
\end{split}
\end{equation}
for $t\geq T_0 + r_{\max}/c$, where we again used the continuity of
the inclusion map in Sobolev spaces.
To obtain the desired decay result, finally, we apply
the bound \Cref{uk_l2_sup_bound_decay} in \Cref{decay_corr}
with $p = 1$ to obtain, for $t \in (T_0 + r_{\max}/c, \infty)$,
\begin{equation}\label{dt2u_decay}
\left\|\partial_t u(\cdot, t)\right\|_{L^2(D)} \le C_{10} (t - T_0 -
r_{\mathrm{max}}/c)^{1/2-n} \left\|\psi\right\|_{H^{(n+1)(q+1)+2}(I_{T_0};
L^2(\Gamma))}.
\end{equation}
Then, using~\Cref{gradu_first_bound} for the first term in the
definition~\eqref{local_energy} of $E(u, D, t)$, we conclude that
\begin{equation}
\begin{split}
E(u, D, t) &\le \left\| u(\cdot, t)\right\|_{L^2(\partial B_R)} \left\|\partial_{\mathbf{n}} u(\cdot, t)\right\|_{L^2(\partial B_R)}\\
&\quad + \left\|u(\cdot, t)\right\|_{L^2(D)} \left\|f(\cdot, t)\right\|_{L^2(D)} + \left\|\partial_t u(\cdot, t)\right\|^2_{L^2(D)}\\
&\le C_9 (t - T_0 - r_{\max}/c)^{1-2n} \left\|\psi\right\|^2_{H^{(n+1)(q+1)+2}(I_{T_0}; L^2(\Gamma))}\\
&\quad + C_3 (t - T_0 - r_{\max}/c)^{1-2n} \left\|\psi\right\|_{H^{(n+1)(q+1)+3}(I_{T_0}; L^2(\Gamma))}^2\\
&\quad + C_{10}^2 (t - T_0 -
r_{\mathrm{max}}/c)^{1-2n} \left\|\psi\right\|^2_{H^{(n+1)(q+1)+2}(I_{T_0};
L^2(\Gamma))},
\end{split}
\end{equation}
where the last inequality results by incorporating the
estimates~\eqref{unablau_ptwise_bound}, \Cref{gradu_volume_bound}, and \Cref{dt2u_decay}. The desired
estimate~\eqref{local_energy_superalg_decay} follows, once again, by
continuity of the inclusion map in Sobolev spaces and the proof is thus
complete.
\end{proof}
\begin{rem}\label{ralston}
The well known contribution~\cite{Ralston:69} implies that, for a
trapping obstacle $\Omega$ and for an arbitrarily large time
$\mathcal{T}$, initial conditions can be selected so that the local
energy $E(u, D, t)$ in~\eqref{energy_decay} with
$D =\Omega^c\cap \{|{\mathbf{r}}| < R\}$ remains arbitrarily close to the
initial energy value $E(u, D, 0)$ for $0\leq t\leq \mathcal{T}$. In
particular, this result implies that a decay bound of the
form~\eqref{energy_decay} that is uniformly valid for all admissible
incident initial conditions and for all compact sets
$D\subset \Omega^c$ cannot hold for a trapping
obstacle. References~\cite{Ikawa:82,Ikawa:88} do present, however,
uniformly valid decay estimates relative to higher order Sobolev
norms over the complete exterior domain, which are valid for
trapping structures consisting of unions of convex obstacles, for
which the trapping rays span spatial regions of zero measure
(indeed, a single ray in the case of the structure consisting of two
convex connected obstacles in~\cite{Ikawa:82}, and as implied by
Assumption (H.2) in~\cite{Ikawa:88}). In the same spirit,
\Cref{3d_decay_thm_ii} and Corollaries~\ref{decay_corr} and~\ref{decay_corr_energy} present decay results,
including of the local energy $E(u, D, t)$,
that are uniformly valid for all admissible incident fields in terms
of higher-order {\em surface} Sobolev norms, which are valid for
trapping obstacles satisfying the $q$-growth condition---which
includes obstacles, such as those depicted in \Cref{fig:3d_connected_trapping}, for
which the trapping rays span volumetric regions of positive measure.
\end{rem}
The overall approach to the proof of \Cref{3d_decay_thm_ii} relies
critically on the novel domain-of-dependence time-history technique
described in \Cref{rem:h_equiv_rem_2} (see also \Cref{3d_decay_rmk_iii} where a related but somewhat
modified decay result and proof strategy are suggested). Technically, the proof of
\Cref{3d_decay_thm_ii} proceeds on the basis of an argument resulting from
integration-by-parts with respect to the temporal frequency $\omega$,
which requires $\omega$-differentiation of a certain function
$\breve{\psi}^f_{+,0}({\mathbf{r}}, \omega)$ closely related to the boundary
integral density $\psi^f({\mathbf{r}}, \omega)$. Certain necessary results
concerning differentiability of boundary integral operators and
associated integral densities are established in a series of lemmas
presented in the Appendix. Some of the main elements of the proof of
\Cref{3d_decay_thm_ii}, in turn, are presented in
Lemmas~\ref{3d_decay_thm_recursive_bound_general}
through~\ref{decay_estimate_L2} below. Thus, with reference to the
Appendix, \Cref{3d_decay_thm_recursive_bound_general} provides
pointwise bounds on density derivatives. Then, the technical
\Cref{Rderiv_sobolev_bound} (of a similar character to
\Cref{conv_H_to_psi}) establishes bounds on the integrals of certain
quantities in~\Cref{3d_decay_thm_recursive_bound_general}, and
\Cref{decay_estimate_L2} provides the primary decay estimate used in
the proof of \Cref{3d_decay_thm_ii}.
\begin{remark}\label{3d_decay_rmk_iii}
Before proceeding with the proof of \Cref{3d_decay_thm_ii} we note
that a related but somewhat less informative decay result and proof
strategy, which do not depend on the bootstrap DoD concept, can be
contemplated. In the alternative approach, the decay proof results
once again from an argument based on integration by parts (in
frequency-domain) in the Fourier integral that represents the time-domain
solution. In detail, starting from the given time-domain data
$ b(r,t)$ defined in the neighborhood $\Omega^\textit{inc}$ of
$\Omega$, a Fourier transform is performed to obtain the right-hand
side of the integral equation~\eqref{CFIE_direct}. Since $\Gamma$ satisfies the
$q$-growth condition, it follows that $\psi^f({\mathbf{r}},\omega)$, the solution
of this equation, admits an
upper bound that grows polynomially as a function of $\omega$. Using
an argument similar to the one presented in \Cref{3d_decay_thm_recursive_bound_general} below,
corresponding polynomially growing bounds are obtained for the
$\omega$ derivatives of $\psi^f({\mathbf{r}},\omega)$. Thus, the decay
result can be obtained by an integration by parts argument followed
by a bound on the integral of the resulting integrands. Such a bound
can be obtained by relying on smooth partitioning of the integral
together with a Young inequality-based estimate in an argument
similar to the one in \Cref{decay_estimate_L2}. The resulting time-decay bound
resembles the bound~\eqref{density_Hp_time_bound_Tn}. But,
unlike equation~\eqref{density_Hp_time_bound_Tn}, which expressed
decay in terms of the norm of the solution $\psi$ itself over the
bootstrap DoD $I_{T_0}$, the alternative bound expresses decay in
terms of the norm {\em over all time} of the right-hand side
function $b({\mathbf{r}},t)$ and its normal derivative on $\Gamma$. Thus
\Cref{3d_decay_thm_ii} provides a significantly more precise decay estimate,
but it does so at the cost of certain added complexity in the
proof---which is mainly confined to \Cref{Rderiv_sobolev_bound}.
\end{remark}
Following the aforementioned proof plan for
Theorem~\ref{3d_decay_thm_ii}, we first present, in
\Cref{3d_decay_thm_recursive_bound_general}, estimates on
frequency-derivatives of certain frequency-domain density solutions.
\begin{lemma}\label{3d_decay_thm_recursive_bound_general}
Let $\widetilde{b}$ satisfy the assumptions of \Cref{3d_decay_lemma_2ndkind}
as well as the additional assumption that for all ${\mathbf{r}} \in \widebar{\Omega}$ the function
$\widetilde{b}$ is compactly-supported as a function of time, and let
$\Omega$ satisfy the $q$-growth condition for some non-negative integer $q$.
Further, call $\widetilde{R}_T(\mathbf{r}, \omega)$ the right-hand side of
\cref{CFIE_proof_generic} and let $\widetilde{\psi}$ and
$\mu = \widetilde{\psi}_{+,T}^f$ denote the solutions to
\Cref{eq:tdie_sl_generic} and \cref{CFIE_proof_generic},
respectively.
Then, for all nonnegative integers $p$ and all
$\omega \neq \pm \omega_0$ (cf. \Cref{Aop_def}) we have
\begin{equation}\label{recursion_hypothesis_squared_general}
\begin{split}
\left\| \partial_\omega^p \mu(\cdot, \omega)\right\|_{L^2(\Gamma)}^2
\le \sum_{i=0}^{p} &\left(\sum_{j=0}^{(i+1)(q+1)-1} d_{ij}^p \omega^{2j}
\left\|\partial_\omega^{p-i} \widetilde{R}_T(\cdot,
|\omega|)\right\|^2_{L^2(\Gamma)}\right),
\end{split}
\end{equation}
where $d_{ij}^p > 0$ denote certain positive constants independent of $T$. Additionally, for each nonnegative integer $p$
there exists a constant $C_p$ such that
\begin{equation}\label{psit_nongrowth}
\left\|\partial_\omega^p\mu(\cdot, \omega)\right\|_{L^2(\Gamma)} \le C_p |\omega|^{(p+1)(q+1)}
\end{equation}
holds for all sufficiently large values of $|\omega|$.
\end{lemma}
\begin{proof}
Let $p$ denote a nonnegative integer and let
$\omega \ne \pm \omega_0$. Since from \Cref{int_eq_freq_regularity}
we have
$\widetilde{R}_T \in C^\infty(\mathbb{R}\setminus \pm \omega_0;
L^2(\Gamma))$ and
$\widetilde{\psi}_{+,T}^f \in C^\infty(\mathbb{R}; L^2(\Gamma))$, it
follows that all quantities in
\Cref{recursion_hypothesis_squared_general} are well-defined. We
restrict the remainder of the proof to the case $\omega > 0$; the
full result then follows by the property of Hermitian symmetry
satisfied by $\mu = \widetilde{\psi}^f_{+,T}$ (see
\Cref{negative_freq}).
In order to prove \Cref{recursion_hypothesis_squared_general} we first show that
\begin{equation}\label{recursion_hypothesis_general}
\begin{split}
\left\| \partial_\omega^p \mu(\cdot, \omega)\right\|_{L^2(\Gamma)}
\le \sum_{i=0}^{p - 1} &\left(\sum_{j=0}^{(i+1)(q+1)-1} b_{ij}^p \omega^j
\left\|\partial_\omega^{p-i} \widetilde{R}_T(\cdot,
\omega)\right\|_{L^2(\Gamma)}\right)\\
&+ \sum_{i=0}^{p(q+1)} c_{i}^p \omega^i \left\|\mu(\cdot,
\omega)\right\|_{L^2(\Gamma)},
\end{split}
\end{equation}
for some positive constants $b_{ij}^p > 0$ and $c_i^p > 0$. The
proof of \Cref{recursion_hypothesis_general} proceeds inductively:
assuming that, for a given nonnegative integer $p$, the relation
\begin{equation}\label{recursion_hypothesis_general_s}
\begin{split}
\left\| \partial_\omega^s \mu(\cdot, \omega)\right\|_{L^2(\Gamma)}
\le \sum_{i=0}^{s - 1} &\left(\sum_{j=0}^{(i+1)(q+1)-1} b_{ij}^s \omega^j
\left\|\partial_\omega^{s-i} \widetilde{R}_T(\cdot,
\omega)\right\|_{L^2(\Gamma)}\right)\\
&+ \sum_{i=0}^{s(q+1)} c_{i}^s \omega^i \left\|\mu(\cdot,
\omega)\right\|_{L^2(\Gamma)},
\end{split}
\end{equation}
holds for all nonnegative integers $s \le p$, we show that
\Cref{recursion_hypothesis_general_s} holds for $s = p + 1$. (The base
case $s = 0$ is trivially satisfied as the first sum in the inequality
equals zero in that case.) To do this we use Leibniz's product
differentiation rule (\Cref{int_eq_freq_regularity}), which tells us that
\begin{equation}\label{leibniz}
\left(\partial_\omega^{p+1} \mu \right)(\mathbf{r}, \omega) =
A_\omega^{-1}\left(\partial_\omega^{p+1} \widetilde{R}_T(\mathbf{r}, \omega) -
\sum_{k=1}^{p+1} a_k^{p+1}(\partial_\omega^k A_\omega)
(\partial_\omega^{p+1-k} \mu)(\mathbf{r}, \omega) \right)
\end{equation}
for certain integers $a_k^{p+1}$; $k=1, \ldots, p+1$. But, in
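For clarity, we sketch how~\eqref{leibniz} arises (under the notation above): applying $\partial_\omega^{p+1}$ to both sides of the relation $A_\omega \mu = \widetilde{R}_T$, expanding the left-hand side by the Leibniz rule, and isolating the term containing the undifferentiated operator $A_\omega$ yields
\begin{equation*}
A_\omega\, \partial_\omega^{p+1}\mu = \partial_\omega^{p+1} \widetilde{R}_T - \sum_{k=1}^{p+1} \binom{p+1}{k} (\partial_\omega^k A_\omega)(\partial_\omega^{p+1-k}\mu),
\end{equation*}
so that, upon application of $A_\omega^{-1}$, the integers $a_k^{p+1}$ may be identified with the binomial coefficients $\binom{p+1}{k}$.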
view of the $q$-growth condition (\Cref{q-nontrapp}),
there exist positive $C_1, C_2$ such that
$\left\|A_\omega^{-1}\right\|_{L^2(\Gamma)\to L^2(\Gamma)}\le C_1 +
C_2 \omega^q$. Further, the operator-norm bound
\Cref{Aop_deriv_bound} in \Cref{omega_explicit_norms_deriv_int_op}
tells us that, for certain constants $\alpha_{0k}$ and
$\alpha_{1k}$, we have
$\left\|\partial_\omega^k A_\omega\right\|_{L^2(\Gamma)\to
L^2(\Gamma)} \le \alpha_{0k} + \alpha_{1k} \omega$ for all
$\omega \in \mathbb{R}^+ \setminus \omega_0$. It thus follows
from~\eqref{leibniz} that
\begin{equation}\label{partial_p_mu_deriv_first_bound}
\begin{split}
\left\|\vphantom{\partial_\omega^{p+1} \mu(\cdot, \omega)}
\partial_\omega^{p+1} \mu(\cdot, \omega)\right.&
\left.\vphantom{\partial_\omega^{p+1} \mu(\cdot, \omega)}\right\|_{L^2(\Gamma)}
\le
(C_1 + C_2\omega^q) \left(\vphantom{\sum_{k=1}^{p+1}}
\left\|\partial_\omega^{p+1}
\widetilde{R}_T(\cdot, \omega)\right\|_{L^2(\Gamma)} \right.\\
&+ \left.\sum_{k=1}^{p+1} |a_k^{p+1}|
(\alpha_{0k} + \alpha_{1k} \omega)\left\|\partial_\omega^{p+1-k}
\mu(\cdot, \omega)\right\|_{L^2(\Gamma)}\right).
\end{split}
\end{equation}
Substituting \Cref{recursion_hypothesis_general_s} with
$s = p + 1 - k $ for $k = 1, \ldots, p$ into
\Cref{partial_p_mu_deriv_first_bound} we obtain
\begin{alignat*}{2}
\left\|\partial_\omega^{p+1}\mu(\cdot, \omega)\right\|_{L^2(\Gamma)} &\le
(C_1 + C_2\omega^q)\left\|\partial_\omega^{p+1} \widetilde{R}_T(\cdot,
\omega)\right\|_{L^2(\Gamma)}\\
&\begin{alignedat}[t]{2}
+ \sum_{k=1}^{p+1} &|a_k^{p+1}|\left(C_1\alpha_{0k} +
C_2\alpha_{0k}\omega^q + C_1\alpha_{1k}\omega +
C_2\alpha_{1k}\omega^{q+1}\right)\times\\
&\begin{alignedat}[t]{2}
\times\left[\vphantom{\sum_{i=0}^{p+1-k}\sum_{j=0}^i}\right.&
\sum_{i=0}^{p-k}\sum_{j=0}^{(i+1)(q+1)-1} b_{ij}^{p+1-k} \omega^j
\left\|\partial_\omega^{p+1-k-i} \widetilde{R}_T(\cdot,
\omega)\right\|_{L^2(\Gamma)}\\
&\left.+ \sum_{i=0}^{(p+1-k)(q+1)} c_i^{p+1-k} \omega^i
\left\|\mu(\cdot, \omega)\right\|_{L^2(\Gamma)}\right],
\end{alignedat}
\end{alignedat}
\end{alignat*}
from which, expanding the products, we obtain
\begin{alignat}{2}\label{final_inequality}
\left\|\partial_\omega^{p+1}\mu(\cdot, \omega)\right\|_{L^2(\Gamma)} &
\le (C_1 + C_2\omega^q)\left\|\partial_\omega^{p+1} \widetilde{R}_T(\cdot,
\omega)\right\|_{L^2(\Gamma)} + (A) + (B),
\end{alignat}
where
\begin{alignat*}{2}
(A) = \sum_{k=1}^{p+1}&\sum_{i=0}^{p-k}\sum_{j=0}^{(i+1)(q+1)-1}
|a_k^{p+1}b_{ij}^{p+1-k}|\left(C_1\alpha_{0k}\omega^j +
C_2\alpha_{0k}\omega^{q+j}\vphantom{+ C_1\alpha_{1k}\omega^{j+1} +
C_2\alpha_{1k}\omega^{q+j+1}}\right.\\
&\left.+\;C_1\alpha_{1k}\omega^{j+1} +
C_2\alpha_{1k}\omega^{q+j+1}\right)\left\|\partial_\omega^{p+1-k-i}
\widetilde{R}_T(\cdot, \omega)\right\|_{L^2(\Gamma)},
\end{alignat*}
and
\begin{alignat*}{2}
(B) = \sum_{k=1}^{p+1}&\sum_{i=0}^{(p+1-k)(q+1)}
|a_k^{p+1}c_{i}^{p+1-k}|\left(C_1\alpha_{0k}\omega^i +
C_2\alpha_{0k}\omega^{q+i}\vphantom{+ C_1\alpha_{1k}\omega^{i+1} +
C_2\alpha_{1k}\omega^{q+i+1}}\right.\\
&\left.+\; C_1\alpha_{1k}\omega^{i+1} +
C_2\alpha_{1k}\omega^{q+i+1}\right)\left\|
\mu(\cdot, \omega)\right\|_{L^2(\Gamma)}.
\end{alignat*}
It is easy to show that \Cref{final_inequality} implies
\Cref{recursion_hypothesis_general_s} with $s=p+1$. Indeed, on one
hand, the first term on the right-hand side of \Cref{final_inequality}
and every term arising from the summations in $(A)$ can be expressed
as a constant multiplied by a term of the form
$\omega^\ell \left\|\partial_\omega^m \widetilde{R}_T(\cdot,
\omega)\right\|_{L^2(\Gamma)}$ for some $m \le p$ and for some
$\ell\leq (p+1)(q+1)$---all of which match corresponding terms in
\Cref{recursion_hypothesis_general_s} for $s = p +
1$. The maximal power of $\omega$ in the expression $(B)$, on the other
hand, equals $(p+1)(q+1)$ (it occurs for $k=1$ and $i=p(q+1)$), and the
corresponding term in $(B)$, namely
$\omega^{(p+1)(q+1)}\left\|\mu(\cdot,
\omega)\right\|_{L^2(\Gamma)}$, is also present
in~\cref{recursion_hypothesis_general_s} for $s = p+1$. Similarly, it
is easily checked that all remaining terms
$\omega^\ell \left\|\partial_\omega^{p+1-k-i} \widetilde{R}_T(\cdot,
\omega)\right\|_{L^2(\Gamma)}$ and
$\omega^i\left\|\mu(\cdot, \omega)\right\|_{L^2(\Gamma)}$
in~\eqref{final_inequality} are accounted for in
\Cref{recursion_hypothesis_general_s} with $s = p+1$, and, thus, the
validity of the inequality~\cref{recursion_hypothesis_general_s} with
$s = p + 1$ and certain constants $b_{ij}^{p+1}$ and $c_{i}^{p+1}$
follows, which completes the inductive proof
of~\Cref{recursion_hypothesis_general}.
Using \Cref{recursion_hypothesis_general} together with the $q$-growth
condition
$$\left\|\mu(\cdot, \omega)\right\|_{L^2(\Gamma)} \le (C_1 + C_2
\omega^q) \left\|\widetilde{R}_T(\cdot,
\omega)\right\|_{L^2(\Gamma)}\quad (C_1,C_2\in\mathbb{R})$$ for
the solution $\mu$ of the equation $A_\omega \mu = \widetilde{R}_T$,
we obtain the bound
\begin{equation}\label{dpmu}
\begin{split}
\left\| \partial_\omega^p \mu(\cdot, \omega)\right\|_{L^2(\Gamma)}
\le \sum_{i=0}^{p} &\left(\sum_{j=0}^{(i+1)(q+1)-1} f_{ij}^p \omega^{j}
\left\|\partial_\omega^{p-i} \widetilde{R}_T(\cdot,
\omega)\right\|_{L^2(\Gamma)}\right)
\end{split}
\end{equation}
for some positive constants $f_{ij}^p > 0$. The desired
inequality~\Cref{recursion_hypothesis_squared_general} follows
directly from \Cref{dpmu} using the relation
$\left\|\sum_{i=1}^p f_i\right\|^2 \le p\sum_{i=1}^p
\left\|f_i\right\|^2$.
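For completeness we recall that this elementary relation follows from the Cauchy-Schwarz inequality applied to the vectors $(1, \ldots, 1)$ and $(\left\|f_1\right\|, \ldots, \left\|f_p\right\|)$:
\begin{equation*}
\Big\| \sum_{i=1}^p f_i \Big\|^2 \le \Big( \sum_{i=1}^p \left\|f_i\right\| \Big)^2 \le p \sum_{i=1}^p \left\|f_i\right\|^2.
\end{equation*}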
To establish the relation \Cref{psit_nongrowth}, finally, for each
integer $\ell = p - i$, $0\leq \ell\leq p$ in~\eqref{dpmu} we
estimate the expression
$\left\|\partial^{\ell}_\omega \widetilde{R}_T(\cdot,
\omega)\right\|_{L^2(\Gamma)}$ where, by assumption for
$\omega \ge \omega_0$ (cf.~\Cref{Aop_def_eqn}),
$\widetilde{R}_T = \left(\gamma^- \partial_\mathbf{n} -
\i\omega\gamma^- \right) \widetilde{H}^f_T$. To do this we note
that, in view of~\eqref{Ht_def_generic} and the jump relations for
the normal derivative of the single layer
potential~\cite[p. 219]{McLean}, for $\omega > \omega_0$ we have
\begin{equation}\label{Rtderiv_def}
\partial^{\ell}_\omega \gamma^- \widetilde{R}_T = \partial^{\ell}_\omega \gamma^- \widetilde{B}^f - \partial^{\ell}_\omega \left(\frac{1}{2}I + K_\omega^* -
\i\omega \gamma^-S_\omega\right) \widetilde{\psi}_{-,T}.
\end{equation}
It therefore suffices to show that each one of the two terms on the
right-hand side of~\Cref{Rtderiv_def} is bounded by a power of
$\omega$. Since $\gamma^-\widetilde{B}^f$ equals the temporal
Fourier transform of the compactly-supported function
$\gamma^- \widetilde{b} = \gamma^+ \widetilde{b}\in H^1(\mathbb{R};
L^2(\Gamma)) \subset L^2(\mathbb{R}; L^2(\Gamma))$ (see the $s$-regularity hypotheses of the present lemma), differentiation under the Fourier-transform
integral sign, which is justified since $\widetilde{b}$ is compactly supported,
tells us that $\partial^{\ell}_\omega \gamma^- \widetilde{B}^f$
equals the temporal Fourier transform of the compactly-supported
function
$(-\i t)^\ell\gamma^+ \widetilde{b}\in L^2(\mathbb{R};
L^2(\Gamma))$. Further, for each $0 \le \ell \le p$ and for all
$\omega \in \mathbb{R}$, using the triangle and Cauchy-Schwarz
inequalities we obtain
\begin{equation}
\label{paley_der_b}
\left\|\partial^{\ell}_\omega \gamma^-\widetilde{B}^f(\cdot,
\omega)\right\|_{L^2(\Gamma)} \le C'
\end{equation}
for some constant $C'$ (that depends on $\widetilde{b}$ and $p$),
which provides the needed estimate of the first right-hand term.
In order to obtain a bound for the second term on the right-hand
side of~\Cref{Rtderiv_def}, in turn, we note that
$\widetilde{\psi}_{-,T}$ is compactly-supported
(cf.~\Cref{psipm_def}) and satisfies
$\widetilde{\psi}_{-,T} \in L^2(\mathbb{R}; L^2(\Gamma))$
(cf. \Cref{psi_Hp}); thus, an argument similar to the one leading
to~\eqref{paley_der_b} yields the bound
\begin{equation}\label{eq:dpi}
\left\|\partial_\omega^{\ell} \widetilde{\psi}^f_{-,T}(\cdot, \omega)\right\|_{L^2(\Gamma)} \le C''
\end{equation}
for all $\ell$, $0 \le \ell \le p$ and all $\omega > 0$,
where $C''$ denotes a constant that, once again, depends on
$\widetilde{b}$ and $p$. Then, employing the operator norm bounds
presented in \Cref{omega_explicit_norms_deriv_SK} for
$\partial_\omega^\ell S_\omega$ and
$\partial_\omega^\ell K_\omega^*$ ($\ell = 0, \ldots, p$) together
with \Cref{eq:dpi} we obtain the second-term bound
\begin{equation}\label{eq:dpii}
\left\|\partial^{\ell}_\omega \left(\left(\frac{1}{2}I + K_\omega^* -
\i\omega \gamma^-S_\omega\right) \widetilde{\psi}_{-,T}(\cdot, \omega)\right)\right\|_{L^2(\Gamma)} \le C'''(1 + |\omega|)
\end{equation}
for all integers $\ell$, $0 \le \ell \le p$ and for all
$\omega > 0$, where $C'''$ denotes a constant independent of
$\omega$.
Finally, in view of~\Cref{Rtderiv_def} and incorporating
\Cref{paley_der_b} and \Cref{eq:dpii} we obtain
\begin{equation}\label{eq:dpiii}
\left\|\partial^\ell_\omega \gamma^- \widetilde{R}_T(\cdot, \omega)\right\|_{L^2(\Gamma)} \le C''''(1 + \omega)
\end{equation}
for all integers $\ell$, $0 \le \ell \le p$ and for all
$\omega > \omega_0$, where $C''''$ denotes a constant independent of
$\omega$. The estimate \Cref{psit_nongrowth} follows directly from
\Cref{dpmu} and~\Cref{eq:dpiii}.
\end{proof}
\begin{rem}\label{rem:breve}
In order to obtain the estimates in \Cref{3d_decay_thm_ii} that are
uniform with respect to $T_0$ it will be necessary to perform
temporal shifts on the data $\widetilde{b}$ (see \Cref{rem:tilde_1})
and the solution $\widetilde{\psi}$---see \Cref{ITvsI0}. Given a
real number $T_0$ we define for a given $\widetilde{b}$ the
``breve'' quantities
\begin{equation}\label{def:breve}
\breve{b}({\mathbf{r}}, t) = \widetilde{b}({\mathbf{r}}, t + T_0),\quad\mbox{and}\quad \breve{\psi}({\mathbf{r}}, t) = \widetilde{\psi}({\mathbf{r}}, t + T_0).
\end{equation}
With reference to \Cref{rem:tilde_1}, note that
$\breve{\psi}({\mathbf{r}}, t)$ equals the solution
$\widetilde{\psi}({\mathbf{r}}, t)$ of \Cref{eq:tdie_sl_generic} with
$\widetilde{b}$ substituted by $\breve{b}$. Note also that if
$\widetilde{b}$ satisfies \Cref{all_t_b_generic} for some $\alpha$
then $\breve{b}$ satisfies \Cref{all_t_b_generic} with $\alpha$
substituted by $\alpha - T_0$. Our consideration of $\breve{\psi}$
will be limited to the choice $I_T$ with $T = 0$, and we define
$\breve{\psi}_{\pm,0}$, $\breve{\psi}_{*,0}$,
$\breve{\psi}_{p,\pm,0}$ and $\breve{\psi}_{p,*,0}$ analogously to
$\psi_{\pm,T}$, $\psi_{*,T}$, $\psi_{p, \pm, T}$ and
$\psi_{p, *, T}$ respectively, in \Cref{timewinddens}, but with
$T=0$.
\end{rem}
Consistent with the conventions laid out in
\Cref{timewinddens} and \Cref{rem:tilde_1} and letting
\begin{equation}\label{breveupmstardef}
\breve{u}_{\pm,0}({\mathbf{r}}, t) = (S \breve{\psi}_{\pm,0})({\mathbf{r}}, t)\quad\mbox{and}\quad \breve{u}_{*,0}({\mathbf{r}}, t) = (S \breve{\psi}_{*,0})({\mathbf{r}}, t),
\end{equation}
we define also the function $\breve{h}_0$ (cf. \Cref{htildedef})
\begin{equation}\label{brevehdef}
\breve{h}_0({\mathbf{r}}, t) = \breve{b}({\mathbf{r}}, t) - \breve{u}_{-,0}({\mathbf{r}}, t),\quad (\mathbf{r}, t) \in \Omega^{inc} \times \mathbb{R},
\end{equation}
and its Fourier transform
\begin{equation}\label{Ht_def_breve}
\breve{H}^f_0(\mathbf{r}, \omega) = \breve{B}^f(\mathbf{r}, \omega) - \left(S_\omega \breve{\psi}_{-,0}^f\right)(\mathbf{r}, \omega),\quad (\mathbf{r}, \omega)\in\Omega^{inc} \times \mathbb{R}.
\end{equation}
Finally, we will denote the right-hand side of
\Cref{CFIE_proof_generic} in the present setting by
\begin{equation}\label{Rt_def_breve}
\breve{R}_0({\mathbf{r}}, \omega) = \gamma^- \partial_\mathbf{n} \breve{H}^f_0({\mathbf{r}}, \omega) - \i\eta\gamma^- \breve{H}^f_0({\mathbf{r}}, \omega), \quad ({\mathbf{r}},\omega) \in \Gamma\times \mathbb{R}.
\end{equation}
\begin{lemma}\label{Rderiv_sobolev_bound}
Let $T_0$ and $\tau$ denote given real numbers, let $q$ denote a
non-negative integer, let $\breve{b}$ be defined as in
\Cref{rem:breve}, and assume that $\breve{b}({\mathbf{r}}, t)$ vanishes for
all $({\mathbf{r}}, t) \in \widebar\Omega\times \left\lbrace I_0\cup \left[0, \infty\right) \right\rbrace$
(where $I_0 = I_T$ as in \Cref{domainofdep} for $T=0$). Additionally, let $\Omega$ satisfy
the $q$-growth condition and assume $\breve{b}$ satisfies the $s$-regularity conditions
\Cref{eq:gamma_Hs_assump} with $s = q$.
Finally, assume that for a given nonnegative integer
$n$, $\breve{\psi}_{*,0}$ satisfies
$\breve{\psi}_{*,0} \in H^{n+1}(I_0;L^2(\Gamma))$. Then for all
integers $m \ge 0$ and all integers $j$ such that $0 \le j \le n$ we
have
\[
\int_0^\infty \omega^{2j} \left\| \partial_\omega^m \breve{R}_0(\cdot,
\omega)\right\|_{L^2(\Gamma)}^2\,\d\omega \le C \left\| \breve{\psi}_{*,0}
\right\|_{H^{j+1}(I_0;\,L^2(\Gamma))}^2,
\]
where $C$ is a constant independent of $T_0$ and $b$.
\end{lemma}
\begin{rem} In order to obtain an upper bound on integrals
containing powers of the temporal variable $t$, the proof of
\Cref{Rderiv_sobolev_bound} relies in an essential manner on the
domain-of-dependence relations~\eqref{h_equiv}
and~\eqref{h_compactsupp} for the function $\breve{h}_0$ (see
\Cref{rem:h_equiv_rem_2})---which, limiting the integration of the
aforementioned powers of $t$ to a bounded interval, yields
meaningful integral estimates necessary for the proof of the lemma.
\end{rem}
\begin{proof}[Proof of \Cref{Rderiv_sobolev_bound}.]
Since $\breve{b}$ satisfies the $s$-regularity conditions with $s = q$, by \Cref{psi_Hp} the density $\breve{\psi}_{-,0}$ that enters the definition \Cref{Rt_def_breve} of $\breve{R}_0$ (cf.\ \Cref{Ht_def_breve}) satisfies $\breve{\psi}_{-,0} \in L^2(\mathbb{R}; L^2(\Gamma))$.
In view of~\eqref{Rt_def_breve} we have
\begin{equation}\label{w2j_proof_first_eqn}
\int_0^\infty \omega^{2j} \left\| \partial_\omega^m \breve{R}_0(\cdot,
\omega)\right\|_{L^2(\Gamma)}^2\,\d\omega = \int_0^\infty \int_\Gamma \left|\widehat{S}_{jm}
\breve{H}^f_0(\mathbf{r}, \omega)\right|^2\,\d\sigma(\mathbf{r})\,\d\omega,
\end{equation}
where the operator $\widehat{S}_{jm}$ is defined as
\begin{equation}\label{Sdef_def}
\widehat{S}_{jm} = \omega^{j}
\partial_\omega^m\left(\gamma^-\partial_{\mathbf{n}} - \i \eta\gamma^-\right).
\end{equation}
In view of the definition~\Cref{Aomega_def_eqn} of the quantity
$\eta$ in~\eqref{Sdef_def}, which depends on whether
$0\leq\omega< \omega_0$ or
$\omega>\omega_0$,~\cref{w2j_proof_first_eqn} can be re-expressed in
the form
\begin{equation}\label{Rderiv_split}
\begin{split}
\int_0^\infty \omega^{2j} &\left\| \partial_\omega^m \breve{R}_0(\cdot,
\omega)\right\|_{L^2(\Gamma)}^2\,\d\omega\\
&= \int_\Gamma\left(\int_0^{\omega_0} + \int_{\omega_0}^\infty\right)\left|\widehat{S}_{jm}
\breve{H}^f_0(\mathbf{r}, \omega)\right|^2\,\d\omega\,\d\sigma(\mathbf{r}).
\end{split}
\end{equation}
Then defining the operators
\begin{equation}\label{S_split_omega}
\widehat{S}_{jm}^1 = \omega^j \partial_\omega^{m}
(\gamma^-\partial_{\mathbf{n}} - \i\omega\gamma^-),\quad\mbox{and}\quad
\widehat{S}_{jm}^2 = \omega^j \partial_\omega^{m}
(\gamma^-\partial_{\mathbf{n}} - \i\gamma^-),
\end{equation}
which clearly satisfy $\widehat{S}_{jm} = \widehat{S}_{jm}^1$ for
$\omega > \omega_0$ and $\widehat{S}_{jm} = \widehat{S}_{jm}^2$ for
$\omega < \omega_0$,~\eqref{Rderiv_split} yields
\begin{align*}
\int_0^\infty \omega^{2j} &\left\| \partial_\omega^m \breve{R}_0(\cdot,
\omega)\right\|_{L^2(\Gamma)}^2\,\d\omega \\
&= \int_\Gamma
\int_{0}^{\omega_0} \left|\widehat{S}_{jm}^2 \breve{H}^f_0(\mathbf{r},
\omega)\right|^2\,\d\omega\,\d\sigma(\mathbf{r})
+\int_\Gamma\int_{\omega_0}^\infty \left|\widehat{S}_{jm}^1 \breve{H}^f_0(\mathbf{r},
\omega)\right|^2\,\d\omega\,\d\sigma(\mathbf{r})\\
&\le \int_\Gamma
\int_{-\infty}^\infty \left|\widehat{S}_{jm}^1 \breve{H}^f_0(\mathbf{r},
\omega)\right|^2\,\d\omega\,\d\sigma(\mathbf{r})
+\int_\Gamma\int_{-\infty}^\infty \left|\widehat{S}_{jm}^2 \breve{H}^f_0(\mathbf{r},
\omega)\right|^2\,\d\omega\,\d\sigma(\mathbf{r}).
\end{align*}
Thus, utilizing the time-domain operators
\begin{equation}\label{S_split_t}
S_{jm}^1 = (-\i\frac{\partial}{\partial t})^j (-\i t)^{m}
(\gamma^-\partial_{\mathbf{n}} - \frac{\partial}{\partial t}\gamma^-) \quad\mbox{and}\quad
S_{jm}^2 = (-\i\frac{\partial}{\partial t})^j (-\i t)^{m}
(\gamma^-\partial_{\mathbf{n}} - \i\gamma^-)
\end{equation}
corresponding to~\eqref{S_split_omega} together with Plancherel's
theorem and equations~\eqref{brevehdef} and~\eqref{Ht_def_breve} we
obtain
\begin{align*}
\int_0^\infty \omega^{2j} &\left\| \partial_\omega^m \breve{R}_0(\cdot,
\omega)\right\|_{L^2(\Gamma)}^2\,\d\omega \\
&\le \int_\Gamma
\int_{-\infty}^\infty \left|S_{jm}^1 \breve{h}_0(\mathbf{r},
t')\right|^2\,\d t'\,\d\sigma(\mathbf{r})
+\int_\Gamma\int_{-\infty}^\infty \left|S_{jm}^2 \breve{h}_0(\mathbf{r},
t')\right|^2\,\d t'\,\d\sigma(\mathbf{r})\\
&\begin{alignedat}{2}=\int_\Gamma
\int_{-\tau}^{T_*} &\left|S_{jm}^1 \breve{u}_{*,0}(\mathbf{r},
t')\right|^2\,\d t'\,\d\sigma(\mathbf{r})\\
&+\int_\Gamma\int_{-\tau}^{T_*} \left|S_{jm}^2 \breve{u}_{*,0}(\mathbf{r},
t')\right|^2\,\d t'\,\d\sigma(\mathbf{r}),
\end{alignedat}
\end{align*}
where, in view of \Cref{rem:breve}, which, in particular, tells us
that $T=0$, and since
$\Gamma = \partial\Omega \subset \widebar\Omega$, the last equality
follows by using~\Cref{h_equiv} and~\Cref{h_compactsupp}. Letting
\begin{equation}\label{v1v2_def}
v_1 = (\gamma^-\partial_{\mathbf{n}} -
\frac{\partial}{\partial t}\gamma^-)\breve{u}_{*,0}, \quad\mbox{and}\quad
v_2 = (\gamma^-\partial_{\mathbf{n}} -
\i\gamma^-)\breve{u}_{*,0},
\end{equation}
and calling $a_\ell = {j \choose \ell}$, by Leibniz' product rule we
have
\begin{alignat*}{2}
\int_0^\infty \omega^{2j} &\left\| \partial_\omega^m \breve{R}_0(\cdot,
\omega)\right\|_{L^2(\Gamma)}^2\,\d\omega\\
&\begin{alignedat}{2}\le \int_\Gamma&
\int_{-\tau}^{T_*} \left| (-\i \frac{\partial}{\partial t'})^j (-\i t')^m
v_1(\mathbf{r}, t')\right|^2\,\d t'\,\d\sigma(\mathbf{r})\\
&+ \int_\Gamma \int_{-\tau}^{T_*} \left| (-\i \frac{\partial}{\partial
t'})^j (-\i t')^m
v_2(\mathbf{r}, t')\right|^2\,\d t'\,\d\sigma(\mathbf{r})
\end{alignedat}\\
&\begin{alignedat}{2}=
\int_\Gamma&
\int_{-\tau}^{T_*} \left|\sum_{\ell=0}^j a_\ell \left(
\left(\frac{\partial}{\partial t'}\right)^{\ell} (t')^m\right)
\left(\left(\frac{\partial}{\partial t'}\right)^{j-\ell}
v_1(\mathbf{r}, t')\right)\right|^2\,\d t'\,\d\sigma(\mathbf{r})\\
&+ \int_\Gamma
\int_{-\tau}^{T_*} \left|\sum_{\ell=0}^j a_\ell \left(
\left(\frac{\partial}{\partial t'}\right)^\ell (t')^m\right)
\left(\left(\frac{\partial}{\partial t'}\right)^{j-\ell}
v_2(\mathbf{r}, t')\right)\right|^2\,\d t'\,\d\sigma(\mathbf{r}).
\end{alignedat}
\end{alignat*}
Substituting the exact value of the derivative
$\frac{\partial^\ell}{\partial t'^\ell} (-\i t')^m$ in these
expressions, we further obtain
\begin{alignat*}{2}
\int_0^\infty \omega^{2j} &\left\| \partial_\omega^m \breve{R}_0(\cdot,
\omega)\right\|_{L^2(\Gamma)}^2\,\d\omega\\
&\begin{alignedat}{2}\le
\int_\Gamma&
\int_{-\tau}^{T_*} \left|\sum_{\ell=0}^j \tilde{a}_\ell (t')^{m-\ell}
\left(\left(\frac{\partial}{\partial t'}\right)^{j-\ell}
v_1(\mathbf{r}, t')\right)\right|^2\,\d t'\,\d\sigma(\mathbf{r})\\
&+ \int_\Gamma
\int_{-\tau}^{T_*} \left|\sum_{\ell=0}^j \tilde{a}_\ell (t')^{m-\ell}
\left(\left(\frac{\partial}{\partial t'}\right)^{j-\ell}
v_2(\mathbf{r}, t')\right)\right|^2\,\d t'\,\d\sigma(\mathbf{r}),
\end{alignedat}
\end{alignat*}
where $\tilde{a}_\ell = \frac{m!}{(m-\ell)!} a_\ell$ for
$m - \ell \ge 0$ and $\tilde{a}_\ell = 0$ for $m - \ell < 0$. Since
the $t'$-integration is limited to the bounded region
$[- \tau, T_*]$ the quantities $|t'|^{m-\ell}$ are bounded by a
constant (which, importantly, is independent of $T_0$---see \Cref{ITvsI0}),
and thus
\begin{alignat*}{2}
\int_0^\infty \omega^{2j} &\left\| \partial_\omega^m \breve{R}_0(\cdot,
\omega)\right\|_{L^2(\Gamma)}^2\,\d\omega\\
&\le C_1\sum_{\ell=0}^j \int_\Gamma \int_{-\tau}^{T_*}
\left(\left|\left(\frac{\partial}{\partial t'}\right)^\ell v_1(\mathbf{r},
t')\right|^2 +\left|\left(\frac{\partial}{\partial t'}\right)^\ell v_2(\mathbf{r},
t')\right|^2 \right)\,\d t'\,\d\sigma(\mathbf{r})\\
&\le C_1\sum_{\ell=0}^j \int_\Gamma \int_{-\infty}^{\infty}
\left(\left|\left(\frac{\partial}{\partial t'}\right)^\ell v_1(\mathbf{r},
t')\right|^2 +\left|\left(\frac{\partial}{\partial t'}\right)^\ell v_2(\mathbf{r},
t')\right|^2 \right)\,\d t'\,\d\sigma(\mathbf{r}),
\end{alignat*}
where the last inequality simply bounds the space-time norm on the
finite temporal region $[- \tau, T_*]$ by the full time integral on
$\mathbb{R}$. In view of~\Cref{v1v2_def}, using once again the Plancherel
theorem, and denoting, per \Cref{FT_conv},
$\breve{U}_{*,0}^f$ the Fourier transform of $\breve{u}_{*,0}$, we
estimate
\begin{alignat*}{2}
\int_0^\infty \omega^{2j} &\left\| \partial_\omega^m \breve{R}_0(\cdot,
\omega)\right\|_{L^2(\Gamma)}^2\,\d\omega\\
&\begin{alignedat}{2}%
\le C_1\sum_{\ell=0}^j &\left(\int_\Gamma \int_{-\infty}^{\infty}
\left|\omega^\ell(\gamma^-\partial_\mathbf{n} -
\i\omega\gamma^-)\breve{U}_{*,0}^f(\mathbf{r},
\omega)\right|^2\,\d\omega\,\d\sigma(\mathbf{r})\right.\\
&\left.+ \int_\Gamma \int_{-\infty}^{\infty}
\left|\omega^\ell(\gamma^-\partial_\mathbf{n} - \i\gamma^-)\breve{U}_{*,0}^f(\mathbf{r},
\omega)\right|^2\,\d\omega\,\d\sigma(\mathbf{r})\right)
\end{alignedat}\\
&\begin{alignedat}{2}%
\le \tilde{C}_1&\int_\Gamma \int_{-\infty}^{\infty}
\left|(1 + \omega^2)^{j/2}(\gamma^-\partial_\mathbf{n} -
\i\omega\gamma^-)\breve{U}_{*,0}^f(\mathbf{r},
\omega)\right|^2\,\d\omega\,\d\sigma(\mathbf{r})\\
&+ \tilde{C}_1\int_\Gamma \int_{-\infty}^{\infty}
\left|(1 + \omega^2)^{j/2}(\gamma^-\partial_\mathbf{n} -
\i\gamma^-)\breve{U}_{*,0}^f(\mathbf{r},
\omega)\right|^2\,\d\omega\,\d\sigma(\mathbf{r}).
\end{alignedat}\\
\end{alignat*}
We thus have established that
\begin{equation}\label{Rderiv_sobolev_norm_estimate}
\begin{split}
\int_0^\infty \omega^{2j} &\left\| \partial_\omega^m \breve{R}_0(\cdot,
\omega)\right\|_{L^2(\Gamma)}^2\,\d\omega\\
&\le \tilde{C}_1\int_{-\infty}^\infty (1 + \omega^2)^j \left\| (\gamma^- \partial_\mathbf{n} -
\i\omega\gamma^-) \breve{U}_{*,0}^f(\cdot, \omega)\right\|_{L^2(\Gamma)}^2\,\d\omega\\
&+ \tilde{C}_1\int_{-\infty}^\infty (1 + \omega^2)^j \left\| (\gamma^-\partial_\mathbf{n} -
\i\gamma^-) \breve{U}_{*,0}^f(\cdot, \omega)\right\|_{L^2(\Gamma)}^2\,\d\omega.
\end{split}
\end{equation}
To complete the proof we now use \Cref{per_freq_rhs_oper_bounds},
which provides the frequency-wise bounds
\begin{equation}\label{Rderiv_normbound_1}
\left\| (\gamma^-\partial_\mathbf{n} - \i\omega\gamma^-) \breve{U}_{*,0}^f(\cdot, \omega)\right\|_{L^2(\Gamma)}
\le D(1 + \omega^2)^{1/2}\left\|\breve{\psi}_{*,0}^f(\cdot,
\omega)\right\|_{L^2(\Gamma)},
\end{equation}
and
\begin{equation}\label{Rderiv_normbound_2}
\left\| (\gamma^-\partial_\mathbf{n} - \i\gamma^-) \breve{U}_{*,0}^f(\cdot, \omega)\right\|_{L^2(\Gamma)}
\le E (1 + \omega^2)^{1/2}\left\|\breve{\psi}_{*,0}^f(\cdot, \omega)\right\|_{L^2(\Gamma)},
\end{equation}
where $D, E > 0$ are constants independent of $\omega$, $\breve{b}$
and $\breve{\psi}$. Substituting~\cref{Rderiv_normbound_1}
and~\cref{Rderiv_normbound_2} in~\Cref{Rderiv_sobolev_norm_estimate}
we conclude that
\begin{alignat*}{2}
\int_0^\infty \omega^{2j} &\left\| \partial_\omega^m \breve{R}_0(\cdot,
\omega)\right\|_{L^2(\Gamma)}^2\,\d\omega\\
&\begin{alignedat}{2}
\le \int_{-\infty}^\infty C_2(1 + &\omega^2)^{j+1}
\left\|\breve{\psi}_{*,0}^f(\cdot,
\omega)\right\|_{L^2(\Gamma)}^2\,\d\omega\\
&+ \int_{-\infty}^\infty C_3(1 + \omega^2)^{j+1}\left\|\breve{\psi}_{*,0}^f(\cdot,
\omega)\right\|_{L^2(\Gamma)}^2\,\d\omega.
\end{alignedat}\\
&\le
C\left\|\breve{\psi}_{*,0}\right\|_{H^{j+1}(\mathbb{R};\,L^2(\Gamma))}^2
=
C\left\|\breve{\psi}_{*,0}\right\|_{H^{j+1}(I_0;\,L^2(\Gamma))}^2,
\end{alignat*}
as desired.
\end{proof}
\Cref{Rderiv_sobolev_bound} is used in what follows to establish the
main building block in the proof of \Cref{3d_decay_thm_ii}, namely,
\Cref{decay_estimate_L2} below. The proof of \Cref{decay_estimate_L2},
in turn, incorporates a smooth windowing procedure which relies on use
of compactly-supported time-domain window functions. For definiteness
we utilize the time-window functions introduced in what follows.
\begin{definition}\label{def_w}
Letting $v(u) = \exp(\frac{2e^{-1/u}}{u-1})$, we define
\begin{equation}\label{wTdef}
w(s) =
\begin{cases}
1 - v(\frac{s + 2s_0}{s_0}), & -2s_0 \le s \le -s_0\\
1, & -s_0 < s < s_0\\
v(\frac{s - s_0}{s_0}), & s_0 \le s \le 2s_0\\
0, & |s| > 2s_0,
\end{cases}
\qquad\mbox{and}\qquad w_\varphi(s) = w(s - \varphi),
\end{equation}
where $\varphi\in\mathbb{R}$ denotes an important ``time-shift''
parameter, and where $s_0 > 0$ is a given parameter that can be
selected arbitrarily and remains unchanged throughout the remainder of
this paper.
\end{definition}
Clearly, the functions $w$ and $w_\varphi$ (i)~Satisfy
$w,w_\varphi\in C^\infty(\mathbb{R})$; (ii)~Equal $1$ in an interval
of length $2s_0$; (iii)~Increase/decrease from $0$ to $1$ in intervals
of length $s_0$; (iv)~Satisfy $0\leq w(s) \le 1$ and
$0\leq w_\varphi(s) \le 1$ for all $s\in\mathbb{R}$.
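Regarding point (i), a brief check of the end-point values of $v$ may be useful: $v(u) \to 1$ as $u \to 0^+$ and $v(u) \to 0$ as $u \to 1^-$, and all derivatives of $v$ tend to zero at both end points (since $e^{-1/u}$ vanishes faster than any power of $u$ as $u \to 0^+$, while $v(u)$ itself vanishes faster than any power of $1-u$ as $u \to 1^-$), so that the four branches in~\eqref{wTdef} match to all orders at the points $s = \pm s_0$ and $s = \pm 2s_0$.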
It is easy to check that for every $\varphi \in\mathbb{R}$ we have
\begin{equation}
\label{eq:POU2}
\begin{split}
& w_{\varphi + s_0}(s) + w_{\varphi + 4 s_0}(s) =1\\
& w_{\varphi + s_0 + 3\ell s_0}(s) = 0\quad (\ell \not\in \{0,1 \})
\end{split}
\qquad\mbox{for} \quad s \in [\varphi, \varphi + 3s_0].
\end{equation}
In particular, the functions $w_{3\ell s_0}(s)$ with $\ell \in\mathbb{Z}$
form a partition of unity, wherein at most two functions in the family do not vanish at any given $s\in\mathbb{R}$.
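For instance, a brief verification of the first relation in~\eqref{eq:POU2} on the subinterval $[\varphi + 2s_0, \varphi + 3s_0]$ (the remaining cases being entirely analogous): in view of~\eqref{wTdef}, the shifted arguments $s - \varphi - s_0 \in [s_0, 2s_0]$ and $s - \varphi - 4s_0 \in [-2s_0, -s_0]$ fall on the decreasing and increasing branches of $w$, respectively, so that
\begin{equation*}
w_{\varphi + s_0}(s) = v\left(\frac{s - \varphi - 2s_0}{s_0}\right)\quad\mbox{and}\quad
w_{\varphi + 4s_0}(s) = 1 - v\left(\frac{s - \varphi - 2s_0}{s_0}\right),
\end{equation*}
whose sum equals one; on the remaining subinterval $[\varphi, \varphi + 2s_0]$, in turn, we simply have $w_{\varphi + s_0}(s) = 1$ and $w_{\varphi + 4s_0}(s) = 0$.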
\begin{lemma}\label{decay_estimate_L2}
Let $n$ and $q$ denote non-negative integers, $n>0$, let $T_0>0$,
let $\widetilde{b}$ be defined as in \Cref{rem:tilde_1}, and assume that
$\widetilde{b}$ vanishes for all $({\mathbf{r}}, t) \in \widebar{\Omega}\times\left \{I_{T_0} \cup [T_0, \infty)\right\}$, with $I_{T_0}$ as in
\Cref{domainofdep}. Additionally, let $\Omega$ satisfy the
$q$-growth condition and assume $\widetilde{b}$ satisfies
the $s$-regularity conditions~\eqref{eq:gamma_Hs_assump} with $s = (n+1)(q+1) + q$.
Then, the functions $\breve{\psi}_{+,0}$ and $\breve{\psi}$ defined in~\Cref{rem:breve} satisfy
\begin{equation}\label{L2_rchidecay}
\left\|w_\varphi \breve{\psi}_{+,0}\right\|_{L^2(\mathbb{R};\,L^2(\Gamma))}^2 \le
C(\Gamma, \tau, n, s_0) \varphi^{-2n}
\left\|\breve{\psi}\right\|_{H^{(n+1)(q+1)}(I_0;\,L^2(\Gamma))}^2,
\end{equation}
for all $\varphi > 0$, where $C(\Gamma, \tau, n, s_0)$ denotes a
constant independent of $\varphi, T_0,$ and $\widetilde{b}$.
\end{lemma}
\begin{proof}
It is useful to first note that since by hypothesis $\widetilde{b}({\mathbf{r}}, t)$ vanishes for
$({\mathbf{r}},t) \in \widebar\Omega \times \left\lbrace I_{T_0}\cup \left[T_0, \infty\right)\right\rbrace$,
$\breve{b}({\mathbf{r}}, t)$ vanishes for all $({\mathbf{r}},t) \in \widebar\Omega \times \left\lbrace I_0\cup \left[0, \infty\right)\right\rbrace$.
Similarly, the $s$-regularity condition hypotheses also are satisfied for $\breve{b}$.
In order to establish the desired decay
estimate~\eqref{L2_rchidecay}, by Plancherel's theorem we may
estimate, instead, the $L^2$ norm of the Fourier transform:
\begin{equation}\label{conv_w_2}
\left\|w_\varphi \breve{\psi}_{+,0}\right\|_{L^2(\mathbb{R};\,L^2(\Gamma))}^2 = \left\|\widehat{w_\varphi \breve{\psi}_{+,0}}\right\|_{L^2(\mathbb{R};\,L^2(\Gamma))}^2 =
\left\|\hat{w}_\varphi \ast
\breve{\psi}_{+,0}^f\right\|_{L^2(\mathbb{R};\,L^2(\Gamma))}^2.
\end{equation}
Using the relation
$\hat{w}_\varphi = {\mathrm{e}}^{-\i\omega \varphi} \hat{w}(\omega)$ we obtain
\begin{equation}\label{conv_w}
\left(\hat{w}_\varphi \ast
\breve{\psi}_{+,0}^f\right)({\mathbf{r}}, \omega) = \int_{-\infty}^\infty {\mathrm{e}}^{-\i \tau \varphi} \hat{w}(\tau) \breve{\psi}_{+,0}^f({\mathbf{r}}, \omega - \tau)\,\d \tau,
\end{equation}
and we proceed to integrate by parts this integral $n$ times with
respect to $\tau$. To do this we note that, (i)~Both
$\hat{w}_\varphi(\omega)$ and
$\breve{\psi}_{+,0}^f (\mathbf{r}, \omega)$ are infinitely
differentiable functions of $\omega$ (in view
of \Cref{def_w}, \Cref{rem:breve} and
\Cref{int_eq_freq_regularity}); (ii)~The Fourier transform
$\hat w(\tau)$ and all of its derivatives tend to zero faster than
any positive power of $\tau$ as $\tau\to\pm\infty$, as befits
the Fourier transform of an infinitely differentiable and compactly
supported function; and (iii)~We have
$\left\|\partial_\omega^m \breve{\psi}_{+,0}^f(\cdot, \omega -
\tau)\right\|_{L^2(\Gamma)} \le C_m|\omega - \tau|^N$ for some
integer $N$ as $|\tau| \to \infty$, as it follows directly from
\Cref{psit_nongrowth} in
\Cref{3d_decay_thm_recursive_bound_general}. Thus integrating by
parts~\eqref{conv_w} and using Leibniz' product rule, we see that
all of the resulting boundary terms at $\tau = \pm\infty$ vanish
and~\eqref{conv_w} becomes
\begin{equation}\label{nIntByParts}
\begin{split}
\left( \hat{w}_\varphi \ast \breve{\psi}_{+,0}^f \right)(\mathbf{r}, \omega) =
\left(\frac{1}{\i\varphi}\right)^n \int_{-\infty}^\infty
{\mathrm{e}}^{-\i\tau \varphi} \left(\vphantom{\sum_{m=0}^n} \right. & \sum_{m=0}^n a_m \left( \partial^{n-m}_\tau
\hat{w}(\tau)\right) \times\\
&\left.\times \left( \vphantom{\sum_{m=0}^n} \partial^m_\omega \breve{\psi}_{+,0}^f(\mathbf{r}, \omega -
\tau)\right)\right)\,\d\tau,
\end{split}
\end{equation}
where $a_m = {n \choose m}$.
In view of~\eqref{conv_w_2} and \Cref{nIntByParts}, calling, for $0 \le m \le n$,
\begin{equation}\label{w_psi}
F_{nm}^\varphi(\tau) = {\mathrm{e}}^{-\i \tau \varphi}\partial^{n-m}_\tau \hat{w}(\tau) \quad\mbox{and}\quad G_m({\mathbf{r}},\tau) = \partial^m_\tau \breve{\psi}_{+,0}^f({\mathbf{r}}, \tau),
\end{equation}
and using the relation $\| \sum_{i=1}^n f_i \|^2 \le n \sum_{i=1}^n \| f_i\|^2$
we obtain
\begin{equation}
\begin{split}
\left\| w_\varphi \breve {\psi}_{+,0} \right.&\negmedspace\left.\vphantom{w_\varphi
\breve{\psi}_{+,0}}\right\|_{L^2(\mathbb{R};\,L^2(\Gamma))}^2
\le (n+1)\varphi^{-2n} \sum_{m=0}^n |a_m|^2 \left\|
F_{nm}^\varphi \ast G_m\right\|_{L^2(\mathbb{R}; L^2(\Gamma))}^2.\label{pre_young}
\end{split}
\end{equation}
In order to obtain a bound on the norms on the right-hand side
of~\eqref{pre_young} we rely on the Bochner version of Young's
convolution inequality~\cite[Lem.\ 1.2.30]{Weis}, and we thus first
establish that relevant hypotheses hold, namely, that
\begin{equation}\label{w_psi_spaces}
F_{nm}^\varphi \in L^1(\mathbb{R})\quad\mbox{and}\quad G_m \in L^2(\mathbb{R}; L^2(\Gamma)) \quad\mbox{for}\quad 0 \le m \le n.
\end{equation}
The first of these relations is easily established: since $w$ is
smooth and compactly supported, it is in the Schwartz class, and
thus~\cite{Folland}, its Fourier transform is also in the Schwartz
class---and, in particular, $\hat{w}$ and all of its derivatives are
elements of $L^1(\mathbb{R})$. To verify the second relation
in~\eqref{w_psi_spaces}, on the other hand, we first note from
\Cref{3d_decay_lemma_2ndkind} that, for $\omega\geq 0$,
$\breve{\psi}_{+,0}^f$ satisfies the integral equation
\[
\left(A_\omega \breve{\psi}_{+,0}^f\right)(\mathbf{r}, \omega) = \breve{R}_0(\mathbf{r}, \omega),
\]
where $\breve{R}_0(\mathbf{r}, \omega)$ is defined by
\Cref{Rt_def_breve}. It then follows
from~\cref{recursion_hypothesis_squared_general} in
\Cref{3d_decay_thm_recursive_bound_general} that, for $m \le n$, there
exist constants $d_{ij}^m > 0$ such that for all
$\omega \in \mathbb{R}$, $\omega \neq \pm \omega_0$,
\begin{equation}\label{bound_on_deriv_psiplus}
\begin{split}
\left\| G_m(\cdot, \omega)\right\|_{L^2(\Gamma)}^2
\le \sum_{i=0}^{m} &\sum_{j=0}^{(i+1)(q+1)-1} d_{ij}^m \omega^{2j}
\left\|\partial_\omega^{m-i} \breve{R}_0(\cdot,
|\omega|)\right\|^2_{L^2(\Gamma)}.
\end{split}
\end{equation}
Integrating, we obtain
\begin{equation}\label{fixmeplease}
\begin{split}
\int_{-\infty}^\infty \left\|\vphantom{G_m(\cdot,
\omega)}\right.&\hspace{-1.4mm}\left.G_m(\cdot,
\omega)\right\|_{L^2(\Gamma)}^2 \,\d\omega\\
&\le \sum_{i=0}^{m} \sum_{j=0}^{(i+1)(q+1)-1} 2d_{ij}^m \int_0^\infty \omega^{2j}
\left\|\partial_\omega^{m-i} \breve{R}_0(\cdot,
\omega)\right\|_{L^2(\Gamma)}^2\,\d\omega.
\end{split}
\end{equation}
We now use \Cref{Rderiv_sobolev_bound} to estimate each term on the
right-hand side of \Cref{fixmeplease}. The main hypothesis of that lemma,
which, in the present context, amounts to the statement
$\breve{\psi}_{*,0} \in H^{(n+1)(q+1)}(I_0;L^2(\Gamma))$, follows from
\Cref{psi_Hp} since $\breve{b}$ satisfies the $s$-regularity
conditions~\eqref{eq:gamma_Hs_assump} with $s = (n+1)(q+1)+q$ (which
hold in view of the $s$-regularity condition hypotheses for
$\widetilde{b}$ and the definition~\cref{def:breve} of $\breve{b}$).
For $m \le n$ we may therefore write
\begin{equation}\label{partialomega_psibreve}
\begin{split}
\int_{-\infty}^\infty \left\| G_m\right.&\hspace{-1.4mm}\left.
(\cdot,
\omega)\right\|_{L^2(\Gamma)}^2\,\d\omega\\
&\le C_1\left\| \breve{\psi}_{*,0}\right\|_{H^{(m+1)(q+1)}(I_0;\,L^2(\Gamma))}^2 \le C_2\left\|
\breve{\psi}_{*,0}\right\|_{H^{(n+1)(q+1)}(I_0;\,L^2(\Gamma))}^2,
\end{split}
\end{equation}
with $C_1$ and $C_2 > 0$ independent of $\widetilde{b}$ and $T_0$, which, in
particular, establishes the second relation in \Cref{w_psi_spaces}.
Having established \Cref{w_psi_spaces} we may apply Young's
convolution inequality~\cite[Lem.\ 1.2.30]{Weis} to each of the terms
in the right-hand sum in \Cref{pre_young} and obtain
\begin{equation}\label{3d_decay_thm_main_estimate}
\begin{split}
\left\| w_\varphi \breve{\psi}_{+,0} \right\|_{L^2(\mathbb{R};\,L^2(\Gamma))}^2
&\le C(n) \varphi^{-2n} \sum_{m=0}^n |a_m|^2
\left\|F_{nm}^\varphi \right\|_{L^1}^2
\left\|G_m \right\|_{L^2(\mathbb{R}; L^2(\Gamma))}^2\\
&\le C(n, s_0) \varphi^{-2n}
\sum_{m=0}^n \left\|G_m \right\|^2_{L^2(\mathbb{R};\,L^2(\Gamma))}
\end{split}
\end{equation}
(since, of course,
$\left\| F_{nm}^\varphi\right\|_{L^1(\mathbb{R})} =
\left\|\partial^{n-m}\hat{w}\right\|_{L^1(\mathbb{R})}$), which using
\Cref{partialomega_psibreve} yields
\begin{equation*}
\begin{split}
\left\|w_\varphi \breve{\psi}_{+,0}\right\|_{L^2(\mathbb{R};\,L^2(\Gamma))}^2 \le C(\Gamma, \tau, n, s_0)\varphi^{-2n}\sum_{i=0}^n C_i\left\|
\breve{\psi}_{*,0}\right\|_{H^{(i+1)(q+1)}(I_0;\,L^2(\Gamma))}^2.
\end{split}
\end{equation*}
Appropriately adjusting the constant, in view of the continuity of the
inclusion map
$\iota : H^{(i+1)(q+1)}(I_0; L^2(\Gamma)) \to H^{(n+1)(q+1)}(I_0;
L^2(\Gamma))$ for all $i \le n$ we obtain
\begin{equation*}
\left\|w_\varphi \breve{\psi}_{+,0}\right\|_{L^2(\mathbb{R};\,L^2(\Gamma))}^2 \le C(\Gamma, \tau, n, s_0) \varphi^{-2n}
\left\|\breve{\psi}_{*,0}\right\|_{H^{(n+1)(q+1)}(I_0;\,L^2(\Gamma))}^2.
\end{equation*}
The claimed relation \Cref{L2_rchidecay} follows from this inequality
since, according to \Cref{rem:breve},
$\breve{\psi}_{*,0}({\mathbf{r}}', t') = w_-(t' + T_* +
\tau)w_+(t')\breve{\psi}({\mathbf{r}}',t')$, and since each derivative of the
functions $w_-$ and $w_+$ is uniformly bounded.
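A minimal sketch of this last step, with $\chi(t') = w_-(t' + T_* + \tau) w_+(t')$: by the Leibniz rule, for each integer $k \le (n+1)(q+1)$ we have
\begin{equation*}
\left\|\partial_{t'}^k \breve{\psi}_{*,0}(\cdot, t')\right\|_{L^2(\Gamma)} \le \sum_{j=0}^{k} \binom{k}{j} \left\|\chi^{(j)}\right\|_{L^\infty(\mathbb{R})} \left\|\partial_{t'}^{k-j} \breve{\psi}(\cdot, t')\right\|_{L^2(\Gamma)},
\end{equation*}
so that, squaring and integrating over $I_0$, we obtain $\left\|\breve{\psi}_{*,0}\right\|_{H^{(n+1)(q+1)}(I_0;\,L^2(\Gamma))} \le C \left\|\breve{\psi}\right\|_{H^{(n+1)(q+1)}(I_0;\,L^2(\Gamma))}$ with $C$ depending only on the window functions $w_\pm$.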
\end{proof}
On the basis of the preparatory lemmas established above in this
section we now present the proof of \Cref{3d_decay_thm_ii}.
\begin{proof}[Proof of \Cref{3d_decay_thm_ii}]
Letting (cf.\ \Cref{def:breve})
\begin{equation}\label{psi_check}
\breve{b}({\mathbf{r}}, t) = b({\mathbf{r}}, t + T_0)\quad\mbox{and}\quad \breve{\psi}({\mathbf{r}}', \theta) = \psi({\mathbf{r}}', \theta + T_0),
\end{equation}
instead of \Cref{density_Hp_time_bound_Tn}
we establish the equivalent $\theta$-decay ($\theta = t - T_0 > 0$) estimate
\begin{equation}\label{density_Hp_time_bound_Tn_shifted}
\left\|\breve{\psi}\right\|_{H^p([\theta, \infty); L^2(\Gamma))} \le C(\Gamma,
\tau, p, n) \theta^{1/2-n} \left\|\breve{\psi}\right\|_{H^{p + n(q+1)}(I_0;
L^2(\Gamma))} < \infty,
\end{equation}
where $I_0$ (i.e. $I_T$ for $T=0$) is defined per
\Cref{domainofdep}. Clearly, $\breve{\psi}$ in \Cref{psi_check} satisfies the integral equation
\begin{equation}\label{brevepsi_int_eq}
\left(S\breve{\psi}\right)(\mathbf{r}, t) = \gamma^+ \breve{b}(\mathbf{r}, t)\quad\mbox{for}\quad (\mathbf{r}, t) \in
\Gamma\times\mathbb{R},
\end{equation}
which coincides with \Cref{eq:tdie_sl_generic} for
$\widetilde{b}({\mathbf{r}}, t) = \breve{b}({\mathbf{r}}, t) = b({\mathbf{r}}, t + T_0)$
and $\widetilde{\psi} = \breve{\psi}$.
To establish \Cref{density_Hp_time_bound_Tn_shifted} we first obtain
certain decay results on a sequence of bounded time intervals and then
produce the final estimate by summing over the sequence. Using
\Cref{eq:POU2} with $\varphi = \theta + 3\ell s_0$ together with the
identity $\breve{\psi}(\cdot, t') = \breve{\psi}_{+,0}(\cdot, t')$ for
$t' > 0$ (see \Cref{rem:breve}) and the relation
$(A +B)^2\leq 2A^2 +2 B^2$ we obtain
\begin{equation}\label{ptwise_tprime_bound}
\begin{split}
\left\|\breve{\psi}\right\|^2_{L^2([\theta, \infty); L^2(\Gamma))} &= \int_\theta^\infty\hspace{-2mm} \int_\Gamma \left( \sum_{\ell =0}^\infty (w_{\theta + 3\ell s_0 + s_0} + w_{\theta + 3(\ell+1) s_0 + s_0}) \breve{\psi}_{+,0} ({\mathbf{r}}', t')\hspace{-1mm}\right)^2 \hspace{-2mm}\d\sigma({\mathbf{r}}') \d t'\\
&\le 4 \sum_{\ell=0}^\infty \left\|w_{\theta +
3\ell s_0 + s_0} \breve{\psi}_{+,0}\right\|^2_{L^2(\mathbb{R}; L^2(\Gamma))}.
\end{split}
\end{equation}
The same argument applied to $\partial^p \breve{\psi}$ tells us that
\begin{equation}\label{Wt_to_sum_ii}
\left\|\partial^p \breve{\psi}\right\|^2_{L^2([\theta, \infty); L^2(\Gamma))} \le 4
\sum_{\ell=0}^\infty \left\|w_{\theta +
3\ell s_0 + s_0} \breve{\psi}_{p,+,0}\right\|^2_{L^2(\mathbb{R}; L^2(\Gamma))}.
\end{equation}
Combining \Cref{ptwise_tprime_bound} and
\Cref{Wt_to_sum_ii} and using \Cref{def:sob_bochner} we obtain
\begin{equation}\label{psik_to_wT}
\begin{split}
\left\|\breve{\psi}\right\|^2_{H^p([\theta,
\infty);\,L^2(\Gamma))} &\le 4 \sum_{\ell = 0}^\infty \left(\left\| w_{\theta + 3\ell s_0 + s_0} \breve{\psi}_{+,0} \right\|_{L^2(\mathbb{R};
L^2(\Gamma))}^2\right.\\
&\quad\quad\quad\quad\quad\left. + \left\| w_{\theta + 3\ell s_0 + s_0} \breve{\psi}_{p,+,0} \right\|_{L^2(\mathbb{R};
L^2(\Gamma))}^2\right).
\end{split}
\end{equation}
To complete the proof we now estimate the $L^2$ norms on the
right-hand side of~\eqref{psik_to_wT}, that is to say, the norms
$\left\| w_{\varphi} \breve{\psi}_{+,0} \right\|_{L^2(\mathbb{R};
L^2(\Gamma))}$ and
$\left\| w_{\varphi} \breve{\psi}_{p,+,0} \right\|_{L^2(\mathbb{R};
L^2(\Gamma))}$ for $\varphi = \theta + 3\ell s_0 + s_0$,
$\ell = 0, 1, \cdots$; the desired result then follows by addition of
the resulting estimates.
\paragraph{\bf Decay estimate for
$\left\| w_{\varphi} \breve{\psi}_{+,0} \right\|_{L^2(\mathbb{R};
L^2(\Gamma))}^2$.}
Since by assumption $b$ satisfies the $s$-regularity conditions~\eqref{eq:gamma_Hs_assump}
with $s=(n+1)(q+1)+q$, and since, by hypothesis, $\breve{b}({\mathbf{r}}', t')$ vanishes for
$({\mathbf{r}}',t') \in \widebar{\Omega} \times \left\lbrace I_0\cup[0, \infty)\right\rbrace$,
\Cref{decay_estimate_L2} with $\widetilde{b} = b$ yields
\begin{equation}\label{rchipsi_L2_estimate}
\left\|w_\varphi \breve{\psi}_{+,0}\right\|_{L^2(\mathbb{R};\,L^2(\Gamma))}^2 \le
C(\Gamma, \tau, n, s_0) \varphi^{-2n}
\left\|\breve{\psi}\right\|_{H^{(n+1)(q+1)}(I_0;\,L^2(\Gamma))}^2
\end{equation}
for arbitrary $\varphi > 0$, where $C$ is a constant independent of
$\varphi$, $T_0$, and $b$.
\paragraph{\bf Decay estimate for
$\left\| w_{\varphi} \breve{\psi}_{p,+,0} \right\|_{L^2(\mathbb{R};
L^2(\Gamma))}$ ($p > 0$).}
\Cref{int_eq_pderiv} tells us that
\begin{equation}\label{eq:pderiv_brevepsi}
\left(S \partial^p \psi\right)({\mathbf{r}}, t) = \gamma^+ \partial^p b({\mathbf{r}}, t),\quad\mbox{for}\quad
({\mathbf{r}}, t) \in \Gamma \times \mathbb{R}.
\end{equation}
We now apply \Cref{decay_estimate_L2} to obtain decay estimates for
$w_\varphi \breve{\psi}_{p,+,0}$. Since (i) $b({\mathbf{r}}', t')$ vanishes for
$({\mathbf{r}}',t') \in \widebar{\Omega} \times \left\lbrace I_{T_0} \cup [T_0, \infty)\right\rbrace$
so does $\partial^p b$ and (ii) $b$ satisfies
the $s$-regularity conditions~\eqref{eq:gamma_Hs_assump}
with $s=(n+1)(q+1)+q$ (by the hypotheses on $b$),
\Cref{decay_estimate_L2} with $\widetilde{b} = \partial^p b$ yields
the estimate
\begin{equation}\label{L2_deriv_rchidecay}
\left\| w_\varphi \breve{\psi}_{p,+,0} \right\|_{L^2(\mathbb{R}; L^2(\Gamma))}^2 \le
C(\Gamma, \tau, n, s_0) \varphi^{-2n} \left\|\partial^p
\breve{\psi}\right\|_{H^{(n+1)(q+1)}(I_0;\,L^2(\Gamma))}^2,
\end{equation}
for arbitrary $\varphi > 0$, where $C$ is again a constant independent of
$\varphi$, $T_0$, and $b$.
\paragraph{\bf Combined decay estimate.}
Using \Cref{psik_to_wT}, \Cref{rchipsi_L2_estimate}, and \Cref{L2_deriv_rchidecay} we obtain
\begin{equation}\label{Hp_tdecay_i}
\begin{split}
\left\|\breve{\psi}\right\|^2_{H^p([\theta,
\infty);\,L^2(\Gamma))} &\le 4 \sum_{\ell = 0}^\infty \left(\left\| w_{\theta + 3\ell s_0 + s_0} \breve{\psi}_{+,0} \right\|_{L^2(\mathbb{R};
L^2(\Gamma))}^2\right.\\
&\quad\quad\quad\left. + \left\| w_{\theta + 3\ell s_0 + s_0} \breve{\psi}_{p,+,0} \right\|_{L^2(\mathbb{R};
L^2(\Gamma))}^2\right)\\
&\le C_1 \sum_{\ell = 0}^\infty \left( (\theta + 3\ell s_0 + s_0)^{-2n}
\left\|\breve{\psi}\right\|^2_{H^{(n+1)(q+1)}(I_0;\,L^2(\Gamma))}\right.\\
&\quad\quad\quad\left. + (\theta + 3\ell s_0 + s_0)^{-2n}
\left\|\partial^p \breve{\psi}\right\|^2_{H^{(n+1)(q+1)}(I_0;\,L^2(\Gamma))} \right),
\end{split}
\end{equation}
where $C_1$ is a constant dependent only on $\Gamma$,
$\tau$, $n$ and $s_0$. It follows that
\begin{equation}\label{Hp_tdecay_ii}
\begin{split}
\left\|\breve{\psi}\right\|^2_{H^p([\theta, \infty);\,L^2(\Gamma))} &\le C \left\|\breve{\psi}\right\|^2_{H^{p + (n+1)(q+1)}(I_0;\,L^2(\Gamma))}
\sum_{\ell = 0}^\infty (\theta + 3\ell s_0 + s_0)^{-2n},
\end{split}
\end{equation}
where, again, $C = C(\Gamma, \tau, n, s_0) > 0$. Since $\theta > 0$
the term $(\theta + 3\ell s_0 + s_0)^{-2n}$ is a strictly decreasing and
positive function of $\ell$, and, thus, estimating the sum by an
integral it is easy to check that
\[
\sum_{\ell=0}^\infty (\theta + 3\ell s_0 + s_0)^{-2n} \le \widetilde{C} \theta^{-2n+1},
\]
which in conjunction with \Cref{Hp_tdecay_ii} establishes
\Cref{density_Hp_time_bound_Tn_shifted} which, as indicated at the
beginning of this proof, is equivalent to the desired
estimate~\cref{density_Hp_time_bound_Tn}. The
result~\cref{density_unif_time_bound_Tn}, finally, follows from
\cref{density_Hp_time_bound_Tn} with $p=1$ together with
\Cref{sob_lemma}. The proof is now complete.
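For the reader's convenience, the elementary computation behind the
sum-to-integral bound used above can be recorded explicitly (a minimal
sketch, valid for $n \ge 1$ so that the integral converges):
\begin{equation*}
\begin{split}
\sum_{\ell=0}^\infty (\theta + 3\ell s_0 + s_0)^{-2n}
&\le (\theta + s_0)^{-2n} + \int_0^\infty (\theta + 3 x s_0 + s_0)^{-2n}\,\d x\\
&= (\theta + s_0)^{-2n} + \frac{(\theta + s_0)^{-2n+1}}{3 s_0 (2n-1)}
\le \left(\frac{1}{s_0} + \frac{1}{3 s_0 (2n-1)}\right)\theta^{-2n+1},
\end{split}
\end{equation*}
so that one may take $\widetilde{C} = s_0^{-1} + \left(3 s_0 (2n-1)\right)^{-1}$.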
\end{proof}
\section{Introduction}
\label{sec:intro}
Neural machine translation~\cite{Bahdanau2015NeuralMT,Wu2016GooglesNM,Vaswani2017AttentionIA} has achieved great progress and reached near human-level performance. However, most current sequence-to-sequence NMT models translate sentences individually.
In such cases, discourse phenomena, such as pronominal anaphora, lexical consistency, and document coherence, which depend on long-range context extending beyond a few previous sentences, are neglected~\cite{Bawden2017EvaluatingDP}. As a result, \citet{laubli2018has} find that human raters still show a markedly stronger preference for human translations when evaluating at the level of documents.
Many methods have been proposed to improve document-level neural machine translation (DNMT). Among them, the mainstream studies focus on the model architecture modification, including hierarchical attention~\cite{Wang2017ExploitingCC,Werlen2018DocumentLevelNM,Tan2019HierarchicalMO}, additional context extraction encoders or query layers~\cite{Jean2017NeuralMT,Bawden2017EvaluatingDP,Zhang2018ImprovingTT,Voita2018ContextAwareNM,Kuang2018FusingRI,Maruf2019SelectiveAF,Yang2019EnhancingCM,Jiang2019DocumentlevelNM,zheng2020toward,yun2020improving,xuefficient}, and cache-like memory network~\cite{Maruf2018DocumentCN,Kuang2018ModelingCF,Tu2018LearningTR}.
These studies come up with different structures in order to include discourse information, namely, introducing adjacent sentences into the encoder or decoder as document contexts. Experimental results show effective improvements on universal translation metrics like BLEU~\cite{Papineni2001BleuAM} and document-level linguistic indices~\cite{Tiedemann2017NeuralMT,Bawden2017EvaluatingDP,Werlen2017ValidationOA,Mller2018ALT,Voita2018ContextAwareNM,Voita2019WhenAG}.
Unlike previous work, this paper does not aim at introducing a novel model. Instead, we hope to answer the following question: Is the basic sequence-to-sequence model strong enough to directly handle document-level translation? To this end, we head back to the original Transformer~\cite{Vaswani2017AttentionIA} and conduct literal document-to-document (Doc2Doc) training.
Though many studies report negative results of naive Doc2Doc translation~\cite{Zhang2018ImprovingTT,Liu2020MultilingualDP}, we successfully activate it with \textit{Multi-resolutional Training}, which involves multiple levels of sequences.
It turns out that end-to-end document translation is not only feasible but also stronger than sentence-level models and previous studies. Furthermore, if assisted by an extra sentence-level corpus, which can be obtained much more easily, the model can significantly improve the performance and achieve state-of-the-art results. It is worth noting that our method does not change the model architecture and needs no extra parameters.
Our experiments are conducted on nine document-level datasets, including TED (ZH-EN, EN-DE), News (EN-DE, ES-EN, FR-EN, RU-EN), Europarl (EN-DE), Subtitles (EN-RU), and a newly constructed News dataset (ZH-EN). Additionally, two sentence-level datasets are adopted in further experiments, including Wikipedia (EN-DE) and WMT (ZH-EN).
Experiment results show that our strategy outperforms previous methods on a comprehensive set of metrics, including BLEU, four lexical indices, three newly proposed assistant linguistic indicators, and human evaluation. In addition to serving as evidence of improvement, our newly proposed document-level datasets and metrics can also be a valuable contribution to the community.
\section{Re-examining Recent DNMT Studies}
\label{sec:reg}
For DNMT, though many improvements have been reported, a couple of studies have challenged these results~\cite{Kim2019WhenAW,Jwalapuram2020CanYC,li2020does}. We also find that some of the previous gains should, to some extent, be attributed to overfitting.
The most widely used datasets in previous work are \textit{News Commentary} and \textit{TED Talks}, which contain only 200 thousand sentences.
The small scale of these datasets gives rise to frequent overfitting, and the distribution of the test set is highly similar to that of the training set.
Moreover, some works even conduct an unfair comparison, with dropout=0.1 for sentence-level models and dropout=0.2 for document-level models~\cite{Maruf2019SelectiveAF,Yang2019EnhancingCM,zheng2020toward}.
As a result, parameter regularization and overfitting on small datasets make the improvements not solid enough.
To verify this assumption,
we retrain sentence-level models with different hyperparameters. We follow the datasets provided by \citet{Maruf2019SelectiveAF} and \citet{zheng2020toward}, including \textit{TED} (ZH-EN/EN-DE), \textit{News} (EN-DE), and \textit{Europarl} (EN-DE), as well as all the model architecture settings they adopt, including a four-layer Transformer base version.
\begin{table}[ht]\scriptsize
\centering
\begin{tabular}{l|c|ccc}
\hline
\multirow{2}{*}{Models} & ZH-EN & \multicolumn{3}{c}{EN-DE} \\
& TED & TED & News & Europarl \\ \hline
Transformer-base (dropout=0.1) & 17.32 & 23.58 & 22.10 & \textbf{31.70} \\
Transformer-base (dropout=0.2) & 18.87 & 24.70 & 24.36 & 31.44 \\
Transformer-base (dropout=0.3) & \textbf{19.21} & \textbf{25.19} & 24.98 & 30.56 \\
\hline
DocT~\cite{Zhang2018ImprovingTT} & - & 24.00 & 23.08 & 29.32 \\
HAN~\cite{Werlen2018DocumentLevelNM} & 17.90 & 24.58 & \textbf{25.03} & 28.60 \\
SAN~\cite{Maruf2019SelectiveAF} & - & 24.42 & 24.84 & 29.75 \\
QCN~\cite{Yang2019EnhancingCM} & - & 25.19 & 22.37 & 29.82 \\
MCN~\cite{zheng2020toward} & 19.10 & 25.10 & 24.91 & 30.40 \\ \hline
\end{tabular}
\captionsetup{font={footnotesize}}
\caption{Document translation experiments on ZH-EN and EN-DE. ``-'' means not provided. Only the results of TED \& News with dropout=0.1 and a much lower score of Europarl are reported in previous work. However, Transformer-base with dropout=0.3 for TED \& News and a strong baseline of Europarl outperform almost all other methods.}
\label{tab:reg}
\end{table}
As is shown in Table~\ref{tab:reg}, we surprisingly find that simply employing a larger dropout can eliminate all the improvements gained by previous work.
For \textit{TED}, the setting of dropout=0.2 boosts the baseline by more than 1.0 BLEU, which immediately marginalizes the previous advances, while the setting of dropout=0.3 outperforms all the previous studies.
When it comes to \textit{News}, though state-of-the-art results are yet to be obtained, the gap between sentence and document models has been largely narrowed.
As for \textit{Europarl}, a much higher baseline has been easily achieved, which also makes other improvements not solid enough.
Our results show that preceding experiments lack a comparison with a strong baseline. An important proportion of the improvements may come from the regularization of the models, since they bring in extra parameters for context encoders or hierarchical attention weights. However, such regularization can also be achieved in sentence-level models and is not targeted at improving document coherence. Essentially, the small scale of the related datasets and the identically distributed test sets make the improvements questionable.
\citet{Kim2019WhenAW} draw the same conclusion that well-regularized or pre-trained sentence-level models can beat document-level models in the same settings. They examine the translations and find that most improvements are not from coreference or lexical choice but are ``not interpretable''. Similarly, \citet{Jwalapuram2020CanYC} adopt a wide evaluation and find that the existing context-aware models do not improve discourse-related translations consistently across languages and phenomena. Also, \citet{li2020does} find that the extra context encoders act more like a noise generator and the
BLEU improvements mainly come from robust training instead of the leverage of contextual information. All three studies call for stronger baselines for a fair comparison.
We suggest that the current research tendency in DNMT should be reviewed, since it is hard to tell whether the improvements come from better document coherence or just normal regularization, especially as complicated modules are introduced. Therefore, as a simpler alternative, we head back to the original, concise style, using an end-to-end training framework to cope with document translation.
\section{Doc2Doc: End-to-End DNMT}
\label{sec:context}
In this section, we attempt to analyze the different training patterns for DNMT. Firstly, let us formulate the problem.
Let $D_x=\{x^{(1)}, x^{(2)},\cdots, x^{(M)}\}$ be a source-language document containing $M$ source sentences.
The goal of the document-level NMT is to translate the document $D_x$ in language $x$ to a document $D_y$ in language $y$. $D_y=\{y^{(1)}, y^{(2)},\cdots, y^{(N)}\}$.
We use $L_y^{(i)}$ to denote the sentence length of $y^{(i)}$.
Previous works translate a document sentence by sentence, regarding DNMT as a step-by-step sentence generation problem (Doc2Sent):
\begin{equation}\small
\mathcal{L}_\text{Doc2Sent}= -\sum_{i=1}^{N}\sum_{j=1}^{L_y^{(i)}} \log p_\theta(y_j^{(i)}|y_{(<j)}^{(i)},x^{(i)},S^{(i)},T^{(i)}),
\label{eq:dsneglike}
\end{equation}
$S^{(i)}$ is the context on the source side; it depends on the model architecture and comprises only two or three sentences in many works. Most current works focus on $S^{(i)}$, utilizing hierarchical attention or extra encoders. $T^{(i)}$ is the context on the target side, which is involved in only a few works; they usually make use of a topic model or word cache to form $T^{(i)}$.
Different from Doc2Sent, we propose to resolve document translation with the end-to-end, namely document-to-document (Doc2Doc) pattern as:
\begin{equation}\small
\mathcal{L}_\text{Doc2Doc}= -\sum_{i=1}^{\sum_{j=1}^{N} L_y^{(j)}} \log p_\theta(y_i|y_{<i},D_x),
\label{eq:dneglike}
\end{equation}
where $D_x$ is the complete context on the source side, and $y_{<i}$ is the complete historical context on the target side.
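To make the contrast between the two patterns concrete, the following minimal sketch (illustrative only: the whitespace joining, the variable names, and the context-window size are assumptions rather than our exact preprocessing) shows how training pairs would be assembled under each pattern.
\begin{verbatim}
def doc2sent_examples(src_doc, tgt_doc, context_size=2):
    """One pair per target sentence: the source carries only the current
    sentence plus a small window of preceding sentences (S^(i))."""
    examples = []
    for i, (src_sent, tgt_sent) in enumerate(zip(src_doc, tgt_doc)):
        context = src_doc[max(0, i - context_size):i]
        examples.append((" ".join(context + [src_sent]), tgt_sent))
    return examples

def doc2doc_examples(src_doc, tgt_doc):
    """One pair per document: the whole source document is mapped to the
    whole target document as a single long sequence."""
    return [(" ".join(src_doc), " ".join(tgt_doc))]
\end{verbatim}
Under Doc2Sent the decoder never conditions on target-side history beyond the current sentence, whereas under Doc2Doc it conditions on all previously generated target tokens of the document.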
\subsection{Why We Dive into Doc2Doc?}
\label{subsec:why}
\paragraph{Full Source Context:}
First, many Doc2Sent studies show that including more preceding sentences can harm the results~\cite{Werlen2018DocumentLevelNM,Zhang2018ImprovingTT,Tu2018LearningTR}. Therefore, many Doc2Sent works are effectively ``a couple of sentences to sentence'' since they only involve two or three preceding sentences as context.
However, broader contexts provide more information, which should theoretically lead to further improvements.
Thus, we revisit the use of the full context and choose Doc2Doc, as it is required to take all the source-side context into account.
\paragraph{Full Target Context:}
Second, many Doc2Sent works abandon the target-side historical context, and some even claim that it is harmful to translation quality~\cite{Wang2017ExploitingCC,Zhang2018ImprovingTT,Tu2018LearningTR}.
However, once the cross-sentence language model is discarded, some problems, such as tense mismatch (especially when the source language is tenseless like Chinese), may occur.
Therefore, we revisit the use of the full context and choose Doc2Doc, as it treats the whole document as one sequence and can naturally take advantage of all the target-side historical context.
\paragraph{Relaxed Training:} Third, Doc2Sent restricts the training scenario. Previous works focus on adjusting the model structure to feed preceding source sentences, so the training data has to come in the form of consecutive sentences to match the model input. As a result, it is hard to use large numbers of piecemeal parallel sentences.
Such a rigid form of training data also greatly limits the model potential because the scale of parallel sentences can be tens of times of parallel documents. On the contrary, Doc2Doc can naturally absorb all kinds of sequences, including sentences and documents.
\paragraph{Simplicity:} Last, Doc2Sent inevitably introduces extra model modules with extra parameters in order to capture contextual information. It complicates the model architecture, making it hard to renovate or generalize. On the contrary, Doc2Doc does not change the model structure and brings in no additional parameters.
\begin{table*}[htb]\scriptsize
\centering
\begin{tabular}{cccccccc}
\hline
Group & Datasets & Source & Language & N\_Sent & N\_Doc & Development Sets & Test Sets \\ \hline
\multirow{4}{*}{Main Experiments} & \href{https://wit3.fbk.eu/mt.php?release=2015-01}{TED} & IWSLT 2015 & ZH-EN & 205K & 1.7K & dev2010 & tst2010-2013 \\
& \href{https://github.com/sameenmaruf/selective-attn}{TED} & IWSLT 2017 & EN-DE & 206K & 1.7K & dev2010+tst201[0-5] & tst2016-2017 \\
& \href{https://github.com/sameenmaruf/selective-attn}{News} & News Commentary v11 & EN-DE & 236K & 6.1K & newstest2015 & newstest2016 \\
& \href{https://github.com/sameenmaruf/selective-attn}{Europarl} & Europarl v7 & EN-DE & 1.67M & 118K & \multicolumn{2}{c}{\cite{Maruf2019SelectiveAF}} \\ \hline
\multirow{3}{*}{Other Languages} & \href{http://data.statmt.org/news-commentary/v14/}{News} & News Commentary v14 & ES-EN & 355K & 9.2K & newstest2012 & newstest2013 \\
& \href{http://data.statmt.org/news-commentary/v14/}{News} & News Commentary v14 & FR-EN & 303K & 7.8K & newstest2013 & newstest2014 \\
& \href{http://data.statmt.org/news-commentary/v14/}{News} & News Commentary v14 & RU-EN & 226K & 6.0K & newstest2018 & newstest2019 \\ \hline
\multirow{2}{*}{Sentence-level Corpus} & \href{http://opus.nlpl.eu/Wikipedia.php}{Wiki} & Wikipedia & EN-DE & 2.40M & - & - & - \\
& \href{http://www.statmt.org/wmt19/translation-task.html}{WMT} & WMT 2019 & ZH-EN & 21M & - & - & - \\ \hline
Contrastive Experiments & \href{https://github.com/lena-voita/good-translation-wrong-in-context#training-data}{Subtitles} & OpenSubtitles & EN-RU & 6M & 1.5M & \multicolumn{2}{c}{\cite{Voita2019WhenAG}} \\ \hline
\multirow{1}{*}{Our New Datasets} & \href{https://github.com/sunzewei2715/Doc2Doc_NMT}{PDC} & FT/NYT & ZH-EN & 1.39M & 59K & newstest2019 & PDC \\
\hline
\end{tabular}
\caption{The detailed information of the used datasets in this paper with downloading links on their names.}
\label{tab:datasets}
\end{table*}
\subsection{Multi-resolutional Doc2Doc NMT}
Although Doc2Doc seems more concise and promising in multiple terms, it is not widely recognized. \citet{Zhang2018ImprovingTT,Liu2020MultilingualDP} conduct experiments by directly feeding the whole documents into the model. We refer to it as \textit{Single-resolutional Training} (denoted as SR Doc2Doc). Their experiments report extremely negative results unless pre-trained in advance. The model either has a large drop in performance or does not work at all. As pointed out by ~\citet{Koehn2017SixCF}, one of the six challenges in neural machine translation is the dramatic drop of quality as the length of the sentences increases.
However, we find that Doc2Doc can be activated on any datasets and obtain better results than Doc2Sent models as long as we employ \textit{Multi-resolutional Training}, mixing documents with shorter segments like sentences or paragraphs (denoted as MR Doc2Doc).
Specifically, we split each document evenly into $k$ parts, for multiple values of $k \in \{1,2,4,8,...\}$, and collect all the resulting sequences together. For example, a document containing eight sentences will be split into two four-sentence segments, four two-sentence segments, and eight single-sentence segments. Finally, all fifteen sequences are gathered and fed into sequence-to-sequence training $(15=1+2+4+8)$.
In this way, the model can acquire the ability to translate long documents since it is assisted by easier and shorter segments. As a result, multi-resolutional Doc2Doc is able to translate all forms of sequences, including extremely long ones such as a document with more than 2000 tokens, as well as shorter ones like sentences. In the following sections, we conduct the same experiments as the aforementioned studies by translating the whole document directly and atomically.
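A minimal sketch of this splitting procedure is given below (assuming sentence-aligned parallel documents; the handling of documents whose length is not a power of two is our own illustrative choice):
\begin{verbatim}
def multi_resolution_split(src_doc, tgt_doc):
    """Collect segment pairs at every resolution k = 1, 2, 4, 8, ...
    by splitting the document evenly into k consecutive parts."""
    n = len(src_doc)
    pairs, k = [], 1
    while k <= n:
        bounds = [round(i * n / k) for i in range(k + 1)]
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            if lo < hi:
                pairs.append((" ".join(src_doc[lo:hi]),
                              " ".join(tgt_doc[lo:hi])))
        k *= 2
    return pairs
\end{verbatim}
For an eight-sentence document this yields exactly the fifteen sequences ($1+2+4+8$) described above.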
\section{Experiment Settings}
\subsection{Datasets}
For our main experiments, we follow the datasets provided by \citet{Maruf2019SelectiveAF} and \citet{zheng2020toward}, including \textit{TED} (ZH-EN/EN-DE), \textit{News} (EN-DE), and \textit{Europarl} (EN-DE). The Chinese-English and English-German TED datasets are from the IWSLT 2015 and 2017 evaluation campaigns, respectively. For ZH-EN, we use dev2010 as the development set and tst2010-2013 as the test set. For TED (EN-DE), we use tst2016-2017 as the test set and the rest as the development set. For News (EN-DE), the training/development/test sets are News Commentary v11, WMT newstest2015, and WMT newstest2016, respectively. For Europarl (EN-DE), the corpus is extracted from Europarl v7 according to the method proposed in~\citet{Maruf2019SelectiveAF}.~\footnote{EN-DE datasets are from~\url{https://github.com/sameenmaruf/selective-attn}}
Experiments on Spanish, French, and Russian to English are also conducted, whose training sets are News Commentary v14,
with the development/test sets being newstest2012 / newstest2013 (ES-EN), newstest2013 / newstest2014 (FR-EN), and newstest2018 / newstest2019 (RU-EN), respectively.
Besides, two additional sentence-level datasets are also adopted. For EN-DE, we use \textit{Wikipedia}, a corpus containing 2.4 million pairs of sentences. For ZH-EN, we extract one-tenth of WMT 2019, around 2 million sentence pairs.
Additionally, a document-level dataset with contrastive test sets in EN-RU~\cite{Voita2019WhenAG} is used to evaluate lexical coherence.
Lastly, we propose a new document-level dataset in this paper, whose source, scale, and benchmark will be described in the subsequent sections.
For sentences without any ending symbol inside documents, periods are manually added.
For our Doc2Doc experiments, the development and test sets are documents merged by sentences.
We list all the detailed information of used datasets in Table~\ref{tab:datasets}, including languages, scales, and downloading URLs for reproducibility.
\subsection{Models}
For the model setting, we follow the base version of Transformers~\cite{Vaswani2017AttentionIA}, including 6 layers for both encoders and decoders, 512 dimensions for model, 2048 dimensions for ffn layers, 8 heads for attention. For all experiments, we use subword~\cite{sennrich2016bpe} with 32K merge operations on both sides and cut out tokens appearing less than five times. The models are trained with a batch size of 32000 tokens on 8 Tesla V100 GPUs. Parameters are optimized by using Adam optimizer \cite{Kingma2015AdamAM}, with $\beta_1 = 0.9$, $\beta_2 = 0.98$, and $\epsilon = 10^{-9}$. The learning rate is scheduled according to the method proposed in \citet{Vaswani2017AttentionIA}, with $warmup\_steps = 4000$. Label smoothing \cite{Szegedy2016RethinkingTI} of value=0.1 is also adopted. We set dropout=0.3 for small datasets like \textit{TED} and \textit{News}, and dropout=0.1 for larger datasets like \textit{Europarl}, unless stated elsewise.
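As a concrete illustration of the schedule of \citet{Vaswani2017AttentionIA} referred to above (a small sketch; the helper name is ours), the learning rate at a given step is computed as follows, with $d_{\mathrm{model}}=512$ and 4000 warmup steps matching our setting:
\begin{verbatim}
def transformer_lr(step, d_model=512, warmup_steps=4000):
    """Inverse-square-root schedule with linear warmup."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5,
                                 step * warmup_steps ** -1.5)
\end{verbatim}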
\subsection{Evaluation}
For inference, we generate the hypothesis with a beam size of 5. Following previous related work, we adopt tokenized case-insensitive BLEU~\cite{Papineni2001BleuAM}. Specifically, we follow the methods in~\citet{Liu2020MultilingualDP}, which calculate sentence-level BLEU (denoted as s-BLEU) and document-level BLEU (denoted as d-BLEU), respectively. For d-BLEU, the scored object is either the concatenation of generated sentences or the directly generated documents. Since our documents are generated atomically and are hard to split into sentences, we only report d-BLEU for Doc2Doc.
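The only operational difference between s-BLEU and d-BLEU is whether each scored segment is a single sentence or a whole document; a minimal sketch using the \texttt{sacrebleu} toolkit (an assumption on our part; the exact BLEU implementation used for the reported numbers may differ) is:
\begin{verbatim}
import sacrebleu  # assumed available

def s_bleu(hyp_sents, ref_sents):
    # sentence-level corpus BLEU: one hypothesis/reference per sentence
    return sacrebleu.corpus_bleu(hyp_sents, [ref_sents],
                                 lowercase=True).score

def d_bleu(hyp_docs, ref_docs):
    # document-level BLEU: each "segment" is an entire document, i.e. the
    # concatenation of generated sentences or a directly generated document
    return sacrebleu.corpus_bleu(hyp_docs, [ref_docs],
                                 lowercase=True).score
\end{verbatim}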
\section{Results and Analysis}
\subsection{MR Doc2Doc Improves Performance}
\paragraph{MR matters.} It can be seen from the upper part of Table~\ref{tab:slct_mix} that SR Doc2Doc indeed suffers a severe drop on \textit{News} and even fails to generate normal results on \textit{TED}, which accords with the findings of~\citet{Zhang2018ImprovingTT,Liu2020MultilingualDP}. It seems too hard to learn long-range translation directly.
However, once equipped with our training technique, MR Doc2Doc can yield the best results, outperforming our strong baseline and previous works on \textit{TED} and \textit{Europarl}. We suggest that NMT is able to acquire the capacity of translating long-range context, as long as it cooperates with some shorter segments as assistance. With the multi-resolutional help of easier patterns, the model can gradually master how to generate complicated sequences.
\paragraph{Doc2Doc matters.} We also compare MR Doc2Doc to an intuitive baseline: MR Doc2Sent. The latter is trained in a typical Doc2Sent way: the source is the whole past context, and the target is the current sentence. From the experimental results, we can see that MR Doc2Doc outperforms it due to much broader contexts: language modeling over the target side can effectively improve translation performance~\cite{sun2021multilingual}.
To show the universality of MR Doc2Doc, we also conduct the experiments on other language pairs: Spanish, French, Russian to English. As is shown in Table~\ref{tab:more_langs}, MR Doc2Doc can be achieved on all language pairs and obtains comparable or better results compared with Sent2Sent.
\begin{table}[ht]\footnotesize
\centering
\begin{tabular}{lccc}
\hline
Models & ES-EN & FR-EN & RU-EN \\ \hline
Sent2Sent & \textbf{29.55} & 28.69 & 23.22 \\
SR Doc2Doc & 26.79 & 23.86 & 16.47 \\
MR Doc2Sent & 29.23 & 28.75 & 23.48 \\
MR Doc2Doc & 29.37 & \textbf{28.85} & \textbf{23.98} \\ \hline
\end{tabular}
\caption{Document translation experiments on more languages, showing the comprehensive effectiveness.}
\label{tab:more_langs}
\end{table}
It is worth noting that all our results are obtained without any adjustment of model architecture or any extra parameters.
\subsection{Additional Sentence Corpus Helps}
\label{add_sentences}
Furthermore, introducing an extra sentence-level corpus is also an effective technique. This can be regarded as another form of multi-resolutional training, as it supplements more sentence-level information. This strategy makes an impact in two ways: activating SR Doc2Doc and boosting MR Doc2Doc.
We merge the datasets mentioned above with \textit{Wikipedia} (EN-DE) and WMT (ZH-EN), two out-of-domain sentence-level datasets, to do the experiments. \footnote{Sentences and documents in non-MR settings are oversampled six times to keep the same data ratio as the MR settings,
which is shown to be helpful to the performance in Appendix~\ref{appendix:os}.
Due to the larger scale, we find the settings of dropout=0.2 for \textit{TED}, \textit{News} and dropout=0.1 for \textit{Europarl} yield the best results for both Sent2Sent and Doc2Doc.}
As is shown in the lower part of Table~\ref{tab:slct_mix}, on the one hand, SR Doc2Doc models are activated and can reach levels comparable to Sent2Sent models as long as they are assisted with additional sentences. On the other hand, MR Doc2Doc obtains the best results on all datasets and further widens the gap with the boost of the sentence corpus. Even out-of-domain sentences can strengthen the learning of document translation. It again proves the importance of multi-resolutional assistance.
In addition, as analyzed in the previous section, Doc2Sent models are not compatible with sentence-level corpora since the model input is specially designed for consecutive sentences. However, Doc2Doc models can naturally draw on the merits of any parallel pairs, including piecemeal sentences. Considering that the amount of parallel sentence-level data is much larger than that of document-level data, MR Doc2Doc has a powerful application potential compared with Doc2Sent.
\subsection{Further Analysis on MR Doc2Doc}
\subsubsection{Improved Discourse Coherence}
Beyond BLEU, whether Doc2Doc truly learns to utilize the context to resolve discourse inconsistencies has to be verified. We use the contrastive test sets proposed by~\citet{Voita2019WhenAG}, which include deixis, lexical consistency, ellipsis (inflection), and ellipsis (verb phrase) on English-Russian. Each instance contains a positive translation and a few negative ones, whose difference is only one specific word. With force decoding, if the score of the positive one is the highest, then this instance is counted as correct.
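A minimal sketch of this scoring protocol is shown below; \texttt{model\_score}, assumed to return the model's force-decoding (log-probability) score of a candidate translation given the source and its context, is a placeholder for the toolkit-specific routine.
\begin{verbatim}
def contrastive_accuracy(model_score, instances):
    """instances: (source, positive_translation, negative_translations)."""
    correct = 0
    for src, pos, negs in instances:
        pos_score = model_score(src, pos)
        if all(pos_score > model_score(src, neg) for neg in negs):
            correct += 1
    return correct / len(instances)
\end{verbatim}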
As is shown in Table~\ref{tab:voita}, MR Doc2Doc achieves significant improvements and obtains the best results, which shows that MR Doc2Doc indeed captures the contextual information well and maintains cross-sentence coherence.
\begin{table}[ht]\footnotesize
\centering
\begin{tabular}{lcccc}
\hline
Models & deixis & lex.c & ell.infl & ell.VP \\ \hline
Sent2Sent & 51.1 & 45.6 & 55.4 & 27.4 \\
\citet{zheng2020toward} & 61.3 & 46.1 & 61.0 & 35.6 \\
MR Doc2Doc & \textbf{64.7} & \textbf{46.3} & \textbf{65.9} & \textbf{53.0} \\ \hline
\end{tabular}
\caption{Discourse phenomena evaluation on the contrastive test sets. Our Doc2Doc shows a much better capacity for building the document coherence.}
\label{tab:voita}
\end{table}
\subsubsection{Strong Context Sensitivity}
\citet{li2020does} find that the performance of previous context-aware systems does not decrease with intentionally incorrect context, and they question whether the context encoders actually use the context. To verify whether Doc2Doc truly takes advantage of the contextual information in the document, we also deliberately conduct inference with the wrong context. If the model neglects discourse dependency, then there should be no difference in performance.
Specifically, we first randomly shuffle the sentence order inside each document, marking it as \textit{Local Shuffle}. Furthermore, we randomly swap sentences among all the documents to make the context even more disordered, marking it as \textit{Global Shuffle}. As is shown in Table~\ref{tab:shuffle}, the misleading context results in a significant BLEU drop for the Doc2Doc model. Besides, Global Shuffle brings more harm than Local Shuffle, showing that more chaotic contexts cause more damage. After all, Local Shuffle still preserves some general information, such as topic and tense. These experiments
confirm that the model makes use of the context.
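The two perturbations can be summarized by the minimal sketch below (illustrative; implementation details such as random seed handling are omitted):
\begin{verbatim}
import random

def local_shuffle(docs):
    # shuffle the sentence order inside each document independently
    return [random.sample(doc, len(doc)) for doc in docs]

def global_shuffle(docs):
    # pool all sentences across documents, shuffle the pool, then refill
    # each document with its original number of sentences
    pool = [sent for doc in docs for sent in doc]
    random.shuffle(pool)
    out, idx = [], 0
    for doc in docs:
        out.append(pool[idx:idx + len(doc)])
        idx += len(doc)
    return out
\end{verbatim}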
\begin{table}[ht]\footnotesize
\centering
\begin{tabular}{l|c|ccc}
\hline
\multirow{2}{*}{Models} & ZH-EN & \multicolumn{3}{c}{EN-DE} \\
& TED & TED & News & Europarl \\ \hline
MR Doc2Doc & 25.84 & 29.27 & 26.71 & 34.48 \\ \hline
Local Shuffle & 24.10 & 27.48 & 25.22 & 33.52 \\
Global Shuffle & 23.69 & 27.17 & 24.96 & 32.47 \\ \hline
\end{tabular}
\caption{Misleading contexts can bring negative effects to Doc2Doc, proving the dependent usage of the context information. And more chaotic contexts harm more (Global vs. Local).}
\label{tab:shuffle}
\end{table}
\subsubsection{Compatible with Sentences}
We also analyze how performance varies with sequence length. Taking \textit{Europarl} as an example, we randomly split documents into shorter paragraphs of different lengths and evaluate them with our models, as shown in Figure~\ref{fig:bleu_with_length}. Obviously, the model trained only on the sentence-level corpus suffers a severe drop when translating long sequences, while the model trained only on the document-level corpus shows the opposite behavior, which reveals the importance of data distribution. However, the model trained with our multi-resolutional strategy can sufficiently cope with all situations, breaking the limitation of sequence length in translation. By conducting MR Doc2Doc, we obtain an all-in-one model that is capable of translating sequences of any length, avoiding the deployment of two separate systems for sentences and documents.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.45]{fig/bleu_with_length.pdf}
\captionsetup{font={footnotesize}}
\caption{The model trained only on sentence-level or document-level corpus fails to translate sequences in unseen lengths while the MR model yields the best results in all scenarios.}
\label{fig:bleu_with_length}
\end{figure}
\section{Further Evidence with Newly Proposed Datasets and Metrics}
\label{sec:datasets}
To further verify our conclusions and push the development of this field, we also contribute a new dataset along with new metrics.
Specifically, we propose a package consisting of a large and diverse parallel document corpus, three deliberately designed metrics, and correspondingly constructed test sets~\footnote{\url{https://github.com/sunzewei2715/Doc2Doc_NMT}}. On the one hand, they make our conclusions more solid. On the other hand, they may benefit future research by expanding the comparison scenarios.
\subsection{Parallel Document Corpus}
We crawl a bilingual news corpus from two websites\footnote{\url{https://cn.nytimes.com}} \footnote{\url{https://cn.ft.com}} that provide both English and Chinese content. The detailed cleaning procedure is in Appendix~\ref{appendix:pdc}. Finally, $1.39$ million parallel sentences within almost $60$ thousand parallel documents are collected. The corpus contains large-scale data with internal dependencies of different lengths and diverse domains, including politics, finance, health, culture, etc.
We name it \textbf{PDC} (Parallel Document Corpus).
\subsection{Metrics}
To inspect the coherence improvement, we summarize three common linguistic features in document corpora that Sent2Sent models cannot handle:
\textbf{Tense Consistency (TC):}
If the source language is tenseless (e.g. Chinese), it is hard for Sent2Sent models to maintain the consistency of tense.
\textbf{Conjunction Presence (CP):}
Traditional models ignore cross-sentence dependencies, and sentence-level translation may cause conjunctions such as ``And'' to be dropped \cite{Xiong2018ModelingCF}.
\textbf{Pronoun Translation (PT):}
In pro-drop languages such as Chinese and Japanese, pronouns are frequently omitted. When translating from a pro-drop language into a non-pro-drop language (e.g., Chinese-to-English), invisible dropped pronouns may be missing \cite{Wang2016DroppedPG,Wang2016ANA,Wang2018TranslatingPL,Wang2018LearningTJ}.
Afterward, we collect documents that contain abundant verbs in the past tense, conjunctions, and pronouns, as test sets. These words, as well as their positions, are labeled.
Some cases are in Appendix~\ref{appendix:cases}.
For each word-position pair $\langle w, p\rangle$, we check whether $w$ appears in the generated document within a rough span, and we calculate the appearance percentage as the evaluation score. Specifically:
\begin{equation}
\text{TC / CP / PT} = \frac{\sum_{i}^{n}\sum_{j}^{|W_i|} \mathbb{I}(w_{ij} \in y_{i}^{\text{span}})}{\sum_{i}^{n}|W_i|}
\label{eq:metric}
\end{equation}
\begin{equation}
\text{span} = \left[ \alpha_i p_{ij} -d, \alpha_i p_{ij} +d \right]
\end{equation}
$n$ indicates the number of sequences in the test set, $W_i$ indicates the labeled word set of sequence$_i$, $w$ indicates labeled words, $y_i$ indicates output$_i$, $p_{ij}$ indicates the labeled position of $w_{ij}$ in reference$_i$, $\alpha_i$ indicates the length ratio between translation and reference, and $d$ indicates the span radius. We set $d=20$ in this paper and calculate the geometric mean of the three scores as the overall score, denoted as \textbf{TCP}.
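A minimal reference implementation of the metric defined above is sketched below (token-level positions and whitespace tokenization are simplifying assumptions):
\begin{verbatim}
def coherence_score(outputs, references, labeled_pairs, d=20):
    """TC / CP / PT: fraction of labeled words found in the output
    within the span [alpha*p - d, alpha*p + d]."""
    hit, total = 0, 0
    for out, ref, pairs in zip(outputs, references, labeled_pairs):
        alpha = len(out) / max(len(ref), 1)  # length ratio
        for word, pos in pairs:
            lo = max(0, int(alpha * pos) - d)
            hi = int(alpha * pos) + d + 1
            hit += int(word in out[lo:hi])
            total += 1
    return hit / total if total else 0.0

def tcp(tc, cp, pt):
    # overall score: geometric mean of the three indicators
    return (tc * cp * pt) ** (1.0 / 3.0)
\end{verbatim}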
\subsection{Test Sets}
Along with the filtering for the aforementioned coherence indices, the test sets are built from websites entirely different from those of the training corpus to avoid overfitting.
Meanwhile, to alleviate the bias of human translation, the English documents are selected as the reference and manually translated into Chinese documents to serve as the source.
Finally, a total of nearly five thousand sentences within 148 documents is obtained.
\subsubsection{Benchmark}
Basic experiments with Sent2Sent and Doc2Doc are conducted on our new dataset, along with the full WMT ZH-EN corpus, a sentence-level dataset containing around 20 million pairs.
\footnote{We set dropout=0.2 for Sent2Sent and MR Doc2Doc without WMT, and dropout=0.1 for the rest settings according to the performance on the development set. Oversampling is done again, as aforementioned, to enhance the performance for non-MR settings.}
We use WMT newstest2019 as the development set and evaluate the models with our new test sets as well as metrics. The results are shown in Table~\ref{tab:bench}.
\begin{table}[ht]\scriptsize
\centering
\begin{tabular}{l|c|cccc|c}
\hline
Systems & d-BLEU & TC & CP & PT & TCP & Man\\ \hline
Sent2Sent & 27.05 & 54.0 & 25.5 & 62.5 & 44.1 & 2.89\\
SR Doc2Doc & 24.33 & 46.7 & 24.8 & 61.5 & 41.5 & 2.87 \\
MR Doc2Doc & \textbf{27.80} & \textbf{56.9} & \textbf{25.7} & \textbf{63.9} & \textbf{45.4} & \textbf{3.02} \\
\hline
Sent2Sent ++ & 30.28 & 58.3 & 34.1 & 64.5 & 50.4 & 3.58 \\
SR Doc2Doc ++ & 31.20 & 59.3 & 36.3 & 64.9 & 51.9 & 3.61 \\
MR Doc2Doc ++ & \textbf{31.62} & \textbf{59.7} & \textbf{36.3} & \textbf{65.9} & \textbf{52.3} & \textbf{3.69} \\
\hline
\end{tabular}
\caption{Benchmark of our new datasets. ``++'' indicates using the additional WMT corpus. ``Man'' refers to human evaluation. Doc2Doc shows much better results in all terms.}
\label{tab:bench}
\end{table}
\textbf{BLEU:} In terms of BLEU, MR Doc2Doc outperforms Sent2Sent, illustrating the positive effect of long-range context. Moreover, with extra sentence-level corpus, Doc2Doc shows significant improvements again.
\textbf{Fine-grained Metrics:} Our metrics show much clearer improvements. Owing to the usage of contextual information, tense consistency is better guaranteed with Doc2Doc. Meanwhile, Doc2Doc is much more capable of translating the invisible pronouns by capturing the original referent beyond the current sentence. Finally, the conjunction presence shows the same tendency.
\textbf{Human Evaluation:} Human evaluation is also conducted to illustrate the reliability of our metrics. One-fifth of translated documents are sampled and scored by linguistics experts from 1 to 5 according to not only translation quality but also translation consistency~\cite{sun2020generating}.
As is shown in Table~\ref{tab:bench}, human evaluation shows a strong correlation with TCP. More specifically, the Pearson Correlation Coefficient (PCCs) between human scores and TCP is higher than that of BLEU (97.9 vs. 94.1).
\subsection{Case Study}
Table~\ref{tab:case-coherence} shows an example of document translation. The Sent2Sent model neglects the cross-sentence context and mistakenly translates the ambiguous word, which leads to a confusing reading experience. However, the Doc2Doc model can grasp the full picture of the historical context and make accurate decisions.
\begin{table}[!ht]\scriptsize
\centering
\begin{tabular}{p{0.15\columnwidth}|p{0.77\columnwidth}}
\hline
& \begin{CJK*}{UTF8}{gbsn}与大多数欧洲人一样, {\color{blue}德国}{\color{red}总理}对美国总统的“美国优先”民族主义难以掩饰不屑。
\end{CJK*}\\
Source& ... \\
& \begin{CJK*}{UTF8}{gbsn} 但她已进入第四个、也必定是最后一个{\color{red}总理}任期 。\end{CJK*} \\
\hline
& Like most Europeans , the {\color{blue}German} {\color{red}chancellor} has struggled to hide his disdain for the US president’s “America First” nationalism.\\
Sent2Sent & ... \\
& But she has entered a fourth and surely last term as {\color{red}prime minister}. \\ \hline
& Like most Europeans, the {\color{blue}German} {\color{red}chancellor}’s disdain for the US president’s “America First” nationalism is hard to hide. \\
Doc2Doc & ... \\
& But she has entered her fourth and certainly final term as {\color{red}chancellor}. \\ \hline
\end{tabular}
\captionsetup{font={footnotesize}}
\caption{Coherence problem in document translation. Without discourse contexts, the Chinese word \begin{CJK*}{UTF8}{gbsn}``总理'' \end{CJK*} is usually translated to ``prime minister'', while in the context of ``German'', it should be translated into ``chancellor''. }
\label{tab:case-coherence}
\end{table}
Also, we manually switch the context information on the source side to test the model's sensitivity, as shown in Table~\ref{tab:case-country}. It turns out that Doc2Doc is able to adapt to different contexts.
\begin{table}[!ht]\scriptsize
\centering
\begin{tabular}{l|ccc}
\hline
Country & Sent2Sent & Doc2Doc & Oracle \\ \hline
Germany & prime minister & \textbf{chancellor} & chancellor \\
Italy & \textbf{prime minister} & \textbf{prime minister} & prime minister \\
Austria & prime minister & \textbf{chancellor} & chancellor \\
France & \textbf{prime minister} & \textbf{prime minister} & prime minister \\ \hline
\end{tabular}
\captionsetup{font={footnotesize}}
\caption{Further study of Table~\ref{tab:case-coherence}. We switch the country information on the source side, e.g., \textit{German} $\rightarrow$ \textit{Italian}/\textit{Austrian}/\textit{French}, \textit{Berlin} $\rightarrow$ \textit{Rome}/\textit{Vienna}/\textit{Paris}. The Doc2Doc model shows strong sensitivity to the discourse context.}
\label{tab:case-country}
\end{table}
\section{Limitation}
Though multi-resolutional Doc2Doc achieves direct document translation and obtains better results, there still exists a big challenge: efficiency. The computational cost of self-attention in the Transformer grows quadratically with the sequence length. As we feed the entire document into the model, memory usage becomes a bottleneck for larger model deployment,
and the inference speed may suffer if no parallel operation is conducted.
Recently, many studies focus on the efficiency enhancement on long-range sequence processing~\cite{correia2019adaptively,child2019generating,kitaev2020reformer,wu2020lite,beltagy2020longformer,rae2020compressive}. We leave reducing the computation cost to the future work.
\section{Related Work}
\label{sec:related}
Document-level neural machine translation is an important task and has been abundantly studied with multiple datasets as well as methods.
The mainstream research in this field focuses on model architecture improvements. Specifically, several recent attempts extend the Sent2Sent approach to a Doc2Sent-like one. \citet{Wang2017ExploitingCC,Werlen2018DocumentLevelNM,Tan2019HierarchicalMO} make use of hierarchical RNNs or Transformers to summarize previous sentences.
\citet{Jean2017NeuralMT,Bawden2017EvaluatingDP,Zhang2018ImprovingTT,Voita2018ContextAwareNM,Kuang2018FusingRI,Maruf2019SelectiveAF,Yang2019EnhancingCM,Jiang2019DocumentlevelNM,zheng2020toward,yun2020improving,xuefficient} introduce additional encoders or query layers with attention model and feed the history contexts into decoders.
\citet{Maruf2018DocumentCN,Kuang2018ModelingCF,Tu2018LearningTR} propose to augment NMT models with a cache-like memory network, which generates the translation depending on the decoder history retrieved from the memory.
Besides, some works intend to resolve this problem in other ways.
\citet{Jean2019ContextAwareLF} propose a regularization term for encouraging to focus more on the additional context using a multi-level pair-wise ranking loss.
\citet{Yu2020BetterDM} utilize a noisy channel reranker with Bayes' rule.
\citet{Garcia2019ContextAwareNM} extends the beam search decoding process with fusing an attentional RNN with an SSLM by modifying the computation of the final score.
\citet{Saunders2020UsingCI} present an approach for structured loss training with document-level objective functions.
\citet{Liu2020MultilingualDP,ma2020simple} combine large-scale pre-train model with DNMT.
\citet{Unanue2020LeveragingDR,Kang2020DynamicCS} adopt reinforcement learning methods.
There are also some works sharing similar ideas with us. \citet{Tiedemann2017NeuralMT,Bawden2017EvaluatingDP} explore concatenating two consecutive sentences and generating two sentences directly. In contrast, we leverage considerably longer information and capture the full context.
\citet{JunczysDowmunt2019MicrosoftTA} cut documents into long segments and feed them into training like BERT~\cite{devlin2019bert}. There are at least three main differences. Firstly, they need to add specific boundary tokens between sentences, while we directly translate the original documents without any additional processing. Secondly, we propose a novel multi-resolutional training paradigm that shows consistent improvements compared with regular training. Thirdly, for extremely long documents, they restrict the segment length to 1000 tokens or truncate the documents, while we preserve entire documents and achieve literal document-to-document training and inference.
Finally, our work is also related to a series of studies in long sequence generation like GPT \cite{Radford2018ImprovingLU}, GPT-2 \cite{Radford2019LanguageMA}, and Transformer-XL \cite{Dai2019TransformerXLAL}. We all suggest that the deep neural generation models have the potential to well process long-range sequences.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we try to answer the question of whether document-to-document translation works. Naive Doc2Doc can fail in multiple scenarios; however, with the multi-resolutional training proposed in this paper, it can be successfully activated. Different from traditional methods that modify the model architecture, our approach introduces no extra parameters. A comprehensive set of experiments on various metrics shows the advantage of MR Doc2Doc. In addition, we contribute a new document-level dataset as well as three new metrics to the community.
\section{Oversampling Illustration}
\label{appendix:os}
When combining document-level datasets with sentence-level datasets (especially out-of-domain corpora), we employ oversampling for the non-MR settings. This keeps the data ratio the same as in the MR setting and is helpful for the performance. Since the data size of MR is around 6 times that of non-MR ($\approx \log_2 64$), as shown in Table~\ref{tab:ratio}, we mainly oversample six times. The contrastive experiments are in Table~\ref{tab:os}. We attribute the improvements to the reduction of the proportion of out-of-domain data.
\begin{table}[ht]\footnotesize
\centering
\begin{tabular}{lc}
\hline
Datasets & Ratio \\ \hline
TED (ZH-EN) & 6.7 \\
TED (EN-DE) & 7.6 \\
News (EN-DE) & 5.9 \\
Europarl & 4.6 \\
News (ES-EN) & 5.9 \\
News (FR-EN) & 5.9 \\
News (RU-EN) & 5.9 \\
PDC & 5.3 \\ \hline
Mean & 6.0 \\ \hline
\end{tabular}
\caption{Ratio of MR/non-MR in data size}
\label{tab:ratio}
\end{table}
\begin{table}[ht]\scriptsize
\centering
\begin{tabular}{lcccc}
\hline
\multirow{2}{*}{Dataset} & \multicolumn{2}{c}{Sent2Sent} & \multicolumn{2}{c}{SR Doc2Doc} \\
& non-OS & OS & non-OS & OS \\ \hline
TED(ZH-EN)+WMT & 27.52 & \textbf{27.90} & 26.05 & \textbf{26.67} \\
TED(EN-DE)+Wiki & 29.19 & \textbf{30.74} & 29.81 & \textbf{29.96} \\
News+Wiki & 27.77 & \textbf{29.41} & 30.15 & \textbf{30.61} \\
Europarl+Wiki & 33.93 & \textbf{34.20} & 34.25 & \textbf{34.38} \\
PDC+WMT & 29.52 & \textbf{30.28} & 29.60 & \textbf{31.20} \\
\hline
\end{tabular}
\caption{The contrastive results of oversampling when combining sentence-level corpus.}
\label{tab:os}
\end{table}
\section{Clean Procedure on PDC}
\label{appendix:pdc}
We mainly crawl bilingual news corpus from two websites~(\url{https://cn.nytimes.com}, \url{https://cn.ft.com}) with both English and Chinese content provided. Then three steps are followed to clean the corpus.
\begin{compactenum}
\item \textbf{Deduplication:} We deduplicate the documents that include almost the same content.
\item \textbf{Sentence Segmentation:} We use \textit{Pragmatic Segmenter} \footnote{\url{https://github.com/diasks2/pragmatic_segmenter}} to segment paragraphs into sentences.
\item \textbf{Filtration:} We use \textit{fast\_align} \footnote{\url{https://github.com/clab/fast_align}} to align sentence pairs and label the pairs as misaligned ones if the alignment scores are less than $40\%$. Documents are finally removed if they contain misaligned sentence pairs.
\end{compactenum}
Finally, we obtain $1.39$ million parallel sentences within almost $60$ thousand cleaned parallel documents. The dataset contains diverse domains including politics, finance, health, culture, etc.
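As an illustration of the document-level filtering step (a sketch; the per-pair alignment scores are assumed to have been pre-computed with \textit{fast\_align} and passed in via the \texttt{align\_score} callback):
\begin{verbatim}
def filter_documents(documents, align_score, threshold=0.40):
    """Keep a document only if none of its sentence pairs is misaligned,
    i.e. every pair's alignment score reaches the 40% threshold."""
    kept = []
    for doc in documents:  # doc: list of (src_sentence, tgt_sentence) pairs
        if all(align_score(src, tgt) >= threshold for src, tgt in doc):
            kept.append(doc)
    return kept
\end{verbatim}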
\section{Cases of Our Test Sets}
\label{appendix:cases}
Apart from the statistics in the main paper, we also provide some cases from our test sets to illustrate the value of our test sets and metrics, as shown in Tables~\ref{tab:case-tense},~\ref{tab:case-conjunction}, and~\ref{tab:case-pronoun}.
\begin{table}[ht]\scriptsize
\centering
\begin{tabular}{l|p{0.8\columnwidth}}
\hline
Src & \begin{CJK*}{UTF8}{gbsn}1.双方 在 2017 年 都 向 法庭 提交 了 申请 。
\end{CJK*}\\
& \begin{CJK*}{UTF8}{gbsn} 2.邓普顿 奈特 {\color{red}想要} 报销 他 的 租金 。\end{CJK*} \\
& \begin{CJK*}{UTF8}{gbsn} 3.伯德特 {\color{red}想要} 赶走 邓普顿 奈特 。\end{CJK*} \\
\hline
Ref & 1.Both parties had lodged applications with the tribunal in 2017. \\
& 2.Templeton-Knight {\color{red}wanted} his rent reimbursed. \\
& 3.Burdett {\color{red}wanted} to evict Templeton-Knight. \\
\hline
NMT & 1.Both parties filed applications with the court in 2017. \\
& 2.Templeton Knight {\color{red}wants} to reimburse his rent. \\
& 3.Burdett {\color{red}wants} to get rid of Templeton Knight. \\ \hline
\end{tabular}
\captionsetup{font={footnotesize}}
\caption{Tense inconsistency problem in translating tenseless languages (e.g., Chinese) into tense-sensitive languages (e.g., English). Individual sentences are translated into the present tense by sentence-level models even though the preceding context provides the signal of past tense.}
\label{tab:case-tense}
\end{table}
\begin{table}[ht]\scriptsize
\centering
\begin{tabular}{l|p{0.8\columnwidth}}
\hline
Src & \begin{CJK*}{UTF8}{gbsn}1.我 女儿 使用 的 胰岛素 类型 — — 世界 上 只有 两家 类似 类型 的 制造商 。\end{CJK*}\\
& \begin{CJK*}{UTF8}{gbsn} 2.他们 继续 保持一致 同时 提高 价格 。\end{CJK*} \\
\hline
Ref & 1.The type of insulin that my daughter uses — there are only two manufacturers worldwide of a similar type. \\
& 2.{\color{red}And} they continue to increase their prices lockstep together. \\
\hline
NMT & 1.The type of insulin my daughter uses - there are only two manufacturers of similar types in the world. \\
& 2.{\color{red}[conj miss]} They continue to be consistent while raising prices. \\ \hline
\end{tabular}
\captionsetup{font={footnotesize}}
\caption{Conjunction missing problem in sentence-level translation. The sentences have a strong semantic connection but are translated without any conjunction.}
\label{tab:case-conjunction}
\end{table}
\begin{table}[ht]\scriptsize
\centering
\begin{tabular}{l|p{0.8\columnwidth}}
\hline
Src & \begin{CJK*}{UTF8}{gbsn}1.根据 市政府 的 说法 , 奥特里 工厂 的 其他 拟议 功能 似乎 极 不 可能 实施 。\end{CJK*}\\
& \begin{CJK*}{UTF8}{gbsn} 2.即使 顾问 和 调查人 推荐 {\color{red}[pro drop]}。\end{CJK*} \\
\hline
Ref & 1.Other proposed features for Autrey Mill seem highly unlikely to be implemented according to the City Manager. \\
& 2.Even though consultants and surveys recommended {\color{red}them}. \\
\hline
NMT$_A$ & 1.According to the city government, other proposed functions at the Autry plant appear highly unlikely to be implemented. \\
& 2.Even if consultants and surveys recommend {\color{red}[pro miss]}. \\ \hline
NMT$_B$ & 1.According to the municipal government , other proposed functions of the Autry plant seem highly impossible to implement . \\
& 2.Even if consultants and surveys recommended {\color{red}it}. \\ \hline
\end{tabular}
\captionsetup{font={footnotesize}}
\caption{Pronoun drop problem in translating pro-drop languages (e.g., Chinese) into non-pro-drop languages (e.g., English). The pronoun is omitted or translated wrongly by sentence-level models.}
\label{tab:case-pronoun}
\end{table}
\section{Introduction}
\label{sec:intro}
\input{002intro.tex}
\section{Background}
\label{sec:back}
\input{003back.tex}
\section{Threat Model}
\label{sec:threat}
\input{004threat.tex}
\section{System Design}
\label{sec:system}
\input{005sys.tex}
\section{Integrity Analysis}
\label{sec:integ}
\input{006sec.tex}
\section{Experimental Evaluation}
\label{sec:exp}
\input{007eval.tex}
\section{Related Works}
\label{sec:rel}
\input{008relate.tex}
\section{Conclusion}
\label{sec:conc}
\input{099concl.tex}
\subsubsection*{Acknowledgments}
This work was supported, in part by grant 2R01HG006844 from the National Human Genome Research Institute, NSF Awards CNS-1837627, OAC-1828467, IIS-1939728 and ARO award W911NF-17-1-0356. Finally, the authors would like to thank \mbox{Dr.~Yan~Zhou for} constructive criticism of the manuscript.
\subsection{Attacks on DNN Models in Training Phase\label{section:attacks_on_dnn_models_train}}
Attacks on DNN models can be realized during both \textit{training} or \textit{test} phases.
However, \systemGOAT{}\xspace{} is concerned with integrity/accountability issues during the training phase of DNN models; attacks related to testing are out of the scope of this paper, since test-time attacks~\cite{paper:adv_ml,paper:adv_ml2} have been addressed before (e.g., \emph{Slalom}~\cite{Paper:Slalom}).
In the literature, particularly in the computer vision domain, targeted trojan attacks on DNN classification models have become a real concern as deep learning has grown in its adoption. These attacks tend to alter the prediction of models if a specific condition in the input is met. These conditions may be \textit{feature-based}~\cite{paper:backdoor_1,paper:backdoor2,paper:backdoor_clean_label_1} or \textit{instance-based}~\cite{paper:backdoor_3,Poison_Frogs}.
Recently, trojan attacks have been extended to Reinforcement Learning (RL) and text classification models
~\cite{paper:backdoor_rfl_1,paper:backdoor_nlp_1}.
In practice, these attacks are implemented by manipulating samples during training through data poisoning, for instance, by stamping images with a pattern and modifying their labels. Interestingly, these models provide competitive classification test accuracy compared to clean models (i.e., models that have not been attacked). As a consequence, it is non-trivial to distinguish these trojaned models from non-trojaned ones based on model accuracy alone. To make matters worse, even if the model owner were aware of examples of the trojan trigger pattern, the owner would need to patch the model through re-training to dampen the efficacy of the trojan trigger pattern.
Retraining does not always guarantee complete removal of the trojan behavior from the model.
To date, various techniques have been proposed to diagnose and mitigate trojaned models. However, all approaches are either based on unrealistic assumptions or are excessively costly. For instance, Neural Cleanse~\cite{paper:backdoor_mitigation_neural_cleanse} requires access to a sizable sample of clean inputs to reverse-engineer the backdoor and has been shown to be successful only for trigger patterns of relatively small size.
ABS~\cite{paper:backdoor_mitigation_abs}
improves upon Neural Cleanse in that it requires a significantly smaller number of samples; however, it assumes that the responsible trojan neurons can activate the trojan behavior independently from each other, which is unlikely to be true in practice.
Attacking the training pipeline to inject a trojan(s) in the final model is the cheapest and, thus, is likely the most desirable form of attack for real-world adversaries to launch. As such, throughout this work, we mainly focus on showing our methods' effectiveness in preventing this type of attack from happening.
It should be noted that our method is \textit{orthogonal} to \textit{attack} type and is sufficiently \textit{generic} to catch any continuous attack during the training of a DNN model. \systemGOAT{}\xspace{} relies upon \textit{proactive} training as opposed to \mbox{post-training} or \mbox{deployment-time} methods to assess the health of a DNN model. As we explain later in section~\ref{sec:threat}, we assume that the initial training dataset is provided by an honest user and is free of manipulation. With this as a basis, \systemGOAT{}\xspace{} limits the amount of change an adversary can inflict on a model through a single SGD step. As a consequence, the adversary is forced to keep attacking while randomly being verified by the TEE.
\subsection{Integrity for DNN Training}
\systemGOAT{}\xspace{}'s main goal is to enable a high-integrity training pipeline, so that end users are assured that the model is built on the specified dataset, using the specified parameters, without modification. Thus, users of the final model know who built the model, what dataset was used for training, and what algorithms were used to build the model.
If, at any point during training, \systemGOAT{}\xspace{} detects a deviation from the specified execution, it will not sign the final model to ascertain its validity.
Slalom~\cite{Paper:Slalom} took a first step towards achieving both \textit{fast} and \textit{reliable} execution in the \textit{test} phase but neglected the training phase. The training phase is far more computationally demanding than the test phase, such that verification of all steps in training requires a substantially longer time. First, since parameters keep changing, we cannot benefit from pre-computation.
Second, the backward pass involves computing gradients for both the inputs and the parameters and takes longer than the forward pass. Despite these hurdles, as our investigation shows, it may not be necessary to verify every step to achieve integrity guarantees with high probability.
\subsection{Intel SGX}
SGX~\cite{paper:intelsgxexplained} is an example of a common TEE that is available in many \mbox{modern-day} computers. As outlined in Table~\ref{tab:symbols},
it provides a secluded hardware reserved area, namely, processor reserved memory (PRM), that is kept private (i.e., it is not readable in plaintext) from the host, or any privileged processes, and is free from \textit{direct} undetected tampering.
It also supports \textit{remote attestation}, such that users can attest the platform and the running code within the enclave before provisioning their secrets to a remote server.
Calls that transition to/from the enclave are handled through predefined entry points, called Ecalls/Ocalls, which must be defined in advance, before building the enclave image. While it provides security and privacy for numerous applications (e.g.,~\cite{paper:enclavedb,paper:sgx_big_matrix,paper:Tensorscone}), due to its limited memory and computational capacity, directly running unmodified applications inside SGX can induce a significant performance hit. This is especially the case for applications that require large amounts of memory, such as training DNNs.
\section{Model Signing By TEE}
\label{appendix:signature}
We assume an honest and authenticated user will send her data encryption key $K_{client}$ (after remote attestation) to the TEE. Next, the TEE decrypts and verifies the initial encrypted dataset using $K_{client}$ and supplies the trainer (GPU) with the plaintext of the training set. If the TEE fails to detect any violations of the protocol during training, it signs the following message, which certifies the final model, where $\mathcal{W}$ denotes the model parameters, $ds$ the training dataset, $v\_num$ the model version number, and $SHA256$ the SHA-256 cryptographic hash function,
\newline
\[SHA256(SHA256(\mathcal{W}) || v\_num || SHA256(ds)||Sig_{client}^{v})\]
\noindent with the signature key $Sig_{SGX}^{s}$ of the enclave.
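For illustration only, the certified message can be assembled as in the following Python sketch; the hashing mirrors the expression above, while \texttt{sign\_with\_enclave\_key} is a hypothetical placeholder for the enclave's actual signing primitive and is not part of our implementation.
\begin{verbatim}
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def certify_model(model_params: bytes, dataset: bytes,
                  version: bytes, client_sig: bytes) -> bytes:
    # SHA256( SHA256(W) || v_num || SHA256(ds) || Sig_client^v )
    message = sha256(model_params) + version + sha256(dataset) + client_sig
    return sha256(message)

# digest = certify_model(w_bytes, ds_bytes, b"v1", client_sig_bytes)
# signature = sign_with_enclave_key(digest)  # hypothetical enclave call
\end{verbatim}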
\begin{table}[htb]
\centering
\caption{Symbols and Acronyms Description}
\scalebox{0.8}{\begin{tabular}{l|c|c}
Category & Symbol & Description \\ \hline
\multirow{7}{*}{TEE}
& $K_{SGX}^{session}$ & TEE's session key for learning task \\
& $Sig_{SGX}^{s}$ & SGX signature signing key \\
& $K_{client}$ & client's encryption key \\
 & $Sig_{client}^{v}$ & client's public key \\
& PRM & Processor Reserved Memory \\
& EPC & Enclave Page Cache \\
\hline
\multirow{3}{*}{Neural Network}
& RMM & Randomized Matrix Multiplication \\
& FV & Full Verification (No RMM)\\
& DNN & Deep Neural Network \\ \hline
\multirow{2}{*}{General}
&$\mathcal{W}$& model parameters \\
& $ds$ & training dataset \\
& $v\_num$ & model version number \\
\hline
\end{tabular}}
\label{tab:symbols}
\end{table}
\section{Deep Learning Training}
\label{subsection:training}
In the recent decade, Deep Neural Networks (DNNs) have gained enormous traction in solving problems related to computer vision and natural language processing~\cite{paper:alexnet,paper:VGG,paper:resnet,paper:GoogleNet,paper:inceptionv4}. In practice, these networks are stacks of layers that each perform a transformation $\mathcal{F}_{\mathcal{W}}^{l}(\cdot) \; \forall l \in |L|$ where $\mathcal{X}^{l+1} = \mathcal{F}_{\mathcal{W}}^{l}(\mathcal{X}^{l}) $ and $|L|$ is the number of layers.
The training task is to learn the correct parameters (point-estimates) $\mathcal{W^{*}}$ that optimize (commonly minimize) a task-specific (e.g., classification) loss function $\mathcal{L}$.
The most common way of training the DNN's parameters is through mini-batch Stochastic Gradient Descent (SGD)~\cite{paper:optimizer_minibatch_sgd}. A randomly selected mini-batch of the dataset is fed to the DNN and the value of the objective function $\mathcal{L}$ is calculated. This is usually called the \textit{forward} pass. Next, to derive the partial gradients of $\mathcal{L}$ w.r.t.\ $\mathcal{W}$ ($\mathcal{\nabla_{W}^{L}}$), a \textit{backward} pass is performed~\cite{paper:backprop1}. Finally, the parameters are updated according to Equation~\ref{eq:DNN_Loss}, where $0 < \alpha < 1$ is called the learning rate. Depending on the complexity of the dataset and the task, convergence might require hundreds of passes (called \textit{epochs}) over the input dataset.
\begin{eqnarray}
\mathcal{W}^{t+1} = \mathcal{W}^{t} - \alpha \mathcal{\nabla}_{\mathcal{W}^{t}}^{\mathcal{L}^{t}}
\label{eq:DNN_Loss}
\end{eqnarray}
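For reference, a single mini-batch SGD update following Equation~\ref{eq:DNN_Loss} can be sketched as below; \texttt{grad\_fn} and \texttt{iterate\_minibatches} stand in for the forward/backward passes and the data loader and are only illustrative.
\begin{verbatim}
def sgd_step(params, grad_fn, batch, lr=0.01):
    # One mini-batch SGD update: W <- W - lr * dL/dW
    grads = grad_fn(params, batch)   # backward pass: dL/dW per parameter
    return [w - lr * g for w, g in zip(params, grads)]

# for epoch in range(num_epochs):                  # repeated passes (epochs)
#     for batch in iterate_minibatches(dataset):   # random mini-batches
#         params = sgd_step(params, grad_fn, batch, lr)
\end{verbatim}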
\section{Gradient Clipping}
\label{subsection:gradclipping}
Gradient Clipping (GC) is a method that is mainly known to help mitigate the problem of exploding gradients during training~\cite{paper:gcexplodinggrad}. GC forces the gradients to fall within a narrow interval. There have been some efforts to analyze GC with respect to convergence. The authors of~\cite{paper:gradclip_convergence} prove (assuming a fixed step size) that training with GC can be arbitrarily faster than training without it. Moreover, their theoretical analysis suggests that small clipping values can damage training performance; however, in practice that is rarely the case. The work in~\cite{paper:gradclip_convergence_small} offers an interesting theoretical analysis coupled with empirical evidence (symmetry of the gradient distribution w.r.t.\ the SGD trajectory) that explains this gap between theoretical and practical observations.
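As a minimal illustration (the threshold and helper names are ours, not the exact ones used in our experiments), clipping by global L2 norm bounds the update applied in each SGD step:
\begin{verbatim}
import numpy as np

def clip_by_global_norm(grads, max_norm):
    # Scale all gradients so their combined L2 norm is at most max_norm.
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = min(1.0, max_norm / (total_norm + 1e-12))
    return [g * scale for g in grads]

# grads = clip_by_global_norm(grads, max_norm=0.001)
# params = [w - lr * g for w, g in zip(params, grads)]
\end{verbatim}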
\section{Integrity Proofs}
\subsection{Random Mini-Batch Verification}
\begin{definition}
Define the random variable $X = \sum_{b=1}^{\ceil{ B\times p_{v}}} V_{b}$ to be the total number of detected deviations among the randomly verified batches. Here $V_b$ is $1$ if batch $b$ is chosen for verification but fails the verification (due to malicious computation), and $0$ otherwise. Note that we need to catch at least one deviation with probability greater than $p_{i}$ in order to invalidate the overall model learning.
\end{definition}
\begin{proof}
\label{proof:rand_mb_verf}
Proof of theorem ~\ref{theorem:rand_mb_verf}
\begin{eqnarray}
P(X \geq 1) = 1 - P(X = 0) &\geq& p_{i} \nonumber\\
1-p_{i} &\geq& P(X=0) \nonumber \\
1 - p_{i} &\geq& \binom{\ceil{ B\times p_{v}}}{0}p_{c}^0(1-p_{c})^{\ceil{ B\times p_{v}}} \nonumber\\
1 - p_{i} &\geq& (1-p_{c})^{\ceil{ B\times p_{v}}} \nonumber \\
\log(1 - p_{i}) &\geq& \ceil{ B\times p_{v}}\log(1-p_{c}) \nonumber \\
p_{v} &>& B^{-1}(\frac{\log(1 - p_{i})}{\log(1-p_{c})} - 1)
\label{eq:rand-sel}
\end{eqnarray}
\end{proof}
As shown in Figure~\ref{fig:random-verification-probs}, it is only required to verify a much smaller subset of the batch computations inside the TEE to ensure a high probability of correct computation. For example, for large datasets such as ImageNet~\cite{Dataset:imagenet_cvpr09}, it suffices to verify less than 1\% of the computation to have a \emph{0.9999} guarantee (when the corruption probability is 0.5\%) on the correctness of the computation outsourced to the GPU.
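The bound of Eq.~\ref{eq:rand-sel} is straightforward to evaluate numerically; the following sketch (with purely illustrative values) reproduces the kind of estimate quoted above.
\begin{verbatim}
import math

def required_verification_prob(B, p_i, p_c):
    # Minimum p_v from Eq. (rand-sel):
    # p_v > (1/B) * (log(1 - p_i) / log(1 - p_c) - 1)
    return (math.log(1.0 - p_i) / math.log(1.0 - p_c) - 1.0) / B

# 200 epochs over a 1M-sample dataset with batch size 256,
# detection guarantee 0.9999, per-batch corruption probability 0.5%:
B = 200 * (1_000_000 // 256)
print(required_verification_prob(B, p_i=0.9999, p_c=0.005))  # ~0.0024
\end{verbatim}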
\subsection{Random Mini-Batch Verification with Randomized Matrix Multiplication}
\begin{definition}
Define the random variable $V'_{b}$ such that $V'_{b} = 1$ if $R_{b} = 1 \; \land \; MM\_verify(b) = 0$, and $V'_{b} = 0$ otherwise, and let $X = \sum_{b=1}^{\ceil{B\times p_{v}}} V'_{b}$. We need to detect at least one deviation with probability greater than $p_{i}$ while conducting random matrix multiplication verification.
\end{definition}
\begin{proof}
\label{proof:rand_mb_mm_verf}
Proof of theorem ~\ref{theorem:rand_mb_mm_verf}
\begin{eqnarray}
P(X \geq 1) &\geq& p_{i} \nonumber \\
1 - P(X = 0) &\geq& p_{i} \nonumber \\
1 - p_{i} &\geq& \binom{\ceil{ B\times p_{v}}}{0}(p_{c}(1-\alpha))^0((1-p_{c})+p_{c}\alpha)^{\ceil{ B\times p_{v}}} \nonumber \\
1 - p_{i} &\geq& ((1-p_{c})+p_{c}\alpha)^{\ceil{ B\times p_{v}}} \nonumber \\
\log(1 - p_{i}) &\geq& \ceil{ B\times p_{v}}\log((1-p_{c})+p_{c}\alpha) \nonumber \\
p_{v} &>& B^{-1}(\frac{\log(1 - p_{i})}{\log(1+(\alpha-1)p_{c})}-1)
\label{eq:rand-sel-randmm}
\end{eqnarray}
\end{proof}
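The bound of Eq.~\ref{eq:rand-sel-randmm} differs only in the denominator; a corresponding sketch is given below, where $\alpha$ denotes the probability that the randomized multiplication check misses a corrupted batch.
\begin{verbatim}
import math

def required_verification_prob_rmm(B, p_i, p_c, alpha):
    # Minimum p_v from Eq. (rand-sel-randmm); alpha is the probability
    # that the randomized MM check misses a corrupted batch.
    return (math.log(1.0 - p_i)
            / math.log(1.0 + (alpha - 1.0) * p_c) - 1.0) / B

# With alpha = 0 this reduces to the bound of Eq. (rand-sel).
\end{verbatim}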
\section{Verification Probability Growth with Respect to Detection Probability}
Fig.~\ref{fig:random-verification-probs} shows how the verification probability changes with respect to the probability that a batch step is maliciously manipulated by the attacker. The first row shows the verification probability for a dataset with 60K samples. The second row depicts the required verification probability for a much bigger dataset (1M samples) over different mini-batch sizes. The smaller the mini-batch size, the higher the chance of detecting malicious behavior.
\begin{figure}[tb]
\centering
\subfloat[]{%
\label{subfig:detect_0.99_ds_60K}
\begin{tikzpicture}[scale=0.8]
\begin{axis} [
title={epochs$=200$,dataset size $=60K$, $p_{i}> 0.99$},
ylabel=Verification Probability ($p_{v}$),
xlabel=Corruption Probability ($p_{c}$),
ymax=0.45,
ymin=0.00001,
xmin=0.00009,
xmax=0.000501,
xtick pos=left,
ytick pos=left,
enlargelimits=0.05,
domain=0.0001:0.0005,
scaled x ticks=true,
scaled y ticks=false,
try min ticks=10,
xticklabel style={font=\tiny,
},
yticklabel style={font=\tiny},
axis x line*=bottom,
axis y line*=left,
]
\addplot [blue,samples=900,thick,solid]{
(1/(200*(60000/64))) * ((ln{1.0-0.99})/(ln{1.0-x})-1.0)
};
\addlegendentry{{$b=64$}}
\addplot [black,samples=900,thick,loosely dotted]{
(1/(200*(60000/128))) * ((ln{1.0-0.99})/(ln{1.0-x})-1.0)
};
\addlegendentry{{$b=128$}}
\addplot [red,samples=900,thick,loosely dashed]{
(1/(200*(60000/256))) * ((ln{1.0-0.99})/(ln{1.0-x})-1.0)
};
\addlegendentry{{$b=256$}}
\addplot [brown,samples=900,thick,loosely dashdotdotted]{
(1/(200*(60000/512))) * ((ln{1.0-0.99})/(ln{1.0-x})-1.0)
};
\addlegendentry{{$b=512$}}
\end{axis}
\end{tikzpicture}
}
\subfloat[]{%
\label{subfig:detect_0.999_ds_60K}
\begin{tikzpicture}[scale=0.8]
\begin{axis} [
title={epochs$=200$,dataset size $=60K$, $p_{i}> 0.999$},
xlabel=Corruption Probability ($p_{c}$),
ymax=0.45,
ymin=0.00001,
xmin=0.00009,
xmax=0.000501,
xtick pos=left,
ytick pos=left,
enlargelimits=0.05,
domain=0.0001:0.0005,
scaled x ticks=true,
scaled y ticks=false,
try min ticks=10,
xticklabel style={font=\tiny,
},
yticklabel style={font=\tiny},
axis x line*=bottom,
axis y line*=left,
]
\addplot [blue,samples=900,thick,solid]{
(1/(200*(60000/64))) * ((ln{1.0-0.999})/(ln{1.0-x})-1.0)
};
\addlegendentry{{$b=64$}}
\addplot [black,samples=900,thick,loosely dotted]{
(1/(200*(60000/128))) * ((ln{1.0-0.999})/(ln{1.0-x})-1.0)
};
\addlegendentry{{$b=128$}}
\addplot [red,samples=900,thick,loosely dashed]{
(1/(200*(60000/256))) * ((ln{1.0-0.999})/(ln{1.0-x})-1.0)
};
\addlegendentry{{$b=256$}}
\addplot [brown,samples=900,thick,loosely dashdotdotted]{
(1/(200*(60000/512))) * ((ln{1.0-0.999})/(ln{1.0-x})-1.0)
};
\addlegendentry{{$b=512$}}
\end{axis}
\end{tikzpicture}
}
\subfloat[]{%
\label{subfig:detect_0.99_ds_1M}
\begin{tikzpicture}[scale=0.8]
\begin{axis} [
title={epochs$=200$,dataset size $=1M$, $p_{i}> 0.99$},
ylabel=Verification Probability ($p_{v}$),
xlabel=Corruption Probability ($p_{c}$),
ymax=0.6,
ymin=0.00001,
xmin=0.000004,
xmax=0.0000505,
xtick pos=left,
ytick pos=left,
scaled y ticks=false,
scaled x ticks=true,
enlargelimits=0.05,
domain=0.000005:0.00005,
try min ticks=10,
xticklabel style={font=\tiny,
},
yticklabel style={font=\tiny},
axis x line*=bottom,
axis y line*=left,
]
\addplot [blue,samples=2000,thick,solid]{
(1/(200*(1000000/64))) * ((ln{1.0-0.99})/(ln{1.0-x})-1.0)
};
\addlegendentry{{$b=64$}}
\addplot [black,samples=2000,thick,loosely dotted]{
(1/(200*(1000000/128))) * ((ln{1.0-0.99})/(ln{1.0-x})-1.0)
};
\addlegendentry{{$b=128$}}
\addplot [red,samples=2000,thick,loosely dashed]{
(1/(200*(1000000/256))) * ((ln{1.0-0.99})/(ln{1.0-x})-1.0)
};
\addlegendentry{{$b=256$}}
\addplot [brown,samples=2000,thick,loosely dashdotdotted]{
(1/(200*(1000000/512))) * ((ln{1.0-0.99})/(ln{1.0-x})-1.0)
};
\addlegendentry{{$b=512$}}
\end{axis}
\end{tikzpicture}
}
\subfloat[]{%
\label{subfig:detect_0.999_ds_1M}
\begin{tikzpicture}[scale=0.8]
\begin{axis} [
title={epochs$=200$,dataset size $=1M$, $p_{i}> 0.999$},
xlabel=Corruption Probability ($p_{c}$),
ymax=0.6,
ymin=0.00001,
xmin=0.000004,
xmax=0.0000505,
xtick pos=left,
ytick pos=left,
scaled y ticks=false,
scaled x ticks=true,
enlargelimits=0.05,
domain=0.000005:0.00005,
try min ticks=10,
xticklabel style={font=\tiny,
},
yticklabel style={font=\tiny},
axis x line*=bottom,
axis y line*=left,
]
\addplot [blue,samples=2000,thick,solid]{
(1/(200*(1000000/64))) * ((ln{1.0-0.999})/(ln{1.0-x})-1.0)
};
\addlegendentry{{$b=64$}}
\addplot [black,samples=2000,thick,loosely dotted]{
(1/(200*(1000000/128))) * ((ln{1.0-0.999})/(ln{1.0-x})-1.0)
};
\addlegendentry{{$b=128$}}
\addplot [red,samples=2000,thick,loosely dashed]{
(1/(200*(1000000/256))) * ((ln{1.0-0.999})/(ln{1.0-x})-1.0)
};
\addlegendentry{{$b=256$}}
\addplot [brown,samples=2000,thick,loosely dashdotdotted]{
(1/(200*(1000000/512))) * ((ln{1.0-0.999})/(ln{1.0-x})-1.0)
};
\addlegendentry{{$b=512$}}
\end{axis}
\end{tikzpicture}
}
\caption{Required verification probability with respect to batch corruption probability and the desired integrity probability for a fixed 200 epochs and different SGD batch size.
}
\label{fig:random-verification-probs}
\end{figure}
\section{Experimental Evaluation}
\subsection{Enclave Heap Size Impact on TEE Performance}
\begin{figure}[t]
\begin{center}
\subfloat[Available heap with respect to throughput]{%
\begin{tikzpicture}%
\begin{axis}[sharp plot,
xlabel={(SGX Max Heap Size, Blocking Size) MB},
ylabel={Throughput (Images/Sec)},
ymin=0.1,
ymax=1.9,
xticklabels={{(100,32)},{(150,48)},{(180,64)},{(200,80)},{(220,96)}},
xtick=data,
yticklabels={0.1,0.25,0.5,0.75,1.0,1.25,1.5,1.75,2.0},
ytick={0.1,0.25,0.5,0.75,1.0,1.25,1.5,1.75,2.0},
legend columns = 4,
legend style = {
at={(.55, 0.68)},
anchor=north,
inner sep=1pt,
style={column sep=0.05cm},
nodes={
scale=0.45,
transform shape},
cells={align=left,anchor=west},
},
axis lines*=left,
width=.5\textwidth,
height=.3\textwidth,
xticklabel style={font=\tiny},
yticklabel style={font=\tiny},
xlabel style={font=\tiny},
ylabel style={font=\tiny},
]
\addlegendimage{black,only marks,mark=square*} %
\addlegendentry{VGG19}
\addlegendimage{black,only marks,mark=diamond} %
\addlegendentry{VGG16}
\addlegendimage{black,only marks,mark=triangle*} %
\addlegendentry{ResNet152}
\addlegendimage{black,only marks,mark=pentagon} %
\addlegendentry{ResNet34}
\addlegendimage{red,line legend}
\addlegendentry{\makebox[0pt][l]{SGX}}
\addlegendimage{empty legend}
\addlegendentry{}
\addlegendimage{blue,line legend,loosely dashdotdotted}
\addlegendentry{\makebox[0pt][l]{SGX RMM}}
\addplot[color=red,mark=triangle*,mark options={solid}] coordinates {
(1,0.4286207975)
(2,0.42832118)
(3,0.4317611809)
(4,0.4054276782)
(5,0.4099751594)
};
\addplot[color=blue,loosely dashdotdotted,mark=triangle*,mark options={solid}] coordinates {
(1,0.5091803588)
(2,0.5128553853)
(3,0.5189976453)
(4,0.4832300157)
(5,0.4842871703)
};
\addplot[color=red,mark=pentagon,mark options={solid}] coordinates {
(1,1.413340008)
(2,1.426687564)
(3,1.410301974)
(4,1.428368046)
(5,1.426795196)
};
\addplot[color=blue,loosely dashdotdotted,mark=pentagon,mark options={solid}] coordinates {
(1,1.752690016)
(2,1.760609633)
(3,1.780741377)
(4,1.776867528)
(5,1.786623001)
};
\addplot[color=red,mark=square*,mark options={solid}] coordinates {
(1,0.2726836983)
(2,0.2901144753)
(3,0.2909748658)
(4,0.2663475845)
(5,0.2112050663)
};
\addplot[color=blue,loosely dashdotdotted,mark=square*,mark options={solid}] coordinates {
(1,0.6311147749)
(2,0.5718062852)
(3,0.5703824493)
(4,0.5254294729)
(5,0.4210763144)
};
\addplot[color=red,mark=diamond,mark options={solid}] coordinates {
(1,0.3077821949)
(2,0.3310894556)
(3,0.3318761619)
(4,0.3113047857)
(5,0.2433932361)
};
\addplot[color=blue,loosely dashdotdotted,mark=diamond,mark options={solid}] coordinates {
(1,0.7007663515)
(2,0.6389719644)
(3,0.6368137533)
(4,0.5932088652)
(5,0.4718251353)
};
\end{axis}
\end{tikzpicture}
\label{subfig:throughput_heap}
}
\subfloat[Available heap with respect to time spent on matrix-matrix(vector) multiplication]{%
\begin{tikzpicture}%
\begin{axis}[sharp plot,
xlabel={(SGX Max Heap Size, Blocking Size) MB},
ylabel={Time (Sec)},
xticklabels={{(100,32)},{(150,48)},{(180,64)},{(200,80)},{(220,96)}},
xtick=data,
axis lines*=left,
width=.5\textwidth,
height=.3\textwidth,
xticklabel style={font=\tiny},
yticklabel style={font=\tiny},
xlabel style={font=\tiny},
ylabel style={font=\tiny},
]
\addplot[color=red,mark=triangle*,mark options={solid}] coordinates {
(1,38.185016)
(2,38.072908)
(3,37.711327)
(4,40.612592)
(5,40.142905)
};
\addplot[color=blue,loosely dashdotdotted,mark=triangle*,mark options={solid}] coordinates {
(1,21.480757)
(2,21.039984)
(3,20.193895)
(4,23.620996)
(5,23.635319)
};
\addplot[color=red,mark=pentagon,mark options={solid}] coordinates {
(1,20.412225)
(2,20.198708)
(3,20.33375)
(4,20.18655)
(5,20.235605)
};
\addplot[color=blue,loosely dashdotdotted,mark=pentagon,mark options={solid}] coordinates {
(1,13.448346)
(2,13.51716)
(3,13.172704)
(4,13.276503)
(5,13.091452)
};
\addplot[color=red,mark=square*,mark options={solid}] coordinates {
(1,165.139311)
(2,148.394364)
(3,148.479209)
(4,154.785324)
(5,203.783577)
};
\addplot[color=blue,loosely dashdotdotted,mark=square*,mark options={solid}] coordinates {
(1,39.002411)
(2,45.289763)
(3,44.895975)
(4,44.999765)
(5,55.11462)
};
\addplot[color=red,mark=diamond,mark options={solid}] coordinates {
(1,146.94794)
(2,130.328297)
(3,129.872369)
(4,130.306258)
(5,177.165922)
};
\addplot[color=blue,loosely dashdotdotted,mark=diamond,mark options={solid}] coordinates {
(1,35.667228)
(2,40.37162)
(3,39.963483)
(4,40.954017)
(5,49.954216)
};
\end{axis}
\end{tikzpicture}
\label{subfig:gemm_heap}
}
\end{center}
\caption{The impact of increasing TEE heap size on (\protect{\subref{subfig:throughput_heap}}) overall throughput and (\protect{\subref{subfig:gemm_heap}}) the time spent in matrix multiplication routine . VGG shows significant reduction in performance as opposed to ResNet.}
\label{fig:heap_block_impact_performance}
\end{figure}
As shown in Figure~\ref{fig:heap_block_impact_performance}, which depicts the impact of the heap size on the performance of DNNs for a single SGD step, increasing the heap size well beyond the available hardware limit (around 92MB) can have a negative impact on performance, especially in the case of the VGG architecture. This result is mainly due to 1) driver-level paging, which needs to evict enclave pages and requires an extra level of encryption/decryption, and 2) extra bookkeeping for the evicted pages.
\subsection{TEE Performance on CIFAR10}
\label{subsec:appdx_tee_cifar10}
\begin{table}[t]
\centering
\caption{TEE Architectures Used}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Arch} & \textbf{FC1} & \textbf{FC2} & \textbf{FC3} \\
\hline
VGG11 & (128,10) & (128,64,10) & (256,128,10) \\
\hline
VGG13 & (128,10) & (128,64,10) & (256,128,10) \\
\hline
VGG16 & (128,10) & (128,64,10) & (256,128,10) \\
\hline
\end{tabular}
\label{tab:sgx_cifar_archs}
\end{table}
\begin{figure}[t]
\begin{center}
\begin{tikzpicture}%
\begin{groupplot}[
group style={
group name=teeperf,
group size=9 by 1,
%
/pgf/bar width=.2,
ylabels at=edge left,
xlabels at=edge bottom,
y descriptions at=edge left,
%
%
horizontal sep=0.5cm,
%
},
ybar=\pgflinewidth,
ylabel=Throughput (Images/Sec),
ylabel style={font=\small},
xticklabels={SGX,SGX\textsuperscript{RMM}},
xticklabel style={font=\tiny},
yticklabel style={font=\tiny},
xtick = data,
nodes near coords,
nodes near coords style={font=\tiny,rotate=90,anchor=west},
ymax=60,
ymin=5,
axis lines*=left,
width=0.18\textwidth,
height=0.40\textwidth,
clip=false,
ytick align=outside,
enlarge x limits={abs=.22},
legend entries = {Forward,Backward,Overall},
legend columns=-1,
legend to name=legendnamed2,
]
\nextgroupplot[
xlabel=VGG11-FC1,xlabel style={font=\scriptsize}]
\addplot coordinates {
(1,59.84348124) (2,45.75329039)
};
\addplot coordinates {
(1,17.02189002) (2,34.59035027)
};
\addplot coordinates {
(1,13.252380619) (2,17.07460978)
};
\nextgroupplot[
xlabel=VGG11-FC2,y axis line style={draw opacity=0},ytick style={draw=none},xlabel style={font=\scriptsize}]
\addplot coordinates {
(1,57.11175164) (2,45.30102709)
};
\addplot coordinates {
(1,16.9941199) (2,31.43602977)
};
\addplot coordinates {
(1,13.0969913) (2,15.73407254)
};
\nextgroupplot[
xlabel=VGG11-FC3,y axis line style={draw opacity=0},ytick style={draw=none},,xlabel style={font=\scriptsize}]
\addplot coordinates {
(1,57.41802792) (2,53.27388855)
};
\addplot coordinates {
(1,17.09887867) (2,32.76016691)
};
\addplot coordinates {
(1,13.17531736) (2,17.30244361)
};
\nextgroupplot[
xlabel=VGG13-FC1,y axis line style={draw opacity=0},ytick style={draw=none},xlabel style={font=\scriptsize}]
\addplot coordinates {
(1,50.90926736) (2,41.35537058)
};
\addplot coordinates {
(1,16.26040634) (2,24.89223413)
};
\addplot coordinates {
(1,12.32409402) (2,12.75320848)
};
\nextgroupplot[
xlabel=VGG13-FC2,y axis line style={draw opacity=0},ytick style={draw=none},xlabel style={font=\scriptsize}]
\addplot coordinates {
(1,47.98793703) (2,42.32061103)
};
\addplot coordinates {
(1,16.10680017) (2,31.61707767)
};
\addplot coordinates {
(1,12.05921338) (2,16.76332426)
};
\nextgroupplot[
xlabel=VGG13-FC3,y axis line style={draw opacity=0},ytick style={draw=none},xlabel style={font=\scriptsize}]
\addplot coordinates {
(1,48.31360968) (2,40.75428541)
};
\addplot coordinates {
(1,16.04266748) (2,32.65643517)
};
\addplot coordinates {
(1,12.04356761) (2,16.9894733)
};
\nextgroupplot[
xlabel=VGG16-FC1,y axis line style={draw opacity=0},ytick style={draw=none},xlabel style={font=\scriptsize}]
\addplot coordinates {
(1,40.17692915) (2,34.02410236)
};
\addplot coordinates {
(1,10.74517991) (2,19.2535721)
};
\addplot coordinates {
(1,8.477817195) (2,9.824769857)
};
\nextgroupplot[
xlabel=VGG16-FC2,y axis line style={draw opacity=0},ytick style={draw=none},xlabel style={font=\scriptsize}]
\addplot coordinates {
(1,39.29785781) (2,30.6699589)
};
\addplot coordinates {
(1,10.78273744) (2,19.87296209)
};
\addplot coordinates {
(1,8.461131117) (2,9.95890396)
};
\nextgroupplot[
xlabel=VGG16-FC3,y axis line style={draw opacity=0},ytick style={draw=none},xlabel style={font=\scriptsize}]
\addplot coordinates {
(1,38.10416309) (2,34.39118468)
};
\addplot coordinates {
(1,10.71732568) (2,17.43162683)
};
\addplot coordinates {
(1,8.364651219) (2,9.011844239)
};
\end{groupplot}
\node [above] at (current bounding box.north) {Throughput Performance (CIFAR10)};
\end{tikzpicture}
\ref{legendnamed2}
\caption{Throughput of an SGD training step for the nine VGG variants of Table~\ref{tab:sgx_cifar_archs} on the CIFAR10 dataset, with and without Randomized Matrix Multiplication.}
\label{fig:vgg_cifar10_multiconv_multifc_performance}
\end{center}
\end{figure}
Figure~\ref{fig:vgg_cifar10_multiconv_multifc_performance} shows throughput performance for the CIFAR10 dataset and 9 different VGG architectures (Table~\ref{tab:sgx_cifar_archs}). We chose three VGG (11, 13, 16) architectures adapted for CIFAR10 image inputs, each with custom fully connected layers attached to its end. For CIFAR10, we generally do not benefit from the randomized matrix multiplication scheme as much as for ImageNet, mainly because most of the operations and network layers fit well within the hardware memory limit. Since the dimensions of the MM operations are not very large, randomized matrix multiplication does not bring a significant improvement.
\subsection{Impact of Gradient Clipping for Honest Trainers}
\begin{figure}[t]
\begin{center}
\subfloat[]{%
\label{fig:no_attack_clean_refruns_meanstd}
\input{tex_plots/mean_std_ref_acc}
}
\subfloat[]{%
\label{fig:no_attack_clean_runs_best}
\input{tex_plots/best_gaps}
}
\subfloat[]{%
\label{fig:no_attack_clean_runs_worst}
\input{tex_plots/worst_gaps}
}
\end{center}
\caption{~\ref{fig:no_attack_clean_refruns_meanstd} Reference Models (no gradient clipping) mean/std on test accuracy of 5 repeats for two different learning rates. Each configuration had 5 repeats and a reference model (no attack and unbounded updates).~\ref{fig:no_attack_clean_runs_best} For each run configuration the test accuracy difference $diff_{lr,clip}$ is defined as $max\left( acc_{lr,clip}^{rep}-acc_{ref}^{rep}\right) \quad \forall rep \in [1,5]$.~\ref{fig:no_attack_clean_runs_worst} $min\left( acc_{lr,clip}^{rep}-acc_{ref}^{rep}\right) \quad \forall rep \in [1,5]$}
\label{fig:no_attack_clean_runs}
\end{figure}
One important question is whether the gradient clipping used to prevent an attacker from changing parameters in a given mini-batch update has a performance impact during training sessions where there is no attack.
Six experiment configurations were repeated $5$ times each with different randomness (initialization, batch order, etc.). Initial learning rates are set to values in $\{0.1, 0.01\}$ and clipping thresholds to values in $\{\textit{nil}, 0.001, 0.0005\}$.
In total, there are 30 ResNet56 (a complex architecture with state-of-the-art performance) models trained on the CIFAR10 dataset (with no attack) for $200$ epochs. The results show that the clipping threshold has very little impact on test set accuracy, provided an appropriate initial learning rate is chosen.
Usually, for the \textit{SGD}~\cite{paper:optimizer_minibatch_sgd} optimizer with momentum, a value of $0.1$ is chosen (and for the \textit{Adam} optimizer, a value less than $0.001$).
For these experiments, we used the configuration with unbounded gradient updates as the main reference point. For the learning rate decay schedule, we used a fixed tenfold decay at epochs 50, 90, 130, and 160.
Figure~\ref{fig:no_attack_clean_refruns_meanstd} describes the mean and standard deviation (dashed lines) of test accuracy over 5 repetitions at each epoch for the two learning rate configurations. As can be seen, both models start to take large leaps toward convergence at the first two learning rate decays enforced by the scheduler. Note that these are reference runs in which no gradient clipping is applied during the update step. Toward the end of training, the setting with the higher initial learning rate performs slightly better in terms of accuracy (the y-axis is not on a \% scale).
In Figure~\ref{fig:no_attack_clean_runs_best}, for each combination of learning rate and clipping value, the largest difference in test accuracy with respect to the reference run is plotted. The plot shows that test accuracy is not influenced much by the clipping value; rather, it is highly dependent on the learning rate. When $lr=0.1$, both clipping values achieve accuracies close to those of the reference runs without gradient clipping, although slightly smaller (the difference is negative in most epochs, except for jumps at the start).
In Figure~\ref{fig:no_attack_clean_runs_worst}, the opposite of the previous measure is plotted. Again, by the end of training, the gaps are significantly tightened for the case where a better learning rate is chosen. Therefore, a smaller clipping value does not impact performance in any considerable way.
Overall, Figure~\ref{fig:no_attack_clean_runs} shows that clipping does not negatively impact the learning task once a good learning rate is chosen. If the trainer chooses an acceptable learning rate for the task, small clipping values such as $0.001$ or $0.0005$ do not impede the learning task. Once the model passes the first learning rate decay in the schedule, all the configurations behave the same in terms of their test performance compared to their reference model (no gradient clipping limit).
\subsection{All Backdoor Trigger Examples}
\begin{figure}[!htb]
\begin{center}
\subfloat[\tiny Large with multiple color variations]{%
\label{fig:appdx_CIFAR10_Trigger_Example:trigger1}
\scalebox{.95}{\includegraphics{assets/cifar10_w_triggers1.png}}
}%
\subfloat[\tiny Small with low color variations in two separate corners]{%
\label{fig:appdx_CIFAR10_Trigger_Example:trigger2}
\scalebox{.95}{\includegraphics{assets/cifar10_w_triggers2.png}}
}%
\subfloat[\tiny Gray scale]{%
\label{fig:appdx_CIFAR10_Trigger_Example:trigger3}
\scalebox{.95}{\includegraphics{assets/cifar10_w_triggers3.png}}
}%
\subfloat[\tiny Instagram Kelvin Filter]{%
\label{fig:appdx_CIFAR10_Trigger_Example:trigger4}
\scalebox{.95}{\includegraphics{assets/cifar10_w_triggers4.png}}
}%
\subfloat[\tiny Color Rotation Filter]{%
\label{fig:appdx_CIFAR10_Trigger_Example:trigger5}
\scalebox{.95}{\includegraphics{assets/cifar10_w_triggers5.png}}
}%
\subfloat[\tiny Mix of Rotation and Instagram Nashville Filters]{%
\label{fig:appdx_CIFAR10_Trigger_Example:trigger6}
\scalebox{.95}{\includegraphics{assets/cifar10_w_triggers6.png}}
}%
\end{center}
\caption[]{All examples of triggers on CIFAR10 images}
\label{fig:appdx_CIFAR10_Trigger_Example}
\end{figure}
Figure~\ref{fig:appdx_CIFAR10_Trigger_Example} shows six different backdoor trigger patterns that are used to conduct the attacks.
\section{Matrix Multiplication Ops of Common DNN Layers}
\label{section:MM_OPS_APX}
\begin{table}[htb]
\centering
\caption{Matrix Multiplication Operations}
\label{tab:MM-OPS-Verification}
\begin{adjustbox}{width=1\textwidth}
\begin{tabular}{|c|c|l|c|
}
\hline
\textbf{Layer Type} & \textbf{Pass} & \textbf{Computation} & \textbf{Verification}
\\
\hline
\multirow{3}{*}{Fully Connected}&Forward& $\mathcal{O}_{[B][O]} = \mathcal{I}_{[B][I]} \times (\mathcal{W}_{[O][I]})^\intercal $ &
$\begin{aligned}
\Upsilon_{[I][1]} & = (\mathcal{W}_{[O][I]})^\intercal \times \mathcal{R}_{[O][1]} \\
\mathcal{Z}_{[B][1]} & = \mathcal{I}_{[B][I]} \times \Upsilon_{[I][1]} \\
\mathcal{Z^{\prime}}_{[B][1]} & = \mathcal{O}_{[B][O]} \times \mathcal{R}_{[O][1]}
\end{aligned}$
\\
\cline{2-4}
&\makecell{Backward Parameters \\ Gradient}&
$\nabla_{[O][I]}^{\mathcal{W}} = (\nabla_{[B][O]}^{\mathcal{O}})^\intercal \times \mathcal{I}_{[B][I]}$ &
$\begin{aligned}
\Upsilon_{[B][1]} & = \mathcal{I}_{[B][I]} \times \mathcal{R}_{[I][1]} \\
\mathcal{Z}_{[O][1]} & = (\nabla_{[B][O]}^{\mathcal{O}})^\intercal \times \Upsilon_{[B][1]} \\
\mathcal{Z^{\prime}}_{[O][1]} & = \nabla_{[O][I]}^{\mathcal{W}} \times \mathcal{R}_{[I][1]}
\end{aligned}$
\\
\cline{2-4}
&\makecell{Backward Inputs \\ Gradient}&
$\nabla_{[B][I]}^{\mathcal{I}} = \nabla_{[B][O]}^{\mathcal{O}} \times \mathcal{W}_{[O][I]}$ &
$\begin{aligned} \Upsilon_{[O][1]} & = \mathcal{W}_{[O][I]} \times \mathcal{R}_{[I][1]} \\
\mathcal{Z}_{[B][1]} & = \nabla_{[B][O]}^{\mathcal{O}} \times \Upsilon_{[O][1]} \\
\mathcal{Z^{\prime}}_{[B][1]} & = \nabla_{[B][I]}^{\mathcal{I}} \times \mathcal{R}_{[I][1]}
\end{aligned}$
\\
\hline
\multirow{3}{*}{Convolutional}&Forward& $\mathcal{O}_{[f] [w_{o} . h_{o}]} = \mathcal{W}_{[f] [k^{2} . C_{i}]} \times \mathcal{I}_{[k^{2} . C_{i}][w_{o} . h_{o}]}$ &
$\begin{aligned}
\Upsilon_{[1][k^{2} . C_{i}]} & = \mathcal{R}_{[1][f]} \times \mathcal{W}_{[f] [k^{2} . C_{i}]} \\
\mathcal{Z}_{[1][w_{o} . h_{o}]} & = \Upsilon_{[1][k^{2} . C_{i}]} \times \mathcal{I}_{[k^{2} . C_{i}][w_{o} . h_{o}]} \\
\mathcal{Z^{\prime}}_{[1][w_{o} . h_{o}]} & = \mathcal{R}_{[1][f]} \times \mathcal{O}_{[f] [w_{o} . h_{o}]}
\end{aligned}$
\\
\cline{2-4}
&\makecell{Backward Parameters \\ Gradient}&
$\nabla_{[f][k^{2} . C_{i}]}^{\mathcal{W}} = \nabla_{[f][w_{o} . h_{o}]}^{\mathcal{O}} \times (\mathcal{I}_{[k^{2} . C_{i}][w_{o} . h_{o}]})^\intercal $ &
$\begin{aligned}
\Upsilon_{[w_{o} . h_{o}][1]} & = (\mathcal{I}_{[k^{2} . C_{i}][w_{o} . h_{o}]})^\intercal \times \mathcal{R}_{[k^{2} . C_{i}][1]} \\
\mathcal{Z}_{[f][1]} & = \nabla_{[f][w_{o} . h_{o}]}^{\mathcal{O}} \times \Upsilon_{[w_{o} . h_{o}][1]} \\
\mathcal{Z^{\prime}}_{[f][1]} & = \nabla_{[f][k^{2} . C_{i}]}^{\mathcal{W}} \times \mathcal{R}_{[k^{2} . C_{i}][1]}
\end{aligned}$
\\
\cline{2-4}
&\makecell{Backward Inputs \\ Gradient}&$\nabla_{[k^{2} . C_{i}][w_{o} . h_{o}]}^{\mathcal{I}} = (\mathcal{W}_{[f] [k^{2} . C_{i}]})^\intercal \times \nabla_{[f][w_{o} . h_{o}]}^{\mathcal{O}}$ &
$\begin{aligned}
\Upsilon_{[1][f]} & = \mathcal{R}_{[1][k^{2} . C_{i}]} \times (\mathcal{W}_{[f] [k^{2} . C_{i}]})^\intercal \\
\mathcal{Z}_{[1][w_{o} . h_{o}]} & = \Upsilon_{[1][f]} \times \nabla_{[f][w_{o} . h_{o}]}^{\mathcal{O}} \\
\mathcal{Z^{\prime}}_{[1][w_{o} . h_{o}]} & = \mathcal{R}_{[1][k^{2} . C_{i}]} \times \nabla_{[k^{2} . C_{i}][w_{o} . h_{o}]}^{\mathcal{I}}
\end{aligned}$
\\
\hline
\end{tabular}
\end{adjustbox}
\end{table}
Table~\ref{tab:MM-OPS-Verification} shows common MM operations in DNNs. Fully connected and convolutional layers use MM routines to compute the feed-forward output, parameter gradients, and input gradients.
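As an illustration of the verification column, the forward pass of a fully connected layer can be checked without recomputing the full product, in the spirit of randomized (Freivalds-style) matrix multiplication checks; a NumPy sketch:
\begin{verbatim}
import numpy as np

def verify_fc_forward(I, W, O, tol=1e-4):
    # Check O = I @ W.T by comparing I @ (W.T @ r) against O @ r
    # for a random vector r; a mismatch flags the outsourced result.
    r = np.random.randn(W.shape[0], 1)   # R_[O][1]
    upsilon = W.T @ r                    # Upsilon_[I][1]
    z = I @ upsilon                      # Z_[B][1]
    z_prime = O @ r                      # Z'_[B][1]
    return np.allclose(z, z_prime, atol=tol)

# I = np.random.randn(32, 512); W = np.random.randn(256, 512)
# assert verify_fc_forward(I, W, I @ W.T)
\end{verbatim}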
\section{\systemGOAT{}\xspace{} Blocking of Big Matrices}
By default, \systemGOAT{}\xspace{} allocates/releases resources on a per-layer basis. In the event that the memory requirements of the network cannot be satisfied even for a single sample (either a large network or large inputs), \systemGOAT{}\xspace{} breaks each layer down even further.
For convolutional layers, the main memory bottleneck is \textit{im2col}\footnote{Extracts redundant patches from the input image and lays them out in columnar format.}, which converts the layer's input (for each sample) of size $c_i \cdot w_i \cdot h_i$ into a $[k^{2} \cdot c_i] \times [w_o \cdot h_o]$ matrix ($k$ is the kernel window size) for more efficient matrix multiplication. \systemGOAT{}\xspace{} divides the inputs across the channel dimension and processes the \textit{im2col} on the maximum number of channels that can be processed at once.
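A rough sketch of this channel-wise blocking is given below (the memory budget and shapes are illustrative, not the exact values used by \systemGOAT{}\xspace{}):
\begin{verbatim}
def channels_per_block(c_i, k, w_o, h_o, budget_bytes, dtype_bytes=4):
    # Largest number of input channels whose im2col expansion
    # ([k^2 * c] x [w_o * h_o] values) fits in the memory budget.
    per_channel = (k * k) * (w_o * h_o) * dtype_bytes
    return max(1, min(c_i, budget_bytes // per_channel))

# channels_per_block(c_i=512, k=3, w_o=28, h_o=28,
#                    budget_bytes=32 * 2**20)  # -> 512 (all fit)
\end{verbatim}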
For fully-connected layers, the main memory bottleneck is the parameter matrix $\mathcal{W}_{[O][I]}$, which does not depend on the batch size. \systemGOAT{}\xspace{} divides the matrix across the first dimension (rows) and processes the outputs on the maximum number of rows that fits inside the TEE for the corresponding layer.
\section{Experiments}
\label{sec:Experiment}
We experimentally demonstrate that our approach, which we refer to as \emph{GradOpt}, is both more efficient---requiring far fewer actual or simulated drives---and effective---in successfully finding attack patterns---than the state-of-the-art Bayesian Optimization (\emph{BO}) method~\citep{Boloor_2020_JSA}.
To carry out a large scale evaluation, we perform our experiments in simulation using a packaged version of the Carla simulator~\citep{Dosovitskiy17} provided by~\citet{Boloor_2020_JSA}
that allows the addition of painted road patterns to existing environments.
Our experiments evaluate attacks against the neural network-based controller network that is included with Carla and uses only camera inputs and outputs steering angles; this network was trained using imitation learning.
We run evaluations only on scenarios where this controller drives successfully without infractions in the unmodified environment.
Our experiments use 40 scenarios of
driving through a stretch of road in a virtual town.
Each scenario begins an episode with the vehicle spawned at a given starting waypoint, and the controller is then tasked with reaching a defined destination waypoint.
The episode runs until the vehicle reaches this destination or a time-limit expires (e.g., if the car crashes). Our scenarios are of three types: (a) the expected behavior is for the car to go \emph{straight} (16 scenarios), (b) veer \emph{left} (12 scenarios), or (c) \emph{right} (12 scenarios).
In each scenario, the attacker can draw a pattern on the road with the intent of causing the
car to deviate from its intended path.
We consider patterns that are unions of rectangles (i.e., each ``figure'' in Sec.~\ref{sec:physmap} is a rectangle), where the shape of each rectangle is determined by four parameters (i.e., $x_{k}^{C}\in\mathbb{R}^{4}$): rotation, width, length, and horizontal offset.\footnote{We center the rectangles vertically prior to rotation, as in~\citet{Boloor_2020_JSA}.}
We report results from optimizing
shape and color parameters for different numbers of rectangles $K$, ranging from $K=1$ (7 parameters) to $K=5$ (35 parameters), and additionally for the single black rectangle case, also when optimizing only its shape (4 parameters).
We learn these parameters with respect to the top-view co-ordinate frame of a canvas, that during evaluation will be passed to the simulator to be superimposed on the road (and then captured by the camera as part of the scene in all frames).
We train both BO and GradOpt in a simulated setting without any pedestrians or other cars.
We then evaluate the success of the attack on actual simulations with Carla, in terms of two metrics. The first measures deviation between the paths with and without the attack pattern without pedestrians or other cars.
We define deviation as
\begin{equation}
\label{eq:devdef}
\mbox{Deviation} = \frac{1}{2T}\sum_{t=1}^{T} \left( \min_{t'} |\widetilde{W}_t - W_{t'}| + \min_{t'} |W_t - \widetilde{W}_{t'}| \right),
\end{equation}
where $\widetilde{W}_{t}$ and $W_{t}$ are sequences of car locations when driving with and without the attack pattern, at a fixed set of time instances defined as those when the region of the road where the attack pattern would appear is visible.
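For reference, the deviation of Eq.~\ref{eq:devdef} can be computed as in the following NumPy sketch over 2-D waypoint sequences (variable names are ours):
\begin{verbatim}
import numpy as np

def deviation(W_adv, W_clean):
    # Symmetric mean nearest-neighbour distance between the attacked
    # trajectory W_adv and the clean trajectory W_clean (each T x 2).
    d = np.linalg.norm(W_adv[:, None, :] - W_clean[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
\end{verbatim}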
Our second metric is the total infraction penalty when driving \emph{with} pedestrians and other cars, as defined by the Carla Autonomous Driving Challenge~\citep{CarlaChallenge} (for example, a lane violation carries a penalty of 2 points,
hitting a static object or another vehicle of 6,
hitting a pedestrian of 9, etc.).
For each attack and scenario, we run 10 simulations, randomly spawning pedestrians and cars each time, and average the infraction penalty scores.
Finally, while both BO and GradOpt are trained in a \emph{clear noon} weather setting, we measure infractions on that setting as well as three others: \emph{cloudy noon}, \emph{rainy noon}, and \emph{clear sunset}.
\subsection{Attack Optimization}
\noindent{\bf Proposed Method:} Our approach requires two steps for every scenario: (1) collecting a set of frame sequences and calibrating them, and (2) performing gradient-based optimization. For (1), we collect frames from $T=3$ trajectories; we compare different ways to generate these and the impact of varying $T$ below.
We estimate the homographies $G_{f}^{t}$ for every frame $f$ in trajectory $t$ by running additional simulations with calibration patterns painted on the canvas: we run 12 simulations for each trajectory, with 5 calibration points of different colors in each simulation, and use the 60 automatically detected correspondences to estimate the homography matrix for every frame. We also learn a common set of color transform parameters for all frames in all trajectories, which we obtain by running the simulator 22 times, each time with the entire canvas painted a different high-contrast color, on the unperturbed trajectory. In addition, we collect clean frames for each trajectory.
Altogether, our method calls the simulator a total of \emph{61} times.
Once we have a set of calibrated frames, we employ gradient-based optimization (see Sec.~\ref{sec:gopt}).
We run the optimization with four different random starts, each for 250 iterations, for a total of 1000 iterations. We begin with a learning rate of 0.1 for each start, and drop it by a factor of 0.1 after the first 100, and the first 200 iterations. Each time we drop the learning rate, we also reset the parameters to those that yielded the lowest value of the loss thus far.
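Schematically, this outer loop can be sketched as follows; \texttt{attack\_loss} and \texttt{grad} stand in for the differentiable pipeline of Sec.~\ref{sec:gopt} and are not spelled out here, so this is an illustration rather than the exact implementation.
\begin{verbatim}
import numpy as np

def optimize_attack(init_params, attack_loss, grad, n_starts=4, iters=250):
    global_best, global_best_loss = None, np.inf
    for _ in range(n_starts):
        params = init_params()                 # random start
        best, best_loss, lr = params.copy(), attack_loss(params), 0.1
        for it in range(1, iters + 1):
            if it in (101, 201):               # drop lr after 100/200 iters
                lr *= 0.1
                params = best.copy()           # reset to best-so-far params
            params = params - lr * grad(params)
            loss = attack_loss(params)
            if loss < best_loss:
                best, best_loss = params.copy(), loss
        if best_loss < global_best_loss:
            global_best, global_best_loss = best, best_loss
    return global_best
\end{verbatim}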
\noindent{\bf Bayesian Optimization:} We employ BO with the same objective as ours based on the same divergence metric, and closely follow the setting in \citet{Boloor_2020_JSA}---i.e., we run the optimization for a total of 1000 iterations, of which the first 400 are random exploration. Note that while this is the same number of iterations as we use for our method, every iteration of BO requires running a full episode with the Carla simulator.
\subsection{Results}
\noindent{\bf Run-time:}
Recall that the training budget for both BO and GradOpt is 1000 iterations. BO requires a simulator call at each iteration, with training times ranging from 7 to 25 hours, depending on the scenario. In contrast, our method only calls the simulator 61 times for calibration.
Ignoring the potential of parallelizing the iterations for the four random starts, GradOpt has a total running time of up to 2.5 hours per scenario, including both calibration and training. Thus, our method affords a significant advantage in computational cost and, as we show next, is also more successful at finding optimal attack patterns.
\begin{table*}[h]
\centering
\begin{tabular}{ccccccc}
\toprule
& \multicolumn{6}{c}{\bf Deviation}\\
K
&\multicolumn{4}{c}{Group} & \multicolumn{1}{c}{Group-All} & \multicolumn{1}{c}{Random}\\%
\cmidrule(lr){2-5}\cmidrule(lr){6-6} \cmidrule(lr){7-7}
(\#Rect.)
&\multirow{1}{*}{T=1}
&\multirow{1}{*}{T=3}
&\multirow{1}{*}{T=5}
&\multirow{1}{*}{T=7}
&\multirow{1}{*}{T=3}
&\multirow{1}{*}{T=3}\\
\midrule
1-b & \emph{0.89} & \emph{0.92} & \emph{1.03} & \emph{0.98} & \emph{0.92} & \emph{0.84}\\
1 & \emph{0.86} & \emph{0.89} & \emph{0.89} & \emph{0.95} & \emph{0.94} & \emph{0.86}\\
2 & \emph{1.05} & \emph{1.13} & \emph{1.07} & \emph{1.20} & \emph{1.13} & \emph{1.16}\\
3 & \emph{1.11} & \emph{1.14} & \emph{1.22} & \emph{1.33} & \emph{1.26} & \emph{1.09}\\
4 & \emph{1.21} & \emph{1.30} & \emph{1.28} & \emph{1.28} & \emph{1.32} & \emph{1.14}\\
5 & \emph{1.23} & \emph{1.33} & \emph{1.33} & \emph{1.33} & \emph{1.33} & \emph{1.22}\\ \bottomrule
\end{tabular}
\begin{tabular}{ccccccc}
\toprule
&\multicolumn{6}{c}{\bf Infraction Penalty} \\
K&\multicolumn{4}{c}{Group}&
\multicolumn{1}{c}{Group-All}& \multicolumn{1}{c}{Random} \\
\cmidrule(lr){2-5}\cmidrule(lr){6-6} \cmidrule(lr){7-7}
(\#Rect.)
&\multirow{1}{*}{T=1}
&\multirow{1}{*}{T=3}
&\multirow{1}{*}{T=5}
&\multirow{1}{*}{T=7}
&\multirow{1}{*}{T=3}
&\multirow{1}{*}{T=3}\\
\midrule
1-b & \emph{3.80} &\emph{4.25} &\emph{4.19} & \emph{4.43} & \emph{3.82} & \emph{3.98}\\
1 & \emph{4.18} &\emph{5.20} &\emph{5.54} & \emph{4.97} & \emph{4.73} & \emph{4.68}\\
2 & \emph{3.86} &\emph{5.16} &\emph{5.33} & \emph{5.85} & \emph{4.96} & \emph{4.88}\\
3 & \emph{3.79} &\emph{6.04} &\emph{5.62} & \emph{6.50} & \emph{5.28} & \emph{4.97}\\
4 & \emph{5.04} &\emph{6.35} &\emph{5.45} & \emph{6.00} & \emph{5.39} & \emph{5.39}\\
5 & \emph{4.17} &\emph{6.65} &\emph{6.29} & \emph{6.26} & \emph{6.54} & \emph{5.83}\\ \bottomrule
\end{tabular}
\caption{Ablation analysis of variations of GradOpt.}
\label{T:ablation}
\end{table*}
\noindent{\bf Ablation Analysis of GradOpt:}
First we identify the best variation of GradOpt, in terms of the choice of $T$, the choice between \emph{Random} (randomly perturbing each trajectory) and \emph{Group} (generating pairs of perturbed trajectories, one with positive and another with negative perturbations), and for the latter, whether we pool all trajectories in one optimization problem (\emph{Group-All}) or separately optimize only positively/negatively perturbed trajectories, respectively (\emph{Group}).
The results in Table~\ref{T:ablation} show that \emph{Group} has the best performance, particularly in terms of infraction penalties.
Moreover, $T=3$ yields significant improvement over $T=1$, but further increasing $T$ does not.
This shows that our approach that makes use of perturbed trajectories to counter the car's self-correcting behavior is indeed important, and remarkably effective, requiring only 2 perturbed trajectories.
Moreover, we can see that separately solving the problem with only positive, and only negative, perturbation (both including the baseline), rather than pooling these into a single objective, is important in yielding more infractions, even though there is no difference in terms of divergence.
The intuition for this is that pooling only, say, positively perturbed trajectories makes them consistent with the goal of the optimization (which is to steer sharply to the right, in that case), and the attack is better able to counter self-correcting behavior of the vehicle.
Thus, we use the \emph{Group} variant of GradOpt in the sequel.
\begin{table*}[h]
\centering
\begin{tabular}{cp{10pt}cp{10pt}p{0pt}p{10pt}cp{10pt}}
\toprule
& \multicolumn{3}{c}{\bf Deviation} & & \multicolumn{3}{c}{\bf ~Infraction Penalty} \\% \cline{2-7}
K
&\multirow{2}{*}{ BO}&\multirow{2}{*}{GradOpt}&\multirow{2}{*}{\% $\geq$}&
&\multirow{2}{*}{ BO}&\multirow{2}{*}{GradOpt}&\multirow{2}{*}{\% $\geq$}\\
(\#Rect.)&&&&\\\midrule
1-b & 0.85 & \emph{0.92} & \emph{53\%} && 3.85 & \emph{4.25} &\emph{80\%}\\
1 & 0.85 & \emph{0.89} & \emph{45\%} && 4.30 & \emph{5.20} &\emph{69\%}\\
2 & 0.79 & \emph{1.13} & \emph{70\%} && 4.55 & \emph{5.16} &\emph{76\%}\\
3 & 0.93 & \emph{1.14} & \emph{78\%} && 4.53 & \emph{6.04} &\emph{74\%} \\
4 & 0.70 & \emph{1.30} & \emph{90\%} && 3.79 & \emph{6.35} &\emph{81\%}\\
5 & 0.84 & \emph{1.33} & \emph{82\%} && 4.73 & \emph{6.65} &\emph{80\%}\\ \bottomrule
\end{tabular}
\caption{Average deviation
and infraction penalties
over all scenarios for GradOpt and BO, when optimizing parameters of different numbers of rectangles (1-b optimizes only the shape of one black rectangle) in
``clear noon'' weather.
The $\%\ge$ column reports the percentage of instances where GradOpt has $\ge$ score than BO.
}
\label{T:base-all}
\end{table*}
\begin{figure}[h]
\centering
\includegraphics[width=0.82\columnwidth]{figureTrajectory.pdf}
\caption{Trajectory deviations induced by GradOpt and BO for 6 example scenarios.
}
\label{fig:traj}
\end{figure}
\noindent{\bf Efficacy of GradOpt:}
Table~\ref{T:base-all} compares GradOpt and BO
in terms of the metrics deviation and infraction penalty discussed above, computed with simulations in the same standard weather setting as used for attack optimization.
We report the averages, as well as the percentage of scenarios when GradOpt outperforms BO as comparison statistics.
We find that GradOpt is significantly more successful than BO, despite also being computationally less expensive as discussed earlier. It has higher average values of both deviation and infraction penalties, with the gap growing significantly for higher values of $K$---indicating that GradOpt is much better able to carry out optimization in a higher-dimensional parameter space and leverage the ability to use more complex patterns. Moreover, we find that it yields attack patterns that are as or more successful than BO in more than 70\% of cases in all settings, with the advantage rising to 82\% for $K=5$. Figure \ref{fig:traj} shows some example scenarios comparing trajectory deviations induced by the attack patterns discovered by the two algorithms.
Additional illustrations and visualizations are provided in the Supplement.
We take a closer look at the vulnerability of different types of scenarios by separately reporting infraction penalties for each in Table \ref{T:pertype}. In addition, we also report the corresponding infraction penalties computed in simulations \emph{without} cars or pedestrians in Table~\ref{T:pertypeScoreWithoutVP}.
We see that scenarios where the expected behavior is driving straight are the hardest to attack, likely because they are the simplest to drive in. BO tends to achieve only a moderate infraction score in these settings, even at higher values of $K$. In contrast, GradOpt reveals that even these scenarios are in fact vulnerable when one is allowed to consider more complex adversarial patterns---achieving an average infraction penalty that is
significantly
higher than BO at $K=5$. Conversely, driving right scenarios are the most vulnerable with both methods being successful even with simple patterns, with GradOpt again yielding higher deviation and more infractions.
\begin{table*}[h]
\centering
\begin{tabular}{cccccccccccc}
\toprule
K& \multicolumn{3}{c}{\bf Straight} &~~& \multicolumn{3}{c}{\bf Left} &~~& \multicolumn{3}{c}{\bf Right}\\
\#Rect.& {BO}&{GradOpt}&{\% $\geq$}&& {BO}&{GradOpt}&{\% $\geq$}&& {BO}&{GradOpt}&{\% $\geq$}\\\midrule
1-b & 2.24 & \emph{2.33} & \emph{84\%} && 4.70 & \emph{4.15} & \emph{76\%} && 5.14 & \emph{6.90} & \emph{78\%}\\
1 & 2.02 & \emph{2.81} & \emph{77\%} && 5.53 & \emph{4.79} & \emph{60\%} && 6.11 & \emph{8.78} & \emph{68\%}\\
2 & 1.91 & \emph{2.73} & \emph{91\%} && 5.28 & \emph{5.28} & \emph{63\%} && 7.33 & \emph{8.28} & \emph{68\%}\\
3 & 2.74 & \emph{4.30} & \emph{84\%} && 5.47 & \emph{5.45} & \emph{67\%} && 5.98 & \emph{8.94} & \emph{70\%}\\
4 & 0.97 & \emph{5.16} & \emph{95\%} && 5.00 & \emph{4.55} & \emph{72\%} && 6.33 & \emph{9.73} & \emph{73\%}\\
5 & 1.69 & \emph{4.86} & \emph{86\%} && 5.60 & \emph{6.65} & \emph{76\%} && 7.92 & \emph{9.04} & \emph{76\%}\\
\bottomrule
\end{tabular}
\caption{Infraction penalties by scenario type (driving straight, left, or right) in ``clear noon'' conditions.
}
\label{T:pertype}
\end{table*}
\begin{table*}[h]
\centering
\begin{tabular}{cccccccccccc}
\toprule
K& \multicolumn{3}{c}{\bf Straight} &~~& \multicolumn{3}{c}{\bf Left} &~~& \multicolumn{3}{c}{\bf Right}\\
\#Rect.& {BO}&{GradOpt}&{\% $\geq$}&& {BO}&{GradOpt}&{\% $\geq$}&& {BO}&{GradOpt}&{\% $\geq$}\\\midrule
1-b & 2.25 & \emph{1.50} & \emph{62\%} && 1.83 & \emph{2.67} & \emph{67\%} && 3.00 & \emph{6.50} & \emph{92\%}\\
1 & 1.38 & \emph{1.88} & \emph{69\%} && 3.17 & \emph{2.33} & \emph{67\%} && 5.33 & \emph{7.00} & \emph{67\%}\\
2 & 0.62 & \emph{1.62} & \emph{94\%} && 2.33 & \emph{2.17} & \emph{67\%} && 5.50 & \emph{8.33} & \emph{83\%}\\
3 & 2.38 & \emph{2.62} & \emph{75\%} && 3.83 & \emph{3.83} & \emph{75\%} && 5.50 & \emph{7.33} & \emph{75\%}\\
4 & 0.50 & \emph{2.25} & \emph{100\%} && 2.17 & \emph{1.83} & \emph{83\%} && 4.67 & \emph{7.00} & \emph{92\%}\\
5 & 1.75 & \emph{3.00} & \emph{81\%} && 3.17 & \emph{4.17} & \emph{83\%} && 5.67 & \emph{7.00} & \emph{67\%}\\\bottomrule
\end{tabular}
\caption{Infraction penalties \emph{without cars or pedestrians}, i.e., infraction penalties computed with only \emph{static objects}, in standard ``clear noon'' simulations for each type of scenario.}
\label{T:pertypeScoreWithoutVP}
\end{table*}
\begin{table*}[h]
\centering
\begin{tabular}{cccccccccccc}
\toprule
K& \multicolumn{3}{c}{\bf Cloudy Noon} &~~& \multicolumn{3}{c}{\bf Rainy Noon} &~~& \multicolumn{3}{c}{\bf Clear Sunset}\\
\#Rect.& {BO}&{GradOpt}&{\% $\geq$}&& {BO}&{GradOpt}&{\% $\geq$}&& {BO}&{GradOpt}&{\% $\geq$}\\\midrule
1-b & 2.29 & \emph{3.56} & \emph{78\%} && 2.69 & \emph{3.14} & \emph{83\%} && 2.41 & \emph{3.19} & \emph{89\%}\\
1 & 3.19 & \emph{3.61} & \emph{82\%} && 2.98 & \emph{4.17} & \emph{86\%} && 3.36 & \emph{3.12} & \emph{85\%}\\
2 & 2.87 & \emph{4.45} & \emph{86\%} && 4.03 & \emph{4.45} & \emph{78\%} && 2.92 & \emph{3.85} & \emph{84\%}\\
3 & 3.06 & \emph{5.21} & \emph{85\%} && 3.28 & \emph{5.40} & \emph{82\%} && 2.60 & \emph{4.38} & \emph{87\%}\\
4 & 2.39 & \emph{4.96} & \emph{89\%} && 3.18 & \emph{4.88} & \emph{85\%} && 2.38 & \emph{3.68} & \emph{84\%}\\
5 & 3.56 & \emph{5.87} & \emph{83\%} && 3.81 & \emph{5.26} & \emph{77\%} && 2.67 & \emph{4.37} & \emph{83\%}\\\bottomrule
\end{tabular}
\caption{Infraction penalties
over all scenarios
with weather conditions different from that used for optimizing attacks (``clear noon'').}
\label{T:climate}
\end{table*}
\noindent{\bf Transferability:}
We evaluate the robustness of our adversarial test generation approach by evaluating the success of generated adversarial perturbations in different climate and visibility conditions than those used for attack optimization. Table~\ref{T:climate} presents results for simulations with four such climate settings, and as expected, we find that both BO and GradOpt do see a drop in penalty scores compared to the standard setting in Table~\ref{T:base-all}. Nevertheless, most of the attacks induce infractions, especially at higher values of $K$, with GradOpt again being significantly more successful than BO.
\section*{Supplement}
\renewcommand{\subsection}{\section}
In this supplement, we provide further details of our shape parameterization and calibration, as well as additional results and visualizations from our experiments.
\subsection{Pattern Shape Parameters}
Each rectangle in our shape is parameterized by four values $x_{k}^{S}=[w,l,o,\theta]$, corresponding to width, length, horizontal offset, and rotation or orientation. These parameters are defined with respect to the top-down view of a $400\times 400$ pixel ``canvas'' that is composited onto the road surface. Each rectangle is first drawn aligned with the $x$- and $y$-axes of this canvas to be of width $w$ and length $l$ pixels, and vertically centered and horizontally offset so that its left edge is at $x=o$, and then rotated about the center of the canvas by angle $\theta$. Portions of rectangles that lie outside the canvas after this process are clipped from the pattern. Our parameterization expands on the one originally used by \citet{Boloor_2020_JSA} in two respects: by searching over the length $l$ instead of fixing it to the length of the canvas, and by having a separate orientation $\theta$ for each rectangle rather than a common one for all rectangles.
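A rectangle with parameters $[w, l, o, \theta]$ can be rasterized onto the canvas roughly as in the sketch below (NumPy; here the horizontal extent is taken to be $w$ and the vertical extent $l$, which is our reading of the convention above, and rotation is implemented by inverse-mapping pixel coordinates):
\begin{verbatim}
import numpy as np

def rectangle_mask(w, l, o, theta, canvas=400):
    # Rectangle with left edge at x=o, horizontal extent w, vertically
    # centered with extent l, then rotated by theta about the canvas center.
    ys, xs = np.mgrid[0:canvas, 0:canvas]
    cx = cy = canvas / 2.0
    c, s = np.cos(-theta), np.sin(-theta)    # inverse rotation
    xr = c * (xs - cx) - s * (ys - cy) + cx
    yr = s * (xs - cx) + c * (ys - cy) + cy
    return ((xr >= o) & (xr < o + w) &
            (yr >= cy - l / 2.0) & (yr < cy + l / 2.0))
\end{verbatim}
Pixels that map outside the unrotated rectangle are simply excluded, which also implements the clipping mentioned above.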
\subsection{Calibration Details}
We estimate homographies between the canvas and each frame from 60 corresponding pairs as described in Sec.~\ref{sec:Experiment}, using a direct linear transform. While doing so, we ensure the matrix has the correct sign so that homogeneous co-ordinates of points projected in the frame have a positive third co-ordinate when they are visible, and a negative one when they are ``behind the camera''. When compositing patterns on the frame, this allows us to retain only the portion of the pattern that would be visible from that viewpoint. The color transforms are estimated simply from the color correspondences using a least-squares fit.
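As a sketch of how the estimated calibration is applied (the conventions and the affine form of the color transform are our assumptions; the text above only specifies a least-squares fit):
\begin{verbatim}
import numpy as np

def project_canvas_points(G, pts):
    # Map Nx2 canvas points into a frame with homography G, keeping only
    # points whose homogeneous third coordinate is positive (visible).
    homog = np.hstack([pts, np.ones((len(pts), 1))]) @ G.T
    visible = homog[:, 2] > 0
    return homog[visible, :2] / homog[visible, 2:3], visible

def fit_color_transform(canvas_rgb, observed_rgb):
    # Least-squares affine map from canvas colors to observed colors.
    A = np.hstack([canvas_rgb, np.ones((len(canvas_rgb), 1))])  # N x 4
    M, *_ = np.linalg.lstsq(A, observed_rgb, rcond=None)        # 4 x 3
    return M
\end{verbatim}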
\begin{table}[h]
\centering
\begin{tabular}{cp{10pt}cp{10pt}p{0pt}p{10pt}cp{10pt}}
\toprule
& \multicolumn{3}{c}{\bf Deviation} & & \multicolumn{3}{c}{\bf ~Infraction Penalty} \\% \cline{2-7}
K
&\multirow{2}{*}{BO}&\multirow{2}{*}{GradOpt}&\multirow{2}{*}{\% $\geq$}&~~~~~~
&\multirow{2}{*}{BO}&\multirow{2}{*}{GradOpt}&\multirow{2}{*}{\% $\geq$}\\
\#Rect.&&&&\\\midrule
1-b & 0.97 & \emph{1.03} & \emph{45\%} && 1.45 & \emph{1.95} &\emph{80\%}\\
1 & 0.49 & \emph{0.94} & \emph{72\%} && 1.90 & \emph{2.25} &\emph{82\%}\\
2 & 0.55 & \emph{1.08} & \emph{82\%} && 1.80 & \emph{3.10} &\emph{90\%}\\
3 & 0.50 & \emph{1.11} & \emph{88\%} && 1.90 & \emph{3.40} &\emph{90\%} \\
4 & 0.57 & \emph{1.11} & \emph{90\%} && 1.90 & \emph{3.00} &\emph{85\%}\\
5 & 0.51 & \emph{1.19} & \emph{88\%} && 2.55 & \emph{3.15} &\emph{78\%}\\ \bottomrule
\end{tabular}
\caption{Deviation and infraction penalties, \emph{both} computed without cars or pedestrians, over all scenarios in ``cloudy noon'' conditions.}
\label{T:base-cloudy-all}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{cp{10pt}cp{10pt}p{0pt}p{10pt}cp{10pt}}
\toprule
& \multicolumn{3}{c}{\bf Deviation} & & \multicolumn{3}{c}{\bf ~Infraction Penalty} \\% \cline{2-7}
K
&\multirow{2}{*}{BO}&\multirow{2}{*}{GradOpt}&\multirow{2}{*}{\% $\geq$}&~~~~~~
&\multirow{2}{*}{BO}&\multirow{2}{*}{GradOpt}&\multirow{2}{*}{\% $\geq$}\\
\#Rect.&&&&\\\midrule
1-b & 0.72 & \emph{1.02} & \emph{57\%} && 1.15 & \emph{2.30} &\emph{88\%}\\
1 & 0.50 & \emph{0.86} & \emph{72\%} && 1.45 & \emph{2.45} &\emph{90\%}\\
2 & 0.51 & \emph{0.88} & \emph{75\%} && 1.95 & \emph{2.85} &\emph{82\%}\\
3 & 0.57 & \emph{1.10} & \emph{75\%} && 1.90 & \emph{3.90} &\emph{85\%} \\
4 & 0.58 & \emph{0.97} & \emph{80\%} && 1.75 & \emph{2.70} &\emph{85\%}\\
5 & 0.58 & \emph{1.24} & \emph{90\%} && 2.50 & \emph{3.80} &\emph{82\%}\\ \bottomrule
\end{tabular}
\caption{Deviation and infraction penalties, \emph{both} computed without cars or pedestrians, over all scenarios in ``rainy noon'' conditions.}
\label{T:base-Rain-all}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{cp{10pt}cp{10pt}p{0pt}p{10pt}cp{10pt}}
\toprule
& \multicolumn{3}{c}{\bf Deviation} & & \multicolumn{3}{c}{\bf ~Infraction Penalty} \\% \cline{2-7}
K
&\multirow{2}{*}{BO}&\multirow{2}{*}{GradOpt}&\multirow{2}{*}{\% $\geq$}&
&\multirow{2}{*}{BO}&\multirow{2}{*}{GradOpt}&\multirow{2}{*}{\% $\geq$}\\
\#Rect.&&&&\\\midrule
1-b & 0.49 & \emph{0.69} & \emph{57\%} && 1.15 & \emph{1.30} &\emph{88\%}\\
1 & 0.40 & \emph{0.62} & \emph{62\%} && 1.75 & \emph{2.00} &\emph{82\%}\\
2 & 0.29 & \emph{0.71} & \emph{78\%} && 1.30 & \emph{1.90} &\emph{85\%}\\
3 & 0.38 & \emph{0.96} & \emph{90\%} && 0.95 & \emph{3.30} &\emph{95\%} \\
4 & 0.35 & \emph{1.07} & \emph{85\%} && 1.60 & \emph{2.25} &\emph{85\%}\\
5 & 0.44 & \emph{1.02} & \emph{85\%} && 1.90 & \emph{2.75} &\emph{85\%}\\ \bottomrule
\end{tabular}
\caption{Deviation and infraction penalties, \emph{both} computed without cars or pedestrians, over all scenarios in ``clear sunset'' conditions.}
\label{T:base-sunset-all}
\end{table}
\subsection{Additional Results}
In our main evaluation, we reported deviations (without cars or pedestrians) only for the overall evaluation in Table~\ref{T:base-all} and reported infraction penalties primarily \emph{with} cars and pedestrians for all other comparisons.
For completeness, we report deviation scores for those comparisons here, as well as infraction penalties computed in simulations without cars or pedestrians (with the caveat that some of the highest penalties defined by the challenge are for collisions with pedestrians and cars).
Tables~\ref{T:base-cloudy-all}--\ref{T:base-sunset-all} report both deviation scores and infraction penalties computed without cars or pedestrians for simulations in the different non-standard weather conditions. Table~\ref{T:pertypeDeviation} reports deviation scores separately for different types of scenarios. We find that these results are qualitatively consistent with those in our main evaluation in Sec.~\ref{sec:Experiment}.
\begin{table*}[!b]
\centering
\begin{tabular}{cccccccccccc}
\toprule
K& \multicolumn{3}{c}{\bf Straight} &~~& \multicolumn{3}{c}{\bf Left} &~~& \multicolumn{3}{c}{\bf Right}\\
\#Rect.& {BO}&{GradOpt}&{\% $\geq$}&& {BO}&{GradOpt}&{\% $\geq$}&& {BO}&{GradOpt}&{\% $\geq$}\\\midrule
1-b & 0.72 & \emph{0.52} & \emph{38\%} && 0.89 & \emph{1.25} & \emph{58\%} && 0.99 & \emph{1.12} & \emph{67\%}\\
1 & 0.49 & \emph{0.50} & \emph{31\%} && 0.89 & \emph{1.23} & \emph{75\%} && 1.30 & \emph{1.06} & \emph{33\%}\\
2 & 0.47 & \emph{0.70} & \emph{69\%} && 0.94 & \emph{1.47} & \emph{75\%} && 1.05 & \emph{1.35} & \emph{67\%}\\
3 & 0.62 & \emph{0.79} & \emph{81\%} && 1.23 & \emph{1.35} & \emph{67\%} && 1.02 & \emph{1.39} & \emph{83\%}\\
4 & 0.40 & \emph{0.87} & \emph{94\%} && 0.93 & \emph{1.75} & \emph{92\%} && 0.86 & \emph{1.42} & \emph{83\%}\\
5 & 0.54 & \emph{0.93} & \emph{81\%} && 0.94 & \emph{1.83} & \emph{92\%} && 1.15 & \emph{1.36} & \emph{75\%}\\\bottomrule
\end{tabular}
\caption{Deviations in simulations without cars or pedestrians, in standard ``clear noon'' weather conditions, for each type of scenario. }
\label{T:pertypeDeviation}
\end{table*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.5\columnwidth]{perturbations.pdf}
\caption{Attack patterns with $K=4$ rectangles (with transparent background) returned by GradOpt and BO for the example ``drive straight'' scenario illustrated in Figures \ref{fig:visual-gradOptReal} and \ref{fig:visual-gradOptBase}.}
\label{fig:perturbations}
\end{figure*}
\subsection{Visualization}
Finally, we use an example ``drive straight'' scenario to visualize the behavior of the controller when driving with attack patterns, with $K=4$ rectangles each, returned by GradOpt and BO. We show these patterns, in the top-down canvas view, in Fig.~\ref{fig:perturbations}. Then, we show frames from the vehicle's camera feed as it drives along the road with the respective patterns painted on it in various climate conditions, in simulations with pedestrians and cars in Fig.~\ref{fig:visual-gradOptReal}, and without in Fig.~\ref{fig:visual-gradOptBase}. For this scenario, the pattern returned by BO is unable to cause a significant deviation in the vehicle's trajectory as it drives across the stretch of road with the pattern painted on it. In contrast, GradOpt's pattern causes the car to veer sharply to the left in all but the ``cloudy noon'' climate setting---making it crash into another car in Fig.~\ref{fig:visual-gradOptReal}, and run onto the opposite sidewalk in the ``clear noon'' and ``clear sunset'' settings in Fig.~\ref{fig:visual-gradOptBase}.
\begin{figure*}[h]
\centering
\textbf{Simulations with GradOpt Pattern}\\
\includegraphics[width=0.95\columnwidth]{GradOptRealistic-compressed.pdf}\\~\\
\textbf{Simulations with BO Pattern}\\
\includegraphics[width=0.95\columnwidth]{BayesRealistic-compressed.pdf}\\
\caption{Frames from driving simulations, with cars and pedestrians in different weather conditions, after introducing attack patterns from GradOpt (top) and BO (bottom).}
\label{fig:visual-gradOptReal}
\end{figure*}
\begin{figure*}[h]
\centering
\textbf{Simulations with GradOpt Pattern}\\
\includegraphics[width=0.95\columnwidth]{GradOptBase-compressed.pdf}\\~\\
\textbf{Simulations with BO Pattern}\\
\includegraphics[width=0.95\columnwidth]{BayesBase-compressed.pdf}\\
\caption{Frames from driving simulations, without cars or pedestrians in different weather conditions, after introducing attack patterns from GradOpt (top) and BO (bottom).}
\label{fig:visual-gradOptBase}
\end{figure*}
\section{Proposed Method}
\label{sec:Technique}
Autonomous driving systems are equipped with decision algorithms that produce control signals for a vehicle based on high-level instructions---such as a given route or destination---and inputs from cameras and other sensors that make continuous measurements of the vehicle's physical environment. We assume that the decision algorithm is in the form of a differentiable function---such as a neural network---that maps video frames from the camera, along with other inputs, to the control outputs. Given such a network or function, our goal is to determine if it is vulnerable to attack. Specifically, we seek to build a scalable and efficient method to find modifications that can be applied to a simulated autonomous driving environment with a sophisticated physics engine, and result in a stream of video frames which cause the control network to produce output signals that disrupt the vehicle's operation, moving it away from the expected ideal trajectory.
This task is challenging since the relationship between modifications to the simulated physical environment and the network's inputs is complex: the video frames correspond to images of the environment from a sequence of changing viewpoints, where the sequence itself depends on the network's control outputs. The precise effect of any given modification can be determined only by actually driving the vehicle in the modified simulated environment that uses
a physics engine.
However, it is expensive to use such a simulator
when searching for the right modification, since the process is not differentiable with respect to parameters of the modification, and would require repeated trials with candidate modifications in every step of the search process.
Instead, we propose a fast approximation to produce video frames for a given environment given a candidate modification that is differentiable with respect to parameters of the modification. Our approach requires a small number of initial simulated calibration runs,
after which the search for optimal parameters can be carried out with end-to-end gradient-based optimization. Specifically, we consider the case when the modification takes the form of figures (such as rectangles) drawn on a restricted stretch of the road, and task the optimization with finding their optimal shape and color so as to maximize deviation from the controller's trajectory prior to modification. We now describe our model for the physical modification, our approximate mapping to create video frames for a given modification, and our optimization approach based on this mapping.
\subsection{Parameterized Scene Modifications}
\label{sec:physmap}
We assume modifications are in the form of a collection of $K$ figures (e.g., rectangles) that will be painted on a flat surface in the environment (e.g., road).
Let $\Phi=\{x_k^S, x_k^C\}_{k=1}^K$ denote the parameters of this modification, with $x_k^S$ corresponding to the shape parameters, and $x_k^C$ the RGB color, of the $k^{th}$ figure. These parameters are defined with respect to co-ordinates in some canonical---say, top-down---view of the surface.
We let $M(n_c; x^S)$ denote a scalar-valued mask that represents whether a pixel at spatial location $n_c\in\mathbb{Z}^2$ in the canonical view is within a figure with shape parameters $x^S$. This function depends simply on the chosen geometry of the figures, and has value of $1$ for pixels within the figure, $0$ for those outside, and real values between $0$ and $1$ for those near the boundary (representing partial occupancy on a discrete pixel grid to prevent aliasing artifacts).
Since the spatial extents of different figures may overlap, we next account for occlusions by assuming that the figures will be painted in order. Accordingly, we define a series of visibility functions $V_k(n_c; \Phi)$, each representing the visibility of the $k^{th}$ figure at pixel $n_c$, after accounting for occlusions. We set the function for the last figure as $V_K(n_c; \Phi) = M(n_c; x_K^S)$, and for the other figures with $k < K$,
\begin{equation}
V_k(n_c; \Phi) = M(n_c; x_k^S) \prod_{k'=k+1}^K \left(1-V_{k'}(n_c; \Phi)\right).
\end{equation}
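A minimal NumPy sketch of this occlusion bookkeeping (a hedged illustration with hypothetical names, assuming the masks $M(\cdot;x_k^S)$ have already been rasterized on the canvas) is:
\begin{verbatim}
import numpy as np

def visibility_maps(masks):
    """Compute V_k from the masks M_k, assuming figures are painted in
    order so that later figures occlude earlier ones.

    masks: list of K float arrays in [0, 1], all with the same canvas shape.
    """
    K = len(masks)
    vis = [None] * K
    vis[K - 1] = masks[K - 1]            # the last figure is fully visible
    not_covered = 1.0 - vis[K - 1]       # running product of (1 - V_{k'})
    for k in range(K - 2, -1, -1):
        vis[k] = masks[k] * not_covered
        not_covered = not_covered * (1.0 - vis[k])
    return vis
\end{verbatim}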
\subsection{Approximate Frames via Compositing}
\label{sec:composite}
The next step in our pipeline deals with generating the video inputs that the controller network is expected to receive from a modified environment for given parameter values $\Phi$. These frames will represent views of the environment, including the surface with the painted figures, from a sequence of viewpoints as the car drives through the scene. Of course, the precise viewpoint sequence will depend on the trajectory of the car, which will depend on the control outputs from the network, which in turn depend on the frames. Instead of modeling the precise trajectory for every modification, we consider a small set of $T$ representative trajectories, corresponding to those that the vehicle follows when driven with small perturbations around its control outputs while operating in the unmodified environment.
One trajectory involves driving the car with the actual output control signals.
To generate others, we consider two variants: 1) adding random noise to control outputs (\emph{Random}), and 2) adding trajectories in pairs, one with random deviations to the left only, and the other only including random deviations to the right (termed \emph{Group}).
Given the fact that actual control is closed-loop, it is not evident that either variant of this simple approach would work; however, our experiments below (using $T=3$) show that both ideas are remarkably effective.
This gives $T$ sequences of video frames, one for each trajectory, where we assume each sequence contains $F$ frames. We let $\tilde{I}_{f}^t(n)$ denote the $f^{th}$ ``clean'' image in the $t^{th}$ sequence, representing a view of the environment without any modifications. Here, $n\in\mathbb{Z}^2$ indexes pixel location within each image, and the intensity vector $\tilde{I}_f^t(n)\in\mathbb{R}^3$ at each location corresponds to the recorded RGB values. These clean images can be obtained by driving the car---actually, or in simulation---in the original environment.
For each frame in each sequence, we also determine a spatial mapping $n_c = G_{f}^t(n)$ that maps pixel locations in the image to the canonical view. We model each $G_f^t(n)$ as a homography: the parameters of which can be determined by either using correspondences between each image and the canonical view of the surface---from calibration patterns rendered using the simulator, or from user input---or by calibrating the vehicle's camera and making accurate measurements of ego-motion when the vehicle is being driven. Additionally, we also determine color mapping parameters $C_f^t \in \mathbb{R}^{3\times 3}, b_f^t \in \mathbb{R}^3$ for each frame representing an approximate linear relationship between the RGB colors $x^C$ of the painted figures, and their colors as visible in each frame. These parameters are also determined through calibration.
Given this set of clean frames and the geometric and color mapping parameters, we generate corresponding frames with views of the modified environment simply as:
\begin{align}
\label{eq:composite}
\begin{split}
I_f^t(n; \Phi)= \left(1 - \sum_{k=1}^K V_k(G_f^t(n); \Phi)\right) \tilde{I}_f^t(n)
+ \sum_{k=1}^K V_k(G_f^t(n); \Phi)\left(C_f^t x_k^C + b_f^t\right).
\end{split}
\end{align}
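In code, this compositing amounts to a per-figure alpha blend; a schematic NumPy version (with hypothetical names; warping the visibility maps into the frame via $G_f^t$ is assumed to be done beforehand) is:
\begin{verbatim}
import numpy as np

def composite_frame(clean, vis_warped, colors, C, b):
    """Approximate view of the modified environment for one frame.

    clean:      (H, W, 3) clean frame intensities.
    vis_warped: list of K (H, W) visibility maps warped into this frame.
    colors:     list of K length-3 RGB paint colors x_k^C.
    C, b:       (3, 3) and (3,) color-mapping parameters for this frame.
    """
    clean = clean.astype(np.float64)
    total_vis = np.zeros(clean.shape[:2])
    paint = np.zeros_like(clean)
    for v, x_c in zip(vis_warped, colors):
        total_vis += v
        paint += v[..., None] * (C @ np.asarray(x_c, dtype=np.float64) + b)
    return (1.0 - total_vis)[..., None] * clean + paint
\end{verbatim}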
\subsection{Computing Adversarial Perturbations}
\label{sec:gopt}
Given the ``forward'' process of generating a set of frames for a given set of modification values $\Phi$, we finally describe our approach to finding the value of $\Phi$ that misleads the control network. We let $D(\{I_f[n]\}_f)$ denote the controller network, which takes as input a sequence of frames $\{I_f[n]\}$ in a single trajectory and generates a corresponding sequence of real-valued control signals, such as a steering angle at each time instant. Our goal is to find the value of $\Phi$ that maximizes deviation of these outputs from those for an unmodified environment. We cast this as minimization of a loss over our $T$ trajectories, i.e.,
\begin{equation}
\label{eq:tloss}
\Phi^* = \arg\min_{\Phi}\; - \sum_{t=1}^T \delta\left(D(\{I_f^t[n,\Phi]\}_f),D(\{\tilde{I}_f^t[n]\}_f)\right),
\end{equation}
in which $\delta(\cdot,\cdot)$ measures divergence between two sequences of control outputs.
In addition, we propose a physically meaningful variation of this, where we split the $T-1$ trajectories other than the one using actual control outputs into two subgroups: one with positive and the other with negative perturbations, with both subgroups also including the original trajectory.
Note that this is meaningful in the \emph{Group} approach for generating trajectories above, but not for \emph{Random}.
After solving the two resulting problems independently, we can then choose the solution that has the highest divergence.
In our experiments, where the control network outputs a sequence of steering angles, we use the absolute value of the sum of the differences between the angles as our divergence metric when we pool all trajectories together, and use the signed sum of differences (positive for the positive subgroup, negated for the negative subgroup) when optimizing each subgroup separately.
We do this because we would expect that positive perturbations will be more representative when we are trying to force steering further to the right, and negative perturbations are most physically meaningful when we aim to cause sharp steering to the left.
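As a concrete (purely illustrative) instance, the pooled and signed divergence measures for steering-angle sequences can be written as:
\begin{verbatim}
import numpy as np

def divergence_pooled(angles_mod, angles_clean):
    # absolute value of the summed steering-angle differences
    return abs(np.sum(np.asarray(angles_mod) - np.asarray(angles_clean)))

def divergence_signed(angles_mod, angles_clean, subgroup):
    # 'pos' rewards steering further right, 'neg' rewards steering left
    diff = np.sum(np.asarray(angles_mod) - np.asarray(angles_clean))
    return diff if subgroup == "pos" else -diff

def attack_loss(per_traj_mod, per_traj_clean):
    # the loss minimized in the pooled variant: negative sum of divergences
    return -sum(divergence_pooled(m, c)
                for m, c in zip(per_traj_mod, per_traj_clean))
\end{verbatim}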
We carry out either version of optimization iteratively using gradient descent with Adam~\citep{Adam} because, as shown next, we are able to compute gradients of the loss in \eqref{eq:tloss} with respect to the modification parameters $\Phi$. Since the controller network $D(\cdot)$ is assumed to be a neural network (or a differentiable function), we are able to use standard back-propagation to compute gradients $\nabla\left(I_f^t(n;\Phi)\right)$ of the loss with respect to each intensity in the images of the modified environment. The gradients with respect to the color parameters $\{x_k^C\}$ can then be computed based on \eqref{eq:composite} as:
\begin{equation}
\label{eq:cgrad}
\nabla(x_k^C) = \sum_{t,f} \left(C_f^t\right)^T \left(\sum_{n} V_k(G_f^t(n); \Phi)~\nabla\left(I_f^t(n;\Phi)\right)\right).
\end{equation}
Computing gradients with respect to the shape parameters $\{x_k^S\}$ requires an approximation, since the mask functions $M(\cdot)$ are not generally differentiable with respect to these parameters. We adopt a simple local linearization approach: for every scalar parameter $\theta$ in the shape parameters $x_k^S$ for each figure, we discretize its range into a fixed set of equally separated values. Then, given the current (continuous) value of $\theta$, we let $\theta^-$ and $\theta^+$ represent the consecutive pair of values in this discrete set such that $\theta^- \leq \theta \leq \theta^+$, and denote by $\Phi_{\theta^-}$ and $\Phi_{\theta^+}$ the full set of current parameter values with $\theta$ replaced by $\theta^-$ and $\theta^+$, respectively. We make the approximation that if $\alpha \in \mathbb{R}$ such that $\theta = \alpha \theta^+ + (1-\alpha) \theta^-$, then
$
I_f^t(n; \Phi) \approx \alpha I_f^t(n; \Phi_{\theta^+}) + (1-\alpha) I_f^t(n;\Phi_{\theta^-}).
$
Therefore, although we only use frames $I_f^t(n; \Phi)$ with the actual value of $\Phi$ as input to the control network, we also generate an extra pair of images $I_f^t(n; \Phi_{\theta^-}), I_f^t(n; \Phi_{\theta^+})$ for each frame for every element $\theta$ of the shape parameters. We then use these frames to compute parameter gradients as:
\begin{align}
& \nabla(\alpha) = \sum_{t,f,n} \nabla\left(I_f^t(n;\Phi)\right)\left(I_f^t(n;\Phi_{\theta^+})-I_f^t(n;\Phi_{\theta^-})\right), \nonumber \\
& \nabla(\theta) = \nabla(\alpha) \times (\theta^+-\theta^-)^{-1}.
\end{align}
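The following NumPy sketch (illustrative names; the image-space gradients are assumed to come from back-propagating the loss through the controller network) assembles the gradient for one scalar shape parameter from the pair of frames rendered at the bracketing discrete values:
\begin{verbatim}
import numpy as np

def shape_param_gradient(dL_dI, frames_plus, frames_minus,
                         theta_plus, theta_minus):
    """Approximate dL/dtheta for one scalar shape parameter.

    dL_dI:        list over (t, f) of arrays dL/dI_f^t(n; Phi) from backprop.
    frames_plus:  frames rendered with theta -> theta^+.
    frames_minus: frames rendered with theta -> theta^-.
    """
    grad_alpha = 0.0
    for g, ip, im in zip(dL_dI, frames_plus, frames_minus):
        grad_alpha += np.sum(g * (ip - im))
    return grad_alpha / (theta_plus - theta_minus)
\end{verbatim}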
\if 0
\subsection{Autonomous Driving Systems}
Self-driving systems have sensors that automatically capture the driving environment, including road conditions, weather, etc.; self-driving models then take the sensor data as input and output driving controls. In particular, we discuss the vulnerabilities of vision-based autonomous driving systems, i.e., systems which sense the environment via cameras.
Let $F = \{F_1, F_2, \cdots, F_T \}$ be a sequence of $T$ frames taken by the cameras, where $F_t$ is the frame at time $t$.
An autonomous driving system uses the sequence of frames $F$ to issue a sequence of controls to the actual vehicle; we refer to this sequence of controls as $C = \{C_1, C_2, \cdots, C_T \}$. Under this formulation (of frames and controls), an autonomous driving system can be thought of as a function $g$, which takes a sequence $F$ as input and outputs a sequence $C$, i.e.
\[
g(F) = C.
\]
The objective of these driving systems is to travel from point A to point B along some route. Formally, a route can be defined by a sequence of waypoints $W = \{W_1, W_2, \cdots, W_n \}$. We consider a route to be successfully completed if the car can travel to each waypoint without infraction.
\subsection{Physical Attack to Autonomous Driving Systems}
Our goal is to attack self-driving vehicles under a wide array of driving conditions in the physical world by subjecting the car's vision system to adversarial perturbations. These perturbations can be easily placed in locations such that it is difficult for human observers to notice them. For example, in the physical world, people walk past graffiti painted on the road without noticing it. With this in mind, we increase the visual subtlety of the perturbation by allowing the adversary to select a specific region where the perturbation must lie. The perturbation is covertly placed with the intention of causing the autonomous driving vehicle to collide with pedestrians, other cars, or static objects, or to drive to the wrong destination. We refer to any such behavior as an infraction.
\subsubsection{Objective}
To cause these infractions, we aim to manipulate the control of the autonomous driving system. Typically the main control commands of an autonomous driving vehicle are throttle, brake, and steering angle; we focus specifically on the manipulation of the steering angle. This manipulation is meant to make the autonomous driving system deviate from the waypoints $W$ as much as possible using the perturbation $\delta$.
$F^{(\delta)}$ is the sequence of frames collected when the perturbation $\delta$ is painted on the road. Let $\theta^{(\delta)} = \{\theta^{(\delta)}_{1},\theta^{(\delta)}_{2},\cdots,\theta^{(\delta)}_{T}\}$ represent the sequence of output steering angles from $g$ with the input $F^{(\delta)}$. We learn $\delta$ by maximizing the total absolute difference between $\theta$ and $\theta^{(\delta)}$ over the sequence of $T$ frames.
\[
\text{obj} = \Bigg|\sum_t (\theta_t^ {(\delta)} - \theta_t)\Bigg|
\]
\subsubsection{Approximation Frames Construction}
The objective function obj is optimized iteratively. At each iteration the sequence $F^{(\delta)}$ must be generated for the current perturbation $\delta$. Generating $F^{(\delta)}$ is costly to simulate and infeasible to produce in the physical world. For example, in the realm of physical attacks there may not be enough time in a human life span to paint, and repaint, enough different shapes of adversarial patterns $\delta$ on the road to successfully optimize obj. To circumvent such hurdles we introduce an approach for approximating $F^{(\delta)}$ which yields nearly optimal perturbations when compared with the actual sequences $F^{(\delta)}$.
We refer to the approximation of $F^{(\delta)}$ as $I^{(\delta)}$. Both $\delta$ and the approximate sequence $I^{(\delta)}$ are generated via our proposed offline approach, where there is no need to use simulators or self-driving vehicles during the learning process.
$I^{(\delta)}$ is obtained via a construction system with two parts: 1. color correction: correct the offset between the color of the painted perturbation $\delta$ in $I^{(\delta)}$ and $F^{(\delta)}$; 2. spatial transform: warp the perturbation $\delta$ onto $F$. We refer to the color transform as $y$ and the spatial transform as $f$.
\[
I^{(\delta)} = f(y(\delta),F)
\]
TODO: Maybe add a pipeline figure here.
\paragraph{Color Correction}
In general, when one paints a pattern on the road with some color, the sensors of an autonomous driving vehicle may capture a different color of the perturbation $\delta$ in the physical world or simulator due to lighting and the rendering process.
In order to offset this difference, we correct the color space of $\delta$ to the color space of the physical world or simulator before warping it onto $F$. Specifically, we compute a sequence of color correction matrices $\mathbf{M} = \{\mathbf{M_1}, \mathbf{M_2}, \cdots, \mathbf{M_T}\}$ for $F$, where $\mathbf{M_t}$ is the color correction matrix for frame $F_t$.
\begin{align*}
\mathbf{M_{t}} =
\begin{pmatrix}
m_{1,1} & m_{1,2} & m_{1,3} \\
m_{2,1} & m_{2,2} & m_{2,3} \\
m_{3,1} & m_{3,2} & m_{3,3}
\end{pmatrix}
\end{align*}
For each frame $F_t$, to compute $\mathbf{M_{t}}$ we collect a set of color pairs $\{R,R'\}$, where $R = \{R_1,R_2,\cdots,R_J\}$ is a collection of color sets used to paint different colors of perturbations, and $R' = \{R^{'}_1,R^{'}_2,\cdots,R^{'}_J\}$ is the corresponding collection of colors captured by the autonomous driving cameras.
In each color pair $\{R_j,R_j'\}$, $R_j = \{r_j,g_j,b_j\}$ is a color set used to paint the perturbation $\delta$, and $R_j' = \{r_j',g_j',b_j'\}$ is the color set of the perturbation as captured by the autonomous driving sensors and used by the autonomous driving model $g$. With the bias vector $B_{t}=\{B_r, B_g, B_b \}$,
\[
R' = \mathbf{M_{t}} R + B_t
\]
After obtaining $\mathbf{M}_t$, for any color set $\{ r,g,b \}$ used to paint the perturbation $\delta$ we can predict the color set $\{ r',g',b' \}$ captured by the cameras of autonomous driving cars in the physical world or simulator. That is, for any color set $\beta = \{r,g,b\}$, we get the ``real'' color $\beta'$ (pixel value) at any time $t$ as
\[
\beta' = M_t \beta + B_t
\]
\paragraph{Spatial Transform}
It is straightforward to draw a top-down view of the perturbation $\delta$ on a clean canvas; by clean canvas we mean a frame in which all pixels that are not on the perturbation are transparent. However, the viewing system of an autonomous driving vehicle will not see this top-down view; rather, it will view the perturbation from a sequence of dynamically changing angles, together with the driving environment. In the physical world, such a perturbation would be painted on the road, and as such it is fixed on the road and unchanging. In order to account for the static nature of realizable perturbations in simulators, one must develop a spatial mapping such that the perturbation appears static as the autonomous driving vehicle views it from its sequence of dynamically changing viewing angles.
For a given time $t$, let $\mathbf{p}_t$ be a $3\times m$ matrix where each of the $m$ columns gives the $x, y, z$ coordinates of each point of the perturbation as viewed top-down. For the top-down view we set the $z$ coordinate of each point to $1$.
\begin{align*}
\mathbf{p}_t =
\begin{pmatrix}
p_{x_1,t} & p_{x_2,t} & \cdots & p_{x_m,t} \\
p_{y_1,t} & p_{y_2,t} & \cdots & p_{y_m,t} \\
1 & 1 & \cdots & 1
\end{pmatrix}
\end{align*}
To compute the approximate frame with the projected perturbation we use a $3\times 3$ homography matrix with up to $8$ degrees of freedom, we call this matrix $\mathbf{H}_t$. A homography matrix is an invertible mapping of points and lines on the projective plane.
The points of the perturbation, as viewed by the car at time $t$, are then
\[
\mathbf{p}_t' = \mathbf{H}_t \mathbf{p}_t
\]
Once $\mathbf{H}_t$ is calculated, it can then be used to map the perturbation onto the clean frame $F_t$, yielding the approximation frame $I_t^{(\delta)} = f(\delta, \mathbf{H}_t)$.
\subsubsection{Learning Scheme}
We define the perturbation $\delta$ with spatial parameters $\gamma$ and color parameters $\beta$. For example, $\gamma$ could have dimensions such as width, length, position, etc., and $\beta$ could have dimensions for the pixel values of the red, green, and blue channels. We represent $\gamma$ as an $M$-dimensional vector $\gamma = \langle \gamma_1, \gamma_2, \cdots, \gamma_M \rangle$ and $\beta$ as an $N$-dimensional vector $\beta = \langle \beta_1, \beta_2, \cdots, \beta_N \rangle$.
There are thus $M+N$ parameters defining the perturbation: $\delta = \langle \gamma_1, \gamma_2, \cdots, \gamma_M, \beta_1, \beta_2, \cdots, \beta_N \rangle$.
In the learning process, we use gradient information to optimize the parameters $\gamma$ and $\beta$ of the perturbation. The only assumption of our approach is that we have access to the gradient information of the autonomous driving model $g$. We obtain the gradient information of the spatial parameters $\gamma$ and the color parameters $\beta$ in different ways. We refer to $\delta^{k}$ as the perturbation parameters at iteration $k$, where $\delta^{k} = \langle \gamma_1^{k},\gamma_2^{k}, \cdots,\gamma_M^{k},\beta_1^{k},\beta_2^{k},\cdots,\beta_N^{k} \rangle$. Let us discuss the gradient information of the spatial parameters and the color parameters separately.
\subsubsection{Spatial Parameters}
To learn the gradient of a spatial parameter, we use an approach adapted from the Spatial Transformer Network (todo: add citation).
We first discretize the parameter space of each parameter $\gamma_m$ in $\gamma$ into a discrete basis value set $\gamma_m^{D} = \{\gamma_{m,1}, \gamma_{m,2}, \cdots, \gamma_{m,l},\cdots, \gamma_{m,L}\}$, where each element is a real value defined by the step size $s_m$ and the range of the parameter $\big[\gamma_{m,1}, \gamma_{m,L}\big]$. Element $\gamma_{m,l}$ in $\gamma_m^{D}$ is defined as $\gamma_{m,l} = \gamma_{m,1} + (l-1)s_m$.
To get the gradient information for each $\gamma_m$ in $\gamma$, we use a weighted sum of images rendered with elements of the discrete basis set. We refer to this weighted sum as the blur image $I_{b}^{\gamma_{m}}$, defined as $I_{b}^{\gamma_m} = \sum_l w_{m,l} I^{\gamma_{m,l}}$, where $w_{m,l}$ is the weight for image $I^{\gamma_{m,l}}$. The perturbation in $I^{\gamma_{m,l}}$ is $\langle \gamma_1^{k},\gamma_2^{k},\cdots,\gamma_{m,l},\cdots,\gamma_M^{k},\beta_1^{k},\beta_2^{k},\cdots,\beta_N^{k} \rangle$, i.e., all parameters take their actual values at iteration $k$ except the $m^{th}$ spatial parameter, which takes the value $\gamma_{m,l}$ from the discrete basis set.
We use bilinear interpolation to get the blur image, therefore only the two nearest neighbor values in $\gamma_m^{D}$ matter. Suppose the nearest value in $\gamma_m^{D}$ smaller than $\gamma_m$ is $\gamma_{m,l}$, and the nearest value larger than $\gamma_m$ is $\gamma_{m,(l+1)}$. The weights are calculated by $w_{m,l} = \frac{(\gamma_{m, (l+1)} - \gamma_{m})}{s_m}$, $w_{m,l+1} = \frac{(\gamma_{m} - \gamma_{m,l})}{s_m}$, and the other weights are zero.
Considering all $M$ spatial parameters, the sum of the sequence of blur images of all parameters is
\[
I_{b} = \sum\limits_m I_{b}^{\gamma_m} = \sum\limits_m \sum\limits_l w_{m,l} I^{\gamma_{m,l}}
\]
We refer to the input images at iteration $k$ as $I_{input}$; $I_a$ denotes the sequence of images used in the forward pass at iteration $k$, where the perturbation is $\delta^{k}$; and $I_b$ denotes the sequence of images used in the backward pass at iteration $k$.
We update each parameter $\gamma_m$ in the parameter set $\gamma$ through:
\[
\gamma_m = \gamma_m + \eta \triangledown_{\gamma_m} \text{obj}(I_{input})
\]
where $\eta$ is the learning rate.
Next, we derive the gradient of parameter $\gamma_m$. We use $p,q$ as the indices for the width and height of an image.
\begin{align*}
&\triangledown_{\gamma_m} \text{obj}(I_{input}) \\
= &\sum_t\sum_p\sum_q \sum_l\frac{\partial \text{obj}(I^t_{a}(p,q) )}{\partial I^t_{b} (p,q)} \cdot \frac{\partial I^t_{b} (p,q)}{\partial w_{m,l}} \cdot \frac{\partial w_{m,l}}{\partial \gamma_m}
\label{image blur}
\end{align*}
For bilinear interpolation:
\[
I_b(p,q) = w_{m,l} I^{\gamma_{m,l}}(p,q) + (1-w_{m,l}) I^{\gamma_{m,(l+1)}}(p,q)
\]
\[
\frac{\partial I_b(p,q)}{\partial w_{m,l}} = I^{\gamma_{m,l}}(p,q) - I^{\gamma_{m,(l+1)}}(p,q)
\]
\[
\pd{w_{m,l}}{\gamma_m}=
\begin{cases}
-1 & \text{if $\gamma_{m,l} \leq \gamma_m^{k} \leq \gamma_{m,l+1}$} \\
1 & \text{if $\gamma_{m,l-1} \leq \gamma_m^{k} < \gamma_{m,l} $} \\
0 & \text{if $\gamma_m^{k} < \gamma_{m,l-1}$ or $\gamma_m^{k} > \gamma_{m,l+1}$} \\
\end{cases}
\]
\subsubsection{Color Parameters}
We refer to $\langle \beta_1, \beta_2,\beta_3 \rangle$ as the $\langle r, g, b \rangle$ parameters. (It is not ideal to specialize $\langle \beta_1, \beta_2, \cdots,\beta_n \rangle$ to $\langle \beta_1, \beta_2,\beta_3 \rangle$, but I could not think of a better way.)
We update each parameter $\beta_n$ in the parameter set $\beta$ through:
\[
\beta_n = \beta_n + \eta \triangledown_{\beta_n} \text{obj}(I_{a})
\]
where $\eta$ is the learning rate.
We get the gradient of the color parameters $\beta_n$ from $I_a$, the sequence of input images at iteration $k$ with perturbation $\delta^{k}$.
\begin{align*}
\triangledown_{\beta_n} \text{obj}(I_{a}) = \sum_t\sum_p\sum_q \frac{\partial \text{obj}(I_{a}^{t}(p,q) )}{\partial I_{a}^{t} (p,q)} \cdot \frac{\partial y(\beta_n)}{\partial \beta_n}
\label{image blur}
\end{align*}
$M$ is the sequence of color correction matrices $M=\{M_1, M_2, \cdots,M_T \}$, and $B$ is the sequence of biases $B=\{B_1, B_2, \cdots,B_T \}$. At time $t$,
\begin{align*}
M_{t} =
\begin{pmatrix}
m_{1,1} & m_{1,2} & m_{1,3} \\
m_{2,1} & m_{2,2} & m_{2,3} \\
m_{3,1} & m_{3,2} & m_{3,3}
\end{pmatrix}
\end{align*}
\begin{align*}
&\bigg[\triangledown_{\beta_1}\text{obj}(I_{a}),\triangledown_{\beta_2}\text{obj}(I_{a}),\triangledown_{\beta_3}\text{obj}(I_{a}) \bigg]\\
= &\sum_t \sum_p \sum_q
\bigg[\frac{\partial \text{obj}(I_{a}^{t}(p,q) )}{\partial I_{a}^{t} (p,q)}, \frac{\partial \text{obj}(I_{a}^{t}(p,q) )}{\partial I_{a}^{t} (p,q)}, \frac{\partial \text{obj}(I_{a}^{t}(p,q) )}{\partial I_{a}^{t} (p,q)}\bigg] \\
&\times M_t^{T}
\end{align*}
\subsubsection{Sum of View Points (SVP)}
TODO
(We define the ``viewpoint'' of a frame as the 2-D snapshot of the 3-D world, taken by the vehicle at the given position and viewing angle. We create each approximation frame by mapping the perturbation onto a frame with a previously obtained viewpoint. (todo: redundant to mention this again? It may be more appropriate to mention this in the technique section.))
\fi
\section*{Checklist}
\begin{enumerate}
\item For all authors...
\begin{enumerate}
\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
\answerYes{}
\item Did you describe the limitations of your work?
\answerYes{} See Section \ref{sec:ConclusionEthics}.
\item Did you discuss any potential negative societal impacts of your work?
\answerYes{} See Section \ref{sec:ConclusionEthics}.
\item Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerYes{} See Section \ref{sec:ConclusionEthics}.
\end{enumerate}
\item If you are including theoretical results...
\begin{enumerate}
\item Did you state the full set of assumptions of all theoretical results?
\answerNA{}
\item Did you include complete proofs of all theoretical results?
\answerNA{}
\end{enumerate}
\item If you ran experiments...
\begin{enumerate}
\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerYes{} See Section \ref{sec:Experiment}.
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerYes{} See Section \ref{sec:Experiment}.
\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerYes{} See Section \ref{sec:Experiment}.
\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
\answerYes{} See Section \ref{sec:Experiment}.
\end{enumerate}
\item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
\begin{enumerate}
\item If your work uses existing assets, did you cite the creators?
\answerYes{} See Section \ref{sec:Experiment}.
\item Did you mention the license of the assets?
\answerNA{}
\item Did you include any new assets either in the supplemental material or as a URL?
\answerYes{} See Supplemental Material.
\item Did you discuss whether and how consent was obtained from people whose data you're using/curating?
\answerYes{} See Section \ref{sec:Experiment}.
\item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerNA{}
\end{enumerate}
\item If you used crowdsourcing or conducted research with human subjects...
\begin{enumerate}
\item Did you include the full text of instructions given to participants and screenshots, if applicable?
\answerNA{}
\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
\answerNA{}
\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
\answerNA{}
\end{enumerate}
\end{enumerate}
\section{Conclusion and Ethical Considerations}
\label{sec:ConclusionEthics}
A great deal of attention has been devoted to understanding adversarial perturbations in computational perception, with autonomous driving the most common motivation. However, most prior research has considered such perturbations independently for each input image. In contrast, autonomous driving is dynamic, and even if a perturbation succeeds in fooling the car in a particular frame, it can fail in another frame, and the car can self-correct. Thus, to fully understand vulnerabilities in autonomous vehicle architectures we need to evaluate them in an end-to-end fashion using driving scenarios. However, this is inherently challenging because the resulting experimentation, be it using a simulator or actual driving, is extraordinarily time consuming. We developed a novel framework that allows us to largely avoid costly driving experiments, relying instead on a novel compositing approach which is fully differentiable. Our approach is significantly more potent at discovering physically realizable adversarial examples than prior art, while also requiring far fewer runs of actual driving trials or simulations. Moreover, we find that the vulnerabilities we discover are robust, and persist even under variability in weather conditions.
Security research in general, and security of AI in particular, inevitably raises ethical considerations when the primary contribution develops novel attacks to demonstrate vulnerabilities in systems.
We, too, have developed a novel framework that is able to exploit autonomous driving systems far more effectively than the state of the art.
Moreover, our focus on \emph{physically realizable} adversarial examples (that is, examples that are designed to modify objects in the world---road surface, in our case---rather than digital images at the level of pixels) brings our work even closer to reality than many other efforts that attack perception by adding adversarial noise at the level of pixels.
This line of research, however, is absolutely critical: if we are to put autonomous cars on the roads without adequately stress-testing them, the consequences of failures can be catastrophic.
Our approach targets primarily simulation-based stress-testing of autonomous vehicles.
We do this for two reasons.
First, the fact that perturbations are restricted to simulations means that our approach cannot be used ``out-of-the-box'' to directly cause accidents.
Second, simulations, coupled with our method, enable far more scalable testing of vehicles, identifying vulnerabilities before the autonomous vehicle architecture is deployed on actual roads, thereby reducing the likelihood of unanticipated vulnerabilities manifesting themselves when it truly matters.
\section{Introduction}
Computer vision has made revolutionary advances in recent years by leveraging a combination of deep neural network architectures with abundant high-quality perceptual data.
One of the transformative applications of computational perception is autonomous driving, with autonomous cars and trucks already being evaluated for use in geofenced settings, and partial autonomy, such as highway assistance, leveraging state-of-the-art perception embedded in vehicles available to consumers.
However, a history of tragic crashes involving autonomous driving, most notably Tesla~\citep{Thorbecke20} and Uber~\citep{Hawkins19}, reveals that modern perceptual architectures still have limitations even in non-adversarial driving environments.
Even more concerning is the increasing abundance of evidence that state-of-the-art deep neural networks used in perception tasks are vulnerable to \emph{adversarial perturbations}, or imperceptible noise added to an input image and designed to cause misclassification~\citep{goodfellow2014explaining,yuan2019adversarial}.
Furthermore, several lines of work consider specifically \emph{physical adversarial examples} which modify the \emph{scene} being captured by a camera, rather than the image~\citep{Kurakin_2016_CoRR,Eykholt_2018_CVPR,Sitawarin_2018_CoRR,dutta2018security,Duan_2020_CVPR}.
Despite this body of evidence demonstrating vulnerabilities in deep neural network perceptual architectures, it is not evident that they are consequential in realistic autonomous driving, even if primarily using cameras for perception.
First, most such attacks involve independent perturbations to input images.
Autonomous driving is a dynamical system, so that a fixed adversarial perturbation to a scene is perceived through a series of highly interdependent perspectives.
Second, even if we succeed in causing the control outputs of self-driving cars to deviate from normal, the vehicle will now perceive a sequence of frames that is \emph{different from those encountered on its normal path}, and typically deploy self-correcting behavior in response.
For example, if the vehicle is driving straight and then begins swerving towards the opposite lane, its own perception will inform the control that it's going in the wrong direction, and the controller will steer it back on course.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figure1C.pdf}
\caption{Overview. We collect and calibrate frames from the unmodified environment (shown in the green box), and given a choice of attack pattern parameters, composite the pattern to create approximate renderings of frames corresponding to placing the pattern in the environment. Our composition function is differentiable with respect to the attack pattern parameters, and we are thus able to use end-to-end gradient-based optimization when attacking a differentiable control network, to cause the network to output incorrect controls that cause the vehicle to deviate from its intended trajectory (from the green to the blue trajectory, as shown in the right column), and crash.
}
\label{fig:crash}
\end{figure*}
To address these limitations, \citet{Boloor_2020_JSA} recently proposed Bayesian Optimization (BO)
as a way to design simple \emph{physically realizable} (that is, easy to implement in a physical scene) adversarial examples
in Carla autonomous driving simulations~\citep{Dosovitskiy17} against end-to-end autonomous driving architectures.
As simulations are a critical part of autonomous vehicle development and testing~\citep{Dosovitskiy17,Waymo20}, this was an important advance in enabling scalable identification of practically consequential vulnerabilities in autonomous vehicle architectures.
The key challenge with this approach, however, is that attack design must execute actual simulation experiments
for a large number of iterations (1000 in the work above), making it impractical for large-scale or physical driving evaluation.
We propose a highly scalable framework for designing physically realizable adversarial examples for adversarial testing of simulated end-to-end autonomous driving architectures.
Our framework is illustrated in Figure~\ref{fig:crash}, and develops a differentiable pipeline for digitally approximating driving scenarios.
The proposed approximation makes use of image compositing, learning homography and color mappings from a bird's-eye view of embedded adversarial examples to their projections in images from actual driving frames, and sampling sequences of actual frames with small perturbations to control to ensure adequate sampling of possible perspectives.
The entire process can then be fed into automatic differentiators to obtain adversarial examples that maximize a car's deviation from its normal sequence of controls (e.g., steering angle) for a target driving scenario.
We evaluate the proposed framework using Carla simulations in comparison with the state-of-the-art BO method.
Our experiments show that the resulting attacks are significantly stronger, with induced deviations and road infractions often considerably exceeding those of BO, while using an order of magnitude fewer simulation runs.
Furthermore, we show that our approach yields adversarial test instances that are robust to unforeseen variations in weather and visibility.
\section{Introduction}
A type IIP supernova (SN~IIP) is an outcome of the explosion of
a red supergiant (RSG) caused by the core-collapse.
Following the explosion, ejecta expands in the environment
created by the presupernova mass loss.
The RSG wind of pre-SN~IIP has a moderate density parameter
$\dot{M}/u \sim 10^{14}-10^{15}$\gcm\ indicated by the radio and X-ray emission
(Chevalier et al. 2006).
The optical emission lines, e.g., \ha, from a wind of that density are generally expected to be too weak for detection.
Yet in several SNe~IIP the spectra during the first couple of days show rather strong narrow emission lines apparently originating from a confined ionized dense circumstellar (CS) shell
(Quimby et al. 2007; Groh et al. 2014; Khazov et al. 2016; Yaron et al. 2017).
The most interesting case is SN~2013fs with spectra starting
from 6\,h after the shock breakout (Yaron et al. 2017).
The analysis of these spectra led the authors
to conclude that the early CS emission lines originate from the confined shell
($r < 10^{15}$\,cm) with the mass of (several)$\times10^{-3}$\msun,
the expansion velocity of 50-100\kms, and the Thomson optical depth of
$\tau_{\sc T} \sim 1-2$ (Yaron et al. 2017).
The line profiles of early spectra of SN~2013fs show narrow core and extended wings
very much similar to emission lines of early SN~1998S, in which case they
have been attributed to the emission from the dense CS shell with the optical depth to
the Thomson scattering of $\tau_{\sc T} \sim 3-4$ (Chugai 2001).
However this interpretation fails for SN~2013fs because the broad wings observed in \ha\ are more intense than predicted by the Thomson-scattering model; moreover, the blue wing is stronger than the red one, unlike in the model (Yaron et al. 2017).
The problem has been resolved in the model that includes the acceleration of the
preshock CS gas by the SN radiation (Chugai 2020a); for the \ha\ at 10.3\,h the recovered
preshock CS velocity is 3000\kms.
The broad \ha\ wings in early spectra of SN~2013fs are essentially dominated by the
emission from the accelerated CS gas and not by the Thomson scattering.
Remarkably, in SN~1998S the effect of the radiative acceleration of the circumstellar gas up to 1000\kms\ is overwhelmed by the Thomson scattering (Chugai 2001).
Dessart et al. (2017) explored numerically effects of dense CS shell around SN~IIP using
radiation hydrodynamics and non-LTE radiation transfer.
The models predict significant acceleration of the CS gas to above $5000$\kms\ during the first couple of days.
However the reported synthetic spectra do not indicate in which cases the line wings are dominated by the Thomson scattering and in which by the bulk velocity of the accelerated CS gas.
The proposed explanation of the early \ha\ spectrum of SN~2013fs suggests that the preshock velocity inferred from the early \ha\ can be used to probe the early bolometric luminosity of SN~2013fs, which is otherwise poorly determined.
Here I present the method for the
reconstruction of the early bolometric light curve of SN~2013fs based on
preshock velocity extracted from the \ha\ emission line.
To this end we first determine velocities of the preshock CS gas from the \ha\
in Keck-I spectra between 6\,h and 10\,h after the shock breakout.
The found velocities are used then to recover the bolometric luminosity
of SN~2013fs during first 10 hours after the shock breakout.
The study exploits spectra of SN~2013fs retrieved from the database
WISeREP (Yaron \& Gal-Yam 2012) ({\em https://wiserep.weizmann.ac.il}).
\section{Model}
The shock breakout of an exploding star with an extended envelope is accompanied by the sweeping of the outermost atmosphere into a dense shell (Grasberg et al. 1971).
For the exploding RSG the mass of the thin dense shell is
$10^{-4}-10^{-3}$\msun\ (Chevalier 1981).
The thin shell decelerates in the CS gas that results in the formation of
the forward and reverse shock with the thin dense shell inbetween (Chevalier 1982a; Nadyozhin 1985).
The reverse shock is generally radiative,
so the shocked cold ejecta are accumulated in the thin dense shell that we dub the
cold dense shell (CDS) since its kinetic temperature is lower than the temperature of
both shocks.
The size and density of the dense CS shell of SN~2013fs can be
recovered from spectral data in the following way.
On day 2.42 the spectrum shows the broad He\,II 4686\,\AA\ emission (Bullivant et al. 2018)
that is attributed to the fragmented CDS with the expansion velocity of
$v_{cds} = 16600$\kms\ (Chugai 2020b).
Narrow \ha\ associated with the CS shell disappears between the epochs of the Keck-II
spectra on day 2.1 and 5.1 (Yaron et al. 2017).
This means that at about $t \sim 3$\,d the CS shell is overtaken by the CDS,
which gives us the extension of the CS shell $R_{cs} \sim v_{cds}t \sim 5\times10^{14}$\,cm.
The \ha\ modelling in the spectrum on 10.3\,h implies the Thomson optical depth
of the CS shell $\tau_{\sc T} \sim 2$ (Chugai 2020a).
The average electron number density in the CS shell is thus
$n_e = \tau_{\sc T}/(R_{cs}\sigma_{\sc T}) \sim 6\times10^9$\cmq\ and the
density $\rho_0 = 1.2\times10^{-14}$\gcmq\ assuming hydrogen abundance $X = 0.7$.
Below we adopt homogeneous CS shell with the density $\rho_0 = 1.35\times10^{-14}$\gcmq\ that
is by a factor of three higher compared to the model that is aimed at the minimization of
the explosion energy of SN~2013fs (Chugai 2020b).
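These order-of-magnitude estimates are easily reproduced; a short Python check (CGS units; the fully ionized H+He relation $n_e = \rho(1+X)/(2m_p)$ is assumed) gives:
\begin{verbatim}
SIGMA_T = 6.652e-25   # Thomson cross section, cm^2
M_P = 1.673e-24       # proton mass, g

tau_T, R_cs, X = 2.0, 5e14, 0.7
n_e = tau_T / (R_cs * SIGMA_T)        # ~6e9 cm^-3
rho = 2.0 * n_e * M_P / (1.0 + X)     # ~1.2e-14 g cm^-3
\end{verbatim}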
One can apply the hydrodynamics in the thin shell approximation
(cf. Chugai 2020b) to find the CDS dynamics that matches the expansion velocity on day 2.42.
The rate of the CDS deceleration is determined by the CS density and the SN ejecta density distribution in the outer layers; the latter generally follows the power law $\rho(v,t) = \rho_1(t_1/t)^3 (v_1/v)^q$.
We adopt $q = 7.6$ in line with the hydrodynamic model of the normal type IIP
SN~2008in (Utrobin \& Chugai 2013).
For the reference values $t_1 = 1$\,d, $v_1 = 10^4$\kms\ the CDS expansion
satisfies the velocity constraint on day 2.42 (Fig. \ref{fig:dyn}) for
$\rho_1(t_1,v_1) = 3.44\times10^{-10}$\gcmq.
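One simple way to integrate such thin-shell dynamics numerically is sketched below (a schematic illustration, not the actual code used here: it neglects the undisturbed CS velocity, assumes the radiative reverse shock adds all swept-up mass to the CDS, and adopts illustrative initial conditions):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

DAY, MSUN = 86400.0, 1.989e33
rho0 = 1.35e-14                                     # CS density, g cm^-3
rho1, t1, v1, q = 3.44e-10, 1.0 * DAY, 1.0e9, 7.6   # ejecta density law (CGS)

def rho_ej(r, t):
    # freely expanding ejecta: rho_1 (t1/t)^3 (v1/(r/t))^q
    return rho1 * (t1 / t)**3 * (v1 * t / r)**q

def rhs(t, y):
    r, v, m = y
    dv_ej = max(r / t - v, 0.0)                     # ejecta overtaking the shell
    force = 4 * np.pi * r**2 * (rho_ej(r, t) * dv_ej**2 - rho0 * v**2)
    dm_dt = 4 * np.pi * r**2 * (rho_ej(r, t) * dv_ej + rho0 * v)
    return [v, (force - v * dm_dt) / m, dm_dt]      # from d(mv)/dt = force

t0, v0, m0 = 0.05 * DAY, 2.5e9, 3e-4 * MSUN         # illustrative start
sol = solve_ivp(rhs, (t0, 2.42 * DAY), [v0 * t0, v0, m0], rtol=1e-8)
print("CDS velocity at 2.42 d: %.0f km/s" % (sol.y[1, -1] / 1e5))
\end{verbatim}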
At the initial phase of $\sim 1$ day the photosphere of SN~2013fs coincides with the CDS.
Indeed, the momentum flux conservation suggests that the CDS density
is $\sim \rho_0(v_{cds}/c_s)^2 \sim 6\times10^{-9}$\gcmq\ (Grasberg et al. 1971),
where $c_s \approx 37$\kms\ is the isothermal sound speed for the CDS temperature of
$T \sim (L/4\pi r^2\sigma)^{0.25} \sim 3.5\times10^4 L^{0.25}_{43}/r_{14}^{0.5}$\,K.
For these conditions the Rosseland opacity $k_{\sc R} \sim 2$ (Badnell et al. 2005).
With the CDS mass of $\sim 3\times10^{-4}$\msun\ and the CDS radius at $t = 10$\,h
of $R_{cds} \sim 10^{14}$\,cm (Fig. \ref{fig:dyn} ) the CDS optical depth turns out
to be $\tau = k_{\sc R}M_{cds}/(4\pi R_{cds}^2) \sim 10$, so indeed the CDS at the
considered phase is opaque and the photosphere resides at the CDS.
The overall picture for the \ha\ formation at the considered stage thus
can be described as a photosphere (CDS) of the radius $R_{cds}$ enclosed by the hot
forward shock of
the radius $R_s \approx \xi R_{cds}$ that is embedded into the ionized confined CS shell
$R_s < r < R_{cs}$.
It is noteworthy that the \ha\ model is not sensitive to the parameter $\xi$.
We adopt $\xi = 1.2$ that corresponds to the self-similar solution for
$q = 7$ and the uniform CS medium (Chevalier 1982).
The forward shock layer is assumed to be uniform with the density of $4\rho_0$.
The high number density in the forward shock ($n \sim 3 \times10^{10}$\cmq) implies
the fast electron-proton thermal equilibration compared to the expansion time, so
the electron temperature is $T_e = 1.6\times10^9v_{s,4}^2$, where $v_{s,4} = v_s/10^4$\kms.
The radiative cooling time is significantly longer than the expansion time,
however the Compton cooling time $t_{\sc C} = 1.2\times10^4 r_{14}^{2} L_{43}^{-1}$\,s can be comparable to the
expansion time; the postshock electrons can therefore cool by a factor of $\sim 2$.
We therefore adopt $T_e = 10^9$\,K throughout the postshock layer.
It is noteworthy that
the \ha\ profile is not sensitive to the variation of the electron temperature
even by a factor of ten.
The powerful early SN radiation results in the significant acceleration of the CS gas
with the velocity being maximal just before the shock and decreasing outwards.
We describe the velocity distribution of the CS gas at a fixed moment by the expression
\begin{equation}
v(r) = (v_{ps} - v_{cs})\left(\frac{R_{cs} - r}{R_{cs} - R_s}\right)^s + v_{cs}\,,
\end{equation}
where $v_{ps}$ is the preshock CS velocity at $r = R_s$
and $v_{cs}$ is the velocity of the undisturbed CS gas at $r = R_{cs}$.
The value of power index is $s \approx 1.6$ for optimal \ha\ models.
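For reference, the velocity law above is trivial to evaluate numerically; a short Python helper (the default undisturbed velocity of 100\kms\ and the CGS units are our choices for illustration) is:
\begin{verbatim}
import numpy as np

def cs_velocity(r, R_s, R_cs, v_ps, v_cs=100e5, s=1.6):
    # radial velocity of the radiatively accelerated CS gas (CGS units)
    r = np.asarray(r, dtype=float)
    return (v_ps - v_cs) * ((R_cs - r) / (R_cs - R_s))**s + v_cs

# example: v_ps = 3000 km/s at R_s = 1e14 cm, undisturbed gas at R_cs = 5e14 cm
v = cs_velocity(np.linspace(1e14, 5e14, 5), 1e14, 5e14, 3000e5)
\end{verbatim}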
\vspace{1cm}
\section{Results}
\subsection{CS gas velocity}
The radiation transfer of \ha\ photons in the CS shell is treated using the Monte Carlo technique.
The \ha\ emission is dominated by the recombination mechanism, so the emissivity in the uniform
CS envelope is assumed to be constant along the radius.
The model radiation transfer takes into account resonant scattering in \ha\
in the Sobolev approximation.
However the previous modelling of the \ha\ at 10.3\,h (Chugai 2020a) indicates that
the Sobolev optical depth in \ha\ should be negligibly small.
This situation reflects strong depopulation of the hydrogen second level due
to the high photoionization rate by the SN radiation.
The angle-averaged frequency redistribution function (Mihalas 1978) is applied for the photon
scattering on thermal electrons.
The \ha\ profile is not sensitive to the exact value of
the CS electron temperature since the broadening is dominated by the high velocities of the
radiatively accelerated CS gas.
Nevertheless the modelling takes into account the evolution of the electron temperature.
At the first iteration we use the value $T_e = 4\times10^4$\,K for
all the considered epochs.
With this temperature we recover the CS velocity from \ha, the bolometric luminosity, and the effective temperature.
The effective temperature is then adopted as the electron temperature, and these values of $T_e$ are used for the final \ha\ models.
The radiation transfer includes a diffuse reflection of photons from the photosphere.
However this effect for \ha\ photons is equivalent to zero albedo because reflected
photons scatter in the far blue wing due to
the high photosphere velocity of $\gtrsim 26000$\kms\ at the considered epoch (Fig.~\ref{fig:dyn}).
The optimal \ha\ models are overplotted on the observed low resolution
(160\kms) spectra between 6\,h and 10\,h (Fig. \ref{fig:sp}) with parameters
given in the Table 1.
The table columns contain the time after the shock breakout, the CDS radius, the electron temperature of
the CS gas, the preshock Thomson optical depth, and the preshock velocity inferred from the
\ha\ fit.
The bottom line includes the result obtained earlier for the high resolution spectrum
at 10.3\,h (Chugai 2020a).
The uncertainty of the inferred velocity is in the range of 20\%.
A similar uncertainty applies to the Thomson optical depth recovered from \ha.
The primary indicator of the Thomson optical depth is the profile skewing towards blue being
apparent in all the profiles of Fig.~\ref{fig:sp}.
However, $\tau_{\sc T}$ in Table 1 are taken from the
interaction model for consistency; these values are used for the \ha\ model.
The crucial role of the radiative acceleration of the CS gas for the \ha\ profile is
emphasised by the \ha\ model computed without the radiative acceleration effect, using
a constant velocity of 100\kms\ for the CS shell (Fig. \ref{fig:sp}).
It is clear that Thomson scattering alone cannot account for the observed \ha.
This distinguishes SN~2013fs from SN~1998S, where Thomson scattering dominates over the
effect of the moderate radiative acceleration of 1000\kms\ (Chugai 2001).
\subsection{Early bolometric luminosity}
\label{sec:bol}
The preshock CS velocity $v_{ps}$ is a direct indicator of the radiation energy $E_r$ emitted
by the supernova between the shock breakout and the observation epoch.
For the conditions in the CS shell, the radiative force is dominated by Thomson scattering
(cf. Chugai et al. 2002).
Neglecting the CS gas displacement, the solution of the equation of motion of the CS gas
in the field of the SN radiation yields the preshock velocity
\begin{equation}
v_{ps} = \frac{k_{\sc T}E_r}{4\pi R_s^2c}\,,
\label{eq:accel}
\end{equation}
where $k_{\sc T} = 0.34$\cmsqg\ is the Thomson scattering opacity and
$c$ is the speed of light.
The inferred $v_{ps}$ values (Table 1), combined with equation (\ref{eq:accel}), permit us to
recover $E_r$ at the explored epochs.
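A simple numerical illustration of this inversion is given below (Python); the input velocity and shock radius are placeholders rather than the values of Table 1.
\begin{verbatim}
# invert the radiative-acceleration relation v_ps = k_T*E_r/(4*pi*R_s^2*c)
import math

k_T = 0.34        # Thomson opacity, cm^2/g
c   = 2.998e10    # speed of light, cm/s

def radiated_energy(v_ps_kms, R_s_cm):
    """Radiation energy (erg) emitted between shock breakout and the observation
    epoch, recovered from the radiatively accelerated preshock velocity."""
    v_ps = v_ps_kms * 1.0e5          # km/s -> cm/s
    return 4.0 * math.pi * R_s_cm**2 * c * v_ps / k_T

# placeholder values (not from Table 1): v_ps = 3000 km/s, R_s = 1e14 cm
print(f"E_r = {radiated_energy(3000.0, 1.0e14):.2e} erg")
\end{verbatim}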
We apply two different descriptions of the initial stage of the luminosity decline:
the exponential law $L = L_0\exp{(-t/t_0)}$ and the power law $L = L_0/[1 + (t/t_0)^p]$.
In each case the parameters are determined by $\chi^2$ minimization.
For the exponential law $t_0 = 0.12$\,d and $L_0 = 7.23\times 10^{44}$\ergs, whereas
for the power law $t_0 = 0.12$\,d, $L_0 = 5.8\times 10^{44}$\ergs, and $p = 2.6$
(Fig. \ref{fig:bol}).
Both descriptions of the luminosity coincide to within 10\%, while the energy radiated
during the initial 0.5\,d is $7.4\times10^{48}$\,erg in both cases.
The relative error of $E_r$ is the same as that of the velocity, i.e. 20\%.
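As a consistency check, the brief sketch below (Python) integrates both fitted laws with the parameters quoted above over the initial 0.5\,d and recovers the radiated energy of $\sim7.4\times10^{48}$\,erg in both cases.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

DAY = 86400.0  # s

# best-fit parameters quoted in the text
L0_exp, t0_exp = 7.23e44, 0.12 * DAY              # exponential law
L0_pow, t0_pow, p = 5.8e44, 0.12 * DAY, 2.6       # power law

L_exp = lambda t: L0_exp * np.exp(-t / t0_exp)
L_pow = lambda t: L0_pow / (1.0 + (t / t0_pow) ** p)

E_exp, _ = quad(L_exp, 0.0, 0.5 * DAY)
E_pow, _ = quad(L_pow, 0.0, 0.5 * DAY)
print(f"E(exponential) = {E_exp:.2e} erg")   # ~7.4e48 erg
print(f"E(power law)   = {E_pow:.2e} erg")   # ~7.4e48 erg
\end{verbatim}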
The recovered bolometric luminosity is compared (Fig. \ref{fig:bol}, panel b) to the
reported bolometric
light curves obtained from the multiband photometry in two ways (Yaron et al. 2017).
The first approach relies on the reconstruction of the spectral energy distribution (SED),
while the second method is based on the recovered blackbody
temperature, the photospheric radius, and the blackbody luminosity $L = 4\pi R^2\sigma T^4$.
These two reported versions differ by a factor of 100 at the first observational
epoch (Fig. \ref{fig:bol}, panel b), which emphasises the difficulties of the early light curve
reconstruction from the photometric data.
Amazingly, our light curve is consistent with the blackbody version of the reported light
curve despite the radically different methods.
It is noteworthy that the early bolometric light curve of SN~2013fs recovered
from the radiative acceleration effect in \ha\ does not require
the photometry, the distance, or the extinction.
The point is that this method exploits only velocity measurements based
on spectra expressed in relative fluxes.
\section{Conclusion}
The paper has been aimed at the reconstruction of the early bolometric light curve
of SN~2013fs exploiting the effect of the radiative acceleration of the CS gas that is
apparent in the \ha\ emission.
The proposed method turns out to be a success in the case of SN~2013fs
thanks to the unique set of Keck-I spectra taken between 6 and 10.3\,h after the shock breakout.
The attractive feature of the novel method is that the early bolometric light curve of SN~2013fs
is recovered without recourse to the photometry, the distance, and the extinction.
Some uncertainty might stem from the choice of the function that approximates
the luminosity decline after the shock breakout.
In fact, this arbitrariness does not affect the result.
Two different choices, exponential and power law, result in bolometric light curves
that agree with each other to within 10\%; moreover, the total radiation emitted during
0.5\,d after the shock breakout is the same in both cases.
Surprisingly, the recovered bolometric luminosity is close to the luminosity
estimated using the blackbody approximation for the epochs 0.16, 0.36, and 0.55\,d
(Yaron et al. 2017).
The agreement between the results obtained by two completely different approaches indicates
that both methods capture the behaviour of the real luminosity of SN~2013fs at the initial stage.
Yet, unlike the case of SN~2013fs with its known distance of 47--51 Mpc (NED) and small extinction
(Yaron et al. 2017), for nearby SNe~IIP with less certain distance and extinction
the systematic errors in the bolometric luminosity recovered from the photometry can be
large.
All three factors of uncertainty (distance, extinction, and photometric incompleteness)
are absent in the new method.
One should emphasise the potential significance of the new method for the reconstruction
of the early bolometric luminosity of a future SN~IIP in our Galaxy, since in that case
the distance and the extinction will likely be known with large uncertainties.
Obviously, a successful application of the proposed method requires the
observation of a set of spectra during the first day after the explosion.
\pagebreak
\section{Introduction}\label{sec:introduction}
Stochastic control problems can be, in general, formulated as follows,
\begin{IEEEeqnarray*}{ccl}
~(\mathcal{P})~~ & \min\limits_{u_t, t=0, \ldots, T-1} & ~\mathbb{E}[J(x_0,u_0,x_1,u_1, \ldots, x_{T-1},u_{T-1},x_T)]\\
& {\rm s.t.} & ~x_{t+1} = f_t(x_t,u_t, \xi_t),\\
& & ~g_t(x_t,u_t) \leq 0,~g_T(x_T) \leq 0,\\
& & ~t = 0, 1, \ldots, T-1,
\end{IEEEeqnarray*}
where $x_t$ $\in$ $\mathbb{R}^m$ is the state with $x_0$ given, $u_t$ $\in$ $\mathbb{R}^n$ is the control, and $g_t(x_t,u_t)\leq 0$ and $g_T(x_T)\leq 0$ represent, respectively, the running constraints on states and controls, and the constraint on the terminal state. Moreover, $\xi_t$ $\in$ $\mathbb{R}^p$ is a white noise vector, and $f_t:\mathbb{R}^m\times\mathbb{R}^n\times\mathbb{R}^p\rightarrow\mathbb{R}^m$ is the system dynamics. Thus, the system under consideration possesses the Markovian property.
The performance measure $J$ is {\it backward separable} if there exist functions $\phi_{t}:$ $\mathbb{R}^m\times \mathbb{R}^n \times \mathbb{R}
\rightarrow \mathbb{R}$, $t=0, 1, \ldots, T-1$, and $\phi_{T}:$ $\mathbb{R}^m
\rightarrow \mathbb{R}$ such that
\begin{IEEEeqnarray*}{c}
J=\phi_{0}(x_0,u_0, \phi_{1}(x_1,u_1,
\phi_{2}(\ldots
\phi_{T-2}(x_{T-2},u_{T-2}, \phi_{T-1}(x_{T-1},u_{T-1},
\phi_T(x_T)))\ldots ))).
\end{IEEEeqnarray*}
The backward separable objective function $J$ is then said {\it backward monotonic} if for all $t$ = 0, 1, $\ldots$, $T-1$, the condition
\begin{align*}
\phi_{t}(\hat{x}_t,\hat{u}_t, \phi_{t+1}(\ldots
\phi_{T-1}(\hat{x}_{T-1},\hat{u}_{T-1}, \phi_T(\hat{x}_T))\ldots)) \leq
\phi_{t}(\tilde{x}_t,\tilde{u}_t, \phi_{t+1}(\ldots
\phi_{T-1}(\tilde{x}_{T-1},\tilde{u}_{T-1}, \phi_T(\tilde{x}_T))\ldots))
\end{align*}
implies the following: for
any triple $(x_{t-1},u_{t-1},\xi_{t-1})$ such that $\hat{x}_t = \tilde{x}_t = f_{t-1}(x_{t-1},u_{t-1}, \xi_{t-1})$, we have
\begin{align*}
&\phi_{t-1}(
x_{t-1},u_{t-1},\phi_{t}(\hat{x}_t,\hat{u}_t,\phi_{t+1}(\ldots
\phi_{T-1}(\hat{x}_{T-1},\hat{u}_{T-1}, \phi_T(\hat{x}_T))\ldots)))\leq \\
&\phi_{t-1}(
x_{t-1},u_{t-1},\phi_{t}(\tilde{x}_t,\tilde{u}_t,\phi_{t+1}(\ldots
\phi_{T-1}(\tilde{x}_{T-1},\tilde{u}_{T-1},\phi_T(\tilde{x}_T))\ldots))).
\end{align*}
When $(\mathcal{P})$ satisfies both the separability and the monotonicity as defined above, the celebrated dynamic programming (DP) \cite{Bellman1952} is a powerful time-decomposition solution approach, which is based on the principle of optimality.
There exist, however, plenty of problems of interest that do not satisfy these fundamental requirements of DP. One notorious nonseparable case is the variance minimization problem (see \cite{white1974dynamic} and \cite{li2003variance}). The obstacle is mainly due to the fact that the variance operator, unlike the expectation operator, does not satisfy the tower property along the time horizon. The variance minimization naturally emerges in the dynamic mean-variance (MV) portfolio selection problem. After many years of struggle, \cite{li2000optimal} finally solves it by embedding the original nonseparable problem into a family of separable auxiliary problems that are analytically solvable by DP. \cite{sniedovich1986c} and \cite{domingo1993experiments} in the early days consider nonseparable problems with the objective function of the form $h(u) = \psi(v(u), z(u))$, where both $v$ and $z$ are functions in additive form w.r.t. the stages. Under the assumption that $\psi$ is pseudo-concave w.r.t. its arguments, the authors of \cite{sniedovich1986c} and \cite{domingo1993experiments} develop the so-called C-programming to convert the primal problem into a separable version which could be handled by DP and report its applications in the variance minimization (see also \cite{sniedovich1987class}) and fractional programming (see also \cite{sniedovich1990solution}). \cite{carraway1990generalized} proposes a generalized DP for the multi-criteria optimization problem that violates the monotonicity. \cite{li1990new}, \cite{li1991extension}, and \cite{li1990multiple}
consider a class of nonseparable problems where the nonseparable objective function is a monotone function of several separable sub-objectives. Among these three papers, the first two deal with the deterministic cases, whereas the last one deals with the stochastic counterpart. They introduce the concept of $k$th-order separability and convert the primal nonseparable problem into a separable $k$-objective optimization problem which could be solved by the multi-objective DP \cite{Envelope}. They further develop conditions under which a specific Pareto solution is optimal to the original nonseparable problem. Moreover, \cite{li1997cost} investigates a nonseparable cost smoothing problem for the discrete-time deterministic linear-quadratic control.
Different from the above works, this paper aims to develop a novel solution framework through the \emph{scenario decomposition}, which is fundamentally distinct from the methods governed by \emph{time-decomposition}-based DP. Our solution framework relies on the progressive hedging algorithm (PHA) pioneered in \cite{rockafellar1991scenarios}. In contrast to DP, our PHA-oriented solution scheme can be applied to stochastic control problems which may not be separable and/or monotonic. We emphasize that PHA has not been fully recognized up to today for its powerful capability in dealing with the non-separability or non-monotonicity in stochastic control. We will further apply the newly-developed solution scheme to two nonseparable (thus non-tractable by DP) real-world applications: online quadratic programming (QP) and a novel variation of the dynamic portfolio selection problem with smoothing properties. Interestingly, the considered MV problem with smoothing feature could be embedded into a series of auxiliary problems that turn out to be a concrete type of our proposed online QP model.
The rest of the paper proceeds as follows. We build up in Section 2 the scenario-decomposition solution framework through adopting PHA on general stochastic control problems, where the information flow follows a tree structure. We then demonstrate its prominent application to the online QP problem in Section 3. On top of that, we also apply this solution methodology to dynamic portfolio selection problems and their novel variations with smoothing features, and analyze experimental results in Section 4. Finally, we conclude the paper in Section 5.
\section{Solution Approach by Scenario Decomposition}\label{PHA_section}
We consider in this paper the problem $(\mathcal{P})$ with a Markovian system. As the most prominent feature of our new formulation, the objective function in general could be nonseparable and/or non-monotonic. On the other hand, we confine the structure of the information flow to a tree form, where the system randomness $\boldsymbol{\xi}=\{\xi_0,\xi_1,\ldots,\xi_{T-1}\}$ is realized stage by stage, and a series of realizations of $\xi_t$'s will form a \emph{scenario} of the tree, indexed by $i$. From the scenario analysis perspective, the dynamic stochastic control problem could be armed with a scenario tree in order to reflect its information flow for the underlying uncertainties.
Figure \ref{tree} exemplifies a specific three-stage tree structure, where $\boldsymbol{\xi}$ is realized successively from $\xi_0$ to $\xi_1$ and finally to $\xi_2$, thus leading to seven possible scenarios (paths of $\boldsymbol{\xi}$) in total. The number in each circle node represents a possible value of the disturbance realized right before that stage. Note that any parent node (starting from the square root node) could in general result in different numbers of children nodes. In contrast to DP whose effectiveness comes from the time decomposition, the solution power by PHA that we adopt in this paper roots in the scenario decomposition. Invented almost thirty years ago, PHA has been successfully applied to several application areas including power systems scheduling problems (see \cite{dos2009practical} among others) and water resource planning problems (see, e.g., \cite{carpentier2013long}). For more details on the general methodology of PHA, please refer to \cite{rockafellar1991scenarios}.
\begin{figure}[!t]
\begin{center}
\scalebox{1}{
\begin{tikzpicture}[
every node/.style={scale=0.8},
level 1/.style={sibling distance=4cm, level distance=1cm},
level 2/.style={sibling distance=2cm, level distance=1cm},
level 3/.style={sibling distance=2cm, level distance=1cm},
]
\node (Root) [rectangle,draw,minimum size=0.5cm] {}
child {node (xi0_1) [circle,draw,minimum size=0.8cm] {$\xi_{0}$}
child {node (xi1_1) [circle,draw,minimum size=0.8cm] {$\xi_{1}$}
child {node (xi2_1) [circle,draw,minimum size=0.8cm] {$\xi_{2}$}}
}
child {node (xi1_2) [circle,draw,minimum size=0.8cm] {$\xi_{1}$}
child {node (xi2_2) [circle,draw,minimum size=0.8cm] {$\xi_{2}$}}
child {node (xi2_3) [circle,draw,minimum size=0.8cm] {$\xi_{2}$}}
}
child {node (xi1_3) [circle,draw,minimum size=0.8cm] {$\xi_{1}$}
child {node (xi2_4) [circle,draw,minimum size=0.8cm] {$\xi_{2}$}}
}
}
child {node (xi0_2) [circle,draw,minimum size=0.8cm] {$\xi_{0}$}
child {node (xi1_4) [circle,draw,minimum size=0.8cm] {$\xi_{1}$}
child {node (xi2_5) [circle,draw,minimum size=0.8cm] {$\xi_{2}$}}
}
child {node (xi1_5) [circle,draw,minimum size=0.8cm] {$\xi_{1}$}
child {node (xi2_6) [circle,draw,minimum size=0.8cm] {$\xi_{2}$}}
child {node (xi2_7) [circle,draw,minimum size=0.8cm] {$\xi_{2}$}}
}
};
\begin{scope}[every node/.style={right,scale=0.8}]
\path (Root -| xi2_7) ++(2mm,0) node (t0) {} ++(5mm,0) node {$t=0$};
\path (xi0_1 -| xi2_7) ++(2mm,0) node (t1) {} ++(5mm,0) node {$t=1$};
\path (xi1_1 -| xi2_7) ++(2mm,0) node (t2) {} ++(5mm,0) node {$t=2$};
\path (xi2_1 -| xi2_7) ++(2mm,0) node (t3) {} ++(5mm,0) node {$T=3$};
\end{scope}
\node [below = 0.1cm of xi2_1,align=center] {$i_1$};
\node [below = 0.1cm of xi2_2,align=center] {$i_2$};
\node [below = 0.1cm of xi2_3,align=center] {$i_3$};
\node [below = 0.1cm of xi2_4,align=center] {$i_4$};
\node [below = 0.1cm of xi2_5,align=center] {$i_5$};
\node [below = 0.1cm of xi2_6,align=center] {$i_6$};
\node [below = 0.1cm of xi2_7,align=center] {$i_7$};
\end{tikzpicture}
}
\end{center}
\caption{A scenario tree with three stages and seven scenarios.}
\label{tree}
\end{figure}
Let us denote by $\mathcal{I}$ the scenario set which consists of all possible scenarios, and denote by $\boldsymbol{\xi}^i=\{\xi_0^i,\xi_1^i,\ldots,\xi_{T-1}^i\}$ the realizations of disturbance under the scenario $i\in\mathcal{I}$. Assuming the occurring probability of scenario $i$ to be $\rho_i$ that is fixed at time $0$, we can rewrite the objective of $(\mathcal{P})$ as $\min\sum_{i\in\mathcal{I}}\rho_i J_i$, where $J_i$ denotes the sub-objective under $\boldsymbol{\xi}^i$. Then it is natural to decompose problem $(\mathcal{P})$ into a family of \emph{scenario subproblems} and consider the following individual scenario subproblem for each $i\in\mathcal{I}$,
\begin{IEEEeqnarray*}{ccl}
~(\mathcal{P}^i)~~ & \min_{u_t,\forall t} & ~J_i = J(x_0^i,u_0,x_1^i,u_1,\ldots,x_{T-1}^i,u_{T-1},x_T^i) \\
& \rm{s.t.} & ~x_{t+1}^i = f_t(x_t^i,u_t, \xi_t^i),~x_0^i=x_0,\\
&& ~g_t(x_t^i,u_t) \leq 0,~g_T(x_T^i) \leq 0,\\
&& ~t = 0, 1, \ldots, T-1,
\end{IEEEeqnarray*}
which is a {\it deterministic} optimal control problem, and should be much easier to solve than the original stochastic one. In this paper, we further assume that each $(\mathcal{P}^i)$ is convex w.r.t. the control variable $\mathbf{u}=(u_0',u_1',\ldots,u_{T-1}')'\in\mathbb{R}^{nT}$. Although the optimal solution of $(\mathcal{P}^i)$ satisfies all the \emph{admissible} constraints of the primal problem $(\mathcal{P})$, it is \emph{not implementable} in reality, since we have ``stolen'' the \emph{future} information (i.e., the future realization of $\boldsymbol{\xi}$) when solving each scenario subproblem at time $0$. In other words, the scenario-specific solutions violate the so-called \emph{nonanticipative} constraint which is either explicitly or implicitly implied in any stochastic control problem. To force any admissible solution to meet nonanticipativity, the \emph{scenario bundles}, as a partition of $\mathcal{I}$, are formed at each time according to the scenario tree of the underlying problem. Graphically speaking, scenarios passing through each node at a certain time stage are grouped together to form a bundle. In Figure \ref{tree}, for instance, at time 0 all the scenarios form a single bundle that is the scenario set itself and we denote this partition by $\mathcal{I}_0=\{\mathcal{I}_{0,1}\}=\{\{i_1,\ldots,i_7\}\}$; and when $t=1$ we have two bundles together to form $\mathcal{I}_1=\{\mathcal{I}_{1,1},\mathcal{I}_{1,2}\}=\{\{i_1,\ldots,i_4\},\{i_5,\ldots,i_7\}\}$; and finally for $t=2$ we have five bundles to form the partition of $\mathcal{I}$ at that time, i.e., $\mathcal{I}_2=\{\mathcal{I}_{2,1},\mathcal{I}_{2,2},\mathcal{I}_{2,3},\mathcal{I}_{2,4},\mathcal{I}_{2,5}\}
=\{\{i_1\},\{i_2,i_3\},\{i_4\},\{i_5\},\{i_6,i_7\}\}$. The nonanticipativity naturally requires any \emph{implementable} policy to react the same to all indifferent scenarios (the scenarios from the same bundle), and this is achieved by taking conditional expectations on the scenario-specific solutions from the related bundle. More specifically, the implementable control at time $t$, if the scenario $i$ occurs, is computed through
\begin{align}
\hat{u}^{i}_t= \sum_{j\in \mathcal{I}_{t,l}}\frac{\rho_j}{\sum_{j'\in \mathcal{I}_{t,l}}\rho_{j'}}u_t^{j},~i\in \mathcal{I}_{t,l},~l=1,\ldots,|\mathcal{I}_t|,
\label{conditional_expectation}
\end{align}
where $u_t^{j}$ is the scenario-$j$-based admissible control at time $t$, and $|\mathcal{I}_t|$ is the number of scenario bundles in the partition $\mathcal{I}_t$. Note that $|\mathcal{I}_t|$ determines the number of implementable controls corresponding to different realizations at that time. In fact, the above procedure in \eqref{conditional_expectation}
can be characterized as a linear transformation $\hat{\mathbf{u}}_t=\mathbf{T}_t\mathbf{u}_t$, where $\hat{\mathbf{u}}_t=((\hat{{u}}_t^1)',(\hat{{u}}_t^2)',\ldots,(\hat{{u}}_t^{|\mathcal{I}|})')'
\in\mathbb{R}^{n|\mathcal{I}|}$, ${\mathbf{u}}_t=(({{u}}_t^1)',({{u}}_t^2)',\ldots,({{u}}_t^{|\mathcal{I}|})')'
\in\mathbb{R}^{n|\mathcal{I}|}$, and the projection matrix $\mathbf{T}_t$ can be easily built up from the scenario probabilities based on the structure of $\mathcal{I}_t$.
Then the overall linear mapping is
\begin{align*}
\left(
\begin{array}{c}
\hat{\mathbf{u}}_0\\
\hat{\mathbf{u}}_1\\
\vdots\\
\hat{\mathbf{u}}_{T-1}
\end{array}
\right)=
\left(
\begin{array}{cccc}
\mathbf{T}_0 & 0 & \cdots & 0 \\
0 & \mathbf{T}_1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \mathbf{T}_{T-1}
\end{array}
\right)
\left(
\begin{array}{c}
{\mathbf{u}}_0\\
{\mathbf{u}}_1\\
\vdots\\
{\mathbf{u}}_{T-1}
\end{array}
\right).
\end{align*}
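A minimal sketch of this aggregation step is given below (Python); the scenarios, bundles, and probabilities are purely illustrative. It implements the conditional expectation in \eqref{conditional_expectation} bundle by bundle.
\begin{verbatim}
import numpy as np

def aggregate(u, rho, bundles):
    """Make scenario-specific controls implementable at one time stage.
    u       : dict {scenario: control vector} of admissible controls u_t^j
    rho     : dict {scenario: probability}
    bundles : list of scenario bundles (each a list of scenarios in I_{t,l})
    Returns {scenario: probability-weighted average over its bundle}, i.e.
    the conditional expectation defining the implementable control."""
    u_hat = {}
    for bundle in bundles:
        w = sum(rho[j] for j in bundle)
        avg = sum(rho[j] * np.asarray(u[j], dtype=float) for j in bundle) / w
        for i in bundle:
            u_hat[i] = avg
    return u_hat

# illustrative data: 4 scenarios, two bundles at some time t
rho = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}
u_t = {1: [1.0, 0.0], 2: [3.0, 2.0], 3: [0.0, 1.0], 4: [2.0, 3.0]}
print(aggregate(u_t, rho, bundles=[[1, 2], [3, 4]]))
# scenarios 1,2 share one implementable control, scenarios 3,4 another
\end{verbatim}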
The beauty of PHA lies in its augmented Lagrangian formulation, which progressively aggregates the scenario-specific solutions into an implementable one and forces them to converge to the optimal solution of the primal problem $(\mathcal{P})$, which is both admissible and implementable.
It deals with an augmented Lagrangian problem at each iteration $\nu=0,1,\ldots$, which is constructed by adding a linear Lagrangian term and a quadratic penalty term to the scenario-specific objective function in order to penalize any utilization of the anticipative information from the future. More precisely, we solve the following augmented Lagrangian problem in the $\nu$th iteration for each $i\in\mathcal{I}$,
\begin{IEEEeqnarray*}{ccl}
~(\mathcal{P}^{i,\nu})~~ & \min_{\mathbf{u}} & ~J(x_0^i,u_0,x_1^i,u_1,\ldots,x_{T-1}^i,u_{T-1},x_T^i)
+\mathbf{u}'\mathbf{w}^{i,\nu}+\frac{1}{2}\alpha|\mathbf{u}-\hat{\mathbf{u}}^{i,\nu}|_2^2, \\
& \rm{s.t.} & ~x_{t+1}^i = f_t(x_t^i,u_t, \xi^i_t),~x_0^i=x_0,\\
&& ~ g_t(x_t^i,u_t) \leq 0,~g_T(x_T^i) \leq 0,\\
&& ~ t = 0, 1, \ldots, T-1,
\end{IEEEeqnarray*}
where we define, for compactness, $\mathbf{u}=(u_0',u_1',\ldots,u_{T-1}')'\in\mathbb{R}^{nT}$ as the overall control vector, and
\begin{align}
\hat{\mathbf{u}}^{i,\nu}=((\hat{u}^{i,\nu}_0)',(\hat{u}^{i,\nu}_1)',\ldots,(\hat{u}^{i,\nu}_{T-1})')'
\in\mathbb{R}^{nT}
\end{align}
is a given implementable control for $(\mathcal{P}^{i,\nu})$. Let us denote the optimal solution of $(\mathcal{P}^{i,\nu})$ by
\begin{align}
\mathbf{u}^{i,\nu+1}=((u^{i,\nu+1}_0)',(u^{i,\nu+1}_1)',\ldots,(u^{i,\nu+1}_{T-1})')'
\in\mathbb{R}^{nT},
\label{nu+1_sol}
\end{align}
which is a new scenario-based solution. We then aggregate all $\mathbf{u}^{i,\nu+1}$, $i\in\mathcal{I}$, into a new implementable control, denoted by
\begin{align}
\hat{\mathbf{u}}^{i,\nu+1}=((\hat{u}^{i,\nu+1}_0)',(\hat{u}^{i,\nu+1}_1)',\ldots,
(\hat{u}^{i,\nu+1}_{T-1})')'\in\mathbb{R}^{nT},
\label{nu+1_hat}
\end{align}
through the componentwise calculations of (\ref{conditional_expectation}), or in the following compact way: we first gather $u_t^{i,\nu+1}$ of all $i$ to form
\begin{align}
\mathbf{u}_t^{\nu+1}=((u_t^{1,\nu+1})',(u_t^{2,\nu+1})',\ldots,(u_t^{|\mathcal{I}|,\nu+1})')'
\in\mathbb{R}^{n|\mathcal{I}|},
\label{nu+1_project1}
\end{align}
and conduct the transformation $\hat{\mathbf{u}}_t^{\nu+1}=\mathbf{T}_t\mathbf{u}_t^{\nu+1}$, where
\begin{align}
\hat{\mathbf{u}}_t^{\nu+1}=((\hat{u}_t^{1,\nu+1})',(\hat{u}_t^{2,\nu+1})',\ldots,
(\hat{u}_t^{|\mathcal{I}|,\nu+1})')'\in\mathbb{R}^{n|\mathcal{I}|}
\label{nu+1_project2}
\end{align}
and this is done for every $t=0,1,\ldots,T-1$. We then pick up the $i$th component of $\hat{\mathbf{u}}_t^{\nu+1}$, $\hat{u}_t^{i,\nu+1}$, for all $t$, to serve as $\hat{\mathbf{u}}^{i,\nu+1}$ in $(\mathcal{P}^{i,\nu+1})$. When $\nu=0$, all the initial $\hat{\mathbf{u}}^{i,0}$, $i\in\mathcal{I}$, are attained from $\mathbf{u}^{i,0}$, $i\in\mathcal{I}$, following the above procedure, where $\mathbf{u}^{i,0}$ could be selected as the optimal solution of $(\mathcal{P}^i)$. In $(\mathcal{P}^{i,\nu})$, the penalty parameter $\alpha>0$ is predetermined, and the Lagrangian multiplier $\mathbf{w}^{i,\nu}=((w^{i,\nu}_0)',(w^{i,\nu}_1)',\ldots,(w^{i,\nu}_{T-1})')'\in\mathbb{R}^{nT}$,
for every $i$, satisfies the recursion below,
\begin{align}
\mathbf{w}^{i,\nu+1}=\mathbf{w}^{i,\nu}+\alpha (\mathbf{u}^{i,\nu+1}-\hat{\mathbf{u}}^{i,\nu+1}),
\label{w_update}
\end{align}
where $\mathbf{w}^{i,0}$ is set at zero. The solution process repeats until a stopping criterion is satisfied. We now provide the convergence result as follows.
\begin{thm}[Convergence of PHA, \cite{rockafellar1991scenarios}]
If all the scenario subproblems $(\mathcal{P}^{i})$ are convex w.r.t. $\mathbf{u}$ and have been solved exactly, and $\{\mathbf{u}: g_t(x_t,u_t) \leq 0,\forall t\}$ is a convex set under any $x_t$, then the sequence $\{\hat{\mathbf{u}}^{i,\nu+1}\}_\nu$, generated by $(\mathcal{P}^{i,\nu})$, $\nu=0,1,\ldots$, converges to the true optimal solution $\mathbf{u}^{i,*}$, $i\in\mathcal{I}$, of the primal problem $(\mathcal{P})$. On the other hand, the sequence $\{\mathbf{w}^{i,\nu+1}\}_\nu$ converges to $\mathbf{w}^{i,*}$, which is also known as the shadow price of the problem for each scenario $i$. Moreover, the solution quality is guaranteed to improve continuously,
in the sense that
\begin{align}
\sum_{i\in\mathcal{I}}\rho_i\left(|\hat{\mathbf{u}}^{i,\nu+1}-\mathbf{u}^{i,*}|_2^2
+\frac{1}{\alpha^2}|\mathbf{w}^{i,\nu+1}-\mathbf{w}^{i,*}|_2^2\right) \leq
\sum_{i\in\mathcal{I}}\rho_i\left(|\hat{\mathbf{u}}^{i,\nu}-\mathbf{u}^{i,*}|_2^2
+\frac{1}{\alpha^2}|\mathbf{w}^{i,\nu}-\mathbf{w}^{i,*}|_2^2\right),
\end{align}
and the equality is finally achieved when $(\hat{\mathbf{u}}^{i,\nu+1},\mathbf{w}^{i,\nu+1})=(\mathbf{u}^{i,*},\mathbf{w}^{i,*})$ for some $\nu$.
\label{PHA_thm}
\end{thm}
\section{Online Quadratic Programming}\label{QP_section}
Quadratic programming (QP) is a fundamental subject in mathematical programming with wide spectra of applications in various fields, including business and finance (see \cite{QP_Survey} for a survey). Although QP has been investigated broadly and deeply, almost all of the studies up to today have been confined to a deterministic framework. Recently, \cite{Online_LP} studies the online linear programming (LP), where the constraint matrix is revealed column by column along with the corresponding coefficients in the objective function. In this section, we will extend the online programming from online LP to online QP and solve it by our newly proposed solution scheme introduced in Section \ref{PHA_section}. More precisely, we consider an online version of a general QP,
\begin{IEEEeqnarray*}{ccl}
~(\mathcal{Q})~~ & \min\limits_{u_t,\forall t} & ~\mathbb{E}\left[\sum\nolimits_{i,j=0}^{T} \frac{1}{2}x_i' Q_{ij} x_j + \sum\nolimits_{t=0}^{T}x_t'c_{t}
+ \sum\nolimits_{i,j=0}^{T-1} \frac{1}{2}u_i' R_{ij} u_j + \sum\nolimits_{t=0}^{T-1}u_t'd_{t}\right] \\
& \rm{s.t.} & ~x_{t+1}=A_tx_t+B_t u_t+\xi_t,~t=0,1,\ldots,T-1,
\end{IEEEeqnarray*}
where $x_t\in\mathbb{R}^m$ is the state with $x_0$ given, $u_t\in\mathbb{R}^n$ is the control, and $\xi_t \in\mathbb{R}^m$ is the system randomness at time $t$ following some discrete distribution $D_{\xi_t}$ with $|D_{\xi_t}|$ possible outcomes and the probability $\pi_t^k$ for each $k=1,2,\ldots,|D_{\xi_t}|$. We further assume that $\xi_t$'s are independent across time stages. Therefore, there are in total $\prod_{t=0}^{T-1}|D_{\xi_t}|$ scenarios for this $T$-period problem, and each scenario reflects a path of $\xi_t$'s along the time horizon, and the scenario probability $\rho_i$ is calculated by the product of the involved $\pi_t^k$'s. The assumptions on the coefficients will be stated later. Note that the system disturbance $\xi_t$ is realized after the decision is made at time $t$. To see the \emph{online} nature of $(\mathcal{Q})$, we can aggregate all the constraints into the following compact form,
\begin{align}
&\left(
\begin{array}{cccccc}
I_m & 0 & 0 & \cdots & 0 & 0\\
-A_1 & I_m & 0 & \cdots & 0 & 0 \\
0 & -A_2 & I_m & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & -A_{T-1} & I_m
\end{array}
\right)\left(
\begin{array}{c}
x_1\\
x_2\\
x_3\\
\vdots\\
x_T
\end{array}
\right)
-\left(
\begin{array}{cccc}
B_{0} & 0 & \cdots & 0\\
0 & B_{1} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & B_{T-1}
\end{array}
\right)\left(
\begin{array}{c}
u_0\\
u_1\\
\vdots\\
u_{T-1}
\end{array}
\right)\nonumber\\
&=\left(
\begin{array}{c}
A_0x_0\\
0\\
\vdots\\
0
\end{array}
\right)+\left(
\begin{array}{c}
\xi_0\\
\xi_1\\
\vdots\\
\xi_{T-1}
\end{array}
\right),
\label{QP_constraint}
\end{align}
where $I_m$ is an identity matrix of size $m$. It becomes clear now that the right hand side of the constraints (\ref{QP_constraint}) is fully uncertain at time $0$, and it becomes partially deterministic as time evolves. For instance, at time 1 (before $u_1$ is made), only the first constraint becomes deterministic. In general, at time $t$ (before $u_t$ is determined), the first $t$ constraints are realized. Although we can observe the states as the system randomness is gradually realized, it is often the case that we need to make optimal decisions before that happens. On the other hand, the objective function of $(\mathcal{Q})$, in general, includes cross terms among the $x_t$'s and among the $u_t$'s across time stages. These interactions among time stages make $(\mathcal{Q})$ a concrete nonseparable instance of $(\mathcal{P})$. Let us take a deeper look at its compact form and make some assumptions on its coefficients,
\begin{IEEEeqnarray*}{ccl}
~(\mathcal{Q}_c)~~&\min_{\mathbf{u}} &~\mathbb{E}\left[\frac{1}{2}\mathbf{x}'\mathbf{Q}\mathbf{x}
+\mathbf{x}'\mathbf{c}+\frac{1}{2}\mathbf{u}'\mathbf{R}\mathbf{u}+\mathbf{u}'\mathbf{d}\right]\\
& {\rm s.t.} & ~\mathbf{x}=\mathbf{A}+\mathbf{B}\mathbf{u}+\mathbf{C}\boldsymbol{\xi},
\end{IEEEeqnarray*}
where $\mathbf{x}=(x_0',x_1',\ldots,x_{T-1}',x_T')'\in\mathbb{R}^{m(T+1)}$, $\mathbf{u}=(u_0',u_1',\ldots,u_{T-1}')'\in\mathbb{R}^{nT}$, and $\boldsymbol{\xi}=(\xi_0',\xi_1',\ldots,\xi_{T-1}')'\in\mathbb{R}^{mT}$, and the coefficient matrices are given by
\begin{align*}
&\mathbf{Q}=\left(
\begin{array}{cccc}
Q_{00} & Q_{01} & \cdots & Q_{0T} \\
Q_{10} & Q_{11} & \cdots & Q_{1T} \\
\vdots & \vdots & \ddots & \vdots \\
Q_{T0} & Q_{T1} & \cdots & Q_{TT}
\end{array}
\right)\in\mathbb{R}^{m(T+1)\times m(T+1)},\\
&\mathbf{R}=\left(
\begin{array}{cccc}
R_{00} & R_{01} & \cdots & R_{0(T-1)} \\
R_{10} & R_{11} & \cdots & R_{1(T-1)} \\
\vdots & \vdots & \ddots & \vdots \\
R_{(T-1)0} & R_{(T-1)1} & \cdots & R_{(T-1)(T-1)}
\end{array}
\right)\in\mathbb{R}^{nT\times nT},
\end{align*}
$\mathbf{c}=(c_0',c_1',\ldots,c_T')'\in\mathbb{R}^{m(T+1)}$, $\mathbf{d}=(d_0',d_1',\ldots,d_{T-1}')'\in\mathbb{R}^{nT}$,
\begin{align*}
&\mathbf{A}=\left(x_0',(A_0x_0)',(A_1A_0x_0)',\ldots,((\prod_{t=0}^{T-1}A_t)x_0)'\right)'
\in\mathbb{R}^{m(T+1)}
\end{align*}
\begin{align*}
&\mathbf{B}=\left(
\begin{array}{cccc}
{0} & {0} & \ldots & {0}\\
B_0 & {0} & \ldots & {0}\\
A_1B_0 & B_1 & \ldots & {0}\\
\vdots & \vdots & \ddots & \vdots \\
(\prod\limits_{t=1}^{T-1}A_t)B_0 & (\prod\limits_{t=2}^{T-1}A_t)B_1 &
\ldots & B_{T-1}
\end{array}
\right)\in\mathbb{R}^{m(T+1)\times nT}
\end{align*}
and
\begin{align*}
&\mathbf{C}=\left(
\begin{array}{cccc}
{0} & {0} & \ldots & {0}\\
I_m & {0} & \ldots & {0}\\
A_1 & I_m & \ldots & {0}\\
\vdots & \vdots & \ddots & \vdots \\
\prod_{t=1}^{T-1}A_t & \prod_{t=2}^{T-1}A_t &
\ldots & I_m
\end{array}
\right)\in\mathbb{R}^{m(T+1)\times mT}
\end{align*}
\begin{asmp}
The matrices $\mathbf{Q}$ and $\mathbf{R}$ are positive semidefinite.
\label{QR_assumption}
\end{asmp}
The conventional stochastic linear-quadratic (LQ) problem turns out to be a special case of $(\mathcal{Q}_c)$ in which both $\mathbf{Q}$ and $\mathbf{R}$ are block-diagonal matrices, and are positive semidefinite and positive definite, respectively.
Under Assumption \ref{QR_assumption}, $(\mathcal{Q}_c)$ is solvable by our proposed scenario-decomposition scheme, as each scenario subproblem
\begin{IEEEeqnarray*}{ccl}
~(\mathcal{Q}_c^i)~~&\min_{\mathbf{u}} &~\frac{1}{2}\mathbf{x}'\mathbf{Q}\mathbf{x}
+\mathbf{x}'\mathbf{c}+\frac{1}{2}\mathbf{u}'\mathbf{R}\mathbf{u}+\mathbf{u}'\mathbf{d}\\
& {\rm s.t.} & ~\mathbf{x}=\mathbf{A}+\mathbf{B}\mathbf{u}+\mathbf{C}\boldsymbol{\xi}^i,
\end{IEEEeqnarray*}
is convex w.r.t. the decision variable $\mathbf{u}$. If $\mathbf{R}$ is further positive definite, we have the optimal solution to $(\mathcal{Q}_c^i)$, denoted by $\mathbf{u}^{i,0}$, in an analytical form given by
\begin{align}
\mathbf{u}^{i,0}=-(\mathbf{B}'\mathbf{Q}\mathbf{B}+\mathbf{R})^{-1}
[\mathbf{B}'\mathbf{Q}(\mathbf{A}+\mathbf{C}\boldsymbol{\xi}^i)+\mathbf{B}'\mathbf{c}+\mathbf{d}].
\label{Q_scenario_sol}
\end{align}
Note again that the optimal solution to the $i$th scenario problem, $\mathbf{u}^{i,0}$, is not the optimal solution to the primal problem $(\mathcal{Q}_c)$, and is not even a feasible one, since it violates the nonanticipativity constraint. We now apply the scenario-decomposition solution approach to $(\mathcal{Q}_c)$. More precisely, let us consider, at iteration $\nu=0,1,\ldots$, the following augmented Lagrangian problem for each scenario $i$,
\begin{IEEEeqnarray*}{ccl}
~(\mathcal{Q}_c^{i,\nu})~~&\min_{\mathbf{u}}&~\frac{1}{2}\mathbf{x}'\mathbf{Q}\mathbf{x}
+\mathbf{x}'\mathbf{c}+\frac{1}{2}\mathbf{u}'\mathbf{R}\mathbf{u}+\mathbf{u}'\mathbf{d}
+\mathbf{u}'\mathbf{w}^{i,\nu}+\frac{1}{2}\alpha|\mathbf{u}-\hat{\mathbf{u}}^{i,\nu}|_2^2\\
&{\rm s.t.}&~\mathbf{x}=\mathbf{A}+\mathbf{B}\mathbf{u}+\mathbf{C}\boldsymbol{\xi}^i,
\end{IEEEeqnarray*}
for a given implementable policy $\hat{\mathbf{u}}^{i,\nu}$ and a Lagrangian multiplier $\mathbf{w}^{i,\nu}$ (note that when $\nu=0$, $\hat{\mathbf{u}}^{i,0}$ is set at the implementable solution attained from $\mathbf{u}^{i,0}$, the optimal solution of $(\mathcal{Q}^i_c)$, and $\mathbf{w}^{i,0}$ is set as a zero vector). This time, due to the newly-added quadratic term on $\mathbf{u}$ in the objective, the optimal solution of $(\mathcal{Q}_c^{i,\nu})$, denoted by $\mathbf{u}^{i,\nu+1}$, is always given analytically by
\begin{align}
\mathbf{u}^{i,\nu+1}={}&-(\mathbf{B}'\mathbf{Q}\mathbf{B}+\mathbf{R}+\alpha I_{nT})^{-1}
\left[\mathbf{B}'\mathbf{Q}(\mathbf{A}+\mathbf{C}\boldsymbol{\xi}^i)+\mathbf{B}'\mathbf{c}+\mathbf{d}
+\mathbf{w}^{i,\nu}-\alpha\hat{\mathbf{u}}^{i,\nu}\right],
\label{Q_aug_sol}
\end{align}
where $I_{nT}$ is an $nT$-by-$nT$ identity matrix. Note that the explicit recursions in \eqref{Q_aug_sol} help us save effort when dealing with the iterative augmented Lagrangian problems. Therefore, the algorithm for this type of application is quite efficient. We then calculate $\hat{\mathbf{u}}^{i,\nu+1}$, the implementable solution for the next iteration, based on \eqref{conditional_expectation} or following the same procedure shown from \eqref{nu+1_sol} to \eqref{nu+1_project2}, and update $\mathbf{w}^{i,\nu+1}$ according to \eqref{w_update}. In practice, we could select the following condition as our stopping criterion,
\begin{align}
\sum_{i\in\mathcal{I}}\rho_i\left(|\hat{\mathbf{u}}^{i,\nu+1}-\hat{\mathbf{u}}^{i,\nu}|_2^2 + \frac{1}{\alpha^2}|\mathbf{w}^{i,\nu+1}-\mathbf{w}^{i,\nu}|_2^2\right) \leq \epsilon,
\label{Q_stopping criterion}
\end{align}
for a sufficiently small tolerance $\epsilon>0$. The set of implementable controls $\{\hat{\mathbf{u}}^{i,\nu+1}:i\in\mathcal{I}\}$ that satisfies this stopping rule is chosen as the optimal solution to $(\mathcal{Q})$ or $(\mathcal{Q}_c)$, which is denoted by $\{\hat{\mathbf{u}}^{i,\infty}:i\in\mathcal{I}\}$.
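To make the whole scheme concrete, the following self-contained sketch (Python) runs the above loop on a deliberately tiny online QP with scalar state and control, $T=2$, $A_t=B_t=1$, and $\pm 1$ disturbances; the matrices $\mathbf{Q}$ and $\mathbf{R}$, the penalty $\alpha$, and the tolerance are illustrative placeholders, and the closed forms \eqref{Q_scenario_sol} and \eqref{Q_aug_sol} are used for the subproblem solves.
\begin{verbatim}
import itertools
import numpy as np

# --- a deliberately tiny online QP instance: m = n = 1, T = 2, A_t = B_t = 1 ---
T, x0 = 2, 1.0
Q = np.array([[2., 1., 0.], [1., 2., 1.], [0., 1., 2.]])  # (T+1)x(T+1), PSD
R = np.array([[2., 1.], [1., 2.]])                         # T x T, positive definite
c, d = np.zeros(T + 1), np.zeros(T)
Avec = np.array([x0, x0, x0])                 # (x_0, A_0 x_0, A_1 A_0 x_0)
B = np.array([[0., 0.], [1., 0.], [1., 1.]])  # response of states to controls
C = np.array([[0., 0.], [1., 0.], [1., 1.]])  # response of states to disturbances

# scenarios: all +/-1 paths of (xi_0, xi_1), equally probable
scenarios = [np.array(s) for s in itertools.product([1., -1.], repeat=T)]
rho = np.full(len(scenarios), 1.0 / len(scenarios))

def solve_scenario(xi, w=None, u_hat=None, alpha=0.0):
    """Closed-form minimizer of the scenario subproblem; with (w, u_hat, alpha)
    it is the augmented Lagrangian subproblem of the PHA iteration."""
    H = B.T @ Q @ B + R + alpha * np.eye(T)
    g = B.T @ Q @ (Avec + C @ xi) + B.T @ c + d
    if w is not None:
        g = g + w - alpha * u_hat
    return -np.linalg.solve(H, g)

def make_implementable(U):
    """Probability-weighted averaging over bundles: scenarios sharing the
    disturbance history xi_0..xi_{t-1} get a common control at time t."""
    U_hat = np.array(U, dtype=float)
    for t in range(T):
        keys = [tuple(xi[:t]) for xi in scenarios]
        for key in set(keys):
            idx = [i for i, k in enumerate(keys) if k == key]
            U_hat[idx, t] = np.average(U_hat[idx, t], weights=rho[idx])
    return U_hat

alpha, eps = 2.0, 1e-10
U_hat = make_implementable([solve_scenario(xi) for xi in scenarios])
W = np.zeros_like(U_hat)
for it in range(500):
    U_new = np.array([solve_scenario(xi, W[i], U_hat[i], alpha)
                      for i, xi in enumerate(scenarios)])
    U_hat_new = make_implementable(U_new)
    W_new = W + alpha * (U_new - U_hat_new)
    gap = np.sum(rho * (np.sum((U_hat_new - U_hat) ** 2, axis=1)
                        + np.sum((W_new - W) ** 2, axis=1) / alpha ** 2))
    U_hat, W = U_hat_new, W_new
    if gap <= eps:
        break
print(f"stopped after {it + 1} iterations, gap = {gap:.2e}")
for i, xi in enumerate(scenarios):
    print(f"xi = {xi}:  u_hat = {np.round(U_hat[i], 4)}")
\end{verbatim}
Scenarios sharing the same disturbance history up to time $t$ are forced to share the control at time $t$, which is exactly the nonanticipativity requirement behind \eqref{conditional_expectation}.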
\begin{exam}\label{onlineQ_eg}
Let us consider an illustrative problem with a scalar state ($m=1$), a two-dimensional control ($n=2$), and a planning horizon of $T=3$. The system parameters are simply given by $A_t=1$ and $B_t=(1,1)$ for all $t$, whereas $\mathbf{Q}=(Q_{ij})_{i,j=0}^T$ and $\mathbf{R}=(R_{ij})_{i,j=0}^{T-1}$ in the performance measure are randomly generated as follows,
\begin{align*}
&\mathbf{Q}=\left(
\begin{array}{c:c:c:c}
2.4512 & 1.0930 & 1.0243 & 1.8873 \\
\hdashline
1.0930 & 0.7852 & 0.2319 & 1.0027 \\
\hdashline
1.0243 & 0.2319 & 0.7276 & 0.5147 \\
\hdashline
1.8873 & 1.0027 & 0.5147 & 1.7188
\end{array}
\right)
\\
&\mathbf{R}=\left(
\begin{array}{cc:cc:cc}
1.3281 & 1.4932 & 1.2903 & 0.7788 & 1.0149 & 1.0774 \\
1.4932 & 2.6110 & 2.2984 & 1.3315 & 1.3902 & 2.3629 \\
\hdashline
1.2903 & 2.2984 & 2.7214 & 1.7258 & 1.7339 & 2.6799 \\
0.7788 & 1.3315 & 1.7258 & 1.3102 & 1.0305 & 1.6583 \\
\hdashline
1.0149 & 1.3902 & 1.7339 & 1.0305 & 1.3073 & 1.6208 \\
1.0774 & 2.3629 & 2.6799 & 1.6583 & 1.6208 & 2.9734
\end{array}
\right)
\end{align*}
The above two matrices are positive semidefinite and positive definite, respectively. To have a positive definite $\mathbf{R}$ in this example is for the purpose of comparison with the classical stochastic LQ control. Furthermore, $\mathbf{c}$ and $\mathbf{d}$ are set to be zero vectors for simplicity. The white system disturbance $\xi_t$ is modeled by a two-point distribution at each time $t$ with $D_{\xi_t}=\{1,-1\}$ and equal probability. Hence this is simply a binomial scenario tree as shown in Figure \ref{Q_eg_tree}, where the possible realizations of $\xi_t$'s at different time stages and under different scenarios are listed next to the related circle nodes. The total number of scenarios is $|\mathcal{I}|=8$ with the scenario probability $\rho_i=1/8$ for every $i\in\mathcal{I}$. The partitions of the scenario set, $\mathcal{I}_t$'s, together with scenario bundles at each time, $\mathcal{I}_{t,l}$'s, are easily recognized: $\mathcal{I}_0=\{\mathcal{I}_{0,1}\}=\{\{i_1,\ldots,i_{8}\}\}$; $\mathcal{I}_1=\{\mathcal{I}_{1,1},\mathcal{I}_{1,2}\}=\{\{i_1,\ldots,i_{4}\},\{i_{5},\ldots,i_{8}\}\}$; and finally $\mathcal{I}_2=\{\mathcal{I}_{2,1},\ldots,\mathcal{I}_{2,4}\}
=\{\{i_1,i_2\},\ldots,\{i_{7},i_{8}\}\}$. Suppose the system starts from $x_0=1$. The optimal controls $\hat{\mathbf{u}}^{i,\infty}=((\hat{u}_0^{i,\infty})',(\hat{u}_1^{i,\infty})',(\hat{u}_2^{i,\infty})')'$, $i\in\mathcal{I}$, solved by the scenario-decomposition scheme in MATLAB for the above online QP problem, are displayed (rounded to two decimals) beneath the corresponding nodes in Figure \ref{Q_eg_tree}. We next keep only the diagonal blocks, set the others to zero in the above $\mathbf{Q}$ and $\mathbf{R}$, and investigate the resulting standard stochastic LQ problem using both PHA and DP. We find that the optimal controls obtained from the two methods coincide with each other. This exercise numerically demonstrates the equivalent solution power, to a certain degree, of the time-decomposition and scenario-decomposition approaches when both are applied to separable and monotone stochastic control problems with convex scenario subproblems.
\begin{figure}[!t]
\begin{center}
\scalebox{0.9}{
\begin{tikzpicture}[
every node/.style={scale=0.8},
level 1/.style={sibling distance=8cm, level distance=1cm},
level 2/.style={sibling distance=4cm, level distance=1cm},
level 3/.style={sibling distance=3cm, level distance=2cm},
]
\node (Root) [rectangle,draw,minimum size=0.5cm] {}
child {node (xi0_1) [circle,draw,minimum size=0.8cm] {1}
child {node (xi1_1) [circle,draw,minimum size=0.8cm] {1}
child {node (xi2_1) [circle,draw,minimum size=0.8cm] {1}}
child {node (xi2_2) [circle,draw,minimum size=0.8cm] {-1}}
}
child {node (xi1_2) [circle,draw,minimum size=0.8cm] {-1}
child {node (xi2_3) [circle,draw,minimum size=0.8cm] {1}}
child {node (xi2_4) [circle,draw,minimum size=0.8cm] {-1}}
}
}
child {node (xi0_2) [circle,draw,minimum size=0.8cm] {-1}
child {node (xi1_3) [circle,draw,minimum size=0.8cm] {1}
child {node (xi2_5) [circle,draw,minimum size=0.8cm] {1}}
child {node (xi2_6) [circle,draw,minimum size=0.8cm] {-1}}
}
child {node (xi1_4) [circle,draw,minimum size=0.8cm] {-1}
child {node (xi2_7) [circle,draw,minimum size=0.8cm] {1}}
child {node (xi2_8) [circle,draw,minimum size=0.8cm] {-1}}
}
};
\begin{scope}[every node/.style={right,scale=0.8}]
\path (Root -| xi2_8) ++(2mm,0) node {} ++(5mm,0) node {$t=0$};
\path (xi0_1 -| xi2_8) ++(2mm,0) node {} ++(5mm,0) node {$t=1$};
\path (xi1_1 -| xi2_8) ++(2mm,0) node {} ++(5mm,0) node {$t=2$};
\path (xi2_1 -| xi2_8) ++(2mm,0) node {} ++(5mm,0) node {$T=3$};
\end{scope}
\path (xi0_1) -- (xi0_2) node [midway,align=center] {
$\footnotesize\hat u_0^{i,\infty}=\left(
\begin{aligned}
0.37 \\
-1.83
\end{aligned}
\right)$,\\$\footnotesize\forall i\in\mathcal{I}_{0,1}$};
\path (xi1_1) -- (xi1_2) node [below = 0.02cm of xi0_1,align=center] {
$\footnotesize\hat u_1^{i,\infty}=\left(
\begin{aligned}
3.58 \\
-5.01
\end{aligned}
\right)$,\\$\footnotesize\forall i\in\mathcal{I}_{1,1}$};
\path (xi1_3) -- (xi1_4) node [below = 0.02cm of xi0_2,align=center] {
$\footnotesize\hat u_1^{i,\infty}=\left(
\begin{aligned}
1.60 \\
-1.50
\end{aligned}
\right)$,\\$\footnotesize\forall i\in\mathcal{I}_{1,2}$};
\node (u_1_1) [below = 0.8cm of xi1_1,align=center] {
$\footnotesize\hat u_2^{i,\infty}=\left(
\begin{aligned}
-3.23 \\
1.87
\end{aligned}
\right)$,\\$\footnotesize\forall i\in\mathcal{I}_{2,1}$};
\node (u_1_2) [below = 0.8cm of xi1_2,align=center] {
$\footnotesize\hat u_2^{i,\infty}=\left(
\begin{aligned}
-1.25 \\
1.41
\end{aligned}
\right)$,\\$\footnotesize\forall i\in\mathcal{I}_{2,2}$};
\node (u_1_3) [below = 0.8cm of xi1_3,align=center] {
$\footnotesize\hat u_2^{i,\infty}=\left(
\begin{aligned}
-1.60 \\
1.25
\end{aligned}
\right)$,\\$\footnotesize\forall i\in\mathcal{I}_{2,3}$};
\node (u_1_4) [below = 0.8cm of xi1_4,align=center] {
$\footnotesize\hat u_2^{i,\infty}=\left(
\begin{aligned}
0.38 \\
0.79
\end{aligned}
\right)$,\\$\footnotesize\forall i\in\mathcal{I}_{2,4}$};
\node [below = 0.1cm of xi2_1,align=center] {$i_1$};
\node [below = 0.1cm of xi2_2,align=center] {$i_2$};
\node [below = 0.1cm of xi2_3,align=center] {$i_3$};
\node [below = 0.1cm of xi2_4,align=center] {$i_4$};
\node [below = 0.1cm of xi2_5,align=center] {$i_5$};
\node [below = 0.1cm of xi2_6,align=center] {$i_6$};
\node [below = 0.1cm of xi2_7,align=center] {$i_7$};
\node [below = 0.1cm of xi2_8,align=center] {$i_8$};
\path (xi1_1)edge[->,dashed](u_1_1);
\path (xi1_2)edge[->,dashed](u_1_2);
\path (xi1_3)edge[->,dashed](u_1_3);
\path (xi1_4)edge[->,dashed](u_1_4);
\end{tikzpicture}
}
\end{center}
\caption{Scenario tree for $\boldsymbol{\xi}$ of Example \ref{onlineQ_eg} and its optimal controls at different times and under different scenarios.}
\label{Q_eg_tree}
\end{figure}
\end{exam}
\section{Dynamic Portfolio Selection with Smoothing Properties}
In this section, let us consider a financial market consisting of $n$ risky assets and one risk-free asset, and an investment time horizon $T$ (with the time indices $t=0,1,\cdots,T-1$). The total return of the riskless asset, denoted by $r_t$, is deterministic and given, whereas the random total return of risky assets at time $t$, denoted by ${e}_t=(e_t^1,\ldots,e_t^n)'\in\mathbb{R}^n$, is assumed to follow a discrete distribution $D_{e_t}$ with $|D_{e_t}|$ possible realizations and corresponding probabilities $\pi_t^k\geq 0$, $k=1,\ldots,|D_{e_t}|$. Furthermore, ${e}_t$'s from different time stages are assumed to be independent.
A series of realizations on $\{{e}_t\}_t$ then defines a \emph{scenario}. Therefore, given the time horizon $T$, there are in total $\prod_{t=0}^{T-1}|D_{e_t}|$ scenarios, and the scenario probability $\rho_i$ is then calculated by the product of the related $\pi_t^k$'s that are attached to this scenario $i$. Let $x_t\in\mathbb{R}$ be the wealth level at time $t$ with the initial wealth $x_0$ given, and ${u}_t=(u_t^1,\cdots,u_t^n)'\in\mathbb{R}^n$ be the portfolio allocation where $u_t^i$ is the dollar amount to invest in the risky asset $i$, $i=1,\cdots,n$. Then the dollar amount allocated to the riskless asset at time $t$ is $(x_t-\sum_{i=1}^n u_t^i)$ under the assumption of self-financing. Therefore, the wealth dynamic under policy ${u}_t$ becomes
\begin{align}
x_{t+1} = \sum_{i=1}^n e_t^i {u}_t^i+(x_t-\sum_{i=1}^n {u}_t^i)r_t
= r_{t}x_t + {P}_t'{u}_t,~t=0,1,\ldots,T-1,\label{wealth_dynamic}
\end{align}
where ${P}_t={e}_t-r_t{1}_n\in\mathbb{R}^n$ is known as the excess total return and ${1}_n\in\mathbb{R}^n$ is an all-one vector of size $n$.
There are in general two directions for modelling the objective of the portfolio selection problem, i.e., the expected utility maximization framework and the mean-variance formulation. Among conventional formulations, most objective functions focus on the performance of the \emph{terminal} wealth. Failing to take into account the investment behavior during the investment process could lead to large fluctuations either in the wealth level or in the policy values, while the former may further lead to bankruptcy (see \cite{Discrete_Bankruptcy} and \cite{Continuous_Bankruptcy}) and the latter may cause large transaction costs. Thus, a relatively smooth wealth growth may often be desirable, even with some sacrifice of the terminal wealth. In some other situations, to avoid the transaction cost as much as possible, investors may demand a relatively uniform budget allocation during the whole investment period. In order to reflect these practical concerns, we extend in this research both the traditional utility formulation and the conventional dynamic mean-variance model by attaching to their original objective functions an expectation of a \emph{smoothing} term in a \emph{quadratic variation} form along the time horizon,
\begin{align}
S(\{x_t\}_t,\{u_t\}_t)=\sum_{t\in\mathcal{T}}\Big(f_t(x_t,u_t)
-\frac{1}{|\mathcal{T}|}\sum_{\tau\in\mathcal{T}}f_\tau(x_\tau,u_\tau)\Big)^2,
\label{smoothing_term}
\end{align}
for some types of functions $f_t:\mathbb{R}\times\mathbb{R}^n\rightarrow\mathbb{R}$, where $\mathcal{T}\subseteq \{0,1,\ldots,T\}$ is a subset of time stages selected for smoothing purposes. Some concrete choices of $f_t$ could be
\begin{align}
f_t(x_t,u_t)=x_t
\label{wealth_smooth}
\end{align}
in order for us to smooth the wealth levels, and
\begin{align}
f_t(x_t,u_t)=\sum_{i\in\mathcal{N}}{u}_t^i,
\label{control_smooth}
\end{align}
in order for us to smooth the total investment amount in some specific risky assets specified by $\mathcal{N}\subseteq\{1,2,\ldots,n\}$.
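For concreteness, the smoothing term \eqref{smoothing_term} under either choice of $f_t$ can be evaluated as in the short sketch below (Python); the wealth path and allocations are placeholders.
\begin{verbatim}
import numpy as np

def smoothing_term(x, u, T_set, f):
    """Quadratic-variation smoothing term S({x_t},{u_t}): sum over t in T_set
    of (f(x_t,u_t) - mean over tau in T_set of f(x_tau,u_tau))^2."""
    vals = np.array([f(x[t], u[t]) for t in T_set])
    return float(np.sum((vals - vals.mean()) ** 2))

# wealth smoothing, f_t(x_t,u_t) = x_t, and allocation smoothing,
# f_t(x_t,u_t) = sum of u_t over a chosen asset subset N
f_wealth = lambda x_t, u_t: x_t
f_alloc  = lambda x_t, u_t, N=(0, 1): sum(u_t[i] for i in N)

x = {0: 1.00, 1: 1.05, 2: 1.20, 3: 1.15}               # placeholder wealth path
u = {t: np.array([0.3, 0.2, 0.1]) for t in range(3)}   # placeholder allocations
u[3] = np.zeros(3)                                     # no decision at T
print(smoothing_term(x, u, T_set={1, 2, 3}, f=f_wealth))
print(smoothing_term(x, u, T_set={0, 1, 2}, f=f_alloc))
\end{verbatim}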
\subsection{Smoothing under Expected Utility Maximization}
Conventionally, the investor seeks to find the optimal ${u}_t$ for all $t$ such that the expected utility of the final wealth, denoted by $\mathbb{E}[U(x_T)]$, is maximized subject to the wealth dynamic (\ref{wealth_dynamic}), where $U(x)$ is the investor's utility function. In this paper we further assume that the utility function $U(x)$ satisfies $-U'(x)/U''(x)=a+bx$ for certain coefficients $a$ and $b$, which is known as the hyperbolic-absolute-risk-aversion (HARA) utility. Some commonly-used utilities, for example, the exponential utility of the form $\{U(x)=-e^{-x/a}:x\in\mathbb{R}\}$ where $a>0$ and $b=0$ and the power utility of the form $\{U(x)=\frac{1}{b-1}(a+bx)^{1-1/b}:x \geq -a/b\}$ where $b\neq 1$ and $b\neq 0$ are two special cases of HARA utility. In this subsection, we consider expected utility maximization with a general smoothing term,
\begin{IEEEeqnarray*}{ccl}
~(\mathcal{US}(\gamma))~~ & \max_{{u}_t,\forall t} & ~\mathbb{E}[U(x_T)]-\gamma \mathbb{E}[S(\{x_t\}_{t\in\mathcal{T}},\{u_t\}_{t\in\mathcal{T}})]\\
& \rm{s.t.} & ~x_{t+1} = r_{t}x_t + {P}_t'{u}_t,~t=0,1,\ldots,T-1,
\end{IEEEeqnarray*}
where the trade-off parameter $\gamma\geq 0$ specified by the investor represents a trade-off between the expected utility of the \emph{terminal} wealth and the smoothing demand during the \emph{intermediate} process. The larger the $\gamma$, the more the investor is concerned about smoothing. When $\gamma=0$, this problem reduces to the classical model under the HARA utility. It is well known that $(\mathcal{US}(0))$ is solvable by DP and the optimal policy (see \cite{bertsekas2017dynamic}) is given by
\begin{align}
{u}^*_{t}(x_{t})={\beta}_{t}\left(\frac{a}{\prod_{\tau=t+1}^{T-1}r_{\tau}}+br_{t}x_{t}\right),
\label{utility_analytical}
\end{align}
for $t=0,1,\ldots,T-1$ (let us define the operator $\prod_{\tau=T}^{T-1}r_{\tau}=1$ for consistency), where ${\beta}_{t}=(\beta_{t}^1,\ldots,\beta_{t}^n)'\in\mathbb{R}^n$ should be derived from the optimality condition once given $x_t$,
\begin{align}
\mathbb{E}\left[U'\left(r_{t}x_{t}
+\left(\displaystyle\frac{a}{\prod_{\tau=t+1}^{T-1}r_{\tau}}+br_{t}x_{t}\right)
{\beta}'_{t}{P}_{t}\right){P}_{t}\right]={0},
\label{beta_condition}
\end{align}
which is a system of $n$ nonlinear equations at time $t$. In general, the condition (\ref{beta_condition}) is hard to solve for ${\beta}_{t}$. We point out here that $(\mathcal{US}(0))$ is also solvable by PHA under a discrete market setting, leading to a family of numerical optimal solutions in a tabular form, conditional on the future realizations of $e_t$'s. More importantly, PHA displays its extra solution power on $(\mathcal{US}(\gamma))$ with $\gamma>0$, the expected utility maximization with a smoothing term, whose non-separability prevents the adoption of DP. It is obvious that, for any $\gamma\geq 0$, the quadratic smoothing terms in the forms of (\ref{wealth_smooth}) or (\ref{control_smooth}), together with the linear wealth dynamics in (\ref{wealth_dynamic}), make the problem $(\mathcal{US}(\gamma))$ satisfy the conditions in Theorem \ref{PHA_thm} and thus render it solvable by PHA.
We complete this subsection by investigating a case study below.
\begin{exam}\label{utility_eg}
Consider a similar market setting as in Example 3 of \cite{cui2014optimal}, where there are three risky assets ($n=3$) and the distribution, for simplicity, is directly imposed on the random excess total return ${P}_t$, instead of ${e}_t$. Suppose that ${P}_t$ is independent and identically distributed with a discrete uniform distribution of five possible realizations ($|D_{P_t}|=5$ for all $t$ and $\pi_t^k=1/5$ for all $k=1,\ldots,5$ and all $t$),
\begin{align}
{P}_t\in&
\left(
\left[
\begin{array}{c}
0.18\\
-0.05\\
-0.14
\end{array}
\right],
\left[
\begin{array}{c}
0.03\\
-0.12\\
-0.03
\end{array}
\right],
\left[
\begin{array}{c}
-0.05\\
0.15\\
0.05
\end{array}
\right],
\left[
\begin{array}{c}
-0.01\\
0.15\\
0.10
\end{array}
\right],
\left[
\begin{array}{c}
-0.05\\
0.01\\
0.06
\end{array}
\right]
\right),~\forall t.
\label{P_distribution}
\end{align}
We scale the initial wealth to $x_0=1$, and set $T=3$ and $r_t=1.04$ for all $t$. Suppose that the investor has an exponential utility $U(x)=-e^{-x}$ (hence $a=1$ and $b=0$). Originally, the model $(\mathcal{US}(0))$ with this exponential utility can still be solved by DP under the above discrete market setting \eqref{P_distribution}, and based on (\ref{utility_analytical}), the analytical optimal feedback policy is given by, for $t=0,1,\ldots,T-1$,
\begin{eqnarray}
{u}^*_{t}(x_t)=\frac{{\beta}_{t}}{\prod_{\tau=t+1}^{T-1}r_{\tau}},
\label{utility_analytical_concrete}
\end{eqnarray}
According to (\ref{beta_condition}), the $x_t$-dependent ${\beta}_{t}=(\beta_{t}^1,\beta_{t}^2,\beta_{t}^3)'$ at each time $t$ should be derived from the following system of nonlinear equations, starting from $t=0$,
\begin{eqnarray}
\sum_{k=1}^{|D_{P_t}|}\pi_k\exp\left\{-\left(
r_tx_t+\frac{\beta'_{t}{P}_{t,k}}{\prod_{\tau=t+1}^{T-1}r_{\tau}}
\right)\right\}{P}_{t,k}={0},
\label{beta_condition_concrete}
\end{eqnarray}
where ${P}_{t,k}$ stands for the $k$th possible realization in $D_{P_t}$ of (\ref{P_distribution}). Although obtaining the value of ${\beta}_{t}$ is indispensable for executing the DP-based optimal policy, solving for ${\beta}_{t}$ from (\ref{beta_condition_concrete}) is not easy, even under the current discrete market setting.
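As an illustration, the one-period condition at $t=T-1$ can nevertheless be attacked with a generic root finder; the sketch below (Python) does so for the five-point distribution (\ref{P_distribution}). Note that for the exponential utility the common factor $e^{-r_tx_t}$ cancels, so this ${\beta}_{T-1}$ does not depend on $x_{T-1}$. The solver and starting point are illustrative choices and not the procedure used to produce Table \ref{utility_tabular}.
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

# five equally likely realizations of the excess total return P_t
P = np.array([[ 0.18, -0.05, -0.14],
              [ 0.03, -0.12, -0.03],
              [-0.05,  0.15,  0.05],
              [-0.01,  0.15,  0.10],
              [-0.05,  0.01,  0.06]])
pi = np.full(5, 0.2)

def condition(beta):
    """Optimality condition at t = T-1 for U(x) = -exp(-x):
    sum_k pi_k * exp(-beta . P_k) * P_k = 0
    (the common factor exp(-r_t x_t) has been cancelled)."""
    weights = pi * np.exp(-P @ beta)
    return weights @ P

beta = fsolve(condition, x0=np.zeros(3))
print("beta_{T-1} =", np.round(beta, 4))
print("residual   =", condition(beta))
\end{verbatim}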
We now resolve the above $(\mathcal{US}(0))$ by the scenario-decomposition method PHA and obtain the optimal asset allocations $\hat{u}_t^{i,\infty}$ in Table \ref{utility_tabular} (rounded to two decimals), which is a \emph{tabular} form in the sense that it indicates how much to invest at what time, in which asset (the symbols A1, A2 and A3 represent the three risky assets, respectively), and under which scenario (a path of realized $P_{t,k}$'s). Then the wealth trajectory under the optimal policy can be traced for any scenario $i$, and we denote it by $\{x_t^{i,\infty}\}_t$ with $x_0^{i,\infty}=x_0$ given for every $i$. Since the number of all possible realizations of wealth trajectories under the optimal solutions is finite in our discrete market (in this example $|\mathcal{I}|=125$), we can easily check the consistency between the numerical optimal solutions from PHA and those output by the analytical DP policy when plugging in the possible future wealth levels. This is done by, for each scenario $i$, replacing the left hand side of \eqref{utility_analytical_concrete} with the value of $\hat{u}_t^{i,\infty}$ and deriving \emph{reversely} the scenario-specific ${\beta}_{t,i}$, and we succeed in confirming that the resulting ${\beta}_{t,i}$ satisfies the equality in (\ref{beta_condition_concrete}) where we set $x_t=x_t^{i,\infty}$. We again numerically demonstrate the equivalence between the scenario-decomposition and the time-decomposition approaches, when both are available, for solving separable, monotone and convex multistage decision-making problems under a finite-scenario setting. More importantly, we attain the exact investment decisions that DP often fails to provide due to the difficulty in finding $\beta_t$ from \eqref{beta_condition}.
When we consider $(\mathcal{US}(\gamma))$ with $\gamma>0$, only PHA works for numerical solutions. In this example, we test $\gamma=1$ and $\gamma=10$ for the wealth smoothing term in (\ref{wealth_smooth}) with $\mathcal{T}=\{1,2,3\}$. The computational tabular results are also listed in Table \ref{utility_tabular}. Comparing them with the results without smoothing, we find that, when $\gamma>0$, the asset allocations in general become more moderate. This further leads to smoother wealth trajectories, whether at the single-scenario level $\{x_t^{i,\infty}\}_t$ (which could be seen from our experiments but we omit the details here) or at an overall level in terms of their expectations and variances, as exhibited in Table \ref{utility_appropriate} (rounded to two decimals where needed). From Table \ref{utility_appropriate}, we can see a stabler growth of the expected wealth and a less fluctuating wealth movement (i.e., lower variances) when the wealth smoothing is considered ($\gamma=1,10$). These naturally cause a decrease in the expected terminal wealth $\mathbb{E}[x_T^{i,\infty}]$ compared with the non-smoothing setting ($\gamma=0$). The larger the $\gamma$, the more conservative the investment decision, and thus the bigger the sacrifice in $\mathbb{E}[x_T^{i,\infty}]$. On the other hand, the smoothing helps, to a certain degree, to reduce the possible bankruptcy induced by a relatively aggressive investing style during the investment process. To see this, let us define the bankruptcy rate at time $t$ by (similar to \cite{Discrete_Bankruptcy})
\begin{align}
BR_t
&= \mathbb{P}(x_t<x_t^b,~x_\tau\geq x_\tau^b~\text{for }\tau=0,1,\ldots,t-1)\nonumber\\
&=\frac{BN_t}{|\mathcal{I}|-\sum_{\tau=0}^{t-1}BN_{\tau}},~t=1,\ldots,T,
\label{bankruptcy_definition}
\end{align}
where $x_t^b$ denotes the wealth benchmark at time $t$ specified by the investor, and we define $BN_t$ as the number of bankruptcy scenarios at time $t$ under which $x_t<x_t^b$ and $x_\tau\geq x_\tau^b$ for $\tau<t$. In fact, the denominator of (\ref{bankruptcy_definition}) indicates the number of scenarios that still survive at time $t$. Initially at $t=0$, we set $BR_0=0$, $BN_0=0$, and $x_0^b=x_0$, and we choose a risk-free-growing wealth benchmark, that is, $x_t^b=\prod_{\tau=0}^{t-1}r_\tau x_0$ for $t\geq 1$. From Table \ref{utility_appropriate}, we can see a distinct reduction in the bankruptcy rates after introducing the smoothing property with some appropriate smoothing balances (such as $\gamma=1$ and $\gamma=10$ here). Moreover, adding a smoothing term also leads to a better worst case of the final wealth (in this example we obtain $0.3782$, $0.9853$, and $1.0131$ for $\gamma=0,1,10$, respectively): the conservative behavior under smoothing helps to avoid severe losses in case an adverse scenario occurs.
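The bankruptcy rate in (\ref{bankruptcy_definition}) can be computed directly from the wealth trajectories; a minimal sketch (Python, with made-up equally probable wealth paths rather than those of this example) is given below.
\begin{verbatim}
import numpy as np

def bankruptcy_rates(paths, benchmark):
    """Bankruptcy rate BR_t among scenarios surviving up to t-1 (equal
    probabilities assumed): a scenario goes bankrupt at the first t with
    x_t < x_t^b, and is then excluded from later denominators."""
    n_scen, horizon = paths.shape          # horizon = T + 1 columns (t = 0..T)
    alive = np.ones(n_scen, dtype=bool)
    rates = []
    for t in range(1, horizon):
        newly_bankrupt = alive & (paths[:, t] < benchmark[t])
        rates.append(newly_bankrupt.sum() / alive.sum())
        alive &= ~newly_bankrupt
    return rates

# made-up equally probable wealth paths (rows) for t = 0,...,3 with x_0 = 1
paths = np.array([[1.0, 1.20, 1.30, 1.45],
                  [1.0, 0.95, 1.10, 1.20],
                  [1.0, 1.10, 1.00, 1.25],
                  [1.0, 1.05, 1.15, 1.02]])
r = 1.04
benchmark = np.array([r ** t for t in range(4)])   # risk-free-growing benchmark
print(bankruptcy_rates(paths, benchmark))
\end{verbatim}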
\begin{table}[t]
\centering
\caption{Tabular Optimal Solutions $\hat{u}_t^{i,\infty}$ of $(\mathcal{US}(\gamma))$ in Example \ref{utility_eg}}
\resizebox{\columnwidth}{!}{
\begin{tabular}{ccccccccccc}
\toprule
\multirow{2}[2]{*}{$t$} & \multicolumn{3}{c}{$\gamma=0$} & \multicolumn{3}{c}{$\gamma=1$} & \multicolumn{3}{c}{$\gamma=10$} & \multirow{2}[3]{*}{Scenarios} \\
\cmidrule{2-10} & A1 & A2 & A3 & A1 & A2 & A3 & A1 & A2 & A3 & \\
\midrule
0 & 10.92 & 3.31 & 7.25 & 13.76 & 0.92 & 14.04 & 12.16 & 0.14 & 13.43 & \\
\midrule
\multirow{5}[2]{*}{1} & 8.65 & 3.33 & 4.97 & 2.79 & -0.85 & 3.52 & -0.55 & -0.09 & -0.44 & if ${P}_{0,1}$ occurs \\
& 13.65 & 2.85 & 10.46 & 1.92 & 0.10 & 1.74 & -0.36 & -0.02 & -0.33 & if ${P}_{0,2}$ occurs \\
& 9.86 & 3.25 & 6.23 & 0.88 & 0.20 & 0.64 & -0.65 & 0.04 & -0.67 & if ${P}_{0,3}$ occurs \\
& 7.65 & 3.31 & 4.09 & -0.74 & -0.16 & -0.55 & -1.75 & 0.06 & -1.75 & if ${P}_{0,4}$ occurs \\
& 12.47 & 2.98 & 9.12 & 0.88 & 0.18 & 0.67 & -0.75 & 0.04 & -0.76 & if ${P}_{0,5}$ occurs \\
\midrule
\multirow{25}[10]{*}{2} & 7.12 & 3.31 & 3.56 & 1.78 & -0.60 & 2.31 & -0.49 & -0.05 & -0.42 & if ${P}_{0,1},{P}_{1,1}$ occurs \\
& 10.64 & 3.06 & 7.14 & 3.57 & -1.37 & 4.81 & -0.74 & -0.07 & -0.64 & if ${P}_{0,1},{P}_{1,2}$ occurs \\
& 7.81 & 3.31 & 4.16 & 3.25 & -0.66 & 3.78 & -0.51 & -0.12 & -0.37 & if ${P}_{0,1},{P}_{1,3}$ occurs \\
& 6.48 & 3.26 & 3.06 & 0.72 & -0.69 & 1.39 & -0.50 & 0.00 & -0.48 & if ${P}_{0,1},{P}_{1,4}$ occurs \\
& 9.68 & 3.18 & 6.07 & 3.31 & -1.22 & 4.40 & -0.68 & -0.07 & -0.58 & if ${P}_{0,1},{P}_{1,5}$ occurs \\
\cmidrule{2-11} & 10.41 & 3.09 & 6.89 & 1.40 & 0.06 & 1.28 & -0.30 & -0.01 & -0.28 & if ${P}_{0,2},{P}_{1,1}$ occurs \\
& 17.32 & 2.04 & 15.15 & 2.30 & 0.08 & 2.14 & -0.44 & -0.01 & -0.41 & if ${P}_{0,2},{P}_{1,2}$ occurs \\
& 12.56 & 2.76 & 9.41 & 2.05 & 0.12 & 1.85 & -0.39 & -0.03 & -0.35 & if ${P}_{0,2},{P}_{1,3}$ occurs \\
& 9.02 & 3.23 & 5.39 & 1.09 & -0.06 & 1.11 & -0.23 & 0.00 & -0.22 & if ${P}_{0,2},{P}_{1,4}$ occurs \\
& 15.66 & 2.28 & 13.14 & 2.11 & 0.10 & 1.93 & -0.42 & -0.01 & -0.39 & if ${P}_{0,2},{P}_{1,5}$ occurs \\
\cmidrule{2-11} & 7.96 & 3.31 & 4.31 & 0.64 & 0.16 & 0.46 & -0.53 & 0.04 & -0.54 & if ${P}_{0,3},{P}_{1,1}$ occurs \\
& 12.25 & 2.81 & 9.03 & 1.04 & 0.25 & 0.75 & -0.72 & 0.05 & -0.75 & if ${P}_{0,3},{P}_{1,2}$ occurs \\
& 8.92 & 3.25 & 5.28 & 0.91 & 0.17 & 0.71 & -0.80 & 0.05 & -0.82 & if ${P}_{0,3},{P}_{1,3}$ occurs \\
& 7.13 & 3.29 & 3.61 & 0.60 & 0.09 & 0.48 & -0.27 & 0.02 & -0.29 & if ${P}_{0,3},{P}_{1,4}$ occurs \\
& 11.17 & 2.97 & 7.78 & 0.93 & 0.25 & 0.65 & -0.70 & 0.05 & -0.72 & if ${P}_{0,3},{P}_{1,5}$ occurs \\
\cmidrule{2-11} & 6.46 & 3.26 & 3.05 & -0.63 & -0.11 & -0.50 & -1.41 & 0.05 & -1.41 & if ${P}_{0,4},{P}_{1,1}$ occurs \\
& 9.34 & 3.21 & 5.72 & -0.93 & -0.18 & -0.72 & -1.97 & 0.08 & -1.98 & if ${P}_{0,4},{P}_{1,2}$ occurs \\
& 6.84 & 3.27 & 3.37 & -0.64 & -0.20 & -0.42 & -2.07 & 0.07 & -2.06 & if ${P}_{0,4},{P}_{1,3}$ occurs \\
& 5.80 & 3.15 & 2.69 & -0.47 & -0.10 & -0.35 & -0.83 & 0.04 & -0.84 & if ${P}_{0,4},{P}_{1,4}$ occurs \\
& 8.57 & 3.25 & 4.96 & -0.88 & -0.16 & -0.69 & -1.89 & 0.07 & -1.89 & if ${P}_{0,4},{P}_{1,5}$ occurs \\
\cmidrule{2-11} & 9.70 & 3.18 & 6.10 & 0.66 & 0.13 & 0.50 & -0.61 & 0.03 & -0.61 & if ${P}_{0,5},{P}_{1,1}$ occurs \\
& 15.66 & 2.28 & 13.14 & 1.00 & 0.23 & 0.73 & -0.85 & 0.04 & -0.86 & if ${P}_{0,5},{P}_{1,2}$ occurs \\
& 11.34 & 2.94 & 7.97 & 0.91 & 0.16 & 0.72 & -0.90 & 0.04 & -0.90 & if ${P}_{0,5},{P}_{1,3}$ occurs \\
& 8.45 & 3.25 & 4.85 & 0.61 & 0.07 & 0.52 & -0.34 & 0.02 & -0.35 & if ${P}_{0,5},{P}_{1,4}$ occurs \\
& 14.41 & 2.46 & 11.64 & 0.95 & 0.21 & 0.71 & -0.81 & 0.04 & -0.82 & if ${P}_{0,5},{P}_{1,5}$ occurs \\
\bottomrule
\end{tabular}
}
\label{utility_tabular}
\end{table}
\begin{table}[t]
\centering
\caption{Wealth Statistics and Bankruptcy Evaluations of Example \ref{utility_eg}}
\resizebox{\columnwidth}{!}{
\begin{tabular}{ccccccccccc}
\toprule
\multirow{2}[2]{*}{$t$} & \multirow{2}[2]{*}{$x_t^b$} & \multicolumn{3}{c}{$\mathbb{E}[x_t^{i,\infty}]$} & \multicolumn{3}{c}{\text{Var}$(x_t^{i,\infty})$} & \multicolumn{3}{c}{$BR_t$} \\
\cmidrule{3-11} & & $\gamma=0$ & $\gamma=1$ & $\gamma=10$ & $\gamma=0$ & $\gamma=1$ & $\gamma=10$ & $\gamma=0$ & $\gamma=1$ & $\gamma=10$\\
\midrule
1 & 1.04 & 1.41 & 1.45 & 1.39 & 0.27 & 0.28 & 0.21 & 0.4 & 0.2 & 0.2\\
2 & 1.08 & 1.82 & 1.54 & 1.43 & 0.49 & 0.28 & 0.22 & 0 & 0 & 0\\
3 & 1.13 & 2.23 & 1.63 & 1.46 & 0.66 & 0.28 & 0.22 & 0.03 & 0 & 0\\
\bottomrule
\end{tabular}
}
\label{utility_appropriate}
\end{table}
\label{utility_eg}
\end{exam}
\subsection{Smoothing under Mean-variance Formulation}\label{MV_subsection}
Let us now consider a conventional discrete-time mean-variance (MV) formulation given as follows,
\begin{IEEEeqnarray*}{ccl}
~(\mathcal{MV}(w))~~ & \max\limits_{u_t,\forall t} & ~\mathbb{E}(x_T) - w Var(x_T) \\
& {\rm s.t.} &~x_{t+1}=r_tx_t+{P}_t'{u}_t,~t= 0, 1, \ldots, T-1,
\end{IEEEeqnarray*}
where the parameter $w$, predetermined by the investor, explicitly reveals her trade-off between the expected terminal wealth and its variance. Note that $(\mathcal{MV}(w))$, and also other types of MV models, is nonseparable owing to the variance operator. From \cite{li2000optimal}, we know that $(\mathcal{MV}(w))$ can be embedded into a family of separable auxiliary problems that are solvable by DP and the solution of an auxiliary problem with a special value of the parameter in turn solves the primal problem. We list the analytical optimal feedback policy of $(\mathcal{MV}(w))$ below,
{\small
\begin{align}
u_{t}^*(x_{t};w)
={}&-K_tr_tx_{t}+\left(x_0\prod_{s=0}^{T-1}r_s+\frac{1}{2w\prod_{s=0}^{T-1}
(1-\mathbb{E}'[P_{s}]K_{s})}\right)
\left(\prod_{\tau=t+1}^{T-1}\frac{1}{r_\tau}\right)K_t,~t=0,1,\ldots,T-1,
\label{MV_policy}
\end{align}}
where $K_t=\mathbb{E}^{-1}\left[P_{t}P_{t}'\right]\mathbb{E}[P_{t}]$, and we adopt the convention $\prod_{\tau=T}^{T-1}(1/r_\tau)=1$ for consistency.
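For completeness, the feedback policy (\ref{MV_policy}) can be evaluated numerically once the moments $\mathbb{E}[P_t]$ and $\mathbb{E}[P_tP_t']$ are available. The Python sketch below simply transcribes the formula; the function and argument names are ours, and the moments are assumed to be supplied as numpy arrays.
\begin{verbatim}
import numpy as np

def mv_feedback(t, x_t, w, x0, EP, EPP, r):
    # EP[s] = E[P_s] (vector), EPP[s] = E[P_s P_s'] (matrix), r[s] = r_s.
    T = len(r)
    K = [np.linalg.solve(EPP[s], EP[s]) for s in range(T)]     # K_s
    prod_r = np.prod(r)
    denom = 2.0 * w * np.prod([1.0 - EP[s] @ K[s] for s in range(T)])
    disc = np.prod([1.0 / r[tau] for tau in range(t + 1, T)])  # empty product = 1
    return -K[t] * r[t] * x_t + (x0 * prod_r + 1.0 / denom) * disc * K[t]
\end{verbatim}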
As in the utility framework, the objective in $(\mathcal{MV}(w))$ considers only the \emph{final} wealth and thus may allow large fluctuations during the investment process. Therefore, in this subsection we also investigate a more general mean-variance formulation obtained by adding a wealth smoothing term,
\begin{IEEEeqnarray*}{ccl}
~(\mathcal{MVS}(w,\gamma))~~ & \max\limits_{u_t,\forall t} &~ \mathbb{E}(x_T) - w Var(x_T)
-\gamma \mathbb{E}\left[\sum\nolimits_{t=1}^{T}(x_t-\bar x)^2\right] \\
& {\rm s.t.} &~ x_{t+1}=r_tx_t+{P}_t'{u}_t,~t= 0, 1, \ldots, T-1,
\end{IEEEeqnarray*}
where $\bar x =1/T\sum_{t=1}^{T}x_t$ denotes the average wealth \emph{along the time horizon}, and $\gamma\geq 0$ reflects a preselected trade-off between the MV objective and the smoothing term. Note that PHA cannot be directly applied to $(\mathcal{MVS}(w,\gamma))$ since it is originally designed for solving the stochastic problem with only the \emph{risk-neutral} evaluation criterion (i.e., the expectation measure). We first rearrange the objective in $(\mathcal{MVS}(w,\gamma))$ as
{\small
\begin{align}
{}&\mathbb{E}(x_T) - w Var(x_T)-\gamma \mathbb{E}\left[\sum_{t=1}^{T}(x_t-\bar x)^2\right] \nonumber\\
={}&\mathbb{E}[x_T] - w (\mathbb{E}[x_T^2]-\mathbb{E}^2[x_T]) -\gamma \mathbb{E}\left[\sum_{t=1}^{T}x_t^2-\frac{1}{T}(\sum_{t=1}^{T}x_t)^2\right]\nonumber\\
={}&-\mathbb{E}\left[(\gamma-\frac{\gamma}{T}) \sum_{t=1}^{T-1}x_t^2
+(w+\gamma-\frac{\gamma}{T})x_T^2-\frac{\gamma}{T}\sum_{1\leq i\neq j\leq T}x_ix_j\right]
+w\mathbb{E}^2[x_T]+\mathbb{E}[x_T]\nonumber\\
={}&\tilde{U}(\mathbb{E}[x_1^2],\mathbb{E}[x_1x_2],\ldots,\mathbb{E}[x_1x_T],\mathbb{E}[x_2x_1],
\mathbb{E}[x_2^2],\ldots,\mathbb{E}[x_2x_T],
\ldots,\mathbb{E}[x_Tx_{1}],
\mathbb{E}[x_Tx_{2}],\ldots,\mathbb{E}[x_T^2],\mathbb{E}[x_T]).
\label{MVS_objective}
\end{align}}
Note that $\tilde{U}$ is a convex function of
$\mathbb{E}[x_ix_j],~1\leq i,j\leq T$, and $\mathbb{E}[x_T]$. By invoking the embedding scheme as in \cite{li2000optimal}, we consider the following auxiliary problem
\begin{IEEEeqnarray*}{ccl}
~(\mathcal{A}(w^c,\lambda))~~& \max\limits_{u_t,\forall t} & ~\mathbb{E}\left[-\sum_{1\leq i,j\leq T}w_{ij}x_ix_j+\lambda x_T\right] \\
& {\rm s.t.} & ~x_{t+1}=r_tx_t+{P}_t'{u}_t,~t= 0, 1, \ldots, T-1,
\end{IEEEeqnarray*}
where $\lambda\in\mathbb{R}$ and we assemble all the $w_{ij}$'s into
\begin{align*}
w^c
={}&(w_{11},w_{12},\ldots,w_{1T},w_{21},w_{22},\ldots,
w_{2T},\ldots,
w_{T1},w_{T2},\ldots,w_{TT})'\in\mathbb{R}^{T^2},
\end{align*}
with $w_{tt}=\gamma-\gamma/T$ for $t=1,\ldots,T-1$, $w_{TT}=w+\gamma-\gamma/T$ and $w_{ij}=-\gamma/T$ for $1\leq i,j\leq T$ with $i\neq j$. Let us further denote the solution set of $(\mathcal{MVS}(w,\gamma))$ by $\Pi(w,\gamma)$, and the solution set of $(\mathcal{A}(w^c,\lambda))$ by $\Pi_\mathcal{A}(w^c,\lambda)$. We also denote, for any policy $\mathbf{u}=(u_0',u_1',\ldots,u_{T-1}')'\in\mathbb{R}^{nT}$, the first-order derivative of $\tilde{U}$ w.r.t. $\mathbb{E}[x_T]$ by
\begin{eqnarray}
d(\mathbf{u})=\frac{d\tilde{U}}{d\mathbb{E}[x_T]}\bigg|_\mathbf{u}=1+2w\mathbb{E}[x_T]|_\mathbf{u}.
\label{U_to_EX}
\end{eqnarray}
\begin{lem}
For any $\mathbf{u}^*\in \Pi(w,\gamma)$, $\mathbf{u}^*\in \Pi_\mathcal{A}(w^c,d(\mathbf{u}^*))$.
\label{MV_thm1}
\end{lem}
\begin{proof}
As $\tilde{U}$ is convex w.r.t. $\mathbb{E}[x_ix_j]$, $1\leq i,j\leq T$, and $\mathbb{E}[x_T]$, the proof is similar to Theorem 1 in \cite{li2000optimal}. Thus, we omit the details here.
\end{proof}
The interpretation of Lemma \ref{MV_thm1} is similar to \cite{li2000optimal}, that is, in order to obtain the primal solution, the problem $(\mathcal{MVS}(w,\gamma))$ can be embedded into the auxiliary problem $(\mathcal{A}(w^c,\lambda))$. Moreover, the auxiliary problem in \cite{li2000optimal} is a special case of ours: only $\mathbb{E}[x^2_{T}]$ exists in the auxiliary problem of \cite{li2000optimal}, while all cross terms, $\mathbb{E}[x_ix_j],~1\leq i,j\leq T$, appear in our setting. What significantly distinguishes our auxiliary problem from those in \cite{li2000optimal} and \cite{Discrete_Bankruptcy} is that $(\mathcal{A}(w^c,\lambda))$ in our case cannot be solved by DP anymore, since the smoothing introduces cross terms of wealth levels among different time stages. Before we demonstrate that $(\mathcal{A}(w^c,\lambda))$ can be solved by PHA, we need to prove first that $(\mathcal{A}(w^c,\lambda))$ satisfies the conditions in Theorem \ref{PHA_thm}. To see this, let us rewrite the auxiliary problem $(\mathcal{A}(w^c,\lambda))$ as the following equivalent compact form,
\begin{IEEEeqnarray*}{ccl}
~(\mathcal{A}(\mathbf{W},\lambda))~~ & \max\limits_{\mathbf{u}} & ~\mathbb{E}\left[-\mathbf{x}'\mathbf{W}\mathbf{x}+\lambda \mathbf{x}'\delta\right] \\
&{\rm s.t.} & ~\mathbf{x}=\mathbf{P}\mathbf{u}+x_0\mathbf{r},
\end{IEEEeqnarray*}
where $\mathbf{x}=(x_1,\ldots,x_T)'\in\mathbb{R}^T$, $\delta=(0,\ldots,0,1)'\in\mathbb{R}^T$, and
\begin{align*}
\mathbf{W}
&=(w_{ij})_{1\leq i,j\leq T}
=\left(
\begin{array}{cccc}
\gamma-\frac{\gamma}{T} & -\frac{\gamma}{T} & \ldots & -\frac{\gamma}{T}\\
-\frac{\gamma}{T} & \gamma-\frac{\gamma}{T} & \ldots & -\frac{\gamma}{T}\\
\vdots & \vdots & \ddots & \vdots \\
-\frac{\gamma}{T} & -\frac{\gamma}{T} & \ldots & w+\gamma-\frac{\gamma}{T}
\end{array}
\right)\in\mathbb{R}^{T\times T},\\
\mathbf{P}&=\left(
\begin{array}{cccc}
{P}_0' & {0} & \ldots & {0}\\
r_1{P}_0' & {P}_1' & \ldots & {0}\\
r_2r_1{P}_0' & r_2{P}_1' & \ldots & {0}\\
\vdots & \vdots & \ddots & \vdots \\
(\prod\limits_{t=1}^{T-1}r_t){P}_0' & (\prod\limits_{t=2}^{T-1}r_t){P}_1' &
\ldots & {P}_{T-1}'
\end{array}
\right)\in\mathbb{R}^{T\times nT},\\
\mathbf{r}&=\left(r_0,r_1r_0,\ldots,\prod\nolimits_{t=0}^{T-1}r_t\right)'\in\mathbb{R}^{T}.
\end{align*}
Given $w\geq 0$ and $\gamma\geq 0$, we have $\mathbf{x}'\mathbf{W}\mathbf{x}=w x_T^2+\gamma \sum_{t=1}^{T}(x_t-\bar x)^2\geq 0$ for any $\mathbf{x}$, thus the matrix $\mathbf{W}$ is positive semidefinite. Together with the fact that $\mathbf{x}$ is linear in $\mathbf{u}$, we conclude that each scenario subproblem of $(\mathcal{A}(\mathbf{W},\lambda))$ (with a certain realization of the matrix $\mathbf{P}$) is concave w.r.t. $\mathbf{u}$. Thus, $(\mathcal{A}(\mathbf{W},\lambda))$ satisfies the conditions in Theorem \ref{PHA_thm}. Notice that the structure of $(\mathcal{A}(\mathbf{W},\lambda))$ falls into the framework of the online QP discussed in Section \ref{QP_section}, except that the underlying system dynamics are slightly different. Before presenting the solution algorithm, let us consider the condition under which the solution of $(\mathcal{A}(\mathbf{W},\lambda))$ also constitutes a solution to $(\mathcal{MVS}(w,\gamma))$.
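To make the compact form concrete, the quantities $\mathbf{W}$, $\mathbf{P}$ and $\mathbf{r}$ can be assembled as follows. This is only a sketch with our own variable names; \texttt{P\_scen[t]} is assumed to hold the realized vector $P_t$ along one scenario.
\begin{verbatim}
import numpy as np

def build_W(T, w, gamma):
    W = -(gamma / T) * np.ones((T, T))
    W += gamma * np.eye(T)          # diagonal entries gamma - gamma/T
    W[-1, -1] += w                  # last entry w + gamma - gamma/T
    return W

def build_P_r(P_scen, r, x0):
    # x = P u + x0 r for the stacked control u = (u_0', ..., u_{T-1}')'
    T, n = len(P_scen), len(P_scen[0])
    P = np.zeros((T, n * T))
    for t in range(T):              # row t corresponds to x_{t+1}
        for s in range(t + 1):
            factor = np.prod(r[s + 1:t + 1])   # r_{s+1} ... r_t
            P[t, s * n:(s + 1) * n] = factor * np.asarray(P_scen[s])
    return P, x0 * np.cumprod(r)
\end{verbatim}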
\begin{thm}
Suppose $\mathbf{u}^*\in \Pi_\mathcal{A}(w^c,\lambda^*)$. A necessary condition for $\mathbf{u}^*\in \Pi(w,\gamma)$ is $\lambda^*=1+2w\mathbb{E}[x_T]|_{\mathbf{u}^*}$.
\label{MV_thm2}
\end{thm}
\begin{proof}
The solution set $\Pi_\mathcal{A}(w^c,\lambda)$ can be characterized by $\lambda$ when we fix $w^c$. Note that from Lemma \ref{MV_thm1} we have $\Pi(w,\gamma)\subseteq\cup_\lambda \Pi_\mathcal{A}(w^c,\lambda)$. Therefore, solving $(\mathcal{MVS}(w,\gamma))$ is equivalent to considering the following,
\begin{align*}
&\max_\lambda~\tilde{U}(\mathbb{E}[x_i(w^c,\lambda)x_j(w^c,\lambda)],\forall i,j,\mathbb{E}[x_T(w^c,\lambda)])\nonumber\\
={}&\max_\lambda~-\left(\sum\nolimits_{1\leq i,j\leq T}w_{ij}\mathbb{E}[x_i(w^c,\lambda)x_j(w^c,\lambda)]\right)
+w\mathbb{E}^2[x_T(w^c,\lambda)]
+\mathbb{E}[x_T(w^c,\lambda)].
\end{align*}
The first-order necessary optimality condition for $\lambda^*$ is
\begin{align}
&-\left(\sum\nolimits_{1\leq i,j\leq T}w_{ij}\frac{d\mathbb{E}[x_i(w^c,\lambda)x_j(w^c,\lambda)]}{d\lambda}\bigg|_{\lambda^*}\right)
+(1+2w\mathbb{E}[x_T]|_{\mathbf{u}^*})\frac{d\mathbb{E}[x_T(w^c,\lambda)]}{d\lambda}\bigg|_{\lambda^*}=0.
\label{MV_condition1}
\end{align}
On the other hand, as $\mathbf{u}^*\in \Pi_\mathcal{A}(w^c,\lambda^*)$, we have the following according to \cite{reid1971noninferior},
\begin{align}
&-\left(\sum\nolimits_{1\leq i,j\leq T}w_{ij}\frac{d\mathbb{E}[x_i(w^c,\lambda)x_j(w^c,\lambda)]}{d\lambda}\bigg|_{\lambda^*}\right)
+\lambda^*\frac{d\mathbb{E}[x_T(w^c,\lambda)]}{d\lambda}\bigg|_{\lambda^*}=0.
\label{MV_condition2}
\end{align}
Combining (\ref{MV_condition1}) and (\ref{MV_condition2}), the vector $(-(w^c)',\lambda^*)'$ should be proportional to the vector $(-(w^c)',1+2w\mathbb{E}[x_T]|_{\mathbf{u}^*})'$, thus we must have $\lambda^*=1+2w\mathbb{E}[x_T]|_{\mathbf{u}^*}$.
\end{proof}
We now apply our scenario-decomposition solution method to solve $(\mathcal{A}(\mathbf{W},\lambda))$ for a given $\lambda$. We first deal with individual scenario subproblems in their equivalent minimization forms,
\begin{IEEEeqnarray*}{ccl}
~(\mathcal{A}^i(\mathbf{W},\lambda))~~ & \min\limits_{\mathbf{u}} & ~\mathbf{x}'\mathbf{W}\mathbf{x}-\lambda \mathbf{x}'\delta \\
& {\rm s.t.} & ~\mathbf{x}=\mathbf{P}^i\mathbf{u}+x_0\mathbf{r},
\end{IEEEeqnarray*}
where $\mathbf{P}^i$ is the realization of $\mathbf{P}$ under the scenario $i\in\mathcal{I}$.
Although $(\mathbf{P}^i)'\mathbf{W}\mathbf{P}^i$ could be singular for some $i$, we can always rely on any convex optimization algorithm to find the global optima of $(\mathcal{A}^i(\mathbf{W},\lambda))$, which are denoted by $\mathbf{u}^{i,0}$. We next consider its augmented Lagrangian at iteration $\nu$,
\begin{IEEEeqnarray*}{ccl}
~(\mathcal{A}^{i,\nu}(\mathbf{W},\lambda))~~ & \min\limits_{\mathbf{u}} & ~\mathbf{x}'\mathbf{W}\mathbf{x}-\lambda \mathbf{x}'\delta + \mathbf{u}'\mathbf{w}^{i,\nu}
+\frac{1}{2}\alpha|\mathbf{u}-\hat{\mathbf{u}}^{i,\nu}|_2^2 \\
& {\rm s.t.} & ~\mathbf{x}=\mathbf{P}^i\mathbf{u}+x_0\mathbf{r},
\end{IEEEeqnarray*}
and the optimal solution of $(\mathcal{A}^{i,\nu}(\mathbf{W},\lambda))$, denoted by $\mathbf{u}^{i,\nu+1}$, can always be analytically obtained as
\begin{align}
\mathbf{u}^{i,\nu+1}={}&-[2(\mathbf{P}^i)'\mathbf{W}\mathbf{P}^i+\alpha I_{nT}]^{-1}
[2x_0(\mathbf{P}^i)'\mathbf{W}\mathbf{r}-\lambda(\mathbf{P}^i)'\delta +\mathbf{w}^{i,\nu}
-\alpha\hat{\mathbf{u}}^{i,\nu}],
\end{align}
for some given $\hat{\mathbf{u}}^{i,\nu}$ and $\mathbf{w}^{i,\nu}$. Then the new implementable policy $\hat{\mathbf{u}}^{i,\nu+1}$ is calculated according to (\ref{conditional_expectation}) or by following the projection procedure from (\ref{nu+1_sol}) to (\ref{nu+1_project2}). The iteration process continues until the stopping condition in (\ref{Q_stopping criterion}) is satisfied. We finally obtain the optimal solution of $(\mathcal{A}(\mathbf{W},\lambda))$ for a given $\lambda\in\mathbb{R}$. We now need to design a method to find the optimal $\lambda^*$. When the optimal solution of $(\mathcal{A}(\mathbf{W},\lambda))$ can be expressed as a function of $\lambda$, we can substitute it back into $(\mathcal{MVS}(w,\gamma))$ and find the optimal $\lambda^*$ such that $\tilde{U}$ is maximized. In the current situation, however, as PHA does not yield an analytical solution, we need to invoke a heuristic method to locate $\lambda^*$.
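As an illustration, one scenario update of PHA can be coded directly from the closed-form expression above; the dual update and the aggregation onto implementable policies then follow (\ref{conditional_expectation}) and are omitted here. The sketch uses our own argument names and assumes the matrices of the compact form are available as numpy arrays.
\begin{verbatim}
import numpy as np

def pha_scenario_step(P_i, W, rvec, x0, lam, w_i, u_hat_i, alpha):
    # minimizer of the augmented Lagrangian (A^{i,nu}(W, lambda))
    nT = P_i.shape[1]
    delta = np.zeros(P_i.shape[0])
    delta[-1] = 1.0                                  # selects x_T
    A = 2.0 * P_i.T @ W @ P_i + alpha * np.eye(nT)
    b = (2.0 * x0 * P_i.T @ W @ rvec - lam * P_i.T @ delta
         + w_i - alpha * u_hat_i)
    return -np.linalg.solve(A, b)                    # u^{i,nu+1}
\end{verbatim}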
Theorem \ref{MV_thm2} reveals the connection between $\lambda^*$ and $\mathbf{u}^*$ as the optimal solution to \emph{both} $(\mathcal{A}(\mathbf{W},\lambda^*))$ and $(\mathcal{MVS}(w,\gamma))$. In fact, we can rely on this relationship to narrow down the possible range of $\lambda^*$. More precisely, the analysis of the previous subsection
indicates that there is a sacrifice in the expected final wealth when we consider the smoothing. Then, according to Theorem $\ref{MV_thm2}$, we can set the upper bound of $\lambda^*$ as
\begin{align}
\lambda_{\text{max}}=1+2w\mathbb{E}[x_T]|_{\mathbf{u}^{\text{ns}}},
\label{lambda_max}
\end{align}
where $\mathbf{u}^{\text{ns}}=\{u_t^{\text{ns}}\}_t$ denotes the optimal policy of the classical dynamic MV model with no smoothing term given in (\ref{MV_policy}). On the other hand, we could anticipate that the expected terminal wealth, under an optimal policy in the dynamic MV model with a smoothing term, should be larger than the initial wealth. Thus, we set the lower bound by
\begin{eqnarray}
\lambda_{\text{min}}=1+2w x_0.
\label{lambda_min}
\end{eqnarray}
In summary, we claim that $\lambda^*\in[\lambda_{\text{min}},\lambda_{\text{max}}]$. Within this specified range, we can use a line search method to efficiently find the optimal value of $\lambda$ and hence the optimal policy of the primal problem $(\mathcal{MVS}(w,\gamma))$. We summarize our search procedure in detail in Algorithm \ref{lambda_algo}. In our extensive experimental studies, we consistently find that the value of $\tilde{U}$ is concave w.r.t. $\lambda$; this phenomenon was also found analytically in \cite{li2000optimal}. Therefore, we add one more step to the algorithm that fits a quadratic function, in order to enhance the accuracy of the ordinary line search. We close this subsection with a case study of a dynamic MV problem with a smoothing term.
\begin{algorithm}[!t]
\caption{Find $\lambda^*$ of $(\mathcal{A}(\mathbf{W},\lambda))$ and $\mathbf{u}^*$ of
$(\mathcal{MVS}(w,\gamma))$}
\label{lambda_algo}
\textbf{Input}: The parameters $w$ and $\gamma$, the distribution $D_{e_t}$ and the risk-free rate for $t=0,1,\ldots,T-1$, the initial wealth $x_0$, the penalty $\alpha$, the tolerance $\epsilon$, and the step size $\theta$.\\
\textbf{Output}: $\lambda^*$ of $(\mathcal{A}(\mathbf{W},\lambda))$ and
$\mathbf{u}^*$ of $(\mathcal{MVS}(w,\gamma))$.\\
\vspace{-0.4cm}
\begin{algorithmic}[1]
\STATE Decide $\lambda_{\text{max}}$ by (\ref{lambda_max}) and $\lambda_{\text{min}}$ by (\ref{lambda_min}). Let $\kappa=0$.
\STATE Set $\lambda^\kappa=\lambda_{\text{min}}+\kappa\theta$, and solve $(\mathcal{A}(\mathbf{W},\lambda^\kappa))$, and denote the optimal solution by $\mathbf{u}_\mathcal{A}(\lambda^\kappa)$.
\STATE Compute $\tilde{U}|_{\mathbf{u}_\mathcal{A}(\lambda^\kappa)}$ of (\ref{MVS_objective}) and denote its value by $\tilde{U}(\lambda^\kappa)$.
\STATE If $\lambda^\kappa\geq\lambda_{\text{max}}$, then stop; else, $\kappa~\leftarrow~\kappa+1$, and go back to Step 2.
\STATE Fit the dataset $\{\lambda^\kappa,\tilde{U}(\lambda^\kappa)\}_{\kappa}$ into a quadratic function (a downward parabola in $\mathbb{R}^2$) and find its optimal solution denoted by $\lambda^{\text{fit}}$.
\STATE Finally, $\lambda^*\in\argmax \{\tilde{U}(\lambda):
\lambda\in\{\lambda^\kappa,\forall\kappa\}\cup\{\lambda^{\text{fit}}\}\}$ and hence $\mathbf{u}^*=\mathbf{u}_\mathcal{A}(\lambda^*)$.
\end{algorithmic}
\end{algorithm}
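A straightforward prototype of Algorithm \ref{lambda_algo} is sketched below in Python. The PHA solver for $(\mathcal{A}(\mathbf{W},\lambda))$ and the evaluation of $\tilde{U}$ are treated as black boxes supplied by the caller, and the quadratic fit is an ordinary least-squares parabola; these are our own implementation choices rather than prescriptions of the algorithm.
\begin{verbatim}
import numpy as np

def find_lambda_star(solve_A, eval_U, lam_min, lam_max, step):
    # solve_A(lam): optimal policy of (A(W, lam)) obtained by PHA
    # eval_U(u):    value of tilde{U} in (MVS_objective) under policy u
    lams = list(np.arange(lam_min, lam_max + step, step))
    pols = [solve_A(lam) for lam in lams]
    vals = [eval_U(u) for u in pols]
    a, b, c = np.polyfit(lams, vals, 2)   # fit a downward parabola
    if a < 0:
        lam_fit = -b / (2.0 * a)
        lams.append(lam_fit)
        pols.append(solve_A(lam_fit))
        vals.append(eval_U(pols[-1]))
    k = int(np.argmax(vals))
    return lams[k], pols[k]               # lambda^*, u^*
\end{verbatim}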
\begin{exam}\label{MV_eg}
Let us consider a market with the following expectation vector and covariance matrix of the random total return $e_t\in\mathbb{R}^3$, which has been investigated in \cite{li2000optimal},
\begin{align*}
\mathbb{E}[{e}_t]=\left(
\begin{array}{c}
1.162\\
1.246\\
1.228
\end{array}
\right),~
cov({e}_t)=\left[
\begin{array}{ccc}
0.0146 & 0.0187 & 0.0145 \\
0.0187 & 0.0854 & 0.0104 \\
0.0145 & 0.0104 & 0.0289
\end{array}
\right].
\end{align*}
We randomly generate discrete distributions in this example to match the above two moments exactly (while at the same time ruling out arbitrage opportunities, a conventional assumption in the finance literature), so that we can easily assess the pros and cons of adding a smoothing term when comparing with the classical results in \cite{li2000optimal}. More precisely, we begin with an initial wealth $x_0=10$, an investment horizon $T=3$, and one risk-free bond with the total rate $r_t=1.04$. Suppose that $e_t$ follows different uniform distributions at the different times $t=0,1,2$ (independent across time stages), which are given below,
\begin{align*}
e_0\in&
\left\{
\left(
\begin{array}{c}
1.2722 \\
1.4294 \\
1.3126
\end{array}
\right),
\left(
\begin{array}{c}
1.3352 \\
1.4018 \\
1.2519
\end{array}
\right),
\left(
\begin{array}{c}
1.0996 \\
1.3859 \\
1.2868
\end{array}
\right),
\left(
\begin{array}{c}
0.9448 \\
0.6111 \\
0.8722
\end{array}
\right),
\left(
\begin{array}{c}
1.1904 \\
0.8172 \\
1.4877
\end{array}
\right),\right.\\
&\left.\left(
\begin{array}{c}
1.0606 \\
1.1211 \\
1.2403
\end{array}
\right),
\left(
\begin{array}{c}
1.0363 \\
1.4716 \\
1.0339
\end{array}
\right),
\left(
\begin{array}{c}
1.3191 \\
1.4242 \\
1.4224
\end{array}
\right),
\left(
\begin{array}{c}
1.1968 \\
1.5481 \\
1.1376
\end{array}
\right),
\left(
\begin{array}{c}
1.1649\\
1.2495\\
1.2346
\end{array}
\right)
\right\},~|D_{e_0}|=10,
\end{align*}
\begin{align*}
e_1\in&\left\{
\left(
\begin{array}{c}
1.3056 \\
1.2997 \\
1.4462
\end{array}
\right),
\left(
\begin{array}{c}
1.1498 \\
1.4673 \\
1.0048
\end{array}
\right),
\left(
\begin{array}{c}
1.0833 \\
0.9035 \\
1.2252
\end{array}
\right),
\left(
\begin{array}{c}
0.9665 \\
0.7577 \\
0.9926
\end{array}
\right),
\left(
\begin{array}{c}
1.2876 \\
1.5520 \\
1.2225
\end{array}
\right),\right.\\
&\left.
\left(
\begin{array}{c}
1.2733 \\
1.1900 \\
1.4485
\end{array}
\right),
\left(
\begin{array}{c}
1.0679\\
1.5517\\
1.2561
\end{array}
\right)
\right\},~|D_{e_1}|=7,
\end{align*}
\begin{align*}
e_2\in&\left\{
\left(
\begin{array}{c}
1.0724 \\
0.7472 \\
1.1059
\end{array}
\right),
\left(
\begin{array}{c}
1.0976 \\
1.0795 \\
1.3050
\end{array}
\right),
\left(
\begin{array}{c}
1.3114 \\
1.5110 \\
1.2140
\end{array}
\right),
\left(
\begin{array}{c}
1.3031 \\
1.4376 \\
1.5043
\end{array}
\right),
\left(
\begin{array}{c}
1.0255\\
1.4547\\
1.0109
\end{array}
\right)
\right\},~|D_{e_2}|=5.
\end{align*}
Therefore, we have $|\mathcal{I}|=350$ scenarios. The corresponding scenario tree and the scenario partitions and bundles can also be easily constructed, so we omit the details here due to space limitations. We then solve $(\mathcal{MVS}(w,\gamma))$ by the procedure introduced in this subsection for $\gamma=1$ with different $w=0.5,1,5$, respectively, obtain the optimal allocation, denoted by $\{\hat{\mathbf{u}}^{i,\infty}(w,\gamma)\}_i$, and calculate the wealth trajectory $\mathbf{x}^{i,\infty}(w,\gamma)$ under $\hat{\mathbf{u}}^{i,\infty}(w,\gamma)$ for all possible scenarios. We also obtain the optimal policy ${\mathbf{u}}^{*}(x;w)$ of $(\mathcal{MV}(w))$ for the same $w$'s based on (\ref{MV_policy}) and calculate the corresponding wealth trajectories $\mathbf{x}^{i,DP}(w)$ under all circumstances starting from $x_0$.
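The scenario set itself is simply the Cartesian product of the three supports, with equal probabilities since the stage-wise distributions are uniform and independent. A short Python sketch (the names are ours) is:
\begin{verbatim}
from itertools import product

def enumerate_scenarios(supports):
    # supports[t] lists the possible realizations of e_t
    paths = list(product(*supports))
    p = 1.0
    for S in supports:
        p /= len(S)
    return paths, [p] * len(paths)

# with |D_e0| = 10, |D_e1| = 7 and |D_e2| = 5 this gives 10*7*5 = 350 scenarios
\end{verbatim}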
The statistical results are listed in Table \ref{MV_table} (rounded to four decimals where needed). It is obvious that, in general, taking smoothing into account helps investors better manage their intermediate wealth fluctuations, as can be seen from the much lower variances under $(\mathcal{MVS}(w,\gamma))$ across all $w$'s considered, compared to those under $(\mathcal{MV}(w))$. Similar to the utility framework, there is also a sacrifice in the expected terminal wealth in $(\mathcal{MVS}(w,\gamma))$ at all levels of $w$. What differs from the utility case is that bankruptcy almost disappears in the current MV example when we set the bankruptcy boundary at $x_t^b=0$ for all $t$ (except for a very small positive bankruptcy rate of 0.0143 at $t=2$ under $\mathcal{MV}(0.5)$). It thus seems plausible that smoothing has little effect on controlling the bankruptcy rate in MV models. However, we claim that smoothing is still the better choice if the investor really cares about the worst case. To see this, suppose the investor does not consider smoothing. Although she could still achieve relatively good control of extreme situations by increasing $w$ (i.e., emphasizing the variance part more) in $\mathcal{MV}(w)$ (as evidenced by the worst-case column from $\mathcal{MV}(0.5)$ to $\mathcal{MV}(5)$ in Table \ref{MV_table}), it costs her nearly half of the expected terminal wealth (from $25.1709$ to $12.6409$ in our experiments). On the other hand, smoothing not only yields better worst cases at every level of $w$, but also entails only a rather mild loss in the expected final wealth (from $13.3638$ to $11.8048$ under the $\mathcal{MVS}(w,\gamma)$'s).
\begin{table}[t]
\centering
\caption{Statistics of Wealth Levels under $\mathcal{MVS}(w,\gamma)$ and $\mathcal{MV}(w)$ with $w=0.5,1,5$ and $\gamma=1$}
\resizebox{\columnwidth}{!}{
\begin{tabular}{cccccccccc}
\toprule
\multirow{2}[2]{*}{$t$} & \multicolumn{4}{c}{$\mathcal{MVS}(0.5,1)$} & & \multicolumn{4}{c}{$\mathcal{MV}(0.5)$} \\
\cmidrule{2-5}\cmidrule{7-10} & $\mathbb{E}[x_t^{i,\infty}]$ & Var$(x_t^{i,\infty})$ & $BR_t$ & Worst case & & $\mathbb{E}[x_t^{i,DP}]$& Var$(x_t^{i,DP})$ & $BR_t$ & Worst case \\
\midrule
1 & 12.3502 & 2.8302 & 0 & 7.8774 && 18.5926 & 45.9106 & 0 & 1.0500 \\
2 & 12.8505 & 2.0145 & 0 & 7.5099 && 22.7971 & 28.3624 & 0.0143 & -6.3719 \\
3 & 13.3638 & 1.3668 & 0 & 7.4497 && 25.1709 & 13.9223 & 0 & -7.4081 \\
\midrule
\midrule
\multirow{2}[2]{*}{$t$} & \multicolumn{4}{c}{$\mathcal{MVS}(1,1)$} & & \multicolumn{4}{c}{$\mathcal{MV}(1)$} \\
\cmidrule{2-5}\cmidrule{7-10} & $\mathbb{E}[x_t^{i,\infty}]$ & Var$(x_t^{i,\infty})$ & $BR_t$ & Worst case & & $\mathbb{E}[x_t^{i,DP}]$& Var$(x_t^{i,DP})$ & $BR_t$ & Worst case \\
\midrule
1 & 11.6889 & 1.2014 & 0 & 8.7909 && 14.4963 & 11.4776 & 0 & 5.7250 \\
2 & 12.1788 & 0.7319 & 0 & 8.5394 && 16.8066 & 7.0906 & 0 & 2.2220 \\
3 & 12.6825 & 0.4080 & 0 & 8.6301 && 18.2098 & 3.4806 & 0 & 1.9203 \\
\midrule
\midrule
\multirow{2}[2]{*}{$t$} & \multicolumn{4}{c}{$\mathcal{MVS}(5,1)$} & & \multicolumn{4}{c}{$\mathcal{MV}(5)$} \\
\cmidrule{2-5}\cmidrule{7-10} & $\mathbb{E}[x_t^{i,\infty}]$ & Var$(x_t^{i,\infty})$ & $BR_t$ & Worst case & & $\mathbb{E}[x_t^{i,DP}]$& Var$(x_t^{i,DP})$ & $BR_t$ & Worst case \\
\midrule
1 & 10.9083 & 0.1788 & 0 & 9.8317 && 11.2193 & 0.4591 & 0 & 9.4650 \\
2 & 11.3420 & 0.0823 & 0 & 9.9199 && 12.0141 & 0.2836 & 0 & 9.0972 \\
3 & 11.8048 & 0.0392 & 0 & 10.3300 && 12.6409 & 0.1392 & 0 & 9.3830 \\
\bottomrule
\end{tabular}}
\label{MV_table}
\end{table}
\end{exam}
\section{Conclusion}
Stochastic control problems can in general be classified into two categories: separable and nonseparable. The former class can be solved by dynamic programming (DP), at least theoretically. For the latter, however, no general solution framework has been developed so far in the literature. Recognizing the applicability of the progressive hedging algorithm (PHA) in dealing with nonseparable stochastic control problems, we develop in this paper a scenario-decomposition solution framework to fill this gap. To the best of our knowledge, this is the first attempt in the literature to solve nonseparable stochastic control problems under a general framework. We believe that this new development will greatly extend the reach of stochastic control. Our results on online quadratic programming and on dynamic portfolio selection with smoothing properties clearly demonstrate the applicability of the scenario-decomposition approach when the time-decomposition methodology inherent in DP fails. We would like to point out one future research direction: while the curse of dimensionality prevents DP from solving relatively large-scale problems, it also negatively affects the performance of PHA, especially owing to the underlying scenario-tree structure of the model.
\newpage
\section{Introduction}
The goal of this paper is to show how to construct a family of groups from suitable higher rank graphs.
By a theorem of Matui \cite[Theorem 3.10]{Matui2015}, these groups are complete invariants of the usual groupoids associated with such higher rank graphs.
To understand a group properly usually requires the group to act on a suitable geometric structure.
In the case of the classical groups, the groups arise as groups of units of matrix rings
and therefore act on geometric structures constructed from vector spaces.
In this paper, we generalize \cite{LV2019b}
and show how to construct a group from a suitable higher rank graph.
Our approach is closely related to the one adopted in \cite{DM}.
As part of this construction, we show that the group arises as a group of units of a Boolean inverse monoid.
Such inverse monoids have ring-theoretic characteristics, without themselves being rings \cite{Wehrung}.
In addition, their sets of idempotents form Boolean algebras on which the group acts.
Thus showing that a group is a group of units of a Boolean inverse monoid brings with it geometric information.
We now state the main theorem we prove in this paper.
All undefined terms will be defined in due course.\\
\noindent
{\bf Main Theorem. }{\em Let $C$ be a row-finite higher rank graph having a finite number of identities and no sources.
If $C$ is aperiodic and cofinal then we may associate with $C$ a countable, simple Boolean inverse $\wedge$-monoid $\mathsf{B}(C)$.
There are two possibilities:
\begin{enumerate}
\item The Boolean inverse monoid $\mathsf{B}(C)$ is finite and isomorphic to a finite symmetric inverse monoid.
Its group of units is a finite symmetric group.
\item The Boolean inverse monoid $\mathsf{B}(C)$ is countably infinite.
Its group of units is then isomorphic
to a full subgroup of the group of self-homeomorphisms of the Cantor space
which acts minimally and in which each element has clopen support.
\end{enumerate}
}
\begin{remark}{\em In the case where $\mathsf{B}(C)$ is countably infinite,
we are interested in the situation where its commutator subgroup is simple.
We conjecture that if the higher rank graph satisfies, in addition, the condition of \cite[Proposition~4.9]{KP},
then the commutator subgroup is simple.}
\end{remark}
Under non-commutative Stone duality, our Boolean inverse monoids are related to Hausdorff \'etale
topological groupoids which are effective, minimal and have a space of identities homeomorphic to
the Cantor space. By \cite[Theorem~3.10]{Matui2015}, our group is therefore a complete invariant
for the \'etale groupoid. This suggests that both the homology of the groupoid, and the $K$-groups
of its $C^{\ast}$-algebra should be interesting group-theoretically. This theme is developed in
Sections~7 and 8. Section 7 develops the invariants of the groups depending on the 1-skeleton of
the $k$-graph. Section 8 deals with explicit constructions of infinite families of pairwise
non-isomorphic $k$-graphs for every $k \geq 2$. Using these invariants, we show
that the corresponding groups are non-isomorphic as well. In particular, they are not isomorphic
to the known examples of groups $nV$ for $n\geq 2$.
We begin this paper by considering more general categories than the higher rank graphs, such as those in \cite{JS2014, JS2018}.
\begin{center}
{\bf Notation}
\end{center}
\begin{itemize}
\item $I(X,\tau)$ the inverse semigroup of all partial homeomorphisms between the open subsets of the topological space $(X,\tau)$.
When $X$ is endowed with the discrete topology, we simply denote this inverse semigroup by $I(X)$, the symmetric inverse monoid on the set $X$. Section~2.1.
\item $S^{e}$ the inverse semigroup of all elements $s \in S$ where both $s^{-1}s$ and $ss^{-1}$ are essential idempotents.
Section~2.1.
\item $\mathsf{RI}(C)$ the inverse semigroup of all bijective morphisms between the right ideals of the category $C$. Section~2.2.
\item $\mathsf{R}(C)$ the inverse semigroup of all bijective morphisms between the finitely generated right ideals of the finitely aligned category $C$. Section~2.2.
\item $\mathscr{G}(C) = \mathsf{R}(C)^{e}/\sigma$ the group constructed from the category $C$ which is finitely aligned and has only a finite number of identities. Section~2.2.
\item $\Sigma (C)$ the inverse hull of the cancellative category $C$. Section~2.3.
\item $\mathsf{P}(C)$ the inverse monoid of all bijective morphisms between finitely generated right ideals generated by codes when $C$ is a strongly finitely aligned conical cancellative category with a finite number of identities. Section~2.4.
\item $C_{\mathbf{m}}$ the set of all elements of the higher rank $k$-graph with degree equal to $\mathbf{m}$ where $\mathbf{m} \in \mathbb{N}^{k}$. Section~3.
\item If $X$ is a subset of a distributive inverse semigroup, then $X^{\vee}$ denotes the set of all binary joins of compatible elements of
$X$. Beginning of Section~4.
\item $a \leq_{e} b$ the element $a$ is essential in the element $b$. Section~4.1.
\item $\mathsf{B}(C) = \mathsf{R}(C)/{\equiv}$, a Boolean inverse monoid with group of units isomorphic to $\mathscr{G}(C)$
when $C$ is a strongly finitely aligned higher rank graph with a finite number of identities with no sources and is row finite. Section~4.2.
\item $C^{\infty}$ the set of all $k$-tilings of the $k$-graph $C$. Section~6.
\item $\mathcal{G}(C)$ the groupoid associated with the $k$-graph $C$. Section~6.
\end{itemize}
\section{Constructing groups from suitable categories}
In this section, we shall show how to construct a group from a category under certain assumptions on the category.
Later in this paper, we shall specialize the results of this section to those categories that arise as higher rank graphs.
\subsection{Background on inverse semigroups}
We shall construct our groups using partial bijections and so our work is rooted in inverse semigroup theory.
We refer the reader to \cite{Lawson1998} for background on such semigroups but we recall some important definitions here.
An {\em inverse semigroup} is a semigroup in which for each element $a$ there is a unique element, denoted by $a^{-1}$,
such that $a = aa^{-1}a$ and $a^{-1} = a^{-1}aa^{-1}$.
The partial isometries of a $C^{\ast}$-algebra {\em almost} form an inverse semigroup (see \cite[Section~4.2]{Lawson1998})
and, perhaps for this reason, inverse semigroups have come to play an important role in the theory of $C^{\ast}$-algebras.
The set of idempotents in $S$ is denoted by $\mathsf{E}(S)$.
It is called the {\em semilattice of idempotents} of $S$.
Observe that if $e$ is any idempotent and $a$ is any element then $aea^{-1}$ is an idempotent.
Thus the `conjugate of an idempotent is an idempotent'.
If $S$ is an inverse {\em monoid} its group of units is denoted by $\mathsf{U}(S)$.
Define $\mathbf{d}(a) = a^{-1}a$ and $\mathbf{r}(a) = aa^{-1}$.
Define the {\em natural partial order} on $S$ by $a \leq b$ if and only if $a = ba^{-1}a$.
It can be proved that an inverse semigroup is partially ordered with respect to this relation;
observe also that $a \leq b$ implies that $a^{-1} \leq b^{-1}$.
An inverse semigroup is called {\em $E$-unitary} if $e \leq a$, where $e$ is an idempotent, implies that $a$ is an idempotent.
Define the {\em compatibility relation} $a \sim b$ precisely when $a^{-1}b$ and $ab^{-1}$ are both idempotents;
this relation is reflexive and symmetric, but not transitive.
If $a \sim b$ we say that $a$ and $b$ are {\em compatible}.
A non-empty subset $X$ of an inverse semigroup is said to be {\em compatible}
if each pair of elements of $X$ is compatible.
Observe that if $a,b \leq c$ then $a \sim b$.
It follows that $a \sim b$ is a necessary condition for $a$ and $b$ to have a join $a \vee b$
with respect to the natural partial order.
The compatibility relation plays an important role in this paper.
The following is
\cite[Lemma~1.4.11]{Lawson1998}
and
\cite[Lemma~1.4.12]{Lawson1998}.
\begin{lemma}\label{lem:compatibility-meets} In an inverse semigroup, we have the following:
\begin{enumerate}
\item $a \sim b$
if and only if all of the following hold: $a \wedge b$ exists, $\mathbf{d}(a \wedge b) = \mathbf{d}(a)\mathbf{d}(b)$, and
$\mathbf{r}(a \wedge b) = \mathbf{r}(a)\mathbf{r}(b)$.
\item If $a \sim b$ then $a \wedge b = ab^{-1}b = bb^{-1}a = ba^{-1}a = aa^{-1}b$.
\end{enumerate}
\end{lemma}
Observe that the meet $a \wedge b$ may exist without $a$ and $b$ being compatible.
An inverse semigroup is called a {\em $\wedge$-semigroup} if each pair of elements has a meet with respect
to the natural partial order.
Inverse $\wedge$-semigroups were first studied in \cite{Leech}
and will play an important role in this paper.
Let $\rho$ be a congruence on a semigroup $S$;
it is said to be {\em idempotent-pure} if $a \, \rho \, e$, where $e$ is an idempotent, implies that $a$ is an idempotent;
it is said to be {\em $0$-restricted} if $a \, \rho \, 0$ implies that $a = 0$.
Let $\theta \colon S \rightarrow T$ be a homomorphism between semigroups;
it is said to be {\em $0$-restricted} if $\theta (a)$ is zero if and only if $a$ is zero;
it is said to be {\em idempotent-pure} if $\theta (a)$ is an idempotent if and only if $a$ is an idempotent.
The proofs of the following are straightforward from the definitions.
\begin{lemma}\label{lem:idpt-pure-characterizations}
Let $S$ be an inverse semigroup:
\begin{enumerate}
\item The congruence $\rho$ is idempotent-pure if and only if $\rho \, \subseteq \, \sim$.
\item The homomorphism $\theta \colon S \rightarrow T$ is idempotent-pure if and only if
$\theta (a) \sim \theta (b)$ implies that $a \sim b$.
\end{enumerate}
\end{lemma}
If $(X,\tau)$ is a topological space, then the set $I(X,\tau)$ of all
homeomorphisms between the open subsets of $X$ is an inverse monoid.
The elements of $I(X,\tau)$ are called {\em partial homeomorphisms}.
If $\tau$ is the discrete topology we just write $I(X)$ instead of $I(X,\tau)$
and call it the {\em symmetric inverse monoid} on $X$.
If $A$ is a subset of $X$ then the identity function defined on $A$ is denoted by $1_{A}$.
One way, of course, to construct a group from an inverse monoid is to consider its group of units.
We now consider an alternative approach.
Let $S$ be an inverse semigroup.
There is a congruence $\sigma$ defined on $S$ such that $S/\sigma$ is a group
and if $\rho$ is any congruence on $S$ such that $S/\rho$ is a group then $\sigma \subseteq \rho$.
Thus $\sigma$ is the {\em minimum group congruence}.
In fact, $s \, \sigma \, t$ if and only if there exists $z \leq s,t$.
See \cite[Section 2.4]{Lawson1998}.
The following was proved as \cite[Theorem 2.4.6]{Lawson1998}.
\begin{proposition}\label{prop:four} Let $S$ be an inverse semigroup.
Then $S$ is $E$-unitary if and only if $\sigma \, = \, \sim$.
\end{proposition}
The following is well-known and easy to check.
\begin{lemma}\label{lem:five} Let $S$ be an $E$-unitary inverse semigroup.
Then for $a,b \in S$ we have $a \sim b$ if and only if $ab^{-1}b = ba^{-1}a$.
\end{lemma}
Intuitively, the above lemma says that the partial bijections $a$ and $b$ are identified
precisely when they agree on the intersection of their domains of definition.
Groups often arise as groups of symmetries but, sometimes, how they arise is more elusive.
For example, the {\em abstract commensurator} of a group $G$ is the set of all isomorphisms between
subgroups of finite index factored out by the equivalence that identifies two such isomorphisms if they agree on
a subgroup of finite index.
This forms a group $\mbox{Comm}(G)$, called the {\em abstract commensurator} of $G$ \cite{Nek2002}.
In fact, this group is best understood using inverse semigroup theory.
The set, $\Omega (G)$, of all isomorphisms between subgroups of finite index is an inverse semigroup.
The group $\mbox{Comm}(G)$ is then $\Omega (G)/\sigma$ where $\sigma$ is the minimum group congruence on $\Omega (G)$.
The elements of $\mbox{Comm}(G)$ are `hidden symmetries' to use the terminology of Farb and Weinberger \cite{FW2004}.
The elements of $\Omega (G)$ are, in some sense, `large'.
We now describe an analogous procedure to the one described above for constructing a group
from an inverse semigroup (of partial bijections).
Let $S$ be an inverse semigroup (of partial isomorphisms, for example).
Let $S' \subseteq S$ be an inverse subsemigroup whose elements are, in some sense, large;
whatever this might mean, we require that $S'$ does not contain a zero.
Then we obtain a group $S'/\sigma$.
We regard the elements of $S'/\sigma$ as hidden symmetries of the structure that gives rise to $S$.
{\em We now define what `large' means in the context of this paper.}
A non-zero idempotent $e$ of an inverse semigroup $S$ is said to be {\em essential} if $ef \neq 0$ for all non-zero
idempotents $f$ of $S$.
An element $s$ is said to be {\em essential} if both $s^{-1}s$ and $ss^{-1}$ are essential.
Denote by $S^{e}$ the set of all essential elements of $S$.
It follows by \cite[Lemma~4.2]{Lawson2007} that $S^{e}$ is an inverse semigroup (without zero).
We therefore expect the group $S^{e}/\sigma$ to be interesting.
This will be the basis of our construction of a group from an inverse semigroup:
\begin{center}
{\small inverse semigroup $S$ \,$\Rightarrow$\, inverse semigroup of essential elements $S^{e}$ \,$\Rightarrow$\, group $S^{e}/\sigma$.}
\end{center}
Let $S$ be an inverse semigroup with zero.
We write $a \perp b$, and say that $a$ and $b$ are {\em orthogonal}, if $\mathbf{d}(a) \mathbf{d}(b) = 0$ and $\mathbf{r}(a) \mathbf{r}(b) = 0$.
Observe that if $e$ and $f$ are idempotents then $e \perp f$ means precisely that $ef = 0$.
If $a \perp b$ and $a \vee b$ exists we speak of an {\em orthogonal join}.
We need a little notation from the theory of posets.
Let $(X,\leq)$ be a poset.
If $A \subseteq X$ then $A^{\uparrow}$ is the set of all elements of $X$ above some element of $A$
and $A^{\downarrow}$ is the set of all elements of $X$ below some element of $A$.
If $A = \{a\}$ we write $a^{\uparrow}$ instead of $\{a\}^{\uparrow}$ and $a^{\downarrow}$ instead of $\{a\}^{\downarrow}$.
If $A = A^{\downarrow}$ we say that $A$ is an {\em order ideal}.
\subsection{Constructing a group from a suitable category}
In this section, we show how to construct a group from any finitely aligned category with a finite number of identities.
We shall follow the procedure outlined in the previous section by constructing an inverse semigroup $\mathsf{R}(C)$ of bijective morphisms between
the finitely generated right ideals of the category $C$ and then building the group from $\mathsf{R}(C)$.
We regard a category as a generalized monoid or `monoid with many identities'.
Thus, the set of identities of the category $C$, denoted by $C_{o}$, is a subset of $C$ and there are two maps $\mathbf{d}, \mathbf{r} \colon C \rightarrow C_{o}$,
called, respectively, {\em domain} and {\em codomain}.
The elements of the category $C$ are {\em arrows} such that $\mathbf{r}(a)\stackrel{a}{\longleftarrow} \mathbf{d}(a)$.
If $a,b \in C$ then the product $ab$ is defined if $\mathbf{d}(a) = \mathbf{r}(b)$;
in this case, we often write $\exists ab$.
A category $C$ is said to be {\em cancellative} if $ab = ac$ implies that $b = c$,
and $ba = ca$ implies that $b = c$.
An arrow $x$ is {\em invertible} if there is an arrow $y$ such that $xy$ and $yx$ are identities.
Clearly, every identity is invertible.
A category in which the identities are the only invertible arrows is said to be {\em conical}.
Since categories generalize monoids, we can generalize monoid-theoretic definitions to a category-theoretic setting.
If $C$ is a category and $a \in C$ then we can consider the set $aC = \{ax \colon x \in C\}$.
We call this the {\em principal right ideal} generated by $a$.
Observe that $aC \subseteq \mathbf{r}(a)C$.
More generally, if $X \subseteq C$ then $XC$ is the {\em right ideal} of $C$ generated by $X$;
we allow the set $X$ to be empty and so the empty set is counted as a right ideal.
We say that a right ideal $A$ is {\em finitely generated} if there is a finite set $X \subseteq C$
such that $A = XC$.
\begin{lemma}\label{lem:fgcat} Let $C$ be a category.
Then $C$ is finitely generated as a right ideal if and only if
it has a finite number of identities.
\end{lemma}
\begin{proof} Suppose that $C = XC$ where $X$ is a finite set.
Let $e \in C_{o}$.
Then $e \in XC$.
It follows that $e = xy$ for some $x \in X$ and $y \in C$.
Thus $e = \mathbf{r}(x)$.
We have proved that every identity of $C$ is the range of an element of $X$.
But $X$ is a finite set.
Thus the number of identities is finite.
Conversely, suppose that the number of identities is finite.
Then $C = C_{o}C$ and so $C$ is finitely generated as a right ideal.
\end{proof}
The intersection of two right ideals is always a right ideal
but we need something stronger.
We say that $C$ is {\em finitely aligned} if $aC \cap bC$ is always finitely generated (we include the possibility that it is empty).
\begin{lemma}\label{lem:fgright} Let $C$ be a category.
Then the intersection of any two finitely generated right ideals is finitely generated if and only if
$C$ is finitely aligned.
\end{lemma}
\begin{proof} It is clear that being finitely aligned is a necessary condition; we now show that it is sufficient.
Let $XC$ and $YC$ be two finitely generated right ideals.
Then $XC \cap YC = \bigcup_{x \in X, y \in Y} (xC \cap yC)$.
If $C$ is finitely aligned then each $xC \cap yC$ is finitely generated and so too is $XC \cap YC$.
\end{proof}
A function $\theta \colon R_{1} \rightarrow R_{2}$ between two right ideals of a category $C$ is called a {\em morphism}
if $\theta (rs) = \theta (r)s$ for all $r \in R_{1}$ and $s \in C$.
As usual, if $\alpha$ is a bijective morphism then $\alpha^{-1}$ is also a morphism.
\begin{lemma}\label{lem:right-ideal} Let $\theta \colon R_{1} \rightarrow R_{2}$ be a morphism between two right ideals.
Let $XC \subseteq R_{1}$ be a right ideal.
Then $\theta (XC)$ is a right ideal contained in $R_{2}$.
If $XC$ is finitely generated then $\theta (XC)$ is finitely generated.
In particular, if $\theta \colon XC \rightarrow YC$ is a bijective morphism,
we can always assume that $\theta (X) = Y$.
\end{lemma}
\begin{proof} We claim that $\theta (XC)$ is a right ideal.
Let $c \in C$ be arbitrary and let $\theta (xa) \in \theta (XC)$.
Then $\theta (xa)c = \theta (x(ac))$, since $\theta$ is a morphism.
We claim that $\theta (XC) = \theta (X)C$.
Clearly, $\theta (XC) \subseteq \theta (X)C$ since $\theta (xc) = \theta (x)c$.
Conversely, $\theta (x)c = \theta (xc)$, since $\theta$ is a morphism.
It follows from this that if $XC$ is finitely generated then $\theta (XC)$ is finitely generated.
We now prove the last claim.
Suppose that $\theta \colon XC \rightarrow YC$ is a bijective morphism.
Then $\theta (X)C = YC$.
Replace $Y$ by $\theta (X)$.
\end{proof}
\noindent
{\bf Definition. }Denote by $\mathsf{RI}(C)$ the set of all bijective morphisms between the right ideals of $C$
and denote by $\mathsf{R}(C)$ the set of all bijective morphisms between finitely generated right ideals of $C$.\\
\begin{proposition}\label{prop:inverse-monoid} Let $C$ be a category.
Then $\mathsf{RI}(C)$ is an inverse monoid.
If $C$ is finitely aligned and has a finite number of identities then $\mathsf{R}(C)$ is an inverse submonoid of $\mathsf{RI}(C)$.
\end{proposition}
\begin{proof} The whole category $C$ is a right ideal and so the identity function on $C$ is a bijective morphism,
and is an identity for $\mathsf{RI}(C)$.
The intersection of two right ideals is a right ideal.
It follows by Lemma~\ref{lem:right-ideal} that the composition of two bijective morphisms is a bijective morphism.
It is now clear that $\mathsf{RI}(C)$ is an inverse monoid.
Suppose now that $C$ is finitely aligned and has a finite number of identities.
Then by Lemma~\ref{lem:fgcat}, the identity function on $C$ is the identity of $\mathsf{R}(C)$.
By Lemma~\ref{lem:fgright}, the intersection of any two finitely generated right ideals is a finitely generated right ideal.
By Lemma~\ref{lem:right-ideal}, it is now easy to see that $\mathsf{R}(C)$ is an inverse submonoid of $\mathsf{RI}(C)$.
\end{proof}
Let $C$ be a finitely aligned category with a finite number of identities.
We are now interested in the structure of the inverse monoid $\mathsf{R}(C)^{e}$.
We say that a right ideal $XC$ of $C$ is {\em essential} if it intersects every right ideal of $C$ in a non-empty set.
Thus the inverse monoid $\mathsf{R}(C)^{e}$ consists of bijective morphisms
between the finitely generated essential
right ideals of $C$.
The following result tells us that if we want $\mathsf{R}(C)^{e}$
to be non-empty then we must assume that $C$ has a finite number of identities.
\begin{lemma}\label{lem:two} Let $C$ be a category.
If $XC$ is a finitely generated essential right ideal of $C$
then for each $e \in C_{o}$, there exists an $x \in X$ such that $\exists ex$.
It follows that the number of identities is finite.
\end{lemma}
\begin{proof} Let $e \in C_{o}$.
Then $eC \cap XC$ must be non-empty.
It follows that there exists $x \in X$ such that $eu = xv$ for some $u,v \in C$.
Thus $e = \mathbf{r}(x)$.
We have proved that each identity in $C$ is the range of an element of $X$.
But $X$ is a finite set.
It follows that the number of identities is finite.
\end{proof}
\begin{lemma}\label{lem:one} Let $C$ be a finitely aligned category.
Then $1_{XC}$ is an essential idempotent in $\mathsf{R}(C)^{e}$
if and only if for each element $a \in C$ we have that $aC \cap XC \neq \varnothing$.
\end{lemma}
\begin{proof} Suppose that $1_{XC}$ is an essential idempotent in $\mathsf{R}(C)^{e}$.
Then $1_{aC}1_{XC} \neq \varnothing$.
But this is simply the identity function on the set $aC \cap XC$.
It follows that $aC \cap XC \neq \varnothing$.
The proof of the converse is similar.
\end{proof}
We shall need the following definitions.
Let $C$ be a category and let $a,b \in C$.
We say that $a$ and $b$ are {\em dependent} if $aC \cap bC \neq \varnothing$;
otherwise, we say that they are {\em independent}.\footnote{The terms `comparable' and `incomparable' were used in \cite{LV2019b}. Strictly speaking, we should say `dependent on the right'
and `independent on the right' but we only work `on the right' in this paper, anyway.}
If $aC \cap bC \neq \varnothing$ then we also say that {\em $a$ is dependent on $b$} or that {\em $b$ is dependent on $a$}.
A subset $X \subseteq C$ is said to be {\em large in $C$} if each $a \in C$ is dependent on an element of $X$.
A subset $X \subseteq aC$ is said to be {\em large in $aC$} if each $b \in aC$ is dependent on an element of $X$.
\begin{remark}
{\em What we call a `large' subset of $aC$ is called `exhaustive' in, say, \cite{LS2010}.}
\end{remark}
\begin{lemma}\label{lem:zero} Let $C$ be a category.
Then $X$ is large if and only if $XC$ is essential.
\end{lemma}
\begin{proof} Suppose that $X$ is large.
We prove that $XC$ is essential.
Consider a principal right ideal $aC$.
Then $au = xv$ for some $x \in X$ and $u,v \in C$.
Thus $aC \cap xC \neq \varnothing$.
It follows that $XC$ is essential.
Suppose that $XC$ is essential.
Let $a \in C$ be arbitrary.
Then $aC \cap XC \neq \varnothing$.
Thus $au = xv$ for some $x \in X$ and $u,v \in C$.
It follows that $X$ is large.
\end{proof}
We can now define the group that we shall be interested in. Recall from just before Proposition~\ref{prop:four} that
the minimum group congruence on $\mathsf{R}(C)^e$ is the congruence such that $x \mathbin{\sigma} y$ iff there exists $z \le x, y$.
\vspace{5mm}
\begin{center}
\fbox{\begin{minipage}{15em}
{\bf Definition.} Let $C$ be a finitely aligned category with a finite number of identities.
Then
$$\mathscr{G}(C) = \mathsf{R}(C)^{e}/\sigma$$
is the {\em group associated with $C$}.
\end{minipage}}
\end{center}
\vspace{5mm}
\subsection{The cancellative case}
We shall now revisit the construction of the previous section under the additional assumption that $C$ is both cancellative and conical.
\begin{lemma}\label{lem:unique} Let $C$ be a category that is conical and cancellative.
Then $aC = bC$ if and only if $a = b$.
\end{lemma}
\begin{proof} Suppose that $aC = bC$.
Then $a = bx$ and $b = ay$ for some $x,y \in C$.
Thus $a = ayx$ and $b = bxy$.
By cancellation $xy$ and $yx$ are identities.
This implies $x$ and $y$ are invertible.
But $C$ is conical.
Thus $x$ and $y$ are identities.
It follows that $a = b$.
The converse is immediate.
\end{proof}
The above result tells us that when the category is conical and cancellative,
we can identify principal right ideals by the unique elements that generate them.
We begin by constructing some special elements of the inverse monoid $\mathsf{R}(C)$ in the case where $C$ is cancellative.
Let $a \in C$.
Define $\lambda_{a}: \mathbf{d}(a)C \rightarrow aC$ by $\mathbf{d}(a)C \owns x \mapsto ax$.
It is easy to check, given $C$ is cancellative, that $\lambda_{a}$ is a bijection.
We denote the inverse of this map by $\lambda_{a}^{-1}$.
These maps are elements of the symmetric inverse monoid $I(C)$ and so generate
an inverse subsemigroup $\Sigma (C)$ we call the {\em inverse hull} of $C$.
A product of the form $\lambda_{a}\lambda_{b}^{-1}$ is the empty function
unless $\mathbf{d}(a)C \cap \mathbf{d}(b)C \neq \varnothing$.
But this only occurs if $\mathbf{d}(a) = \mathbf{d}(b)$.
We therefore need $\mathbf{d}(a) = \mathbf{d}(b)$ in order for
$\lambda_{a}\lambda_{b}^{-1} \colon bC \rightarrow aC$ to be an honest-to-goodness function.
We describe the properties of this function.
Suppose that $(\lambda_{a}\lambda_{b}^{-1})(x) = (\lambda_{a}\lambda_{b}^{-1})(y)$
where $x,y \in bC$.
Then $x = be$ and $y = bf$ for some $e,f \in C$.
Then
$(\lambda_{a}\lambda_{b}^{-1})(x) = ae$
and
$(\lambda_{a}\lambda_{b}^{-1})(y) = af$.
By assumption, $ae = af$.
Thus by left cancellation $e = f$ and so $be = bf$ giving $x = y$.
It follows that $\lambda_{a}\lambda_{b}^{-1}$ is injective.
Now, let $ax \in aC$ be arbitrary.
Then $\mathbf{d}(a) = \mathbf{r}(x)$.
But $\mathbf{d}(a) = \mathbf{d}(b)$.
It follows that $bx$ is defined.
It is clear that $(\lambda_{a}\lambda_{b}^{-1})(bx) = ax$.
We have therefore proved that $\lambda_{a}\lambda_{b}^{-1}$ is a bijection.
Now, let $bx \in bC$.
Then for any $s \in C$ we have that
$(\lambda_{a}\lambda_{b}^{-1})(bxs) = axs$.
Thus
$(\lambda_{a}\lambda_{b}^{-1})((bx)s) = (\lambda_{a}\lambda_{b}^{-1})(bx)s$.
We have proved explicitly that $\lambda_{a}\lambda_{b}^{-1}$ is a bijective morphism.\\
\noindent
{\bf Definition. }We work in a conical, cancellative category.
Let $\mathbf{d}(a) = \mathbf{d}(b)$.
Then $ab^{-1}$ is the bijective morphism from $bC$ to $aC$ given by $bx \mapsto ax$.
We call this a {\em basic morphism}.
Observe that $\mathbf{d}(ab^{-1}) = bb^{-1}$ and $\mathbf{r}(ab^{-1}) = aa^{-1}$.\\
In general, the inverse hull of a cancellative category is hard to describe but under the assumption that $C$ is finitely aligned
the product of two basic morphisms can be explicitly computed. Note that, in the specific instance of higher-rank graphs,
our basic morphisms are closely related to the building blocks of the inverse semigroup constructed in \cite{FMY}.
\begin{lemma}\label{lem:apollo} Let $C$ be a finitely aligned conical cancellative category with a finite number of identities.
Suppose that $(\lambda_{a}\lambda_{b}^{-1})(\lambda_{c}\lambda_{d}^{-1})$ is non-empty
and that $bC \cap cC = \{x_{1}, \ldots, x_{m}\}C$.
Then
$$(\lambda_{a}\lambda_{b}^{-1})(\lambda_{c}\lambda_{d}^{-1})
=
\bigcup_{i=1}^{m} \lambda_{ap_{i}} \lambda_{dq_{i}}^{-1}$$
where $x_{i} = bp_{i} = cq_{i}$ for $1 \leq i \leq m$.
\end{lemma}
\begin{proof} We have to calculate the product
$aC \leftarrow bC \colon \lambda_{a}\lambda_{b}^{-1}$ with $cC \leftarrow dC \colon \lambda_{c} \lambda_{d}^{-1}$.
Let $bC \cap cC = \{x_{1}, \ldots, x_{m}\}C$
and $e = \mathbf{r}(b) = \mathbf{r}(c)$.
Then $\{x_{1}, \ldots, x_{m}\}C \subseteq eC$.
Let $x_{i} = bp_{i} = cq_{i}$ where $1 \leq i \leq m$.
Observe that $\mathbf{d}(a) = \mathbf{d}(b) = \mathbf{r}(p_{i})$ and so $ap_{i}$ is defined.
Similarly, $\mathbf{d}(d) = \mathbf{d}(c) = \mathbf{r}(q_{i})$ and so $dq_{i}$ is defined.
Observe that $\mathbf{d}(x_{i}) = \mathbf{d}(p_{i}) = \mathbf{d}(q_{i})$.
Thus the product $\lambda_{ap_{i}} \lambda_{dq_{i}}^{-1}$ is non-empty.
Observe that $\lambda_{ap_{i}} \lambda_{dq_{i}}^{-1}$ and $\lambda_{ap_{j}} \lambda_{dq_{j}}^{-1}$
are compatible (when $i \neq j$);
this is easily checked by calculating
$(\lambda_{ap_{i}}\lambda_{dq_{i}}^{-1})^{-1}\lambda_{ap_{j}} \lambda_{dq_{j}}^{-1}$
and
$\lambda_{ap_{i}}\lambda_{dq_{i}}^{-1}(\lambda_{ap_{j}} \lambda_{dq_{j}}^{-1})^{-1}$
and showing that both are idempotents.
To prove that
$$(\lambda_{a}\lambda_{b}^{-1})(\lambda_{c}\lambda_{d}^{-1})
=
\bigcup_{i=1}^{m} \lambda_{ap_{i}} \lambda_{dq_{i}}^{-1}$$
it is enough, by symmetry, to check that both left and right hand sides have the same domains
and that the maps do the same thing, both of which are routine.
\end{proof}
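As a sanity check of the formula in Lemma~\ref{lem:apollo} (purely for illustration), take $C$ to be the free monoid $\{a,b\}^{\ast}$ again
and consider the product $(\lambda_{a}\lambda_{b}^{-1})(\lambda_{ba}\lambda_{a}^{-1})$,
so that, in the notation of the lemma, $c = ba$ and $d = a$.
Here $bC \cap baC = baC$, whence $m = 1$, $x_{1} = ba$, $p_{1} = a$ and $q_{1} = \varepsilon$.
The lemma therefore gives
$$(\lambda_{a}\lambda_{b}^{-1})(\lambda_{ba}\lambda_{a}^{-1}) = \lambda_{aa}\lambda_{a}^{-1},$$
the map $aw \mapsto aaw$, which agrees with the direct computation $aw \mapsto baw \mapsto aaw$.
In a free monoid the intersection of two principal right ideals is always empty or principal, which is why $m = 1$ here;
for a general higher rank graph the generating code may have more than one element.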
In the case where $C$ is a finitely aligned cancellative category, the inverse hull
$\Sigma (C)$ is an inverse submonoid of $\mathsf{R}(C)$, but it is much easier to work with the latter than the former;
we describe the mathematical relationship between them below.
Lemma~\ref{lem:joinbasic}, Lemma~\ref{lem:oreo} and Lemma~\ref{lem:key-property}
show the important role played by the basic morphisms in the inverse monoid $\mathsf{R}(C)$.
\begin{lemma}\label{lem:joinbasic} Let $C$ be a finitely aligned conical cancellative category with a finite number of identities.
Let $\theta \colon XC \rightarrow YC$ be a bijective morphism between two finitely generated right ideals of $C$.
Then $\theta$ is a join of a finite number of basic morphisms.
\end{lemma}
\begin{proof} We may assume that $\theta$ induces a bijection between $X$ and $Y$ by Lemma~\ref{lem:right-ideal}.
Let $x \in X$ and let $y_{x} = \theta (x)$.
Observe that $\mathbf{d}(x) = \mathbf{d}(y_{x})$ by using right identities.
We may therefore form the basic morphism $y_{x}x^{-1}$.
We claim that $\theta = \bigvee_{x \in X}y_{x}x^{-1}$.
Let $xc \in XC$.
Then $\theta (xc) = \theta (x)c = y_{x}c$.
But $(y_{x}x^{-1})(xc) = y_{x}c$.
\end{proof}
\begin{lemma}\label{lem:oreo} Let $C$ be a finitely aligned conical cancellative category with a finite number of identities.
\begin{enumerate}
\item $xy^{-1} \leq uv^{-1}$ if and only if $(x,y) = (us,vs)$ for some $s \in C$.
It follows that if $xy^{-1}$ is an idempotent so too is $uv^{-1}$.
\item $xx^{-1} \perp yy^{-1}$ if and only if $x$ and $y$ are incomparable in $C$.
\item Let $xC \cap yC = UC$ where $U$ is a finite set.
Then
$xx^{-1} yy^{-1} = \bigvee_{u \in U} uu^{-1}$.
\end{enumerate}
\end{lemma}
\begin{proof} (1) By the definition of the order on partial functions, we have that $yC \subseteq vC$ and $xC \subseteq uC$.
In addition, $xy^{-1}$ and $uv^{-1}$ agree on elements of $yC$.
We have that $y = va$ and $x = ub$.
Now, $(xy^{-1})(y) = x$.
But $(uv^{-1})(y) = ua$.
It follows that $x = ua$ and so $ub = ua$ and so $a = b$.
The result now follows with $s = a = b$; conversely, if $(x,y) = (us,vs)$ then $xy^{-1}$ is simply the restriction of $uv^{-1}$ to $vsC$, so that $xy^{-1} \leq uv^{-1}$.
In order that $xy^{-1}$ be an idempotent, we must have that $x = y$.
If $xy^{-1}$ is an idempotent then $us = vs$, and so $u = v$ by right cancellation; thus $uv^{-1}$ is an idempotent.
(2) The idempotents $xx^{-1}$ and $yy^{-1}$ are orthogonal if and only if $xC \cap yC = \varnothing$.
But this is equivalent to saying that $x$ and $y$ are incomparable.
(3) The product of $xx^{-1}$ and $yy^{-1}$ is the identity function on $xC \cap yC = UC$, which is precisely $\bigvee_{u \in U} uu^{-1}$.
\end{proof}
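The following easy special case may help to orient the reader (it is purely illustrative):
in the free monoid $\{a,b\}^{\ast}$ the words $a$ and $b$ are incomparable,
so by part (2) the idempotents $aa^{-1}$ and $bb^{-1}$ are orthogonal;
and by part (1) we have, for example, $(ab)(bb)^{-1} \leq ab^{-1}$, since $(ab,bb) = (a \cdot b, b \cdot b)$;
indeed $(ab)(bb)^{-1}$ is just the restriction of $ab^{-1}$ to $bb\{a,b\}^{\ast}$.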
The following is a key property since it shows how the basic morphisms sit inside the inverse monoid $\mathsf{R}(C)$.
\begin{lemma}\label{lem:key-property} Let $C$ be a finitely aligned conical cancellative category with a finite number of identities.
If $xy^{-1} \leq \bigvee_{j=1}^{n} u_{j}v_{j}^{-1}$ then
$xy^{-1} \leq u_{j}v_{j}^{-1}$ for some $j$.
\end{lemma}
\begin{proof} Observe that $yC \subseteq \{v_{1}, \ldots, v_{n}\}C$.
Thus $y = v_{j}p$ for some $j$ and $p \in C$.
Now $(xy^{-1})(y) = x$,
whereas $\left( \bigvee_{j=1}^{n} u_{j}v_{j}^{-1} \right)(y) = u_{j}p$.
It follows that $x = u_{j}p$, so that $(x,y) = (u_{j}p,v_{j}p)$ and hence, by Lemma~\ref{lem:oreo},
$xy^{-1} \leq u_{j}v_{j}^{-1}$.
\end{proof}
An inverse semigroup is said to be {\em distributive} if each pair of compatible elements has a join
and multiplication distributes over such joins.
\begin{proposition}\label{prop:dis} Let $C$ be a finitely aligned conical cancellative category with a finite number of identities.
Then $\mathsf{R}(C)$ is a distributive inverse $\wedge$-monoid.
\end{proposition}
\begin{proof} By Proposition~\ref{prop:inverse-monoid}, we know that $\mathsf{R}(C)$ is an inverse monoid.
It is distributive because, by Lemma~\ref{lem:fgright}, the union of two finitely generated right ideals is a finitely generated right ideal.
We now prove that $\mathsf{R}(C)$ has all binary meets.
We use \cite[Theorem~1.9]{Leech}.
Let $\theta \colon XC \rightarrow YC$ be a bijective morphism.
We may assume that $\theta (X) = Y$ by Lemma~\ref{lem:right-ideal}.
We are interested in the fixed-point set of $\theta$.
Let $xc \in XC$.
Then $\theta (xc) = xc$ if and only if $\theta (x)c = xc$ which, by right cancellation, holds if and only if $\theta (x) = x$.
Define $X' \subseteq X$ to be those elements $x \in X$ such that $\theta (x) = x$.
Then the fixed-point set of $\theta$ is the finitely generated right ideal $X'C$.
\end{proof}
A {\em morphism} of distributive inverse semigroups is a homomorphism that preserves compatible joins.
Let $S$ be an inverse semigroup.
We say that $T$, a distributive inverse semigroup, is the {\em distributive completion} of $S$,
if there is a homomorphism $\iota \colon S \rightarrow T$ such that for any homomorphism $\alpha \colon S \rightarrow D$,
to a distributive inverse semigroup $D$, there is a morphism $\beta \colon T \rightarrow D$ such that $\alpha = \beta \iota$.
The exact mathematical relationship between $\Sigma (C)$ and $\mathsf{R}(C)$ can now be spelled out.
\begin{proposition}\label{prop:completion} Let $C$ be a finitely aligned conical cancellative category with a finite number of identities.
Then $\mathsf{R}(C)$ is the distributive completion of $\Sigma (C)$.
\end{proposition}
\begin{proof} Let $\alpha \colon \Sigma (C) \rightarrow D$ be any homomorphism to a distributive inverse monoid $D$.
By Lemma~\ref{lem:joinbasic}, each element of $\mathsf{R}(C)$ can be written in the form $\bigvee_{j=1}^{n} u_{j}v_{j}^{-1}$;
define $\beta (\bigvee_{j=1}^{n} u_{j}v_{j}^{-1}) = \bigvee_{j=1}^{n} \alpha (u_{j}v_{j}^{-1})$.
We need to check that this is well-defined.
Suppose that
$$\bigvee_{j=1}^{n} u_{j}v_{j}^{-1}
=
\bigvee_{i=1}^{m} a_{i}b_{i}^{-1}$$
in $\mathsf{R}(C)$.
Then by Lemma~\ref{lem:key-property}, for each $j$ there exists an $i$ such that
$u_{j}v_{j}^{-1} \leq a_{i}b_{i}^{-1}$.
Thus $\alpha (u_{j}v_{j}^{-1}) \leq \alpha (a_{i}b_{i}^{-1})$.
From this result, and symmetry, the well-definedness of $\beta$ follows.
That $\beta$ is a homomorphism is immediate by Lemma~\ref{lem:apollo} and the fact that $\alpha$ is a homomorphism.
\end{proof}
The following is important in the construction of the associated group.
\begin{proposition}\label{prop:three}
Let $C$ be a finitely aligned conical cancellative category with a finite number of identities.
Then the inverse monoid $\mathsf{R}(C)^{e}$ is $E$-unitary.
\end{proposition}
\begin{proof}
Let $\alpha \colon XC \rightarrow YC$ be a bijective morphism between
two finitely generated essential right ideals.
Suppose that $\alpha$ is the identity when restricted to the finitely generated essential right ideal $ZC$ where $ZC \subseteq XC$.
Let $xa \in XC$.
Then, since $ZC$ is an essential right ideal, we have that $xaC \cap ZC \ne \varnothing$.
It follows that $xab = zc$ for some $z \in Z$ and $b,c \in C$.
But $\alpha (zc) = zc$ and so $\alpha (xab) = xab$.
But $\alpha$ is a morphism and so $\alpha (xab) = \alpha (x)ab$.
By cancellation, it follows that $\alpha (x) = x$.
We have therefore proved that $\alpha$ is the identity on $X$, hence on all of $XC$, and so $\alpha$ is an idempotent.
\end{proof}
Suppose that $C$ is a finitely aligned conical cancellative category with a finite number of identities.
Then by Proposition~\ref{prop:three}, Proposition~\ref{prop:four} and Lemma~\ref{lem:five},
we can say that two elements of $\mathsf{R}(C)^{e}$ are identified under $\sigma$ if they agree on the intersection of their domains of definition.
This process for constructing a group from an inverse semigroup
of partial bijections is identical to the one used in \cite{YC}
though our group is quite different from the one defined in \cite{YC}.
The following result summarizes some important properties of the distributive inverse $\wedge$-monoid $\mathsf{R}(C)$.
It uses Proposition~\ref{prop:dis}, Lemma~\ref{lem:key-property} and Lemma~\ref{lem:oreo}.
\begin{proposition}\label{prop:primely-generated} Let $C$ be a finitely aligned conical cancellative category with a finite number of identities.
Put $\mathscr{B}$ equal to the set of basic morphisms in $\mathsf{R}(C)$.
Then the following three properties hold:
\begin{enumerate}
\item Each element of $\mathsf{R}(C)$ is a join of a finite number of elements of $\mathscr{B}$.
\item If $e,a \in \mathscr{B}$, $e$ is a non-zero idempotent and $e \leq a$ then $a$ is an idempotent.
\item If $a \leq \bigvee_{i=1}^{m} b_{i}$ where $a,b_{i} \in \mathscr{B}$ then $a \leq b_{i}$ for some $i$.
\end{enumerate}
\end{proposition}
\begin{remark}{\em We can see that the reason $\mathsf{R}(C)$ has all binary meets is due to property (2) above.
To see why, we carry out a small calculation first.
Let $a = \bigvee_{i=1}^{m} a_{i}$, where $a_{i} \in \mathscr{B}$, be an arbitrary element of $S = \mathsf{R}(C)$.
Let $e = \bigvee_{j=1}^{n} e_{j}$, where $e_{j} \in \mathscr{B}$, be any idempotent such that $e \leq a$.
Observe that for each $j$ we have that $e_{j} \leq e$.
It follows by property (3), that for each $j$ there exists an $i$ such that $e_{j} \leq a_{i}$.
By property (2), it follows that $a_{i}$ is an idempotent.
Now, we may split the join $a = \bigvee_{i=1}^{m} a_{i}$ into two parts so that
$a = \left( \bigvee_{k=1}^{p} b_{k} \right) \vee \left( \bigvee_{l=1}^{q} f_{l} \right)$
where $b_{k}, f_{l} \in \mathscr{B}$ and all the $f_{l}$ are idempotents and none of the $b_{k}$ is an idempotent.
It follows by our calculations above, that $\bigvee_{l=1}^{q} f_{l}$ is the largest idempotent less than or equal to $a$.
This proves that $S$ is a $\wedge$-semigroup by \cite{Leech}.}
\end{remark}
\subsection{The group described in terms of maximal codes}
From a finitely aligned conical cancellative category $C$ with a finite number of identities we have constructed a group $\mathscr{G}(C)$
but we have little idea about the structure of this group.
To gain a better handle on this structure, we need to make further assumptions on the category $C$.
First of all, we shall need to strengthen the notion of finite alignment.
A {\em code} is a finite subset $X \subseteq C$ such that any two distinct elements are independent.
Observe that `finiteness' is part of the definition of a code in this paper.
A {\em maximal code} is a large code.
Given an identity $e$ of $C$, a {\em code in $e$} is a finite subset $X \subseteq eC$ such that any two distinct elements are independent.
A {\em maximal code in $e$} is a code in $e$ which is large in $eC$.
We now have the following refinement of the notion of a category's being finitely aligned.\\
\noindent
{\bf Definition.} We say that a category $C$ is {\em strongly finitely aligned} if the set $xC \cap yC$, when non-empty,
is finitely generated by independent elements;
thus $xC \cap yC$ is generated by a code.\\
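For example (and purely by way of illustration), the free monoid $\{a,b\}^{\ast}$ is strongly finitely aligned:
$x\{a,b\}^{\ast} \cap y\{a,b\}^{\ast}$ is empty unless one of $x$ and $y$ is a prefix of the other,
in which case the intersection is the principal right ideal generated by the longer word, and a singleton is trivially a code.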
We now strengthen Lemma~\ref{lem:unique}.
\begin{lemma}\label{lem:conical} Let $C$ be a conical cancellative category.
Let $U$ and $V$ be codes such that $UC = VC$.
Then $U = V$.
\end{lemma}
\begin{proof} Let $u \in U$.
Then $u = va$ for some $v \in V$ and $a \in C$.
Let $v = u'b$ for some $u' \in U$ and $b \in C$.
Then $u = va = u'ba$.
But $U$ is a code and so $u = u'$.
By cancellation, $ba$ is an identity.
Similarly, $ab$ is an identity.
But $C$ is conical and so $a$ and $b$ are identities.
We have therefore proved that $U \subseteq V$.
By symmetry, $V \subseteq U$ and so $U = V$ as claimed.
\end{proof}
Let $C$ be a strongly finitely aligned cancellative category.
Let $x,y \in C$.
Define $x \vee y$ to be a finite subset of $C$ such that $xC \cap yC = (x \vee y)C$;
by strong finite alignment, we may and do choose $x \vee y$ to be a code.
\begin{lemma}\label{lem:sfa} Let $C$ be a strongly finitely aligned conical cancellative category with a finite number of identities.
Let $XC$ and $YC$ be finitely generated right ideals both generated by codes.
Then $XC \cap YC$ is either empty or a finitely generated right ideal generated by a code.
\end{lemma}
\begin{proof} We have that $XC \cap YC = \bigcup_{x \in X, y \in Y} (x \vee y)C$.
Thus $XC \cap YC$ is certainly finitely generated.
We prove that the set $Z = \bigcup_{x \in X, y \in Y} x \vee y$ is a code.
Let $a \in x_{i} \vee y_{j}$ and $b \in x_{k} \vee y_{l}$
where $x_{i}, x_{k} \in X$ and $y_{j}, y_{l} \in Y$.
Suppose that $a$ and $b$ are dependent.
Then $z = au = bv$ for some $u,v \in C$.
But $a \in x_{i}C \cap y_{j}C$ and $b \in x_{k}C \cap y_{l}C$.
It follows that $z \in x_{i}C \cap y_{j}C \cap x_{k}C \cap y_{l}C$.
Thus $x = x_{i} = x_{k}$, since $x_{i}, x_{k} \in X$ and $X$ is a code
and $y = y_{j} = y_{l}$, since $y_{j}, y_{l} \in Y$ and $Y$ is a code.
It follows that $a,b \in x \vee y$.
But $a,b \in x \vee y$ and $x \vee y$ is a code and so $a = b$.
\end{proof}
\noindent
{\bf Definition. } Let $C$ be a strongly finitely aligned cancellative category with a finite number of identities.
Define $\mathsf{P}(C)$ to be the set of all bijective morphisms between finitely generated right ideals generated
by codes.\\
\begin{lemma}\label{prop:projective} Let $C$ be a strongly finitely aligned conical cancellative category with a finite number of identities.
Then $\mathsf{P}(C)$ is an inverse subsemigroup of $\mathsf{R}(C)$.
\end{lemma}
\begin{proof} By Lemma~\ref{lem:sfa}, the intersection of any two finitely generated right ideals generated by codes is itself
a finitely generated right ideal generated by a code.
Let $\alpha \colon XC \rightarrow YC$ be a bijective morphism between two finitely generated right ideals.
Let $ZC \subseteq XC$ be a finitely generated right ideal generated by a code.
We prove that $\alpha (Z)$ is also a code.
Suppose that $\alpha (z)$ and $\alpha (z')$ are dependent for some $z,z' \in Z$.
Then $\alpha (z)u = \alpha (z')v$ for some $u,v \in C$.
Then $\alpha (zu) = \alpha (z'v)$ since $\alpha$ is a morphism.
Thus $zu = z'v$ since $\alpha$ is a bijection.
But $Z$ is a code and so $z = z'$.
It follows that $\alpha (z) = \alpha (z')$.
We have therefore proved that $\alpha (Z)$ is a code.
It follows that $\alpha$ restricts to a bijective morphism $ZC \rightarrow \alpha (Z)C$.
The fact that $\mathsf{P}(C)$ is an inverse subsemigroup of $\mathsf{R}(C)$ is now immediate.
\end{proof}
The elements of $\mathsf{P}(C)^{e}$ are (using Lemma~\ref{lem:zero}) the bijective morphisms between the right ideals of $C$ generated by maximal codes.
\begin{lemma}\label{lem:projright} Let $C$ be a strongly finitely aligned conical cancellative category with a finite number of identities.
Then $\mathsf{P}(C)^{e}$ is an inverse submonoid of $\mathsf{R}(C)^{e}$.
\end{lemma}
\begin{proof} We prove first that $\mathsf{P}(C)^{e}$ is contained in $\mathsf{R}(C)^{e}$.
Observe that an idempotent in $\mathsf{P}(C)^{e}$ is an identity function defined on a
finitely generated right ideal of $C$ generated by a code which intersects every finitely generated right ideal of $C$ generated
by a code. In particular, it intersects principal right ideals of $C$.
It is now clear that $\mathsf{P}(C)^{e}$ is contained in $\mathsf{R}(C)^{e}$.
The composition in $\mathsf{P}(C)^{e}$ is just the restriction of the composition in $\mathsf{R}(C)^{e}$.
\end{proof}
\noindent
{\bf Condition (MC)}. Let $C$ be a strongly finitely aligned conical cancellative category with a finite number of identities.
We assume that if $XC$ is any finitely generated essential right ideal then there is a $YC \subseteq XC$ where
$Y$ is a maximal code.\\
\begin{lemma}\label{lem:restriction} Let $C$ be a strongly finitely aligned conical cancellative category with a finite number of identities
satisfying condition (MC).
Then each element of $\mathsf{R}(C)^{e}$ is above an element of $\mathsf{P}(C)^{e}$.
\end{lemma}
\begin{proof} The condition (MC) is essentially a condition on idempotents.
Let $\alpha \colon XC \rightarrow YC$ be a bijective morphism between two finitely generated essential right ideals;
thus $\alpha \in \mathsf{R}(C)^{e}$.
We assume that $\alpha (X) = Y$ by Lemma~\ref{lem:right-ideal}.
Let $ZC \subseteq XC$ be a finitely generated right ideal generated by a maximal code.
We prove that $\alpha (Z)$ is also a maximal code.
To do this, we need to show that every element of $C$ is dependent on an element of $\alpha (Z)$.
Let $a \in C$.
Since $YC$ is an essential right ideal we have that $au = yv$ for some $y \in Y$ and $u,v \in C$.
Now $y \in Y$ and so there is an $x \in X$ such that $\alpha (x) = y$.
It follows that $au = \alpha (x)v$.
Thus $au = \alpha (xv)$.
Now, $Z$ is a maximal code.
It follows that $xvp = zs$ for some $z \in Z$ and $p,s \in C$.
Thus $\alpha (x)vp = \alpha (z)s$.
Hence, $aup = \alpha (x)vp = \alpha (z)s$.
We have proved that $a \in C$ is dependent on an element of $\alpha (Z)$.
It follows that $\alpha (Z)$ is a maximal code, and so the restriction of $\alpha$ to $ZC$ is an element of $\mathsf{P}(C)^{e}$ lying beneath $\alpha$ in the natural partial order.
\end{proof}
The following was proved as \cite[Lemma~7.10]{LV2019b}.
\begin{lemma}\label{lem:quotients} Let $S$ be an inverse subsemigroup of an inverse semigroup $T$.
Suppose that each element of $T$ lies above an element of $S$ in the natural partial order.
Then $S/\sigma \cong T/\sigma$.
\end{lemma}
On the strength of Lemma~\ref{lem:restriction} and Lemma~\ref{lem:quotients}, we have proved the following.
\begin{theorem}\label{them:groups}
Let $C$ be a strongly finitely aligned conical cancellative category with a finite number of identities
satisfying condition (MC).
Then
$$\mathsf{P}(C)^{e}/\sigma \cong \mathsf{R}(C)^{e}/\sigma.$$
\end{theorem}
A typical element of $\mathsf{P}(C)^{e}$ is a bijective morphism $\theta \colon XC \rightarrow YC$
where $X$ and $Y$ are maximal codes.
We may assume that $\theta (X) = Y$.
Then $\theta = \bigvee_{x \in X} \theta (x)x^{-1}$ by Lemma~\ref{lem:joinbasic}.
By Lemma~\ref{lem:oreo}, this is an orthogonal join.
\section{The group associated with a higher rank graph}
The goal of this section is to show how to construct the group $\mathscr{G}(C)$,
described in Section~2, in the case where $C$ is a higher rank graph (under some suitable assumptions on $C$).
Higher-rank graphs were introduced in \cite{KP} as combinatorial models for the systems of matrices, and
associated $C^*$-algebras, studied in \cite{RS}. The following definition comes from \cite{KP}.\\
\noindent
{\bf Definition.} A countable category $C$ is said to be a {\em higher rank graph} or a {\em $k$-graph}
if there is a functor $d \colon C \rightarrow \mathbb{N}^{k}$,
called the {\em degree map},
satisfying the {\em unique factorization property} (UFP): if $d(a) = \mathbf{m} + \mathbf{n}$ then there are unique elements $a_{1}$ and $a_{2}$ in $C$
such that $a = a_{1}a_{2}$ where $d(a_{1}) = \mathbf{m}$ and $d(a_{2}) = \mathbf{n}$.
We call $d(x)$ the {\em degree} of $x$.
A {\em morphism} of $k$-graphs is a degree-preserving functor.\\
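A simple example to bear in mind (we make no use of it below) is the category with a single identity whose set of elements is $\mathbb{N}^{k}$,
with composition given by addition and with $d$ the identity map;
the unique factorization property holds trivially, and this category has no sources and is row finite.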
Repeated applications of the UFP show that if $C$ is a $k$-graph and $a \in C$, and if $0 \le \mathbf{m} \le \mathbf{n} \le d(a)$,
then there is a unique factorisation $a = a' a'' a'''$ such that $d(a') = \mathbf{m}$, $d(a'') = \mathbf{n}-\mathbf{m}$ and
$d(a''') = d(a) - \mathbf{n}$. We define $a[\mathbf{m}, \mathbf{n}] := a''$. Again, the UFP implies that for any
$0 \le \mathbf{m}_1 \le \mathbf{m}_2 \le \cdots \le \mathbf{m}_l \le d(a)$, we have
\begin{equation}\label{eq:segments}
a = a[0, \mathbf{m}_1] a[\mathbf{m}_1, \mathbf{m}_2] \cdots a[\mathbf{m}_{l-1}, \mathbf{m}_l] a[\mathbf{m}_l, d(a)].
\end{equation}
It follows, for example, that if $a = bc$ and $\mathbf{m} \le d(b)$, then $a[0,\mathbf{m}] = b[0,\mathbf{m}]$.
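As an illustration of this notation (nothing more), suppose that $k = 2$ and $d(a) = (2,1)$.
Taking $\mathbf{m}_{1} = (1,0)$ and $\mathbf{m}_{2} = (1,1)$ in Equation~\ref{eq:segments} gives
$$a = a[0,(1,0)]\, a[(1,0),(1,1)]\, a[(1,1),(2,1)],$$
a factorization of $a$ into elements of degrees $(1,0)$, $(0,1)$ and $(1,0)$ respectively.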
By \cite[Remarks~1.2]{KP}, we have the following.
All are easy to prove directly.
\begin{lemma}\label{lem:important-hrg}
Let $C$ be a $k$-graph.
\begin{enumerate}
\item $C$ is cancellative.
\item $C$ is conical.
\item The elements of $C$ of degree $\mathbf{0}$ are precisely the identities.
\end{enumerate}
\end{lemma}
The following two definitions are important.\\
\noindent
{\bf Definition. }We say that $C$ has {\em no sources} if for each identity $e$ of $C$ and element $\mathbf{m} \in \mathbb{N}^{k}$ there exists
an arrow $x \in C$ such that $\mathbf{r}(x) = e$ and $d(x) = \mathbf{m}$.
We say that $C$ is {\em row finite} if for each identity $e$ of $C$, the number of elements of $eC$ of degree $\mathbf{m}$ is finite.\\
We shall now derive some properties of higher rank graphs that will be important later.
The following is proved by a simple application of the UFP.
It generalizes \cite[Lemma 3.10]{LV2019b}.
\begin{lemma}\label{lem:levi} Let $d \colon C \rightarrow \mathbb{N}^{k}$ be a $k$-graph.
Suppose that $xy = uv$ where $d(x) \geq d(u)$.
Then there exists an element $t \in C$ such that $x = ut$ and $v = ty$.
In particular, if $d(x) = d(u)$ then $x = u$.
\end{lemma}
The above lemma proves the following.
\begin{lemma}\label{lem:indep} Let $d \colon C \rightarrow \mathbb{N}^{k}$ be a $k$-graph.
If $a \neq b$ and $d(a) = d(b)$ then $a$ and $b$ are independent.
\end{lemma}
\begin{proof} Suppose that $ax = by$ for some $x,y \in C$.
Then by Lemma~\ref{lem:levi}, we have that $a = b$ which contradicts our assumption.
\end{proof}
By the above lemma, any finite set of elements of $C$ all of the same degree consists of pairwise independent elements
and so forms a code.
Let $\mathbf{m} \in \mathbb{N}^{k}$.
Define $C_{\mathbf{m}}$ to be the subset of $C$ which consists of all elements of degree $\mathbf{m}$.
\begin{lemma}\label{lem:mc} Let $d \colon C \rightarrow \mathbb{N}^{k}$ be a $k$-graph with no sources.
Then $C_{\mathbf{m}}$ is a maximal code.
\end{lemma}
\begin{proof} By Lemma~\ref{lem:indep}, $C_{\mathbf{m}}$ is a code.
We prove that it is maximal.
Let $a \in C$ be arbitrary.
Since $C$ is assumed to have no sources, there is an element $x$ such that $ax$ is defined and $d(x) = \mathbf{m}$.
It follows that $d(ax) \geq \mathbf{m}$.
By the UFP, we can write $ax = by$ where $d(b) = \mathbf{m}$, so that $a$ is dependent on the element $b \in C_{\mathbf{m}}$.
This proves the claim.
\end{proof}
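In the case $k = 1$ this is easy to visualize (the general case is no harder, but the following illustration may help):
if $C = G^{\ast}$ is the free category of a directed graph in which every vertex receives at least one edge,
which for $k = 1$ is the no-sources condition,
then $C_{n}$ is the set of all paths of length $n$, and the lemma asserts that every path is comparable with a path of length $n$:
a path shorter than $n$ can be extended on the right to one of length $n$,
whereas a longer path has its initial segment of length $n$ in $C_{n}$.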
\begin{lemma}\label{lem:maximal-code} Let $C$ be a $k$-graph with no sources, let $\mathbf{m} \in \mathbb{N}^{k}$ and let $e$ be an identity.
Then the set $eC_{\mathbf{m}}$ of all elements of $C$ with range $e$ and degree $\mathbf{m}$ is a maximal code in $eC$.
\end{lemma}
\begin{proof} Fix $c \in eC$. By Lemma~\ref{lem:mc}, there exists $a \in C_{\mathbf{m}}$ which is dependent on $c$, say $aa' = cc'$ for some $a',c' \in C$.
Hence $\mathbf{r}(a) = \mathbf{r}(aa') = \mathbf{r}(cc') = \mathbf{r}(c) = e$, so $a \in eC_{\mathbf{m}}$ and $c$ is dependent on $a$, as required.
\end{proof}
We can now show that condition (MC) holds for higher rank graphs.
\begin{lemma}\label{lem:containsmc} Let $d \colon C \rightarrow \mathbb{N}^{k}$ be a $k$-graph with no sources.
Let $XC$ be an essential finitely generated right ideal.
Put $\mathbf{m} = \bigvee_{x \in X}d(x)$.
Then $C_{\mathbf{m}}C \subseteq XC$.
Thus, every finitely generated essential right ideal contains a right ideal generated by a maximal code.
\end{lemma}
\begin{proof}
Let $y \in C_{\mathbf{m}}$.
We are assuming that $XC$ is an essential right ideal and so $X$ is a large subset by Lemma~\ref{lem:zero}.
It follows that $ya = xb$ for some $x \in X$ and $a,b \in C$.
But $d(y) \geq d(x)$.
Thus by Lemma~\ref{lem:levi}, there is $t \in C$ such that $y = xt$.
We have therefore proved that
$C_{\mathbf{m}}C \subseteq XC$.
\end{proof}
The following result is well-known but we prove it for the sake of completeness.
\begin{lemma}\label{lem:fa} Let $d \colon C \rightarrow \mathbb{N}^{k}$ be a $k$-graph and suppose that $a,b,c \in C$
satisfy $cC \subseteq aC, bC$.
Then there is an element $e \in C$ such that $cC \subseteq eC \subseteq aC \cap bC$ where $d(e) = d(a) \vee d(b)$.
\end{lemma}
\begin{proof} By assumption, $c = au = bv$ for some $u,v \in C$.
Then $d(c) = d(a) + d(u)$ and $d(c) = d(b) + d(v)$.
It follows that $d(a), d(b) \leq d(c)$.
Thus $d(a) \vee d(b) \leq d(c)$ (in the lattice-ordered group $\mathbb{Z}^{k}$).
Hence, in the notation of Equation~\ref{eq:segments}, $e := c[0, d(a) \vee d(b)] \in C_{d(a) \vee d(b)}$ and
$f := c[d(a) \vee d(b), d(c)]$ satisfy $c = ef$. Hence $cC = efC \subseteq eC$.
Since $a u = c = ef = e[0, d(a)]e[d(a), d(e)]f$, the UFP forces $e[0, d(a)] = a$.
Hence $eC = a e[d(a), d(e)] C \subseteq a C$. A symmetric argument gives $eC \subseteq bC$.
\end{proof}
Let $a,b \in C$.
We define the notation $a \vee b$ (it will agree with the notation introduced in Section~2.4).
If $a$ and $b$ are independent, define $a \vee b = \varnothing$.
Otherwise, $a \vee b$ consists of all elements $e \in aC \cap bC$ such that $d(e) = d(a) \vee d(b)$.
\begin{lemma}\label{lem:meets} Let $d \colon C \rightarrow \mathbb{N}^{k}$ be a $k$-graph.
Then $aC \cap bC = \bigcup_{x \in a \vee b} xC$.
\end{lemma}
\begin{proof} Without loss of generality, we assume that $a \vee b$ is non-empty.
Let $y \in aC \cap bC$.
Then by Lemma~\ref{lem:fa}, we have that $y \in eC$ for some $e \in a \vee b$.
It follows that the LHS is contained in the RHS.
The reverse inclusion is immediate.
\end{proof}
The following is immediate from the definitions and Lemma~\ref{lem:meets}.
\begin{lemma}\label{lem:finitely-aligned-rf}
A higher rank graph that is row finite is finitely aligned.
\end{lemma}
The following is immediate by Lemma~\ref{lem:indep} and shows that the categories underlying higher rank graphs are in
fact strongly finitely aligned.
\begin{lemma}\label{lem:code} Let $d \colon C \rightarrow \mathbb{N}^{k}$ be a $k$-graph.
Let $a,b \in C$. If $a \vee b$ is non-empty and finite it is a code.
\end{lemma}
Note that the set $a \vee b$ is precisely the set $\operatorname{MCE}(a, b)$ of \cite{RSY}.
\begin{example}\label{ex:free-cat}
{\em This example is well-known but it is included for completeness.
Let $G$ be a finite directed graph.
Denote by $G^{\ast}$ the free category generated by $G$.
This consists of all finite allowable strings of elements of $G$ with
the identities being identified with the vertices of $G$.
Let $x \in G^{\ast}$.
Then $x = x_{1} \ldots x_{n}$ where $x_{1}, \ldots, x_{n}$ are edges of $G$ so that the edge $x_{i}$ begins where the edge $x_{i+1}$ ends.
Define $\mathbf{d}(x)$ to be the identity at the source of $x_{n}$
and define $\mathbf{r}(x)$ to be the identity at the target of $x_{1}$.
Thus, in particular, $G^{\ast}$ has a finite number of identities.
Suppose that $xG^{\ast} \cap yG^{\ast}$ is non-empty.
Then $\mathbf{r}(x) = \mathbf{r}(y)$.
There are therefore three possibilities:
$x = y$ or $x = yu$ for some $u \in G^{\ast}$ or $y = xu$ for some $u \in G^{\ast}$.
In the first case, $xG^{\ast} = yG^{\ast}$,
in the second case, $xG^{\ast} \subseteq yG^{\ast}$,
and in the third case, $yG^{\ast} \subseteq xG^{\ast}$.
It follows that $xG^{\ast} \cap yG^{\ast}$ is either empty or a principal right ideal.
It follows that $G^{\ast}$ is finitely aligned.
See \cite{JL} for more on such categories.}
\end{example}
\begin{proposition}\label{prop:free-cat} The $1$-graphs are precisely the countable free categories.
Such a free category has a finite number of identities and is row finite
precisely when it is generated by a finite directed graph.
\end{proposition}
\begin{proof} Example~1.3 of \cite{KP} gives the first statement.
If $G$ is finite then, clearly, $G^*$ has a finite number of identities and is row finite.
Conversely, if $G^*$ is row finite with finitely many identities, then the vertex set of $G$ is
the finite set $G^*_{o}$ and the edge set of $G$ is a finite union $\bigcup_{e \in G^*_{o}} eG^*_{1}$
of finite sets. So $G$ is finite.
\end{proof}
We now show how to construct a group from a suitable higher rank graph.
By Lemma~\ref{lem:important-hrg}, higher rank graphs are conical and cancellative.
By Lemma~\ref{lem:mc} and Lemma~\ref{lem:containsmc}, condition (MC) holds.
By Lemma~\ref{lem:code}, if a higher rank graph is finitely aligned then it is strongly finitely aligned.
If $C$ is row finite then by Lemma~\ref{lem:finitely-aligned-rf}, it is finitely aligned.
We therefore assume that $C$ is a row-finite higher rank graph with a finite number of identities and no sources.
We may therefore define the group $\mathscr{G}(C)$ either in the fashion of Section~2.2 or in the fashion of Section~2.4.
Theorem~\ref{them:groups} guarantees that the two approaches yield isomorphic groups.
\section{The group $\mathscr{G}(C)$ as a group of units of a Boolean inverse monoid}
In this section, we shall prove that when $C$ is a higher rank graph with a finite number of identities,
no sources and is row finite, then the group $\mathscr{G}(C)$ is isomorphic to the group of units of a Boolean inverse $\wedge$-monoid.
By non-commutative Stone duality \cite{Lawson2010, Lawson2012, Lawson2016, LL},
this implies that $\mathscr{G}(C)$ is isomorphic to the topological full group of a second-countable Hausdorff \'etale topological groupoid
whose space of identities is a compact, Hausdorff zero-dimensional space (that is, a {\em Boolean space}).
Such a groupoid is itself said to be {\em Boolean}.
A distributive inverse semigroup is said to be {\em Boolean} if its semilattice of idempotents is a generalized Boolean algebra.
A distributive inverse monoid is said to be {\em Boolean} if its semilattice of idempotents is a Boolean algebra.
The complement of an element $e$ of a Boolean algebra is denoted by $\bar{e}$.
If $X \subseteq S$ define $X^{\vee}$ to be the set of all joins of finite compatible subsets of $X$.
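A basic example to keep in mind (it is not needed for the proofs below) is the symmetric inverse monoid $I(X)$ on a finite set $X$:
its idempotents are the identity functions on the subsets of $X$ and so form the Boolean algebra of all subsets of $X$,
any two compatible partial bijections have a join (their union),
and any two partial bijections have a meet (their largest common restriction);
thus $I(X)$ is a Boolean inverse $\wedge$-monoid.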
\subsection{Generalities}
In this section, we shall reprove some results from \cite[Section~9]{LV2019b} in a slightly more general setting.
We recall first a definition due to Daniel Lenz \cite{Lenz}.
Let $S$ be an inverse semigroup with zero.
Define the relation $\equiv$ on $S$ as follows:
$s \equiv t$ if and only if for each $0 < x \leq s$ we have that $x^{\downarrow} \cap t^{\downarrow} \neq 0$
and for each $0 < y \leq t$ we have that $y^{\downarrow} \cap s^{\downarrow} \neq 0$.
Then $\equiv$ is a $0$-restricted congruence on $S$.
We denote the $\equiv$-class of $a$ by $[a]$.
We dub $\equiv$ the {\em Lenz congruence}.
Let $a \leq b$.
We say that $a$ is {\em essential in} $b$ if
$0 < x \leq b$ implies that $x^{\downarrow} \cap a^{\downarrow} \neq 0$.
In this case, we write $a \leq_{e} b$.
Clearly,
$$a \leq_{e} b \Leftrightarrow a \equiv b \text{ and } a \leq b.$$
\begin{lemma} In an inverse semigroup, suppose that $a \leq b$.
Then $a \leq_{e} b$ if and only if $\mathbf{d}(a) \leq_{e} \mathbf{d}(b)$.
\end{lemma}
\begin{proof} Suppose that $a \leq_{e} b$.
We prove that $\mathbf{d}(a) \leq_{e} \mathbf{d}(b)$.
Let $0 < e \leq \mathbf{d}(b)$.
Then $0 < be \leq b$.
Now $a,be \leq b$ and so $a \sim be$.
By assumption, $a \wedge be \neq 0$.
Thus $\mathbf{d}(a) e \neq 0$.
We have therefore proved that $\mathbf{d}(a) \leq_{e} \mathbf{d}(b)$.
Conversely, suppose that $\mathbf{d}(a) \leq_{e} \mathbf{d}(b)$.
Let $0 < x \leq b$.
Then $0 < \mathbf{d}(x) \leq \mathbf{d}(b)$.
By assumption, $\mathbf{d}(x)\mathbf{d}(a) \neq 0$.
But $x,a \leq b$ implies that $x \sim a$.
It follows that $x \wedge a$ exists and is non-zero.
We have therefore proved that $a \leq_{e} b$.
\end{proof}
Let $\theta \colon S \rightarrow T$ be a homomorphism.
We say that it is {\em essential} if $a \leq_{e} b$ implies that $\theta (a) = \theta (b)$.
We say that a congruence $\rho$ on a semigroup is {\em essential}
if $a \leq_{e} b$ implies that $a \mathbin{\rho} b$.
\begin{lemma}\label{lem:important-tool} Let $S$ be an inverse semigroup.
If $\rho$ is any $0$-restricted, idempotent-pure essential congruence on $S$
then $a \, \rho \, b$ implies that $a \wedge b$ is defined and $(a \wedge b) \leq_{e} a,b$.
\end{lemma}
\begin{proof}
Suppose that $a \, \rho \, b$.
Then $a \sim b$ by Lemma~\ref{lem:idpt-pure-characterizations}
and so $a \wedge b$ exists by Lemma~\ref{lem:compatibility-meets}.
We prove that $a \wedge b \leq_{e} a$;
the fact that $a \wedge b \leq_{e} b$ follows by symmetry.
Since $a \sim b$ we have that $a \wedge b = ab^{-1}b$.
Let $0 < x \leq b$.
Then $x \sim a \wedge b$ since $x, a \wedge b \leq b$.
Thus $(a \wedge b) \wedge x = (a \wedge b)x^{-1}x$.
But $(ab^{-1}b)x^{-1}x \, \rho \, x$.
We are given that $x \neq 0$ and so $(a \wedge b)\wedge x \neq 0$.
The claim now follows.
\end{proof}
\begin{lemma}\label{lem:new-characterization} Let $S$ be an inverse semigroup
in which $\equiv$ is idempotent-pure.
Then $a \equiv b$ if and only if there exists $c \leq_{e} a,b$.
\end{lemma}
\begin{proof} It is immediate that if there exists $c \leq_{e} a,b$ then $a \equiv b$.
The converse follows by Lemma~\ref{lem:important-tool}.
\end{proof}
\begin{proposition}\label{prop:uniqueness}
Let $S$ be an inverse semigroup on which $\equiv$ is idempotent-pure.
Then $\equiv$ is the unique $0$-restricted, idempotent-pure essential congruence on $S$.
\end{proposition}
\begin{proof} By definition $\equiv$ is $0$-restricted, it is idempotent-pure by assumption and it is an essential congruence
by virtue of its definition.
Let $\rho$ be any $0$-restricted, idempotent-pure essential congruence on $S$.
We shall prove that $\rho \, = \, \equiv$.
Let $a \, \rho \, b$.
By Lemma~\ref{lem:important-tool} there exists $x \leq_{e} a,b$, and so
Lemma~\ref{lem:new-characterization} gives $a \equiv b$.
We have therefore shown that $\mathbin{\rho} \subseteq \mathbin{\equiv}$.
We now prove the reverse inclusion.
Let $a \equiv b$.
Then Lemma~\ref{lem:important-tool} shows that $a \wedge b$ is defined and
$a \wedge b \leq_{e} a,b$ and hence $a \mathbin{\rho} b$ because $\rho$ is an essential
congruence.\end{proof}
The following is now immediate by Lemma~\ref{lem:new-characterization}.
\begin{lemma}\label{lem:cheese} Let $S$ be an inverse semigroup on which $\equiv$ is idempotent-pure.
Let $\rho$ be any congruence on $S$ such that if $a \leq_{e} b$ then $\rho (a) = \rho (b)$.
Then $\equiv$ is contained in $\rho$.
\end{lemma}
\begin{proposition}\label{prop:panda} Let $S$ be a distributive inverse semigroup.
Let $\mathcal{B}$ be a subset of $S$ having the following properties:
\begin{enumerate}
\item Each element of $S$ is a finite join of elements from $\mathcal{B}$.
\item If $a \leq \bigvee_{i=1}^{m} a_{i}$ where $a,a_{i} \in \mathcal{B}$ then $a \leq a_{i}$ for some $i$.
\item If $a \leq b$, where $a,b \in \mathcal{B}$ and $a$ is a non-zero idempotent, then
$b$ is an idempotent.
\end{enumerate}
Then $\equiv$ is idempotent-pure on $S$.
\end{proposition}
\begin{proof} Suppose that $a \, \equiv \, e$ where $e$ is a non-zero idempotent.
By (1), write $a = \bigvee_{i=1}^{m} a_{i}$ and $e = \bigvee_{j=1}^{n} e_{j}$
where $a_{i}, e_{j} \in \mathcal{B}$.
We assume all these elements are non-zero.
For each $i$, we have that $a_{i} \leq a$.
Thus, from the definition of $\equiv$, there is a non-zero element $z$ in $S$
such that $z \leq a_{i}$ and $z \leq \bigvee_{j=1}^{n} e_{j}$.
Without loss of generality, we may assume that $z \in \mathcal{B}$.
By (2), $z \leq e_{j}$ for some $j$; since $e_{j} \leq e$, the element $e_{j}$, and hence $z$, is an idempotent.
By (3), it follows that $a_{i}$ is an idempotent.
It follows that all of $a_{1}, \ldots , a_{m}$ are idempotents and so $a$ is an idempotent,
as required.
\end{proof}
The following was proved as \cite[Lemma~9.12]{LV2019b}.
\begin{lemma}\label{lem:idpt-pure-dist} Let $S$ be a distributive inverse semigroup.
If $\rho$ is idempotent-pure then $S/\rho$ is a distributive inverse semigroup and the natural map from $S$ to $S/\rho$
is a morphism of distributive inverse semigroups.
If, in addition, $S$ is a $\wedge$-semigroup then $S/\rho$ is a $\wedge$-semigroup
and the morphism preserves meets.
\end{lemma}
The following refines Lemma~\ref{lem:cheese}.
\begin{lemma} Let $S$ be a distributive inverse semigroup.
Let $\mathcal{B}$ be a subset of $S$ having the following properties:
\begin{enumerate}
\item Each element of $S$ is a finite join of elements from $\mathcal{B}$.
\item If $a \leq \bigvee_{i=1}^{m} a_{i}$ where $a,a_{i} \in \mathcal{B}$ then $a \leq a_{i}$ for some $i$.
\item If $a \leq b$, where $a,b \in \mathcal{B}$ and $a$ is a non-zero idempotent, then
$b$ is an idempotent.
\end{enumerate}
Let $\theta \colon S \rightarrow T$ be a morphism of distributive inverse semigroups such that $b \leq_{e} a$, where $a \in \mathcal{B}$,
implies that $\theta (b) = \theta (a)$.
Then $\equiv$ is contained in the kernel of $\theta$.
\end{lemma}
\begin{proof} Suppose that $a \leq_{e} b$ where $b = \bigvee_{i} b_{i}$ such that $b_{i} \in \mathcal{B}$.
Then $a \wedge b_{i} \leq_{e} b_{i}$ for all $i$.
Observe that $a \wedge b_{i}$ is algebraically defined since $a \sim b_{i}$ by Lemma~\ref{lem:compatibility-meets}.
It follows that $\theta (b_{i}) \leq \theta (a)$ for all $i$ giving $\theta (b) \leq \theta (a)$.
But $\theta (a) \leq \theta (b)$ and so $\theta (a) = \theta (b)$.
Thus $\theta$ identifies $a$ and $b$ whenever $a \leq_{e} b$.
Since $\equiv$ is idempotent-pure by Proposition~\ref{prop:panda}, the result now follows by Lemma~\ref{lem:cheese}.
\end{proof}
\begin{lemma}\label{lem:essential-equiv} Let $T$ be an inverse monoid.
Then $e$ is an essential idempotent if and only if $e \equiv 1$.
\end{lemma}
\begin{proof} Suppose that $e$ is an essential idempotent.
Then by definition for any non-zero idempotent $f$ we have that $ef \neq 0$.
This proves that $e \equiv 1$.
The proof of the converse is immediate.
\end{proof}
The proof of the following can be deduced using the proof of \cite[Lemma~9.7]{LV2019b}.
\begin{lemma}\label{lem:red-panda} Let $T$ be an inverse monoid in which $\equiv$ is idempotent-pure
and suppose that $T^{e}$ is $E$-unitary.
Then the group of units of $T/{\equiv}$ is isomorphic to $T^{e}/\sigma$.
\end{lemma}
Let $S$ be an inverse semigroup and
let $\{a_{1}, \ldots, a_{m}\} \subseteq a^{\downarrow}$.
We say that this is a {\em tight cover} of $a$ if for every $0 < z \leq a$
we have that $z \wedge a_{i} \neq 0$ for some $i$.
The proof of the following is routine using \cite[Lemma~2.5(4)]{Lawson2016}.
\begin{lemma}\label{lem:nanaimo} Let $S$ be a distributive inverse semigroup.
Then $\{a_{1}, \ldots, a_{m}\}$ is a tight cover of $a$ if and only if
$\bigvee_{i=1}^{m} a_{i} \leq_{e} a$.
\end{lemma}
Let $S$ be an inverse semigroup.
A subset $A \subseteq S$ is a {\em filter} if $A = A^{\uparrow}$
and if $a,b \in A$ there exists $c \in A$ such that $c \leq a,b$.
It is {\em proper} if it does not contain $0$.
The proper filter $A$ is {\em tight} if for every $a \in A$ and every tight cover
$\{a_{1}, \ldots, a_{m}\}$ of $a$, there exists $i \le m$ such that $a_{i} \in A$.
A maximal proper filter is called an {\em ultrafilter}.
If $S$ is a distributive inverse semigroup, then a proper filter $A$ is said to be {\em prime}
if $a \vee b \in A$ implies that $a \in A$ or $b \in A$.
By Zorn's lemma, every proper filter is contained in an ultrafilter.
\begin{lemma}\label{lem:tight} Let $S$ be a distributive inverse semigroup.
Every tight filter is a prime filter.
\end{lemma}
\begin{proof}
Let $A$ be a tight filter and suppose that $a = \bigvee_{i=1}^{m} a_{i} \in A$.
Observe that $\{a_{1}, \ldots, a_{m}\}$ is a tight cover of $a$ using \cite[Lemma~2.5(4)]{Lawson2016}.
Thus $a_{i} \in A$ for some $i$, as claimed.
\end{proof}
By \cite[Proposition~5.10]{LL} and Lemma~\ref{lem:tight}, we have that:
every ultrafilter is a tight filter, and every tight filter is a prime filter.
\begin{lemma}\label{lem:mars} Let $S$ be an inverse semigroup with zero
in which $\equiv$ is idempotent-pure.
Let $X$ be a tight filter in $S$.
\begin{enumerate}
\item If $x \in X$ and $y \leq_{e} x$ then $y \in X$.
\item If $x \in X$ and $y \equiv x$ then $y \in X$.
\end{enumerate}
\end{lemma}
\begin{proof} (1) By definition, $\{y\}$ is a tight cover of $x$.
It follows that $y \in X$.
(2) This follows by (1) and Lemma~\ref{lem:new-characterization}.
\end{proof}
To prove that a distributive inverse semigroup is Boolean,
we have to prove, by \cite[Lemma~3.20]{LL} that every prime filter is an ultrafilter.
By Lemma~\ref{lem:idpt-pure-dist}, if $S$ is distributive and $\equiv$ is idempotent-pure
then $S/{\equiv}$ is distributive.
The following theorem is now relevant.
\begin{theorem}\label{them:seven} Let $S$ be a distributive inverse semigroup on which $\equiv$ is idem\-potent-pure.
Then $S/{\equiv}$ is Boolean if and only if every tight filter in $S$ is an ultrafilter.
\end{theorem}
\begin{proof} Put $T = S/{\equiv}$ and let $\theta \colon S \rightarrow T$ be the associated natural map.
We shall prove that there is a bijection between the set of ultrafilters in $S$ and the set of ultrafilters in $T$;
we shall also prove that there is an order-isomorphism between the set of tight filters in $S$
and the set of prime filters in $T$.
We describe first the relationship between filters in $T$ and filters in $S$.
Let $A$ be a proper filter in $T$ and put $A' = \theta^{-1}(A)$.
Since $\theta$ is $0$-restricted, it follows that $A'$ does not contain zero.
Suppose that $a',b' \in A'$.
Then $\theta (a'), \theta (b') \in A$.
Since $A$ is a filter there exists $c \in A$ such that $c \leq \theta (a'), \theta (b')$.
It follows that $\theta (a')c^{-1}c = \theta (b')c^{-1}c$.
The map $\theta$ is surjective and so there exists $e \in S$ such that $\theta (e) = c^{-1}c$.
It follows that $\theta (a'e) = \theta (b'e)$.
But then $a'e \sim b'e$ since we are assuming that $\equiv$ is idempotent-pure.
Thus $a'e \wedge b'e$ exists.
Put $d = a'e \wedge b'e$.
Then $d \leq a', b'$ and $\theta (d) = c$.
Thus $d \in A'$.
Let $a' \in A'$ and $a' \leq b'$.
Then $\theta (a') \in A$ and $\theta (a') \leq \theta (b')$.
Thus $\theta (b') \in A$.
It follows that $b' \in A'$.
We have therefore proved that $A'$ is a proper filter.
We now go in the opposite direction.
Let $X$ be a proper filter in $S$.
Since $\theta$ is $0$-restricted, we know that $\theta (X)$ does not contain zero.
Let $\theta (a'), \theta (b') \in \theta (X)$.
Since $X$ is a filter, there exists $c' \in X$ such that $c' \leq a', b'$.
Thus $\theta (c') \leq \theta (a'), \theta (b')$.
It follows that $\theta (X)^{\uparrow}$ is a proper filter in $T$.
Write $\overline{X} = \theta (X)^{\uparrow}$.
It is easy to check that $A = \overline{A'}$ for each proper filter $A$ in $T$.
Let $X$ be a proper filter in $S$.
Then $X \subseteq \overline{X}'$ always.
Suppose, now, that $X$ is a tight filter in $S$.
Let $y \in \overline{X}'$.
Then $\theta (y) \in \overline{X}$.
By definition, $\theta (x) \leq \theta (y)$ for some $x \in X$.
Observe that $yx^{-1}x \leq y$ and that $\theta (yx^{-1}x) = \theta (x)$.
It follows that $x \equiv yx^{-1}x$.
But $x \in X$ implies that $yx^{-1}x \in X$ and so $y \in X$ since $X$ is a tight filter by Lemma~\ref{lem:mars}.
Thus $X = \overline{X}'$ when $X$ is a tight filter in $S$.
We now prove that $A$ is an ultrafilter in $T$ if and only if $A'$ is an ultrafilter in $S$.
Suppose that $A$ is an ultrafilter.
Let $A' \subseteq Y$ where $Y$ is an ultrafilter in $S$.
Observe that since $A' \subseteq Y$ we have that $\overline{A'} \subseteq \overline{Y}$.
But $A = \overline{A'}$.
Thus $A \subseteq \overline{Y}$.
But $A$ is assumed to be an ultrafilter and so $A = \overline{Y}$.
Thus $A = \theta (Y)^{\uparrow}$.
It follows that $A' = \overline{Y}'$.
So $Y \subseteq \overline{Y}' = A' \subseteq Y$ giving equality throughout.
Hence $A'$ is an ultrafilter.
Suppose, now, that $A'$ is an ultrafilter.
We prove that $A$ is an ultrafilter.
Suppose that $A \subseteq B$ where $B$ is an ultrafilter.
Then $A' = \theta^{-1} (A) \subseteq \theta^{-1}(B)$.
But $A'$ is an ultrafilter.
Thus $\theta^{-1}(A) = \theta^{-1}(B)$.
It follows that $A = \overline{B}'$.
So $B \subseteq \overline{B}' = A \subseteq B$, and hence $A = B$. So $A$ is an ultrafilter.
We prove that $X \mapsto \overline{X}$ is a bijection from the set of ultrafilters in $S$
to the set of ultrafilters in $T$, with inverse $A \mapsto A'$.
Let $X$ be an ultrafilter in $S$.
Suppose that $\overline{X} \subseteq A$, where $A$ is a proper filter.
Then $\overline{X}' \subseteq A'$.
But then $X \subseteq A'$ and so $X = A'$, since $X$ is an ultrafilter.
Thus $\overline{X} = \overline{A'} = A$.
We have proved that $\overline{X}$ is an ultrafilter.
Suppose that $X$ and $Y$ are ultrafilters in $S$ and that $\overline{X} = \overline{Y}$.
Then $\overline{X}' = \overline{Y}'$.
Hence $X = \overline{X}' = \overline{Y}' = Y$.
Let $A$ be an ultrafilter in $T$.
Then $A'$ is an ultrafilter in $S$ and $A = \overline{A'}$.
This establishes our bijection.
We now prove that $A$ is a prime filter in $T$ if and only if $A'$ is a tight filter in $S$.
Suppose first that $A'$ is a tight filter.
We prove that $A$ is a prime filter.
Suppose that $a \vee b \in A$.
Let $\theta (a') = a$ and $\theta (b') = b$.
Since $a \sim b$ and $\theta$ is idempotent-pure, we have that $a' \sim b'$.
Thus $a' \vee b' \in A'$.
Now $\{a',b'\} \subseteq (a' \vee b')^{\downarrow}$ is a tight cover.
Thus, by assumption and without loss of generality, we have that $a' \in A'$ and so $a \in A$.
It follows that $A$ is a prime filter.
Suppose now that $A$ is a prime filter.
We prove that $A'$ is a tight filter.
Let $a' \in A'$ and let $b_{1}', \ldots, b_{n}'$ be a tight cover of $a'$.
Then $\bigvee_{i=1}^{n} b_{i}' \leq_{e} a'$ by Lemma~\ref{lem:nanaimo}.
Thus $\theta (a') = \theta (b_{1}') \vee \cdots \vee \theta (b_{n}')$ since $\theta$ is essential.
But $A$ is a prime filter.
It follows that $\theta (b_{i}') \in A$ for some $i$ and so $b_{i}' \in A'$ for some $i$, proving that $A'$ is a tight filter.
If $X$ is a tight filter in $S$ then $\overline{X}$ is a prime filter in $T$.
Let $\bigvee_{i=1}^{m} t_{i} \in \overline{X}$.
Then $\theta (x) \leq \bigvee_{i=1}^{m} t_{i}$ for some $x \in X$.
Let $\theta (y_{i}) = t_{i}$.
Since $\{t_{1}, \ldots, t_{m}\}$ is a compatible set so too is $\{y_{1}, \ldots, y_{m}\}$ since $\theta$ is idempotent-pure.
Put $y = \bigvee_{i=1}^{m} y_{i}$.
Put $x' = yx^{-1}x$.
Then $\theta (x') = \theta (x)$.
Thus $x \equiv x'$ and $x \in X$ and so $x' \in X$ by Lemma~\ref{lem:mars}.
Since $x' \leq y$ and $X$ is closed upwards, we have that $y \in X$.
But every tight filter is a prime filter and so $y_{i} \in X$ for some $i$.
It follows that $t_{i} \in \overline{X}$ for some $i$.
We prove that $X \mapsto \overline{X}$
is an order-isomorphism between the set of tight filters in $S$ and the set of prime filters in $T$.
Let $X$ be a tight filter in $S$.
Then $\overline{X}$ is a prime filter in $T$ and $X = \overline{X}'$.
Let $A$ be a prime filter in $T$.
Then $A'$ is a tight filter in $S$ and $A = \overline{A'}$.
This establishes the bijection.
The fact that it is an order-isomorphism is straightforward
on the basis of what we have proved.
We now prove the theorem.
Suppose first that every tight filter in $S$ is an ultrafilter.
We prove that every prime filter in $T$ is an ultrafilter which proves that $T$ is Boolean.
Let $P$ be a prime filter in $T$ and let $P \subseteq Q$ where $Q$ is a proper filter in $T$.
Then $P' \subseteq Q'$.
It follows that $P'$ is a tight filter and so, by assumption, it is an ultrafilter.
Thus $P' = Q'$.
Now $\overline{P'} = \overline{Q'}$.
It follows that $P = Q$ and we have shown that $P$ is an ultrafilter.
Conversely, suppose that $T$ is Boolean which is equivalent to saying that every prime filter in $T$ is an ultrafilter.
Let $X$ be a tight filter in $S$.
We prove that $X$ is an ultrafilter.
Let $X \subseteq Y$ where $Y$ is an ultrafilter.
Then $\overline{X} \subseteq \overline{Y}$.
Then $\overline{Y}$ is an ultrafilter and $\overline{X}$ is a prime filter.
Thus $\overline{X} = \overline{Y}$, by assumption.
It follows that $\overline{X}' = \overline{Y}'$
and so $X = Y$, as required.
\end{proof}
The following can easily be deduced from \cite{Lawson2010, Lawson2012}.
Let $A$ be a filter in the inverse semigroup $S$.
Define $\mathbf{d}(A) = (A^{-1}A)^{\uparrow}$.
Then $\mathbf{d}(A)$ is a filter in $S$ which is also an inverse subsemigroup.
Furthermore, $A = (a \mathbf{d}(A))^{\uparrow}$ for any $a \in A$.
Clearly, $0 \in A$ if and only if $0 \in \mathbf{d}(A)$.
Also, $\mathbf{d}(A) \cap \mathsf{E}(S)$ is a filter in $\mathsf{E}(S)$.
The following result shows that to check whether every tight filter is an ultrafilter
it is enough to restrict attention to the distributive lattice of idempotents.
\begin{lemma}\label{lem:rioja} Let $A$ be a filter in an inverse semigroup $S$.
Then $x \in \mathbf{d}(A)$ if and only if $a^{-1}a \leq x$ for some $a \in A$.
\end{lemma}
\begin{proof} Suppose that $x \in \mathbf{d}(A)$.
Then $a^{-1}b \leq x$ for some $a,b \in A$.
But $A$ is a filter and so there exists $c \leq a,b$.
It follows that $c^{-1}c \leq x$.
Conversely, suppose that $a^{-1}a \leq x$ for some $a \in A$.
Then it is immediate that $x \in \mathbf{d}(A)$.
\end{proof}
\begin{proposition}\label{prop:idempotents} Let $S$ be a distributive inverse semigroup.
Then each tight filter in $S$ is an ultrafilter in $S$
if and only if
each tight filter in $\mathsf{E}(S)$ is an ultrafilter in $\mathsf{E}(S)$.
\end{proposition}
\begin{proof} Observe first that we have the following:
$A$ is a prime filter (respectively, ultrafilter, tight filter) in $S$ if and only if $\mathbf{d}(A)$ is a prime filter (respectively, ultrafilter, tight filter) in $S$ if
and only if $\mathsf{E}(\mathbf{d}(A))$ is a prime filter (respectively, ultrafilter, tight filter)
in $\mathsf{E}(S)$.
We first prove the results relating $A$ and $\mathbf{d}(A)$.
Suppose that $A$ is a prime filter.
We prove that $\mathbf{d}(A)$ is a prime filter.
Let $x \vee y \in \mathbf{d}(A)$.
Then $a^{-1}a \leq x \vee y$ for some $a \in A$.
We have that $a^{-1}a = xa^{-1}a \vee ya^{-1}a$.
Whence $a = axa^{-1}a \vee aya^{-1}a$.
But $A$ is prime.
Without loss of generality, we may assume that $axa^{-1}a \in A$.
Thus $ax \in A$.
Now, $a^{-1}ax \in \mathbf{d}(A)$.
It follows that $x \in \mathbf{d}(A)$.
We now prove the converse.
Suppose that $\mathbf{d}(A)$ is prime.
Let $x \vee y \in A$.
Then $\mathbf{d}(x) \vee \mathbf{d}(y) \in \mathbf{d}(A)$.
But $\mathbf{d}(A)$ is prime.
Without loss of generality, we may assume that $\mathbf{d}(x) \in \mathbf{d}(A)$.
It follows that $(x \vee y)\mathbf{d}(x) \in A$ and so $x \in A$.
It is routine to check that $A$ is an ultrafilter if and only if $\mathbf{d}(A)$ is an ultrafilter.
The fact that $A$ is a tight filter if and only if $\mathbf{d}(A)$ is a tight filter follows by \cite[Lemma~5.9(1)]{LL}.
By \cite[Lemma 5.9(2)]{LL}, we have that $\mathbf{d}(A)$ is a tight filter (respectively, ultrafilter) if and only if $\mathsf{E}(\mathbf{d}(A))$ is a tight filter (respectively, ultrafilter).
We can now prove the proposition.
Suppose that each tight filter in $\mathsf{E}(S)$ is an ultrafilter in $\mathsf{E}(S)$.
Let $A$ be a tight filter in $S$.
Then $\mathbf{d}(A)$ is a tight filter in $S$.
Thus $\mathsf{E}(\mathbf{d}(A))$ is a tight filter in $\mathsf{E}(S)$.
By assumption, $\mathsf{E}(\mathbf{d}(A))$ is an ultrafilter in $\mathsf{E}(S)$.
Thus $\mathbf{d}(A)$ is an ultrafilter in $S$.
Finally, we deduce that $A$ is an ultrafilter in $S$.
The proof of the converse is now straightforward.
\end{proof}
\subsection{Specifics}
We now apply the results of the previous section to the specific inverse monoids constructed
in this paper.
By Lemma~\ref{lem:joinbasic}, Lemma~\ref{lem:oreo} and Lemma~\ref{lem:key-property},
the inverse monoid $\mathsf{R}(C)$ satisfies the conditions of Proposition~\ref{prop:panda}
and so $\equiv$ is idempotent-pure.
By Proposition~\ref{prop:dis}, $\mathsf{R}(C)$ is a distributive inverse $\wedge$-monoid and so
by Lemma~\ref{lem:idpt-pure-dist}, it follows that $\mathsf{R}(C)/{\equiv}$ is a distributive
inverse $\wedge$-monoid.
By Proposition~\ref{prop:three}, $\mathsf{R}(C)^{e}$ is $E$-unitary.
Thus by Lemma~\ref{lem:red-panda}, the group of units of $\mathsf{R}(C)/{\equiv}$
is isomorphic to $\mathscr{G}(C)$.
We have therefore proved the following.
\begin{theorem}\label{them:units}
Let $C$ be a higher rank graph with a finite number of identities which has no sources and is row finite.
Then the group $\mathscr{G}(C)$, defined at the end of Section~3, is isomorphic with the group of units of $\mathsf{R}(C)/{\equiv}$.
\end{theorem}
\noindent
{\bf Definition. }Let $C$ be a higher rank graph with a finite number of identities which has no sources and is row finite.
Put $\mathsf{B}(C) = \mathsf{R}(C)/{\equiv}$.\\
To prove that $\mathsf{R}(C)/{\equiv}$ is Boolean, it is enough to prove that
every tight filter in $\mathsf{R}(C)$ is an ultrafilter by Theorem~\ref{them:seven}.
To do this, we shall relate filters in $\mathsf{R}(C)$ to appropriate subsets of $C$.
Let $C$ be a category.
Let $A \subseteq C$ be a non-empty subset.
We say that it is a {\em filter} if it satisfies the following two conditions:
\begin{enumerate}
\item If $x,y \in A$ then there exist $u, v \in C$ such that $xu = yv \in A$.
In particular, this implies that the elements of $A$ are pairwise comparable.
\item If $x = yz$ and $x \in A$ then $y \in A$.
\end{enumerate}
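For example (this is purely illustrative), let $C$ be the free monoid $\{a,b\}^{\ast}$.
Condition (2) says that a filter is closed under taking prefixes,
and condition (1) forces any two of its elements to be comparable, that is, linearly ordered by the prefix relation.
The filters in $\{a,b\}^{\ast}$ are therefore exactly the sets of all prefixes of a finite or right-infinite word over $\{a,b\}$.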
\begin{lemma}\label{lem:filter-property} Let $C$ be a finitely aligned category.
Let $A$ be a filter in $C$ and let $a,b \in A$.
Suppose that $aC \cap bC = \{c_{1}, \ldots, c_{n}\}C$.
Then $c_{i} \in A$ for some $i$.
\end{lemma}
\begin{proof} Let $u,v \in C$ such that $z = au = bv \in A$.
Then $z \in aC \cap bC$ and so $z = c_{i}p$ for some $i$ and some $p \in C$.
But $z \in A$ and $A$ is a filter and so $c_{i} \in A$.
\end{proof}
Let $a \in C$ and let $\{a_{1}, \ldots, a_{m}\} \subseteq aC$, say $a_{i} = ap_{i}$.
We say that $\{a_{1}, \ldots, a_{m}\}$ is a {\em tight cover} of $a$
if whenever $z = ap$ there exist an $i$ and elements $q,r \in C$ such that
$zq = a_{i}r$; thus $z$ is comparable with some element $a_{i}$.
Let $A \subseteq C$ be a filter.
We say that it is {\em tight} if the following condition holds:
if $\{a_{1}, \ldots, a_{m}\}$ is a tight cover of $a$, where $a \in A$, then $a_{i} \in A$ for some $i$.
\begin{lemma}\label{lem:correspondence} Let $C$ be a finitely aligned cancellative category.
Then $$\{a_{1}a_{1}^{-1}, \ldots, a_{m}a_{m}^{-1}\} \subseteq (aa^{-1})^{\downarrow}$$ is a tight cover in $\mathsf{R}(C)$
if and only if $$\{a_{1}, \ldots, a_{m}\} \subseteq aC$$ is a large subset.
\end{lemma}
\begin{proof} Suppose that $\{a_{1}, \ldots, a_{m}\} \subseteq aC$ is a large subset.
Let $0 < bb^{-1} \leq aa^{-1}$.
Then $b = ap$ for some $p$.
Thus $b \in aC$.
It follows that $b$ is comparable with some element $a_{i}$.
Thus $z = bu = a_{i}v$ for some $u,v \in C$.
But then $zz^{-1} \leq bb^{-1}, a_{i}a_{i}^{-1}$.
Now let $0 < \bigvee_{j=1}^{n} b_{j}b_{j}^{-1} \leq aa^{-1}$ be an arbitrary non-zero idempotent beneath $aa^{-1}$.
Then $b_{j}b_{j}^{-1} \neq 0$ for some $j$, and $0 < b_{j}b_{j}^{-1} \leq aa^{-1}$.
By the argument above, $b_{j}b_{j}^{-1} \wedge a_{i}a_{i}^{-1} \neq 0$ for some $i$,
and hence $\left( \bigvee_{j=1}^{n} b_{j}b_{j}^{-1} \right) \wedge a_{i}a_{i}^{-1} \neq 0$.
It follows that $\{a_{1}a_{1}^{-1}, \ldots, a_{m}a_{m}^{-1}\}$ is a tight cover of $aa^{-1}$.
Suppose that $\{a_{1}a_{1}^{-1}, \ldots, a_{m}a_{m}^{-1}\} \subseteq (aa^{-1})^{\downarrow}$ is a tight cover.
Let $b \in aC$.
Then $b = ap$.
Thus $bb^{-1} \leq aa^{-1}$.
It follows that there is $zz^{-1} \leq bb^{-1}, a_{i}a_{i}^{-1}$ for some $i$.
Thus $z = bu = a_{i}v$ for some $u,v \in C$.
It follows that $\{a_{1}, \ldots, a_{m}\} \subseteq aC$ is a large subset.
\end{proof}
\begin{proposition}\label{prop:mo} Let $C$ be a strongly finitely aligned cancellative category with a finite number of identities.
Given a filter $A \subseteq C$, define
\[
\mathsf{P}(A) = \{xx^{-1} \colon x \in A\}^{\uparrow} \cap \mathsf{E}(\mathsf{R}(C)).
\]
Then $\mathsf{P}$ is a bijective correspondence between filters in $C$ (resp. maximal filters) and prime filters in $\mathsf{E}(\mathsf{R}(C))$ (resp. maximal filters).
Under this bijective correspondence, tight filters correspond to tight filters. The inverse of $\mathsf{P}$ is the map $\mathsf{F}$ given by
\[
\mathsf{F}(P) = \{x \in C \colon xx^{-1} \in P \}
\]
for each prime filter $P$ in $\mathsf{E}(\mathsf{R}(C))$.
\end{proposition}
\begin{proof}
Let $A \subseteq C$ be a filter in $C$. We claim that $\mathsf{P}(A)$ is a filter in $\mathsf{E}(\mathsf{R}(C))$.
Because of Lemma~\ref{lem:key-property}, whenever we have a relation $xx^{-1} \leq \bigvee_{i=1}^{m} a_{i}a_{i}^{-1}$ then,
in fact, $xx^{-1} \leq a_{i}a_{i}^{-1}$ for some $i$.
Bearing this in mind, let $aa^{-1}, bb^{-1} \in \mathsf{P}(A)$.
Then $xx^{-1} \leq aa^{-1}$ and $yy^{-1} \leq bb^{-1}$ for some $x,y \in A$.
Thus $x = ap$ and $y = bq$ for some $p,q \in C$.
But $A$ is a filter, so from $x = ap \in A$ and $y = bq \in A$ we deduce that $a,b \in A$.
By the defining property of filters, $aC \cap bC \cap A \neq \varnothing$;
that is, $au = bv \in A$ for some $u,v \in C$.
Put $z = au = bv$.
Then $zz^{-1} \in \mathsf{P}(A)$ and $zz^{-1} \leq aa^{-1}, bb^{-1}$.
It is now clear that $\mathsf{P}(A)$ is a filter in $\mathsf{E}(\mathsf{R}(C))$;
it is a proper filter by construction.
We prove that $\mathsf{P}(A)$ is a prime filter.
Suppose that $\bigvee_{i=1}^{m} x_{i}x_{i}^{-1} \in \mathsf{P}(A)$.
Then $xx^{-1} \leq \bigvee_{i=1}^{m} x_{i}x_{i}^{-1}$ for some $x \in A$.
Thus by Lemma~\ref{lem:key-property}, we have that $xx^{-1} \leq x_{i}x_{i}^{-1}$ for some $i$.
Hence $x = x_{i}p$.
Since $x \in A$ it follows that $x_{i} \in A$ and so $x_{i}x_{i}^{-1} \in \mathsf{P}(A)$.
Now let $P$ be a prime filter in $\mathsf{E}(\mathsf{R}(C))$.
Since $P$ is a prime filter the set $\mathsf{F}(P)$ is non-empty, and it is then
routine to check that $\mathsf{F}(P)$ is a filter in $C$.
It remains to show that the maps $A \mapsto \mathsf{P}(A)$ and $P \mapsto \mathsf{F}(P)$
are mutually inverse and order-preserving.
Clearly, $A \subseteq \mathsf{F}(\mathsf{P}(A))$.
Let $x \in \mathsf{F}(\mathsf{P}(A))$.
Then $yy^{-1} \leq xx^{-1}$ where $y \in A$.
Then $y = xp$.
Thus $x \in A$.
It follows that $A = \mathsf{F}(\mathsf{P}(A))$.
Clearly, $\mathsf{P}(\mathsf{F}(P)) \subseteq P$.
Let $\bigvee_{i=1}^{m} x_{i}x_{i}^{-1} \in P$.
Then $x_{i}x_{i}^{-1} \in P$ for some $i$.
Thus $x_{i} \in \mathsf{F}(P)$ and so $\bigvee_{i=1}^{m} x_{i}x_{i}^{-1} \in \mathsf{P}(\mathsf{F}(P))$.
It follows that $\mathsf{P}(\mathsf{F}(P)) = P$.
Suppose that $A$ and $B$ are filters in $C$ such that $A \subseteq B$.
Then $\mathsf{P}(A) \subseteq \mathsf{P}(B)$.
If $P$ and $Q$ are prime filters in $\mathsf{E}(\mathsf{R}(C))$ such that $P \subseteq Q$ then
$\mathsf{F}(P) \subseteq \mathsf{F}(Q)$.
It follows that $\mathsf{P}$ is an order-isomorphism between the poset of filters in $C$ and the
poset of prime filters in $\mathsf{E}(\mathsf{R}(C))$.
Since $\mathsf{P}$ is an order isomorphism, it restricts to a bijection between the
ultrafilters of $C$ and the maximal prime filters of $\mathsf{E}(\mathsf{R}(C))$.
Since every ultrafilter of $\mathsf{E}(\mathsf{R}(C))$ is a prime filter, the maximal prime
filters of $\mathsf{E}(\mathsf{R}(C))$ are the ultrafilters. So $\mathsf{P}$ restricts to a
bijection between ultrafilters of $C$ and ultrafilters of $\mathsf{E}(\mathsf{R}(C))$.
We conclude by showing that tight filters correspond to tight filters.
Suppose now that $A$ is a tight filter in $C$.
We prove that $\mathsf{P}(A)$ is a tight filter in $\mathsf{E}(\mathsf{R}(C))$.
Let $\{a_{1}a_{1}^{-1}, \ldots, a_{m}a_{m}^{-1}\}$ be a tight cover of $xx^{-1}$ where $xx^{-1} \in \mathsf{P}(A)$.
Then $\{a_{1}, \ldots, a_{m}\}$ is a tight cover of $x$ where $x \in A$.
By assumption, $a_{i} \in A$ for some $i$.
Thus $a_{i}a_{i}^{-1} \in \mathsf{P}(A)$ for some $i$.
It follows that $\mathsf{P}(A)$ is a tight filter in $\mathsf{E}(\mathsf{R}(C))$.
We now prove the converse.
Let $P$ be a tight filter in $\mathsf{E}(\mathsf{R}(C))$.
We prove that $\mathsf{F}(P)$ is a tight filter in $C$.
Let $a \in \mathsf{F}(P)$ and suppose that $\{a_{1}, \ldots, a_{m}\} \subseteq aC$ is large.
Then by Lemma~\ref{lem:correspondence}, we have that
$\{a_{1}a_{1}^{-1}, \ldots, a_{m}a_{m}^{-1}\}$ is a tight cover of $aa^{-1}$.
But $aa^{-1} \in P$ and $P$ is a tight filter and so $a_{i}a_{i}^{-1} \in P$ for some $i$.
It follows that $a_{i} \in \mathsf{F}(P)$ for some $i$, as required.
\end{proof}
We define a subset $A \subseteq C$ to be {\em good} if it has the following two properties:
\begin{enumerate}
\item Any two elements of $A$ are comparable.
\item For each $\mathbf{m} \in \mathbb{N}^{k}$
there exists $a \in A$ such that $d(a) = \mathbf{m}$.
\end{enumerate}
\begin{remark}\label{rem:good-properties}
{\em If $A$ is a good subset of $C$ then there is an identity $e$ in $C$ such that
$A \subseteq eC$. This follows from the fact that any two elements in $A$ are comparable.
In fact, $e \in A$ since $e$ is the unique element of $A$ such that $d(e) = \mathbf{0}$.}
\end{remark}
Observe that if $a,b \in A$, where $A$ is a good subset, and $d(a) = d(b)$ then, in fact, $a = b$ since,
being comparable, we have that $au = bv$ for some $u,v \in C$ and then we apply
Lemma~\ref{lem:levi}.
Our rationale for defining good subsets is explained by the following result
which is fundamental.
\begin{lemma}\label{lem:good-filter} Let $C$ be a finitely aligned $k$-graph.
Then the following three classes of subsets of $C$ coincide:
\begin{enumerate}
\item Good subsets.
\item Tight filters.
\item Maximal filters.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove first that good subsets are filters.
There are two claims we have to prove.
Let $A$ be a good subset of $C$. Let $x,y \in A$.
We prove that $xC \cap yC \cap A \neq \varnothing$.
By assumption, $z = xu = yv$ for some $u,v \in C$.
Let $z' \in A$ such that $d(z') = d(z)$.
Since $z',x \in A$ we have that $z's = xt$ for some $s,t \in C$.
Now, $d(z') = d(z) = d(x) + d(u)$, thus $d(z') \geq d(x)$.
It follows by Lemma~\ref{lem:levi} that $z' = xp$ for some $p \in C$.
Likewise, $z' = yq$ for some $q \in C$.
Thus $z' = xp = yq$.
It follows that $z' \in xC \cap yC \cap A$.
Let $x = yz$ where $x \in A$.
We want to prove that $y \in A$.
Let $y' \in A$ be the unique element such that $d(y') = d(y)$.
Since $x,y' \in A$ we have that $xu = y'v$ for some $u,v \in C$.
But then $yzu = y'v$.
However, $d(y) = d(y')$, thus by Lemma~\ref{lem:levi}, it follows that $y = y'$ and so $y \in A$, as claimed.
We now prove that good subsets are in fact tight filters.
Let $A$ be a good subset.
Let $a \in A$ and suppose that $\{a_{1}, \ldots, a_{m}\}$ is a tight cover of $a$.
We need to prove that $a_{i} \in A$ for some $i$.
Observe that $d(a_{i}) \geq d(a)$ for all $i$
since $a_{i} = ap_{i}$ for some $p_{i} \in C$.
Put $\mathbf{m} = \bigvee_{i=1}^{m} d(a_{i})$.
By assumption, there is a unique $z \in A$ such that $d(z) = \mathbf{m}$.
But $d(z) \geq d(a)$ and, since $a,z \in A$, they are comparable.
It follows by Lemma~\ref{lem:levi} that $z = as$ for some $s \in C$.
By assumption, $z$ is comparable with some element $a_{i}$.
But $d(z) \geq d(a_{i})$.
Thus $z = a_{i}t$ for some $t$ by Lemma~\ref{lem:levi}.
But $A$ is a filter, $z \in A$ and so $a_{i} \in A$, as required.
We now prove that every tight filter is a good subset.
Let $A$ be a tight filter.
We prove that it is a good subset.
Let $\mathbf{m} \in \mathbb{N}^{k}$ be arbitrary.
We prove that there is an element $y \in A$ such that $d(y) = \mathbf{m}$.
Let $a \in A$ be arbitrary.
If $d(a) \geq \mathbf{m}$ then $a = ya'$ where $d(y) = \mathbf{m}$.
But $A$ is a filter and so $y \in A$ and we are done.
It follows that,
putting $\mathbf{n} = d(a) \vee \mathbf{m}$,
we can assume in what follows that $\mathbf{n} > d(a)$.
Let $p_{1}, \ldots, p_{s} \in C$ be all the elements such that
$d(ap_{i}) = \mathbf{n}$.
(We assume that there are only finitely many,
which is fine since they all have the same degree $\mathbf{n} - d(a)$.)
We shall prove below that $\{ap_{1}, \ldots, ap_{s}\}$ is a tight cover of $a$.
Assuming this result,
then $ap_{j} \in A$ for some $j$ since $A$ is a tight filter.
But $d(ap_{j}) = \mathbf{n} = \mathbf{m} + (\mathbf{n} - \mathbf{m})$.
Thus $A$ contains an element of degree $\mathbf{m}$ by the UFP and the properties of filters.
It remains to prove that $\{ap_{1}, \ldots, ap_{s}\}$ is a tight cover of $a$
where $p_{1}, \ldots, p_{s} \in C$ is the set of all the elements such that
$d(ap_{i}) = \mathbf{n}$.
Observe that all the $p_{i}$ have the same range, $e = \mathbf{d}(a)$ say,
and they are precisely the elements in $eC$ of degree $\mathbf{n} - d(a)$.
Let $z = ap$ for some $p$.
We prove that $z$ is comparable with some $ap_{k}$.
Observe that $p \in eC$.
Choose $u$ such that $d(pu) \geq \mathbf{n} - d(a)$.
Then $pu = p_{k}v$ for some $v \in C$ and $k$.
Then $apu = ap_{k}v$ and so $zu = ap_{k}v$.
We now prove that good subsets are maximal filters.
Let $A$ be a good subset and suppose that $A \subseteq B$ where $B$ is a filter.
Let $b \in B$.
Then, by assumption, there exists $a \in A$ such that $d(a) = d(b)$.
But $a,b \in B$ and $B$ is a filter and so $a$ and $b$ are comparable.
It follows by Lemma~\ref{lem:levi} that $a = b$.
Thus $b \in A$.
It follows that $A = B$ and so $A$ is maximal.
Finally, we prove that maximal filters are good subsets;
in fact, we prove that maximal filters are tight filters.
Let $A$ be a maximal filter.
Let $x$ be an element of $C$ which is comparable with every element of $A$.
Let $x = yx'$.
Then $y$ is comparable with every element of $A$ since $x$ is.
Put $X = \{y \in C \colon x = yx' \text{ for some } x' \in C\}$.
Then $A \cup X$ is a filter.
But $A$ is maximal and so $X \subset A$;
this simply means that $x \in A$.
We prove that maximal filters are tight filters.
Let $a \in A$ and suppose that $\{a_{1}, \ldots, a_{m}\}$ is a tight cover of $a$.
If $a_{i} \notin A$ then, by the previous paragraph, there is some $b_{i} \in A$ so that
$b_{i}$ and $a_{i}$ are not comparable.
Now $b_{1}, \ldots, b_{m} \in A$.
By repeated application of part (1) of the definition of a filter,
we can find elements $u_{1}, \ldots, u_{m}$ such that
$b_{1}u_{1} = b_{2}u_{2} = \ldots = b_{m}u_{m} \in A$.
In fact, we can assume that there is an element $u$ such that
$z = au = b_{1}u_{1} = b_{2}u_{2} = \ldots = b_{m}u_{m} \in A$.
Since $\{a_{1}, \ldots, a_{m}\}$ is a tight cover of $a$ and $z = au \in aC$, we have that $zs = a_{i}t$ for some $i$ and some $s,t \in C$.
It follows that $b_{i}u_{i}s = a_{i}t$.
But this says that $b_{i}$ is comparable to $a_{i}$ which is a contradiction.
It follows that some $a_{j} \in A$ and so $A$ is a tight filter.
\end{proof}
By
Theorem~\ref{them:seven},
Lemma~\ref{lem:good-filter}
and Proposition~\ref{prop:idempotents}
we deduce that $\mathsf{R}(C)/{\equiv}$ is a Boolean inverse monoid.
By Theorem~\ref{them:units} the group of units of $\mathsf{R}(C)/{\equiv}$ is isomorphic to
$\mathsf{R}(C)^{e}/\sigma$ which is the group $\mathscr{G}(C)$.
We have therefore proved the following.
\begin{theorem}\label{them:first-main} Let $C$ be a higher rank graph with a finite number of identities which has no sources and is row finite.
Then $\mathsf{B}(C) = \mathsf{R}(C)/{\equiv}$ is a Boolean inverse monoid
whose group of units is isomorphic to $\mathscr{G}(C)$.
\end{theorem}
\section{Properties of the Boolean inverse monoid $\mathsf{B}(C)$}
In this section, we shall impose natural conditions on the $k$-graph $C$ and then determine the properties
acquired by the Boolean inverse monoid $\mathsf{B}(C)$ as a result.
\subsection{Aperiodicity}
Our first definition is taken from \cite{LS2010}.
Let $C$ be a $k$-graph.
We say that $C$ is {\em aperiodic} if for all $a,b \in C$ such that $a \neq b$ and $\mathbf{d}(a) = \mathbf{d}(b)$ there exists
an element $u \in C$ such that $au$ and $bu$ are not comparable.
To make use of this concept in an inverse semigroup setting we need some definitions.
A non-zero element $a$ of an inverse semigroup with zero is called an {\em infinitesimal} if $a^{2} = 0$.
Observe that $a$ is an infinitesimal if and only if $\mathbf{d}(a) \perp \mathbf{r}(a)$ by \cite[Lemma~2.5]{Lawson2017}.
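For example, in a finite symmetric inverse monoid the partial bijection with domain $\{1\}$ and image $\{2\}$ is an infinitesimal: its square is the zero element, and its domain idempotent and range idempotent are orthogonal.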
An inverse semigroup $S$ is said to be {\em fundamental} if the only elements of $S$ that commute
with all the idempotents of $S$ are themselves idempotents.
The following is an easy consequence of \cite[Theorem~5.2.9]{Lawson1998}.
\begin{lemma}\label{lem:finite-fundamental}
If a fundamental inverse semigroup has a finite number of idempotents then it is itself finite.
\end{lemma}
Now, let $S$ be a Boolean inverse monoid.
Denote the group of units of $S$ by $\mathsf{U}(S)$ and the Boolean algebra of idempotents of $S$ by $\mathsf{E}(S)$.
There is an action of $\mathsf{U}(S)$ on $\mathsf{E}(S)$ given by $e \mapsto geg^{-1}$.
We call this the {\em natural action}.
The following was proved as \cite[Proposition~3.1]{Lawson2017}.
\begin{lemma}\label{lem:faithful} Let $S$ be a Boolean inverse monoid.
Then the natural action is faithful if and only if $S$ is fundamental.
\end{lemma}
The following is a souped up version of \cite[Lemma~3.2]{Lawson2017}.
In the proof of the lemma below, we use the result that in a Boolean algebra
$e \leq f$ if and only if $e\bar{f} = 0$.
\begin{lemma}\label{lem:plum}
Let $S$ be a Boolean inverse monoid.
Let $e$ be an idempotent and $a$ an element such that $ea \neq ae$.
Then there is an idempotent $f \leq e$ such that $f \perp afa^{-1}$.
\end{lemma}
\begin{proof} There are two cases.
{\bf Case~1}: Suppose that $e \overline{(aea^{-1})} \neq 0$.
Put $f = e \overline{(aea^{-1})} \leq e$.
Then
$f(afa^{-1}) = e \overline{(aea^{-1})}a(e \overline{(aea^{-1})})a^{-1}$.
Thus
$$f(afa^{-1})
= e \overline{(aea^{-1})}a(ea^{-1} a\overline{(aea^{-1})})a^{-1}
= e \overline{(aea^{-1})}(aea^{-1})a \overline{(aea^{-1})} a^{-1}
= 0,$$
and the result is proved.
{\bf Case~2}: Now, suppose that $e \overline{(aea^{-1})} = 0$.
Then $e \leq aea^{-1}$ and so $ea \leq ae$.
For the sake of a contradiction, suppose that $aea^{-1}\overline{e} = 0$.
Then $aea^{-1} \leq e$ and so $ae \leq ea$.
It follows that $ea = ae$, which is a contradiction.
Thus, in fact, $aea^{-1}\overline{e} \neq 0$.
Put $h = aea^{-1}\overline{e}$ and $f = a^{-1}ha$.
Suppose that $f = 0$.
Then $aha^{-1} = 0$.
But $aha^{-1} = h \neq 0$.
It follows that $f \neq 0$.
Also,
$f = a^{-1}ha = a^{-1}aea^{-1}\overline{e}a = ea^{-1}\overline{e}a \leq e$.
Finally, $afa^{-1} = (aa^{-1})h(aa^{-1}) = h$ and $hf = aea^{-1}\overline{e}\,ea^{-1}\overline{e}a = 0$ since $\overline{e}e = 0$; thus $afa^{-1}f = 0$.
\end{proof}
The following is now immediate by the above lemma when we put $b = af$.
\begin{corollary}\label{cor:plum} Suppose that $ae \neq ea$.
Then there is an infinitesimal $b$ such that $b \leq a$ and $b^{-1}b \leq e$.
\end{corollary}
\begin{proposition}\label{prop:infinitesimal} Let $S$ be a Boolean inverse monoid.
Then the following are equivalent:
\begin{enumerate}
\item $S$ is fundamental.
\item Each non-idempotent element of $S$ is above an infinitesimal.
\end{enumerate}
\end{proposition}
\begin{proof} (1)$\Rightarrow$(2).
Let $a$ be a non-idempotent element.
Then, since $S$ is fundamental, there is an idempotent $e$ such that $ae \neq ea$.
Thus, by Corollary~\ref{cor:plum}, there is an infinitesimal $b \leq a$.
(2)$\Rightarrow$(1).
Let $a$ be a non-idempotent element.
Then $b \leq a$ where $b$ is an infinitesimal.
In particular, $b \neq 0$.
Suppose that $a$ commutes with $b^{-1}b$.
Then $ab^{-1}b = b^{-1}ba$.
But $ab^{-1}b = b$ because $b \le a$, and so $b = b^{-1}ba$.
It follows that $b^{2} = ba$.
But $b^{2} = 0$, so $ba = 0$ and hence $b^{-1}ba = 0$.
It follows that $b = 0$, which is a contradiction.
We have shown that $a$ cannot commute with $b^{-1}b$.
\end{proof}
\begin{lemma}\label{lem:needful} Let $C$ be a $k$-graph.
Then $C$ is aperiodic if and only if each non-idempotent basic morphism $ab^{-1}$ of $\mathsf{R}(C)$ lies above an infinitesimal basic morphism.
\end{lemma}
\begin{proof} Suppose that $C$ is aperiodic.
Let $ab^{-1}$ be a non-idempotent basic morphism.
Thus $a \neq b$ and $\mathbf{d}(a) = \mathbf{d}(b)$.
By assumption, there is an element $u \in C$ such that $au$ and $bu$ are incomparable.
Observe that $(au)(bu)^{-1} \leq ab^{-1}$ by Lemma~\ref{lem:oreo}.
To say that $au$ and $bu$ are incomparable means precisely that the idempotents $(au)(au)^{-1}$ and $(bu)(bu)^{-1}$
are orthogonal by Lemma~\ref{lem:oreo}.
Thus $(au)(bu)^{-1}$ is an infinitesimal and is below $ab^{-1}$.
Conversely, suppose that each non-idempotent basic morphism lies above an infinitesimal basic morphism. Let $a \neq b$ with $\mathbf{d}(a) = \mathbf{d}(b)$.
Then $ab^{-1}$ is a non-idempotent basic morphism.
By assumption, there exists an infinitesimal $cd^{-1}$ such that $cd^{-1} \leq ab^{-1}$.
By Lemma~\ref{lem:oreo}, we have that $c = au$ and $d = bu$ for some $u \in C$.
By assumption, $(au)(au)^{-1}$ and $(bu)(bu)^{-1}$ are orthogonal and so $au$ and $bu$ are incomparable.
We have therefore proved that $C$ is aperiodic.
\end{proof}
\begin{lemma}\label{lem:juice} Let $S$ be a Boolean inverse semigroup.
If $\bigvee_{j=1}^{n} a_{j}$ is an infinitesimal then each $a_{j}$ is an infinitesimal.
\end{lemma}
\begin{proof} By assumption, $\bigvee_{j=1}^{n} \bigvee_{i=1}^{n} a_{j}a_{i} = 0$.
In particular, $a_{j}^{2} = 0$ for each $1 \leq j \leq n$.
\end{proof}
The following theorem is key.
\begin{theorem}\label{them:fundamental} Let $C$ be a strongly finitely aligned higher rank graph with a finite number of identities which has no sources and is row finite.
Then the following are equivalent:
\begin{enumerate}
\item $\mathsf{B}(C)$ is fundamental.
\item $C$ is aperiodic.
\end{enumerate}
\end{theorem}
\begin{proof} (1)$\Rightarrow$(2).
We shall use Lemma~\ref{lem:needful} to prove that $C$ is aperiodic.
Let $ab^{-1}$ be a non-idempotent basic morphism of $\mathsf{R}(C)$.
Then $[ab^{-1}]$ is not an idempotent since $\equiv$ is idempotent-pure.
Thus, by Proposition~\ref{prop:infinitesimal}, $[\bigvee_{j=1}^{n}c_{j}d_{j}^{-1}] \leq [ab^{-1}]$
where $[\bigvee_{j=1}^{n}c_{j}d_{j}^{-1}]$ is an infinitesimal.
Thus by Lemma~\ref{lem:juice}, each $[c_{j}d_{j}^{-1}]$ is an infinitesimal.
But then each $c_{j}d_{j}^{-1}$ is an infinitesimal, since $\equiv$ is $0$-restricted.
Observe that $(ab^{-1})(\bigvee_{j=1}^{n}d_{j}d_{j}^{-1})$ maps to $[\bigvee_{j=1}^{n}c_{j}d_{j}^{-1}]$.
Thus $(ab^{-1})(\bigvee_{j=1}^{n}d_{j}d_{j}^{-1})$ is an infinitesimal and lies below $ab^{-1}$.
By relabelling if necessary, we can assume that $(ab^{-1})d_{1}d_{1}^{-1}$ is non-zero and an infinitesimal in $\mathsf{R}(C)$ and lies below $ab^{-1}$.
But $(ab^{-1})d_{1}d_{1}^{-1}$ is a join of basic morphisms and each of these basic morphisms must be either zero or an infinitesimal.
Pick a non-zero such basic morphism $cd^{-1}$.
Then $cd^{-1} \leq ab^{-1}$ and is an infinitesimal.
It follows by Lemma~\ref{lem:needful} that $C$ is aperiodic.
(2)$\Rightarrow$(1).
Let $[\bigvee_{i=1}^{m} a_{i}b_{i}^{-1}]$ be a non-idempotent element of $\mathsf{B}(C)$.
Then the element $\bigvee_{i=1}^{m} a_{i}b_{i}^{-1}$ is a non-idempotent in $\mathsf{R}(C)$ since $\equiv$ is idempotent-pure.
It follows that for some $i$ we have that $a_{i} \neq b_{i}$.
By Lemma~\ref{lem:needful}, there is an infinitesimal basic morphism $cd^{-1} \leq a_{i}b_{i}^{-1}$.
Thus $[cd^{-1}] \leq [\bigvee_{i=1}^{m} a_{i}b_{i}^{-1}]$.
Because $cd^{-1}$ is non-zero it follows that $[cd^{-1}]$ is non-zero.
We have therefore proved that each non-idempotent element of $\mathsf{B}(C)$ is above an infinitesimal.
It follows by Proposition~\ref{prop:infinitesimal}, that $\mathsf{B}(C)$ is fundamental.
\end{proof}
\subsection{Cofinality}
Our second definition is also taken from \cite{LS2010}.
Define $C$ to be {\em cofinal} if for all $e,f \in C_{o}$ there exists a large subset $X \subseteq fC$
such that $eC\mathbf{d}(x) \neq \varnothing$ for all $x \in X$.
A semigroup ideal $I$ of a Boolean inverse semigroup is said to be an {\em additive ideal}
if $a,b \in I$ and $a \sim b$ implies that $a \vee b \in I$.
A Boolean inverse semigroup is called {\em $0$-simplifying} if it has no non-trivial additive ideals.
A Boolean inverse semigroup that is both fundamental and $0$-simplifying is called {\em simple}.
\begin{lemma}\label{lem:orange} Let $C$ be a $k$-graph.
Then each non-trivial additive ideal of $\mathsf{B}(C)$ contains an idempotent of the form $[ee^{-1}]$ where $e$ is an identity of $C$.
\end{lemma}
\begin{proof} We begin with some generalities on non-trivial additive ideals $I$ in Boolean inverse semigroups $S$.
Let $a \in I$. Then $aa^{-1} \in I$ since $I$ is a semigroup ideal.
On the other hand if $a^{-1}a \in I$ then $a \in I$ since $a = a(a^{-1}a)$.
It follows that non-trivial additive ideals always contain idempotents.
If $a \in I$ then $ae \in I$ for all $e \in \mathsf{E}(S)$.
It follows that additive ideals are always order ideals.
Let $I$ be a non-trivial additive ideal in $\mathsf{B}(C)$.
Then, by the above, we can assume that $[xx^{-1}] \in I$ for some $x \in C$.
Let $e = \mathbf{d}(x)$.
Then $ex^{-1}$ is a well-defined basic morphism and so belongs to $\mathsf{R}(C)$.
Observe that $\mathbf{d}(ex^{-1}) = xx^{-1}$ and so $[ex^{-1}] \in I$.
It follows that $[ee^{-1}] \in I$.
\end{proof}
Let $S$ be a Boolean inverse monoid.
If $X \subseteq S$ define $\mathbf{d}(X) = \{\mathbf{d}(x) \colon x \in X\}$ and $\mathbf{r}(X) = \{\mathbf{r}(x) \colon x \in X\}$.
Let $e$ and $f$ be non-zero idempotents in $S$.
We write $e \preceq f$, and say there is a {\em pencil} from $e$ to $f$, if there is a finite subset $X$ of $S$ such that $e = \bigvee \mathbf{d}(X)$
and $\bigvee \mathbf{r}(X) \leq f$.
By \cite[Lemma~4.1]{Lawson2016}, if $I$ is an additive ideal of $S$ and $f \preceq e \in I$ then $f \in I$.
The following was first proved in \cite{Lenz} but we give the proof for completeness.
\begin{lemma}\label{lem:cake} Let $S$ be a Boolean inverse semigroup.
Let $e$ be an idempotent.
Then $I = (SeS)^{\vee}$ is an additive ideal of $S$, and an idempotent $f$ belongs to $I$ if and only if $f \preceq e$.
\end{lemma}
\begin{proof} Let $f \in I$ be an idempotent. We may write $f = \bigvee_{i=1}^{m} e_{i}$ where $e_{1}, \ldots , e_{m} \in SeS$.
It is easy to prove that $e_{i} = x_{i}^{-1}x_{i}$, where $x_{i} \in eSe_{i}$.
Thus $x_{i}x_{i}^{-1} \leq e$.
It follows that $X = \{x_{1},\ldots,x_{m}\}$ is a pencil from $f$ to $e$.
We now prove the converse.
Suppose that $f \preceq e$.
Then there is a pencil $X$ from $f$ to $e$.
Now $\bigvee \mathbf{r}(X) \leq e$ and so $\mathbf{r}(x) \in (SeS)^{\vee}$ for all $x \in X$.
Thus $\mathbf{d}(x) \in (SeS)^{\vee}$ for all $x \in X$
and so $f = \bigvee \mathbf{d}(X) \in (SeS)^{\vee}$.
\end{proof}
\begin{lemma}\label{lem:simple} Let $C$ be a finitely aligned $k$-graph.
The following are equivalent:
\begin{enumerate}
\item $C$ is cofinal.
\item For all idempotents in $\mathsf{R}(C)$ of the form $ee^{-1}$ and $ff^{-1}$, where $e,f \in C_{o}$,
there exists a tight cover $\{x_{1}x_{1}^{-1}, \ldots, x_{n}x_{n}^{-1}\}$ of $ff^{-1}$ and
elements $a_{1}a_{1}^{-1}, \ldots, a_{n}a_{n}^{-1} \leq ee^{-1}$ such that
$a_{1}x_{1}^{-1}, \ldots, a_{n}x_{n}^{-1}$ are well-defined basic morphisms.
\end{enumerate}
\end{lemma}
We can now prove our main result about cofinality.
\begin{theorem}\label{them:zero-simplifying}
Let $C$ be a strongly finitely aligned higher rank graph with a finite number of identities which has no sources and is row finite.
Then the following are equivalent:
\begin{enumerate}
\item $C$ is cofinal.
\item $\mathsf{B}(C)$ is $0$-simplifying.
\end{enumerate}
\end{theorem}
\begin{proof} (1)$\Rightarrow$(2).
Let $I$ be a non-trivial additive ideal of $\mathsf{B}(C)$.
By Lemma~\ref{lem:orange}, it contains an idempotent of the form $[ee^{-1}]$
for some $e \in C_{o}$.
Let $f \in C_{o}$ be arbitrary.
Then by Lemma~\ref{lem:simple}, there exists a tight cover $\{x_{1}x_{1}^{-1}, \ldots, x_{n}x_{n}^{-1}\}$ of $ff^{-1}$ and
elements $a_{1}a_{1}^{-1}, \ldots, a_{n}a_{n}^{-1} \leq ee^{-1}$ such that
$a_{1}x_{1}^{-1}, \ldots, a_{n}x_{n}^{-1}$ are well-defined basic morphisms.
Observe that $\bigvee \mathbf{d}[a_{i}x_{i}^{-1}] = [ff^{-1}]$ and that $\mathbf{r}[a_{i}x_{i}^{-1}] \leq [ee^{-1}]$.
Thus $\{[a_{1}x_{1}^{-1}], \ldots, [a_{n}x_{n}^{-1}]\}$ is a pencil from $[ff^{-1}]$ to $[ee^{-1}]$.
But $I$ is an additive ideal and so $[ff^{-1}] \in I$.
Thus $I$ contains every idempotent of the form $[ff^{-1}]$ where $f \in C_{o}$.
Let $[ab^{-1}]$ be arbitrary.
Then $\mathbf{d}(ab^{-1}) = bb^{-1} \leq \mathbf{r}(b)\mathbf{r}(b)^{-1}$ and
$\mathbf{r}(ab^{-1}) = aa^{-1} \leq \mathbf{r}(a)\mathbf{r}(a)^{-1}$.
But $I$ being an additive ideal is also an order ideal.
Thus $\mathbf{d}(ab^{-1}), \mathbf{r}(ab^{-1}) \in I$.
But in an inverse semigroup, an ideal that contains $x^{-1}x$ must contain $x$.
Thus $[ab^{-1}] \in I$.
But $I$ is additive and so $I$ contains all elements of $\mathsf{B}(C)$.
We have therefore shown that $I = \mathsf{B}(C)$.
(2)$\Rightarrow$(1).
Let $e$ and $f$ be identities in the category $C$.
Observe that $I = (\mathsf{B}(C)[ee^{-1}]\mathsf{B}(C))^{\vee}$ is an additive ideal of $\mathsf{B}(C)$ containing $[ee^{-1}]$.
Then, by assumption, $I = \mathsf{B}(C)$.
Thus $[ff^{-1}] \in I$.
It follows by Lemma~\ref{lem:cake} that $[ff^{-1}] \preceq [ee^{-1}]$.
Thus there is a pencil $X = \{\mathbf{x}_{1}, \ldots, \mathbf{x}_{m}\}$ in $\mathsf{B}(C)$ such that
$[ff^{-1}] = \bigvee_{i=1}^{m} \mathbf{d}(\mathbf{x}_{i})$ and $\mathbf{r}(\mathbf{x}_{i}) \leq [ee^{-1}]$.
Without loss of generality, we can assume that $X = \{[a_{1}b_{1}^{-1}], \ldots, [a_{m}b_{m}^{-1}]\}$.
We therefore have that $ff^{-1} \equiv \bigvee_{i=1}^{m}b_{i}b_{i}^{-1}$ and $a_{i}a_{i}^{-1} \equiv ee^{-1}a_{i}a_{i}^{-1}$.
We carry out the multiplications $ee^{-1}a_{i}b_{i}^{-1}ff^{-1}$, remove any zero elements, write as joins of basic morphisms and then,
without loss of generality, assume that our elements are all basic morphisms.
We therefore obtain the following (relabelling where necessary):
there is a set of elements $\{b_{1}, \ldots, b_{m}\} \subseteq fC$ which is large,
a set $\{a_{1},\ldots, a_{m}\} \subseteq eC$,
and the basic morphisms $a_{1}b_{1}^{-1}, \ldots, a_{m}b_{m}^{-1}$ are all defined.
By Lemma~\ref{lem:simple}, it follows that $C$ is cofinal.
\end{proof}
By Theorem~\ref{them:fundamental} and Theorem~\ref{them:zero-simplifying}, we have proved the following.
\begin{theorem}\label{them:main} Let $C$ be a finitely aligned higher rank graph with a finite number of identities which has no sources and is row finite.
Then the following are equivalent:
\begin{enumerate}
\item $C$ is aperiodic and cofinal.
\item The Boolean inverse monoid $\mathsf{B}(C)$ is simple.
\end{enumerate}
\end{theorem}
It is clear that the Boolean inverse monoids $\mathsf{B}(C)$ are countable.
We now apply some results from \cite{Lawson2016}.
We call the countable atomless Boolean algebra the {\em Tarski algebra}
and use the term {\em Tarski monoid} to mean a countable, Boolean inverse $\wedge$-monoid whose semilattice of idempotents is the Tarski algebra.
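For example, the Boolean algebra of clopen subsets of the Cantor space is countable and atomless and so, by the classical uniqueness theorem for countable atomless Boolean algebras, it is isomorphic to the Tarski algebra.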
The following is \cite[Proposition~4.4]{Lawson2016}.
\begin{proposition}\label{prop:old} Let $S$ be a countable Boolean inverse $\wedge$-monoid.
If $S$ is $0$-simplifying then either $S$ is a Tarski monoid or the semilattice of idempotents of $S$ is finite.
\end{proposition}
By Lemma~\ref{lem:finite-fundamental},
a fundamental inverse semigroup in which the semilattice of idempotents is finite must itself be finite.
The finite simple Boolean inverse monoids are precisely the finite symmetric inverse monoids \cite[Theorem~4.18]{Lawson2012};
the groups of units of the finite symmetric inverse monoids are the finite symmetric groups.
With the help of \cite[Theorem~2.22]{Lawson2016}, we have therefore proved the following which is our main theorem.
\begin{theorem}\label{them:A} Let $C$ be a finitely aligned higher rank graph with a finite number of identities which has no sources and is row finite.
If $C$ is aperiodic and cofinal then there are two possibilities:
\begin{enumerate}
\item The Boolean inverse monoid $\mathsf{B}(C)$ is finite and isomorphic to a finite symmetric inverse monoid.
Its group of units is a finite symmetric group.
\item The Boolean inverse monoid $\mathsf{B}(C)$ is countably infinite.
Its group of units is isomorphic
to a full subgroup of the group of self-homeomorphisms of the Cantor space
which acts minimally and in which each element has clopen support.
\end{enumerate}
\end{theorem}
\section{The groupoid associated with $\mathsf{B}(C)$}
The goal of this section is to prove that the \'etale groupoid associated with the Boolean inverse monoid $\mathsf{B}(C)$
under non-commutative Stone duality
is the usual groupoid $\mathcal{G}(C)$ associated with the higher rank graph $C$.
Let $C$ be a $k$-graph.
Define $\Omega_{k}$ to be the category of all ordered pairs $(\mathbf{m}, \mathbf{n}) \in \mathbb{N}^k \times \mathbb{N}^k$ where $\mathbf{m} \leq \mathbf{n}$;
see \cite[Examples~1.7(ii)]{KP}.
A {\em $k$-tiling} in $C$ is a degree-preserving functor $w$ from $\Omega_{k}$ to $C$. These are called \emph{infinite paths} in $C$ elsewhere in the literature (for example in \cite{KP}).
Explicitly, $w$ satisfies the following three conditions:
\begin{enumerate}
\item $w(\mathbf{m},\mathbf{m})$ is an identity.
\item $w(\mathbf{m}, \mathbf{n})w(\mathbf{n},\mathbf{p}) = w(\mathbf{m},\mathbf{p})$.
\item $d(w(\mathbf{m},\mathbf{n})) = \mathbf{n} - \mathbf{m}$.
\end{enumerate}
Denote the set of all $k$-tilings of $C$ by $C^{\infty}$.
For each $\mathbf{p} \in \mathbb{N}^{k}$ and $w \in C^{\infty}$ define $\sigma^{\mathbf{p}}(w) \in C^{\infty}$ by
$\sigma^{\mathbf{p}}(w)(\mathbf{m}, \mathbf{n}) = w(\mathbf{p} + \mathbf{m}, \mathbf{p} + \mathbf{n})$.
Put $e = w(\mathbf{0},\mathbf{0})$, an identity.
Denote by $C^{\infty}(e)$ all $k$-tilings $w$ such that $e = w(\mathbf{0},\mathbf{0})$.
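For example, when $k = 1$ a $1$-tiling $w$ with $w(0,0) = e$ amounts to an infinite path: writing $a_{i} = w(i-1,i)$, we have $\mathbf{r}(a_{1}) = e$, $\mathbf{d}(a_{i}) = \mathbf{r}(a_{i+1})$ for all $i$, and $w(m,n) = a_{m+1}a_{m+2} \cdots a_{n}$ whenever $m \leq n$.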
Define $\mathcal{G}(C)$, the groupoid associated with $C$, as follows \cite{KP, FMY}.
Its elements are triples $(w_{1}, \mathbf{n}, w_{2}) \in C^{\infty} \times \mathbb{Z}^{k} \times C^{\infty}$
where there are elements $\mathbf{l}, \mathbf{m} \in \mathbb{N}^{k}$ such that
$\mathbf{n} = \mathbf{l} - \mathbf{m}$ and $\sigma^{\mathbf{l}}(w_{1}) = \sigma^{\mathbf{m}}(w_{2})$.
Define $\mathbf{d}(w_{1}, \mathbf{n}, w_{2}) = (w_{2}, \mathbf{0}, w_{2})$ and define $\mathbf{r}(w_{1}, \mathbf{n}, w_{2}) = (w_{1}, \mathbf{0}, w_{1})$.
Multiplication is defined by $(w_{1}, \mathbf{m}, w_{2})(w_{2}, \mathbf{n}, w_{3}) = (w_{1}, \mathbf{m} + \mathbf{n}, w_{3})$
and the inverse by $(w_{2}, \mathbf{n},w_{1})^{-1} = (w_{1}, -\mathbf{n}, w_{2})$.
A topology is defined on $\mathcal{G}(C)$ with basis elements of the form
$$Z(x,y) = \{ (xw, d(x) - d(y), yw) \colon w \in C^{\infty}(\mathbf{d}(x))\}$$
where $\mathbf{d}(x) = \mathbf{d}(y)$.
The remainder of this section is devoted to proving that the groupoid $\mathcal{G}(C)$
is the \'etale groupoid associated with the Boolean inverse monoid $\mathsf{B}(C)$
when $C$ is a $k$-graph.
\begin{lemma}\label{lem:good-tilings} Let $C$ be a $k$-graph. For each $k$-tiling $w$, let
\[
\mathscr{C}_w := \{w(\mathbf{0}, \mathbf{m}) \colon \mathbf{m} \in \mathbb{N}^{k}\}.
\]
Then the map $w \mapsto \mathscr{C}_w$ is a bijection between $k$-tilings of $C$ and good subsets of $C$.
\end{lemma}
\begin{proof}Let $w \colon \Omega_{k} \rightarrow C$ be a $k$-tiling.
Observe that $\mathbf{r}(w(\mathbf{0}, \mathbf{m})) = \mathbf{r}(w(\mathbf{0}, \mathbf{0})) = e$, say.
Thus $\mathscr{C}_{w} \subseteq eC$.
The elements of $\mathscr{C}_{w}$ are pairwise comparable and for each $\mathbf{n} \in \mathbb{N}^{k}$
there exists $x \in \mathscr{C}_{w}$ such that $d(x) = \mathbf{n}$.
So $\mathscr{C}_w$ is a good subset.
Now let $A \subseteq C$ be a good subset. Then \cite[Remarks~2.2]{KP}
shows that there is a unique $k$-tiling $w_A$ such that $\mathscr{C}_{w_A} = A$.
\end{proof}
Let $A$ be a good subset of $C$.
By Lemma~\ref{lem:good-filter}, this is (precisely) a maximal filter in $C$.
Define $\mathsf{F}(A) = \{[xx^{-1}] \colon x \in A \}^{\uparrow}$ in $\mathsf{B}(C)$.
By Theorem~\ref{them:seven} and Proposition~\ref{prop:mo}, this is an ultrafilter in $\mathsf{B}(C)$.
It is worth recalling the proof of this.
If $x,y \in A$ then, since $A$ is a good subset, we have that $xC \cap yC \cap A \neq \varnothing$
by Lemma~\ref{lem:good-filter}.
Thus $a = xu = yv$ for some $u,v \in C$ and $a \in A$.
Observe that $aa^{-1} \leq xx^{-1}, yy^{-1}$.
It follows that $[aa^{-1}] \leq [xx^{-1}], [yy^{-1}]$.
It is a prime filter (and so an ultrafilter by Theorem~\ref{them:first-main} and \cite[Lemma~3.20]{LL})
as a result of the following argument.
Let $[xx^{-1}] \leq [yy^{-1}]$ where $x \in A$.
Suppose that $xC \cap yC = \{u_{1}, \ldots, u_{m}\}C$.
Then $xx^{-1} \equiv \bigvee_{i=1}^{m}u_{i}u_{i}^{-1}$.
Thus $\{u_{1}, \ldots, u_{m}\} \subseteq xC$ is a tight cover of $x$.
It follows that $u_{i} \in A$, for some $i$, since good subsets are tight filters by Lemma~\ref{lem:good-filter}.
Now $u_{i} = yc$ for some $c \in C$.
Thus $y \in A$.\\
\noindent
{\bf Definition. }Let $C$ be a higher rank graph.
A subset $A \subseteq C$ is called {\em expanding} if each pair of elements of $A$ is comparable
and for each $\mathbf{m} \in \mathbb{N}^{k}$ there exists $a \in A$ such that $d(a) \geq \mathbf{m}$.\\
\begin{lemma}\label{lem:boozy} Let $C$ be a higher rank graph.
Let $A$ be an expanding subset of $C$.
Define $\mathsf{Pref}(A)$ to be all elements $x \in C$ such that $a = xu$ for some $a \in A$ and $u \in C$.
Then $\mathsf{Pref}(A)$ is a good subset.
\end{lemma}
\begin{proof} Let $x,y \in \mathsf{Pref}(A)$.
Then $a = xu$ and $b = yv$ for some $a,b \in A$ and $u,v \in C$.
The elements $a$ and $b$ are comparable.
Thus $xuc = yvd$ for some $c,d \in C$.
It follows that $x$ and $y$ are comparable.
Now let $\mathbf{m} \in \mathbb{N}^{k}$ be arbitrary.
Then there exists $a \in A$ such that $d(a) \geq \mathbf{m}$.
Let $d(a) = \mathbf{m} + \mathbf{n}$.
By the UFP, there exists $x,y \in C$ such that $a = xy$, $d(x) = \mathbf{m}$ and $d(y) = \mathbf{n}$.
By definition, $x \in \mathsf{Pref}(A)$ and $d(x) = \mathbf{m}$.
We have therefore proved that $\mathsf{Pref}(A)$ is a good subset.
\end{proof}
\begin{lemma}\label{lem:hocus-pocus} Let $C$ be a $k$-graph.
Suppose that $A$ is an expanding subset such that $A \subseteq \mathbf{d}(x)C$.
Then $xA$ is an expanding subset.
\end{lemma}
\begin{proof} Let $xa, xb \in xA$.
Then $a,b \in A$ and so are comparable.
Thus $au = bv$ for some $u,v \in C$.
It follows that $(xa)u = (xb)v$ and so $xa$ and $xb$ are comparable.
Let $\mathbf{m} \in \mathbb{N}^{k}$.
Then there exists $a \in A$ such that $d(a) \geq \mathbf{m}$.
It follows that $d(xa) \geq \mathbf{m}$.
We have therefore proved that $xA$ is an expanding subset.
\end{proof}
\noindent
{\bf Definition. }Let $C$ be a higher rank graph.
Let $A$ be a good subset and let $x \in A$.
Define $x^{-1}A$ to be all elements $a$ such that $xa \in A$.\\
\begin{lemma}\label{lem:shazam}
Let $C$ be a higher rank graph.
Let $A$ be a good subset and let $x \in A$.
Then $x^{-1}A$ is an expanding set.
\end{lemma}
\begin{proof} The product $x\mathbf{d}(x)$ is defined.
Thus $\mathbf{d}(x) \in x^{-1}A$.
Let $u,v \in x^{-1}A$.
Then $xu,xv \in A$.
Thus $xub = xvc$ for some $b,c \in C$.
By cancellation, $ub = vc$.
Thus $u$ and $v$ are comparable.
Let $\mathbf{m} \in \mathbb{N}^{k}$.
Put $\mathbf{n} = \mathbf{m} + d(x)$.
Then there exists $a \in A$ such that $\mathbf{n} = d(a)$.
By the UFP, we can write $a = xy$ where $d(y) = \mathbf{m}$.
Thus $y \in x^{-1}A$.
\end{proof}
The proof of the following is immediate.
\begin{lemma}\label{lem:bien} Let $C$ be a $k$-graph and let $A$ be a good subset.
Suppose that $\exists xy$ and $xy \in A$.
Then $(xy)^{-1}A = y^{-1}(x^{-1}A)$.
\end{lemma}
Let $A$ be a good subset and let $\mathbf{m} \in \mathbb{N}^{k}$.
Then there is a unique $x \in A$ such that $d(x) = \mathbf{m}$.
Define
$$\sigma^{\mathbf{m}}(A) = x^{-1}A.$$
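Under the bijection of Lemma~\ref{lem:good-tilings}, this operation corresponds to the shift on $k$-tilings: if $A = \mathscr{C}_{w}$ and $x = w(\mathbf{0},\mathbf{m})$ then $x^{-1}A = \mathscr{C}_{\sigma^{\mathbf{m}}(w)}$, so that $\sigma^{\mathbf{m}}(\mathscr{C}_{w}) = \mathscr{C}_{\sigma^{\mathbf{m}}(w)}$.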
We shall now describe the groupoid $\mathcal{G}(C)$ using good subsets.
We say that a triple $(A,\mathbf{n},B)$ where $A$ and $B$ are good subsets and $\mathbf{n} \in \mathbb{Z}^{k}$ is {\em allowable}
if there exists $x \in A$ and $y \in B$
such that $\mathbf{d}(x) = \mathbf{d}(y)$,
$\mathbf{n} = d(x) - d(y)$,
and $x^{-1}A = y^{-1}B$.
\begin{lemma}\label{lem:nice-one} Let $C$ be a $k$-graph.
Let $(A,\mathbf{n},B)$ be an allowable triple where
$A$ and $B$ are good subsets,
$\mathbf{n} \in \mathbb{Z}^{k}$,
$x \in A$ and $y \in B$
are such that $\mathbf{d}(x) = \mathbf{d}(y)$,
$\mathbf{n} = d(x) - d(y)$,
and $x^{-1}A = y^{-1}B$.
Let $\mathbf{m} \geq d(y)$.
Then there exists $y_{1} \in B$ such that $d(y_{1}) = \mathbf{m}$
and there is an $x_{1} \in A$ such that
$\mathbf{d}(x_{1}) = \mathbf{d}(y_{1})$,
$\mathbf{n} = d(x_{1}) - d(y_{1})$,
and $x_{1}^{-1}A = y_{1}^{-1}B$.
\end{lemma}
\begin{proof} Given any $\mathbf{m} \in \mathbb{N}^{k}$ there exists $y_{1} \in B$ such that $d(y_{1}) = \mathbf{m}$
since $B$ is a good subset.
But $y$ and $y_{1}$ are comparable, again because $B$ is a good subset,
and so by Lemma~\ref{lem:levi} there exists $t \in C$ such that $y_{1} = yt$.
Now $t \in y^{-1}B = x^{-1}A$.
It follows that $xt \in A$.
Define $x_{1} = xt$.
Then $d(x_{1}) - d(y_{1}) = \mathbf{n}$.
The fact that $x_{1}^{-1}A = y_{1}^{-1}B$ follows by Lemma~\ref{lem:bien}.
\end{proof}
\begin{lemma}\label{lem:new-groupoid} Let $C$ be a $k$-graph.
The set of allowable triples forms a groupoid isomorphic to the groupoid $\mathcal{G}(C)$ defined above.
\end{lemma}
\begin{proof}
Observe that for any good subset $B$, the triple $(B,\mathbf{0},B)$ is allowable:
$B$ contains a unique identity $e$, and we may take $x = y = e$ in the definition.
It follows that defining $\mathbf{d}(A,\mathbf{n},B) = (B,\mathbf{0},B)$ and $\mathbf{r}(A,\mathbf{n},B) = (A,\mathbf{0},A)$
makes sense.
It is clear that if $(A,\mathbf{n},B)$ is an allowable triple so, too, is $(B,-\mathbf{n},A)$.
Let $(A,\mathbf{m}, B)$ and $(B,\mathbf{n},D)$ be allowable triples.
The fact that $(A,\mathbf{m} + \mathbf{n},D)$ is an allowable triple follows by Lemma~\ref{lem:nice-one}.
This proves that the allowable triples form a groupoid.
To prove that the isomorphism holds we map
$(w_{1},\mathbf{n},w_{2})$ to $(\mathscr{C}_{w_{1}}, \mathbf{n}, \mathscr{C}_{w_{2}})$
using Lemma~\ref{lem:good-tilings}.
This is a well-defined bijection and functor.
\end{proof}
The basis sets for the topology on the set of allowable triples have the following form.
Let $x,y \in C$ such that $\mathbf{d}(x) = \mathbf{d}(y)$.
Then
$$Z'(x,y) = \{ (\mathsf{Pref}(xA), d(x) - d(y), \mathsf{Pref}(yA)) \colon A \subseteq \mathbf{d}(x)C\text{ is expanding}\}$$
is a set of allowable triples.
We shall now show that the groupoid $\mathcal{G}(C)$ is isomorphic to the groupoid constructed from the Boolean
inverse monoid $\mathsf{B}(C)$ as described in \cite{LL}.
We therefore need to relate ultrafilters in $\mathsf{B}(C)$ with allowable triples.
The following definitions are all expanded upon in \cite{Lawson2010, Lawson2012, Lawson2016, LL}.
Let $S$ be a Boolean inverse semigroup.
Recall that an {\em ultrafilter} is a maximal proper filter.
Let $A$ be an ultrafilter in $S$.
Define $\mathbf{d}(A) = (A^{-1}A)^{\uparrow}$ and $\mathbf{r}(A) = (AA^{-1})^{\uparrow}$.
An {\em identity ultrafilter} is an ultrafilter containing an idempotent and this is equivalent to its
being an inverse subsemigroup.
Both $\mathbf{d}(A)$ and $\mathbf{r}(A)$ are identity ultrafilters.
Observe that $A = (a \mathbf{d}(A))^{\uparrow}$ where $a \in A$.
More generally, if $F$ is any identity ultrafilter and $\mathbf{d}(a) \in F$ then $(aF)^{\uparrow}$ is an ultrafilter
and if $\mathbf{r}(a) \in F$ then $(Fa)^{\uparrow}$ is an ultrafilter.
By Lemma~\ref{lem:good-tilings}, Lemma~\ref{lem:good-filter}, Proposition~\ref{prop:mo}
and the fact that in $C$ all tight filters are ultrafilters, we have proved the following.
\begin{proposition}\label{prop:eight} Let $C$ be a higher rank graph with a finite number of identities
which is row finite and has no sources. For each $k$-tiling $w$ of $C$, the set
$\mathsf{P}_w := \{w(\mathbf{0},\mathbf{m})w(\mathbf{0}, \mathbf{m})^{-1} \colon \mathbf{m} \in \mathbb{N}^k\}^{\uparrow}$ is an ultrafilter
in $\mathsf{E}(\mathsf{R}(C))$, and the map $w \mapsto \mathsf{P}_w$ is a bijection between
$k$-tilings in $C$ and ultrafilters in $\mathsf{E}(\mathsf{R}(C))$.
\end{proposition}
We work with allowable triples.
\begin{lemma}\label{lem} Let $C$ be a higher rank graph with a finite number of identities
which is row finite and has no sources.
Then there is a bijection between the set of allowable triples in $C$ and the set of ultrafilters in $\mathsf{B}(C)$.
\end{lemma}
\begin{proof} Let $\tau = (A,\mathbf{n},B)$ be an allowable triple.
Then $A$ and $B$ are good sets,
there is $x \in A$ with $d(x) = \mathbf{l}$ and there is $y \in B$ with $d(y) = \mathbf{m}$,
where $\mathbf{d}(x) = \mathbf{d}(y)$, $\mathbf{n} = \mathbf{l} - \mathbf{m}$ and $x^{-1}A = y^{-1}B$.
Observe that $xy^{-1}$ is a well-defined morphism in $\mathsf{R}(C)$.
Define $\mathcal{F} = \mathsf{F}(B)$.
Then $\mathcal{F}$ is an identity filter in $\mathsf{B}(C)$.
Put
$$\mathcal{A} = \mathcal{A}_{\tau} = ([xy^{-1}]\mathcal{F})^{\uparrow};$$
this is the ultrafilter in $\mathsf{B}(C)$ associated with the allowable triple $\tau$.
We now show that this ultrafilter is independent of the choices we made above.
Let $x_{1} \in A$ be such that $d(x_{1}) = \mathbf{l}_{1}$ and let $y_{1} \in B$ be such that $d(y_{1}) = \mathbf{m}_{1}$
where $\mathbf{d}(x_{1}) = \mathbf{d}(y_{1})$,
$\mathbf{n} = \mathbf{l}_{1} - \mathbf{m}_{1}$
and $x_{1}^{-1}A = y_{1}^{-1}B$.
We show that
$\mathcal{B} = ([x_{1}y_{1}^{-1}]\mathcal{F})^{\uparrow}$
and
$\mathcal{A} = ([xy^{-1}]\mathcal{F})^{\uparrow}$
are equal.
Consider the product $(yx^{-1})(x_{1}y_{1}^{-1})$.
Let $xC \cap x_{1}C = \{u_{1}, \ldots, u_{m}\}C$.
Since $x,x_{1} \in A$ there exists an $i$ such that $u_{i} = xp_{i} = x_{1}q_{i} \in A$ by Lemma~\ref{lem:filter-property}.
Observe that $(yp_{i})(y_{1}q_{i})^{-1} \leq (yx^{-1})(x_{1}y_{1}^{-1})$.
We have that $yp_{i}, y_{1}q_{i} \in B$
--- this follows because $p_{i} \in x^{-1}A = y^{-1}B$ and so $yp_{i} \in B$,
and $q_{i} \in x_{1}^{-1}A = y_{1}^{-1}B$ and so $y_{1}q_{i} \in B$.
We shall prove that $d(yp_{i}) = d(y_{1}q_{i})$, from which it will follow that $yp_{i} = y_{1}q_{i}$,
since $B$ is a good subset and elements of a good subset with the same degree are equal.
But this follows from the fact that $\mathbf{l}_{1} - \mathbf{m}_{1} = \mathbf{l} - \mathbf{m}$.
We have therefore found an idempotent in $\mathcal{F}$ below $[yx^{-1}][x_{1}y_{1}^{-1}]$.
This proves that $\mathcal{A} = \mathcal{B}$.
We now go in the opposite direction.
Let $\mathcal{A}$ be an ultrafilter in $\mathsf{B}(C)$.
Then, using the fact that ultrafilters are prime,
we may write this in the form $([xy^{-1}]\mathcal{F})^{\uparrow}$ where $[yy^{-1}] \in \mathcal{F}$
and $\mathcal{F}$ is an identity ultrafilter in $\mathsf{B}(C)$.
The ultrafilter $\mathcal{F}$ is completely determined by $\mathcal{F} \cap \mathsf{E}(\mathsf{B}(C))$,
which is an ultrafilter in $\mathsf{E}(\mathsf{B}(C))$.
The ultrafilter $\mathcal{F} \cap \mathsf{E}(\mathsf{B}(C))$ arises from the maximal filter $B$ in $C$ via Proposition~\ref{prop:mo}.
Observe that $y \in B$.
Now $([xy^{-1}] \mathcal{F}[yx^{-1}])^{\uparrow}$ is an identity ultrafilter in $\mathsf{B}(C)$ that contains $[xx^{-1}]$.
This corresponds to a maximal filter $A$ in $C$ that contains $x$.
We have therefore constructed a triple $(A,d(x) - d(y),B)$.
It remains to show that it is allowable.
Thus we need to prove that $x^{-1}A = y^{-1}B$.
Let $u \in y^{-1}B$.
Then $yu \in B$.
It follows that $[yu(yu)^{-1}] \in \mathcal{F} = \mathbf{d}(\mathcal{A})$.
But $[xy^{-1}] \in \mathcal{A}$.
It follows that $[xy^{-1}][yu(yu)^{-1}] \in \mathcal{A}$.
Thus $[xu(yu)^{-1}] \in \mathcal{A}$.
It follows that $xu \in A$ and so $u \in x^{-1}A$, as required.
The converse is proved by symmetry.
\end{proof}
We now prove that the groupoid of allowable triples and the groupoid of ultrafilters are isomorphic.
By Lemma~\ref{lem:nice-one}, a composable pair of allowable triples has the following form:
$$(D, d(u)-d(v), B)(B, d(v) - d(w), A)$$
where $u \in D$, $v \in B$ and $w \in A$.
Their product is
$$(D, d(u) - d(w), A).$$
We now turn to products of ultrafilters.
Let $A$ and $B$ be ultrafilters such that $\mathbf{d}(A) = \mathbf{r}(B)$.
Then $A \cdot B = (AB)^{\uparrow}$.
Let $A = (aF)^{\uparrow}$ and $B = (bG)^{\uparrow}$ where $\mathbf{d}(A) = F$ and $\mathbf{d}(B) = G$.
Observe that $A \cdot B = (abG)^{\uparrow}$.
Now, $a\mathbf{r}(b) \in A\mathbf{d}(A) \subseteq A$.
Similarly, $\mathbf{d}(a)b \in B$.
It follows that we can assume $\mathbf{d}(a) = \mathbf{r}(b)$.
The ultrafilter associated with $(D, d(u)-d(v), B)$ is $([uv^{-1}]\mathcal{F})^{\uparrow}$
where $x \in B$ iff $[xx^{-1}] \in \mathcal{F}$.
The ultrafilter associated with $(B, d(v) - d(w), A)$ is
$([vw^{-1}]\mathcal{G})^{\uparrow}$.
The product of
$([uv^{-1}]\mathcal{F})^{\uparrow}$
and
$([vw^{-1}]\mathcal{G})^{\uparrow}$
is defined
and equals
$([uw^{-1}]\mathcal{G})^{\uparrow}$
which corresponds to the allowable triple
$(D, d(u) - d(w), A)$.
We now describe the topology.
Let $\mathbf{d}(x) = \mathbf{d}(y)$.
Then
$$Z'(x,y) = \{ (\mathsf{Pref}(xA), d(x) - d(y), \mathsf{Pref}(yA)) \colon A \subseteq \mathbf{d}(x)C\text{ is expanding}\}$$
is a set of allowable triples.
This corresponds to the set of ultrafilters in $\mathsf{B}(C)$ that contain the element $[xy^{-1}]$.
It follows that the groupoids are isomorphic as topological groupoids.
\section{Invariants of the group $\mathscr{G}(C)$}
As mentioned earlier, since our group $\mathscr{G}(C)$ coincides with the topological full group
of the groupoid $\mathcal{G}(C)$, which is a Hausdorff, \'etale, effective, minimal groupoid with
unit space homeomorphic to the Cantor space, \cite[Theorem~3.10]{Matui2015} implies that if
$\mathscr{G}(C) \cong \mathscr{G}(C')$, then $\mathcal{G}(C) \cong \mathcal{G}(C')$. Consequently
both the $K$-theory of the groupoid $C^*$-algebra $C^*(\mathcal{G}(C))$ and the homology, in the
sense of Matui, of the groupoid $\mathcal{G}(C)$ are isomorphism invariants of $\mathscr{G}(C)$.
When $C$ is a $k$-graph, this is particularly interesting because, as we saw in the preceding
section, $\mathcal{G}(C)$ coincides with Kumjian and Pask's groupoid $\mathcal{G}_C$ \cite{KP}. So
known invariants of $\mathcal{G}_C$ are also invariants of our group $\mathscr{G}(C)$.
Kumjian and Pask prove in \cite[Corollary~3.5(i)]{KP} that the $k$-graph $C^*$-algebra $C^*(C)$
coincides with the groupoid $C^*$-algebra $C^*(\mathcal{G}_C)$. Hence the $K$-theory of $C^*(C)$
provides an isomorphism invariant of $\mathscr{G}(C)$.
We therefore have the following.
\begin{corollary}
Let $C$ and $C'$ be row-finite aperiodic, cofinal higher rank graphs with finitely many vertices
and no sources. If $\mathscr{G}(C) \cong \mathscr{G}(C')$ as discrete groups, then $K_*(C^*(C))
\cong K_*(C^*(C'))$.
\end{corollary}
The difficulty here is that the $K$-theory of $k$-graph $C^*$-algebras has proven notoriously
difficult to compute. The most general results are those of \cite{Evans}, but these apply in
general only when $k \le 2$.
However another, closely related, invariant of the groupoid $\mathcal{G}_C$ and hence of our
group $\mathscr{G}(C)$ is the homology of $\mathcal{G}_C$.
\begin{corollary}
Let $C$ and $C'$ be row-finite aperiodic, cofinal higher rank graphs with finitely many vertices
and no sources. If $\mathscr{G}(C) \cong \mathscr{G}(C')$ as discrete groups, then
$H_*(\mathcal{G}_C) \cong H_*(\mathcal{G}_{C'})$.
\end{corollary}
In this case, we have an explicit calculation of the invariant obtained from \cite[Proposition~7.6]{FKPS}
building on earlier work of Matui \cite{Matui2012}. Specifically, the homology of the groupoid
$\mathcal{G}_C$ is precisely the homology of a chain complex developed by Evans \cite{Evans}.
To describe it, we proceed as follows.
Let $\varepsilon_1, \dots, \varepsilon_k$ denote the generators of $\mathbb{Z}^k$ (so
$\varepsilon_i = (0, \dots, 0, 1, 0, \dots, 0)$, with the $1$ appearing in the $i$th coordinate).
Also for each $i$, let $M_i$ be the $i$th coordinate matrix of the $k$-graph $C$. That is,
recalling that $C_o$ is the (finite) space of identity morphisms (or vertices) of $C$,
the matrix $M_i$ is the $C_o \times C_o$ matrix with entries
\[
M_i(e, f) = |\{a \in C : d(a) = \varepsilon_i, \mathbf{r}(a) = e\text{ and }\mathbf{d}(a) = f\}|.
\]
We regard each $M_i$ as an endomorphism of the free abelian group $\mathbb{Z} C_o$.
Recall that for $p \ge 1$, we write $\bigwedge^p \mathbb{Z}^k$ for the $p$th exterior power of
$\mathbb{Z}^k$, which is generated by the elements $\varepsilon_{i_1} \wedge \cdots \wedge
\varepsilon_{i_p}$ where $1 \le i_j \le k$ for all $j$.
Define $D^C_0 = \mathbb{Z} C_o$,
and for $p \ge 1$, define $D^C_p =
\big(\bigwedge^p \mathbb{Z}^k\big) \otimes \mathbb{Z}C_o$; observe that this forces
$D^C_p = \{0\}$ for $p > k$. For $p \ge 2$, define $\partial_p \colon D^C_p \to D^C_{p-1}$ (using
the hat symbol to indicate deletion of a term) by
\[
\partial_p((\varepsilon_{i_1} \wedge \cdots \wedge \varepsilon_{i_p}) \otimes \varepsilon_e)
= \sum^p_{j=1} (-1)^{j+1} (\varepsilon_{i_1} \wedge \cdots \wedge \widehat{\varepsilon}_{i_j} \wedge \cdots \wedge \varepsilon_{i_p}) \otimes (I - M_{i_j}^t)\varepsilon_e;
\]
and, finally, define $\partial_1 \colon D^C_1 \to D^C_0$ by
\[
\partial_1(\varepsilon_i \otimes \varepsilon_e) = (I - M^t_i)\varepsilon_e.
\]
Then $(D^C_*, \partial_*)$ is a chain complex, and \cite[Proposition~7.6]{FKPS} shows that
$H_*(D^C_*, \partial_*)$ is isomorphic to the groupoid homology $H_*(\mathcal{G}_C)$. So we obtain the following corollary.
\begin{corollary}\label{cor:H* invariants}
Let $C$ and $C'$ be row-finite aperiodic, cofinal higher rank graphs with finitely many vertices
and no sources. If $\mathscr{G}(C) \cong \mathscr{G}(C')$ as discrete groups, then $H_*(D^C_*,
\partial_*) \cong H_*(D^{C'}_*, \partial_*)$. In particular,
\[
H_k(D^C_*, \partial_*)
=
\bigcap^k_{i=1} \ker(I - M(C)^t_i) \cong \bigcap^k_{i=1} \ker(I - M(C')^t_i),
\]
and
\[
H_0(D^C_*, \partial_*)
=
\mathbb{Z}C_o / \Big(\sum^k_{i=1} \operatorname{im}(I - M(C)^t_i)\Big)
\cong \mathbb{Z}C'_o / \Big(\sum^k_{i=1} \operatorname{im}(I - M(C')^t_i)\Big).
\]
\end{corollary}
\begin{proof}
The first statement follows immediately from \cite[Theorem~3.10]{Matui2015} and
\cite[Proposition~7.6]{FKPS}.
The second is by direct computation: the intersection of the kernels
of the $I - M(C)^t_i$ is isomorphic to $H_k(D^C_*, \partial_*)$ and the quotient of
$\mathbb{Z}C_o$ by the sum of their images is isomorphic to $H_0(D^C_*, \partial_*)$
(and similarly for $C'$).
\end{proof}
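The invariants appearing in Corollary~\ref{cor:H* invariants} are easy to compute in practice. The following short Python sketch is purely illustrative: the coordinate matrices it uses are hypothetical placeholders, and it simply computes the free rank of the top homology group and the structure of the $0$th homology group from the maps $I - M_i^t$ via the Smith normal form.
\begin{verbatim}
# Illustrative sketch only: computing the invariants of the corollary above
# from hypothetical coordinate matrices M_1, ..., M_k (replace these with
# the matrices of the k-graph under consideration).
from sympy import Matrix, ZZ, eye
from sympy.matrices.normalforms import smith_normal_form

M = [Matrix([[0, 2], [2, 0]]), Matrix([[0, 2], [2, 0]])]   # hypothetical M_1, M_2
A = [eye(M[0].rows) - Mi.T for Mi in M]                    # the maps I - M_i^t

# Top homology: the intersection of the integer kernels of the I - M_i^t is
# free abelian; its rank is read off the vertically stacked matrix.
stacked = Matrix.vstack(*A)
print("rank of H_k:", stacked.cols - stacked.rank())

# H_0 = Z C_o / (sum of the images of the I - M_i^t): read the cokernel of
# the horizontally stacked matrix off its Smith normal form.
side = Matrix.hstack(*A)
snf = smith_normal_form(side, domain=ZZ)
diag = [abs(snf[i, i]) for i in range(min(snf.rows, snf.cols))]
torsion = [d for d in diag if d not in (0, 1)]
free_rank = snf.rows - sum(1 for d in diag if d != 0)
print("H_0: free rank", free_rank, "torsion invariant factors", torsion)
\end{verbatim}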
\section{Two series of concrete examples}
In this section we provide two constructions of infinite families of $k$-graphs $C$ with mutually
non-isomorphic groups $\mathscr{G}(C)$. In the first family, the higher-rank graphs $C$ can all
be chosen to be of the same rank $k \ge 2$, and the associated groups $\mathscr{G}(C)$
are distinguished by the finite $0$th homology groups of the $k$-graphs. In the second family of
examples, we show that for each $k \ge 1$ and each $R \ge 1$, there is an aperiodic cofinal
$k$-graph $C_{k, R}$ that is row-finite and has finitely many vertices and whose $k$th homology
group is of rank $R$.
\subsection{Examples distinguished by their 0th homology}
We shall use the results of the previous section to construct, for each $k \ge 2$ and each
$k$-tuple of integers $(m_1,m_2,\ldots,m_k)$, two series of pairwise non-isomorphic $k$-graphs
with two vertices and different groups
\[
\mathbb{Z}C_o/\Big(\sum^k_{i=1} \operatorname{im}(I - M(C)^t_i)\Big).
\]
Our construction consists of two steps:
first, we construct a family of cube complexes with two vertices, covered by products of $k$ trees,
and second, we explain how to get a $k$-graph from each complex.
For background on cube complexes covered by products of $k$ trees see \cite{RSV} and references in the paper.
\noindent{\bf Step 1.} Let $X_1,\ldots,X_k$ be pairwise disjoint alphabets such that
$\left| X_i \right|=m_i$ and
$$X_i=\{x_1^i,x_2^i,...,x_{m_i}^i\}.$$
Let $F_i$ be the free group generated by $X_i$.
The direct product
\[
G = F_1\times F_2 \times \ldots \times F_k
\]
has presentation
\[
G=\langle X_1,X_2,\ldots,X_k \mid [x^i_s,x^j_l]=1,\ 1 \le i \neq j \le k;\ s=1,\ldots,m_i;\ l=1,\ldots,m_j \rangle,
\]
where $[x,y]$ denotes the commutator $xyx^{-1}y^{-1}$.
The group $G$ acts simply transitively on the vertex set of a Cartesian product $\Delta$ of $k$ trees $T_1,T_2,\ldots,T_k$ of valencies $2m_1,2m_2,\ldots,2m_k$ respectively:
each $T_i$ is identified with the Cayley tree of $F_i$, and the action of $G$ is the coordinatewise action of the component groups $F_i$.
The quotient of this action is a cube complex $P$ with one vertex $v$ such that the universal cover of $P$ is $\Delta$. The edges of the cube complex $P$
are naturally labelled by elements of $X=X_1\cup X_2...\cup X_k$, and are naturally oriented by the usual algebraic ordering on $F_i$. The $1$-skeleton of $P$ is a wedge of $\sum_{i=1}^k m_i$ circles.
We construct a family of double covers of $P$ in the following way. Consider a labelling $\ell : X \to \mathbb{Z}_2$ of the elements of $X$ (equivalently the edges of the $1$-skeleton of $P$).
We obtain a cover $P^2_\ell$ of $P$ whose vertex set is $\{v\} \times \mathbb{Z}_2$ and whose set of $1$-cubes is $X \times \mathbb{Z}_2$, with range and domain maps given by
$r(x, i) = (r(x), i)$ and $s(x,i) = (s(x), i + \ell(x))$. Specifically, $P^2_\ell$ is the quotient of $\Delta$ by the action of the kernel of the homomorphism $G \to \mathbb{Z}_2$
induced by $\ell$. Observe that in $P^2_\ell$, for a given $x \in X$ either $(x, 0)$ and $(x,1)$ are loops based at $(v,0)$ and $(v,1)$ (if $\ell(x) = 0$), or $(x,0)$ is an edge
from $(v,1)$ to $(v,0)$ and $(x,1)$ is an edge from $(v,0)$ to $(v,1)$.
So Figure~\ref{fig:1} illustrates the $2$-cubes and part of the $1$-skeleton of $P$ corresponding to symbols $a \in X_i$ and $b \in X_j$ with $i \not= j$
in which $\ell(a) = 1$ and $\ell(b) = 0$; Figure~\ref{fig:2} illustrates the corresponding $2$-cubes and part of the $1$-skeleton if $\ell(a) = \ell(b) = 1$.
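For concreteness, the following short Python sketch is purely illustrative: it uses a hypothetical two-letter alphabet with the labelling of Figure~\ref{fig:1} and simply enumerates the $1$-cubes of $P^{2}_{\ell}$ with their ranges and sources, following the rule $r(x,i) = (r(x),i)$ and $s(x,i) = (s(x), i + \ell(x))$.
\begin{verbatim}
# Illustrative sketch only: the 1-cubes of the double cover P^2_ell for a
# hypothetical two-letter alphabet with ell(a) = 1 and ell(b) = 0, as in
# Figure 1; the unique vertex of P is denoted v.
from itertools import product

X = ['a', 'b']
ell = {'a': 1, 'b': 0}

for x, i in product(X, (0, 1)):
    r = ('v', i)                    # range of the 1-cube (x, i)
    s = ('v', (i + ell[x]) % 2)     # source of the 1-cube (x, i)
    print("edge (%s,%d): range %s, source %s" % (x, i, r, s))
\end{verbatim}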
\begin{figure}[!htbp]
\[
\begin{tikzpicture}
\node (00) at (0,0) {$(v,0)$};
\node (01) at (0,2) {$(v,1)$};
\node (10) at (2,0) {$(v,0)$};
\node (11) at (2,2) {$(v,1)$};
\draw[->-] (00) to node[pos=0.5, right] {\small$(a,1)$} (01);
\draw[->-] (00) to node[pos=0.5, below] {\small$(b,0)$} (10);
\draw[->-] (10) to node[pos=0.5, left] {\small$(a,1)$} (11);
\draw[->-] (01) to node[pos=0.5, above] {\small$(b,1)$} (11);
\begin{scope}[xshift=3.8cm]
\node (00) at (0,0) {$(v,1)$};
\node (01) at (0,2) {$(v,0)$};
\node (10) at (2,0) {$(v,1)$};
\node (11) at (2,2) {$(v,0)$};
\draw[->-] (00) to node[pos=0.5, right] {\small$(a,0)$} (01);
\draw[->-] (00) to node[pos=0.5, below] {\small$(b,0)$} (10);
\draw[->-] (10) to node[pos=0.5, left] {\small$(a,0)$} (11);
\draw[->-] (01) to node[pos=0.5, above] {\small$(b,1)$} (11);
\end{scope}
\begin{scope}[xshift=8cm]
\node (v0) at (0,1) {$(v,0)$};
\node (v1) at (2,1) {$(v,1)$};
\draw[->-, out=30, in=150] (v0) to node[pos=0.5, above] {\small$(a,1)$} (v1);
\draw[->-, out=210, in=330] (v1) to node[pos=0.5, below] {\small$(a,0)$} (v0);
\draw[->-] (v0) .. controls +(-1, 1) and +(-1, -1) .. (v0) node[pos=0.25, above] {\small$(b,0)$};
\draw[->-] (v1) .. controls +(1, 1) and +(1, -1) .. (v1) node[pos=0.75, below] {\small$(b,1)$};
\end{scope}
\end{tikzpicture}
\]
\caption{$\ell(a) = 1$ and $\ell(b) = 0$}\label{fig:1}
\end{figure}
\begin{figure}[!htbp]
\[
\begin{tikzpicture}
\node (00) at (0,0) {$(v,0)$};
\node (01) at (0,2) {$(v,1)$};
\node (10) at (2,0) {$(v,1)$};
\node (11) at (2,2) {$(v,0)$};
\draw[->-] (00) to node[pos=0.5, right] {\small$(a,1)$} (01);
\draw[->-] (00) to node[pos=0.5, below] {\small$(b,1)$} (10);
\draw[->-] (10) to node[pos=0.5, left] {\small$(a,0)$} (11);
\draw[->-] (01) to node[pos=0.5, above] {\small$(b,0)$} (11);
\begin{scope}[xshift=3.8cm]
\node (00) at (0,0) {$(v,1)$};
\node (01) at (0,2) {$(v,0)$};
\node (10) at (2,0) {$(v,0)$};
\node (11) at (2,2) {$(v,1)$};
\draw[->-] (00) to node[pos=0.5, right] {\small$(a,0)$} (01);
\draw[->-] (00) to node[pos=0.5, below] {\small$(b,0)$} (10);
\draw[->-] (10) to node[pos=0.5, left] {\small$(a,1)$} (11);
\draw[->-] (01) to node[pos=0.5, above] {\small$(b,1)$} (11);
\end{scope}
\begin{scope}[xshift=8cm]
\node (v0) at (0,1) {$(v,0)$};
\node (v1) at (3,1) {$(v,1)$};
\draw[->-, out=20, in=160] (v0) to node[pos=0.5, above] {\small$(b,1)$} (v1);
\draw[->-, out=200, in=340] (v1) to node[pos=0.5, below] {\small$(b,0)$} (v0);
\draw[->-, out=60, in=120] (v0) to node[pos=0.5, above] {\small$(a,1)$} (v1);
\draw[->-, out=240, in=300] (v1) to node[pos=0.5, below] {\small$(a,0)$} (v0);
\end{scope}
\end{tikzpicture}
\]
\caption{$\ell(a) = \ell(b) = 1$}\label{fig:2}
\end{figure}
We will consider two specific labellings $\ell_u$ and $\ell_m$ (the $u$ and $m$ stand for
``uniform" and ``mixed"). The uniform labelling $\ell_u$ is given by $\ell_u(x) = 1$ for all $x$,
whereas the mixed labelling $\ell_m$ satisfies $\ell_m|_{X_1} \equiv 0$ and $\ell_m|_{X \setminus
X_1} \equiv 1$. So under $\ell_u$ all $2$-squares are as in Figure~\ref{fig:2}; but under $\ell_m$
squares in which $b$ belongs to $X_1$ are as in Figure~\ref{fig:1}, and the remaining squares are
as in Figure~\ref{fig:2}. Figure~\ref{fig:4} illustrates a $3$-cube in the cube complex for
$\ell_u$ (left) and a $3$-cube in the cube complex for $\ell_m$ in which $a$ belongs to $X_1$ and
the edges $b, c$ belong to $X \setminus X_1$ (right).
\begin{figure}[!htbp]
\[
\begin{tikzpicture}[scale=1.25]
\node[inner sep=1pt] (000) at (0,0,0) {\small$(v,0)$};
\node[inner sep=1pt] (100) at (2,0,0) {\small$(v,1)$};
\node[inner sep=1pt] (010) at (0,2,0) {\small$(v,1)$};
\node[inner sep=1pt] (110) at (2,2,0) {\small$(v,0)$};
\node[inner sep=1pt] (001) at (0,0,3) {\small$(v,1)$};
\node[inner sep=1pt] (101) at (2,0,3) {\small$(v,0)$};
\node[inner sep=1pt] (011) at (0,2,3) {\small$(v,0)$};
\node[inner sep=1pt] (111) at (2,2,3) {\small$(v,1)$};
\draw[->-] (000) to node[pos=0.75, above, inner sep=0.5pt] {\tiny$(a,1)$} (100);
\draw[->-] (000) to node[pos=0.65, right, inner sep=0.5pt] {\tiny$(b,1)$} (010);
\draw[->-] (000) to node[pos=0.3, anchor=north west, inner sep=0pt] {\tiny$(c,1)$} (001);
\draw[->-] (100) to node[pos=0.5, right, inner sep=0.5pt] {\tiny$(b,0)$} (110);
\draw[->-] (100) to node[pos=0.3, anchor=north west, inner sep=0pt] {\tiny$(c,0)$} (101);
\draw[->-] (001) to node[pos=0.5, below, inner sep=0.5pt] {\tiny$(a,0)$} (101);
\draw[->-] (001) to node[pos=0.5, left, inner sep=0.5pt] {\tiny$(b,0)$} (011);
\draw[->-] (010) to node[pos=0.5, above, inner sep=0.5pt] {\tiny$(a,0)$} (110);
\draw[->-] (010) to node[pos=0.4, anchor=south east, inner sep=0pt] {\tiny$(c,0)$} (011);
\draw[->-] (110) to node[pos=0.4, anchor=south east, inner sep=0pt] {\tiny$(c,1)$} (111);
\draw[very thick, white] (101) to (111);
\draw[->-] (101) to node[pos=0.3, left, inner sep=0.5pt] {\tiny$(b,1)$} (111);
\draw[very thick, white] (011) to (111);
\draw[->-] (011) to node[pos=0.25, below, inner sep=1pt] {\tiny$(a,1)$} (111);
\end{tikzpicture}
\qquad\qquad
\begin{tikzpicture}[scale=1.25]
\node[inner sep=1pt] (000) at (0,0,0) {\small$(v,0)$};
\node[inner sep=1pt] (100) at (2,0,0) {\small$(v,0)$};
\node[inner sep=1pt] (010) at (0,2,0) {\small$(v,1)$};
\node[inner sep=1pt] (110) at (2,2,0) {\small$(v,1)$};
\node[inner sep=1pt] (001) at (0,0,3) {\small$(v,1)$};
\node[inner sep=1pt] (101) at (2,0,3) {\small$(v,1)$};
\node[inner sep=1pt] (011) at (0,2,3) {\small$(v,0)$};
\node[inner sep=1pt] (111) at (2,2,3) {\small$(v,0)$};
\draw[->-] (000) to node[pos=0.75, above, inner sep=0.5pt] {\tiny$(a,0)$} (100);
\draw[->-] (000) to node[pos=0.65, right, inner sep=0.5pt] {\tiny$(b,1)$} (010);
\draw[->-] (000) to node[pos=0.3, anchor=north west, inner sep=0pt] {\tiny$(c,1)$} (001);
\draw[->-] (100) to node[pos=0.5, right, inner sep=0.5pt] {\tiny$(b,1)$} (110);
\draw[->-] (100) to node[pos=0.3, anchor=north west, inner sep=0pt] {\tiny$(c,1)$} (101);
\draw[->-] (001) to node[pos=0.5, below, inner sep=0.5pt] {\tiny$(a,1)$} (101);
\draw[->-] (001) to node[pos=0.5, left, inner sep=0.5pt] {\tiny$(b,0)$} (011);
\draw[->-] (010) to node[pos=0.5, above, inner sep=0.5pt] {\tiny$(a,1)$} (110);
\draw[->-] (010) to node[pos=0.4, anchor=south east, inner sep=0pt] {\tiny$(c,0)$} (011);
\draw[->-] (110) to node[pos=0.4, anchor=south east, inner sep=0pt] {\tiny$(c,0)$} (111);
\draw[very thick, white] (101) to (111);
\draw[->-] (101) to node[pos=0.3, left, inner sep=0.5pt] {\tiny$(b,0)$} (111);
\draw[very thick, white] (011) to (111);
\draw[->-] (011) to node[pos=0.25, below, inner sep=1pt] {\tiny$(a,0)$} (111);
\end{tikzpicture}
\]
\caption{A $3$-cube for $\ell_u$ (left) and for $\ell_m$ (right)}\label{fig:4}
\end{figure}
\noindent{\bf Step 2.} For each of $\star = u$ and $\star = m$, we explain how to construct a $k$-graph $C_\star$ from $P^2_{\ell_\star}$ by specifying its skeleton and factorisation rules as in \cite{HRSW}. For either value of $\star$, we define
$(C_\star)_o = \{(v,0), (v,1)\}$, and for each $i \le k$, we define $(C_\star)_{e_i}$ to be the set
\[
X_i \times \mathbb{Z}_2 \sqcup \{(\overline{x}, j) : x \in X_i, j \in \mathbb{Z}_2\},
\]
a disjoint union of two copies of $X_i \times \mathbb{Z}_2$. The range and source maps on
$(C_\star)_{e_i}$ restrict to those in $P^2_{\ell_\star}$ on $X_i \times \mathbb{Z}_2$, and we
define $r(\overline{x}, j) = s(x, j)$ and $s(\overline{x},j) = r(x,j)$. The factorisation rules
are as follows: for each $2$-cube
\[
\begin{tikzpicture}
\node (00) at (0,0) {$(v,0)$};
\node (01) at (0,2) {$(v,1)$};
\node (10) at (2,0) {$(v,0)$};
\node (11) at (2,2) {$(v,1)$};
\draw[->-] (00) to node[pos=0.5, left] {\small$(a,i_a)$} (01);
\draw[->-] (00) to node[pos=0.5, below] {\small$(b,i_b)$} (10);
\draw[->-] (10) to node[pos=0.5, right] {\small$(a,j_a)$} (11);
\draw[->-] (01) to node[pos=0.5, above] {\small$(b,j_b)$} (11);
\end{tikzpicture}
\]
in $P^2_{\ell_\star}$, we have four factorisation rules:
\begin{align*}
(b,i_b)(a,i_a) &= (a,j_a)(b,j_b),&\qquad
(a,i_a)(\overline{b},j_b) &= (\overline{b},i_b)(a,j_a),\\
(\overline{a},j_a)(b,i_b) &= (b,j_b)(\overline{a},i_a),&\qquad
(\overline{b},j_b)(\overline{a},j_a) &= (\overline{a},i_a)(\overline{b},i_b)
\end{align*}
That these factorisation rules satisfy the associativity condition of \cite{HRSW} follows from a routine calculation using the fact that $P^2_{\ell_\star}$ is a quotient of a direct product of trees.
To proceed, we define matrices
\[
D_i := \begin{pmatrix} 2m_i&0\\ 0&2m_i \end{pmatrix}
\quad\text{ and }\quad
T_i := \begin{pmatrix} 0&2m_i\\ 2m_i&0 \end{pmatrix}.
\]
Observe that in $C_u$ the adjacency matrices $M(C)_i$ are equal to $T_i$, while in $C_m$, we have
$M(C)_1 = D_1$ and $M(C)_i = T_i$ for $2 \le i \le k$. Routine calculations using this show that
\begin{align*}
\mathbb{Z} (C_u)_o / \Big(\sum^k_{i=1} \operatorname{im}(I - M(C_u)^t_i)\Big)
&\cong \mathbb{Z}/\gcd(4m_1^2-1, \dots, 4m_k^2-1)\mathbb{Z},\quad\text{ and}\\
\mathbb{Z} (C_m)_o / \Big(\sum^k_{i=1} \operatorname{im}(I - M(C_m)^t_i)\Big)
&\cong \mathbb{Z}/\gcd(2m_1 - 1, 4m_2^2-1, \dots, 4m_k^2 - 1)\mathbb{Z}.
\end{align*}
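For instance, when $k = 2$ and $m_1 = m_2 = 1$, each matrix $I - T_i^t = \left(\begin{smallmatrix} 1 & -2 \\ -2 & 1 \end{smallmatrix}\right)$ has determinant $-3$, while $I - D_1^t = -I$ is surjective, so the two quotients above become
\[
\mathbb{Z}/\gcd(3,3)\mathbb{Z} \cong \mathbb{Z}/3\mathbb{Z}
\quad\text{and}\quad
\mathbb{Z}/\gcd(1,3)\mathbb{Z} \cong 0,
\]
and the uniform and mixed labellings are already distinguished in this smallest case.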
\begin{corollary}
There are at least two non-isomorphic groups $\mathscr{G}(C)$ for each $k \ge 2$ and infinitely many choices of alphabets $Y_1,\ldots,Y_k$ of sizes
$2m,\ldots,2m$.
\end{corollary}
\begin{proof}
Take $m_1 = \cdots = m_k$ in the examples above. We obtain $H_0(D^{C_u}_*, \partial_*) \cong
\mathbb{Z}/(4m^2-1)\mathbb{Z}$, and since $2m-1$ divides $4m^2-1$ we have $H_0(D^{C_m}_*,
\partial_*) \cong \mathbb{Z}/(2m-1)\mathbb{Z}$. The result then follows from Corollary~\ref{cor:H*
invariants}.
\end{proof}
Note that the known groups $nV$ can be presented in our language using an $n$-graph $C$ with $1 \times 1$ adjacency matrices whose single entry equals $2$. It is relatively easy to check that each $\ker(\partial_j)$ in Evans' complex is a subgroup of a direct sum of kernels of the maps $I - M(C)^t_i$, which in this instance are all equal to the $1 \times 1$ matrix $(-1)$. So $H_j(C) = 0$ for $1 \le j \le n$. Also, $H_0(C) = \mathbb{Z}\big/\big(\sum^n_{i=1} (I - M(C)^t_i)\mathbb{Z}\big) = \mathbb{Z}/(-1)\mathbb{Z} = 0$. Hence the homology groups of all of these $n$-graphs are trivial.
In particular none of the groups $\mathscr{G}(C)$ discussed above are isomorphic to the groups
$nV$.
\subsection{Examples with nontrivial $k$th homology}
Fix integers $k, R \ge 1$. We will construct a $k$-graph $C_{k, R}$ by specifying its
skeleton and factorisation rules as in \cite{HRSW}.
The vertex set $V$ of the skeleton $E$ has $R+1$ elements (for example, we could take $V = \{1, \dots, R+1\}$,
but to lighten notation, we will avoid choosing a particular enumeration).
For each $i \le k$, the set of edges of $E$ of colour $i$ is
\[
\{e^i_{v, m, w} : v \not=w \in V \text{ and } 1 \le m \le 2\}
\cup \{e^i_{v, m, v} : v \in V \text{ and } 1 \le m \le 3\}
\]
and the range and domain maps are given by $\mathbf{r}(e^i_{v, m, w}) = v$ and $\mathbf{d}(e^i_{v,
m, w}) = w$.
So for any two distinct vertices $v,w$, there are $2$ edges of each colour pointing from $v$ to $w$
(and also from $w$ to $v$), and there are $3$ loops of each colour at each vertex.
For example, in the skeleton of $C_{k, 2}$, each of the singly-coloured subgraphs is as in
Figure~\ref{fig:skeleton}.
\begin{figure}[!htbp]
\[
\begin{tikzpicture}
\node[inner sep=0.5pt] (v) at (0,0) {$v$};
\node[inner sep=0.5pt] (w) at (4,0) {$w$};
\draw[->-, out=20, in=160] (v) to node[pos=0.5, below,inner sep=0pt] {\small$e^i_{w,1,v}$} (w);
\draw[->-, out=45, in=135] (v) to node[pos=0.5, above, inner sep=0pt] {\small$e^i_{w,2,v}$} (w);
\draw[->-, out=200, in=340] (w) to node[pos=0.5, above,inner sep=0pt] {\small$e^i_{v,1,w}$} (v);
\draw[->-, out=225, in=315] (w) to node[pos=0.5, below, inner sep=0pt] {\small$e^i_{v,2,w}$} (v);
\draw[->-] (v) .. controls +(-1, 1) and +(1, 1) .. (v) node[pos=0.5, above,] {\small$e^i_{v,1,v}$};
\draw[->-] (v) .. controls +(-1, 1) and +(-1, -1) .. (v) node[pos=0.5, left] {\small$e^i_{v,2,v}$};
\draw[->-] (v) .. controls +(-1, -1) and +(1, -1) .. (v) node[pos=0.5, below] {\small$e^i_{v,3,v}$};
\draw[->-] (w) .. controls +(-1, 1) and +(1, 1) .. (w) node[pos=0.5, above] {\small$e^i_{w,1,w}$};
\draw[->-] (w) .. controls +(1, 1) and +(1, -1) .. (w) node[pos=0.5, right] {\small$e^i_{w,2,w}$};
\draw[->-] (w) .. controls +(-1, -1) and +(1, -1) .. (w) node[pos=0.5, below] {\small$e^i_{w,3,w}$};
\end{tikzpicture}
\]
\caption{One of the singly-coloured subgraphs in the skeleton of $C_{k,2}$}\label{fig:skeleton}
\end{figure}
We must now specify factorisation rules. For distinct colours $i \not= j \le k$, the $ij$-coloured paths are those
of the form $e^i_{u,m,v}e^j_{v,n,w}$, and the factorisation rules must determine range-
and source-preserving bijections between $ij$-coloured paths and $ji$-coloured paths and
satisfy the associativity condition \cite[Equation~(3.2)]{HRSW}.
We define them in two cases:
\begin{itemize}
\item[(F1)] if $u,v \in V$ are distinct, then we define $e^i_{u, m, u}e^j_{u, n, v} = e^j_{u, n, v}e^i_{v, m, v}$.
\item[(F2)] if either $u = v = w$ or $u \not= v$ and $v \not= w$, we define $e^i_{u, m, v}e^j_{v, n, w} = e^j_{u, n, v}e^i_{v, m, w}$.
\end{itemize}
It is routine to check that these factorisation rules determine a complete collection of squares
as in \cite[p.578]{HRSW}. Routine but tedious calculations also verify the associativity condition
\cite[Equation~(3.2)]{HRSW}.
By \cite[Theorem~4.4]{HRSW} there is a $k$-graph $C_{k,R}$ whose skeleton is the coloured graph
$E$, and whose factorisation rules are given by (F1)~and~(F2). Since for all $v,w \in V$ we have
$v (C_{k,R})_{e_1} w \not= \emptyset$, it is immediate that $C_{k,R}$ is cofinal. To see that it
is aperiodic, let $B_3$ be the $1$-graph whose skeleton is a bouquet of $3$ loops at a
single vertex. Fix a vertex $v \in V$ and observe that the sub-$k$-graph generated by the edges
$\{e^i_{v,m,v} : i \le k\text{ and }m \le 3\}$ is isomorphic to the cartesian product
$\prod^k_{i=1} B_3$ of $k$ copies of $B_3$. Since the $1$-graph $B_3$ is aperiodic, so
is the $k$-graph $\prod^k_{i=1} B_3$, and so it contains an aperiodic infinite path. This is
then an aperiodic infinite path in $C_{k, R}$.
We now observe that for each $i \le k$, the matrix $(I - M(C_{k, R})^t_i)$ is the $(R+1) \times (R+1)$
integer matrix
\[
A_R := \left(\begin{matrix}
-2 & -2 & \cdots & -2 \\
-2 & -2 & \cdots & -2 \\
\vdots & \vdots & &\vdots \\
-2 & -2 & \cdots & -2
\end{matrix}\right).
\]
Consequently $H_k(C_{k, R}) = \ker(A_R) \cong \mathbb{Z}^R$.
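Indeed, every row of $A_R$ equals $(-2, \dots, -2)$, so for $x \in \mathbb{Z}^{R+1}$ we have
\[
A_R x = 0 \quad\Longleftrightarrow\quad \sum^{R+1}_{j=1} x_j = 0,
\]
and the solution set is a free abelian group of rank $R$.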
We have now proved the following.
\begin{corollary}
For each positive integer $k$, there exists a family $\{C_{k, R} : R \ge 2\}$ of $k$-graphs, such
that $\mathscr{G}(C_{k, R}) \cong \mathscr{G}(C_{k', R'})$ only if $k = k'$ and $R = R'$.
Moreover, for any $k, R$, the group $\mathscr{G}(C_{k,R})$ is not isomorphic to $\mathscr{G}(C')$
for any $l$-graph $C'$ with $l < k$.
\end{corollary}
\makeatletter
\def\section{\@startsection{section}{1}
\z@{0.8\linespacing\@plus\linespacing}{.7\linespacing}{\Large}}
\def\subsection{\@startsection{subsection}{2}
\z@{.5\linespacing\@plus.7\linespacing}{.7\linespacing}{\large}}
\def\subsubsection{\@startsection{subsubsection}{3}
\z@{.5\linespacing\@plus.7\linespacing}{-.5em}{\normalfont\bfseries}}
\makeatother
\setcounter{MaxMatrixCols}{10}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{proposition}{Proposition}[section]
\newtheorem{lemma}{Lemma}[section]
\newtheorem{corollary}{Corollary}[section]
\theoremstyle{definition}
\newtheorem{definition}{Definition}[section]
\theoremstyle{definition}
\newtheorem{assumption}{Assumption}[section]
\theoremstyle{definition}
\newtheorem{example}{Example}[section]
\newcommand{\hline \vspace{-1.5ex} \\}{\hline \vspace{-1.5ex} \\}
\newcommand{\\ \vspace{-1.8ex} \\ \hline \vspace{-1.5ex} \\}{\\ \vspace{-1.8ex} \\ \hline \vspace{-1.5ex} \\}
\newcommand{\\ \vspace{-1.8ex} \\ \hline}{\\ \vspace{-1.8ex} \\ \hline}
\setlength{\textwidth}{\paperwidth}
\setlength{\textheight}{\paperheight}
\addtolength{\textwidth}{-2in}
\addtolength{\textheight}{-2in}
\calclayout
\vfuzz4pt
\hfuzz4pt
\title{}
\begin{document}
\vspace*{5ex minus 1ex}
\begin{center}
\Large \textsc{A Decomposition Approach to Counterfactual Analysis in Game-Theoretic Models}
\bigskip
\end{center}
\date{%
\today%
}
\vspace*{3ex minus 1ex}
\begin{center}
Nathan Canen and Kyungchul Song\\
\textit{University of Houston and University of British Columbia}\\
\bigskip
\end{center}
\thanks{We thank Aimee Chin, Sukjin Han, Chinhui Juhn, Arvind Magesan, Daniela Scur, Eduardo Souza-Rodrigues, Ko Sugiura, Andrea Szabo, Xun Tang, and participants at many seminars and conferences for their helpful comments. All errors are ours. Song acknowledges that this research was supported by Social Sciences and Humanities Research Council of Canada. Corresponding address: Kyungchul Song, Vancouver School of Economics, University of British Columbia, Vancouver, BC, Canada. Email address: kysong@mail.ubc.ca.}
\fontsize{12}{14} \selectfont
\begin{bibunit}[econometrica]
\begin{abstract}
Decomposition methods are often used to produce counterfactual predictions in non-strategic settings. However, when the outcome of interest arises from a strategic setting where agents are better off by deviating from their strategies after a new policy, such predictions, despite their practical simplicity, are hard to justify. We present conditions in generic games such that decomposition-based predictions coincide with equilibrium-based ones. In many games, such coincidence follows from an invariance condition for the equilibrium selection rules. To illustrate our message, we revisit an empirical analysis in \cite{Ciliberto/Tamer:09:Eca} on firms' entry decisions in the airline industry.
\medskip
{\noindent \textsc{Key words:} Counterfactual Analysis, Game-Theoretic Models, Bayes Correlated Equilibria, Decomposition Method}
\medskip
{\noindent \textsc{JEL Classification: C30, C57}}
\end{abstract}
\maketitle
\bigskip
\bigskip
\bigskip
\bigskip
\section{Introduction}
One of the central goals of empirical research in economics is to quantify the effects of new policies yet to be implemented. Examples include analyzing the effects of increasing minimum wages on labor outcomes, the effects of different government legislation in healthcare and the effects of different market characteristics on firm entry, to name only a few.
To obtain appropriate counterfactual predictions in strategic environments, researchers typically specify a game-theoretic model, and estimate structural parameters of the model, the results of which are used for generating post-policy predictions. The main virtue of this approach is that its predictions are incentive compatible, that is, agents have no incentive to deviate from the strategies that yield post-policy predictions. However, it requires specification of the game to fine details, often made to address computational challenges in implementation. These challenges frequently arise from the presence of multiple equilibria in the game, which may result in identified sets that are too large for meaningful policy analysis. In fact, it is not uncommon that they produce opposite conclusions (e.g., see \cite{Aguirregabiria/Nevo:12:WP}, p.111).
An alternative approach for counterfactual analysis is to use decomposition methods. Such methods extrapolate the observed relationship between an outcome and an explanatory variable to a counterfactual environment. They are widely used in labor economics. For instance, a researcher studying the effects of minimum wages on wage inequality (e.g., \cite{DiNardo/Fortin/Lemieux:96:Eca}) would first estimate this relationship in the observed data, then use those estimates with different (counterfactual) minimum wages to evaluate the distribution of wages under the new policy. The decomposition-based approach has several practical merits: it is computationally simple, yields point-identified predictions, and does not rely on detailed specifications (e.g., of payoffs or unobserved heterogeneity). Furthermore, statistical inference is often straightforward: one can just use bootstrap. However, to the best of our knowledge, decomposition-methods have been used mostly in non-strategic settings.
What prevents us from using decomposition methods for counterfactual analysis in a strategic setting? Decomposition methods assume that the causal relationship between the outcome variable and the policy variable remains the same in the counterfactual environment. This assumption can be violated if an agent has an incentive to deviate from the strategies in the original game after a new policy. In such cases, one cannot put forth a decomposition-based prediction as a sound counterfactual prediction.
In this paper, we focus on the problem of generating counterfactual predictions in a strategic environment, where the policy of interest alters part of observed variables in the payoff function.\footnote{Throughout the paper, we focus on counterfactual analysis where a policy of interest changes an observed random vector constituting the payoff state, and the target of prediction is the action of the agents (or a known function of such actions). While this restriction covers a large set of policy analysis settings, it does exclude many important situations of counterfactual analysis such as those that focus on a change in the welfare. We clarify this restriction on the scope later in the paper.} For a formal analysis, we consider a generic game, which includes games with various information structures, with the solution concept of Bayes Correlated Equilibria of \cite{Bergemann/Morris:16:TE} (which includes Nash equilibria as a special case). We present a set of conditions under which the predictions obtained from the decomposition method coincide with the equilibrium-based predictions (i.e., those obtained from equilibria in a game). It turns out that the coincidence applies to both complete information and incomplete information games. It does not depend on whether one uses Nash equilibria as a solution concept or not. Rather, the coincidence depends on the class of counterfactual policies that are considered. The conditions for the validity of decomposition methods can be summarized as follows: (a) the policy alters only a publicly observed component of the payoff state (observed by players and the econometrician), (b) the policy keeps the payoff state within its support in the pre-policy game, and (c) the equilibrium selection rule conditional on a payoff state remains the same if the set of equilibrium actions conditional on the payoff state remains the same after the policy. When these conditions are met, researchers can use a decomposition-based prediction as a counterfactual prediction because this prediction is the same as the equilibrium-based prediction.
Condition (a) is satisfied by many policies, including those that affect taxes, tariffs or laws, which are often observed by all agents. Most of all, complete information games satisfy this condition immediately.
Condition (b) is also satisfied in many empirical applications. Even when the condition is not met, we show below that one can obtain bounds for the equilibrium-based prediction using decomposition-based predictions. These bounds are intuitive and easily implementable.
The invariance condition (c) in this paper requires that, for each value of the payoff state, if the pre-policy set of equilibrium actions and the post-policy set of equilibrium actions are the same, so are their selection probabilities. This condition, in a sense, requires the selection probability to be \textit{label-free}. When the set of equilibrium actions remains the same after the policy, the only change caused by the policy is in their labels: one labeled as ``pre-policy'' actions and the other as ``post-policy'' actions.
As we demonstrate in the paper, this invariance condition is already widely used in empirical work in various disguises. For example, this condition is implicitly used when one focuses on a specific equilibrium played (e.g., a Pareto superior equilibrium, or the most profitable for a specific firm as in \cite{Jia:08:Eca}), assumes that the same equilibrium is played after the policy (as discussed in \cite{Aguirregabiria/Mira:10:JOE}), or parametrizes the equilibrium selection rule and uses the estimated selection function for counterfactual analysis (e.g., \cite{Bajari/Hong/Ryan:10:Eca}). In their analysis of counterfactual predictions on games, \cite{Aguirregabiria/Mira:13:WP} explicitly considered an invariance condition for the equilibrium selection rule though differently from ours. On the other hand, there are alternative approaches that are fully agnostic about the equilibrium selection rule or even about the structure of the game, such as the partial identification approach of \cite{Ciliberto/Tamer:09:Eca} and \cite{Haile/Tamer:03:JPE}.
Hence, our results show the tradeoff that researchers face when they perform counterfactual analysis with policies satisfying the aforementioned assumptions. One can use the decomposition approach which is more computationally tractable and does not require specifying the fine details of the model, at the expense of assuming the invariance of the equilibrium selection rule. The right balance in the tradeoff will depend on the details of the empirical settings and the policy questions of interest. The primary contribution of our paper is to provide formal results that clarify this tradeoff for researchers, when they use game-theoretic models for counterfactual analysis with policies considered above.
Our results can be applied to many empirical settings, including counterfactual exercises in entry games where a policy changes part of the market characteristics (e.g., \cite{Jia:08:Eca}) or regulatory policy (e.g., \cite{Ciliberto/Tamer:09:Eca}), in auction markets where the focus is on the impact of a change in reserve prices on auction outcomes, in various policy settings in labor economics such as increases in taxes or minimum wages on employment, among others. (We provide details on the validity and its implementation below.) In all of these examples, under the invariance condition, it suffices to run a decomposition-based prediction to recover the equilibrium-based prediction without further assumptions on payoffs or distributions of unobserved heterogeneity.
Finally, we note that using decomposition-based predictions to point or interval-identify the equilibrium-based predictions does not eliminate entirely the need to use a game-theoretic model for counterfactual analysis. On the contrary, to check the validity of decomposition-based predictions for counterfactual analysis, one needs to clarify the strategic environment, the agents' information structure, and then consider what components of the strategic environment are invariant to the policy of interest. We believe that our results are useful for this step, as they show which specifications are needed (and which ones are not) for the use of decomposition-based predictions in such settings. Once the validity of decomposition-based predictions is confirmed, one does not need to specify further details of the game for counterfactual analyses.
As an illustration of our results, we revisit the empirical application of \cite{Ciliberto/Tamer:09:Eca}. Using a model of a complete information entry game, they studied the U.S. airline market and assessed the effects on entry from a repeal of the Wright amendment, a law that restricted entry in routes using Dallas Love Field Airport. Due to the multiplicity of equilibria, they pursued a set identification approach and reported maximum entry probabilities as counterfactual predictions. In our application, we produce a decomposition-based prediction and compare it with the prediction from \cite{Ciliberto/Tamer:09:Eca}. We also compare these predictions with the actual results following the repeal of the Wright amendment in 2014. We find that the decomposition-based predictions using the pre-policy data perform well out-of-sample.
As a second empirical application, we follow \cite{Goolsbee/Syverson:08:QJE} and revisit the effects of Southwest Airlines' threat of entry on its competitors' decisions. Our focus, though, is on its effects on competitors' entry in a strategic environment, rather than prices. We find that other airlines' entry behavior is consistent, on average, with deterrence. However, this effect is heterogeneous across markets, and driven primarily by small markets. In large markets, they appear to ``accommodate'' Southwest's entry (i.e., by not changing their own entry behavior relative to an environment without a threat of Southwest entry). This is consistent with evidence in other industries (e.g., \cite{Tenn/Wendling:14:ReStat}). We then extend this exercise to settings where multiple competitors potentially threaten entry into the market.
\medskip\medskip
\noindent \textbf{Related Literature}\medskip
The idea that one may not need to identify a full structural model to identify policy effects of interest goes back at least to \cite{Marschak:53:Cowles}, as pointed out by \cite{Heckman:10:JEL}. See \cite{Wolpin:13} for further examples and discussion. Recent approaches exploring similar insights include the sufficient statistics approach (most notably in public finance, e.g., \cite{Chetty:09:ARE}; see \cite{Kleven:20:WP} for a recent overview) and policy relevant treatment effects proposed by \cite{Ichimura/Taber:00:NBER} and \cite{Heckman/Vytlacil:01:AERPP}. See \cite{Heckman:10:JEL} for a review of this literature. We convey a similar message in this paper by studying conditions for the validity of decomposition-based predictions in strategic settings.
The decomposition approach is widely used in economics, most notably in labor economics. In the decomposition approach, the researcher estimates their model, and then decomposes the variation in the outcome into the effects from different covariates. One example is the study of the effects of minimum wages on wage inequality (e.g., \cite{DiNardo/Fortin/Lemieux:96:Eca}): while there might be other mechanisms that affect wage inequality (e.g., unionization), once the model has been estimated, one can check how the distribution of wages would have differed if minimum wages had increased. Salient examples of the decomposition approach include the early work of \cite{Oaxaca:73:IER} and \cite{Blinder:73:JHR}, \cite{Juhn/Murphy/Pierce:93:JPE}, the nonparametric/semiparametric approach of \cite{DiNardo/Fortin/Lemieux:96:Eca}, and other extensions which have been very widely applied, see \cite{Fortin/Lemieux/Firpo:11:Handbook} for a survey. The connection between the decomposition methods and methods of program evaluations have been pointed out by \cite{Fortin/Lemieux/Firpo:11:Handbook}. \cite{Kline:11:AERPP} shows that the estimated decomposition based prediction can be interpreted as a reweighting estimator in the program evaluation setting.
More relevant to our paper is the decomposition approach used for counterfactual policy predictions. \cite{Rothe:10:JoE} and \cite{Chernozhukov/Fernandez/Melly:13:Eca} provided inference on the full counterfactual distributions. \cite{Hsu/Lai/Lieli:22:JBES} introduced the quantile counterfactual treatment effects on a different population, and showed how one can identify and perform inference on the treatment effects using an invariance condition. While the main emphasis of this literature is on the general method of inference on various counterfactual distributions, our focus is on presenting a generic set of conditions in game-theoretic models under which such decomposition-based predictions can be accepted as valid predictions.
A common way to generate counterfactual predictions in a strategic environment is to specify and estimate a game-theoretic model, and then use the predictions from its equilibria. In many cases, point-identification of these models is not possible due to multiple equilibria. Researchers often either choose one equilibrium from the game (e.g \cite{Jia:08:Eca}) or conduct inference on the identified set allowing for all equilibria (e.g., \cite{Ciliberto/Tamer:09:Eca}). In light of these challenges, a recent literature studies point-identification of counterfactual predictions without identifying the full details of the model. These developments have been centered around dynamic discrete choice models – e.g., \cite{Aguirregabiria/Suzuki:14:QME}, \cite{Norets/Tang:14:ReStud}, \cite{Arcidiacono/Miller:20:JOE}, \cite{Kalouptsidi/Scott/Souza:20:QE}. While many structural models within this class are unidentified - see the discussion in \cite{Aguirregabiria/Suzuki:14:QME}, some counterfactuals are point-identified such as those characterized by linear changes in payoffs (the so called ``additive transfers counterfactuals'' in \cite{Kalouptsidi/Scott/Souza:17:IJIO}). See also \cite{Kalouptsidi:Kitamura:Lima:Souza:20:NBER} for partial identification of counterfactuals in a similar context. \cite{Jun/Pinkse:20:JOE} explored various approaches to produce a point-decision on partially identified counterfactual predictions from a game.
The message in this paper is related to that of \cite{Kocherlakota:19:JME}. Using a model of a dynamic game between the private sector and the government, he showed that to obtain an optimal policy, one can just use predictions by regressing policy payoffs on policies using historical data, without relying on a structural macroeconomic model. There are major differences between his framework and ours. First, his model is a dynamic model where the policy-maker is a player of the game, whereas ours is a static one in which the policy-maker is outside the model. Second, his model is a macroeconomic model where the analysis is based on one equilibrium which generated the data. In our setting, many independent copies of a static game are observed so we need to deal with the problem of multiple equilibria. Third, in our framework, the information structure of a game plays a prominent role, whereas its role is limited in his dynamic model where the forward looking private sector needs to form beliefs about future government policies.
There is a small body of recent literature that uses the solution concept of Bayes Correlated Equilibrium (BCE) in an empirical setting. \cite{Magnolfi/Roncoroni:20:WP} adopt it in their study of entry decisions in the supermarket sector. They focus on characterizing the identified set obtained under a weak assumption on the information structure of the game, and use the set to study a policy that changes a covariate (presence of large malls). \cite{Bergemann/Brooks/Morris:22:AER} present a general framework to produce a set of counterfactual predictions in games exploiting the requirement that observations be consistent with some unknown information structure of the games. Their counterfactual policies are concerned with a change of payoff functions or the set of action profiles, whereas in our setting, the policies are mainly a change in the distribution of the payoff state vector. More importantly, our emphasis is on the use of decomposition-based predictions without requiring the econometrician to specify details on the payoff functions and the distribution of the payoff states. Other empirical examples using BCE and obtaining partial identification include \cite{Gualdani/Sinha:20:WP} on discrete choice models and \cite{Syrgkanis/Tamer/Ziani:18:WP} on auctions. \medskip
This paper is organized as follows. In Section 2, we give an overview of our main insights using a simple entry game model. In Section 3, we present formal results of the validity of decomposition-based predictions for generic Bayesian games, and provide discussions and extensions. In Section 4, we provide empirical applications. Section 5 concludes. The mathematical proofs, details on generalizations of our results, details on the data and the implementation of decomposition-based predictions are found in the Online Appendix to this paper.
\bigskip
\section{Overview}
Sections \ref{subsubsec:simple1}-\ref{subsec:main_results} below provide an overview of our main results in the context of a simple, complete information, two-player entry game model, which has been widely used in the empirical literature (e.g., \cite{Ciliberto/Tamer:09:Eca}, \cite{Jia:08:Eca}, \cite{Grieco:14:RAND} and \cite{Magnolfi/Roncoroni:20:WP}).
\subsection{A Simple Entry Game}\label{subsubsec:simple1}
\subsubsection{A Two-Player Entry Game}
\label{subsubsec: a simple entry game}
Consider a complete information game where two firms, $i=1,2$, choose a binary action $y_i \in \{0,1\}$, $y_i = 1$ representing entry in the market, and $y_i=0$ staying out of the market. The payoff for firm $i$ is given by:
\begin{equation}
u_i(y,W_i) = y_i(\delta y_{-i} + X_i'\beta_i + \varepsilon_i), \quad W_i = (X_i,\varepsilon_i),\label{utility_entry}
\end{equation}
where $\delta<0$ is a parameter measuring the effect of competition and $W_i$ is the payoff state with $X_i$ and $\varepsilon_i$ denoting the characteristics of firm $i$ observable and unobservable by the econometrician. We let $X = (X_1,X_2)$ and $\varepsilon = (\varepsilon_1,\varepsilon_2)$. For the purpose of illustration, we assume that: (i) $X$ and $\varepsilon$ are independent (as commonly assumed in empirical applications), (ii) that $X_i'\beta_i + \varepsilon_i + \delta < 0 < X_i'\beta_i + \varepsilon_i$, so that competition in a market is detrimental to firms, and (iii) the observed outcomes are generated from a pure-strategy Nash equilibrium.\footnote{Our main results in Section \ref{sec3} do not require assumptions (i)-(iii).}
There are two pure-strategy Nash equilibria in this game, characterized by one firm entering, but the other staying out. More specifically, the set of equilibria is given by
\begin{align*}
\mathcal{E}= \{g^1, g^2\},
\end{align*}
where, writing $w = (w_1,w_2)$, $g^1(w) = (1- 1_{A_{0,1}}(w), 1_{A_{0,1}}(w))$ and $g^2(w) = (1_{A_{1,0}}(w),1-1_{A_{1,0}}(w))$, with
\begin{align*}
A_{0,1} &= \{(w_1,w_2) : x_1'\beta_1 + \varepsilon_1 + \delta < 0, x_2'\beta_2 + \varepsilon_2 \ge 0\}, \text{ and }\\
A_{1,0} &= \{(w_1,w_2) : x_1'\beta_1 + \varepsilon_1 \ge 0, x_2'\beta_2 + \varepsilon_2 + \delta < 0\}, \quad w_i = (x_i, \varepsilon_i), i=1,2.
\end{align*}
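For a concrete illustration, suppose each $X_i$ is scalar with $\beta_i = 1$ and $\delta = -2$, and consider a market where $(x_1 + \varepsilon_1, x_2 + \varepsilon_2) = (1, 1.5)$. Then
\[
x_1 + \varepsilon_1 + \delta = -1 < 0 \le x_2 + \varepsilon_2
\quad\text{and}\quad
x_2 + \varepsilon_2 + \delta = -0.5 < 0 \le x_1 + \varepsilon_1,
\]
so $w \in A_{0,1} \cap A_{1,0}$, and hence $g^1(w) = (0,1)$ and $g^2(w) = (1,0)$: either firm can be the sole entrant in equilibrium at this payoff state.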
\medskip
\subsubsection{Data Generation}
We first introduce the causal structure of data generation in which counterfactual predictions arise naturally as a causal parameter. Later we discuss the point- or interval-identification of the predictions from data. As often done in the empirical literature, we assume a setting with many markets where the data are generated from the entry game independently across the markets. Let $Y = (Y_1,Y_2)$ be the pair of entry indicators by firms $1$ and $2$, and define $W = (W_1, W_2)$, $W_i = (X_i, \varepsilon_i)$, the pair of payoff states for the firms. The econometrician observes $Y$ and $X$, but not $\varepsilon$. The data generation is described as follows:\medskip
Step 1: The payoff state $W$ is drawn (i.i.d. across the markets) from a common distribution.
Step 2: The outcome $Y$ is generated i.i.d. across the markets from the conditional probability:
\begin{align*}
P\{Y = (y_1,y_2) \mid W\} = \sum_{g \in \mathcal{E}} 1\{g(W) = (y_1,y_2)\}e(g \mid W),
\end{align*}
where $e(\cdot \mid W)$ denotes the equilibrium selection rule for given $W$.\medskip
The equilibrium selection rule is introduced in a generic form, solely to explain our observing only one realization of $(Y,X)$ for each market in the data, when the model predicts a multiplicity of possible outcomes due to multiple equilibria.
The causal interpretation of a counterfactual prediction in this paper comes from such a data generation process and invariance conditions. For example, suppose that the set of equilibria and the equilibrium selection rule remain the same after the policy of ``fixing'' $W$ to be $w'$. Then, the predicted entry probability of firm 1 is given by
\begin{align*}
P\{Y = (y_1,y_2) \mid W = w'\} = \sum_{g \in \mathcal{E}} 1\{g(w') = (y_1,y_2)\}e(g \mid w').
\end{align*}
In practice, one needs to introduce further restrictions to identify this probability from data because the payoff state $W$ involves $\varepsilon$, and we discuss this later.
\subsubsection{Counterfactual Policy Predictions}
Our focus is on predicting the entry probability for the firms when a policy changes the payoff state $(X,\varepsilon)$ into $(f(X),\varepsilon)$, for some map $f$. Policies in such a form have often been studied in the literature. For example, \cite{Ciliberto/Tamer:09:Eca} studied a change in an observable policy (modeled as a dummy variable in the payoff state), \cite{Jia:08:Eca} a change in the market size, \cite{Grieco:14:RAND} a change in the presence of a supercenter, and \cite{Magnolfi/Roncoroni:20:WP} a change in large malls.\footnote{While our primary focus is on policies that change the payoff state vector $X$, some situations with a policy that changes a parameter of the model fall within our framework. For example, suppose that the payoff state takes the form: $W_i = (X_{i,1} \beta_i, X_{i,2}'\theta_i, \varepsilon_i)$, $X_i = (X_{i,1}, X_{i,2})$, and the policy changes $\beta_i$ to be zero, or $1/2$. This policy is equivalent to the policy that fixes the value of the corresponding entry of $X_{i,1}$ to be zero, or the half of its value, with the coefficient $\beta_i$ kept the same.}
Given policy $f$, our target parameter is what we call the \bi{average equilibrium-based prediction (AEP)} of entry defined as follows:
\begin{align}
\label{av pred}
\mathsf{AEP} \equiv \sum_{g \in \mathcal{E}_f} \mathbf{E} \left[g(X,\varepsilon) e_f(g \mid X,\varepsilon) \right],
\end{align}
where $e_f$ denotes the equilibrium selection rule, $\mathcal{E}_f$ the set of equilibria after the policy $f$ and the expectation is over the joint distribution of $(X,\varepsilon)$. The AEP corresponds to the average entry probabilities of the two firms as predicted by the entry game model after the (observable) payoff state is changed from $X$ into $f(X)$.\footnote{One could consider other parameters of interest, including any (known) function of outcomes. See Section \ref{sec3}.}
The main difficulty in recovering this prediction from data lies in recovering the equilibria, $\{g^1,g^2\}$, and the equilibrium selection rule, $e_f$. Without any assumption about the equilibrium selection rule, $g^1$ and $g^2$ are typically only set-identified, since the econometrician cannot infer whether an equilibrium is played because the payoff parameters take a certain value or because of the equilibrium selection mechanism involved in the data generation. This further complicates the implementation of this prediction using data.
\subsection{The Main Results}\label{subsec:main_results}
\subsubsection{Bounds for the Average Equilibrium-Based Prediction}
\label{subsubsec: suff conds for validity}
In this context of a complete information entry game, we give a general result which shows that, under an invariance condition for equilibrium selection rules, the AEP has bounds which often do not require us to specify and recover $g^1$ and $g^2$ from data. To introduce some notation, let us define the \bi{average decomposition-based prediction (ADP):}
\begin{align*}
\mathsf{ADP} \equiv \sum_{g \in \mathcal{E}} \mathbf{E}\left[ g(f(X),\varepsilon)e(g \mid f(X), \varepsilon)1\left\{ f(X) \in \mathbb{S}_{X} \right\} \right],
\end{align*}
where $\mathbb{S}_X$ denotes the support of $X$. As compared to using the AEP, the practical advantage of using the ADP is clear. Because $X$ and $\varepsilon$ are independent, the ADP is point-identified as follows:
\begin{align*}
\mathsf{ADP} = \mathbf{E}\left[ m(f(X))1\left\{ f(X) \in \mathbb{S}_{X} \right\} \right],
\end{align*}
where $m(x) = \mathbf{E}[Y \mid X = x]$. This conditional expectation is based on the joint distribution of $(Y,X)$ from the pre-policy game. Therefore, it extrapolates the pre-policy distribution of $(Y,X)$ to the counterfactual environment. Unlike the AEP, the identification of the ADP does not require recovering $g^1$ and $g^2$ from data. This means that, within the class of counterfactual policies we consider (e.g., changes to the payoff states), it is not necessary to specify or recover details of the game such as the functional form specifications of payoffs and unobserved heterogeneity. Thus, one does not need to worry about misspecifications of such details. Furthermore, there is no need to compute the set of equilibria under a new policy, which substantially reduces computational burden in practice.
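To see why this identification holds, note that for any $x \in \mathbb{S}_X$, the independence of $X$ and $\varepsilon$ implies
\[
m(x) = \mathbf{E}[Y \mid X = x] = \sum_{g \in \mathcal{E}} \mathbf{E}\left[g(x,\varepsilon)e(g \mid x,\varepsilon)\right],
\]
so evaluating $m$ at $f(X)$ on the event $\{f(X) \in \mathbb{S}_X\}$ and averaging over the pre-policy distribution of $X$ yields the expression defining the ADP.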
Despite the apparent advantages of the ADP, it is not clear whether using it in a strategic setting can be justified, unless one successfully relates it to the AEP.\footnote{In the Online Appendix, we give a concrete example where the ADP and the AEP are different, and the ADP cannot be used in producing a valid prediction.} Our main result establishes the following bounds for the AEP:
\begin{align}
\label{bound2}
\mathsf{ADP}
\le \mathsf{AEP} \le \mathsf{ADP} + \mathsf{EB} \times \mathbf{1},
\end{align}
where\footnote{In this context of the two-firm entry game, AEP and ADP are $2 \times 1$ vectors, the inequalities are point-wise, and $\mathbf{1}$ is the $2 \times 1$ vector of ones.}
\begin{align}
\label{EEB}
\mathsf{EB} \equiv P\left\{f(X) \notin \mathbb{S}_X \right\}.
\end{align}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.37]{Extrapolation_Error4.pdf}
\caption{The Error Bound for the Decomposition-Based Prediction}
\parbox{5.8in}{\footnotesize The figure illustrates the derivation of the set $f^{-1}(\mathbb{S}_{f(X)} \setminus \mathbb{S}_X)$ in EB in (\ref{EEB}). The set $f^{-1}(\mathbb{S}_{f(X)} \setminus \mathbb{S}_X)$ is a subset of $\mathbb{S}_X$. Here, the set $\mathbb{S}_X$ denotes the support of the payoff state $X$ in the pre-policy game $G$. When the policy sends the payoff states further outside the support $\mathbb{S}_{X}$ in the data so that the set $\mathbb{S}_{f(X)} \setminus \mathbb{S}_X$ becomes larger, the EB that is the probability of $X \in f^{-1}(\mathbb{S}_{f(X)} \setminus \mathbb{S}_X)$ increases.}
\label{fig: eeb}
\end{center}
\end{figure}
The result shows that the AEP is bounded by terms involving the ADP, which we can estimate without relying on any further details of the game.
The length of the interval depends on the size of \textsf{EB}. The term represents the probability that the policy sends the payoff state outside of its support in the pre-policy game. When the policy does not send the payoff state outside of the support (i.e., $f(X) \in \mathbb{S}_X$ with probability one), we have $\textsf{EB} = 0$ and obtain the following identification:
\begin{align*}
\mathsf{AEP} = \mathsf{ADP}.
\end{align*}
The term \textsf{EB} is identified and can be estimated because $X$ is observable. It tends to increase as the policy $f$ transforms the payoff state further away from the support of the payoff state in the pre-policy game $G$, as illustrated in Figure \ref{fig: eeb}. We call this term the \bi{Error Bound (EB)} for the decomposition-based prediction. The EB represents the maximal extrapolation error when one does not rely on the functional form restrictions in the specification details of the game, such as payoff functions and distributional assumptions on the unobserved heterogeneity. This bound can still be useful even when such parametrizations are imposed, as it shows the extent to which the latter restrictions drive the results.
As an illustration, consider the policy which changes $X = (X_1,X_2)$ into
\begin{align*}
f(X) = (X_1 + a_1, X_2 + a_2), \quad a_1, a_2 >0.
\end{align*}
That is, the policy changes the observed payoff state $X_i$ of each firm $i$ to $X_i + a_i$ for some number $a_i>0$. Suppose further that the pre-policy support of $X_i$ is given by $[x_{L,i}, x_{U,i}]$. We define $m_i(x_1,x_2) = \mathbf{E}[Y_i \mid (X_1,X_2) = (x_1,x_2)]$, $i=1,2$. Then, we have
\begin{align*}
\mathsf{ADP} = \left(\begin{array}{c}
\mathbf{E}\left[ m_1(X_1 + a_1, X_2 + a_2) 1\left\{ X_i \in [x_{L,i} - a_i, x_{U,i} - a_i], \text{ for all } i = 1,2\right\}\right]\\
\mathbf{E}\left[ m_2(X_1 + a_1, X_2 + a_2) 1\left\{ X_i \in [x_{L,i} - a_i, x_{U,i} - a_i], \text{ for all } i = 1,2\right\}\right]
\end{array}
\right)
\end{align*}
and
\begin{align*}
\mathsf{EB} &= P\left\{ X_1 \notin [x_{L,1} - a_1, x_{U,1} - a_1], \text{ or } X_2 \notin [x_{L,2} - a_2, x_{U,2} - a_2] \right\}\\
&=P\left\{ x_{U,1} - a_1 < X_1, \text{ or } x_{U,2} - a_2 < X_2 \right\}.
\end{align*}
Since the probability in the EB is with respect to the pre-policy distribution of $X = (X_1,X_2)$, one can easily see that the EB increases as $a_1$ or $a_2$ increases.
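For instance, if $X_1$ and $X_2$ were independent and each uniformly distributed on $[0,1]$, a shift of $a_1 = a_2 = 0.1$ would give
\[
\mathsf{EB} = 1 - P\{X_1 \le 0.9\}P\{X_2 \le 0.9\} = 1 - 0.81 = 0.19,
\]
so the bounds in (\ref{bound2}) would pin down the AEP only up to an interval of length $0.19$.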
\subsubsection{Invariance Condition}
The key condition underpinning the bound results in (\ref{bound2}) is that the equilibrium selection rules satisfy an invariance property as we explain now. First, with $w = (x,\varepsilon)$, we let
\begin{align*}
\mathcal{E}(w) = \{g(w): g \in \mathcal{E}\}, \quad w \in \mathbb{S}_W,
\end{align*}
where $\mathbb{S}_W$ denotes the pre-policy support of $W = (X,\varepsilon)$. Hence, $\mathcal{E}(w)$ represents the $w$-section of $\mathcal{E}$ before the policy, which is essentially the set of actions supported by an equilibrium when the payoff state $W$ is fixed to be $w$ in the pre-policy game. Let $e_f(\cdot\mid w)$ be the equilibrium selection rule of the game after the policy $f$. Define the set of post-policy equilibria to be $\mathcal{E}_f$, and its $w$-section $\mathcal{E}_{f}(w)$. Then, the invariance condition that justifies the bounds is as follows:\medskip
\noindent \textsc{Invariance Condition: } For any $w \in \mathbb{S}_W$ such that $\mathcal{E}(w) = \mathcal{E}_{f}(w)$, we have $e(\cdot\mid w) = e_{f}(\cdot\mid w)$.\medskip
The invariance condition requires that, if the equilibrium action profiles remain the same after the policy \textit{at the same payoff state $w$}, their selection probability remains the same at the payoff state as well. In other words, once the payoff state is realized, and the set of action profiles in equilibrium in the post-policy game is the same as that in the pre-policy game, the action profile in the post-policy game is selected with the same probability as in the pre-policy game. The invariance condition is satisfied if the equilibrium selection probability depends only on the value of payoff state $w$ regardless of whether we are in the pre-policy or post-policy regime.
This invariance condition is already satisfied by many equilibrium selection rules used empirically. This includes the common assumption that the same equilibrium is played in the data and in the counterfactual (see \cite{Aguirregabiria/Mira:10:JOE}). Relatedly, many papers choose a specific equilibrium which is analyzed in both the data and the counterfactual. This includes the highest profit equilibrium for a certain firm (\cite{Jia:08:Eca}), the Pareto superior equilibrium (see \cite{DePaula:13:ARE}), or the equilibrium studied by \cite{Milgrom/Weber:82:ECMA} in common value auctions. Furthermore, this invariance condition already holds when an equilibrium selection rule is kept the same for counterfactual predictions (e.g., in \cite{Bajari/Hong/Ryan:10:Eca}, or as assumed in \cite{Berry/Haile:14:Ecma}). It is also satisfied if it is deemed constant across the equilibrium in the data and counterfactual (e.g., \cite{Aguirregabiria:12:EL}, applied in \cite{Aguirregabiria/Ho:12:JOE}). Our invariance condition is closely related to the one used in \cite{Aguirregabiria/Mira:13:WP}. Their invariance condition requires that the equilibrium selection rule depend on $(w,\theta)$ only through the payoff function evaluated at $(w,\theta)$. On the other hand, our invariance condition is satisfied when the equilibrium selection rule is determined by the payoff state $w$ only.
Sometimes the invariance condition may be reasonable in a setting with small changes to the payoff state (e.g., small increases in minimum wages), because small changes can be well approximated by keeping the same game environment. In fact, many of the policy experiments in entry games in empirical research are ``small policies'' which change payoff states (or the set of affected players) by only a small amount, such as 10-15\% changes in observed policy variables (as opposed to, say, doubling). For example, \cite{Jia:08:Eca} considered the increase in market size by a small amount, 10\%, and \cite{Magnolfi/Roncoroni:20:WP} considered a change in a market characteristic affecting only 13\% of markets. This local argument is also used in \cite{Aguirregabiria:12:EL} and \cite{Aguirregabiria/Ho:12:JOE}, since they explore local approximations of the counterfactual values around the data.
\subsection{The Scope of Decomposition-Based Predictions\label{subsec:limitations}}
So far our overview has focused on the two-player complete information entry game where the solution concepts are pure strategy Nash equilibrium, using a policy that affects an observable payoff state. How far does the bound approach extend beyond this setting?
It turns out that the validity of our approach extends to various incomplete information games, and beyond the solution concept of pure strategy Nash equilibria. We demonstrate this by considering a generic Bayesian game which includes complete information games and private information games as special cases, and by adopting the general solution concept of Bayes Correlated Equilibria (BCE) of \cite{Bergemann/Morris:16:TE}. This solution concept is quite flexible as it includes mixed strategy Nash equilibria or, more generally, correlated equilibria as special cases.
Furthermore, the approach extends to the situation where the outcome variable is not an action, but a measure of aggregate welfare or profit, as long as the latter is identified from the distribution of observed actions. It further extends to cases where $X$ and $\varepsilon$ are potentially correlated, but a control function exists.
Despite the wide applicability of our results, there are important examples that the scope of our proposal does not cover. First and foremost, our main results restrict the counterfactual policies to those that affect the observed payoff state. In doing so, we keep other aspects of the environment unchanged after the policy. For instance, the results do not generally apply when the counterfactual policy alters payoff functions, the action space, or the set of players (e.g., mergers). The latter cases include \cite{Hortacsu/McAdams:10:JPE}, who study the effects of different auction formats for selling treasuries on bidder expected surplus, and \cite{Roberts/Sweeting:13:AER} who study the effect of changing the mechanism by which a good is sold (from an auction set-up to a sequential bidding design) on expected revenues and payoffs in an incomplete information game. A change in the mechanism alters (unobserved) payoffs and expected revenues.
In other cases, researchers are interested in counterfactuals involving a change in the agents' choice sets. For example, \cite{Keane/Wolpin:10:IER} present and estimate a model where women choose labor supply, fertility and welfare participation, among other outcomes. One of their counterfactuals eliminates a welfare program (and hence, the agents' possibility to choose to participate in it) to study the program's effects on labor supply across racial groups. The results on decomposition methods in this paper do not apply in this context either. See \cite{Kalouptsidi/Scott/Souza:20:QE} for further examples and some identification results when the policy changes agents' choice sets.
Our paper's framework takes a policy variable among the payoff states $W$, not among the endogenous outcomes $Y$ in the game. However, in some applications, one may be interested in the counterfactual analysis which involves a policy that changes an endogenous outcome variable, such as a policy that changes the price in a simultaneous system of equations for price and quantity. Our framework excludes such counterfactual analysis.
Finally, our framework focuses on the effect of a policy on the observable outcomes of the game. It extends to settings where the target of prediction is identified from the distribution of the equilibrium outcomes (see the Online Appendix for examples). Thus our framework excludes counterfactual analysis of a policy's effect on the welfare or the profits of the agents in the game, where the identification of the welfare or the profits require identification of structural parameters in the first place.
\section{Counterfactual Analysis Using Game-Theoretic Models\label{sec3}}
\subsection{A Finite-Player Bayesian Game}
\label{subsec: a finite-player bayesian game}
We introduce a finite player Bayesian game following \cite{Bergemann/Morris:16:TE} (BM, hereafter).\footnote{\cite{Bergemann/Morris:16:TE} considered the case where the state space and the signal space are finite sets for simplicity. In this paper, we consider more general spaces for the state and signal spaces, because the econometrician's models often involve both discrete and continuous variables.} In our model, the Bayesian game is populated by a finite set of players indexed by $N=\{1,...,n\}$. Let $\mathbb{W}$ denote the set from which the payoff state $W$ takes value, and $\mathbb{Y} = \mathbb{Y}_1 \times ... \times \mathbb{Y}_n$ the set from which the action profile $Y = (Y_1,...,Y_n)$ takes values. Each player's action space $\mathbb{Y}_i$ can be a continuum or a countable set.
Each player $i$ is endowed with a payoff function $u_i: \mathbb{Y} \times \mathbb{W} \rightarrow \mathbf{R}$, and chooses an action from $\mathbb{Y}_i$. Let the payoff state $W$ be drawn from a distribution $\mu_W$ on $\mathbb{W}$.\footnote{We assume that $\mathbb{Y} \times \mathbb{W}$ is endowed with a topology and a Borel $\sigma$-field. Throughout the section, we suppress measure-theoretic qualifiers, such as Borel sets, measurable functions, or a statement holding almost everywhere. Details of the mathematical set-up are found in the Online Appendix.} Following BM, we call $B = (\mathbb{Y},\mathbb{W},u,\mu_W)$ a \bi{basic game}, where $u = (u_i)_{i \in N}$ denotes the payoff profile. Each player $i$ observes a signal vector $T_i$ taking values in the space $\mathbb{T}_i\subset \mathbf{R}^{d_T}$, $d_T \ge 1$. Define the signal profile $T=(T_1,...,T_n) \in \mathbb{T}$, where $\mathbb{T} = \mathbb{T}_1 \times...\times \mathbb{T}_n$. Once the payoff state is realized to be $w \in \mathbb{W}$, the signal profile $T$ is drawn from a distribution $\mu_{T|W}(\cdot\mid w)$ on $\mathbb{T}$. As in BM, we call $I = (\mathbb{W},\mathbb{T},\mu_{T|W})$ an \bi{information structure}. A \bi{Bayesian game} $G$ consists of the basic game and information structure, i.e., $G = (B,I)$. A Bayesian game can be a complete information game or an incomplete information game depending on the information structure $I$. We will give examples of various information structures later.
Let us introduce strategies. First let $\sigma(\cdot \mid w,t)$ be a conditional distribution on $\mathbb{Y}$, when the payoff state and the signal profile are realized to be $(w,t) \in \mathbb{W}\times \mathbb{T}$. Let $\Sigma$ be a collection of such conditional distributions. For each $i=1,...,n$, let $\sigma_i(\cdot \mid w,t)$ be the $i$-th coordinate marginal conditional distribution of $\sigma(\cdot \mid w,t)$. Following BM, we call each $\sigma \in \Sigma$ a \bi{decision rule}.
For each $i=1,...,n$, each $t_i \in \mathbb{T}_i$, each $\sigma \in \Sigma$, and each transform $\tau_i: \mathbb{Y}_i \rightarrow \mathbb{Y}_i$, we write the expected payoff of player $i$ as
\begin{eqnarray*}
U_i(\tau_i,t_i;\sigma) = \int \int u_i(\tau_i(y_i),y_{-i},w)d\sigma(y_i,y_{-i}|w,t_i,t_{-i}) d\mu_{W,T_{-i}|T_i}(w,t_{-i} \mid t_i),
\end{eqnarray*}
where $\mu_{W,T_{-i}|T_i}(\cdot,\cdot \mid t_i)$ denotes the conditional distribution of $(W,T_{-i})$ given $T_i = t_i$ under the joint distribution $\mu_{W,T}$ of $(W,T)$ obtained from $(\mu_{T|W},\mu_W)$. The quantity $U_i(\tau_i,t_i;\sigma)$ denotes the conditional expected payoff of player $i$ given her signal $T_i = t_i$ when the player $i$ deviates from the action $y_i$ recommended to her according to the decision rule $\sigma$ and chooses $\tau_i(y_i)$ instead. We say that a decision rule $\sigma \in \Sigma$ is a \bi{Bayes Correlated Equilibrium (BCE)} for $G$ if for each $i=1,...,n$, and each $\tau_i: \mathbb{Y}_i \rightarrow \mathbb{Y}_i$ and $t_i \in \mathbb{T}_i$,
\begin{eqnarray}
U_i(\mathsf{Id},t_i;\sigma) \ge U_i(\tau_i,t_i;\sigma),
\end{eqnarray}
where $\mathsf{Id}$ is the identity map on $\mathbb{Y}_i$. Denote by $\Sigma_{\mathsf{BCE}}(G)$ the set of BCE's of game $G$. BCE generalizes other solution concepts, such as Bayes Nash equilibria. We demonstrate later how our result carries over to these other solution concepts.
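For example, a pure strategy Bayes Nash equilibrium given by strategies $s_i: \mathbb{T}_i \rightarrow \mathbb{Y}_i$, $i=1,...,n$, induces the decision rule $\sigma$ that, given $(w,t)$, places probability one on the action profile $(s_1(t_1),...,s_n(t_n))$; the best-response property of each $s_i$ implies that no transform $\tau_i$ can raise $U_i(\cdot,t_i;\sigma)$, so this decision rule belongs to $\Sigma_{\mathsf{BCE}}(G)$.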
\subsection{Counterfactual Predictions from a Game}
\subsubsection{Predictions from a Bayesian Game\label{subsec:predictions}}
To generate predictions from a game $G$, the econometrician only needs to know the distribution of the observed action profile $Y$ conditional on the payoff state $W$, when such an action profile is generated by an equilibrium of the game $G$.
Given $\sigma \in \Sigma$ and the information structure $I = (\mathbb{W},\mathbb{T},\mu_{T|W})$, we define a probability measure $\rho_\sigma(A\mid w)$ on $\mathbb{Y}$ for each $w \in \mathbb{W}$ as follows:\footnote{The conditional probability $\rho_\sigma$ corresponds to what \cite{Bergemann/Morris:16:TE} called an ``outcome induced by the decision rule $\sigma$'' in their paper with the finite signal space $\mathbb{T}$.} for all $A \subset \mathbb{Y}$,
\begin{eqnarray}
\rho_\sigma(A\mid w) \equiv \int \sigma(A \mid w,t)d\mu_{T|W}(t\mid w).
\end{eqnarray}
Hence $\rho_\sigma(A\mid w)$ indicates the probability of the action profile being chosen from $A$ when the payoff state is $W = w$, and the actions are drawn by the decision rule $\sigma$.
The econometrician observes only one action profile $Y$ from the game $G$, even when there are multiple equilibria in this game. In order to complete the description of how $Y$ is generated, let us introduce a generic form of the equilibrium selection rule. For a game $G$, and $w \in \mathbb{W}$, we define the \bi{equilibrium selection rule} (denoted by $e_G(\cdot\mid w)$) to be a conditional distribution on $\Sigma_{\mathsf{BCE}}(G)$ given $W = w$. Thus, the generation of $Y$ is described as follows:\medskip
Step 1: The value of the payoff state $W=w$ is drawn from the distribution $\mu_W$.
Step 2: An equilibrium $\sigma \in \Sigma_{\mathsf{BCE}}(G)$ is drawn from the distribution $e_G(\cdot\mid w)$.
Step 3: The action profile $Y$ is drawn from the distribution $\rho_\sigma(\cdot\mid w)$.\medskip
The three steps summarize the causal structure of the model for observed actions $Y$ and the payoff state $W$. Given a game $G$ and $w \in \mathbb{W}$, we define a \bi{(randomized) reduced form of game $G$} as\footnote{The integral with respect to the equilibrium selection rule is an integral over a real function on the space of conditional distributions which we can topologize appropriately. Details can be found in the Online Appendix.}, for $A \subset \mathbb{Y}$,
\begin{align}
\label{RRF}
\rho_G(A\mid w) \equiv \int_{\Sigma_{\mathsf{BCE}}(G)} \rho_\sigma(A\mid w) d e_G(\sigma\mid w).
\end{align}
The generation of $Y$ is completely described by the couple $(\rho_G,\mu_W)$ and can be represented by first drawing $W=w$ from $\mu_W$, then drawing $Y$ from the distribution $\rho_G(\cdot\mid w)$. Hence the reduced form $\rho_G$ gives the prediction rule of the game in counterfactual analysis. The probability of $Y$ taking a value in a set $A$ when $W$ is \textit{fixed} to be $w$ is $\rho_G(A\mid w)$.
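To make Steps 1--3 concrete, the following minimal sketch simulates draws of the couple $(W,Y)$ in a purely hypothetical finite setting; signals are suppressed, and the equilibria and the selection rule are placeholders rather than objects computed from a payoff structure.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite setting: two states, and for each state a list of
# equilibrium decision rules, each a distribution over action profiles.
states = [0, 1]
mu_W = [0.6, 0.4]                      # distribution of the payoff state W
equilibria = {
    0: [{(0, 1): 1.0}, {(1, 0): 1.0}],
    1: [{(1, 1): 1.0}, {(0, 1): 0.5, (1, 0): 0.5}],
}
# Equilibrium selection rule e_G(. | w): a distribution over equilibria at w.
e_G = {0: [0.7, 0.3], 1: [0.5, 0.5]}

def draw_observation():
    w = rng.choice(states, p=mu_W)                              # Step 1
    sigma = equilibria[w][rng.choice(len(e_G[w]), p=e_G[w])]    # Step 2
    keys = list(sigma.keys())
    y = keys[rng.choice(len(keys), p=[sigma[k] for k in keys])] # Step 3
    return w, y

print([draw_observation() for _ in range(5)])
\end{verbatim}

Averaging indicator functions of repeated simulated draws at a fixed $w$ approximates probabilities computed from the reduced form $\rho_G(\cdot \mid w)$ in (\ref{RRF}).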
From here on, any probability statements (including expectation and conditional expectation) involving $(Y,W)$ are with respect to the joint distribution defined as follows:
\begin{align*}
P\{Y \in A, W \in S\} = \int_{S} \rho_G(A \mid w) d\mu_W(w), \quad A \subset \mathbb{Y}, \quad S \subset \mathbb{W}.
\end{align*}
\subsubsection{Counterfactual Predictions}
Our counterfactual experiment involves a new policy that changes the payoff state $W$ into $f(W)$ for some map $f:\mathbb{W} \rightarrow \mathbb{W}$. The policy changes the basic game $B$ into
\begin{eqnarray}
\label{f(B)}
f(B) = (\mathbb{Y},\mathbb{W},u,\mu_W \circ f^{-1}).
\end{eqnarray}
Thus our counterfactual analysis involves a policy that transforms the pre-policy game $G = (B,I)$ into a post-policy game $f(G) = (f(B),I)$. A counterfactual prediction at $f(W) = w'$ in game $f(G)$ can be made from the reduced forms $\rho_{\sigma_f}$ induced by $\sigma_f \in \Sigma_{\mathsf{BCE}}(f(G))$, i.e.,
\begin{eqnarray}
\label{eq prediction}
\rho_{\sigma_f}(A \mid w') = \int \sigma_f(A \mid w',t)d\mu_{T|W}(t \mid w').
\end{eqnarray}
Let $e_{f(G)}(\cdot \mid w')$ be the equilibrium selection rule of the game $f(G)$ at payoff state $f(W) = w'$. Then the \bi{equilibrium-based prediction of the post-policy game $f(G)$} is given by
\begin{align}
\label{rho fG}
\rho_{f(G)}(A \mid w') \equiv \int_{\Sigma_{\mathsf{BCE}}(f(G))} \rho_{\sigma_f}(A \mid w') d e_{f(G)}(\sigma_f \mid w'), \quad A \subset \mathbb{Y}.
\end{align}
The quantity $\rho_{f(G)}(A \mid w')$ represents the probability of the action profile from the post-policy game $f(G)$ realizing in $A$ when the payoff state is $w'$.
An alternative way of generating a prediction is to use the following:
\begin{align}
\label{decomp pred}
\rho_G(A \mid w') \equiv \int_{\Sigma_{\mathsf{BCE}}(G)} \rho_{\sigma}(A \mid w') d e_{G}(\sigma \mid w') = P\{Y \in A \mid W = w'\}.
\end{align}
We call $\rho_G(A \mid w')$ the \bi{decomposition-based prediction from game $G$}. This prediction extrapolates the relation between $Y$ and $W$ in the pre-policy game to the post-policy game.
In general, decomposition-based predictions do not coincide with equilibrium-based predictions in (\ref{rho fG}), when $\rho_G$ is not policy-invariant. The Online Appendix provides such an example in the context of an incomplete information analogue of Section \ref{subsubsec:simple1}. Our main result below presents a set of sufficient conditions under which the equilibrium-based predictions have upper and lower bounds that can be identified using only decomposition-based predictions.
\subsection{Interval-Identification of Equilibrium-Based Predictions}
\subsubsection{The Main Result}
The main result of this paper shows that the equilibrium-based prediction is interval-identified by the decomposition-based prediction under two sufficient (but not necessary) conditions. The first condition is concerned with the information structure and requires that the policy affects only part of the payoff states that are commonly observed by all the players. The second condition is an invariance condition on the equilibrium selection rules. Let us introduce the first condition.
\begin{assumption}
\label{assump: public observability1}
(i) The payoff state $W$ and the signal profile $T = (T_1,...,T_n)$ of the pre-policy game $G$ satisfy that $W = (W_1,...,W_n)$, with $W_i$ representing player $i$'s payoff state,
\begin{align*}
T_i = W_i, \text{ and } W_i = (\tilde W_1,W_{2,i}), \text{ for } i=1,...,n.
\end{align*}
\noindent (ii) The policy function $f$ satisfies that
\begin{align*}
f(W) = \left((\tilde f_1(\tilde W_1),W_{2,1}),...,(\tilde f_n(\tilde W_1),W_{2,n})\right),
\end{align*}
for some maps $\tilde f_i: \overline{\mathbb{W}}_1 \rightarrow \overline{\mathbb{W}}_1$, $i=1,...,n$, where $\overline{\mathbb{W}}_1$ is the set from which $\tilde W_1$ takes values.
\end{assumption}
Assumption \ref{assump: public observability1} requires that (i) each player has a common payoff component ($\tilde W_1$) which is observed by every player in the game, and (ii) the policy is restricted to this commonly observed payoff component.\footnote{In the literature of games with correlated equilibria, the decision rule $\sigma$ is viewed as recommended by a mediator to each player and each player rejects or accepts the decision rule individually. Then Assumption \ref{assump: public observability1} has the consequence that the mediator has the same information as the aggregated information from all the players.} Assumption \ref{assump: public observability1} seems plausible in many settings because a government policy is often publicly announced in advance.\footnote{A special case of the set-up in this theorem is considered, for example, by \cite{Grieco:14:RAND}. He studied entry decisions among firms, where the publicly observed payoff component $\tilde W_1$ is a market characteristic that is commonly observed by all the firms in the market.} Note that the impact of the new policy on each firm is allowed to be different. Furthermore, Assumption \ref{assump: public observability1} does not make any restrictions on the stochastic dependence among the individual components of the payoff state $W$ or those of the signal vector $T$, or make any restrictions on the conditional distribution of $W_{-i}$ given $T_i$. For instance, there can be correlation across signals, and each player's signal is allowed to contain information on other players' payoff states.
Assumption \ref{assump: public observability1} admits a broad class of information structures. For example, suppose that $W_i = (X_i, \varepsilon_i)$, where $X_i$ is observed and $\varepsilon_i$ is unobserved by the econometrician. Suppose further that $X_i = (\tilde X_1,X_{2,i})$ and $\varepsilon_i = (\tilde \varepsilon_1, \varepsilon_{2,i})$, where $(\tilde X_1,\tilde \varepsilon_1)$ is the same across the players. After the policy, $W_i = (\tilde X_1, X_{2,i},\tilde \varepsilon_1, \varepsilon_{2,i})$ changes to $f(W_i) = (\tilde f(\tilde X_1), X_{2,i},\tilde \varepsilon_1, \varepsilon_{2,i})$ for some map $\tilde f$. As for the information structure, we assume that each player $i$ observes $W_i$ so that every player commonly observes $(\tilde X_1, \tilde \varepsilon_1)$. Then this information structure is seen to satisfy Assumption \ref{assump: public observability1} by taking $T_i = W_i$, $\tilde W_1 = (\tilde X_1, \tilde \varepsilon_1)$, and $W_{2,i} = (X_{2,i}, \varepsilon_{2,i})$. This information structure already accommodates a wide class of information structures used in the literature. We list the examples below.\medskip
\noindent \textbf{Example 1: Complete Information: } In the case of a complete information game, we specify $W_i = (\tilde X_1, \tilde \varepsilon_1)$, where $\tilde X_1 = (\tilde X_{1,1},...,\tilde X_{1,n})$ and $\tilde \varepsilon_1 = (\tilde \varepsilon_{1,1},...,\tilde \varepsilon_{1,n})$ and each $(\tilde X_{1,i},\tilde \varepsilon_{1,i})$ denotes the payoff state that player $i$'s payoff depends on. $\blacksquare$ \medskip
\noindent \textbf{Example 2: Public-Private Dichotomy of Signals: } In the case of a public-private information structure where $X_i$ is publicly observed while $\varepsilon_i$ belongs to private information, we specify $W_i = (\tilde X_1, \varepsilon_{2,i})$, where $X_i = \tilde X_1$ for all $i=1,...,n$, $\tilde X_1 = (\tilde X_{1,1},...,\tilde X_{1,n})$, and each $(\tilde X_{1,i},\varepsilon_{2,i})$ denotes the payoff state that player $i$'s payoff depends on.
Many modifications of this setting of public-private information are also accommodated. For example, the econometrician may observe signals that are not payoff-relevant, which may be captured by specifying that the payoff function $u_i$ does not depend on the payoff-irrelevant, pure signal part of $X_i$. As another example, the econometrician may observe signals that are part of the private information. For this, we specify $X_i = (\tilde X_1,X_{2,i})$ and $W_i = (X_i, \varepsilon_{2,i})$, so that $X_{2,i}$ belongs to private information and is observed by the econometrician. $\blacksquare$\medskip
Let us introduce the invariance condition on the equilibrium selection rules as follows.
\begin{assumption}
\label{assump: invariance}
For any $w \in \mathbb{S}_W \cap \mathbb{S}_{f(W)}$ such that $\Sigma_{\mathsf{BCE},w}(G) = \Sigma_{\mathsf{BCE},w}(f(G))$, we have
\begin{align*}
e_G(A \mid w) = e_{f(G)}(A \mid w),
\end{align*}
for each $A \subset \Sigma$, where $\Sigma_{\mathsf{BCE},w}(G)=\{\sigma(\cdot \mid w,\cdot): \sigma \in \Sigma_{\mathsf{BCE}}(G)\}$, and $\mathbb{S}_W$ and $\mathbb{S}_{f(W)}$ denote the supports of $\mu_W$ and $\mu_W \circ f^{-1}$.
\end{assumption}
The invariance condition requires that if the set of equilibria at fixed $w$ remains the same after the policy, then the equilibrium selection rule at $w$ remains the same as well. The invariance condition is naturally satisfied if the equilibrium selection rule arises as a conditional probability on the set of decision rules given that the decision rule is an equilibrium. See the Online Appendix for details.
For a real map $h$ on $\mathbb{Y} \times \mathbb{W}$, let us define
\begin{align}
\label{EP DP}
\mathsf{EP}(h \mid C) &\equiv \int_{\mathbb{S}_{f(W)} \cap C} \int_{\mathbb{Y}} h(y,w) d \rho_{f(G)}(y \mid w)d(\mu_W \circ f^{-1})(w), \text{ and }\\ \notag
\mathsf{DP}(h \mid C) &\equiv \int_{\mathbb{S}_{W}}\mathbf{E}[h(Y,W) \mid W = f(w)]1\{f(w) \in \mathbb{S}_W \cap C\}d\mu_W(w).
\end{align}
The quantity $\mathsf{EP}(h \mid C)$ represents our target parameter, which is the equilibrium-based conditional expectation of $h(Y,f(W))$ given $f(W) \in C$ after the policy, whereas $\mathsf{DP}(h \mid C)$ represents its decomposition-based counterpart. Depending on the choice of $h$, we can express various objects of interest, as illustrated below.\medskip
\noindent \textbf{Example 1: Expected Actions} We simply take $h(y,w) = y$. Then $\mathsf{EP}(h \mid C)$ represents the conditional expectation of the action profile given $f(W) \in C$, after the policy.\medskip
\noindent \textbf{Example 2: Distribution of the Action Profile} We take $h(y,w) = 1\{y \le t\}$, $t \in \mathbf{R}^n$. Then $\mathsf{EP}(h \mid C)$ is the conditional CDF of the action profile at $t$ given $f(W) \in C$, after the policy.\medskip
\noindent \textbf{Example 3: Distribution of Maximum Actions} We take $h(y,w) = 1\{\max_{1 \le i \le n: w_i \in S} y_i \le t\}$, $t \in \mathbf{R}$, $w = (w_1,...,w_n)$. Then $\mathsf{EP}(h \mid C)$ represents the conditional CDF at $t$ of the maximum action among those players $i$ with $W_i \in S$, conditional on $f(W) \in C$, after the policy.\medskip
We are now ready to present our main result.
\begin{theorem}
\label{thm: bounds1}
Suppose that Assumptions \ref{assump: public observability1} and \ref{assump: invariance} hold for the pre-policy game $G = (B,I)$ and the post-policy game $f(G) = (f(B),I)$. Let a map $h:\mathbb{Y} \times \mathbb{W} \rightarrow \mathbf{R}$, and a set $C \subset \mathbb{W}$ be given such that for all $w \in f^{-1}((\mathbb{S}_{f(W)} \setminus \mathbb{S}_W) \cap C)$,
\begin{align}
\label{h bounds}
\underline h(w) \le \inf_{y \in \mathbb{Y}} h(y,w) \le \sup_{y \in \mathbb{Y}} h(y,w) \le \overline h(w),
\end{align}
for some maps $\underline h, \overline h: \mathbb{W} \rightarrow \mathbf{R}$.
Then,
\begin{align}
\label{bounds0}
&\mathsf{DP}(h \mid C) + \mathbf{E}\left[\underline h(f(W))1\left\{f(W) \in C \setminus \mathbb{S}_W\right\} \right]\\ \notag
&\quad \quad \le \mathsf{EP}(h \mid C) \le \mathsf{DP}(h \mid C) + \mathbf{E}\left[\overline h(f(W))1\left\{f(W) \in C \setminus \mathbb{S}_W\right\} \right].
\end{align}
\end{theorem}
It is immediate from Theorem \ref{thm: bounds1} that when the support of the payoff state after the policy is within its support in the data (i.e., $\mathbb{S}_{f(W)} \subset \mathbb{S}_W$),
\begin{align}
\mathsf{DP}(h \mid C) = \mathsf{EP}(h \mid C).
\end{align}
When $Y_i \in [h_L,h_U]$, $h_L < h_U$, the theorem yields bounds for the conditional average predicted outcome of player $i$. More specifically, define for each $i=1,...,n$,
\begin{align}
\label{AEP, ADP}
\mathsf{AEP}_i(C) &\equiv \int_{\mathbb{S}_{f(W)} \cap C} \int_{\mathbb{Y}_i} y_i d\rho_{f(G),i}(y_i \mid w)d\left(\mu_W \circ f^{-1}\right)(w), \text{ and }\\ \notag
\mathsf{ADP}_i(C) &\equiv \int_{\mathbb{S}_{W}}\mathbf{E}[Y_i \mid W = f(w)]1\{f(w) \in \mathbb{S}_W \cap C\}d\mu_W(w),
\end{align}
where $\rho_{f(G),i}(\cdot \mid w)$ denotes the $i$-th marginal of $\rho_{f(G)}(\cdot \mid w)$. Hence $\mathsf{AEP}_i(C)$ represents the target parameter of the equilibrium-based conditional average predicted outcome of player $i$, given that the post-policy payoff state $f(W)$ is in $C$, with $\mathsf{ADP}_i(C)$ its decomposition-based counterpart. Then the theorem above yields that for each $i = 1,...,n$ and a set $C \subset \mathbb{W}$,
\begin{align*}
\mathsf{ADP}_i(C) + h_L \cdot \mathsf{EB}(C) \le \mathsf{AEP}_i(C) \le \mathsf{ADP}_i(C) + h_U \cdot \mathsf{EB}(C),
\end{align*}
where $\mathsf{EB}(C) = P\left\{f(W) \in C \setminus \mathbb{S}_W\right\}$. In an entry game where $Y_i \in \{0,1\}$, the bounds above with $h_U = 1$ and $h_L = 0$ are bounds for the predicted entry probability of firm $i$ after the policy.
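As an illustration, the sketch below computes a sample analogue of these bounds with $h_L = 0$, $h_U = 1$, and $C = \mathbb{W}$, using purely simulated placeholder data in which the payoff state is discrete and a hypothetical policy map $f$ pushes part of its support outside the pre-policy support; the frequency estimator is an illustrative stand-in for the estimators discussed later.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pre-policy data: a discrete state W and firm i's entry Y_i.
n_obs = 5000
W = rng.integers(0, 4, size=n_obs)          # states in {0, 1, 2, 3}
Y = rng.binomial(1, 0.2 + 0.15 * W)         # entry indicator of firm i

def f(w):
    # Hypothetical policy map; states 2 and 3 are pushed out of the support.
    return w + 2

support_W = set(np.unique(W))
cond_mean = {w: Y[W == w].mean() for w in support_W}
in_support = np.array([f(w) in support_W for w in W])

# ADP_i(C) with C the whole state space: decomposition-based term.
ADP = np.mean([cond_mean[f(w)] if ok else 0.0 for w, ok in zip(W, in_support)])
# EB(C) = P{ f(W) falls outside the pre-policy support }.
EB = 1.0 - in_support.mean()

# Bounds on the equilibrium-based prediction AEP_i with h_L = 0, h_U = 1.
lower, upper = ADP + 0.0 * EB, ADP + 1.0 * EB
print(f"ADP = {ADP:.3f}, EB = {EB:.3f}, bounds = [{lower:.3f}, {upper:.3f}]")
\end{verbatim}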
One can use the results to obtain bounds for the average effect of the policy $f$ on player $i$'s outcome. The average policy effect (denoted by $\Delta_i$) is defined as
\begin{align*}
\Delta_i \equiv \mathsf{AEP}_i(\mathbb{W}) - \mathbf{E}[Y_i].
\end{align*}
Then the above bounds give the following bounds for the average policy effect:
\begin{align*}
\mathsf{ADP}_i(\mathbb{W}) - \mathbf{E}[Y_i] + h_L \cdot \mathsf{EB}(\mathbb{W}) \le \Delta_i \le \mathsf{ADP}_i(\mathbb{W}) - \mathbf{E}[Y_i] + h_U \cdot \mathsf{EB}(\mathbb{W}).
\end{align*}
One might ask whether the bounds in Theorem \ref{thm: bounds1} are sharp. The following proposition gives a sense in which the answer is affirmative.
\begin{proposition}
\label{prop: sharp bounds0}
Suppose that a policy $f: \mathbb{W} \rightarrow \mathbb{W}$ and $\mu_W$ are given such that the support of $\mu_W$ overlaps with that of $\mu_W \circ f^{-1}$. Suppose further that maps $h: \mathbb{Y} \times \mathbb{W} \rightarrow \mathbf{R}$, $\underline h, \overline h: \mathbb{W} \rightarrow \mathbf{R}$, and a set $C \subset \mathbb{W}$ are given as in Theorem \ref{thm: bounds1}, where $\mathbb{Y}$ is a countable set and for all $w \in \mathbb{W}$, $\overline h(w) = \sup_{y \in \mathbb{Y}} h(y,w)$ and $\underline h(w) = \inf_{y \in \mathbb{Y}} h(y,w)$.\footnote{The condition of countability of $\mathbb{Y}$ can be removed, for example, if $h$ is a continuous map and $\mathbb{Y}$ is compact.}
Then there exists a Bayesian game $G=(B,I)$, with $B = (\mathbb{Y},\mathbb{W},u,\mu_W)$, such that Assumptions \ref{assump: public observability1} and \ref{assump: invariance} hold, and either of the two inequalities in (\ref{bounds0}) holds with equality.
\end{proposition}
Note that simple shape constraints such as $\rho_{f(G)}( \cdot \mid w)$ being monotone or concave in $w$ in all the BCEs do not help improve the bounds, because a constant map also satisfies such constraints trivially. However, shape constraints may help estimate $\mathsf{DP}(h \mid C)$ more accurately.
\subsubsection{Identification of the Bounds}\label{sec:single-index-restrictions}
If we observe the actions $Y$ and the payoff state $W$ of the pre-policy game, then we can recover the bounds from data, without specifying the details of the game. However, in practice, we often do not observe the whole state vector $W$.
Suppose that $W_i = (X_i, \varepsilon_i)$, where $X_i$ is observed and $\varepsilon_i$ unobserved by the econometrician. Let the policy $f$ be of the form $f(W) = (f_1(W_1),...,f_n(W_n))$, where
\begin{align*}
f_i(W_i) = (\tilde f_i(X_i),\varepsilon_i), \quad i=1,...,n,
\end{align*}
for some map $\tilde f_i$, and let the set $C$ be such that $f(W) \in C$ if and only if $\tilde f(X) \in \tilde C$, for some set $\tilde C$, where $\tilde f(X) = (\tilde f_1(X_1),...,\tilde f_n(X_n))$. As for $h$, we consider $h(y,w) = h^*(y,x)$, $\overline h(w) = \overline h^*(x)$, and $\underline h(w) = \underline h^*(x)$, for some maps $h^*$, $\overline h^*$, and $\underline h^*$, which depend only on observable states.
Let us discuss identification of bounds in this setting. First, we can identify
\begin{align*}
\mathbf{E}\left[\underline h(f(W))1\left\{f(W) \in C \setminus \mathbb{S}_W\right\} \right] &= \mathbf{E}\left[\underline h^*(\tilde f(X))1\left\{\tilde f(X) \in \tilde C \setminus \mathbb{S}_X\right\} \right] \text{ and }\\
\mathbf{E}\left[\overline h(f(W))1\left\{f(W) \in C \setminus \mathbb{S}_W\right\} \right] &=\mathbf{E}\left[\overline h^*(\tilde f(X))1\left\{\tilde f(X) \in \tilde C \setminus \mathbb{S}_X\right\} \right],
\end{align*}
where $\mathbb{S}_X$ denotes the support of $X$. However, the identification of $\mathsf{DP}(h \mid C)$ depends on whether $X$ and $\varepsilon$ are independent or not.
First, suppose that $X$ and $\varepsilon$ are independent. Then, we can identify $\mathsf{DP}(h \mid C)$ as follows:
\begin{align}
\label{DP ident}
\mathsf{DP}(h \mid C) = \int \mathbf{E}\left[h^*(Y,X) \mid X = \tilde f(x)\right] 1\left\{ \tilde f(x) \in \tilde C \right\}d\mu_X(x),
\end{align}
where $\mu_X$ denotes the distribution of $X$. The identification result allows for $\varepsilon_1,...,\varepsilon_n$ to be correlated; this correlation can come from some unobserved characteristics of the game.
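A minimal sketch of the corresponding sample analogue is given below; the covariate, the policy map, the bandwidth, and the kernel regression estimator are hypothetical placeholders rather than the estimators used in the empirical application later in the paper.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)

# Placeholder pre-policy data with a scalar observed state X and outcome
# h*(Y, X) = Y; the hypothetical policy shifts the state by f(x) = x + 0.5.
n = 3000
X = rng.normal(size=n)
Y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X)))

def g_hat(x0, bw=0.2):
    """Nadaraya-Watson estimate of E[h*(Y, X) | X = x0], Gaussian kernel."""
    w = np.exp(-0.5 * ((X - x0) / bw) ** 2)
    return np.sum(w * Y) / np.sum(w)

f = lambda x: x + 0.5
inside = (f(X) >= X.min()) & (f(X) <= X.max())   # crude support check

# Sample analogue of DP(h | C) with C the whole space: average the estimated
# conditional mean at the shifted state over the distribution of X, with
# observations pushed outside the support contributing zero.
vals = np.array([g_hat(f(x)) if ok else 0.0 for x, ok in zip(X, inside)])
print(f"estimated DP(h | C): {vals.mean():.3f}")
\end{verbatim}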
Second, suppose that $X$ and $\varepsilon$ are potentially correlated. In this case, the decomposition-based approach may still be implemented using a control function approach (\cite{Blundell/Powell:03:Adv} and \cite{Imbens/Newey:09:Eca}).\footnote{Game-theoretic models often involve a simultaneous system of equations. Note that we exclude the setting where the policy variable is part of the endogenous outcomes in such equations. For example, in a two-player game, with outcomes, $Y = (Y_1,Y_2)$, we focus on a policy that changes the payoff state $X$ which is not one of the two outcomes. When one of the endogenous outcomes is a policy variable, the structural equations need to be transformed into a triangular system of equations to apply a control function approach. \cite{Blundell/Matzkin:14:QE} present precise conditions for such a transform to exist. These conditions may be implausible in some empirical applications.} Suppose that $X_i = (\tilde X_{i,1},\tilde X_{i,2})$ and $\tilde f_i(X_i) = (\tilde g_i(\tilde X_{i,1}),\tilde X_{i,2})$ for some map $\tilde g_i$ so that the policy alters only $\tilde X_{i,1}$. Define $\tilde X_1 = (\tilde X_{1,1},...,\tilde X_{n,1})$ and $\tilde X_2 = (\tilde X_{1,2},...,\tilde X_{n,2})$, and let $\tilde g(x_1) = (\tilde g_1(x_{1,1}),...,\tilde g_n(x_{n,1}))$, $x_1 = (x_{1,1},...,x_{n,1})$. We choose the set $C$ to be such that $\tilde f(X) \in \tilde C$ if and only if $\tilde g(\tilde X_1) \in \tilde C_1$, for some set $\tilde C_1$. Then, we obtain the following identification result.
\begin{proposition}
\label{prop: control fn}
Suppose that $\tilde X_1$ and $\varepsilon$ are conditionally independent given $\tilde X_2$. Then,
\begin{align}
\label{control fn}
\mathsf{DP}(h \mid C) = \int \mathbf{E}\left[ h^*(Y,X) \mid (\tilde X_1,\tilde X_2) = (\tilde g(x_1),x_2)\right] 1\left\{\tilde g(x_1) \in \mathbb{S}_{\tilde X_1} \cap \tilde C_1 \right\} d\mu_{\tilde X_1,\tilde X_2}(x_1,x_2),
\end{align}
where $\mu_{\tilde X_1,\tilde X_2}$ is the distribution of $(\tilde X_1,\tilde X_2)$, and $\mathbb{S}_{\tilde X_1}$ denotes the support of $\tilde X_1$.
\end{proposition}
In general, when $\tilde X_1$ and $\varepsilon$ are dependent due to some unobserved game-specific characteristics, we may consider $\tilde X_2$ as an observed vector including game characteristics such that conditioning on $\tilde X_2$ removes the stochastic dependence between $\tilde X_1$ and $\varepsilon$.
In many examples, the payoff state $W_i$ of each player $i$ enters the payoff as a partial index form: $W_i = (\tilde X_{i,1},V_i,\varepsilon_i)$, where $V_i = \tilde X_{i,2}'\theta_i$, with coefficients $\theta_i$. If $\tilde X_1$ and $\varepsilon$ are conditionally independent given $V = (V_1,...,V_n)$, which is an assumption weaker than the previous assumption that $X$ and $\varepsilon$ are independent, we can rewrite (\ref{control fn}) as:
\begin{align}
\label{index}
\mathsf{DP}(h \mid C) = \int \mathbf{E}\left[ h^*(Y,X) \mid (\tilde X_1,V) = (\tilde g(x_1),v)\right] 1\left\{\tilde g(x_1) \in \mathbb{S}_{\tilde X_1} \cap \tilde C_1 \right\} d\mu_{\tilde X_1,V}(x_1,v),
\end{align}
where $\mu_{\tilde X_1,V}$ is the distribution of $(\tilde X_1,V)$, and $V = (V_1,...,V_n)$. We can identify and estimate $\theta_1,...,\theta_n$ (up to a scale) following the literature of multi-index models (see \cite{Ichimura/Lee:91:NSEM}, \cite{Lee:95:JOE}, \cite{Donkers/Schafghans:08:ET}, \cite{Xia:08:JASA} and \cite{Ahn/Ichimura/Powell/Ruud:18:JBES} and references therein.) The main difference here is that we do \textit{not} introduce the multi-index structure as a semiparametric restriction on a nonparametric function; it naturally follows from the index structure of the payoff function in the game. The multi-index models are useful for dimension reduction when the game involves only a few players, and the dimension of $X_i$ is large. We provide some implementation details in the Online Appendix.
\subsection{Examples}
\label{sec: examples}
\subsubsection{Entry Games}\label{example1_detail}
Let us revisit the entry game in Section \ref{subsubsec: a simple entry game} in a more general setting. Suppose that there are $n$ firms, $i=1,...,n$, who choose a binary action $y_i \in \{0,1\}$, $y = (y_1,...,y_n)$, where $y_i = 1$ represents entry in the market and $y_i=0$ staying out of the market. We specify the payoff generically as $u_i(y, X, \varepsilon)$, where $X = (X_1,...,X_n)$ is observed and $\varepsilon = (\varepsilon_1,...,\varepsilon_n)$ unobserved by the econometrician. We assume that $X$ is subject to a change by a policy whereas $\varepsilon$ is not.
\begin{assumption}
\label{assump: assump2}
$f(X,\varepsilon) = (\tilde f(X),\varepsilon)$, for some map $\tilde f$.
\end{assumption}
A counterfactual policy $f$ in the above assumption was considered by all the papers cited in Section \ref{subsubsec:simple1}. For example, \cite{Ciliberto/Tamer:09:Eca} and \cite{Grieco:14:RAND} set the values of a dummy variable in $X_i$ to 0. Meanwhile, \cite{Jia:08:Eca} changes market size, a variable in $X_i$.
As for the information structure, we make the following assumption.
\begin{assumption}\label{example1_assump}
Either of the following two conditions is satisfied:\medskip
(i) The game is of complete information, so that $T_i = (X, \varepsilon)$, for $i=1,...,n$.
(ii) The game has a public-private dichotomy of signals: $T_i = (X, \varepsilon_i)$, for $i=1,...,n$.
\end{assumption}
This assumption is satisfied in most empirical models of entry games in the literature. For example, Assumption \ref{example1_assump}(i) is satisfied in the complete information environments of \cite{Bresnahan/Reiss:91:JOE}, \cite{Jia:08:Eca} and \cite{Ciliberto/Tamer:09:Eca}. The public-private dichotomy case is studied in depth by \cite{Grieco:14:RAND}. In fact, Assumption \ref{example1_assump} includes a much wider class of information structures, as we do not put any restriction on the conditional distribution of other players' payoff states given player $i$'s signal. Hence, it accommodates the information structures in \cite{Magnolfi/Roncoroni:20:WP} as long as the signal includes publicly observable policy variables.
\begin{assumption}
$X$ and $\varepsilon$ are independent.
\end{assumption}
This assumption is also frequently used in the literature on estimating entry games (e.g., \cite{Ciliberto/Tamer:09:Eca} and \cite{Jia:08:Eca}). When $X$ and $\varepsilon$ are dependent but an appropriate control function exists, we can still use the decomposition-based bounds (recall Proposition \ref{prop: control fn} above).
Let us describe the invariance condition for the equilibrium selection rule in this setting. Let $W = (X,\varepsilon)$, and $\mathbb{S}_W$ denote the support of $W$. Each BCE (or BNE) of the game before the policy determines the conditional distribution $\sigma(\cdot \mid w)$ of the action profile $Y = (Y_1,...,Y_n)$ given the payoff state $W = w$. Let the set of those equilibrium conditional distributions be denoted by $\mathcal{E}$. Then, the equilibrium selection rule $e(\cdot \mid w)$ is a conditional distribution on $\mathcal{E}$ given $W = w$. For each state vector $w$, define
\begin{align*}
\mathcal{E}(w) = \{\sigma(\cdot \mid w): \sigma \in \mathcal{E}\}.
\end{align*}
We also define $\mathcal{E}_f(w)$ and $e_f( \cdot \mid w)$ similarly for the post-policy game. Then, we introduce the following invariance condition.
\begin{assumption}
\label{assump: invar} For any $w \in \mathbb{S}_W$ such that $\mathcal{E}(w) = \mathcal{E}_f(w)$, we have $e(\cdot\mid w) = e_{f}(\cdot\mid w)$.
\end{assumption}
Our target parameter is the conditional probability of $Y = a$, $a \in \{0,1\}^n$, after the policy given $\tilde f(X) \in \tilde C$ for some set $\tilde C$, which is defined as follows:
\begin{align*}
p_a^f(\tilde C) \equiv \int_{w \in \mathbb{S}_{f(W)} : x \in \tilde C} \int_{\mathcal{E}_f} \sigma_f(a \mid w) de_f(\sigma_f \mid w) d(\mu_W \circ f^{-1})(w),
\end{align*}
where $w = (x,\varepsilon)$, and $\mu_W \circ f^{-1}$ denotes the distribution of $f(W)$. For example, $p_{1,1,...,1}^f(\tilde C)$ denotes the conditional probability of all firms entering the market given that $\tilde f(X) \in \tilde C$, after the policy. The result below shows how this conditional probability is bounded by decomposition-based quantities.
\begin{corollary}\label{example1_corollary}
Suppose that Assumptions \ref{assump: assump2}-\ref{assump: invar} hold. For $a \in \{0,1\}^n$, we define
\begin{align*}
p_a(x) \equiv P\left\{ Y = a \mid X = x \right\}.
\end{align*}
Then for each $a \in \{0,1\}^n$,
\begin{align*}
\mathbf{E}\left[p_a(\tilde f(X)) 1\{\tilde f(X) \in \mathbb{S}_X \cap \tilde C\}\right] &\leq p_a^f(\tilde C) \\ \notag
&\leq \mathbf{E}\left[p_a(\tilde f(X)) 1\{\tilde f(X) \in \mathbb{S}_X \cap \tilde C\}\right] + P\left\{\tilde f(X) \in \tilde C \setminus \mathbb{S}_X \right\},
\end{align*}
where $\mathbb{S}_X$ denotes the support of $X$. In particular, if the support of $X$ contains the support of $\tilde f(X)$,
\begin{equation*}
p_a^f(\tilde C) = \mathbf{E}\left[p_a(\tilde f(X))1\{\tilde f(X) \in \tilde C\}\right].
\end{equation*}
\end{corollary}
The results do not rely on any further specification of the payoff function, or a parametric assumption for the distribution of $\varepsilon_i$. The term $\mathbf{E}[p_a(\tilde f(X)) 1\{\tilde f(X) \in \mathbb{S}_X \cap \tilde C\}]$ is identified using the data from the games before the policy. We provide a step-by-step empirical implementation of this result in Section \ref{sec:empirical} and the Online Appendix.
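The sketch below illustrates the sample analogue of the point-identified case in a hypothetical two-firm setting with discrete covariates; the data-generating numbers and the simple frequency estimator are placeholders, not the estimators of Section \ref{sec:empirical}.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical pre-policy data for a two-firm entry game: X = (D, Z), with a
# binary policy dummy D and a discrete market characteristic Z, and the
# entry profile Y = (Y_1, Y_2); all numbers are placeholders.
n_obs = 20000
D = rng.binomial(1, 0.3, size=n_obs)
Z = rng.integers(0, 3, size=n_obs)
Y = np.column_stack([rng.binomial(1, 0.5 - 0.3 * D + 0.1 * Z),
                     rng.binomial(1, 0.4 - 0.2 * D + 0.1 * Z)])

def p_a_hat(a, d, z):
    """Frequency estimator of p_a(x) = P{Y = a | X = (d, z)}."""
    cell = (D == d) & (Z == z)
    return np.mean((Y[cell] == np.array(a)).all(axis=1))

# Counterfactual policy: set the dummy to 0 in every market. Since D = 0
# already occurs in the data, the post-policy support is contained in the
# pre-policy support and the decomposition-based prediction is a point.
a = (1, 1)                               # both firms enter
by_cell = {z: p_a_hat(a, 0, z) for z in np.unique(Z)}
p_af = np.mean([by_cell[z] for z in Z])
print(f"prediction of P(Y = (1,1)) after the policy: {p_af:.3f}")
\end{verbatim}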
\subsubsection{Auctions}
A common approach for counterfactual analysis in the empirical auction literature is to first nonparametrically identify and estimate the distribution of valuations from the distribution of bids, and use those estimates for counterfactual analysis (see \cite{Athey/Haile:07:Handbook} for a survey.) One may wonder if we can use the decomposition approach to generate counterfactual predictions without identifying the valuation distribution from data. While we believe that we can in a more general setting, for the sake of concreteness, we focus on the setting of the first-price sealed bid, independent private value auction of \cite{Guerre/Perrigne/Vuong:09:ECMA}, and consider a counterfactual policy that alters the reserve price (see \citealp{Paarsch:97:JOE,Haile/Tamer:03:JPE} for two examples of such a policy).\footnote{Reserve prices are set by the seller before the auction takes place. They are the minimum value for which the seller is willing to sell the good: if no bid is higher than the reserve price during the auction, then the good remains unsold. As a result, in empirical work, they are usually considered as an auction characteristic (primitive). The reserve price is often observed by the bidders and by the researcher. If the focus is on auctions with unobserved reserve prices (as in \cite{Elyakime/Laffont/Loisel/Vuong:97:JBES}) our results do not apply. If the researchers are interested in predicting bids after setting the reserve price beyond its support in the data (which is suggested in Table 4 of \cite{Haile/Tamer:03:JPE}), they may use the bounds approach outlined above.} The reserve price is often a publicly observed state variable.
In this model, there are potential bidders $i=1,...,n$, each of whom observes a private valuation $V_i$ drawn from a distribution with common support $\mathbb{S}_V$, and commonly observes the vector of auction-specific characteristics $(X,\eta)$, where $X=(X_1,R)$, with $R$ denoting the reserve price, is observed by the econometrician, but $\eta$ represents unobserved auction heterogeneity. Each potential bidder $i$ chooses whether to enter depending on the value of $(V_i,\eta,X)$. The entry rule is modeled as a reduced form $I: \mathbb{S}_V \times \mathbb{S}_\eta \times \mathbb{S}_X \rightarrow \{0,1\}$, where $\mathbb{S}_\eta$ and $\mathbb{S}_X$ denote the supports of $\eta$ and $X$. Each participating bidder $i$ uses a bidding strategy (bid) $s_i: \mathbb{S}_V \times \mathbb{S}_\eta \times \mathbb{S}_X \rightarrow \mathbf{R}_+$, and wins if her bid is the highest and exceeds the reserve price. The policy of interest is a change in the reserve price: a change of $R$ into $f(R)$ for some map $f$. For each auction, let $\tilde N =\{i: I(V_i,\eta,X) = 1\}$ be the set of participants, and $s^* = (s_i^*)_{i \in \tilde N}$ a symmetric BNE in the post-entry auction game. The econometrician observes $(X,B)$, where $B = (B_i)_{i \in \tilde N}$ and $B_i = s_i^*(V_i,\eta,X)$, $i \in \tilde N$.
The following assumption summarizes the features of this game relevant to the decomposition approach.
\begin{assumption}\label{example2_assump}
(i) $(X,\eta)$ is publicly known, but valuations, $V_i$, are private information.
(ii) The policy $f$ changes the reserve price $R$ into $f(R)$ for some map $f$.
(iii) The auction has a unique symmetric BNE.\footnote{The symmetric BNE can be viewed as a refinement of BCE. As we show in the subsequent section, the decomposition-based approach applies to a setting where the researcher knows that the equilibrium behind the data generation belongs to a subset of BCE satisfying restrictions such as symmetry or differentiability.}
\end{assumption}
The uniqueness of the BNE in this auction is well studied in the literature (see \cite{Guerre/Perrigne/Vuong:09:ECMA}). We introduce an additional assumption that is used to identify the bounds in the decomposition approach.
\begin{assumption}
\label{example2_assump2}
$(V,\eta)$ is conditionally independent of $R$ given $X_1$.
\end{assumption}
This assumption requires selection on observables, i.e., the source of dependence between $(V,\eta)$ and the reserve price $R$ is fully captured by observed auction characteristics $X_1$.
Our object of interest is the distribution of the auction revenue under the counterfactual reserve price $f(R)$ conditional on $X_1 \in C$ for some set $C$:
\begin{align*}
p^f(A \mid C) \equiv P\left\{\max_{i \in \tilde N^f} B_i^f \in A \mid X_1 \in C \right\}, \quad A \subset \mathbb{S}_V,
\end{align*}
where $B_i^f = s_i^*(V_i,\eta,X_1,f(R))$ and $\tilde N^f$ denotes the set of participants after the policy. We define
\begin{align*}
q(A \mid r,v,x_1) \equiv P\left\{ \max_{i \in \tilde N} B_i \in A \mid (R,V,X_1) = (r,v,x_1) \right\}.
\end{align*}
The conditional probability $q(A \mid r,v,x_1)$ is identified for all $(r,v,x_1)$ in the support of $(R,V,X_1)$, and can be estimated from the pre-policy auction data without recovering the value distribution from the data. The following result is a corollary to Theorem \ref{thm: bounds1} and Proposition \ref{prop: control fn}.
\begin{corollary}\label{corollary_auction}
Suppose that Assumptions \ref{example2_assump}-\ref{example2_assump2} hold, and let $A$ be a subset of the common support of $V_i$ and $C$ a subset of the support of $X_1$. Then,
\begin{align*}
&\mathbf{E}\left[q( A \mid f(R),V,X_1)1\{ f(R) \in \mathbb{S}_R, X_1 \in C \} \right] \\
&\quad \quad \quad \leq p^f(A \mid C) \leq \mathbf{E}\left[q( A \mid f(R),V,X_1)1\{ f(R) \in \mathbb{S}_R, X_1 \in C \} \right] + P\left\{f(R) \notin \mathbb{S}_R, X_1 \in C \right\}.
\end{align*}
Furthermore, if the support of the reserve price after the policy is within the support of the reserve price before the policy (i.e., $\mathbb{S}_{f(R)} \subset \mathbb{S}_R$),
\begin{align*}
p^f(A \mid C) = \mathbf{E}\left[q( A \mid f(R),V,X_1)1\{X_1 \in C \} \right].
\end{align*}
\end{corollary}
The corollary says that when $\mathbb{S}_{f(R)} \subset \mathbb{S}_R$, the counterfactual quantity $p^f(A \mid C)$ can be directly recovered from data, without first recovering the valuation distribution. Hence, we can obtain point-identification of the counterfactual prediction without relying on the conditions invoked in the literature to ensure nonparametric identification of the valuation functions. This also simplifies the estimation procedure, as there is no need to estimate the valuation distribution from data.
\subsection{Extensions}
\label{subsec: ext to other solution concepts}
\subsubsection{Other Solution Concepts}
Our results so far are based on the solution concept of Bayes Correlated Equilibrium (BCE), but they carry over to a setting with an equilibrium concept that is a refinement of BCE. As pointed out by BM, many solution concepts in Bayesian games, including Bayes Nash Equilibria (BNE), are refinements of BCE. We let $\Sigma' \subset \Sigma$ be a given subcollection of decision rules $\sigma$, and consider the restricted BCE:
\begin{align}
\label{BNE2}
\Sigma_{\mathsf{BCE}}'(G) = \Sigma_{\mathsf{BCE}}(G) \cap \Sigma'.
\end{align}
We call this set the set of \bi{Bayes Correlated Equilibria (BCE) restricted to} $\Sigma'$.
For example, suppose that $\Sigma'$ is the collection of decision rules $\sigma$ of the following form: for any $A = A_1 \times ... \times A_n$,
\begin{eqnarray}
\label{B}
\sigma(A \mid w,t) = \prod_{i=1}^n \beta_i(A_i \mid w,t_i),
\end{eqnarray}
where $\beta_i(\cdot \mid w,t_i)$ is a conditional distribution on $\mathbb{Y}_i$ given $(W,T_i) = (w,t_i)$. Then a BCE restricted to $\Sigma'$ is a BNE. One can add further restrictions such as symmetry or differentiability depending on the application.
Let us turn to the interval-identification of equilibrium-based predictions in terms of a BCE restricted to $\Sigma'$. First, as before, define the equilibrium selection rules $e_G'$ and $e_{f(G)}'$ as conditional distributions on $\Sigma_{\mathsf{BCE}}'(G)$ and $\Sigma_{\mathsf{BCE}}'(f(G))$, respectively. Similarly, we define $\rho_G'$ and $\rho_{f(G)}'$ using $\Sigma_{\mathsf{BCE}}'(G)$, $\Sigma_{\mathsf{BCE}}'(f(G))$, $e_G'$ and $e_{f(G)}'$. Let $\mathsf{DP}'(h \mid C)$ and $\mathsf{EP}'(h \mid C)$ be the same as $\mathsf{DP}(h \mid C)$ and $\mathsf{EP}(h \mid C)$ except that we substitute $\rho_G'$ and $\rho_{f(G)}'$ for $\rho_G$ and $\rho_{f(G)}$ in the definition in (\ref{EP DP}). As in the case of BCE, we assume that the invariance condition on $e_{G}'$ holds in terms of the BCEs restricted to $\Sigma'$. Then, we obtain an analogue of Theorem \ref{thm: bounds1} as follows.
\begin{corollary}
\label{cor: bounds}
Suppose that Assumptions \ref{assump: public observability1} and \ref{assump: invariance} (in terms of the BCEs restricted to $\Sigma'$) hold for the pre-policy game $G = (B,I)$ and the post-policy game $f(G) = (f(B),I)$. Suppose further that maps $h: \mathbb{Y} \times \mathbb{W} \rightarrow \mathbf{R}$, $\underline h, \overline h: \mathbb{W} \rightarrow \mathbf{R}$, and a set $C \subset \mathbb{W}$ are given as in Theorem \ref{thm: bounds1}. Then,
\begin{align}
\label{bounds1}
&\mathsf{DP}'(h \mid C) + \mathbf{E}\left[\underline h(f(W))1\left\{f(W) \in C \setminus \mathbb{S}_W\right\} \right]\\ \notag
&\quad \quad \le \mathsf{EP}'(h \mid C) \le \mathsf{DP}'(h \mid C) + \mathbf{E}\left[\overline h(f(W))1\left\{f(W) \in C \setminus \mathbb{S}_W\right\} \right].
\end{align}
\end{corollary}
Hence, decomposition-based predictions can still be used for counterfactual analysis in a setting with various other solution concepts such as BNE or its further restricted version. They coincide with the equilibrium-based predictions when $\mathbb{S}_{f(W)} \subset \mathbb{S}_W$.
\subsubsection{When the Policy Also Changes the Information Structure}
When a government policy is announced in advance, additional signals are often created through various reports of analysis on the policy and may be used by agents. Thus, the policy may change the information structure of the players as well. In the Online Appendix, we extend our main results to such cases.
In particular, we show that the bounds in Theorem \ref{thm: bounds1} accommodate scenarios where the policy also introduces new signals, as long as the latter do not reveal other players' pre-policy signals and the payoff state beyond what has been known to the player. This includes the cases where the policy may be used as a coordination device by players (e.g., sunspots, as in \cite{Shell:89:GE,Peck/Shell:91:ReStud}), or when this signal is about future policy implications (e.g., in one empirical application below, government reports discuss market structure following the policy repeal, but are unlikely to reveal pre-policy signals of individual agents.)
\section{Empirical Applications}
\label{sec:empirical}
\subsection{Ciliberto and Tamer (2009) Revisited}
We revisit the counterfactual analysis in \cite{Ciliberto/Tamer:09:Eca} using our results from Section \ref{example1_detail}. They investigated the effect of the repeal of the Wright amendment on airline entry in markets out of Dallas Love Field Airport. The Wright amendment had been implemented in 1979 to stimulate the use of the newer (and not as central) Dallas Fort Worth (DFW) Airport. As of the early 2000s, it restricted flights out of the central Dallas Love Field to other cities in Texas or in some neighboring states. A full repeal of the amendment was agreed by the major airlines and DFW Airport in 2008\footnote{The agreement involved, most notably, decreasing the number of gates in Dallas Love Field to restrict its impact on Dallas Fort Worth.} and was to be fully implemented in 2014. This repeal could have led to significant changes in market entry and, hence, in consumer welfare.
\cite{Ciliberto/Tamer:09:Eca} produced a counterfactual prediction of the outcomes after the repeal of the Wright amendment, after estimating the identified set from a complete information entry game permitting multiple equilibria. This is the game presented in Example \ref{example1_detail}.\footnote{We provide extensive details on the empirical application, including the description of the covariates, estimators and inference procedures in the Online Appendix. Following \cite{Ciliberto/Tamer:09:Eca}, we assume that unobservable payoff components $\varepsilon_i$ are i.i.d. and independent of all covariates, so we do not need to use a control function approach.} A market was defined by a route between two airports, irrespective of directions or layovers. They modeled the Wright amendment as a dummy variable covariate, $X_{i,m}^{\text{Wright}}$, which equaled 1 if market $m$ was affected by the Wright amendment (affecting all the firms in the market) and 0 otherwise. For a counterfactual analysis, they considered the counterfactual experiment of repealing the Wright amendment, setting $X^{\text{Wright}}_{i,m}$ to 0, and studied its effects on market entry. The support of $X_{i,m}^{\text{Wright}}$ in the data is $\{0,1\}$, and hence contains its post-policy support that is $\{0\}$. Hence, Corollary \ref{example1_corollary} says that the decomposition-based prediction coincides with the equilibrium-based prediction.
\subsubsection{Our Set-up}
For this application, we follow their work and focus on the decisions of the four main airlines in their analysis (American Airlines, Delta Airlines, Southwest Airlines and United Airlines) and use the same dataset. We perform a dimension reduction to resolve near multicollinearity between covariates. This reduction is useful because our decomposition-based prediction must include all firm-level covariates in $X_m$ (the covariates $X_{j,m}$ for every $j \neq i$ impact $i$'s entry decision in equilibrium through affecting $j$'s decision to enter).\footnote{In this context there are 8 market-level covariates and 2 variables at the firm-market level. They are described in detail in the Online Appendix. This generates a total of 16 covariates to be included in the analysis. While the parametric estimator is robust to including all 16 covariates due to its additional structure, the performance of the nonparametric estimator is improved with a smaller subset.} We drop three out of eight market-level variables (market size, per capita income growth and Dallas market) that lack variation for nonparametric estimation.\footnote{Both market size and per capita income growth appear well predicted by income per capita and market presence (variables that already capture economic performance at the market level and are included in the analysis). Meanwhile, the binary Dallas market variable is highly correlated with the Wright amendment variable: by construction, any market in Dallas that does not use Dallas Love Field Airport must be using Dallas Fort Worth Airport instead. However, Dallas Fort Worth is the hub for American Airlines, and American Airlines' market presence is already included as a covariate. Details are provided in the Online Appendix.} We drop one additional firm-market level covariate (a proxy for the firm's cost) using the causal structure of the game, because this variable is a function of other covariates in the analysis, such as route distance, by construction.
\subsubsection{Results}
We compare the results from the decomposition approach to the original results in \cite{Ciliberto/Tamer:09:Eca}. The results of this exercise are shown in Columns 1 and 2 of Table \ref{ct_decomp} using two different estimators (a linear/parametric and a nonparametric estimator), while their original results are shown in Column 3.
In the first column, we assume that the expected entry of $i$ in market $m$ is given by the linear form $\mathbf{E}[Y_{i,m} \mid X_m=x_m] = x_m'\gamma_i$ and we estimate it using Ordinary Least Squares in a linear regression framework, where $Y_{i,m}$ denotes the indicator of entry by firm $i$ in market $m$.\footnote{This specification is misspecified because a linear reduced form cannot be induced from equilibria in the entry game. We present the result from this specification as a simple benchmark.} We then present the counterfactual estimate, which is the estimated change in entry in the post-policy game relative to the data for the markets previously affected by the policy. This estimated change can be written as
\begin{align*}
\frac{1}{|\mathcal{M}|} \sum_{m \in \mathcal{M}} \tilde{X}_{m}'\hat{\gamma}_i - \overline{Y}_i,
\end{align*}
where $\hat \gamma_i$ is the estimator of $\gamma_i$, $\mathcal{M}$ represents the set of markets previously affected by the Wright amendment, $\tilde{X}_m$ represents the values of the covariates for market $m$ after the policy, and $\overline{Y}_i$ is the average outcome in the data for firm $i$ in those markets in $\mathcal{M}$. In the second column, we estimate the conditional expectation $g_i(x) = \mathbf{E}[Y_{i,m} \mid X_m=x]$ nonparametrically. We use a leave-one-out kernel estimator with a bandwidth chosen by cross-validation (see the Online Appendix for details). By comparing Columns 1 and 2, we see that the results do not appear to be driven by the nonlinearity in the conditional expectation function. In the third column, we restate the results of counterfactual predictions from the main specification in \cite{Ciliberto/Tamer:09:Eca} (Table VII, Column 1). These are the maximum predicted increases in the share of Dallas Love Field markets that are served by each airline following the 2014 repeal of the Wright amendment, according to their estimates. We find that our results are broadly consistent with theirs, as they are below their estimated maximum entry.
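For concreteness, the following sketch reproduces the logic of the Column 1 estimator on simulated placeholder data; the covariates, coefficients, and sample size are hypothetical and are not the Ciliberto-Tamer data.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

# Placeholder data: entry indicator Y for one airline and market covariates X,
# with the first covariate standing in for the Wright-amendment dummy.
n_mkts = 2000
X = np.column_stack([rng.binomial(1, 0.1, n_mkts),
                     rng.normal(size=(n_mkts, 4))])
Y = rng.binomial(1, np.clip(0.4 + X @ np.array([-0.3, 0.1, 0.05, -0.05, 0.02]),
                            0.01, 0.99))

# Column 1: OLS fit of E[Y_{i,m} | X_m = x] = x' gamma_i (with an intercept).
X1 = np.column_stack([np.ones(n_mkts), X])
gamma_hat = np.linalg.lstsq(X1, Y, rcond=None)[0]

# Counterfactual: set the policy dummy to 0 in the affected markets M, predict
# entry there, and compare with the observed average entry in M.
affected = X[:, 0] == 1
X_tilde = X1[affected].copy()
X_tilde[:, 1] = 0.0                      # column 1 of X1 is the policy dummy
estimate = (X_tilde @ gamma_hat).mean() - Y[affected].mean()
print(f"estimated change in entry in affected markets: {estimate:.3f}")
# The Column 2 estimator replaces the OLS fit with a leave-one-out kernel
# regression of Y on X, evaluated at the same counterfactual covariates.
\end{verbatim}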
Now we take this exercise one step further. The Wright amendment was actually repealed in 2014. This means that we can observe how airlines entered the markets after the repeal of the Wright Amendment and after any new information arose during its implementation. We compile 2015 data from the DB1B Market and Ticket Origin and Destination Dataset of the U.S. Department of Transportation (the same source as the original dataset), and treat it in the same way as the original authors (see the Online Appendix for details). We focus on the same 81 markets from the original paper. The actual change in entry in 2015 in the data relative to the original data is shown in Column 4 of Table \ref{ct_decomp}. We then compare the counterfactual estimates from \cite{Ciliberto/Tamer:09:Eca} and our decomposition approach to the actual changes.
\begin{table}[t]
\begin{centering}
\small
\caption{\small \cite{Ciliberto/Tamer:09:Eca} Revisited: Model Predicted and Empirical Counterfactuals of the Repeal of the Wright Amendment}
\label{ct_decomp}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{cccccc}
\hline
\hline
\tabularnewline
& &\multicolumn{4}{c}{Outcome: Change in Probability of Entry in Dallas-Love Markets} \\
\\
\cline{2-6}
\tabularnewline
& & Decomposition Method & Decomposition Method & Ciliberto \& Tamer (2009) & Empirical \\
& & Linear Model & Nonparametric Model & Maximum Predicted Entry & \\
\tabularnewline
\hline
\multicolumn{1}{c}{} & & & & & \\
American Airlines & & -0.030 & 0.128 & 0.463 & -0.04 \\
& & (0.037) & (0.036) & &\\
\tabularnewline
Delta Airlines & & -0.023 & 0.174 & 0.499 & 0.46 \\
& & (0.043)& (0.039) & &\\
\tabularnewline
Southwest Airlines & & 0.508 & 0.451 & 0.474 & 0.471 \\
& & (0.037)&(0.056) & &\\
\tabularnewline
United Airlines & & -0.009 & 0.043 & 0.147 & 0 \\
& &(0.031) &(0.016) & &\\
\\
\hline
\multicolumn{1}{c}{} & & & & &\\
\end{tabular}
}
\par\end{centering}
\parbox{6.2in}{\footnotesize
Notes: We report the estimated counterfactual changes to the entry of major airlines into Dallas Love Field markets following the repeal of the Wright Amendment. In Columns 1-2, we use our decomposition approach to provide point estimates of this counterfactual effect, using the same pre-2014 dataset of \cite{Ciliberto/Tamer:09:Eca}. Column 1 uses a linear model, while Column 2 reports a nonparametric estimate. Standard errors for these columns are computed by the bootstrap, following the approach in the Online Appendix with $B=999$ replications. In the third column, we restate the results in Table VII, Column 1 of \cite{Ciliberto/Tamer:09:Eca}, who presented the maximum change in entry of those airlines. Finally, the Wright Amendment was fully repealed in 2014, allowing all airlines to enter those markets. The final column shows the realized values of the change in entry for those airlines in affected markets in 2015, after the repeal.}
\end{table}
The results show that the decomposition method (Columns 1-2) using pre-repeal data performs well relative to the empirical outcomes in Column 4. Both the parametric and nonparametric estimates of the decomposition approach capture the large increase in entry by Southwest Airlines, and the negligible post-repeal entry by American Airlines and United Airlines. This lack of entry by American and United post-repeal is broadly consistent with the multiple equilibria in an entry model: Southwest and Delta entered frequently after the repeal, but American and United stayed out of those markets. The empirical values are also within the maximum bounds reported in \cite{Ciliberto/Tamer:09:Eca}. However, as the authors only reported the maximum predicted entry, their results appear further apart from the realized values for American and United. While the lack of entry results for these firms is consistent with \cite{Ciliberto/Tamer:09:Eca}'s results of maximum predicted entry, this would imply that their counterfactual analysis predicted a range of 0 to 50\% of markets entered by those airlines, a large range for policy analysis.
While the decomposition-based results perform well for American, United and Southwest, the method performs worse in predicting entry by Delta Airlines. This could simply be a feature of out-of-sample prediction, possibly due to changes to Delta between 2008 and 2014 (including the acquisition of Northwest Airlines, which was completed in 2010), and/or due to the definition of markets in this dataset.\footnote{Delta Airlines only operates from Dallas Love Field to Atlanta, but there are multiple connecting flights from Atlanta. Routes that include layovers are considered as separate markets by \cite{Ciliberto/Tamer:09:Eca}.} Nevertheless, we consider that the decomposition-based prediction performed well overall in this out-of-sample exercise, particularly as it used data from years before the policy was implemented and matched well with multiple observed outcomes.
\subsubsection{The ``Other Term'' in Aggregate Decomposition: The Role of Market Characteristics}
Table \ref{ct_decomp} presents the results for the (average) effect of the repeal of the Wright amendment on the set of markets affected by the policy. However, this is only one of two components in the typical aggregate decomposition (e.g., Oaxaca-Blinder; see \cite{Fortin/Lemieux/Firpo:11:Handbook} for an overview). The other term in such exercises is the ``observable effect,'' i.e., how the characteristics of markets affected and not affected by the repeal differ.
To see this, let us write the payoff state as $X_m = (X_{m}^{\text{Wright}}, \tilde X_m)$, which makes explicit which variables are related to the policy ($X_m^{\text{Wright}}$) and which other states are firm-market specific ($\tilde X_m$). Let $\tilde X_m^1$ and $\tilde X_m^0$ denote the (observable) payoff states for firm-markets that have $X_{i,m}^{\text{Wright}} = 1$ or $0$, respectively. Then, the difference between the (average) probability of entry by firms not subject to the Wright amendment and that by firms subject to the policy (i.e., the effect of a repeal of the amendment) can be written as\footnote{We note that this has a flipped sign relative to standard aggregate decompositions because we study the repeal of the Wright amendment, rather than imposing such a policy. Hence, the comparison is between markets not affected and those affected, rather than the other way round.}:
\begin{align*}
& \mathbf{E}[Y_{i,m} | \tilde X_m^0, X_{i,m}^{\text{Wright}} = 0] - \mathbf{E}[Y_{i,m} | \tilde X_m^1, X_{i,m}^{\text{Wright}} = 1] \\
& = \underbrace{\mathbf{E}[Y_{i,m} | \tilde X_{m}^0, X_{i,m}^{\text{Wright}} = 0] - \mathbf{E}[Y_{i,m} | \tilde X_{m}^1, X_{i,m}^{\text{Wright}} = 0]}_{\text{Observable Effect}} + \underbrace{\mathbf{E}[Y_{i,m} | \tilde X_{m}^1, X_{i,m}^{\text{Wright}} = 0] - \mathbf{E}[Y_{i,m} | \tilde X_{m}^1, X_{i,m}^{\text{Wright}} = 1]}_{\text{Policy Effect}}.
\end{align*}
The second term on the right-hand side is the effect of the policy (repeal of the amendment), keeping fixed the distribution of observable characteristics for firms/markets subject to the Wright amendment. Its estimates are shown in Table \ref{ct_decomp}. However, the other term shows the average difference in entry across markets that are not affected/affected by the amendment, which are due solely to differences in observable characteristics (as the policy variable does not change). The estimates for this observable effect are presented in Table \ref{ct_decomp2} in the Online Appendix. (Details on its estimation are also presented in the Online Appendix).
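A minimal sketch of how the two terms could be computed with a linear conditional-mean estimator and simulated placeholder data is given below; the variables and coefficients are hypothetical and do not correspond to the estimates reported in Table \ref{ct_decomp2}.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)

# Placeholder data: D is the Wright-amendment dummy, Z a market characteristic,
# and Y firm i's entry indicator; all numbers are hypothetical.
n = 5000
D = rng.binomial(1, 0.2, n)
Z = rng.normal(-0.5 * D, 1.0)                 # affected markets look different
Y = rng.binomial(1, np.clip(0.5 - 0.25 * D + 0.1 * Z, 0.01, 0.99))

# Linear estimate of g(d, z) = E[Y | D = d, Z = z].
gamma = np.linalg.lstsq(np.column_stack([np.ones(n), D, Z]), Y, rcond=None)[0]
def g_hat(d, z):
    return gamma[0] + gamma[1] * d + gamma[2] * z

Z0, Z1 = Z[D == 0], Z[D == 1]   # characteristics of unaffected / affected markets

# Policy effect: repeal the amendment while holding the affected markets'
# characteristics fixed.
policy_effect = g_hat(0, Z1).mean() - g_hat(1, Z1).mean()
# Observable effect: the part of the entry gap driven purely by differences in
# market characteristics, with the policy dummy held at 0.
observable_effect = g_hat(0, Z0).mean() - g_hat(0, Z1).mean()
print(f"policy: {policy_effect:.3f}, observable: {observable_effect:.3f}")
\end{verbatim}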
The effects in Table \ref{ct_decomp2} are much larger for American, Delta and United relative to Table \ref{ct_decomp}. For Southwest, the effect is 20\% lower than that in Table \ref{ct_decomp}. This effect can be explained by the characteristics of markets subject to the Wright amendment. Such markets, which involve Dallas Love Field, are concentrated in the South, are often smaller than elsewhere, and have lower market presence of the major airlines. Hence, it would be natural that airlines are less likely to enter such markets, even if the Wright amendment were not present/repealed. As a result, a naive comparison of entry across different markets would overstate the effects of the policy, with effects of over 50\% for most airlines (found by aggregating the estimates in Tables \ref{ct_decomp} and \ref{ct_decomp2}). However, we estimate that the repeal itself would generate entry into affected markets, as seen in Table \ref{ct_decomp}.
\subsection{The Threat of Southwest Airlines on Competitors' Entry Behavior}
We can pursue further counterfactual exercises beyond those in \cite{Ciliberto/Tamer:09:Eca}. One salient example is to study competition effects in the airline industry. This includes the effect of Southwest Airlines' rise on its competitors' behavior, which has received attention due to Southwest's status as a new, lower-cost entrant with new organizational strategies (see \cite{Knorr/Arndt}, for example). \cite{Goolsbee/Syverson:08:QJE} studied one angle of this question, focusing on whether Southwest Airlines' threat of entry affected established airlines' (e.g., American, Delta, United) pricing behavior. In this section, we use the decomposition-based prediction within the same entry game above to extend their analysis beyond pricing.
We consider whether the threat of Southwest entry changes established airlines' actual entry behavior (as opposed to pricing) and whether such effects vary across smaller or larger markets (as found in \cite{Ellison/Ellison:11:AEJ} and \cite{Tenn/Wendling:14:ReStat} in the pharmaceutical industry).\footnote{Since our framework is static, the interpretation of our results also differs from those cited above. In a static environment, there is no dynamic choice of capacity and there are no incumbents, so the latter cannot ``accommodate'' entry over time. Nevertheless, we think the present exercise is still informative about whether a state in which Southwest is likely to enter induces differential (strategic) behavior by established airlines.}
\subsubsection{Set-up}
We use the same dataset and the same model specification from the last section. This includes the entry game environment with complete information. Our only change is regarding the policy of interest: we replace `Wright Amendment' by a binary variable called \textit{Southwest Threat of Entry}. This variable is defined following \cite{Goolsbee/Syverson:08:QJE}: it equals 1 if Southwest operates in both endpoints of a market, and 0 otherwise.\footnote{In their interpretation, this generates a discontinuous threat of Southwest entry (relative to when they operate in one or no endpoints). Note that entry decisions in other markets are independent from those in market $m$ due to the maintained assumption of independence of $\varepsilon_{i,m}$. Hence, it can be used as a policy variable in this exercise. However, we cannot condition on actual entry in a certain market, whether by Southwest or another airline, since that is a simultaneous decision in this environment.}
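In other words, writing $o(m)$ and $d(m)$ for the two endpoint airports of market $m$ (notation we introduce only for this display), the policy variable used here is
\begin{equation*}
X_{m}^{\text{SW threat}} := \mathbf{1}\{\text{Southwest operates out of } o(m)\}\cdot\mathbf{1}\{\text{Southwest operates out of } d(m)\}.
\end{equation*}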
Our counterfactual policy sets this Southwest Threat of Entry variable to 0. Therefore, we identify whether American, Delta and United are more/less likely to enter markets when Southwest is no longer a threat to entry. The arguments for the validity of our decomposition-based prediction are analogous to those in the previous exercise: (i) the policy is observable to players, since Southwest's routes are observable and (ii) the policy is within the support of the data, as there are many markets where Southwest does not operate in both endpoints. Hence, under the invariance condition on the equilibrium selection rules, Corollary \ref{example1_corollary} can be applied. Estimation and inference on these effects follow those in the previous section, detailed in Appendix \ref{app:D}. The results are presented below.
\subsubsection{Results}
\begin{table}[t]
\begin{centering}
\small
\caption{\small \cite{Goolsbee/Syverson:08:QJE} Revisited: Effects of Removing the Threat of Southwest Airlines Entry on Other Airlines' Probability of Entry}
\label{southwest_decomp}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{ccccccccc}
\hline
\hline
\tabularnewline
& &\multicolumn{6}{c}{Outcome: \footnotesize Change in the Entry Prob. after Removing Southwest Threat of Entry} \\
\\
\cline{2-8}
\tabularnewline
& & \multicolumn{2}{c}{All Markets} & \multicolumn{2}{c}{Small Markets} & \multicolumn{2}{c}{Large Markets} \\
& & Linear & Nonparametric & Linear & Nonparametric& Linear & Nonparametric \\
\tabularnewline
\hline
\multicolumn{1}{c}{} & & & & & & & \\
American Airlines & & -0.080 & -0.074 & -0.153 & -0.189 & -0.006 & 0.052 \\
& & (0.023) & (0.021) & (0.034) & (0.031) & (0.033) & (0.027)\\
\tabularnewline
Delta Airlines & & -0.079 & -0.078 & -0.109 & -0.134 & -0.037 & -0.015 \\
& & (0.024)& (0.021) & (0.037) & (0.028) & (0.034) & (0.028) \\
\tabularnewline
United Airlines & & -0.056 & -0.047 & -0.073 & -0.079 & -0.056 & 0.006 \\
& &(0.023) &(0.020) & (0.029) & (0.025) & (0.032) & (0.026) \\
\\
\hline
\multicolumn{1}{c}{} & & & & &\\
\end{tabular}
}
\par\end{centering}
\parbox{6.2in}{\footnotesize
Notes: We report the estimated counterfactual changes to the entry of major airlines (American, Delta, United) after the removal of the threat of Southwest Airlines entry. The effect is estimated on the markets originally subject to that threat, as defined in \cite{Goolsbee/Syverson:08:QJE}. The first two columns show the effects for all such markets, for both linear and nonparametric estimates. Columns 3-4 show the results for markets affected by such a threat but below the median market size, while the last two columns show the effects for markets larger than the median. Standard errors for these columns are computed by the bootstrap, following the approach in the Online Appendix with $B=999$ replications. A negative coefficient represents a decrease in entry if there were no threat of Southwest entry, relative to there being such a threat.}
\end{table}
As we can see from Table \ref{southwest_decomp}, a threat of Southwest entry \textit{increases} the average entry probability of American Airlines (7.4-8\%), Delta Airlines (7.8-7.9\%) and United Airlines (4.7-5.6\%) in such threatened markets (i.e., removing a threat of Southwest entry decreases competitors' average entry probabilities). The results suggest that in markets where a Southwest entry is likely, the established firms (with larger market presence) will be more likely to enter. While our results cannot be strictly interpreted as endogenous deterrence or accommodation (as there are no dynamics), they are consistent with the multiple equilibria in the game, together with an equilibrium selection mechanism whereby larger and established firms are more likely to enter when many firms are willing to do so. This is an additional outcome affected by competition, beyond pricing (\cite{Goolsbee/Syverson:08:QJE} and \cite{Tenn/Wendling:14:ReStat}).
To further understand our results, we follow \cite{Ellison/Ellison:11:AEJ} and \cite{Tenn/Wendling:14:ReStat} and check whether such effects depend on market size. To do so, we re-estimate the model for markets below and above the median market size. Consistent with those papers, we find nonmonotonic effects of the Southwest threat of entry on its competitors' decisions. In particular, we find that the increased entry due to Southwest's threat is driven by small markets. In small markets (i.e., below the median market size), where profits are more limited, the threat of Southwest entry induces other firms to enter. This is consistent with such airlines coordinating on entry as an equilibrium ``deterrence'' to Southwest (beyond pricing). In contrast, the threat of Southwest does not induce entry in larger markets. This is consistent with (a static interpretation of) ``accommodation'' in large markets: when profits are large enough, all firms enter in that state even if others are likely to enter, as suggested in \cite{Tenn/Wendling:14:ReStat}.\footnote{Table \ref{southwest_decomp2} in the Online Appendix shows the results for the second term in the average aggregate decomposition (i.e., the role of market characteristics, keeping the same threat of Southwest Airlines entry).}
\subsubsection{Heterogeneous Effects depending on Number of Airlines Threatening Entry}
We can extend the previous exercise beyond Southwest Airlines. For instance, researchers may be interested in whether the competition effects differ across the number of potential entrants. This can also be answered within our framework.
Indeed, we can redefine our policy variable as the number of airline $i$'s competitors that threaten entry in market $m$. We keep the same definition of threat of entry as in the last section: i.e., a competitor operates flights out of each endpoint of a route, but not the route itself (see the display below). Since our emphasis is on the same four airlines (American, Delta, Southwest, United), each firm $i$ may face $\{0,1,2,3\}$ competitors threatening entry in each market. We conduct three separate counterfactual exercises to study how a decrease in the number of airlines threatening entry affects $i$'s choice to enter. Each exercise decreases the number of potential entrants by one (i.e., making markets with three potential entrants have only two, etc.). The results are shown in Table \ref{ct_number} in the Online Appendix.\footnote{We note that the different exercises are not generally comparable, because the effects are calculated over different markets (i.e., those markets with two competitors threatening entry for American are likely to be very different from those with only one threat).}
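Schematically, the new policy variable can be written as
\begin{equation*}
X_{i,m}^{\#\text{threats}} := \sum_{j\neq i}\mathbf{1}\{\text{airline } j \text{ operates out of both endpoints of } m \text{ but does not serve } m\}\in\{0,1,2,3\},
\end{equation*}
where $j$ ranges over the four airlines considered (American, Delta, Southwest, United), and each counterfactual exercise lowers this count by one on the corresponding set of markets. The symbol $X_{i,m}^{\#\text{threats}}$ is introduced here only to summarize the construction described in the text.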
As we can see, airlines would generally increase entry if they faced no potential competitor rather than one: the airline is then more likely to capture the market's profits and less likely to face competition in such a market. However, there is a net decrease in entry for American, Delta and United in markets facing a higher threat of entry. Indeed, decreasing the number of $i$'s competitors threatening entry from three to two decreases average entry in such markets, consistent with the previous section's results.
\section{Conclusion}
Decomposition methods are appealing in counterfactual analysis for their computational tractability and simplicity. However, in strategic settings, predictions generated by the methods may fail to be incentive compatible after the policy or to account for additional coordination possibilities induced by the policy. In this paper, we have presented a set of core conditions that validate the use of the methods in strategic settings. Most importantly, we have provided a precise formulation of the invariance condition on the equilibrium selection rules that is required for the approach. Essentially, under the invariance condition, the agents can be viewed as playing ``the same game'' after the policy. As demonstrated in this paper, this condition already encompasses many existing assumptions on equilibrium selection in empirical research. This opens up a new approach of counterfactual analysis in a strategic setting, where one does not need to recover the structural parameters and the set of equilibria for the analysis.
There are several extensions from this paper's proposal that look promising to us. A most prominent extension is to explore a set of conditions for the decomposition-based approach in a dynamic game setting. A policy in these games generally induces a change of the agents' posteriors through a change of a future path of the payoff states, and it seems nontrivial to maintain the invariance of the posterior after the policy. It appears interesting in this regard to note the approach of \cite{Kocherlakota:19:JME} who introduced independent shocks to the policy so that the posterior of the private sector for future policies remains invariant. Future work can expand this insight and explore the validity of decomposition-based predictions in a dynamic setting.
Another interesting extension is to broaden the class of policies beyond those considered in this paper, for instance, policies that change the value of a structural parameter of the game. While our paper's framework generally does not allow for such a policy, if the parameter appears as part of an index in the payoff state, the decomposition-based approach may be applicable, as mentioned in the paper. It seems promising to investigate this possibility in a more general setting in future research.
Finally, our proposal can generate a wide range of intermediate approaches, where the target of the prediction is of the form $h(Y,W;\theta)$, and $\theta$ is part of the structural parameters in the game. Then, the message of our paper is that under the conditions stated in this paper, one can perform a counterfactual analysis using the decomposition method, after identifying $\theta$. The main point here is that one does not need to recover the full set of structural parameters of the game or the set of equilibria from data. For example, such an intermediate approach can be used to extend this paper's framework to counterfactual analysis where the target of the prediction is welfare or profits.
\putbib[counterfactual2]
\end{bibunit}
\begin{bibunit}[econometrica]
\newpage
\section{Introduction}
Let $T>0$ be a positive time, $\mathcal{D}$ be a bounded, connected, open subset of $\R^N$, $N\in\mathbb N^*$, with boundary $\Gamma := \partial\mathcal{D}$ regular enough. Let $\mathcal{D}_0$ be a nonempty open subset of $\mathcal{D}$. As usual, we introduce the notation $\chi_{\mathcal{D}_0}$ to refer to the characteristic function of the set $\mathcal{D}_0$. To abridge the notation, hereinafter we write $Q_T:=(0,T)\times \mathcal{D}$ and $\Sigma_T:=(0,T)\times \Gamma$.
Let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \geq 0}, \mathbb{P})$ be a complete filtered probability space on which a one-dimensional standard Brownian motion $\{ W(t)\}_{t \geq 0}$ is defined such that $\{\mathcal{F}_t\}_{t \geq 0}$ is the natural filtration generated by $W(\cdot)$ augmented by all the $\mathbb{P}$-null sets in $\mathcal{F}$. Let $X$ be a real Banach space. For every $p \in [1, +\infty]$, we introduce
\begin{equation*}
L_{\mathcal{F}}^p(0,T;X) := \{ \phi : \phi\ \text{is an }X\text{-valued } \mathcal{F}_t\text{-adapted process on } [0,T]\ \text{and}\ \phi \in L^p([0,T] \times \Omega ;X)\},
\end{equation*}
endowed with the canonical norm. We denote by $L_{\mathcal{F}}^2(\Omega; C([0,T];X))$ the Banach space consisting of all $X$-valued, continuous, $\mathcal{F}_t$-adapted processes $\phi(\cdot)$ such that $\mathbb{E}\left(\norme{\phi(\cdot)}_{C([0,T];X)}^2 \right) < \infty$, also equipped with the canonical norm.
Let us consider the stochastic forward semilinear equation
\begin{equation}\label{eq:forward_semilinear}
\begin{cases}
\textnormal{d}{y}=(\Delta y + f(\omega,t,x,y)+\chi_{\mathcal{D}_0}h)\textnormal{d}t+(g(\omega,t,x,y)+H)\textnormal{d} W(t) &\text{in }Q_T, \\
y=0 &\text{on }\Sigma_T, \\
y(0)=y_0 &\text{in }\mathcal{D}.
\end{cases}
\end{equation}
In \eqref{eq:forward_semilinear}, $y$ is the state variable, the pair $(h,H)\in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}_0))\times L^2_{\mathcal F}(0,T;L^2(\mathcal{D}))$ is the control and $y_0\in L^2(\Omega,\mathcal F_0; L^2(\mathcal{D}))$ is the initial datum. Here, $f$ and $g$ are two globally Lipschitz nonlinear functions, that is, there exists a positive constant $L>0$ such that
\begin{align}
\begin{cases}
&\forall (\omega,t,x, s_1,s_2)\in \Omega\times[0,T]\times \mathcal{D}\times \R^2,\label{eq:UniformLipschitzf_forw}\\
& |f(\omega,t,x,s_1)-f(\omega,t,x,s_2)| + |g(\omega,t,x,s_1)-g(\omega,t,x,s_2)| \leq L|s_1-s_2|.
\end{cases}
\end{align}
We also impose
\begin{equation}
\label{eq:f_g_zero}
\forall (\omega,t, x) \in \Omega\times[0,T]\times\mathcal{D},\ f(\omega,t,x,0) = g(\omega,t,x,0)=0.
\end{equation}
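To illustrate the class of nonlinearities covered by \eqref{eq:UniformLipschitzf_forw}--\eqref{eq:f_g_zero} (these examples are only meant as illustrations), one may take, for instance,
\begin{equation*}
f(\omega,t,x,s)=a(\omega,t,x)\,s, \qquad g(\omega,t,x,s)=b(\omega,t,x)\sin(s),
\end{equation*}
where $a$ and $b$ are measurable, adapted and bounded by $L$; the linear case $f=\alpha y$, $g=\beta y$ recalled below corresponds to constant coefficients and to a linear, rather than sinusoidal, diffusion term.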
Under these conditions, by taking $y_0\in L^2(\Omega,\mathcal F_0; L^2(\mathcal{D}))$ and $(h,H)\in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}_0))\times L^2_{\mathcal F}(0,T;L^2(\mathcal{D}))$, it is known (see \cite[Theorem 2.7]{LZ19}) that system \eqref{eq:forward_semilinear} is globally defined in $[0,T]$. More precisely, we can establish the existence and uniqueness of the solutions to \eqref{eq:forward_semilinear} in the class
\begin{equation}\label{eq:wp_forward}
y\in \mathcal W_T:= L^2_{\mathcal F}(\Omega; C([0,T];L^2(\mathcal{D})))\cap L^2_{\mathcal F}(0,T; H_0^1(\mathcal{D})).
\end{equation}
One of the key questions in control theory is to determine whether a system enjoys the so-called null controllability property. System \eqref{eq:forward_semilinear} is said to be globally null-controllable if for any initial datum $y_0\in L^2(\Omega,\mathcal F_0; L^2(\mathcal{D}))$, there exist controls $(h,H)\in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}_0))\times L^2_{\mathcal F}(0,T;L^2(\mathcal{D}))$ such that the corresponding solution satisfies
\begin{equation}\label{eq:null_cond}
y(T,\cdot)=0 \quad\text{in $\mathcal{D}$, a.s.}
\end{equation}
Observe that the regularity \eqref{eq:wp_forward} justifies the definition we have introduced.
In this paper, we are interested in studying this controllability notion for system \eqref{eq:forward_semilinear}. Before introducing our main results we give a brief panorama of previous results available in the literature and emphasize the main novelty of this work.
\subsection{Known results}
The controllability of parabolic partial differential equations (PDEs) has been studied by many authors and the results available in the literature are very rich. In the following paragraphs, we focus on (small-time) global null-controllability results for scalar parabolic equations.
\textbf{Deterministic setting.} In the case where $g\equiv H\equiv 0$ and $f$ and $y_0$ are deterministic functions, system \eqref{eq:forward_semilinear} has been studied by several authors. In the mid 90's, Fabre, Puel \& Zuazua in \cite{FPZ95} studied the so-called global approximate-null controllability in the case where $f$ is a globally Lipschitz nonlinearity and condition \eqref{eq:null_cond} is replaced by the weaker constraint $\|y(T)\|_{L^2(\mathcal{D})}\leq \epsilon$. Later, Imanuvilov \cite{Ima95} and Fursikov \& Imanuvilov \cite{fursi} improved this result and proved that the global null-controllability holds, see also \cite{LR95} for the case of the (linear) heat equation, i.e. $f \equiv 0$. After these seminal works, Fern\'andez-Cara \cite{FC97}, Fern\'andez-Cara \& Zuazua \cite{FCZ00} have considered slightly superlinear functions $f$ leading to blow-up without control, see also \cite{Bar00} and the more recent work \cite{LB20}. Results for nonlinearities including $\nabla y$ and depending on Robin boundary conditions have also been studied for instance in \cite{DFCGBZ02,FCGBGP06}.
One common feature among these results is that the authors study the controllability problem by using the following general strategy, due to Zuazua in the context of the wave equation, see \cite{Zua93} or \cite[Chapter 4.3]{Cor07} for a presentation of this method. First, linearize the system and study the controllability of the system \eqref{eq:forward_semilinear} replacing $f$ by $a(t,x) y$ where $a \in L^{\infty}(Q_T)$. Then, use a suitable fixed point method (commonly Schauder or Kakutani) for addressing the controllability of the nonlinear system. At this point, the important property of compactness is needed. In fact, compact embeddings relying on Aubin-Lions lemma like $W(0,T):=\{y\in L^2(0,T;H_0^1(\mathcal{D})),\ y_t\in L^2(0,T;H^{-1}(\mathcal{D}))\} \hookrightarrow L^2(0,T;L^2(\mathcal{D}))$ are systematically used.
\textbf{Stochastic setting.} In the case where $f(y)=\alpha y$ and $g(y)=\beta y$, $\alpha,\beta\in \R$, the controllability results for \eqref{eq:forward_semilinear} were initiated by Barbu, R\u{a}\c{s}canu \& Tessitore in \cite{BRT03}. Under some restrictive conditions and without introducing the control $H$ on the diffusion, they established a controllability result for linear forward stochastic PDEs. Later, Tang \& Zhang in \cite{TZ09} improved this result and considered more general coefficients $\alpha$ and $\beta$ (depending on $t,x$ and $\omega$). The main novelty in that work was to introduce the additional control $H$ and prove fine Carleman estimates for stochastic parabolic operators. The same methodology has been used to study other cases like the ones of Neumann and Fourier boundary conditions (\cite{Yan18}), degenerate equations (\cite{LY19}) and fourth-order parabolic equations (\cite{GCL15}). As a side note, we shall mention the work by L\"{u} in \cite{LU11} who, by using the classical Lebeau-Robbiano strategy (\cite{LR95}), noticed that the action of the control $H$ can be omitted at the price of considering random coefficients $\alpha$ and $\beta$ only depending on the time variable $t$.
In the framework proposed in this paper, to the best of the authors' knowledge, there are no results available in the literature. Compared to the deterministic setting, many new difficulties arise when establishing controllability properties for stochastic PDEs. For instance, the solutions of stochastic PDEs are usually not differentiable with respect to the variable carrying the noise (i.e., the time variable). The diffusion term also introduces additional difficulties in the analysis. But most importantly, as remarked in \cite[Remark 2.5]{TZ09}, the compactness property, which is one of the key tools in the deterministic setting, is known to fail for the functional spaces related to stochastic PDEs. This is the main obstruction to employing classical methodologies such as those in \cite{fursi}, \cite{FCZ00} for establishing null-controllability of semilinear problems at the stochastic level.
\subsection{Statement of the main results}
To overcome the lack of compactness mentioned in the last section, in this work we propose a new tweak on an old strategy for controlling parabolic systems. We use a classical methodology for controlling a linear system with source terms $F,G\in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}))$ of the form
\begin{equation}\label{eq:sys_forward_source_intro}
\begin{cases}
\textnormal{d}{y}=(\Delta y + F+ \chi_{\mathcal{D}_0}h)\textnormal{d}t + (G+H)\textnormal{d}{W}(t) &\text{in }Q_T, \\
y=0 &\text{on } \Sigma_T, \\
y(0)=y_0 &\text{in }\mathcal{D},
\end{cases}
\end{equation}
in a suitable weighted space. Note that this strategy has been widely used in the literature and has been revisited in \cite{LLT13} to obtain local results. In turn, such a weighted space is naturally defined through the weights arising in the Carleman estimates needed for studying the observability of the corresponding linear adjoint system. But, unlike many other works, we make precise the dependency on the parameters involved in the construction of the Carleman weights and use them in a second stage to prove that the nonlinear map $\mathcal N:(F,G)\mapsto (f(\omega,t,x,y),g(\omega,t,x,y))$, with $y$ the solution of \eqref{eq:sys_forward_source_intro}, is well-defined and strictly contractive in a suitable functional space. In this way, the controllability of \eqref{eq:forward_semilinear} is ensured through a Banach fixed point method which does not rely on any compactness argument.
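Schematically, once $\mathcal N$ is shown to map a suitable complete (weighted) space into itself and to be strictly contracting there, Banach's fixed point theorem provides a unique pair $(F^*,G^*)$ with
\begin{equation*}
(F^*,G^*)=\mathcal N(F^*,G^*), \qquad\text{i.e.,}\qquad F^*=f(\cdot,\cdot,\cdot,y^*), \quad G^*=g(\cdot,\cdot,\cdot,y^*),
\end{equation*}
where $y^*$ is the controlled solution of \eqref{eq:sys_forward_source_intro} associated with the sources $(F^*,G^*)$; by construction, $y^*$ then solves the semilinear system \eqref{eq:forward_semilinear} and satisfies the null condition \eqref{eq:null_cond} obtained at the linear step.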
Our first main result is as follows.
\begin{theo}
\label{th:semilinear_forward}
Under assumptions \eqref{eq:UniformLipschitzf_forw}--\eqref{eq:f_g_zero}, system \eqref{eq:forward_semilinear} is small-time globally null-controllable, i.e. for every $T>0$ and for every $y_0 \in L^2(\Omega,\mathcal F_0;L^2(\mathcal{D}))$, there exists $h \in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}_0))$ such that the unique solution $y$ of \eqref{eq:forward_semilinear} satisfies $y(T,\cdot) = 0$ in $\mathcal{D}$, a.s.
\end{theo}
\begin{rmk}
As compared to some results in the deterministic framework, notice on the one hand that here we are not imposing any differentiability condition on the nonlinearities, that is, $f,g$ are merely $C^0$-functions. On the other hand, our method does not allow us to establish global controllability results for slightly superlinear nonlinearities as considered in \cite{FCZ00}.
\end{rmk}
\begin{rmk}\label{rmk:carleman_new}
As mentioned above, the main ingredient to prove \Cref{th:semilinear_forward} is a precise Carleman estimate for the linear adjoint system to \eqref{eq:sys_forward_source_intro} (see \Cref{thm:carleman_backward_0}), which in this case is a backward parabolic equation. Previous to this work, such Carleman estimate was not available in the literature (see \cite[Theorem 2.5]{BEG16} for a similar estimate in the deterministic case). The methodology employed to prove the result is the weighted identity method introduced in the stochastic framework in \cite{TZ09}.
\end{rmk}
As classical in the stochastic setting, for completeness, using the same strategy described above, it is possible to establish a controllability result for semilinear backward parabolic equations. More precisely, consider
\begin{equation}\label{eq:backward_semilinear}
\begin{cases}
\textnormal{d}{y}=\left(-\Delta y+f(\omega,t,x,y,Y)+\chi_{\mathcal{D}_0}h\right)\textnormal{d}{t}+Y\textnormal{d}{W}(t) &\text{in } Q_T, \\
y=0 &\text{on }\Sigma_T, \\
y(T)=y_T &\text{in }\mathcal{D},
\end{cases}
\end{equation}
where $f$ is a globally Lipschitz nonlinearity, i.e., there exists $L>0$ such that
\begin{align}
\label{eq:UniformLipschitzf}
\begin{cases}
\forall (w,t, x) \in \Omega\times [0,T]\times\mathcal{D},\ \forall s_1,s_2,\ov{s}_1,\ov{s}_2 \in \R, \\ |f(\omega,t,x,s_1,\ov{s}_1) - f(\omega,t,x,s_2,\ov{s}_2)| \leq L \left( |s_1-s_2|+ |\ov{s}_1-\ov{s}_2|\right).
\end{cases}
\end{align}
Moreover, we impose that
\begin{equation}
\label{eq:fzero}
\forall (\omega,t, x) \in \Omega\times[0,T]\times\mathcal{D},\ f(\omega,t,x,0,0) = 0.
\end{equation}
Under these conditions, by taking $y_T\in L^2(\Omega,\mathcal F_T; L^2(\mathcal{D}))$ and $h \in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}_0))$, it is known (see \cite[Theorem 2.12]{LZ19}) that system \eqref{eq:backward_semilinear} is also globally well-defined in $[0,T]$. In this case, we can establish the existence and uniqueness of the solutions to \eqref{eq:backward_semilinear} in the class
\begin{equation}\label{eq:wp_backward}
(y,Y)\in \mathcal W_T\times L^2_{\mathcal F}(0,T;L^2(\mathcal{D})).
\end{equation}
Our second main result is as follows.
\begin{theo}
\label{th:semilinear_backward}
Under assumptions \eqref{eq:UniformLipschitzf}--\eqref{eq:fzero}, system \eqref{eq:backward_semilinear} is small-time globally null-controllable, i.e. for every $T>0$ and for every $y_T \in L^2(\Omega,\mathcal F_T;L^2(\mathcal{D}))$, there exists $h \in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}_0))$ such that the unique solution $y$ of \eqref{eq:backward_semilinear} satisfies $y(0,\cdot) = 0$ in $\mathcal{D}$, a.s.
\end{theo}
\Cref{th:semilinear_backward} extends to the nonlinear setting the previous results in
\cite[Corollary 3.4]{BRT03} and \cite[Theorem 2.2]{TZ09} for the backward equation.
The strategy to prove \Cref{th:semilinear_backward} is very close to the one of \Cref{th:semilinear_forward} but one major difference can be spotted. For this case, it is not necessary to prove a Carleman estimate for forward stochastic parabolic equations. Actually, it suffices to use the deterministic Carleman inequality of \cite[Thm. 2.5]{BEG16} and employ the duality method introduced by Liu in \cite{liu14}.
\subsection{Outline of the paper}
The rest of the paper is organized as follows. In \Cref{sec:forward}, we present the proof of \Cref{th:semilinear_forward}. In particular, \Cref{sec:new_carleman} is devoted to prove the new Carleman estimate anticipated in \Cref{rmk:carleman_new}. \Cref{sec:backward} is devoted to prove \Cref{th:semilinear_backward}. Finally in \Cref{sec:conclusion} we present some concluding remarks.
\section{Controllability of a semilinear forward stochastic parabolic equation}\label{sec:forward}
\subsection{A new global Carleman estimate for a backward stochastic parabolic equation}\label{sec:new_carleman}
This section is devoted to prove a new Carleman estimate for a backward stochastic parabolic equation. The main novelty here is that the weight does not degenerate as $t\to 0^+$ (compared with the classical work \cite{fursi}). This estimate has been proved in the deterministic case in \cite[Theorem 2.5]{BEG16} in a slightly more general framework. Here, we use many of the ideas presented there and adapt them to the stochastic setting.
To make a precise statement of our result, let $\mathcal{D}^\prime$ be a nonempty subset of $\mathcal{D}$ such that $\mathcal{D}^\prime\subset\subset\mathcal{D}_0$. Let us introduce $\beta\in C^4(\ov{\mathcal{D}})$ such that
\begin{equation}\label{eq:prop_weight_beta}
\begin{cases}
0<\beta(x)\leq 1, \ \forall x\in{\mathcal{D}}, \ \\
\beta(x)=0, \ \forall x\in \partial\mathcal{D}, \\
\inf_{\mathcal{D}\setminus\ov{\mathcal{D}^\prime}} \{|\nabla \beta|\}\geq \alpha >0.
\end{cases}
\end{equation}
The existence of such a function is guaranteed by \cite[Lemma 1.1]{fursi}.
Without loss of generality, in what follows we assume that $0<T<1$. For some constants $m\geq 1$ and $\sigma\geq 2$ we define the following weight function depending on the time variable
\begin{equation}\label{eq:def_theta}
\gamma(t):=
\begin{cases}
1+\left(1-\frac{4t}{T}\right)^\sigma, & t\in[0,T/4], \\
1, & t\in[T/4,T/2], \\
\textnormal{increasing on } [T/2,3T/4], \\
\dfrac{1}{(T-t)^m}, & t\in[3T/4,T],
\end{cases}
\qquad \gamma\in C^2([0,T)).
\end{equation}
We take the following weight functions $\varphi=\varphi(t,x)$ and $\xi=\xi(t,x)$
\begin{equation}\label{eq:weights_0}
\varphi(t,x):=\gamma(t)\left(e^{\mu(\beta(x)+6m)}-\mu e^{6\mu(m+1)}\right), \quad \xi(t,x):=\gamma(t)e^{\mu(\beta(x)+6m)},
\end{equation}
where $\mu$ is a positive parameter with $\mu\geq 1$ and $\sigma$ is chosen as
\begin{equation}\label{def:sigma}
\sigma=\lambda\mu^2e^{\mu(6m-4)},
\end{equation}
for some parameter $\lambda\geq 1$. Observe that with these choices of $\mu$ and $\lambda$, the parameter $\sigma$ is always greater than $2$, which also ensures that $\gamma\in C^2([0,T))$. We finally set the weight $\theta=\theta(t,x)$ as
\begin{equation}\label{def:theta_ell}
\theta:=e^{\ell} \quad\text{where } \ell(t,x):=\lambda\varphi(t,x).
\end{equation}
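For later reference, we record an elementary bound that follows directly from these definitions and will be used repeatedly in the proof of \Cref{thm:carleman_backward_0} below: since $\beta\geq 0$ and $m\geq 1$ imply $6\mu(m+1)\leq 2\mu(\beta+6m)$, we have
\begin{equation*}
|\varphi\gamma|=\gamma^2\left(\mu e^{6\mu(m+1)}-e^{\mu(\beta+6m)}\right)\leq \gamma^2\mu e^{6\mu(m+1)}\leq \mu\gamma^2e^{2\mu(\beta+6m)}=\mu\xi^2.
\end{equation*}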
Using these notations, we state the main result of this section which is a Carleman estimate for backward stochastic parabolic equations.
\begin{theo} \label{thm:carleman_backward_0}
For all $m\geq 1$, there exist constants $C>0$, $\lambda_0\geq 1$ and $\mu_0\geq 1$ such that, for any $z_T\in L^2(\Omega,\mathcal F_T;L^2(\mathcal{D}))$ and any $\Xi\in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}))$, the solution $(z,\ov{z})\in \mathcal W_T \times L^2_{\mathcal F}(0,T;L^2(\mathcal{D}))$ to
\begin{equation}\label{eq:system_z}
\begin{cases}
\textnormal{d}{z}=(-\Delta z+\Xi)\textnormal{d}t+\ov{z}\textnormal{d}{W}(t) &\text{in }Q_T, \\
z=0 &\text{on }\Sigma_T, \\
z(T)=z_T &\text{in }\mathcal{D},
\end{cases}
\end{equation}
satisfies
\begin{align}\notag
{\mathbb{E}}&\left(\int_{\mathcal{D}}\theta^2(0)|\nabla z(0)|^2\,\textnormal{d}x\right)+{\mathbb{E}}\left(\int_{\mathcal{D}}\lambda^2\mu^3e^{2\mu(6m+1)}\theta^2(0)|z(0)|^2\,\textnormal{d}x\right)\\ \notag
&+{\mathbb{E}}\left(\int_{Q_T}\lambda\mu^2\xi\theta^2|\nabla z|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\lambda^3\mu^4\xi^3\theta^2| z|^2\,\textnormal{d}x\textnormal{d}t\right) \\ \label{eq:car_backward_0}
&\leq C{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\lambda^3\mu^4\xi^3\theta^2|z|^2\,\textnormal{d}x\textnormal{d}t+\int_{Q_T}\theta^2|\Xi|^2\,\textnormal{d}x\textnormal{d}t+\int_{Q_T}\lambda^2\mu^2\xi^3\theta^2|\ov{z}|^2\,\textnormal{d}x\textnormal{d}t\right)
\end{align}
for all $\mu\geq \mu_0$ and $\lambda\geq \lambda_0$.
\end{theo}
Before giving the proof of \Cref{thm:carleman_backward_0}, we make the following remark.
\begin{rmk}
We make the following comments.
\begin{itemize}
\item The proof of this result is rather classical except for the definition of the weight function $\gamma(t)$, which does not blow up as $t\to 0^+$, thus preventing $\theta$ from vanishing at $t=0$. This change introduces some additional difficulties with respect to the classical proof of the Carleman estimate for backward stochastic parabolic equations shown in \cite[Theorem 6.1]{TZ09}, but they can be handled just as in the deterministic case (see \cite[Appendix A.1]{BEG16}).
\item Unlike the Carleman estimate in \cite[Eq. (6.2)]{TZ09}, the power of $\xi$ in the last term of \eqref{eq:car_backward_0} is $3$ rather than $2$. This is due to the definition of the weight $\gamma$, which slightly modifies the estimate of $\varphi_t$ on $[0,T/4]$ as compared, for instance, to \cite{TZ09}. This does not pose any problem for proving our main controllability result.
\item Just as in \cite[Remark 6.1]{TZ09}, we can estimate the last term in \eqref{eq:car_backward_0} by weighted integrals of $\Xi$ and $z$, more precisely
\begin{equation*}
{\mathbb{E}}\left(\int_{Q_T}\lambda^2\mu^2\xi^3\theta^2|\ov{z}|^2\,\textnormal{d}x\textnormal{d}t\right)\leq C{\mathbb{E}}\left(\int_{Q_T}\lambda^4\mu^4\xi^6\theta^2|z|^2\,\textnormal{d}x\textnormal{d}t+\int_{Q_T}\theta^2|\Xi|^2\,\textnormal{d}x\textnormal{d}t\right).
\end{equation*}
Nevertheless, the new term in $z$ cannot be absorbed by its counterpart on the left-hand side of \eqref{eq:car_backward_0}, so this does not improve our result.
\end{itemize}
\end{rmk}
\begin{proof}[Proof of \Cref{thm:carleman_backward_0}]
As we have mentioned before, the proof of this result is close to other proofs of Carleman estimates in the stochastic setting (see, e.g., \cite{TZ09,Yan18} or \cite[Ch. 3]{FLZ19}). Some of the estimates presented in such works are valid in our case but others need to be adapted. For readability, we have divided the proof into several steps and we will emphasize the main changes with respect to previous works.
\smallskip
\textbf{Step 1. A point-wise identity for a stochastic parabolic operator}. We set $\theta=e^{\ell}$ where we recall that $\ell=\lambda\varphi$ with $\varphi$ is defined in \eqref{eq:weights_0}. Then, we write $\psi=\theta z$ and for the operator $\textnormal{d} z+\Delta z \textnormal{d}t$ we have the following identity
\begin{equation}\label{eq:iden_operator}
\theta(\textnormal{d}{z}+\Delta z \textnormal{d}t)=I_1+I\textnormal{d}t,
\end{equation}
where
\begin{align}\label{defs:init}
\begin{cases}
I_1=\textnormal{d}\psi-2\sum_{i}\ell_i\psi_i\textnormal{d}t+\Psi\psi\textnormal{d}t, \\
I= A\psi+\sum_{i}\psi_{ii}, \\
A=-\ell_t+\sum_{i}(\ell_i^2-\ell_{ii})-\Psi,
\end{cases}
\end{align}
where $\Psi=\Psi(x,t)$ is a function to be chosen later. Hereinafter, to abridge the notation, we simply write $\rho_i=\partial_{x_i}\rho$ and $\rho_t=\partial_t\rho$ and we denote $\sum_i$ and $\sum_{i,j}$ to refer to $\sum_{i=1}^{N}$ and $\sum_{i=1}^{N}\sum_{j=1}^{N}$, respectively.
From It\^{o}'s formula, we have that
\begin{align*}
\textnormal{d}(A\psi^2)&=A\psi\textnormal{d}{\psi}+\psi\textnormal{d}(A\psi)+\textnormal{d}(A\psi)\textnormal{d}{\psi} \\
&= 2A\psi\textnormal{d}{\psi}+A_t\psi^2\textnormal{d}t+2\psi\textnormal{d}{A}\textnormal{d}{\psi}+A(\textnormal{d}{\psi})^2+(\textnormal{d}{A})(\textnormal{d}{\psi})^2
\end{align*}
and
\begin{align*}
\psi_{ii}\textnormal{d}{\psi}=(\psi_i\textnormal{d}{\psi})_i-\psi_i\textnormal{d}{\psi_i}=(\psi_i\textnormal{d}{\psi})_i-\frac{1}{2}\textnormal{d}(\psi_i^2)+\frac{1}{2}(\textnormal{d}\psi_i)^2.
\end{align*}
Therefore
\begin{align}\notag
I\textnormal{d}{\psi}&=\left(A\psi+\sum_{i}\psi_{ii}\right) \textnormal{d} \psi \\ \label{eq:iden_I_dpsi}
&= \sum_{i} (\psi_i \textnormal{d}\psi)_i-\frac{1}{2}\textnormal{d}\left(\sum_{i}\psi_i^2\right)+\frac{1}{2}\sum_{i}(\textnormal{d}\psi_i)^2+\frac{1}{2}\textnormal{d}(A\psi^2)-\frac{1}{2}A_t\psi^2\textnormal{d}t-\frac{1}{2}A(\textnormal{d}{\psi})^2.
\end{align}
On the other hand, a direct computation gives
\begin{align} \notag
-2\sum_i\ell_i\psi_i I=&-\sum_i(A\ell_i\psi^2)_i+\sum_i(A\ell_i)_i\psi^2 \\ \label{eq:iden_li_psii_I}
&-\sum_i\Big[\sum_j(2\ell_j\psi_i\psi_j-\ell_i\psi_j\psi_j)\Big]_i+\sum_{i,j}\sum_{k,h}\left[2\delta_{ih}\delta_{kj}\ell_{kh}-\delta_{ij}\delta_{kh}\ell_{kh}\right]\psi_i\psi_j,
\end{align}
where $\delta_{ij}=1$ if $i=j$ and 0 otherwise. Multiplying both sides of \eqref{eq:iden_operator} by $I$ and taking into account identities \eqref{eq:iden_I_dpsi}--\eqref{eq:iden_li_psii_I}, we get the following point-wise identity
\begin{align}\notag
\theta I(\textnormal{d} z+\Delta z\textnormal{d}t)&=\left(I^2+\sum_{i,j}c^{ij}\psi_i\psi_j+F \psi^2+I\Psi\psi+\nabla\cdot V \right)\textnormal{d}t+\sum_{i}(\psi_i \textnormal{d}\psi)_i \\ \notag
&\quad +\frac{1}{2}\sum_{i}(\textnormal{d}\psi_i)^2-\frac{1}{2}A(\textnormal{d}\psi)^2 \\ \label{eq:pointwise_iden}
&\quad -\frac{1}{2}\textnormal{d}\left(\sum_{i}\psi_i^2-A\psi^2\right),
\end{align}
where
\begin{align*}
\begin{cases}
V=\left[V^1,V^2,\ldots,V^N\right], \\
V^i=-\sum_{j}\left(2\ell_j\psi_i\psi_j-\ell_i\psi_j\psi_j\right)-A\ell_i\psi^2, \quad i=1,\ldots,N, \\
c^{ij}=\sum_{k,h}\left[2\delta_{ih}\delta_{kj}\ell_{kh}-\delta_{ij}\delta_{kh}\ell_{kh}\right], \\
F=-\frac{1}{2}A_t\psi^2+\sum_{i}(A\ell_i)_i\psi^2.
\end{cases}
\end{align*}
\smallskip
\textbf{Step 2. Some old and new estimates}. The main goal of this step is to start building our Carleman estimate taking as a basis the point-wise identity \eqref{eq:pointwise_iden}. Integrating with respect to time in both sides of \eqref{eq:pointwise_iden}, we get
\begin{align}\label{eq:point_iden_lhs}
\int_0^{T}&\theta I(\textnormal{d} z+\Delta z\textnormal{d}t) \\ \label{eq:0_T_term}
&= -\frac{1}{2}\left.\left(\sum_{i}\psi_i^2-A\psi^2\right)\right|_{0}^{T} \\ \label{eq:F_term}
&\quad + \int_{0}^{T}\left(F \psi^2+I\Psi\psi\right)\textnormal{d}t \\ \label{eq:comb_terms}
&\quad +\int_{0}^{T}I^2\textnormal{d}t+\int_0^{T}\sum_{i,j}c^{ij}\psi_i\psi_j\textnormal{d}t+\int_0^{T}\nabla\cdot V \textnormal{d}t+\int_{0}^{T}\sum_{i}(\psi_i \textnormal{d}\psi)_i \\ \label{eq:point_last_rhs}
&\quad +\int_{0}^{T}\frac{1}{2}\sum_{i}(\textnormal{d}\psi_i)^2-\int_{0}^{T}\frac{1}{2}A(\textnormal{d}\psi)^2,
\end{align}
for a.e. $x\in \R^N$ and a.s. $\omega\in \Omega$. We will pay special attention to terms \eqref{eq:0_T_term} and \eqref{eq:F_term} which yield positive terms that are not present in other Carleman estimates using the classical weight vanishing both at $t=0$ and $t=T$.
At this point, we shall choose the function $\Psi=\Psi(x,t)$ as
\begin{equation}\label{eq:definicion_Psi}
\Psi:=-2\sum_{i}\ell_{ii}
\end{equation}
and, for convenience, we give some identities that will be useful in the remainder of the proof. From the definition of $\ell$ in \eqref{def:theta_ell}, we have
\begin{equation}\label{eq:deriv_weights}
\begin{split}
\ell_i&=\lambda\gamma\mu\beta_ie^{\mu(\beta+6m)}, \\
\ell_{ii}&=\lambda\gamma\mu^2\beta_i^2e^{\mu(\beta+6m)}+\lambda\gamma\mu\beta_{ii}e^{\mu(\beta+6m)}.
\end{split}
\end{equation}
For shortness, we have dropped the explicit dependence of $x$ and $t$ on the expressions above.
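Let us also note that, with the choice \eqref{eq:definicion_Psi}, the function $A$ introduced in \eqref{defs:init} reduces to
\begin{equation*}
A=-\ell_t+\sum_{i}\left(\ell_i^2-\ell_{ii}\right)-\Psi=-\ell_t+\sum_{i}\left(\ell_i^2+\ell_{ii}\right),
\end{equation*}
which is the expression implicitly used when expanding the term $F$ below.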
\textit{-Positivity of the term \eqref{eq:0_T_term}}. From the definitions of $\psi$ and $\ell$, we readily see that $\lim_{t\to T^{-}}\ell(t,\cdot)=-\infty$ and thus the term at $t=T$ vanishes. Therefore, \eqref{eq:0_T_term} simplifies to
\begin{equation}\label{eq:term_at_0}
-\frac{1}{2}\left.\left(\sum_{i}\psi_i^2-A\psi^2\right)\right|_{0}^{T}=\frac{1}{2}\left(\sum_{i}\psi_i^2(0)+A(0)\psi^2(0)\right).
\end{equation}
It is clear that the first term in the right-hand side of \eqref{eq:term_at_0} is positive. For the second one, we will generate a positive term by using the explicit expression of the function $\gamma$. Using definition \eqref{eq:def_theta}, we obtain that
\begin{equation*}
\gamma^\prime(t)=-\frac{4\sigma}{T}\left(1-\frac{4t}{T}\right)^{\sigma-1}, \quad \forall t\in[0,T/4],
\end{equation*}
whence, from \eqref{def:sigma} and the above expression, we get
\begin{align}\notag
\ell_t(0,\cdot)&=-\frac{4\lambda^2\mu^2e^{\mu(6m-4)}}{T}\left(e^{\mu(\beta(\cdot)+6m)}-\mu e^{6\mu(m+1)}\right)
\\ \label{eq:est_lt_0}
&\geq c\lambda^2\mu^3 e^{\mu(12m+2)}
\end{align}
for all $\mu\geq 1$ and some constant $c>0$ uniform with respect to $T$; indeed, since $0<\beta\leq 1$, we have $\mu e^{6\mu(m+1)}-e^{\mu(\beta+6m)}\geq (1-e^{-5\mu})\mu e^{6\mu(m+1)}$, and $0<T<1$. On the other hand, from the derivatives \eqref{eq:deriv_weights} and using the facts that $\gamma(0)=2$ and $\beta\in C^4(\ov{\mathcal{D}})$, we get
\begin{equation}\label{eq:est_A_0}
|\ell_i^2(0,\cdot)+\ell_{ii}(0,\cdot)|\leq C\lambda^2\mu^2e^{2\mu(6m+1)}.
\end{equation}
In this way, by using \eqref{eq:term_at_0}, the definition of $A$ in \eqref{defs:init} and estimates \eqref{eq:est_lt_0}--\eqref{eq:est_A_0}, there exists $\mu_1>0$, such that for all $\mu\geq \mu_1\geq 1$, we get
\begin{align}\notag
-\frac{1}{2}\left.\left(\sum_{i}\psi_i^2-A\psi^2\right)\right|_{0}^{T}&\geq \frac{1}{2}|\nabla \psi(0)|^2+c\lambda^2\mu^3e^{2\mu(6m+1)}\psi^2(0)-C\lambda^2\mu^2e^{2\mu(6m+1)}\psi^2(0) \\ \label{eq:est_final_t0}
&\geq c_1|\nabla\psi(0)|^2+c_1\lambda^2\mu^3e^{2\mu(6m+1)}\psi^2(0),
\end{align}
for some constant $c_1> 0$ only depending on $\mathcal{D}$ and $\mathcal{D}^\prime$.
\textit{-Estimate of the term \eqref{eq:F_term}}. This term is the most cumbersome one since the combination of some terms of $F\psi^2$ and $I\Psi\psi$ will yield a positive term that does not appear in the classical Carleman estimate with weight vanishing at $t=0$ and $t=T$.
Recalling the definition of $A$, we see that the first term in \eqref{eq:F_term} can be written as
\begin{equation}\label{eq:F_extended}
\int_{0}^{T}F\psi^2\textnormal{d}t=\int_{0}^{T}\left(F_1+F_2+F_3\right)\psi^2\textnormal{d}t,
\end{equation}
where
\begin{align}\label{eq:defs_F}
F_1=\frac{1}{2}\ell_{tt}, \quad
F_2=-\frac{1}{2}\sum_{i}(\ell_i^2+\ell_{ii})_t, \quad
F_3=\sum_{i}(A_i\ell_i+A\ell_{ii}).
\end{align}
For the first term of \eqref{eq:F_extended} we argue as follows. For $t\in(0,T/4)$, using the definition of $\gamma(t)$, it is not difficult to see that $|\gamma_{tt}|\leq C\lambda^2\mu^4e^{2\mu(6m-4)}$
thus
\begin{equation}\label{eq:est_ltt_0_T4}
|\ell_{tt}|\leq C\lambda^3\mu^5e^{2\mu(6m-4)}e^{6\mu(m+1)}\leq C\lambda^3\mu^2\xi^3
\end{equation}
where we recall that $\xi=\xi(t,x)$ is defined in \eqref{eq:weights_0}. Here, we also have used that $\mu^3e^{-2\mu}<1/2$ for all $\mu>1$.
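In more detail, the exponent bookkeeping behind the last inequality is the following: since $2\mu(6m-4)+6\mu(m+1)=\mu(18m-2)$, $\beta\geq 0$ and $\gamma\geq 1$,
\begin{equation*}
\lambda^3\mu^5e^{2\mu(6m-4)}e^{6\mu(m+1)}=\lambda^3\mu^5e^{-2\mu}e^{18m\mu}\leq \lambda^3\mu^5e^{-2\mu}\,\gamma^3e^{3\mu(\beta+6m)}\leq \frac{1}{2}\lambda^3\mu^2\xi^3.
\end{equation*}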
For $t\in(T/2,T)$, using once again the definition of $\gamma$ we have $|\gamma_{tt}|\leq C\gamma^3$.
Noting that $\varphi_{tt}=\frac{\gamma_{tt}}{\gamma}\varphi$ and using the estimate $|\varphi\gamma|\leq \mu\xi^2$ we get
\begin{equation}\label{eq:est_ltt_T2_T}
|\ell_{tt}|=\left|\lambda\frac{\gamma_{tt}}{\gamma}\varphi\right| \leq C\lambda \mu\xi^3.
\end{equation}
Since obviously $\ell_{tt}$ vanishes for $t\in(T/4,T/2)$, we can put together estimates \eqref{eq:est_ltt_0_T4} and \eqref{eq:est_ltt_T2_T} to deduce that
\begin{equation}\label{eq:est_F1}
\int_{0}^{T}F_1\psi^2\textnormal{d}t\geq -C\lambda^3\mu^2\int_{0}^{T}\xi^3\psi^2\textnormal{d}t.
\end{equation}
We move now to the second and third terms of \eqref{eq:F_extended}. To abridge the notation, in what follows, we set
\begin{equation*}
\alpha(x):=e^{\mu(\beta(x)+6m)}-\mu e^{6\mu(m+1)}.
\end{equation*}
Notice that $\alpha(x)<0$ for all $x\in\mathcal{D}$, since $\beta\leq 1$ and $\mu\geq 1$.
From \eqref{eq:deriv_weights}, a direct computation yields
\begin{align}\notag
(\ell_i^2+\ell_{ii})_t&=2\lambda^2\mu^2\beta_i^2e^{2\mu(\beta+6m)}\gamma\gamma_t+\lambda\mu^2\beta_i^2e^{\mu(\beta+6m)}\gamma_t+\lambda\mu\beta_{ii}e^{\mu(\beta+6m)}\gamma_t, \\ \label{eq:iden_li_lii_t}
&=: M_i
\end{align}
On the other hand, after a long but straightforward computation, we get from \eqref{eq:deriv_weights} that
\begin{equation}\label{eq:def_Ali_Alii}
A_i\ell_i+A\ell_{ii}=P^{(1)}_i+P^{(2)}_i
\end{equation}
where
\begin{align}
P_i^{(1)}&:=-\lambda^2\alpha\gamma_t\gamma\mu^2\beta_i^2e^{\mu(\beta+6m)}-\lambda^2\alpha\gamma_t\gamma\mu\beta_{ii}e^{\mu(\beta+6m)}-\lambda^2\gamma_t\gamma\mu^2\beta_i^2e^{2\mu(\beta+6m)}, \label{eq:def_Pi_1} \\ \notag
P_i^{(2)}&:=\sum_{k}\left[3\lambda^3\mu^4\xi^3\beta_k^2\beta_i^2+2\lambda^2\mu^4\xi^2\beta_k^2\beta_i^2+\lambda^2\mu^3\xi^2\beta_{kk}\beta_i^2+\lambda^3\mu^3\xi^3\beta_k\beta_i\beta_{ki}\right. \\ \label{eq:def_Pi_2}
&\left.\qquad\qquad + 2\lambda^2\mu^3\xi^2\beta_k\beta_{ki}\beta_i+\lambda^3\mu^3\xi^3\beta_k^2\beta_{ii}+\lambda^2\mu^3\xi^2\beta_k^2\beta_{ii}+\lambda^2\mu^2\xi^2\beta_{kk}\beta_{ii}\right].
\end{align}
In the term $P_i^{(2)}$, we have further simplified the notation by recalling that $\xi=e^{\mu(\beta+6m)}\gamma$. Also observe that we have deliberately put together all the terms containing $\gamma_t$ in the above expression.
We will use now the term $I\Psi\psi$ in \eqref{eq:F_term} to collect other terms containing $\gamma_t$. Indeed, from the definition of $\Psi$ (see eq. \eqref{eq:definicion_Psi}), we see that this term can be rewritten as
%
\begin{align}\notag
\int_{0}^{T}I\Psi\psi\textnormal{d}t&=\int_{0}^{T}A\Psi\psi^2\textnormal{d}t-2\int_{0}^T\left(\sum_{i,k}\psi_{ii}\ell_{kk}\right)\psi \textnormal{d}t \\ \notag
&= 2 \int_{0}^{T}\sum_{i}P_i^{(3)}\psi^2\textnormal{d}t -2\int_{0}^{T}\left(\sum_{i,k}(\ell_i^2+\ell_{ii})\ell_{kk}\right)\psi^2\textnormal{d}t \\ \label{eq:iden_I_Psi_psi}
&\quad -2\int_{0}^{T}\left(\sum_{i,k}\psi_{ii}\ell_{kk}\right)\psi\textnormal{d}t,
\end{align}
%
where
%
\begin{equation*}
P_i^{(3)}:=\lambda^2\alpha\gamma\gamma_t\mu^2\beta_i^2e^{\mu(\beta+6m)}+\lambda^2\alpha\gamma\gamma_t\mu\beta_{ii}e^{\mu(\beta+6m)}.
\end{equation*}
%
Hence, from \eqref{eq:defs_F}, \eqref{eq:iden_li_lii_t}, \eqref{eq:def_Ali_Alii}, and \eqref{eq:iden_I_Psi_psi}, we get
\begin{align}\notag
\int_{0}^{T}&(F_2+F_3)\psi^2\textnormal{d}t+\int_{0}^{T}I\Psi\psi\textnormal{d}t \\ \notag
&= \underbrace{\int_0^{T}\left(\sum_{i}\left(-\frac{1}{2}M_i-P_i^{(1)}+P_i^{(3)}\right)\right) \psi^2\textnormal{d}t}_{=: Q_1}+\underbrace{\int_{0}^{T}\sum_i\left(P_i^{(2)}-2\sum_k(\ell_i^2+\ell_{ii})\ell_{kk}\right)\psi^2\textnormal{d}t}_{=:Q_2} \\ \label{eq:def_Q_term}
&\quad-\underbrace{2\int_{0}^{T}\left(\sum_{i,k}\psi_{ii}\ell_{kk}\right)\psi\textnormal{d}t}_{=: Q_3}.
\end{align}
We shall focus on the term $Q_1$. From the definition of $M_i$, \eqref{eq:def_Pi_1} and using that $\xi=e^{\mu(\beta+6m)}\gamma$ and $\varphi=\alpha\gamma$, we see that
\begin{align}\notag
-\frac{M_i}{2}-P_i^{(1)}+P_i^{(3)}=&-\frac{\gamma_t}{\gamma}\left(2\lambda^2\mu^2\xi^2\beta_i^2+\frac{1}{2}\lambda\mu^2\xi\beta_i^2+\frac{1}{2}\lambda\mu\xi\beta_{ii}\right) \\ \label{eq:terms_gamma_t}
&-\frac{\gamma_t}{\gamma}\left(\lambda^2(-\varphi)\xi\mu^2\beta_i^2+\lambda^2(-\varphi)\xi\mu\beta_{ii}\right).
\end{align}
From the definition of $\gamma$, it is clear that the above expression vanishes on $(T/4,T/2)$. On $(T/2,T)$, we use the fact that there exists $C>0$ such that $|\gamma_t|\leq C\gamma^2$.
Hence, for all $(t,x)\in (T/2,T)\times \mathcal{D}$, there exists a constant $C>0$ only depending on $\mathcal{D}$ and $\mathcal{D}^\prime$ such that
\begin{equation}\label{eq:est_Q1_T2_T}
\abs{\sum_{i}\left(-\frac{M_i}{2}-P_i^{(1)}+P_i^{(3)}\right)}\leq C\lambda^2\mu^2\left(\xi^2+\xi\varphi\right)\leq C\lambda^2\mu^3\xi^3,
\end{equation}
where we have used that $|\varphi\gamma|\leq \mu\xi^2$.
On $(0,T/4)$, we are going to use the fact that $\gamma_t\leq 0$, $\varphi<0$, and $\gamma\in[1,2]$ to deduce that $Q_1$ has the right sign outside $\mathcal{D}^\prime$. Indeed, from \eqref{eq:prop_weight_beta}, we can find $\mu_2=\mu_2(\alpha,\|\Delta\beta\|_{\infty})$ such that for all $\mu\geq \mu_2\geq \mu_1\geq 1$
\begin{align}\notag
\sum_{i}&\left[2\lambda^2\mu^2\xi^2\beta_i^2+\frac{1}{2}\lambda\mu^2\xi\beta_i^2+\frac{1}{2}\lambda\mu\xi\beta_{ii}\right]\\ \label{eq:est_pos_D_D0}
&+\sum_{i}\left[\lambda^2(-\varphi)\xi\mu^2\beta_i^2+\lambda^2(-\varphi)\xi\mu\beta_{ii}\right]\geq c\lambda^2\mu^2|\varphi|\xi, \quad x\in \mathcal{D}\setminus\ov{\mathcal{D}^\prime}.
\end{align}
In this way, in a subsequent step, by \eqref{eq:terms_gamma_t}, \eqref{eq:est_pos_D_D0}, we will obtain from $Q_1$ a positive term on $(0,T/4)\times \mathcal{D}$ and a localized term on $\mathcal{D}^\prime$ in the right-hand side of the inequality.
The conclusion of this sub-step is quite classical. For the term $Q_2$ in \eqref{eq:def_Q_term}, we can readily see that the leading term in \eqref{eq:def_Pi_2} is positive. Hence, from \eqref{eq:deriv_weights} and straightforward computation, we have
\begin{equation}\label{eq:est_Q2}
Q_2\geq \int_{0}^{T}\lambda^3\mu^4\xi^3|\nabla \beta|^4\psi^2\textnormal{d}t-C\int_{0}^{T}\left(\lambda^2\mu^4\xi^2+\lambda^2\mu^3\xi^3+\lambda^3\mu^3\xi^3\right)\psi^2\textnormal{d}t
\end{equation}
for some constant $C=C(\|\nabla\beta\|_\infty,\|D^2 \beta\|_\infty)>0$. As in the previous case, using \eqref{eq:prop_weight_beta} will yield a positive term and a localized term on the right-hand side. The terms with lower powers of $\mu$ and $\lambda$ will be absorbed later.
Finally, for analyzing $Q_3$, we will use that $-\psi_{ii}\ell_{kk}\psi=-(\psi_i\ell_{kk}\psi)_i+\psi_{i}\ell_{kki}\psi+\psi_i^2\ell_{kk}$. Thus,
\begin{equation}\label{eq:Q3_iden}
-Q_3=2\int_{0}^{T}\sum_{i,k}\psi_i^2\ell_{kk}\textnormal{d}t+2\int_{0}^{T}\sum_{i,k}\psi_i\psi\ell_{kki}\textnormal{d}t-2\int_{0}^{T}\sum_{i,k}(\psi_i\ell_{kk}\psi)_i\textnormal{d}t.
\end{equation}
We will leave this term as it is. In the next sub-step we will use it for producing a positive term depending on $|\nabla\psi|$.
\textit{-Estimates on the gradient of $\psi$}. The last positive term we shall obtain in this step, comes from the second term in \eqref{eq:comb_terms} and the first term in \eqref{eq:Q3_iden}. Using \eqref{eq:deriv_weights} it can be readily seen that
\begin{equation}\label{eq:est_grad_final}
\int_{0}^{T}\sum_{i,j}\left(2\psi_i^2\ell_{jj}+c^{ij}\psi_i\psi_j\right)\textnormal{d}t \geq \int_{0}^{T} \lambda\mu^2\xi|\nabla\beta|^2|\nabla\psi|^2\,\textnormal{d}t - C\int_{0}^{T}\lambda\mu\xi|\nabla \psi|^2\,\textnormal{d}t,
\end{equation}
for some $C>0$ only depending on $\mathcal{D}$ and $\mathcal{D}^\prime$. From here, using the properties of $\beta$ we will obtain a positive term and a localized term in $\mathcal{D}^\prime$.
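For the reader's convenience, the computation behind \eqref{eq:est_grad_final} goes as follows: recalling that $c^{ij}=2\ell_{ij}-\delta_{ij}\sum_k\ell_{kk}$ and using \eqref{eq:deriv_weights},
\begin{align*}
\sum_{i,j}\left(2\psi_i^2\ell_{jj}+c^{ij}\psi_i\psi_j\right)&=2\sum_{i,j}\ell_{ij}\psi_i\psi_j+|\nabla\psi|^2\sum_{k}\ell_{kk} \\
&\geq 2\lambda\mu^2\xi(\nabla\beta\cdot\nabla\psi)^2+\lambda\mu^2\xi|\nabla\beta|^2|\nabla\psi|^2-C\lambda\mu\xi|\nabla\psi|^2 \\
&\geq \lambda\mu^2\xi|\nabla\beta|^2|\nabla\psi|^2-C\lambda\mu\xi|\nabla\psi|^2,
\end{align*}
with $C=C(\|D^2\beta\|_\infty)>0$.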
From the second term in \eqref{eq:Q3_iden} and the fact that
\begin{equation*}
\ell_{kki}=2\lambda\xi\mu^2\beta_k\beta_{ki}+\lambda\xi\mu^3\beta_{k}^2\beta_i+\lambda\xi\mu\beta_{kki}+\lambda\xi\mu^2\beta_{kk}\beta_i,
\end{equation*}
we can use Cauchy-Schwarz and Young inequalities to deduce
\begin{equation}\label{eq:est_cross_grad_sol}
\int_{0}^{T}\sum_{i,k}\psi_i\psi\ell_{kki}\textnormal{d}t \geq - C\int_{0}^{T}\mu^2|\nabla\psi|^2\textnormal{d}t-C\int_0^{T}\lambda^2\mu^4\xi^2|\psi|^2\textnormal{d}t.
\end{equation}
Notice that the term containing $\nabla\psi$ does not have any power for $\lambda$ so it can be absorbed later.
The last term in \eqref{eq:Q3_iden} is left as it is, since by the divergence theorem we will see later that this term is actually 0.
\textbf{Step 3. Towards the Carleman estimate.} We begin by integrating \eqref{eq:point_iden_lhs}--\eqref{eq:point_last_rhs} in $\mathcal{D}$ and take expectation in both sides of the identity. Taking into account the estimates obtained in the previous step, i.e., \eqref{eq:est_final_t0}, \eqref{eq:est_F1}, \eqref{eq:def_Q_term}, and \eqref{eq:est_Q1_T2_T}--\eqref{eq:est_cross_grad_sol}, we get
\begin{align}\notag
c_1&{\mathbb{E}}\left(\int_{\mathcal{D}}|\nabla\psi(0)|^2\,\textnormal{d}x+\int_{\mathcal{D}}\lambda^2\mu^3e^{2\mu(6m+1)}|\psi(0)|^2\,\textnormal{d}x\right)+c{\mathbb{E}}\left(\int_0^{T/4}\!\!\!\!\int_{\mathcal{D}\setminus{\mathcal{D}^\prime}}\lambda^2\mu^2\xi |\varphi||\gamma_t| |\psi|^2\,\textnormal{d}x\textnormal{d}t\right) \\ \notag
&\quad +{\mathbb{E}}\left(\int_{Q_T}\lambda^3\mu^4\xi^3|\nabla\beta|^4|\psi|^2\,\textnormal{d}x\textnormal{d}t+\int_{Q_T}\lambda\mu^2\xi|\nabla\beta|^2|\nabla\psi|^2\,\textnormal{d}x\textnormal{d}t\right)\\ \notag
&\quad +{\mathbb{E}}\left(\int_{Q_T}I^2\,\textnormal{d}x\textnormal{d}t\right) +\frac{1}{2}{\mathbb{E}}\left(\int_{Q_T}\sum_{i}(\textnormal{d}\psi_i)^2\,\textnormal{d}x\right) \\ \label{eq:first_car_estimate}
&\leq {\mathbb{E}}\left(\int_{Q_T}\theta I(\textnormal{d}{z}+\Delta z \textnormal{d}t)\,\textnormal{d}x\right)+\frac{1}{2}{\mathbb{E}}\left(\int_{Q_T} A(\textnormal{d}{\psi})^2\,\textnormal{d}x\right) + \mathcal{BT}+\mathcal{R},
\end{align}
where
\begin{align}\label{eq:boundary_terms}
\mathcal{BT}&:=2{\mathbb{E}}\left(\int_{Q_T}\sum_{i,k}(\psi_i\ell_{kk}\psi)_i\,\textnormal{d}x\textnormal{d}t\right)-{\mathbb{E}}\left(\int_{Q_T}\sum_{i}(\psi_i\textnormal{d}\psi)_i\right)-{\mathbb{E}}\left(\int_{Q_T}\nabla\cdot V\,\textnormal{d}x\textnormal{d}t \right), \\ \label{eq:residual}
\mathcal R&:=C{\mathbb{E}}\left(\int_{Q_T}\left[\lambda^2\mu^3\xi^3+\lambda^2\mu^4\xi^2+\lambda^3\mu^3\xi^3\right]|\psi|^2\,\textnormal{d}x\textnormal{d}t+\int_{Q_T}\left[\mu^2+\lambda\mu\xi\right]|\nabla\psi|^2\,\textnormal{d}x\textnormal{d}t\right).
\end{align}
We remark that the positive constants $c_1$ and $C$ in \eqref{eq:first_car_estimate}--\eqref{eq:residual} only depend on $\mathcal{D}$ and $\mathcal{D}^\prime$, while $c>0$ depends only on $\mathcal{D},\mathcal{D}^\prime$ and $\alpha$ (see \eqref{eq:prop_weight_beta}).
We proceed to estimate the rest of the terms. We begin with those gathered on $\mathcal{BT}$, defined in \eqref{eq:boundary_terms}. It is clear that $z=0$ on $\Sigma_T$ implies $\psi=0$ on $\Sigma_T$. Moreover, $\psi_i=\frac{\partial \psi}{\partial \nu}\nu^{i}$, with $\nu=(\nu_1,\ldots,\nu_N)$ being the unit outward normal vector of $\mathcal{D}$ at $x\in\partial \mathcal{D}$. Also, by the construction of the weight $\beta$, we have
\begin{equation*}
\ell_i=\lambda\mu\xi\beta_i=\lambda\mu\xi\frac{\partial \beta}{\partial \nu}\nu^{i} \quad\text{and}\quad \frac{\partial \beta}{\partial \nu}<0, \quad\text{on } \Sigma_T.
\end{equation*}
Hence, it is not difficult to see that using divergence theorem we have
\begin{gather*}
2{\mathbb{E}}\left(\int_{Q_T}\sum_{i,k}(\psi_i\ell_{kk}\psi)_i\,\textnormal{d}x\textnormal{d}t\right)=2{\mathbb{E}}\left(\int_{\Sigma_T}\sum_{i,k}\psi_i\ell_{kk}\psi \nu^{i}\,\textnormal{d}x\textnormal{d}t\right)=0, \\
-{\mathbb{E}}\left(\int_{Q_T}\sum_i(\psi_i\textnormal{d}{\psi})_i\right)=-{\mathbb{E}}\left(\int_{\Sigma_T}\sum_{i}\psi_i\nu_i\textnormal{d}\psi\,\textnormal{d}x\right)=0,
\end{gather*}
and
\begin{align*}
-{\mathbb{E}}\left(\int_{Q_T}\nabla\cdot V\,\textnormal{d}x\textnormal{d}t\right)&={\mathbb{E}}\left(\int_{\Sigma_T}\sum_{i,j}\left[(2\ell_i\psi_i\psi_j-\ell_i\psi_j\psi_j)+A\ell_i\psi^2\right]\nu^j\,\textnormal{d}x\textnormal{d}t\right) \\
&= {\mathbb{E}}\left(\int_{\Sigma_T}\lambda\mu\xi\frac{\partial \beta}{\partial\nu}\left(\frac{\partial z}{\partial \nu}\right)^2\sum_{i,j}\left(\nu^{i}\nu^j\right)^2\,\textnormal{d}x\textnormal{d}t\right)\leq 0.
\end{align*}
Thus, we get
\begin{equation}\label{eq:est_boundary}
\mathcal{BT}\leq 0.
\end{equation}
For the following three terms, we will use the change of variables $\psi=\theta z$ and the fact that $z$ solves system \eqref{eq:system_z}. First, we see that
\begin{equation}\label{eq:est_positive_ov_zi}
{\mathbb{E}}\left(\int_{Q_T}\sum_i(\textnormal{d}\psi_i)^2\,\textnormal{d}x\right)={\mathbb{E}}\left(\int_{Q_T}\theta^2\sum_{i}\left(\ov{z}_i+\ell_i\ov{z}\right)^2\,\textnormal{d}x\textnormal{d}t\right)\geq 0.
\end{equation}
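Indeed, since $\theta$ and $\ell$ are deterministic and the martingale part of $\textnormal{d}z$ (resp.\ $\textnormal{d}z_i$) is $\ov{z}\,\textnormal{d}W(t)$ (resp.\ $\ov{z}_i\,\textnormal{d}W(t)$), we have, schematically,
\begin{equation*}
\textnormal{d}\psi_i=\textnormal{d}\big(\theta(z_i+\ell_i z)\big)=(\cdots)\,\textnormal{d}t+\theta\left(\ov{z}_i+\ell_i\ov{z}\right)\textnormal{d}W(t), \qquad\text{so that}\qquad (\textnormal{d}\psi_i)^2=\theta^2\left(\ov{z}_i+\ell_i\ov{z}\right)^2\textnormal{d}t.
\end{equation*}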
In the same spirit, using the equation verified by $z$ and Cauchy-Schwarz and Young inequalities, we get
\begin{equation}\label{eq:est_rhs_F}
{\mathbb{E}}\left(\int_{Q_T}\theta I(\textnormal{d}{z}+\Delta z \textnormal{d}t)\,\textnormal{d}x\right)\leq \frac{1}{2}{\mathbb{E}}\left(\int_{Q_T}I^2\,\textnormal{d}x\textnormal{d}t\right)+\frac{1}{2}{\mathbb{E}}\left(\int_{Q_T}\theta^2|\Xi|^2\,\textnormal{d}x\textnormal{d}t\right).
\end{equation}
Lastly, from \eqref{eq:deriv_weights} and the fact that $|\varphi_t|\leq C\lambda\mu\xi^3$ for $(t,x)\in(0,T)\times\mathcal{D}$, a direct computation shows that
\begin{equation}\label{eq:est_A_ov_z}
{\mathbb{E}}\left(\int_{Q_T}A(\textnormal{d}\psi)^2\,\textnormal{d}x\right)={\mathbb{E}}\left(\int_{Q_T}\theta^2A|\ov{z}|^2\,\textnormal{d}x\textnormal{d}t\right)\leq C\left(\int_{Q_T}\theta^2\lambda^2\mu^2\xi^3|\ov{z}|^2\,\textnormal{d}x\textnormal{d}t\right).
\end{equation}
Using that $\inf_{x\in\mathcal{D}\setminus\ov{\mathcal{D}^\prime}}|\nabla\beta|\geq \alpha>0$, we can combine estimate \eqref{eq:first_car_estimate} with \eqref{eq:est_boundary}--\eqref{eq:est_A_ov_z} to deduce
\begin{align*}\notag
&{\mathbb{E}}\left(\int_{\mathcal{D}}|\nabla\psi(0)|^2\,\textnormal{d}x+\int_{\mathcal{D}}\lambda^2\mu^3e^{2\mu(6m+1)}|\psi(0)|^2\,\textnormal{d}x\right)+{\mathbb{E}}\left(\int_0^{T/4}\!\!\!\!\int_{\mathcal{D}}\lambda^2\mu^2\xi |\varphi||\gamma_t| |\psi|^2\,\textnormal{d}x\textnormal{d}t\right) \\ \notag
&\quad +{\mathbb{E}}\left(\int_{Q_T}\lambda^3\mu^4\xi^3|\psi|^2\,\textnormal{d}x\textnormal{d}t+\int_{Q_T}\lambda\mu^2\xi|\nabla\psi|^2\,\textnormal{d}x\textnormal{d}t\right) +\frac{1}{2}{\mathbb{E}}\left(\int_{Q_T}I^2\,\textnormal{d}x\textnormal{d}t\right) \\ \notag
&\leq C{\mathbb{E}}\left(\int_0^{T/4}\!\!\!\!\int_{\mathcal{D}^\prime}\lambda^2\mu^2\xi |\varphi||\gamma_t| |\psi|^2\,\textnormal{d}x\textnormal{d}t+\int_{0}^{T}\!\!\!\int_{\mathcal{D}^\prime}\lambda^3\mu^4\xi^3|\psi|^2\,\textnormal{d}x\textnormal{d}t+\int_{0}^{T}\!\!\!\int_{\mathcal{D}^\prime}\lambda\mu^2\xi|\nabla\psi|^2\,\textnormal{d}x\textnormal{d}t\right) \\
&\quad + C\mathcal R + C {\mathbb{E}}\left(\int_{Q_T}\theta^2|\Xi|^2\,\textnormal{d}x\textnormal{d}t+\int_{Q_T}\theta^2\lambda^2\mu^2\xi^3|\ov{z}|^2\,\textnormal{d}x\textnormal{d}t\right),
\end{align*}
for some $C>0$ only depending on $\mathcal{D}$, $\mathcal{D}^\prime$ and $\alpha$. We observe that, unlike the traditional Carleman estimate with weight vanishing at $t=0$ and $t=T$, we have three local integrals, one of those being only for $t\in(0,T/4)$. We will handle this in the following step.
Also notice that all of the terms in $\mathcal R$ carry lower powers of $\lambda$ and $\mu$; thus, we immediately see that there exist some $\mu_3\geq \mu_2$ and $\lambda_1\geq C$ such that, for all $\mu\geq \mu_3$ and $\lambda\geq \lambda_1$,
\begin{align}\notag
&{\mathbb{E}}\left(\int_{\mathcal{D}}|\nabla\psi(0)|^2\,\textnormal{d}x+\int_{\mathcal{D}}\lambda^2\mu^3e^{2\mu(6m+1)}|\psi(0)|^2\,\textnormal{d}x\right)+{\mathbb{E}}\left(\int_0^{T/4}\!\!\!\!\int_{\mathcal{D}}\lambda^2\mu^2\xi |\varphi||\gamma_t| |\psi|^2\,\textnormal{d}x\textnormal{d}t\right) \\ \notag
&\quad +{\mathbb{E}}\left(\int_{Q_T}\lambda^3\mu^4\xi^3|\psi|^2\,\textnormal{d}x\textnormal{d}t+\int_{Q_T}\lambda\mu^2\xi|\nabla\psi|^2\,\textnormal{d}x\textnormal{d}t\right) \\ \notag
&\leq C{\mathbb{E}}\left(\int_0^{T/4}\!\!\!\!\int_{\mathcal{D}^\prime}\lambda^2\mu^2\xi |\varphi||\gamma_t| |\psi|^2\,\textnormal{d}x\textnormal{d}t+\int_{0}^{T}\!\!\!\int_{\mathcal{D}^\prime}\lambda^3\mu^4\xi^3|\psi|^2\,\textnormal{d}x\textnormal{d}t+\int_{0}^{T}\!\!\!\int_{\mathcal{D}^\prime}\lambda\mu^2\xi|\nabla\psi|^2\,\textnormal{d}x\textnormal{d}t\right) \\
&\quad + C {\mathbb{E}}\left(\int_{Q_T}\theta^2|\Xi|^2\,\textnormal{d}x\textnormal{d}t+\int_{Q_T}\theta^2\lambda^2\mu^2\xi^3|\ov{z}|^2\,\textnormal{d}x\textnormal{d}t\right).
\label{eq:est_almost0}
\end{align}
\smallskip
\textbf{Step 4. Last arrangements and conclusion.} As usual, the last steps in Carleman strategies consist in removing the local term containing the gradient of the solution and coming back to the original variable. We will see that the same strategy also allows us to remove the local term on $(0,T/4)$.
First, using that $z_i=\theta^{-1}(\psi_i-\ell_i\psi)$, it is not difficult to see that $\theta^2|\nabla z|^2\leq 2|\nabla \psi|^2+2C\lambda^2\mu^2\xi^2|\psi|^2$ for some $C>0$ only depending on $\mathcal{D}$ and $\mathcal{D}^\prime$, hence from \eqref{eq:est_almost0} we have
\begin{align}\notag
&{\mathbb{E}}\left(\int_{\mathcal{D}}\theta^2(0)|\nabla z(0)|^2\,\textnormal{d}x+\int_{\mathcal{D}}\lambda^2\mu^3e^{2\mu(6m+1)}\theta^2(0)|z(0)|^2\,\textnormal{d}x\right)+{\mathbb{E}}\left(\int_0^{T/4}\!\!\!\!\int_{\mathcal{D}}\theta^2\lambda^2\mu^2\xi |\varphi||\gamma_t| |z|^2\,\textnormal{d}x\textnormal{d}t\right) \\ \notag
&\quad +{\mathbb{E}}\left(\int_{Q_T}\theta^2\lambda^3\mu^4\xi^3|z|^2\,\textnormal{d}x\textnormal{d}t+\int_{Q_T}\theta^2\lambda\mu^2\xi|\nabla z|^2\,\textnormal{d}x\textnormal{d}t\right) \\ \notag
&\leq C{\mathbb{E}}\left(\int_0^{T/4}\!\!\!\!\int_{\mathcal{D}^\prime}\theta^2\lambda^2\mu^2\xi |\varphi||\gamma_t| |z|^2\,\textnormal{d}x\textnormal{d}t+\int_{0}^{T}\!\!\!\int_{\mathcal{D}^\prime}\theta^2\lambda^3\mu^4\xi^3|z|^2\,\textnormal{d}x\textnormal{d}t+\int_{0}^{T}\!\!\!\int_{\mathcal{D}^\prime}\theta^2\lambda\mu^2\xi|\nabla z|^2\,\textnormal{d}x\textnormal{d}t\right) \\ \label{eq:est_almost}
&\quad + C {\mathbb{E}}\left(\int_{Q_T}\theta^2|\Xi|^2\,\textnormal{d}x\textnormal{d}t+\int_{Q_T}\theta^2\lambda^2\mu^2\xi^3|\ov{z}|^2\,\textnormal{d}x\textnormal{d}t\right),
\end{align}
for all $\lambda\geq \lambda_1$ and $\mu\geq \mu_3$.
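Let us briefly justify the pointwise bound used above: since $\nabla\theta=\theta\nabla\ell$ and, from the definition of the weights, $|\nabla\ell|\leq C\lambda\mu\xi$, we can write $\theta\nabla z=\nabla\psi-\psi\nabla\ell$, whence
\begin{equation*}
\theta^2|\nabla z|^2\leq 2|\nabla\psi|^2+2|\nabla\ell|^2|\psi|^2\leq 2|\nabla\psi|^2+2C\lambda^2\mu^2\xi^2|\psi|^2.
\end{equation*}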
We choose a cut-off function $\eta\in C_c^\infty(\mathcal{D})$ such that
\begin{equation}\label{eq:cut_off}
0\leq \eta\leq 1, \quad \eta\equiv 1 \text{ in } \mathcal{D}^\prime, \quad \eta\equiv 0 \text{ in }\mathcal{D}\setminus\mathcal{D}_0
\end{equation}
with the additional characteristic that
\begin{equation}\label{eq:prop_nabla_cut}
\frac{\nabla \eta}{\eta^{1/2}}\in L^\infty(\mathcal{D})^N.
\end{equation}
This condition can be obtained by taking some $\eta_0\in C_c^\infty(\mathcal{D})$ satisfying \eqref{eq:cut_off} and defining $\eta=\eta_0^4$. Then $\eta$ will satisfy both \eqref{eq:cut_off} and \eqref{eq:prop_nabla_cut}.
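Indeed, with this choice a direct computation gives
\begin{equation*}
\frac{\nabla\eta}{\eta^{1/2}}=\frac{4\eta_0^{3}\nabla\eta_0}{\eta_0^{2}}=4\eta_0\nabla\eta_0\in L^\infty(\mathcal{D})^N,
\end{equation*}
since $\eta_0\in C_c^\infty(\mathcal{D})$.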
Using It\^{o}'s formula, we compute $\textnormal{d}\left(\theta^2\xi z^2\right)=(\theta^2\xi)_tz^2\,\textnormal{d}t+2\theta^2\xi z\,\textnormal{d}{z}+\theta^2\xi(\textnormal{d}{z})^2$
and thus, using the equation verified by $z$, we get
\begin{align}\notag
{\mathbb{E}}&\left(\int_{\mathcal{D}_0}\theta^2(0)\xi(0)|z(0)|^2\eta \,\textnormal{d}x\right)+2{\mathbb{E}}\left(\int_0^T\!\!\!\int_{\mathcal{D}_0}\theta\theta_t\xi |z|^2\eta\,\textnormal{d}x\textnormal{d}t\right)+2{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^2\xi|\nabla z|^2\eta\,\textnormal{d}x\textnormal{d}t\right)\\ \notag
&+{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^2\xi|\ov{z}|^2\eta\,\textnormal{d}x\textnormal{d}t\right) =-{\mathbb{E}}\left(\int_0^T\!\!\!\int_{\mathcal{D}_0}\theta^2\xi_t |z|^2\eta \,\textnormal{d}x\textnormal{d}t\right)-2{\mathbb{E}}\left(\int_0^T\!\!\!\int_{\mathcal{D}_0}\theta^2\xi z \Xi \eta \,\textnormal{d}x\textnormal{d}t \right) \\ \label{eq:iden_local}
&-2{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^2\xi\nabla\eta\cdot\nabla z z \,\textnormal{d}x\textnormal{d}t\right)-2{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\nabla(\theta^2\xi)\cdot\nabla z z \eta\,\textnormal{d}x\textnormal{d}t\right).
\end{align}
We readily see that the first and last terms in the left-hand side of \eqref{eq:iden_local} are nonnegative, so they can be dropped. Also, notice that, using the properties of $\eta$, the third term gives (up to powers of $\lambda$ and $\mu$) the local term containing $|\nabla z|$.
We shall now focus on the second term on the left-hand side of \eqref{eq:iden_local}. Similar to Step 2 above, we analyze it on different time intervals. Obviously, for $t\in(T/4,T/2)$ this term vanishes since $\gamma_t=0$. For $t\in(0,T/4)$, we notice that $\theta\theta_t=\theta^2\lambda\varphi\frac{\gamma_t}{\gamma}$ and, since $\gamma_t\leq 0$, $\varphi<0$ and $\gamma\in[1,2]$, this yields a nonnegative term. Lastly, on the interval $(T/2,T)$, we use that $|\varphi_t|\leq C\lambda\mu \xi^3$ to obtain the bound $|\theta_t|\leq C\theta\lambda^2\mu\xi^3$. Summarizing, we have
\begin{align}\notag
2{\mathbb{E}}&\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta\theta_t\xi|z|^2\eta\,\textnormal{d}x\textnormal{d}t\right)\\ \label{eq:est_positive_t}
&\geq {\mathbb{E}}\left(\int_{0}^{T/4}\!\!\!\int_{\mathcal{D}_0}\theta^2\lambda\xi|\gamma_t||\varphi||z|^2\eta\,\textnormal{d}x\textnormal{d}t\right)- C{\mathbb{E}}\left(\int_{T/2}^{T}\int_{\mathcal{D}_0}\theta^2\lambda^2\mu\xi^3|z|^2\eta\,\textnormal{d}x\textnormal{d}t\right).
\end{align}
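For instance, for $t\in(0,T/4)$ the first bound above is explicit: since $\gamma_t\leq 0$, $\varphi<0$ and $1\leq\gamma\leq 2$,
\begin{equation*}
2\theta\theta_t\xi=2\theta^2\lambda\frac{\gamma_t}{\gamma}\varphi\,\xi=2\theta^2\lambda\frac{|\gamma_t|\,|\varphi|}{\gamma}\,\xi\geq \theta^2\lambda\xi|\gamma_t||\varphi|,
\end{equation*}
and integrating against $|z|^2\eta\geq 0$ gives the first term on the right-hand side of \eqref{eq:est_positive_t}.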
Let us estimate each term on the right-hand side of \eqref{eq:iden_local}. For the first one, using that $|\xi_t|\leq C\lambda\mu\xi^3$ for all $(t,x)\in(0,T)\times \mathcal{D}$, we get
\begin{equation}\label{eq:est_xi_t}
\abs{{\mathbb{E}}\left(\int_0^T\!\!\!\int_{\mathcal{D}_0}\theta^2\xi_t |z|^2\eta \,\textnormal{d}x\textnormal{d}t\right)}\leq C{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^2\lambda\mu\xi^3|z|^2\eta \,\textnormal{d}x\textnormal{d}t\right).
\end{equation}
For the second one, using Cauchy-Schwarz and Young inequalities yields
\begin{align}\notag
&\abs{{\mathbb{E}}\left(\int_0^T\!\!\!\int_{\mathcal{D}_0}\theta^2\xi z \Xi \eta \,\textnormal{d}x\textnormal{d}t \right)}\\ \label{eq:est_cross_Fz}
&\quad \leq \frac{1}{2}{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^2\lambda^{-1}\mu^{-2}|\Xi|^2\eta\,\textnormal{d}x\textnormal{d}t\right)+\frac{1}{2}{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^2\lambda\mu^2\xi^2|z|^2\eta\,\textnormal{d}x\textnormal{d}t\right).
\end{align}
For the third one, we will use property \eqref{eq:prop_nabla_cut} and Cauchy-Schwarz and Young inequalities to deduce that
\begin{align}\notag
&\abs{{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^2\xi\nabla\eta\cdot\nabla z z \,\textnormal{d}x\textnormal{d}t\right)}\\ \label{eq:est_nablaz_z}
&\qquad \leq \epsilon {\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^2\xi |\nabla z|^2\eta\,\textnormal{d}x\textnormal{d}t\right)+C(\epsilon){\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^2\xi|z|^2\,\textnormal{d}x\textnormal{d}t\right)
\end{align}
for any $\epsilon>0$. For the last term, using that $|\nabla(\theta^2\xi)|\leq C\theta^2\lambda\mu\xi^2$ and arguing as above, we get
\begin{align}\notag
&\abs{{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\nabla(\theta^2\xi)\cdot\nabla z z \eta\,\textnormal{d}x\textnormal{d}t\right)}\\ \label{eq:est_nablaweight_nablaz}
&\qquad \leq \epsilon {\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^2\xi |\nabla z|^2\eta\,\textnormal{d}x\textnormal{d}t\right) + C(\epsilon){\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^2\xi^3\mu^2\lambda^2|z|^2\eta\,\textnormal{d}x\textnormal{d}t\right).
\end{align}
Therefore, taking $\epsilon=\frac{1}{2}$ and using estimates \eqref{eq:est_positive_t}--\eqref{eq:est_nablaweight_nablaz} together with the properties of the cut-off $\eta$, we get
\begin{align}\notag
{\mathbb{E}}&\left(\int_{0}^{T/4}\!\!\!\int_{\mathcal{D}^\prime}\theta^2\lambda\xi|\gamma_t||\varphi||z|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}^\prime}\theta^2\xi|\nabla z|^2\,\textnormal{d}x\textnormal{d}t\right) \\ \label{eq:est_locales}
&\leq C{\mathbb{E}}\left(\int_{0}^{T}\int_{\mathcal{D}_0}\theta^2(\lambda^2\mu\xi^3+\lambda\mu^2\xi^2+\lambda^2\mu^2\xi^3)|z|^2\,\textnormal{d}x\textnormal{d}t\right)+C{\mathbb{E}}\left(\int_{Q_T}\theta^2\lambda^{-1}\mu^{-2}|\Xi|^2\,\textnormal{d}x\textnormal{d}t\right).
\end{align}
As usual, we have paid the price of estimating the gradient locally by slightly enlarging the observation domain. Notice that this procedure also gives us the local estimate on $(0,T/4)$ by using the properties of the weight function $\varphi$ and of $\gamma_t$. Finally, the desired estimate follows by multiplying both sides of \eqref{eq:est_locales} by $\lambda\mu^2$ and using the result to bound the local terms on the right-hand side of \eqref{eq:est_almost}. We conclude the proof by setting $\mu_0=\mu_3$ and $\lambda_0=\lambda_1$.
\end{proof}
\subsection{A controllability result for a linear forward stochastic heat equation with two source terms and two controls}
In this section, we will prove a controllability result for a linear forward equation. More precisely, recall the equation defined in \eqref{eq:sys_forward_source_intro}
\begin{equation}\label{eq:sys_forward_source}
\begin{cases}
\textnormal{d}{y}=(\Delta y + F+ \chi_{\mathcal{D}_0}h)\textnormal{d}t + (G+H)\textnormal{d}{W}(t) &\text{in }Q_T, \\
y=0 &\text{on } \Sigma_T, \\
y(0)=y_0 &\text{in }\mathcal{D}.
\end{cases}
\end{equation}
In \eqref{eq:sys_forward_source}, $(h,H)\in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}_0))\times L^2_{\mathcal F}(0,T;L^2(\mathcal{D}))$ is a pair of controls and $F, G$ are given source terms in $L_{\mathcal F}^2(0,T;L^2(\mathcal{D}))$. Observe that given $y_0\in L^2(\Omega,\mathcal F_0; L^2(\mathcal{D}))$ and the aforementioned regularity on the controls and source terms, system \eqref{eq:sys_forward_source} admits a unique solution $y\in \mathcal W_T$, see \cite[Theorem 2.7]{LZ19}.
Under the notation of \Cref{sec:new_carleman}, let us fix the parameters $\lambda$ and $\mu$ to values sufficiently large so that inequality \eqref{eq:car_backward_0} holds true.
We define the space
\begin{align}
&\mathcal S_{\lambda,\mu}=\Bigg\{(F,G)\in [L^2_{\mathcal F}(0,T;L^2(\mathcal{D}))]^2: \notag\\
& \quad \left[{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|F|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-2}\mu^{-2}\xi^{-3}|G|^2\,\textnormal{d}x\textnormal{d}t \right)\right]^{1/2}<+\infty \Bigg\},\label{eq:defslambdamu}
\end{align}
endowed with the canonical norm.
Our linear controllability result reads as follows.
\begin{theo}\label{teo:contr_forward_source}
For any initial datum $y_0\in L^2(\Omega,\mathcal F_0;L^2(\mathcal{D}))$ and any source terms $(F,G)\in \mathcal S_{\lambda,\mu}$, there exists a pair of controls $(\widehat{h},\widehat{H})\in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}_0))\times L^2_{\mathcal F}(0,T;L^2(\mathcal{D}))$ such that the associated solution $\widehat{y} \in \mathcal{W}_T$ to system \eqref{eq:sys_forward_source} satisfies $\widehat{y}(T)=0$ in $\mathcal{D}$, a.s. Moreover, the following estimate holds
\begin{align}\notag
{\mathbb{E}}&\left(\int_{Q_T}\theta^{-2}|\widehat{y}|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|\widehat{h}|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-2}\mu^{-2}\xi^{-3}|\widehat{H}|^2\,\textnormal{d}x\textnormal{d}t\right) \\ \label{eq:est_weighted_spaces_forward}
& \leq C_1{\mathbb{E}}\left(\|y_0\|^2_{L^2(\mathcal{D})}\right)+C\norme{(F,G)}_{\mathcal S_{\lambda,\mu}}^2,
\end{align}
where $C_1>0$ is a constant depending on $\mathcal{D},\mathcal{D}_0,\lambda,\mu$ and $C>0$ only depends on $\mathcal{D}$ and $\mathcal{D}_0$.
\end{theo}
\begin{rmk}
\label{rmk:uniquetrajectory}
Using classical arguments, see for instance \cite[Proposition 2.9]{LLT13}, from \Cref{teo:contr_forward_source} one can construct a linear continuous mapping that associates to every initial datum $y_0\in L^2(\Omega,\mathcal F_0;L^2(\mathcal{D}))$ and every pair of source terms $(F,G) \in \mathcal{S}_{\lambda, \mu}$ a trajectory $(\widehat{y}, \widehat{h}, \widehat{H})$ such that $\widehat{y}(T)=0$ in $\mathcal{D}$, a.s. and \eqref{eq:est_weighted_spaces_forward} holds.
\end{rmk}
The proof of \Cref{teo:contr_forward_source} is based on a classical duality method, the so-called penalized Hilbert Uniqueness Method, whose ideas can be traced back to the seminal work \cite{GL94}. The general strategy consists of three steps:
\begin{itemize}
\item[-] \textit{Step 1.} Construct a family of optimal approximate-null control problems for system
\eqref{eq:sys_forward_source}.
\item[-] \textit{Step 2.} Obtain a uniform estimate for the approximate solutions in terms of the data of the problem, i.e., the initial datum $y_0$ and the source terms $F$ and $G$.
\item[-] \textit{Step 3.} Pass to the limit to derive the desired null-controllability result.
\end{itemize}
Let us mention that, in the stochastic setting, similar strategies have been used to deduce controllability results and Carleman estimates for forward and backward equations, see e.g. \cite{liu14,Yan18,LY19}.
In what follows, $C$ will denote a generic positive constant possibly depending on $\mathcal{D},\mathcal{D}_0$, but never on the parameters $\lambda$ and $\mu$.
\begin{proof}[Proof of \Cref{teo:contr_forward_source}] We follow the steps described above.
\smallskip
\textbf{Step 1}. For any $\epsilon>0$, let us consider the weight function $\gamma_\epsilon(t)$ given by
\begin{equation*}
\gamma_\epsilon(t):=
\begin{cases}
\gamma_\epsilon(t)=1+\left(1-\frac{4t}{T}\right)^\sigma, \quad t\in[0,T/4], \\
\gamma_\epsilon(t)=1, \quad t\in [T/4,T/2+\epsilon], \\
\gamma_\epsilon(t)=\gamma(t-\epsilon), \quad t\in [T/2+\epsilon,T], \\
\sigma \textnormal{ as in \eqref{def:sigma}}.
\end{cases}
\end{equation*}
Defined in this way, it is not difficult to see that $\gamma_\epsilon$ does not blow up as $t\to T^{-}$ and that $\gamma_\epsilon(t)\leq \gamma(t)$ for $t\in[0,T]$. With this new function, we set the weight $\varphi_\epsilon$ as in \eqref{eq:weights_0} by replacing the function $\gamma$ by $\gamma_\epsilon$. In the same manner, we write $\theta_\epsilon=e^{\lambda\varphi_\epsilon}$.
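Observe also that, since $\gamma_\epsilon\leq \gamma$ and the factor multiplying $\gamma$ in the definition of $\varphi$ in \eqref{eq:weights_0} is negative (recall that $\varphi<0$), we have $\varphi\leq \varphi_\epsilon\leq 0$ and hence
\begin{equation*}
\theta^{2}\theta_\epsilon^{-2}=e^{2\lambda(\varphi-\varphi_\epsilon)}\leq 1 \quad\text{in } Q_T,
\end{equation*}
a fact that will be used in Step 2 below.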
With this notation, we introduce the functional
\begin{align}\notag
J_\epsilon(h,H):=&\frac{1}{2}{\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}|y|^2\,\textnormal{d}x\textnormal{d}t\right)+\frac{1}{2}{\mathbb{E}}\left(\int_{0}^T\!\!\!\int_{\mathcal{D}_0} \theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|h|^2\,\textnormal{d}x\textnormal{d}t\right)\\ \label{eq:func}
&+\frac{1}{2}{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-2}\mu^{-2}\xi^{-3}|H|^2\,\textnormal{d}x\textnormal{d}t\right)+\frac{1}{2\epsilon}{\mathbb{E}}\left(\int_{\mathcal{D}}|y(T)|^2\,\textnormal{d}x\right)
\end{align}
and consider the minimization problem
\begin{equation}\label{eq:prob_min}
\begin{cases}
\min_{(h,H)\in \mathcal H} J_\epsilon(h,H), \\
\textnormal{subject to equation \eqref{eq:sys_forward_source},}
\end{cases}
\end{equation}
where \begin{align*}
\mathcal H=&\left\{(h,H)\in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}_0))\times L^2_{\mathcal F}(0,T;L^2(\mathcal{D})): \right. \\ &\quad \left.
{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|h|^2\,\textnormal{d}x\textnormal{d}t\right)<+\infty, \ {\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-2}\mu^{-2}\xi^{-3}|H|^2\,\textnormal{d}x\textnormal{d}t\right)<+\infty\right\}.
\end{align*}
It can be readily seen that the functional $J_\epsilon$ is continuous, strictly convex and coercive. Therefore, the minimization problem \eqref{eq:prob_min} admits a unique optimal pair solution that we denote by $(h_\epsilon,H_\epsilon)$. From classical arguments, namely the Euler-Lagrange equation for \eqref{eq:func} at the minimum $(h_\epsilon,H_\epsilon)$ and a duality argument (see, for instance, \cite{Lio71}), the pair $(h_\epsilon,H_\epsilon)$ can be characterized as
\begin{equation}\label{eq:h_epsilon}
h_\epsilon=-\chi_{\mathcal{D}_0}\theta^2\lambda^3\mu^4\xi^3 z_\epsilon, \quad H_\epsilon=-\theta^2\lambda^2\mu^2\xi^3Z_{\epsilon} \quad\text{in } Q_T, \quad \text{a.s.},
\end{equation}
where the pair $(z_\epsilon,Z_\epsilon)$ verifies the backward stochastic equation
\begin{equation}\label{eq:q_optim}
\begin{cases}
\textnormal{d}{z_\epsilon}=(-\Delta z_\epsilon-\theta_\epsilon^{-2}y_\epsilon)\textnormal{d}t+ Z_\epsilon \textnormal{d}{W}(t)&\text{in }Q_T, \\
z_\epsilon=0 &\text{on }\Sigma_T, \\
z_\epsilon(T)=\frac{1}{\epsilon}y_\epsilon(T) &\text{in }\mathcal{D},
\end{cases}
\end{equation}
and where $(y_\epsilon,y_\epsilon(T))$ is obtained from $y_\epsilon$, the solution to \eqref{eq:sys_forward_source} with controls $h=h_\epsilon$ and $H=H_\epsilon$. Observe that, since $y_\epsilon\in L^2_{\mathcal F}(\Omega; C([0,T];L^2(\mathcal{D})))$, the evaluation of $y_\epsilon$ at $t=T$ is meaningful and \eqref{eq:q_optim} is well-posed for any $\epsilon>0$.
\smallskip
\textbf{Step 2.} Using It\^{o}'s formula, we can compute $\textnormal{d}(y_\epsilon z_\epsilon)$ and deduce
\begin{align*}
{\mathbb{E}}\left(\int_{\mathcal{D}}y_\epsilon(T)z_\epsilon(T)\,\textnormal{d}x\right)&={\mathbb{E}}\left(\int_{\mathcal{D}}y_\epsilon(0)z_\epsilon(0)\,\textnormal{d}x\right)+{\mathbb{E}}\left(\int_{Q_T}(\Delta y_\epsilon+F+\chi_{\mathcal{D}_0}h_\epsilon)z_\epsilon\,\textnormal{d}x\textnormal{d}t\right) \\
&\quad + {\mathbb{E}}\left(\int_{Q_T}(-\Delta z_\epsilon-\theta_\epsilon^{-2}y_\epsilon)y_\epsilon\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}(H_\epsilon+G)Z_\epsilon \,\textnormal{d}x\textnormal{d}t \right)
\end{align*}
whence, integrating by parts (so that the terms involving the Laplacian cancel each other), replacing the data of systems \eqref{eq:sys_forward_source} and \eqref{eq:q_optim}, and using the characterization \eqref{eq:h_epsilon}, we get
\begin{align}
{\mathbb{E}}&\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^2\lambda^3\mu^4\xi^3|z_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\theta^2\lambda^2\mu^2\xi^3|Z_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)\notag\\
&\quad +{\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}|y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+\frac{1}{\epsilon}{\mathbb{E}}\left(\int_{\mathcal{D}}|y_\epsilon(T)|^2\,\textnormal{d}x\right) \notag \\ \label{eq:id_prod_yq}
&={\mathbb{E}}\left(\int_{\mathcal{D}}y_0z_\epsilon(0)\,\textnormal{d}x\right)+{\mathbb{E}}\left(\int_{Q_T}F z_\epsilon \,\textnormal{d}x\textnormal{d}t \right)+{\mathbb{E}}\left(\int_{Q_T}G Z_\epsilon \,\textnormal{d}x\textnormal{d}t \right).
\end{align}
Now, we will use the Carleman estimate in \Cref{thm:carleman_backward_0}. We will apply it to equation \eqref{eq:q_optim} with $\Xi=-\theta_\epsilon^{-2}y_\epsilon$ and $\ov{z}=Z_\epsilon$. Then, after removing some unnecessary terms, we get for any $\lambda$ and $\mu$ large enough
\begin{align}\notag
{\mathbb{E}}&\left(\int_{\mathcal{D}}\lambda^2\mu^3\theta^2(0)|z_\epsilon(0)|^2\,\textnormal{d}x\right)+{\mathbb{E}}\left(\int_{Q_T}\lambda^3\mu^4\xi^3\theta^2| z_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\lambda^2\mu^2\xi^3\theta^2| Z_\epsilon |^2\,\textnormal{d}x\textnormal{d}t\right) \\ \label{eq:car_random}
&\leq C{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\lambda^3\mu^4\xi^3\theta^2|z_\epsilon|^2\,\textnormal{d}x\textnormal{d}t+\int_{Q_T}\theta^2|\theta_\epsilon^{-2} y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t+\int_{Q_T}\lambda^2\mu^2\xi^3\theta^2|Z_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right).
\end{align}
Notice that we have added an integral of $Z_\epsilon$ on the left-hand side of the inequality. This slightly increases the constant $C$ on the right-hand side, but it remains uniform with respect to $\lambda$ and $\mu$.
In view of \eqref{eq:car_random}, we use Cauchy-Schwarz and Young inequalities in the right-hand side of \eqref{eq:id_prod_yq} to obtain
\begin{align}\notag
{\mathbb{E}}&\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^2\lambda^3\mu^4\xi^3|z_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\theta^2\lambda^2\mu^2\xi^3| Z_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right) \\ \notag
&\quad +{\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}|y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+\frac{1}{\epsilon}{\mathbb{E}}\left(\int_{\mathcal{D}}|y_\epsilon(T)|^2\,\textnormal{d}x\right) \\ \notag
&\leq \delta\left[{\mathbb{E}}\left(\int_{\mathcal{D}}\theta^2(0)\lambda^2\mu^3|z_\epsilon(0)|^2\,\textnormal{d}x\right)+{\mathbb{E}}\left(\int_{Q_T}\theta^2\lambda^3\mu^4\xi^3|z_\epsilon|^2\,\textnormal{d}x\textnormal{d}t+\int_{Q_T}\theta^2\lambda^2\mu^2\xi^3|Z_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)\right]\\ \notag
&\quad + C_\delta\left[{\mathbb{E}}\left(\int_{\mathcal{D}}\theta^{-2}(0)\lambda^{-2}\mu^{-3}|y_0|^2\,\textnormal{d}x\right)+{\mathbb{E}}\left(\int_{Q_{T}}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|F|^2\,\textnormal{d}x\textnormal{d}t\right) \right. \\ \label{eq:des_prod_yq}
&\qquad \qquad \left. + {\mathbb{E}}\left(\int_{Q_{T}}\theta^{-2}\lambda^{-2}\mu^{-2}\xi^{-3}|G|^2\,\textnormal{d}x\textnormal{d}t\right) \right]
\end{align}
for any $\delta>0$. Using inequality \eqref{eq:car_random} to estimate in the right-hand side of \eqref{eq:des_prod_yq} and the fact that $\theta^2 \theta_\epsilon^{-2}\leq 1$ for all $(t,x)\in Q_T$, we obtain, after taking $\delta>0$ small enough, that
\begin{align*}
{\mathbb{E}}&\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^2\lambda^3\mu^4\xi^3|z_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\theta^2\lambda^2\mu^2\xi^3|Z_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)\\
&\quad +{\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}|y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+\frac{1}{\epsilon}{\mathbb{E}}\left(\int_{\mathcal{D}}|y_\epsilon(T)|^2\,\textnormal{d}x\right) \\
&\leq C\left[{\mathbb{E}}\left(\int_{\mathcal{D}}\theta^{-2}(0)\lambda^{-2}\mu^{-3}|y_0|^2\,\textnormal{d}x\right)+{\mathbb{E}}\left(\int_{Q_{T}}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|F|^2\,\textnormal{d}x\textnormal{d}t\right) \right. \\
&\qquad\quad \left. +{\mathbb{E}}\left(\int_{Q_{T}}\theta^{-2}\lambda^{-2}\mu^{-2}\xi^{-3}|G|^2\,\textnormal{d}x\textnormal{d}t\right) \right].
\end{align*}
Recalling the characterization of the optimal control $h_\epsilon$ in \eqref{eq:h_epsilon} we obtain
\begin{align}\notag
{\mathbb{E}}&\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|h_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-2}\mu^{-2}\xi^{-3}|H_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)\\ \notag
&\quad +{\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}|y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+\frac{1}{\epsilon}{\mathbb{E}}\left(\int_{\mathcal{D}}|y_\epsilon(T)|^2\,\textnormal{d}x\right) \\ \notag
&\leq C\left[{\mathbb{E}}\left(\int_{\mathcal{D}}\theta^{-2}(0)\lambda^{-2}\mu^{-3}|y_0|^2\,\textnormal{d}x\right)+{\mathbb{E}}\left(\int_{Q_{T}}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|F|^2\,\textnormal{d}x\textnormal{d}t\right) \right. \\ \label{eq:iden_uniform_final}
&\qquad\quad \left. +{\mathbb{E}}\left(\int_{Q_{T}}\theta^{-2}\lambda^{-2}\mu^{-2}\xi^{-3}|G|^2\,\textnormal{d}x\textnormal{d}t\right) \right].
\end{align}
Observe that the right-hand side of \eqref{eq:iden_uniform_final} is well-defined and finite since $\theta^{-2}(0)<+\infty$ and the source terms $(F,G)$ belong to $\mathcal{S}_{\lambda, \mu}$, defined in \eqref{eq:defslambdamu}.
\smallskip
\textbf{Step 3.} Since the right-hand side of \eqref{eq:iden_uniform_final} is uniform with respect to $\epsilon$, we readily deduce that there exists $(\widehat{h},\widehat{H},\widehat{y})$ such that
\begin{equation}\label{eq:weak_conv}
\begin{cases}
h_\epsilon\rightharpoonup \widehat{h} &\textnormal{weakly in } L^2(\Omega\times(0,T);L^2(\mathcal{D}_0)), \\
H_\epsilon\rightharpoonup \widehat{H} &\textnormal{weakly in } L^2(\Omega\times(0,T);L^2(\mathcal{D})), \\
y_{\epsilon}\rightharpoonup \widehat{y} &\textnormal{weakly in } L^2(\Omega\times(0,T);L^2(\mathcal{D})).
\end{cases}
\end{equation}
We claim that $\widehat{y}$ is the solution to \eqref{eq:sys_forward_source} associated with $(\widehat{h},\widehat{H})$. To show this, let us denote by $\tilde{y}$ the unique solution in $L^2_{\mathcal F}(\Omega;C([0,T];L^2(\mathcal{D})))\cap L^2_{\mathcal F}(0,T;H_0^1(\mathcal{D}))$ to \eqref{eq:sys_forward_source} with controls $(\widehat{h},\widehat{H})$. For any $m\in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}))$, we consider $(z,Z)$ the unique solution to the backward equation
\begin{equation}\label{eq:z_gen}
\begin{cases}
\textnormal{d}{z}=(-\Delta z-m)\textnormal{d}t+Z\textnormal{d}{W}(t) &\text{in }Q_T, \\
z=0 &\text{on }\Sigma_T, \\
z(T)=0 &\text{in } \mathcal{D}.
\end{cases}
\end{equation}
Then, using It\^{o}'s formula, we compute the duality between \eqref{eq:z_gen} and \eqref{eq:sys_forward_source} associated to $(h,H)=(h_\epsilon,H_\epsilon)$ and $(h,H)=(\widehat{h},\widehat{H})$, respectively. We have
\begin{align}\notag
-{\mathbb{E}}\left(\int_{\mathcal{D}}y_0z(0)\,\textnormal{d}x\right)&=-{\mathbb{E}}\left(\int_{Q_T} m y_\epsilon \,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}F z\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}G Z\,\textnormal{d}x\textnormal{d}t\right) \\ \label{eq:duality_eps}
&\quad +{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}h_\epsilon z\,\textnormal{d}x\textnormal{d}t\right) + {\mathbb{E}}\left(\int_{Q_T}H_\epsilon Z\,\textnormal{d}x\textnormal{d}t\right)
\end{align}
and
\begin{align}\notag
-{\mathbb{E}}\left(\int_{\mathcal{D}}y_0z(0)\,\textnormal{d}x\right)&=-{\mathbb{E}}\left(\int_{Q_T} m \tilde y \,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}F z\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}G Z\,\textnormal{d}x\textnormal{d}t\right) \\ \label{eq:duality_tilde}
&\quad +{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\widehat{h} z\,\textnormal{d}x\textnormal{d}t\right) + {\mathbb{E}}\left(\int_{Q_T}\widehat{H} Z\,\textnormal{d}x\textnormal{d}t\right).
\end{align}
Then, using \eqref{eq:weak_conv} to pass to the limit $\epsilon\to 0$ in \eqref{eq:duality_eps} and subtracting the result from \eqref{eq:duality_tilde}, we obtain ${\mathbb{E}}\left(\int_{Q_T}m(\tilde y-\widehat{y})\,\textnormal{d}x\textnormal{d}t\right)=0$ for every $m\in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}))$, and hence $\tilde y=\widehat{y}$ in $Q_T$, a.s.
To conclude, we notice from \eqref{eq:iden_uniform_final} that ${\mathbb{E}}\left(\int_{\mathcal{D}}|y_\epsilon(T)|^2\,\textnormal{d}x\right)\leq C\epsilon$, from which we deduce that $\widehat{y}(T)=0$ in $\mathcal{D}$, a.s. Also, from the weak convergence \eqref{eq:weak_conv}, Fatou's lemma and the uniform estimate \eqref{eq:iden_uniform_final}, we deduce \eqref{eq:est_weighted_spaces_forward}. This ends the proof.
\end{proof}
\subsection{Proof of the nonlinear result for the forward equation}
Now, we are in a position to prove \Cref{th:semilinear_forward}. To this end, let us fix the parameters $\lambda$ and $\mu$ in \Cref{teo:contr_forward_source} to sufficiently large values. Recall that, in turn, these parameters come from \Cref{thm:carleman_backward_0} and should be selected as $\lambda \geq \lambda_0$ and $\mu\geq \mu_0$ for some $\lambda_0\geq 1$ and $\mu_0\geq 1$, so there is no contradiction.
Note that, at this point, we have explicitly kept track of the parameters $\lambda$ and $\mu$ in the controllability result of \Cref{teo:contr_forward_source}. This was possible due to the selection of the weight $\theta$ in the Carleman estimate \eqref{eq:car_backward_0}, which allows us to keep a term depending on $z(0)$ in the left-hand side.
\begin{proof}[Proof of \Cref{th:semilinear_forward}]
Let us consider nonlinearities $f$ and $g$ fulfilling \eqref{eq:f_g_zero} and \eqref{eq:UniformLipschitzf_forw} and define the nonlinear map
\begin{equation*}
\mathcal N: (F,G)\in \mathcal S_{\lambda,\mu}\mapsto (f(\omega,t,x,y),g(\omega,t,x,y))\in \mathcal S_{\lambda,\mu},
\end{equation*}
where $y$ is the trajectory of \eqref{eq:sys_forward_source} associated to the data $y_0$, $F$, and $G$, see \Cref{teo:contr_forward_source} and \Cref{rmk:uniquetrajectory}. In what follows, to abridge the notation, we simply write $f(y)$ and $g(y)$.
We will check the following facts for the nonlinear mapping $\mathcal N$.
\textbf{The mapping $\mathcal N$ is well-defined.} To this end, we need to show that for any $(F,G)\in \mathcal S_{\lambda,\mu}$, $\mathcal N(F,G)\in \mathcal S_{\lambda,\mu}$. We have from \eqref{eq:UniformLipschitzf_forw} and \eqref{eq:f_g_zero},
\begin{align*}
&\norme{\mathcal N(F,G)}_{\mathcal S_{\lambda,\mu}}^2={\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|f(y)|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-2}\mu^{-2}\xi^{-3}|g(y)|^2\,\textnormal{d}x\textnormal{d}t\right) \\
&\quad \leq \lambda^{-2}\mu^{-2} L^2 {\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\xi^{-3}|y|^2\,\textnormal{d}x\textnormal{d}t\right).
\end{align*}
At this point, we have also used that $\mu$ and $\lambda$ are much greater than one.
Using \eqref{eq:est_weighted_spaces_forward} and the fact that $\xi^{-1}(t,x)\leq 1$ for all $(t,x)\in Q_T$, we get
\begin{align*}
&\norme{\mathcal N(F,G)}_{\mathcal S_{\lambda,\mu}}^2 \leq L^2 \lambda^{-2}\mu^{-2}\left(C_1{\mathbb{E}}\norme{y_0}^2_{L^2(\mathcal{D})}+C\norme{(F,G)}_{\mathcal S_{\lambda,\mu}}^2 \right) <+\infty.
\end{align*}
This proves that $\mathcal N$ is well-defined.
\textbf{The mapping $\mathcal N$ is strictly contractive.} Let us consider couples $(F_i,G_i)\in \mathcal S_{\lambda,\mu}$ for $i=1,2$. We denote the solutions of the corresponding equations by $y_1$ and $y_2$, respectively. Using the fact that the nonlinearities $f$ and $g$ are globally Lipschitz, i.e. \eqref{eq:UniformLipschitzf_forw}, we have
\begin{align*}
\norme{\mathcal N(F_1,G_1)-\mathcal N(F_2,G_2)}^2_{\mathcal S_{\lambda,\mu}} &= {\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|f(y_1)-f(y_2)|^2\,\textnormal{d}x\textnormal{d}t\right) \\
&\quad + {\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-2}\mu^{-2}\xi^{-3}|g(y_1)-g(y_2)|^2\,\textnormal{d}x\textnormal{d}t\right) \\
&\leq L^2\lambda^{-2}\mu^{-2}{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}|y_1-y_2|^2\,\textnormal{d}x\textnormal{d}t\right).
\end{align*}
Then, applying \Cref{teo:contr_forward_source} and \Cref{rmk:uniquetrajectory}, and using the estimate \eqref{eq:est_weighted_spaces_forward} for the equation associated with $(F,G)=(F_1-F_2,G_1-G_2)$ and $y_0=0$, we deduce from the above inequality that
\begin{align}\notag
\norme{\mathcal N(F_1,G_1)-\mathcal N(F_2,G_2)}^2_{\mathcal S_{\lambda,\mu}} &\leq C L^2\lambda^{-2}\mu^{-2} \left[{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|F_1-F_2|^2\,\textnormal{d}x\textnormal{d}t\right) \right. \\ \notag
&\hspace{2.6cm} \left.+{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-2}\mu^{-2}\xi^{-3}|G_1-G_2|^2\,\textnormal{d}x\textnormal{d}t\right)\right]
\\ \label{eq:ineq_map}
&= C L^2 \lambda^{-2}\mu^{-2}\norme{(F_1,G_1)-(F_2,G_2)}^2_{\mathcal S_{\lambda,\mu}},
\end{align}
where $C=C(\mathcal{D},\mathcal{D}_0)>0$ comes from \Cref{teo:contr_forward_source}. Observe that all the constants in the right-hand side of \eqref{eq:ineq_map} are uniform with respect to $\lambda$ and $\mu$; thus, if necessary, we can increase the values of $\lambda$ and $\mu$ so that $C L^2 \lambda^{-2}\mu^{-2}<1$. This yields that the mapping $\mathcal N$ is strictly contractive.
Once we have verified these two conditions, it follows from the Banach fixed point theorem that $\mathcal N$ has a unique fixed point $(F,G)$ in $\mathcal S_{\lambda,\mu}$. Letting $y$ be the trajectory associated with this pair $(F,G)$, we observe that $y$ is the solution to \eqref{eq:forward_semilinear} and verifies $y(T,\cdot)=0$ in $\mathcal{D}$, a.s. This concludes the proof of \Cref{th:semilinear_forward}.
\end{proof}
\section{Controllability of a semilinear backward stochastic parabolic equation}\label{sec:backward}
As for the forward equation, the main ingredient to prove \Cref{th:semilinear_backward} is a controllability result for a linear system with a source term. In this case, we shall focus on studying the controllability of
\begin{equation*}
\begin{cases}
\textnormal{d} y=(-\Delta y+\chi_{\mathcal{D}_0}h+F)\textnormal{d}t+Y\textnormal{d}{W}(t) &\text{in }Q_T, \\
y=0 &\text{on }\Sigma_T, \\
y(T)=y_T &\text{in }\mathcal{D},
\end{cases}
\end{equation*}
where $F\in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}))$ and $y_T\in L^2(\Omega,\mathcal F_T;L^2(\mathcal{D}))$ are given. Unlike in the previous section, we shall not prove a Carleman estimate for the corresponding adjoint system (i.e., a forward equation). Although this is possible, we will see below that we can greatly simplify the problem by studying a random parabolic equation, for which a deterministic Carleman estimate will suffice.
\subsection{A deterministic Carleman estimate and its consequence}
As we mentioned in \Cref{sec:new_carleman}, in \cite{BEG16} the authors have proved a Carleman estimate for the (backward) heat equation with weights that do not vanish as $t\to 0^{+}$ (see \eqref{eq:def_theta} and \eqref{eq:weights_0}). Following their approach it is possible to prove the analogous result for a forward equation. For this, we need to introduce some new weight functions which are actually the mirrored version of \eqref{eq:def_theta} and \eqref{eq:weights_0}.
In more detail, let us consider the function $\beta$ as in \eqref{eq:prop_weight_beta} and let $0<T<1$. We define the function $\widetilde{\gamma}(t)$ as
\begin{equation}\label{eq:def_theta_tilde}
\widetilde{\gamma}(t):=
\begin{cases}
\widetilde{\gamma}(t)=\frac{1}{t^m}, \quad t\in(0,T/4], \\
\widetilde{\gamma} \textnormal{ is decreasing on $[T/4,T/2]$}, \\
\widetilde{\gamma}(t)=1, \quad t\in[T/2,3T/4], \\
\widetilde{\gamma}(t)=1+\left(1-\frac{4(T-t)}{T}\right)^\sigma, \quad t\in[3T/4,T], \\
\widetilde{\gamma}\in C^2([0,T]),
\end{cases}
\end{equation}
where $m\geq 1$ and $\sigma\geq 2$ is defined in \eqref{def:sigma}. Observe that $\widetilde{\gamma}(t)$ is the mirrored version of $\gamma(t)$ in \eqref{eq:def_theta} with respect to $T/2$. In analogy with the properties of $\gamma$, the function $\widetilde\gamma$ preserves one important feature: on the interval $[3T/4,T]$ the derivative of $\widetilde{\gamma}$ has a prescribed sign, namely $\widetilde{\gamma}_t\geq 0$.
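Indeed, on $[3T/4,T]$ a direct computation from \eqref{eq:def_theta_tilde} gives
\begin{equation*}
\widetilde{\gamma}_t(t)=\frac{4\sigma}{T}\left(1-\frac{4(T-t)}{T}\right)^{\sigma-1}\geq 0,
\end{equation*}
since $1-\frac{4(T-t)}{T}\in[0,1]$ on that interval.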
With this new function, we define the weights $\widetilde{\varphi}=\widetilde\varphi(t,x)$ and $\widetilde{\xi}=\widetilde{\xi}(t,x)$ as
\begin{equation}
\label{eq:weights_tilde}
\widetilde{\varphi}(t,x):=\widetilde{\gamma}(t)\left(e^{\mu(\beta(x)+6m)}-\mu e^{6\mu(m+1)}\right), \quad \widetilde{\xi}(t,x):=\tilde{\gamma}(t)e^{\mu(\beta(x)+6m)},
\end{equation}
where $\mu\geq 1$ is some parameter. In the same spirit, we set the weight $\widetilde{\theta}=\widetilde{\theta}(t,x)$ as
\begin{equation*}
\widetilde{\theta}:=e^{\widetilde{\ell}} \quad\textnormal{where }\widetilde{\ell}(t,x)=\lambda \widetilde{\varphi}(t,x)
\end{equation*}
for a parameter $\lambda\geq 1$.
In what follows, to keep the notation as light as possible, and since there is no risk of confusion (the notation is specific to this section), we simply write $\widetilde\gamma=\gamma$, $\widetilde\theta=\theta$, and so on.
We have the following Carleman estimate for the heat equation with source term
\begin{equation}
\label{eq:qsolheatSource}
\begin{cases}
\partial_t q-\Delta q=g(t,x) & \text{in } Q_T, \\
q= 0 &\text{on } \Sigma_T ,\\
q(0)=q_0(x) & \text{in } \mathcal{D}.
\end{cases}
\end{equation}
\begin{theo}\label{car_refined}
For all $m\geq 1$, there exist constants $C>0$, $\lambda_0\geq 1$ and $\mu_0\geq 1$ such that for any $q_0\in L^2(\mathcal{D})$ and any $g\in L^2(Q_T)$, the weak solution to \eqref{eq:qsolheatSource}
satisfies
\begin{align*}
&\int_{Q_T}\theta^2\lambda\mu^2\xi|\nabla q|^2\,\textnormal{d}x\textnormal{d}t+\int_{Q_T}\theta^2\lambda^3\mu^4\xi^{3}|q|^2\,\textnormal{d}x\textnormal{d}t\notag\\
& + \int_{\mathcal{D}} \theta^2(T)|\nabla q(T)|^2\,\textnormal{d}x + \int_{\mathcal{D}} \lambda^2\mu^3e^{2\mu(6m+1)}\theta^2(T)|q(T)|^2 \,\textnormal{d}x\notag \\
& \leq C\left(\int_{Q_T} \theta^2|g|^2\,\textnormal{d}x\textnormal{d}t+\iint_{\mathcal{D}_0\times(0,T)}\theta^2\lambda^3\mu^4\xi^3 |q|^2\,\textnormal{d}x\textnormal{d}t \right),
\end{align*}%
for all $\mu\geq \mu_0$ and $\lambda\geq \lambda_0$.
\end{theo}
The proof of this result is a straightforward adaptation of \cite[Theorem 2.5]{BEG16}, just by taking into account that in this case the weight $\gamma$ verifies $\gamma_t\geq 0$ in $[3T/4,T]$, in contrast with the fact that $\gamma_t\leq 0$ in $[0,T/4]$ as in \cite{BEG16} or as we have used in the proof of \Cref{thm:carleman_backward_0}.
Let us consider the forward parabolic equation given by
\begin{equation}\label{eq:forw_gen}
\begin{cases}
\textnormal{d} q=(\Delta q+G_1)\textnormal{d}t+ G_2\textnormal{d}{W}(t) & \textnormal{in } Q_T, \\
q= 0 &\textnormal{on } \Sigma_T ,\\
q(0,x)=q_0(x) & \textnormal{in } \mathcal{D},
\end{cases}
\end{equation}
where $G_i\in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}))$, $i=1,2$, and $q_0\in L^2(\Omega,\mathcal F_0;L^2(\mathcal{D}))$. An immediate consequence of \Cref{car_refined} is a Carleman estimate for a random parabolic equation. More precisely, we have the following.
\begin{lem} \label{lem:car_random}
For all $m\geq 1$, there exist constants $C>0$, $\lambda_0\geq 1$ and $\mu_0\geq 1$ such that, for any $q_0\in L^2(\Omega,\mathcal F_0; L^2(\mathcal{D}))$ and any $g\in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}))$, the corresponding solution $q$ to \eqref{eq:forw_gen} with $G_1=g$ and $G_2\equiv 0$ satisfies
\begin{align}
&{\mathbb{E}}\left(\int_{Q_T}\theta^2\lambda\mu^2\xi|\nabla q|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\theta^2\lambda^3\mu^4\xi^{3}|q|^2\,\textnormal{d}x\textnormal{d}t\right)\notag\\
& + {\mathbb{E}}\left(\int_{\mathcal{D}} \theta^2(T)|\nabla q(T)|^2\,\textnormal{d}x\right) + {\mathbb{E}}\left(\int_{\mathcal{D}} \lambda^2\mu^3e^{2\mu(6m+1)}\theta^2(T)|q(T)|^2 \,\textnormal{d}x\right) \notag \\ \label{car_sigma}
& \leq C{\mathbb{E}}\left(\int_{Q_T} \theta^2|g|^2\,\textnormal{d}x\textnormal{d}t+\iint_{\mathcal{D}_0\times(0,T)}\theta^2\lambda^3\mu^4\xi^3 |q|^2\,\textnormal{d}x\textnormal{d}t \right),
\end{align}%
for all $\mu\geq \mu_0$ and $\lambda\geq \lambda_0$.
\end{lem}
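Indeed, since $G_2\equiv 0$, equation \eqref{eq:forw_gen} can be regarded, for almost every $\omega\in\Omega$, as a deterministic heat equation of the form \eqref{eq:qsolheatSource} with source term $g(\omega,\cdot,\cdot)$; applying \Cref{car_refined} pathwise and taking expectation yields \eqref{car_sigma}.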
\subsection{A controllability result for a linear backward stochastic heat equation with source term and one control}
Inspired by the duality technique presented in \cite[Prop. 2.2]{liu14}, we present a controllability result for a linear backward stochastic heat equation with a source term. To this end, consider the linear control system given by %
\begin{equation}\label{eq:backward_source}
\begin{cases}
\textnormal{d} y=(-\Delta y+\chi_{\mathcal{D}_0}h+F)\textnormal{d}t+Y\textnormal{d}{W}(t) &\text{in }Q_T, \\
y=0 &\text{on }\Sigma_T, \\
y(T)=y_T &\text{in }\mathcal{D},
\end{cases}
\end{equation}
where $F\in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}))$ is a given fixed source term and $h\in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}_0))$ is a control.
In what follows, we consider constants $\mu$ and $\lambda$ large enough such that \eqref{car_sigma} holds. We define the space $\widetilde{\mathcal{S}}_{\lambda,\mu}:=\left\{F\in L^2_{\mathcal F}(0,T;L^2(\mathcal{D})):{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|F|^2\,\textnormal{d}x\textnormal{d}t\right)<+\infty\right\}$, endowed with the canonical norm. We have the following global null-controllability result for system \eqref{eq:backward_source}.
\begin{theo}\label{teo:contr_backward_source}
For any terminal datum $y_T\in L^2(\Omega,\mathcal F_T;L^2(\mathcal{D}))$ and any $F\in \widetilde{\mathcal{S}}_{\lambda,\mu}$, there exists a control $\widehat{h}\in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}_0))$ such that the associated solution $(\widehat{y},\widehat{Y})\in [L^2_{\mathcal F}(\Omega;C([0,T];L^2(\mathcal{D})))\cap L^2_{\mathcal F}(0,T;H_0^1(\mathcal{D}))]\times L^2_{\mathcal F}(0,T;L^2(\mathcal{D}))$ to system \eqref{eq:backward_source} satisfies $\widehat{y}(0)=0$ in $\mathcal{D}$, a.s. Moreover, the following estimate holds
\begin{align}\notag
{\mathbb{E}}&\left(\int_{Q_T}\theta^{-2}|\widehat{y}|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-2}{\mu^{-2}}\xi^{-2}|\widehat{Y}|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|\widehat{h}|^2\,\textnormal{d}x\textnormal{d}t\right) \\ \label{eq:est_control_weighted_spaces}
&\quad \leq C_1{\mathbb{E}}\left(\|y_T\|^2_{L^2(\mathcal{D})}\right)+C{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|F|^2\,\textnormal{d}x\textnormal{d}t\right),
\end{align}
where $C_1>0$ is a constant depending on $\mathcal{D},\mathcal{D}_0,\mu,\lambda$ and $C>0$ only depends on $\mathcal{D}$ and $\mathcal{D}_0$.
\end{theo}
\begin{rmk}
\label{rmk:uniquetrajectory_backward}
As before, using classical arguments, see e.g. \cite[Proposition 2.9]{LLT13}, from \Cref{teo:contr_backward_source} we can construct a linear continuous mapping that associates to every terminal datum $y_T\in L^2(\Omega,\mathcal F_T;L^2(\mathcal{D}))$ and every source term $F\in \widetilde{\mathcal{S}}_{\lambda,\mu}$ a trajectory $(\widehat{y}, \widehat{h})$ such that $\widehat{y}(0)=0$ in $\mathcal{D}$, a.s. and \eqref{eq:est_control_weighted_spaces} holds.
\end{rmk}
\begin{proof} The proof is very similar to that of \Cref{teo:contr_forward_source} and requires only some adaptations. We emphasize the main differences.
\smallskip
\textbf{Step 1}. For any $\epsilon>0$, let us consider the weight function $\gamma_\epsilon(t)$ given by
\begin{equation*}
\gamma_\epsilon(t):=
\begin{cases}
\gamma_\epsilon(t)=\gamma(t+\epsilon), \quad t\in [0,T/2-\epsilon], \\
\gamma_\epsilon(t)=1, \quad t\in[T/2-\epsilon,3T/4], \\
\gamma_\epsilon(t)=1+\left(1-\frac{4(T-t)}{T}\right)^{\sigma}, \quad t\in [3T/4,T], \\
\sigma \textnormal{ as in \eqref{def:sigma}}.
\end{cases}
\end{equation*}
Defined in this way, $\gamma_\epsilon$ does not blow up as $t\to 0^{+}$ and $\gamma_\epsilon(t)\leq \gamma(t)$ for $t\in[0,T]$. We set the corresponding weight $\varphi_\epsilon$ as in \eqref{eq:weights_tilde} by replacing the function $\gamma$ by $\gamma_\epsilon$. Also, we write $\theta_\epsilon=e^{\lambda\varphi_\epsilon}$.
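As in the forward case, since $\gamma_\epsilon\leq\gamma$ and $\varphi<0$, we have $\varphi\leq\varphi_\epsilon\leq 0$; hence $\theta_\epsilon^{-2}\leq \theta^{-2}$ and $\theta^{2}\theta_\epsilon^{-2}\leq 1$ in $Q_T$, facts that will be used below.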
We introduce the cost functional
\begin{align*}\notag
\mathcal I_\epsilon(h):=&\frac{1}{2}{\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}|y|^2\,\textnormal{d}x\textnormal{d}t\right)+\frac{1}{2}{\mathbb{E}}\left(\int_{0}^T\!\!\!\int_{\mathcal{D}_0}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|h|^2\,\textnormal{d}x\textnormal{d}t\right)\\
&+\frac{1}{2\epsilon}{\mathbb{E}}\left(\int_{\mathcal{D}}|y(0)|^2\,\textnormal{d}x\right)
\end{align*}
and consider the minimization problem
\begin{equation}\label{eq:prob_min_forw}
\begin{cases}
\min_{h\in \mathcal H} \mathcal I_\epsilon(h), \\
\textnormal{subject to equation \eqref{eq:backward_source},}
\end{cases}
\end{equation}
where \begin{equation*}
\mathcal H=\left\{h\in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}_0)):{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|h|^2\,\textnormal{d}x\textnormal{d}t\right)<+\infty\right\}.
\end{equation*}
It can be readily seen that the functional $\mathcal I_\epsilon$ is continuous, strictly convex and coercive. Therefore, the minimization problem \eqref{eq:prob_min_forw} admits a unique optimal solution that we denote by $h_\epsilon$. As in the proof of \Cref{teo:contr_forward_source} the minimizer $h_\epsilon$ can be characterized as
\begin{equation}\label{eq:h_epsilon_forw}
h_\epsilon=\chi_{\mathcal{D}_0}\lambda^3\mu^{4}\xi^3\theta^2q_\epsilon \quad\text{in } Q_T, \quad \text{a.s.},
\end{equation}
where $q_\epsilon$ verifies the random forward equation
\begin{equation}\label{eq:q_optim_forw}
\begin{cases}
\textnormal{d}{q_\epsilon}=(\Delta q_\epsilon+\theta^{-2}_\epsilon y_\epsilon)\textnormal{d}t &\text{in }Q_T, \\
q_\epsilon=0 &\text{on }\Sigma_T, \\
q_\epsilon(0)=\frac{1}{\epsilon}y_\epsilon(0) &\text{in }\mathcal{D},
\end{cases}
\end{equation}
and where $(y_\epsilon,y_\epsilon(0))$ is obtained from $(y_\epsilon,Y_\epsilon)$, the solution to \eqref{eq:backward_source} with control $h=h_\epsilon$. Observe that, since $y_\epsilon\in L^2_{\mathcal F}(\Omega; C([0,T];L^2(\mathcal{D})))$, the evaluation of $y_\epsilon$ at $t=0$ is meaningful and \eqref{eq:q_optim_forw} is well-posed for any $\epsilon>0$. Also notice that there is no term containing $W(t)$, so \eqref{eq:q_optim_forw} can be regarded as a random equation. This greatly simplifies our task, since we only need to use the Carleman estimate of \Cref{lem:car_random} to deduce the uniform estimate for the solutions $(y_\epsilon,Y_\epsilon)$ in the next step.
\smallskip
\textbf{Step 2.} Using It\^{o}'s formula, we can compute $\textnormal{d}(y_\epsilon q_\epsilon)$ and deduce
\begin{align*}
{\mathbb{E}}\left(\int_{\mathcal{D}}y_\epsilon(T)q_\epsilon(T)\,\textnormal{d}x\right)&={\mathbb{E}}\left(\int_{\mathcal{D}}y_\epsilon(0)q_\epsilon(0)\,\textnormal{d}x\right)+{\mathbb{E}}\left(\int_{Q_T}(-\Delta y_\epsilon+F+\chi_{\mathcal{D}_0}h_\epsilon)q_\epsilon\,\textnormal{d}x\textnormal{d}t\right) \\
&\quad + {\mathbb{E}}\left(\int_{Q_T}(\Delta q_\epsilon+\theta^{-2}_\epsilon y_\epsilon)y_\epsilon\,\textnormal{d}x\textnormal{d}t\right)
\end{align*}
whence, using equations \eqref{eq:backward_source}, \eqref{eq:q_optim_forw}, and identity \eqref{eq:h_epsilon_forw}, we get
\begin{align}\notag
{\mathbb{E}}&\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\lambda^3\mu^4\xi^3\theta^2|q_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}_\epsilon|y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+\frac{1}{\epsilon}{\mathbb{E}}\left(\int_{\mathcal{D}}|y_\epsilon(0)|^2\,\textnormal{d}x\right) \\ \label{eq:id_prod_yq_forw}
&={\mathbb{E}}\left(\int_{\mathcal{D}}y_Tq_\epsilon(T)\,\textnormal{d}x\right)-{\mathbb{E}}\left(\int_{Q_T}F q_\epsilon \,\textnormal{d}x\textnormal{d}t \right).
\end{align}
In view of \eqref{car_sigma}, we use Cauchy-Schwarz and Young inequalities in the right-hand side of \eqref{eq:id_prod_yq_forw} to introduce the weight function as follows
\begin{align}\notag
{\mathbb{E}}&\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^2 \lambda^3\mu^4\xi^3|q_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}|y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+\frac{1}{\epsilon}{\mathbb{E}}\left(\int_{\mathcal{D}}|y_\epsilon(0)|^2\,\textnormal{d}x\right) \\ \notag
&\leq \delta\left[{\mathbb{E}}\left(\int_{\mathcal{D}}\lambda^2\mu^3\theta^{2}(T)|q_\epsilon(T,x)|^2\,\textnormal{d}x\right)+{\mathbb{E}}\left(\int_{Q_T}\theta^2 \lambda^3\mu^4\xi^3|q_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)\right]\\ \label{eq:des_prod_yq_forw}
&\quad + C_\delta\left[{\mathbb{E}}\left(\int_{\mathcal{D}}\lambda^{-2}\mu^{-3}\theta^{-2}(T)|y_T|^2\,\textnormal{d}x\right)+{\mathbb{E}}\left(\int_{Q_{T}}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|F|^2\,\textnormal{d}x\textnormal{d}t\right)\right]
\end{align}
with $\delta>0$. Applying inequality \eqref{car_sigma} to \eqref{eq:q_optim_forw} and using it to estimate in the right-hand side of \eqref{eq:des_prod_yq_forw}, we obtain, after taking $\delta>0$ small enough, that
\begin{align}\notag
{\mathbb{E}}&\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^{2}\lambda^3\mu^4\xi^3|q_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\theta_{\epsilon}^{-2}|y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+\frac{1}{\epsilon}{\mathbb{E}}\left(\int_{\mathcal{D}}|y_\epsilon(0)|^2\,\textnormal{d}x\right) \\ \notag
&\leq C\left[{\mathbb{E}}\left(\int_{\mathcal{D}}\lambda^{-2}\mu^{-3}\theta^{-2}(T)|y_T|^2\,\textnormal{d}x\right)+{\mathbb{E}}\left(\int_{Q_{T}}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|F|^2\,\textnormal{d}x\textnormal{d}t\right)\right]
\end{align}
for some constant $C>0$ only depending on $\mathcal{D}$ and $\mathcal{D}_0$. At this point, we have used the fact that $\theta^2 \theta_\epsilon^{-2}\leq 1$ for all $(t,x)\in Q_T$.
Recalling the characterization of the optimal control $h_\epsilon$ in \eqref{eq:h_epsilon_forw} we obtain
\begin{align}\notag
{\mathbb{E}}&\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|h_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}|y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+\frac{1}{\epsilon}{\mathbb{E}}\left(\int_{\mathcal{D}}|y_\epsilon(0)|^2\,\textnormal{d}x\right) \\ \label{eq:iden_uniform_forw}
&\leq C\left[{\mathbb{E}}\left(\int_{\mathcal{D}}\theta^{-2}(T)\lambda^{-2}\mu^{-3}|y_T|^2\,\textnormal{d}x\right)+{\mathbb{E}}\left(\int_{Q_{T}}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|F|^2\,\textnormal{d}x\textnormal{d}t\right)\right].
\end{align}
Now, our task is to add a weighted integral of the process $Y_\epsilon$ on the left-hand side of the above inequality. To do that, using It\^{o}'s formula and equation \eqref{eq:backward_source} with $h=h_\epsilon$ yields
\begin{align*}
\textnormal{d}(\theta^{-2}_\epsilon \lambda^{-2}\xi^{-2}y_\epsilon^2)=&(\theta^{-2}_\epsilon \lambda^{-2}\xi^{-2})_ty_\epsilon^2\textnormal{d}t+\theta^{-2}_\epsilon \lambda^{-2}\xi^{-2}Y_\epsilon^2\,\textnormal{d}t\\
&+2\theta^{-2}_\epsilon \lambda^{-2}\xi^{-2} y_\epsilon\left[(-\Delta y_\epsilon+\chi_{\mathcal{D}_0}h_\epsilon+F)\textnormal{d}t+Y_\epsilon\textnormal{d}{W}(t)\right]
\end{align*}
and, after some integrations by parts and substituting the terminal datum $y_\epsilon(T)=y_T$, we get
\begin{align}\notag
{\mathbb{E}}&\left(\int_{Q_T}\theta^{-2}_\epsilon \lambda^{-2}\xi^{-2}|Y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+2{\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}\lambda^{-2}\xi^{-2}|\nabla y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)\\ \notag
&+{\mathbb{E}}\left(\int_{Q_T}(\theta_\epsilon ^{-2}\lambda^{-2}\xi^{-2})_t|y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)={\mathbb{E}}\left(\int_{\mathcal{D}}\theta_\epsilon^{-2}(T)\lambda^{-2}\xi^{-2}(T)|y_T|^2\,\textnormal{d}x\right) \\ \notag
&-2{\mathbb{E}}\left(\int_{Q_T}\nabla(\theta_\epsilon^{-2} \lambda^{-2}\xi^{-2})\cdot\nabla y_{\epsilon}y_\epsilon\,\textnormal{d}x\textnormal{d}t\right)-2{\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}\lambda^{-2}\xi^{-2}y_\epsilon F\,\textnormal{d}x\textnormal{d}t\right) \\ \label{eq:iden_Y_weight_forw}
&-2{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta_\epsilon^{-2}\lambda^{-2}\xi^{-2}y_\epsilon h_\epsilon \,\textnormal{d}x\textnormal{d}t\right).
\end{align}
Observe that the term containing $y_T$ is well defined since, by construction, the weight $\theta_\epsilon^{-1}$ does not blow up at $t=T$. Also, notice that there is no term involving $y_\epsilon(0)$ since $\xi^{-1}(0)=0$ and the weight $\theta_\epsilon^{-1}$ does not blow up at $t=0$.
Let us analyze the term containing $(\theta_\epsilon^{-2}\lambda^{-2}\xi^{-2})_t$ in the left-hand side of the above identity. We split the integral as
\begin{align}\notag
{\mathbb{E}}\left(\int_{Q_T}(\theta^{-2}_\epsilon \lambda^{-2}\xi^{-2})_t|y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)&={\mathbb{E}}\left(\int_{0}^{3T/4}\!\!\!\int_{\mathcal{D}}(\theta_\epsilon^{-2}\lambda^{-2}\xi^{-2})_t|y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)\\ \label{eq:split_deriv}
&\quad +{\mathbb{E}}\left(\int_{3T/4}^{T}\!\int_{\mathcal{D}}(\theta_\epsilon^{-2}\lambda^{-2}\xi^{-2})_t|y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right).
\end{align}
We note that for $t\in[3T/4,T]$, $\gamma_\epsilon(t)=\gamma(t)$, so we can drop the dependence on $\epsilon$. Also notice that on this time interval $\gamma_t\geq 0$ and $1\leq \gamma \leq 2$. Thus, computing explicitly, we have
\begin{equation}\label{eq:iden_deriv_eps}
(\theta^{-2}\lambda^{-2}\xi^{-2})_t=-2\theta^{-2}\lambda^{-1}\frac{\gamma_t}{\gamma}\varphi \xi^{-2}-2\theta^{-2}\lambda^{-2}\frac{\gamma_t}{\gamma}\xi^{-2}.
\end{equation}
Recall that $\varphi<0$ and that $|\varphi|$ is bounded from below by a positive constant; hence, since $\gamma_t\geq 0$ and $\lambda$ is large, the second term in \eqref{eq:iden_deriv_eps} can be absorbed by the first one, and we obtain
\begin{equation}
(\theta^{-2}\lambda^{-2}\xi^{-2})_t \geq c \theta^{-2} \lambda^{-1} \gamma_t |\varphi| \xi^{-2}
\end{equation}
for all $t\in[3T/4,T]$, where $c>0$ only depends on $\mathcal{D}$ and $\mathcal{D}_0$. Therefore,
\begin{equation}\label{eq:est_deriv_34_T}
{\mathbb{E}}\left(\int_{3T/4}^{T}\!\int_{\mathcal{D}}(\theta_\epsilon^{-2}\lambda^{-2}\xi^{-2})_t|y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)\geq 0
\end{equation}
and this term can be dropped. For $t\in[0,3T/4]$, we can use expression \eqref{eq:iden_deriv_eps} (replacing everywhere the weights by their $\epsilon$-dependent counterparts) and the fact that $|\partial_t\gamma_\epsilon|\leq C\gamma_\epsilon^2$ to obtain
\begin{equation*}
\abs{(\theta_\epsilon^{-2}\lambda^{-2}\xi^{-2})_t}\leq C\theta_\epsilon^{-2}\lambda^{-1}\mu,
\end{equation*}
where the constant $C>0$ is uniform with respect to $\lambda$ and $\mu$. Therefore,
\begin{equation}\label{eq:est_0_34}
\abs{{\mathbb{E}}\left(\int_{0}^{3T/4}\!\!\!\int_{\mathcal{D}}(\theta_\epsilon^{-2}\lambda^{-2}\xi^{-2})_t|y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)} \leq C{\mathbb{E}}\left(\int_{0}^{3T/4}\!\!\!\int_{\mathcal{D}}\theta_\epsilon^{-2}\lambda^{-1}\mu|y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right).
\end{equation}
Thus, using formulas \eqref{eq:split_deriv} and \eqref{eq:est_deriv_34_T}--\eqref{eq:est_0_34} we deduce from \eqref{eq:iden_Y_weight_forw} that
\begin{align}\notag
{\mathbb{E}}&\left(\int_{Q_T}\theta_\epsilon^{-2}\lambda^{-2}\xi^{-2}|Y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+2{\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}\lambda^{-2}\xi^{-2}|\nabla y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)\\ \notag
&\leq C{\mathbb{E}}\left(\int_{\mathcal{D}}\theta^{-2}(T) \lambda^{-2}|y_T|^2\,\textnormal{d}x\right)+C{\mathbb{E}}\left(\int_{0}^{3T/4}\!\!\!\int_{\mathcal{D}}\theta_\epsilon^{-2}\lambda^{-1}\mu |y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)\\ \notag
&+2\abs{{\mathbb{E}}\left(\int_{Q_T}\nabla(\theta_\epsilon^{-2}\lambda^{-2}\xi^{-2})\cdot\nabla y_{\epsilon}y_\epsilon\,\textnormal{d}x\textnormal{d}t\right)}+2\abs{{\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}\lambda^{-2}\xi^{-2}y_\epsilon F\,\textnormal{d}x\textnormal{d}t\right)} \\ \label{eq:ineq_Y_weight_forw}
&+2\abs{{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta_\epsilon^{-2}\lambda^{-2}\xi^{-2}y_\epsilon h_\epsilon \,\textnormal{d}x\textnormal{d}t\right)}.
\end{align}
For the first term in the right-hand side, we have used that $\theta^{-2}_\epsilon(T)=\theta^{-2}(T)$ and $\xi^{-1}(T)\leq C$ for some $C>0$ only depending on $\mathcal{D}$ and $\mathcal{D}_0$.
Employing Cauchy-Schwarz and Young inequalities, we estimate the last three terms of the above inequality. For the first one, we have
\begin{align}\notag
2&\abs{{\mathbb{E}}\left(\int_{Q_T}\nabla(\theta_\epsilon^{-2}\lambda^{-2}\xi^{-2})\cdot\nabla y_{\epsilon}y_\epsilon\,\textnormal{d}x\textnormal{d}t\right)}\\ \label{eq:first_est_forw}
&\leq \delta {\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}\lambda^{-2}\xi^{-2}|\nabla y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right) + C(\delta){\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}\mu^2|y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)
\end{align}
for any $\delta>0$. Here, we have used that $|\nabla(\theta_\epsilon^{-2}\lambda^{-2}\xi^{-2})|\leq C\theta_\epsilon^{-2}\mu \lambda^{-1} \xi^{-1}$. For the second term, we get
\begin{align}\notag
2&\abs{{\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}\lambda^{-2}\xi^{-2}y_\epsilon F\,\textnormal{d}x\textnormal{d}t\right)} \\ \label{eq:second_est_forw}
&\leq {\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}\mu^2\lambda^{-1}\xi^{-1}|y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-3}\mu^{-2}\xi^{-3}|F|^2\,\textnormal{d}x\textnormal{d}t\right),
\end{align}
where we have used that $\theta_\epsilon^{-2}\leq \theta^{-2}$. For the last one, we readily have
\begin{align}\notag
2&\abs{{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta_\epsilon^{-2}\lambda^{-2}\xi^{-2}y_\epsilon h_\epsilon \,\textnormal{d}x\textnormal{d}t\right)}\\\label{eq:third_est_forw}
&\leq {\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}\mu^2|y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta_\epsilon^{-2}\lambda^{-4}\mu^{-2}\xi^{-4}|h_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right).
\end{align}
Using estimates \eqref{eq:first_est_forw}--\eqref{eq:third_est_forw} in \eqref{eq:ineq_Y_weight_forw} and taking $\delta>0$ small enough, we deduce after collecting similar terms that
\begin{align}\notag
{\mathbb{E}}&\left(\int_{Q_T}\theta_\epsilon^{-2}\lambda^{-2}\mu^{-2}\xi^{-2}|Y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}\lambda^{-2}\mu^{-2}\xi^{-2}|\nabla y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right) \\ \notag
&\leq C{\mathbb{E}}\left(\int_{\mathcal{D}}\theta^{-2}(T)\lambda^{-2}\mu^{-2}|y_T|^2\,\textnormal{d}x\right)+C{\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}|y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)\\\label{eq:ineq_Y_weight_final_forw}
&\quad +C{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|F|^2\,\textnormal{d}x\textnormal{d}t\right) +C{\mathbb{E}}\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta_\epsilon^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|h_\epsilon|^2 \,\textnormal{d}x\textnormal{d}t\right).
\end{align}
At this point, we have adjusted the powers of $\lambda$ and $\xi$ in the last term by using the fact that $\lambda^{-1}\xi^{-1}\leq C$ for some constant only depending on $\mathcal{D},\mathcal{D}_0$.
Finally, combining \eqref{eq:ineq_Y_weight_final_forw} and \eqref{eq:iden_uniform_forw} we get
\begin{align}\notag
{\mathbb{E}}&\left(\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta_\epsilon^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|h_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}(|y_\epsilon|^2+\lambda^{-2}\mu^{-2}\xi^{-2}|Y_\epsilon|^2)\,\textnormal{d}x\textnormal{d}t\right) \\ \notag
&+{\mathbb{E}}\left(\int_{Q_T}\theta_\epsilon^{-2}\lambda^{-2}\mu^{-2}\xi^{-2}|\nabla y_\epsilon|^2\,\textnormal{d}x\textnormal{d}t\right)+\frac{1}{\epsilon}{\mathbb{E}}\left(\int_{\mathcal{D}}|y_\epsilon(0)|^2\,\textnormal{d}x\right) \\ \label{eq:iden_uniform_final_forw}
&\leq C\left[{\mathbb{E}}\left(\int_{\mathcal{D}}\theta^{-2}(T)\lambda^{-2}\mu^{-2}|y_T|^2\,\textnormal{d}x\right)+{\mathbb{E}}\left(\int_{Q_{T}}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|F|^2 \,\textnormal{d}x\textnormal{d}t\right)\right],
\end{align}
for some positive constant $C$ only depending on $\mathcal{D},\mathcal{D}_0$.
\textbf{Step 3.} The last step is essentially the same as in the proof of \Cref{teo:contr_forward_source}. Since the right-hand side of \eqref{eq:iden_uniform_final_forw} is uniform with respect to $\epsilon$, we readily deduce that there exists $(\widehat{h},\widehat{y},\widehat{Y})$ such that
\begin{equation}\label{eq:weak_conv_forw}
\begin{cases}
h_\epsilon\rightharpoonup \widehat{h} &\textnormal{weakly in } L^2(\Omega\times(0,T);L^2(\mathcal{D}_0)), \\
y_{\epsilon}\rightharpoonup \widehat{y} &\textnormal{weakly in } L^2(\Omega\times(0,T);H_0^1(\mathcal{D})), \\
Y_\epsilon\rightharpoonup \widehat{Y} &\textnormal{weakly in } L^2(\Omega\times(0,T);L^2(\mathcal{D})).
\end{cases}
\end{equation}
Checking that $(\widehat{y},\widehat{Y})$ is the solution to \eqref{eq:backward_source} associated with $\widehat{h}$ can be done exactly as in the proof of \Cref{teo:contr_forward_source}, so we omit it.
To conclude, we notice from \eqref{eq:iden_uniform_forw} that $\widehat{y}(0)=0$ in $\mathcal{D}$, a.s. Also, from the weak convergence \eqref{eq:weak_conv_forw}, Fatou's lemma and the uniform estimate \eqref{eq:iden_uniform_forw} we deduce \eqref{eq:est_control_weighted_spaces}. This ends the proof of \Cref{teo:contr_backward_source}.
\end{proof}
\subsection{Proof of the nonlinear result for the backward equation}
Now, we are in a position to prove \Cref{th:semilinear_backward}. The proof is very similar to that of \Cref{th:semilinear_forward}, but we give it for the sake of completeness.
Let us fix the parameters $\lambda$ and $\mu$ in \Cref{teo:contr_backward_source} to sufficiently large values. Recall that, in turn, these parameters come from \Cref{car_refined} and should be selected as $\lambda\geq \lambda_0$ and $\mu\geq \mu_0$ for some $\lambda_0,\mu_0\geq 1$, so there is no contradiction.
Let us consider a nonlinearity $f$ fulfilling \eqref{eq:UniformLipschitzf} and \eqref{eq:fzero} and define
\begin{equation*}
\widetilde{\mathcal N}: F\in \mathcal{\widetilde{S}}_{\lambda,\mu}\mapsto f(\omega,t,x,y,Y)\in \mathcal{\widetilde{S}}_{\lambda,\mu},
\end{equation*}
where $(y,Y)$ is the trajectory of \eqref{eq:backward_source} associated to the data $y_T$ and $F$, defined by \Cref{teo:contr_backward_source} and \Cref{rmk:uniquetrajectory_backward}. In what follows, to abridge the notation, we simply write $f(y,Y)$.
We will check the following facts for the nonlinear mapping $\widetilde{\mathcal N}$.
\textbf{The mapping $\widetilde{\mathcal N}$ is well-defined}. To this end, we need to show that for any $F\in \mathcal{\widetilde{S}}_{\lambda,\mu}$, $\widetilde{\mathcal N}(F)\in \mathcal{\widetilde{S}}_{\lambda,\mu}$. From \eqref{eq:UniformLipschitzf} and \eqref{eq:fzero}, we have
\begin{align*}
\norme{\widetilde{\mathcal N}(F)}_{\mathcal{\widetilde{S}}_{\lambda,\mu}}^2&={\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|f(y,Y)|^2\,\textnormal{d}x\textnormal{d}t\right) \\
&\leq 2 L^2{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}\left[|y|^2+|Y|^2\right]\,\textnormal{d}x\textnormal{d}t\right) \\
&\leq 2 L^2 \lambda^{-1}\mu^{-2} \left[ {\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-2}\mu^{-2}\xi^{-2}|Y|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}|y|^2\,\textnormal{d}x\textnormal{d}t\right)\right] \\
&\leq 2L^2 \lambda^{-1}\mu^{-2}\left[C_1{\mathbb{E}}\left(\norme{y_T}^2_{L^2(\mathcal{D})}\right)+C{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|F|^2\,\textnormal{d}x\textnormal{d}t\right)\right]\\
&<+\infty,
\end{align*}
where we have used \eqref{eq:est_control_weighted_spaces} and that $\norme{\xi^{-1}}_\infty\leq 1$. This proves that $\widetilde{\mathcal N}$ is well-defined.
\textbf{The mapping $\widetilde{\mathcal N}$ is a strict contraction.} Let us consider $F_i\in \mathcal{\widetilde{S}}_{\lambda,\mu}$, $i=1,2$. From the properties of the nonlinearity $f$, we have
\begin{align*}\notag
&\norme{\widetilde{\mathcal N}(F_1)-\widetilde{\mathcal N}(F_2)}^2_{\mathcal{\widetilde{S}}_{\lambda,\mu}}\\
&\quad ={\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|f(y_1,Y_1)-f(y_2,Y_2)|^2\,\textnormal{d}x\textnormal{d}t\right) \\ \notag
&\quad \leq 2L^2\lambda^{-1}\mu^{-2} {\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-2}\mu^{-2}\xi^{-2}|Y_1-Y_2|^2\,\textnormal{d}x\textnormal{d}t+\int_{Q_T}\theta^{-2}|y_1-y_2|^2\,\textnormal{d}x\textnormal{d}t\right).
\end{align*}
Then applying \Cref{teo:contr_backward_source} to the equation associated to $F=F_1-F_2$, $y_T=0$, and using the corresponding estimate \eqref{eq:est_control_weighted_spaces}, we deduce from the above inequality that
\begin{align}\notag
\norme{\widetilde{\mathcal N}(F_1)-\widetilde{\mathcal N}(F_2)}^2_{\mathcal{\widetilde{S}}_{\lambda,\mu}} &\leq 2C L^2\lambda^{-1}\mu^{-2} {\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|F_1-F_2|^2\,\textnormal{d}x\textnormal{d}t\right) \\ \label{eq:ineq_map_back}
&= 2CL^2 \lambda^{-1}\mu^{-2}\norme{F_1-F_2}^2_{\mathcal{\widetilde{S}}_{\lambda,\mu}},
\end{align}
where $C=C(\mathcal{D},\mathcal{D}_0)>0$ comes from \Cref{teo:contr_backward_source}. Observe that all the constants on the right-hand side of \eqref{eq:ineq_map_back} are uniform with respect to $\lambda$ and $\mu$; thus, if necessary, we can increase the values of $\lambda$ and $\mu$ so that $2CL^2 \lambda^{-1}\mu^{-2}<1$. This yields that the mapping is strictly contractive.
Once we have verified these two conditions, it follows that $\widetilde{\mathcal N}$ has a unique fixed point $F$ in $\mathcal{\widetilde{S}}_{\lambda,\mu}$. Setting $(y,Y)$ to be the trajectory associated with this $F$, we observe that $(y,Y)$ is the solution to \eqref{eq:backward_semilinear} and verifies $y(0,\cdot)=0$ in $\mathcal{D}$, a.s. This concludes the proof of \Cref{th:semilinear_backward}.
\section{Further results and remarks}\label{sec:conclusion}
\subsection{A new Carleman estimate for forward equation as a consequence of \Cref{teo:contr_backward_source}}
The controllability result provided by \Cref{teo:contr_backward_source} yields, as a byproduct, a new global Carleman estimate for forward stochastic parabolic equations with a weight that does not vanish as $t\to T^-$. In fact, with the weights constructed in \eqref{eq:def_theta_tilde} and \eqref{eq:weights_tilde} (where again we drop the tilde notation for simplicity), we are able to prove the following result.
\begin{prop}\label{prop:car_forward_nonvanishing}
For all $m\geq 1$, there exist constants $C>0$, $\lambda_0\geq 1$ and $\mu_0\geq 1$ such that for any $q_0\in L^2(\Omega,\mathcal F_0;L^2(\mathcal{D}))$ and $G_i\in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}))$, $i=1,2$, the solution $q\in \mathcal W_T$ to \eqref{eq:forw_gen} satisfies
\begin{align*}
{\mathbb{E}}&\left(\int_{Q_T}\theta^2\lambda\mu^2\xi|\nabla q|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left( \int_{Q_T}\theta^2 \lambda^{3}\mu^{4}\xi^{3}|q|^2\,\textnormal{d}x\textnormal{d}t\right) + {\mathbb{E}}\left(\int_{\mathcal{D}}\theta^{2}(T) \lambda^2 {\mu^{2}} |q(T)|^2 \,\textnormal{d}x \right) \\
&\quad \leq C{\mathbb{E}}\left(\int_{Q_T}\theta^2|G_1|^2\,\textnormal{d}x\textnormal{d}t+\int_{Q_T}\theta^2\lambda^2\mu^2\xi^2|G_2|^2\,\textnormal{d}x\textnormal{d}t+\iint_{\mathcal{D}_0\times(0,T)}\theta^2\lambda^3\mu^4\xi^{3}|q|^2\,\textnormal{d}x\textnormal{d}t \right),
\end{align*}
for all $\mu\geq \mu_0$ and $\lambda\geq \lambda_0$.
\end{prop}
The proof of \Cref{prop:car_forward_nonvanishing} can be achieved by following the proof of \cite[Theorem 1.1]{liu14} with a few straightforward adaptations. For completeness, we give a brief sketch below.
The starting point is to use \Cref{teo:contr_backward_source} with $F=\theta^2 \lambda^3\mu^4\xi^3 q$ and $y_T=-\lambda^2\mu^{2}\theta^2(T)q(T)$, where $q$ is the solution to \eqref{eq:forw_gen} with given $G_1$ and $G_2$. Observe that the weight functions in these data are well defined and bounded. We also remark that since the solution $q$ belongs to $\mathcal W_T$, we have that $y_T=-\lambda^2\mu^2\theta^2(T)q(T)\in L^2(\Omega,\mathcal F_T;L^2(\mathcal{D}))$ and thus system \eqref{eq:backward_source} with these given data is well-posed.
Thus, from \Cref{teo:contr_backward_source}, we get that there exists a control $\widehat{h}\in L^2_{\mathcal F}(0,T;L^2(\mathcal{D}))$ such that the solution $\widehat{y}$ to
\begin{equation}\label{eq:backward_source_q}
\begin{cases}
\textnormal{d}\widehat{y}=(-\Delta \widehat{y}+\chi_{\mathcal{D}_0}\widehat{h}+\theta^2\lambda^3\mu^4\xi^3 q)\textnormal{d}t+\widehat{Y}\textnormal{d}{W}(t) &\text{in }Q_T, \\
\widehat{y}=0 &\text{on }\Sigma_T, \\
\widehat{y}(T)=-\theta^2(T)\lambda^2\mu^{2}q(T) &\text{in }\mathcal{D}.
\end{cases}
\end{equation}
satisfies $\widehat{y}(0)=0$ in $\mathcal{D}$, a.s. Moreover, the following estimate holds
\begin{align}\notag
{\mathbb{E}}&\left(\int_{Q_T}\theta^{-2}|\widehat{y}|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-2}\mu^{-2}\xi^{-2}|\widehat{Y}|^2\,\textnormal{d}x\textnormal{d}t\right)+{\mathbb{E}}\left(\int_{Q_T}\theta^{-2}\lambda^{-3}\mu^{-4}\xi^{-3}|\widehat{h}|^2\,\textnormal{d}x\textnormal{d}t\right) \\ \label{eq:est_for_carleman_weighted}
&\quad \leq C{\mathbb{E}}\left(\int_{\mathcal{D}}\lambda^2\mu^2\theta^2(T)|q(T)|^2\,\textnormal{d}x+\int_{Q_T}\theta^{2}\lambda^{3}\mu^{4}\xi^{3}|q|^2\,\textnormal{d}x\textnormal{d}t\right),
\end{align}
for some constant $C>0$ only depending on $\mathcal{D},\mathcal{D}_0$.
From \eqref{eq:forw_gen}, \eqref{eq:backward_source_q} and It\^{o}'s formula, we get
\begin{align*}
{\mathbb{E}}&\left(\int_{\mathcal{D}}\theta^2(T)\lambda^2\mu^{2}|q(T)|^2\,\textnormal{d}x\right)+{\mathbb{E}}\left(\int_{Q_T}\theta^{2}\lambda^{3}\mu^{4}\xi^{3}|q|^2\,\textnormal{d}x\textnormal{d}t\right)\\
&=-{\mathbb{E}}\left(\int_{Q_T}\widehat{y}G_1\,\textnormal{d}x\textnormal{d}t\right)-{\mathbb{E}}\left(\int_{Q_T}\widehat{Y}G_2\,\textnormal{d}x\textnormal{d}t\right)-{\mathbb{E}}\left(\int_0^{T}\!\!\!\int_{\mathcal{D}_0}\widehat{h}q\,\textnormal{d}x\textnormal{d}t\right).
\end{align*}
Using Cauchy-Schwarz and Young inequalities, together with \eqref{eq:est_for_carleman_weighted}, it can be obtained from the above identity that
\begin{align*}
{\mathbb{E}}&\left(\int_{\mathcal{D}}\theta^2(T)\lambda^2\mu^2|q(T)|^2\,\textnormal{d}x\right)+{\mathbb{E}}\left(\int_{Q_T}\theta^2\lambda^3\mu^4\xi^3|q|^2\,\textnormal{d}x\textnormal{d}t\right)\\
&\leq C{\mathbb{E}}\left(\int_{Q_T}\theta^2|G_1|^2\,\textnormal{d}x\textnormal{d}t+\int_{Q_T}\theta^2\lambda^2\mu^2\xi^2|G_2|^2\,\textnormal{d}x\textnormal{d}t+\int_{0}^{T}\!\!\!\int_{\mathcal{D}_0}\theta^2\lambda^3\mu^4\xi^3|q|^2\,\textnormal{d}x\textnormal{d}t\right).
\end{align*}
To add the integral containing $\nabla q$, it is enough to compute $\textnormal{d}(\theta^{2}\lambda\xi q^2)$ and argue as in Step 2 of the proof of \Cref{teo:contr_backward_source}. For brevity, we omit the details.
\subsection{Globally Lipschitz nonlinearities depending on the gradient state}
It would be interesting to extend \Cref{th:semilinear_forward} to the case where the nonlinearities $f$ and $g$ depend on the gradient of the state. More precisely, let us consider \begin{equation}\label{eq:forward_semilinear_grad}
\begin{cases}
\textnormal{d}{y}=(\Delta y + f(\omega,t,x,y, \nabla y)+\chi_{\mathcal{D}_0}h)\textnormal{d}t+(g(\omega,t,x,y,\nabla y)+H)\textnormal{d} W(t) &\text{in }Q_T, \\
y=0 &\text{on }\Sigma_T, \\
y(0)=y_0 &\text{in }\mathcal{D},
\end{cases}
\end{equation}
where $f$ and $g$ are two globally Lipschitz nonlinear functions. We may wonder whether \eqref{eq:forward_semilinear_grad} is small-time globally null-controllable. A good starting point seems to be to obtain a Carleman estimate for the backward equation \eqref{eq:system_z} with a source term $\Xi \in L_{\mathcal{F}}^2(0,T;H^{-1}(\mathcal{D}))$. This seems to be possible, according to \cite[Remark 1.4]{liu14}. By a duality argument, this would lead to a null-controllability result for the system \eqref{eq:sys_forward_source} similar to \Cref{teo:contr_forward_source}, with an estimate of $\rho \widehat{y}$ in $L_{\mathcal F}^2(0,T;H_0^1(\mathcal{D}))$, where $\rho$ is a suitable weight function. The details remain to be written.
\subsection{Extension of the method to other equations}
The method introduced in this article could probably be applied to other nonlinear equations for which compact embeddings of the solution spaces are not available and for which we are able to derive Carleman estimates in the spirit of \cite[Theorem 2.5]{BEG16}. For instance, it is well known that the Schrödinger equation has no regularizing effect, so there is a lack of compactness. To the best of our knowledge, the following question is still open. Let $f : \ensemblenombre{C} \rightarrow \ensemblenombre{C}$ be a globally Lipschitz nonlinearity and let $(T, \mathcal{D}, \mathcal{D}_0)$ be such that the so-called Geometric Control Condition holds. Is the system
\begin{equation*}\label{eq:schrodinger}
\begin{cases}
i \partial_t y =\Delta y + f(y) +\chi_{\mathcal{D}_0}h&\text{in }Q_T, \\
y=0 &\text{on }\Sigma_T, \\
y(0)=y_0 &\text{in }\mathcal{D},
\end{cases}
\end{equation*}
globally null-controllable? See \cite{Zua03} or \cite{Lau14} for an introduction to this problem. We also refer to \cite{Lu13} for results in the stochastic setting.
\renewcommand{\abstractname}{Acknowledgements}
\begin{abstract}
\end{abstract}
\vspace{-0.5cm}
The work of the first author was supported by the programme ``Estancias posdoctorales por M\'exico'' of CONACyT, Mexico. The work of the second author has been supported by the SysNum cluster of excellence of the University of Bordeaux.
\bibliographystyle{alpha}
\section{Derivation of the Linear Invariant Operator}
From a straightforward evaluation of the Liouville-von Neumann equation,
\begin{equation}
{d \hat{I}}/{d t} = {\partial \hat{I}}/{\partial t} + [\hat{I},\hat{H}]/(i\hbar) = 0,
\end{equation}
using the Hamiltonian given in
Eq. (\ref{1}) in the text, we can easily derive the linear invariant operator $\hat{I}$
that is given in Eq. (\ref{Lio}) in the text (see Ref. \cite{gso}). Notice
that the Hermitian adjoint of this operator, $\hat{I}^\dagger$, is also an invariant operator. By combining
the two equations for $\hat{I}$ and $\hat{I}^\dagger$, it is possible to
eliminate $\hat{p}$ and thereby obtain the expression
for $\hat{q}$ that appears in Eq. (\ref{po4}) of the text.
The expression for $\hat{p}$ can be obtained in a similar way.
By solving the eigenvalue equation of the invariant operator, Eq. (\ref{ive}),
in the configuration
space on the basis of the technique adopted in Ref. \cite{gco},
we obtain the eigenvalue
as
\begin{equation}
\lambda = \beta e^{i\omega t},
\end{equation}
where
$\beta = -i \sqrt{{m\omega}/{(2\hbar)}} Q_{0} e^{-i(\omega t + \varphi - \chi)}$,
and the eigenstate of the form
\begin{eqnarray}
\langle q| \phi \rangle &=& \sqrt{\frac{m\omega}{\hbar\pi}} \exp \Bigg[
e^{\gamma t/2}\frac{A q_{\rm p} - B q_{\rm p}^2 }{ \hbar} + C \Bigg], \label{(S1)}
\end{eqnarray}
where $q_{\rm p} = q-Q_{\rm p}(t)$ and
\begin{eqnarray}
A &=&\sqrt{2\hbar m \omega} \beta, \\
B &=&\frac{1}{2} m e^{\gamma t/2} \left(\omega + i {\gamma}/{2}\right), \\
C &=&\frac{iP_{\rm p}(t)q}{\hbar}+ \frac{\gamma t}{4}- \frac{\beta^2}{2}- \frac{|\beta|^2}{2} .
\end{eqnarray}
\section{Expectation Value of the Energy Operator}
We now show how to evaluate the expectation value of the energy operator.
Through a short calculation using the expression for $\hat{I}$
(and its Hermitian conjugate $\hat{I}^\dagger$),
the energy operator can be represented in terms of $\hat{I}$ and $\hat{I}^\dagger$ as
\begin{equation}
\hat{E} = \Bigg[\frac{\hbar}{4} \left( \frac{2\omega_0^2}{\omega} (2\hat{I}^\dagger \hat{I}+1)
- \varepsilon \hat{I}^2 - \varepsilon^* \hat{I}^{\dagger 2} \right)+ \sqrt{\frac{\hbar}{2}}
(\Theta \hat{I} + \Theta^* \hat{I}^\dagger )\Bigg] e^{-\gamma t} + E_{\rm p} ,
\label{(S2)}
\end{equation}
where $\varepsilon =\gamma [\gamma/(2\omega)+i]e^{-2i (\omega t + \chi)}$ and
\begin{eqnarray}
& &\Theta = \left[ \sqrt{\frac{\omega}{m}} e^{-\gamma t/2} \eta P_{\rm p}(t) + i
e^{\gamma t/2} \sqrt{\frac{m}{\omega}} \omega_0^2 Q_{\rm p}(t)
\right] e^{-i(\omega t + \chi)}, \label{(S3)} \\
& &E_{\rm p} = e^{-2\gamma t} \frac{P_{\rm p}^2(t)}{2m} + \frac{1}{2} m
\omega_0^2 Q_{\rm p}^2(t) , \label{(S4)}
\end{eqnarray}
with $\eta = 1-i\gamma /(2\omega)$.
Now by considering the fact that the eigenvalues of $\hat{I}$ and $\hat{I}^\dagger$
are $\lambda$ and $\lambda^*$ respectively,
we can easily identify the expectation value of the energy
operator, $\langle \psi |\hat{E} |\psi \rangle$, that is given in Eq. (\ref{6}) in the text.
Notice that $\hbar$ must not simply be set to zero at an early stage of
the evaluation under the pretext of obtaining
the classical limit; it should be kept until we arrive at the
final representation, Eq. (\ref{6}).
\section{Cantilever System}
A description of the cantilever system appears in Ref. \cite{spam}.
If we denote the effective mass of the cantilever as $m_{\rm eff}$, the force
acting on the lever is represented in the form
\begin{equation}
f(t)=[F_{\rm ext} + k(D_0 - a_0 \sin \omega_{\rm d} t)]/m_{\rm eff},
\end{equation}
where $F_{\rm ext}$ is the tip-sample force, $k(=m_{\rm eff} \omega_0^2)$ is the cantilever
spring constant, $D_0$ is the resting position of the cantilever base, $a_0$ is
the driving amplitude, and $\omega_{\rm d}$ is the drive frequency \cite{spam}.
\section{Damped Harmonic Oscillator with a Sawtooth Force}
We consider a damped harmonic oscillator driven by an external sawtooth
force with period
$\tau=2\pi/\omega_{\rm d}$.
The sawtooth force can be represented as $f(t)= f_0 t/(m \tau)$ over the
period $-\tau/2 < t < \tau /2$ (see Fig. 2), where $f_0$ is a constant that represents
the strength of the force. In this case, $f(t)$ can be rewritten as the
infinite series \cite{Lc}
\begin{equation}
f(t) = [{f_0}/{(\pi m)}] \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} \sin (n\omega_{\rm d} t).
\label{(S5)}
\end{equation}
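As a quick numerical sanity check (ours, not part of the reference), the truncated series above can be compared against the sawtooth $f(t)=f_0 t/(m\tau)$ on the interior of one period; the parameter values below are illustrative only.
\begin{verbatim}
import numpy as np

f0, m, tau = 1.0, 1.0, 2 * np.pi       # illustrative values
omega_d = 2 * np.pi / tau
t = np.linspace(-0.45 * tau, 0.45 * tau, 200)

exact = f0 * t / (m * tau)
n = np.arange(1, 201)[:, None]          # truncate the series at 200 terms
series = (f0 / (np.pi * m)) * np.sum(
    ((-1.0) ** (n + 1) / n) * np.sin(n * omega_d * t), axis=0)

print(np.max(np.abs(series - exact)))   # small away from the discontinuity
\end{verbatim}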
\section{Appendix}
\subsection{Practical Issues}
\label{sec:Apppendix_practical_issues}
We now describe some of the key practical issues of the framework and how we address them. %
\textbf{Optimal Planning.} We use value iteration (VI) \citep{Kneale1958DynamicPB} to optimally solve the core-state MDPs, which can have a number of states equal to the size of the dataset $N$. While in general storing a transition matrix with $N$ states can take $O(N^2)$ space, the core-state MDPs have sparse transitions with each state-action pair having at most $k$ successors, which our implementation exploits. Further, VI is highly parallelizable on GPUs, since the update of each state at each VI iteration can be done independently. We developed a simple GPU implementation of VI, which can provide between 20-1000x wall-clock speedup over its serial counterpart. Our current implementation can solve MDPs with a million states in less than 30 seconds and easily scale to MDPs with several million states and medium action spaces (up to 10). Thus, this implementation is adequate for the dataset sizes that can be expected in many offline RL settings, where often we are interested in performing well given limited data. For extremely large datasets, we expect that future work can develop effective sub-sampling approaches, similar to those used for scaling other non-parametric models, e.g. \cite{miche2009}. We will release an open-source implementation; further details are in Appendix \ref{sec:AppendixScalingVI}.
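To make the data layout concrete, the following sketch (ours, for illustration only; it is not the released implementation, and all array names are hypothetical) runs value iteration over a core-state MDP stored in the sparse successor format, where each state-action pair keeps only its $k$ successor indices and probabilities.
\begin{verbatim}
import numpy as np

def solve_core_mdp(T_idx, T_prob, R, gamma=0.99, tol=1e-4):
    # T_idx, T_prob: (N, A, k) successor indices / probabilities.
    # R: (N, A) DAC-MDP rewards. Names are illustrative only.
    N, A, K = T_idx.shape
    V = np.zeros(N)
    while True:
        # Q[s,a] = R[s,a] + gamma * sum_k T_prob[s,a,k] * V[T_idx[s,a,k]]
        Q = R + gamma * (T_prob * V[T_idx]).sum(axis=-1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q
        V = V_new
\end{verbatim}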
\textbf{Weighted Averaging:} The above DAC-MDP construction and theory is based on uniform averaging across kNN sets. In practice, however, most non-parametric regression techniques use weighted averaging, where the influence of a neighbor decreases with its distance. Our implementation also uses weighted averaging, which we found has little impact on overall performance (see Appendix), but can reduce sensitivity to the choice of $k$, especially for smaller datasets. In our implementation we compute normalized weights over kNN sets according to the inverse of distances. In particular, the modified DAC-MDP reward and transition functions become,
\begin{align}
\tilde{R}(s,a) &= \sum_{i\in kNN(s,a)} {\alpha(s,a,s_i,a_i)} (r_i - C\cdot d(s,a,s_i,a_i) )\\
\tilde{T}(s,a,s') &= \sum_{i \in kNN(s,a)} \alpha(s,a,s_i,a_i) \; I[s' = s'_i],
\end{align}
where $\alpha(s,a,s_i, a_i) = \frac{d'(s,a,s_i,a_i) }{\sum_{j\in kNN(s,a)} d'(s,a,s_j,a_j)}$, $d'(s,a,s_i, a_i) = \frac{1}{d(s,a,s_i,a_i) + \delta_d }$ and $\delta_d = 1e^{-5}$
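For illustration, a minimal sketch of this weighted-averaging construction for a single query pair $(s,a)$ is given below; the variable names are ours and do not refer to the actual code.
\begin{verbatim}
import numpy as np

DELTA_D = 1e-5

def dac_reward_and_transition(dists, rewards, next_state_ids, C):
    # dists, rewards, next_state_ids: arrays over the k nearest dataset
    # transitions returned by the kNN query for the pair (s, a).
    w = 1.0 / (dists + DELTA_D)
    alpha = w / w.sum()            # normalized inverse-distance weights
    r_tilde = float(np.sum(alpha * (rewards - C * dists)))
    t_tilde = {}                   # sparse distribution over next core states
    for a_i, s_next in zip(alpha, next_state_ids):
        t_tilde[s_next] = t_tilde.get(s_next, 0.0) + float(a_i)
    return r_tilde, t_tilde
\end{verbatim}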
\textbf{Optimized Policy Computation.} After solving the DAC-MDP, the policy computation for new states requires computing Q-values for each action via Equation \ref{eq:DAC-Q}, which involves a kNN query for each action. We find that we can reduce this computation by a factor of $|A|$ via the following heuristic, which performs just one kNN query at the state level. Here $kNN(s)$ returns the indices of the data tuples whose source states are the $k$ nearest neighbors of $s$.
\begin{align}
\tilde{\pi}(s) &=\arg\max_{a\in A} \frac{1}{k} \sum_{i\in kNN(s)} \alpha(s,s_i)\;\tilde{Q^*}(s'_i,a)
\end{align} where $d'(s,s_i) = \frac{1}{d(s,s_i) + \delta_d }$, $\alpha(s,s_i) = \frac{d'(s,s_i) }{\sum_{i\in kNN(s)} d'(s,s_i)}$ and $\delta_d$ is set to $1e^{-5}$. We have found that this approach rarely hurts performance and results in the same action choices. However, when there is limited data, there is some evidence that this computation can actually help since it avoids splitting the dataset across actions (see the Appendix for an ablation study).
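A minimal sketch of this state-level policy computation is shown below; \texttt{knn\_states} and \texttt{Q\_tilde} are assumed interfaces (a kNN index over core states and the solved DAC-MDP Q-table), not the actual API of our implementation.
\begin{verbatim}
import numpy as np

def dac_policy(s, knn_states, Q_tilde, delta_d=1e-5):
    # knn_states(s) -> (dists, next_ids): distances to the k nearest core
    # states and the dataset indices of their successor states s'_i.
    # Q_tilde: (N, |A|) array of solved DAC-MDP Q-values over core states.
    dists, next_ids = knn_states(s)
    w = 1.0 / (dists + delta_d)
    alpha = w / w.sum()
    # Distance-weighted average of successor Q-values, then greedy action.
    q = (alpha[:, None] * Q_tilde[next_ids]).sum(axis=0)
    return int(np.argmax(q))
\end{verbatim}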
\textbf{Hyperparameter Selection.} The choice of the cost parameter $C$ can significantly influence the DAC-MDP performance. At one extreme, if $C$ is large, then the resulting policy will focus on ``staying close'' to the dataset. At the other extreme, when $C=0$, there is no penalty for exploiting the under-represented parts of the model. In general, the best choice will lie between the extremes and must be heuristically and/or empirically selected. Here, we use the simple rule-of-thumb of setting $C$ to be on the order of magnitude of the observed rewards, so that it discourages exploitation but does not dominate the policy. In addition, if the online evaluation budget $N_e$ (see Section \ref{sec:preliminaries}) is greater than one, we also consider values that are orders of magnitude apart to span qualitatively different ranges.
The DAC-MDP construction above uses the same smoothness parameter $k$ for both building the MDP and computing the policy for unknown states via Equation \ref{eq:DAC-Q}. We have found it useful to use different values; in particular, there can be a benefit to using a larger value for the latter in order to reduce variance. Thus, our experiments will specify a value of $k$ used to construct the MDP and a value $k_{\pi}$ used to compute the policy. Our default setting is $k=5$ and $k_{\pi}=11$, with more values possibly considered depending on the evaluation budget $N_e$.
\subsection{Theoretial Proofs}
\label{sec:theoritical_proofs}
Let $\tilde{\pi}$ be the optimal policy of a DAC-MDP. We wish to bound the value $V^{\tilde{\pi}}(s)$ of $\tilde{\pi}$ in the true MDP in terms of the optimal value $V^*(s)$ for any state $s$. Using the following lemma from \cite{pazis2013}, it is sufficient to bound the \emph{Bellman Error} $\tilde{Q}(s,a) - B[\tilde{Q}](s,a)$ of $\tilde{Q}$ across all $s$ and $a$ with respect to the true MDP.
\vspace{1em}
\begin{lemma}
\label{lemma:bellman-error}\citep{pazis2013},Theorem 3.12
For any Q-function $Q$ with greedy policy $\pi$, if for all $s\in \mathcal{S}$ and $a\in \mathcal{A}$, $-\epsilon_{-} \leq Q(s,a) - B[Q](s,a) \leq \epsilon_+$, then for all $s\in \mathcal{S}$, $$V^{\pi}(s) \geq V^*(s) - \frac{\epsilon_- + \epsilon_+}{1-\gamma}$$.
\end{lemma}
In general, however, the Bellman Error can be arbitrarily large without further assumptions. Thus, in this work, we make Lipschitz smoothness assumptions on $B[\tilde{Q}]$. In particular, we assume that there is a constant $L(k,C)$ such that for any state-action pairs $(s,a)$ and $(s',a')$ we have $$\left|B[\tilde{Q}](s,a) - B[\tilde{Q}](s',a')\right| \leq L(k,C)\cdot d(s,a,s',a').$$
Given the smoothness assumption for any $(s,a)$, data sample $(s_i,a_i,r_i,s'_i)$, and a constant $L$ we define a constant $\Delta_{i}(s,a)$ such that $$B[\tilde{Q}](s,a) = B[\tilde{Q}](s_i,a_i) - \Delta_{i}(s,a).$$ Note that based on the smoothness assumption we have that $\left|\Delta_i(s,a)\right| \leq L(k,C)\cdot d(s,a,s_i,a_i)$.
Using this definition we introduce a new operator $\hat{B}$, which will allow us to relate $\tilde{B}$ to $B$.
\begin{align}
\hat{B}[Q](s,a) = \frac{1}{k}\sum_{i\in kNN(s,a)} r_i + \gamma \max_{a'} Q(s'_i,a') - \Delta_{i}(s,a).
\end{align}
The following lemma shows that this operator approximates $B$ with high probability.
\vspace{1em}
\begin{lemma}
\label{lemma:sampling}
For any data set $\mathcal{D}$ of size $N$, value of $k$, and any cost parameter $C$, if $\tilde{Q}$ is the optimal Q-function of the corresponding DAC MDP, then with probability at least $1-\delta$, for all $(s,a)\in \mathcal{S}\times\mathcal{A}$,
\begin{align*}
\left|\hat{B}[\tilde{Q}](s,a)-B[\tilde{Q}](s,a)\right| & \leq Q_{max}\, \epsilon(k,N,\delta), \\
\epsilon(k,N,\delta) &= \sqrt{\frac{1}{2k}\ln{\frac{2 M_{N,k} }{\delta}}}
\end{align*}
where $Q_{max} = \max_{(s,a)} \tilde{Q}(s,a)$ and $M_{N,k}$ is an upper bound on the number of distinct sets $kNN(s,a)$ over $\mathcal{S}\times\mathcal{A}$ (see the proof below).
\end{lemma}
\begin{proof}
In order to capture the variance of $\hat{B}[\tilde{Q}]$ associated with the transition dynamics, for each state-action pair $(s,a)$ and each nearest neighbor index $i\in kNN(s,a)$, define a random variable $X_i(s,a) = r_i + \gamma \max_{a'} \tilde{Q}(S'_i,a') - \Delta_i(s,a)$, where $S'_i \sim T(s_i,a_i,\cdot)$. Note that each term of $\hat{B}[\tilde{Q}](s,a)$ is a single sample of one $X_i(s,a)$. That is, $$\hat{B}[\tilde{Q}](s,a) = \frac{1}{k}\sum_{i\in kNN(s,a)} x_i(s,a), \mbox{ where } x_i(s,a)\sim X_i(s,a).$$ Also note that according to the definition of $\Delta_i(s,a)$ the expected value of each $X_i(s,a)$ is equal to $B[\tilde{Q}](s,a)$.
\begin{align*}
\mathbb{E}\left[X_i(s,a)\right] & = r_i + \gamma \mathbb{E}\left[\max_{a'} \tilde{Q}(S'_i,a')\right] - \Delta_i(s,a) \\
& = B[\tilde{Q}](s_i,a_i) - \Delta_i(s,a) \\
& = B[\tilde{Q}](s,a)
\end{align*}
Accordingly $\mathbb{E}\left[\hat{B}[\tilde{Q}](s,a)\right] = B[\tilde{Q}](s,a)$.
From the above, we can apply the Hoeffding inequality, which states that for $k$ independent random variables $X_1,\ldots, X_k$ with bounded support $a_i\leq X_i \leq b_i$, if $\bar{X} = \frac{1}{k}\sum_i X_i$ is the empirical mean, then for all $\epsilon > 0$, $$Pr \left( \left| \bar{X} - \mathbb{E}\left[\bar{X}\right] \right| \geq \epsilon\right) \leq 2\exp\left(\frac{-2k^2\epsilon^2}{\sum_i (b_i - a_i)^2}\right).$$ Applying this bound to $\hat{B}[\tilde{Q}](s,a)$ implies that: $$Pr \left( \left| \hat{B}[\tilde{Q}](s,a) - B[\tilde{Q}](s,a) \right| \geq \epsilon\right) \leq 2\exp\left(\frac{-2k\epsilon^2}{Q^2_{max}}\right),$$ which can be equivalently written as
$$Pr\left( \left| \hat{B}[\tilde{Q}](s,a) - B[\tilde{Q}](s,a) \right| \geq Q_{max}\sqrt{\frac{1}{2k}\ln{\frac{2}{\delta'}}}\right) \leq \delta'.$$
This bound holds for individual $s,a$ pairs. However, we need to bound the probability across all $s,a$ pairs. To do this note that the computed value $\hat{B}[\tilde{Q}](s,a)$ is based on the nearest neighbor set $kNN(s,a)$ and let $M_{N,k}$ denote an upper bound on the possible number of those sets across all $\mathcal{S}\times\mathcal{A}$. To ensure that the bound holds simultaneously for all such sets, we can apply the union bound using $\delta'=\delta/M_{N,k}$. This bounds the probability over all state-action pairs simultaneously by $\delta$.
$$Pr\left( \left| \hat{B}[\tilde{Q}](s,a) - B[\tilde{Q}](s,a) \right| \geq Q_{max}\sqrt{\frac{1}{2k}\ln{\frac{2 M_{N,k} }{\delta}}}\right) \leq \delta.$$
\end{proof}
\textbf{Theorem \ref{theorem:main}} For any data set $\mathcal{D}$ of size $N$, let $\tilde{Q}$ and $\tilde{\pi}$ be the optimal Q-function and policy for the corresponding DAC-MDP with parameters $k$ and $C$. If $B[\tilde{Q}]$ is Lipschitz continuous with constant $L(k,C)$, then with probability at least $1-\delta$, \begin{align*}
V^{\tilde{\pi}} & \geq V^* - \frac{2\left(L(k,C)\cdot \bar{d}_{max} + Q_{max}\epsilon(k,N,\delta)\right)}{1-\gamma}, \\ \epsilon(k,N,\delta) &= \sqrt{\frac{1}{2k}\ln{\frac{2 M_{N,k} }{\delta}}},
\end{align*}
which for L2 distance over a $d$-dimensional space yields $\epsilon(k,N,\delta) = O\left(\sqrt{\frac{1}{k}\left(d\ln{kN}+\ln{\frac{1}{\delta}}\right)}\right).$
\begin{proof}
The proof first will bound the Bellman error of $\tilde{Q}$ from above an below and then apply Lemma \ref{lemma:bellman-error}. We first decompose the Bellman error into two parts by adding and subtracting $\hat{B}[\tilde{Q}]$.
\begin{align*}
\tilde{Q}(s,a)-B[\tilde{Q}](s,a) = \underbrace{\left(\tilde{Q}(s,a) - \hat{B}[\tilde{Q}](s,a)\right)}_{\xi_d(s,a)} + \underbrace{\left(\hat{B}[\tilde{Q}](s,a) - B[\tilde{Q}](s,a)\right)}_{\xi_{sim}(s,a)}
\end{align*}
The first term corresponds to the error due to the non-zero distance of the $k$ nearest neighbors and the second term is due to sampling error.
Noting that $\tilde{B}$ and $\hat{B}$ only differ in one term, $\xi_d(s,a)$ can be simplified as follows.
\begin{align*}
\xi_d(s,a) &= \tilde{Q}(s,a) - \hat{B}[\tilde{Q}](s,a) \\
& = \tilde{B}[\tilde{Q}](s,a) - \hat{B}[\tilde{Q}](s,a) \\
& = \frac{1}{k}\sum_{i\in kNN(s,a)} \Delta_i(s,a) - C\cdot d(s,a,s_i,a_i)
\end{align*}
From this and the fact tht $\left|\Delta_i(s,a)\right|\leq L(k,C)\cdot d(s,a,s_i,a_i)$ we can immediately derive upper and lower bounds on $\xi_d(s,a)$ for all $s$ and $a$.
$$-\left(L(k,C) + C \right)\cdot \bar{d}_{max} \leq \xi_d(s,a) \leq \left(L(k,C) - C\right)\cdot \bar{d}_{max}$$
We can bound $\xi_{sim}(s,a)$ by a direct application of Lemma \ref{lemma:sampling}. Specifically, with probability at least $1-\delta$, for all $s$ and $a$, $\left|\xi_{sim}(s,a)\right|\leq \epsilon(k,N,\delta)$. Putting these together we get that with probability at least $1-\delta$, for all $s$ and $a$, $$-\left(\left(L(k,C) + C \right)\cdot \bar{d}_{max} + \epsilon(k,N,\delta)\right) \leq \tilde{Q}(s,a) - B[\tilde{Q}](s,a) \leq \left(L(k,C) - C\right)\cdot \bar{d}_{max} + \epsilon(k,N,\delta).$$
The proof is completed by applying Lemma \ref{lemma:bellman-error} to this bound.
Note: For a Euclidean distance metric, \cite[Chapter 27]{toth2017handbook} has established an upper bound on $M_{N,k}$. $$M_{N,k} = O\left(N^{\left \lceil d/2\right \rceil}k^{\left \lfloor d/2\right \rfloor + 1}\right) = O\left((kN)^{d/2+1}\right).$$
\end{proof}
\subsection{Ablation Study}
\label{sec:AblationStudy}
We highlight the deviations of the practical implementation of DAC-MDPs from the theory, described in Section \ref{sec:Apppendix_practical_issues}, namely, weighted averaging based on kNN distances ($WA$) and kNN over states instead of state-action pairs ($sKNN$). The theory for DAC-MDPs is derived for uniform averaging, i.e., the probability mass assigned to the candidate next states from the kNN state-action pairs is uniform irrespective of their relative distances. In contrast, weighted averaging normalizes the probability distribution according to the relative distances from the query state-action pair. Secondly, the theory states that we query the $k$ nearest neighbors for each state-action pair to calculate the Q-values for any unseen state. This entails that $|A|$ kNN calls have to be made for each action decision. However, we can reduce this by simply querying the $k$ nearest states instead of state-action pairs. We query for the $k$ nearest states when the $sKNN$ option is turned on. We conduct the ablation study on the full [100k] dataset as well as a smaller dataset comprising only 10\% of the full dataset. Below, in Figure~\ref{fig:CartPoleAblation}, we show the ablation study for each of these choices in the standard CartPole domain.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth,trim={15cm 10cm 15cm 10cm},clip]{"resources/CartPoleAblationStudy_K5_KP11.pdf"}
\caption{Ablation study for WA and sKNN in the CartPole domain. Greedy and eps-greedy policy returns for different sets of hyperparameters and dataset versions of size 100k (left) and 10k (right). Hyperparameters: $[k=5, k_\pi = 11, C= 1, N_e = 1]$}
\label{fig:CartPoleAblation}
\end{figure}
We find that for the larger dataset, neither weighted averaging nor the state-kNN approximation affects the performance of DAC-MDPs. However, there is a noticeable drop in performance for the $optimalBag$ dataset at smaller dataset sizes when the state-kNN approximation is turned off. This suggests that when the dataset is mostly composed of sub-samples of optimal trajectories, the data is not divided uniformly over the actions, resulting in less accurate estimates, especially when the data is limited and skewed towards a particular policy.
\subsection{Additional CartPole Experiments}
\label{sec:addtionalCartPole}
To further investigate the impact of the DAC-MDP parameters, we conduct experiments similar to those in Section \ref{sec:exploration} for different dataset sizes on CartPole. In addition to the greedy policy performance, we also track the performance of an $\epsilon$-greedy run of the policy to further distinguish the quality/robustness of the learned policies.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth,trim={11cm 1cm 16cm 1.5cm},clip]{"resources/cartpole_tunable_params_eps_greedy_10_50k_.pdf"}
\caption{ Eps-Greedy performance for CartPole on different dataset sizes. (a) Top row: dataset size 10k (b) Bottom Row: dataset size 50k }
\label{fig:cartpole_tunable_params_eps_greedy[10_50k]}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth,trim={11cm 1cm 16cm 1.5cm},clip]{"resources/cartpole_tunable_params_10_50k_.pdf"}
\caption{ Greedy performance for CartPole on different dataset sizes. (a) Top row: dataset size 10k (b) Bottom Row: dataset size 50k }
\label{fig:cartpole_tunable_params[10_50k]}
\end{figure}
We see a very similar trend (as with 100k) for the choice of the cost parameter $C$, even for the low-data regime of 10k. Figure \ref{fig:cartpole_tunable_params_eps_greedy[10_50k]} shows that for all datasets, when there is no penalty for under-represented transitions, i.e., $C= 0$, the policy is extremely poor. At the other extreme, when $C$ is very large, the optimal policy tries to stay as close as possible to transitions in the dataset. This results in good performance for the $OptimalBag$ dataset, since all actions are near optimal. However, the policies resulting from the $MixedBag$ and $RandomBag$ datasets fail. This is due to those datasets containing a large number of sub-optimal actions, which the policy should actually avoid for purposes of maximizing reward. Between these extremes, the performance is relatively insensitive to the choice of $C$.
DAC-MDPs, however, are more sensitive to the choice of $k$ in the small-dataset regime.
Figure \ref{fig:cartpole_tunable_params_eps_greedy[10_50k]}, second column, explores the impact of varying $k$ using fixed $k_\pi= 1$ and $C= 1$. The main observation is that there is a slight disadvantage to using $k= 1$, which defines a deterministic finite MDP, compared to $k >1$, especially for the non-optimal datasets. Moreover, there is also a slight disadvantage to using larger $k$ when the dataset is not dense. It is to be noted that although the $RandomBag$ dataset contains the same amount of experience, it is much denser than the other datasets, as random trajectories are quite short compared to optimal trajectories. This indicates that the optimal planner is able to benefit by reasoning about the stochastic transitions of the DAC-MDP for $k >1$. However, DAC-MDPs suffer from high values of $k$ when the dataset is sparse.
Finally, Figure~\ref{fig:cartpole_tunable_params_eps_greedy[10_50k]}, third column, varies the policy smoothing parameter $k_\pi$ from 1 to 101. Similar to the results for the varying smoothness parameter $k$, there is a benefit to using a value of $k_\pi>1$, but $k_\pi$ should not be chosen too large, depending on how large the dataset is.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth,trim={9cm 4cm 10cm 4cm},clip]{"resources/AllRepr2Row_100K__c2_.pdf"}
\caption{(a) Greedy performance for Atari using different learnt representations and evaluation candidate policies $N_e$[100k dataset] }
\label{fig:secondarAtariResult2}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth,trim={9cm 4cm 10cm 4cm},clip]{"resources/AllRepr2Row_2.5m__c2_.pdf"}
\caption{(a) Greedy performance for Atari using different learnt representations and evaluation candidate policies $N_e$ [2.5M dataset] }
\label{fig:secondarAtariResult3}
\end{figure}
\subsection{Additional Atari Experiments}
\textbf{Policy search for DQN and BCQ}: In our experiments we consider an evaluation protocol that makes the amount of online access to the environment explicit. In particular, the offline RL algorithm is allowed to use the environment to evaluate $N_e$ policies (e.g., each evaluation can be an average over repeated trials), which, for example, may be derived from different hyperparameter choices. It is not clear how many of these evaluations will actually be needed for Q-learning algorithms such as DQN, which are primarily designed for online learning. Even approaches focusing on offline learning are not spared from hyperparameter search and stopping criteria. Hence it is not clear how to evaluate Q-iterating policies such as DQN or BCQ for different values of $N_e$ when we are already using the best parameters reported in previous work.
However, we can still assume that we know the value of $N_e$ beforehand and tune the learning process accordingly. More specifically, given a fixed number of training iterations, the evaluation frequency can be set such that we complete $N_e$ evaluations by the end of the training. We can then use the last $N_e$ evaluation checkpoints to obtain the set of candidate policies and choose the best among them. It is worth noting that, even if we disregard the hyperparameter tuning, it is still not entirely fair to compare BCQ ($N_e=6$) directly with DAC-BCQ ($N_e=6$), as the DAC-MDP only has access to the very last representation. Moreover, DAC-MDPs do not need stopping criteria and are more robust to small representational changes. We show the policy search results for both DQN and BCQ for the 100k dataset as well as the 2.5M dataset in Figure~\ref{fig:secondarAtariResult2} and Figure~\ref{fig:secondarAtariResult3}, respectively.
\begin{figure}[h]
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth,trim={10cm 1cm 10cm 1cm},clip]{"resources/RollingBCQ_1M_Small.pdf"}
\caption{}
\label{fig:Atari[1M]lineplot}
\end{subfigure}
\begin{subfigure}[b]{0.535\textwidth}
\includegraphics[width=\textwidth,trim={10cm 1cm 10cm 1cm},clip]{"resources/search_plot_1M__BCQ_.pdf"}
\caption{}
\label{fig:Atari[1M]barplot}
\end{subfigure}
\caption{Results on the medium dataset size of 1M. (a) Using the BCQ representation: the BCQ agent is trained for 1M timesteps and evaluated on 10 episodes every 50k steps. At each of 6 uniformly distributed evaluation checkpoints, we use the internal representation to compile DAC-MDPs. We then evaluate the DAC-MDPs for $N_e=6$. (b) Final BCQ policy along with the corresponding performance of DAC-BCQ for different values of $N_e$.}
\end{figure}
Additionally, we run the experiment on a dataset of size 1M at different evaluation checkpoints, as done in the main paper. We trained an offline BCQ agent on the dataset for 1M iterations. This allows us to compare to these baselines as well as use the BCQ latent state representations for DAC-MDP construction. The performance of the corresponding BCQ policies is evaluated using 10 test episodes every 50K iterations. At selected BCQ evaluation iterations, we constructed DAC-MDPs using the latent state representation at that point. In particular, this first experiment considers the offline RL setting where $N_e= 6$, meaning that we can evaluate 6 policies (using 10 episodes each) corresponding to 6 different DAC-MDP parameter settings comprised of the combinations of $k= 5$, $C\in\{1,100,1M\}$, and $k_\pi \in\{11,51\}$. For each representation, the best-performing DAC-MDP setting at each point is then recorded. The entire 1M iteration protocol was repeated 3 times, and Figure~\ref{fig:Atari[1M]lineplot} shows the averaged curves with 90\% confidence intervals. Figure~\ref{fig:Atari[1M]barplot} investigates the performance at the final iteration for different values of $N_e$. All hyperparameters and normalization were selected as in the 100K experiment. We see that the DAC-MDPs can perform as well as or better than BCQ at most of the evaluation checkpoints, even for the larger dataset size and training iterations.
\subsection{Scaling Value Iteration With GPUs}
\label{sec:AppendixScalingVI}
Value iteration (VI) \citep{Kneale1958DynamicPB} successively approximates the value function, starting from an arbitrary initial estimate, by turning the Bellman optimality equation into an update rule. While VI is simple and able to calculate the exact solution for any MDP, one of its major disadvantages is that it is computationally expensive: a large number of Bellman backups has to be computed for each state before convergence, and the memory required grows quadratically with the number of states in the MDP. Hence, it is only natural to optimize value iteration using parallel computing platforms such as GPUs. \cite{Jhannsson2009GPUbasedMD} was one of the first to introduce a GPU-based MDP solver; in particular, they introduce two algorithms: Block Divided Iteration and Result Divided Iteration. While there have been works leveraging GPU-optimized value iteration for specific domains (e.g., \cite{7429422}, \cite{Wu2016GPUAcceleratedVI}), there does not exist a standard public library for GPU-optimized value iteration. To facilitate this, we implement a version that is a hybrid between the Block Divided Iteration and Result Divided Iteration approaches using a simple CUDA kernel, as described in pseudo-code \ref{psedo:GPU_vi_kernel}.
\begin{algorithm}
\caption{GPU Value Iteration Kernel}\label{alg:gpu_kernel}
\begin{algorithmic}[1]
\Procedure{BellmanBackup}{$*T_P,*T_I, *R, *V, *V', *Q',*\delta$}
\State $i \gets get\_thread\_id()$
\State $v_{max} \gets -\infty$
\For{$j \in range(A)$}
\State $v \gets R[i,j]$ \Comment{start from the immediate reward for action $j$}
\For{$k \in range(k_b)$} \Comment{$k_b$ is initialized externally as per the MDP build parameter $k_b$}
\State $P_{ss'} \gets T_P[i,j,k]$
\State $I_{s'} \gets T_I[i,j,k]$
\State $v \gets v + \gamma P_{ss'}V[I_{s'}]$ \Comment{$\gamma$ is the (externally fixed) discount factor}
\EndFor
\State $Q'[i,j] \gets v$
\State $v_{max} \gets max(v,v_{max})$
\EndFor
\State $V'[i] \gets v_{max}$
\State $\delta[i] = abs(V[i] - V'[i])$
\EndProcedure
\end{algorithmic}
\label{psedo:GPU_vi_kernel}
\end{algorithm}
\begin{algorithm}
\caption{GPU Value Iteration Function}\label{alg:gpu_api}
\begin{algorithmic}[1]
\Procedure{ValueIteration}{$tranDict, rewardDict,\delta_{min}$}
\State $T_p, T_I, R = get\_sparse\_representation(tranDict,rewardDict)$
\State $T_p, T_I, R = allocate\_gpu\_memory(T_p, T_I,R)$
\State $V, Q', \delta = allocate\_gpu\_memory\_for\_value\_vectors()$
\While{$max(\delta) > \delta_{min}$} \Comment{$\delta$ is recomputed by the kernel on every sweep}
\State $V' = allocate\_gpu\_memory(V.size())$
\State $RunGPUKernel(T_p, T_I, R, V, V', Q', \delta)$
\State $V = V'$
\State $release\_memory\_for(V')$
\EndWhile
\EndProcedure
\end{algorithmic}
\label{psedo:GPU_vi_function}
\end{algorithm}
For a compact representation of the transition matrix, we use a list of lists as our sparse representation. Here the transition matrix is divided into two matrices: one that holds the indices of the next states with non-zero transition probability ($T_I$) and another that holds the actual transition probabilities ($T_P$). Each thread takes a single row from these two matrices and the reward matrix to compute each state-action pair's Q-values and the new value of a state. The value vector is shared among the threads and synchronized after each Bellman backup operation.
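As a rough high-level illustration of the same computation (a sketch under our own naming, not the CUDA kernel used for the reported numbers), the backup over the $(T_P, T_I, R)$ representation can also be vectorized in PyTorch as follows.
\begin{verbatim}
import torch

@torch.no_grad()
def gpu_value_iteration(T_P, T_I, R, gamma=0.99, delta_min=1e-4,
                        device="cuda"):
    # T_P, T_I: (N, A, k_b) transition probabilities / successor indices.
    # R: (N, A) rewards, mirroring the sparse list-of-lists layout above.
    T_P, R = T_P.to(device), R.to(device)
    T_I = T_I.to(device).long()
    N, A, K = T_I.shape
    V = torch.zeros(N, device=device)
    while True:
        succ_V = V[T_I.reshape(-1)].reshape(N, A, K)  # gather successors
        Q = R + gamma * (T_P * succ_V).sum(dim=-1)
        V_new = Q.max(dim=1).values
        delta = (V_new - V).abs().max()
        V = V_new
        if delta < delta_min:
            return V, Q
\end{verbatim}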
To benchmark the GPU implementation's performance, we compare its run-time with a standard serial implementation of value iteration across MDPs of varying sizes. These MDPs are DAC-MDPs generated from different samples drawn from a large pool of datasets with continuous state vectors. The serial implementation is run on an Intel Xeon processor and does not use any CPU multi-processing. We plot the relative performance gain across different GPUs with varying numbers of CUDA cores. We consider 3 GPUs, namely the GTX 1080ti, RTX 8000, and Tesla V100, with CUDA core counts of 3584, 4608, and 6912, respectively. The GPU-optimized implementation provides anywhere between a 20-1000X boost in solving speed over its serial counterpart, as shown in Figure~\ref{fig:gpubenchmark}. Currently, our implementation of VI can solve MDPs with a million states in less than 30 seconds.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth,trim={3cm 1cm 2cm 1cm},clip]{"resources/gpu_benchmark.pdf"}
\caption{Comparison of the performance of the serial VI solver with its GPU-optimized implementation. The serial VI solver is benchmarked on an Intel Xeon CPU. We plot the performance gain over different MDP sizes.}
\label{fig:gpubenchmark}
\vspace{-2em}
\end{figure}
\newpage
\csection{Experimental Details}
\csubsection{Atari Preprocessing}
The Atari 2600 environment is processed in the same manner as previous work \citep{Mnih2015HumanlevelCT,Machado2018RevisitingTA} and we use consistent preprocessing across all tasks and algorithms.
\begin{center}
Table 1: Atari pre-processing details \\
\begin{tabular}{lr}
\hline Name & Value \\
\hline Sticky actions & Yes \\
Sticky action probability & 0.25 \\
Grey-scaling & True \\
Observation down-sampling & (84,84) \\
Frames stacked & 4 \\
Frameskip (Action repetitions) & 4 \\
Reward clipping & {[-1,1]} \\
Terminal condition & Game Over \\
Max frames per episode & $108 \mathrm{K}$ \\
\hline
\end{tabular}
\end{center}
\csubsection{Architecture and Hyper-parameters}
The same architecture and hyperparameters were used as in \cite{Fujimoto2019BenchmarkingBD}, with slight modifications to the architecture.
\begin{center}
Table 2: Architecture used by each Network \\
\begin{tabular}{lcl}
\hline
Layer & Number of outputs & Other details \\
\hline Input frame size & (4x84x84) & $-$ \\
Downscale convolution 1 & 12800 & kernel 8x8, depth 32, stride 4x4 \\
Downscale convolution 2 & 5184 & kernel 4x4, depth 32, stride 2x2 \\
Downscale convolution 3 & 3136 & kernel 3x3, depth 32, stride 1x1 \\
Hidden Linear Layer 1 & 512 & - \\
Hidden Linear Layer 2 & 16 & - \\
Output Layer & $|A|$ & - \\
\hline
\end{tabular}
\end{center}
\begin{center}
Table 3: All Hyperparameters for DQN and BCQ
\begin{tabular}{ll}
\hline Hyper-parameter & Value \\
\hline Network optimizer & Adam \cite{Kingma2015AdamAM} \\
Learning rate & 0.0000625 \\
Adam $\epsilon$ & 0.00015 \\
Discount $\gamma$ & 0.99 \\
Mini-batch size & 32 \\
Target network update frequency & $8 \mathrm{k}$ training iterations \\
Evaluation $\epsilon$ & 0.001 \\
Threshold $\tau$ (BCQ) & 0.3 \\
\hline
\end{tabular}
\end{center}
\begin{center}
Table 4: All Hyperparameters for $DQN^*$\\
\begin{tabular}{ll}
\hline Hyper-parameter & Value \\
\hline Replay buffer size & 1 million \\
Training frequency & Every 4th time step \\
Warmup time steps & $20 \mathrm{k}$ time steps \\
Initial $\epsilon$ & 1.0 \\
Final $\epsilon$ & 0.01 \\
$\epsilon$ decay period & $250 \mathrm{k}$ training iterations \\
\hline
\end{tabular}
\end{center}
\newpage
\csection{Visualization of 3D Navigation Environments}
\label{sec:appendix_visualize}
\begin{figure}[ht]
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{"resources/SimpleRoom.png"}
\caption{}
\label{fig:SimpleRoomAgentView}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{"resources/boxAndPillar.png"}
\caption{}
\label{fig:BoxAndPillarRoomAgentView}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{"resources/TunnelRoom.png"}
\caption{}
\label{fig:TunnelRoomAgentView}
\end{subfigure}
\caption{ (a) Agent view and top view for gridworld domains. (a) \textit{Simple Room}, (b) \textit{Box and Pillar Room} and (c) \textit{Tunnel Room}.}
\end{figure}
\begin{figure}[ht]
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth, height=\textwidth, trim={7cm 1cm 7cm 1cm},clip]{"resources/SimpleRoomNoRight.png"}
\caption{}
\label{fig:AllRooms}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth, height=\textwidth, trim={7cm 1cm 7cm 1cm},clip]{"resources/simpleRoom2.png"}
\caption{}
\label{fig:SimpleRoom}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth, height=\textwidth, trim={7cm 1cm 7cm 1cm},clip]{"resources/boxAndPillarRoom2.png"}
\caption{}
\label{fig:BoxAndPillarRoom}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth, height=\textwidth, trim={7cm 1cm 7cm 1cm},clip]{"resources/tunnelRoom2.png"}
\caption{}
\label{fig:TunnelRoom}
\end{subfigure}
\caption{(a) \textbf{Simple Room} \textit{Solid arrow}: policy rollout of standard DAC-MDP. \textit{Dotted arrow:} policy rollout of modified DAC-MDP ;Right Turn Penalized . (b) \textbf{Simple Room} \textit{Solid arrow:} policy rollout of standard DAC-MDP. \textit{Dotted arrow:} policy rollout of modified DAC-MDP; Left Turn Penalized .
(c) \textbf{ Box and Pillar Room} \textit{Dotted Arrow:} policy rollout of DAC-MDP solved with small discount factor [short-term planning]. \textit{Solid Arrow:} Policy rollout of DAC-MDP solved with Large Discount Factor [long-term planning] (d) \textbf{Tunnel Room }\textit{ Dotted Arrow:} Policy rollout of standard DAC-MDP. No stochasticity Introduced. [optimal policy]. \textit{Solid Arrow:} Poicy rollout of modified DAC-MDP with added Stochasticity in dynamics. [safe policy]}
\end{figure}
\end{document}
\newcommand{\csection}[1]{
\vspace{-0.04in}
\section{#1}
\vspace{-0.04in}
}
\newcommand{\csubsection}[1]{
\vspace{-0.02in}
\subsection{#1}
\vspace{-0.02in}
}
\newcommand{\csubsubsection}[1]{
\vspace{-0.01in}
\subsubsection{#1}
\vspace{-0.01in}
}
\setlength{\floatsep}{1ex plus 0ex minus 0.5ex}
\setlength{\textfloatsep}{4ex plus 0ex minus 0ex}
\setlength{\abovecaptionskip}{1ex plus 0ex minus 0ex}
\setlength{\belowcaptionskip}{2ex plus 0ex minus 0ex}
\abovedisplayskip 5.0pt plus2pt minus2pt%
\belowdisplayskip \abovedisplayskip
\section{Introduction}
Collectible card games (CCGs) involve buying some cards, picking a subset or \emph{deck} of those cards, and playing against someone else who separately picked their own deck.
Collectible card games are fun and popular! In 2017 Hearthstone reported over 70 million registered players \cite{HearthstonePlayers}.
With the release of the Hearthstone set, The Boomsday Project, Blizzard introduced a number of puzzles related to the game. All of the puzzles require changing the board to match some desired state in a single turn. The puzzle types are: ``Lethal'', ``Mirror'', ``Board Clear'', and ``Survival''.
In Lethal, the player must reduce their opponents health to zero. In Mirror, the player must make both sides of the board exactly the same. This means both boards must have minions of the same type in the same order, and any damage and status effects on those minions must be the same. In Board Clear, the player must ensure there are no more minions on the board, usually by destroying ones already in play. In Survival, the player must return their character to full health.
We will focus on the problem of finding lethal (analogous to ``mate-in-1'' from Chess) since it is the most common puzzle type; however, our proofs can be adapted to show \ccNP-hardness for the other three puzzle types. Even before Blizzard released The Boomsday Project, there were numerous collections of challenging ``lethal puzzles'' available online, e.g. on websites such as \href{http://hearthstonepuzzles.net/}{\color{blue} hearthstonepuzzles.net}, \href{http://www.hsdeck.com/forum/puzzles/}{\color{blue} hsdeck.com}, and \href{http://www.reddit.com/r/HearthPuzzle/}{\color{blue} reddit.com} (also see~\cite{Kotaku}). The frequent difficulty of such lethal puzzles leads to the consideration of the formal computational complexity of such puzzles - can these problems be shown to be computationally intractable under standard complexity-theoretic assumptions?
The Boomsday Project Labs allow for carefully designed situations with non-random decks, cards with reduced costs, complex board states, and even cards that are not normally available in the game. These are all useful tools for designing both puzzles and hardness proofs, however, we are also interested in the problem of finding lethal in which it would occur within a game itself. For this reason, we restrict our proofs to using cards normally available to players and we give a description of how the board state used in the reduction could have been created in a game of Hearthstone. However, for one proof, we do extensively use the puzzle property of the player knowing the contents and order of their deck.
\textbf{Related work.}
The body of work on the computational complexity of games and puzzles has become quite expansive; however, only a small amount of it considers the mate-in-one question.
Although deciding who will win in a two-player game is frequently \ccPSPACE-
or \ccEXP-complete,\footnote{See~\cite{GPC} for several examples.} the problem of `mate-in-1' or `finding lethal' is often far less computationally complex because many games have only a polynomial number of moves on any given turn and evaluating whether the new state of the board results in a win is computationally easy.
Two examples where this problem is interesting are Conway's Phutball for which mate-in-1 is \ccNP-complete~\cite{Phutball} and Checkers for which mate-in-1 is in \ccP~\cite{Checkers}.
Although a fair amount of academic study has gone into collectible card games~\cite{Ward-2009a,zhang2017improving}, less is known about their computational complexity. Magic: The Gathering, perhaps the most well-known CCG, is one example that has been analyzed.
A recent paper shows that deciding who wins in a game of Magic is undecidable~\cite{MtGTuring, churchill2019magic}. It has been shown that simply deciding if a move is legal is \cccoNP-complete~\cite{chatterjee2016complexity}. Mate-in-1 and Mate-in-2 for another CCG, Android Netrunner, were shown to be weakly \ccNP-hard~\cite{Netrunner}.
\section{Hearthstone}
\label{sec:Hearthstone}
Hearthstone is a popular online CCG made by Blizzard and themed after World of Warcraft. Players are able to purchase virtual cards with which to construct their decks. The game consists of players taking actions on their own turns, including casting spells, summoning and attacking with minions, and controlling heroes with the objective of reducing the enemy hero's health to zero.
New cards come out regularly in sets. Only the base set and most recent sets are allowed in the Standard format. However, all cards are allowed in the Wild format. Blizzard also occasionally changes cards to adjust game-play. Configurations of Hearthstone game considered here take place in the Wild format at the release of The Boomsday Project.\footnote{For further details of the rules of Hearthstone, see \url{http://www.hearthstone.com}.} There are also Solo Adventures which can have unique rules and cards. The Boomsday Project Lab is an example of a Solo Adventure which includes custom rules and cards specifically to facilitate mate-in-1 like puzzles for Hearthstone.
\subsection{Rules Overview}
Here we present a very brief summary of some of the basic rules in Hearthstone which are relevant to the proofs. The cards themselves often have text which specifies additional abilities and rules that may make game-play differ from the typical behavior described in this section.
In Hearthstone, players use \emph{mana} to pay for cards and abilities. Each turn they gain a \emph{mana crystal}, up to 10, and then gain mana equal to their mana crystals. Players also have a \emph{hero} which has \emph{health} and a \emph{hero power}. If a player's hero is ever reduced to zero or less health, that player loses. Hero powers are abilities that cost 2 mana and can be used once a turn.
There are four main types of cards which can be played: \emph{minions}, \emph{spells}, \emph{weapons}, and \emph{traps}.
\textbf{Spells} typically have a one-time effect, such as healing, drawing cards, or doing damage, and are discarded after they are played. Minions are played onto the \emph{Battlefield}. Each player cannot have more than 7 minions on the battlefield at a time.
\textbf{Minions} have \emph{attack} and \emph{health} which are both non-negative integers. If a minion's health is reduced to zero or lower, it is removed from the battlefield. Minions and Heroes can attack once a turn if they have a positive attack value. When a minion (or hero) attacks, the player chooses an opponent's minion or hero as the target. If the opponent has a minion with \emph{taunt} then a minion with taunt must be selected as the target for any attacks. The attacking and attacked cards simultaneously deal damage to each other equal to their attack value. When a card takes damage its health is reduced by that amount. Minions can also have \emph{abilities} which affect game-play while they are on the battlefield. They can also have \emph{battlecry} or \emph{deathrattle} which triggers an effect (similar to a spell) when the minion is played or dies, respectively. Minions without card text (abilities, battlecry, deathrattle) are called \emph{vanilla} minions. For example, \hyperref[hscard:raptor]{Bloodfen Raptor} and \hyperref[hscard:pitfighter]{Pit Fighter} are vanilla minions.
\textbf{Weapons} give a hero an attack value. They also have a \emph{durability} which is reduced by 1 every time that hero attacks. If a weapon reaches zero durability it is destroyed. If a player plays a weapon while they already have one in play, the original weapon is destroyed and replaced by a new one.
\textbf{Traps} are not used in any of these proofs as their effects trigger on the opponents turn in response to some action taken by the opponent.
Not all cards can be included in a deck, most notably each hero has a class and decks can only contain \emph{neutral} cards or cards from their class. Decks must normally contain exactly 30 cards, no more than 1 copy of any legendary card and no more than 2 copies of any other card. However, during the game cards may add or remove cards from players' decks and the above constraints only apply while building decks, not in the middle of a game. Games generally begin with cards in a deck being randomly shuffled. However, in The Boomsday Lab puzzles, players might have pre-determined decks with a specific card ordering.
\subsection{Generalizing the game}
In the game of Hearthstone as available to players, the time and complexity of games are limited in several ways.
For instance, each turn has a time limit of 75 seconds (plus animations), games are limited to 89 turns, decks are limited to 60 cards, hands to 10 cards, and boards to 14 minions. The Boomsday Lab puzzles do not have a time limit.
In comparison, the computational complexity of problems is considered as the problem size grows to infinity.
In order to formalize finding lethal into a problem that can be analyzed, we consider generalizing the game in one of several ways, enabling puzzles of arbitrarily large size.
For Hearthstone, we see three natural generalizations: arbitrarily large boards, hands, and decks.
The game configurations obey all rules of Hearthstone, except that turns may take arbitrarily long, they may use arbitrarily many turns (and thus played cards and card copies) to reach, and will be either:
\begin{itemize}
\item \emph{Board-scaled}: the board may have arbitrarily many minions (beyond the~7 permitted in the game).
\item \emph{Hand-scaled}: players' hands may have arbitrarily many cards (beyond the~10 permitted in the game).
\item \emph{Deck-scaled}: players' decks may have arbitrarily many cards (beyond the~60 permitted in the game).
\end{itemize}
Unless otherwise stated, the configurations all occur at turns with~10 mana.
The \emph{lethal problem} of a configuration of Hearthstone is as follows: can the current player reduce their opponent's health to zero this turn?
For the Deck-scaled versions of the game we make a further alteration: each player knows the entire content of her deck, including card ordering (i.e. each player has \emph{perfect information} about her deck).
Normally cards are drawn uniformly at random from those in the deck.
It would be very interesting to know whether the Deck-scaled version of the game remains hard (or is harder) with random draw.
\subsection{A Preliminary Combo}
\label{sec:prelimcombo}
The reductions below involve large numbers of specific cards.
Although the Boomsday Lab puzzles allow us the freedom to design the precise board state, we show a way of generating cards in a game.
Here we describe a sequence of plays yielding arbitrarily many copies of a desired set of cards (with the caveat of working with very low probability). In particular, many of the cards generate random cards of a given type. In these cases, we assume they happen to generate exactly the cards we desire.
\textbf{The setup.}
On the prior turn, the opponent plays a \hyperref[hscard:millhouse]{Millhouse Manastorm}, causing all spells cast this turn to cost~0 mana.
Our board contains a \hyperref[hscard:brann]{Brann Bronzebeard}, which causes our battlecries to trigger twice, and another vanilla minion.
Playing \hyperref[hscard:cabalists]{Cabalist's Tome} adds three random Mage spells to your hand. In this case, we assume it generates three copies of \hyperref[hscard:unstable]{Unstable Portal} which adds a random minion to your hand and reduces its mana cost by 3.
Playing two of the three \hyperref[hscard:unstable]{Unstable Portal}s generates a \hyperref[hscard:spellslinger]{Spellslinger} and \hyperref[hscard:void]{Void Terror}.
\textbf{A cyclic play sequence.}
Since \hyperref[hscard:spellslinger]{Spellslinger} and \hyperref[hscard:void]{Void Terror} cards have mana cost~3, when obtained from \hyperref[hscard:unstable]{Unstable Portal} they have cost~0. \hyperref[hscard:spellslinger]{Spellslinger} has a battlecry which adds a random spell to each player's hand.
Playing \hyperref[hscard:spellslinger]{Spellslinger} yields a \hyperref[hscard:cabalists]{Cabalist's Tome} and a second spell, due to \hyperref[hscard:brann]{Brann Bronzebeard} causing its battlecry to trigger twice. \hyperref[hscard:void]{Void Terror} has a battlecry which destroys adjacent minions.
Playing \hyperref[hscard:void]{Void Terror} between the played \hyperref[hscard:spellslinger]{Spellslinger} and vanilla minion destroys them, recovering space on the board.
In our hand is now an arbitrary spell (generated from \hyperref[hscard:spellslinger]{Spellslinger}), an arbitrary minion (generated from \hyperref[hscard:unstable]{Unstable Portal}), and a new copy of \hyperref[hscard:cabalists]{Cabalist's Tome}.
No mana has been spent, so this process can be repeated, obtaining a new arbitrary spell and a minion at each iteration.
\textbf{Playing cards.}
If we need to actually play these cards, we can generate \hyperref[hscard:innervate]{Innervates} to gain mana and alternate between generating \hyperref[hscard:void]{Void Terrors} and some other minion that costs 3 mana or less.
This combo requires 4~free minion slots (out of a maximum of~7) and 4~free hand slots, leaving 3 board slots and 6 hand slots for other aspects of later constructions.
\textbf{Obtaining the setup.}
If neither hero is a mage, we could have generated the initial \hyperref[hscard:cabalists]{Cabalist's Tome} by playing \hyperref[hscard:yogg]{Yogg-Saron, Hope's End}, after having cast at least one spell during the game, and having it cast an \hyperref[hscard:unstable]{Unstable Portal} that generated a \hyperref[hscard:spellslinger]{Spellslinger}.
\section{NP-hardness for Hearthstone}
Here we give three different proofs for the three different generalized versions of Hearthstone. A large battlefield size is considered in Section~\ref{sec:board}, a large hand size is considered in Section~\ref{sec:hand}, and a large deck size is considered in Section~\ref{sec:deck}. The reductions are from \threepart{} and \twopart{} and use similar ideas. In all cases the opponent will have minions with taunt which we must destroy to be able to attack the opponent's hero. These will encode target sums and the attack values of our minions will encode our set of numbers which we want to partition. At this high level the reductions are very simple; the majority of the complication comes from properly constructing the needed attack and health values with cards in the game and a limited hand/deck/board space. It is also important to note that because some cards apply multiplicative factors to attack and health our \twopart{} reductions actually also yield strong NP-hardness for finding lethal in Hearthstone.
\subsection{Hardness of Board-Scaled Lethal}
\label{sec:board}
\begin{theorem}
The lethal problem for board-scaled instances of Hearthstone is \ccNP-hard.
\end{theorem}
\begin{proof}
The reduction is from \threepart.
Let $A = \{a_1, a_2, \dots, a_{3n}\}$ be the input multiset of positive integers that sum to $S$.
The goal is to partition $A$ into $n$ parts that each sum to $S/n$.
The game state is as follows, and is described from the first-person perspective of the current player's turn.
\textbf{Your hero, hand, deck, and board.}
Your hero is Anduin (Priest) with 1 health and \hyperref[hscard:lightsjustice]{Light's Justice} equipped (from \hyperref[hscard:bling]{Blingtron 3000}). \hyperref[hscard:lightsjustice]{Light's Justice} is a weapon allowing the hero to attack for 1 damage.
Your hand and deck are empty.
Your board consists of $3n$ vanilla minions with attack values $4a_1, 4a_2, \dots, 4a_{3n}$ and 3 health each.
\textbf{The opponent's hero, hand, deck, and board.}
The opponent's hero is Valeera Sanguinar (Rogue) with~1 health.
The opponent's board consists of $n$ minions, each with taunt, 5~attack, $4S/n$ health, and no other special text.
\textbf{Lethal strategies.}
To win, you must kill all $n$ of the opponent's minions, then do~$1$ damage to the opponent.
Attacking any minion with your weapon causes you to die.
So the opponent's minions must be killed with your minions.
The total attack of your minions is exactly $4S$, so to kill all enemy minions, each minion must not overdamage, i.e. each minion must do exactly its attack damage to some opponent minion.
Thus all of your minions must attack the enemy minions in a way that corresponds exactly to partitioning $a_1, a_2, \dots, a_{3n}$ (your minions' attack values) into $n$ groups of $S/n$ each (your opponent's minions' health values).
Such partitions are exactly the solutions to the \threepart{} instance.
\textbf{Achieving your board state.}
Your board is constructed by using the preliminary combo to generate $3n$ copies of \hyperref[hscard:duskboar]{Duskboar} and $S-n$ copies of \hyperref[hscard:blessing]{Blessing of Kings}.
\hyperref[hscard:duskboar]{Duskboar}s have $4$~attack and $1$~health and the \hyperref[hscard:blessing]{Blessing of Kings} each add $4$ attack and $4$ health.
These are cast on the \hyperref[hscard:duskboar]{Duskboar}s such that their attacks correspond to the values $4a_1, 4a_2, \ldots, 4a_{3n}$.
The extra minions needed for the combo are removed later via combo-generated \hyperref[hscard:assassinate]{Assassinate}s.
\textbf{Achieving the opponent's board state.}
The opponent's board can be constructed using the preliminary combo from Section~\ref{sec:prelimcombo}, where the combo generates and the opponent plays $n$ \hyperref[hscard:heckler]{Evil Hecklers} and $S-n$ \hyperref[hscard:blessing]{Blessing of Kings}.
\hyperref[hscard:heckler]{Evil Hecklers} have 5~attack, 4~health, and taunt.
\hyperref[hscard:blessing]{Blessing of Kings} gives a minion $+4$ attack and $+4$ health.
These can be distributed to construct the opponent's board, which contains $n$ minions with taunt, each with $4S/n+1$ attack and $4S/n$ health.
\end{proof}
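To make the correspondence in the proof above concrete, the following Python sketch (purely illustrative; the function name and encoding are ours) checks whether a proposed assignment of your minions to the opponent's taunt minions constitutes lethal, which happens exactly when the assignment is a valid \threepart{} solution.
\begin{verbatim}
def is_lethal_assignment(a, groups):
    # a: the 3-PARTITION multiset a_1..a_{3n}; your minions have attack 4*a_i.
    # groups: a partition of the indices 0..3n-1 into n groups, one group per
    # enemy taunt minion (each taunt has 4*S/n health, where S = sum(a)).
    S, n = sum(a), len(groups)
    used = sorted(i for g in groups for i in g)
    if used != list(range(len(a))):      # every one of your minions attacks once
        return False
    # each taunt must take exactly its health in damage (no attack to spare)
    return all(sum(4 * a[i] for i in g) == 4 * S // n for g in groups)

# toy instance: the two triples below each sum to S/n = 6, so lethal exists
print(is_lethal_assignment([1, 2, 3, 3, 2, 1], [[0, 1, 2], [3, 4, 5]]))  # True
\end{verbatim}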
\subsection{Hardness of Hand-Scaled Lethal}
\label{sec:hand}
\begin{theorem}
The lethal problem for hand-scaled instances of Hearthstone is weakly \ccNP-hard.
\end{theorem}
\begin{proof}
We reduce from \twopart.
Let $A = \{a_1, a_2, \dots, a_n\}$ be the input set of (exponentially large in $n$) integers that sum to $S$.
The goal is to partition the integers into two sets that each sum to $S/2$.
\textbf{Your hero, hand, deck, and board.}
Your hero is Jaina (Mage) with 1 health and a \hyperref[hscard:lightsjustice]{Light's Justice} (created by an earlier \hyperref[hscard:bling]{Blingtron 3000}).
Your hand consists of $n$ copies of \hyperref[hscard:bolvar]{Bolvar Fordragon} with attack values $b_i = 4a_i - 2$, $n$ copies of \hyperref[hscard:charge]{Charge}, and $6n$ copies of \hyperref[hscard:innervate]{Innervate} (to pay for the \hyperref[hscard:bolvar]{Bolvar Fordragon}s and \hyperref[hscard:charge]{Charge}s).
Your board is empty. \hyperref[hscard:charge]{Charge} allows minions to attack the turn they come into play and Innervate generates additional mana.
\textbf{Your opponent's hero and board.}
Your opponent's hero is Uther (Paladin) with 2 health.
Your opponent's board consists of 2 vanilla minions with taunt, each with at least~7 attack and $2S$~health ($4S$ in total), buffed via \hyperref[hscard:blessing]{Blessing of Kings} and \hyperref[hscard:champion]{Blessed Champion}.
\textbf{Lethal strategies.}
In order to win this turn (or at all), you must kill both large minions of the opponent, then do 2~total damage by attacking and using your hero power.
To win, we must play the \hyperref[hscard:bolvar]{Bolvar Fordragon}s, give them Charge, and attack the minions with taunt such that they both die.
Since the total attack of all of the \hyperref[hscard:bolvar]{Bolvar Fordragon}s and \hyperref[hscard:charge]{Charge}s equals the health of the two minions, we cannot succeed unless we cast \hyperref[hscard:charge]{Charge} on every \hyperref[hscard:bolvar]{Bolvar Fordragon} exactly once.
Thus we must allocate the \hyperref[hscard:bolvar]{Bolvar Fordragon}s between the two enemy minions such that their attack adds up exactly to that of the health of the minions.
This partition is exactly the solution to the given \twopart{} instance.
The scaling of the attack and health is to deal with the bonus to attack given by \hyperref[hscard:charge]{Charge} and the possibility of $a_i=1$.
\textbf{Achieving your board state.}
In general we will use the infinite combo given earlier in this section to obtain the necessary cards.
First, we generate all of the cards needed except for the Bolvar Fordragons, as well as some additional cards specified shortly.
We need to create the \hyperref[hscard:bolvar]{Bolvar Fordragon}s and between them cause many minions to die until we reach the correct values for our set being partitioned.
Assume the $b_i$ are ordered from largest to smallest.
We will fill our hand with $n$ copies of \hyperref[hscard:unstable]{Unstable Portal}, $b_1$ copies of \hyperref[hscard:stonetusk]{Stonetusk Boar}, $b_1/2$ copies of \hyperref[hscard:innervate]{Innervate}, and~1 \hyperref[hscard:bestialwrath]{Bestial Wrath}.
The \hyperref[hscard:bestialwrath]{Bestial Wrath} is cast on an opponent's beast, say \hyperref[hscard:raptor]{Bloodfen Raptor}, to ensure we have a way of killing all of the \hyperref[hscard:stonetusk]{Stonetusk Boar}s.
We play an \hyperref[hscard:unstable]{Unstable Portal}, which adds a \hyperref[hscard:bolvar]{Bolvar Fordragon} to our hand.
We then play a copy of \hyperref[hscard:stonetusk]{Stonetusk Boar} and attack the opponent's \hyperref[hscard:raptor]{Bloodfen Raptor} with it, repeating this $b_1-b_2$ times (each boar dies in its attack).
This causes the \hyperref[hscard:bolvar]{Bolvar Fordragon} in our hand to gain $b_1-b_2$ attack.
We then cast another \hyperref[hscard:unstable]{Unstable Portal} to obtain another \hyperref[hscard:bolvar]{Bolvar Fordragon}, then play $b_2-b_3$ copies of \hyperref[hscard:stonetusk]{Stonetusk Boar} and attack with them.
Both \hyperref[hscard:bolvar]{Bolvar Fordragon}s in our hand gain $b_2-b_3$ attack from the minions that die.
We repeat this pattern until we have the $n$th copy of Bolvar Fordragon and we attack with $b_n-1$ copies of Stonetusk Boar.
Since Bolvar Fordragon starts with~1 attack, the $i$th copy we obtained has gained exactly $b_i - 1$ attack, so the copies in hand have attack values $b_1, b_2, \dots, b_n$, as required.
\end{proof}
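The bookkeeping in the construction above can be double-checked with a short script (again illustrative only; names are ours). It computes how many \hyperref[hscard:stonetusk]{Stonetusk Boar}s must die after each \hyperref[hscard:bolvar]{Bolvar Fordragon} is obtained and verifies that the copies held in hand end up with attack values $b_1, \dots, b_n$; the total number of deaths is $b_1 - 1$, consistent with the $b_1$ copies of the boar generated above.
\begin{verbatim}
def boar_deaths_and_final_attacks(b):
    # b: target Bolvar attack values, sorted in non-increasing order
    deaths = [b[i] - (b[i + 1] if i + 1 < len(b) else 1) for i in range(len(b))]
    attacks = []
    for d in deaths:
        attacks.append(1)                   # a fresh Bolvar Fordragon has 1 attack
        attacks = [x + d for x in attacks]  # each friendly death gives +1 attack to
                                            # every Bolvar currently held in hand
    return deaths, attacks

print(boar_deaths_and_final_attacks([10, 7, 5]))  # ([3, 2, 4], [10, 7, 5])
\end{verbatim}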
\subsection{Hardness of Deck-Scaled Lethal}
\label{sec:deck}
\begin{theorem}
The lethal problem for deck-scaled instances of Hearthstone is \ccNP-hard.
\end{theorem}
\begin{proof}
We reduce from \twopart.
Let $A = \{a_1, a_2, \dots, a_n\}$ be the input set of (exponentially large in $n$) integers that sum to $S$.
The goal is to partition the integers into two sets that each sum to $S/2$.
\textbf{Your hero, hand, and board.}
Your champion is Uther (Paladin).
Your hand consists of~8 \hyperref[hscard:pitfighter]{Pit Fighter}s and~1 \hyperref[hscard:vigil]{Solemn Vigil}.
Your board consists of a frozen \hyperref[hscard:cultmaster]{Cult Master} and~3 frozen vanilla minions.
\textbf{Your opponent's hero, hand, deck, and board.}
Your opponent's hero is Uther (Paladin).
Your opponent's board consists of~2 \hyperref[hscard:dummy]{Target Dummy}s buffed to $S/2$ health (via \hyperref[hscard:blessing]{Blessing of Kings} and \hyperref[hscard:champion]{Blessed Champion} to get a large attack and then swapping attack and health with \hyperref[hscard:crazed]{Crazed Alchemist}) and \hyperref[hscard:millhouse]{Millhouse Manastorm}.
Your opponent played \hyperref[hscard:millhouse]{Millhouse Manastorm} last turn, so all spells you cast this turn are free.
Your opponent's deck consists of~1 \hyperref[hscard:bluegill]{Bluegill Warrior}.
\textbf{Your deck.}
Your deck contains the following cards: \hyperref[hscard:vigil]{Solemn Vigil} (SV), \hyperref[hscard:blessing]{Blessing of Kings} (BoK), \hyperref[hscard:champion]{Blessed Champion} (BC), \hyperref[hscard:pitfighter]{Pit Fighter} (PF), and \hyperref[hscard:anyfin]{Anyfin Can Happen} (ACH).
A \hyperref[hscard:bluegill]{Bluegill Warrior} (BW) had been played earlier and died.
No other murlocs have died in this game, thus ACH will always summon BW.
The sequence of the first~4 cards, called the \emph{setup sequence}, is SV, PF, SV, PF.
For an integer $x$ ($= b_1 b_2 \ldots b_k$ in binary) with $b_{k-1} b_k = 00$, define the \emph{encoding sequence} of $x$ as a polynomial-length sequence of left \emph{bit shifts} (equivalent to multiplying by~2) and \emph{increments} by~4 (equivalent to adding~100 in binary) that obtains $x$. We will be interested in using this to encode the input values of the \twopart{} instance.
Define the \emph{integer card sequence} of $x$ to be the sequence of cards obtained by replacing each bit shift and increment in an encoding sequence by BC and BoK, respectively, appending ACH and BC to the beginning of the sequence, and then replacing each card with an SV followed by the card.
For example, the following sequence of cards encodes $1110100100$: [SV, ACH, SV, BC, SV, BC, SV, BoK, SV, BC, SV, BoK, SV, BC, SV, BC, SV, BoK, SV, BC, SV, BC, SV, BC, SV, BoK].
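For concreteness, the following Python sketch (ours, not part of the construction) produces one valid integer card sequence by working backwards from the target value, and replays the buffs on the \hyperref[hscard:bluegill]{Bluegill Warrior} summoned by ACH to confirm the resulting attack value.
\begin{verbatim}
def integer_card_sequence(x):
    # x must be a positive multiple of 4 (its last two bits are 00)
    ops = []
    while x > 4:
        if x % 8 == 4:              # trailing bits ...100: undo an increment (BoK)
            ops.append('BoK'); x -= 4
        else:                       # trailing bits ...000: undo a bit shift (BC)
            ops.append('BC'); x //= 2
    cards = ['ACH', 'BC'] + ops[::-1]
    return [c for card in cards for c in ('SV', card)]   # interleave Solemn Vigils

def bluegill_attack(seq):
    atk = 0
    for c in seq:
        if c == 'ACH':   atk = 2    # Bluegill Warrior is summoned with 2 attack
        elif c == 'BC':  atk *= 2   # Blessed Champion doubles attack
        elif c == 'BoK': atk += 4   # Blessing of Kings gives +4 attack
    return atk

seq = integer_card_sequence(0b1110100100)   # the example value used in the text
print(bluegill_attack(seq))                 # 932 == 0b1110100100
\end{verbatim}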
The complete deck consists of the setup sequence, followed by the integer card sequence for each $a_i$, followed by an ACH.
\textbf{Lethal strategies.}
In order to win this turn, both \hyperref[hscard:dummy]{Target Dummy}s must be killed and~1 damage dealt to the opponent.
To have any possibility of lethal, damage must be done by obtaining ACH via draw.
Thus both \hyperref[hscard:pitfighter]{Pit Fighter}s must be drawn and played, leaving only one open slot to play minions.
Since you've spent all 10 mana, nothing else with positive mana cost (namely the other \hyperref[hscard:pitfighter]{Pit Fighter}s in your hand) can be played.
Thus your hand has between~8 and~10 cards for the remainder of the turn.
Moreover, the interleaving of SV with other spells implies that any attempt to play SV with a hand of~9 cards causes one of the two drawn cards, namely another SV, to be burnt, preventing further card draw.
At the end of each integer card sequence, to continue to draw into the deck without burning an ACH, we need a minion to die and trigger \hyperref[hscard:cultmaster]{Cult Master} to draw an additional card.
Since \hyperref[hscard:bluegill]{Bluegill Warrior}s are the only minions that can attack, each must be killed at the end of the integer card sequence in which it was drawn and thus only the buffs from one integer card sequence can be applied to a given \hyperref[hscard:bluegill]{Bluegill Warrior}.
Since these buffs on the BW yield a total attack value of $S$, if any buff is not played on a BW then the BWs will have a total attack less than $S$ and thus the two Target Dummies cannot be killed.
Thus any lethal turn involves each BW being buffed with exactly the buffs in the integer card sequence they belong to, and attacking one of the two large minions.
So any lethal play sequence consists of buffing BW to attack values corresponding to $a_1, a_2, \dots, a_n$ and attacking each into one of two opponent \hyperref[hscard:dummy]{Target Dummy}s, followed by a final ACH being drawn and attacking the opponent with the final BW.
Since buffed \hyperref[hscard:bluegill]{Bluegill Warrior}s have $S$ total attack damage, killing both \hyperref[hscard:dummy]{Target Dummy}s requires partitioning the attack values of the buffed BW into two subsets, each of $S/2$ total attack value.
\textbf{Strong hardness.}
Note that unlike the prior reduction from \twopart, this reduction establishes strong NP-hardness.
This is because the input to the problem is given by a polynomial number of cards and prior plays in the game: exponentially large numbers are encoded as minion attack and health values using only a polynomial number of cards, one of which (\hyperref[hscard:champion]{Blessed Champion}) doubles minion attack.
In other words, the exponentially large minion sizes are efficiently encoded by a series of plays of BC and BoK.
\end{proof}
\subsection{Adapting to Other Puzzle Types}
In addition to finding ``Lethal'', our proofs can be adapted to show the other three puzzle types introduced in the Boomsday Lab are also \ccNP-hard. In general these proofs require us to carefully construct appropriate minions to remove powerful minions with taunt and allow us to attack the enemy hero. We will show that there are minions which could be chosen that we can attack instead of the enemy hero to accomplish the other goals.
\paragraph{Survival.} In this puzzle, the player's objective is to restore their hero to full, normally 30, health. To adapt to this case, in each reduction we give our opponent a \hyperref[hscard:mistress]{Mistress of Mixtures} which has two attack, two health, and upon dying restores 4 health to each hero. We make sure the \hyperref[hscard:mistress]{Mistress of Mixtures} only has one health remaining, perhaps by previously damaging it with a \hyperref[hscard:stonetusk]{Stonetusk Boar}. We have your character's health set to 28, so even if you must attack the \hyperref[hscard:mistress]{Mistress of Mixtures} to kill it you will take 2 damage but then regain 4 health returning you to full.
\paragraph{Board Clear.} In this puzzle, the player's objective is to kill all minions on the board. This is achieved in our board-scaled reduction. In the other two reductions either your opponent has a leftover \hyperref[hscard:raptor]{Bloodfen Raptor} or you have leftover frozen minions (which we will assume are also Bloodfen Raptors for the sake of simplicity). To fix this issue, we give your opponent an \hyperref[hscard:sheep]{Explosive Sheep}, which is a 1-attack, 1-health minion that does 2 damage to all other minions when it dies. Instead of attacking your opponent once you've gotten rid of the taunt minions in the way, attack the \hyperref[hscard:sheep]{Explosive Sheep}, whose damage will kill off the remaining unwanted Bloodfen Raptors. In this case we will set your hero's health to 2, so it is greater than the damage done by attacking the \hyperref[hscard:sheep]{Explosive Sheep}.
\paragraph{Mirror.} In this puzzle the player must make both sides of the board identical. We note that if the player clears the board, then both sides will be identically empty, fulfilling the technical requirement if not the spirit of the puzzle. We use the same augmentation as we did with the board clear goal and note that the player has no way of playing the same minions as their opponent and thus cannot fulfill the mirror requirement if any of their opponent's minions are on the board. Thus the only solution is one that involves clearing the board.
\section{Open Problems}
Given the ability to generate an arbitrary number of Hearthstone cards on a single turn, it is not clear Hearthstone puzzles are in \ccNP, or even \ccPSPACE. Obtaining upper bounds on the complexity is a clear open question. Also, for these puzzles, we assume perfect information. If we exploit imperfect information and randomness, even with a bounded number of plays we might suspect the problem is \ccPSPACE-hard.
Hearthstone puzzles also occur on a single turn, thus eliminating the 2-player aspect of the game. What is the complexity of deciding if a player in a game of Hearthstone has a forced win?
Although Magic and Hearthstone are likely the two most famous CCG's at the moment, there have been a number of other such games in the past. It would be interesting to see other examples studied, as well as a general framework for understanding when such games are computationally intractable.
Finally, we have yet to see a formalization of the problem of deck construction and the meta-game involved in most competitive CCGs. Since deck building is such an important and integral part of many of these games, it would be interesting to have a more formal understanding of the questions and process involved.
\subparagraph*{Acknowledgments}
We wish to thank Jeffrey Bosboom for significant feedback and discussion about this paper, as well as LaTeX expertise. We would also like to thank the other participants and especially the organizers (Erik Demaine and Godfried Toussaint) of the Bellairs Research Institute Winter Workshop on Computational Geometry 2015.
\bibliographystyle{plainurl}
\section{Introduction}
Cryptocurrency is an entirely new financial asset, whose market capitalization has increased rapidly and which attracts growing attention from market participants. The advantages of cryptocurrencies over traditional currencies are abundant, and include reliability and anonymity in transactions with lower costs, facilitated by novel blockchain technology. Despite the appeal of these advantages, cryptocurrencies present potential vulnerabilities in that they do not enjoy the aegis of central banks or any other monetary authorities. This leads to price volatility in the face of real economic events, such as the recent Covid-19 pandemic, ultimately strengthening global calls for regulations on cryptocurrency trading.\footnote{\citet{BCBS:2019} emphasizes that cryptocurrency is not legal tender and warns about the potential financial stability concerns caused by its continuous growth.} These peculiar price movements motivate scholars to study the impact of cryptocurrencies on financial markets.
From an econometric point of view, it is of interest to explore which kind of time series model best accounts for the highly volatile returns of cryptocurrencies and accurately forecasts the associated risks. Among the expanding literature, the studies by \citet{Caporale:2019}, \citet{Cerqueti:2020}, and \citet{Troster:2019} are based on a generalized autoregressive conditional heteroscedasticity (GARCH) model. They share the common conclusion that the normally distributed GARCH model is inadequate for describing cryptocurrency returns and that the introduction of a non-Gaussian distribution substantially improves the goodness-of-fit of a GARCH-type model. Other approaches include the stochastic volatility model \citep{Chaim:2018} and the generalized autoregressive score model \citep{Troster:2019}. Also, the high volatility of cryptocurrency highlights its speculative nature. \citet{Brauneis:2019} investigate the risk-return relationship of an optimized cryptocurrency portfolio based on the Markowitz mean-variance framework.
This paper studies the portfolio optimization of cryptocurrencies by employing a union of sophisticated time series models and risk measures. Four major cryptocurrencies are selected as samples. Our time series model is the multivariate normal tempered stable (MNTS) distributed GARCH model. The MNTS distribution \citep{Kim:2012} has demonstrated excellent fit to joint dynamics of physical asset returns in a number of empirical studies (\citeauthor{Anand:2016}, \citeyear{Anand:2016}; \citeauthor{Anand:2017}, \citeyear{Anand:2017}; \citeauthor{Bianchi:2019}, \citeyear{Bianchi:2019}; \citeauthor{Kim:2015}, \citeyear{Kim:2015}; \citeauthor{Kurosaki:2013a}, \citeyear{Kurosaki:2013a}; \citeauthor{Kurosaki:2013b}, \citeyear{Kurosaki:2013b}; \citeauthor{Kurosaki:2019}, \citeyear{Kurosaki:2019}; and \citeauthor{Shao:2015}, \citeyear{Shao:2015}). Our portfolio optimization strategy is based on Foster-Hart risk, which is very sensitive to risky left tail events. Foster-Hart (FH, hereafter) risk was originally introduced in the field of game theory \citep{Foster:2009}, and was subsequently applied to risk management related to financial markets (\citeauthor{Anand:2016}, \citeyear{Anand:2016}; \citeauthor{Kurosaki:2019}, \citeyear{Kurosaki:2019}; \citeauthor{Leiss:2018}, \citeyear{Leiss:2018}). These cutting-edge techniques are not yet documented in the context of cryptocurrency. Statistical tests demonstrate that the MNTS distributed GARCH model has better explanatory power for cryptocurrency returns than the normally distributed GARCH model. Also, we find that the model creates more profitable portfolios in tandem with FH risk than the traditional mean-variance approach.
The rest of this paper is organized as follows. Section \ref{Methodology} briefly introduces our methodology. Section \ref{Data and Estimation} describes the data of cryptocurrency used herein. Section \ref{Statistical Tests} outlines the empirical results of statistical tests for each cryptocurrency return. Section \ref{Portfolio Optimization} conducts portfolio optimization and discusses performance. Section \ref{Concluding Remarks} summarizes our findings.
\section{Methodology}
\label{Methodology}
We introduce the non-Gaussian time series model with the MNTS distribution as well as FH risk, which work together to achieve efficient portfolio optimization \citep{Anand:2016}. See also \ref{Supplement to Methodology} for supplementary explanations.
\subsection{Non-Gaussian time series model}
We utilize a GARCH-type model to describe the dynamics of cryptocurrency returns. Given that both autoregressive (AR) and moving average (MA) processes are typically observed in financial data, we apply the most standard ARMA(1,1)-GARCH(1,1) specification. After GARCH filtering, we obtain independent and identically distributed (i.i.d.) standard residuals $\eta_t$ with a mean of zero and unit variance for each cryptocurrency. In order to describe the complicated interdependency among cryptocurrencies, we conduct multivariate modeling on the $\eta_t$ of all cryptocurrencies jointly. We employ the i.i.d. standard MNTS as the assumed distribution of $\eta_t$. As competing models, we also assume that $\eta_t$ follows an i.i.d. standard normal or student t distribution. Hereafter, we denote the ARMA(1,1)-GARCH(1,1) model with multivariate normal, student t, and NTS distributed standard residuals as AGNormal, AGT, and AGNTS, respectively.
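To illustrate the filtering step only (a minimal sketch with made-up parameter values; in the paper all parameters are obtained by maximum likelihood estimation and the residuals are subsequently fitted with the multivariate NTS, normal, or student t distribution), the standardized residuals $\eta_t$ of a univariate ARMA(1,1)-GARCH(1,1) model can be computed recursively as follows:
\begin{verbatim}
import numpy as np

def arma_garch_residuals(r, c, phi, theta, omega, alpha, beta):
    """Filter returns r through ARMA(1,1)-GARCH(1,1); return eta_t = eps_t/sigma_t."""
    eps = np.zeros_like(r)
    sig2 = np.full_like(r, np.var(r))     # initialize variance at the sample variance
    for t in range(1, len(r)):
        sig2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sig2[t - 1]
        eps[t] = r[t] - (c + phi * r[t - 1] + theta * eps[t - 1])
    return eps[1:] / np.sqrt(sig2[1:])

# toy example with illustrative (not estimated) parameter values
r = 0.03 * np.random.default_rng(0).standard_t(df=5, size=1000)
eta = arma_garch_residuals(r, c=0.0, phi=0.1, theta=-0.05,
                           omega=1e-5, alpha=0.1, beta=0.85)
print(eta.mean(), eta.std())              # roughly 0 and 1 if the fit were adequate
\end{verbatim}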
We prefer the MNTS to other miscellaneous non-Gaussian distributions because of its flexibility with respect to a multivariate extension. Both the estimation of the MNTS from real data and the scenario generation based on the estimated MNTS are feasible without computational difficulty even in considerably high dimensional settings. These features are critical in their application to portfolio optimization. Also, the reproductive property of the stable distribution has an affinity for portfolio modeling.
\subsection{Risk measures}
We introduce FH risk. Let a gamble be any bounded random variable $g$ with a positive expected value and a positive probability of losses: $\mathbb{E}(g)>0$, $\mathbb{P}(g<0)>0$. FH risk is the minimum reserve that an agent should initially possess to prevent himself from almost certainly going bankrupt, even after the infinite repetition of the gamble $g$. \citet{Foster:2009} demonstrate that, for a gamble $g$, irrespective of the utility function, FH risk $R(g)$ is the unique positive root of the following equation:
\begin{equation}
\label{eq:def_FH}
\mathbb{E}\left(\log\left[1+\dfrac{g}{R(g)}\right]\right)=0.
\end{equation}
The bankruptcy-proof property endows FH risk with extremely high sensitivity to negative events. By regarding investments in financial assets as a gamble, FH risk is expected to sense market downturn in a forward-looking manner.
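Numerically, given a vector of simulated one-day portfolio returns $g$ (for example, scenarios generated from the fitted model), FH risk can be obtained by root finding on eq.~(\ref{eq:def_FH}). The sketch below is our own illustration (not the paper's code); note that when the gamble is very favourable relative to its worst loss, the root approaches the maximum simulated loss and the simple bracketing used here becomes delicate.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def foster_hart_risk(g):
    """Unique R above the maximum loss solving E[log(1 + g/R)] = 0 for scenarios g."""
    g = np.asarray(g, dtype=float)        # requires E[g] > 0 and some g < 0
    max_loss = -g.min()
    f = lambda R: np.mean(np.log1p(g / R))
    lo, hi = max_loss * 1.001, max_loss * 10.0
    while f(lo) >= 0:                     # push the lower bracket toward the max loss
        lo = max_loss + (lo - max_loss) / 10.0
    while f(hi) <= 0:                     # widen the upper bracket until f changes sign
        hi *= 2.0
    return brentq(f, lo, hi)

# two equally likely outcomes, +120 or -100: the defining equation gives R = 600 exactly
print(foster_hart_risk(np.array([120.0, -100.0])))
\end{verbatim}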
We also utilize more popular risk measures, Value at Risk (VaR) and Average VaR (AVaR), in order to supplement FH risk.\footnote{AVaR is also called Conditional VaR or Expected Shortfall.} Accuracy of risk forecasting is an important aspect of time series models. Statistically backtesting VaR and AVaR is feasible due to their relative simplicity, whereas no backtesting methodology has been established for FH risk. We backtest VaR by the Christoffersen's likelihood ratio (CLR) test as well as AVaR by the Berkowitz's likelihood ratio (BLR) tail test and Acerbi and Szekely (AS) test. See \cite{Christoffersen:1998}, \cite{Berkowitz:2001}, and \cite{Acerbi:2014}, respectively.
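For reference, the unconditional-coverage component that underlies the CLR test reduces to a likelihood-ratio statistic on the number of VaR violations; a minimal sketch is given below (our own illustration; the independence component of the CLR test and the BLR and AS tests for AVaR, which require the full tail distribution, are not shown).
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def uc_coverage_pvalue(returns, var_forecasts, p=0.01):
    """LR test that the share of days with return < -VaR equals p (VaRs are positive)."""
    hits = np.asarray(returns) < -np.asarray(var_forecasts)
    T, x = len(hits), int(hits.sum())
    def loglik(q):                        # Bernoulli log-likelihood of the hit series
        if q in (0.0, 1.0):
            return 0.0 if x in (0, T) else -np.inf
        return (T - x) * np.log(1 - q) + x * np.log(q)
    lr = -2.0 * (loglik(p) - loglik(x / T))
    return chi2.sf(lr, df=1)
\end{verbatim}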
\section{Data and estimation}
\label{Data and Estimation}
Our dataset contains daily logarithmic returns of cryptocurrency exchange spot rates in U.S. Dollars per unit from 2015/08/31 to 2020/03/31, resulting in 1,674 observations for each cryptocurrency. Following \cite{Caporale:2019}, we select the following four cryptocurrencies as samples: Bitcoin (BTC), Ethereum (ETH), Litecoin (LTC), and XRP. The data source is CoinMarketCap.\footnote{\url{https://coinmarketcap.com/}} Table \ref{table:DS} reports the descriptive statistics of our dataset. All cryptocurrencies have larger kurtosis than those of the normal distribution, and show either negative or positive skewness. These observations motivate us to apply the non-Gaussian model. We estimate all models based on maximum likelihood estimation. In order to obtain the AGNTS model, we estimate the univariate AGT model for each cryptocurrency first and subsequently fit the standard MNTS distribution to the same residuals $\eta_t$. We refer the readers to \cite{Kurosaki:2019} for details regarding these techniques.
Our analysis procedure is as follows. First, we arrange a moving window with a length of 500 days. The first window, ranging from 2015/08/31 to 2017/01/12, moves ahead day by day until 2020/03/31, which amounts to 1,175 distinct windows. Subsequently, we iteratively estimate time series models from returns data within each window, and forecast a one-day-ahead return distribution. Finally, we assess the statistical performance of the resulting 1,175 models, as well as the profitability of the optimized portfolio based on a forecasted return distribution.
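Schematically, the moving-window exercise can be organized as in the sketch below. This is only a stand-in: for brevity each window is refitted with a plain multivariate normal, whereas the paper re-estimates the ARMA(1,1)-GARCH(1,1) models with MNTS, normal, and student t residuals at every step.
\begin{verbatim}
import numpy as np

def rolling_forecasts(returns, window=500, n_scen=10_000, seed=0):
    """returns: T x d array of daily log returns; yields one-day-ahead scenario sets."""
    rng = np.random.default_rng(seed)
    for t in range(window, returns.shape[0]):           # 1,175 steps when T = 1,674
        train = returns[t - window:t]                   # 500-day estimation window
        mu, cov = train.mean(axis=0), np.cov(train.T)   # stand-in for the model fit
        yield t, rng.multivariate_normal(mu, cov, size=n_scen)

fake = 0.01 * np.random.default_rng(1).standard_normal((600, 4))
t, scen = next(rolling_forecasts(fake))
print(t, scen.shape)                                    # 500 (10000, 4)
\end{verbatim}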
\section{Statistical Tests}
\label{Statistical Tests}
We assess the capability of our multivariate GARCH-type models to account for marginal return dynamics of each cryptocurrency. Specifically, we examine the statistical performance of the 1,175 iteratively-estimated models based both on in-sample and out-of-sample tests.
\subsection{In-Sample Test}
As an in-sample test, we investigate the fitting performance of standard residuals $\eta_t$ of the univariate ARMA(1,1)-GARCH(1,1) model for the assumptive distributions (normal, student t, and NTS). To do so, we exploit both the Kolmogorov-Smirnov (KS) and the Anderson-Darling (AD) tests. While both tests are designed to assess the goodness-of-fit of the proposed distributions, the AD test puts more emphasis on fitting at the tail. Under the reasonable postulation that our sample is sufficient in number, we can compute p-values for both tests.
Tables \ref{table:KS} and \ref{table:AD} report the number of rejections of KS and AD tests out of 1,175 iterated estimations for AGNormal, AGT, and AGNTS residuals, respectively, by significance level. We see that AGNormal is almost always rejected by both tests and thus significantly underperforms AGT and AGNTS. In the KS test, AGNTS has a smaller number of rejections than AGT in three out of four cryptocurrencies at the 10\% level. More clearly, in the AD test, AGNTS has a smaller number of rejections than AGT in four (three) out of four cryptocurrencies at the 5\% (10\%) level due to the excellent ability of AGNTS to track tail behavior. Overall, AGNTS is the most preferable model.
\subsection{Out-of-Sample Test}
As an out-of-sample test, we backtest risk measures, namely, VaR and AVaR. Each of the iteratively-estimated models forecasts one-day-ahead VaR and AVaR, constituting a time series of VaR and AVaR forecasts with a length of 1,175 days from 2017/01/12 to 2020/03/31.\footnote{Following \citet{Kim:2010}, the VaR estimation with AGNTS relies on the discrete Fourier Transform.} In line with the Basel accord, we adopt the 99\% confidence level for VaR and AVaR. Backtesting is achieved by comparing VaR and AVaR with actual returns every day. Out-of-sample testing is more important than in-sample testing since our research interest lies in portfolio risk forecasting and optimization. In order to clarify our results, we divide the sample period into three subperiods and conduct out-of-sample tests by subperiod. Specifically, let Periods 1, 2, and 3 cover from 2017/01/12 to 2018/03/31, from 2018/04/01 to 2019/03/31, and from 2019/04/01 to 2020/03/31, respectively. Notice that each subperiod includes some form of turmoil. The cryptocurrency boom and crash around the end of 2017 for Period 1, the crash following the release of "hardfork" of Bitcoin cash at the end of 2018 for Period 2, and the Covid-19 crisis in the beginning of 2020 for Period 3.
Table \ref{table:out-of-sample-tests} summarizes the p-values of out-of-sample tests. The CLR test with conditional coverage is for VaR forecasts, and the BLR tail and the AS tests are for AVaR forecasts.\footnote{The p-values of the AS test are computed from $10^4$ sample statistics generated by time series models and for a left tail.} First of all, AGNormal has lower p-values than AGT and AGNTS, especially in the BLR and AS tests. AGNTS passes the CLR tests during any period and with any cryptocurrency, including at the 10\% level, whereas AGT fails in Periods 1 and 2 at the same level. Also, AGNTS always passes the BLR tests except for in Period 2 and in BTC. By contrast, AGT fails in Periods 1 and 2 at the 5\% level. Finally, AGNTS fails the AS tests for at most one cryptocurrency in each subperiod at the 10\% level. However, AGT fails for two cryptocurrencies in Periods 1 and 3 at the same level. Therefore, we conclude that AGNTS shows the best performance in out-of-sample tests more clearly than in in-sample tests.
\section{Portfolio optimization}
\label{Portfolio Optimization}
We practice portfolio optimization with cryptocurrencies consisting of BTC, ETH, LTC, and XRP, in line with \citet{Kurosaki:2019}. The portfolio risk and reward are forecasted through multivariate time series models. The optimization is carried out by minimizing the objective risk measure under the tradeoff with expected returns and transaction costs following the procedure detailed in \ref{Portfolio Optimization Procedure}. Since, as is shown in Section \ref{Statistical Tests}, AGNTS provides a more accurate expression than AGNormal and AGT for leptokurtic and skewed behaviors of cryptocurrency returns, it is also expected to show better performance in portfolio optimization. We exploit FH risk as the objective risk measure to be minimized, as well as standard deviation (SD) and AVaR.
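In scenario form, a single optimization step can be sketched as below (illustrative only: the helper names are ours, a plain AVaR objective is shown with the expected-return trade-off entering as a penalty, and the FH risk function or SD can be substituted for it; the full procedure also includes the transaction-cost term $\lambda\cdot |w_t-w_{t-1}|$).
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def avar(pnl, alpha=0.99):
    """Average Value at Risk of a scenario P&L vector, reported as a positive loss."""
    losses = -np.asarray(pnl)
    k = max(1, int(np.ceil((1 - alpha) * len(losses))))
    return np.mean(np.sort(losses)[-k:])        # average of the k largest losses

def optimize_weights(scenarios, gamma=1.0, risk=avar):
    """Minimize risk(w'R) - gamma * E[w'R] over fully invested portfolios."""
    n_assets = scenarios.shape[1]
    w0 = np.full(n_assets, 1.0 / n_assets)
    objective = lambda w: risk(scenarios @ w) - gamma * np.mean(scenarios @ w)
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    return minimize(objective, w0, method='SLSQP', constraints=cons).x

scen = 0.02 * np.random.default_rng(2).standard_t(4, size=(10_000, 4))
print(optimize_weights(scen).round(3))          # weights summing to one
\end{verbatim}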
Table \ref{table:performance} exhibits optimization results under the absence of transaction costs ($\lambda=0$), through a combination of time series models and objective risk measures. When the portfolio is optimized with respect to SD, AVaR, and FH under the tradeoff against expected returns, we refer to the corresponding portfolio as mean-SD, mean-AVaR, and mean-FH portfolio, respectively. Column 2 reports the cumulative returns that each optimized portfolio accrues during the period from 2017/01/12 to 2020/03/31.\footnote{Since AGT and AGNTS share the same residuals, both models produce the same mean-SD portfolio. Also, note that the first revenue recognition takes place on the day after the first optimization, 2017/01/13.} We see that the mean-FH portfolio with AGNTS forecasts yields the largest profit, followed by the mean-AVaR portfolio with AGNTS forecasts. Columns 3 through 5 show the SD, AVaR, and FH risk of the optimized portfolio itself, which are computed from their historical returns. Columns 6 through 8 indicate the cumulative returns adjusted by each risk measure. Note that the return-to-SD ratio in Column 6 is the well-known Sharpe ratio. We find that the mean-FH portfolio with AGNTS forecasts shows the highest return-to-risk ratios of any risk measures. Therefore, we conclude that this combination not only achieves the largest cumulative returns but also generates the most ideal balance between risk and reward.
As a robustness check, we conduct the optimization in the presence of transaction costs, where the corresponding cost ($\lambda\cdot |w_t-w_{t-1}|$) is deducted from daily returns. Figure \ref{fig:Time_evolution_return} shows how the cumulative returns of the optimized portfolio evolve over time across the investment period for different levels of investor cost aversion ($C=0.01, 0.1, 1$), as well as in cases without transaction costs ($\lambda=0$). We focus on the forecasts of future returns created by AGNTS. We observe that the mean-FH portfolio accrues the largest profit irrespective of cost aversion parameters. Therefore, the superiority of the combination of AGNTS and FH risk still holds for the case where the transaction cost is present.\footnote{The presented optimization allows for both long and short positions. As a further robustness check, we conduct the optimization under the restriction that a short position is prohibited. This restriction is realistic for conservative investors. The results are consistent with unrestricted cases; the mean-FH portfolio and/or the AGNTS forecasts generally create the largest profit.}
\section{Concluding remarks}
\label{Concluding Remarks}
This paper studies the portfolio optimization of four major cryptocurrencies. A cryptocurrency as an asset class is characterized by higher volatility and more nonlinear return dynamics compared to traditional assets. Statistical analysis demonstrates that the introduction of the MNTS distribution substantially enhances the explanatory power of the GARCH-type model for cryptocurrency return dynamics, especially in terms of risk forecasting. FH risk warns of market crashes, such as the one triggered by the Covid-19 pandemic, in a highly sensitive manner. The combination of the MNTS distributed GARCH model and FH risk leads to desirable portfolio optimization with respect to cumulative returns as well as risk-return balance. To our knowledge, we are the first to document the effectiveness of these sophisticated techniques in the context of cryptocurrency.
\bibliographystyle{elsarticle-harv}
\section{How to spin up a Neutron Star: the recycling scenario}
Millisecond Pulsars (hereafter MSPs) are fast-spinning Neutron Stars (NS) with periods shorter than 30 ms, and hence spin frequencies higher than about 30 Hz. Now we know that the vast majority of these fast-spinning NS are in binary systems with a low-mass ($< 1\, M_\odot$) companion star and possess a relatively weak magnetic field (less than $10^8$ -- $10^9$ Gauss). Moreover, a large number of these systems are found in Globular Clusters (old clusters of stars). It was soon realised that these NS must belong to old systems in order to have the time for the magnetic field to decay from the large strength in young NS (usually above $10^{12}$ Gauss) to their present, much lower strength. It was therefore proposed that old NS are spun up to millisecond periods by the accretion of matter and angular momentum during a Low Mass X-ray Binary (hereafter LMXB) phase; this is the so-called {\it recycling scenario} (see e.g. \cite{Bhattacharya1991}). Once a {\it recycled} NS reaches a high spin frequency, even if its magnetic field has decayed, the rotation-powered emission mechanism (which depends on the fourth power of the spin frequency, see below) can be re-activated; hence, at the end of the accretion phase, when the companion star has lost its atmosphere and/or has detached from its Roche lobe, the NS should be visible as a rotation-powered MSP.
\subsection{Evolution of rotation-powered Neutron Stars in the $P - \dot P$ diagram}
According to the recycling scenario, a newly born NS should have on average a relatively slow spin period (above few tens of milliseconds) and a relatively strong magnetic field; an example is given by the Crab pulsar, a 33 ms isolated pulsar with a magnetic field of $\sim 4 \times 10^{12}$ Gauss discovered in 1968 at the center of a young, $\sim 1000$-years old, supernova remnant called the Crab Nebula. The magnetic field, rotating at the spin period of the NS, behaves as a rotating magnetic dipole which emits radiation according to the Larmor formula (e.g. \cite{Jackson}):
\begin{equation}
\label{Larmor}
P_{\rm rad} = \frac{2}{3} \frac{(\ddot{\mu}_\bot)^2}{c^3} =
\frac{2}{3}\frac{\mu_\bot^2 \Omega^4}{c^3} = \frac{2}{3c^3}( B R^3 \sin \alpha)^2 \biggl( \frac{2\pi}{P} \biggr)^4~,
\end{equation}
where $\mu_\bot = B R^3 \sin \alpha$ is the component of the magnetic dipole moment perpendicular to the rotation axis, $B$ and $R$ are the surface magnetic field and the NS radius, respectively, $\alpha$ is the angle between the rotation axis and the magnetic dipole axis, $\Omega$ is the spin angular frequency of the NS and $P$ its spin period.
In this case the (pulsed) emission is usually visible in the radio (and often in the gamma-ray) band; it is mainly due to synchrotron emission of charge currents, formed by electrons and positrons extracted from the NS surface by the intense Lorentz force due to the magnetic field and the fast rotation, moving along curved open magnetic field lines \cite{Jackson}. Because of this emission, the NS loses rotational energy and slows down, according to the relation (see e.g. \cite{Spitkovsky2006}):
\begin{equation}
\label{spin-down}
\dot E = \frac{d}{dt} \left(\frac{1}{2} I \Omega^2\right) = I \Omega \dot \Omega \simeq -\frac{\mu^2 \Omega^4}{c^3} (1+\sin^2 \alpha),
\end{equation}
where $I \propto M R^2$ is the moment of inertia of the NS, $M$ its mass, and $\mu = B_0 R^3/2$ is the magnetic moment, where $B_0$ is the magnetic field strength at the pole. Solving this equation for $B_0$ and inserting typical values of $I \simeq 10^{45}$ g cm$^2$, $R\simeq10^6$ cm and $\alpha = 90^\circ$, gives:
\begin{equation}
\label{B0}
B_0 \sim 6 \times 10^{19} (P \dot P)^{1/2}\, \rm Gauss,
\end{equation}
which allows us to relate the magnetic field strength to the spin period and its derivative, and hence to estimate the magnetic field once the spin period and its derivative are measured. In the hypothesis that the magnetic field does not change significantly with time, from eq.~(\ref{spin-down}) we can estimate the pulsar characteristic age $\tau = P / (2 \dot P)$, defined as the timescale necessary to bring the pulsar from its initial period $P_0$ to the current period $P$ at the observed spin-down rate, assuming $P_0 \ll P$. For instance, the characteristic age of the Crab pulsar comes out to be $\sim 1.3$ kyr.
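As a worked example of eq.~(\ref{B0}) and of the characteristic age (a short illustrative calculation, using approximate values $P \simeq 33$ ms and $\dot P \simeq 4.2\times 10^{-13}$ for the Crab pulsar):
\begin{verbatim}
P, Pdot = 0.0334, 4.2e-13       # Crab pulsar: spin period [s] and period derivative

B0  = 6e19 * (P * Pdot) ** 0.5  # polar field from the relation above, in Gauss
tau = P / (2 * Pdot) / 3.15e7   # characteristic age P/(2 Pdot), converted to years

print(f"B0  ~ {B0:.1e} G")      # ~7e12 G at the pole (about half that at the equator)
print(f"tau ~ {tau:.0f} yr")    # ~1.3 kyr, close to the true age of ~1 kyr (SN 1054)
\end{verbatim}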
In the meantime, the NS magnetic field rapidly decays due to mechanisms that are not yet fully understood (probably ohmic dissipation, see e.g. \cite{Tauris2001} for a review, perhaps with a contribution due to accretion of matter in the subsequent LMXB phase, see e.g.\ \cite{Cumming2001}), and this causes the pulsar to move down almost vertically in a $P-\dot P$ (or magnetic field strength vs.\ P) diagram. Below the so-called death line, the NS enters the \textit{graveyard}, where the rotation-powered pulsar switches off because not enough spin-down power is available to feed the emission mechanism. At this point, if the NS is in a binary system with a low-mass star (less than $1\, M_\odot$), the latter may be able to fill its Roche lobe due to nuclear evolution and/or losses of the orbital angular momentum caused by Magnetic Braking (MB) of the companion star and/or Gravitational Radiation (GR). This scenario envisages mass-transfer phases in which the system will start emitting in X-rays and will be observed as an LMXB. At the end of the LMXB phase, the NS, spun up to millisecond periods by the accretion of matter and its angular momentum, can exit the graveyard and be observed as a rotation-powered MSP.
\subsection{Low-Mass X-ray Binaries and accretion onto a Neutron Star}
LMXBs are (Gyr-old) binary systems in which a low-mass star transfers matter onto a compact object via Roche lobe overflow. Matter passing through the inner Lagrangian point has a large specific angular momentum (because of the rotation of the system around its center of mass) that inhibits radial accretion. Matter starts to spiral in around the compact object, creating a structure usually referred to as an \textit{accretion disk}, in which internal torques transfer angular momentum outwards, allowing the accreting plasma to slowly approach the compact object. The mechanical energy of matter is partially dissipated in the disk, which emits a blackbody-like spectrum with a temperature increasing towards the center. If the compact object is a NS, the rest of the mechanical energy of the matter is released when the accreting matter reaches the surface of the NS, giving a total luminosity of $\sim G M_{NS} \dot M / R_{NS}$, where $\dot M$ is the mass accretion rate onto the NS. This implies an efficiency in the conversion of rest mass energy to luminosity of $\eta = G M_{NS} / (c^2 R_{NS}) \sim 0.21$ for a $1.4\, M_\odot$ NS with 10 km radius. The mass accretion rate onto the NS is limited by radiation pressure, which can be high enough to balance the gravitational force towards the NS. This happens at the so-called Eddington limit; in the hypothesis of stationary and spherical accretion, the maximum luminosity of the system is given by: $L_{Edd} \simeq 1.3 \times 10^{38}\, M/M_\odot$ erg s$^{-1}$.
For a NS with $M_{NS} = 1.4\, M_\odot$, the Eddington limit is given by $L_{Edd} \simeq 2.5 \times 10^{38}$ erg s$^{-1}$ (appropriate for helium-rich material and a moderate, $z=1.2$, gravitational redshift correction factor, \cite{vanParadijs1994}), corresponding to a mass accretion rate of $\dot M_{Edd} \sim 1.3 \times 10^{18}\, {\rm g\, s^{-1}} \sim 2 \times 10^{-8}\, M_\odot\, {\rm yr^{-1}}$. At the Eddington luminosity, the blackbody emission from the NS surface will reach a temperature of about 20 million K, corresponding to $\sim 2$ keV in photon energy, implying that the emission from the innermost region of the system will be mainly in the X-ray band. At such high luminosity, strong outflows of matter can arise, driven by the strong radiation pressure, and the inner part of the disk may inflate and form a geometrically thick and optically thick disk, also known as a {\it thick disk}.
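The numbers quoted above follow from simple order-of-magnitude arithmetic, as the short sketch below shows (cgs units; note that the $2.5 \times 10^{38}$ erg s$^{-1}$ figure additionally folds in the helium composition and gravitational-redshift corrections mentioned in the text):
\begin{verbatim}
G, c, M_sun = 6.674e-8, 3.0e10, 1.989e33      # cgs constants
M, R = 1.4 * M_sun, 1.0e6                     # 1.4 solar-mass NS, 10 km radius

eta      = G * M / (c ** 2 * R)               # accretion efficiency  -> ~0.21
L_edd    = 1.3e38 * (M / M_sun)               # Eddington luminosity  -> ~1.8e38 erg/s
Mdot_edd = 2.5e38 * R / (G * M)               # rate giving 2.5e38    -> ~1.3e18 g/s
print(eta, L_edd, Mdot_edd, Mdot_edd * 3.15e7 / M_sun)   # last: ~2e-8 Msun/yr
\end{verbatim}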
In general, besides the blackbody-like components produced by the accretion disk and the NS surface, the X-ray spectrum of a LMXB is often complicated by the presence of a hot electron corona in the central part of the system, which up-scatters soft photons coming from the disk and/or the NS surface, producing hard Comptonization spectra.
In the case of a magnetised NS, the accretion flow towards the NS can be halted by the magnetosphere, depending on the magnetic field strength. For a dipolar magnetic field, the magnetic energy density ($B^2/8\pi$) increases at small radii and can overcome the kinetic energy density ($\rho v^2/2$) of the (free-falling) in-falling matter at the magnetospheric radius (which delimits the NS magnetosphere); inside this radius, (charged) particles are forced to flow along the magnetic field lines and will be accreted at the NS polar caps. For spherical accretion, this radius is called the Alfv\'en radius and is given by:
\begin{equation}
\label{Alfven}
R_A = \left(\frac{\mu^4}{2 G M_{NS} \dot M^2}\right)^{1/7} \sim 3.7 \times 10^6 \mu_{26}^{4/7} \dot M_{-10}^{-2/7} (M/M_\odot)^{-1/7}\, \rm cm
\end{equation}
where $\mu_{26}$ is the magnetic moment ($B R^3$) in units of $10^{26}$ Gauss cm$^3$ and $\dot M_{-10}$ is the mass accretion rate in units of $10^{-10}\, M_\odot$ yr$^{-1}$. It is easy to deduce that, in order to have a magnetospheric radius larger than the NS radius for a mass accretion rate of about $10\%$ of the Eddington limit, the magnetic field should be higher than $10^8$ Gauss. In this case, under the hypothesis of magnetic axis not aligned with the NS spin axis, accretion onto the polar caps can produce a lighthouse signal visible as X-ray (accretion-powered) pulsations, which give us a direct measure of the NS spin period.
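To illustrate the order-of-magnitude statement above (a back-of-the-envelope sketch with parameter values of our own choosing), one can evaluate eq.~(\ref{Alfven}) for a mass accretion rate of about $10\%$ of the Eddington value:
\begin{verbatim}
G, M_sun = 6.674e-8, 1.989e33                 # cgs
M, R_ns  = 1.4 * M_sun, 1.0e6                 # NS mass and radius [cm]
Mdot     = 0.1 * 1.3e18                       # ~10% of the Eddington rate [g/s]

for B in (1e7, 1e8, 1e9):                     # surface magnetic field strengths [G]
    mu  = B * R_ns ** 3                       # magnetic moment [G cm^3]
    R_A = (mu ** 4 / (2 * G * M * Mdot ** 2)) ** (1.0 / 7.0)
    print(f"B = {B:.0e} G  ->  R_A = {R_A:.2e} cm")

# B = 1e7 G gives R_A ~ 4e5 cm < R_ns, while B of order 1e8 G or more pushes R_A
# above the stellar radius (~1.5e6 cm), so the field can truncate the accretion flow.
\end{verbatim}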
For disk-fed accretion flows, the ram pressure of matter is concentrated in the disk plane, allowing it to penetrate further the NS magnetosphere and reducing the magnetospheric radius, $r_m$, with respect to the Alfv\'en radius, by a factor $\sim$ 0.3 -- 0.5 (see e.g. \cite{Ghosh1991,Burderi1998}).
The interaction of the accretion flow with the NS magnetosphere allows an exchange of angular momentum between the accreting matter and the NS, which results in a spin-up or spin-down of the NS (see e.g. \cite{Ghosh1979} for a first study of the torques exerted by the accreting matter onto the NS). Assuming that the inner accretion disk is truncated at the magnetospheric radius and defining the co-rotation radius $r_{CO}$ as the radius at which the Keplerian angular velocity of the disk, $\omega_K(r)$, matches the NS angular velocity $\Omega_0$, that is $r_{CO} = (G M_{NS} / \Omega^2_0)^{1/3} \sim 1.5 \times 10^6 (M / M_\odot)^{1/3} P_{ms}^{2/3}$ cm (where $P_{ms}$ is the NS spin period in milliseconds), we can envisage the following three possibilities:
i) $r_m < r_{CO}$, i.e.\ the inner disk rotates faster than the magnetosphere and exerts a positive torque spinning up the NS. In this case the spin-up torque is given, at zeroth order, by $2 \pi I \dot \nu = \dot{M} (G M_{NS} r_m)^{1/2}$, i.e.\ by the mass accretion rate onto the NS times the specific angular momentum at the magnetospheric radius (the magnetospheric radius itself depends weakly on the mass accretion rate, as $r_m \propto \dot M^{-2/7}$). According to \cite{Ghosh1979}, the rate of change of the NS period is:
\begin{equation}
\frac{\dot P}{P} = -3 \times 10^{-8} f \left(\frac{P}{1\, {\rm ms}}\right) \left(\frac{L_X}{10^{37}\, {\rm erg\, s^{-1}}}\right)^{6/7}\, {\rm yr^{-1}},
\end{equation}
where the dimensionless parameter $f$ is expected to be of the order of unity. This demonstrates that in the LMXB phase the NS can be efficiently spun up by accretion torques within its lifetime. Moreover, it can be shown that, for a slowly rotating, low-magnetic-field NS, it is enough to accrete no more than $\sim$ 0.1 -- 0.2 $M_\odot$ to spin up the NS to millisecond periods (see e.g. \cite{Burderi1999}), unless the mass transfer is highly non-conservative (an order-of-magnitude check of this figure is sketched after case iii below). Hence, for a NS it is in principle possible to reach mass-shedding spin periods before accreting enough mass to collapse into a black hole.
ii) $r_m > r_{CO}$, i.e.\ the Keplerian velocity at the inner accretion disk is lower than the angular velocity of the NS, which causes a spin-down torque onto the NS. Indeed, in this case, the centrifugal barrier should prevent matter from penetrating the magnetosphere, giving the so-called {\it propeller effect}. However, magneto-hydrodynamic simulations \cite{Romanova2005} suggest that this is true only when the magnetospheric radius is very large compared to $r_{CO}$ ($r_m \gg r_{CO}$, strong propeller regime); otherwise matter can still (at least in part) penetrate the magnetosphere and accrete onto the NS (weak propeller regime), allowing the possibility of observing spin-down during (low-rate) accretion phases. The latter may also be favoured by some threading of the magnetic field lines by the accretion disk beyond the co-rotation radius (see e.g. \cite{Wang87, Rappaport2004, Kluzniak2007}, and references therein) when magnetic field lines are not completely shielded by current sheets at the magnetospheric radius. In this case, threading of the magnetic field can result in both a spin-up (due to threading inside $r_{CO}$) and a spin-down (due to threading outside $r_{CO}$), and the balance of the two, plus the material torque, gives the net torque exerted onto the NS. The possibility of a spin-down of the NS during accretion phases for fast rotators has been studied by \cite{Rappaport2004, Kluzniak2007}; these authors argue that the accretion disk structure around a fast pulsar will adjust itself so that the inner edge of the disk, also known as the truncation radius, will remain fixed near $r_{CO}$ while accretion continues. In this case, the net torque onto the NS is given by the accretion torque of matter captured at the co-rotation radius decreased by a spin-down torque due to the magnetic field drag on the accretion disk, which, to first order of approximation, can be expressed as $\mu^2 / (9 r_{CO}^3)$, resulting in the net torque:
\begin{equation}
\label{torque}
\tau_{NS}= 2\pi I \dot{\nu}_{NS} = \dot M (G M_{NS} r_{CO})^{1/2} - \frac{\mu^2}{9 r_{CO}^3}.
\end{equation}
iii) $r_m \sim r_{CO}$; in this case matter loaded onto the magnetic field lines will have the same angular velocity as the NS and no net torque is expected on the NS, meaning that the NS will be at spin equilibrium. In this case, the equilibrium period is given by:
\begin{equation}
\label{Peq}
P_{eq} = 0.5 \mu_{26}^{6/7} L_{37}^{-3/7} R_6^{-3/7} (M/M_\odot)^{-2/7}\, \rm ms,
\end{equation}
where $R_6$ is the NS radius in units of $10^6$ cm. This means that a NS can in principle reach a spin equilibrium period shorter than a millisecond (see Sec.~3 of Chapter~7 for similar discussions based on the changes of state of MSPs). However, the maximum spin frequency that a NS can attain also depends on its Equation of State (EoS, see e.g.\ \cite{Ozel2016} for a review), which sets the mass-shedding spin limit depending on the mass-radius relation. The fastest-spinning NS known to date is PSR J1748-2446ad, a rotation-powered pulsar spinning at 716 Hz \cite{Hessels2006}. This spin frequency is, however, not high enough to put strong constraints on the NS EoS, and it is not clear yet whether other mechanisms (e.g.\ gravitational-wave emission, a relatively large magnetic field, observational bias, and so on) can be responsible for the lack of ultra-fast spinning NSs (see e.g. \cite{Burderi2001,Patruno2017b,Haskell2018}).
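Two order-of-magnitude sketches in Python may help to fix the numbers used in the discussion above (all values are fiducial assumptions: $I = 10^{45}$ g cm$^2$, $R_{NS} = 10$ km, $M_{NS} = 1.4\, M_\odot$). The first estimates the mass that must be accreted to spin a slow NS up to a $\sim 2$ ms period, assuming matter carries the Keplerian specific angular momentum at $r \sim r_{CO}$ (the check referred to in case i); the second evaluates the equilibrium period of eq.~(\ref{Peq}) for representative AMXP parameters.
\begin{verbatim}
import math

G, M_sun = 6.674e-8, 1.989e33
M_NS, I = 1.4 * M_sun, 1e45                # fiducial NS mass and moment of inertia (cgs)

# (a) Mass needed to spin the NS up to P = 2 ms
P = 2e-3                                   # target spin period (s)
r_co = (G * M_NS * P**2 / (4 * math.pi**2))**(1 / 3)
l_kep = math.sqrt(G * M_NS * r_co)         # Keplerian specific angular momentum at r_CO
dM = I * (2 * math.pi / P) / l_kep
print(f"r_CO ~ {r_co / 1e5:.0f} km, Delta M ~ {dM / M_sun:.2f} Msun")

# (b) Equilibrium period from eq. (Peq) for a 10-km, 1.4 Msun NS
def P_eq_ms(mu26, L37, R6=1.0, M=1.4):
    return 0.5 * mu26**(6/7) * L37**(-3/7) * R6**(-3/7) * M**(-2/7)

for B8 in (1.0, 3.0):                      # field in units of 1e8 G (so mu26 = B8 for R6 = 1)
    print(f"B = {B8:.0f}e8 G, L = 1e37 erg/s -> P_eq ~ {P_eq_ms(B8, 1.0):.2f} ms")
\end{verbatim}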
\section{The discovery of Accreting Millisecond X-ray Pulsars: the missing link in the recycling scenario}
From what is discussed above, it appears clear that a NS can be spun up to millisecond periods or below, depending on the constraints imposed by the EoS of ultra-dense matter, during the accretion phase in a LMXB. However, until 1998, no LMXB was found to show any coherent pulsation at such short periods. The fact that the vast majority of LMXBs do not show coherent pulsations is still a fascinating enigma. Several possible explanations have been invoked to interpret this fact, but none of them is fully satisfactory (see also \cite{Patruno2012} and references therein, for further discussion of this issue). Thanks to the large effective area ($\sim 6500\, cm^2$) and good timing capabilities ($\sim 1\, \mu s$) of the NASA satellite Rossi X-ray Timing Explorer ({\em RXTE\xspace}{}), in 1998 \cite{Wijnands1998} discovered the first LMXB, {\rm SAX~J1808.4$-$3658\xspace}, showing coherent pulsations (at about $401$ Hz). This was the first direct confirmation of the recycling scenario, since it demonstrated that LMXBs can indeed host a fast-spinning NS. Doppler effects visible in the spin period of the pulsar revealed the $\sim 2$ h orbital period of the system \cite{Chakrabarty1998}.
The final confirmation of the recycling scenario arrived only in 2013, when \cite{Papitto2013b} discovered a transient system, IGR~J18245-2452, showing accretion-powered pulsations during the X-ray outburst and rotation-powered radio pulsations during X-ray quiescence. This source, which is one of the members of the so-called {\it transitional} MSP class, provides direct evidence that, when accretion stops, the rotation-powered pulsar mechanism can resume on a short timescale.
{\rm SAX~J1808.4$-$3658\xspace}, first discovered in 1996 by the Wide Field Camera (WFC) on board the X-ray satellite BeppoSAX, is a transient system, which spends most of the time in quiescence (with an X-ray luminosity of a few $10^{31}$ erg s$^{-1}$, \cite{Campana2004}) and shows month-long X-ray outbursts every $\sim3$ yr, during which it reaches an X-ray luminosity in the range $10^{36}$ -- $10^{37}$ erg s$^{-1}$. We now know about two dozen of these systems, belonging to the class of Accreting Millisecond X-ray Pulsars (hereafter AMXPs), most of them discovered by {\em RXTE\xspace}{} and the ESA satellite {\em XMM-Newton\xspace}{}, and more recently by {\em NuSTAR\xspace}\ and {\em NICER\xspace}{}.
All of them are transient systems, although with very different transient behaviour (see Table 2 in \cite{2019A&A...627A.125M} for an overview). X-ray outbursts usually last from a few days to less than three months. Most of the AMXPs have shown just one outburst since their discovery, while a few sources show recurrent outbursts. The shortest outburst recurrence time is about a month, registered for the globular cluster source NGC 6440 X-2, with an outburst duration of less than $4-5$ days, whereas the longest outburst has been observed from HETE J1900.1-2455 and lasted about 10 years (up to late 2015, when the source returned to quiescence, \cite{Degenaar2017b}).
Another peculiar behaviour is the intermittency of the pulsations, which is important because understanding this phenomenon could give insights into the lack of X-ray pulsations in the large majority of NS LMXBs. This phenomenon was observed for the first time in the AMXP HETE J1900.1-2455, which went into X-ray outburst in 2005 and showed X-ray pulsations at 377 Hz. However, after the first 20 days of the outburst, pulsations became intermittent for about 2.5 yr, and then disappeared with very stringent upper limits on the pulsed fraction ($< 0.07\%$; \cite{Patruno2012a}). The most peculiar behaviour was observed from Aql X-1, a transient LMXB showing regular outbursts more or less every 0.5 -- 1 year (see e.g. \cite{Campana2013}); it showed coherent pulsations in only one 150-s data segment out of a total exposure time of $\sim1.5$ Ms from more than 10 years of {\em RXTE\xspace}{} monitoring \cite{Casella2008}. Another AMXP showing intermittency of pulsations is SAX J1748.9-2021, where pulsations were detected sporadically in several data segments and in three out of the four outbursts observed from the source (see e.g. \cite{Patruno2009}). Note that these AMXPs may have a long-term average mass accretion rate higher than the other AMXPs. To explain this {\it intermittent} behaviour, it has been proposed that the accreting matter could screen the NS magnetic field, weakening it by orders of magnitude on a timescale of a few hundred days and hampering the possibility of effectively channelling the accretion flow towards the NS polar caps (e.g. \cite{Patruno2012a}). However, it is not yet clear whether this hypothesis can explain all the phenomenology observed in AMXPs, and more observations and theoretical efforts are needed to reach a satisfactory explanation of this puzzling behaviour.
In Table~\ref{tab:lmxbs} we show the main properties of the AMXPs known to date. The following sections will be dedicated to the description of the main results obtained to date on the spectral and timing properties of this class of sources.
\begin{table}
\caption{Accreting X-ray Pulsars in Low Mass X-ray Binaries.}
\scriptsize
\begin{center}
\begin{tabular}{lllllll}
\hline
\hline
Source & $\nu_{s}/P$ & $P_{\rm orb}$ & $f_{x}$ & $M_{c,min}$ & Companion & Ref.\\
& (Hz)/(ms) & (hr) & ($M_{\odot}$) & ($M_{\odot}$) & Type & \\
\hline
\textbf{Accreting Millisecond X-ray Pulsars}\\
\hline
Aql X-1 & 550 (1.8) & 18.95 & $1.4\times 10^{-2}$ & 0.56 & MS & \cite{Casella2008,MataSanchez2017}\\
IGR J17591-2342 & 527 (1.9) & 8.80 & $1.5\times 10^{-2}$ & 0.37 & MS & \cite{Sanna2018c}\\
Swift J1749.4-2807 & 518 (1.9) & 8.82 & $5.5\times 10^{-2}$ & 0.59 & MS & \cite{Altamirano2011,DAvanzo2011}\\
SAX J1748.9-2021 & 442 (2.3) & 8.77 & $4.8\times 10^{-4}$ & 0.1 & MS& \cite{Altamirano2008,Cadelano2017}\\
IGR J17498-2921 & 401 (2.5) & 3.84 & $2.0\times10^{-3}$ & 0.17 & MS & \cite{Papitto2011b}\\
XTE J1814-338 & 314 (3.2) & 4.27 & $2.0\times 10^{-3}$ & 0.17 & MS & \cite{Markwardt2003,Wang2017}\\
IGR J18245-2452 & 254 (3.9) & 11.03 & $2.3\times 10^{-3}$ & 0.17 & MS & \cite{Papitto2013b}\\
IGR J17511-3057 & 245 (4.1) & 3.47 & $1.1\times 10^{-3}$ & 0.13 & MS & \cite{Papitto2010}\\
\hline
IGR J00291+5934 & 599 (1.7) & 2.46 & $2.8\times 10^{-5}$ & 0.039 & BD & \cite{Galloway2005}\\
IGR J17379-3747 & 468 (2.1) & 1.88 & $8\times 10^{-5}$ & 0.056 & BD & \cite{Sanna2018c}\\
SAX J1808.4-3658 & 401 (2.5) & 2.01 & $3.8\times 10^{-5}$ & 0.043 & BD & \cite{Wijnands1998,Wang2013}\\
HETE J1900.1-2455& 377 (2.7) & 1.39 & $2.0\times 10^{-6}$ & 0.016 & BD & \cite{Kaaret2006,Elebert2008}\\
\hline
XTE J1751-305 & 435 (2.3) & 0.71 & $1.3\times 10^{-6}$ & 0.014 & He WD & \cite{Markwardt2002,DAvanzo2009}\\
MAXI J0911-655 & 340 (2.9) & 0.74 & $6.2\times 10^{-6}$ & 0.024 & He WD? & \cite{Sanna2017a}\\
NGC6440 X-2 & 206 (4.8) & 0.95 & $1.6\times 10^{-7}$ & 0.0067 & He WD & \cite{Altamirano2010}\\
Swift J1756.9-2508 & 182 (5.5) & 0.91 & $1.6\times 10^{-7}$ & 0.007 & He WD & \cite{Krimm2007}\\
IGR J17062-6143 & 164 (6.1) & 0.63 & $9.1\times 10^{-8}$ & 0.006 & He WD? & \cite{Strohmayer2017}\\
IGR J16597-3704 & 105 (9.5) & 0.77 & 1.2$\times 10^{-7}$ & 0.006 & He WD & \cite{Sanna2018a}\\
\hline
XTE J0929-314 & 185 (5.4) & 0.73 & $2.9\times 10^{-7}$ & 0.0083 & C/O WD & \cite{Galloway2002,Giles2005}\\
XTE J1807-294 & 190 (5.3) & 0.67 & $1.5\times 10^{-7}$ & 0.0066 & C/O WD & \cite{Campana2003,DAvanzo2009} \\
\hline
\hline
\end{tabular}\\
\end{center}
$\nu_{s}$ is the spin frequency ($P$ the corresponding spin period), $P_{\rm orb}$ the orbital period, $f_{x}$
is the X-ray mass function, and $M_{c,min}$ is the minimum companion mass, calculated for an inclination of the
binary system of $90^\circ$ ($\sin i =1$) and for an assumed NS mass of 1.4 M$_\odot$.
The companion types are: He WD = helium White Dwarf, C/O WD = carbon-oxygen White Dwarf, BD = Brown Dwarf, MS = Main Sequence star.\newline
$^{b}$ Binary with parameters that are still compatible with an intermediate/high mass donor.\newline
Adapted and updated from \cite{Patruno2012a}.
\label{tab:lmxbs}
\end{table}
\section{Timing and Spectral Properties of AMXPs}
\subsection{Spectral properties}
In the vast majority of the AMXPs, the X-ray luminosity during outburst remains below $10\%$ of the Eddington luminosity, and the spectra do not show transitions between hard and soft spectral states, as usually happens for non-pulsating LMXBs (harbouring a NS or a black hole, see e.g. \cite{Done2007}). For this reason, AMXPs are often referred to as hard X-ray transients. Hence, their spectra are quite similar to the spectra usually observed for NS LMXBs in the hard state, with little spectral evolution during the X-ray outburst. In particular, the X-ray continuum is composed of one or two blackbody-like components and an unsaturated Comptonization component, usually with cutoff energies (corresponding to the electron temperature of the Comptonizing cloud, often called the {\it corona}) of tens of keV \cite{Gierlinski2002,Gierlinski2005,Poutanen2006}. In this case, the presence of a {\it reflection} component, produced by the hard Comptonization photons impinging on the cold accretion disk, is expected. This reflection component usually contains discrete features, such as the fluorescence iron line at 6.4 -- 6.7 keV (depending on the iron ionization state), which are smeared by Doppler and relativistic effects due to the large velocity of matter in the inner accretion disk (see the right panel of Fig.~\ref{fig:spec}, where these spectral components are indicated). The precise modelling of these features may give information on some important physical parameters, such as the ionization state of matter in the inner accretion disk, the inclination of the system with respect to the line of sight, the radius at which the inner accretion disk is truncated, the outer radius of the emitting region in the disk, and the index $q$ of the emissivity law in the disk, which scales as $\propto r^{-q}$.
The inner disk radius is an important parameter since it may be used to obtain an estimate of the magnetospheric radius, which can be compared to the co-rotation radius of the source in order to test the accretion torque paradigm described above. Together with discrete features (emission lines and absorption edges), the hard photons impinging on the disk are scattered by Thomson/(direct) Compton effect, generating a continuum spectrum (with a shape similar to the primary Comptonization spectrum) which is usually evident as an excess of emission between 10 and 30 keV, named the Compton hump. The spectral shape of this continuum is also sensitive to the inclination angle of the system and to the ionization state of matter in the disk. The latter is measured through the parameter $\xi = L_X/(n_e r^2)$, where $L_X$ is the bolometric X-ray luminosity of the ionizing continuum, $n_e$ is the electron density in the disk atmosphere and $r$ is the distance of the disk from the center of the system. For high values of the ionization parameter $\xi$, photoelectric absorption of soft X-rays in the disk is suppressed and this results in a strong reflection continuum, which increases at soft X-ray energies instead of decreasing.
Most, but not all, of the AMXPs have been observed with moderate/high resolution instruments in order to perform a broad-band spectral analysis and look for reflection features. In the following we describe the main spectral results obtained for the available sample of AMXPs, with particular attention to reflection features and the constraints that can be inferred on the inner accretion flow.
Indeed, a relatively small inner disk radius is implied for most of the AMXPs for which a spectral analysis has been performed and a broad iron line has been detected in moderately high resolution spectra.
The AMXP IGR~J17511-3057, observed by {\em XMM-Newton\xspace}{} for 70 ks and by {\em RXTE\xspace}{} \cite{Papitto2010}, shows both a broad iron line and the Compton hump at $\sim 30$ keV. In this case, the inner disc radius was constrained to be $\ge 40$ km from the NS center, assuming a $1.4\, M_\odot$ NS, with an inclination angle between $38^\circ$ and $68^\circ$ (see also \cite{Papitto2016}).
The AMXP and transitional pulsar IGR~J18245-2452, observed by {\em XMM-Newton\xspace}{} \cite{Papitto2013b}, showed a broad iron line at 6.7 keV (identified as K$\alpha$ emission from Fe XXV) with a width of $\sim 1.6$ keV, corresponding to R$_{in} \simeq 17.5$ R$_g$ (where $R_g = G M_{NS}/c^2$ is the gravitational radius) or $\sim 36.7$ km for a $1.4\, M_\odot$ NS. For comparison, the inner disk radius derived from the blackbody component was quite similar, $28 \pm 5$ km. The (intermittent) AMXP HETE J1900.1-2455, observed by {\em XMM-Newton\xspace}{} for $\sim 65$ ks \cite{Papitto2013a}, showed a broad iron line at 6.6 keV (Fe XXIII-XXV) and an intense and broad line at $\sim 0.98$ keV, visible both in the pn and in the RGS spectrum, compatible with being produced in the same disk region (see Fig.~\ref{fig:spec}, right panel). In this case, the inner disc radius was estimated to be $25 \pm 15\, R_g$, with an inclination angle ranging between $27^\circ$ and $34^\circ$.
A moderately broad, neutral Fe emission line was detected during the 2015 outburst of IGR~J00291+5934, observed by {\em XMM-Newton\xspace}{} and {\em NuSTAR\xspace}{} \cite{Sanna2017d}. Fitted with a Gaussian profile, the line centroid was at an energy of $6.37 \pm 0.04$ keV with $\sigma = 80 \pm 70$ eV, while, using a {\it diskline} profile, the line parameters were poorly constrained. The newly discovered AMXP MAXI J0911-655, observed by {\em XMM-Newton\xspace}{} and {\em NuSTAR\xspace}{} \cite{Sanna2017d}, shows a weak, marginally significant and relatively narrow emission line in the range $6.5-6.6$ keV, modelled with a Gaussian profile with $\sigma$ ranging between 0.02 and 0.2 keV, identified with a K$\alpha$ transition from moderately-to-highly ionized iron.
\begin{figure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[angle=-90,width=1.1\linewidth]{nus_xmm_relxill_diskl_new}
\label{fig:sfig1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{hete_spec_croped.png}
\label{fig:sfig2}
\end{subfigure}
\caption{\textit{Left Panel:} Broad-band spectrum of the AMXP {\rm SAX~J1808.4$-$3658\xspace}{} during its 2015 outburst as observed by {\em XMM-Newton\xspace}{} (black and red points) and {\em NuSTAR\xspace}{} (green points). The model includes a blackbody component, the {\it relxillCP} component, which includes the Comptonization continuum and the smeared reflection component, and three low-energy features modelled with {\it disklines}. \textit{Right Panel:} Broad-band spectrum of the AMXP HETE J1900.1-2455 in outburst as observed by {\em XMM-Newton\xspace}{} (blue, green and red points) and {\em RXTE\xspace}{} (cyan and magenta points). The single spectral components used to fit the X-ray spectrum are indicated in the figure. The Gaussian feature at 0.98 keV may be identified with Fe L$\alpha$ or He-like Ne K$\alpha$ transition. [Figures from \cite{DiSalvo2019,Papitto2013a}]}
\label{fig:spec}
\end{figure}
The (intermittent) AMXP SAX J1748.9-2021, observed by {\em XMM-Newton\xspace}{} for $\sim 115$ ks and by {\em INTEGRAL\xspace}{} \cite{Pintore2016}, was caught at a relatively high luminosity of $\sim 5 \times 10^{37}$ erg/s, corresponding to $\sim 25\%$ of the Eddington limit for a $1.4\, M_\odot$ NS, and, exceptionally for an AMXP, showed a spectrum compatible with a soft state. The broad-band spectrum is in fact dominated by a cold thermal Comptonization component (electron temperature of $\sim 2$ keV) with an additional hard X-ray emission described by a power law (photon index $\Gamma \sim 2.3$), typically detected in LMXBs in the soft state (see e.g. \cite{DiSalvo2000}). In addition, a number of broad (Gaussian $\sigma = 0.1 - 0.4$ keV) emission features, likely associated with reflection processes, have been observed in the {\em XMM-Newton\xspace}{} spectrum. A broad iron line was observed at an energy of $\sim 6.7-6.8$ keV, consistent with a Fe XXV K$\alpha$ transition produced in the disc at a distance of $\sim 20-43\, R_g$ ($\sim 42 - 90$ km), with an inclination angle of $\sim 38-45^\circ$. The other broad emission lines may be associated with K-shell emission of S XVI (2.62 keV), Ar XVIII (3.32 keV) and Ca XX or Ca XIX (4.11 keV or 3.90 keV, respectively), and are compatible with coming from the same emission region as the iron line.
High-quality X-ray spectra of {\rm SAX~J1808.4$-$3658\xspace}{} were obtained during the 2008 outburst with {\em XMM-Newton\xspace}{} \cite{Papitto2009} and {\em Suzaku\xspace}{} \cite{Cackett2009} and during the 2015 outburst with {\em XMM-Newton\xspace}{} (which observed the source at the peak of the outburst) and {\em NuSTAR\xspace}{} which observed the source four days later \cite{DiSalvo2019}. The 2015 spectrum of {\rm SAX~J1808.4$-$3658\xspace}{} taken with {\em NuSTAR\xspace}{} was quite similar to the 2008 spectra; the continuum emission was modelled with one or two blackbody-like components and a hard Comptonization component with an electron temperature $> 40$ keV. On the other hand, the 2015 {\em XMM-Newton\xspace}{} spectrum was surprisingly much softer, with an electron temperature below 10 keV and a much colder blackbody component (corresponding to a large radius, $> 100$ km, for the emitting surface, \cite{DiSalvo2019}).
In all the cases, a reflection component was also required to model both the broad iron line and the Compton hump observed on top of the continuum (the composite broad-band spectrum of {\rm SAX~J1808.4$-$3658\xspace}{} observed during the 2015 outburst is shown in Fig.~\ref{fig:spec}, left panel). All the smearing parameters were quite similar in the 2008 and 2015 spectra, with the exception of the ionization parameter, much higher during the 2015 {\em XMM-Newton\xspace}{} observation ($\log \xi \sim 3.5$),
which also showed broad emission lines from highly ionized elements (S XVI, Ar XVIII and Ca XIX-XX) at low energies. In particular, the inner disk radius was $\sim 10\, R_g$, corresponding to about 20 km; for comparison the co-rotation radius of the system is $31 m_{1.4}^{1/3}$ km, where $m_{1.4}$ is the NS mass in units of $1.4\, M_\odot$, indicating that the disk is truncated inside the co-rotation radius during the outburst, as expected from the observed timing properties of the source.
However, the inclination angle is required to be high (usually values $> 60^\circ$ are required; the best constraint comes from the 2015 {\em XMM-Newton\xspace}{} spectrum, where $i = 58^\circ-64^\circ$, \cite{DiSalvo2019}). This result is in agreement with the evidence, in the 2015 {\em XMM-Newton\xspace}{} spectrum, of discrete absorption features, namely an absorption edge at $\sim 7.4$ keV from neutral or mildly ionized iron and at least two absorption lines, possibly from K transitions of highly ionized (He-like) Ne IX (at 0.947 keV) and Mg XI (at 1.372 keV). These lines are relatively broad (implying a velocity dispersion of $\sigma_v \sim 0.01c$) and blue-shifted at a velocity of a few percent of the speed of light \cite{DiSalvo2019}. If confirmed, these lines may suggest the presence of a weakly relativistic outflowing wind towards the observer. A high inclination angle is also compatible with other estimates (see e.g. \cite{Ibragimov2009,Deloye2008,Bildsten2001,Wang2013}). However, high values of the inclination angle of the system look at odds with optical estimates of the radial velocity of the companion star \cite{Elebert2008,Cornelisse2009}, since they imply quite low values for the NS mass, $M_{NS} \sim 0.5-0.8\, M_\odot$. These results may indicate some problem with the interpretation of the reflection component and/or the need for more precise measurements of the radial velocity of the companion star.
It is noteworthy that reflection features are not always observed in AMXPs. Indeed, no evidence of iron emission lines or reflection humps has been reported for IGR~J16597-3704 \cite{Sanna2018a}, observed by {\em Swift\xspace}{} and {\em NuSTAR\xspace}{}, IGR~J17379-3747 \cite{Sanna2018b}, observed by {\em XMM-Newton\xspace}{}, XTE J1807-294 \cite{Falanga2005a}, observed by {\em RXTE\xspace}{}, {\em XMM-Newton\xspace}{} and {\em INTEGRAL\xspace}{}, and XTE J1751-305 \cite{Miller2003}, observed by {\em XMM-Newton\xspace}{}. Similar results have been reported for the 2018 outburst of the AMXP Swift J1756.9-2508, monitored by several satellites such as {\em NICER\xspace}{}, {\em Swift\xspace}{}, {\em XMM-Newton\xspace}{}, {\em NuSTAR\xspace}{} and {\em INTEGRAL\xspace}{}. Evidence of iron emission lines in the 6 -- 7 keV band has, however, been reported from the analysis of {\em RXTE\xspace}{} observations of the source during its 2007 and 2009 outbursts \cite{Patruno2010c}.
\subsection{Short-term variations of the spin during outbursts}
Accretion torque theories can be tested by studying the spin variations of AMXPs during accretion phases. These studies can provide valuable information on the mass accretion rate and magnetic field of the NS in these systems, as well as on their spin evolution. Coherent timing has been performed on several sources of the sample, with sometimes controversial results (see \cite{Campana2018} for a recent review). Some sources seem to show spin-up during outbursts (e.g.\ IGR~J00291+5934, XTE J1751-305, XTE J1807-294, IGR~J17511-3057), while other sources seem to show spin-down even during accretion phases (e.g. XTE J0929-314, XTE J1814-338, IGR~J17498-2921 and IGR~J17591-2342). Although some AMXPs show pulse phase delays distributed along a second-order polynomial, indicating an almost constant spin frequency derivative, other sources show strong timing noise (e.g. SAX J1808.4-3658, HETE J1900.1-2455), sometimes correlated with sharp variations of the X-ray flux, which can hamper any clear measurement of the spin derivative.
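The phase-coherent timing procedure referred to above can be illustrated with the following minimal Python sketch (illustrative function names and simulated data only; real analyses fit the phases of pulse profiles folded at a trial ephemeris, after correcting for the orbital motion): the pulse-phase delays are fitted with a polynomial in time, and the quadratic term measures the mean spin frequency derivative, with the sign depending on the adopted phase convention.
\begin{verbatim}
import numpy as np

def fit_spin_derivative(t, dphi):
    """t: time (s) from the reference epoch; dphi: phase delays (cycles).
    Model: dphi = phi0 + dnu * t + 0.5 * nudot * t**2."""
    c2, c1, c0 = np.polyfit(t, dphi, 2)
    return c1, 2.0 * c2                    # correction to the folding frequency, nu_dot

# Simulated example: 30 days of data with nudot = -3e-13 Hz/s
rng = np.random.default_rng(0)
t = np.linspace(0.0, 30 * 86400.0, 60)
dphi = 0.1 + 1e-7 * t + 0.5 * (-3e-13) * t**2 + rng.normal(0.0, 0.01, t.size)
dnu, nudot = fit_spin_derivative(t, dphi)
print(f"dnu ~ {dnu:.2e} Hz, nudot ~ {nudot:.2e} Hz/s")
\end{verbatim}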
The first AMXP for which a spin derivative has been measured is the fastest spinning among these sources ($\sim 599$ Hz, in a 2.46 hr orbit), IGR~J00291+5934. It is now generally accepted that this source shows spin-up at a rate of $\sim (5-8) \times 10^{-13}$ Hz s$^{-1}$ \cite{Falanga2005b,Patruno2010,Hartman2011,Papitto2011c} (see Fig.~\ref{fig:tim_noise}, right panel). \cite{Burderi2007} attempted to fit the phase delays vs.\ time with physical models taking into account the observed decrease of the X-ray flux as a function of time during the X-ray outburst, with the aim of obtaining a reliable estimate of the mass accretion rate onto the compact object.
Because the X-ray flux, which is assumed to be a good tracer of the mass accretion rate, is observed to decrease along the outburst, this variation has to be taken into account in eq.~(\ref{torque}) in order to obtain the correct value of the mass accretion rate, and hence of the spin frequency derivative, at the beginning of the outburst, as well as its variation during the outburst. This approach has been successfully applied to the timing of the 2014 outburst of the so-called {\it bursting pulsar}, GRO J1744-28, an X-ray pulsar with a spin frequency of $2.14$ Hz in an $11.83$-day orbit around a $0.2\, M_\odot$ companion star. \cite{Sanna2017b} were able in this way to obtain a good fit of the pulse phase delays versus time, deriving a frequency spin-up of $\sim 9 \times 10^{-12}$ Hz/s and inferring a distance to the source between 3.4 and 4.1 kpc, assuming a disk viscous parameter $\alpha$ in the range 0.1-1.
In the case of IGR~J00291+5934, this technique gives a spin frequency derivative at the beginning of the outburst of $\dot \nu \sim 1.2(2) \times 10^{-12}$ Hz s$^{-1}$, corresponding to a mass accretion rate of $\sim 7 \times 10^{-9}\, M_\odot/yr$ and a peak bolometric luminosity of $\sim 7 \times 10^{37}$ erg/s for the 2004 outburst. This is at least one order of magnitude higher than the X-ray luminosity inferred from the observed X-ray flux, assuming a distance of 4.2 kpc. Once a direct, independent estimate of the distance to the source becomes available, it will be possible to test the $\dot M$ vs.\ X-ray luminosity relation, torque theories and/or the physical parameters of the NS in this system.
A recent example of an AMXP showing spin-down while accreting is given by IGR~J17591-2342. This source has been extensively monitored by {\em NICER\xspace}{} during its outburst, from 2018 August 15 up to 2018 October 17, for a total exposure time of $\sim101$ ks distributed over 37 dedicated observations. X-ray pulsations have been detected uninterruptedly for almost 60 days, allowing an accurate investigation of the NS spin frequency evolution. Phase-coherent timing analysis of the fundamental and second harmonic frequency components revealed a statistically significant negative frequency derivative $\dot{\nu}\sim -7\times 10^{-14}$ Hz/s \cite{Sanna2020c} (see, however, \cite{Kuiper2020} for different results from the timing analysis). Further analysis of the X-ray pulse phase evolution of IGR~J17591-2342, adopting a physical model that accounts for the material accretion torque as well as the magnetic threading of the accretion disc in regions where the Keplerian velocity is slower than the magnetosphere velocity, allows the NS magnetic field to be constrained to $B_{eq} = 2.8(3)\times10^8$ G \cite{Sanna2020c}.
A similar spin frequency evolution has been reported for the AMXPs IGR~J17498-2921 ($\dot{\nu}=-6.3(1.9)\times 10^{-14}$ Hz/s \cite{Papitto2011b}), XTE J1814-338 ($\dot{\nu}=-6.7(7)\times 10^{-14}$ Hz/s \cite{Papitto2007}), and XTE J0929-314 ($\dot{\nu}=-9.2(4)\times 10^{-14}$ Hz/s \cite{Galloway2002}).
These observations indicate that spin-down during accretion phases is possible and requires a magnetic threading of the accretion disk.
The best-studied, as well as most-discussed, source is certainly the first discovered AMXP, {\rm SAX~J1808.4$-$3658\xspace}{}. Unlike all the other AMXPs, this source shows X-ray outbursts almost regularly, every 2 -- 4 years. To date, eight outbursts have been observed from this source with instruments with high time resolution capabilities, each lasting about one month. The pulse phase evolution during the outburst shows strong timing noise, with phases going up and down without any clear correlation with flux variations, or remaining constant for a long time before jumping to another constant phase value (see e.g. \cite{Hartman2008,Hartman2009a, Hartman2009b}). In an attempt to gain information on the spin variations in this source, \cite{Burderi2006} analysed separately the fundamental and the second harmonic of the pulse profile during the 2002 outburst of the source, finding that, while the phases of the fundamental are dominated by timing noise (see Fig.~\ref{fig:tim_noise}, left panel), the second harmonic shows a more regular behaviour. This suggests that the phase jump in the fundamental (clearly visible in Fig.~\ref{fig:tim_noise}, left panel) is not related to an intrinsic spin variation (which would have affected the whole pulse profile), but is instead caused by a change of the shape of the pulse profile, leaving open the possibility that the harmonic is a better tracer of the NS spin. A similar behaviour of the second harmonic has also been observed in other AMXPs (e.g. \cite{Riggio2008,Riggio2011,Papitto2012}). Under this hypothesis, the fitting of the second harmonic phase delays reveals a spin-up at the beginning of the outburst of $\dot \nu = 4.4(8) \times 10^{-13}$ Hz s$^{-1}$, corresponding to a mass accretion rate of $\dot M \sim 1.8 \times 10^{-9}\, M_\odot$ yr$^{-1}$, and a constant spin-down of $\dot \nu_{sp} = -7.6(1.5) \times 10^{-14}$ Hz s$^{-1}$, dominating the phase delays at the end of the outburst. The mass accretion rate inferred from timing is only a factor of 2 larger than that implied by the X-ray bolometric luminosity at the beginning of the outburst, which is $\sim 10^{37}$ erg s$^{-1}$. The spin-down can be interpreted in terms of the threading of the accretion disk by the NS magnetic field outside the co-rotation radius, which, in agreement with expectations, appears to be more relevant at the end of the outburst, when the mass accretion rate significantly decreases. The derived magnetic field, $B = (3.5 \pm 0.5) \times 10^8$ G, is in perfect agreement with other, independent, constraints (see \cite{Burderi2006}, and references therein).
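As a hedged back-of-the-envelope check (fiducial $I = 10^{45}$ g cm$^2$, $R_{NS} = 10$ km, $M_{NS} = 1.4\, M_\odot$; not the full fit of \cite{Burderi2006}), a field of the quoted order follows from attributing the end-of-outburst spin-down entirely to the threading term of eq.~(\ref{torque}), i.e.\ $\mu^2 / (9 r_{CO}^3) = 2\pi I |\dot \nu_{sp}|$:
\begin{verbatim}
import math
# Sketch: field implied by the spin-down term of eq. (torque) for SAX J1808.4-3658
G, M_sun = 6.674e-8, 1.989e33
M_NS, I, R_NS = 1.4 * M_sun, 1e45, 1e6     # fiducial values (cgs)
nu, nudot_sd = 401.0, 7.6e-14              # spin frequency (Hz) and |spin-down| (Hz/s)

r_co = (G * M_NS / (2 * math.pi * nu)**2)**(1 / 3)
mu = math.sqrt(2 * math.pi * I * nudot_sd * 9 * r_co**3)
print(f"r_CO ~ {r_co / 1e5:.0f} km, B ~ mu / R_NS^3 ~ {mu / R_NS**3:.1e} G")
\end{verbatim}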
\begin{figure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{1808_phases}
\label{fig:sfig1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.05\linewidth]{00291_phases_2}
\label{fig:sfig2}
\end{subfigure}
\caption{Phase residuals of the fundamental frequency for the AMXP {\rm SAX~J1808.4$-$3658\xspace}{} (\textit{Left Panel}) and IGR~J00291+5934 (\textit{Right Panel}). [Figure adapted from \cite{Patruno2009d}]}
\label{fig:tim_noise}
\end{figure}
The latest outburst of {\rm SAX~J1808.4$-$3658\xspace}{} in 2019 was monitored with {\em NICER\xspace}{} for one month, for a total exposure of 355.4 ks. Timing analysis of these data showed that the pulse profile was dominated by the fundamental (the second harmonic was significantly detected only in a handful of intervals) and that the corresponding phase delays show a clear parabolic trend, typical of a spin-down, at a rate of $\dot{\nu} = -3.02(13)\times 10^{-13}$ Hz s$^{-1}$ \cite{Bult2020}. Since these phase shifts appear to be correlated with the source flux, the authors interpreted this trend in terms of hot-spot drifts on the stellar surface, driven by changes in the mass accretion rate.
Other recent results concern the phase-coherent timing analysis of the outbursts of the AMXPs Swift~J1756.9-2508 and IGR~J17379-3747, which allowed upper limits to be set on the spin frequency derivative of $|\dot{\nu}|<1.4\times 10^{-12}$ Hz/s \cite{Sanna2019,Bult2018b} and $-0.5\times 10^{-14}<\dot{\nu}<0.9\times 10^{-14}$ Hz/s \cite{Bult2020}, respectively.
\subsection{Long-term variations of the spin}
AMXPs for which more than one outburst has been observed with high time resolution instruments allow the long-term spin evolution to be derived by comparing the average spin frequency measured in each outburst. To date, eight AMXPs have been observed in more than one outburst: SAX J1808.4-3658, IGR~J00291+5934, XTE J1751-305, Swift J1756.9-2508, IGR~J17379-3747, IGR~J17511-3057, NGC 6440 X-2, and SAX J1748.9-2021 (although with relatively low S/N and short outburst duration for some of these sources, see Table~\ref{Tab2}).
The best constrained long-term spin evolution is obtained for {\rm SAX~J1808.4$-$3658\xspace}{} (see left panel of Fig.~\ref{fig:1808_sec}), for which the secular spin evolution has been measured over a 13-year time span (between 1998 and 2011), showing a constant long-term spin-down at a rate of $\sim -1 \times 10^{-15}$ Hz s$^{-1}$ (see \cite{Patruno2012}, and references therein). Because of the stability of the spin-down rate over the years, the most likely explanation appears to be loss of angular momentum via magnetic-dipole radiation, which is expected for a rapidly rotating NS with a magnetic field. The measured spin-down is consistent with a polar magnetic field of $(1.5 - 2.5) \times 10^8$ G, in agreement with other estimates. The spin frequency measured during the 2015 outburst had a large uncertainty because of the strong timing noise of the fundamental. Interestingly, the spin frequency measured using the phases of the second harmonic falls very close (less than $2 \sigma$) to the value predicted by the secular evolution (see \cite{Sanna2017c}). For the 2019 outburst, the second harmonic is significantly detected in only a few {\em NICER\xspace}{} snapshots and the exact value of the spin frequency inferred from the fundamental depends on the adopted timing solution. \cite{Bult2020} fitted the phase delays using a linear model (which leaves large residuals), a quadratic model (indicating a spin-down during the outburst), and a flux-adjusted model (under the hypothesis that phase variations with time originate from a hot spot drifting on the stellar surface, driven by changes in the mass accretion rate). The linear and flux-adjusted models give a spin frequency relatively close to the secular spin-down trend, while the quadratic model gives a frequency lying significantly above the trend (see Fig.~\ref{fig:1808_sec}, left panel). Considering the linear model (which provides the frequency value closest to the expected trend), the long-term evolution of the spin shows a modulation, around a constant spin-down behaviour, at the Earth's orbital period (right panel of Fig.~\ref{fig:1808_sec}), which is used to astrometrically refine the source coordinates.
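As a hedged sketch of the magneto-dipole estimate mentioned above (assuming a vacuum orthogonal rotator with fiducial $I = 10^{45}$ g cm$^2$ and $R_{NS} = 10$ km; the numerical coefficient changes for different magnetospheric models and inclinations), the secular spin-down of {\rm SAX~J1808.4$-$3658\xspace}{} translates into a field of the right order of magnitude:
\begin{verbatim}
import math
# Sketch: magneto-dipole field from the secular spin-down,
# using 2*pi*I*|nudot| = (2/3) * mu^2 * Omega^3 / c^3 (vacuum orthogonal rotator)
c, I, R_NS = 3.0e10, 1e45, 1e6             # fiducial values (cgs)
nu, nudot = 401.0, 1.0e-15                 # Hz, |Hz/s| secular spin-down

mu = math.sqrt(3 * I * c**3 * nudot / (8 * math.pi**2 * nu**3))
B_eq = mu / R_NS**3                        # equatorial field for mu = B * R^3
print(f"B_eq ~ {B_eq:.1e} G, B_polar ~ {2 * B_eq:.1e} G")
\end{verbatim}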
\begin{figure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{1808_secular_3}
\label{fig:sfig1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[angle=-90,width=1.3\linewidth]{1808_new_pos}
\label{fig:sfig2}
\end{subfigure}
\caption{\textit{Left Panel:} Secular spin frequency evolution of the AMXP {\rm SAX~J1808.4$-$3658\xspace}{}, calculated relative to the 1998 epoch. Black points represent measurements obtained with {\em RXTE\xspace}{}, while colored squares represent the {\em NICER\xspace}{} measurements obtained for the 2019 outburst of the source for three different models (see \cite{Bult2020} for more details). The solid line indicates the best-fit spin evolution model. \textit{Right Panel:} Spin frequency measurements of the AMXP {\rm SAX~J1808.4$-$3658\xspace}{} relative to the best-fit spin-down model, as a function of the Earth's ecliptic longitude. [Figure from \cite{Bult2020}]}
\label{fig:1808_sec}
\end{figure}
A long-term spin-down has also been measured for IGR~J00291+5934 between the 2004 and 2008 outbursts, at a rate of $-4.1(1.1) \times 10^{-15}$ Hz s$^{-1}$ \cite{Papitto2011c,Patruno2010,Hartman2011}, larger than that observed in {\rm SAX~J1808.4$-$3658\xspace}{}, as expected given that IGR~J00291+5934 spins at a higher frequency.
The less accurate spin measurement from the {\em XMM-Newton\xspace}{} observation of
its 2015 outburst could only constrain the spin-down since the
previous outburst to $|\dot{\nu}|<6\times 10^{-15}$~Hz~s$^{-1}$ (see
[113] and references therein).
If interpreted in terms of magneto-dipole emission, the measured spin-down translates into an estimate of the NS magnetic field of $(1.5-2) \times 10^8$ G. Another possibility is given by the spin-down torque associated with the emission of gravitational waves (GW), which is strongly dependent on the NS spin and has also been proposed as a limiting factor for the maximum spin frequency observed for a NS (to date 716 Hz, \cite{Hessels2006}). Assuming that the long-term spin-down observed in IGR~J00291+5934, the fastest spinning AMXP known to date, is due to this mechanism, the measurement of the average spin-down in this source translates into an upper limit on the average mass quadrupole moment of $Q \lesssim 2 \times 10^{36}$ g cm$^2$ \cite{Hartman2011}. Under this hypothesis, it is possible to predict that the long-term spin-down in IGR~J00291+5934 should be a factor 7.6 higher than in {\rm SAX~J1808.4$-$3658\xspace}{}. The large uncertainties on these measurements prevent this prediction from being tested at the moment, but it can be checked with future, high-quality, monitoring of the spin frequency in these systems.
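Under the stated hypothesis, a quadrupole of the same order as the quoted limit is obtained with the simple sketch below (fiducial $I = 10^{45}$ g cm$^2$; illustrative only), equating the measured spin-down torque to the gravitational-wave torque, $2\pi I |\dot\nu| = (32/5)\,(G/c^5)\, Q^2 \Omega^5$:
\begin{verbatim}
import math
# Sketch: mass quadrupole if the whole spin-down of IGR J00291+5934 were due to GW emission
G, c, I = 6.674e-8, 3.0e10, 1e45           # cgs; fiducial moment of inertia
nu, nudot = 599.0, 4.1e-15                 # Hz, |Hz/s| long-term spin-down

Omega = 2 * math.pi * nu
Q = math.sqrt(2 * math.pi * I * nudot * 5 * c**5 / (32 * G * Omega**5))
print(f"Q <~ {Q:.1e} g cm^2")
\end{verbatim}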
The long-term spin evolution has been constrained for a few other sources of the AMXP sample. After the discovery of X-ray pulsations during the 2018 outburst of IGR~J17379-3747, pulsations from this source were also found in the {\em RXTE\xspace}{} archival data of its 2004 and 2008 outbursts after applying the binary ephemeris. Combining the barycentric spin frequency values from the three outbursts, an upper limit on the secular spin derivative has been estimated, $-8.3\times10^{-13}$ Hz/s $<\dot{\nu}<1.1\times 10^{-12}$ Hz/s. This corresponds to an upper limit on the magnetic field strength of $B<2.8\times 10^9$ G, under the assumption of a NS radius of 10 km and an angle $\alpha\simeq 10^\circ$ between the magnetic hotspot and the rotational pole \cite{Sanna2018b}. Swift J1756.9-2508 has been detected in outburst three times (2007, 2009 and 2018) since its discovery, which allowed the detection of a long-term spin-down derivative of $-4.8(6)\times 10^{-16}$ Hz/s \cite{Sanna2019}, corresponding to a NS surface magnetic field of $1.5\times 10^8 < B_{eq} < 2.2\times 10^8$ G (consistent with the value reported by \cite{Mukherjee2015}).
\subsection{Long-term timing of the orbital period}
The study of the orbital evolution in Low Mass X-ray Binary systems is very important to constrain the evolutionary path leading to the formation of rotation-powered MSPs, and hence to obtain information on the progenitors of fast-rotating NSs and on the recycling scenario. It is worth noting, however, that the long-term changes of the orbital period of AMXPs discussed in this section reflect changes on timescales relatively short with respect to the secular evolution of the binary systems (see Chapter~9 for more details on the topic). Nonetheless, orbital evolution can in principle be useful to put constraints on alternative theories of gravity. In fact, since the difference in the orbital period evolution of binaries interpreted with General Relativity (GR) and with other theories of gravity (e.g. Brans-Dicke gravity) is related to the mass difference of the two members of the binary system \cite{Will2006}, these sources provide prime candidates for constraining deviations from GR \cite{Psaltis2008}. In this framework, AMXPs are the most promising candidates for an experimental test of these alternative theories, because the companion star is, in most cases, a very light white dwarf or even a brown dwarf \cite{Bildsten2001}, and the primary stars are millisecond pulsars with accurately determined orbital periods.
However, these studies require a large baseline (tens of years) of data to be able to constrain the orbital period derivative. Hence, one of the main difficulties is the limited number of AMXPs observed recurrently in X-ray outburst. To date, only eight AMXPs have had more than one outburst observed with high time resolution instruments since their discovery, and therefore only a few constraints on the orbital period derivative have been derived so far (see Table~\ref{Tab2}). Moreover, long-term orbital solutions sometimes show residuals that are complex and difficult to interpret. Understanding these complex orbital residuals is therefore of fundamental importance, since it would allow the orbital period evolution in these systems to be constrained, at least on a dynamical timescale, providing hints on their evolutionary paths or at least important information on the long-term dynamical behaviour of these systems.
Furthermore, the precise determination of the orbital period derivative caused by mass transfer may, in perspective, offer the possibility of constraining alternative theories of gravity.
The best constraint on the orbital evolution of AMXPs comes again from {\rm SAX~J1808.4$-$3658\xspace}{}, which has shown eight X-ray outbursts to date, allowing its orbital period to be followed over 21 years. As reported in the left panel of Fig.~\ref{fig:orb_evo}, for each outburst the time of passage of the NS through the ascending node ($T^*$, the parameter most sensitive to variations of the orbital period) can be derived and plotted versus time. The orbital residuals (with respect to a constant orbital period) were dominated by a clear parabolic trend up to the 2015 outburst, with residuals with respect to this trend of the order of a few seconds \cite{Sanna2017c}. Interpreting this parabolic trend as a constant orbital period derivative, the best-fit value is $\dot P_{orb} = 3.6(4) \times 10^{-12}$ s s$^{-1}$, implying a strong orbital expansion. The origin of the observed $\dot P_{orb}$ is still not fully understood, and different possible mechanisms have been proposed over the years (see e.g. \cite{DiSalvo2008,Hartman2008,Burderi2009,Patruno2012b,Patruno2016}). However, there is consensus on the fact that conservative mass transfer is not compatible with the observed value of $\dot P_{orb}$ for {\rm SAX~J1808.4$-$3658\xspace}{}. This can be easily demonstrated by estimating the mass-loss rate from the secondary star as a function of the observed orbital period derivative (see e.g. \cite{Burderi2009}), which implies a mass transfer rate of the order of $2 \times 10^{-9}\, M_\odot$ yr$^{-1}$. This mass transfer rate is much larger than the mass accretion rate onto the NS, considering that the source accretes matter for about a month every 2-4 yr with a bolometric luminosity at the peak of the outburst barely reaching $10^{37}$ erg s$^{-1}$ (corresponding to a maximum mass accretion rate of $\sim 10^{-9}\, M_\odot$ yr$^{-1}$). The average mass accretion rate over the 17 years from 1998 to 2015 is indeed about two orders of magnitude below the inferred mass transfer rate, $\sim 2 \times 10^{-11}\, M_\odot$ yr$^{-1}$.
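The quadratic ephemeris fit described here can be illustrated with a minimal Python sketch (illustrative function names and simulated data): the ascending-node passage times are modelled as $T(N) = T_0 + P_{orb} N + \frac{1}{2} P_{orb} \dot P_{orb} N^2$, where $N$ is the integer orbital cycle number, so that the curvature of the residuals measures $\dot P_{orb}$.
\begin{verbatim}
import numpy as np

def fit_orbital_ephemeris(N, T):
    """N: orbital cycle numbers; T: ascending-node passage times (s)."""
    c2, c1, c0 = np.polyfit(N, T, 2)
    P_orb = c1
    P_dot = 2.0 * c2 / P_orb
    return P_orb, P_dot

# Simulated data spanning ~20 yr for a ~2 h binary with Pdot_orb = 3.6e-12 s/s
rng = np.random.default_rng(1)
P0, Pdot_true = 7249.156, 3.6e-12
N = np.linspace(0, 20 * 3.156e7 / P0, 9).astype(int)
T = P0 * N + 0.5 * P0 * Pdot_true * N.astype(float)**2 + rng.normal(0.0, 1.0, N.size)
P_orb, P_dot = fit_orbital_ephemeris(N, T)
print(f"P_orb = {P_orb:.3f} s, Pdot_orb = {P_dot:.1e} s/s")
\end{verbatim}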
A non-conservative mass transfer can explain the large orbital period derivative assuming that the mass transfer rate is $\dot M \sim 10^{-9}\, M_\odot$ yr$^{-1}$, and that most of the transferred matter is expelled from the system, instead of being accreted onto the NS, with the specific angular momentum at the inner Lagrangian point (see \cite{DiSalvo2008,Burderi2009}). In this case, the non-conservative mass transfer may be a consequence of the so-called \textit{radio-ejection} model, extensively discussed by \cite{Burderi2001}, envisaging that a fraction of the matter transferred in the disc could be swept out by the (radiative and high-energy particle) pressure of the pulsar wind. Alternatively, the large orbital period derivative observed in {\rm SAX~J1808.4$-$3658\xspace}{} has been interpreted as the effect of short-term angular momentum exchange between the donor star and the orbit \cite{Hartman2009b,Patruno2012b}, resulting from variations in the spin of the companion star (holding the star out of synchronous rotation) caused by intense magnetic activity driven by the pulsar irradiation, the so-called Applegate \& Shaham mechanism (hereafter A\&S, \cite{Applegate1994}). In this case, the orbital period should oscillate, alternating epochs of orbital period increase and decrease, because of the gravitational quadrupole coupling to the orbit. However, according to this mechanism, the system should evolve towards longer orbital periods, because of mass and angular momentum loss, on a timescale of $10^8$ yr (for a 2-hr orbital period and a companion mass of $0.1-0.2\, M_\odot$), thus implying a strong orbital period derivative, similar to that inferred from the quadratic trend observed in {\rm SAX~J1808.4$-$3658\xspace}{}. In this framework, the orbital residuals in {\rm SAX~J1808.4$-$3658\xspace}{} up to 2015 may be interpreted as small oscillations of a few seconds amplitude caused by the A\&S mechanism, superposed on a global orbital period derivative induced by the strong mass loss from the system \cite{Sanna2017c}. Alternatively, variations of the orbital period with respect to the global parabolic trend may be caused by random fluctuations of the mass transfer (and loss) rate.
The latest outburst of {\rm SAX~J1808.4$-$3658\xspace}{} in 2019, however, shows an abrupt flattening of the parabolic trend \cite{Bult2020} (as is evident in Fig.~\ref{fig:orb_evo}, left, top panel). Indeed, the measurements between 2008 and 2019, taken alone, seem to imply a contraction of the orbit over the last 10 years, with an orbital period derivative of $\dot P_{orb} \simeq -5.2 \times 10^{-12}$ s s$^{-1}$. Alternatively, fitting all the measurements with a global parabolic trend gives an orbital period derivative of $\dot P_{orb} = (1.6 \pm 0.7) \times 10^{-12}$ s s$^{-1}$. As shown in the left, bottom panel of Fig.~\ref{fig:orb_evo}, the residuals around this mean trend show a sinusoidal-like, 7-s amplitude oscillation with a period of approximately 18.2 years. Additional monitoring of future outbursts is needed to confirm the presence of oscillations around a steadily expanding orbit, or, instead, of a $\sim 20$ s amplitude modulation around a constant (or much less variable) orbital period.
\begin{figure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{1808_orb_2}
\label{fig:sfig1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.45\linewidth]{00291_orb_2}
\label{fig:sfig2}
\end{subfigure}
\caption{\textit{Left Panel:} Orbital evolution of the AMXP {\rm SAX~J1808.4$-$3658\xspace}{}. The dashed, dash-dotted and solid curves represent the parabolic trends fitted to the 1998-2008, 2008-2019, and 1998-2019 subsets of the data, respectively. Residuals relative to the 1998-2008 and 1998-2019 parabolic models are shown in the middle and bottom panels, respectively. The dashed line shown in the bottom panel represents a sinusoid with an 18.2 yr period and 7 s amplitude, inserted to tentatively describe the residuals (see \cite{Bult2020} for more details). \textit{Right Panel:} Orbital evolution of the AMXP IGR~J00291+5934. The cyan dashed line represents the best-fitting parabola used to model the data. Residuals (in seconds) of the time delays with respect to the best-fitting timing solution are shown in the bottom panel. [Figures from \cite{Bult2020,Sanna2017d}]}
\label{fig:orb_evo}
\end{figure}
A very different evolution is found for IGR~J00291+5934, as shown in the right panel of Fig.~\ref{fig:orb_evo}, which has orbital parameters very similar to those of {\rm SAX~J1808.4$-$3658\xspace}{} and is considered its orbital twin. IGR~J00291+5934 has shown only four outbursts since its discovery, but tight upper limits could be derived on its orbital period derivative, $|\dot P_{orb}| < 5 \times 10^{-13}$ s s$^{-1}$ (90\% confidence level, \cite{Patruno2017,Sanna2017d}). This implies a much slower orbital evolution, on a timescale longer than $\sim 0.5$ Gyr, as compared to the fast (up to 2015) orbital evolution of {\rm SAX~J1808.4$-$3658\xspace}{}, $\sim 70$ Myr. Although the orbital evolution observed in IGR~J00291+5934 is obtained using only four points with large error bars, and more measurements are needed to confirm this result, it seems to be compatible with the expected timescale of mass transfer driven by angular momentum loss via gravitational radiation, with no need for the A\&S mechanism or a non-conservative mass transfer.
\begin{table}
\caption{Accreting Millisecond X-ray Pulsars: secular spin and orbital evolution}
\scriptsize
\begin{center}
\begin{tabular}{lcccccl}
\hline
\hline
Source & \# outbursts & $P_{\rm orb}$ & $T_{ASC}$ & $\dot{P}_{\rm orb}$ & $\dot{\nu}$ & Ref.\\
& & (s) & (MJD) & (s/s) & (Hz/s) &\\
\hline
\textbf{AMXP} & & & & & & \\
\hline
NGC 6440 X-2 & 4 & 3457.8929(7) & 55318.04809(2) & $\pm8\times 10^{-11}$ &$\pm5\times 10^{-13}$ & \cite{Bult2015c}\\
SAX J1748.9-2021 & 4 & 31554.868(3) & 52191.52737(3) & $3.9(1.1)\times 10^{-11}$ & -& \cite{Sanna2020}\\
IGR J00291+5934 & 4 & 8844.07673(3) &53345.16192640(5) & $-0.7(2.2)\times 10^{-13}$ & $-3.0(8)\times 10^{-15} $ & \cite{Patruno2017,Sanna2017d,Patruno2010,Papitto2011c}\\
IGR J17379-3747 & 3 & 6765.84521(3) & 53056.03926(12) & $-2.5(2.3)\times 10^{-12}$ & - & \cite{Sanna2018b}\\
SAX J1808.4-3658 & 8 & 7249.1541(2) & 50914.79449(4) & $1.7(0.6)\times 10^{-12}$ & $-1.01(7)\times 10^{-15}$ & \cite{Bult2020,Sanna2020b}\\
Swift J1756.9-2508 &3 & 3282.3519(5) & 54265.28087(10) & $1.5(2.8)\times 10^{-12}$ & $-4.8(6)\times 10^{-16}$ & \cite{Sanna2018d,Bult2018b}\\
IGR J17511-3057 & 2 & 12487.50(7) & 57107.85882(8) & $4.4(7)\times10^{-11}$ & - & \cite{Riggio2020} \\
XTE J1751-305 & 2 & 2545.342(2) & 52368.0129023(4) & $\pm1.4\times10^{-11}$ & $-5.5(1.2)\times 10^{-15}$ & \cite{Riggio2011b}\\
\hline
\hline
\end{tabular}\\
\end{center}
$P_{\rm orb}$ is the orbital period, $T_{ASC}$ is the time of passage through the ascending node (the reference epoch of the orbital solution), $\dot{P}_{\rm orb}$ is the orbital period derivative, and $\dot \nu$ is the long-term spin frequency derivative.
\label{Tab2}
\end{table}
What causes such an enormous difference between the orbital evolution of two sources with very similar orbital parameters?
\cite{Tailo2018} have studied the effects of irradiation of the companion star in order to reproduce the evolution of {\rm SAX~J1808.4$-$3658\xspace}{}. They simulated the binary evolution of its possible progenitor system, starting at an orbital period of $\sim 6.6$ h and taking into account angular momentum losses via magnetic braking (MB) and gravitational radiation. They also considered the effects of illumination of the donor by both the X-ray emission during accretion phases and the spin-down luminosity of the pulsar. They show that pulsar irradiation is a necessary ingredient to reach the correct orbital period when the donor mass is reduced to the current value of $0.04-0.06\, M_\odot$. It is also shown that irradiation alters the internal structure of the donor, preventing the companion star from becoming completely convective at the masses observed for the system and keeping MB active along the whole evolution (see also \cite{Chen2017}). Mass transfer proceeds through cycles: the donor reacts to the irradiation by expanding and starting a phase of large mass transfer; consequently, mass loss dominates the period evolution. When the thermal relaxation of the envelope takes over, the stellar radius shrinks and the system detaches (see also \cite{Benvenuto2017} and references therein). In this framework, {\rm SAX~J1808.4$-$3658\xspace}{} and IGR~J00291+5934 may be at different phases of this cyclic behaviour, with the former in a phase of high mass transfer rate (and fast orbital evolution) and the latter in an almost detached phase (with a low mass transfer rate and slow orbital evolution). In both cases, a non-conservative mass transfer is implied, with matter expelled from the system by the radiation pressure of the pulsar, which should be stronger in the case of IGR~J00291+5934 because of its faster rotation. More details on the orbital evolution of these systems from a theoretical point of view can be found in Chapter 9 of this book.
In order to test this or other models for the orbital evolution in these systems, it is important to continue monitoring the behaviour of these and other sources. Other AMXPs have shown more than one outburst for which an orbital solution has been obtained. The long-term evolution of the time of passage through the ascending node of SAX J1748.9-2021 has been clearly observed after combining the orbital solutions of the five outbursts observed to date (in 2001, 2005, 2010, 2015 and 2018). Although only marginally significant ($\sim 3.5 \sigma$ confidence level), an orbital period derivative of $\dot{P}_{\rm orb}=3.9(1.1)\times 10^{-11}$ s/s has been determined \cite{Sanna2020}, suggesting again a fast orbital expansion of the system. In the case of IGR~J17379-3747, the combination of the ephemerides obtained for the observed outbursts allows an upper limit to be set on the orbital period derivative of $-4.4\times10^{-12} < \dot{P}_{\rm orb} < 9.4\times 10^{-12}$ s/s \cite{Sanna2018b}. Swift J1756.9-2508 has been detected in outburst three times (2007, 2009 and 2018) since its discovery; the orbital timing of the source sets an upper limit on the orbital period derivative of $-4.1 \times 10^{-12} < \dot{P}_{\rm orb} < 7.1 \times 10^{-12}$ s/s \cite{Sanna2019}. \cite{Riggio2020} analysed a {\em NuSTAR\xspace}{} observation of the 2015 outburst of IGR~J17511-3057, obtaining a new local orbital solution. Combining it with the orbital solution of the 2011 outburst \cite{Riggio2011}, they inferred an orbital period derivative of $\dot{P}_{\rm orb} = 4.4(7) \times 10^{-11}$ s s$^{-1}$, suggesting a fast orbital expansion of the binary system, similar to that reported for SAX J1748.9-2021. These results are summarised in Table~\ref{Tab2}.
\subsection{Non-conservative mass transfer?}
Despite the limited statistics, the majority of these results suggest that these sources are undergoing a fast orbital expansion, notwithstanding the low average mass accretion rate observed from them.
Besides AMXPs, one of the most evident examples of non-conservative mass transfer is given by the slowly rotating (spin period of $\sim 0.59$ s, \cite{Jonker2001}) X-ray pulsar and eclipsing LMXB 4U 1822-37, which shows a persistent X-ray luminosity of $\sim 10^{36}$ erg/s and an orbital period of $\sim 5.57$ h, measured from the periodic eclipses of the X-ray source and confirmed through the timing of the X-ray pulsations. The compilation of the eclipse times over the last 40 years shows a fast orbital expansion at a rate of $\dot P_{orb} \sim 1.5 \times 10^{-10}$ s s$^{-1}$ (see e.g. \cite{Chou2016,Mazzola2019}). The delays of the eclipse arrival times with respect to a constant orbital period show a clear parabolic trend, which implies a constant orbital period derivative more than three orders of magnitude larger than what is expected from conservative mass transfer driven by MB and GR (e.g. \cite{Burderi2010,Iaria2011}). Mechanisms based on the gravitational quadrupole coupling of the companion star with the orbit (see e.g. \cite{Applegate1992,Applegate1994}) have been investigated; however, they turn out to be unsuitable, since the ($\sim 0.3\, M_\odot$) companion star lacks enough internal power to produce such a large orbital period variation (e.g. \cite{Mazzola2019}).
A possible explanation is given by a highly non-conservative mass transfer, in which the companion star transfers mass at a high rate and most of the transferred mass is then expelled from the system by the strong radiation pressure of the central source emitting at the Eddington limit. In fact, it has been proposed that 4U 1822-37 is accreting at the Eddington limit (while just 1\% of the total X-ray luminosity is visible because of the high inclination angle, $80-85^\circ$, \cite{Iaria2011}), while the companion star is transferring mass at an even higher rate (of the order of seven times Eddington, \cite{Burderi2010}), and most of the transferred mass is expelled from the system by the radiation pressure, producing strong outflows and winds.
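To give an order of magnitude for the numbers quoted above, the Eddington luminosity of a NS (for hydrogen-rich accreted matter) is approximately
\begin{equation}
L_{\rm Edd} = \frac{4\pi G M m_p c}{\sigma_T} \simeq 1.8\times10^{38}\left(\frac{M}{1.4\,M_\odot}\right)\ {\rm erg\,s^{-1}},
\end{equation}
about two orders of magnitude above the observed persistent luminosity of 4U 1822-37, consistent with the quoted $\sim1\%$ visible fraction if the intrinsic emission is close to the Eddington limit.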
\begin{figure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.2\linewidth]{heic0201b}
\label{fig:sfig1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.04\linewidth]{making_a_Nova}
\label{fig:sfig2}
\end{subfigure}
\caption{Artistic impression of a system during the accretion phase (\textit{right panel}) and during a \textit{radio-ejection}
phase (\textit{left panel}). [Credits: NASA \& ESO, respectively.]}
\label{fig:con_mt}
\end{figure}
Indeed, there is other indirect evidence of non-conservative mass transfer in AMXPs. \cite{Marino2019} have analysed a sample of AMXPs, starting from XTE J0929-314 \cite{Marino2017}, finding that the X-ray luminosity of most sources of the sample, averaged over the time elapsed since their discovery, is significantly lower than what would be predicted by a conservative mass transfer driven by GR and/or MB. Comparing the averaged X-ray luminosity with that predicted for a conservative mass transfer, a lower limit on the source distance can be estimated. Based on a sample of 18 sources, strong evidence of non-conservative mass transfer was found for five sources, for which the estimated distance lower limits are significantly higher than their known distances, while hints of mass outflows are found in a further six sources of the sample. The discrepancy can be resolved under the hypothesis of a non-conservative mass transfer in which a fraction of the mass transferred towards the compact object is swept away from the system, likely by the radiation pressure of the rotating magnetic dipole and/or the pulsar wind (see an artistic impression of the \textit{accretion} and \textit{ejection} phases in Fig.~\ref{fig:con_mt}). Interestingly, the possibility of strong outflows from these systems has been recently confirmed by general-relativistic MHD simulations \cite{2017ApJ...851L..34P}, showing how the interaction of a turbulent accretion flow with the electromagnetic wind of the pulsar can lead to the transition of a rotation-powered pulsar to the accreting state, causing in turn the formation of relativistic jets whose power can greatly exceed that of the pulsar wind. If the accretion rate is below a critical value, the pulsar may instead expel the accretion stream.
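A minimal sketch of the argument used to derive these distance lower limits, assuming a conservative mass transfer driven by GR and/or MB at a predicted rate $\dot{M}_{\rm pred}$, is to compare the implied accretion luminosity with the observed long-term average flux $\bar{F}_X$:
\begin{equation}
L_{\rm pred} \simeq \frac{G M_{\rm NS}\, \dot{M}_{\rm pred}}{R_{\rm NS}},
\qquad
d_{\rm min} \simeq \left(\frac{L_{\rm pred}}{4\pi\, \bar{F}_X}\right)^{1/2} .
\end{equation}
If $d_{\rm min}$ largely exceeds the independently known distance, either $\dot{M}_{\rm pred}$ is overestimated or a large fraction of the transferred mass never reaches the NS, i.e. the mass transfer is non-conservative.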
A similar argument has also been proposed for the black-hole X-ray binary and microquasar V404 Cyg \cite{Ziolkowski2018}. Considering the donor evolution and mass transfer in V404 Cyg together with the X-ray observations of its two outbursts, the authors find that the average mass accretion rate is substantially lower than the model mass-loss rate from the low-mass giant donor; to resolve this discrepancy, they propose that a large fraction of the mass flowing from the donor leaves the binary in the form of outflows from the accretion disc around the accretor.
We can conclude that, regardless of the nature of the accretor and of the radiation emitted, radiation pressure seems to play an important role in limiting the accretion of matter onto the compact object and in initiating a non-conservative mass transfer in LMXB systems, which may be a common feature of these systems, possibly related (as a cause or a consequence) to the transient behavior itself.
\section{Summary and Open questions}
Despite the amount of information we have gained in the last two decades of observations and theoretical studies of AMXPs, several issues still remain to be addressed, for instance the torque imparted by the accreting matter onto the NS, which in most cases is hidden by the strong timing noise affecting the timing of the spin period. Is the spin-up or spin-down of the NS overwhelmed by the large timing noise? Moreover, what is the origin of this large timing noise? Movements of the hot spot on the NS surface caused by flux variations have been proposed to explain the large timing noise, although it is not clear why in some sources (e.g. {\rm SAX~J1808.4$-$3658\xspace}{}) it appears to be much stronger than in others (e.g.\ IGR~J00291+5934). Even more puzzling are the orbital residuals observed in {\rm SAX~J1808.4$-$3658\xspace}{} and the different behaviour observed in IGR~J00291+5934, as well as the role of non-conservative mass transfer in AMXPs and LMXBs in general. Besides that, there are other important issues that should be addressed and are briefly described in the following.
The discovery of AMXPs and the subsequent discovery of transitional millisecond pulsars (see Chapter 7 of this book for further details) have confirmed the recycling scenario. As a consequence, we have improved our understanding of the formation of millisecond pulsars, which are spun up by the accretion of matter and angular momentum during the LMXB phase, and of the evolutionary path linking the progenitors, i.e. LMXBs, to the end products of the evolution, i.e. Black-Widow pulsars and Redbacks, possibly through the transitional phase. Nevertheless, several open questions remain to be addressed, the first one regarding pulsations themselves.
In fact, apart from the presence of coherent pulsations, AMXPs resemble the behaviour of transient LMXBs of the atoll class. Both the spectral properties and the aperiodic and quasi-periodic variability (the so-called QPOs) are very similar between AMXPs and non-pulsating LMXBs (see e.g.\ \cite{Wijnands1999}, see also \cite{Patruno2012} for a review). Similarly to other LMXBs, AMXPs show type-I X-ray bursts and all the associated phenomenology, such as the presence of (quasi-coherent) oscillations at the spin frequency of the NS during type-I X-ray bursts. From the observation of burst oscillations we know that many NSs in LMXBs indeed rotate at millisecond periods. However, despite all these similarities, the large majority of LMXBs harbouring a NS do not show coherent pulsations, not even when the mass accretion rate decreases (for instance in transient systems) enough to allow the magnetosphere to expand beyond the NS surface. The observation of intermittent coherent pulsations in some AMXPs (see e.g. the cases of Aql X-1 or HETE J1900.1-2455) has suggested that magnetic field burial caused by the accretion of fresh matter may play a role (see e.g. \cite{Cumming2001} and references therein). However, it is not clear whether this can explain the lack of pulsations in most LMXBs or whether other factors contribute to hampering the detection of pulsations in these sources. These may be, for instance, a smearing of the pulsations by an optically thick corona, a smearing of the pulsations due to gravitational light bending, the alignment of the NS magnetic and rotational axes, or the onset of MHD instabilities at the disk/magnetospheric boundary. None of these models, however, furnishes a satisfactory explanation valid for all cases (see a discussion in \cite{Patruno2012,Campana2018}).
Even more puzzling is the lack of radio pulsations in both AMXPs and LMXBs during X-ray quiescence. In principle, when the accretion of matter stops during (long) quiescent periods, the mechanism producing radio (or gamma-ray) pulsations should resume and the millisecond pulsar should shine in radio (or gamma-rays) as a rotation-powered pulsar. However, this has been observed to date in just one source, the AMXP and transitional pulsar IGR~J18245-2452 (J18245 hereafter) in the Globular Cluster M28 \cite{Papitto2013b}. This source was first observed as a radio millisecond pulsar (spinning at 3.93 ms) in a binary system with an 11-h orbital period. In 2013 it went into X-ray outburst and was discovered to be an AMXP; soon after the end of the outburst, J18245 was detected again as a radio pulsar, demonstrating that the transition between the rotation-powered and the accretion-powered regime can occur on short timescales (in about 10 days or even less). It is worth noting that the other 2-3 sources belonging to the transitional millisecond pulsar class also show radio pulsations during X-ray quiescence and X-ray pulsations during the so-called disk state with (possibly) a low level of accretion. However, none of these sources has to date shown an X-ray outburst similar to the one shown by J18245 or by the other AMXPs. The compelling possibility that these systems could swiftly switch from accretion-powered to rotation-powered magneto-dipole emitters during quiescence offers the opportunity to study a phase that could shed new light on the not yet fully understood radio pulsar emission mechanism. Therefore, if the swing between the rotation-powered and the accretion-powered pulsar can happen on fast timescales, why is this observed in just a few cases? Why have radio millisecond pulsations at the known spin period never been detected in other LMXBs or AMXPs during X-ray quiescence?
In the framework of the so-called radio-ejection model \cite{Burderi2001}, the radio pulsar mechanism switches on when a significant reduction of the mass-transfer rate occurs. The accretion of matter onto the NS is then inhibited by the radiation pressure of the radio pulsar, which may be capable of ejecting from the system most of the matter overflowing the companion-star Roche lobe. One of the strongest predictions of this model is the presence, during the radio-ejection phase, of a strong wind of matter emanating from the system (see an artistic impression of a system in the \textit{radio-ejection} phase in the left panel of Fig.~\ref{fig:con_mt}). The non-detection of radio pulsations in this situation may be due to free-free absorption of the radio signal interacting with the ejected matter. A possibility to test this scenario is, therefore, to perform deep (tens of hours) radio observations of these sources at high radio frequencies (above $5-6$ GHz), since the free-free absorption cross-section decreases with frequency as $\nu^{-2}$ (see e.g. \cite{Iacolina2009,Iacolina2010}). However, the question remains: why do transitional millisecond pulsars, and J18245 in particular, behave in a different way, showing radio pulsations when the X-ray emission is off? Perhaps a favourable geometry, e.g. a relatively low inclination angle of the system, may reduce the amount of matter along our line of sight, since most of the matter is expected to lie in the equatorial plane, and therefore reduce the amount of free-free absorption in these systems. Alternatively, pulsed radio emission should be searched for in systems with long orbital periods, in which the matter transferred by the companion star is spread over a wide orbit.
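As a rough guide to why higher frequencies help, the free-free optical depth through ionized ejected matter can be estimated with the commonly used approximation (valid at radio frequencies, with emission measure ${\rm EM}=\int n_e^2\,dl$):
\begin{equation}
\tau_{\rm ff} \approx 3.3\times10^{-7}
\left(\frac{T_e}{10^4\,{\rm K}}\right)^{-1.35}
\left(\frac{\nu}{1\,{\rm GHz}}\right)^{-2.1}
\left(\frac{\rm EM}{{\rm pc\,cm^{-6}}}\right),
\end{equation}
so that moving from $\sim1.4$ GHz to $\sim6$ GHz reduces the optical depth by roughly a factor of $\sim20$ for the same amount of intervening matter.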
Although radio pulsations remain elusive in AMXPs, sporadic detections of transient emission in the radio band have been reported in a few cases. At the other extreme of the electromagnetic spectrum, in the gamma-ray band, the search for AMXP counterparts is also quite difficult. Because of the paucity of photons at such high energies, it is necessary to integrate over several years in order to obtain a statistically significant detection. The analysis of $\sim 6$ yr of data from the Large Area Telescope on board the Fermi Gamma-ray Space Telescope (Fermi-LAT) revealed a possible gamma-ray counterpart of {\rm SAX~J1808.4$-$3658\xspace}{}, at a significance of $\sim 6 \sigma$, with a position compatible with that of the source within the $95\%$ confidence level \cite{deOnaWilhelmi2016}. However, the search for coherent pulsations did not produce a significant detection once the number of trials is taken into account. Uncertainties in the source position and in the orbital motion of the pulsar, as well as the intrinsic evolution of the pulsar spin, which are still not known with enough precision to maintain phase coherence over years, are the likely causes of the non-detection. A precise knowledge of the spin and orbital parameters of AMXPs is of fundamental importance for deep searches of their counterparts in the gamma-ray band, which has the advantage of not suffering from free-free absorption as the radio band does, but the disadvantage of a reduced number of photons, which requires folding over years of data in order to reach the statistics needed to detect a (weak) pulsed signal.
On the other hand, searches for the optical counterparts of these systems have given interesting, unexpected results. In several AMXPs, the optical counterpart during X-ray quiescence appears surprisingly luminous, inconsistent with both the intrinsic emission from the companion star and X-ray reprocessing (e.g. \cite{Homer2001,DAvanzo2009,DAvanzo2011}). In fact, the optical counterpart shows an approximately sinusoidal modulation with photometric minimum at the superior conjunction of the pulsar. The lack of ellipsoidal, double-humped variations rules out an origin from the intrinsic emission of the companion star, while the modulation is best explained as caused by the irradiated surface of the companion star. Given the lack of significant X-ray emission during quiescence, this has been interpreted as strong (indirect) evidence that a rotating magneto-dipole powers the quiescent emission of AMXPs \cite{Burderi2003,Campana2004}. In fact, the magnetic dipole rotator, if active during quiescence, has a bolometric luminosity given by Larmor's formula and may power the reprocessed optical emission.
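For reference, the spin-down luminosity of a rotating magnetic dipole (the quantity invoked above through Larmor's formula) is, in a minimal sketch assuming a magnetic moment $\mu \simeq B R^3$ with $R = 10$ km,
\begin{equation}
L_{\rm sd} = \frac{2}{3}\,\frac{\mu^2 \Omega^4}{c^3}
\approx 2\times10^{34}
\left(\frac{B}{10^8\,{\rm G}}\right)^2
\left(\frac{P_{\rm spin}}{2\,{\rm ms}}\right)^{-4}\ {\rm erg\,s^{-1}},
\end{equation}
amply sufficient, even if only a modest fraction is intercepted and reprocessed by the companion, to power the observed quiescent optical modulation.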
Even more puzzling is the recent discovery of optical pulsations at the NS spin period in one of the transitional pulsars, PSR J1023+0038 \cite{Ambrosino2017,Papitto2019}, the first time ever from a millisecond pulsar. Optical pulsations, with a maximum pulsed optical luminosity of $L_{pulse} \simeq 0.01 L_{opt} \simeq 10^{31}$ erg s$^{-1}$, were observed when the source was in a bright active state corresponding to an X-ray luminosity of $7\times 10^{33}$ erg s$^{-1}$ (see Chapter 7 of this book).
From the discussion above it is clear that, despite the amount of observations and information obtained on AMXPs to date, there are still several issues that deserve further investigation, also considering that some new discoveries have raised new questions. Nevertheless, one of the most important open questions about AMXPs concerns their spin period distribution and, most of all, the minimum spin period of a NS. Since (recycled) millisecond pulsars are spun up during the LMXB phase, we expect that the minimum period of a NS is reached during this accretion phase, before the start of the (non-accreting) spin-down phase caused by the emission of the magnetic dipole rotator. Hence, we expect the fastest spinning NS to reside in an AMXP. Since the maximum spin frequency of a NS depends on its compactness, that is, on its mass-to-radius ratio, the detection of the maximum spin frequency of NSs may give strong and important constraints on the EoS of ultra-dense matter.
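As an often-quoted scaling (the exact coefficient, roughly $1.0-1.2$ kHz, depends on the EoS and on the details of the calculation), the mass-shedding limit of a rotating NS can be written as
\begin{equation}
\nu_{\rm max} \approx 1.0\ {\rm kHz}
\left(\frac{M}{M_\odot}\right)^{1/2}
\left(\frac{R}{10\,{\rm km}}\right)^{-3/2},
\end{equation}
well above the $\sim730$ Hz cutoff observed in the spin distribution discussed below, which is why that cutoff calls for an additional braking mechanism.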
However, the distribution of spin frequencies of the ensemble of AMXPs has an abrupt cutoff at about 730 Hz (e.g. \cite{Patruno2017b}, see also \cite{Papitto2014}), well below the maximum spin frequency allowed by the majority of realistic EoS. We are therefore left with the following questions: is the maximum frequency of NSs telling us something related to the EoS of ultra-dense matter? Alternatively, what is the factor limiting the spin of NSs well below the maximum possible frequency? Several possibilities have been proposed as a limiting factor for the NS rotation, such as the emission of gravitational radiation \cite{Hartman2011,Papitto2011b}, the presence of a (not completely decayed) magnetic field \cite{Patruno2012}, a bias caused by fast orbital motion \cite{Burderi2001}, and so on. However, none of these possibilities seems to explain all the phenomenology of AMXPs and LMXBs, and further investigation is needed to settle this fascinating question. To this aim, future X-ray missions with large effective area and fast timing capabilities, such as {\it Athena}, possibly coupled with polarimetric capabilities, as is the case for the enhanced X-ray Timing and Polarimetry mission, {\it eXTP}, may be fundamental to advance the research in this field and to open a new era of exciting discoveries on millisecond pulsars.
\begin{comment}
\section{Observational overview of individual AMXPs}
\subsection{SAX J1808.4-3658}
The most regular among recurrent AMXPs, and therefore the best studied of these sources, {\rm SAX~J1808.4$-$3658\xspace}{} is the first discovered AMXP; it has shown an X-ray outburst every $1.6-3.5$ years, the latest one having occurred in 2019 \cite{Bult2020}. The outburst light curve is characterised by a fast rise (on a timescale of a couple of days), a slow exponential decay (with a timescale of $\sim 10$ days) followed by a fast decay (with a timescale of $\sim 2$ days). After the end of the main outburst, a flaring activity, called reflares, is usually observed, with a quasi-oscillatory behaviour and a variation in luminosity of up to three orders of magnitude on timescales of $\sim 1-2$ days \cite{Patruno2016}. Moreover, a strong $\sim 1$ Hz oscillation is observed to modulate the reflares \cite{Bult2014}. A similar behaviour was also observed in the AMXP NGC 6440 X-2 \cite{Patruno2013}. The reflaring behaviour has no clear explanation; however, \cite{Patruno2016} proposed a possible explanation in terms of either a strong propeller with a large amount of matter being expelled from the system or a trapped (dead) disk truncated at the co-rotation radius \cite{DAngelo2012}.
Several thermonuclear X-ray bursts have been observed during the source outbursts, some of which exhibited characteristics compatible with photospheric radius expansion X-ray bursts, likely originating in the flash of a pure helium layer created by stable hydrogen burning \cite{Galloway2006,Galloway2008}. Combining X-ray bursts from different outbursts, \cite{Galloway2006} estimated a source distance of $3.5\pm0.1$ kpc.
Coherent timing analyses of the X-ray pulsations revealed a 2.01 hr orbital period \cite{Chakrabarty1998}. Moreover, they showed that, with few exceptions (see e.g. \cite{Burderi2006}), no clear accretion-induced spin frequency variations are detectable during the X-ray outbursts \cite{Hartman2008,Patruno2012a,Patruno2017}. However, when combining the analysis of multiple outbursts, a secular evolution of the spin frequency becomes apparent, compatible with a constant spin-down derivative of magnitude $\sim10^{-15}$ Hz s$^{-1}$ \cite{Hartman2008,Patruno2012,Sanna2017c,Bult2020}, likely reflecting a magnetic dipole torque acting during quiescence. The long-term variation of the binary period indicates a fast expansion of the orbit, which has been explained with either a highly non-conservative mass transfer \cite{DiSalvo2008,Burderi2009, Sanna2017c} or, alternatively, a gravitational quadrupole coupling mechanism acting between the system and the companion star \cite{Hartman2008,Patruno2012,Sanna2017c}.
The pulsar mass function $f(m_2,m_1,i)\sim 3.8\times 10^{-5}$ M$_\odot$ gives a minimum companion mass of $m_2=4.4\times10^{-2}$ M$_\odot$ for a NS mass $m_1=1.4$ M$_\odot$. Statistical considerations on the a priori probability distribution of the binary inclination angle suggest $m_2\leq 0.14$ M$_\odot$, likely a brown dwarf companion star \cite{Bildsten2001,Deloye2008,DiSalvo2008}.
V4584 Sagittarii, the optical/IR counterpart of {\rm SAX~J1808.4$-$3658\xspace}{}, was discovered during the 1998 outburst with magnitudes $V=16.6\pm0.2$, $R=16.1\pm0.2$, $I=15.6\pm0.2$, $J=15.0\pm0.1$, $H=14.4\pm0.1$, $K=13.8\pm0.1$. The optical flux is consistent with an X-ray heated accretion disk and an inclination of $i=51\pm18$ degrees (90\% c.l.) \cite{Wang2001}. Radio observations performed with ATCA revealed a transient radio counterpart (flux $\sim$0.8 mJy) interpreted as synchrotron emission, which could be consistent with the transient IR excess. This suggests the presence of relativistic jets and/or outflows ejected from the binary.
\subsection{XTE J0929-314}
XTE J0929-314 is a high-latitude X-ray source discovered by the All-Sky Monitor aboard {\em RXTE\xspace}{} in 2002 and observed in outburst for about 53 days starting from May 2 \cite{Galloway2002a}. Analysis of the high time resolution {\em RXTE\xspace}{} data allowed the detection of persistent $\sim185$ Hz pulsations \cite{Remillard2002}. The Doppler modulation of the coherent X-ray pulsation is compatible with an ultra-compact nature of the binary, with an orbital period of $\sim48$ minutes \cite{Galloway2002}. Moreover, the pulsar showed a clear spin-down at an average rate of $\dot{\nu}\sim-9\times 10^{-14}$ Hz s$^{-1}$, for which the following possible mechanisms have been proposed: a) magnetic coupling to the accretion disk; b) a magneto-hydrodynamic wind; c) gravitational radiation from the rapidly spinning pulsar (see \cite{Galloway2002} and references therein). No radio pulsation has been detected during the quiescent state of the source, with upper limits of 68 $\mu$Jy at 6.4 GHz and 26 $\mu$Jy at 8.5 GHz \cite{Iacolina2009}.
Spectral properties have been investigated modelling the broad-band energy spectrum collected combining {\em Chandra\xspace}{} and {\em RXTE\xspace}{} observations during the 2002 outburst of the source. The spectrum is well modelled by an absorbed Comptonization ($\Gamma\sim1.7$) plus blackbody ($kT\sim 0.7$ keV) model. No emission or absorption features are found in the {\em Chandra\xspace}{} high-resolution spectrum \cite{Juett2003}.
The pulsar mass function $f(m_2,m_1,i)\sim 2.7\times 10^{-7}$ M$_\odot$ gives a minimum companion mass of $m_2=8\times10^{-3}$ M$_\odot$ for a NS mass $m_1=1.4$ M$_\odot$. Considerations on the binary parameters suggested a $m_2\simeq 0.1$ M$_\odot$ white dwarf companion star and a moderately high inclination. Multiwavelength studies of XTE J0929-314 have been performed, allowing the detection of variable optical \cite{Greenhill2002,Cacella2002} and radio \cite{Rupen2002} counterparts compatible with the X-ray source position measured with the {\em Chandra\xspace}{} X-Ray Observatory \cite{Juett2003}. C \rom{3}/N \rom{3} and $H\alpha$ emission lines were reported in the optical spectrum \cite{Castro2002}, while a radio counterpart (during the outburst) was detected with the VLA at 4.86 GHz \cite{Rupen2002}.
\subsection{XTE J1751-305}
The NS X-ray binary XTE J1751-305 was detected for the first time by {\em RXTE\xspace}{} on April 3, 2002 \cite{Markwardt2002a}. During the outburst, coherent X-ray pulsations at $\sim$435 Hz were observed and a full orbital solution was determined, revealing a $\sim42$ minute orbital period and the ultra-compact nature of the binary system \cite{Markwardt2002}. Phase-coherent timing analysis of the 10-day long outburst allowed a spin-up to be measured at an average rate of $(3.7\pm1.0)\times10^{-13}$ Hz s$^{-1}$ \cite{Papitto2008}. {\em INTEGRAL\xspace}{} detected a possible new outburst of the source on March 28, 2005 \cite{Grebenev2005}. The short duration ($\sim2$ days) and the weak intensity of the outburst (peak flux almost ten times dimmer than the first outburst), combined with the lack of high-resolution timing data, did not allow a clear association with XTE J1751-305 \cite{Swank2005}. Very similarly, on April 5, 2007, {\em RXTE\xspace}{} observed the source only for a short time, and it was too dim to detect pulsations. However, a clear identification of the source was possible thanks to a follow-up {\em Swift\xspace}{} observation \cite{Markwardt2007}. A fourth outburst was first observed with {\em INTEGRAL\xspace}{} \cite{Chenevez2009}, and shortly after followed by {\em RXTE\xspace}{} (which was serendipitously observing the newly discovered AMXP IGR~J17511-3057 \cite{Markwardt2009}). X-ray pulsations were clearly detected and combined to estimate an updated orbital solution \cite{Riggio2011}.
Spectral analysis of the 2002 outburst revealed that the broad-band X-ray spectrum of XTE J1751-305 requires three components: two soft components modelling the thermal emission from the accretion disk ($kT\sim0.6$ keV) and the accretion spot ($1$ keV) on the NS surface, and a hard component representing the thermal Comptonization in a plasma with $kT_e\sim40$ keV and optical depth $\sim1.5$ in a slab geometry \cite{Gierlinski2005}. No clear evidence for narrow or broad emission or absorption lines has been detected in the time-averaged spectrum. Upper limits of 4 and 6 eV have been estimated for the strength of narrow (FWHM$\sim0.1$ keV) and broad (FHWM$\sim0.7$ keV) Fe $K\alpha$ emission lines \cite{Miller2003}.
The NS mass function $f(m_2,m_1,i) = 1.3\times10^{-6}$ M$_\odot$ allows a minimum companion mass of $m_2>1.4\times10^{-2}$ M$_\odot$ to be inferred for an inclination angle of $i<75^\circ$ (lack of eclipses or dips) and a NS mass $m_1=1.4$ M$_\odot$, suggesting a heated He or C/O white dwarf companion \cite{Deloye2003}.
Studies of optical and near-infrared images of the region around XTE J1751-305 revealed no stars within the {\em Chandra\xspace}{} positional error circle (0''.7; \cite{Markwardt2002}). Upper limits for the counterpart $R>23.1$, $I>21.6$, $Z>20.6$, $J>19.6$, $K>19.2$ have been estimated \cite{Jonker2003}. No evidence for GW emission from XTE J1751-305 has been found in the frequency bands 434.5-436.5 Hz, 620.5-622.5 Hz, or 869.5-871.5 Hz. No candidate templates passed both threshold and coincidence requirements between LIGO Hanford and LIGO Livingston observatories \cite{Meadors2017}.
\subsection{XTE J1814-338}
The accreting millisecond X-ray pulsar XTE J1814-338 was discovered on June 5, 2003 during the scanning of the Galactic-center region performed by {\em RXTE\xspace}{}. Standard timing analysis showed the presence of a pulsed signal at a frequency of $\sim314$ Hz, interpreted as the spin frequency of a NS harboured in a binary system with a 4.3 hr orbital period \cite{Markwardt2003}. A peculiar negative spin frequency derivative (spin-down) of $\dot{\nu}\sim -7\times 10^{-14}$ Hz s$^{-1}$ has been reported by combining the observations covering the almost 50-day long outburst \cite{Papitto2007}.
Numerous Type-I X-ray bursts (28 in total) have been observed from XTE J1814-338 during its outburst. Burst oscillations with frequencies in the vicinity of the 314 Hz pulsar frequency have been reported for all the detected Type-I X-ray bursts (see e.g. \cite{Strohmayer2003,Watts2005}). A source distance of $8\pm1.6$ kpc was inferred from one of the Type-I bursts showing signs of photospheric radius expansion \cite{Strohmayer2003}.
An optical counterpart of XTE J1814-338 at $R\sim18.3$ was observed during the outburst; moreover, hydrogen and helium emission lines were detected through optical spectroscopy, suggesting a non-degenerate companion \cite{Krauss2005}.
Optical observations in quiescence reported a rather faint ($R\sim22.5$ and $V\sim23.3$; \cite{DAvanzo2009}) counterpart.
A multiband Very Large Telescope (VLT) campaign carried out in 2009 (during quiescence) shows an irradiated companion star that requires an energy source compatible with the spin-down luminosity of a millisecond pulsar \cite{Baglio2013}. This provides further evidence that AMXPs might turn on as radio pulsars when in quiescence. Doppler tomography of the Bowen region applied to the 2003 outburst allowed the binary mass ratio to be constrained to $q\sim0.123$, the inclination angle to $35^\circ<i<78^\circ$ and the companion mass to $0.19$ M$_\odot$ $<m_2<0.32$ M$_\odot$, assuming a NS mass $m_1<2.7$ M$_\odot$. The dynamical mass constraints confirmed previous suggestions of a significantly bloated main-sequence M-type companion star \cite{Wang2017}.
\subsection{XTE J1807-294}
The transient X-ray binary XTE J1807-294 was first detected in outburst in February 2003 with {\em RXTE\xspace}{} while performing monitoring observations of the Galactic center region \cite{Markwardt2003}. Coherent pulsations at a frequency of $\sim190$ Hz were detected, marking the discovery of a new AMXP. A significant sinusoidal modulation of the X-ray pulsations, compatible with an orbital period of $\sim40$ minutes and a projected semi-major axis of $\sim4.8\times 10^{-3}$ lt-s, was established \cite{Markwardt2003b}, making XTE J1807-294 a member of the ultra-compact binary group. Timing analysis of the almost 120-day long outburst (exceptionally long with respect to the average outburst duration shown by AMXPs) allowed the orbital solution to be refined and the NS spin evolution during the accretion phase to be constrained (see e.g. \cite{Kirsch2004,Riggio2007,Riggio2008,Chou2008,Patruno2010b}). Pulse-shape modelling analyses have been carried out using the {\em RXTE\xspace}{} observations of the source, allowing the mass and radius of the NS to be constrained to $1-2.5$ M$_\odot$ and $\sim12$ km, respectively \cite{Leahy2011}. Simultaneous twin kHz QPOs have been observed and investigated in the power spectra of XTE J1807-294 (see e.g. \cite{Linares2005,Zhang2008,Tasheva2018}).
The 0.5-200 keV broad-band spectral analysis of XTE J1807-294 obtained combining almost simultaneous {\em INTEGRAL\xspace}{}, {\em RXTE\xspace}{} and {\em XMM-Newton\xspace}{} observations showed that the source spectrum is well described by a combination of thermal Comptonization (with electron temperature $kT_e\sim40$ keV) and a disk black body (with temperature $kT_B\sim0.8$ keV). Assuming a binary inclination $60^\circ<i<83^\circ$ an inner disk radius lying in the range 20-40 km (for the distance of 8 kpc) has been suggested \cite{Falanga2005a}. Neither absorption nor emission lines have been reported during the outburst (see e.g. \cite{Campana2003}).
The pulsar mass function $f(m_2,m_1,i) = 1.6\times10^{-7}$ M$_\odot$ allows a minimum companion mass of $m_2>7\times 10^{-3}$ M$_\odot$ to be inferred for an inclination angle of $i = 90^\circ$ and a NS mass $m_1=1.4$ M$_\odot$, suggesting a very low mass dwarf origin. Binary evolution scenarios applied to ultra-compact X-ray binaries suggest that the donor star could be either a C/O white dwarf or a He white dwarf \cite{Deloye2003}. Constraints on the companion mass ($m_2<0.022$ M$_\odot$) obtained from the spectral analysis of XTE J1807-294 \cite{Falanga2005a} seem to favour the former scenario. No counterparts have been reported at any wavelength during either outburst or quiescence. A candidate optical counterpart detection within the 0''.6 {\em Chandra\xspace}{} error circle (90\% confidence level) was reported in 2009 \cite{DAvanzo2009}, with $V\sim22.1$ and $R\sim21.4$. No variability has been observed, with upper limits of 0.1 mag.
\subsection{IGR~J00291+5934}
IGR~J00291+5934 is a transient LMXB observed in outburst for the first time by {\em INTEGRAL\xspace}{} on 2004 December 2 \cite{Shaw2005}. Hints of possible outbursts occurring already in 1998 and 2001 have been found from the re-analysis of the {\em RXTE\xspace}{}/ASM light curve of the source \cite{Remillard2004}. The detection of coherent X-ray pulsations at $\sim599$ Hz (the fastest among the known AMXPs; \cite{Eckert2004,Markwardt2004a}) and of a Type-I X-ray burst \cite{Kuin2015} revealed the NS nature of the compact object harboured in the binary system. Analysis of the {\em RXTE\xspace}{} observations of the outburst highlighted a sinusoidal variation of the pulsations on timescales of $\sim147$ minutes, compatible with orbital Doppler modulation \cite{Markwardt2004b,Galloway2005}. Highly sinusoidal pulse profiles have been observed across a wide range of energies \cite{Galloway2005,Falanga2005b}, characterised by soft lags with a complex energy dependence \cite{Galloway2005,Falanga2007}. Coherent timing analysis of the 2004 outburst revealed a spin derivative of $(8.5\pm1.1) \times 10^{-13}$ Hz s$^{-1}$ \cite{Falanga2005b,Burderi2007}, compatible with an accretion torque spinning up the NS during the outburst \cite{Burderi2007}.
The source was detected in outburst again in August and September 2008, showing two separate outbursts with a peak flux level half that observed in 2004 \cite{Chakrabarty2008, Lewis2008}. Finally, in July 2015, a fourth outburst closely resembling the behaviour observed in 2004 was reported \cite{Sanna2015,DeFalco2017,Sanna2017d}. Timing analysis of the four outbursts allowed the orbital parameters of the system to be refined (see e.g. \cite{Patruno2010,Hartman2011,Papitto2011c,Sanna2017d}) and the secular evolution of the spin frequency (a significant spin-down of $\dot{\nu}\sim-4\times10^{-15}$ Hz s$^{-1}$; \cite{Patruno2010, Hartman2011, Papitto2011c}) and of the orbital period (an upper limit on the orbital period derivative of $|\dot{P}_{\rm orb}|< 6\times 10^{-13}$ s s$^{-1}$; \cite{Patruno2017,Sanna2017d}) to be constrained.
Aperiodic timing analysis of IGR~J00291+5934 revealed an unusual amount of timing noise at very low frequencies (0.01-0.1 Hz). Interestingly, harmonically related QPOs at $\sim20$ mHz and $\sim40$ mHz \cite{Linares2007,Hartman2011} and a prominent 8 mHz QPO (< 2 keV, \cite{Ferrigno2017}) observed in the 2004-2008 and the 2015 outbursts, respectively, resemble the low-frequency QPOs observed in black holes \cite{Linares2007} rather than the feature typically seen in accreting NS. Coupling between the mHz QPOs and the coherent X-ray pulsation of IGR~J00291+5934 has been investigated using the {\em RXTE\xspace}{} and {\em XMM-Newton\xspace}{} observations of the source \cite{Bult2017}.
The spectral analysis of the unique Type-I X-ray burst exhibited by IGR~J00291+5934 provided indications of a photospheric radius expansion phase allowing to infer a source distance of $d = 4.2\pm0.5$ kpc \cite{Bozzo2015,DeFalco2017}.
Broad-band spectral observations performed during the 2004 and the 2015 outbursts found the source in a typical hard state, dominated at high energies by the Comptonization of soft photons (likely produced at the NS surface) by an electron population with $kT_e\sim30-50$ keV, and at lower energies by a blackbody component with $kT\sim0.5-1$ keV \cite{Falanga2005b,Sanna2017d}. No signature of the emission from the NS hot-spot blackbody was reported from the study of simultaneous {\em RXTE\xspace}{}-{\em Chandra\xspace}{}/HETGS observations collected towards the end of the 2004 outburst \cite{Paizis2005}. The {\em XMM-Newton\xspace}{}/{\em NuSTAR\xspace}{} energy spectrum also revealed the presence of a moderately broad, neutral Fe emission line and four narrow absorption lines, as well as a strong correlation between the pulse profile and the blackbody component, suggesting that the latter component resides at the NS surface \cite{Sanna2017d}.
The pulsar mass function $f(m_2,m_1,i)\sim 2.8\times 10^{-5}$ M$_\odot$ implies a minimum companion mass of $m_2=3.9\times 10^{-2}$ M$_\odot$ for a NS mass of 1.4 M$_\odot$ and an inclination angle $i=90^\circ$. Stellar evolution simulations predicted a hydrogen-rich white or brown dwarf for the secondary star (see \cite{Bildsten2001}). The optical counterpart ($R\sim17.4$) of IGR~J00291+5934 was tentatively identified on 4 December 2004 by the Robotic Palomar 60-inch Telescope \cite{Fox2004} and confirmed by the 4.2-m William Herschel Telescope on La Palma \cite{Roelofs2004} and the Keck I 10-m telescope (Filippenko et al. 2004). An IR counterpart of the source was reported on 8 December 2004 with preliminary magnitudes of $J=16.8\pm0.1$, $H=16.8\pm0.3$, $K=16.1\pm 0.2$ \cite{Steeghs2004}. An optical/NIR photometric study in quiescence performed in 2005 with the 3.6-m Telescopio Nazionale Galileo allowed the VRIJH counterparts of IGR~J00291+5934 to be determined, as well as the observation of optical variability consistent with the orbital period \cite{DAvanzo2007}. The optical counterpart was observed again on 25 July 2015 with the MASTER-IAC robotic telescope \cite{Rebolo2015b}, a detection confirmed by the 2-m Liverpool Telescope \cite{Kopac2015} and the 2-m Faulkes Telescope North at Maui \cite{Russell2015}. An observation of the radio counterpart of the source at 15 GHz was reported on 4 December 2004 with the Ryle Telescope in Cambridge \cite{Pooley2004}, not confirmed by a follow-up observation with the same observatory (upper limits of $\sim0.6$ mJy; \cite{Fender2004}). A 10-hour observation performed with the Westerbork Synthesis Radio Telescope between 2004 December 6 and 7 showed a radio counterpart (at 5 GHz) with a flux of $0.250\pm0.035$ mJy \cite{Fender2004}. Very Large Array observations performed on 2004 December 9 at 4.86 GHz confirmed the reported radio detection, with a flux of $0.17\pm0.05$ mJy. No radio emission was detected during the 2008 outburst (upper limits of 0.16 mJy at 5 GHz; \cite{Linares2008}).
\subsection{HETE J1900.1-2455}
HETE J1900.1-2455 is an atypical transient LMXB discovered by the High Energy Transient Explorer 2 (HETE-2) in 2005 \cite{Vanderspek2005}. Follow-up {\em RXTE\xspace}{} observations of the outburst revealed coherent X-ray pulsations at $\sim377$ Hz showing a Doppler shift compatible with an orbital period of $\sim1.4$ hr and a projected semi-major axis of $\sim0.18\times10^{-1}$ lt-s \cite{Kaaret2006}. After a few tens of days the source displayed a decrease of the pulse fractional amplitude from $\sim4.5$\% to below the sensitivity level, unveiling for the first time the intermittency of X-ray pulsations in accreting millisecond pulsars \cite{Galloway2005b}. The pulsations remained extremely intermittent, with sporadic detections at very low amplitudes, for the next 2 years \cite{Patruno2012a}, after which they became undetectable until the beginning of the quiescence phase in 2015 \cite{Degenaar2017}, with upper limits of $<0.5$\% (see e.g. \cite{Galloway2008,Patruno2012a,Papitto2013a}). The appearance of an 882 Hz QPO was reported right before the beginning of the pulse intermittency \cite{Kaaret2006}.
The source exhibited several tens of thermonuclear X-ray bursts, from which a distance between 4.3 kpc \cite{Suzuki2007} and 4.7 kpc \cite{Galloway2008} has been suggested. Burst oscillations at a frequency within 1 Hz of the NS spin frequency have been detected in only one Type-I burst, in April 2009 \cite{Watts2009}.
The X-ray spectrum of HETE J1900.1-2455 is well fitted by a combination of a Comptonized component and an accretion-disc multi-temperature blackbody \cite{Falanga2007,Papitto2013a}. Evidence for a significant reflection spectrum originating from a truncated disk has been reported by Papitto et al. (2013).
The small NS mass function $f(m_2,m_1,i)\sim 2\times 10^{-6}$ M$_\odot$ implies either a very low mass companion or a very improbable orbital inclination. Assuming a NS mass of 1.4 M$_\odot$ and an inclination angle $i<75^\circ$, a minimum companion mass $m_2=0.0164$ M$_\odot$ is inferred. \cite{Kaaret2006} suggested that the donor star in HETE J1900.1-2455 is most likely a Roche lobe-filling brown dwarf.
The optical counterpart of HETE J1900.1-2455 was detected on 2005 June 18 with an R-band magnitude of $\sim18.4$ mag \cite{Fox2005}. Subsequent observations found the R-band magnitude to be $18.02\pm0.03$ mag, and the V-R colour to be $-0.16$ mag, with spectroscopy revealing a broad HeII $\lambda=468.6$ nm emission line \cite{Elebert2008}. No radio counterpart was identified in VLA observations on 2005 June 19 and 24 at 8.46 GHz \cite{Rupen2005}.
\subsection{SWIFT J1756.9-2508}
SWIFT J1756.9-2508 is a NS low-mass X-ray binary first observed on 2007 June 7 by the Burst Alert Telescope aboard the {\em Swift\xspace}{} satellite \cite{Krimm2007}. Follow-up observations carried out with {\em Swift/XRT\xspace}{} and {\em RXTE\xspace}{} provided an accurate localization of the source and led to the discovery of pulsations at a frequency of $\sim182$ Hz \cite{Markwardt2007b}. Phase-coherent timing analysis of the {\em RXTE\xspace}{} observations allowed the orbital solution to be calculated, revealing an orbital period of $\sim55$ min and a projected semi-major axis of $\sim5.9\times 10^{-3}$ lt-s \cite{Krimm2007}.
The source was observed in outburst three more times, in July 2009 \cite{Patruno2009b,Patruno2010d}, April 2018 \cite{Mereminskiy2018,Bult2018,Sanna2018d} and June 2019 \cite{Sanna2019}. Updates on the orbital solution have been reported based on the timing analysis of the latest outbursts (see e.g. \cite{Patruno2010d,Bult2018,Sanna2018d}). Studies of the secular evolution of the spin frequency and of the orbital period have been carried out by combining all outbursts, revealing a spin frequency derivative of $5-7 \times 10^{-16}$ Hz s$^{-1}$ (suggesting a magnetic field at the polar caps of $B_{PC}=4-6\times 10^8$ G) and an almost constant orbital period, with a $3\sigma$ upper limit of $|\dot{P}_{\rm orb}|< 7\times 10^{-12}$ s s$^{-1}$ \cite{Bult2018,Sanna2018d}. Broad energy-band (0.3-80 keV) analysis of the pulsation spectral energy distribution of SWIFT J1756.9-2508 showed an increase (decrease) of the pulse fractional amplitude with energy for the fundamental (second harmonic) component, suggesting a Comptonization origin \cite{Gierlinski2002, Ibragimov2009}. Moreover, \cite{Bult2018} reported an improved set of source coordinates through astrometric analysis of the pulse arrival times.
The broad-band (3-90 keV) energy properties of SWIFT J1756.9-2508 observed during its 2018 outburst suggested a hard spectral state, compatible with what was observed during the previous outbursts \cite{Linares2008b,Patruno2010d}. The energy spectrum is well modelled by an absorbed cut-off power law ($\Gamma\sim1.5$ and cut-off energy of 70 keV) plus a soft thermal component ($kT_e\sim 0.8$ keV). Interestingly, contrary to the previous outbursts \cite{Patruno2010d}, no significant reflection features have been reported, with a stringent upper limit on the iron line equivalent width of $\sim5$ eV \cite{Sanna2018d}.
The pulsar mass function $f(m_2,m_1,i)\sim 1.6\times 10^{-7}$ M$_\odot$ implies a minimum donor star mass of 0.0067 M$_\odot$, suggesting that Swift J1756.9-2508 harbours a He-rich white dwarf irradiated by the X-ray flux from the accretor \cite{Krimm2007}. A possible NIR Ks counterpart was identified with the VLT equipped with the NACO camera, within the {\em Swift/XRT\xspace}{} positional confidence region of the source. No radio counterpart has been identified yet \cite{Possenti2007,Hessels2007}.
\subsection{Aql X-1}
Aql X-1 is a NS LMXB observed for the first time in the late sixties \cite{Friedman1967} and detected in outburst every 200-300 days (see e.g. \cite{Priedhorsky1984,Kitamoto1993, Campana2013}). These outbursts, typically lasting a few months, show a variety of shapes and peak luminosities, from $L_X\simeq10^{35}$ to $10^{38} (D/5.0$ kpc$)^2$ erg s$^{-1}$ (see e.g. \cite{Kuulkers2003,Campana2013}). The source shows Type-I X-ray bursts, some of which led to the detection of burst oscillations at a frequency of $\sim549$ Hz \cite{Zhang1998} and to the determination of its distance, between 4.5-6 kpc \cite{Jonker2004}. Intriguingly, coherent X-ray pulsations at 550.27 Hz were observed in the persistent emission of the source during its 1998 outburst \cite{Casella2008}. The pulsations, detected for only 150 s, are slightly higher in frequency with respect to the asymptotic frequencies obtained from the burst oscillations. Due to the extremely short duration of the pulsation episode, neither an orbital solution nor a mass function has been inferred from X-rays. Twin kHz QPOs were observed at frequencies of $\sim800$ and $\sim1080$ Hz \cite{Barret2008}.
Based on its spectral and timing properties, Aql X-1 was classified as an atoll-type X-ray binary \cite{Hasinger1989, Reig2004}. At low luminosities the source energy spectrum is dominated by hard power-law emission, while at higher luminosity the energy spectrum pivots to be dominated by its soft thermal components (e.g. \cite{Lin2007}). Recent studies on the broad-band energy spectrum of the source in its soft state revealed significant reflection features allowing to infer the inner radius of the accretion disk, hence to constrain the NS radius \cite{Ludlam2017}.
The optical counterpart of Aql X-1 was identified as a $K7V$ star by \cite{Chevalier1999}. Recent VLT observations allowed this classification to be refined to a $K4\pm2$ star moving at a projected velocity of $136\pm4$ km s$^{-1}$, constraining the orbital inclination to $36-47^\circ$, lower than that inferred from the detection of two dipping episodes in the {\em RXTE\xspace}{} X-ray light curves ($72-79^\circ$; \cite{Galloway2016}). Based on the optical light curve modulations of the companion star during quiescence phases, an orbital period of 19 h and a $\sim1$ M$_\odot$ companion star have been suggested (see e.g. \cite{Thorstensen1978}). Radio emission with flux densities of $\leq0.4$ mJy was first detected from Aql X-1 in outburst with the VLA \cite{Hjellming1990}. No significant correlation between the radio and optical emission was reported from a thorough analysis of the March 2002 outburst \cite{Tudose2009,Maitra2008}. \cite{MillerJones2010} reported a positive correlation between the radio and X-ray emission during the entire 2009 outburst, highlighting strong similarities with the behaviour observed in BHs. Finally, multi-wavelength studies of Aql X-1 over the accretion state transitions of its 2016 outburst showed radio to millimetre spectra consistent with emission from a jet, with a spectral break decreasing from optically thick to optically thin synchrotron emission during the transition from a hard to a soft accretion state. Moreover, in the same outburst, a radio flux density (at 5.5 GHz) of $\sim0.82$ mJy was reported, the highest recorded to date for the source \cite{DiazTrigo2018}.
\subsection{SAX J1748.9-2021}
SAX J1748.9-2021 is an intermittent AMXP discovered by Beppo-SAX in 1998 \cite{intZand1998}. The source is located in the globular cluster NGC 6440 at a distance of $\sim8.2$ kpc (\cite{intZand1999,Valenti2007}, and references therein). The detection of a type-I X-ray burst associated with SAX J1748.9-2021 confirmed the nature of the compact object hosted in the LMXB system \cite{intZand2001a}. The identification of the optical and quiescent counterparts of the binary followed shortly afterwards \cite{intZand2001b}. SAX J1748.9-2021 has been observed in outburst in 2001, 2005, 2009-2010, 2015 and 2017 (\cite{intZand1999,Verbunt2000,intZand2001a,Markwardt2005,Patruno2009,Pintore2016,Sanna2016,Pintore2018}), with also a possible a posteriori associated outburst in 1971 \cite{Markert1975}.
X-ray pulsations at a frequency of $\sim442.3$ Hz were observed for the first time in a single {\em RXTE\xspace}{} observation during the 2005 outburst \cite{Gavriil2007}. Independent analysis of {\em RXTE\xspace}{} archival observations from the 2001 and 2005 outbursts led to an increase in the number of detected signals and revealed the intermittent nature of the coherent X-ray pulsation on timescales of hundreds of seconds \cite{Altamirano2008,Patruno2010d}. The source orbital solution, characterised by an orbital period of $\sim8.76$ hr, was first proposed by \cite{Altamirano2008} and later refined by \cite{Patruno2009} applying phase-coherent timing analysis to the same dataset. Intermittent pulsations have been reported also during the 2009-2010 \cite{Patruno2010d} and the 2017 outburst (Sanna et al. in prep.), while both the available observations of the 2015 outburst led to a detection \cite{Sanna2016}. Studies of the energy dependence of the pulse profile showed that the pulse fraction in SAX J1748.9-2021 increases linearly from 0.5\% up to 4\% in the energy range 2.5-20 keV \cite{Patruno2009}, a result confirmed in the range 0.5-15 keV by the analysis of the 2015 {\em XMM-Newton\xspace}{} observation \cite{Sanna2016}.
SAX J1748.9-2021 has shown numerous Type-I X-ray bursts, observed with {\em RXTE\xspace}{} (see e.g. \cite{Galloway2008b}), {\em XMM-Newton\xspace}{} \cite{Pintore2016,Pintore2018}, {\em NuSTAR\xspace}{}/ASTROSAT \cite{Sharma2019} and {\em INTEGRAL\xspace}{}-{\em Swift\xspace}{} \cite{Li2018} since its discovery. Time-resolved spectroscopy of a sample of four thermonuclear X-ray bursts observed from SAX J1748.9-2021 was used to constrain the mass and radius of the NS to the ranges 1.3-1.8 M$_\odot$ and 8-11 km, respectively \cite{Guver2013}.
The spectral properties of SAX J1748.9-2021 have been extensively investigated, especially during the last two outbursts. Rapid spectral changes from a hard to a soft state have been reported from the analysis of the available {\em INTEGRAL\xspace}{}, {\em Swift\xspace}{}, and {\em XMM-Newton\xspace}{} observations of the 2015 outburst (see e.g. \cite{Li2018,Pintore2016}). The spectral properties of SAX J1748.9-2021 during its 2017 outburst are consistent with an absorbed Comptonization ($\Gamma\sim 1.65$ and $kT_e\sim 20$ keV) plus a blackbody ($kT_{bb}\sim0.6$ keV, $R_{bb}\sim2.5$ km), suggesting a hard state of the source \cite{Pintore2018}.
Considerations on the age and the metallicity content of the globular cluster, combined with the accretion conditions, led \cite{Altamirano2008} to describe the companion star as a main-sequence or slightly evolved star with a mass ranging between 0.85 and 1.1 M$_\odot$. Interestingly, the companion star has recently been confirmed with the Hubble Space Telescope, revealing its main-sequence nature with a mass of 0.70-0.83 M$_\odot$, a radius of $0.88\pm0.02$ R$_\odot$, and a surface temperature of $5250\pm80$ K. These parameters, combined with the orbital properties of the binary, suggest a very low inclination angle ($8^\circ-14^\circ$) and a filling/overflowing Roche lobe \cite{Cadelano2017}. VLA radio observations provided stringent upper limits during the 2009 outburst, implying that the radio emission is quenched at high X-ray luminosities \cite{MillerJones2010b}.
\subsection{NGC 6440 X-2}
NGC 6440 X-2 is a NS LMXB serendipitously discovered with {\em Chandra\xspace}{} in the globular cluster NGC 6440 on July 28, 2009 \cite{Heinke2009a} and observed in outburst again a month later with {\em RXTE\xspace}{} \cite{Altamirano2010b}. The nature of the compact object harboured in the binary system was revealed by the detection of coherent pulsations at $\sim205.9$ Hz \cite{Altamirano2009}. The spin frequency drifts observed in the {\em RXTE\xspace}{} observations of the source have been interpreted with an orbital period of $\sim57$ minutes and a projected semi-major axis of $\sim6.2$ lt-ms, parameters that classify the source among the ultra-compact binary systems. Studies of the coherent pulsations performed on the outbursts detected by {\em RXTE\xspace}{} showed that the fundamental pulse amplitude ranges between 5 and 15\% rms, whereas the second harmonic, if detected, has a lower amplitude ($\sim2$\% rms). Moreover, no evidence of a spin derivative has been detected (upper limit of $5\times 10^{-13}$ Hz s$^{-1}$; \cite{Bult2015c}). Besides coherent pulsations, the power density spectrum of the source shows a strong 1 Hz quasi-periodic modulation (also visible in the light curves; \cite{Patruno2013}), which strongly resembles that observed in {\rm SAX~J1808.4$-$3658\xspace}{} \cite{Patruno2009c,vanderKlis2000}.
The NS mass function $f(m_2,m_1, i)\simeq 1.7\times 10^{-7}$ M$_\odot$ suggests a minimum mass for the companion star of $m_2=7\times10^{-3}$ M$_\odot$, assuming a NS mass $m_1=1.4$ M$_\odot$ and an inclination angle $i<75^\circ$ inferred from the inspection of the X-ray light curves. Light companion stars in ultra-compact X-ray binary systems are probably inconsistent with brown dwarf models while white dwarf models suggest that the companion is probably a He-dominated donor (see e.g. \cite{Altamirano2010,Krimm2007}).
The extremely short outburst durations (typically between 3 and 5 days; \cite{Altamirano2010}), short recurrence time (10 outbursts between 2009 and 2011; \cite{Patruno2013}) and very low long-term average mass accretion rate ($\langle\dot{M}\rangle < 2\times10^{-12}$ M$_{\odot}$ yr$^{-1}$; \cite{Heinke2010}) make NGC 6440 X-2 extremely peculiar among the AMXPs. As already remarked by \cite{Patruno2012}, AMXPs with properties similar to NGC 6440 X-2 might be difficult to detect with the currently available scan flux sensitivities. This is particularly important given that AMXPs seem to be associated with faint LMXBs, and systems of this type may therefore contain undiscovered AMXPs.
No optical counterpart of NGC 6440 X-2 has been identified yet, with upper limits of B > 22 and V > 21 from archival Hubble Space Telescope (HST) imaging of the globular cluster NGC 6440 when the source was in quiescence. Gemini-South observations of the source during its August 2009 outburst confirm the HST result with a g > 22 upper limit. Observations carried out with the CTIO 4-m telescope during the 2009 July outburst also did not reveal any counterpart, with J > 18.5 and K > 17 \cite{Heinke2010}.
\subsection{IGR~J17511-3057}
IGR~J17511-3057 is a LMXB observed for the first time by {\em INTEGRAL\xspace}{} on 12 September, 2009 \cite{Baldovin2009}. Coherent modulation of the {\em RXTE\xspace}{} flux at about 4 ms allowed the source to be classified as an AMXP \cite{Markwardt2010}. Interestingly, another transient X-ray source (XTE J1751-305) was observed in outburst on 7 October, 2009 very near the position of IGR~J17511-3057 \cite{Chenevez2009,Falanga2011}. The detection of pulsations at $\sim 435$ Hz \cite{Markwardt2010} while observing IGR~J17511-3057, combined with the small difference between the source coordinates, allowed the two AMXPs to be distinguished.
Standard timing analysis determined the orbital period of IGR~J17511-3057 to be $\sim3.47$ h, with a projected semi-major axis of $2.7\times 10^2$ lt-ms (see e.g. \cite{Riggio2011}, and references therein). The pulse profile of IGR~J17511-3057 displays several peculiarities, such as: a) the highest second harmonic fractional amplitude ($\sim23\%$, which linearly decreased to $\sim17\%$ at the end of the outburst) among the known AMXPs; b) a rich harmonic content (with sporadic detections of the fourth harmonic), similar to what was reported for the AMXPs XTE J1807-294 \cite{Patruno2010b} and IGR~J17591-2342 \cite{Sanna2018c} and, rarely, for {\rm SAX~J1808.4$-$3658\xspace}{} \cite{Hartman2008}; c) significantly different behaviours of the phase delays of the four harmonic components with time (see \cite{Riggio2011} for details on the topic). A broad-band energy (up to $\sim100$ keV) study of the coherent pulsation tracked with {\em INTEGRAL\xspace}{} showed that the pulsed fraction (fundamental component) significantly decreases from $\sim22$\% at 3 keV to a constant value of $\sim17-18$\% between 7-30 keV, possibly decreasing again down to $\sim13$\% at 60 keV \cite{Falanga2011}.
The orbital ephemeris of the source implies a pulsar mass function of $f(m_2,m_1,i) \sim 1\times10^{-3}$ M$_\odot$, from which a lower limit of $m_2=0.14$ M$_\odot$ can be inferred for the companion star under the assumption of an inclination angle of $90^\circ$ and a NS mass of 1.4 M$_\odot$ \cite{Markwardt2009}. An improved estimate of this lower limit was reported by \cite{Papitto2010}, who considered the lack of eclipses or dips during the outburst of IGR~J17511-3057.
Thermonuclear X-ray bursts were observed with {\em Swift\xspace}{}, {\em RXTE\xspace}{}, {\em Chandra\xspace}{} and {\em XMM-Newton\xspace}{} (see e.g. \cite{Bozzo2010,Watts2009b,Novak2009, Papitto2010}), from which burst oscillations at $\sim245$ Hz phase locked to the persistent pulsations have been detected \cite{Watts2009b,Papitto2010,Altamirano2010c,Riggio2011}.
The {\em XMM-Newton\xspace}{} 0.5-11 keV spectrum of IGR~J17511-3057 has been well modelled by at least three components, interpreted as the multicoloured disc emission, the thermal emission from the NS surface and the thermal Comptonization emission. A spectral fit of the {\em XMM-Newton\xspace}{} and {\em RXTE\xspace}{} data, taken in a simultaneous temporal window, helped constrain the Comptonization parameters: the electron temperature, $kT_e=51\pm6$ keV, is rather high, while the optical depth ($\tau_T$=1.34$\pm$0.03) is moderate.
The broad-band average spectrum of the source obtained by combining {\em RXTE\xspace}{}, {\em Swift/XRT\xspace}{} and {\em INTEGRAL\xspace}{} data has been well described by an absorbed thermal Comptonization with an electron temperature of $\sim25$ keV and Thomson optical depth $\tau_T\sim2$ in a slab geometry \cite{Falanga2011}. A similar model has been reported for the combined {\em XMM-Newton\xspace}{}/{\em RXTE\xspace}{} spectrum \cite{Papitto2010}. Signatures of reflection, such as a broadened iron line at 6.4 keV, have been observed and modelled in the {\em XMM-Newton\xspace}{} dataset \cite{Papitto2010}, allowing estimates of the inclination in the range $38^\circ-68^\circ$ to be inferred. A new outburst of IGR~J17511-3057 was detected by {\em INTEGRAL\xspace}{} on March 23, 2015 \cite{Bozzo2015b,Bozzo2015c}. Combined analysis of the {\em XMM-Newton\xspace}{}, {\em INTEGRAL\xspace}{} and {\em Swift\xspace}{} observations during the latest outburst of the source reported remarkably similar timing and spectral results with respect to the 2009 outburst, suggesting that the accretion flow properties did not change much between the two episodes \cite{Papitto2016}.
A NIR candidate counterpart was reported on September 22, 2009 with a Ks-band magnitude of $\sim18.0$, which faded to Ks > 19 by October 7, 2009 \cite{Torres2009}. Radio upper limits of 0.16-0.18 mJy between September 16 and 25, 2009 were set with the VLA by \cite{MillerJones2009}.
\subsection{SWIFT J1749.4-2807}
SWIFT J1749.4-2807 was discovered on June 2, 2006 \cite{Schady2006} by {\em Swift/BAT\xspace}{}, during a bright type-I X-ray burst; right afterwards, {\em Swift/XRT\xspace}{} started monitoring the evolution of the source outburst. Detailed analysis of the {\em Swift\xspace}{} data \cite{Wijnands2009} revealed that the 2006 burst presents spectral properties consistent with those of a thermonuclear type-I X-ray burst, allowing the source distance to be constrained to $6.7\pm 1.3$ kpc under the assumption that the peak X-ray luminosity of the burst corresponded to the Eddington value. Further {\em Swift/XRT\xspace}{} observations revealed an X-ray counterpart of the burst, also confirmed by the detection of a coincident faint point source in the {\em XMM-Newton\xspace}{} archival dataset \cite{Wijnands2009}.
The source was detected again in outburst between April 10 and 13, 2010 by {\em INTEGRAL\xspace}{} and {\em Swift\xspace}{} \cite{Pavan2010,Chenevez2010}. Follow-up {\em RXTE\xspace}{} observations revealed significant coherent pulsations at $\sim518$ Hz accompanied by a strong first overtone at $\sim1036$ Hz \cite{Altamirano2010d,Bozzo2010b}. The system orbital solution, derived by applying pulse phase-coherent techniques, constrained the orbital period to $\sim8.8$ hr and the projected semi-major axis to $1900$ lt-ms \cite{Strohmayer2010}. This solution implies a NS mass function of $\sim5.5\times 10^{-2}$ M$_\odot$ and a corresponding minimum mass for the companion of $0.475$ M$_\odot$ (assuming a NS mass of $1.4$ M$_\odot$). Three X-ray eclipses with a mean duration of 2172 seconds were discovered in the {\em RXTE\xspace}{} light curve of the source, making SWIFT J1749.4-2807 the first and only eclipsing AMXP \cite{Markwardt2010c}. The simultaneous presence of X-ray pulsations and X-ray eclipses allowed tight constraints to be set on the companion star mass, around $\sim0.7$ M$_\odot$, rather large with respect to most of the other AMXP companions. Moreover, the combined timing analysis allowed the first attempted detection of Shapiro delay effects in X-rays for extrasolar objects \cite{Markwardt2010b}.
Spectral properties of the source have been inferred by investigating its broad-band (0.5-40 keV) energy spectrum, well described by an absorbed Comptonized component with an absorption column density of $N_H = 3.0\times 10^{22}$ cm$^{-2}$ (almost 3 times larger than the expected Galactic column density in the direction of the source) and a power-law spectral index $\Gamma\simeq1.7$ \cite{Ferrigno2011}.
Neither optical nor NIR counterparts of SWIFT J1749.4-2807 have been identified as yet \cite{Yang2010,DAvanzo2011}.
\subsection{IGR~J17498-2921}
IGR~J17498-2921 is a transient X-ray binary system observed for the first time on 2011 August 11 by {\em INTEGRAL\xspace}{} \cite{Gibaud2011}. Follow-up {\em Swift\xspace}{} \cite{Bozzo2011} and {\em Chandra\xspace}{} \cite{Chakrabarty2011} observations led to the precise determination of the source position. A subsequent {\em RXTE\xspace}{} observation allowed the discovery of coherent X-ray pulsations at $\sim401$ Hz \cite{Papitto2011b}, modulated by an orbital period of 3.8 hr. The pulse profile of the source is well described by a sinusoidal model with an rms amplitude ranging between 6 and 11 per cent. Evidence for a weak second harmonic has been observed in a subset of observations \cite{Papitto2011b}. Phase-coherent timing analysis of the pulse phases of IGR~J17498-2921 suggested the presence of a marginally significant negative spin frequency derivative (spin-down) at a rate of $(-6.3\pm1.9)\times10^{-14}$ Hz/s, similar to that observed in four other AMXPs \cite{Galloway2002,Papitto2007,Bult2020}.
Type-I bursts were detected by {\em INTEGRAL\xspace}{} \cite{Ferrigno2011b} and {\em RXTE\xspace}{}, while \cite{Linares2011} found burst oscillations consistent with the previously discovered coherent X-ray pulsations. Moreover, \cite{Linares2011} reported evidence for a photospheric radius expansion episode, which allowed IGR~J17498-2921 to be located at a distance of $\sim7.6$ kpc. The transient returned to quiescence on 2011 Sep 19, after a 37-day-long outburst \cite{Linares2011b}.
The pulsar mass function $f(m_2, m_1, i)\simeq 2\times10^{-3}$ M$_\odot$ and the lack of eclipses in the observed X-ray light curves suggest a lower limit to the companion mass of $m_2 > 0.17$ M$_\odot$, assuming a NS with mass $m_1=1.4$ M$_\odot$. An upper limit to the companion mass of $m_2 = 0.48$ M$_\odot$ (corresponding to an inclination of $i = 24.6^\circ$) has been reported under the assumption of a Roche-lobe-filling star with a zero-age main-sequence mass-radius relation \cite{Papitto2011b}.
Simultaneous {\em INTEGRAL\xspace}{}, {\em RXTE\xspace}{}, and {\em Swift\xspace}{} observations have been combined to investigate the broad-band spectrum and timing behaviour of the source \cite{Falanga2012}. The broad-band (0.6-300 keV) energy spectrum of the persistent emission is well modelled by thermal Comptonization with an electron temperature of $\sim50$ keV, a seed photon temperature of $\sim1$ keV, and optical depth $\tau_T\sim1$ in a slab geometry. The pulsed fraction showed no significant energy dependence, remaining roughly constant at 6-7\%. Soft lags of the sinusoidal profile have been observed to increase significantly in absolute value, peaking at $\sim-60\mu$s around 10 keV.
{\em Chandra\xspace}{} archival data allowed IGR~J17498-2921 to be detected in quiescence at a luminosity of $\sim2\times10^{32}$ erg s$^{-1}$ (assuming a distance of 8 kpc; \cite{Jonker2011}). The near-IR and optical counterparts have also been found to be very faint \cite{Greiss2011,Russell2011,Torres2011}.
\subsection{MAXI J0911-655}
MAXI J0911-655 (also known as Swift J0911.9-6452) is an X-ray transient detected for the first time by the {\em MAXI/GSC\xspace}{} nova-alert system trigger on February 19, 2016 \cite{Serino2016}, at a position compatible with the globular cluster NGC 2808. Follow-up observations performed by {\em Swift\xspace}{} and by {\em Chandra\xspace}{} \cite{Homan2016} confirmed the detection of the new X-ray transient at the position RA = 09h 12m 2.43s and Dec = -64$^\circ$ 52' 6.4'', with a 90\% confidence level uncertainty of 0.6''.
Almost 2 months after the first detection, MAXI J0911-655 was observed twice by {\em XMM-Newton\xspace}{} and once by {\em NuSTAR\xspace}{}, on April 24, May 22 and May 24, 2016, respectively. Timing analysis of these observations led to the discovery of coherent pulsations at a period of $\sim$2.9 ms, allowing the AMXP classification of MAXI J0911-655 \cite{Sanna2017a}. The Doppler shift of the coherent pulsation confirmed the binary nature of the system, revealing an ultra-compact system with an orbital period of $\sim44.3$ min and a projected semi-major axis of $\sim17.6$ lt-ms, closely resembling the orbital properties of other AMXPs such as XTE J1751-305 \cite{Markwardt2002,Papitto2008}, XTE J0929-314 \cite{Galloway2002}, XTE J1807-294 \cite{Kirsch2004,Riggio2008,Chou2008,Patruno2010b}, SWIFT J1756.9-2508 \cite{Krimm2007,Linares2008b,Patruno2010b} and NGC 6440 X-2 \cite{Altamirano2010b,Bult2015c}.
\begin{figure}
\centering
\includegraphics[width=0.55\textwidth]{lc_updated_maxi_0911.pdf}
\caption{MAXI J0911-655 light curve as observed by {\em Swift/XRT\xspace}{} (black points) since its discovery at the beginning of 2016. Green stars, blue diamonds, red hexagons, purple pentagons and blue squares represent the observations collected by {\em XMM-Newton\xspace}{}, {\em NuSTAR\xspace}{}, {\em INTEGRAL\xspace}{}, {\em Chandra\xspace}{} and {\em NICER\xspace}{}, respectively. (Figure from \cite{Sanna2017a}.)}
\label{fig:lc_0911}
\end{figure}
Fig.~\ref{fig:lc_0911} shows the long-term monitoring of the source performed with {\em Swift/XRT\xspace}{} (black points) since its discovery in early 2016. Also overlaid in the figure are the observations performed with other X-ray facilities ({\em XMM-Newton\xspace}{}, {\em NuSTAR\xspace}{}, {\em INTEGRAL\xspace}{}, {\em Chandra\xspace}{} and {\em NICER\xspace}{}) during the last $\sim3.5$ years of uninterrupted outburst. We note that recently the source underwent an anomalous swing in luminosity, with the flux dropping from a {\em Swift/XRT\xspace}{} count rate of $\sim4.5$ cts/s to almost zero within a couple of weeks, before returning to a high flux level \cite{Bult2019b}. The ongoing long-lasting X-ray outburst resembles the activity of the AMXP HETE J1900.1-2455, which underwent a long-lasting (about 10 years) active state.
Interestingly, similarly to HETE J1900.1-2455, MAXI J0911-655 showed pulsations only during the first two months from its discovery \cite{Sanna2017a}.
Based on the mass function $f(m_2, m_1, i) \sim 6.2\times10^{-6}$ M$_\odot$ and the lack of eclipses (as well as dips) in the light curves, a minimum companion mass of $2.4\times 10^{-2}$ M$_\odot$ for a canonical 1.4 M$_\odot$ NS has been estimated. The Roche lobe of the companion star could be filled either by a hot ($5\times10^{6}$ K) pure helium white dwarf with a mass of $2.8\times 10^{-2}$ M$_\odot$ (implying an inclination angle $i\simeq 58^\circ$) or by an old ($>5$ Gyr) brown dwarf with metallicity between solar and sub-solar and mass ranging between $6.5\times 10^{-2}$ and $8.5\times 10^{-2}$ M$_\odot$ ($16^\circ < i < 21^\circ$).
Finally, the broad-band energy spectrum of MAXI J0911-655 is well described by the superposition of a weak soft black-body-like component (kT$\sim0.5$ keV) and a hard high-energy cut-off power law ($\Gamma \sim 1.7$ and kT$_e$ $\sim130$ keV), in agreement with the spectral properties of other AMXPs observed in a hard state \cite{Falanga2005a,Falanga2005b,Gierlinski2005,Patruno2009,Papitto2009,Papitto2013a}. Moreover, the source shows marginal evidence of a weak and narrow reflection component in the energy range 6.5-6.6 keV, which has been identified as the K$\alpha$ emission line from helium-like iron \cite{Sanna2017a}.
Nearly-simultaneous radio and X-ray observations of the source performed by ATCA on April 6, 2016, failed to detect a radio counterpart \cite{Tudor2016}.
\subsection{IGR~J17062-6143}
IGR~J17062-6143, observed for the first time in 2006 by the {\em INTEGRAL\xspace}{} observatory \cite{Churazov2007}, has since been persistently accreting at luminosities in the range $\sim10^{-3}-10^{-2}$ L$_\text{Edd}$. The detection of Type-I X-ray bursts \cite{Degenaar2013} revealed the NS nature of the primary star and allowed the distance to be constrained to 7.3$\pm$0.5 kpc \cite{Keek2017}.
X-ray pulsations at $\sim$163.6 Hz (fractional pulsed amplitude of $9.4\pm1.1$\%) were detected in a single $\simeq 1200$ s observation with {\em RXTE\xspace}{} \cite{Strohmayer2017}; however, an extensive pulsation search using the {\em XMM-Newton\xspace}{} EPIC timing-mode data of the source did not confirm them.
Between August 9 and August 15 2017, the {\em NICER\xspace}{} observatory observed IGR~J17062-6143 for a total exposure of 26 ks confirming that the source is a $\sim$163.6 Hz pulsar, and also revealing an ultra-compact orbit. Phase coherent timing analysis of the {\em NICER\xspace}{} data revealed an orbital period of $\sim38$ minutes, the shortest currently known for an AMXP, and a projected semi-major axis of $\sim4\times10^{-3}$ lt-s \cite{Strohmayer2018}.
The variable detection of the coherent pulsation suggests that IGR~J17062-6143 might be an intermittent AMXP. These sources, of which four other candidates are currently known (SAX J1748-2021 \cite{Altamirano2008,Sanna2016}, Aql X-1 \cite{Casella2008}, HETE J1900.1-2455 \cite{Kaaret2006} and MAXI J0911-655 \cite{Sanna2017a}), only show detectable pulsations a fraction of the time. In HETE J1900.1-2455, the pulsations disappeared around 2 months into the outburst, only to sporadically reappear afterwards \cite{Patruno2012a} before the source returned to quiescence over a decade later \cite{Degenaar2017b}. Very similarly, MAXI J0911-655 showed pulsations for a few months from the beginning of the outburst, while none have been reported for the last 3 years of activity. Mechanisms such as accretion-induced magnetic field burial have been proposed to explain the disappearance of the pulsations (see e.g. \cite{Cumming2001}); however, no definitive consensus has been reached on the subject.
The NS mass function, $f_x=9.12\times 10^{-8}$ M$_\odot$, defines a lower limit to the mass of the secondary star, $m_2$, in the range 0.005-0.007 M$_\odot$ for a NS mass $m_1$ in the range 1.2-2 M$_\odot$ \cite{Strohmayer2018}. The constraints summarised in Fig.~\ref{fig:17062_mass} suggest that IGR~J17062-6143 is observed at relatively low inclination, and the secondary appears to be consistent with the helium donors of AM CVn systems explored by \cite{Deloye2007} (dotted curves in Fig.~\ref{fig:17062_mass}).
\begin{figure}
\centering
\includegraphics[width=0.55\textwidth]{17062_donor_Strohmayer_2018.png}
\caption{Constraints on the companion star in IGR~J17062-6143. The Roche lobe constraint is plotted for three different NS masses, 1.2 (green), 1.4 (black), and
1.8 M$_\odot$ (red). The different symbols along the curves denote the secondary masses
from the mass function constraint for different assumed inclinations, $i$, and for
two values of the NS mass at each inclination. The dashed curve is the fitting formula from \cite{Nelemans2001} that approximates the mass-radius relation for low-mass, cold,
pure helium white dwarfs \cite{Zapolsky1969}. The dotted curves
denote a range of mass-radius values from the binary evolutionary calculations of \cite{Deloye2007} for the helium donors of AM CVn systems. The
dashed-dotted curves show mass-radius relations for carbon white dwarfs with
central temperatures of $10^4$ (lower) and $3\times10^6$ K (upper), from \cite{Deloye2003}. (Figure from \cite{Strohmayer2018}.)}
\label{fig:17062_mass}
\end{figure}
A hint of accretion from a degenerate helium dwarf in an ultra-compact system has also been suggested from the properties of the long-duration (tens of minutes) thermonuclear X-ray bursts observed by {\em Swift/XRT\xspace}{} in 2015, consistent with the accumulation of helium-rich material on the NS surface \cite{Keek2017}. However, accretion of hydrogen-rich fuel can, under certain conditions, also lead to thick, combustible helium layers, making helium-powered nuclear flashes not necessarily a definitive indication of a degenerate helium dwarf companion \cite{Fujimoto1981,Galloway2006}.
The spectral properties of the source have been extensively investigated by {\em Swift\xspace}{}, {\em NuSTAR\xspace}{}, {\em Chandra\xspace}{}, and {\em XMM-Newton\xspace}{}. \cite{Degenaar2017} reported the presence of Fe $K\alpha$ reflection features in the {\em NuSTAR\xspace}{} data, from which an inner disk truncated out to $\sim100$ gravitational radii can be inferred. Simultaneous {\em NuSTAR\xspace}{} and {\em XMM-Newton\xspace}{} observations were analysed by \cite{VandenEijnden2018}, who also report the presence of reflection features, suggesting a similarly truncated disk as in \cite{Degenaar2017}. They note, however, that a disk extending down to the NS surface cannot be excluded if the binary inclination is very low. Based on analysis of {\em XMM-Newton\xspace}{} Reflection Grating Spectrometer data, they also suggest that the system may have an oxygen-rich circumbinary environment, perhaps due to an outflow.
A multiwavelength analysis of IGR~J17062-6143 has been carried out, showing that the UV to NIR spectral energy distribution (SED) can be very well described by a standard accretion disc with an outer radius of $\sim2.2\times10^{10}$ cm \cite{HernandezSantisteban2019}. Moreover, the SED modelling demonstrates that the accretion disc spectrum does not extend into the soft X-rays, implying that the thermal emission component seen in the X-ray spectrum of IGR~J17062-6143 (and other NS LMXBs accreting at low rates) is likely from the surface of the NS, as was previously hypothesised based on X-ray spectral analysis (e.g. \cite{ArmasPadilla2013,Degenaar2017}). Studies of the low-resolution optical spectrum of the source show a blue continuum consistent with an accretion disc, but no emission lines of H, He, or other elements are observed, which prevents a direct constraint on the donor type.
\subsection{IGR~J16597-3704}
IGR~J16597-3704 is a transient LMXB discovered by {\em INTEGRAL\xspace}{} in October 2017 \cite{Bozzo2017} and located within the globular cluster NGC 6256, at a distance of roughly 9.1 kpc. Follow-up radio (VLA; \cite{Tetarenko2017}) and X-ray ({\em Chandra\xspace}{}; \cite{Chakrabarty2017}) observations provided accurate coordinates for the source.
A {\em NuSTAR\xspace}{} observation performed a few days after the discovery of the source revealed X-ray pulsations at $\sim 105$ Hz characterised by a clear Doppler modulation compatible with a binary system of $\sim$46 minutes orbital period and projected semi-major axis of $\sim5$ lt-ms \cite{Sanna2018a}. Its short orbital period classifies IGR~J16597-3704 among the so-called ultra-compact LMXBs.
The system mass function $f(m_2 ,m_1 ,i)\sim 1.2\times 10^{-7}$ M$_\odot$ and the lack of eclipses/dips in the X-ray light curve of the source suggest a secondary mass $m_2\gtrsim6.5\times 10^{-3}$ M$_\odot$ ($m_2\gtrsim8\times 10^{-3}$ M$_\odot$) for a 1.4 M$_\odot$ (2 M$_\odot$) NS, consistent with the expected donor mass of $\sim0.01$ M$_\odot$ for an ultra-compact NS binary with a 46-minute orbital period (e.g. \cite{vanHaaften2012}). Considerations on the pulsar spin equilibrium suggest a dipolar magnetic field of $9.2\times 10^{8} < B < 5.2\times 10^{10}$ G, significantly larger than the average magnetic field of known AMXPs (see e.g. \cite{Mukherjee2015, Degenaar2017}). Combined with the higher-than-average spin period of the source, this magnetic field suggests that IGR~J16597-3704 has been discovered in a relatively early stage of its recycling process. \cite{Sanna2018a} note that the mass required to spin up an old, slowly rotating NS to $\sim105$ Hz (of the order of $10^{-3}$ M$_\odot$; see also \cite{Burderi1999}) does not efficiently suppress the dipolar magnetic field, which limits the NS spin period.
Interestingly, the best pulse profile, obtained by epoch-folding the $\sim40$ ks {\em NuSTAR\xspace}{} observation corrected for the updated ephemeris (see Fig.~\ref{fig:16597}), is harmonically rich, with a peculiar pulse shape well fitted by a combination of four sinusoidal components, where the fundamental, second, third and fourth harmonics have fractional amplitudes of $\sim$14\%, $\sim$4\%, $\sim$3.8\% and $\sim$0.9\%, respectively.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{pulse_profile_nustar_latest2.pdf}
\caption{IGR~J16597-3704 pulse profile (black points) obtained from the epoch-folded {\em NuSTAR\xspace}{} data. The best fit model obtained by combining four sinusoidal components with harmonically related periods is also shown (red line).
Two cycles of the pulse profile are shown for clarity. (Image taken from \cite{Sanna2018a}.)}
\label{fig:16597}
\end{figure}
The energy spectrum of IGR~J16597-3704 is well described by an absorbed disk blackbody ($kT=1.42\pm0.07$ keV) plus a thermally Comptonized continuum ($\Gamma=2.3\pm0.2$) with seed photons from the blackbody radiation ($kT_{bb}=2.6\pm0.1$ keV). The measured absorption column density
of $(8.2\pm1.0)\times10^{21}$~cm$^{-2}$ is consistent with that expected in the direction of the source ($\sim 9.5\times10^{21}$ cm$^{-2}$) using the cluster A$_V$ \cite{Harris1996} and the appropriate conversion from A$_V$ to N$_H$ assuming Wilms abundances (see e.g. \cite{Bahramian2015,Foight2016}). No evidence for spectral lines (e.g. Iron K-$\alpha$) or reflection humps has been reported \cite{Sanna2018b}.
Interestingly, the large magnetic field combined with the moderately long spin period could also be responsible for the structured pulse profile, as well as for the lack of emission lines and reflection components in the energy spectrum.
A radio counterpart to IGR~J16597-3704 was observed with the VLA between 2017 October 23 and 27. The radio observations revealed that IGR~J16597-3704 is one of the radio-fainter systems in the NS X-ray binary population. Moreover, on 2017 November 3 the source was observed with the Parkes radio telescope with the aim of searching for radio pulsations. No radio pulsations were found down to a flux density of 0.05 mJy.
\subsection{IGR~J17379-3747}
The X-ray transient IGR~J17379-3747 was first discovered in February 2004 through the detection of a type I X-ray burst with {\em INTEGRAL\xspace}{} \cite{Chelovekov2006}. An independent classification was obtained later on with {\em RXTE\xspace}{} \cite{Markwardt2008}, while the source was ultimately catalogued after the {\em Swift/XRT\xspace}{} X-ray localization \cite{Bird2007,Krivonos2007,Krimm2008}. Archival {\em INTEGRAL\xspace}{} and {\em RXTE\xspace}{} observations suggest that the 2004 outburst of the source lasted for almost 40 days \cite{Markwardt2008,Chelovekov2010}. In September 2008, {\em RXTE\xspace}{} revealed a second outburst of the source characterised by a gradual decline in flux, with the total outburst lasting roughly 2-3 weeks \cite{Markwardt2008,Shaw2008}.
Renewed activity from the source was detected by {\em MAXI/GSC\xspace}{} in March 2018 \cite{Negoro2018}. Follow-up observations with {\em NICER\xspace}{} and {\em XMM-Newton\xspace}{} allowed the discovery of 468 Hz coherent X-ray pulsations \cite{Strohmayer2018b,Sanna2018b}. Analysis of the archival {\em RXTE\xspace}{} data led to the recovery of pulse detections in both previous outbursts \cite{Sanna2018b}.
The source distance is still not precisely determined; however, based on its location in the direction of the Galactic centre, a distance of 8.5 kpc is typically adopted.
The binary system mass function $f(m_2 , m_1 , i) \sim 8\times 10^{-5}$ M$_{\odot}$, as well as the lack of eclipses in the X-ray light curve (implying an inclination angle $i<75^\circ$), suggests a donor mass $m_2 \gtrsim 0.056$ M$_\odot$ for a 1.4 M$_\odot$ NS ($m_2 \gtrsim 0.07$ M$_\odot$ for a 2 M$_\odot$ NS). Considerations on the binary mass transfer conditions suggest that the companion star could be a hot brown dwarf, likely heated by low-level X-ray radiation during the quiescent phases (e.g. \cite{Sanna2018b,Bildsten2001,Galloway2005b}).
The 0.5-10 keV average energy spectrum of IGR~J17379-3747 is well described by the superposition of a soft disk component (kT$\sim$0.45 keV) and a hard power law ($\Gamma\sim1.9$), consistent with typical AMXPs observed in outburst (see e.g. \cite{Gierlinski2005,Papitto2009,Falanga2012}). No evidence of emission lines or reflection components in the energy spectrum has been reported \cite{Sanna2018b}, in analogy with the AMXPs XTE J1807-294 \cite{Falanga2005a}, XTE J1751-305 \cite{Miller2003}, SWIFT J1756.9-2508 \cite{Sanna2018d}, and IGR~J16597-3704 \cite{Sanna2018a}.
The dipolar magnetic field $B$ has been constrained between $0.4\times 10^{8}$ G and $2.3\times 10^{9}$ G assuming accretion-torque equilibrium in combination with the source luminosity extrapolated from the latest outburst. The value of $B$ is consistent with the average magnetic field of known AMXPs (see e.g. \cite{Mukherjee2015,Degenaar2017}).
The coherent pulsation at $\sim 468$ Hz shows a clear Doppler modulation compatible with the NS being part of a binary system with an orbital period of $\sim$1.9 hours and a projected semi-major axis of $\sim8$ lt-ms. By combining the barycentric spin frequency values observed in the three outbursts, a constraint on the secular spin frequency derivative of $-8.3\times 10^{-13} < \dot{\nu} < 1.1\times 10^{-12}$ Hz s$^{-1}$ (3$\sigma$ confidence level) has been estimated, corresponding to an upper limit on the magnetic field strength of $B < 2.8\times 10^{9}$ G (assuming a NS radius R = 10 km and an angle $\alpha\simeq20^{\circ}$ between the magnetic hotspot and the rotational pole), consistent with the estimate obtained from the dynamics of the NS in accretion \cite{Sanna2018b}.
Finally, an updated value of the binary orbital period has been estimated by investigating the orbital period secular evolution through the ephemerides of the three observed outbursts, obtaining a more accurate orbital period $P_{orb}$ = 6765.84521(2) s and an orbital period derivative $\dot{P}_{orb} = (-2.5\pm 2.3)\times 10^{-12}$ s s$^{-1}$.
IGR~J17379-3747 was observed with the VLA in March 2018, at 4.5 and 7.5 GHz (each with a bandwidth of 1 GHz) simultaneously. A flat-spectrum radio counterpart with a flux density of $\simeq0.4$ mJy at 4.5 and 7.5 GHz was detected at a position consistent with the X-ray source (see e.g. \cite{vandenEijnden2018b}).
\subsection{IGR~J17591-2342}
The source {\rm IGR~J17591$-$2342\xspace}{} was discovered by {\em IBIS/ISGRI\xspace}{} on board the {\em INTEGRAL\xspace}{} satellite on August 10, 2018. Triggered by the discovery, {\em Swift\xspace}{}, {\em NuSTAR\xspace}{} and {\em NICER\xspace}{} observed the source, leading to the detection of coherent X-ray pulsations at $\sim$527 Hz \cite{Sanna2018c}. The NS spin frequency showed a clear drift compatible with a Doppler shift induced by the binary orbital motion with a period close to 8.8 hours, very similar to the intermittent AMXP SAX J1748.9-2021 \cite{Altamirano2008} and the eclipsing AMXP SWIFT J1749.4-2807 \cite{Markwardt2010b,Altamirano2011, Ferrigno2011}.
The mass function $f(m_2, m_1, i)\sim1.5 \times 10^{-2}$~M$_{\odot}$ of {\rm IGR~J17591$-$2342\xspace}{} implies a minimum companion mass of $m_2=0.37$~M$_{\odot}$ (for a 1.4~M$_{\odot}$ NS and binary inclination $i=90^{\circ}$). Since neither total eclipses nor dips have been observed in the X-ray light curves, the binary inclination can be limited to values lower than 60 degrees
\cite{Frank2002}, for which a lower limit $m_2 \gtrsim $0.42~M$_{\odot}$ can be inferred (for a 1.4~M$_{\odot}$ NS). The value increases up to $m_2 \gtrsim 0.52$~M$_{\odot}$ if we consider a 2~M$_{\odot}$ NS.
A comparison between the mass-radius relation of the donor star obtained from the Roche-lobe overflow condition (see e.g. \cite{Sanna2018c}) and numerically simulated mass-radius relations for zero-age main-sequence stars (ZAMS; \cite{Tout1996}), as well as isochrones for stars of 8 and 12\,Gyr \cite{Girardi2000}, suggests that the companion star is compatible with either a ZAMS star with mass $\sim1.1$ M$_\odot$ (corresponding to an inclination angle of $i\sim24$ degrees) or an old main-sequence star with mass 0.85$-$0.92 M$_\odot$ ($i$ ranging between $28$ and $30$ degrees) for a stellar age between 8 and 12\,Gyr. It should be noted, however, that the \textit{\textup{a priori}} probability of observing a binary system with inclination $i\leq 30$ degrees is of the order of 13\%. Nonetheless, the possibility of a bloated donor star should not be excluded, with the limitation that its thermal timescale ($GM^2_c/R_c L_c$) should be much longer than the evolutionary timescale ($M_c/\dot{M_c}$).
Phase-coherent timing analysis of the {\em NICER\xspace}{} observations performed between August 15 and August 24, revealed a spin-up frequency derivative of (2.0$\pm$1.6)$\times 10^{-13}$\,Hz/s.
This value is compatible with the maximum spin-up derivative estimated under the assumption of accretion of matter leaving the accretion disc with angular momentum equal to that at the co-rotation radius, and a mass accretion rate of $\dot{M}\simeq5.2\times10^{-10}$ M$_{\odot}$/yr (for an NS mass and radius of 1.4\,M$_{\odot}$ and 10 km), obtained for a broad-band (0.1-100 keV) absorbed flux of $\sim7\times10^{-10}$ erg/s/cm$^2$ and a source distance of 8.5 kpc (assumed near the Galactic centre, see, e.g. \cite{Kerr1986}).
However, increasing the analysis baseline of the outburst (between August 15 and October 15) seems to suggest a spin-down frequency derivative of (-7.4$\pm$0.3)$\times 10^{-14}$\,Hz/s (Sanna et al. 2020 in prep.).
The NS dipolar magnetic field can be roughly constrained in the range $1.4\times10^8<B<8\times 10^{9}$ G by assuming the condition of spin equilibrium for accreting X-ray pulsars. This value is consistent with the average magnetic field of known AMXPs \cite{Mukherjee2015}.
Finally, the broad-band energy spectrum (0.5-80 keV) of {\rm IGR~J17591$-$2342\xspace}{}, obtained by combining almost simultaneous {\em Swift\xspace}{}, {\em NuSTAR\xspace}{} and {\em INTEGRAL\xspace}{} observations, is well described by an absorbed soft black-body-like component ($kT\sim 0.8$ keV) with a relatively small emitting area, compatible with emission from the NS surface (or part of it), plus a Comptonised component ($\Gamma \sim 1.8$) with a seed photon temperature compatible with the soft thermal component. The spectral properties of the source are consistent with those of other AMXPs observed in the hard state \cite{Falanga2005a,Gierlinski2005,Papitto2009, Papitto2013a,Sanna2017d,Sanna2017b}. Marginal evidence of a weak emission line compatible with the iron K-$\alpha$ transition is present, in accordance with other AMXPs in the same accretion state \cite{Sanna2017d,Sanna2017b}.
\end{comment}
\bibliographystyle{spmpsci.bst}
\section{Introduction}
Ground-level ozone is one of the most important pollutants, and its adverse effects on human health have been intensively studied \cite{goldsmith1969experimental,saez2001comparing}. Further, its negative impact on plants and crops has been estimated to cause several billion dollars of annual economic loss to agriculture \cite{larsen1991air}. As a consequence, accurate forecasting of ground-level ozone concentration plays an important role in air quality management, environmental monitoring and control.
The concentration of tropospheric $O_3$ may have a profound interaction with other pollutants, such as nitrogen oxides ($NO_x$), through complex photochemical reactions \cite{barrero2006prediction}. Also, meteorological factors such as temperature, humidity, solar radiation, and wind speed \cite{barrero2006prediction} often have an impact on $O_3$ levels. Many environmental agencies rely on ozone concentration forecasting frameworks for decision making. With the advent of the big data age, machine learning and regression methods have been more and more applied to the accurate modeling and forecasting of ozone concentration. Generally speaking, within the ozone prediction problem, there are mainly two types of machine learning approaches: statistical modeling and non-statistical methodologies. Methods such as time-series inference (e.g., ARMA or ARIMA forecasting) \cite{robeson1990evaluation}, multiple linear regression \cite{barrero2006prediction,ghazali2010transformation,sousa2006prediction}, and ridge regression \cite{salazar2008development} belong to the former category, while the non-statistical methods include ANN \cite{gomez2006neural,abdul2002assessment} and other neural network types of methods \cite{dutot200724,coman2008hourly,prybutok2000comparison,wang2003prediction,yi1996neural}, support vector regression \cite{hajek2012ozone,lu2008ground} and others \cite{sokhi2006prediction}.
In the ozone prediction literature, many studies aim to predict the next day's maximum concentration \cite{dutot200724,wang2003prediction,yi1996neural,robeson1990evaluation}. On the other hand, the next day's maximum of the 8-hour mean concentration \cite{link3} also plays a significant role in environmental regulation and control in Canada. Therefore we consider both of these variables as the prediction targets in our research.
The aforementioned statistical and non-statistical methods in the ozone prediction literature are usually applied to a carefully chosen set of features. Quite often the number of variables in the model is less than ten. On the other hand, due to the renowned ``curse of dimensionality'' phenomenon \cite{greblicki2008nonparametric,buhlmann2011statistics} in the machine learning and artificial intelligence literature, it is impossible to conduct high-dimensional modeling without proper feature selection as a first step. However, the classical feature selection techniques such as AIC, BIC, forward selection, and backward selection all have drawbacks \cite{efron2016computer}. Since the least absolute shrinkage and selection operator (Lasso) was invented in the late 1990s \cite{tibshirani1996regression}, its $L_1$ design has allowed this approach to automatically eliminate redundant features and render a sparse model. The Lasso has since become a popular approach in modern-day high-dimensional statistics \cite{buhlmann2011statistics}. It has been applied in biometrics \cite{mikula2009effects}, power systems \cite{lv2012prediction,mo2020power}, energy \cite{lv2020very}, etc. As far as we know, it has not yet been applied to pollutant modeling.
The rest of the paper is organized as follows. Section \ref{sbsec:Data} discusses the data we use in this research. Section \ref{sbsec:Lasso} gives a description of the Lasso methodology and the algorithm used to solve it. Section \ref{sec:Results} shows the modeling results obtained by our method. The prediction performance of some competing methods, including the ARMA model, support vector regression, multiple linear regression, and ridge regression, is shown as a comparison to our method. Discussions and future work are presented in Section \ref{sec:Discussions}.
The contributions of this paper include:
\begin{enumerate}
\item We use sparse machine learning algorithms (the Lasso) to model and predict the next-day maximum ozone concentration, as well as the next-day maximum 8-hour-mean ozone concentration. The $L_1$ design enables automatic feature selection from hundreds of dimensions of candidate features. In our research we use a large number of features available from the Environment Canada official website. However, the same technique can also be applied if some of these features were not available to the user.
\item We compare our modeling methods with several other competing methods in the field. Namely, they are multiple linear regression (MLR), ridge regression, ARMA modeling, and support vector regression. Our Lasso shows superior prediction accuracy compared to the other approaches on the same set of features.
\end{enumerate}
\section{Materials and Methods}
\subsection{Data Collection and Pre-processing}\label{sbsec:Data}
The study site is the city of Windsor (N $42^\circ$$16'$$34''$ W $82^\circ$$57'$$19''$), Ontario, Canada; the pollutant concentration observations and the meteorological data from 2014 to 2017 were downloaded from the Ontario Ministry of the Environment, Conservation and Parks \cite{link1} and Environment Canada \cite{link2}. The map of the city of Windsor and the location of the air quality and meteorological monitoring station are illustrated in Fig. \ref{fig:Map}.
\begin{figure}
\centering
\includegraphics[width=85mm]{Map2.png}
\caption{The illustration of the map of Windsor and the location of the air quality and meteorological monitoring station (N $42^\circ$$16'$$34''$ W $82^\circ$$57'$$19''$)}\label{fig:Map}
\end{figure}
The candidate features we used in our modeling are shown in Table \ref{tab:Features}. In our modeling studies, we not only use all of them, but also include the interactive features among them. The detailed modeling will be shown in Section \ref{sec:Results}.
\begin{table}[width=.9\linewidth,cols=4,pos=h]
\caption{Air quality and meteorological features used in our modeling}\label{tab:Features}
\begin{tabular*}{\tblwidth}{@{} LL@{} }
\toprule
\normalsize{Model Output} & \scriptsize{Next-day's maximum $O_3$ (ppb)} \\
~& \scriptsize{Next-day's maximum 8-hour-mean $O_3$ (ppb)} \\
\midrule
\normalsize{Model Input} & \scriptsize{Present-day's hourly $O_3$ (ppb)} \\
~ & \scriptsize{Present-day's hourly $SO_2$ (ppb)} \\
~ & \scriptsize{Present-day's hourly $NO$ (ppb)} \\
~ & \scriptsize{Present-day's hourly $NO_2$ (ppb)} \\
~ & \scriptsize{Present-day's hourly $NO_X$ (ppb)} \\
~ & \scriptsize{Present-day's hourly $CO$ (ppb)} \\
~ & \scriptsize{Present-day's hourly $PM_{2.5}$ ($mg/m^3$)} \\
~ & \scriptsize{Present-day's max/min/mean $O_3$ (ppb)} \\
~ & \scriptsize{Present-day's max/min/mean $SO_2$ (ppb)} \\
~ & \scriptsize{Present-day's max/min/mean $NO$ (ppb)} \\
~ & \scriptsize{Present-day's max/min/mean $NO_2$ (ppb)} \\
~ & \scriptsize{Present-day's max/min/mean $NO_X$ (ppb)} \\
~ & \scriptsize{Present-day's max/min/mean $CO$ (ppb)} \\
~ & \scriptsize{Present-day's max/min/mean $PM_{2.5}$ ($mg/m^3$)} \\
~ & \scriptsize{Present-day's hourly temperature (${}^\circ C$)} \\
~ & \scriptsize{Present-day's hourly dew point (${}^\circ C$)} \\
~ & \scriptsize{Present-day's hourly rel humidity (\%)} \\
~ & \scriptsize{Present-day's hourly wind direction (deg)} \\
~ & \scriptsize{Present-day's hourly wind speed (km/h)} \\
~ & \scriptsize{Present-day's hourly visibility (km)} \\
~ & \scriptsize{Present-day's hourly atmospheric pressure (kPa)} \\
~ & \scriptsize{Present-day's max/min/mean temperature (${}^\circ C$)} \\
~ & \scriptsize{Present-day's max/min/mean dew point (${}^\circ C$)} \\
~ & \scriptsize{Present-day's max/min/mean rel humidity (\%)} \\
~ & \scriptsize{Present-day's max/min/mean wind direction (deg)} \\
~ & \scriptsize{Present-day's max/min/mean wind speed (km/h)} \\
~ & \scriptsize{Present-day's max/min/mean visibility (km)} \\
~ & \scriptsize{Present-day's max/min/mean atmospheric pressure (kPa)} \\
~ & \scriptsize{Next-day's hourly temperature (${}^\circ C$)} \\
~ & \scriptsize{Next-day's hourly dew point (${}^\circ C$)} \\
~ & \scriptsize{Next-day's hourly rel humidity (\%)} \\
~ & \scriptsize{Next-day's hourly wind direction (deg)} \\
~ & \scriptsize{Next-day's hourly wind speed (km/h)} \\
~ & \scriptsize{Next-day's hourly visibility (km)} \\
~ & \scriptsize{Next-day's hourly atmospheric pressure (kPa)} \\
~ & \scriptsize{Next-day's max/min/mean temperature (${}^\circ C$)} \\
~ & \scriptsize{Next-day's max/min/mean dew point (${}^\circ C$)} \\
~ & \scriptsize{Next-day's max/min/mean rel humidity (\%)} \\
~ & \scriptsize{Next-day's max/min/mean wind direction (deg)} \\
~ & \scriptsize{Next-day's max/min/mean wind speed (km/h)} \\
~ & \scriptsize{Next-day's max/min/mean visibility (km)} \\
~ & \scriptsize{Next-day's max/min/mean atmospheric pressure (kPa)} \\
\bottomrule
\end{tabular*}
\end{table}
\subsection{Machine Learning Methods for Sparse Modeling}\label{sbsec:Lasso}
In this section, we give a brief description on the machine learning algorithms we use in the ozone forecast problem. Generally speaking, in system modeling, the researchers are given data of the form
\[
D_n = \{(\bm{X}_1, Y_1), (\bm{X}_2, Y_2), \cdots, (\bm{X}_n, Y_n)\},
\]
where $\bm{X}_i \in \mathbb{R}^p$, $i\in\{1,\cdots,n\}$ is a $p$-dimensional vector denoting the input observations, and $Y_i \in \mathbb{R}^1$, $i\in\{1,\cdots,n\}$ is a 1-dimensional variable denoting the output response. The user is interested in learning the mapping $f: X \to Y$, so that by applying the model $f(\cdot)$ to future observations $\bm{X}_i^{[new]}$, the predicted response $\hat{f}(\bm{X}_i^{[new]})$ can be as close as possible to the underlying true response $Y_i^{[new]}$.
In our ground-level ozone concentration modeling problem, we try to build two models. The first model $f_1(\cdot)$ tries to predict the next-day maximum ozone concentration level; in other words, $Y_i$ corresponds to the next-day maximum $O_3$ concentration level. The input variable $\bm{X}_i$ corresponds to all the variables in Table \ref{tab:Features}. In the second model $f_2(\cdot)$, $Y_i$ corresponds to the next-day maximum 8-hour-mean concentration level, while $\bm{X}_i$ includes all the features in Table \ref{tab:Features}, together with the 8-hour-mean concentration values of $O_3$ between 0:00 and 4 pm of the present day. It is worth mentioning that, according to the definition of the ``8-hour-mean concentration'' \cite{link3}, the value at 4 pm already incorporates the hourly value at 11 pm of the same day.
Before introducing the sparse learning method used in our research, let us start with a classical regression method called ``multiple linear regression'' (MLR) and a more advanced technique called ridge regression. It is worth mentioning that these two techniques have been applied in the context of air pollutant modeling in \cite{barrero2006prediction,ghazali2010transformation,sousa2006prediction} and in \cite{salazar2008development}.
There are various ways to approach system modeling. If the model has a completely unknown structure due to, e.g., physical limitations, then it is always reasonable to start from a linear structure assumption
\begin{equation}
Y_i = \beta_0+\sum_{j=1}^p \beta_j X_{ij}+\varepsilon_i,
\label{eq:LrMd}
\end{equation}
where $\varepsilon_i$ is an innovation process (or noise process), and $\{\beta_0, \cdots, \beta_p\}$ are the coefficients that represent the weight of each feature. Identifying the linear model represented by (\ref{eq:LrMd}) is thus equivalent to identifying only $\{\beta_0, \cdots, \beta_p\}$.
Statistical researchers also often write the data form of (\ref{eq:LrMd}) in the following design matrix form
\begin{equation} \label{matrixeq}
\bm{Y}=\bm{X} \bm{\beta} +\bm{\varepsilon},
\end{equation}
where $\bm{Y}$, the design matrix $\bm{X}$, $\bm{\beta}$, and $\bm{\varepsilon}$ correspond to
\[
\bm{Y}=\begin{bmatrix} Y_1 \\ \vdots \\ Y_n \end{bmatrix}, \qquad
\bm{X}=\begin{bmatrix} 1 & X_{11} & \cdots & X_{1p} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & X_{n1} & \cdots & X_{np} \end{bmatrix},
\]
\[
\bm{\beta} = \begin{bmatrix} \beta_0, \beta_1, \cdots, \beta_p \end{bmatrix}^{\top}, \qquad
\bm{\varepsilon}=\begin{bmatrix} \varepsilon_1, \cdots, \varepsilon_n \end{bmatrix}^{\top},
\]
and $X_{ij}$ corresponds to the $j$-th coordinate of the $p$-dimensional observation $\bm{X}_i$, $i=1,\cdots,n$.
The most classical approach to solve the linear model in (\ref{eq:LrMd}) is through the minimization of the mean square error, and this approach is often called ``multiple linear regression'' (MLR)
\begin{equation}
\hat{\bm{\beta}}_{MLR}= \arg\min_{\bm{\beta}} \frac{1}{n} ||\bm{Y}-\bm{X}\bm{\beta}||_2^2,
\end{equation}
where $\frac{1}{n} ||\bm{Y}-\bm{X}\bm{\beta}||_2^2$ is equivalent to $\frac{1}{n} \sum_{i=1}^n (Y_i-\beta_0-\sum_{j=1}^p \beta_j X_{ij})^2 $. The solution of the MLR has the direct analytical form \cite{seber2012linear}
\begin{equation}
\hat{\bm{\beta}}_{MLR} = (\bm{X}^T \bm{X})^{-1} (\bm{X}^T \bm{Y}).
\label{eq:LrRg}
\end{equation}
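For illustration, a minimal sketch of the MLR solution (\ref{eq:LrRg}) in Python/NumPy could read as follows; the data here are synthetic placeholders rather than the Windsor dataset.
\begin{verbatim}
import numpy as np

# Synthetic placeholder data: n observations, p features
rng = np.random.default_rng(0)
n, p = 200, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # intercept column + features
beta_true = rng.normal(size=p + 1)
Y = X @ beta_true + 0.1 * rng.normal(size=n)

# MLR estimate: solve the normal equations (X^T X) beta = X^T Y
beta_mlr = np.linalg.solve(X.T @ X, X.T @ Y)
\end{verbatim}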
The multiple linear regression provides a simple and straightforward solution to the linear model. A similar principle, i.e., minimization of the mean square loss, has also been applied in many engineering areas such as signal processing \cite{pawlak2007nonparametric}, image recognition \cite{liao1996image}, etc. However, the MLR possesses some drawbacks that inherently make it less suitable as a modern statistical tool. These inherent deficiencies include
\begin{enumerate}
\item Numerical instability. When new observations come to be available, the estimated weights $\{\hat{\beta}_j, j=1,\cdots,p\}$ can change dramatically. Even worse, some weights can change from positive to negative, or vice versa. This phenomenon can create undesirable scenarios when the researcher wants to e.g., explain the positive or negative influence of a feature.
\item The matrix $\bm{X}^T \bm{X}$ may not be full rank. In this way there would be some problems in the matrix inversion process in the solution (\ref{eq:LrRg}).
\end{enumerate}
Targeting these drawbacks, a more modern approach called ``ridge regression'' was developed in the 1970s \cite{hoerl1970ridge,hoerl1970ridge2}. The original intuition was to add a penalty term to the weights, so that the numerical instability problem and the matrix inversion problem can be overcome. The ridge regression tries to find the solution to the following penalized criterion function
\begin{equation}
\hat{\bm{\beta}}_{ridge}= \arg\min_{\bm{\beta}} \Big( \frac{1}{n} ||\bm{Y}-\bm{X}\bm{\beta}||_2^2 +\lambda \sum_{j=1}^p \beta_j^2 \Big),
\label{eq:RidgeCr}
\end{equation}
where $\lambda$ is a regularization parameter that needs to be selected beforehand. This parameter controls the balance between the least square loss and the penalty on the weights. In practice, the user can always use some re-sampling method to select this parameter, e.g., selecting the value that minimizes the cross-validation error.
The ridge regression approach also has a closed-form solution, given by
\begin{equation}
\hat{\bm{\beta}}_{ridge} = (\bm{X}^{\top} \bm{X}+n\lambda \bm{I}_p)^{-1} (\bm{X}^{\top} \bm{Y}),
\label{eq:Ridge}
\end{equation}
where $\bm{I}_p$ denotes the $p \times p$ identity matrix.
The solution in (\ref{eq:Ridge}) ensures that ridge regression successfully overcomes the numerical instability drawback possessed by the MLR. As a matter of fact, in many applications, users observe significant improvement by using ridge regression compared with the MLR. Ridge regression has recently been more and more often applied in many fields of engineering and science \cite{jain1985ridge,jayasekara2006derivation}.
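A direct transcription of the ridge solution (\ref{eq:Ridge}) is equally short; the sketch below assumes a standardized, intercept-free design matrix and a user-chosen $\lambda$.
\begin{verbatim}
import numpy as np

def ridge_fit(X, Y, lam):
    """Ridge estimate (X^T X + n*lam*I)^{-1} X^T Y."""
    n, p = X.shape
    return np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T @ Y)
\end{verbatim}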
Following the same line as MLR and ridge regression, another technique was developed in the late 1990s \cite{tibshirani1996regression} and was named the ``least absolute shrinkage and selection operator'' (Lasso). Similar to ridge regression, the Lasso also aims to minimize the mean square loss together with a penalty term. However, the uniqueness of the Lasso lies in the fact that the penalty term is in the form of the $L_1$ norm rather than the $L_2$ norm as in the ridge regression case. Specifically, the Lasso approach finds the solution according to the following criterion
\begin{equation}
\hat{\bm{\beta}}_{Lasso}= \arg\min_{\bm{\beta}} \Big( \frac{1}{n} ||\bm{Y}-\bm{X}\bm{\beta}||_2^2 +\lambda \sum_{j=1}^p |\beta_j| \Big),
\label{eq:LassoCr}
\end{equation}
where, similar to the ridge regression case, $\lambda$ is the regularization parameter which should be specified before the predictive modeling.
According to the optimization theory \cite{boyd2004convex}, the ridge regression in (\ref{eq:RidgeCr}) and the Lasso regression in (\ref{eq:LassoCr}) can be alternatively written in the following equivalent primal forms,
\begin{equation}
\hat{\bm{\beta}}_{ridge,primal}= \arg\min_{\bm{\beta}, ||\bm{\beta}||_2 < s} \Big( \frac{1}{n} ||\bm{Y}-\bm{X}\bm{\beta}||_2^2 \Big),
\label{eq:RidgePrCr}
\end{equation}
\begin{equation}
\hat{\bm{\beta}}_{Lasso,primal}= \arg\min_{\bm{\beta}, ||\bm{\beta}||_1 < s} \Big( \frac{1}{n} ||\bm{Y}-\bm{X}\bm{\beta}||_2^2 \Big),
\label{eq:LassoPrCr}
\end{equation}
where $||\bm{\cdot}||_2$ and $||\bm{\cdot}||_1$ denote the $L_2$ norm and the $L_1$ norm, respectively. More specifically, $||\bm{\beta}||_2 = (\sum_{j=1}^p \beta_j^2)^{1/2}$ and $||\bm{\beta}||_1=\sum_{j=1}^p|\beta_j|$. It is worth mentioning that there is a one-to-one mapping between the parameter $s$ in the primal form and the parameter $\lambda$ in the previous dual form.
Equations (\ref{eq:RidgePrCr}) and (\ref{eq:LassoPrCr}) highlight the difference between the Lasso and ridge regression, which is further illustrated in Fig. \ref{fig:L2L1Compare}.
\begin{figure}[t]
\centering
\includegraphics[width=70mm]{L2L1Compare.pdf}
\caption{The comparison between ridge regression and the Lasso.}\label{fig:L2L1Compare}
\end{figure}
Fig. \ref{fig:L2L1Compare} shows a 2-dimensional regression case as an example. The elliptical contours correspond to the same level of mean square error attained by different pairs of coefficients. The shaded areas correspond to the constraints on the coefficients. The $L_2$ penalty $||\bm{\beta}||_2=(\beta_1^2+\beta_2^2)^{1/2}$ in this case corresponds to the circular (disk-shaped) region, whereas the $L_1$ penalty $||\bm{\beta}||_1=|\beta_1|+|\beta_2|$ corresponds to the diamond-shaped region (a square rotated by $45^\circ$). The pre-specified regularization parameter $\lambda$ (or equivalently $s$) controls how large the shaded zone is. The point where the elliptical contour touches the shaded area corresponds to the solution of the regression given the pre-specified parameter $\lambda$ (or $s$). From Fig. \ref{fig:L2L1Compare}, one can see that there is a chance on the right panel that the solution ``hits the corner'', meaning that one feature is assigned zero weight and is completely eliminated from the regression model in the Lasso set-up. For large dimensionality, this corresponds to a considerable proportion of the total candidate features being eliminated from the model. This can hardly happen in the ridge regression case, in which the shaded area is circular rather than diamond-shaped, as the left panel of Fig. \ref{fig:L2L1Compare} illustrates.
\begin{algorithm}[h]
\caption{Coordinate descent algorithm for computing the Lasso}
\label{alg:Shooting}
\begin{algorithmic}[1]
\State Let $\bm{\beta}^{[0]} \in \mathbb{R}^{p}$ be the initial estimator. Set $m=0$.
\Repeat
\State $m=m+1$
\For{$j=1,\cdots,p$}
\State $\beta_j^{[m]}=\frac{\operatorname{sign}(Z_j)(|Z_j|-\frac{\lambda}{2})_+}{\hat{\Sigma}_{jj}}$,
\State \multiline{%
where $Z_j=\bm{X}_j^{\top}(\bm{Y}-\bm{X}\bm{\beta}_{-j}^{[m]})/n$, $\bm{\beta}_{-j}^{[m]}$ is the same as $\bm{\beta}^{[m]}$ except that the $j$-th component is set to zero, $\hat{\Sigma}=n^{-1}\bm{X}^{\top} \bm{X}$, and $\hat{\Sigma}_{jj}$ is the $j$-th diagonal component of $\hat{\Sigma}$}
\EndFor
\Until numerical convergence
\end{algorithmic}
\end{algorithm}
In Algorithm \ref{alg:Shooting}, $(\cdot)_+ = \max(\cdot,0)$. The coordinate descent optimization deals with $\beta_1,\cdots, \beta_p$, whereas $\beta_0$ can be estimated by the empirical mean of the response variable.
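A minimal Python implementation of Algorithm \ref{alg:Shooting} could look as follows; this is a sketch, assuming the columns of $\bm{X}$ are standardized and $\bm{Y}$ is centred, so that $\beta_0$ is simply the mean of the response.
\begin{verbatim}
import numpy as np

def lasso_shooting(X, Y, lam, max_iter=500, tol=1e-8):
    """Coordinate descent for (1/n)||Y - X b||_2^2 + lam * ||b||_1."""
    n, p = X.shape
    Sigma = X.T @ X / n            # empirical covariance, Sigma_hat in Algorithm 1
    beta = np.zeros(p)             # beta^[0]
    for _ in range(max_iter):
        beta_old = beta.copy()
        for j in range(p):
            # residual with the j-th coefficient removed: Y - X beta_{-j}
            r_j = Y - X @ beta + X[:, j] * beta[j]
            Z_j = X[:, j] @ r_j / n
            beta[j] = np.sign(Z_j) * max(abs(Z_j) - lam / 2.0, 0.0) / Sigma[j, j]
        if np.max(np.abs(beta - beta_old)) < tol:   # numerical convergence
            break
    return beta
\end{verbatim}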
It is worth noting that the $L_1$ penalty on the coefficients in the Lasso only makes sense if the different dimensions of the features have the same ``strength''. Therefore it is common practice to standardize the data before applying the Lasso (as well as in the ridge regression case). Namely, each covariate should have its mean subtracted and then be divided by its standard deviation. In this way, all dimensions of the data after this pre-processing procedure have zero mean and unit variance.
The specification of the regularization parameter $\lambda$ plays a critical role. A large $\lambda$ puts more penalty on the coefficients and shrinks more weights to zero. In practice, one can select the $\lambda$ that minimizes the cross-validation error. Another approach is to use the value of $\lambda$ that lies one standard deviation away from the minimum cross-validation error. Both of these approaches are offered as options in many software packages, e.g., Matlab. Comparing the two options, the first one leads to a slightly smaller prediction error, while the second usually renders a sparser model, i.e., a model composed of a noticeably smaller number of features. Nonetheless, whether a feature is useful in the model can only be examined by statistical testing. Therefore, the difference these two approaches make in the final set of features does not mean that a group of features is useful according to the first approach but not useful according to the second. In our research, we adopt the first approach.
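As a sketch of how the standardization and the cross-validated choice of $\lambda$ could be carried out in practice, the snippet below uses scikit-learn; this is an assumption on tooling (any Lasso solver with cross-validation would do), and the data are synthetic placeholders.
\begin{verbatim}
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV

# Placeholder training data standing in for the standardized features
rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 50))
Y_train = X_train[:, 0] - 2 * X_train[:, 3] + rng.normal(size=500)

X_std = StandardScaler().fit_transform(X_train)  # zero mean, unit variance per covariate

# 5-fold cross-validation over a grid of regularization strengths.
# Note: scikit-learn minimizes (1/(2n))||Y - Xb||^2 + alpha*||b||_1,
# so its alpha corresponds to lambda/2 in the Lasso criterion of this paper.
model = LassoCV(cv=5).fit(X_std, Y_train)
print("selected regularization:", model.alpha_)
print("nonzero weights:", int(np.sum(model.coef_ != 0)))
\end{verbatim}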
\section{Results}\label{sec:Results}
\subsection{Forecasting the next-day's maximum $O_3$}\label{sbsec:Max}
First of all, we examine our approach by modeling the next-day's maximum $O_3$. After dealing with the missing data, we use the first three years' data (2014-2016) as the training data set, and use the year 2017's data as the testing dataset.
When we perform the Lasso described in Section \ref{sbsec:Lasso}, we use all the features in Table \ref{tab:Features}. Due to the design of the linear Lasso, and unlike the multiple linear regression (MLR), adding a feature like $X_i-X_j$ makes a difference compared with merely having the features $X_i$ and $X_j$; therefore we also include the terms representing the difference between the future day's meteorological variables and today's values at the same hour (as well as the next day's max/min/mean minus the current day's max/min/mean). Since we have 7 different pollutants available, there are $7\times 24 +7\times 3=189$ non-meteorological features. Besides, when dealing with the wind direction among the meteorological variables, we include not only its value in degrees, but also its cosine and sine values. Since we have 7 different kinds of meteorological variables available, we have $(7+2)\times (24+3) \times 3 = 729$ meteorological features. Here $7+2$ corresponds to the original 7 features plus the cosine and sine of the wind direction, $24+3$ corresponds to the 24 hourly values plus the max/min/mean, and the last multiplier $3$ indicates the values of the current day, the future day, and their differences. In total, there are $189+729=918$ features in our original model.
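A sketch of how such derived features could be assembled is given below; the column names are hypothetical placeholders, and the actual pre-processing pipeline used for the Windsor data is not reproduced here.
\begin{verbatim}
import numpy as np
import pandas as pd

# Hypothetical hourly records with placeholder column names
df = pd.DataFrame({
    "wind_dir_deg_today_h12": [270.0, 90.0, 180.0],
    "wind_dir_deg_next_h12":  [300.0, 45.0, 200.0],
    "temp_today_h12":         [21.0, 18.5, 25.0],
    "temp_next_h12":          [23.0, 17.0, 26.5],
})

# Encode wind direction both as cosine and sine of the angle
for col in ["wind_dir_deg_today_h12", "wind_dir_deg_next_h12"]:
    rad = np.deg2rad(df[col])
    df[col + "_cos"] = np.cos(rad)
    df[col + "_sin"] = np.sin(rad)

# Difference feature: next-day value minus the present-day value at the same hour
df["temp_diff_h12"] = df["temp_next_h12"] - df["temp_today_h12"]
\end{verbatim}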
Also, rather than using the future day's maximum $O_3$ concentration as the model output, we use the future day's maximum minus the current day's maximum as the target. As discussed in Section \ref{sbsec:Lasso}, all the variables are standardized before the modeling, and the ``shooting algorithm'' of Algorithm \ref{alg:Shooting} is used to solve the Lasso. In order to select the regularization parameter $\lambda$ in (\ref{eq:LassoCr}), we use 5-fold cross-validation and select the value that minimizes the cross-validation error. In this way, $\lambda$ is chosen as $0.0121$.
After the model is fitted by the Lasso, we find that among the $918$ original features, only $120$ correspond to nonzero weights in the final model. In other words, 798 of the original 918 features have been eliminated from the model. We then use the testing dataset to evaluate the prediction accuracy, and we find the root mean square error (RMSE) to be 6.12 ppb and the mean absolute error (MAE) to be 4.92 ppb.
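The two error measures are computed in the usual way; a small helper is sketched below, with a hypothetical prediction step shown only as a comment since it depends on names not defined here.
\begin{verbatim}
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# Hypothetical usage on the 2017 test set: the model predicts the day-to-day
# change, so the present-day maximum is added back before scoring.
# y_pred = today_max_o3_test + lasso_model.predict(X_test_std)
# print(rmse(y_test, y_pred), mae(y_test, y_pred))
\end{verbatim}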
Beyond the sparse modeling of the aforementioned 918 linear features solved by the Lasso, we can also include the interactive features formed by the products of those 918 features. In this way, the modeling is conducted on a $918+918\times917/2+918= 422739$-dimensional feature space. This space contains the original variables (918 of them), their second-order polynomial terms (918 of them), and pairwise interaction terms such as wind speed multiplied by the cosine of wind direction ($918\times917/2$ of them). The regularization parameter $\lambda$ found by 5-fold cross-validation is 0.0295, and only 193 of the 422739 candidate features remain in the final model. After applying the model to the testing dataset, we find RMSE $=5.63$ ppb and MAE $=4.42$ ppb.
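The size of this expanded feature space can be verified with the short sketch below; the scikit-learn call is one possible way to build such an expansion on a small synthetic matrix and is not necessarily the implementation used in our analysis.
\begin{verbatim}
# Sketch: counting and building a second-order (interactive) expansion.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

p = 918
n_candidates = p + p + p * (p - 1) // 2           # linear + squares + pairs
print(n_candidates)                               # 422739

X = np.random.default_rng(1).normal(size=(10, 5)) # small synthetic example
X_poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
print(X_poly.shape)                               # (10, 5 + 5 + 5*4/2) = (10, 20)
\end{verbatim}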
It is worth mentioning that if we simply use the current day's maximum $O_3$ concentration to approximate the next day's maximum $O_3$ concentration, then RMSE $=9.21$ ppb and MAE $=6.81$ ppb.
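This persistence baseline can be sketched as follows, assuming a one-dimensional array of daily maximum concentrations in ppb; the function name is ours.
\begin{verbatim}
# Sketch: persistence baseline (predict tomorrow's maximum by today's).
import numpy as np

def persistence_errors(daily_max):
    pred = daily_max[:-1]                         # today's maximum
    true = daily_max[1:]                          # tomorrow's maximum
    rmse = np.sqrt(np.mean((true - pred) ** 2))
    mae = np.mean(np.abs(true - pred))
    return rmse, mae
\end{verbatim}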
In order to compare our modeling approach with other machine learning methods, we also fit the multiple linear regression (MLR) \cite{barrero2006prediction,ghazali2010transformation,sousa2006prediction} and the ridge regression \cite{salazar2008development}, which were also described in Section \ref{sbsec:Lasso}. Both methods are conducted on the same 918-feature space. Apart from that, we compare with time-series modeling \cite{robeson1990evaluation}, in which we treat the daily maximum value as an ARMA process, and with the Support Vector Regression (SVM-regression) approach \cite{hajek2012ozone,lu2008ground}. The prediction accuracies of these techniques are shown in Table \ref{tab:ResultMax}.
\begin{table}[width=.9\linewidth,cols=4,pos=h]
\caption{The prediction of daily maximum $O_3$ concentration using various statistical and machine learning methods. The RMSE and MAE values are in the unit of ppb. The last column of this table shows the final number of features in the model, as well as the number of candidate features originally examined by the model. }\label{tab:ResultMax}
\begin{tabular*}{\tblwidth}{@{} LLLL@{} }
\toprule
~ & RMSE & MAE & \#Features\\
\midrule
Lasso (linear) & 6.12 & 4.92 & 105/ 918 \\
\midrule
Lasso (polynomial)& 5.63 & 4.42 & 193/ 422739 \\
\midrule
ridge regression & 8.15 & 6.52 & 918/ 918 \\
\midrule
ARMA (time series)& 9.26 & 6.86 & n/a \\
\midrule
MLR & 11.52& 8.39 & 918/ 918 \\
\midrule
SVM regression & 9.32 & 7.53 & 918/ 918 \\
\midrule
\scriptsize{Use previous day's value} & 9.21 & 6.81 & n/a \\
\bottomrule
\end{tabular*}
\end{table}
From Table \ref{tab:ResultMax}, we can see that our Lasso approach on the original 918 candidate features already outperforms the other competing methods; moreover, conducting the Lasso on the interactive features (polynomial model) yields even stronger prediction accuracy.
The model weights $\beta_j$ corresponding to the linear Lasso are shown in Fig. \ref{fig:MaxWeights} (a). Since the features have been standardized, the strength of each feature can be compared directly, i.e., the feature corresponding to the largest weight magnitude is the most important one in the model. From Fig. \ref{fig:MaxWeights} (a), we can clearly see that the majority of features have been eliminated. On the other hand, the weights corresponding to the ridge regression are shown in Fig. \ref{fig:MaxWeights} (b). The comparison between these two graphs clearly demonstrates how the difference between the $L_1$ and $L_2$ designs leads to the automatic feature selection property inherent to the Lasso approach, which explains the superiority of the Lasso's prediction accuracy.
\begin{figure}
\centering
\subfloat[]{\includegraphics[width=85mm]{MaxLassoWeights.png}}\\
\subfloat[]{\includegraphics[width=85mm]{MaxRidgeWeights.png}}
\caption{The weights in the final forecast model for the next-day's maximum $O_3$ obtained through
(a) Lasso method (b) ridge regression.} \label{fig:MaxWeights}
\end{figure}
In accordance with Fig. \ref{fig:MaxWeights} (a), the ten (10) most important features and their strengths are listed in Table \ref{tab:10FMax}. It is worth mentioning that these features have already been standardized in the pre-processing.
\begin{table}[width=.9\linewidth,cols=4,pos=h]
\caption{The first 10 dominant features in the linear model solved by Lasso to predict the next-day's maximum $O_3$ concentration.}\label{tab:10FMax}
\begin{tabular*}{\tblwidth}{@{} LL@{} }
\toprule
Weight & Feature \\
\midrule
0.1971 & \scriptsize{next-day's maximum temperature} \\
\midrule
0.1508 & \scriptsize{current-day's $O_3$ concentration at 11 pm} \\
\midrule
-0.1502 & \scriptsize{next-day's minimum relative humidity} \\
\midrule
0.1453 & \scriptsize{current-day's maximum $O_3$ concentration} \\
\midrule
0.1355 & \scriptsize{current-day's $O_3$ concentration at 6 pm} \\
\midrule
0.0888 & \scriptsize{next-day's temperature at 6 am} \\
\midrule
-0.0780 & \scriptsize{next-day's relative humidity at 3 pm} \\
\midrule
-0.0766 & \scriptsize{current-day's $O_3$ concentration at 5 pm} \\
\midrule
0.0617 & \scriptsize{current-day's $NO_2$ concentration at 11 pm} \\
\midrule
-0.0601 & \scriptsize{current-day's cosine of wind direction at 8 am} \\
\midrule
\vdots & \vdots \\
\bottomrule
\end{tabular*}
\end{table}
\begin{figure}
\centering
\subfloat[]{\includegraphics[width=85mm]{MaxPredict1C.png}}\\
\subfloat[]{\includegraphics[width=85mm]{MaxPredict2C.png}}\\
\subfloat[]{\includegraphics[width=85mm]{MaxPredict3C.png}}\\
\subfloat[]{\includegraphics[width=85mm]{MaxPredict4C.png}}
\caption{The prediction of the next-day's maximum $O_3$ concentration using the polynomial model solved by the Lasso. (a) The 1st trimester, (b) the 2nd trimester, (c) the 3rd trimester, (d) the 4th trimester.} \label{fig:MaxPredict}
\end{figure}
We also plot the actual values of the daily maximum $O_3$ concentration in the test dataset together with the predicted values using our modeling with the linear as well as the interactive features (the modeling using the 422739 candidate features). The result is shown in Fig. \ref{fig:MaxPredict}.
\begin{figure}
\centering
\subfloat[]{\includegraphics[width=43mm]{MaxOPyearC.png}}\\
\subfloat[]{\includegraphics[width=43mm]{MaxOP1C.png}} ~\
\subfloat[]{\includegraphics[width=43mm]{MaxOP2C.png}}\\
\subfloat[]{\includegraphics[width=43mm]{MaxOP3C.png}} ~\
\subfloat[]{\includegraphics[width=43mm]{MaxOP4C.png}}
\caption{Scatterplots of forecast vs observed next-day maximum ozone concentrations for the testing set. (a) The whole year, (b) the 1st trimester, (c) the 2nd trimester, (d) the 3rd trimester, (e) the 4th trimester. Note that the solid line represents the perfect prediction while the dashed line is an ordinary least square fit in each graph.} \label{fig:MaxScatter}
\end{figure}
If we examine the scatterplot of the predicted concentration values vs the true values, we obtain Fig. \ref{fig:MaxScatter}. We plot not only the whole year but also each trimester. From Fig. \ref{fig:MaxScatter}, we find that our model predicts best for the second trimester among the four.
\subsection{Forecasting the next-day's maximum of 8-hour-mean $O_3$}\label{sbsec:8HourMeanMax}
The maximum of the 8-hour-mean value plays an important role in environmental monitoring and control \cite{link3}. By definition, the mathematical average of eight consecutive hours' ozone concentration values is recorded as the ``8-hour-mean'' value at the first of those hours. The daily maximum of this 8-hour-mean concentration is not necessarily the same as the daily maximum of the hourly concentration. Similar to Section \ref{sbsec:Max}, three whole years' data (2014-2016) are used as the training dataset, and the year 2017's data are used as the testing dataset.
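A minimal sketch of this computation for a single day's 24 hourly values (assuming no missing hours; names are ours) is given below; the daily maximum is taken over the hours for which a full 8-hour window exists within the day, i.e., 0 am through 4 pm.
\begin{verbatim}
# Sketch: daily maximum of the 8-hour-mean ozone concentration.
import numpy as np

def daily_max_8h_mean(hourly):                    # hourly: array of 24 values
    means = [np.mean(hourly[h:h + 8]) for h in range(17)]   # h = 0 .. 16
    return np.max(means)
\end{verbatim}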
When the Lasso is performed for the linear model, the same 918 features described in Section \ref{sbsec:Max} are used. In addition, we use the 8-hour-mean values of the $O_3$ concentration from 0 am to 4 pm of the current day. It is worth mentioning that, within the current day, we only have the 8-hour-mean values up to 4 pm, the last of which already incorporates the hourly value at 11 pm. Together with the maximum, mean, and minimum of those seventeen (17) 8-hour-mean values, the total number of candidate features in the linear model is thus 938. A 5-fold cross-validation is applied to select the regularization parameter $\lambda$, and we use $\lambda=0.0118$. By using the Lasso, the number of features in the final model is reduced to 113. In Fig. \ref{fig:Max8HMWeights}, the weights selected by the Lasso are compared to the weights selected by ridge regression, and Table \ref{tab:ResultMax8HM} shows the prediction performance of the various techniques.
The second row of Table \ref{tab:ResultMax8HM} shows the performance of the Lasso with the interactive features (polynomial model). As in Section \ref{sbsec:Max}, the 938 linear features are expanded into $938+938 \times 937/2+938=441329$ features.
In Table \ref{tab:ResultMax8HM}, our proposed modeling approach is compared with the other competing methods mentioned in Section \ref{sbsec:Max}. Table \ref{tab:ResultMax8HM} again confirms that the two Lasso approaches (linear model and polynomial model) outperform the others.
\begin{table}[width=.9\linewidth,cols=4,pos=h]
\caption{The prediction of daily maximum 8-hour-mean $O_3$ concentration using various statistical and machine learning methods. The RMSE and MAE values are in the unit of ppb. The last column of this table shows the final number of features in the model, as well as the number of candidate features originally examined by the model.}\label{tab:ResultMax8HM}
\begin{tabular*}{\tblwidth}{@{} LLLL@{} }
\toprule
~ & RMSE & MAE & \#Features\\
\midrule
Lasso (linear) & 6.20 & 4.85 & 113/ 938 \\
\midrule
Lasso (polynomial)& 5.68 & 4.52 & 160/ 441329 \\
\midrule
ridge regression & 7.87 & 6.18 & 938/ 938 \\
\midrule
ARMA (time series)& 10.51 & 7.65 & n/a \\
\midrule
MLR & 11.14& 8.02 & 938/ 938 \\
\midrule
SVM regression & 9.06 & 7.16 & 938/ 938 \\
\midrule
\scriptsize{Use previous day's value} & 8.98 & 6.52 & n/a \\
\bottomrule
\end{tabular*}
\end{table}
\begin{figure}
\centering
\subfloat[]{\includegraphics[width=85mm]{Max8HMLassoWeights.png}}\\
\subfloat[]{\includegraphics[width=85mm]{Max8HMRidgeWeights.png}}
\caption{The weights in the final forecast model for the next-day's maximum 8-hour-mean $O_3$ obtained through
(a) Lasso method, (b) ridge regression.} \label{fig:Max8HMWeights}
\end{figure}
The comparison between the weights selected by the Lasso and by ridge regression in the linear set-up is shown in Fig. \ref{fig:Max8HMWeights}. Again, this shows that the nature of the $L_1$ design allows the automatic feature selection, which leads to better prediction accuracy. The ten (10) most prominent features in the resulting Lasso linear model are shown in Table \ref{tab:10FMax8HM}. Again, these features have already been standardized in the pre-processing.
\begin{table}[width=.9\linewidth,cols=4,pos=h]
\caption{The first 10 dominant features in the linear model solved by Lasso to predict the next-day's maximum 8-hour-mean $O_3$ concentration.}\label{tab:10FMax8HM}
\begin{tabular*}{\tblwidth}{@{} LL@{} }
\toprule
Weight & Feature \\
\midrule
0.2395 & \scriptsize{next-day's temperature at 6 am} \\
\midrule
0.1546 & \scriptsize{current-day's $O_3$ concentration at 6 pm} \\
\midrule
0.1429 & \scriptsize{current-day's $O_3$ concentration at 11 pm} \\
\midrule
-0.1261 & \scriptsize{next-day's minimum relative humidity} \\
\midrule
-0.1011 & \scriptsize{next-day's relative humidity at 3 pm} \\
\midrule
0.0835 & \scriptsize{current-day's maximum 8-hour-mean $O_3$ concentration} \\
\midrule
0.0724 & \scriptsize{current-day's $O_3$ concentration at 4 pm} \\
\midrule
-0.0670 & \scriptsize{next-day's atmospheric pressure at 7 am} \\
\midrule
0.0592 & \scriptsize{current-day's $NO_2$ concentration at 11 pm} \\
\midrule
-0.0574 & \scriptsize{next-day's minimum visibility} \\
\midrule
\vdots & \vdots \\
\bottomrule
\end{tabular*}
\end{table}
It is worth mentioning that the prediction accuracy for the daily maximum 8-hour-mean $O_3$ concentration is slightly worse than that for the daily maximum concentration in Section \ref{sbsec:Max}. The reason could be that the next-day's maximum 8-hour-mean $O_3$ concentration can sometimes occur after 4 pm, in which case it includes information from the day after the next day. On the other hand, comparing Table \ref{tab:10FMax} and Table \ref{tab:10FMax8HM}, we can see that the same seven (7) features are present in both tables. (We consider the maximum 8-hour-mean $O_3$ concentration in Table \ref{tab:10FMax8HM} to be the counterpart of the maximum $O_3$ concentration in Table \ref{tab:10FMax}.) However, the order and weights of these 7 identical features differ, and the most dominant feature in the two tables is different. The reason is probably that the information in many important pollutant/meteorological variables is partially shared among them in a profoundly interactive way. Among the various pollutants, the concentration of $NO_2$ appears to contribute greatly to predicting the $O_3$ concentration accurately. Among the meteorological variables, the temperature and the relative humidity play a significant role. As for some other pollutants and meteorological variables, their influence seems less significant, but eliminating them can only be justified through rigorous statistical tests, e.g., lack-of-fit tests, which are beyond the scope of this paper.
For the performance of Lasso polynomial model (interactive features), the true daily maximum 8-hour-mean $O_3$ concentration values in the testing set as well as the predicted values are shown in Fig. \ref{fig:Max8HMPredict}.
\begin{figure}
\centering
\subfloat[]{\includegraphics[width=85mm]{Max8HMPredict1C.png}}\\
\subfloat[]{\includegraphics[width=85mm]{Max8HMPredict2C.png}}\\
\subfloat[]{\includegraphics[width=85mm]{Max8HMPredict3C.png}}\\
\subfloat[]{\includegraphics[width=85mm]{Max8HMPredict4C.png}}
\caption{The prediction of the next-day's maximum 8-hour-mean $O_3$ concentration using the polynomial model solved by the Lasso. (a) The 1st trimester, (b) the 2nd trimester, (c) the 3rd trimester, (d) the 4th trimester.} \label{fig:Max8HMPredict}
\end{figure}
\begin{figure}
\centering
\subfloat[]{\includegraphics[width=43mm]{Max8HOPyearC.png}}\\
\subfloat[]{\includegraphics[width=43mm]{Max8HOP1C.png}} ~\
\subfloat[]{\includegraphics[width=43mm]{Max8HOP2C.png}}\\
\subfloat[]{\includegraphics[width=43mm]{Max8HOP3C.png}} ~\
\subfloat[]{\includegraphics[width=43mm]{Max8HOP4C.png}}
\caption{Scatterplots of forecast vs observed next-day maximum 8-hour-mean ozone concentrations for the testing set. (a) The whole year, (b) the 1st trimester, (c) the 2nd trimester, (d) the 3rd trimester, (e) the 4th trimester. Note that the solid line represents the perfect prediction while the dashed line is an ordinary least square fit in each graph.} \label{fig:Max8HScatter}
\end{figure}
If we examine the scatterplot of the predicted concentration values vs the true values for the maximum 8-hour-mean concentration, we obtain Fig. \ref{fig:Max8HScatter}. We plot not only the whole year but also each trimester. Similar to the daily maximum concentration forecast, we find that our model predicts best for the second trimester among the four.
\section{Discussions and Conclusion}\label{sec:Discussions}
In this paper, the Lasso approach is used to predict the $O_3$ concentration in terms of the next-day's maximum value as well as the next-day's maximum 8-hour-mean value. The results show that this approach outperforms several other competing methods recently applied in the field.
It is worth mentioning that rather than directly modeling the next-day's maximum concentration (or the next-day's maximum 8-hour-mean value), we have also tried targeting the difference between the next-day's and current-day's values, and we achieve a similar level of prediction accuracy. Therefore, it is conjectured that if the Lasso is applied in cities where the $O_3$ concentrations are usually significantly larger than the levels in Windsor, Canada, it will likely yield RMSE (or MAE) values similar to the levels achieved in this paper.
\bibliographystyle{cas-model2-names}
\section*{Acknowledgements}
We thank Nicole Lloyd-Ronning at The University of New Mexico and Poonam Chandra at Tata Institute of Fundamental Research for kindly making their observational GRB data available to us. This work would not have been accomplished without the vast time and effort spent by many scientists and engineers who designed, built and launched the gamma-ray and radio observatories and were involved in the collection and analysis of GRB data.{\\}
\bibliographystyle{mnras}
\section{An alternative interpretation for the existence of radio loud and quiet LGRBs}
\label{sec:methods}
\begin{figure*}
\centering
\makebox[\textwidth]
{
\begin{tabular}{ccc}
\subfloat[]{\includegraphics[width=0.31\textwidth]{./fig/lloyd2019/BootHistAvgLog10Zone.pdf} \label{fig:bootL19a}} &
\subfloat[]{\includegraphics[width=0.31\textwidth]{./fig/lloyd2019/BootHistAvgLog10Eiso.pdf} \label{fig:bootL19b}} &
\subfloat[]{\includegraphics[width=0.31\textwidth]{./fig/lloyd2019/BootHistAvgLog10Durz.pdf} \label{fig:bootL19c}} \\
\subfloat[]{\includegraphics[width=0.31\textwidth]{./fig/lloyd2019/BootHistStdLog10Zone.pdf} \label{fig:bootL19d}} &
\subfloat[]{\includegraphics[width=0.31\textwidth]{./fig/lloyd2019/BootHistStdLog10Eiso.pdf} \label{fig:bootL19e}} &
\subfloat[]{\includegraphics[width=0.31\textwidth]{./fig/lloyd2019/BootHistStdLog10Durz.pdf} \label{fig:bootL19f}} \\
\subfloat[]{\includegraphics[width=0.31\textwidth]{./fig/lloyd2019/BootHistCorrEisoDurz.pdf} \label{fig:bootL19g}} &
\subfloat[]{\includegraphics[width=0.31\textwidth]{./fig/lloyd2019/BootHistCorrZoneEiso.pdf} \label{fig:bootL19h}} &
\subfloat[]{\includegraphics[width=0.31\textwidth]{./fig/lloyd2019/BootHistCorrZoneDurz.pdf} \label{fig:bootL19i}} \\
\end{tabular}
}
\caption{
The bootstrapping simulation results for some of the population properties of the two classes of radio-loud and radio-quiet LGRBs. Plots (a), (b), and (c) display, respectively, the bootstrapping distributions of the averages of the $z+1$, $E_{\operatorname{iso}}$, and $T_{\operatorname{90,z}}$ distributions of the two radio classes. In the same order, plots (d), (e), and (f) display the corresponding bootstrapping distributions of the standard deviations of the three LGRB property distributions for the two classes. Plots (g), (h), and (i) display the bootstrapping distributions of the Spearman's correlation strengths between the three LGRB properties. The inset values represent the means and the $1\sigma$ dispersions of the bootstrap distributions in each plot, with $\pi(\cdot)$ denoting the probability and $\rho$ denoting the Spearman's correlation strength.
\label{fig:bootL19}
}
\end{figure*}
To better quantify the differences between the two hypothesized radio classes, we first apply the statistical bootstrapping \citep[e.g.,][]{efron1994introduction} technique to the radio-loud and radio-quiet samples of \citetalias{lloyd2019comparison} to form confidence bounds on the means and standard deviations of the two populations, as well as the correlation strengths between the three quantities: redshift ($z$), the total isotropic gamma-ray emission ($E_{\operatorname{iso}}$), and the intrinsic duration ($T_{\operatorname{90,z}}$). In brief, bootstrapping is a statistical resampling method (with replacement) that, in the absence of multiple independent sets of observational data, can provide confidence bounds on the various statistical properties of a dataset.
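A minimal sketch of such a bootstrap, written in Python with illustrative synthetic inputs standing in for one radio class' observed $\log_{10}E_{\operatorname{iso}}$ and $\log_{10}T_{\operatorname{90,z}}$ values, is given below; the same scheme applies to $z+1$ and to the other pairwise correlations.
\begin{verbatim}
# Sketch: bootstrap confidence bounds on the mean, standard deviation, and
# Spearman correlation of a sample (synthetic placeholders, not real data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
log_eiso = rng.normal(53.0, 0.8, size=78)
log_t90z = 0.5 * (log_eiso - 53.0) + rng.normal(1.4, 0.4, size=78)

n_boot, n = 10000, len(log_eiso)
means, stds, rhos = [], [], []
for _ in range(n_boot):
    idx = rng.integers(0, n, size=n)              # resampling with replacement
    means.append(np.mean(log_eiso[idx]))
    stds.append(np.std(log_eiso[idx], ddof=1))
    rho, _ = spearmanr(log_eiso[idx], log_t90z[idx])
    rhos.append(rho)

# bootstrap estimates and 1-sigma dispersions
print(np.mean(means), np.std(means, ddof=1))
print(np.mean(stds), np.std(stds, ddof=1))
print(np.mean(rhos), np.std(rhos, ddof=1))
\end{verbatim}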
{\\}
In the following sections, we discuss and quantify the significance of each of the differences and similarities of the two radio-loud and radio-quiet LGRBs as quantified by the bootstrap confidence bounds and provide further evidence from Monte Carlo simulations that shed light on the classification of the radio emissions of LGRBs.
\subsection{The redshift distributions of radio-loud and radio-quiet LGRBs}
\label{sec:methods:energetics}
Figure \ref{fig:bootL19} displays the bootstrapping results for some of the statistical properties of the two radio classes of \citetalias{lloyd2019comparison}. The first and foremost quantity in studies of cosmological objects is redshift ($z$). Similar to \citetalias{lloyd2019comparison}, we find that the redshift distributions of the two classes of radio-loud and radio-quiet LGRBs are consistent with the hypothesis of belonging to the same parent population distribution. A two-sample Kolmogorov-Smirnov test on the two redshift distributions reveals no significant difference between them; that is, we cannot reject the null hypothesis that the two redshift samples originate from the same distribution, with a KS-test $p$-value of $\sim0.24$.
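The test itself can be sketched as follows; the synthetic arrays below merely stand in for the observed redshift samples of the two radio classes and are not our data.
\begin{verbatim}
# Sketch: two-sample Kolmogorov-Smirnov test on the redshift distributions.
import numpy as np
from scipy.stats import ks_2samp

z_loud = np.random.default_rng(0).lognormal(mean=0.7, sigma=0.5, size=78)
z_quiet = np.random.default_rng(1).lognormal(mean=0.8, sigma=0.5, size=41)

stat, p_value = ks_2samp(z_loud, z_quiet)
print(stat, p_value)   # a large p-value means the null cannot be rejected
\end{verbatim}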
{\\}
We note, however, that this hypothetical parent distribution for the redshifts of the two classes is likely severely affected by selection effects due to the gamma-ray detection and redshift-measurement thresholds. We find the average redshifts of the radio-quiet and radio-loud samples to be $\overline{z}_{loud}\sim2.21<\overline{z}_{quiet}\sim2.6$. This is contrary to the reported value for $\overline{z}_{loud}$ in Table (1) of \citetalias{lloyd2019comparison}. Nevertheless, our bootstrapping simulations indicate that the differences in the redshift averages are insignificant, as illustrated in Figure \ref{fig:bootL19a}. There is a $0.14$ probability that the average redshift of the radio-loud sample could actually be larger than the average redshift of the radio-quiet sample. The dispersions in the redshift distributions of the two radio classes also appear to be consistent with each other, as depicted by the bootstrapping results in Figure \ref{fig:bootL19d}.
\subsection{The energetics of radio-loud and radio-quiet LGRBs}
\label{sec:methods:energetics}
\begin{figure*}
\centering
\makebox[\textwidth]{
\subfloat[$E_{\operatorname{iso}}$ vs. Peak Radio Luminosity]{%
\includegraphics[width=0.47\textwidth]{./fig/chandra2012/ChandraLog10EradLog10Eiso.pdf}
\label{fig:ChandraLradEiso}}
\quad
\subfloat[A Depiction of detector thresholds and classification limit]{%
\includegraphics[width=0.47\textwidth]{./fig/radioCutoffs.png}
\label{fig:RadioCutoffs}}
}
\caption
{
{\bf (a)}: An illustration of the observational sample of \citet{chandra2012radio} showing the relationship between $E_{\operatorname{iso}}$ and the peak radio luminosity. The only X-Ray Flash (XRF) event in the sample of \citet{chandra2012radio} is shown by the red point. Exclusion of the XRF from the plots results in a minor decrease in the Spearman's correlation strength from $\rho=0.45\pm0.13$ to $\rho=0.43\pm0.13$.
{\bf (b)}: A schematic illustration of the combined effects of the gamma-ray and radio detection thresholds, as well as the potential existence of an underlying correlation between the radio and gamma-ray emissions of LGRBs, on the observed LGRB sample, and how these factors can lead to the appearance of two separate classes of radio-loud and radio-quiet LGRBs. The red oval represents the positive underlying correlation between $E_{\operatorname{iso}}$ and the radio emission. However, the gamma-ray detector threshold (and the complex redshift selection effects) impose a severe cut on the y-axis, hiding much of the cosmic population of LGRBs (represented by the pink color) from our view. Simultaneously, the radio detector threshold effects place another severe cut on the observed sample of LGRBs to create an impression of two separate classes of radio-loud and radio-quiet LGRBs. Because of the positive correlation of $E_{\operatorname{iso}}$ with the radio emission, any LGRB above a certain $E_{\operatorname{iso}}$ threshold (i.e., the {\it radio classification limit}) will {\it always} be automatically classified as a radio-loud event. This leads to an apparent increase in the average $E_{\operatorname{iso}}$ of the radio-loud sample (represented by the yellow-colored points) relative to the radio-quiet sample (represented by the black-colored points), similar to what has been reported in the literature as the differing characteristics of radio-quiet and radio-loud LGRBs.
\label{fig:chandraSchematic}
}
\end{figure*}
Similar to \citetalias{lloyd2019comparison}, the results of our bootstrapping simulations indicate strong evidence, at almost the $100\%$ confidence level, that the average and the standard deviation of the $E_{\operatorname{iso}}$ distribution of the radio-loud LGRBs are larger than those of the $E_{\operatorname{iso}}$ distribution of the radio-quiet sample (Figures \ref{fig:bootL19b} \& \ref{fig:bootL19e}). However, unlike \citetalias{lloyd2019comparison}, here we hypothesize a different origin for the observed differences in the $E_{\operatorname{iso}}$ distributions of the radio-loud and radio-quiet LGRBs.
{\\}
Our {\it fundamental hypothesis} in this work is that {\it there is potentially a significant (but not necessarily strong) positive correlation between the radio-afterglow and the prompt gamma-ray energetics of LGRBs}. Such a hypothesis may not be too far from reality, as there is already evidence for the potential existence of a positive correlation between the total isotropic gamma-ray emission ($E_{\operatorname{iso}}$) of GRBs and their peak radio luminosity at 8.5 GHz ($\lrad$). In Figure \ref{fig:ChandraLradEiso}, we have regenerated Figure 20 of \citet{chandra2012radio}. We find a positive Spearman's correlation strength of $\rho\sim0.45$ for the $E_{\operatorname{iso}}-\lrad$ relationship.
{\\}
The scatter in the underlying intrinsic $E_{\operatorname{iso}}-\lrad$ correlation of the LGRB population is likely different from what is seen in Figure \ref{fig:ChandraLradEiso}, since the distributions of both quantities $E_{\operatorname{iso}}$ and $\lrad$ are severely affected by the corresponding gamma-ray and radio detector thresholds. We defer a quantification of this relationship to a future work and, here, only note the high plausibility of such a hypothesis given the existing evidence. Hints of the existence of correlations between the prompt gamma-ray and afterglow emissions in wavelengths other than radio have also been provided by other independent studies \citep[e.g.,][]{margutti2013prompt, dainotti2017gamma}.
{\\}
The existence of such a correlation between the radio and gamma-ray energy releases would readily explain the appearance of the two classes of radio-quiet and radio-loud LGRBs: {\bf The more energetic LGRBs in gamma-ray emission tend to be more luminous in radio afterglows, and therefore, tend to be classified as radio-loud LGRBs more frequently}. Consequently, radio-loud LGRBs appear to be much more energetic, as measured by $E_{\operatorname{iso}}$, relative to radio-quiet LGRBs. This phenomenon is well illustrated in the schematic plot of Figure \ref{fig:RadioCutoffs}, where the effects of the radio and gamma-ray detector thresholds create what appears to be two distinct classes of radio-loud and radio-quiet LGRBs with significantly different characteristics, similar in behavior to the findings of \citetalias{lloyd2019comparison}.
{\\}
As illustrated in Figure \ref{fig:RadioCutoffs}, when LGRBs reach a certain gamma-ray emission as measured by $E_{\operatorname{iso}}$, their radio emission also surpasses the radio detection threshold. Therefore, LGRBs beyond a certain $E_{\operatorname{iso}}$ threshold are automatically classified as radio-loud LGRBs. This results in the apparent segregation of the LGRB population into two distinct groups whose $E_{\operatorname{iso}}$ distributions are different despite having the same redshift distributions. Indeed, \citetalias{lloyd2019comparison} report such an excess in the average energetics of their radio-loud LGRB sample relative to radio-quiet LGRBs (e.g., see the middle plots of Figures 1 \& 2 of \citetalias{lloyd2019comparison} and Figure \ref{fig:bootL19b} in this work). Meanwhile, the presence of two strong and independent radio and gamma-ray detection thresholds on the bivariate $E_{\operatorname{iso}}-\erad$ distribution severely undercuts any traces of a significant $E_{\operatorname{iso}}-\erad$ correlation, where $\erad$ denotes the total radio emission.
{\\}
The above alternative hypothesis for the origins of the two radio classes also automatically provides a natural explanation for the significantly smaller dispersion in the $E_{\operatorname{iso}}$ distribution of radio-quiet LGRBs relative to the radio-loud sample, which is evident in the middle plots of Figures 1 \& 2 of \citetalias{lloyd2019comparison} and Figure \ref{fig:bootL19e} in this work. The $E_{\operatorname{iso}}$ distribution of the radio-quiet sample is strongly affected by the gamma-ray detection threshold and the redshift-measurement selection effects at its lower tail, and by the radio-loud classification limit at its upper tail. In the case of radio-loud LGRBs, however, such a radio-classification upper limit on the $E_{\operatorname{iso}}$ distribution does not exist, and the lower limit on this distribution is only vaguely defined by a classification limit whose sharpness depends on the strength of the $E_{\operatorname{iso}}-\erad$ correlation. These effects are well illustrated in Figure \ref{fig:RadioCutoffs}.
\subsection{The duration distribution of radio-loud and radio-quiet LGRBs}
\label{sec:methods:durz}
Once the origins of the energetics differences between the radio-loud and radio-quiet LGRBs are understood and accepted, as explained in the previous section, the apparent differences between the duration distributions of radio-loud and radio-quiet LGRBs can also be readily explained.
We do so by utilizing the recent discovery of the potential existence of a strong positive correlation between the gamma-ray energetics of LGRBs and their intrinsic durations, $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$, quantified for the first time in a series of works by \citet{shahmoradi2013multivariate, shahmoradi2013gamma, shahmoradi2015short, shahmoradi2019catalog, 2019arXiv190306989S, osborne2020multilevel, 2020arXiv200601157O}.
{\\}
Plot (b) of Figure \ref{fig:monteCarloUniverse} displays a reproduction of the LGRB world model of \citet{shahmoradi2019catalog, osborne2020multilevel} who find an underlying Pearson's correlation coefficient of $\rho\sim0.5-0.6$ between the intrinsic cosmic distributions of $E_{\operatorname{iso}}$ and $T_{\operatorname{90,z}}$ in log-log space. Interestingly, \citet{shahmoradi2015short} discover a correlation of similar strength and significance to that of LGRBs in the population of SGRBs.
{\\}
Since the radio-loud sample of LGRBs has, on average, higher $E_{\operatorname{iso}}$ than the radio-quiet LGRBs (by about 0.58 dex, as illustrated in Figure \ref{fig:bootL19b}), the strong positive $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ correlation depicted in plot (b) of Figure \ref{fig:monteCarloUniverse} also necessitates, on average, higher $T_{\operatorname{90,z}}$ values for radio-loud LGRBs relative to the radio-quiet sample. Such a difference has indeed been reported by \citetalias{lloyd2019comparison} (e.g., see the middle plots of Figures 1 \& 2 of \citetalias{lloyd2019comparison} and Figure \ref{fig:bootL19c} in this work).
{\\}
This resolves the source of another apparent major difference between the two radio classes. In the following sections, we present the results from the Monte Carlo simulations of an LGRB world model in which we attempt to synthetically reconstruct the gamma-ray emission properties of the radio-loud and radio-quiet samples of \citetalias{lloyd2019comparison}. We show that the population-property differences between the two classes as enumerated in \S\ref{sec:intro} naturally emerge from the combined effects of the gamma-ray detection threshold, the artificial cuts on the observational data, and the intrinsic correlations between the prompt gamma-ray properties of LGRBs as reported by \citet{shahmoradi2013multivariate, shahmoradi2015short}.
\section{The LGRB World Model}
\label{sec:lgrbWorldModel}
\input{sec/B10correlations}
\begin{figure*}
\centering
\makebox[\textwidth]
{
\begin{tabular}{cc}
\subfloat[]{\includegraphics[width=0.47\textwidth]{./fig/lisoEisoDurzSlopeDist.png}} &
\subfloat[]{\includegraphics[width=0.47\textwidth]{./fig/SwiftEisoDurz.png}\label{fig:eisodurzs}} \\
\subfloat[]{\includegraphics[width=0.47\textwidth]{./fig/SwifteisoZ.png}} &
\subfloat[]{\includegraphics[width=0.47\textwidth]{./fig/SynSample/RedshiftPlusOneDurzB10Lloyd.png}}
\end{tabular}
}
\caption{
{\bf (a)} The distribution of the exponent of $L_{\operatorname{iso}}-T_{\operatorname{90,z}}$ and $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ power-law relationships inferred from modeling the population distribution of 1336 BATSE LGRBs.
{\bf (b)} An illustration of the predicted underlying intrinsic distribution of LGRBs in the $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ plane. Associated with each simulated LGRB is also a probability of it being detectable by the Swift BAT, represented by the cyan-magenta color-map. The overlaid yellow and black points represent the sample of bright radio-loud and radio-quiet LGRBs in \citetalias{lloyd2019comparison} collected such that $E_{\operatorname{iso}}\gtrsim10^{52}$ [ergs]. The effective detection threshold of Swift BAT and the classification limit that separates the two classes of radio-loud and radio-quiet LGRBs are shown by the brown dashed lines. These two limits together potentially shape the narrow distribution of radio-quiet sample (black dots on this plane).
{\bf (c)} An illustration of the predicted underlying intrinsic distribution of LGRBs in the redshift $(z+1)-E_{\operatorname{iso}}$ plane. The color codings of the plot objects are the same as those of plot (b). The two dashed lines represent the effective limits that potentially shape the distribution of the radio-quiet sample of LGRBs in this plot.
{\bf (d)} An illustration of the predicted underlying intrinsic distribution of LGRBs in the $(z+1)-T_{\operatorname{90,z}}$ plane. The color codings of the plot objects are the same as those of plots (b) and (c).
\label{fig:monteCarloUniverse}
}
\end{figure*}
In a series of works, \citet{shahmoradi2013multivariate, 2013arXiv1308.1097S, shahmoradi2015short, shahmoradi2019catalog, 2019arXiv190306989S, osborne2020multilevel, 2020arXiv200601157O} have presented evidence for the existence of a significant positive $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ correlation of strength $\rho\sim0.5-0.6$ in both populations of LGRBs and SGRBs. Here, we build upon these works to create a Monte Carlo universe of LGRBs that is subjected to the detection thresholds of the BATSE and Swift Burst Alert Telescope (BAT) detectors. Since the majority of the events in the radio-loud and radio-quiet LGRB samples of \citetalias{lloyd2019comparison} belong to the Swift catalog, here we only present the results for the case of the Swift BAT.
{\\}
\citet{shahmoradi2013multivariate, shahmoradi2015short} model the joint population distributions of 1366 BATSE catalog LGRBs in the 5-dimensional LGRB property space of,
\begin{enumerate}
\item redshift ($z$),
\item the intrinsic bolometric 1-second isotropic peak energy luminosity ($L_{\operatorname{iso}}$) and its equivalent observable, the peak energy flux ($P_{\operatorname{bol}}$).
\item the intrinsic total isotropic bolometric emission ($E_{\operatorname{iso}}$) and its equivalent observable, the bolometric fluence ($S_{\operatorname{bol}}$).
\item the intrinsic spectral peak energy ($E_{\operatorname{p,z}}$) and its equivalent observable, the observed spectral peak energy ($E_{\operatorname{p}}$).
\item the intrinsic prompt gamma-ray duration as measured by the time interval during which $90\%$ of the total gamma-ray energy of the LGRB is released ($T_{\operatorname{90,z}}$) and its equivalent observable, the observed duration ($T_{\operatorname{90}}$).
\end{enumerate}
while carefully taking into account the intrinsic correlations between the LGRB prompt gamma-ray properties and the detection threshold of BATSE Large Area Detectors (LADs).
{\\}
To create our Monte Carlo universe of LGRBs, we use the inferred posterior distribution of the parameters of their multivariate model, under the hypothesis that LGRBs follow the LGRB rate density ${\dot\zeta}$ inferred by \cite{butler2010cosmic},
\begin{equation}
\label{eq:mz}
{\dot\zeta}(z) \propto
\begin{cases}
(1+z)^{\gamma_0} & z<z_0 \\
(1+z)^{\gamma_1} & z_0<z<z_1 \\
(1+z)^{\gamma_2} & z>z_1 ~, \\
\end{cases}
\end{equation}
\noindent where the parameters ($z_0$, $z_1$, $\gamma_0$, $\gamma_1$, $\gamma_2$) of this equation are (0.97, 4.00, 3.14, 1.36, -2.92). The rate density estimate of \cite{butler2010cosmic} is based on a careful multivariate modeling of the Swift catalog of LGRBs. We have previously used five other rate density models in \cite{osborne2020multilevel}, but found that the rate density estimate of \citetalias{butler2010cosmic} yielded more accurate predictions of the redshifts of the BATSE catalog LGRBs than the other models. Table \ref{tab:paraPostStat} summarizes the inferred Pearson's correlation coefficients between the four main prompt gamma-ray attributes of LGRBs considered in our LGRB world model. We refer the interested reader to \citet{shahmoradi2013multivariate, shahmoradi2015short, shahmoradi2019catalog, 2019arXiv190306989S, osborne2020multilevel, 2020arXiv200601157O} for a comprehensive discussion of the modeling approach, and to \citet{2020arXiv200914229S, 2020arXiv201000724S, 2020arXiv201004190S} for details of the MCMC sampling techniques used to construct the posterior distribution of the parameters.
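A sketch of this rate density is given below, with the segments matched at the break redshifts so that the rate is continuous; the continuity normalization and the arbitrary overall scale are our assumptions, since the rate is only defined up to a proportionality constant in Equation (\ref{eq:mz}).
\begin{verbatim}
# Sketch: broken power-law comoving rate density with continuous segments.
import numpy as np

z0, z1 = 0.97, 4.00
g0, g1, g2 = 3.14, 1.36, -2.92

def rate_density(z):
    z = np.asarray(z, dtype=float)
    c1 = (1 + z0) ** (g0 - g1)            # matches the segments at z = z0
    c2 = c1 * (1 + z1) ** (g1 - g2)       # matches the segments at z = z1
    return np.where(z < z0, (1 + z) ** g0,
           np.where(z < z1, c1 * (1 + z) ** g1,
                            c2 * (1 + z) ** g2))

print(rate_density([0.5, 2.0, 5.0]))
\end{verbatim}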
{\\}
Once we have a constrained parametric model for the joint population distribution of LGRBs, we generate a Monte Carlo universe of LGRBs by randomly selecting a set of parameters of the LGRB world model from the posterior distribution of the parameters and then generating a set of LGRB attributes ($L_{\operatorname{iso}}$, $E_{\operatorname{iso}}$, $E_{\operatorname{p,z}}$, $T_{\operatorname{90,z}}$, $z$) given the randomly-selected parameter set.
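Schematically, each such synthetic sample can be drawn from a multivariate normal distribution in the log space of the prompt attributes (with redshift drawn separately from the rate density sketched above); the means, dispersions, and correlation matrix below are placeholders for illustration only and are not the posterior values of our LGRB world model.
\begin{verbatim}
# Sketch: drawing correlated synthetic LGRB attributes in log10 space.
import numpy as np

rng = np.random.default_rng(0)
names = ["log_Liso", "log_Eiso", "log_Epz", "log_T90z"]
mean  = np.array([51.5, 53.0, 2.5, 1.4])          # illustrative log10 means
sigma = np.array([0.8, 0.9, 0.45, 0.45])          # illustrative dispersions
corr  = np.array([[1.0, 0.9, 0.5, 0.3],           # illustrative correlations
                  [0.9, 1.0, 0.5, 0.6],
                  [0.5, 0.5, 1.0, 0.3],
                  [0.3, 0.6, 0.3, 1.0]])
cov = np.outer(sigma, sigma) * corr
sample = rng.multivariate_normal(mean, cov, size=100000)
print(sample.shape)                               # (100000, 4)
\end{verbatim}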
{\\}
Plot (a) of Figure \ref{fig:monteCarloUniverse} displays the distribution of the inferred exponent $\alpha$ for the power-law relationship between the intrinsic duration ($T_{\operatorname{90,z}}$) and energetics of LGRBs as measured by $L_{\operatorname{iso}}$ and $E_{\operatorname{iso}}$. A realization of the $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ relation is also displayed in plot (b) of Figure \ref{fig:monteCarloUniverse}.
\subsection{The $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ correlation in radio-loud and radio-quiet LGRBs}
\label{sec:methods:eisodurzcorr}
Similar to \citetalias{lloyd2019comparison}, we confirm the existence of a weaker $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ relationship in the population of radio-loud LGRBs ($\rho\sim0.23\pm0.11$) compared to the radio-quiet sample of LGRBs ($\rho\sim0.45\pm0.12$). However, the bootstrapping results depicted in plot (g) of Figure \ref{fig:bootL19} indicate that the difference in the correlation strengths between the two radio classes is insignificant. Indeed, there is a $10\%$ probability that the underlying $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ correlation in the radio-loud sample could be stronger than the corresponding correlation in the radio-quiet sample.
{\\}
We have already shown in the previous sections of this manuscript and in \citet{shahmoradi2013multivariate, shahmoradi2015short, shahmoradi2019catalog, 2019arXiv190306989S, osborne2020multilevel, 2020arXiv200601157O} that there is likely a strong intrinsic $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ correlation in both LGRB and SGRB classes. This prediction readily explains the existence of a positive $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ correlation in both samples of radio-loud and radio-quiet LGRBs. The strengths of the observed correlations are, however, much weaker than the predictions of our LGRB world model because of the strong effects of sample-incompleteness on the radio loud and radio-quiet LGRB samples.
{\\}
Furthermore, the slight increase in the $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ correlation strength in the radio-quiet sample relative to the radio-loud sample of LGRBs can also potentially be explained in terms of the subtle effects of radio classification and sample incompleteness on the two LGRB populations. These two artificial fuzzy cuts on the radio-quiet sample are schematically illustrated by the brown dashed lines in plot (b) of Figure \ref{fig:monteCarloUniverse}. The $E_{\operatorname{iso}}$ distribution of radio-quiet LGRBs is likely more affected by the detection threshold of Swift because radio-quiet LGRBs are generally less energetic.
{\\}
However, unlike the radio-loud sample, the radio-quiet LGRBs are also limited by an upper fuzzy $E_{\operatorname{iso}}$ threshold which is purely due to the classification of LGRBs into the two classes of radio-loud and radio-quiet. If an LGRB is bright enough in gamma-rays, its radio emission will also become bright enough to be detectable. Therefore, gamma-ray-bright LGRBs are automatically classified as radio-loud LGRBs. The two aforementioned artificial cuts on the radio-quiet sample create a narrow distribution of LGRBs in the $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ plane, which artificially increases the observed $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ correlation strength of radio-quiet LGRBs compared to the radio-loud sample.
\subsection{The $z-E_{\operatorname{iso}}$ correlation in radio-loud and radio-quiet LGRBs}
\label{sec:methods:zeisocorr}
We confirm the existence of a weaker $z-E_{\operatorname{iso}}$ correlation in the population of radio-loud LGRBs ($\rho\sim0.08\pm0.12$) compared to the radio-quiet sample of LGRBs ($\rho\sim0.33\pm0.13$). However, the bootstrapping results depicted in plot (h) of Figure \ref{fig:bootL19} indicate that the difference in the correlation strengths between the two radio classes is insignificant. Indeed, there is a $9\%$ probability that the underlying $z-E_{\operatorname{iso}}$ correlation in the radio-loud sample could be stronger than the corresponding correlation in the radio-quiet sample.
{\\}
Furthermore, we hypothesize that the slight increase in the $z-E_{\operatorname{iso}}$ correlation strength in the radio-quiet sample relative to the radio-loud sample of LGRBs can again potentially be explained in terms of the subtle effects of classification and sample incompleteness on the two LGRB populations. These two artificial fuzzy cuts on the radio-quiet sample are schematically illustrated by the brown dashed lines in plot (c) of Figure \ref{fig:monteCarloUniverse}. The $E_{\operatorname{iso}}$ distribution of radio-quiet LGRBs is likely more affected by the detection threshold of Swift because radio-quiet LGRBs are generally less energetic.
{\\}
\subsection{The $z-T_{\operatorname{90,z}}$ correlation in radio-loud and radio-quiet LGRBs}
\label{sec:methods:zdurzcorr}
Similar to \citetalias{lloyd2019comparison}, we confirm the existence of a stronger $(z+1)-T_{\operatorname{90,z}}$ anti-correlation in the population of radio-loud LGRBs ($\rho\sim-0.35\pm0.10$) compared to the radio-quiet sample of LGRBs ($\rho\sim-0.07\pm0.18$). However, the bootstrapping results depicted in plot (i) of Figure \ref{fig:bootL19} indicate that the difference in the correlation strengths between the two radio classes is insignificant. Indeed, there is a $9\%$ probability that the underlying $z-T_{\operatorname{90,z}}$ correlation in the radio-loud sample could be weaker than the corresponding correlation in the radio-quiet sample.
{\\}
Furthermore, the slight increase in the $(z+1)-T_{\operatorname{90,z}}$ anti-correlation strength in the radio-loud sample relative to the radio-quiet sample of LGRBs can also potentially be explained in terms of the subtle effects of classification and sample incompleteness on the two LGRB populations, combined with the effects of the correlation strengths of the $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ and $(z+1)-E_{\operatorname{iso}}$ relationships in the two LGRB radio classes.
{\\}
To illustrate the effects of $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ and $(z+1)-E_{\operatorname{iso}}$ correlations on the $(z+1)-T_{\operatorname{90,z}}$ correlation, we generate Monte Carlo realizations of the radio-loud and radio quiet LGRB samples, similar to the two observational samples of \citetalias{lloyd2019comparison}. We do so by generating two samples that have the same distributions of redshift and $E_{\operatorname{iso}}$ as those of the observational radio-loud and radio-quiet samples, while fixing their $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ and $(z+1)-E_{\operatorname{iso}}$ correlation strengths to their corresponding values in the observational samples. We then leave the $T_{\operatorname{90,z}}$ distribution of the two synthetic samples to be randomly determined by our Monte Carlo simulations.
{\\}
\input{sec/tabMonteCarloSim}
The above simulation scheme allows us to isolate the effects of the $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ and $(z+1)-E_{\operatorname{iso}}$ correlation strengths on the strength of the $(z+1)-T_{\operatorname{90,z}}$ correlation. The results of the Monte Carlo simulations are summarized in Table \ref{tab:monteCarloSim}. As shown in the last column of the table, although the average simulated correlation strength of the $(z+1)-T_{\operatorname{90,z}}$ relationship for the sample of radio-loud LGRBs does not fully match the corresponding observed value, the simulations indicate that a similar trend in correlation strengths, with differences comparable to those of the observational samples, can be reproduced purely from the correlation strengths of the $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ and $(z+1)-E_{\operatorname{iso}}$ relationships. All of these Monte Carlo simulation results have been obtained without any a priori assumptions on the type of LGRBs, whether radio-loud or radio-quiet.
{\\}
In other words, the observed $(z+1)-T_{\operatorname{90,z}}$ correlation strength difference between the two radio-loud and radio-quiet samples likely has no physical origin but can be largely attributed to a complex combination of multiple sample-incompleteness and selection effects in gamma-ray and radio detection, data collection, and redshift measurement.
{\\}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{./fig/SynSample/correlations.png}
\caption{
An illustration of the Monte Carlo simulations of the correlation coefficient strengths between the three LGRB intrinsic properties: redshift (represented by $z+1$), $E_{\operatorname{iso}}$, and $T_{\operatorname{90,z}}$. Each point on this plot represents a simulated LGRB sample of comparable size to the radio-loud and radio-quiet sample sizes of \citetalias{lloyd2019comparison}. The two overlaid black and red dots represent respectively, the locations of the radio-loud and radio-quiet samples of \citetalias{lloyd2019comparison} on this plot, with the observed correlation coefficients of $-0.36$ and $-0.08$ for the $(z+1)-T_{\operatorname{90,z}}$ relationship in the radio-loud and radio-quiet samples, respectively. The observed gradient of correlation strength in this plot is consistent with the observed $(z+1)-T_{\operatorname{90,z}}$ correlation strengths for the two radio classes.
\label{fig:correlations}
}
\end{figure}
To better illustrate the above argument, we depict in Figure \ref{fig:correlations} the distribution of the correlation coefficients between $(z+1)$, $E_{\operatorname{iso}}$, and $T_{\operatorname{90,z}}$ for all the Monte Carlo samples that we have simulated. Overlaid on this distribution are the two observed radio-loud and radio-quiet samples of \citetalias{lloyd2019comparison}. From this figure, it is evident that a gradient in the strength of the $(z+1)-T_{\operatorname{90,z}}$ correlation exists as a function of the strengths of the $(z+1)-E_{\operatorname{iso}}$ and $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ correlations. We note that this gradient results purely from the intrinsic gamma-ray properties of LGRBs inferred from our LGRB world model; we have made no assumptions on the existence of radio-loud or radio-quiet LGRBs in the aforementioned Monte Carlo simulations.
\section{Discussion}
\label{sec:discussion}
The existence of two classes of radio-loud and radio-quiet LGRBs with potentially different progenitors has recently been argued in the literature \citep[e.g.,][]{lloyd2017lack, lloyd2019comparison}. Radio-loud LGRBs have been shown to be, on average, more energetic and of longer duration, and to exhibit a weaker positive $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ but stronger negative $(z+1)-T_{\operatorname{90,z}}$ correlation than the radio-quiet class of LGRBs (Figure \ref{fig:bootL19}).
{\\}
In this work, we have shown that much of the evidence in favor of such radio classification of LGRBs and their distinct progenitors can be purely attributed to the complex effects of detection thresholds of gamma-ray detectors \citep{shahmoradi2009real, shahmoradi2011possible} and radio telescopes \citep{chandra2012radio} on the observed sample of bright LGRBs. Our arguments are built upon the recent discovery of a significant positive $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ correlation ($\rho\sim0.5-0.6$) in both populations of LGRBs and SGRBs by \citet{shahmoradi2013multivariate, shahmoradi2015short}. We have shown that the intrinsic $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ correlation (Figure \ref{fig:eisodurzs}) along with a potential positive correlation between the gamma-ray and radio luminosity of LGRBs (Figure \ref{fig:chandraSchematic}) are sufficient conditions to generate much of the differing characteristics of radio-loud and radio-quiet LGRBs, without recourse to any radio classification of LGRBs.
{\\}
Bootstrapping simulations indicate that some of the proposed spectral and temporal differences between the two proposed radio classes are not statistically significant (Figure \ref{fig:bootL19}). Furthermore, Monte Carlo simulations of the gamma-ray properties of the two radio classes reveal that more than $50\%$ of the reported difference between the $(z+1)-T_{\operatorname{90,z}}$ correlation strengths of the proposed radio classes can be readily and purely explained in terms of selection effects, sample incompleteness, and the strong positive $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ correlation in LGRBs.
{\\}
In light of the above arguments, it seems likely that the presence of the very high energy GeV extended emission in the class of radio-loud LGRBs also results from the overall brighter light-curves of such LGRBs across all energy wavelengths, from radio to GeV.
{\\}
The question of whether radio telescopes have been sensitive enough to detect the faint LGRB radio emissions has been raised previously \citep[e.g.,][]{chandra2012radio, resmi2017radio}. Given the proximity of the $3\sigma$ radio non-detection limits to the observed sample (Figure \ref{fig:ChandraRadioDetectionEfficiency}), it is conceivable that future radio telescopes with increased sensitivity will be able to detect the radio afterglows of more LGRB events \citep{chandra2016gamma}. The numerical simulations of \cite{burlon2015ska} also appear to support this conclusion. Future projects such as the Square Kilometer Array \citep{carilli2004science, johnston2008science} (whose operation is expected in 2027) and the recent upgrades to existing telescopes such as the Giant Metrewave Radio Telescope \citep{swarup1991giant, ananthakrishnan1995giant, gupta2014gmrt} and many others \citep[e.g.,][]{gupta2017upgraded} will provide a definite answer to the problem of the radio classification of LGRBs.
\section{Introduction}
\label{sec:intro}
Gamma-ray bursts (GRBs) are extremely energetic explosions, the long-duration class of which has long been hypothesized to result from the deaths of massive stars, releasing energies on the order of $10^{48}$ to $10^{52}$ ergs \citep[e.g.,][]{frail2001beaming, cenko2010collimation, shahmoradi2015short} and occurring at cosmological distances \citep{metzger1997spectral}. The current afterglow model is that of an expanding fireball \citep{piran1999gamma, meszaros2002theories, woosley2006supernova}, in which the high-energy prompt gamma-ray emission is observed first, followed by afterglow emission at lower-energy frequencies a few hours or days after the initial prompt emission \citep{frail1997radio, van1997transient, heng2008direct}.
{\\}
The standard fireball model used today was proposed after the first observation of both X-ray and optical afterglows in February $1997$ \citep{kumar2015physics}. Not long after, the first GRB {\it without} an optical afterglow counterpart was found \citep{groot1998search}. This class of bursts without optical afterglows became known as the {\it dark GRBs} \citep{fynbo2001detection}. On October 9, 2000, however, the second High Energy Transient Explorer (HETE-2) \citep{ricker2003high} was launched, and in December 2002 it viewed its first dark GRB, in which the optical afterglow was seen but disappeared and was no longer visible after 2 hours. This raised the question: is there ever truly such a phenomenon as a {\it dark GRB}, or is it possible that with better equipment and timing no such phenomenon would ever appear?
{\\}
More recently, studies have raised the possibility of the existence of a new population of Long-duration GRBs (LGRBs) that are intrinsically dark in the radio-bandwidth afterglow. These events, named `radio-dark', `radio-quiet', or `radio-faint' LGRBs, have been hypothesized to have progenitors that are different from the progenitors of the mainstream `radio-bright' or synonymously-named `radio-loud' LGRBs \citep[e.g.,][]{hancock2013two, lloyd2017lack, lloyd2019comparison}.
{\\}
Alternative hypotheses on the lack of detection of radio afterglows in some LGRBs have been also studied \citep[e.g.,][]{frail2005radio, chandra2012radio}. In particular, \citet{chandra2012radio} used a comprehensive sample of 304 afterglows, consisting of 2995 flux density measurements at a wide range of frequencies between 0.6 GHz and 660 GHz, to argue against the potential existence of radio-bright and radio-dark LGRBs. The argument therein is based on the observation that the $3\sigma$ upper-limit for the non-detection of the radio afterglow of radio-dark LGRBs closely traces the faint-tail of the radio-bright LGRB sample, as illustrated in Figure \ref{fig:ChandraRadioDetectionEfficiency}.
{\\}
Responding to these observations, \citet{hancock2013two} employed image stacking techniques to increase the overall signal in the combined data from all radio-quiet GRBs. Based on their own specific classification, they conclude that radio-dark LGRBs are on average 2-3 orders of magnitude fainter than radio-bright LGRBs. Such a result is not surprising and is, in fact, relatively consistent with the observational sample of \citet{chandra2012radio}, illustrated in Figure \ref{fig:ChandraRadioDetectionEfficiency}.
{\\}
Nevertheless, based on simulations of the radio afterglows of LGRBs, \citet{hancock2013two} predict that the expected stacked radio flux density of radio-dark LGRBs must be on average 5 times brighter in order to be consistent with the hypothesis of a common continuous unimodal distribution for the radio afterglow properties of all LGRBs together. However, they acknowledge several limitations of their simulations, most importantly, the assumption that the observed sample of radio-afterglows in their studies is representative of the radio afterglow properties of the entire underlying population of detected and undetected LGRBs.
{\\}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{./fig/chandra2012/ChandraRadioDetectionEfficiency.pdf}
\caption{Histogram of the normalized count (probability density function) of 66 radio-loud GRBs taken from \citet{chandra2012radio}. The dashed-dotted red line represents the cumulative distribution function of the $3\sigma$ upper-limits for the non-detection of radio afterglows of 107 radio-quiet GRBs. \label{fig:ChandraRadioDetectionEfficiency}}
\end{figure}
\begin{figure*}
\centering
\makebox[\textwidth]
{
\begin{tabular}{ccc}
\subfloat[]{\includegraphics[width=0.31\textwidth]{./fig/chandra2012/ChandraEisoCutoffMeanLogEiso.pdf} \label{fig:curoffMeanEiso}} &
\subfloat[]{\includegraphics[width=0.31\textwidth]{./fig/chandra2012/ChandraEisoCutoffMeanLogZone.pdf} \label{fig:curoffMeanZone}} &
\subfloat[]{\includegraphics[width=0.31\textwidth]{./fig/chandra2012/ChandraEisoCutoffMeanLogT90z.pdf} \label{fig:curoffMeanDurz}} \\
\subfloat[]{\includegraphics[width=0.31\textwidth]{./fig/chandra2012/ChandraEisoCutoffLogEisoLogT90z.pdf} \label{fig:curoffEisoDurz}} &
\subfloat[]{\includegraphics[width=0.31\textwidth]{./fig/chandra2012/ChandraEisoCutoffLogZoneLogEiso.pdf} \label{fig:curoffZoneEiso}} &
\subfloat[]{\includegraphics[width=0.31\textwidth]{./fig/chandra2012/ChandraEisoCutoffLogZoneLogT90z.pdf} \label{fig:curoffZoneDurz}} \\
\end{tabular}
}
\caption{
A depiction of the dynamics of different properties of the radio-loud and radio-quiet LGRBs as a function of the sample selection cutoff. The dashed line represents the cutoff used in the study of \citetalias{lloyd2019comparison} to generate a sample of bright LGRBs. As is evident, some of the differing attributes of the two radio classes appear to be highly sensitive to the arbitrarily chosen sample selection cutoff value used in \citetalias{lloyd2019comparison}, most importantly, the correlations between redshift ($z$), isotropic emission ($E_{\operatorname{iso}}$), and intrinsic duration ($T_{\operatorname{90,z}}$).
\label{fig:VaryingCutoffs}
}
\end{figure*}
Most recently, \citet{lloyd2019comparison} (hereafter \citetalias{lloyd2019comparison}) build upon the existing sample of the GRB radio afterglows of \citet{chandra2012radio} to further confirm and strengthen the findings of their previously published study \citep[][]{lloyd2017lack} on the fundamental spectral and temporal differences and similarities of the radio-quiet and radio-loud GRBs in multiple observational bandwidths, including optical, X-ray, gamma-ray, and GeV emission. Specifically, they find that:
\begin{enumerate}
\item
The total isotropic gamma-ray emission ($E_{\operatorname{iso}}$) does not correlate with the radio luminosity in their observational samples of 78 radio-loud and 41 radio-quiet LGRBs.
\item
Radio-quiet LGRBs have significantly lower total isotropic gamma-ray emission ($E_{\operatorname{iso}}$), on average, 5 times lower than the radio-loud LGRBs.
\item
Radio-quiet LGRBs have significantly shorter intrinsic prompt duration, as measured by the quantity $T_{\operatorname{90,z}}$ \citep[see for example,][]{shahmoradi2013multivariate, shahmoradi2015short}, than the radio-loud LGRBs.
\item
Radio-quiet LGRBs exhibit a weak positive $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ correlation whereas this correlation is missing in the radio-loud LGRBs.
\item
Radio-quiet LGRBs exhibit a weak negative correlation between $T_{\operatorname{90,z}}$ and redshift ($z$) whereas the radio-loud LGRBs exhibit a much stronger such correlation.
\item
The very high energy ($0.1-100$ GeV) extended emission is only present in the radio-loud sample.
\item
Also, there is no significant difference in the redshift distribution, the presence of X-ray/optical plateaus, or the average jet opening angles between the radio-quiet and radio-loud GRB samples.
\end{enumerate}
Such observational evidence in favor of the two classes of radio-loud and radio-quiet LGRBs can have profound implications for the progenitors of LGRBs, as highlighted and discussed by \citetalias{lloyd2019comparison}. However, here we argue and show that much of the above evidence in favor of two fundamentally-distinct radio-quiet and radio-loud GRB populations can be potentially explained in terms of the existing correlation between $E_{\operatorname{iso}}$ and $T_{\operatorname{90,z}}$ of both long and short duration classes of GRBs. To the extent of our knowledge, this positive $E_{\operatorname{iso}}-T_{\operatorname{90,z}}$ correlation in both classes of LGRBs and short-duration GRBs (SGRBs) was originally discovered, quantified, and reported for the first time by \citet{shahmoradi2013multivariate} and \citet{shahmoradi2015short} via a careful analysis of the largest catalog of GRBs available at the time. Hints of the existence of such a correlation have also been provided independently by \citet{butler2010cosmic}. The relationship has also been rediscovered recently in an independent study of a sample of Swift-detected GRBs \citep[][]{tu2018correlation}.
{\\}
In the following sections, we consider the arguments in favor of the existence of two separate populations of radio-loud and radio-quiet GRBs and show that much of the evidence provided can likely be explained in terms of strong selection effects and sample incompleteness in the radio afterglow data, together with the existing correlations among the spectral and temporal properties of LGRBs as discovered and quantified by \citet{shahmoradi2013multivariate, shahmoradi2015short}.
\section{Introduction}
\label{intro}
We present and analyze a new generalized Frank-Wolfe method~\cite{Frank_56,dem1967minimization,Lev_66,Canon_68,Dunn1978,Dunn1979,Dunn1980} for the following composite optimization problem:
\begin{equation}
(P): \ \ \ F^*:= {\min}_{x\in\mathbb{R}^n} \;[F(x):= f(\mathsf{A} x) + h(x)] \label{poi}
\end{equation}
where $f: \mathbb{R}^m\to\mathbb{R}\cup\{+\infty\}$ is an extended real-valued convex function that is differentiable on its domain $\mathsf{dom}\, f:=\{u \in \mathbb{R}^m : f(u) < +\infty \}$, $\mathsf{A}:\mathbb{R}^{n}\to \mathbb{R}^{m}$ is a linear operator (though not necessarily invertible or surjective) and the function $h:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ is proper, closed and convex (but possibly non-smooth), for which $\mathsf{dom}\, h$ is a nonempty compact convex set. Furthermore, and in contrast to the standard setting where $f$ is assumed to be $L$-smooth on $\mathsf{dom}\, F$ (i.e., its gradient is $L$-Lipschitz on $\mathsf{dom}\, F$), our focus is on the setting where $f$ belongs to a particularly special and important class of functions that arise in practice, namely $\theta$-logarithmically-homogeneous self-concordant barrier functions (whose definition and properties will be reviewed below). For convenience we distinguish between $u \mapsto f(u)$ and $x \mapsto f(\mathsf{A} x)$ by defining
\begin{equation}\label{stones} \bar{f}(x):=f(\mathsf{A} x) \ . \end{equation}
The Frank-Wolfe method was developed in 1956 in the seminal paper of Frank and Wolfe~\cite{Frank_56} for the case $h = \iota_{\mathcal{X}}$, where $\iota_{\mathcal{X}}$ denotes the indicator function of $\mathcal{X}$ (i.e., $\iota_{\mathcal{X}}(x) = 0$ for $x\in\mathcal{X}$ and $+\infty$ otherwise), and $\mathcal{X}$ is a (bounded) polytope. (In particular, $(P)$ then is the constrained problem $\min_{x\in\mathcal{X}}\, \bar f(x)$.) The Frank-Wolfe method was a significant focus of research up through approximately 1980, during which time it was generalized to handle more general compact sets $\mathcal{X}$, see e.g., \cite{dem1967minimization,Dunn1978,Dunn1979,Dunn1980}. Each iteration of the Frank-Wolfe method computes the minimizer of the first-order (gradient) approximation of $\bar{f}(x)$ on $\mathcal{X}$, and constructs the next iterate by moving towards the minimizer. Just in the last decade, and due to the importance of optimization in modern computational statistics and machine learning, the Frank-Wolfe method has again attracted significant interest (see \cite{Jaggi_13,Harcha_15,Freund_16,Freund_17,Nest_18,Ghad_19} among many others), for at least two reasons. First, in many modern application settings, $\mathcal{X}$ is a computationally ``simple'' set for which the linear optimization sub-problem in the Frank-Wolfe method is easier to solve compared to the standard projection sub-problems required by other first-order methods, see e.g.,~\cite{Nest_13}. Secondly, the Frank-Wolfe method naturally produces ``structured'' (such as sparse, or low-rank) solutions in several important settings, which is very useful in the high-dimensional regime in machine learning. This is because each iterate of the Frank-Wolfe method is a convex combination of all the previous linear optimization sub-problem solutions, and hence if the extreme points of $\mathcal{X}$ are unit coordinate vectors (for example), then the $k$-th iterate will have at most $k$ non-zeroes. This statement can be made more precise in specific settings, and also generalizes to the matrix setting where $\mathcal{X}$ is the nuclear norm ball~\cite{Harcha_15} --- in this setting the $k$-th (matrix) iterate will have rank at most $k$.
More recently, the Frank-Wolfe method has been generalized to the composite setting, where the function $h$ is a general convex non-smooth function with compact domain $\mathcal{X}$, see e.g.,~\cite{Bach_15,Nest_18,Ghad_19}. In this generalized framework, the sub-problem solved at iteration $k$ is
\begin{equation}\label{subcbday}
\minimize_{x \in \mathbb{R}^n}\; \langle \nabla \bar{f}(x^k), x \rangle + h(x) \ ,
\end{equation} which specializes to the standard Frank-Wolfe sub-problem in the case when $h = \iota_\mathcal{X}$.
In certain situations, this minimization problem admits (relatively) easily computable solutions despite the presence of the non-smooth function $h$. For example, if $h= \bar{h} + \iota_\mathcal{P}$, where $\bar{h}$ is a polyhedral function and $\mathcal{P}$ is a polytope, then \eqref{subcbday} can be reformulated as a linear optimization problem (LP), which can be solved efficiently if it has moderate size or a special structure, e.g., network flow structure~\cite{Harcha_15}. For more such examples we refer the reader to~\cite{Nest_18}.
In addition, there has been recent research work on using the Frank-Wolfe method to solve the projection sub-problems (which are constrained quadratic problems) that arise in various optimization algorithms. For example, \cite{Liu_20} presents a projected Newton method for solving a class of problems that is somewhat different from (but related to) \eqref{poi}; specifically \cite{Liu_20} assumes that the linear operator $\mathsf{A}$ is invertible and the function $f$ is self-concordant but is not necessarily a logarithmically-homogeneous barrier. The Frank-Wolfe method is used therein to solve each projection sub-problem in the projected Newton method, and \cite{Liu_20} shows that the total number of linear minimization sub-problems needed is $O(\varepsilon^{-(1+o(1))})$. Another such example is in \cite[Section~5]{Doikov_20}, which develops an affine-invariant trust-region type of method for solving a class of convex composite optimization problems in a similar form as~\eqref{poi}, with the key difference being that in \cite{Doikov_20} $f$ is assumed to be twice differentiable with Lipschitz Hessian on $\mathsf{dom}\, h$. The Frank-Wolfe method is used in \cite{Doikov_20} to solve each projection sub-problem, wherein it is shown that the total number of linear minimization sub-problems needed is $O(\varepsilon^{-1})$.
When analyzing the convergence of the standard or generalized Frank-Wolfe method, almost all such analyses rely on the $L$-smooth assumption of the function $f$.
Perhaps accidentally, the first specific attempt to extend the Frank-Wolfe method and analysis beyond the case of $L$-smooth functions is due to Khachiyan \cite{khachiyan1996rounding}. In the specific case of the $D$-optimal design problem \cite{Fedorov_72}, Khachiyan \cite{khachiyan1996rounding} developed a ``barycentric coordinate descent'' method with an elegant computational complexity analysis, and it turns out that this method is none other than the Frank-Wolfe method with exact line-search~\cite{sun2004computation,alperman}. Khachiyan's proof of his complexity result (essentially $O(n^2/\varepsilon)$ iterations) used clever arguments that do not easily carry over elsewhere, and hence raised the question of whether or not any of the arguments in \cite{khachiyan1996rounding} might underlie any general themes beyond $D$-optimal design, and if so, what might be the mathematical structures driving any such themes. In this work, we provide affirmative answers to these questions by considering the $D$-optimal design problem as a special instance of the broader class of composite optimization problems $(P)$. The second attempt was the recent paper of Dvurechensky et al.\ \cite{Dvu_20}, which presented and analyzed an adaptive step-size Frank-Wolfe method for tackling the problem $\min_{x \in \mathcal{X}} \bar {f}(x)$ where $\bar f$ is assumed to be a non-degenerate (i.e., with positive-definite Hessians) self-concordant function and $\mathcal{X}$ is a compact convex set, and was the first paper to study the Frank-Wolfe method for these special functions.
The set-up in \cite{Dvu_20} can be seen as an instance of $(P)$ with $h$ being the indicator function of $\mathcal{X}$, namely, $h=\iota_\mathcal{X}$, and the additional assumption that $\bar f$ is non-degenerate, which we do not need. (Note that in our setting, this amounts to assuming that $\mathsf{A} = \mathsf{I}$, namely the identity operator or that the linear operator $\mathsf{A}$ is invertible.)
However, unlike \cite{Dvu_20}, we additionally assume that $f$ is $\theta$-logarithmically homogeneous. As our analysis will show, this last property --- which holds true for all applications that we are aware of --- is the key property that leads to relatively simple and natural computational guarantees for the Frank-Wolfe method in this expanded relevant setting.
Let us now review the formal definition of a $\theta$-logarithmically-homogeneous self-concordant barrier function. Let $\mathcal{K}\subsetneqq \mathbb{R}^m$ be a regular cone, i.e., $\mathcal{K}$ is closed, convex, pointed ($\mathcal{K}$ contains no line), and has nonempty interior ($\mathsf{int}\,\mathcal{K}\ne \emptyset$). We say that $f$ is a $\theta$-logarithmically-homogeneous (non-degenerate) self-concordant barrier on $\mathcal{K}$ for some $\theta \ge 1$ and we write ``$f \in \mathcal{B}_\theta(\mathcal{K})$'', if $f$ is three-times differentiable and strictly convex on $\mathsf{int}\, \mathcal{K}$ and satisfies the following three properties:
\begin{enumerate}[label=(P\arabic*),leftmargin=3.9em]
\item $\abst{D^3f(u)[w,w,w]}\le 2(\lranglet{H(u)w}{w})^{3/2}$ \ $\forall\,u\in\mathsf{int}\,\mathcal{K}$, $\forall\,w\in\mathbb{R}^m$,\label{item:third_second_bounded}
\item $f(u_k)\to \infty$ for any $\{u_k\}_{k\ge 1}\subseteq\mathsf{int}\,\mathcal{K}$ such that $u_k\to u\in\mathsf{bd}\,\mathcal{K}$, and\label{item:boundary_growth}
\item $f(tu) = f(u) - \theta\ln (t)$ \ $\forall\,u\in\mathsf{int}\,\mathcal{K}$, $\forall\,t>0$ , \label{item:log_homogeneous}
\end{enumerate}
where $H(u)$ denotes the Hessian of $f$ at $u\in\mathsf{int}\,\mathcal{K}$. For details on these properties, we refer readers to Nesterov and Nemirovski~\cite[Section 2.3.3]{Nest_94} and Renegar \cite[Section 2.3.5]{Renegar_01}.
Properties (P1) and (P2) correspond to $f$ being a (standard, strongly) self-concordant function on $\mathsf{int}\,\mathcal{K}$ (cf.~\cite[Remark~2.1.1]{Nest_94}), and property (P3) corresponds to $f$ being a $\theta$-logarithmically-homogeneous barrier function on $\mathcal{K}$. Here $\theta$ is called the {\em complexity parameter} of $f$ in the terminology of Renegar~\cite{Renegar_01}. The two prototypical examples of such functions are (i) $-\ln\det(U)$ for $U \in \mathcal{K} := \mathbb{S}_{+}^{k}$ and $\theta = k$, and (ii) $-\sum_{j=1}^m w_j \ln(u_j)$ for $u \in \mathcal{K} := \mathbb{R}_{+}^{m}$ and $\theta = \sum_{j=1}^m w_j$ where $w_1, \ldots, w_m \ge 1$, see \cite{Nest_94, Renegar_01}.
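As a quick illustration of property (P3), for example (ii) the logarithmic homogeneity can be verified directly: for any $u\in\mathbb{R}^m_{++}$ and $t>0$,
\begin{equation*}
f(tu) \;=\; -\sum_{j=1}^m w_j\ln(t u_j) \;=\; -\sum_{j=1}^m w_j\ln(u_j) - \Big(\sum_{j=1}^m w_j\Big)\ln(t) \;=\; f(u) - \theta\ln(t) \ .
\end{equation*}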
We now present some application examples of $(P)$ where $f \in \mathcal{B}_\theta(\mathcal{K})$, including the aforementioned $D$-optimal design problem.
\subsection{Applications}\label{sec:applications}
\vspace{2ex}
\noindent {\em 1.\ Poisson image de-blurring with total variation (TV) regularization} \cite{Harmany_12,Dey_06,Chambolle_18}.
Let the $m\times n$ matrix $X$ be the true representation of an image, such that each entry $X_{ij}\ge 0$ represents the intensity of the pixel at location $(i,j)\in[m]\times[n]$,
and $X_{ij}\in\{0,1,\ldots,M\}$, where $M:=2^b-1$ for $b$-bit images.
In many applications, ranging from microscopy to astronomy, we observe a blurred image contaminated by Poisson noise, which we denote by $Y$, and we wish to estimate the true image $X$ from $Y$.
The generative model of $Y$ from $X$ is presumed to be as follows. Let $\mathsf{A}:\mathbb{R}^{m\times n}\to \mathbb{R}^{m\times n}$ denote the 2D discrete convolutional (linear) operator with periodic boundary conditions, which is assumed to be known. This convolutional operator is defined by a $p\times p$ 2D convolutional kernel with a size
$q:=p^2$ that is typically much smaller than the image size $N:= mn$. (For an illustration of the 2D convolution, see~\cite{Bhar_18} for example.) The blurred image $\widetilde{Y}$ is obtained by passing $X$ through $\mathsf{A}$, i.e., $\widetilde{Y} := \mathsf{A}(X)$, and the observed image $Y$ results from adding independent entry-wise Poisson noise to $\widetilde{Y}$, i.e., $ Y_{ij}\sim {\sf Poiss}(\widetilde{Y}_{ij})$, for all $(i,j)\in[m]\times[n]$, and
$\{Y_{ij}\}_{(i,j)\in[m]\times [n]}$ are assumed to be independent.
It will be preferable to work with vectors in addition to matrices, whereby we equivalently describe the above model using vector notation as follows. We denote $X = [x_1 \;\cdots\; x_m]^\top$, where $x_i^\top$ denotes the $i$-th row of $X$, and let ${\sf vec}:\mathbb{R}^{m\times n}\to \mathbb{R}^{mn}$ denote the vectorization operator that sequentially concatenates $X$ into the column vector $ {\sf vec}(X) := x :=[x_1^\top \cdots~ x_m^\top]^\top$, and let ${\sf mat}(x)$ denote the inverse operator of ${\sf vec}$, so that ${\sf mat}(x) = X$. Define $y := {\sf vec}(Y)$ and $\widetilde{y} := {\sf vec}(\widetilde{Y})$.
In addition, we represent $\mathsf{A}$ in its matrix form $A\in\mathbb{R}^{N\times N}$ (recall $N := m n$), such that
$\widetilde{y} := A x$. Furthermore, let us represent $A:= [a_1\;\ldots\;a_N]^\top$, where $a_l^\top$ denotes the $l$-th row of $A$ for $l \in [N]$. Note that $A$ is a sparse doubly-block-circulant matrix, such that each row $a_l^\top$ of $A$ has at most $q$ non-zeros, where $q\ll N$ denotes the size of the 2D convolution kernel.
Finally, we have $y_{l}\sim {\sf Poiss}(\widetilde{y}_{l})$ for all $l\in[N]$, and $\{y_l\}_{l\in[N]}$ are independent.
The maximum likelihood (ML) estimator of $X$ from the observed image $Y$ is the optimal solution of the following optimization problem:
\begin{align}
{\min}_{x\in\mathbb{R}^N}&\;\; -\textstyle\sum_{l=1}^{N} y_l\ln(a_l^\top x) + (\sum_{l=1}^{N} a_l)^\top x \quad
\st\;\; 0\le x\le Me \ , \label{eq:deblurring}
\end{align}
where $e$ denotes the vector with all entries equal to one.
In addition, following~\cite{Rudin_92}, in order to recover a smooth image with sharp edges,
we add Total Variation (TV) regularization to the objective function in~\eqref{eq:deblurring}, which yields the following regularized ML estimation problem:
\begin{align}
{\min}_{x\in\mathbb{R}^N}&\;\; \bar{F}(x):= -\textstyle\sum_{l=1}^{N} y_l\ln(a_l^\top x) + (\sum_{l=1}^{N} a_l)^\top x + \lambda {\rm TV}(x)\quad \st\;\; 0\le x\le Me \ ,
\label{eq:deblurring_TV}
\end{align}
where
\begin{align*}{\rm TV}(x)&:= \textstyle\sum_{i=1}^m \sum_{j=1}^{n-1} \abs{[{\sf mat}(x)]_{i,j} - [{\sf mat}(x)]_{i,j+1}} + \textstyle\sum_{i=1}^{m-1} \sum_{j=1}^{n} \abs{[{\sf mat}(x)]_{i,j} - [{\sf mat}(x)]_{i+1,j}} \\ &= \textstyle\sum_{i=1}^m \sum_{j=1}^{n-1} \abs{X_{i,j} - X_{i,j+1}} + \textstyle\sum_{i=1}^{m-1} \sum_{j=1}^{n} \abs{X_{i,j} - X_{i+1,j}}
\end{align*}
is a standard formulation of the total variation. Here we see that \eqref{eq:deblurring_TV} is an instance of $(P)$ with $f(u) := -\textstyle\sum_{l=1}^N y_l\ln \big(u_l)$, $\mathcal{K}:= \mathbb{R}^N_+$, $h(x) := (\sum_{l=1}^{N} a_l)^\top x + \lambda {\rm TV}(x) + \iota_{\mathcal{X}}$ where $\mathcal{X} = \{ x \in \mathbb{R}^N : 0 \le x \le Me \}$, $\mathsf{A}$ is defined by $(\mathsf{A} x)_l := a_l^\top x$, $l=1, \ldots, N$, and $\theta = \sum_{l=1}^N y_l$. We note that $y_l \ge 1$ whenever $y_l \ne 0$ for all $l \in [N]$, and hence $f \in \mathcal{B}_\theta(\mathcal{K})$. In Section \ref{sec:delurring} we will discuss how the Frank-Wolfe sub-problem \eqref{subcbday} associated with
\eqref{eq:deblurring_TV} can be efficiently solved.
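To make the structure of the smooth component in \eqref{eq:deblurring_TV} concrete, the following minimal Python sketch (illustrative only, and not the implementation used in Section \ref{sec:delurring}; the convolution matrix \texttt{A}, observed counts \texttt{y}, and a point \texttt{x} with $\mathsf{A} x>0$ are assumed inputs) evaluates the Poisson data-fit term $f(\mathsf{A} x)$ and its gradient. The linear term, the TV term, and the box constraints are all absorbed into $h$.
\begin{verbatim}
import numpy as np

def poisson_datafit(x, A, y):
    # f(Ax) = -sum_l y_l * log(a_l^T x); A may be a (sparse) N-by-N matrix
    u = A @ x
    return -np.sum(y * np.log(u))

def poisson_datafit_grad(x, A, y):
    # gradient of x -> f(Ax), namely -A^T (y / (Ax))
    u = A @ x
    return -(A.T @ (y / u))
\end{verbatim}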
\vspace{2ex}
\noindent {\em 2.\ Positron emission tomography (PET)}~\cite{Shepp_82,BenTal_01}. PET is a medical imaging technique that measures the metabolic activities of human tissues and organs. Typically, radioactive materials are injected into the organ of interest, and these materials emit (radioactive) events
that can be detected by PET scanners. The mathematical model behind this process is described as follows. Suppose that an emission object (e.g., a human organ) has been discretized into $n$ voxels. The number of
events emitted by voxel $i$ ($i\in[n]$) is a Poisson random variable $\tilde X_i$ with {\em unknown} mean $x_i \ge 0$ and so $\tilde X_i\sim {\sf Poiss}(x_i)$, and furthermore $\{\tilde X_i\}_{i=1}^n$ are assumed to be independent. We also have a scanner array with $m$ bins. Each event emitted by voxel $i$ has a {\em known} probability $p_{ij}$ of being detected by bin $j$ ($j\in[m]$), and we assume that $\sum_{j=1}^m p_{ij} = 1$, i.e., the event will be detected by exactly one bin. Let $\tilde Y_j$ denote the total number of events detected by bin $j$, whereby
\begin{equation}\label{thursday01}
\mathbb{E} [\tilde Y_j ]:= y_j := \textstyle\sum_{i=1}^n p_{ij} x_i \ .
\end{equation}
By Poisson thinning and superposition, it follows that $\{\tilde Y_j\}_{j=1}^m$ are independent random variables
and $\tilde Y_j\sim {\sf Poiss}(y_j)$ for all $j\in[m]$.
We seek to perform maximum-likelihood (ML) estimation of the unknown means $\{x_i\}_{i=1}^n$ based on observations $\{Y_j\}_{j=1}^m$ of the random variables $\{\tilde Y_j\}_{j=1}^m$. From the model above, we easily see that the log-likelihood of observing $\{Y_j\}_{j=1}^m$ given $\{x_i\}_{i=1}^n$ is (up to additive constants)
\begin{equation}
l(x):= - \textstyle\sum_{i=1}^n x_i + \textstyle\sum_{j=1}^mY_j\ln \big(\sum_{i=1}^n p_{ij} x_i\big) \ ,
\end{equation}
and therefore an ML estimate of $\{x_i\}_{i=1}^n$ is given by an optimal solution $x^*$ of
\begin{equation}
{\max}_{x\ge 0} \;l(x) \ . \label{eq:PET_0}
\end{equation}
%
It follows from the first-order optimality conditions that
any optimal solution $x$ must satisfy
\begin{equation}
\textstyle\sum_{i=1}^n x_i = S := \sum_{j=1}^m Y_j \ , \label{eq:PET_extra_constraint}
\end{equation}
and by incorporating \eqref{eq:PET_extra_constraint} into \eqref{eq:PET_0} and defining the re-scaled variable $z:= x/S$, \eqref{eq:PET_0} can be equivalently written as
\begin{equation}
{\min}_z \; L(z) := -\textstyle\sum_{j=1}^m Y_j\ln \big(\sum_{i=1}^n p_{ij} z_i\big)\quad \st\;\; {z\in\Delta_n} \ , \label{eq:PET_final}
\end{equation} where $\Delta_n := \{ z \in \mathbb{R}^n : \sum_{i=1}^n z_i =1, \ z \ge 0 \}$ is the unit simplex in $\mathbb{R}^n$. Here we see that \eqref{eq:PET_final} is an instance of \eqref{poi} with $f(u) := -\textstyle\sum_{j=1}^m Y_j\ln \big(u_j)$, $\mathcal{K}:= \mathbb{R}^m_+$, $h := \iota_{\Delta_n}$, $\mathsf{A}$ defined by $(\mathsf{A} z)_j := \sum_{i=1}^n p_{ij} z_i$, $j=1, \ldots, m$, and $\theta = \sum_{j=1}^m Y_j$. We note that $Y_j \ge 1$ whenever $Y_j \ne 0$ for all $j \in [m]$, and hence $f \in \mathcal{B}_\theta(\mathcal{K})$.
\vspace{1ex}
\noindent {\em 3. Poisson phase retrieval}~\cite{Odor_16}. In Poisson phase retrieval, we seek to estimate an unknown unit complex signal $x\in\mathbb{C}^n$, namely $\normt{x}_2:= (x^H x)^{1/2}=1$ where $x^H$ denotes the conjugate transpose of $x$. We estimate $x$ using $m$ linear measurements that are subject to Poisson noise; for $j\in[m]$, the $j$-th measurement vector is denoted by $a_j\in \mathbb{C}^n$, and the measurement outcome $y_j$ is a Poisson random variable such that $y_j\sim {\sf Poiss}(\widetilde{y}_j)$, where $\widetilde{y}_j:= \abst{\lranglet{a_j}{x}}^2$. Odor et al.~\cite{Odor_16} proposed to estimate $x$ by solving the following matrix optimization problem:
\begin{equation}
{\min}_X -\textstyle\sum_{j=1}^m y_j\ln \lranglet{a_j a_j^H}{X} + \lranglet{\sum_{j=1}^m a_j a_j^H}{X}\quad \st\;\; {X\in\mathcal{X}:= \{X\in\mathbb{H}_+^n:\tr(X)\le c\}} \ , \label{eq:PFR}
\end{equation}
where $\lranglet{\cdot}{\cdot}$ denotes the Frobenius matrix inner product,
$\mathbb{H}_+^n$ denotes the set of complex Hermitian positive semi-definite matrices of order $n$, $\tr(X)$ denotes the trace of $X$, and the parameter $c>0$ is typically chosen as $c = (1/m)\sum_{j=1}^m y_j$. Let $X^*$ be the optimal solution of~\eqref{eq:PFR}. One then computes a unit eigenvector $\bar x$ associated with the largest eigenvalue of $X^*$ and uses $\bar x$ as the estimate of $x$. Note that~\eqref{eq:PFR} has a similar form to~\eqref{eq:PET_final} except in two ways: first, the objective function in \eqref{eq:PFR} has an additional linear term, and second, the constraint set is the intersection of a nuclear norm ball with the positive semi-definite cone,
instead of a simplex. Therefore, using the same arguments as above, we see that~\eqref{eq:PFR} is an instance of \eqref{poi}. To solve~\eqref{eq:PFR}, \cite{Odor_16} proposed a Frank-Wolfe method with a pre-determined step-size sequence, and showed that this method converges with rate $O(C/k)$, where $C$ depends on several factors including (i) the diameter of $\mathcal{X}$ under the spectral norm, (ii) $\max_{j\in[m]} \normt{a_j}^2_2$, (iii) $\max_{j\in[m]}\max_{X\in\mathcal{X}} \lranglet{a_j a_j^H}{X}$, and (iv) $\min_{j\in[m]} \lranglet{a_j a_j^H}{X_0}$ where $X_0\in\mathcal{X}$ denotes the starting point of the Frank-Wolfe method.
\vspace{1ex}
\noindent {\em 4.\ Optimal expected log investment}~\cite{Cover_84,Algoet_88,Vardi_93}. In this problem we consider $n$ stocks in the market, and let $R_i$ denote the random per-unit capital return on investing in stock $i$, for $i\in[n]$. The random vector $R:= (R_1,\ldots,R_n)$ has unknown distribution $P$. An investor allocates her investment capital over these $n$ stocks, and let $w_i$ denote the (nonnegative) proportion of capital invested in stock $i$, whereby $w_i\ge 0$ for all $i\in[n]$ and $\sum_{i=1}^n w_i=1$. Define $w:=(w_1,\ldots,w_n)$. The goal of the investor is to maximize her expected log return $f(w):= \mathbb{E}_{R\sim P}[\ln(w^\top R)]$ subject to the constraint $w\in\Delta_n$ where $\Delta_n :=\{w \in \mathbb{R}^n : w \ge 0, \ e^Tw = 1\}$. The naturalness of this objective can be justified from several perspectives involving the principle that money compounds multiplicatively rather than additively, see the discussion and references in \cite{Cover_84,Algoet_88}. Since $P$ is unknown, one can use a (historical) data-driven empirical distribution such as $\widehat{P}_m:= \sum_{j=1}^m p_j \delta_{r_j}$, where $p_j > 0$, $\sum_{j=1}^m p_j=1$, $r_j\in\mathbb{R}^n$ is a realization of $R$ and $\delta_{r_j}$ denotes the unit point mass at $r_j$ for $j \in [m]$. Under this empirical distribution, the investor instead solves the problem:
\begin{align}
{\min}_{w\in\Delta_n}&\; -\textstyle\sum_{j=1}^{m} p_j\ln(r_j^\top w) \ . \label{eq:log_invest}
\end{align}
Note that~\eqref{eq:log_invest} has the same basic format as the PET problem in~\eqref{eq:PET_final}. Indeed,
both of these problems fall under a more general class of problems called ``positive linear inverse problems''~\cite{Vardi_93}.
Define $p_{\min} := \min_{j \in [m]}\{p_j\}>0$ and consider re-scaling the objective function of \eqref{eq:log_invest} by $1/p_{\min}$, which ensures the coefficient in front of each $\ln(\cdot)$ term is at least $1$. Then this re-scaled problem is an instance of $(P)$ with $f(u) := -\textstyle\sum_{j=1}^m (p_j/p_{\min}) \ln (u_j)$, $\mathcal{K}:= \mathbb{R}^m_+$, $h := \iota_{\Delta_n}$, $\mathsf{A}$ defined by $(\mathsf{A} w)_j := r_j^\top w$ for $j \in [m]$, $\theta = 1/p_{\min}$, and $f \in \mathcal{B}_\theta(\mathcal{K})$.
\vspace{1ex}
\noindent {\em 5.\ Computing the analytic center}~\cite{Nest_04}.
Given a nonempty solid polytope $\mathcal{Q} = \{x\in \mathbb{R}^n: a_i^\top x\ge d_i,\;i=1,\ldots,m\}$,
the function
\begin{equation}
b(x) := -\textstyle\sum_{i=1}^m \ln(a_i^\top x- d_i), \quad x\in \mathcal{Q} \ ,
\end{equation} is an $m$-self-concordant barrier for $\mathcal{Q}$, see \cite{Nest_94}. We wish to compute the analytic center of $\mathcal{Q}$ under $b$, which is the optimal solution to the problem ${\min}_{x\in\mathcal{Q}}\, b(x)$. We can transform this problem into an instance of $(P)$ as follows. Define
\begin{equation}
f(x,t) := -\textstyle\sum_{i=1}^m \ln(a_i^\top x- td_i) = -\sum_{i=1}^m \ln(a_i^\top (x/t)- d_i) - m\ln (t)\ ,
\end{equation}
which is an $m$-logarithmically-homogeneous self-concordant barrier on the conic hull of $\mathcal{Q}$, denoted by $\mathrm{cone}\,(\mathcal{Q})$ and defined by $$\mathrm{cone}\,(\mathcal{Q}) := \mathsf{cl}\,\{(x,t)\in\mathbb{R}^{n+1}: x/t\in\mathcal{Q},\; t>0\} = \{(x,t)\in\mathbb{R}^{n+1}: a_i^\top x\ge td_i,\;i=1,\ldots,m,\; t \ge 0\} \ .$$ Then we can formulate the analytic center problem as
\begin{align*}
{\min}_{(x,t)\in\mathbb{R}^{n+1}}\; f(x,t)\quad\st\quad (x,t)\in\mathrm{cone}\,(\mathcal{Q}),\;t=1 \ ,
\end{align*}
which is an instance of $(P)$ with $f(u) := -\textstyle\sum_{i=1}^m \ln (u_i)$, $\mathcal{K}:= \mathbb{R}^m_+$, $h = \iota_{\mathcal{X}}$ where $\mathcal{X}= \{(x,t)\in\mathbb{R}^{n+1} : x \in \mathcal{Q}, \ t=1\}$, $\mathsf{A}$ defined by $(\mathsf{A} (x,t))_i := a_i^\top x- td_i$ for $i \in [m]$, and $\theta = m$.
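For instance, the logarithmic homogeneity of $f(x,t)$ above can be checked directly: for any $s>0$,
\begin{equation*}
f(sx,st) \;=\; -\sum_{i=1}^m \ln\big(s\,(a_i^\top x - t d_i)\big) \;=\; f(x,t) - m\ln(s) \ ,
\end{equation*}
confirming that $\theta = m$.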
The above formulation can be generalized to any nonempty convex compact set $\mathcal{Q}\subseteq\mathbb{R}^{n}$ equipped with a (standard strongly) $\vartheta$-self-concordant barrier $b$ on $\mathcal{Q}$.
From~\cite[Proposition~5.1.4]{Nest_94}, there exists a constant $c\le 20$ for which
\begin{equation}
f(x,t):= c^2(b(x/t)-2\vartheta\ln t) \label{eq:barrier_cone}
\end{equation}
is a $(2c^2\vartheta)$-logarithmically-homogeneous self-concordant barrier on
$\mathrm{cone}\,(\mathcal{Q})$.
Therefore using~\eqref{eq:barrier_cone} the analytic center problem can be formulated as an instance of $(P)$ in a similar way as above.
\vspace{1ex}
\noindent {\em 6.\ $D$-optimal design}~\cite{Fedorov_72}. Given $m$ points $a_1,\ldots,a_m \in \mathbb{R}^n$ whose affine hull is $\mathbb{R}^n$, the $D$-optimal design problem is:
\begin{align}
\min \; -\ln\det \big(\textstyle\sum_{i=1}^m x_i a_ia_i^T\big) \quad \st \;\; x\in\Delta_m \ . \label{eq:Dopt2}
\end{align}
In the domain of statistics the $D$-optimal design problem corresponds to maximizing the determinant of the Fisher information matrix $\mathbb{E}(aa^T)$, see \cite{kiefer1960equivalence}, \cite{atwood1969optimal}, as well as the exposition in \cite{boyd_04}. And in computational geometry, $D$-optimal design arises as a Lagrangian dual problem of the minimum volume covering ellipsoid (MVCE) problem, which dates back at least 70 years to \cite{john}, see Todd \cite{toddminimum} for a modern treatment. Indeed, \eqref{eq:Dopt2} is useful in a variety of different application areas, for example, computational statistics \cite{croux2002location} and data mining \cite{knorr2001robust}.
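As an aside (and only as an illustration, not part of the formal development), the Frank-Wolfe linear minimization for \eqref{eq:Dopt2} takes a particularly simple form: writing $M(x):= \sum_{i=1}^m x_i a_i a_i^T$, the partial derivatives of the objective are $-a_i^\top M(x)^{-1}a_i$, so minimizing the linearization over $\Delta_m$ amounts to selecting the index with the largest value of $a_i^\top M(x)^{-1}a_i$. A minimal Python sketch follows, where the array \texttt{A} holding the points $a_i$ as rows is an assumed input and $M(x)$ is assumed nonsingular.
\begin{verbatim}
import numpy as np

def dopt_fw_vertex(x, A):
    # Frank-Wolfe vertex for min over the simplex of -logdet(sum_i x_i a_i a_i^T);
    # A is (m, n) with rows a_i, x is a point in the unit simplex
    M = A.T @ (x[:, None] * A)          # M(x) = sum_i x_i a_i a_i^T
    # leverage-type scores a_i^T M(x)^{-1} a_i, computed via one linear solve
    scores = np.einsum('ij,ij->i', A, np.linalg.solve(M, A.T).T)
    return int(np.argmax(scores))       # index of the selected vertex e_j
\end{verbatim}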
A recent extension of \eqref{eq:Dopt2} is the design of a supervised learning pipeline for new datasets to maximize the Fisher information, see the recent paper by Yang et al. \cite{yang2020efficient}. The optimization problem they consider is the following variant of \eqref{eq:Dopt2}:
\begin{align}
\min &\; -\ln\det \big(\textstyle\sum_{i=1}^m x_i a_ia_i^T\big) \label{eq:Dopt2.5}\\
\quad \st &\;\; \textstyle\sum_{i=1}^m \bar t_i x_i \le \tau \ \label{eq:Dopt3} \\
\quad &\;\;x_i \in [0,1] \ \mbox{for} \ i \in [m] \ , \label{eq:Dopt4}
\end{align}
where the decision variable $x_i$ models the decision to fit model $i$ or not, $\bar t_i$ is the estimated pipeline running time of pipeline $i$, $\tau$ is the runtime limit, and $a_i$ is the vector of latent meta-features of model $i$ for $i \in [m]$. The constraints \eqref{eq:Dopt4} are a linear relaxation of the (computationally unattractive) desired combinatorial constraints $x_i \in \{0,1\}$, $i \in [m]$. We refer the interested reader to \cite{yang2020efficient} for further details and model variations. Here we see that \eqref{eq:Dopt2.5}-\eqref{eq:Dopt4} is an instance of \eqref{poi} with $f(U) := -\ln\det(U)$, $\theta = n$, $\mathsf{A} x := \textstyle\sum_{i=1}^m x_i a_ia_i^T$, and $h$ is the indicator function of the feasible region of the constraints \eqref{eq:Dopt3}-\eqref{eq:Dopt4}.
As mentioned earlier, the $D$-optimal design problem was one of the primary motivators for the research in this paper.
Indeed, Khachiyan proved that his ``barycentric coordinate descent'' method for this problem -- which turns out to be precisely the Frank-Wolfe method (with exact line search) -- has a computational guarantee of essentially $O\big(n^2/\varepsilon\big)$ iterations to produce an $\varepsilon$-approximate solution. What has been surprising about this result is that the $D$-optimal design problem violates the basic assumption underlying the premise for the analysis of the Frank-Wolfe method, namely $L$-smoothness. Khachiyan's proof used original and rather clever arguments that do not easily carry over elsewhere, which has raised the question of whether or not any of Khachiyan's arguments might underlie any general themes beyond $D$-optimal design, and if so, what might be the mathematical structures driving any such themes. We will answer these questions in the affirmative in Section \ref{darkout} by showing that the Frank-Wolfe method achieves essentially $O((\theta + R_h)^2/\varepsilon)$ iteration complexity when used to tackle any problem of the form \eqref{poi}, where $R_h$ is the variation of $h$ on its domain ($R_h := {\max}_{x,y\in\mathcal{X}}\; \abst{h(x) - h(y)}$) and $f$ is a $\theta$-logarithmically homogeneous self-concordant barrier. When specialized to the $D$-optimal design problem, we have $R_h = 0$ (since $h$ is an indicator function), and $\theta=n$, whereby we recover Khachiyan's $O(n^2/\varepsilon)$ dependence on $\varepsilon$.
In this respect, our results reveal certain intrinsic connections between $\theta$-logarithmic homogeneity and the Frank-Wolfe method.
Interestingly, we note that historically the theory of self-concordant functions was initially developed to present a general underlying theory for Newton's method for barrier methods in convex optimization. However, the results in this paper indicate that a subclass of self-concordant functions, namely the class of $\theta$-logarithmically homogeneous self-concordant barriers, is also tied to an underlying theory for the Frank-Wolfe method.
By way of concluding this discussion, we note that some first-order methods other than Frank-Wolfe have been proposed in the literature to solve problems similar to~\eqref{poi}. For example, \cite{Tran_15} considered~\eqref{poi} with the linear operator $\mathsf{A}$ being invertible and the function $f$ being standard self-concordant but not necessarily a logarithmically-homogeneous barrier. The authors in \cite{Tran_15} proposed a proximal gradient method for solving this problem, and showed that the method globally and asymptotically converges to the unique optimal solution. However, the global convergence rate of this method was not shown.
\subsection{Contributions}\label{sec:contribution}
We summarize our contributions as follows:
\begin{enumerate}
\item We propose a generalized Frank-Wolfe method for solving $(P)$ with $f\in\mathcal{B}_\theta(\mathcal{K})$. We show that the Frank-Wolfe method requires $O((\delta_0 + \theta + R_h)\ln(\delta_0) + (\theta+R_h)^2/\varepsilon)$ iterations to produce an $\varepsilon$-approximate solution of $(P)$, namely $x\in\mathsf{dom}\, F$ such that $F(x) - F^*\le \varepsilon$, where $\delta_0$ denotes the initial optimality gap and $R_h$ denotes the variation of $h$ on its domain. This iteration complexity bound depends on just three (natural) quantities associated with $(P)$: (i) the initial optimality gap $\delta_0$, (ii) the complexity parameter $\theta$ of $f$, and (iii) the variation of $h$ on $\mathcal{X}$. When $h$ is the sum of a convex quadratic function and an indicator function, our algorithm specializes to that in Dvurechensky et al.\ \cite{Dvu_20}. However, our iteration complexity bounds are quite different from those in~\cite{Dvu_20} -- in particular our bounds are affine invariant, more naturally interpretable, and are typically easy to estimate and can yield an {\it a priori} bound on the number of iterations needed to guarantee a desired optimality tolerance $\varepsilon$. These issues are discussed in detail in Remark \ref{rmk:Dvu}.
\item Our analysis also yields $O((\delta_0 + \theta + R_h)\ln(\delta_0) + (\theta+R_h)^2/\varepsilon)$ iteration complexity to produce $x\in\mathsf{dom}\, F$ whose Frank-Wolfe gap (defined in~\eqref{eq:FW_gap} below) is no larger than $\varepsilon$. Since the Frank-Wolfe gap is constructed at each iteration and is often used as the stopping criterion for the Frank-Wolfe method, our result provides a further constructive bound on the number of iterations required to detect a desired optimality tolerance.
\item When specialized to the $D$-optimal design problem, our general algorithm almost exactly recovers the iteration complexity of Khachiyan's specialized method for $D$-optimal design in~\cite{khachiyan1996rounding}. Indeed, the complexities of these two methods have identical dependence on $\varepsilon$, namely $n^2/\varepsilon$. However, Khachiyan's specialized method has an improved ``fixed'' term over our general method by a factor of $\ln(m/n)$; see Remark~\ref{autumn} for details.
\item We present a mirror descent method with adaptive step-size applied to the (Fenchel) dual problem $(D)$ of $(P)$. The dual problem $(D)$ shares a somewhat similar structure to $(P)$ in that its objective function is non-smooth and non-Lipschitz. However, unlike $(P)$, the objective function has unbounded domain. Although these features make the direct analysis of mirror descent rather difficult, through the duality of mirror descent and the Frank-Wolfe method we provide a computational complexity bound for this mirror descent method via the Frank-Wolfe method applied to $(P)$. An application of our mirror descent method for $(D)$ arises in the sub-problem in Bregman proximal-point methods.
\item We apply our method to the TV-regularized Poisson image de-blurring problem. We present computational experiments that point to the potential usefulness of our generalized Frank-Wolfe method on this imaging problem in Section \ref{sec:delurring}.
\end{enumerate}
\subsection{Outline and Notation}
\noindent {\bf Outline.} The paper is organized as follows. In Section~\ref{darkout} we present and analyze our generalized Frank-Wolfe method for $(P)$ when $f \in \mathcal{B}_\theta(\mathcal{K})$, using an adaptive step-size strategy that is a natural extension of the strategy developed in~\cite{Dvu_20}. In Section \ref{doodle} we study the (Fenchel) dual $(D)$ of ${(P)}$ and derive and analyze a dual mirror descent method for solving $(D)$ based on the generalized Frank-Wolfe method for solving $(P)$.
In Section~\ref{experiments} we present computational experiments that point to the potential usefulness of our generalized Frank-Wolfe method on Poisson image de-blurring problems with TV regularization, and we also present computational experiments on the PET problem.
\vspace{1ex}
\noindent {\bf Notation.} Let $\mathbb{R}^n_+ := \{ x \in \mathbb{R}^n : x \ge 0 \}$ and $\mathbb{R}^n_{++} := \{ x \in \mathbb{R}^n : x > 0 \}$. The set of integers $\{1,\ldots,n\}$ is denoted by $[n]$. The domain of a convex function $f$ is denoted by $\mathsf{dom}\, f :=\{x \in \mathbb{R}^n : f(x) < \infty \}$. We use $H(x)$ to denote the Hessian of the function $f$. The interior and relative interior of a set $\mathcal{S}$ are denoted by $\mathsf{int}\,\mathcal{S}$ and $\mathsf{ri}\, \mathcal{S}$, respectively. We use $e$ to denote the vector with entries all equal to 1, $\mathsf{diag}\,(x)$ to denote the diagonal matrix whose diagonal entries correspond to those of $x$, and $\Delta_n$ to denote the standard $(n-1)$-dimensional simplex in $\mathbb{R}^n$, namely $\Delta_n := \{ x \in \mathbb{R}^n : \sum_{i=1}^n x_i =1, \ x \ge 0 \}$. We use $\mathbb{S}_{+}^{n}$ ($\mathbb{S}_{++}^{n}$) to denote the set of $n\times n$ symmetric positive semidefinite (positive definite) matrices, and write $B\in\mathbb{S}_{+}^{n}$ as $B\succeq 0$ and $B\in\mathbb{S}_{++}^{n}$ as $B\succ 0$. The $p$-norm of a vector $x\in\mathbb{R}^n$ is denoted and defined by $\normt{x}_p = (\sum_{i=1}^n \abst{x_i}^p)^{1/p}$.
\section{A generalized Frank-Wolfe method for $(P)$ when $f$ is a \\$\theta$-logarithmically-homogeneous self-concordant barrier}\label{darkout}
In this section we present and analyze a generalized Frank-Wolfe method for the composite optimization problem $(P)$ in the case when $f \in \mathcal{B}_\theta(\mathcal{K})$, using an adaptive step-size strategy that is a natural extension of the strategy developed in Dvurechensky et al. \cite{Dvu_20}. We assume throughout that $\mathcal{X} := \mathsf{dom}\, h$ is a convex and compact set, and that $\mathsf{A}(\mathcal{X})\cap\mathsf{dom}\, f\ne \emptyset$. These two assumptions together with the differentiability of $f$ on $\mathsf{int}\,\mathcal{K}$ ensure that $(P)$ has at least one optimal solution which we denote by $x^*$ and hence $F^*=F(x^*)$.
Let us introduce some important notation. For any $u \in \mathsf{int}\,\mathcal{K}$, the Hessian $H(u)$ of $f$ is used to define the local (Hilbert) norm $\| \cdot \|_u$ defined by:
$$ \|w \|_u := \sqrt{\lranglet{w}{H(u)w}} \ \ \mathrm{for~all~} w \in \mathbb{R}^m \ . $$
The Dikin ball $\mathcal{D}(u,1)$ at $u\in \mathsf{int}\,\mathcal{K}$ is defined by $$ \mathcal{D}(u,1) := \{v\in\mathcal{K}:\normt{v-u}_u<1\} \ , $$ and it can be shown that $\mathcal{D}(u,1)\subseteq \mathsf{int}\,\mathcal{K}$, see Nesterov and Nemirovski~\cite[Theorem~2.1.1]{Nest_94}. The self-concordant function $f$ is well-behaved inside the Dikin ball as we will review shortly. Define the univariate function $\omega$ and its Fenchel conjugate $\omega^*$ as follows:
\begin{equation}\label{eq:omegas} \omega(a) := -a - \ln(1-a) \;\; \forall\, a < 1 \ , \ \ \ \mathrm{and} \ \ \omega^*(a) := a - \ln(1+a) \;\; \forall\, a > -1 \ .
\end{equation}
It turns out that $f$ can be nicely upper-bounded inside the Dikin ball, namely:
\begin{align}
f(v)&\le f(u) + \lranglet{\nabla f(u)}{v-u} + \omega\left(\normt{v-u}_u\right) \ \forall\,u\in\mathsf{int}\,\mathcal{K}, \;\forall \, v \in \mathcal{D}(u,1) \ ,\label{eq:curvature_SC}
\end{align}
see Nesterov~\cite[Theorem~4.1.8]{Nest_04}.
We now develop our generalized Frank-Wolfe method for the composite optimization problem $(P)$ under the condition that $f \in \mathcal{B}_\theta(\mathcal{K})$, and whose formal rules are presented in Algorithm~\ref{algo:FW_SC}. First, we choose any starting point $x^0\in\mathcal{X}$ such that $\mathsf{A} x^0\in\mathsf{dom}\, f(=\mathsf{int}\,\mathcal{K})$, namely $x^0\in\bar{\mathcal{X}}:=\mathcal{X}\cap\mathsf{A}^{-1}(\mathsf{dom}\, f)$. Indeed, from the description below, we will see that the whole sequence of iterates $\{x^k\}_{k\ge 0}$ generated by our method lies in $\bar{\mathcal{X}}$. Given the current iterate $x^k\in\bar{\mathcal{X}}$,
the method first computes the gradient $\nabla f({\mathsf{A}}x^k)$ and then solves for a minimizer $v^k$ of the generalized Frank-Wolfe sub-problem given by
$$v^k\in{\argmin}_{x\in\mathbb{R}^n} \lranglet{\nabla f(\mathsf{A} x^k)}{\mathsf{A} x} + h(x)\ .$$ The next iterate is then determined as a convex combination of $x^k$ and $v^k$: $x^{k+1} \gets x^k + \alpha_k (v^k - x^k)$ for some $\alpha_k \in [0,1]$, where $\alpha_k$ is the step-size at iteration $k$. For $L$-smooth functions $f$, the step-size can be chosen by a variety of strategies, including simple rules such as $\alpha_k = \tfrac{2}{(k+2)}$, exact line-search to minimize $f(x^k + \alpha(v^k - x^k))$ on $\alpha\in [0,1]$, or an adaptive strategy based on curvature information, etc. Here we present an adaptive strategy based on the upper bound model \eqref{eq:curvature_SC}, which can also be viewed as an extension of the adaptive strategy used in \cite{Dvu_20} and which itself is an extension of the adaptive strategy in Demyanov and Rubinov~\cite{dem1967minimization}. Define the Frank-Wolfe gap (``FW-gap") $G_k$ by
\begin{equation}
G_k := \lranglet{\nabla f(\mathsf{A} x^k)}{\mathsf{A}(x^k - v^k)}+ h(x^k) - h(v^k) \ , \label{eq:FW_gap}
\end{equation}
and the optimality gap $\delta_k := F(x^k) - F^*$. Note that $G_k \ge 0$ and in fact by the convexity of $f$ it holds that
\begin{align}
\delta_k &= (f(\mathsf{A} x^k) - f(\mathsf{A} x^*)) + (h(x^k) - h(x^*))\nn\\
&\le \lranglet{\nabla f(\mathsf{A} x^k)}{\mathsf{A}(x^k - x^*)} + (h(x^k) - h(x^*))\nn\\
&\le \lranglet{\nabla f(\mathsf{A} x^k)}{\mathsf{A}(x^k - v^k)} + (h(x^k) - h(v^k)) = G_k \ , \label{squirrels}
\end{align}
hence $G_k$ is indeed an upper bound on the optimality gap $\delta_k$. Denoting $D_k := \|\mathsf{A}(v^k-x^k)\|_{\mathsf{A} x^k}$, we then have that for any $\alpha\ge 0$,
\begin{equation}
f(\mathsf{A} x^{k} + \alpha\mathsf{A} (v^k-x^k)) \le f(\mathsf{A} x^k) - \alpha \lranglet{\nabla f(\mathsf{A} x^k)}{\mathsf{A} (x^k - v^k)} + \omega\left(\alpha D_k\right) \ .
\label{eq:curvature_SC_algo}
\end{equation}
(Note that if $\alpha<1/D_k$, then~\eqref{eq:curvature_SC_algo} follows from \eqref{eq:curvature_SC}; otherwise, by the definition of $\omega$ in~\eqref{eq:omegas}, we have $\omega\left(\alpha D_k\right)=+\infty$ and~\eqref{eq:curvature_SC_algo} still holds.)
Also, by the convexity of $h$, we have
\begin{equation}
h(x^{k} + \alpha(v^k-x^k))\le (1-\alpha)h(x^{k}) + \alpha h(v^k) = h(x^{k}) - \alpha(h(x^k) - h(v^{k})) \ .\label{eq:ub_h}
\end{equation}
Adding~\eqref{eq:curvature_SC_algo} and~\eqref{eq:ub_h} together, we obtain
\begin{equation}
F(x^{k} + \alpha(v^k-x^k))\le F(x^k) - \alpha G_k + \omega\left(\alpha D_k\right) \ , \quad \forall\,\alpha\ge 0 \ , \label{eq:ub_F}
\end{equation}
and optimizing the right-hand-side over $\alpha \in [0,1]$
yields the step-size:
$$ \alpha_k := \min\left\{\frac{G_k}{D_k(G_k+D_k)}, 1\right\}.$$
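Indeed, since $\omega'(a) = a/(1-a)$ for $a<1$, setting the derivative in $\alpha$ of the right-hand side of \eqref{eq:ub_F} to zero gives
\begin{equation*}
-G_k + \frac{\alpha D_k^2}{1-\alpha D_k} \;=\; 0 \quad\Longleftrightarrow\quad \alpha \;=\; \frac{G_k}{D_k(G_k+D_k)} \ ,
\end{equation*}
which lies in $[0,1/D_k)$, and truncating to the interval $[0,1]$ yields the rule above.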
Notice that this step-size specializes precisely to the adaptive step-size developed in \cite{Dvu_20} in the case when the function $h$ is the indicator function $\iota_{\mathcal{X}}$ of a compact convex set $\mathcal{X}$, since in this case $h(x^k)=h(v^k)=0$ for all $k$ and hence $G_k$ turns out to be the standard ``gap function'' $\lranglet{\nabla f(\mathsf{A} x^k)}{\mathsf{A}(x^k - v^k)}$ as used in \cite{Dvu_20}. In addition, note that the step-size $\alpha_k$ ensures that $x^{k+1}\in\bar{\mathcal{X}}$.
\begin{algorithm}[t!]
\caption{(generalized) Frank-Wolfe Method for composite optimization involving $f\in \mathcal{B}_\theta(\mathcal{K})$ with adaptive step-size}\label{algo:FW_SC}
\begin{algorithmic}
\State {\bf Input}: Starting point $x^0\in\bar{\mathcal{X}}:=\mathcal{X}\cap\mathsf{A}^{-1}(\mathsf{dom}\, f)$
\State {\bf At iteration $k\in\{0,1,\ldots\}$}:
\begin{enumerate}
{\setlength\itemindent{10pt} \item \label{item:LMO} Compute $\nabla f(\mathsf{A} x^k)$ and $v^k\in\argmin_{x\in\mathbb{R}^n} \lranglet{\nabla f(\mathsf{A} x^k)}{\mathsf{A} x} + h(x)$}
{\setlength\itemindent{10pt} \item \label{item:step_size_SC} Compute $G_k := \lranglet{\nabla f(\mathsf{A} x^k)}{\mathsf{A}(x^k - v^k)}+ h(x^k) - h(v^k)$ and $D_k:= \|\mathsf{A}(v^k-x^k)\|_{\mathsf{A} x^k}$ , and compute the step-size:
\begin{align}
\alpha_k := \min\left\{\frac{G_k}{D_k(G_k+D_k)} \ , 1\right\} \label{eq:step_size_SC}
\end{align}
\setlength\itemindent{10pt} \item Update $x^{k+1}:= x^k + \alpha_k (v^k-x^k)$
}
\end{enumerate}
\end{algorithmic}
\end{algorithm}
Before presenting our analysis of Algorithm \ref{algo:FW_SC} we make two remarks. First, notice that the complexity parameter $\theta$ of $f$ is not needed to run Algorithm~\ref{algo:FW_SC}, but it will play a central role in analyzing the iteration complexity of the algorithm. In a sense, Algorithm~\ref{algo:FW_SC} automatically adapts to the value of $\theta$.
Second, for most applications --- including all of the applications discussed in Section~\ref{sec:applications} --- the computational cost of computing $D_k$ (in Step~\ref{item:step_size_SC}) is of the same order as computing $G_k$, and grows linearly in the ambient dimension of the variable $x$. (Note that here our discussion focuses on the dependence of the computational cost on the dimension of $x$ only.)
To see this, let us fix any $v,x\in\mathsf{dom}\, F$. For the first application in Section~\ref{sec:applications} (where $N$ denotes the dimension), both $u:=\mathsf{A} x$ and $w:=\mathsf{A} v$ can be computed in $O(qN)$ time, due to the fact that the matrix representation of $\mathsf{A}$ has only $O(qN)$ nonzeros.
Since $D_k^2= \sum_{l=1}^N (u_l-w_l)^2/u_l^2$, it can be computed in $O(qN)$ time. Using similar reasoning, we easily see that for the second, third and fourth applications (where $n$ denotes the dimension), $D_k$ can be computed in $O(mn)$ time. For the last application (namely the $D$-optimal design problem), where the dimension is denoted by $m$, $D_k$ can be computed in $O(mn^2+n^3)$ time for $k=0$ and $O(n^2)$ time for $k\ge 1$; for details see Appendix~\ref{app:D_k}.
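To fix ideas before the analysis, the following minimal Python sketch shows one possible implementation of Algorithm~\ref{algo:FW_SC} in the simplex-constrained case $h=\iota_{\Delta_n}$ (so that Step~1 reduces to selecting a vertex, as in the PET and $D$-optimal design examples above). The callables \texttt{grad\_f} and \texttt{hess\_f}, returning $\nabla f(u)$ and $H(u)$, the matrix \texttt{A}, and a feasible starting point \texttt{x0} with $\mathsf{A} x^0\in\mathsf{int}\,\mathcal{K}$ are assumed inputs; this is a sketch for illustration only and not the code used in our experiments.
\begin{verbatim}
import numpy as np

def generalized_frank_wolfe(grad_f, hess_f, A, x0, tol=1e-6, max_iter=10000):
    # Algorithm 1 specialized to h = indicator of the unit simplex
    x = x0.copy()
    G = np.inf
    for _ in range(max_iter):
        u = A @ x
        g = A.T @ grad_f(u)                  # gradient of x -> f(Ax)
        v = np.zeros_like(x)
        v[int(np.argmin(g))] = 1.0           # vertex minimizing the linearization
        G = g @ (x - v)                      # Frank-Wolfe gap G_k (h-terms vanish)
        if G <= tol:
            break
        d = A @ (v - x)
        D = np.sqrt(d @ (hess_f(u) @ d))     # local norm D_k = ||A(v - x)||_{Ax}
        alpha = min(G / (D * (G + D)), 1.0)  # adaptive step-size of Step 2
        x = x + alpha * (v - x)
    return x, G
\end{verbatim}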
\subsection{Computational guarantees for Algorithm \ref{algo:FW_SC}} \label{sec:comp_guarantee}
We now present our computational guarantees for Algorithm \ref{algo:FW_SC}. These guarantees depend on only three natural quantities associated with $(P)$: (i) the initial optimality gap $\delta_0$, (ii) the complexity parameter $\theta$ of $f$, and (iii) the {\em variation} of $h$ on $\mathcal{X}$, which is defined as:
\begin{equation}\label{range}
R_h := {\max}_{x,y\in\mathcal{X}}\; \abst{h(x) - h(y)} \ .
\end{equation}
Regarding the variation $R_h$ we mention two cases in particular:
\begin{enumerate}
\item when $h = \iota_{\mathcal{X}}$, we have $R_h = 0$, and
\item when $h$ is $L$-Lipschitz on $\mathcal{X}$ with respect to some norm $\normt{\cdot}$, we have $R_h\le L D_{\mathcal{X},\normt{\cdot}}$, where
\begin{equation}
D_{\mathcal{X},\normt{\cdot}}:= {\sup}_{x,x'\in\mathcal{X}}\,\normt{x-x'}<+\infty \label{eq:diameter}
\end{equation}
is the diameter of $\mathcal{X}$ under $\normt{\cdot}$. And in particular if $h=\normt{\cdot}$, then $R_h \le D_{\mathcal{X},\normt{\cdot}}$.
\end{enumerate}
\begin{theorem} \label{thm:SC}
Suppose that $f \in \mathcal{B}_\theta(\mathcal{K})$ and that Algorithm~\ref{algo:FW_SC} is initiated at $x^0 \in \bar{\mathcal{X}}$. Let $\delta_0$ denote the initial optimality gap.
\begin{enumerate}
\item \label{item:behavior_Algo_SC} At iteration $k$ of Algorithm~\ref{algo:FW_SC} the following hold:
\begin{enumerate}[leftmargin = 2.5em]
\item If $G_k > \theta + R_h$, then the optimality gap decreases at least linearly at the iteration:
\begin{equation}
\delta_{k+1}\le \left(1-\frac{1}{5.3(\delta_0+\theta+R_h)}\right)\delta_k \ , \label{eq:rmflin_conv_SC}
\end{equation}
\item If $G_k \le \theta + R_h$, then the inverse optimality gap increases by at least a constant at the iteration:
\begin{equation}
\frac{1}{\delta_{k+1}} \ge \frac{1}{\delta_k} + \frac{1}{12 (\theta+R_h)^2} \ , \ \ \ \mbox{and} \label{eq:sublin_conv_SC}
\end{equation}
\item \label{item:K_lin} The number of iterations $K_{\mathrm{Lin}}$ where $G_k > \theta+R_h$ occurs is bounded from above as follows: $K_{\mathrm{Lin}} \le \lceil 5.3(\delta_0 + \theta+R_h)\ln(10.6\delta_0) \rceil$.
\end{enumerate}
\item \label{item:K_eps} Let $K_\varepsilon$ denote the number of iterations required by Algorithm \ref{algo:FW_SC} to obtain $\delta_k \le \varepsilon$. Then:
\begin{equation}
K_\varepsilon \le \lceil 5.3(\delta_0 + \theta+R_h)\ln(10.6\delta_0) \rceil + \left\lceil12(\theta+R_h)^2 \max\left\{\frac{1}{\varepsilon} - \frac{1}{\delta_0} \ , 0 \right\}\right\rceil \ .
\label{rob6miles}
\end{equation}
\item \label{item:FWGAP} Let $\mathrm{FWGAP}_\varepsilon$ denote the number of iterations required by Algorithm \ref{algo:FW_SC} to obtain $G_k \le \varepsilon$. Then:
\begin{equation}
\mathrm{FWGAP}_\varepsilon \le \lceil 5.3(\delta_0 + \theta+R_h)\ln(10.6\delta_0) \rceil + \left\lceil \frac{24(\theta+R_h)^2}{\varepsilon} \right\rceil \ .
\label{rob7miles}
\end{equation}
\end{enumerate}
\end{theorem}\hfill$\square$
Before we present our proof, let us make a few remarks about the results in Theorem \ref{thm:SC}.
\begin{remark}[Discussion of complexity results]\label{itslate}
Theorem~\ref{thm:SC} indicates that if $f\in\mathcal{B}_\theta(\mathcal{K})$, then the iteration complexity to obtain an $\varepsilon$-optimal solution using Algorithm~\ref{algo:FW_SC} is
\begin{equation}
O\big((\delta_0 + \theta + R_h)\ln(\delta_0) + (\theta + R_h)^2/\varepsilon\big) \ , \label{eq:complexity_SC}
\end{equation}
which only depends on three measures, namely (i) the logarithmic homogeneity constant (also known as the {\em complexity parameter}) $\theta$ of $f$, (ii) the initial optimality gap $\delta_0$, and (iii) the variation $R_h$ of $h$ on its domain, in addition to the desired optimality gap $\varepsilon$. Furthermore, in most of the applications discussed in Section \ref{sec:applications} (namely applications (2.), (4.), (5.), and (6.)) we have $h = \iota_{\mathcal{X}}$ for some $\mathcal{X}$ and hence $R_h = 0$, and so the iteration complexity depends only on $\theta$ and $\delta_0$.
It is interesting to note in the case when $h=\iota_\mathcal{X}$ is the indicator function of a compact region $\mathcal{X}$, that the iteration complexity bound \eqref{eq:complexity_SC} does not rely on any measure of size of the feasible region $\mathcal{X}$, since in this case $R_h = 0$. And even when $R_h >0$, the complexity bound \eqref{eq:complexity_SC} has no specific dependence on the behavior of $\bar f$ on $\mathcal{X}$. In this way we see that the only way that the behavior of $\bar f$ enters into the iteration complexity is simply through the value of $\theta$. (This is in contrast to the traditional set-up of the Frank-Wolfe method for $L$-smooth optimization, where the fundamental iteration complexity depends on a bound on the {\em curvature} of the function $\bar f$ on the feasible region $\mathcal{X}$ -- which we will discuss later in Remark \ref{rmk:Dvu} -- which in turn can grow quadratically in the diameter of the feasible region, see for example \cite{Jaggi_13,Freund_16}.)
We also note that the iteration complexity bound \eqref{eq:complexity_SC} is also independent of any properties of the linear operator $\mathsf{A}$, and in this way its dependence on a specific data instance $\mathsf{A}$ is only through the initial optimality gap $\delta_0$. Therefore \eqref{eq:complexity_SC} is data-instance independent except for the way that the data $\mathsf{A}$ affects the initial optimality gap.
\end{remark}
\begin{remark}[Comparison with Khachiyan~\cite{khachiyan1996rounding} for $D$-optimal design]\label{autumn}
Let us specialize the complexity bound in~\eqref{eq:complexity_SC} to the $D$-optimal design problem in~\eqref{eq:Dopt2}, and compare it with the complexity bound in Khachiyan~\cite{khachiyan1996rounding}. Note that for the problem in~\eqref{eq:Dopt2} we have $\theta=n$, i.e., the dimension of the ambient space of the points $a_1, \ldots, a_m$. In addition, if we choose the starting point $x^0 = (1/m)e$, where $e:=(1,\ldots,1)\in\mathbb{R}^m$,
then
\begin{equation}
\delta_0\le n\ln(m/n) \label{eq:bound_delta_0}
\end{equation}
(which we show in Appendix~\ref{app:proof_bound_delta_0}), and then based on~\eqref{eq:bound_delta_0} and $R_h =0$, the iteration complexity bound in~\eqref{eq:complexity_SC} becomes
\begin{equation}
O\big(n\ln(m/n)(\ln n + \ln\ln(m/n)) + n^2/\varepsilon\big) \ . \label{eq:comp_Dopt_ours}
\end{equation}
Using the same starting point, Khachiyan's Frank-Wolfe method \cite{khachiyan1996rounding} uses exact line-search (based on a clever observation from the Inverse Matrix Update formula \cite{hager}), and attains the complexity bound
\begin{equation}
O\big(n(\ln n + \ln\ln(m/n)) + n^2/\varepsilon\big) \ . \label{eq:comp_Dopt_Khachiyan}
\end{equation}
Observe that~\eqref{eq:comp_Dopt_ours} has the exact same dependence on $\varepsilon$ as~\eqref{eq:comp_Dopt_Khachiyan}, namely $O(n^2/\varepsilon)$, but its first term
is inferior to~\eqref{eq:comp_Dopt_Khachiyan} by the factor $O(\ln(m/n))$. The improvement in the first term of Khachiyan's bound over the bound in \eqref{eq:comp_Dopt_ours} is due to his improved estimate of the linear convergence rate in the case $G_k > \theta$ in Algorithm~\ref{algo:FW_SC}, which arises from exploiting an exact line-search on $f$. This is in contrast with our method, which only does an exact line-search on the upper bound model of $f$ in \eqref{eq:ub_F}. A detailed analysis of this last point is given in Remark \ref{stormymonday} at the end of this section.
\end{remark}
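To make the comparison above concrete, the following is a minimal numerical sketch of Algorithm~\ref{algo:FW_SC} specialized to $D$-optimal design, under the assumption that \eqref{eq:Dopt2} is the standard formulation $\min_{p\in\Delta_m}\,-\ln\det\big(\sum_{i=1}^m p_i a_ia_i^\top\big)$ (which is consistent with $\theta = n$ and $h=\iota_{\Delta_m}$, so that $R_h=0$); the function and variable names are ours and purely illustrative.
\begin{verbatim}
# Minimal numpy sketch of Algorithm FW_SC for D-optimal design, assuming the
# formulation  min_{p in Delta_m}  -ln det( sum_i p_i a_i a_i^T ).
# Here theta = n and h is the indicator of the simplex, so R_h = 0.
import numpy as np

def dopt_frank_wolfe(A, num_iters=500, tol=1e-8):
    """A: m x n array whose rows are the points a_1, ..., a_m."""
    m, n = A.shape
    p = np.full(m, 1.0 / m)                  # starting point p^0 = (1/m) e
    for _ in range(num_iters):
        Sigma = A.T @ (p[:, None] * A)       # Sigma(p) = sum_i p_i a_i a_i^T
        Sinv = np.linalg.inv(Sigma)
        lev = np.einsum('ij,jk,ik->i', A, Sinv, A)  # a_i^T Sigma^{-1} a_i
        i_star = int(np.argmax(lev))         # LMO over the simplex: vertex e_{i*}
        g = lev[i_star]
        G = g - n                            # Frank-Wolfe gap G_k (>= 0)
        if G <= tol:
            break
        D = np.sqrt(g * g - 2.0 * g + n)     # D_k = ||A(v^k - p^k)||_{A p^k}, closed form
        alpha = min(G / (D * (G + D)), 1.0)  # adaptive step-size alpha_k
        p *= (1.0 - alpha)
        p[i_star] += alpha
    return p

# usage: p_star = dopt_frank_wolfe(np.random.randn(200, 10))
\end{verbatim}
The only nontrivial computation per iteration is the vector of leverage scores $a_i^\top\Sigma(p)^{-1}a_i$; Khachiyan's exact line-search variant maintains the same quantities and exploits the Inverse Matrix Update formula mentioned above to carry out the line-search in closed form.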
\begin{remark}[Comparison with Dvurechensky et al.~\cite{Dvu_20}] \label{rmk:Dvu}
The recent work of Dvurechensky et al.~\cite{Dvu_20} considers the Frank-Wolfe method for the problem $\min_{x\in\mathcal{X}}\, \bar F(x)$,
where $\mathcal{X}$ is nonempty, convex and compact and $\bar F$ is a {non-degenerate} (strongly) $M$-self-concordant function for some $M>0$. This means $\bar{F}$ is convex and three-times differentiable on $\mathsf{dom}\, \bar{F}$, $\nabla^2 \bar{F}(x)\succ 0$ for all $x\in\mathsf{dom}\, \bar{F}$ (this is the definition that $\bar{F}$ is non-degenerate), and
\begin{equation}
\abst{D^3\bar{F}(x)[z,z,z]}\le 2M^{-1/2}(\lranglet{\nabla^2 \bar{F}(x)z}{z})^{3/2} \ \quad \forall\,x\in\mathsf{dom}\, \bar{F} \ , \;\;\forall\,z\in\mathbb{R}^n \ . \label{eq:SC_def_M}
\end{equation}
For convenience of comparison, henceforth let $M=1$.
When $h$ is the sum of a convex quadratic function and an indicator function, i.e., $h(x) = \tfrac{1}{2}\lranglet{x}{Qx} + \xi^\top x + \iota_\mathcal{X}(x)$ for some $Q \succeq 0$ and $\xi\in\mathbb{R}^n$,
then our method coincides with that in Dvurechensky et al.\ \cite{Dvu_20} with $\bar{F}(x) = f(\mathsf{A} x) + \tfrac{1}{2}\lranglet{x}{Qx} + \xi^\top x$.
The complexity bound developed in \cite{Dvu_20} for computing an $\varepsilon$-optimal solution is:
\begin{equation}
O\left(\sqrt{L(x^0)}D_{\mathcal{X},\normt{\cdot}_2}\ln\left(\frac{\delta_0}{\sqrt{L(x^0)}D_{\mathcal{X},\normt{\cdot}_2}}\right) + \frac{L(x^0)D_{\mathcal{X},\normt{\cdot}_2}^2}{\varepsilon}\right) \ , \label{eq:comp_Dvu}
\end{equation}
where
\begin{align}
\mathcal{S}(x^0) &:= \big\{x\in\mathsf{dom}\, \bar{F}\cap\mathcal{X}\,:\, \bar{F}(x) \le \bar{F}(x^0) \big\} \ , \mbox{and} \label{eq:S_Dvu}\\
L(x^0) &:= {\max}_{x\in \mathcal{S}(x^0)} \;\;\lambda_{\max} (\nabla^2 \bar{F}(x) ) <+\infty \ . \label{eq:L_Dvu}
\end{align}
In~\eqref{eq:L_Dvu} $\lambda_{\max} (\nabla^2 \bar{F}(x))$ denotes the largest eigenvalue of $\nabla^2 \bar{F}(x)$, and in \cite{Dvu_20} it is further assumed that $\bar{F}$ is non-degenerate (which necessarily presumes that $\rank (\mathsf{A}) = n $)
in order to ensure the compactness of $\mathcal{S}(x^0)$, and hence the finiteness of $L(x^0)$.
It is instructive to compare the two iteration complexity bounds~\eqref{eq:complexity_SC} and~\eqref{eq:comp_Dvu}, and we note the following advantageous features of our bound \eqref{eq:complexity_SC}:
\begin{itemize}
\item {\em Affine invariance.} Affine invariance is an important structural property of certain algorithms; for instance Newton's method is affine invariant whereas the steepest descent method is not. It is well known that the Frank-Wolfe method is affine invariant. Current state-of-the-art complexity analysis of the Frank-Wolfe method for $L$-smooth functions yields an appropriate affine-invariant complexity bound by utilizing the so-called {\em curvature constant} $C_{\bar{F}}$ of Clarkson \cite{clarkson} defined by
\begin{equation}
C_{\bar{F}}:=
\max_{x,y\in\mathcal{X},\alpha\in[0,1]}\,\frac{2}{\alpha^2}(\bar{F}(x+\alpha(y-x))-\bar{F}(x) - \alpha\lranglet{\nabla \bar{F}(x)}{y-x}) \ ,
\end{equation}
which is a finite affine-invariant quantity \cite{clarkson} when $\bar F$ is $L$-smooth. (This is the same curvature which was alluded to in Remark \ref{itslate}.) Of course $C_{\bar{F}}$ is not typically finite when $\bar{F}$ is self-concordant. The complexity bound in \eqref{eq:comp_Dvu} depends on measures that are tied to the Euclidean inner product and norm, namely $D_{\mathcal{X},\normt{\cdot}_2}$ and $L(x^0)$, and are not affine-invariant, even though the Euclidean norm plays no part in the algorithm. (Under an invertible affine transformation of the variables these measures will change but the performance of the Frank-Wolfe method will not change.) In contrast, all the quantities in our complexity bounds, namely $\delta_0$, $\theta$ and $R_h$ are affine-invariant, therefore the complexity bounds in \eqref{eq:complexity_SC} are affine-invariant.
\item {\em Interpretability.} Apart from $\varepsilon$, our complexity result only depends on $\delta_0$, $\theta$, and $R_h$, all of which admit natural behavioral interpretations.
Specifically, $\delta_0$ measures the initial sub-optimality gap, $\theta$ is the ``complexity parameter'' of the barrier $f$ (in the lexicon of Renegar \cite{Renegar_01}), and $R_h$ measures the variation of $h$ over the set $\mathcal{X}$.
\item {\em Ease of parameter estimation.} Note that all of the three parameters $\delta_0$, $\theta$, and $R_h$ in our complexity bound~\eqref{eq:complexity_SC} are either easy to know or easy to appropriately bound.
Given a logarithmically-homogeneous self-concordant barrier $f$, its complexity value $\theta$ is typically known {\it a priori} or can be easily determined using~\ref{item:grad_identity} of Lemma \ref{lem:LHSCB}.
Since $\delta_0 \le G_0$, a natural upper bound on $\delta_0$ is the initial FW-gap $G_0$, which is computed in the second step at iteration $k=0$ of Algorithm \ref{algo:FW_SC}. Regarding $R_h$, note that $R_h = 0 $ when $h$ is the indicator function of a convex set $\mathcal{X}$, namely $h = \iota_\mathcal{X}$. When $h(x) := \xi^\top x + \iota_\mathcal{X}(x)$, $R_h$ can be computed exactly as the difference of two linear optimization optimal values on $\mathcal{X}$, or it can be upper bounded using $$R_h = {\max}_{x,x'\in\mathcal{X}}\,\abst{\xi^\top (x-x')}\le \normt{\xi}_*D_{\mathcal{X},\normt{\cdot}} \ , $$ in the case when the norm $\normt{\cdot}$ can be chosen to yield easily computable values of $D_{\mathcal{X},\normt{\cdot}}$ (see the small numerical illustration immediately after this remark). Apart from this case, there also exist many other cases where the simple nature of $h$ and $\mathcal{X}$ yield easily computable upper-bounds on $R_h$.
\end{itemize}
\end{remark}
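The following is a small numerical illustration of the last bullet point above, for the (purely illustrative) choice $h(x) = \xi^\top x + \iota_{\mathcal{X}}(x)$ with $\mathcal{X} = \{x : 0\le x\le Me\}$: here the two linear optimization values defining $R_h$ are available in closed form, and the bound $R_h\le \normt{\xi}_* D_{\mathcal{X},\normt{\cdot}}$ is tight when $\normt{\cdot}$ is the sup-norm.
\begin{verbatim}
# Exact R_h versus the norm bound for h(x) = xi^T x + indicator of the box [0, M]^n.
import numpy as np

xi, M = np.array([0.5, -1.2, 2.0]), 3.0
max_val = M * np.sum(np.maximum(xi, 0.0))   # max of xi^T x over the box
min_val = M * np.sum(np.minimum(xi, 0.0))   # min of xi^T x over the box
R_h = max_val - min_val                     # exact value of R_h
bound = np.sum(np.abs(xi)) * M              # ||xi||_1 * D_{X, sup-norm}
assert np.isclose(R_h, bound)               # the bound is tight for the box
\end{verbatim}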
\subsection{Proof of Theorem \ref{thm:SC}}
We first state some facts about $\theta$-logarithmically homogeneous self-concordant barrier functions.
\begin{lemma}[{see Nesterov and Nemirovskii~\cite[Corollary~2.3.1, Proposition~2.3.4, and Corollary 2.3.3]{Nest_94}}]\label{lem:LHSCB}
If $f\in \mathcal{B}_\theta(\mathcal{K})$,
then for any $u\in\mathsf{int}\,\mathcal{K}$,
we have
\begin{enumerate}[start = 4,label = {\rm (P\arabic*)}, leftmargin = 3.9em]
\item \label{item:def_SCB} $\abs{\lranglet{\nabla f(u)}{w}} \le \sqrt{\theta}\|w\|_u$ $\quad \forall\,w\in\mathbb{R}^m$,
\item \label{item:recession_cone} $\|v\|_u \le -\lranglet{\nabla f(u)}{v}$ $\quad \forall\,v\in\mathcal{K}$,
\item \label{item:Hessian_grad} $\lranglet{\nabla f(u)}{w} = -\lranglet{H(u)u}{w}$ $\quad \forall\,w\in\mathbb{R}^m$,
\item \label{item:grad_identity} $\lranglet{\nabla f(u)}{u} = -\theta$, and
\item \label{item:thetageone} $\theta \ge 1$. \hfill$\square$
\end{enumerate}
\end{lemma}
We also introduce some properties of the function $\omega^*$ (cf.~\eqref{eq:omegas}) and present an ``old'' property of the logarithm function.
\begin{prop}\label{lem:omega_conj}
The function $\omega^*$ is strictly increasing on $[0,+\infty)$, and
\begin{align}
\omega^*(s)&\ge s^2/3 \quad\;\; \ \forall s \in [0,1/2] \ , \ \mbox{and} \label{eq:omega*_quad}\\
\omega^*(s)&\ge s/5.3 \quad\;\; \forall s \ge 1/2\ . \label{eq:omega*_linear}
\end{align}
\end{prop}
\begin{proof}
See Appendix~\ref{app:proof_omega_conj}.
\end{proof}
\begin{prop}\label{lem:Karmarkar}
\begin{equation}
\ln(1+s)\ge s - \frac{s^2}{2(1-\abs{s})} \quad\forall\ s\in(-1,1) \ .
\end{equation}
\end{prop}
\begin{proof}
See Appendix~\ref{app:proof_Karmarkar}.
\end{proof}
We have the following inequality concerning values of $D_k$ and $G_k$ computed in Step \ref{item:step_size_SC} of Algorithm \ref{algo:FW_SC}. For convenience, in the following, define
$$\widetilde{G}_k := \lranglet{\nabla f(\mathsf{A} x^k)}{\mathsf{A}(x^k - v^k)} \quad \mbox{and}\quad \beta_k := h(x^k) - h(v^k) \ ,$$
so that $G_k = \widetilde{G}_k + \beta_k$. Also, by the definition of $R_h$, we know that $\abst{\beta_k}\le R_h$.
\begin{prop}\label{keyinequality} For all $k \ge 0$ it holds that \begin{equation}\label{eq:bound_Dk}
D_k \le G_k +\theta+ R_h \ . \end{equation}
\end{prop}
\begin{proof} We have:
\begin{align}
D_k^2 &= \lranglet{H(\mathsf{A} x^k)\mathsf{A} v^k}{\mathsf{A} v^k} - 2\lranglet{H(\mathsf{A} x^k)\mathsf{A} x^k}{\mathsf{A} v^k} + \lranglet{H(\mathsf{A} x^k)\mathsf{A} x^k}{\mathsf{A} x^k} \ .\label{eq:bound_Dk_0}
\end{align}
By~\ref{item:recession_cone} we see that
\begin{equation}\label{eq:bound_Dk_1}
\lranglet{H(\mathsf{A} x^k)\mathsf{A} v^k}{\mathsf{A} v^k} \le \lranglet{\nabla f(\mathsf{A} x^k)}{\mathsf{A} v^k}^2 = (-\widetilde{G}_k + \lranglet{\nabla f(\mathsf{A} x^k)}{\mathsf{A} x^k})^2 = (\widetilde{G}_k + \theta)^2 \ ,
\end{equation}
where the last equality above uses~\ref{item:grad_identity}.
In addition, from~\ref{item:Hessian_grad} and~\ref{item:grad_identity} we have
\begin{align}
-2\lranglet{H(\mathsf{A} x^k)\mathsf{A} x^k}{\mathsf{A} v^k} + \lranglet{H(\mathsf{A} x^k)\mathsf{A} x^k}{\mathsf{A} x^k} &= 2\lranglet{\nabla f(\mathsf{A} x^k)}{\mathsf{A} v^k} - \lranglet{\nabla f(\mathsf{A} x^k)}{\mathsf{A} x^k}\nn\\
&= -2\widetilde{G}_k + \lranglet{\nabla f(\mathsf{A} x^k)}{\mathsf{A} x^k}\nn\\
&= -2\widetilde{G}_k - \theta \ .
\label{eq:bound_Dk_2}
\end{align}
Combining~\eqref{eq:bound_Dk_0},~\eqref{eq:bound_Dk_1} and~\eqref{eq:bound_Dk_2}, we have
\begin{align*}
D_k^2 &\le (\widetilde{G}_k + \theta)^2 -2\widetilde{G}_k - \theta\\
& = (G_k -\beta_k + \theta)^2 + 2\beta_k - (2G_k + \theta)\\
&\le (G_k + \theta)^2 + \beta_k^2 - 2\beta_k (G_k + \theta-1)\nt\label{eq:Dk_1} \\
&\le (G_k + \theta)^2 + R_h^2 + 2R_h (G_k + \theta)\nt\label{eq:Dk_2}\\
&= (G_k + \theta + R_h)^2 \ , \nt\label{eq:Dk_3}
\end{align*}
where in~\eqref{eq:Dk_1} we use $2G_k + \theta\ge 0$ and in~\eqref{eq:Dk_2} we use $\abst{\beta_k}\le R_h$ and $G_k + \theta\ge \theta\ge 1$.
\end{proof}
The basic iteration improvement inequality for the Frank-Wolfe method was presented in \eqref{eq:curvature_SC_algo}, and the step-size in Algorithm \ref{algo:FW_SC} is given by \eqref{eq:step_size_SC}.
In the case when $\alpha_k < 1$, it follows from substituting $\alpha_k = \tfrac{G_k}{D_k(G_k+D_k)}$ from \eqref{eq:step_size_SC} into \eqref{eq:ub_F} that the iteration improvement bound is
\begin{equation}
F(x^{k+1}) \le F(x^k) -\omega^*\left(\frac{G_k}{D_k}\right).
\end{equation}
Using the notation $\Delta_k := F(x^k) - F(x^{k+1}) = \delta_k-\delta_{k+1}$, we can write this improvement as:
\begin{align}
\Delta_k = \delta_{k} - \delta_{k+1} = F(x^k) - F(x^{k+1}) &\ge \omega^*\left(\frac{G_k}{D_k}\right)\ge 0 \ \mbox{when} \ \alpha_k \ < 1 \ . \label{eq:improv_SC}
\end{align}
Let us now prove \eqref{eq:rmflin_conv_SC} and part~\ref{item:K_lin} of Theorem \ref{thm:SC}. Since $G_k > \theta + R_h$, by~\ref{item:def_SCB} in Lemma~\ref{lem:LHSCB} we have
\begin{align*}
D_k = \normt{\mathsf{A} v^k - \mathsf{A} x^k}_{\mathsf{A} x^k} & \ge \abst{\lranglet{\nabla f(\mathsf{A} x^k)}{\mathsf{A} v^k-\mathsf{A} x^k}}/\sqrt{\theta} \\
&\ge \widetilde{G}_k/\sqrt{\theta} = (G_k-\beta_k)/\sqrt{\theta}\ge (G_k-R_h)/\sqrt{\theta} > \sqrt{\theta}\ge 1 \ . \nt \label{eq:D_k_ge_G_k}
\end{align*}
As a result, $\alpha_k = \tfrac{G_k}{D_k(G_k+D_k)}<1$.
Consequently, by~\eqref{eq:improv_SC} we have
\begin{equation}
F(x^{k+1})\le F(x^k) - \omega^*\left(\frac{G_k}{D_k}\right) \le F(x^k) - \omega^*\left(\frac{G_k}{G_k + \theta +R_h} \right) \ , \label{eq:1st_phase_descent}
\end{equation}
where the last inequality uses \eqref{eq:bound_Dk} and the monotonicity of $\omega^*$. Now notice from the condition $G_k > \theta + R_h$ that ${G_k}/(G_k+\theta+ R_h)>1/2$, whereby invoking \eqref{eq:omega*_linear} we have
\begin{align}
\Delta_k = \delta_k - \delta_{k+1} = F(x^{k}) - F(x^{k+1})&\ge \omega^*\left(\frac{G_k}{G_k+\theta+R_h}\right) \ge \frac{G_k}{5.3(G_k+\theta+R_h)} \ . \label{eq:improv_SC_lin}
\end{align}
In addition we have
\begin{equation}
\frac{G_k}{5.3(G_k+\theta+ R_h)}\ge \frac{\delta_k}{5.3(\delta_k+\theta+ R_h)}\ge \frac{\delta_k}{5.3(\delta_0+\theta+ R_h)} \ , \label{eq:improv_SC2}
\end{equation}
where the first inequality uses the strict monotonicity of the function $c \mapsto c/(c+\theta+ R_h)$ on $[0,+\infty)$, and the second inequality uses the monotonicity of the sequence $\{\delta_k\}_{k\ge 0}$ (see~\eqref{eq:improv_SC}).
Combining~\eqref{eq:improv_SC_lin} and~\eqref{eq:improv_SC2}, we obtain
\begin{equation}
\delta_{k+1}\le \left(1-\frac{1}{5.3(\delta_0+\theta+ R_h)}\right)\delta_k \ , \label{eq:lin_conv_SC}
\end{equation}
which proves \eqref{eq:rmflin_conv_SC}. Furthermore, $G_k>\theta+ R_h$ implies that
\begin{equation}
\delta_k\ge \Delta_k \ge \frac{G_k}{5.3(G_k+\theta+ R_h)} > \frac{G_k}{5.3(G_k+G_k)} = \frac{1}{10.6} \ . \label{holymoly}
\end{equation}
Now let $K_{\mathrm{Lin}}$ denote the number of iterations of Algorithm \ref{algo:FW_SC} where $G_k > \theta+R_h$ occurs. By~\eqref{eq:lin_conv_SC} and \eqref{holymoly} it follows that
\begin{align}
\frac{1}{10.6} < \delta_0 \left(1-\frac{1}{5.3(\delta_0+\theta+R_h)}\right)^{K_{\mathrm{Lin}}-1} \ ,
\end{align}
which then implies that $K_{\mathrm{Lin}} \le \lceil 5.3(\delta_0 + \theta+R_h)\ln(10.6\delta_0) \rceil$ and thus proving part~\ref{item:K_lin} of Theorem \ref{thm:SC}.
Let us now prove \eqref{eq:sublin_conv_SC} of Theorem \ref{thm:SC}. Towards doing so, we will establish:
\begin{equation}\label{acorns}
G_k \le \theta +R_h \ \implies \ \Delta_k \ge \frac{G_k^2}{12(\theta+R_h)^2} \ .
\end{equation}
We first consider the case where $\alpha_k=1$, whereby $D_k(G_k+D_k) \le G_k$, which implies that $D_k < 1$, and also can be rearranged to yield
\begin{equation}
G_k \ge \frac{D_k^2}{1-D_k} \ .\label{eq:G_k_D_k}
\end{equation}
In addition, by~\eqref{eq:ub_F} we obtain
\begin{align*}
\Delta_k = F(x^k)-F(x^{k+1}) \ge G_k - \omega(D_k)= G_k +D_k + \ln(1-D_k) \ .\nt \label{eq:Delta_lb_alpha_1}
\end{align*}
By~\eqref{eq:Delta_lb_alpha_1}, Proposition~\ref{lem:Karmarkar}, and~\eqref{eq:G_k_D_k}, we have
\begin{equation}
\Delta_k \ge G_k - \frac{D_k^2}{2(1-D_k)} \ge \frac{G_k}{2} \ , \label{eq:Delta_k_lb_lin}
\end{equation} which then implies
that
\begin{equation}
\Delta_k \ge \frac{G_k}{2} \ge \frac{G_k^2}{2(\theta+R_h)} \ge \frac{G_k^2}{2(\theta+R_h)^2} \ge \frac{G_k^2}{12(\theta+R_h)^2} \ ,
\end{equation}
where the second inequality used $G_k \le \theta+R_h$ and the third inequality used $\theta+R_h\ge 1$. This establishes \eqref{acorns} for the case when $\alpha_k = 1$.
We next consider the case where $\alpha_k <1$, whereby $\alpha_k = \frac{G_k}{D_k(G_k+D_k)}$, and then by \eqref{eq:improv_SC} we have
\begin{align*}
\Delta_k &= F(x^k) - F(x^{k+1}) \\
&\ge \omega^*\left(\frac{G_k}{D_k}\right) \ge \omega^*\left(\frac{G_k}{G_k + \theta+R_h}\right) \ge \frac{G_k^2}{3(G_k+\theta+R_h)^2}\ge \frac{G_k^2}{12(\theta+R_h)^2} \ , \nt\label{eq:Delta_k_lb_quasi-quad}
\end{align*}
where the second inequality uses \eqref{eq:bound_Dk} and the monotonicity of $\omega^*$, the third inequality
uses \eqref{eq:omega*_quad} in conjunction with $G_k/(G_k + \theta+R_h) \le 1/2$, and the fourth inequality uses $G_k \le \theta+R_h$. This establishes \eqref{acorns} for the case when $\alpha_k < 1$, completing the proof of \eqref{acorns}. It thus follows for $G_k \le \theta+R_h$ that
$$ \delta_k - \delta_{k+1} = \Delta_k \ge \frac{G_k^2}{12(\theta+R_h)^2} \ge \frac{\delta_k \delta_{k+1}}{12(\theta+R_h)^2} \ , $$
where the last inequality follows from $\delta_{k+1} \le \delta_k \le G_k$, and dividing both sides by $ \delta_k \delta_{k+1}$ and rearranging yields the inequality \eqref{eq:sublin_conv_SC}.
We next prove \eqref{rob6miles} and \eqref{rob7miles}. If $\delta_0 \le \varepsilon$ the result follows trivially; thus we assume that $\delta_0 > \varepsilon$. Let $\bar K$ denote the expression on the right-side of \eqref{rob6miles}, and suppose Algorithm \ref{algo:FW_SC} has been run for $\bar K$ iterations. Let $N:= \lceil12(\theta+R_h)^2\left({1}/{\varepsilon} - {1}/{\delta_0} \right)\rceil$, whereby it follows from part~\ref{item:K_lin}
of Theorem~\ref{thm:SC} that among the first $\bar K$ iterations, the number of iterations where $G_k \le \theta+R_h$ is at least $N$. Thus from \eqref{eq:sublin_conv_SC} it follows that
$$\frac{1}{\delta_{K_\varepsilon}} \ge \frac{1}{\delta_0} + \frac{N}{12 (\theta+R_h)^2} \ge \frac{1}{\delta_0} + \left(\frac{1}{\varepsilon} - \frac{1}{\delta_0} \right) = \frac{1}{\varepsilon} \ , $$ and rearranging yields part~\ref{item:K_eps} of Theorem~\ref{thm:SC}.
Let $k_0 < k_1 < k_2 < \cdots$ denote indices where $G_k \le \theta + R_h$.
From \eqref{acorns}, \eqref{squirrels}, and the monotonicity of the sequence $\{\delta_k\}_{k\ge 0}$, it follows for all $j\ge 0$ that
$$ \delta_{k_{j+1}} \le \delta_{k_j+1} \le \delta_{k_{j}} - \frac{G_{k_{j}}^2}{12 (\theta+R_h)^2} \ \ \ \mbox{and} \ \ \ G_{k_{j}} \ge \delta_{k_{j}} \ . $$
Let $d_j := \delta_{k_{j}}$ and $g_j := G_{k_{j}}$ for all $j\ge 0$, then the nonnegative sequences $\{ d_j \}_{j\ge 0}$ and $\{ g_j \}_{j\ge 0}$ satisfy for all $j\ge 0$:
$$ d_{{j+1}} \le d_{{j}} - \frac{g_{{j}}^2}{12 (\theta+R_h)^2} \ \ \ \mbox{and} \ \ \ g_{{j}} \ge d_{{j}} \ . $$
Thus $\{ d_j \}_{j\ge 0}$ and $\{ g_j \}_{j\ge 0}$ satisfy the hypotheses of the following elementary sequence proposition using $M=12(\theta+R_h)^2$. (This proposition is
a slight extension of the standard sequence property for Frank-Wolfe type sequences, and we provide a proof in Appendix \ref{holdenwood}.)
\begin{prop}\label{greatrun} Suppose the two nonnegative sequences $\{ d_j \}_{j\ge 0}$ and $\{ g_j \}_{j\ge 0}$ satisfy for all $j \ge 0$:
\begin{itemize}
\item $d_{j+1} \le d_j - g_j^2/M$ for some $M>0$, and
\item $g_j \ge d_j$.
\end{itemize}
Then for all $j \ge 0$ the following holds:
\begin{equation}\label{eq:rate_d_j} d_{j} \le \frac{M}{j + \frac{M}{d_0}} < \frac{M}{j} \ , \end{equation}
and \begin{equation}\label{eq:rate_g_j} \min\{g_0, \ldots, g_j\} < \frac{2M}{j} \ . \end{equation} \hfill$\square$
\end{prop}
\noindent
Let $\mathrm{FWGAP}_\varepsilon$ be as given in part~\ref{item:FWGAP} of Theorem~\ref{thm:SC}, and let $\tilde K$ denote the expression on the right-side of \eqref{rob7miles}. Suppose Algorithm \ref{algo:FW_SC} has been run for $\tilde K$ iterations. Let $\tilde N:= \lceil {24(\theta+R_h)^2}/{\varepsilon} \rceil$, whereby it follows from part~\ref{item:K_lin} of Theorem~\ref{thm:SC} that among the first $\tilde K$ iterations,
the number of iterations where $G_k \le \theta+R_h$ is at least $\tilde N$. Then, it follows that
$$ \min\{G_0, \ldots, G_{\tilde K}\} \le \min\{G_{k_{0}}, \ldots, G_{k_{\tilde N}}\} = \min\{g_0, \ldots, g_{\tilde N}\} < \frac{2M}{\tilde N} = \frac{24(\theta+R_h)^2}{\tilde N} \le \varepsilon \ , $$ where the strict inequality uses Proposition \ref{greatrun}. This shows \eqref{rob7miles} and completes the proof of Theorem~\ref{thm:SC}. \hfill$\square$
{\begin{remark}[Continued discussion from Remark \ref{autumn} comparing Theorem \ref{thm:SC} with Khachiyan~\cite{khachiyan1996rounding} for $D$-optimal design]\label{stormymonday}
Here $h = \iota_{\Delta_n}$, whereby $R_h = 0$, and the rate of linear convergence for iterates where $G_k > \theta$ in \eqref{eq:rmflin_conv_SC} is order $O(1-1/(n+\delta_0))$ as compared to the rate of $O(1-1/n)$ proved in~\cite{khachiyan1996rounding} specifically for the $D$-optimal design problem with exact line-search.
Due to the very special structure of the $D$-optimal design problem, the exact line-search is in closed-form, and it enables Khachiyan~\cite{khachiyan1996rounding} to show that the optimality gap improvement bound \eqref{eq:improv_SC} is instead
\begin{equation}
\delta_{k+1} \le \delta_{k} - \omega\left(\frac{G_k}{G_k+\theta}\right) \ . \label{eq:improved_lin_rate}
\end{equation}
Notice that $\omega$ is larger than $\omega^*$, and all the more so for larger values of its argument, which corresponds to $G_k > \theta$; this then leads to an improved guaranteed linear rate of Khachiyan's algorithm in the case when $G_k > \theta$. However, we stress that the stronger estimate in~\eqref{eq:improved_lin_rate} is rather specific to the $D$-optimal design problem, and we do not expect it to hold in general for $f\in\mathcal{B}_\theta(\mathcal{K})$.
\end{remark}
}
\section{A Mirror Descent Method for the Dual Problem}\label{doodle}
In this section we present a mirror descent method with adaptive step-size applied to the (Fenchel) dual problem of $(P)$.
We denote the dual problem of $(P)$ by $(D)$, which is given by:
\begin{equation}
(D):\ \ \ -d^*:= -{\min}_{y\in\mathbb{R}^m} \;[d(y):= f^*(y) + h^*(-\mathsf{A}^*y)] \ , \label{doi}
\end{equation}
where $f^*$ and $h^*$ are the Fenchel conjugates of $f$ and $h$, respectively, and $\mathsf{A}^*:\mathbb{R}^{m}\to \mathbb{R}^{n}$ denotes the adjoint of $\mathsf{A}$. We observe the following properties related to $(D)$:
\begin{enumerate}
\item $f^*$ is a $\theta$-logarithmically-homogeneous self-concordant barrier on the polar of ${\mathcal{K}}$, namely ${\mathcal{K}}^\circ:= \{y\in\mathbb{R}^m:\lranglet{y}{u}\le 0\;\forall\,u\in{\mathcal{K}}\}$, and $\mathcal{K}^\circ$ is also a regular cone. This follows from~\cite[Theorem~2.4.4]{Nest_94}.
\item $h^*$ is Lipschitz (but not necessarily differentiable) on $\mathbb{R}^n$. Indeed, let $\normt{\cdot}$ be a given norm on the space of $x$ variables,
and define $R_\mathcal{X}:= \max_{x \in \mathcal{X}} \|x\|$.
Since $\mathcal{X}=\mathsf{dom}\, h$ is compact and $h$ is closed,
it follows that $R_\mathcal{X}<+\infty$ and $h^*$ is $R_\mathcal{X}$-Lipschitz on $\mathbb{R}^n$.
\item $F^*= -d^*$ and $(D)$ has at least one optimal solution. Indeed, since $\mathsf{A}(\mathcal{X})\cap\mathsf{dom}\, f\ne \emptyset$ and $f$ is continuous on $\mathsf{dom}\, f$, the strong duality and attainment follow from~\cite[Theorem~3.51]{Peyp_15}.
\end{enumerate}
Although $(D)$ has a similar structure to $(P)$, certain key differences are unattractive with regard to the application of first-order methods for solving $(D)$. One key difference is that the domain of the dual function $d$ is unbounded, which is in obvious contrast to the bounded domain of the primal function $F$. This makes it difficult or prohibitive to apply a Frank-Wolfe type method to solve $(D)$. Furthermore, and similar to $(P)$, $\nabla f^*$ does not satisfy either uniform boundedness or uniform Lipschitz continuity on ${\mathcal{K}}^\circ$, thereby preventing the application of most other types of first-order methods. Nevertheless, below we present a mirror descent method with adaptive step-size for $(D)$. Although the lack of good properties prevents the direct analysis of mirror descent in the usual manner, through the duality of mirror descent and the Frank-Wolfe method we provide a computational complexity bound for this mirror descent method.
Before presenting our mirror descent method for tackling $(D)$, we first present an important application of $(D)$ in the Bregman proximal-point method.
{\em Application in the Bregman proximal-point method (BPPM)~\cite{Censor_92,Ecks_93,Aus_99}.}\; Consider the convex non-smooth optimization problem $\min_{y\in\mathbb{R}_+^m}\, \xi(y)$, where $\xi:\mathbb{R}^m\to\mathbb{R}$ is assumed to be Lipschitz on $\mathbb{R}^m$. At the $k$-th iteration of BPPM, one solves the following problem:
\begin{equation}
y^{k+1}:= {\argmin}_{y\in\mathbb{R}^m}\; \xi(y) + \beta_k^{-1} D_\zeta(y,y^k) \ , \label{eq:BPPM}
\end{equation}
where $y^k$ is the $k$-th iterate of BPPM, $\beta_k>0$ is the step-size, and $\zeta:\mathbb{R}_{++}^m\to\mathbb{R}$ is the prox function that induces the Bregman divergence
\begin{equation}
D_{\zeta}(y,y^k) := \zeta(y) - \zeta(y^k) - \lranglet{\nabla \zeta(y^k)}{y- y^k} \ .\label{eq:Bregman}
\end{equation}
As pointed out in~\cite{Aus_99}, one of the standard choices of $\zeta$ is $\zeta(y)= -\sum_{i=1}^m \ln (y_i)$, and under this choice, if $y^0\in \mathbb{R}_{++}^m = \mathsf{int}\, \mathbb{R}_{+}^m$, then $y^k\in \mathbb{R}_{++}^m $ for all $k\ge 1$, and so the constraint set $\mathbb{R}_+^m$ is
automatically taken care of by the prox-function $\zeta$.
From~\eqref{eq:BPPM} and~\eqref{eq:Bregman} we note that \eqref{eq:BPPM} is in the form of $(D)$ with $f^*(y) := \zeta(y)= -\sum_{i=1}^m \ln (y_i)$, $\mathsf{A}=-\mathsf{I}$ and $h^*(-\mathsf{A}^*y) =\beta_k \xi(y) -\lranglet{\nabla\zeta(y^{k})}{y}$.
Our mirror descent method for $(D)$ is shown in Algorithm~\ref{algo:DMD}, and is based on using the function $f^*$ itself as the prox function to induce the Bregman divergence:
$$D_{f^*}(y,y^k) := f^*(y) - f^*(y^k) - \lranglet{\nabla f^*(y^k)}{y- y^k} \ . $$
(When $f$ is $L$-smooth, similar ideas have appeared in some previous works, for example Grigas \cite{grigas2016}, Bach \cite{Bach_15}, as well as Lu and Freund~\cite{LuFreund}.)
In step~\ref{item:subgrad}, we compute a subgradient of the dual function $d$ at $y^k$, which is denoted by $g^k$. In step~\ref{item:u_DMD} we update the primal variables $z^k$ which are used in the method to adaptively determine the step-size in the next step. In step~\ref{item:step_size_DMD}, we compute the step-size $\gamma_k$, which we will show to be same as the step-size $\alpha_k$ in the Frank-Wolfe method (shown in Algorithm~\ref{algo:FW_SC}). Equipped with $y^k$, $g^k$ and $\gamma_k$, in step~\ref{item:Bregman} we perform a Bregman proximal minimization step to obtain $y^{k+1}$. We emphasize that different from the classical mirror descent method (e.g.,~\cite{Nemi_79}), in step~\ref{item:Bregman} we use $f^*$ (which is part of the objective function) as the prox function to induce the Bregman divergence $D_{f^*}(\cdot,\cdot)$. Also notice that the domain of the sub-problem \eqref{eq:Breg_min} is $\mathsf{int}\,\mathcal{K}^\circ$ and it is perhaps not so obvious without further analysis that \eqref{eq:Breg_min} has an optimal solution.
At first glance it appears that Algorithm~\ref{algo:DMD} might not be efficient to implement, since it involves working with a system of linear equations to determine $z^0$ in the Input, and also involves solving the minimization sub-problem in step~\ref{item:Bregman}. However, as we show below, Algorithm~\ref{algo:DMD} corresponds exactly to the generalized Frank-Wolfe method in Algorithm~\ref{algo:FW_SC} for solving $(P)$, which does not involve these computationally expensive steps. This of course implies that Algorithm~\ref{algo:DMD} can be implemented via Algorithm~\ref{algo:FW_SC} to obtain the primal iterate sequence $\{x^k\}_{k\ge 0}$, and then the dual iterate sequence $\{y^k\}_{k\ge 0}$ is determined by the simple rule $y^k = \nabla f(\mathsf{A} x^k)$ for $k \ge 0$.
\begin{algorithm}[t!]
\caption{Mirror descent method for solving $(D)$ using $f^*$ as the prox function}
\label{algo:DMD}
\begin{algorithmic}
\State {\bf Input}: Starting points $y^0\in \mathsf{int}\, \mathcal{K}^\circ$ and $z^0\in\{z\in\mathcal{X}:\mathsf{A} z = \nabla f^*(y^0)\}$
\State {\bf At iteration $k\in\{0,1,\ldots\}$}:
\begin{enumerate}
{\setlength\itemindent{10pt} \item \label{item:subgrad}
Let $s^k\in\partial h^*(-\mathsf{A}^*y^k)$
and define
\begin{equation}
g^k:= \nabla f^*(y^k) - \mathsf{A} s^k \in \partial d(y^k) \ . \label{eq:g_k}
\end{equation}
}
{\setlength\itemindent{10pt} \item \label{item:u_DMD}
If $k\ge 1$, compute
\begin{equation}
z^k:= (1-\gamma_{k-1})z^{k-1} + \gamma_{k-1}s^{k-1} \ . \label{eq:z_k}
\end{equation}
}
{\setlength\itemindent{10pt} \item \label{item:step_size_DMD} Compute $\bar{G}_k := \lranglet{g^k}{y^k}+ h(z^k) - h(s^k)$ and $\bar{D}_k:= \|g^k\|_{\nabla f^*(y^k)}$, and compute the step-size:
\begin{align}
\gamma_k := \min\left\{\frac{\bar{G}_k}{\bar{D}_k(\bar{G}_k+\bar{D}_k)} \ , 1\right\} \ . \label{eq:step_size_DMD}
\end{align}
\setlength\itemindent{10pt} \item \label{item:Bregman} Update
\begin{equation}
y^{k+1}:= {\argmin}_{y\in\mathbb{R}^m}\, \lranglet{g^k}{y} + \gamma_k^{-1} D_{f^*}(y,y^k) \ . \label{eq:Breg_min}
\end{equation}
}
\end{enumerate}
\end{algorithmic}
\end{algorithm}
\begin{theorem}\label{thm:equiv}
Algorithms~\ref{algo:FW_SC} and~\ref{algo:DMD} are equivalent in the following sense:
If the starting points $x^0$ in Algorithm~\ref{algo:FW_SC} and $z^0$ in Algorithm~\ref{algo:DMD} satisfy $x^0 = z^0$ and $y^0 =\nabla f(\mathsf{A} x^0)$, then an iterate sequence of either algorithm exactly corresponds to an iterate sequence of the other.\end{theorem}
Before we prove this theorem, we first recall some properties of conjugate functions. Let $w:\mathbb{R}^p\to\mathbb{R}\cup\{+\infty\}$ be a closed convex function and let $w^*$ denote its conjugate function, which is defined by $w^*(g) := \max_u\{\lranglet{g}{u}-w(u)\}$. Then $w^*:\mathbb{R}^p\to\mathbb{R}\cup\{+\infty\}$ is a closed convex function, and
\begin{equation}
g \in \partial w(u) \Longleftrightarrow \ u \in \partial w^*(g) \Longleftrightarrow \lranglet{g}{u} = w(u) + w^*(g) \ . \label{eq:Fenchel_identity}
\end{equation}
\noindent \emph{Proof of Theorem \ref{thm:equiv}}. Let $\{y^k\}_{k\ge 0}$ be the sequence of iterates of Algorithm \ref{algo:DMD}, and let us also collect the sequences $\{z^k\}_{k\ge 0}$, $\{s^k\}_{k\ge 0}$, $\{g^k\}_{k\ge 0}$, $\{\bar{G}_k\}_{k\ge 0}$, $\{\bar{D}_k\}_{k\ge 0}$, and $\{\gamma_k\}_{k\ge 0}$ generated in Algorithm \ref{algo:DMD}, and use these sequences to define the following five sequences by the simple assignments $x^k := z^k$, $v^k := s^k$, $\alpha_k := \gamma_k$, $G_k := \bar{G}_k$, $D_k := \bar{D}_k$, for $k \ge 0$. We now show that these five sequences correspond to an iterate sequence of Algorithm \ref{algo:FW_SC}. Our argument will rely on the following identity:
\begin{equation}
y^k = \nabla f(\mathsf{A} z^k) \ \ \forall k \ge 0 \ , \label{beard}
\end{equation}
which we will prove by induction. Notice that \eqref{beard} is true for $k=0$ by supposition in the statement of the theorem. Now let us assume that \eqref{beard} holds for a given iteration $k$, and let us examine the properties of our sequences. We have
\begin{align*}
v^k :=s^k\in\partial h^*(-\mathsf{A}^*y^k) &\;\;\Longrightarrow \;\; -\mathsf{A}^*y^k \in \partial h(v^k)\nt\label{eq:min_subdiff}\\
&\;\;\Longrightarrow \;\; v^k \in {\argmin}_{x\in\mathbb{R}^n}\; \lranglet{\nabla f(\mathsf{A} x^k)}{\mathsf{A} x} + h(x) \ , \nt\label{eq:min_subdiff2}
\end{align*}
where \eqref{eq:min_subdiff} follows from the conjugacy properties \eqref{eq:Fenchel_identity}. This shows that $v^k$ satisfies Step~\ref{item:LMO} of iteration $k$ in Algorithm \ref{algo:FW_SC}. We also have
\begin{align*}
G_k := \bar{G}_k :=&\lranglet{g^k}{y^k}+ h(z^k) - h(s^k) \nt\label{eq:ugh1}\\
=& \lranglet{\nabla f^*(y^k) - \mathsf{A} s^k}{\nabla f(\mathsf{A} z^k) } + h(z^k) - h(s^k) \ , \nt\label{eq:ugh2}\\
=& \lranglet{\mathsf{A} x^k - \mathsf{A} v^k}{\nabla f(\mathsf{A} x^k) } + h(x^k) - h(v^k) \ \nt\label{eq:ugh3}
\end{align*}
satisfies the definition of $G_k$ in Step~\ref{item:step_size_SC} of iteration $k$ of Algorithm \ref{algo:FW_SC}. Similarly, we have
\begin{align*}
D_k := \bar{D}_k := \normt{g^k}_{\nabla f^*(y^k)} = \normt{\nabla f^*(y^k) - \mathsf{A} s^k}_{\mathsf{A} z^k } = \normt{\mathsf{A} z^k - \mathsf{A} s^k}_{\mathsf{A} z^k } = \normt{\mathsf{A} x^k - \mathsf{A} v^k}_{\mathsf{A} x^k } \nt\label{eq:ugh4}
\end{align*}
satisfies the definition of $D_k$ in Step~\ref{item:step_size_SC} of iteration $k$ in Algorithm \ref{algo:FW_SC}, which then implies that $\gamma_k$ agrees with the formula for $\alpha_k$ in \eqref{eq:step_size_SC} of Algorithm \ref{algo:FW_SC}. Last of all, we prove the inductive step of the equality \eqref{beard}. From the optimality conditions of the optimization problem in \eqref{eq:Breg_min} we have
\begin{align*}
\nabla f^*(y^{k+1}) = \nabla f^*(y^k) - \gamma_k g^k = (1-\gamma_k) \mathsf{A} z^k + \gamma_k \mathsf{A} s^k \ ,
\end{align*}
where in the last step we use $\nabla f^*(y^{k}) = \mathsf{A} z^k$ from \eqref{beard} and~\eqref{eq:g_k}. Since $z^{k+1}:= (1-\gamma_{k})z^{k} + \gamma_{k}s^{k}$, we have $\nabla f^*(y^{k+1}) = \mathsf{A} z^{k+1}$, which implies $y^{k+1} = \nabla f(\mathsf{A} z^{k+1})$ by conjugacy and completes the proof of \eqref{beard}. This then shows that an iterate sequence of Algorithm \ref{algo:DMD} corresponds to an iterate sequence of Algorithm \ref{algo:FW_SC}. The reverse implication can also be proved using identical logic as above.
\hfill$\square$
We now leverage the equivalence of Algorithms~\ref{algo:FW_SC} and~\ref{algo:DMD} to analyze the iteration complexity of Algorithm \ref{algo:DMD}. The following proposition relating the duality gap to the Frank-Wolfe gap will be useful.
\begin{prop}\label{bbike} $G_k = d(y^k) + F(x^k)$ for all $k \ge 0$.
\end{prop}
\begin{proof} We have for all $k \ge 0$:
\begin{align}
G_k &= \lranglet{\nabla f(\mathsf{A} x^k)}{\mathsf{A} x^k} - \lranglet{\nabla f(\mathsf{A} x^k)}{\mathsf{A} v^k} + h(x^k) - h(v^k)\\
&= f(\mathsf{A} x^k)+ h(x^k) + f^*(y^k)+ \lranglet{-\mathsf{A}^*y^k}{ v^k} - h(v^k)\label{eq:dualgap_1}\\
&= f(\mathsf{A} x^k)+ h(x^k) + f^*(y^k)+h^*(-\mathsf{A}^*y^k) = d(y^k) + F(x^k) \ . \label{eq:dualgap_2}
\end{align} where in~\eqref{eq:dualgap_1} we use the conjugacy property in~\eqref{eq:Fenchel_identity} and $y^k =\nabla f(\mathsf{A} x^k)$, and in~\eqref{eq:dualgap_2} we use $-\mathsf{A}^*y^k\in\partial h(v^k)$ (by~\eqref{eq:min_subdiff} and $s^k = v^k$) and~\eqref{eq:Fenchel_identity}.\end{proof}
Since $G_k$ upper bounds the dual optimality gap $d_k:= d(y^k) - d(y^{*})$, from part~\ref{item:FWGAP} of Theorem~\ref{thm:SC} we immediately have the following corollary.
\begin{corollary}
Let $\mathrm{DGAP}_\varepsilon$ denote the number of iterations required by Algorithm \ref{algo:DMD} to obtain $d_k \le \varepsilon$. Then:
\begin{equation}
\mathrm{DGAP}_\varepsilon \le \lceil 5.3(\delta_0 + \theta+R_h)\ln(10.6\delta_0) \rceil + \left\lceil \frac{24(\theta+R_h)^2}{\varepsilon} \right\rceil \ .
\end{equation}\hfill$\square$
\end{corollary}
We end this section with some remarks. First, if one considers $(D)$ directly then its ``primitive'' objects are $f^*$ and $h^*$, and implementing Algorithm~\ref{algo:DMD} via Algorithm~\ref{algo:FW_SC} requires knowing $h = (h^*)^*$ and also the Hessian of $f = (f^*)^*$ (used to compute the step-size $\gamma_k$). {While these objects are not part of the primitive description of $(D)$, it follows from conjugacy of self-concordant barriers that $\nabla f = (\nabla f^*)^{-1}$ and $H(\cdot) = H^*(\nabla f(\cdot))^{-1}$ where $H^*$ is the Hessian of $f^*$ (see~\cite[Theorem~3.3.4]{Renegar_01}), and therefore one can work directly with $\nabla f$ and $H$ through the primitive objects $\nabla f^*$ and $H^*$.} Of course, for standard logarithmically-homogeneous barriers $f^*$ such as $f^*(y)= -\sum_{i=1}^m \ln y_i$, where $y\in\mathbb{R}_{++}^m$, or $f^*(Y) = -\ln\det Y$, where $Y\in\mathbb{S}_{++}^p$ (and $m = p(p+1)/2$), the Fenchel conjugate $f$ and the Hessian of $f$ are well known. In addition, for many simple non-smooth functions $h^*$, e.g., $h^*(w) = \normt{w}_p$ ($p\in[1,+\infty]$),
its Fenchel conjugate is either well known or can be easily computed. Therefore, for many problems of interest, implementing Algorithm~\ref{algo:DMD} via Algorithm~\ref{algo:FW_SC} is likely to be quite viable.
Second, note that the step-size $\gamma_k$ used in Algorithm~\ref{algo:DMD} is adaptive, and is different from a standard step-size that is monotone decreasing in $k$, e.g., $\gamma_k = O(1/\sqrt{k})$ or $\gamma_k = O(1/{k})$ (cf.~\cite{Nemi_79}). This poses difficulties in attempting to directly analyze Algorithm~\ref{algo:DMD} via standard approaches which involves using $D_{f^*}(y^k,y^*)$ as the potential function (see e.g.,~\cite{Bach_15}). Nevertheless, the convergence guarantee for
$G_k$ derived from Algorithm~\ref{algo:FW_SC} enables us to analyze the convergence of the dual optimality gap $d_k$ in Algorithm \ref{algo:DMD}.
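As a small sanity check of the conjugacy identities used in the first remark above, the following snippet verifies $\nabla f = (\nabla f^*)^{-1}$ and $H(\cdot) = H^*(\nabla f(\cdot))^{-1}$ numerically for $f^*(y) = -\sum_{i=1}^m\ln y_i$, whose conjugate is $f(u) = -m-\sum_{i=1}^m\ln(-u_i)$ on $-\mathbb{R}^m_{++}$ (the additive constant is immaterial for these identities); this is only an illustration and not part of either algorithm.
\begin{verbatim}
# Numerical check of  grad f = (grad f*)^{-1}  and  H(.) = H*(grad f(.))^{-1}
# for f*(y) = -sum_i ln(y_i),  f(u) = -m - sum_i ln(-u_i)  on  -R^m_{++}.
import numpy as np

m = 4
u = -np.random.rand(m) - 0.1                  # a point in the interior of -R^m_+
grad_f  = lambda u: -1.0 / u                  # gradient of f (componentwise)
hess_f  = lambda u: np.diag(1.0 / u**2)       # Hessian of f
grad_fs = lambda y: -1.0 / y                  # gradient of f*
hess_fs = lambda y: np.diag(1.0 / y**2)       # Hessian of f*

y = grad_f(u)                                 # y = grad f(u) lies in int R^m_+
assert np.allclose(grad_fs(y), u)                         # grad f* inverts grad f
assert np.allclose(hess_f(u), np.linalg.inv(hess_fs(y)))  # H(u) = H*(grad f(u))^{-1}
\end{verbatim}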
\section{Computational Experiments}\label{experiments}
In this section we present the results of some basic numerical experiments where we evaluate the performance of our generalized Frank-Wolfe method in Algorithm \ref{algo:FW_SC} on the Poisson image de-blurring problem with TV regularization (Application 1 in Section \ref{sec:applications}), and also on the PET problem (Application 2 of Section \ref{sec:applications}).
\subsection{First numerical experiment: Poisson image de-blurring with TV regularization}\label{sec:delurring}
We consider the Poisson image de-blurring problem with TV regularization as described in Application 1 of Section~\ref{sec:applications}, where the formulation was presented in equation~\eqref{eq:deblurring_TV}. Observe that $f:u \mapsto -\textstyle\sum_{l=1}^N y_l\ln \big(u_l)$ is {neither Lipschitz nor $L$-smooth on the set $\{u\in\mathbb{R}^N: u = \mathsf{A} x, \ 0 \le x \le Me\}$}, and ${\rm TV}(\cdot)$ does not have an efficiently computable proximal operator, which prevents most standard first-order methods from being applicable to tackle~\eqref{eq:deblurring_TV}. As a result, very few methods have been proposed to solve~\eqref{eq:deblurring_TV} in the literature. In~\cite{Dey_06} an ad-hoc expectation-maximization (EM) method was proposed to solve~\eqref{eq:deblurring_TV}, however the method is not guaranteed to converge due to the non-smoothness of the function ${\rm TV}(\cdot)$ (see e.g.,~\cite{Pierro_95}). Both~\cite{Harmany_12} and~\cite{Chambolle_18} proposed methods to solve a ``perturbed'' version of~\eqref{eq:deblurring_TV} by adding a small offset to each $\ln(\cdot)$ term, which of course, compromises the original objective function $\bar{F}(x)$ near $x=0$. (Such a perturbed version is not needed for the theoretical analysis in \cite{Chambolle_18}, but seems to be used to improve practical performance.) Using the generalized Frank-Wolfe method of Algorithm \ref{algo:FW_SC}, we are able to directly solve formulation \eqref{eq:deblurring_TV}, the details of which we now describe.
\subsubsection{Implementation of generalized Frank-Wolfe method for solving \eqref{eq:deblurring_TV}}
We first re-describe the total variation function ${\rm TV}(x)$ by introducing some network definitions and terminology. The TV function penalizes potential differences on the horizontal and vertical grid arcs of the standard $m \times n$ pixel grid. Considering each pixel location as a node, let $\mathcal{V}$ denote these nodes enumerated as $\mathcal{V} = [N] =\{1, \ldots , N = m \times n\}$, and consider the undirected graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ where $\mathcal{E}$ is the set of horizontal and vertical edges of the grid. These horizontal and vertical edges can be described as $\mathcal{E}_\mathrm{h}$ and $\mathcal{E}_\mathrm{v}$, respectively, where
\begin{align*}
\mathcal{E}_\mathrm{h} := \{\{l,l+1\}: l \in[N],\;\; l \!\!\! \mod n \ne 0\}\quad \mbox{and}\quad \mathcal{E}_\mathrm{v} := \{\{l,l+n\}: l \in[N-n] \} \ .
\end{align*}
With this notation we have
\begin{equation}
{\rm TV}(x) = \textstyle{\sum}_{\{i,j\}\in \mathcal{E}} \;\abst{x_i-x_j} \ . \label{eq:TV_abs}
\end{equation}
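For concreteness, the following is a small numpy sketch (with 0-based indices, whereas the text uses the labels $1,\ldots,N$) that constructs the horizontal and vertical grid edges and evaluates ${\rm TV}(x)$ directly via \eqref{eq:TV_abs}; the helper names are ours.
\begin{verbatim}
# Grid edges of an m x n image (flattened row-by-row) and direct evaluation of TV(x).
import numpy as np

def grid_edges(m, n):
    """Undirected horizontal and vertical edges of an m x n pixel grid."""
    idx = np.arange(m * n).reshape(m, n)
    E_h = list(zip(idx[:, :-1].ravel(), idx[:, 1:].ravel()))  # {l, l+1} within a row
    E_v = list(zip(idx[:-1, :].ravel(), idx[1:, :].ravel()))  # {l, l+n} between rows
    return E_h + E_v

def total_variation(x, edges):
    x = np.asarray(x).ravel()
    return sum(abs(x[i] - x[j]) for (i, j) in edges)

# usage on a tiny 3 x 4 image:
# edges = grid_edges(3, 4); tv = total_variation(np.random.rand(12), edges)
\end{verbatim}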
Based on $\mathcal{G}$, we define the directed graph $\tilcalG = (\mathcal{V},\tilcalE)$ where $\tilcalE$ is obtained by replacing each (undirected) edge in $\mathcal{E}$ with two directed edges of opposite directions. Then from~\eqref{eq:TV_abs} we have
\begin{align}
{\rm TV}(x) = \textstyle\sum_{(i,j)\in \tilcalE}\;\; \max\{x_i-x_j,0\} = \min&\;\; e^\top r\nn\\
\st& \;\;r_{ij}\ge x_i-x_j \ , \;\; r_{ij}\ge 0 \ ,\;\; \forall\;(i,j) \in \tilcalE \\
= \min & \;\; e^\top r\;\;\nn\\
\st& \;\;r\ge B^\top x \ , \;\; r\ge 0 \ , \label{eq:TV_LP}
\end{align}
where $B$ is the node-arc incidence matrix of the directed graph $\tilcalG$.
The Frank-Wolfe subproblem~\eqref{subcbday} associated with~\eqref{eq:deblurring_TV} is:
\begin{align}
{\min}_{x\in\mathbb{R}^N}&\;\; \lranglet{\nabla \bar{f}(x^k)}{x} + \textstyle(\sum_{l=1}^{N} a_l)^\top x + \lambda {\rm TV}(x)\qquad \st\;\; 0\le x\le M e \ , \label{eq:box_TV_LO}
\end{align}
where
\begin{equation}
\bar{f}(x): = -\textstyle\sum_{l=1}^{N} y_l\ln(a_l^\top x) \ .\label{eq:barf_TV}
\end{equation} Based on~\eqref{eq:TV_LP}, we can rewrite~\eqref{eq:box_TV_LO} as the following linear optimization problem:
\begin{align}
\min_{(x,r)\in\mathbb{R}^N\times \mathbb{R}^{2\abst{\mathcal{E}}}}&\;\; \textstyle\lranglet{\nabla \bar{f}(x^k)}{x} + (\sum_{l=1}^{N} a_l)^\top x + \lambda e^\top r \nn\\
\st&\;\; 0\le x\le M e, \;\;r\ge B^\top x, \;\; r\ge 0 \ . \label{eq:box_TV_LO2}
\end{align} This linear problem can be solved as a linear program (LP) using available software, or as a constrained dual network flow problem using available network flow software. In our implementation we solved \eqref{eq:box_TV_LO2} using the primal simplex method in Gurobi \cite{gurobi}. Note that in~\eqref{eq:box_TV_LO2} the variable $r$ is not bounded from above in the constraint set. However, from the form of the objective function and the definition of $B$, it is easy to see that any optimal solution $(x^*,r^*)$ must satisfy $r^*\le Me$, and hence~\eqref{eq:box_TV_LO2} always admits an optimal solution.
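As an aside, the following is a minimal stand-in implementation of the subproblem \eqref{eq:box_TV_LO2} using {\tt scipy.optimize.linprog} (HiGHS) rather than the Gurobi primal simplex used in our experiments; it is only a sketch, and the names {\tt grad\_fbar} (for $\nabla\bar f(x^k)$), {\tt col\_sums} (for $\sum_l a_l$), and {\tt arcs} (the directed arc list of $\tilcalG$) are ours.
\begin{verbatim}
# Stand-in for the Frank-Wolfe subproblem (eq:box_TV_LO2) via scipy's HiGHS LP solver.
import numpy as np
from scipy.optimize import linprog
from scipy.sparse import coo_matrix

def fw_subproblem(grad_fbar, col_sums, arcs, lam, M):
    """arcs: list of directed arcs (i, j); returns an optimal pair (x, r)."""
    N, nA = len(grad_fbar), len(arcs)
    # objective over (x, r):  <grad_fbar + col_sums, x> + lam * e^T r
    c = np.concatenate([grad_fbar + col_sums, lam * np.ones(nA)])
    # one constraint per arc a = (i, j):  x_i - x_j - r_a <= 0
    rows = np.repeat(np.arange(nA), 3)
    cols = np.concatenate([[i, j, N + a] for a, (i, j) in enumerate(arcs)])
    vals = np.tile([1.0, -1.0, -1.0], nA)
    A_ub = coo_matrix((vals, (rows, cols)), shape=(nA, N + nA))
    bounds = [(0.0, M)] * N + [(0.0, None)] * nA        # 0 <= x <= Me  and  r >= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(nA), bounds=bounds, method="highs")
    return res.x[:N], res.x[N:]
\end{verbatim}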
Given the representation of ${\rm TV}(\cdot)$ in~\eqref{eq:TV_LP},
we can also rewrite~\eqref{eq:deblurring_TV} as
\begin{align}
\min_{x\in\mathbb{R}^N, \;\;r\in\mathbb{R}^{2\abst{\mathcal{E}}}}&\;\; \bar{f}(x) + \textstyle(\sum_{l=1}^{N} a_l)^\top x + \lambda e^\top r \nn\\
\st&\;\; 0\le x\le M e, \;\;r\ge B^\top x, \;\; r\ge 0 \ . \label{eq:box_TV2}
\end{align}
(Note that the Frank-Wolfe sub-problem~\eqref{subcbday} associated with~\eqref{eq:box_TV2} is
the same as that associated with~\eqref{eq:deblurring_TV}, which is shown in~\eqref{eq:box_TV_LO2}.)
In the following, we will apply our Frank-Wolfe method to solve the reformulated problem~\eqref{eq:box_TV2}. The advantage of~\eqref{eq:box_TV2} lies in that its structure yields an efficient procedure for an exact line-search to compute the step-size $\alpha_k$. Specifically, given $(x^k,r^k)$, let $(v^k,w^k)$ be an optimal solution of~\eqref{eq:box_TV_LO2}; then the exact line-search problem using \eqref{eq:box_TV2} is:
\begin{align}
\alpha_k = {\argmin}_{\alpha\in [0,1]} \; \bar{f}(x^k + \alpha(v^k - x^k)) + \alpha \big(\textstyle(\sum_{l=1}^{N} a_l)^\top (v^k - x^k) + \lambda e^\top (w^k - r^k)\big) \ . \label{lulup}
\end{align}
The detailed description of the exact line-search procedure for problems of the format \eqref{lulup} is presented in Appendix~\ref{app:line_search}; a minimal sketch is also given below. Henceforth, we denote our generalized Frank-Wolfe method for Poisson de-blurring with the adaptive step-size in~\eqref{eq:step_size_SC} as {\tt FW-Adapt}, and we denote our Frank-Wolfe method with exact line-search of \eqref{lulup} as {\tt FW-Exact}. Note that since in each iteration, {\tt FW-Exact} makes no less progress than {\tt FW-Adapt} in terms of the objective value, the computational guarantees in Theorem~\ref{thm:SC} (which are proved for {\tt FW-Adapt}) also apply to {\tt FW-Exact}.
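The following is a minimal bisection sketch for the one-dimensional problem \eqref{lulup} (the procedure actually used is the one in Appendix~\ref{app:line_search}, which may differ in its details): writing $d := v^k - x^k$ and letting $c$ denote the coefficient of the linear term in \eqref{lulup}, the function $\alpha\mapsto \bar f(x^k+\alpha d) + \alpha c$ is convex, so one can bisect on its increasing derivative, capping $\alpha$ just below the value at which some $a_l^\top(x^k+\alpha d)$ reaches zero.
\begin{verbatim}
# Bisection on the derivative of  phi(alpha) = -sum_l y_l ln(a_l^T(x + alpha d)) + alpha c.
# Ax and Ad hold the vectors (a_l^T x^k) and (a_l^T d); c is the linear-term coefficient
# from (lulup).  Variable names are illustrative.
import numpy as np

def exact_line_search(Ax, Ad, y, c, tol=1e-10, max_iter=100):
    def dphi(alpha):                       # derivative of phi; increasing in alpha
        return -np.sum(y * Ad / (Ax + alpha * Ad)) + c
    neg = Ad < 0
    hi = min(1.0, 0.999999 * np.min(Ax[neg] / -Ad[neg])) if np.any(neg) else 1.0
    lo = 0.0
    if dphi(lo) >= 0.0:                    # phi is nondecreasing on [0, hi]
        return 0.0
    if dphi(hi) <= 0.0:                    # phi is nonincreasing on [0, hi]
        return hi
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if dphi(mid) < 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
\end{verbatim}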
\subsubsection{Results}\label{sec:res_deblurring}
We tested {\tt FW-Adapt} and {\tt FW-Exact} on the Shepp-Logan phantom image~\cite{Shepp_74} of size $100\times 100$ (hence $N = 10,000$). The true image $X$ is shown in Figure~\ref{fig:clean_blur_img}(a); this image acts as a 2D slice of a representative 3D human brain image, and is a standard test image in image reconstruction algorithms. We generated the blurred noisy image $Y$ shown in Figure~\ref{fig:clean_blur_img}(b) using the methodology exactly as described in Section~\ref{sec:applications}. For both {\tt FW-Adapt} and {\tt FW-Exact}, we chose the starting point $x^0 = {\sf vec}(Y)$, and we set $\lambda = 0.01$. In order to have an accurate computation of optimality gaps, we used CVXPY~\cite{Diamond_16} to (approximately) find the optimal objective value $\bar{F}^*$ of~\eqref{eq:deblurring_TV}. All computations were performed in Python 3.8 on a laptop computer.
\begin{figure}[t!]\centering
\subfloat[True image $X$]{\includegraphics[width=.3\linewidth]{TV2_n100_lam0dot01_hr4_primalsimp_true.png}}
\subfloat[Noisy image $Y$]{\includegraphics[width=.3\linewidth]{TV2_n100_lam0dot01_hr4_primalsimp_noisy.png}}
\caption{True and noisy $100\times 100$ versions of the Shepp-Logan phantom image.}\label{fig:clean_blur_img}
\end{figure}
Figures \ref{fig:deblur_gap}(a) and \ref{fig:deblur_gap}(b) show (in log-log scale) the empirical optimality gaps $\bar{F}(x^k) - \bar{F}^*$ obtained by {\tt FW-Adapt} and {\tt FW-Exact} both as a function of the wall-clock time (in seconds) and as a function of the iteration counter $k$, respectively. From the figure we observe that {\tt FW-Exact} converges faster than {\tt FW-Adapt}, although the difference between the empirical optimality gaps of these two methods gradually lessens. This is expected since with exact line-search, {\tt FW-Exact} can take a potentially larger step at each iteration than {\tt FW-Adapt}, and therefore likely has faster convergence. The recovered images computed using {\tt FW-Adapt} and {\tt FW-Exact} are shown in Figure~\ref{fig:Box_TV_res_img}(a) and Figure~\ref{fig:Box_TV_res_img}(b), respectively. We observe that the image recovered by {\tt FW-Adapt} has similar but slightly inferior quality compared to that recovered by {\tt FW-Exact}. This is consistent with the algorithms' performance shown in Figure~\ref{fig:deblur_gap}, as {\tt FW-Adapt} has a larger empirical optimality gap at termination compared to that of {\tt FW-Exact}.
\begin{figure}[t]
\subfloat[Optimality gap versus time (in seconds)]{\includegraphics[width=.48\linewidth]{TV2_n100_lam0dot01_hr4_primalsimp_time.png}} \hfill
\subfloat[Optimality gap versus iterations]{\includegraphics[width=.48\linewidth]{TV2_n100_lam0dot01_hr4_primalsimp_itr.png}}
\caption{Comparison of empirical optimality gaps of {\tt FW-Adapt} ({\tt FW-A}) and {\tt FW-Exact} ({\tt FW-E}) for image recovery of the Shepp-Logan phantom image~\cite{Shepp_74} of size $100\times 100$.}\label{fig:deblur_gap}
\end{figure}
\begin{figure}[t!]\centering
\subfloat[Recovered image: {\tt FW-Adapt}]{\includegraphics[width=.3\linewidth]{TV2_n100_lam0dot01_hr4_primalsimp_fw-a.png}}
\subfloat[Recovered image: {\tt FW-Exact}]{\includegraphics[width=.3\linewidth]{TV2_n100_lam0dot01_hr4_primalsimp_fw-e.png}}
\caption{Recovered images computed using {\tt FW-Adapt} and {\tt FW-Exact}.}\label{fig:Box_TV_res_img}
\end{figure}
\subsection{Second numerical experiment: positron emission tomography}\label{sec:PET}
We consider the positron emission tomography (PET) problem as described in Application 2 of Section~\ref{sec:applications}, where the formulation was presented in equation \eqref{eq:PET_final}. We generated artificial data instances of this problem according to the following data generation process. Because the events emitted from each voxel $i$ can only be detected by a small proportion of bins, it follows that the probability matrix $P$ should be highly sparse. We chose a sparsity value of $5\%$. Given the number of voxels $n$ and the number of bins $m$, for each $i \in [n]$ we randomly chose (without replacement) a subset of $[m]$ denoted by $\mathcal{J}_i$, such that $\abst{\mathcal{J}_i}=\floor{m/20}$. Next, for all $j\in\mathcal{J}_i$ we then generated i.i.d.\ entries $\bar p_{ij} \sim U[0,1]$ and normalized the values to obtain $p_{ij} := \bar{p}_{ij}/\sum_{j'\in\mathcal{J}_i} \bar p_{ij'}$. For all $j\not\in \mathcal{J}_i$ we set $p_{ij}=0$. We generated the mean values $x_i$ for $i \in [n]$ by first generating i.i.d.\ values $\bar x_i \sim N(100,9)$
and then set $x_i = \abst{\bar{x}_i}$ for $i\in[n]$. We then simulated the event counts $X_i$ at each voxel $i$ by generating $X_i \sim {\sf Poisson}(x_i)$ for $i\in[n]$. Finally, using $P$ and $\{X_i\}_{i\in[n]}$, we generated the number of observed events $Y_j$ detected at bin $j$ by independently generating values $\tilde Y_j \sim {\sf Poisson}(y_j)$ with $y_j := \textstyle\sum_{i=1}^n p_{ij} X_i$ for $j\in[m]$. Since $Y_j\in\{0,1,2,\ldots\}$, by omitting bins for which $Y_j = 0$ we ensure that $Y_j\ge 1$ for all $j\in[m]$ in the PET problem~\eqref{eq:PET_final}.
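The following numpy sketch reproduces this data-generation process (interpreting $N(100,9)$ as having variance $9$, i.e., standard deviation $3$); the function name and structure are ours.
\begin{verbatim}
# Sketch of the PET data-generation process described above.
import numpy as np

def generate_pet_instance(n, m, seed=0):
    rng = np.random.default_rng(seed)
    P = np.zeros((n, m))
    k = m // 20                                      # each voxel is seen by ~5% of the bins
    for i in range(n):
        J_i = rng.choice(m, size=k, replace=False)   # bins that can detect voxel i
        p_bar = rng.uniform(size=k)
        P[i, J_i] = p_bar / p_bar.sum()              # normalized detection probabilities
    x_mean = np.abs(rng.normal(100.0, 3.0, size=n))  # x_i = |x_bar_i|, x_bar_i ~ N(100, 9)
    X = rng.poisson(x_mean)                          # emitted events per voxel
    y_mean = P.T @ X                                 # y_j = sum_i p_{ij} X_i
    Y = rng.poisson(y_mean)                          # observed counts per bin
    keep = Y >= 1                                    # drop bins with zero observed counts
    return P[:, keep], X, Y[keep]

# usage: P, X, Y = generate_pet_instance(1000, 1000)
\end{verbatim}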
\subsubsection{Comparison of Algorithms}\label{sec:benchmark}
We solved instances of the PET problem \eqref{eq:PET_final} using the following five algorithms/variants:
\begin{itemize}
\item {\tt FW-Adapt} -- our generalized Frank-Wolfe method in Algorithm~\ref{algo:FW_SC} with the adaptive step-size as stated in~\eqref{eq:step_size_SC},
\item {\tt FW-Exact} -- our generalized Frank-Wolfe method in Algorithm~\ref{algo:FW_SC} with the adaptive step-size \eqref{eq:step_size_SC} replaced by an exact line-search as described in detail in Appendix~\ref{app:line_search},
\item {\tt RSGM-Fixed} -- relatively smooth gradient method with fixed step-size~\cite{Bauschke_17,Lu_18},
\item {\tt RSGM-LSBack} -- relatively smooth gradient method with backtracking line-search~\cite{Ston_20},
\item {\tt EM} -- a simple algorithm developed by Cover in 1984 specifically for a problem that is equivalent to the normalized PET problem ~\cite{Cover_84}.
\end{itemize}We excluded mirror descent from our computational comparisons because the sparsity of $P$ violates the basic assumption needed to apply the mirror descent (MD) method to \eqref{eq:PET_final} (see e.g.,~\cite{BenTal_01}). We now review the relevant details of the three algorithms {\tt RSGM-Fixed}, {\tt RSGM-LSBack}, and {\tt EM}.
\vspace{1ex}
\noindent 1.\ {\tt RSGM-Fixed}~\cite{Bauschke_17,Lu_18}. Although the objective function $L$ of~\eqref{eq:PET_final} is differentiable on its domain, its gradient $\nabla L$ is not Lipschitz on the constraint set $\Delta_n$.
Therefore standard gradient methods (or accelerated versions)~\cite{Nest_13} are not applicable. As a remedy for this situation, we can use the relatively-smooth gradient method \cite{Bauschke_17,Lu_18} to solve~\eqref{eq:PET_final}, which is designed in particular for problems whose objective functions have certain types of structured non-smooth gradients. Indeed, as shown in Bauschke et al.~\cite{Bauschke_17}, $L$ is $\bar Y$-smooth relative to the reference function
\begin{equation}
r(z) := -\textstyle\sum_{i=1}^n \ln (z_i) \ \mbox{for} \ z\in\mathbb{R}^n_{++}:=\{x\in\mathbb{R}^n:x>0\} \ ,
\end{equation}
where $\bar Y := \sum_{j=1}^m Y_j$. Specifically, this means that
\begin{equation}
\nabla^2 L(z) \preceq \bar Y \nabla^2 r(z) \quad \forall\,z\in\mathbb{R}^n_{++}\; \ .
\end{equation}
Algorithm \ref{algo:RSGM} describes RSGM specialized to solve the PET problem \eqref{eq:PET_final} using the reference function $r$ with relative-smoothness parameter $\bar Y$, where $\mathsf{ri}\,\Delta_n$ in the Input denotes the relative interior of $\Delta_n$, and $D_r(\cdot,\cdot)$ denotes the Bregman divergence in~\eqref{eq:equiv_BP}, which is defined analogously to~\eqref{eq:Bregman}.
We set the step-size $\alpha_k =1/\bar{Y}$ for all $k\ge 0$ in the fixed-step-size version of the method.
Regarding the sub-problem \eqref{eq:equiv_BP} that needs to be solved at each iteration, its optimal solution is unique and lies in $\mathsf{ri}\,\Delta_n$ since the reference function $r$ is Legendre with domain $\mathbb{R}_{++}^n$ (see~\cite[Section~2.1]{Bauschke_17} for details). Therefore we have $z^k\in\mathsf{ri}\,\Delta_n$ for all $k\ge 0$.
To efficiently solve \eqref{eq:equiv_BP}, we used the approach in~\cite[pp.\ 341-342]{Lu_18}, which reduces \eqref{eq:equiv_BP} to finding the unique positive root of a strictly decreasing univariate function on $(0,+\infty)$.
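For completeness, the following is a reconstruction of one standard way to carry out this reduction (we do not claim it is identical to the procedure in~\cite{Lu_18}): the optimality conditions of \eqref{eq:equiv_BP} give $z_i(\mu) = z_i^k/(1-\alpha_k z_i^k(g_i-\mu))$ with $g=\nabla L(z^k)$, where the multiplier $\mu$ is the unique root of the strictly decreasing map $\mu\mapsto\sum_i z_i(\mu)-1$ on its domain.
\begin{verbatim}
# One RSGM update on the simplex by univariate root-finding (a reconstruction;
# details may differ from the procedure of Lu et al. cited in the text).
import numpy as np
from scipy.optimize import brentq

def rsgm_step(z, g, alpha):
    """z = z^k (in ri Delta_n), g = grad L(z^k), alpha = step-size."""
    def z_of_mu(mu):
        return z / (1.0 - alpha * z * (g - mu))      # stationarity condition
    lo0 = np.max(g - 1.0 / (alpha * z))              # pole of the largest term
    lo = lo0 + 1e-12 * max(1.0, abs(lo0))            # just above the pole: sum is huge
    hi = lo + 1.0
    while np.sum(z_of_mu(hi)) > 1.0:                 # expand bracket until the sum < 1
        hi = lo + 2.0 * (hi - lo)
    mu = brentq(lambda t: np.sum(z_of_mu(t)) - 1.0, lo, hi)
    return z_of_mu(mu)
\end{verbatim}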
\begin{algorithm}[t!]
\caption{RSGM for solving the PET problem~\eqref{eq:PET_final}} \label{algo:RSGM}
\begin{algorithmic}
\State {\bf Input}: Starting point $z^0\in\mathsf{ri}\,\Delta_n := \{z>0:\sum_{i=1}^n z_i = 1\}$.
\State {\bf At iteration $k$:}
\begin{enumerate}
{\setlength\itemindent{10pt} \item {Compute $\nabla L(z^k) = \sum_{j=1}^m (Y_j/\lranglet{p_j}{z^k}) p_j$, where $p_j$
is the vector corresponding to the $j$-th column of $P$ for $j\in[m]$\ .}}
{\setlength\itemindent{10pt} \item {Choose step-size $\alpha_k>0$ and compute}
\begin{align}
z^{k+1} &= {\argmax}_{z\in\Delta_n}\; \lranglet{\nabla L(z^k)}{z} - \alpha_k^{-1} D_r(z,z^k) \ .
\label{eq:equiv_BP}
\end{align}
}
\end{enumerate}
\end{algorithmic}
\end{algorithm}
\vspace{1ex}
\noindent 2.\ {\tt RSGM-LSBack}~\cite{Ston_20}.
This method is a version of the relatively smooth gradient method shown in Algorithm~\ref{algo:RSGM}, with the extension that a backtracking line-search procedure is employed to compute the local relative-smoothness parameter $\bar Y_k$ at $z^k$ and then the step-size is chosen as $\alpha_k = 1/\bar Y_k$ at each iteration. (The details of this procedure can be found in~\cite[Algorithm~1]{Ston_20}.) Note that depending on the location of $z^k$, $\bar Y_k$ may be (significantly) smaller than the (global) relative-smoothness parameter $\bar{Y}$.
\vspace{1ex}
\noindent 3.\ {\tt EM}~\cite{Cover_84}.
This surprisingly simple algorithm was developed by Cover in 1984 specifically for a problem that is equivalent to the following normalized PET problem (see \cite{Cover_84}):
\begin{equation}
\max \; \bar{L}(z) := \textstyle\sum_{j=1}^m \bar{Y}_j\ln \big(\sum_{i=1}^n p_{ij} z_i\big)\quad \st\;\; {z\in\Delta_n} \ ,
\end{equation}
where $\bar{Y}_j:= Y_j/\sum_{j'=1}^m Y_{j'}>0$ for all $j\in[m]$ (recall that we have assumed without loss of generality that $Y_j\ge 1$ for all $j\in[m]$).
The method starts with $z^0\in \mathsf{ri}\,\Delta_n$ and at each iteration $k$ updates $z^k$ as follows:
\begin{equation}
z^{k+1}_i := z^k_i \nabla_i \bar{L}(z^k) = \sum_{j=1}^m \bar{Y}_j \frac{p_{ij} z^k_i }{\sum_{l=1}^n p_{lj} z_l^k} \ , \quad \forall\,i\in[n] \ . \label{eq:EM}
\end{equation}
Note that since $\bar{Y}_j>0$ for all $j\in[m]$, we easily see that $z^k\in\mathsf{ri}\,\Delta_n$ for all $k\ge 0$.
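In numpy, one step of \eqref{eq:EM} is a single multiplicative update; since $\sum_i z_i^{k+1} = \sum_{j=1}^m \bar Y_j = 1$, the iterates also remain on the simplex.
\begin{verbatim}
# One EM update (eq:EM): P is the n x m matrix of the p_{ij}, Ybar the normalized
# counts, and z the current iterate in the relative interior of the simplex.
import numpy as np

def em_step(P, Ybar, z):
    denom = P.T @ z                    # (sum_l p_{lj} z_l) for every bin j
    return z * (P @ (Ybar / denom))    # z_i <- z_i * grad_i Lbar(z)
\end{verbatim}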
\subsubsection{Results}
We report results for problems of dimensions $n=m=1,000$, as we observed that the algorithms' relative performance was not particularly sensitive to the dimensions $m$ and $n$. We ran the algorithms using two different choices of starting points: (i) $z^0 = z^{\mathrm{bd}}\in \mathsf{ri}\,\Delta_n$ chosen very close to $\mathsf{bd}\,\mathbb{R}^n_+$ (the boundary of $\mathbb{R}^n_+$), and (ii) $z^0 = z^{\mathrm{ct}}:=\tfrac{1}{n}e$, which is the barycenter of $\Delta_n$. To determine $z^{\mathrm{bd}}$ we first used a greedy heuristic to determine a low- (or lowest-)cardinality index set $\mathcal{I}\subseteq [n]$
for which $\sum_{i\in\mathcal{I}} \, p_{ij}>0$ for all $j\in[m]$, i.e., every column $p_j$ of $P$ has at least one nonzero entry in the rows indexed by $\mathcal{I}$. We define $\bar\delta := 10^{-6}/n$ and then set
$$z^{\mathrm{bd}}_i := \left\{ \begin{array}{ll} \bar\delta & \mbox{for} \ \ i\not\in \mathcal{I} \ , \\ (1 - (n-\abst{\mathcal{I}})\bar\delta)/\abst{\mathcal{I}} \ \ & \mbox{for} \ \ i\in \mathcal{I} \ . \end{array} \right. $$ Note that this ensures $z^{\mathrm{bd}}\in\mathsf{ri}\,\Delta_n$. Similar to Section~\ref{sec:delurring}, we used CVXPY~\cite{Diamond_16} to (approximately) compute the optimal objective function value $L^*$ of~\eqref{eq:PET_final} in order to accurately report optimality gaps. Again, all computations were performed in Python 3.8 on a laptop computer.
Figures~\ref{fig:exp_res1} and \ref{fig:exp_res2} show plots of the empirical optimality gaps $L(z^k) -L^*$ of all five methods with the starting points $z^{\mathrm{bd}}$ and $z^{\mathrm{ct}}$, respectively. From Figure~\ref{fig:exp_res1} we observe the following:
\begin{enumerate}[label = (\roman*)]
\item The relatively smooth gradient methods, namely {\tt RSGM-Fixed} and {\tt RSGM-LSBack}, make very little progress in reducing the empirical optimality gap. For {\tt RSGM-Fixed}, this is because the relative smoothness parameter $\bar{Y} = \sum_{j=1}^m Y_j$ is typically very large, implying that the step-size $1/\bar{Y}$ is very small. Since the starting point $z^{\mathrm{bd}}$ is very close to $\mathsf{bd}\,\mathbb{R}^n_+$, where the local relative smoothness parameter of the objective function $L$ is close to $\bar{Y}$, {\tt RSGM-LSBack} exhibits similar behavior to {\tt RSGM-Fixed}. (In other words, the backtracking line-search does not help much in this case.)
\item The two versions of our generalized Frank-Wolfe methods, namely {\tt FW-Adapt} and {\tt FW-Exact}, outperform the relatively smooth gradient methods. In addition, {\tt FW-Exact} converges faster than {\tt FW-Adapt} initially, but when close to the optimum (for example, when the empirical optimality gap falls below 10), the two methods exhibit the same convergence behavior. Indeed, the same observation was made for the Poisson image de-blurring problem with TV-regularization (see Section~\ref{sec:res_deblurring}).
\item The {\tt EM} algorithm outperforms all the other methods, which is rather surprising at first glance. (In fact, it is unknown from the literature whether the method has any type of non-asymptotic convergence guarantee.) However, we note that this method, which uses a multiplicative form of update (see equation~\eqref{eq:EM}), is specifically designed for problems with the PET problem structure in~\eqref{eq:PET_final}, and it is not clear how to suitably generalize it to more general problems of the form $(P)$, such as the Poisson image de-blurring problem with TV-regularization in~\eqref{eq:deblurring_TV}.
\end{enumerate}
Figure~\ref{fig:exp_res2} shows the performance of the five algorithms when $z^0 = z^{\mathrm{ct}}$ is the barycenter. We observe that the performance of the methods is mostly similar to when started at $z^{\mathrm{bd}}$, except for one significant difference: {\tt RSGM-LSBack} exhibits significantly faster convergence than when started at $z^{\mathrm{bd}}$. Indeed, {\tt RSGM-LSBack} outperforms all the other general-purpose methods, namely {\tt RSGM-Fixed}, {\tt FW-Adapt} and {\tt FW-Exact}. This is likely because, on the ``central region'' of $\Delta_n$, the local relative smoothness parameter of $L$ is much smaller than the global bound $\bar{Y}$, so {\tt RSGM-LSBack} is able to take much larger steps. This also indicates that the convergence behavior of {\tt RSGM-LSBack} is likely to be sensitive to the choice of starting point, which is not the case for the other methods (including our generalized Frank-Wolfe methods).
\begin{figure}[t!]\centering
\subfloat[Time plot: $n=1000$, $m = 1000$]{\includegraphics[width=.48\linewidth]{PET_m_1000_n_1000_1e-06_time.png}}\hfill
\subfloat[Iteration plot: $n=1000$, $m = 1000$]{\includegraphics[width=.48\linewidth]{PET_m_1000_n_1000_1e-06_itr.png}}
\caption{Comparison of optimality gaps of {\tt FW-Adapt} ({\tt FW-A}), {\tt FW-Exact} ({\tt FW-E}), {\tt RSGM-Fixed} ({\tt RSGM-F}), {\tt RSGM-LSBack} ({\tt RSGM-LS}) and {\tt EM}, with $z^0 = z^{\mathrm{bd}}$.}\label{fig:exp_res1}
\end{figure}
\begin{figure}[t!]\centering
\subfloat[Time plot: $n=1000$, $m = 1000$]{\includegraphics[width=.48\linewidth]{PET_m_1000_n_1000_1_time.png}}\hfill
\subfloat[Iteration plot: $n=1000$, $m = 1000$]{\includegraphics[width=.48\linewidth]{PET_m_1000_n_1000_1_itr.png}}
\caption{Comparison of optimality gaps of {\tt FW-Adapt} ({\tt FW-A}), {\tt FW-Exact} ({\tt FW-E}), {\tt RSGM-Fixed} ({\tt RSGM-F}), {\tt RSGM-LSBack} ({\tt RSGM-LS}) and {\tt EM}, with $z^0 = z^{\mathrm{ct}}$.}\label{fig:exp_res2}
\end{figure}
\section{Introduction}
Conversational Machine Reading Comprehension~(CMRC) has been studied extensively over the past few years within the natural language processing (NLP) community~\citep{zhu2018sdnet,liu2019roberta,yang2019xlnet}. Different from traditional MRC tasks, CMRC aims to enable models to learn representations of the context paragraph and the multi-turn dialogue. Existing approaches to conversational question answering (QA) tasks~\citep{huang2018flowqa,devlin2018bert,xu2019review,gong-etal-2020-recurrent} have achieved superior performance on several benchmark datasets, such as QuAC~\citep{choi2018quac} and CoQA~\citep{elgohary2018dataset}. However, few studies have investigated CMRC in both spoken content and text documents.
To incorporate spoken content into machine comprehension, only a few public datasets evaluate the effectiveness of models in spoken question answering (SQA) scenarios. TOEFL listening comprehension~\citep{tseng2016towards} is one of the related corpora for this task; it is an English test designed to evaluate the English language proficiency of non-native speakers. However, its multiple-choice question answering setting and limited scale make it insufficient for training robust SCQA models. The other two spoken question answering datasets are Spoken-SQuAD~\citep{li2018spoken} and ODSQA~\citep{lee2018odsqa}. However, in these datasets there is usually no connection between the successive questions and answers over the same spoken passage. More importantly, the most common way people seek or test their knowledge is via human conversations, which capture and maintain the common ground in spoken and text context from the dialogue flow. There are many real-world applications related to SCQA tasks, such as voice assistants and chat robots.
In recent years, neural network based methods have achieved promising progress in the speech processing domain. Most existing works first select a feature extractor~\citep{gao2019dynamic} and then feed the feature embeddings into a state-of-the-art learning framework, as used in single-turn spoken language processing tasks such as speech retrieval~\citep{lee2015spoken,fan2020spoken,karakos2020reformulating}, translation~\citep{berard2016listen,di2020instance,tu2020end} and recognition~\citep{zhang2019very,zhou2018syllable,bruguier2019phoebe,siriwardhana2020jointly}. However, simply adopting existing methods for SCQA tasks raises several challenges. First, transforming speech signals into ASR transcriptions inevitably introduces ASR errors (see Table~\ref{fig3}). Previous work~\citep{lee2019mitigating} shows that directly feeding the ASR output to the downstream modules usually causes significant performance loss, especially in SQA tasks. Second, when speech corresponds to a multi-turn conversation~(e.g., lectures, interviews, meetings), the discourse structure exhibits more complex correlations between questions and answers than that of a monologue. Third, additional information, such as audio recordings, contains potentially valuable information in spoken form, and many QA systems may leverage this kind of orality to generate better representations. Fourth, existing QA models are tailored for a specific (text) domain; for our SCQA tasks, it is crucial to guide the system to learn this kind of orality in documents.
\begin{figure}
\begin{center}
\includegraphics[width=1\columnwidth] {flowgram.pdf}
\caption{An illustration of the flow diagram for spoken conversational question answering tasks, with an example from our proposed Spoken-CoQA dataset.}
\label{fig1}
\end{center}
\end{figure}
\begin{table}[t]
\caption{Comparison of Spoken-CoQA with existing spoken question answering datasets.}
\begin{center}
\begin{tabular}{lccc}
\hline
\multicolumn{1}{c}{\bf Dataset} &\multicolumn{1}{c}{\bf Conversational} &\multicolumn{1}{c}{\bf Spoken} &\multicolumn{1}{c}{\bf Answer Type}
\\ \hline
TOEFL \citep{tseng2016towards} & $\times$ & $\surd$ & Multi-choice \\
Spoken-SQuAD \citep{li2018spoken} & $\times$ & $\surd$& Spans \\
ODSQA \citep{lee2018odsqa}& $\times$ & $\surd$ & Spans \\
\hline
Spoken-CoQA & $\surd$ & $\surd$ & Free-form text, Unanswerable \\
\hline
\end{tabular}
\end{center}
\label{table1}
\end{table}
In this work, we propose a new spoken conversational question answering task - SCQA, and introduce Spoken-CoQA, a spoken conversational question answering dataset for evaluating whether a QA system can tackle question answering over both noisy speech transcripts and text documents. We compare Spoken-CoQA with existing SQA datasets (see Table~\ref{table1}). Unlike existing SQA datasets, Spoken-CoQA is a multi-turn conversational SQA dataset, which is more challenging than single-turn benchmarks. First, every question in Spoken-CoQA depends on the conversation history, which makes it difficult for the machine to parse. Second, errors introduced by the ASR module further degrade the machine's contextual understanding of the context paragraph. To mitigate the effects of speech recognition errors, we then present a novel knowledge distillation (KD) method for spoken conversational question answering tasks. Our intuition is that speech utterances and text contents share a dual nature, and we can take advantage of this property to learn the correspondences between these two forms. We distill this knowledge into the~\textit{student} model, and then guide the~\textit{student} to overcome the bottleneck of noisy ASR outputs to boost performance. Empirical results show that our proposed \textit{DDNet} achieves remarkable performance gains in SCQA tasks. To the best of our knowledge, this is the first work on spoken conversational machine reading comprehension tasks.
In summary, the main contributions of this work are as follows:
\begin{itemize}
\item We propose a new task for machine comprehension of spoken question-answering style conversations. To the best of our knowledge, our Spoken-CoQA is the first spoken conversational machine reading comprehension dataset.
\item We develop a novel end-to-end method based on data distillation to learn from both the speech and text domains. Specifically, we use the model trained on clean transcripts and close-distance recordings to guide the model trained on noisy ASR transcriptions, achieving substantial gains in prediction accuracy.
\item We demonstrate the robustness of our \textit{DDNet} on Spoken-CoQA, and show that the model can effectively alleviate ASR errors in noisy conditions.
\end{itemize}
\begin{figure}
\begin{center}
\includegraphics[width=1\columnwidth] {model.pdf}
\caption{An illustration of the architecture of \textit{DDNet}.}
\label{fig2}
\end{center}
\end{figure}
\section{Related Work}
\textbf{Conversational Machine Reading Comprehension }
In recent years, the natural language processing research community has devoted substantial efforts to conversational machine reading comprehension tasks~\citep{huang2018flowqa,zhu2018sdnet,xu2019review,zhang2019sgnet,gong-etal-2020-recurrent}. Within the growing body of work on conversational machine reading comprehension, two signature attributes have emerged: the availability of large benchmark datasets~\citep{choi2018quac, elgohary2018dataset,reddy2019coqa} and pre-trained language models~\citep{devlin2018bert,liu2019roberta,lan2019albert}. However, these existing works typically focus on modeling the complicated context dependency in text form. In contrast, we focus on enabling the machine to build the capability of language recognition and dialogue modeling in both speech and text domains.
\textbf{Spoken Question Answering }
In parallel to the recent works in natural language processing, these trends have also been pronounced in the speech processing (SP) field, where spoken question answering, an extended form of question answering, explores the prospect of machine comprehension in spoken form. Previous work on SQA typically includes two separate modules: automatic speech recognition and text question answering. It entails transferring spoken content to ASR transcriptions and then employing natural language processing techniques to handle the spoken language processing tasks. The existing methods~\citep{tseng2016towards,serdyuk2018towards,su2020improving} focus on optimizing each module in a two-stage manner, where errors in the ASR module lead to severe performance loss. Concurrent with our research,~\cite{serdyuk2018towards} proposes an end-to-end approach for natural language understanding (NLU) tasks, and SpeechBERT~\citep{chuang2019speechbert} cascades BERT-based models into a unified model and trains it in an audio-and-text jointly learned manner. However, the existing SQA methods aim at solving a single question given the related passage, without building and maintaining the connections among different questions within a human conversation.
\textbf{Knowledge Distillation }
\cite{hinton2015distilling} introduces the idea of Knowledge Distillation~(KD)~in a~\textit{teacher-student} scenario: knowledge can be distilled from one model (a massive or~\textit{teacher} model) to another (a small or~\textit{student} model). Previous work has shown that KD can significantly boost prediction accuracy in natural language processing and speech processing~\citep{kim2016sequence,hu2018attention,huang2018knowledge,hahn2019self}, while adopting KD-based methods for SCQA tasks has been less explored. Although we share the same research topic and application, our research direction and methods differ. Previous methods design a unified model for single-turn speech-language tasks, whereas our model explores the prospect of handling SQA tasks. More importantly, we focus on the question raised by the dual nature of speech and text: can spoken conversational dialogues further assist the model in boosting performance? Finally, we incorporate the knowledge distillation framework to distill reliable dialogue flow from the spoken contexts, and utilize the learned predictions to guide the~\textit{student} model to train well on the noisy input data.
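As a point of reference, the standard teacher-student objective of~\cite{hinton2015distilling} can be sketched as follows; the temperature $T$ and mixing weight $\alpha$ are illustrative hyper-parameters, and the distillation strategy actually used in \textit{DDNet} is described in the \textit{DDNet} section below.
\begin{verbatim}
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                    # soft-target term
    hard = F.cross_entropy(student_logits, labels) # ground-truth term
    return alpha * soft + (1.0 - alpha) * hard
\end{verbatim}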
\section{Task Definition}
\subsection{Data Format}
We introduce Spoken-CoQA, a new spoken conversational machine reading comprehension dataset in which the documents are in both spoken and text form. Given the spoken multi-turn dialogues and spoken documents, the task is to answer questions in multi-party conversations. Each example in this dataset is defined as a triple $\{D_{i},Q_{i},A_{i}\}_{i=1}^{N}$, where $Q_{i}=\{q_{i1},q_{i2},...,q_{iL}\}$ and $A_{i}=\{a_{i1},a_{i2},...,a_{iL}\}$ denote the $L$-turn queries and corresponding answers over the passage $D_{i}$, respectively. Given a passage $D_{i}$, the multi-turn history questions~\{${q_{i1},q_{i2},...,q_{iL-1}}$\} and the reference answers \{${a_{i1},a_{i2},...,a_{iL-1}}$\}, our goal is to generate $a_{iL}$ for the given current question $q_{iL}$. In this study, we use the spoken form of questions and documents as the network input for training. Note that questions and documents (passages) in Spoken-CoQA are in both text and spoken forms, and answers are in the text form.
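To make this format concrete, a single Spoken-CoQA example could be organized as in the following sketch; the field names are illustrative assumptions rather than the released schema.
\begin{verbatim}
example = {
    "document_text": "Once upon a time, in a barn near a farm house, ...",
    "document_audio": "story_0001.wav",   # spoken form of the passage
    "document_asr": "Once upon a time in a bar near farm house, ...",
    "turns": [
        {"question_text": "What color was Cotton?",
         "question_audio": "story_0001_q1.wav",
         "question_asr": "What color was caught in?",
         "answer": "white",
         "rationale": "a little white kitten named Cotton."},
        # ... later turns q_i2, ..., q_iL depend on this history
    ],
}
\end{verbatim}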
\subsection{Data Collection}
We detail the procedures to build Spoken-CoQA as follows. First, we select the conversational question-answering dataset CoQA~\citep{reddy2019coqa}, since it is one of the largest public CMRC datasets. CoQA contains around 8k stories (documents) and over 120k questions with answers. The average dialogue length of CoQA is about 15 turns, and the answer is free-form text. In CoQA, the training set and the development set contain 7,199 and 500 conversations over the given stories, respectively. Therefore, we use the CoQA training set as the reference text of our training set and the CoQA development set as the test set in Spoken-CoQA. We then employ the Google text-to-speech system to transform questions and documents in CoQA into the spoken form, and adopt CMU Sphinx to transcribe the resulting spoken content into ASR transcriptions. In this way, we collect more than 40GB of audio data, with a total duration of around 300 hours.
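The corpus-construction pipeline can be sketched as follows; the specific Python packages used here (\texttt{gTTS}, \texttt{pydub} and \texttt{speech\_recognition} with its PocketSphinx backend) are our assumptions about one convenient way to chain Google text-to-speech and CMU Sphinx, and are not prescribed by the dataset.
\begin{verbatim}
from gtts import gTTS
from pydub import AudioSegment
import speech_recognition as sr

def text_to_asr(text, stem="utt"):
    gTTS(text=text, lang="en").save(stem + ".mp3")             # TTS step
    AudioSegment.from_mp3(stem + ".mp3").export(stem + ".wav",
                                                format="wav")
    recognizer = sr.Recognizer()
    with sr.AudioFile(stem + ".wav") as source:
        audio = recognizer.record(source)
    return recognizer.recognize_sphinx(audio)                   # CMU Sphinx

# e.g. text_to_asr("What color was Cotton?") may return a noisy
# transcription such as "what color was caught in".
\end{verbatim}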
For clarity, we provide an example of our Spoken-CoQA dataset in Table~\ref{fig3}, and Figure~\ref{fig:mel} compares spectrograms of samples from the ASR modules. In this example, we observe that, given the text document~(ASR-document), the conversation starts with the question~$Q_1$~(ASR-$Q_1$), and the system is then required to answer $Q_1$ (ASR-$Q_1$) with $A_1$ based on a contiguous text span $R_1$. Compared to the existing benchmark datasets, ASR transcripts~(of both the document and the questions) make it much more difficult for the machine to comprehend the questions, reason over the passage, and predict the correct answer.
\begin{table}[ht]
\caption{An example from Spoken-CoQA. We can observe large misalignment between the manual transcripts and the corresponding ASR transcripts. The misaligned words are marked in~\textbf{bold}.}
\label{fig3}
\begin{center}
\begin{tabular}{ll}
\multicolumn{1}{c}{\bf Manual Transcript} &\multicolumn{1}{c}{\bf ASR Transcript}
\\ \hline
\begin{minipage}[t]{0.45\columnwidth}%
Once upon a time, in a barn near a farm house, there lived a little white kitten named Cotton. Cotton lived high up in a nice warm place above the barn where all of the farmer's horses slept. But Cotton wasn't alone in her little home above the barn, oh no. She shared her hay bed with her mommy and 5 other sisters\dots
\end{minipage}
&
\begin{minipage}[t]{0.45\columnwidth}%
Once upon a time in a \textbf{bar} near farm house, there lived a little~\textbf{like captain} named cotton.~\textbf{How to live} tied up in a nice warm place above the~\textbf{bar} and~\textbf{we're} all of the farmers horses slapped. But~\textbf{happened} was not alone in her little home above the bar~\textbf{in now}. She shared her hey bed with her mommy and five other sisters\dots
\end{minipage}
\\ \\
\begin{minipage}[t]{0.45\columnwidth}%
Q$_{1}$: What color was Cotton? \\
A$_{1}$: white \\
R$_{1}$: a little white kitten named Cotton.
\end{minipage}
&
\begin{minipage}[t]{0.45\columnwidth}%
ASR-Q$_{1}$: What color was~\textbf{caught in}? \\
A$_{1}$: white \\
R$_{1}$: a little white kitten named Cotton.
\end{minipage}
\\ \\
\begin{minipage}[t]{0.45\columnwidth}%
Q$_{2}$: Where did she live? \\
A$_{2}$: in a barn \\
R$_{2}$: in a barn near a farm house, there lived a little white kitten.
\end{minipage}
&
\begin{minipage}[t]{0.45\columnwidth}%
ASR-Q$_{2}$: Where did she live? \\
A$_{2}$: in a barn \\
R$_{2}$: in a barn near a farm house, there lived a little white kitten.
\end{minipage}
\\ \\
\begin{minipage}[t]{0.45\columnwidth}%
Q$_{3}$: Did she live alone? \\
A$_{3}$: no \\
R$_{3}$: Cotton wasn't alone
\end{minipage}
&
\begin{minipage}[t]{0.45\columnwidth}%
ASR-Q$_{3}$: Did she live alone? \\
A$_{3}$: no \\
R$_{3}$: Cotton wasn't alone
\end{minipage}
\end{tabular}
\end{center}
\end{table}
\section{DDNet}
In this section, we detail our data distillation approach, which leverages the dual nature of the speech and text domains to boost prediction accuracy in a spoken dialogue system. An overview of the pipeline is shown in Figure~\ref{fig1}. We first introduce the multi-modality fusion mechanism. Then we present the major components of the CMRC module. Finally, we describe a simple yet effective distillation strategy in the proposed \textit{DDNet} to comprehensively learn feature representations in the speech-text domain.
\subsection{Multi-modality Fusion Mechanism}
\label{subsec:cmm}
Given spoken words $\displaystyle {\bm{S}}$ = $\{\displaystyle{s_1},\displaystyle{s_2},...,\displaystyle{s_n}\}$ and corresponding text words $\displaystyle {\bm{X}}$ = $\{\displaystyle{x_1},\displaystyle{x_2},...,\displaystyle{x_n}\}$, we utilize Speech-BERT and Text-BERT to generate the speech feature embedding~$\displaystyle {\bm{E}}_s$=$\{\displaystyle {\bm{E}}_{s1},\displaystyle ~{\bm{E}}_{s2},...,\displaystyle {\bm{E}}_{sn}\}$ and the context word embedding~$\displaystyle {\bm{E}}_x$=$\{\displaystyle {\bm{E}}_{x1},\displaystyle~ {\bm{E}}_{x2},...,\displaystyle {\bm{E}}_{xn}\}$, respectively. Concretely, we first use vq-wav2vec~\citep{baevski2019vq} to transform the speech signals into a sequence of discrete tokens, analogous to the standard tokenization procedure in natural language processing tasks, and then use Speech-BERT~\citep{chuang2019speechbert}, a BERT-based model variant, to process the speech sequences for training. We re-train Speech-BERT~\citep{chuang2019speechbert} on our Spoken-CoQA dataset. The scale of Speech-BERT is similar to that of the BERT-base~\citep{devlin2018bert} model: it contains 12 transformer layers with residual connections, and the embedding dimension is 768. In parallel, we embed the text context into a sequence of vectors via our text encoder, Text-BERT. We adopt the same architecture as BERT-base~\citep{devlin2018bert} for our Text-BERT due to its superior performance.
\textbf{Cross Attention }
Inspired by ViLBERT~\citep{lu2019vilbert}, we apply the co-attention transformer layer~\citep{lu2019vilbert}, a variant of Self-Attention~\citep{vaswani2017attention}, as the Cross Attention module for speech and text embedding fusion. We pass the query, key, and value matrices~($\displaystyle {\mathbf{Q}}$, $\displaystyle {\mathbf{K}}$, $\displaystyle {\mathbf{V}}$) as input to the Cross Attention module, and compute the cross attention-pooled features by querying one modality with the $\displaystyle {\mathbf{Q}}$ vectors from the other modality:
\begin{equation}
\hat{\displaystyle {\bm{E}}}{_s^{cross}}=CrossAttention(\displaystyle {\bm{E}}_s,\displaystyle {\bm{E}}_x,\displaystyle {\bm{E}}_x)
\end{equation}
\begin{equation}
\hat{\displaystyle {\bm{E}}}{_x^{cross}}=CrossAttention(\displaystyle {\bm{E}}_x,\displaystyle {\bm{E}}_s,\displaystyle {\bm{E}}_s)
\end{equation}
Finally, we obtain the aligned cross attention embedding~$\displaystyle {\bm{E}}_{cross}$ by concatenating~$\hat{\displaystyle {\bm{E}}}{_s^{cross}}$ and $\hat{\displaystyle {\bm{E}}}{_x^{cross}}$.
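A minimal PyTorch-style sketch of this fusion step is given below. It is only an illustration under simplifying assumptions: a single multi-head attention layer per direction stands in for the full co-attention transformer layer of ViLBERT (which also contains feed-forward and residual sub-layers), and the class name, dimensions, and number of heads are ours.
\begin{verbatim}
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    # Each modality queries the other; the two attended outputs are concatenated.
    def __init__(self, dim=768, num_heads=8):
        super().__init__()
        self.speech_queries_text = nn.MultiheadAttention(dim, num_heads,
                                                         batch_first=True)
        self.text_queries_speech = nn.MultiheadAttention(dim, num_heads,
                                                         batch_first=True)

    def forward(self, e_s, e_x):
        # e_s: speech embeddings (B, n, dim); e_x: text embeddings (B, n, dim)
        e_s_cross, _ = self.speech_queries_text(query=e_s, key=e_x, value=e_x)
        e_x_cross, _ = self.text_queries_speech(query=e_x, key=e_s, value=e_s)
        return torch.cat([e_s_cross, e_x_cross], dim=-1)   # aligned E_cross
\end{verbatim}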
\subsection{Key Components}
\label{subsec:cmrc}
We build our CMRC module based on recent works~\citep{zhu2018sdnet,huang2017fusionnet}. We divide the CMRC module into three key components: the Encoding Layer, the Attention Layer, and the Output Layer.
\textbf{Encoding Layer }
We encode documents and conversations~(questions and answers) into the corresponding feature embeddings~(e.g., character embedding, word embedding, and contextual embedding), and then concatenate the output contextual embedding with the aligned cross attention embedding~$\displaystyle {\bm{E}}_{cross}$ and pass the result to the following layers:
\begin{equation}
\hat{\displaystyle {\bm{E}}}_{enc}=[\displaystyle {\bm{E}}_{enc};\displaystyle {\bm{E}}_{cross}]
\end{equation}
\textbf{Attention Layer }
We compute the attention on the context representations of the documents and questions, and extensively exploit correlations between them. Note that we adopt the default attention layers in four baseline models.
\textbf{Output Layer }
After obtaining attention-pooled representations, the Output Layer computes the probability distribution of the start and end index within the entire documents and predicts an answer to the current question.
\subsection{Knowledge Distillation}
\label{subsec:kd}
For prior speech-language models, the only guidance is the standard training objective, which measures the difference between the prediction and the reference answer. However, such a criterion is poorly suited to noisy ASR transcriptions. To tackle this issue, we distill knowledge from our~\textit{teacher} model and use it to guide the~\textit{student} model to learn contextual features in our spoken CMRC task. Concretely, we set the model trained on the speech documents and text corpus as the~\textit{teacher} model and the one trained on the ASR transcripts as the~\textit{student} model, respectively. Thus, the~\textit{student} trained on low-quality data learns to absorb the knowledge that the~\textit{teacher} has discovered.
Formally, let $z_{S}$ and $z_{T}$ be the prediction vectors of the~\textit{student} and~\textit{teacher} models, respectively; the objective is defined as:
\begin{equation}
L = \sum_{x\in \mathcal{X}} (\alpha \tau^2 \mathcal{KL}(p_{\tau}(z_{S}), p_{\tau}(z_{T})) + (1 - \alpha) \mathcal{XE}(z_{S}, y) ),
\end{equation}
where $\mathcal{KL}(\cdot)$ and $\mathcal{XE}(\cdot)$ denote the Kullback-Leibler divergence and the cross entropy, respectively. $y$ represents the ground truth labels in the text training dataset~$\displaystyle {\bm{X}}$. $p_{\tau}(\cdot)$ refers to the softmax function with temperature $\tau$, and $\alpha$ is a balancing factor.
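For concreteness, a minimal PyTorch-style sketch of this objective is given below, with the defaults $\alpha=0.9$ and $\tau=2$ used in our experiments; it assumes the prediction vectors are given as a single logit vector per example, and the function and variable names are illustrative.
\begin{verbatim}
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      tau=2.0, alpha=0.9):
    # Soft term: KL between temperature-scaled student and teacher
    # predictions, rescaled by tau^2 as in the objective above.
    soft_student = F.log_softmax(student_logits / tau, dim=-1)
    soft_teacher = F.softmax(teacher_logits / tau, dim=-1)
    kd_term = F.kl_div(soft_student, soft_teacher,
                       reduction="batchmean") * tau ** 2
    # Hard term: standard cross-entropy against the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term
\end{verbatim}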
\section{Experiments and Results}
In this section, we first introduce several state-of-the-art language models as our baselines, and then evaluate the robustness of these models on our proposed Spoken-CoQA dataset. Finally, we provide a thorough analysis of different components of our method. Note that we use the default setting in all the evaluated methods.
\subsection{Baselines}
In principle, \textit{DDNet} can utilize any backbone network for SCQA tasks. We choose several state-of-the-art language models (FlowQA~\citep{huang2018flowqa}, SDNet~\citep{zhu2018sdnet}, BERT-base~\citep{devlin2018bert}, ALBERT~\citep{lan2019albert}) as our backbone networks due to their superior performance. To train the~\textit{teacher-student} pairs, we first train the baselines on the CoQA training set and compare their performance on the CoQA dev set and the Spoken-CoQA test set. We then train the baselines on the Spoken-CoQA training set and evaluate them on the CoQA dev set and the Spoken-CoQA test set. We provide quantitative results in Table~\ref{tab:my_label_2}.
\begin{table}[t]
\footnotesize
\caption{Comparison of four baselines~(FlowQA, SDNet, BERT, ALBERT), trained on the CoQA training set (left block) or on the Spoken-CoQA training set (right block), and evaluated on the CoQA dev set and the Spoken-CoQA test set. Note that we denote the Spoken-CoQA test set as S-CoQA test for brevity.}
\centering
\begin{tabular}{l | c c|c c|c c|c c}
&\multicolumn{4}{c}{\textbf{Trained on CoQA}}&
\multicolumn{4}{c}{\textbf{Trained on S-CoQA}}\\
\hline\hline
&\multicolumn{2}{c}{\textbf{CoQA dev}}&
\multicolumn{2}{|c}{\textbf{S-CoQA test}}&\multicolumn{2}{|c|}{\textbf{CoQA dev}}&
\multicolumn{2}{|c}{\textbf{S-CoQA test}}\\
\textbf{Methods} &EM &F1 &EM &F1 & EM&F1 &EM&F1 \\
\hline\hline
FlowQA \citep{huang2018flowqa} & 66.8& 75.1 & 44.1& 56.8 & 40.9& 51.6 & 22.1& 34.7 \\
SDNet \citep{zhu2018sdnet} & 68.1 & 76.9 & 39.5 & 51.2 & 40.1 & 52.5 & 41.5 & 53.1\\
BERT-base~\citep{devlin2018bert} & 67.7 & 77.7 & 41.8 & 54.7 & 42.3 & 55.8 & 40.6 & 54.1\\
ALBERT-base~\citep{lan2019albert} & 71.4&80.6 &42.6& 54.8 & 42.7&56.0 &41.4& 55.2\\
\hline
Average & 68.5 & 77.6 & 42.0 & 54.4 & 41.5 & 54.0 & 36.4 & 49.3\\
\end{tabular}
\label{tab:my_label_2}
\end{table}
\subsection{Experiment Settings}
We use the official BERT~\citep{devlin2018bert} and ALBERT~\citep{lan2019albert} implementations as our starting point for training. We use BERT-base~\citep{devlin2018bert} and ALBERT-base~\citep{lan2019albert}, which both include 12 transformer encoders, and the hidden size of each word vector is 768. BERT and ALBERT utilize BPE as the tokenizer, but FlowQA and SDNet use spaCy~\citep{spacy2} for tokenization. Specifically, when a spaCy~\citep{spacy2} token corresponds to more than one BPE sub-token, we average the BERT embeddings of these BPE sub-tokens to obtain the embedding for the token. For a fair comparison, we use the standard implementations and hyper-parameters of the four baselines for training. The balancing factor $\alpha$ is set to 0.9, and the temperature~$\tau$ is set to 2. For evaluation, we use Exact Match (EM) and $F_1$ score to compare model performance on the test set. Note that in this work each baseline is trained in our local computing environment, which may lead to results different from those on the CoQA leaderboard.
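The sub-token pooling described above can be sketched as follows; this is a minimal illustration, and the tensor names and span bookkeeping are ours.
\begin{verbatim}
import torch

def average_subtoken_embeddings(bpe_embeddings, token_to_bpe_spans):
    # bpe_embeddings: (num_bpe_tokens, dim) tensor of BERT outputs.
    # token_to_bpe_spans: one (start, end) index pair per spaCy token,
    # assumed non-empty.
    pooled = [bpe_embeddings[s:e].mean(dim=0) for (s, e) in token_to_bpe_spans]
    return torch.stack(pooled, dim=0)        # (num_spacy_tokens, dim)
\end{verbatim}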
\subsection{Results}
We compare several~\textit{teacher-student} pairs on the CoQA and Spoken-CoQA datasets. Quantitative results are shown in Table~\ref{tab:my_label_2}. We observe that the average F1 score is 77.6$\%$ when training on CoQA (text documents) and testing on the CoQA dev set. However, when training the models on Spoken-CoQA (ASR transcriptions) and testing on the Spoken-CoQA test set, the average F1 score drops to 49.3$\%$. For FlowQA, the F1 score even drops by 40.4 points. This confirms the importance of mitigating ASR errors, which severely degrade model performance in our tasks.
\begin{table}[]
\caption{Comparison of key components in \textit{DDNet}. We set the model trained on the speech documents and text corpus as the~\textit{teacher} model, and the one trained on the ASR transcripts as the~\textit{student} model.}
\centering
\begin{tabular}{l|cc|cc}
& \multicolumn{2}{|c|}{\textbf{CoQA dev}}&\multicolumn{2}{|c}{\textbf{S-CoQA test}}\\
\textbf{Methods}&EM&F1 &EM&F1\\
\hline\hline
FlowQA \citep{huang2018flowqa}&40.9&51.6&22.1&34.7\\
FlowQA \citep{huang2018flowqa}+ \textbf{Cross Attention}&41.1&52.2&22.5&35.5\\
FlowQA \citep{huang2018flowqa}+ \textbf{Knowledge Distillation} &42.5&53.7&23.9&39.2\\
\hline
SDNet \citep{zhu2018sdnet}&40.1&52.5&41.5&53.1\\
SDNet \citep{zhu2018sdnet}+ \textbf{Cross Attention}&40.4&52.9&41.6&53.4\\
SDNet \citep{zhu2018sdnet}+ \textbf{Knowledge Distillation}&41.7&55.6&43.6&56.7\\
\hline
BERT-base \citep{devlin2018bert}&42.3&55.8&40.6&54.1\\
BERT-base \citep{devlin2018bert}+ \textbf{Cross Attention}&42.4&56.3&40.9&54.5\\
BERT-base \citep{devlin2018bert}+ \textbf{Knowledge Distillation}&44.1& 58.8& 42.8& 57.7\\
\hline
ALBERT-base \citep{lan2019albert}&42.7&56.0&41.4&55.2\\
ALBERT-base \citep{lan2019albert}+ \textbf{Cross Attention}&42.9&56.4&41.6&55.9\\
ALBERT-base \citep{lan2019albert}+ \textbf{Knowledge Distillation}&44.8&59.6&43.9&58.7
\\
\end{tabular}
\label{tab:my_label_3}
\end{table}
Table~\ref{tab:my_label_3} demonstrates that our proposed Cross Attention block and knowledge distillation strategy each consistently improve the performance of all baselines. More importantly, our distillation strategy works particularly well. For FlowQA, our method achieves 53.7$\%$ (vs.\ 51.6$\%$) and 39.2$\%$ (vs.\ 34.7$\%$) in terms of F1 score on the text documents and the ASR transcriptions, respectively. For SDNet, our method outperforms the baseline without the distillation strategy, achieving 55.6$\%$ (vs.\ 52.5$\%$) and 56.7$\%$ (vs.\ 53.1$\%$) in terms of F1 score. For the two BERT-like models (BERT-base and ALBERT-base), our method also improves the F1~scores to 58.8$\%$ (vs.\ 55.8$\%$) and 57.7$\%$ (vs.\ 54.1$\%$), and to 59.6$\%$ (vs.\ 56.0$\%$) and 58.7$\%$ (vs.\ 55.2$\%$), respectively. Such significant improvements demonstrate the effectiveness of \textit{DDNet}.
\section{Quantitative Analysis}
\textbf{Speech Features in the ASR System }
To perform a qualitative analysis of the speech features, we visualize the log-mel spectrogram features and the mel-frequency cepstral coefficient (MFCC) feature embeddings learned by \textit{DDNet} in Figure~\ref{fig:mel}. We can observe how the spectrogram features respond to different sentence examples.
\textbf{Temperature~$\tau$ }
To study the effect of the temperature~$\tau$ (see Section~\ref{subsec:kd}), we conduct additional experiments on the four baselines with temperature~$\tau \in \{1,2,4,6,8,10\}$. All models are trained on the Spoken-CoQA training set and validated on the CoQA dev set and the Spoken-CoQA test set, respectively.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{temperature.pdf}
\caption{Ablation studies of temperature $\tau$ on \textit{DDNet} performance~(FlowQA, SDNet, BERT, ALBERT). Red and blue denote the results on CoQA dev and Spoken-CoQA test set, respectively.}
\label{fig:t}
\end{figure}
As shown in Figure~\ref{fig:t}, when $\tau$ is set to 2, all four baselines achieve their best performance in terms of the F1 and EM metrics.
\textbf{Multi-Modality Fusion Mechanism }
To study the effect of different modality fusion mechanisms, we introduce an alternative fusion mechanism,~\textit{Con Fusion}: we directly concatenate the two output embeddings from the Speech-BERT and Text-BERT models and pass the result to the encoding layer of the following CMRC module. In Table~\ref{tab:my_label_5}, we observe that the Cross Attention fusion mechanism outperforms \textit{Con Fusion} on all four baselines in terms of EM and F1 scores. We further investigate the effect of uni-modal input. Table~\ref{tab:my_label_5} shows that~\textit{text-only} performs better than~\textit{speech-only}. One possible reason is that using speech features alone introduces additional noise. Note that speech-only (text-only) means that we only feed the speech (text) embedding from Speech-BERT (Text-BERT) to the encoding layer of the CMRC module.
\section{Conclusion}
In this paper, we have presented a new spoken conversational question answering task, Spoken-CoQA, for enabling human-machine communication. Unlike existing spoken conversational machine reading comprehension datasets, Spoken-CoQA includes multi-turn conversations and passages in both text and speech form. Furthermore, we propose a data distillation method, which leverages audio-text features to reduce the misalignment between ASR hypotheses and the reference transcriptions. Experimental results show that \textit{DDNet} achieves superior prediction accuracy. In future work, we will further investigate different mechanisms for integrating speech and text content, and propose novel machine learning based networks to mitigate ASR recognition errors and boost the performance of QA systems.
\section{Introduction}
Given the prescribed $Q$-curvature equation on $\mathbb{R}^4$
\begin{equation}\label{eqQ}
\Delta^2 u = K e^{4u}\quad \text{in }\mathbb{R}^4,
\end{equation}
where $K\in L^\infty_{loc}(\mathbb{R}^4)$ is a given function, we say that $u$ is a normal solution to \eqref{eqQ} if $Ke^{4u}\in L^1(\mathbb{R}^4)$ and $u$ solves the integral equation
\begin{equation}\label{eqI}
u(x)=\frac{1}{8\pi^2}\int_{\mathbb{R}^4}\log\(\frac{|y|}{|x-y|}\) K(y)e^{4u(y)}dy +c,
\end{equation}
where $c\in \mathbb{R}$.
It is well known that \eqref{eqI} implies \eqref{eqQ}, while the converse is not true, see e.g. \cite{CC,Lin}.
If the right-hand side of \eqref{eqQ} is slightly more integrable, more precisely if $\log(|\cdot|)Ke^{4u}\in L^1(\mathbb{R}^4)$, then \eqref{eqI} is equivalent to
\begin{equation}\label{eqIbis}
u(x)=\frac{1}{8\pi^2}\int_{\mathbb{R}^4}\log\(\frac{1}{|x-y|}\) K(y)e^{4u(y)}dy +c'.
\end{equation}
We will often use this second version for convenience.
A solution $u$ to \eqref{eqQ} has the geometric property that the conformal metric $e^{2u}|dx|^2$ on $\mathbb{R}^4$ has $Q$-curvature equal to $K$. For this reason equation \eqref{eqQ} has received a lot of attention in the last decades, including lower and higher order analogs, both when $K$ is constant and non-constant, see e.g. \cite{CK,CY, MarClass} and the references therein.
Part of the interest in solutions to \eqref{eqQ} arises from the Nirenberg problem, i.e. the problem of determining whether a given function on a given smooth Riemannian manifold $(M, g)$, usually compact and without boundary, and of dimension $4$ in our case (similar considerations hold in any dimension), can be the $Q$-curvature of a conformal metric $e^{2u}g$. The variational methods or geometric flows used to study such problems usually lead to lack of compactness issues, and upon suitable scaling at a blow-up point one often obtains a solution of \eqref{eqQ}. Moreover, because of a priori gradient and volume bounds, usually such solutions are normal, in the sense of \eqref{eqI}, with $Ke^{4u}\in L^1(\mathbb{R}^4)$, and if the prescribed curvature function is always positive and continuous, a blow-up argument will lead to a normal solution to \eqref{eqQ} with $K$ constant. Such solutions have been studied by \cite{Lin} (in other dimensions by \cite{CC, H-odd, H-clas, JMMX, MarClass, WX} and others), and always take the form, when $K=6$,
\begin{equation}\label{uspherical}
u(x)=\log\(\frac{2\lambda}{1+\lambda^2|x-x_0|^2}\),\quad \lambda>0,\quad x_0\in \mathbb{R}^4.
\end{equation}
More recently, though, Borer, Galimberti and Struwe \cite{BGS} studied a \emph{sign-changing} prescribed Gaussian curvature problem in dimension $2$, and, under the generic assumption that the prescribed curvature has only non-degenerate maxima, their blow-up analysis led possibly to either normal solutions to \eqref{eqQ} with $K>0$ constant, or to normal solutions to
\begin{equation}\label{eqG}
-\Delta u(x)=\(1+ A(x,x)\)e^{2u(x)},\quad \text{in }\mathbb{R}^2,
\end{equation}
where $A$ is a negative definite bilinear map. Later Struwe \cite{StrJEMS} showed that in fact \eqref{eqG} admits no \emph{normal} solutions.
A similar analysis was done by Galimberti \cite{Gal} and Ng\^o-Zhang \cite{NZ} in dimension $4$, which led to normal solutions to \eqref{eqQ} with $K(x)=(1+A(x,x))$, again with $A$ a negative definite bilinear form, assuming that the prescribed curvature $f$ has non-degenerate maxima. In this case, it was not possible to use the ideas of \cite{StrJEMS} to rule out the existence of such normal solutions. On the other hand, in case the prescribed curvature function $f$ has derivatives vanishing up to order $3$ and negative definite $4$-th order derivative, blow-up leads to normal solutions to \eqref{eqQ} with $K(x)=(1+A(x,x,x,x))$ and $A$ a negative definite symmetric $4$-linear map, and in this case Struwe \cite{StrLiou} has recently proven that such solutions do not exist.
Then the non-degenerate case remained open, namely whether solutions to \eqref{eqQ} with $K(x)=(1+A(x,x))$, $A$ bilinear and negative definite, do exist. In this paper we answer this question in the affirmative. In fact, focusing on the case $K(x)=(1-|x|^p)$ for any $p\in (0,4)$, we shall see that \eqref{eqQ} has a normal solution with prescribed total curvature $\Lambda$, if and only if $\Lambda$ lies in a certain range.
More precisely, for {$p>0$} we set
\begin{equation}\label{defLambda*}
\Lambda_{\mathrm{sph}}:=16\pi^2,\quad \Lambda_{*,p}:=(4+p)2\pi^2=\(1+\frac{p}{4}\)\frac{\Lambda_{\mathrm{sph}}}{2}.
\end{equation}
The constant $\Lambda_{\mathrm{sph}}= 6|S^4|$ is the total $Q$-curvature of the sphere $S^4$.
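Indeed, for the spherical solutions \eqref{uspherical}, which have $K=6$, a direct computation gives
\begin{equation*}
\int_{\mathbb{R}^4}6e^{4u}\,dx=6\int_{\mathbb{R}^4}\frac{16\lambda^4\,dx}{(1+\lambda^2|x-x_0|^2)^4}=6\,|S^4|=6\cdot\frac{8\pi^2}{3}=16\pi^2=\Lambda_{\mathrm{sph}},
\end{equation*}
independently of $\lambda$ and $x_0$.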
We start with a non-existence result.
\begin{thm}\label{thm0} Fix $p>0$. For any $\Lambda\in ({-\infty},\Lambda_{*,p})\cup[\Lambda_{\mathrm{sph}},\infty)$ the problem
\begin{align}\label{eq-0}
\Delta^2 u=(1-|x|^p)e^{4u}\quad\text{in }\mathbb{R}^4,\quad \Lambda=\int_{\mathbb{R}^4}(1-|x|^p)e^{4u} dx,
\end{align}
admits no normal solutions. In particular, for $p\ge 4$, Problem \eqref{eq-0} admits no normal solutions.
\end{thm}
Theorem \ref{thm0} is based on a Pohozaev identity (Proposition \ref{poho-3}) and some asymptotic estimates at infinity.
\medskip
Based on a variational approach of A. Chang and W. Cheng \cite{CC}, together with a blow-up argument, we shall then prove the following existence result.
\begin{thm}\label{thm1} Let $p\in (0,4)$ be fixed. Then for every $\Lambda \in (\Lambda_{*,p},\Lambda_{\mathrm{sph}})$ there exists a (radially symmetric) \emph{normal} solution to Problem \eqref{eq-0}.
Such solutions (in fact, every normal solution to \eqref{eq-0}) have the asymptotic behavior
\begin{equation}\label{asymp}
u(x)= -\frac{\Lambda}{8\pi^2}\log|x| + C +O(|x|^{-\alpha}), \quad\text{as }|x|\to\infty,
\end{equation}
for every $\alpha\in [0,1]$ such that $\alpha<\frac{\Lambda-\Lambda_{*,p}}{2\pi^2}$, and
\begin{equation}\label{asymp3}
|\nabla^\ell u(x)|=O(|x|^{-\ell}), \quad \text{as }|x|\to\infty, \quad \ell=1,2,3.
\end{equation}
\end{thm}
Theorems \ref{thm0} and \ref{thm1} leave open the case $\Lambda =\Lambda_{*,p}$, which is borderline from the point of view of integrability, in the sense that \eqref{asymp} is compatible with the integrability condition in $\eqref{eq-0}$ if $\Lambda>\Lambda_{*,p}$, but for $\Lambda=\Lambda_{*,p}$, \eqref{asymp} degenerates to
$$-\frac{\Lambda+o(1)}{8\pi^2}\log|x| +O(1)\le u(x)\le -\frac{\Lambda}{8\pi^2}\log|x| + O(1), \quad\text{as }|x|\to\infty,$$
(see Lemma \ref{lemasymp} and Lemma \ref{lemasymp2}), which is not incompatible with the integrability of $(1-|x|^p)e^{4u}$.
We shall study the case $\Lambda=\Lambda_{*,p}$ from the point of view of compactness: while solutions of Theorem \ref{thm1} must necessarily blow up as $\Lambda\uparrow \Lambda_{\mathrm{sph}}$ (see Theorem \ref{thm2}), we find that such solutions remain compact as $\Lambda\downarrow\Lambda_{*,p}$.
\begin{thm}\label{thm1b}
Fix $p\in (0,4)$. Given any sequence $(u_k)$ of radial normal solutions to \eqref{eq-0} with $\Lambda=\Lambda_k\in [\Lambda_{*,p}, \Lambda_{\mathrm{sph}})$ and $\Lambda_k\to \bar\Lambda \in [\Lambda_{*,p}, \Lambda_{\mathrm{sph}})$, up to a subsequence $u_k\to \bar u$ locally uniformly, where $\bar u$ is a normal (and radial) solution to \eqref{eq-0} with $\Lambda=\bar\Lambda$.
In particular, choosing $\Lambda_k\downarrow \Lambda_{*,p}$ and $u_k$ given by Theorem \ref{thm1}, we obtain that \eqref{eq-0} has a normal solution $u$ also for $\Lambda=\Lambda_{*,p}$.
Moreover such $u$ satisfies
\begin{equation}\label{asymp2}
u(x) {\le}-\frac{\Lambda_{*,p}}{8\pi^2}\log|x| -\(\frac12+o(1)\)\log\log |x|, \quad \text{as }|x|\to\infty,
\end{equation}
and \eqref{asymp3}.
\end{thm}
\noindent \textbf{Open problem} The solutions given by Theorems \ref{thm1} and \ref{thm1b} are radially symmetric by construction. It is open whether all normal solutions are radially symmetric (compare to \cite{Lin}), and whether they are unique, for every given $\Lambda\in [\Lambda_{*,p},\Lambda_{\mathrm{sph}})$.
\medskip
\noindent\textbf{Open problem} Is the inequality in \eqref{asymp2} actually an equality (see Lemma \ref{lem-4.6} for a sharper version of \eqref{asymp2})?
\medskip
The proof of Theorem \ref{thm1b} relies on blow-up analysis and quantization, as studied in \cite{Rob}, \cite{MarOpen}, which implies that in case of loss of compactness the total $Q$-curvature converges to $\Lambda_{\mathrm{sph}}$, which is a contradiction if $\Lambda_k \to \bar \Lambda\in [\Lambda_{*,p},\Lambda_{\mathrm{sph}})$. An important part of this argument is to rule out loss of curvature at infinity, see Lemma \ref{lemLambdabar}.
On the other hand, as $\Lambda\uparrow \Lambda_{\mathrm{sph}}$, the non-existence result of Theorem \ref{thm0} leaves open only two possibilities: loss of curvature at infinity, or loss of compactness. In the next theorem we show that the second case occurs.
\begin{thm}\label{thm2} Fix $p\in (0,4)$ and let $(u_k)$ be a sequence of radial normal solutions of \eqref{eq-0} with $\Lambda=\Lambda_k\uparrow \Lambda_{\mathrm{sph}}$ (compare to Theorem \ref{thm1}) as $k\to\infty$. Then
\begin{equation}\label{concentration}
(1-|x|^p)e^{4u_k}\rightharpoonup \Lambda_{\mathrm{sph}} \delta_0 \quad \text{as }k\to\infty,
\end{equation}
weakly in the sense of measures, and setting
$$\eta_k(x):= u_k(r_k x) -u_k(0)+\log 2,\quad r_k:=12 e^{-u_k(0)},$$ we have
\begin{equation}\label{convetak}
\eta_k(x)\to \log\(\frac{2}{1+|x|^2}\)\quad \text{locally uniformly in } \mathbb{R}^4.
\end{equation}
\end{thm}
Having addressed the case $K(x)=1-|x|^p$, we now analyze the case $K(x)=1+|x|^p$.
Similar to Theorems \ref{thm0}, \ref{thm1} and \ref{thm1b}, one can ask for which values of $\Lambda$ problem
\begin{align}\label{eq-positive}
\Delta^2u=(1+|x|^p)e^{4u}\quad\text{in }\mathbb{R}^4,\quad \Lambda:=\int_{\mathbb{R}^4} (1+|x|^p)e^{4u}dx<\infty
\end{align}
admits a normal solution. The following result gives a complete answer for $p\in (0,4]$ and a partial answer for $p>4$. Let $\Lambda_{*,p}$ be as in \eqref{defLambda*}.
\begin{thm}\label{thm-positive2}
For $p\in (0,4]$, Problem \eqref{eq-positive} has a normal solution if and only if
\begin{align} \label{nece0}
\Lambda_{\mathrm{sph}} <\Lambda<2\Lambda_{*,p}.
\end{align}
For $p>4$
\begin{align} \label{nece}
\Lambda_{*,p} <\Lambda<2\Lambda_{*,p}
\end{align}
is a necessary condition for the existence of normal solutions to \eqref{eq-positive}, and there exists $\varepsilon_p>0$ such that
\begin{align} \label{nece2}
\Lambda_{*,p}+\varepsilon_p <\Lambda<2\Lambda_{*,p}
\end{align}
is a necessary condition for the existence of \emph{radial} normal solutions to \eqref{eq-positive}. Finally, for $p>4$ and for every
\begin{align}\label{suffi}
\frac{p\Lambda_{\mathrm{sph}}}4 <\Lambda<2\Lambda_{*,p}
\end{align}
there exists a radially symmetric normal solution to \eqref{eq-positive}.
\end{thm}
While the necessary conditions \eqref{nece0}-\eqref{nece} follow from the Pohozaev identity, the existence part and the more restrictive condition \eqref{nece2} are based on blow-up analysis. To study blow-up at the origin we use again the methods of \cite{Rob} and \cite{MarOpen}, and to avoid vanishing of curvature at infinity, which can be seen as a form of blow-up at infinity or, equivalently, as blow-up of the Kelvin transform at the origin, we will use the blow-up analysis and classification result of \cite{HMM} for normal solutions of \eqref{eqQ} with $K(x)=|x|^p$.
\medskip
\noindent\textbf{Open problems} In the case $p>4$ it is not known whether the condition \eqref{suffi} is also necessary. This is also related to the problem of uniqueness/multiplicity of solutions to \eqref{eq-positive} for a given $\Lambda$ (open also in the case $p\in (0,4]$), and to the problem of the existence of a minimal value of $\Lambda$ for which \eqref{eq-positive} admits a solution, in analogy with Theorem \ref{thm1b}.
\medskip
\noindent\textbf{Open problem} Every \emph{radial} solution to \eqref{eqQ} with $K(x)=1- |x|^p$, $p>0$ must have finite total curvature, namely $Ke^{4u}\in L^1(\mathbb{R}^4)$, see Proposition \ref{propfinite}. The same happens when $K=const>0$, but in this case Albesino \cite{Alb} has recently proven the existence of non-radial solutions $u$ with $Ke^{4u}\not\in L^1(\mathbb{R}^4)$. It would be interesting to see whether there exist non-radial solutions to \eqref{eqQ} with infinite total $Q$-curvature also in the case $K(x)=1-|x|^p$.
\medskip
\noindent\textbf{Open problem} Inspired by \cite{HD, HVol,MarVol}, can one find (non-normal) solutions to \eqref{eqQ} with $K=(1-|x|^p)$ and arbitrarily large but finite total $Q$-curvature $\Lambda=\int_{\mathbb{R}^4}Ke^{4u}dx$? In the case of $K=(1+|x|^p)$, using the methods from \cite{HMM} it should be possible to prove the upper bound $\Lambda<2\Lambda_{*,p}$ for (not necessarily normal) \emph{radial} solutions.
\section{Some preliminary results and proof of Theorem \ref{thm0}}
We start with a Pohozaev-type identity that will be used several times.
\begin{prop}[Pohozaev identity]\label{poho-3}
{Let $K(x)=(1\pm |x|^p)$} and let $u$ be a solution to the integral equation
\begin{equation*}
u(x)= \frac{1}{8\pi^2}\int_{\mathbb{R}^4} \log\left(\frac{|y|}{|x-y|}\right)K(y) e^{4u(y)} dy + c
\end{equation*}
for some $c\in \mathbb{R}$, with $Ke^{4 u}\in L^1(\mathbb{R}^4)$ and $ |\cdot|^p e^{4 u} \in L^1(\mathbb{R}^4)$. If
\begin{align}\label{cond-poho}
\lim_{R\to\infty} R^{4+p} \max_{|x|=R} e^{4u(x)}=0,
\end{align}
then, denoting $\Lambda:=\int_{\mathbb{R}^4}(1\pm|x|^p)e^{4u(x)} dx,$ we have
\begin{equation}\label{general Poho}
\frac{\Lambda}{\Lambda_{\mathrm{sph}}}\left(\Lambda-\Lambda_{\mathrm{sph}}\right) =\pm \frac{p}{4}\int_{\mathbb{R}^4}|x|^p e^{4u(x)}dx.
\end{equation}
\end{prop}
\begin{proof} Following the proof of Proposition A.1 in \cite{HMM}, we need to show that as $R\to\infty $ $$R\int_{\partial B_R}|x|^pe^{4u}d\sigma\to0\quad\text{and }\int_{|x|\leq R}\int_{|y|\geq R}\frac{|x+y|}{|x-y|} |x|^pe^{4u(x)}|y|^pe^{4u(y)}dydx\to0.$$
By \eqref{cond-poho} the boundary term goes to $0$ as $R\to\infty$. For the double integral term we divide the domain of $B_R^c$ into $B_{2R}^c$ and $B_{2R}\setminus B_R$. Now using $|\cdot|^pe^{4u}\in L^1(\mathbb{R}^4)$ and \eqref{cond-poho} we estimate
\begin{align*}
\int_{|x|\leq R}\int_{R\leq|y|\leq 2R}\frac{|x+y|}{|x-y|} |x|^pe^{4u(x)}|y|^pe^{4u(y)}dydx &= o\(\frac{1}{R^3}\int_{|x|\leq R} |x|^pe^{4u(x)} \int_{R\leq |y| \leq2 R}\frac{ dy}{|x-y|} dx \)\\
&=o(1),\quad \text{as }R\to\infty
\end{align*}
and
\begin{align*} \int_{|x|\leq R}\int_{|y|\geq 2R}\frac{|x+y|}{|x-y|} |x|^pe^{4u(x)}|y|^pe^{4u(y)}dydx &\leq C \int_{|x|\leq R} |x|^pe^{4u(x)} dx \int_{ |y| \geq2 R} |y |^p e^{4u(y)}dy \\ & =o(1),\quad \text{as }R\to\infty.
\end{align*}
\end{proof}
Another basic tool often used is the Kelvin transform.
\begin{prop}\label{PKelvin} Let $u$ be a normal solution to \eqref{eqQ} with $K\in L^\infty_{loc}(\mathbb{R}^4)$ and $Ke^{4u}\in L^1(\mathbb{R}^4)$. Then the function
\begin{equation}\label{defKelvin}
\tilde u(x)= u\(\frac{x}{|x|^2}\)-\alpha \log|x|,\quad \text{for }x\ne 0,\quad \alpha:=\frac{1}{8\pi^2}\int_{\mathbb{R}^4}Ke^{4u}dx,
\end{equation}
satisfies
$$\tilde u(x)=\frac{1}{8\pi^2}\int_{\mathbb{R}^4}\log\(\frac{1}{|x-y|}\)K\(\frac{y}{|y|^2}\)\frac{e^{4\tilde u(y)}}{|y|^{8-4\alpha}} dy +c,$$
namely $\tilde u$ is a normal solution to
$$\Delta^2\tilde u(x) =K\(\frac{x}{|x|^2}\)\frac{e^{4\tilde u}}{|x|^{8-4\alpha}}.$$
\end{prop}
\begin{proof} Starting from \eqref{eqI}, with a change of variables, and using that $|x||y|\left|\frac{x}{|x|^2}-\frac{y}{|y|^2}\right|=|x-y|$, we obtain
\begin{equation}\label{Kelvin}
\begin{split}
\tilde u(x)&=\frac{1}{8\pi^2}\int_{\mathbb{R}^4}\log\(\frac{|y|}{|x|\left|\frac{x}{|x|^2}-y\right|}\)K\(y\)e^{4 u(y)}dy+c\\
&=\frac{1}{8\pi^2}\int_{\mathbb{R}^4}\log\(\frac{1}{|x||y|\left|\frac{x}{|x|^2}-\frac{y}{|y|^2}\right|}\)K\(\frac{y}{|y|^2}\)\frac{e^{4\tilde u(y)}}{|y|^{8-4\alpha}}dy+c\\
&=\frac{1}{8\pi^2}\int_{\mathbb{R}^4}\log\(\frac{1}{|x-y|}\)K\(\frac{y}{|y|^2}\)\frac{e^{4\tilde u(y)}}{|y|^{8-4\alpha}}dy+c.
\end{split}
\end{equation}
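Here we have also used the elementary identity
\begin{equation*}
\left|\frac{x}{|x|^2}-\frac{y}{|y|^2}\right|^2=\frac{1}{|x|^2}-\frac{2\,x\cdot y}{|x|^2|y|^2}+\frac{1}{|y|^2}=\frac{|x-y|^2}{|x|^2|y|^2}.
\end{equation*}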
\end{proof}
We now start studying the asymptotic behavior of normal solutions to \eqref{eqQ} under various assumptions.
\begin{lem}\label{lemasymp} Let $u$ solve the integral equation
\begin{equation}\label{eqI2}
u(x)=\frac{1}{8\pi^2}\int_{\mathbb{R}^4}\log\(\frac{|y|}{|x-y|}\) K(y)e^{4u(y)}dy +c,
\end{equation}
where $K(y)\le 0$ for $|y|\ge R_0$, for a given $R_0\ge 0$, and $Ke^{4u}\in L^1$. Then we have
\begin{equation}\label{asymp4}
u(x)\le -\frac{\Lambda}{8\pi^2}\log|x| +O(1),\quad \text{as }|x|\to\infty,
\end{equation}
where
$$\Lambda=\int_{\mathbb{R}^4} K(y)e^{4u(y)}dy\in \mathbb{R} .$$
\end{lem}
\begin{proof} Choose $x$ such that $|x|\ge 2R_0$. Without loss of generality we can assume $R_0\ge 2$. Split $\mathbb{R}^4= A_1\cup A_2\cup A_3$ where
$$A_1=B_{\frac{|x|}{2}}(x),\quad A_2= B_{R_0}(0), \quad A_3=\mathbb{R}^4\setminus(A_1\cup A_2).$$
Using that
$$\log\(\frac{|y|}{|x-y|}\)\ge 0, \quad K(y)\le 0\quad \text{in }A_1,$$
we get
$$\int_{A_1}\log\(\frac{|y|}{|x-y|}\) K(y)e^{4u(y)}dy \le 0.$$
For $y\in A_2$ we have $\log\(\frac{|y|}{|x-y|}\)=-\log|x|+O(1)$ as $|x|\to\infty$, so that
\begin{align*}
\int_{A_2}\log\(\frac{|y|}{|x-y|}\) K(y)e^{4u(y)}dy& =(-\log|x|+O(1))\int_{A_2}Ke^{4u}dy\\
&= -\log|x|\int_{A_2}Ke^{4u}dy+O(1).
\end{align*}
For $|y|\ge R_0\ge 2$ and $|x-y|>\frac{|x|}{2}$ we have $|x-y|\le |x|+|y|\le |x||y|$ so that
$$\int_{A_3}\log\(\frac{|y|}{|x-y|}\) K(y)e^{4u(y)}dy \le - \log |x| \int_{A_3} K(y)e^{4u(y)}dy.$$
Summing up
\begin{align*}
u(x)&\le -\frac{1}{8\pi^2} \log|x|\int_{A_2\cup A_3} K(y)e^{4u(y)}dy+O(1)\\
&\le -\frac{\Lambda}{8\pi^2} \log|x|+O(1),
\end{align*}
where again we used that $K\le 0$ in $A_1$.
\end{proof}
\begin{cor}\label{cornonex} Given $p\in (0,4)$ there is no normal solution to \eqref{eq-0} for $\Lambda\ge \Lambda_{\mathrm{sph}}=16\pi^2$.
\end{cor}
\begin{proof} Assume that $u$ solves \eqref{eq-0} for some $\Lambda\ge \Lambda_{\mathrm{sph}}$. Then, by Lemma \ref{lemasymp}, $u$ satisfies \eqref{asymp4}, which implies that assumption \eqref{cond-poho} in Proposition \ref{poho-3} is satisfied. Then \eqref{general Poho} implies $\Lambda <\Lambda_{\mathrm{sph}}$, a contradiction.
\end{proof}
\begin{lem}\label{lemasymp2} Given $p>0$ let $u$ be a normal solution to \eqref{eq-0} for some $\Lambda\in \mathbb{R}$. Then $\Lambda\ge \Lambda_{*,p}$ and
\begin{equation}\label{asymp5}
u(x)= -\frac{\Lambda +o(1)}{8\pi^2}\log|x|\quad \text{as }|x|\to \infty.
\end{equation}
\end{lem}
\begin{proof}
We start by proving \eqref{asymp5}. We write $u=u_1+u_2$, where
$$u_2(x)=-\frac{1}{8\pi^2}\int_{B_1(x)}\log\frac{1}{|x-y|}|y|^p e^{4u(y)}dy.$$
Then we have
$$u_1(x)= -\frac{\Lambda +o(1)}{8\pi^2}\log|x|\quad \text{as }|x|\to \infty.$$
We now claim that $\frac{\Lambda}{2\pi^2}>p$. Assuming the claim for the moment, we have that $|y|^pe^{4u_1(y)}\leq C$ on $\mathbb{R}^4$. From this, and since $u_2\leq 0$, we easily get that $$|u_2(x)|\leq C\int_{B_1(x)}\log\frac{1}{|x-y|}dy\leq C,$$
hence \eqref{asymp5} is proven.
In order to prove the claim, given $R\gg 1$ and $|x|\geq R+1$ we write $$-u_2(x)= \int_{B_R^c}h(R)\log\frac{1}{|x-y|}\chi_{|x-y|\leq 1}d\mu(y),\quad d\mu(y)=\frac{|y|^pe^{4u}}{\int_{B_R^c}|y|^pe^{4u}dy} dy,$$ where $$h(R)=\frac{1}{8\pi^2}\int_{B_R^c}|y|^pe^{4u}dy=o_R(1)\xrightarrow{R\to\infty}0.$$
By Jensen's inequality and Fubini's theorem we obtain
\begin{align*} \int_{R+1<|x|<2R}e^{-4u_2}dx\leq \int_{B_R^c} \int_{R+1<|x|<2R} \left(1+\frac{1}{|x-y|^{4h(R)}}\right) dx d\mu(y)\leq CR^4. \end{align*}
Therefore, by H\"older inequality $$R^4\approx \int_{R+1<|x|<2R}e^{2u_2}e^{-2u_2}dx\leq CR^2\left( \int_{R+1<|x|<2R}e^{4u_2}dx\right)^\frac12.$$ If $\frac{\Lambda}{2\pi^2}\leq p$, then we have that $|y|^pe^{4u_1(y)}\geq \frac{1}{|y|} $ for $|y|$ large.
Hence, $$o_R(1)= \int_{R+1<|x|<2R}|x|^pe^{4u_1}e^{4u_2}dx\gtrsim \frac1R \int_{R+1<|x|<2R} e^{4u_2}dx,$$ a contradiction.
Now that \eqref{asymp5} is proven, we have that $\Lambda< \Lambda_{*,p}$ contradicts $(1-|x|^p)e^{4u}\in L^1(\mathbb{R}^4)$, hence we must have $\Lambda\ge \Lambda_{*,p}$.
\end{proof}
\begin{lem}\label{lem-gen}
Let $w\in C^0(B_1\setminus \{0\})$ be given by
$$w(x)=\int_{B_1}\log\(\frac{1}{|x-y|}\) f(y)dy,$$
for some nonnegative $f\in L^1(B_1)$. If $w(x_k)=O(1)$ for some $x_k\to 0$ then
$$\int_{B_1}\log\(\frac{1}{|y|}\)f(y)dy<\infty.$$
\end{lem}
\begin{proof}
Let $x_k\to 0$ be such that $w(x_k)=O(1)$ as $k\to\infty$. Then we have
\begin{align*}
O(1)=\int_{B_1} \log\(\frac{1}{|x_k-y|}\) f(y)dy&\geq O(1)+\int_{2|x_k|\leq |y|\leq 1}\log\(\frac{1}{|x_k-y|}\)f(y)dy \\&=O(1)+\int_{2|x_k|\leq |y|\leq 1}\log\(\frac{1}{|y|}\) f(y)dy.
\end{align*}
The lemma follows by taking $k\to\infty$.
\end{proof}
\begin{lem}\label{lemmatildeu} Let $u$ be a normal solution to \eqref{eq-0} with $\Lambda =\Lambda_{*,p}$, and let $\tilde u$ as in \eqref{defKelvin} be its Kelvin transform, namely
\begin{equation}\label{defutilde}
\tilde u(x)=u\(\frac{x}{|x|^2}\)-\(1+\frac p4\)\log|x|,\quad x\neq 0.
\end{equation}
Then
\begin{align}\label{lim-0}
\lim_{x\to 0}\tilde u(x)=-\infty, \\
\lim_{x\to 0}\Delta \tilde u(x)= +\infty. \label{lim-1}
\end{align}
\end{lem}
\begin{proof}
Observe that $\sup_{B_1}\tilde u<\infty$ by Lemma \ref{lemasymp}.
To prove \eqref{lim-0} we assume by contradiction that $\tilde u(x_k)=O(1)$ for a sequence $x_k\to 0$.
By Proposition \ref{PKelvin} we have
\begin{equation}\label{Kelvin2}
\tilde u(x)=\frac{1}{8\pi^2}\int_{\mathbb{R}^4}{\log\(\frac{1}{|x-y|}\)}\(1-\frac{1}{|y|^p}\)\frac{e^{4\tilde u(y)}}{|y|^{4-p}}dy+c,\quad x\ne 0.
\end{equation}
Since $\tilde u\leq C$ in $B_1$, from \eqref{defutilde} and the continuity of $u$ it follows that
\begin{equation}\label{35}
\(1+\frac{1}{|y|^p}\)\frac{e^{4\tilde u(y)}}{|y|^{4-p}}\leq \frac{C}{|y|^8}\quad \text{in } B_1^c.
\end{equation}
Then from \eqref{Kelvin2} we obtain
\begin{align}\label{36}
\tilde u(x)= -\frac{1}{8\pi^2}\int_{B_1}\log\left(\frac{1}{|x-y|}\right)\frac{e^{4\tilde u(y)}}{|y|^{4}}dy+O(1),\quad 0<|x|<1.
\end{align}
Then by Lemma \ref{lem-gen} applied to \eqref{36}, and with a change of variables, we get
$$\int_{B_1^c}\log(|y|)|y|^pe^{4u(y)}dy=\int_{B_1}\log\(\frac{1}{|y|}\)\frac{e^{4\tilde u(y)}}{|y|^4}dy<\infty.$$
Then, as $|x|\to\infty$ we obtain
\begin{equation}
\begin{split}
\int_{|y|\le \sqrt{|x|}}(1-|y|^p)e^{4u(y)}dy&=\Lambda_{*,p}-\int_{|y|> \sqrt{|x|}}(1-|y|^p)e^{4u(y)}dy\\
&=\Lambda_{*,p}+O\(\frac{1}{\log(|x|)}\int_{|y|> \sqrt{|x|}}\log(|y|)(1-|y|^p)e^{4u(y)}dy\)\\
&=\Lambda_{*,p}+O\(\frac{1}{\log(|x|)}\).
\end{split}
\end{equation}
Moreover, $u$ can be given by \eqref{eqIbis} with $K=1-|x|^p$. Hence, for $|x|\gg 1$
\begin{align*}
u(x)&=O(1)+\frac{1}{8\pi^2}\left( \int_{|y|\leq \sqrt{|x|}} +\int_{\sqrt{|x|}\leq|y|\leq 2|x|}+\int_{|y|\geq 2|x|} \right) \log\(\frac{1}{|x-y|}\) (1-|y|^p)e^{4 u(y)}dy \\
&\geq O(1)-\frac{\Lambda_{*,p}}{8\pi^2}\(1+O\(\frac{1}{\log|x|}\)\)\log|x|\\
& =-\frac{\Lambda_{*,p}}{8\pi^2}\log|x|+O(1),
\end{align*}
a contradiction to $|\cdot|^pe^{4u}\in L^1(\mathbb{R}^4)$, which completes the proof of \eqref{lim-0}.
To prove \eqref{lim-1} we differentiate \eqref{Kelvin2} and obtain
$$\Delta \tilde u(x)=-\frac{1}{4\pi^2} \int_{\mathbb{R}^4}\frac{1}{|x-y|^2}\(1-\frac{1}{|y|^p}\)\frac{e^{4\tilde u(y)}}{|y|^{4-p}}dy,$$
and as before, we use \eqref{35} to get
\begin{align*}
\Delta \tilde u(x)&=\frac{1}{4\pi^2} \int_{B_1}\frac{1}{|x-y|^2}\(\frac{1}{|y|^p}-1\)\frac{e^{4\tilde u(y)}}{|y|^{4-p}}dy+O(1)\\
&\ge C \int_{B_1}\log\(\frac{1}{|x-y|}\)\frac{e^{4\tilde u(y)}}{|y|^4}dy +O(1)\\
&= -8\pi^2 C \tilde u(x)+O(1) \to \infty \quad \text{as }x\to 0.
\end{align*}
\end{proof}
\begin{prop}\label{propnonex} There exists no normal solution to \eqref{eq-0} for $p\geq4$.
\end{prop}
\begin{proof} Assume by contradiction that there exists a normal solution $u$ to \eqref{eq-0} for some $p\geq 4$. Then necessarily we have that $\Lambda\geq\Lambda_{*,p}$, thanks to \eqref{asymp5}. Now we distinguish the following two cases.
\noindent\textbf{Case 1} $\Lambda>\Lambda_{*,p}$.
Since $u\leq -\frac{\Lambda}{8\pi^2}\log|x|+C$ for $|x|\geq 1$ by Lemma \ref{lemasymp}, we see that $u$ satisfies \eqref{cond-poho}. Hence, by \eqref{general Poho}, $$\Lambda_{\mathrm{sph}}\leq \Lambda_{*,p}<\Lambda<\Lambda_{\mathrm{sph}},$$ a contradiction.
\noindent\textbf{Case 2} $\Lambda=\Lambda_{*,p}$. By Lemma \ref{lemmatildeu} we see that \eqref{cond-poho} is satisfied, and we arrive at a contradiction as in Case 1.
\end{proof}
\noindent\emph{Proof of Theorem \ref{thm0}} Combine Corollary \ref{cornonex}, Lemma \ref{lemasymp2} and Proposition \ref{propnonex}.
\endproof
For $\Lambda=\Lambda_{*,p}$ we obtain a sharper version of \eqref{lim-0} when $u$ is radial.
\begin{lem}\label{lem-4.6}
Given $p\in (0,4)$, let $u$ be a radially symmetric normal solution to \eqref{eq-0} with $\Lambda=\Lambda_{*,p}$. Then
$$\limsup_{|x|\to\infty}\frac{u(x)+(1+\frac p4)\log |x|}{\log\log|x|}=-\frac{1}{2}$$
\end{lem}
\begin{proof}
We set $\tilde u$ as in \eqref{defutilde}, so that it satisfies \eqref{Kelvin2}.
By Lemma \ref{lemmatildeu} we get
$$\lim_{r\to0}\tilde u(r)=-\infty,\quad \lim_{r\to0}\Delta\tilde u(r)=+\infty.$$
In particular, $\tilde u$ is monotone increasing in a small neighborhood of the origin. Using this and \eqref{36} we estimate for $|x|\to 0$
\begin{align*} -\tilde u(x)&\geq \frac{1}{8\pi^2}\int_{2|x|\leq |y|<1}\log\left(\frac{1}{|x-y|}\right)\frac{e^{4\tilde u(y)}}{|y|^{4}}dy +O(1)\\
& = \frac{1}{8\pi^2}\int_{2|x|\leq |y|<1}\log\left(\frac{1}{|y|}\right)\frac{e^{4\tilde u(y)}}{|y|^{4}}dy +O(1) \\
& \geq \frac{e^{4\tilde u(x)}}{8\pi^2}\int_{2|x|\leq |y|<1}\log\(\frac{1}{|y|}\)\frac{ dy}{|y|^{4}} +O(1) \\ &=\frac{e^{4\tilde u(x)}}{8\pi^2}|S^3|\int_{2|x|}^1\frac{\log\frac1t}{t}dt +O(1).
\end{align*}
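The last integral can be computed explicitly:
\begin{equation*}
\int_{2|x|}^1\frac{\log\frac1t}{t}\,dt=\left[-\frac{(\log t)^2}{2}\right]_{2|x|}^{1}=\frac{(\log(2|x|))^2}{2}.
\end{equation*}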
Since $|S^3|=2\pi^2$, we deduce that
$$-\tilde u(x)+O(1)\ge \frac{e^{4\tilde u(x)}}{8} (\log(2|x|))^2,\quad \text{as }x\to 0.$$
Taking the logarithm and rearranging we finally get
$$\limsup_{x \to0}\frac{\tilde u(x)}{\log\log\(\frac{1}{|x|}\)}\leq -\frac12.$$ Next we show that the above limsup is actually $-\frac12$.
We assume by contradiction that the above $\limsup$ is less than $-\frac12$. Then there exists $\varepsilon>0$ such that for $|x|$ small we have $$\tilde u(x)\leq -\(\frac12+\frac\ve4\)\log\log\frac {1}{|x|}.$$ Hence, from \eqref{36} we obtain for $|x|$ small \begin{align*} -\tilde u(x)&\leq C\int_{B_1}\log\left(\frac{1}{|x-y|}\right)\frac{dy}{|y|^4|\log|y||^{2+\varepsilon}}+O(1) \\ &=C(I_1+I_2+I_3)+O(1), \end{align*} where $$I_i=\int_{A_i}\log\frac{1}{|x-y|}\frac{dy}{|y|^4|\log|y||^{2+\varepsilon}},$$ $$A_1=B_{\frac{|x|}{2}}, \quad A_2=B_{2|x|}\setminus B_\frac{|x|}{2},\quad A_3=B_1\setminus B_{2|x|}.$$ One easily gets $$I_1\leq \frac{C}{|\log|x||^\varepsilon},\quad I_2\leq \frac{C}{|\log|x||^{1+\varepsilon}},\quad I_3\leq C,$$ a contradiction to $\tilde u(x)\to-\infty$ as $|x|\to0.$
\end{proof}
\begin{prop}\label{propfinite} Let $u$ be a radial solution to
\begin{equation}\label{eqpm}
\Delta^2 u=(1-|x|^p)e^{4u}\quad\text{in }\mathbb{R}^4,
\end{equation}
for some $p>0$.
Then
\begin{equation}\label{intfinite}
\int_{\mathbb{R}^4}(1+|x|^p)e^{4u}dx<\infty.
\end{equation}
\end{prop}
\begin{proof} If \eqref{intfinite} is false then there exists $R>0$ such that
$$\int_{B_R}(1-|x|^p)e^{4u}dx<0.$$
In particular, $(\Delta u)'(r)<0$ for $r\geq R$. We can consider the following two cases:\\
\noindent \textbf{Case 1:} $\lim_{r\to\infty}\Delta u(r)\geq 0$. Then as $\Delta u$ is monotone decreasing on $(R,\infty)$, we see that $\Delta u>0$ on $(R,\infty)$. Therefore, by \eqref{identity} we get that $u\geq -C$ in $\mathbb{R}^4$. This, \eqref{eqpm} and \eqref{identity} imply that $\Delta u(r)\lesssim -r^{p+2}$ as $r\to\infty$, a contradiction. \\
\noindent \textbf{Case 2:} $\lim_{r\to\infty}\Delta u(r)<0$.
In this case, by \eqref{identity}, we have that $u(r)\lesssim -r^2$ as $r\to\infty$, a contradiction to the assumption $(1+|x|^p)e^{4u}\not\in L^1(\mathbb{R}^4)$.
\end{proof}
\section{Proof of Theorem \ref{thm1}}
\subsection{Existence}
The existence part in Theorem \ref{thm1} will be based on the following result, which will be proven in Section \ref{Sec:Prop2.1} using methods from \cite{CC} and \cite{Lin}.
\begin{prop}\label{prop1} For every $0<\Lambda<\Lambda_{\mathrm{sph}}$ and $\lambda>0$, there exists a radial solution $u_\lambda$ to
\begin{align}
& \Delta^2 u_\lambda=(\lambda-|x|^p)e^{-|x|^2}e^{4u_\lambda}\quad\text{in }\mathbb{R}^4,\label{eq-1}\\
&\int_{\mathbb{R}^4}(\lambda-|x|^p)e^{-|x|^2}e^{4u_\lambda}dx=\Lambda,\label{eqLambda1}
\end{align} which is normal, namely $u_\lambda$ solves the integral equation
\begin{equation}\label{prop1eq1}
u_\lambda(x)=\frac{1}{8\pi^2}\int_{\mathbb{R}^4}\log\(\frac{1}{|x-y|}\)\(\lambda-|y|^p\)e^{-|y|^2}e^{4u_\lambda(y)}dy+c_\lambda,
\end{equation}
for some constant $c_\lambda\in \mathbb{R}$.
\end{prop}
We will often use the identity
\begin{align}\label{identity} w(R)-w(r)=\int_{r}^R\frac{1}{\omega_3t^3}\int_{B_t}\Delta w dxdt,\quad 0\leq r< R,\, w\in C^2 _{rad}(\mathbb{R}^4), \quad \omega_3=2\pi^2,\end{align}
which follows at once from the divergence theorem and the fundamental theorem of calculus.
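Indeed, for radial $w$ the divergence theorem gives
\begin{equation*}
w'(t)=\frac{1}{|\partial B_t|}\int_{\partial B_t}\partial_\nu w\,d\sigma=\frac{1}{\omega_3 t^3}\int_{B_t}\Delta w\,dx,\qquad |\partial B_t|=\omega_3 t^3,
\end{equation*}
and integrating in $t$ from $r$ to $R$ yields \eqref{identity}.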
Let $u_\lambda$ be given as in Proposition \ref{prop1}.
\begin{lem}\label{lem1} For every $\lambda>0$ we have $u_\lambda(x)\downarrow -\infty$ and $\Delta u_\lambda(x)\uparrow 0$ as $|x|\to \infty$.
\end{lem}
\begin{proof} The function
$$r\mapsto \int_{B_r}\Delta^2 u_\lambda(x) dx=\int_{B_r} (\lambda-|x|^p)e^{-|x|^2}e^{4u_\lambda(x)}dx$$
is increasing on $[0,\lambda^\frac 1p]$, and decreasing to $\Lambda$ on $[\lambda^\frac 1p,\infty)$. In particular it is positive for every $r>0$. Then, by \eqref{identity} (applied with $w=\Delta u_\lambda$) we infer that $\Delta u_\lambda(x)$ is an increasing function of $|x|$.
Differentiating under the integral sign from \eqref{prop1eq1} we obtain $$|\Delta u_\lambda(x)|\leq C\int_{\mathbb{R}^4}\frac{1}{|x-y|^2}(1+|y|^p)e^{-|y|^2}dy\xrightarrow{|x|\to\infty}0,$$ where in the first inequality we have used that $u_\lambda\leq C$ on $\mathbb{R}^4$, thanks to Lemma \ref{lemasymp}.
This in turn implies that $\Delta u_\lambda <0$, hence $u_\lambda$ is decreasing by \eqref{identity}. Finally $u_\lambda \to -\infty$ as $|x|\to\infty$ follows from Lemma \ref{lemasymp}.
\end{proof}
\begin{lem}\label{lem2} We have $\lambda e^{4u_\lambda (0)}\to\infty$ as $\lambda\downarrow 0$. \end{lem}
\begin{proof} Assume by contradiction that $\lambda e^{4u_\lambda(0)}\leq C$ as $\lambda\to0$. Then, since $u_\lambda\le u_\lambda(0)$ by Lemma \ref{lem1}, $$\Lambda=\int_{\mathbb{R}^4}(\lambda-|x|^p)e^{-|x|^2}e^{4u_\lambda}dx\leq \int_{B_{\lambda^\frac1p}} \lambda e^{-|x|^2}e^{4u_\lambda (0)}dx\xrightarrow{\lambda\to0}0, $$
which is absurd.
\end{proof}
Now we set
\begin{equation}\label{defetalambda}
\eta_\lambda(x)=u_\lambda(r_\lambda x)-u_\lambda(0),\quad \lambda r_\lambda^4 e^{4u_\lambda(0)}:=1.
\end{equation}
Notice that $r_\lambda\to0$ by Lemma \ref{lem2}.
By definition and Lemma \ref{lem1} we have that
$$\eta_\lambda\leq 0= \eta_\lambda(0),\quad \Delta \eta_\lambda(x)\uparrow 0 \quad\text{as }|x|\to\infty,$$
and by a change of variables in \eqref{prop1eq1} we see that $\eta_\lambda$ is a normal solution to
$$\Delta^2\eta_\lambda=\(1-\frac{r_\lambda^p}{\lambda}|x|^p\)e^{-r_\lambda^2|x|^2}e^{4\eta_\lambda}.$$
With a similar change of variables in \eqref{eqLambda1} we also get
\begin{align}\int_{\mathbb{R}^4}\(1-\frac{r_\lambda^p}{\lambda}|x|^p\)e^{-r_\lambda^2|x|^2}e^{4\eta_\lambda}dx=\Lambda.\label{relation-4}\end{align}
Since $\eta_\lambda\leq 0$, we have
$$0<\Lambda <\int_{B_\frac{\lambda^{1/p}}{r_\lambda}} e^{-r_\lambda^2|x|^2}e^{4\eta_\lambda}dx \le \mathrm{meas}\(B_\frac{\lambda^{1/p}}{r_\lambda}\),$$
which implies that
\begin{equation}\label{rlala}
\limsup_{\lambda\to0}\frac{r_\lambda^p}{\lambda}<\infty.
\end{equation}
\begin{lem}\label{lem3} We have
\begin{equation}\label{limsupDeta}
\limsup_{\lambda\to0}|\Delta \eta_\lambda(0)|<\infty.
\end{equation}
\end{lem}
\begin{proof} Assume by contradiction that
\begin{equation}\label{limsupDeta2}
\limsup_{\lambda\to0}|\Delta \eta_\lambda(0)|=\infty.
\end{equation}
Then, using that $\Delta \eta_\lambda(x)\uparrow 0$ as $|x|\to\infty$ for every $\lambda>0$, there exists $R_\lambda>0$ such that $\Delta \eta_{\lambda}(R_\lambda)=-1$. Then, as $\Delta\eta_\lambda\leq -1$ on $[0,R_\lambda]$, we have that $$\eta_{\lambda}(r)\leq -\frac {1}{8}r^2\quad \text{for } 0\leq r\leq R_\lambda.$$
From this, and using \eqref{identity} and \eqref{rlala} one obtains
\begin{equation*}\begin{split}
\Delta\eta_\lambda(R_\lambda)-\Delta \eta_\lambda(0)&= O\(\int_0^{R_\lambda}\frac{1}{t^3}\int_{B_t}\Delta^2 \eta_\lambda dxdt\) \\
&=O\(\int_0^{R_\lambda}\frac{1}{t^3}\int_{B_t}(1+|x|^p)e^{4\eta_\lambda}dxdt\) =O(1),
\end{split}\end{equation*}
a contradiction to \eqref{limsupDeta2} and the definition of $R_\lambda$.
\end{proof}
Using that $\eta_\lambda(0)=0$ and Lemma \ref{lem3}, together with ODE theory we get that, up to a subsequence,
\begin{equation}\label{convetalambda}
\eta_\lambda\to\eta\quad \text{in } C^4_{loc}(\mathbb{R}^4),\text{ as }\lambda\to 0,
\end{equation}
where the limit function $\eta$ satisfies
$$\Delta^2 \eta=(1-\mu |x|^p)e^{4\eta}\quad\text{in }\mathbb{R}^4,\quad \mu:=\lim_{\lambda\to0}\frac{r_\lambda^p}{\lambda}\in [0,\infty).$$
Notice that at this stage we do not know whether $\eta$ is a normal solution, $\mu>0$, and $\int_{\mathbb{R}^4}(1-|x|^p)e^{4\eta}dx=\Lambda$. This is what we are going to prove next.
\begin{lem}\label{lem-mu0} If $\mu=0$ then $e^{4\eta}\in L^1(\mathbb{R}^4)$. \end{lem}
\begin{proof} It follows from Lemma \ref{lem1} and \eqref{convetalambda} that $\Delta \eta$ is increasing, and $\lim_{r\to\infty}\Delta\eta(r)=:c_0\in(-\infty, 0]$. If $c_0<0$ then $\eta(r)\lesssim-r^2$, and hence $e^{4\eta}\in L^1(\mathbb{R}^4)$. Therefore, if the lemma were false then necessarily we have $c_0=0$ and $e^{4\eta}\not\in L^1(\mathbb{R}^4)$. Then using \eqref{identity} one can show that for any $M>0$ large we have $\Delta\eta(r)\leq -\frac{M}{r^2}$ for $r\gg 1$. This in turn implies that $\eta(r)\leq -2\log r$ for $r\gg 1$, and hence $e^{4\eta}\in L^1(\mathbb{R}^4)$, a contradiction.\end{proof}
\begin{lem} \label{lem-mu} We have $\mu>0$.
\end{lem}
\begin{proof} Assume by contradiction that $\mu=0$. Then $\eta$ is a radial solution to
$$\Delta^2 \eta=e^{4\eta}\quad\text{in }\mathbb{R}^4,$$
with $e^{4\eta}\in L^1(\mathbb{R}^4)$ by Lemma \ref{lem-mu0}. By \cite[Theorem 2.1]{Lin}, either $\eta$ is spherical, namely $\eta(x)=\log\(\frac{2\lambda}{1+\lambda^2|x|^2}\)+\frac{\log 6}{4}$, for some $\lambda>0$, or there exists $c_0>0$ such that
\begin{equation}\label{Detac0}
-c_0:=\lim_{|x|\to\infty}\Delta\eta(x)<0.
\end{equation}
We shall now show that each of these two cases leads to a contradiction. \\
\noindent \textbf{Case 1:} \eqref{Detac0} holds.
By Lemma \ref{lem1}, for every $\lambda>0$ we can find $0<R_{1,\lambda}< R_{2,\lambda}$ such that
$$\Delta\eta_\lambda(R_{1,\lambda})=-\frac{c_0}{2},\quad \Delta\eta_\lambda(R_{2,\lambda})=-\frac{c_0}{4}.$$
Moreover \eqref{convetalambda} implies that $R_{1,\lambda},R_{2,\lambda}\to\infty$ as $\lambda\downarrow 0$.
Again by Lemma \ref{lem1} and \eqref{convetalambda} we have $\Delta \eta_\lambda\le - \frac{c_0}{4}$ in $[0,R_{2,\lambda}]$ and \eqref{identity} implies that $$\eta_\lambda(r)\leq -\frac{c_0}{32}r^2\quad\text{for }0\leq r\leq R_{2,\lambda}.$$
Applying \eqref{identity} with $w=\Delta \eta_\lambda$, and using \eqref{rlala}, we finally get
\begin{align*}
0< \frac{c_0}{4}&=\Delta \eta_\lambda(R_{2,\lambda})-\Delta \eta_\lambda(R_{1,\lambda})\\
&=\int_{R_{1,\lambda}}^{R_{2,\lambda}}\frac{1}{\omega_3 t^3}\int_{B_t}\(1-\frac{r_\lambda^p}{\lambda}|x|^p\)e^{-r_\lambda^2|x|^2}e^{4\eta_\lambda}dxdt\\
&=O\(\int_{R_{1,\lambda}}^{\infty}\frac{1}{t^3}\int_{B_t}(1+|x|^p)e^{-\frac{c_0}{8}|x|^2}dxdt \)\xrightarrow{\lambda\to0}0,
\end{align*}
which is a contradiction. \\
\noindent\textbf{Case 2:} $\eta$ is spherical, and in particular
$$\int_{\mathbb{R}^4}e^{4\eta}dx=\Lambda_{\mathrm{sph}}.$$
Since $\Lambda<\Lambda_{\mathrm{sph}} $, we can fix $R_0>0$ such that
$$\int_{B_{R_0}}e^{4\eta}dx>\Lambda.$$
Taking into account that $r_\lambda \to 0$ and that by assumption $\frac{r_\lambda^p}{\lambda}\to 0$ as $\lambda\downarrow 0$, we can find $\lambda_0=\lambda_0(R_0)$ such that
\begin{equation}\label{lem-muEq1}
\int_{B_{R_0}} \(1-\frac{r_\lambda^p}{\lambda}|x|^p\)e^{-r_\lambda^2|x|^2}e^{4\eta_\lambda}dx\geq \Lambda\quad \text{for }0<\lambda<\lambda_0.
\end{equation}
Setting
$$\Gamma_\lambda(t):=\int_{B_t}\(1-\frac{r_\lambda^p}{\lambda}|x|^p\)e^{-r_\lambda^2|x|^2}e^{4\eta_\lambda}dx,$$ we see that $\Gamma_\lambda(0)=0$, $\Gamma_\lambda$ is monotone increasing on $[0,\frac{\lambda^{1/p}}{r_\lambda}]$, and then it decreases to $\Lambda$ on the interval $[\frac{\lambda^{1/p}}{r_\lambda},\infty)$. Together with \eqref{lem-muEq1} it follows that
\begin{equation}\label{lem-mueq2}
\Gamma_\lambda(t)\geq \Lambda\quad\text{for }t\geq R_0, \;0<\lambda<\lambda_0.
\end{equation}
Applying \eqref{identity} with $R=\infty$, $w=\Delta\eta_\lambda$, and recalling that $\lim_{|x|\to\infty}\Delta\eta_\lambda(x)=0$, we get for $r\geq R_0$
\begin{align*} \Delta\eta_\lambda(r)=-\int_r^\infty \frac{\Gamma_\lambda(t)}{\omega_3 t^3}dt\leq -\frac{\Lambda}{2\omega_3} \frac{1}{r^2}=-(4+p+\delta)\frac{1}{2r^2},\end{align*}
where $\delta>0$ is such that $\Lambda_{*,p}+2\delta\pi^2=\Lambda$. Hence, for $t>R_0$,
$$\int_{B_t}\Delta\eta_\lambda dx\leq\int_{B_t\setminus B_{R_0}}\Delta\eta_\lambda dx \le -\frac{4+p+\delta}{4} \omega_3(t^2-R_0^2).$$
Again by \eqref{identity}, as $\eta_\lambda(R_0)=O(1)$ by \eqref{convetalambda}, we have
\begin{equation}\label{uniform0}
\begin{split}
\eta_\lambda(r)&\leq \eta_\lambda(R_0)+\int_{R_0}^r \frac{-(4+p+\delta)(t^2-R_0^2)}{4t^3}dt\\
&\le C(R_0) -\frac{4+p+\delta}{4} \log r,\quad r\geq R_0.
\end{split}
\end{equation}
This implies that
\begin{align}\label{uniform}
\lim_{R\to\infty}\lim_{\lambda\to0}\int_{B_R^c}(1+|x|^p)e^{4\eta_\lambda}dx=0.
\end{align}
It follows from \eqref{uniform} that as $\lambda\downarrow 0$,
$$\Lambda=\int_{\mathbb{R}^4}\(1-\frac{r_\lambda^p}{\lambda}|x|^p\)e^{-r_\lambda^2|x|^2}e^{4\eta_\lambda}dx\to \int_{\mathbb{R}^4}e^{4\eta}dx=\Lambda_{\mathrm{sph}},$$
a contradiction.
This completes the proof of the lemma.
\end{proof}
\begin{rem} The above proof also works without using the fact that $e^{4\eta}\in L^1(\mathbb{R}^4)$. Indeed, trivially one can find $R_0>0$ as in Case 2, and proceed in a similar way. \end{rem}
\medskip
\noindent\emph{Proof of the existence part (completed).}
From Lemma \ref{lem-mu}, choosing $R=\(\frac{4}{\mu}\)^{1/p}$ we obtain for $\lambda$ sufficiently small
$$1-\frac{r_\lambda^p}{\lambda} |x|^p\le 1-\frac{\mu}{2}|x|^p \le -\frac{\mu}{4}|x|^p,\quad |x|\ge R,$$
hence from \eqref{relation-4} we obtain
\begin{equation}\label{thm1eq1}
\begin{split}
\int_{B_R^c}\frac{\mu}{4}|x|^p e^{-r_\lambda^2|x|^2}e^{4\eta_\lambda}dx &\le -\int_{B_R^c} \(1-\frac{r_\lambda^p}{\lambda} |x|^p\) e^{-r_\lambda^2|x|^2}e^{4\eta_\lambda}dx\\
&= \int_{B_R} \(1-\frac{r_\lambda^p}{\lambda} |x|^p\) e^{-r_\lambda^2|x|^2}e^{4\eta_\lambda}dx-\Lambda \le C,
\end{split}
\end{equation}
where in the last inequality we also used that $\eta_\lambda\le 0$. Since, the integrand in $B_R$ is uniformly bounded, it follows at once that
\begin{align}\label{volume}
\int_{\mathbb{R}^4}(1+ |x|^p)e^{-r_\lambda^2|x|^2}e^{4\eta_\lambda}dx\leq C.
\end{align}
Moreover \eqref{thm1eq1} also implies
$$\int_{B_R^c} e^{-r_\lambda^2|x|^2}e^{4\eta_\lambda}dx \le \frac{C}{R^p} \to 0,\quad \text{as }R\to\infty,$$
uniformly with respect to $\lambda$, which in turn yields
\begin{equation}\label{thm1eq2}
\lim_{\lambda\to0} \int_{\mathbb{R}^4}e^{-r_\lambda^2|x|^2}e^{4\eta_\lambda}dx=\int_{\mathbb{R}^4}e^{4\eta}dx.
\end{equation}
By Fatou's lemma
\begin{equation}\label{thm1eq3}
\int_{\mathbb{R}^4}|x|^pe^{4\eta}dx\leq \lim_{\lambda\to0}\int_{\mathbb{R}^4}|x|^pe^{-r_\lambda^2|x|^2}e^{4\eta_\lambda}dx.
\end{equation}
Then \eqref{thm1eq2} and \eqref{thm1eq3} give
\begin{align}\label{int-limit}
\int_{\mathbb{R}^4}(1-\mu |x|^p)e^{4\eta}dx\geq\Lambda.
\end{align}
We now proceed as in Case 2 of the proof of Lemma \ref{lem-mu} to show that the above inequality is actually an equality and that $\eta$ is a normal solution. Since
$$\frac{\lambda^\frac1p}{r_\lambda}\to\frac{1}{\mu^\frac1p}>0,$$
we have for $R_0 =2\mu^{-\frac1p}$ and $\lambda_0=\lambda_0(R_0)$ sufficiently small that \eqref{lem-mueq2} holds, hence, as before \eqref{uniform0}-\eqref{uniform} follow. In particular \eqref{uniform} implies that
$$\int_{\mathbb{R}^4}(1-\mu |x|^p)e^{4\eta}dx=\Lambda,$$
and by taking the limit using \eqref{convetalambda} and \eqref{uniform0} we obtain
\begin{equation}\label{etaint}
\begin{split}
\eta(x)\leftarrow \eta_\lambda(x)&= \frac{1}{8\pi^2}\int_{\mathbb{R}^4}\log\left(\frac{1}{|x-y|}\right)\(1-\frac{r_\lambda^p}{\lambda} |y|^p\)e^{-r_\lambda^2|y|^ 2}e^{4\eta_\lambda(y)}dy+c_\lambda\\
&\to \frac{1}{8\pi^2}\int_{\mathbb{R}^4}\log\left(\frac{1}{|x-y|}\right)\(1-\mu |y|^p\)e^{4\eta(y)}dy+c,
\end{split}
\end{equation}
where the identity in the first line follows from \eqref{prop1eq1} and \eqref{defetalambda}. In particular we have shown that $\eta$ is a normal solution.
We now set
$$u(x)=\eta(\rho x)+\log\rho,\quad \rho:=\mu^{-\frac{1}{4p}},$$
and with a simple change of variable we get
\begin{equation}\label{uint}
u(x)=\frac{1}{8\pi^2}\int_{\mathbb{R}^4}\log\left(\frac{1}{|x-y|}\right)\(1-|y|^p\)e^{4u(y)}dy+c
\end{equation}
so that $u$ is a normal solution to \eqref{eq-0}.
\endproof
\subsection{Asymptotic behaviour}
\noindent\emph{Proof of \eqref{asymp}}
Consider the Kelvin transform of $u$ given by \eqref{defKelvin}.
By Proposition \ref{PKelvin} $\tilde u$ satisfies
\begin{equation}\label{Kelvin3}
\begin{split}
\tilde u(x)&=\frac{1}{8\pi^2}\int_{\mathbb{R}^4}\log\(\frac{1}{|x-y|}\)\(1-\frac{1}{|y|^p}\)\frac{e^{4\tilde u(y)}}{|y|^{8-\Lambda/2\pi^2}}dy+c.
\end{split}
\end{equation}
In particular
$$\Delta^2 \tilde u(x)= \(1-\frac{1}{|x|^p}\)\frac{e^{4\tilde u(x)}}{|x|^{8-\Lambda/2\pi^2}}=O\(\frac{1}{|x|^{8+p-\Lambda/2\pi^2}}\),\quad \text{as }|x|\to 0.$$
Observing that for $\Lambda>\Lambda_{*,p}$ we have
$$8+p-\frac{\Lambda}{2\pi^2}=4-\frac{\Lambda-\Lambda_{*,p}}{2\pi^2}<4,$$
we get
\begin{equation}\label{Delta^2uLp}
\Delta^2 \tilde u \in L^q_{loc}(\mathbb{R}^4) \quad \text{for }1\le q<\frac{1}{1-\frac{\Lambda-\Lambda_{*,p}}{8\pi^2}},
\end{equation}
hence by elliptic estimates $\tilde u\in W^{4,q}_{loc}(\mathbb{R}^4)$ with $q$ as in \eqref{Delta^2uLp}, and by the Morrey-Sobolev embedding $\tilde u\in C^{0,\alpha}_{loc}(\mathbb{R}^4)$ for $\alpha\in [0,1]$ such that $\alpha<\frac{\Lambda-\Lambda_{*,p}}{2\pi^2}$.
Then \eqref{asymp} follows.
As an alternative to the elliptic estimates, the same $C^\alpha$ regularity can also be obtained directly from \eqref{Kelvin3}, using the H\"older inequality and the following estimate: for any $r>0$
\begin{align*}
\int_{B_1}\left| \log|z-h|-\log|z| \right|^r dz\leq C(r)\left\{
\begin{array}{ll}|h|^r&\quad\text{for }r< 4\\
|h|^r|\log|h||&\quad\text{for }r= 4\\
|h|^4&\quad\text{for }r>4,
\end{array}\right.
\end{align*}
for $|h|>0$ small.
\endproof
\noindent\emph{Proof of \eqref{asymp3}} For $\ell=1,2,3$ we differentiate in \eqref{uint} to get
\begin{equation*}
|\nabla^\ell u(x)|=O\(\int_{\mathbb{R}^4}\frac{1}{|x-y|^\ell} (1+|y|^p)e^{4u(y)}dy\)
\end{equation*}
Since $\Lambda>\Lambda_{*,p}$, by \eqref{asymp} we have that $$(1+|x|^p)e^{4u(x)}\leq \frac{C}{1+|x|^{4+\delta}},$$ for some $\delta>0$. Therefore, for $|x|$ large
\begin{align*} |\nabla^\ell u(x)| &\leq C\left( \int_{B_\frac{|x|}{2}}+\int_{B_{2|x|}\setminus B_\frac{|x|}{2}}+\int_{B_{2|x|} ^c}\right) \frac{1}{|x-y|^\ell}\frac{dy}{1+|y|^{4+\delta}} \\ &\leq \frac{C}{|x|^\ell} +\frac{C}{|x|^{4+\delta}}\int_{B_{2|x|}\setminus B_\frac{|x|}{2}}\frac{dy}{|x-y|^\ell}\\ &\leq \frac{C}{|x|^\ell}. \end{align*}
\endproof
\section{Proof of Theorem \ref{thm1b}}
Let $(u_k)$ be a sequence of radial normal solutions to \eqref{eq-0} with $\Lambda=\Lambda_k\in [\Lambda_{*,p},\Lambda_{\mathrm{sph}})$, i.e.
\begin{equation}\label{eqIk}
u_k(x)=\frac{1}{8\pi^2}\int_{\mathbb{R}^4}\log\(\frac{|y|}{|x-y|}\) (1-|y|^p)e^{4u_k(y)}dy +c_k,
\end{equation}
and
\begin{equation}\label{volk}
\Lambda_k=\int_{\mathbb{R}^4}(1-|x|^p)e^{4u_k(x)}dx\to\bar\Lambda \in[\Lambda_{*,p},\Lambda_{\mathrm{sph}}).
\end{equation}
We want to prove the following:
\begin{prop}\label{propu*} Up to a subsequence we have $u_k\to \bar u$ uniformly locally in $\mathbb{R}^4$, where $\bar u$ is a normal solution to \eqref{eq-0} with $\Lambda=\bar \Lambda$.
\end{prop}
In the following we shall use several times that $u_k$ is radially decreasing. This follows by the same proof as that of Lemma \ref{lem1}.
\begin{lem}\label{ukgeqC} We have $u_k (0)\geq -C$ where $C$ only depends on $\inf_k \Lambda_k$.
\end{lem}
\begin{proof}
We have
\begin{equation*}
\Lambda_k = \int_{\mathbb{R}^4} (1-|x|^p)e^{4u_k}dx\le \int_{B_1}e^{4u_k(x)}dx\le |B_1|e^{4u_k(0)},
\end{equation*}
where in the last inequality we used that $u_k$ is monotone decreasing.
\end{proof}
Since $\Lambda_k\in [\Lambda_{*,p},\Lambda_{\mathrm{sph}})$, we have the following Pohozaev identity (see Proposition \ref{poho-3}, which can be applied thanks to Lemma \ref{lemasymp} if $\Lambda\in (\Lambda_{*,p},\Lambda_{\mathrm{sph}})$ and thanks to Lemma \ref{lem-4.6} if $\Lambda=\Lambda_{*,p}$): \begin{align}\label{poho}
\frac{\Lambda_k}{\Lambda_{\mathrm{sph}}}(\Lambda_k-\Lambda_{\mathrm{sph}})=-\frac{p}{4}\int_{\mathbb{R}^4} |x|^pe^{4u_k}dx.
\end{align}
Therefore, by \eqref{eq-0}, which gives $\int_{\mathbb{R}^4}e^{4u_k}dx=\Lambda_k+\int_{\mathbb{R}^4}|x|^pe^{4u_k}dx$, we get that
\begin{align}\label{V1}
\int_{\mathbb{R}^4}e^{4u_k}dx= \Lambda_k+\frac{4\Lambda_k}{p\Lambda_{\mathrm{sph}}}(\Lambda_{\mathrm{sph}}-\Lambda_k).
\end{align}
This yields
\begin{align}\label{V2}
\lim_{k\to\infty} \int_{\mathbb{R}^4}e^{4u_k}dx = \bar \Lambda+\frac{4\bar\Lambda}{p\Lambda_{\mathrm{sph}}}(\Lambda_{\mathrm{sph}}-\bar\Lambda).
\end{align}
\begin{lem}\label{limsupuk}
We have
$$\limsup_{k\to\infty} u_k(0)<\infty.$$
\end{lem}
\begin{proof} It follows from {\eqref{volk}}, \eqref{poho} and \eqref{V1} that
\begin{align}\label{unibound}
\limsup_{k\to \infty}\int_{\mathbb{R}^4}(1+|x|^p)e^{4u_k}dx<\infty.
\end{align}
Then, differentiating in \eqref{eqIk}, integrating over $B_1$ and using Fubini's theorem and \eqref{unibound}, one obtains
\begin{equation}\label{nablauk}
\int_{B_1}|\nabla u_k|dx\leq C \int_{\mathbb{R}^4}\(\int_{B_1}\frac{1}{|x-y|}dx \)(1+|y|^p)e^{4u_k(y)}dy \le C.
\end{equation}
Hence, if (up to a subsequence) $u_k(0)\to\infty$ as $k\to\infty$, by \cite[Theorem 2]{MarOpen} (see also \cite{HydCV} and \cite{Rob}) the blow-up at the origin is spherical, i.e.
$$u_k(r_kx)-u_k(0)+\log(2)=: \eta_k(x)\to \log\frac{2}{1+|x|^2},\quad \text{locally uniformly},$$
where $r_k:= 12e^{-u_k(0)}\to 0$ as $k\to\infty$,
and we have quantization of mass in the sense that
$$\lim_{k\to\infty}\int_{B_\frac12}(1-|x|^p)e^{4u_k}dx= \Lambda_{\mathrm{sph}}.$$
As $u_k$ is monotone decreasing, we have that $u_k\to-\infty$ locally uniformly in $\mathbb{R}^4\setminus\{0\}$. Consequently, using \eqref{unibound} we get
\begin{align}\label{V2b}
\lim_{k\to\infty}\int_{\mathbb{R}^4}e^{4u_k}dx =\Lambda_{\mathrm{sph}}.
\end{align}
On the other hand, comparing \eqref{V2b} with \eqref{V2}, and recalling that $\Lambda_{*,p}\le \bar\Lambda<\Lambda_{\mathrm{sph}}$ and $\Lambda_{*,p}=\frac18(4+p)\Lambda_{\mathrm{sph}}$, we obtain
$$1=\frac{4\bar\Lambda}{p\Lambda_{\mathrm{sph}}}\ge \frac{4\Lambda_{*,p}}{p\Lambda_{\mathrm{sph}}}=\frac{4+p}{2p}>1$$
for $p\in (0,4)$, a contradiction.
\end{proof}
\begin{lem}\label{lemconv}
We have $u_k\to \bar u$, where $\bar u$ is a normal solution to \eqref{eq-0} for some $\Lambda=\tilde \Lambda \ge \bar\Lambda$.
\end{lem}
\begin{proof} Since $u_k\leq u_k(0)=O(1)$ by Lemma \ref{ukgeqC} and Lemma \ref{limsupuk}, we have
$$\Delta^2 u_k=O_R(1)\quad \text{on }B_R.$$
Differentiating under the integral sign in \eqref{eqIk}, integrating over $B_R$ and using Fubini's theorem, together with \eqref{unibound}, we get
$$\int_{B_R}|\Delta u_k|dx\leq CR^2,\quad\text{for every }R>1.$$
Hence, by elliptic estimates, up to a subsequence, $u_k\to \bar u$ in $C^{3}_{loc}(\mathbb{R}^4)$.
To prove that $\bar u$ is normal, first note that the constant in \eqref{eqIk} equals $c_k=u_k(0)$. Moreover, for a fixed $x\in \mathbb{R}^4$ we have, as $R\to\infty$,
$$\int_{B_R^c}\log\left(\frac{|y|}{|x-y|}\right)(1-|y|^p)e^{4u_k(y)}dy=O\(\frac{|x|}{R}\)\int_{B_R^c} (1+|y|^p)e^{4u_k(y)}dy=O\(\frac{|x|}{R}\),$$
thanks to \eqref{unibound}. Therefore, using the convergence $u_k\to\bar u$ in $C^0_{loc}(\mathbb{R}^4)$, we conclude from \eqref{eqIk} that
\begin{equation*}
\begin{split}
\bar u(x)& \xleftarrow{k\to\infty} u_k(x) =\frac{1}{8\pi^2}\int_{B_R}\log\left(\frac{|y|}{|x-y|}\right)(1-|y|^p)e^{4 u_k(y)}dy+O\(\frac{|x|}{R}\)+ u_k(0)\\
&\xrightarrow{k\to\infty} \frac{1}{8\pi^2}\int_{B_R}\log\left(\frac{|y|}{|x-y|}\right)(1-|y|^p)e^{4\bar u(y)}dy+O\(\frac{|x|}{R}\)+ \bar u(0)\\
&\xrightarrow{R\to\infty} \frac{1}{8\pi^2}\int_{\mathbb{R}^4}\log\left(\frac{|y|}{|x-y|}\right)(1-|y|^p)e^{4\bar u(y)}dy+ \bar u(0),
\end{split}
\end{equation*}
where in the last line we used dominated convergence, which is possible since $\log\left(\frac{|y|}{|x-y|}\right)=O(1)$ as $|y|\to\infty$, and $(1+|\cdot|^p)e^{4\bar u}\in L^1(\mathbb{R}^4)$ by \eqref{unibound} and Fatou's lemma.
Still by Fatou's lemma, we also get
$$\tilde\Lambda:=\int_{\mathbb{R}^4}(1-|x|^p)e^{4\bar u}dx\geq \lim_{k\to\infty}\int_{\mathbb{R}^4}(1-|x|^p)e^{4 u_k}dx = \bar \Lambda.$$
\end{proof}
\begin{lem}\label{lemLambdabar}
We have $\tilde\Lambda=\bar \Lambda$.
\end{lem}
\begin{proof} We assume by contradiction that $\tilde\Lambda>\bar \Lambda$. Since $u_k\to \bar u$ in $C^0_{loc}(\mathbb{R}^4)$, this is equivalent to
\begin{equation}\label{defrho}
\rho:=\lim_{R\to\infty}\lim_{k\to\infty}\int_{B_R^c}(|x|^p-1)e^{4u_k}dx=\lim_{R\to\infty}\lim_{k\to\infty}\int_{B_R^c}|x|^pe^{4u_k}dx=\tilde\Lambda-\bar \Lambda>0.
\end{equation}
We consider the Kelvin transform
$$\tilde u_k(x)=u_k\(\frac{x}{|x|^2}\)-\frac{\Lambda_k}{8\pi^2} \log |x|, \quad x\ne 0.$$
{By Proposition \ref{PKelvin} we have
$$\tilde u_k(x)=\frac{1}{8\pi^2}\int_{B_1}\log\left(\frac{1}{|x-y|}\right)\(1-\frac{1}{|y|^{p}}\)\frac{e^{4\tilde u_k(y)}}{|y|^{4-p-\delta_k}}dy,\quad \delta_k:=\frac{\Lambda_k-\Lambda_{*,p}}{2\pi^2}.$$
In fact, with the same proof as that of \eqref{36} we obtain}
\begin{equation}\label{37}
\tilde u_k(x)=-\frac{1}{8\pi^2}\int_{B_1}\log\left(\frac{1}{|x-y|}\right)\frac{e^{4\tilde u_k(y)}}{|y|^{4-\delta_k}}dy+O(1)\quad\text{for } x\in B_1.
\end{equation}
If $\delta_k\not\to 0$ then from \eqref{37} we easily see that $\tilde u_k=O(1)$ in $B_1$, a contradiction to our assumption that $\rho>0$. Let us then assume that $\delta_k\to 0$, i.e. $\Lambda_k\to \Lambda_{*,p}$, and let $\varepsilon_k>0$ be such that
$$\int_{B_{\varepsilon_k}}\frac{e^{4\tilde u_k(y)}}{|y|^{4-\delta_k}}dy=\frac\rho2.$$
Then clearly $\varepsilon_k\to 0$ as $k\to\infty$. Using that $\log\(\frac{1}{|x-y|}\)=\log\(\frac{1}{|x|}\)+O(1)$ for $|y|\leq\varepsilon_k$, $ |x|\geq 2\varepsilon_k$, and that $\log\(\frac{1}{|x-y|}\)$ is lower bounded for $y\in B_1$ and $x\to 0$, we get
\begin{equation}\label{eq64}
\begin{split}
\tilde u_k(x)&=-\frac{\log(1/|x|)}{8\pi^2}\int_{B_{\varepsilon_k}}\frac{e^{4\tilde u_k(y)}}{|y|^{4-\delta_k}}dy -\frac{1}{8\pi^2} \int_{B_1\setminus B_{\varepsilon_k}} \log\(\frac{1}{|x-y|}\)\frac{e^{4\tilde u_k(y)}}{|y|^{4-\delta_k}}dy +O(1)\\
& \leq-\frac{\rho}{16\pi^2}\log\(\frac{1}{|x|}\)+C\quad\text{for }2\varepsilon_k\leq |x|\leq 1,
\end{split}
\end{equation}
which, in particular implies
\begin{equation}\label{uLambda0}
\lim_{r\to0}\lim_{k\to\infty}\sup_{B_r}\tilde u_k=-\infty.
\end{equation}
From \eqref{eq64} we immediately infer
$$\lim_{r\to0}\lim_{k\to\infty}\int_{B_r\setminus B_{2\varepsilon_k}}\frac{e^{4\tilde u_k(y)}}{|y|^{4-\delta_k}}dy=0,$$
hence, also recalling \eqref{defrho},
$$\lim_{k\to\infty}\int_{B_{2\varepsilon_k}}\frac{e^{4\tilde u_k(y)}}{|y|^{4-\delta_k}}dy=\rho.$$
Combining this with \eqref{uLambda0}, we get
$$\frac\rho2= \lim_{k\to\infty}\int_{B_{2\varepsilon_k}\setminus B_{\varepsilon_k}}\frac{e^{4\tilde u_k(y)}}{|y|^{4-\delta_k}}dy=o(1)\int_{B_{2\varepsilon_k}\setminus B_{\varepsilon_k}}\frac{dy}{|y|^4}=o(1),\quad \text{as }k\to\infty,$$
a contradiction.
\end{proof}
\noindent\emph{Proof of Theorem \ref{thm1b}.} With Lemma \ref{lemconv} and Lemma \ref{lemLambdabar} we have the desired convergence (up to a subsequence) of $u_k$ to $\bar u$, a normal solution of \eqref{eq-0} with $\Lambda=\bar\Lambda$. The asymptotic behaviour \eqref{asymp2} follows from Lemma \ref{lem-4.6}, while for \eqref{asymp3}, the same proof used for $\Lambda\in (\Lambda_{*,p},\Lambda_{\mathrm{sph}})$ also works for the case $\Lambda=\Lambda_{*,p}$.
\endproof
\section{Proof of Theorem \ref{thm2}}
\begin{lem}\label{ukbdd} Let $(u_k)$ be a sequence solving \eqref{eq-0} with $\Lambda=\Lambda_k\uparrow \Lambda_{\mathrm{sph}}$. Then we have $u_k(0)\to\infty$ as $k\to\infty$.
\end{lem}
\begin{proof} By Lemma \ref{ukgeqC} we have $u_k(0)\ge -C$. Assume by contradiction that, up to a subsequence, $u_k(0)\to \ell \in \mathbb{R}$. Then, by Lemma \ref{lemconv} (or, rather, following its proof) we have $u_k\to \bar u$, normal solution to \eqref{eq-0} for some $\Lambda\ge \Lambda_{\mathrm{sph}}$, contradicting Theorem \ref{thm0}.
\end{proof}
Differentiating under the integral sign in \eqref{eqIk} and integrating over $B_1$ we obtain \eqref{nablauk}. By Lemma \ref{ukbdd}, the sequence $(u_k)$ blows up at the origin. This, \eqref{nablauk} and \cite[Theorem 2]{MarOpen} imply \eqref{convetak}.
This completes the proof of Theorem \ref{thm2}. \hfill$\square$
\section{Proof of Theorem \ref{thm-positive2}}
We start by looking for normal solutions with prescribed value at the origin.
\begin{thm} \label{thm-positive} For every $p>0$ and $\rho\in \mathbb{R}$ there exists a unique radially symmetric normal solution to
\begin{align}\label{eq-rho}
\Delta^2u=(1+|x|^p)e^{4u}\quad\text{in }\mathbb{R}^4,\quad u(0)=\rho,\quad (1+|x|^p)e^{4u}\in L^1(\mathbb{R}^4). \end{align}
\end{thm}
\begin{proof}
For every $\varepsilon>0$ we claim that there exists a radial normal solution to
\begin{equation}\label{veps}
\Delta^2 v_\varepsilon=(1+|x|^p)e^{-\varepsilon |x|^2}e^{4v_\varepsilon}\quad\text{in }\mathbb{R}^4,\quad v_\varepsilon(0)=\rho.
\end{equation}
To this end, we set
$$X=\{v\in C^0_{rad}(\mathbb{R}^4):\|v\|_X<\infty\},\quad \|v\|_X:=\sup_{x\in\mathbb{R}^4}\frac{|v(x)|}{\log(2+|x|)},$$
and define the operator $T_\varepsilon:X\to X$, $T_\varepsilon v=\bar v$, where $$\bar v(x):=\frac{1}{8\pi^2}\int_{\mathbb{R}^4}\log\left(\frac{|y|}{|x-y|}\right)(1+|y|^p)e^{-\varepsilon|y|^2}e^{4v(y)}dy+\rho.$$
By the Arzel\`a-Ascoli theorem it follows that $T_\varepsilon$ is compact.
Notice that $\Delta \bar v<0$, hence $\bar v$ is monotone decreasing by \eqref{identity}, and
$$\bar v\leq \bar v(0)=\rho.$$
In particular, if $v$ is a solution to $v=tT_\varepsilon (v)$ with $0<t\leq 1$, then $$|v(x)|\leq t\rho+\frac{te^{4\rho}}{8\pi^2}\int_{\mathbb{R}^4}\left|\log\left(\frac{|y|}{|x-y|}\right)\right|(1+|y|^p)e^{-\varepsilon|y|^2} dy\leq C\log(2+|x|),$$ where $C>0$ is independent of $v$ and $t$. Then, by the Schauder fixed-point theorem, $T_\varepsilon$ has a fixed point, which we call $v_\varepsilon$, and which is a radial normal solution to \eqref{veps} by definition, so the claim is proven.
Setting
$$\Lambda_\varepsilon=\int_{\mathbb{R}^4}(1+|x|^p)e^{-\varepsilon|x|^2}e^{4 v_\varepsilon(x)}dx,$$
and using the Pohozaev identity (Proposition A.1 in \cite{HMM} or a minor modification of Proposition \ref{poho-3}, with $K(x)=(1+|x|^p)e^{-\varepsilon|x|^2}$), we get
$$\frac{\Lambda_\varepsilon}{\Lambda_{\mathrm{sph}}}(\Lambda_\varepsilon-\Lambda_{\mathrm{sph}})<\frac{p}{4}\int_{\mathbb{R}^4}|x|^pe^{-\varepsilon|x|^2}e^{4 v_\varepsilon(x)}dx<\frac p4\Lambda_\varepsilon,$$
which implies
$$\Lambda_\varepsilon<\(1+\frac p4\)\Lambda_{\mathrm{sph}}.$$
Now one can follow the proof of Lemma \ref{lemconv} to conclude that $v_\varepsilon\to v$, where $v$ is a normal solution with $v(0)=\rho$.
The uniqueness follows by the monotonicity property of solutions to ODEs with respect to the initial data.
\end{proof}
Notice that the result of Theorem \ref{thm-positive} does not hold in the case of Problem \eqref{eq-0}, see Lemma \ref{ukgeqC}.
\begin{lem}\label{T6L1} Let $u$ be a normal solution to \eqref{eq-positive} for some $\Lambda>0$. Then we have
\begin{equation}\label{u+low}
u(x)\geq -\frac{\Lambda}{8\pi^2}\log|x|+O(1)\quad\text{as }|x|\to\infty,
\end{equation}
hence $\Lambda>\Lambda_{*,p}$.
\end{lem}
\begin{proof}
The proof of \eqref{u+low} follows as in Lemma \ref{lemasymp} by changing the sign of $K$. Using that $|\cdot|^pe^{4u}\in L^1(\mathbb{R}^4)$ together with \eqref{u+low}, we then infer that $\Lambda>(1+\frac p4)\frac{\Lambda_{\mathrm{sph}}}{2}=\Lambda_{*,p}$: indeed, \eqref{u+low} gives $|x|^pe^{4u(x)}\geq c|x|^{p-\frac{\Lambda}{2\pi^2}}$ for $|x|$ large, so integrability at infinity forces $p-\frac{\Lambda}{2\pi^2}<-4$.
\end{proof}
\begin{lem}\label{T6L2} Let $u$ be a normal solution to \eqref{eq-positive} for some $\Lambda>0$. Then we have
\begin{equation}\label{u+upp}
u(x)\le -\frac{\Lambda}{8\pi^2}\log|x|+o(\log|x|)\quad\text{as }|x|\to\infty.
\end{equation}
\end{lem}
\begin{proof}
From the same proof as in \cite[p. 213]{Lin}, we easily get that for every $\varepsilon>0$ there exists $R(\varepsilon)>0$ such that
\begin{equation}\label{estimateLin}
u(x)\le \(-\frac{\Lambda}{8\pi^2}+\varepsilon\)\log|x| +\frac{1}{8\pi^2} \int_{B_1(x)}\log \(\frac{1}{|x-y|}\) K(y)e^{4u(y)}dy, \quad |x|\ge R(\varepsilon),
\end{equation}
where $K(y)=1+|y|^p$.
As in \cite[Lemma 3.5]{HMM}, from \eqref{estimateLin} and Jensen's inequality we get that for every $\varepsilon'>0$ and $q\ge 1$ there is a constant $C=C(\varepsilon',q)$ such that
$$\int_{B_1(x)}e^{4qu}dy \le C|x|^{-\(\frac{\Lambda}{2\pi^2}-\varepsilon'\)q}.$$
With H\"older's inequality we then infer for $|x|$ large
\begin{equation}\label{T6L2E2}
\begin{split}
\int_{B_1(x)}\log \(\frac{1}{|x-y|}\) K(y)e^{4u(y)}dy&\le C|x|^p \int_{B_1(x)}\log \(\frac{1}{|x-y|}\) e^{4u(y)}dy\\
&\le C|x|^p\|e^{4u}\|_{L^q(B_1(x))}\\
&\le C|x|^{p-\frac{\Lambda}{2\pi^2}+\varepsilon'}\le C
\end{split}
\end{equation}
since by Lemma \ref{T6L1} we have $\Lambda> \Lambda_{*,p}>\frac{p\Lambda_{\mathrm{sph}}}{8}=2\pi^2 p$ and we can choose $0<\varepsilon'< \frac{\Lambda}{2\pi^2}-p$.
Plugging \eqref{T6L2E2} into \eqref{estimateLin}, we obtain \eqref{u+upp}.
\end{proof}
\begin{lem}\label{T6L3} Let $u$ be a normal solution to \eqref{eq-positive} for some $\Lambda\in \mathbb{R}$. Then $u$ satisfies the Pohozaev identity \eqref{general Poho} and
\begin{equation}\label{neceb}
\max\{\Lambda_{\mathrm{sph}},\Lambda_{*,p}\}<\Lambda<2\Lambda_{*,p}.
\end{equation}
\end{lem}
\begin{proof} From Lemma \ref{T6L1} we have $\Lambda>\Lambda_{*,p}$, which together with Lemma \ref{T6L2} implies that \eqref{cond-poho} is satisfied, hence the Pohozaev identity \eqref{general Poho} holds. From it we obtain
\begin{equation}\label{pohopositive}
\frac{\Lambda}{\Lambda_{\mathrm{sph}}}(\Lambda-\Lambda_{\mathrm{sph}})=\frac{p}{4}\int_{\mathbb{R}^4}|x|^pe^{4u}dx.
\end{equation}
Since
$$0< \int_{\mathbb{R}^4}|x|^pe^{4u}dx <\Lambda,$$
\eqref{pohopositive} implies
$$\Lambda_{\mathrm{sph}}<\Lambda<\(1+\frac{p}{4}\)\Lambda_{\mathrm{sph}}=2\Lambda_{*,p},$$
and since we have already proven that $\Lambda>\Lambda_{*,p}$, \eqref{neceb} follows.
\end{proof}
\begin{lem}\label{ukcontinuous} Let $(u_k)$ be a sequence of radially symmetric normal solutions to \eqref{eq-positive} with $\Lambda=\Lambda_k$. If the sequence $(u_k(0))$ is bounded, then also the sequence $(\Lambda_k)$ is bounded and, up to a subsequence, $u_k\to \bar u$, $\Lambda_k\to\bar\Lambda\in (0,\infty)$, where $\bar u$ is a normal solution to \eqref{eq-positive} with $\Lambda=\bar\Lambda.$
\end{lem}
\begin{proof} By Lemma \ref{T6L3} the Pohozaev identity \eqref{general Poho} holds, hence
$$\frac{\Lambda_k}{\Lambda_{\mathrm{sph}}}\left(\Lambda_k-\Lambda_{\mathrm{sph}}\right)=\frac p4\int_{\mathbb{R}^4}|x|^pe^{4u_k}dx<\frac p4 \Lambda_k,$$
which then implies
$$\Lambda_k<\(1+\frac p4\)\Lambda_{\mathrm{sph}}=2\Lambda_{*,p}.$$
Following the proof of Lemma \ref{lemconv} (with the inequality reversed in the last line because of the positivity of $K$), we obtain that, up to a subsequence, $u_k\to \bar u$, a normal solution to \eqref{eq-positive} for some $\Lambda =\tilde \Lambda\le \bar \Lambda$.
To prove that $\tilde \Lambda =\bar \Lambda$ it suffices to show that
\begin{equation}\label{49}
\lim_{R\to\infty}\lim_{k\to\infty}\int_{B_R^c}|x|^pe^{4u_k}dx=0.
\end{equation}
Upon the Kelvin transform
$$\tilde u_k(x):=u_k\(\frac {x}{|x|^2}\)-\frac{\Lambda_k}{8\pi^2}\log|x|,$$
\eqref{49} is equivalent to
\begin{align}\label{50}
\lim_{r\to0}\lim_{k\to\infty}\int_{B_r}|x|^{p_k}e^{4\tilde u_k}dx=0,\quad p_k:=\frac{\Lambda_k}{2\pi^2}-p-8.
\end{align}
By Proposition \ref{PKelvin}, $\tilde u_k$ is a normal solution to
$$\Delta^2 \tilde u_k=\(1+|x|^p\)|x|^{p_k}e^{4\tilde u_k},\quad \int_{\mathbb{R}^4}\(1+|x|^p\)|x|^{p_k}e^{4\tilde u_k}dx=\Lambda_k.$$
Since $\bar u$ is a normal solution, by Lemma \ref{T6L1} we have $\tilde \Lambda>\Lambda_{*,p}$. Therefore $\Lambda_{*,p}<\tilde \Lambda\leq \bar \Lambda$, which implies that for some $\delta>0$ and $k$ large, we have $p_k\geq -4+\delta$. Hence, \eqref{50} will follow if we show that
\begin{equation}\label{77}
\lim_{k\to\infty}\sup_{B_1}\tilde u_k<\infty.
\end{equation}
First we note that by differentiating the integral formula for $\tilde u_k$ we obtain $\Delta \tilde u_k<0$, hence by \eqref{identity} we have that $\tilde u_k$ is monotone decreasing. Therefore, if $\tilde u_k(0)\to\infty$, then up to a subsequence, the rescaled function $$\eta_k(x)=\tilde u_k(r_kx)-\tilde u_k(0),\quad r_k:= e^{-\frac{4}{4+p_k}\tilde u_k(0)},$$
which is a normal solution to
$$\Delta^2 \eta_k=(1+o(1))|x|^{p_k}e^{4\eta_k},$$
with $o(1)\to 0$ locally uniformly, converges to a limit function $\eta$ (see the proof of Proposition 4.1 in \cite{HMM} for details), where $\eta$ is a normal solution to
$$\Delta^2 \eta=|x|^{p_\infty}e^{4\eta},\quad p_\infty:=\lim_{k\to\infty}p_k>-4.$$ Then by \cite[Theorem 1]{HMM} we have
$$ \int_{\mathbb{R}^4}|x|^{p_\infty}e^{4\eta} dx=\(1+\frac{p_\infty}4\)\Lambda_{\mathrm{sph}}.$$
Then, from Fatou's lemma we have
\begin{equation}\label{78}
\lim_{r\to0}\lim_{k\to\infty}\int_{B_r}|y|^{p_k}e^{4\tilde u_k}dy\geq \(1+\frac{p_\infty}4\)\Lambda_{\mathrm{sph}}.
\end{equation}
Moreover, as in the proof of Lemma \ref{lemLambdabar} we estimate
\begin{align*}
\tilde u_k(x)&\geq\frac{1}{8\pi^2}\int_{B_1}\log\(\frac{1}{|x-y|}\)|y|^{p_k}e^{4\tilde u_k(y)}dy+O(1),\quad x\in B_1.
\end{align*}
Since $\tilde u_k\to\tilde{u}$ outside the origin, where $\tilde u=\tilde{\bar u}$ is the Kelvin transform of $\bar u$, together with \eqref{78}
we get that
\begin{align*}
\tilde{u}(x)&\geq \frac{1}{8\pi^2}\(1+\frac{p_\infty}{4}\)\Lambda_{\mathrm{sph}}\log\(\frac{1}{|x|}\)+O(1)\\
&=2\(1+\frac{p_\infty}{4}\)\log\(\frac{1}{|x|}\)+O(1),\quad x\in B_1.
\end{align*}
In particular, as $p_\infty>-4$, we have $$|x|^{p_\infty}e^{4\tilde u(x)}\geq \delta |x|^{-4},\quad x\in B_1,$$ for some $\delta>0$. This shows that $|x|^{p_\infty}e^{4\tilde u(x)}\not\in L^1(B_1)$; however, by Fatou's lemma $$\int_{B_1}|x|^{p_\infty}e^{4\tilde u(x)}dx\leq \liminf_{k\to\infty }\int_{B_1}|x|^{p_k}e^{4\tilde u_k(x)}dx\leq \liminf_{k\to\infty}\Lambda_k\leq 2\Lambda_{*,p}.$$
This contradiction completes the proof of \eqref{77}, hence of the lemma.
\end{proof}
\noindent\emph{Proof of Theorem \ref{thm-positive2} (completed).} We have already proven the necessary conditions \eqref{nece0}-\eqref{nece} in Lemma \ref{T6L3}, so it remains to prove the existence part and the necessary condition \eqref{nece2} in the radial case with $p>4$.
By Lemma \ref{ukcontinuous}, the map
$$\mathbb{R}\ni\rho\mapsto \Lambda_\rho:=\int_{\mathbb{R}^4}(1+|x|^p)e^{4u_\rho}dx$$
is continuous, where $u_\rho$ is the solution to \eqref{eq-rho} given by Theorem \ref{thm-positive}.
We now have
\begin{equation}\label{T6E1}
\lim_{\rho\to -\infty}\Lambda_\rho= \(1+\frac p4\)\Lambda_{\mathrm{sph}}=2\Lambda_{*,p},
\end{equation}
which is a consequence of \eqref{V1} and
$$\int_{\mathbb{R}^4}|x|^pe^{4u_\rho}dx\leq C\quad \Rightarrow\quad\int_{\mathbb{R}^4}e^{4u_\rho}dx\to0.$$ Taking $\rho\to\infty$ we see that the blow-up around the origin is spherical (see e.g. \cite{MarOpen}), and
$$\lim_{\rho\to+\infty}\int_{\mathbb{R}^4}e^{4u_\rho}dx= \Lambda_{\mathrm{sph}}.$$
Again by \eqref{V1}, and as $\Lambda_\rho>\max\{\Lambda_{\mathrm{sph}},\Lambda_{*,p}\}$, we conclude that \begin{equation}\label{T6E2}
\lim_{\rho\to\infty }\Lambda_\rho=\max\left\{\Lambda_{\mathrm{sph}},\frac p4\Lambda_{\mathrm{sph}}\right\}.
\end{equation}
Then, by continuity, we have existence for every $\max\{\Lambda_{\mathrm{sph}},\Lambda_{*,p}\}<\Lambda<2\Lambda_{*,p}$.
\medskip
It remains to prove the stronger necessary condition \eqref{nece2} for $p>4$ in the radial case.
Assume by contradiction that for a sequence $(\Lambda_k)$ with $\Lambda_k\downarrow \Lambda_{*,p}$ there are radial solutions $u_k$ to \eqref{eq-positive} with $\Lambda=\Lambda_k$. Since
$$\Lambda_{*,p}<\frac{p}{4}\Lambda_{\mathrm{sph}}<2\Lambda_{*,p},$$
from \eqref{T6E1}-\eqref{T6E2} we obtain that the sequence $(u_k(0))$ is bounded. Then by Lemma \ref{ukcontinuous} we have that (up to a subsequence) $u_k\to \bar u$ locally uniformly, where $\bar u$ is a normal solution to \eqref{eq-positive} with $\Lambda=\Lambda_{*,p}$, and this contradicts Lemma \ref{T6L3}.
\endproof
\section{Proof of Proposition \ref{prop1}}\label{Sec:Prop2.1}
By \cite[Theorem 2.1]{CC} (and its proof), setting $K_\lambda= (\lambda-|x|^p)e^{-|x|^2}$ and given $\mu=1-\frac{\Lambda}{\Lambda_{\mathrm{sph}}}\in (0,1)$ one can find a solution $u_\lambda$ to
$$\Delta^2 u_\lambda = K_\lambda e^{4u_\lambda}\quad \text{in }\mathbb{R}^4,$$
such that
$$\int_{\mathbb{R}^4}K_\lambda e^{4u_\lambda} dx=(1-\mu)\Lambda_{\mathrm{sph}}=\Lambda.$$
Moreover $u_\lambda$ is of the form $u_\lambda= w\circ \Pi^{-1} +(1-\mu)\eta_0$ where $\eta_0(x)=\log\(\frac{2}{1+|x|^2}\)$, $\Pi:S^4\to\mathbb{R}^4$ denotes the stereographic projection, and $w\in H^2(S^4)$ minimizes a certain functional on $S^4$. This leads to the Euler-Lagrange equation
$$P^4_{g_0}w +6(1-\mu)= (K_\lambda \circ\Pi) e^{-4\mu (\eta_0\circ \Pi)} e^{4w},$$
where $P^4_{g_0}=\Delta_{g_0}(\Delta_{g_0}-2)$ is the Paneitz operator on $S^4$ with respect to the round metric $g_0$. Since $(K_\lambda \circ\Pi) e^{-4\mu (\eta_0\circ \Pi)}\in L^\infty(S^4)$, and $e^{4w}\in L^q(S^4)$ for every $q\in [1,\infty)$ by the Moser-Trudinger inequality, by elliptic estimates, we obtain $w\in C^{3,\alpha}(S^4)$ for $\alpha\in (0,1)$. In particular $w$ is continuous at the South pole $S=(0,0,0,0,-1)$ (the singularity of the stereographic projection), hence
$$u_\lambda(x) = (1-\mu)\eta_0(x) + w(S) +o(1)= \frac{\Lambda}{8\pi^2}\log|x|+C +o(1)\quad \text{as }|x|\to\infty.$$
Now, setting
$$v_\lambda=\frac{1}{8\pi^2}\int_{\mathbb{R}^4}\log\(\frac{1}{|x-y|}\) K_\lambda(y)e^{4u_\lambda(y)}dy $$
we observe that $h_\lambda:=u_\lambda-v_\lambda$ satisfies
$$\Delta^2 h_\lambda=0,\quad h_\lambda(x)=O(\log|x|)\quad \text{as }|x|\to\infty.$$
Hence by the Liouville theorem we get $h_\lambda = const$. In particular $u_\lambda$ is a normal solution, i.e. it satisfies \eqref{prop1eq1}.
This completes the proof of Proposition \ref{prop1}. \hfill$\square$
\section{INTRODUCTION}
The spreading of ideas and information, the propagation of viruses and diseases, and the fluctuation of stock prices are examples of processes evolving over social, information or other types of networks \cite{barrat2008,Neely2010,AcemogluDLO2011,JadbabaieMST2012,daneshmand2014estimating,quinn2015directed, pouget2015inferring,NowzariPP2016}. Identifying the underlying network structure in these systems motivates the so-called \textit{network inference problem}, which aims at recovering the underlying connectivity between entities or nodes in the system based on observed data. The dependencies, correlations or causal relationships between network entities can be modeled as undirected or directed edges in a graph. The associated dependency strengths can be described as edge weights.
Many algorithms have been proposed to identify the network structure and edge weights from time series data for various processes. Clearly, efficient algorithms in terms of sample complexity are desired.
The network inference problem for various dynamic processes has been recently studied in both the machine learning literature and the system identification literature. Among the relevant studies in the machine learning community, the so-called continuous-time independent cascade model (CICE) considered in \cite{gomez2012inferring, abrahao2013trace, daneshmand2014estimating}, presents a typical model for capturing the dynamics of virus or information spreading in networks. A discrete-time version of CICE is studied in \cite{netrapalli2012learning}. In \cite{pouget2015inferring}, the Generalized Linear Model is formulated, which is a class of diffusion models encompassing both the discrete and continuous-time CICE models, and the Linear Voter model.
System identification is a well-studied model estimation approach in the context of control theory; e.g., see \cite{SoderstromStoica89,ljung1998system,VerVer2007} and references therein. The traditional task of system identification focuses on estimating system parameters from measured input and output data.
Among recent emerging applications in the system identification area, two main extensions are relevant. The first involves cases where the state matrix of a state-space model represents a directed graph, in other words, the dynamic process can be factorized into subprocesses for each node \cite{materassi2010, materassi2013model,chiuso2012bayesian, seneviratne2012topology,VandenHofDHB2013}. In such cases, system identification methods are found effective in tackling dynamic network inference problems. The second extension is to consider state vectors evolving in discrete state spaces.
In this paper, we consider the Bernoulli Autoregressive (BAR) model, which is a parameterized discrete-time Markov chain initially introduced in \cite{katselis2018mixing}. In this model, the state of each node is a Bernoulli random variable with probability of success equal to a convex combination of the parental node states (or their flipped states) in the previous time step and an additional binary noise term ensuring persistence of excitation. The BAR model can be used to approximate opinion dynamics, biological and financial times series, and similar processes \cite{katselis2018mixing}. Another relevant discrete-time binary process is the ALARM model proposed in \cite{agaskar2013alarm}. In contrast to the BAR model, the ALARM model defines the transition probabilities via a logistic function.
Relying on well-established statistical principles, we first formulate and study the consistency properties of the Maximum Likelihood (ML) parameter estimator for the BAR model in which every parental node causally influences each descendant node positively; the notion of positive correlations is formalized in \cite{katselis2018mixing}. The consistency of ML estimators in the case of independent and identically distributed (i.i.d.) random variables has been studied extensively; see e.g., \cite{moulin2018,poor2013introduction} and references therein. The consistency of ML estimators for Markov chains appears to be less well studied, see \cite{ranneby1978necessary} for a reference.
To establish the (strong) consistency of the ML estimator for the BAR model, we prove that the vectorized transition probability matrix is an injective mapping of the model parameters. In the rest of the paper, we call the injectivity of this mapping \emph{identifiability} of the BAR model. The strong consistency of the ML estimator is then shown by leveraging the injectivity and the continuity of the transition probabilities with respect to the parameters, as well as the compactness of the parameter space. By relying on the ML principle, a closed-form estimator is subsequently provided. Strong consistency is also shown to hold for this estimator. The identifiability proof is then extended to the \emph{generic} BAR model with both positive and negative correlations, where the notion of negative correlations is also formalized in \cite{katselis2018mixing}. This identifiability extension establishes the strong consistency of the ML estimator for the general BAR model class. The closed-form estimator and its consistency are also extended to the \emph{generic} BAR model. These analytical results provide a complement to the prior work \cite{katselis2018mixing}. Finally, numerical simulations are provided to demonstrate the sample complexity gain achieved by the derived estimators in this paper over other existing algorithms in the literature for the BAR model when focusing on the structure identification subproblem. We note here that structural inference is an identification subproblem that can be tackled by parameter estimation in processes with underlying network structures.
The rest of this paper is organized as follows. In Section \ref{sec:BAR Model}, the BAR model with positive correlations only is introduced. In Sections \ref{sec:ML Estimation} and \ref{sec:closed form}, the identifiability of the BAR model with positive correlations only and the strong consistency of the corresponding ML estimator, as well as a closed-form estimator and its strong consistency are derived. The generic BAR model with positive and negative correlations and also, the identifiability and strong consistency of the corresponding ML estimator are provided in Section \ref{sec: General BAR}. The extension of the closed-form estimator to the generic BAR model and its strong consistency are derived in Section \ref{sec: closed form 2}. Finally, simulation results are provided in Section \ref{sec:sims} and Section \ref{sec:concl} concludes the paper.
\textbf{Notation}: Matrices and vectors are denoted by bold upper and lowercase letters, respectively. Probability distributions in vector form may be either denoted by bold upper or lowercase letters. Random vectors are also denoted by uppercase bold letters, while their corresponding realizations are denoted by lowercase bold letters. Scalar random variables are denoted by uppercase letters. The $i$-th entry of a vector $\mathbf{x}$ is denoted by $x_i$. For a matrix $\mathbf{A}$, $a_{ij}$ corresponds to its $(i,j)$-th entry. Depending on the context, vector and matrix entries may be indexed more generally, e.g., by state elements. $\mathbf{1}_m$ and $\mathbf{0}_m$ are the $m\times 1$ all-ones and all-zeros vectors, respectively, and $\mathbf{0}_{m\times n}$ is the all-zeros $m\times n$ matrix. $\mathbf{I}_m$ is the $m\times m$ identity matrix. Moreover, $\mathbf{e}_{m,i}$ is the $i$-th column of $\mathbf{I}_m$.
The cardinality of a set $\mathcal{V}$ is denoted by $\left|\mathcal{V}\right|$. For $m\in\mathbb{N}$, $[m]=\{1,2,\dots, m\}$. Finally, $\mathbb{I}(\cdot)$ stands for the indicator of a set or an event.
\section{THE BAR MODEL WITH POSITIVE CORRELATIONS}
\label{sec:BAR Model}
The BAR model is a special form of a Markov chain defined on a directed graph $\mathcal{G}=\left(\mathcal{V},\mathcal{E}\right)$ with $\left|\mathcal{V}\right|=p$ nodes. Let $X_i(k)\in \{0,1\}$ be the state of node $i\in [p]$ at time instant $k$ and let $\mathbf{X}(k)\in \{0,1\}^p$ be the associated BAR process state vector at the same time instant. The most natural BAR model, with positive correlations only, is described by
\begin{equation}\label{eq: BAR_def}
X_i(k+1) \sim \textnormal{Ber}\left(\mathbf{a}_i^\top\mathbf{X}(k)+b_iW_i(k+1)\right),\ \ \ i=1,\ldots,p,
\end{equation}
where $\mathbf{a}_i\in [0,1]^p, b_i\in [0,1], i=1,\ldots, p$ are parameters of the BAR model and $\textnormal{Ber}(\rho)$ represents the Bernoulli distribution with parameter $\rho$. Additionally, $\{W_i(k+1)\sim\text{Ber}(\rho_{w_i})\}_{i=1}^p$
are independent noise random variables, also independent of $\mathbf{X}(t)$ for any $t<k+1$, where $\rho_{w_i}\in[\rho_{min},\,\rho_{max}]$ for all $i\in[p]$ with $0 <\rho_{min}<\rho_{max}< 1$. Moreover, the initial distribution is $P_{\mathbf{X}(0)}$, i.e., $\mathbf{X}(0) \sim P_{\mathbf{X}(0)}$. The interpretation here is that the entries of $\mathbf{X}(k+1)$ are conditionally independent Bernoulli random variables given $\mathbf{X}(k)$.
To ensure that the Bernoulli random variables in (\ref{eq: BAR_def}) are well-defined, we require that
\begin{equation}\label{eq: BAR_const}
\sum_{j=1}^p a_{ij} + b_i = 1, \quad \forall i\in[p].
\end{equation}
For persistent excitation, we further assume that $b_{i}\geq b_{min},\forall i\in [p]$, where $ b_{min}\in(0,1)$ is a constant. Notice that if $b_i=0$ for all $i\in[p]$, the BAR Markov chain will get absorbed in $\mathbf{0}_p$ or $\mathbf{1}_p$ upon visiting the state $\mathbf{0}_p$ or $\mathbf{1}_p$, respectively.
Furthermore, we assume that $\mathbf{a}_i$ encodes a part of the graph structure through the equivalence
\begin{equation}\label{eq: edge}
\left(j,i \right)\in\mathcal{E} \iff a_{ij}>0, \quad\forall i,j \in [p],
\end{equation}
where the ordered pair $(j,i)$ denotes a directed edge from node $j$ to node $i$. The notion of positive correlations in (\ref{eq: BAR_def}) relies on the fact that $a_{ij}>0$ increases the probability of the event $\{X_{i}(k+1)=1\}$ when $X_j(k)=1$. A more general form of the BAR model with both positive and negative correlations is introduced in Section \ref{sec: General BAR}.
We now let $\mathbf{A}=\left[\mathbf{a}_1,\, \mathbf{a}_2,\, \cdots, \mathbf{a}_p \right]^T$, i.e., $\mathbf{a}_r^T$ corresponds to the $r$-th row of $\mathbf{A}$, $\mathbf{b}=\left[b_1,\, b_2,\, \cdots, b_p \right]^T$, $\mathbf{W}=\left[W_1,\, W_2,\, \cdots, W_p \right]^T$ and $\rho_w=\left[\rho_{w_1},\, \rho_{w_2},\, \cdots, \rho_{w_p} \right]^T$.
We note that $\{\mathbf{X}(k)\}_{k\geq 0}$ is an irreducible and aperiodic Markov chain with finite state space $\{0,1\}^p$. Moreover, for any vectors $\mathbf{u},\mathbf{v}\in \{0,1\}^p$
\begin{align}\label{eq: tranProb}
p_{\mathbf{uv}}&=P\left(\mathbf{X}(k+1)=\mathbf{v}|\mathbf{X}(k)=\mathbf{u}\right) \nonumber\\
&= E_{\mathbf{W}}\left[P\left(\mathbf{X}(k+1)=\mathbf{v}|\mathbf{X}(k)=\mathbf{u},\mathbf{W}\right) \right]\nonumber\\
&=\prod_{i=1}^p \left[\mathbf{a}_i^\top \mathbf{u} + \rho_{w_i} b_i \right]^{v_{i}} \left[1-\mathbf{a}_i^\top \mathbf{u} - \rho_{w_i} b_i \right]^{1-v_{i}}
\end{align}
specifies the transition probability from state $\mathbf{u}$ to state $\mathbf{v}$.
We denote by $\pi\in \mathbb{R}^{2^p}$ the associated stationary distribution with component $\pi_{\mathbf{u}}$ corresponding to the state $\mathbf{u} \in \{0,1\}^p$ and by $\mathbf{P}=\left(p_{\mathbf{u}\mathbf{v}}\right) \in \mathbb{R}^{2^p\times 2^p}$ the BAR transition probability matrix.
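For small $p$, the transition matrix $\mathbf{P}$ and the stationary distribution $\pi$ can be computed exactly by enumerating the $2^p$ states and applying (\ref{eq: tranProb}). The following Python sketch is provided purely for illustration; the parameter values are arbitrary choices satisfying (\ref{eq: BAR_const}) and the assumptions of Section \ref{sec:BAR Model}.
\begin{verbatim}
import itertools
import numpy as np

# Arbitrary illustrative parameters for p = 3 (not from any real data set).
p = 3
A = np.array([[0.3, 0.2, 0.0],
              [0.0, 0.4, 0.3],
              [0.2, 0.0, 0.5]])
b = 1.0 - A.sum(axis=1)              # b_i = 1 - sum_j a_ij
rho_w = np.array([0.5, 0.4, 0.6])    # noise success probabilities

states = [np.array(s) for s in itertools.product([0, 1], repeat=p)]
P = np.zeros((2 ** p, 2 ** p))
for i, u in enumerate(states):
    q = A @ u + rho_w * b            # success probabilities given X(k) = u
    for j, v in enumerate(states):
        # product form of the one-step transition probability
        P[i, j] = np.prod(q ** v * (1.0 - q) ** (1 - v))

# Stationary distribution: normalized left eigenvector of P at eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()
\end{verbatim}
Each row of \texttt{P} sums to one, and \texttt{pi} corresponds to the stationary distribution $\pi$ used in the ergodic arguments below.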
The goal is to recover the model parameters from an observed sequence $\{\mathbf{X}(k)=\mathbf{x}(k)\}_{k=0}^{T}$.
Clearly, by inferring $\mathbf{A}$, estimates of $\mathbf{b}$ and the underlying network structure are direct per (\ref{eq: BAR_const}) and (\ref{eq: edge}), respectively.
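Such a sequence can be generated directly from the defining recursion (\ref{eq: BAR_def}); the Python sketch below is a minimal simulator in which the uniformly random initial state is an arbitrary assumption (any initial distribution that does not depend on the model parameters is admissible).
\begin{verbatim}
import numpy as np

def simulate_bar(A, rho_w, T, seed=None):
    """Draw a sample path {x(k)}_{k=0}^{T} of the BAR chain."""
    rng = np.random.default_rng(seed)
    p = A.shape[0]
    b = 1.0 - A.sum(axis=1)              # b_i = 1 - sum_j a_ij
    x = rng.integers(0, 2, size=p)       # arbitrary initial state
    path = [x.copy()]
    for _ in range(T):
        w = rng.binomial(1, rho_w)       # independent Ber(rho_w_i) noise
        x = rng.binomial(1, A @ x + b * w)
        path.append(x.copy())
    return np.array(path)                # shape (T + 1, p)
\end{verbatim}
The resulting $(T+1)\times p$ binary array serves as the observed sequence in the estimators discussed next.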
\section{MAXIMUM LIKELIHOOD ESTIMATION}
\label{sec:ML Estimation}
In this section, we consider recovering the BAR model parameters via ML estimation and we establish the strong consistency of the ML estimator.
Suppose that $\{\mathbf{x}(k)\}_{k=0}^{T}$ is a sequence of observations generated by the BAR model (\ref{eq: BAR_def}). Let $\theta=(\mathbf{A},\mathbf{b}, \rho_{w})$ with the implicit relationship $\mathbf{b}=\mathbf{1}_p-\mathbf{A1}_p$. Clearly, $\mathbf{b}$ is a redundant parameter, but it is preserved here to facilitate the subsequent analysis. From (\ref{eq: tranProb}) the rescaled log-likelihood function is given by
\begin{equation}\label{eq: log_likelihood_1}
\begin{split}
L_T(\theta) &=\frac{1}{T}\sum_{k=0}^{T-1} \log P\left(\mathbf{x}(k+1)|\mathbf{x}(k); \theta\right)+\frac{1}{T}\log P_{\mathbf{X}(0)}(\mathbf{x}(0);\mathbf{\theta})\\
&=\frac{1}{T} \sum_{k=0}^{T-1} \sum_{i=1}^p \Bigg[ x_i(k+1)\log \left(\mathbf{a}_i^\top \mathbf{x}(k)+ \rho_{w_i} b_i\right) \\
&+ \left( 1-x_i(k+1)\right)\log \left(1-\mathbf{a}_i^\top \mathbf{x}(k)- \rho_{w_i} b_i\right) \Bigg]\\
& +\frac{1}{T}\log P_{\mathbf{X}(0)}(\mathbf{x}(0);\mathbf{\theta}).
\end{split}
\end{equation}
In the rest of the paper, we assume that $P_{\mathbf{X}(0)}$ is independent of the model parameters, which is well-aligned with the realistic scenario of arbitrarily initializing the Markov chain.
For any states $\mathbf{u}$, $\mathbf{v}\in \{0,1\}^p$, we denote by $N_{\mathbf{u}\mathbf{v}}$ the number of one-step transitions from state $\mathbf{u}$ to state $\mathbf{v}$ in the observed sequence and we let $N_{\mathbf{u}}= \sum_{\mathbf{v}} N_{\mathbf{u}\mathbf{v}}$ be the amount of time spent in state $\mathbf{u}$ in a horizon of $T$ time steps. Then (\ref{eq: log_likelihood_1}) can be also written as
\begin{equation}\label{eq:log_likelihood_1_new}
L_T(\theta) = \sum_{\mathbf{u},\,\mathbf{v}}\frac{N_{\mathbf{u}\mathbf{v}}}{T}\log p_{\mathbf{u}\mathbf{v}}(\theta)+\frac{1}{T}\log P_{\mathbf{X}(0)}(\mathbf{x}(0)).
\end{equation}
Let $\theta_0$ be the true parameter tuple $(\mathbf{A},\mathbf{b},\rho_w)$. An application of the Ergodic Theorem \cite{b13} for Markov chains reveals that \[L_T(\theta_0)\xrightarrow[T\rightarrow \infty]{\rm a.s.} \sum_{\mathbf{u},\mathbf{v}}\pi_{\mathbf{u}}p_{\mathbf{uv}}(\theta_0)\log p_{\mathbf{uv}}(\theta_0),\] which is the negative of the entropy rate of the corresponding BAR chain with parameter set $\theta_0$ and is always finite since the BAR model has a finite state space.
Let $\theta_0\in \Theta$, where $\Theta$ is a compact set appropriately defined on the basis of (\ref{eq: BAR_const}) and additional assumptions in Section \ref{sec:BAR Model}.
The ML estimator $\hat{\theta}_T$ of $\theta_0$ satisfies
\begin{equation}\label{eq:MLEstimatorDef}
\hat{\theta}_{T} \in \arg\max_{\theta\in \Theta}\,\, T\cdot L_T(\theta) = \arg\max_{\theta\in \Theta}\,\, L_T(\theta).
\end{equation}
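In practice, the rescaled log-likelihood in (\ref{eq: log_likelihood_1}) (without the parameter-independent initial term) can be evaluated directly from a sample path, and (\ref{eq:MLEstimatorDef}) can be approximated by maximizing it over $\Theta$ with any off-the-shelf constrained optimizer. A minimal Python sketch of the likelihood evaluation, assuming the path is stored as a $(T+1)\times p$ binary array as in the simulation sketch of Section \ref{sec:BAR Model}, is given below.
\begin{verbatim}
import numpy as np

def bar_log_likelihood(A, rho_w, path):
    """Rescaled log-likelihood L_T, initial-distribution term dropped."""
    T = path.shape[0] - 1
    b = 1.0 - A.sum(axis=1)
    X_prev, X_next = path[:-1], path[1:]
    q = X_prev @ A.T + rho_w * b     # q[k, i] = a_i^T x(k) + rho_w_i * b_i
    ll = X_next * np.log(q) + (1 - X_next) * np.log(1 - q)
    return ll.sum() / T
\end{verbatim}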
In the rest of this section, we will show the strong consistency of $\hat{\theta}_T$.
The key idea of the proof is along the lines of the proof of Theorem 2.1 in \cite{ranneby1978necessary} using techniques for general discrete-time Markov chains. To summarize, we first prove that $\mathbf{P}(\hat{\theta}_T)\xrightarrow[T\rightarrow \infty]{\rm a.s.}
\mathbf{P}(\theta_0)$. To establish that $\hat{\theta}_T\xrightarrow[T\rightarrow \infty]{\rm a.s.} \theta_0$, we then show that the vector-valued mapping $\mathbf{p}:\Theta\rightarrow \mathbb{R}^{2^{2p}}$ defined as $\mathbf{p}(\theta)=\mathrm{vec}(\mathbf{P}(\theta))$ is injective, i.e.,
\begin{equation}\label{eq: injectivity}
\forall \theta, \theta' \in \Theta,\ \ \ \theta \neq \theta'\implies \mathbf{p}(\theta)\neq\mathbf{p}(\theta').
\end{equation}
Here, $\mathrm{vec}(\cdot)$ denotes the vectorization of a matrix.
Finally, we complete the proof by leveraging the compactness of the parameter space $\Theta$ and the continuity of the components of $\mathbf{p}(\theta)=\mathrm{vec}(\mathbf{P}(\theta))$ or equivalently, of the transition probabilities with respect to the model parameters.
\textbf{Remark}: In the following, we will say that the BAR model is \emph{identifiable} when (\ref{eq: injectivity}) holds.
The main result of this section can be now stated.
\begin{theorem}\label{thm: consistency}
The ML estimator $\hat{\theta}_T$ of $\theta_0$, defined in (\ref{eq:MLEstimatorDef}), for the BAR model in (\ref{eq: BAR_def}) is strongly consistent.
\end{theorem}
\begin{proof} We break up the proof into three parts.
\textbf{\emph{Proof of $\mathbf{P}(\hat{\theta}_T)\xrightarrow[T\rightarrow \infty]{\rm a.s.} \mathbf{P}(\theta_0)$}}: We present a simpler, self-contained proof, following ideas in the proof of Theorem 2.1 in \cite{ranneby1978necessary}.
For each $\mathbf{u} \in \{0,1\}^p$, we define the (row) vector $\mathbf{Q}_{\mathbf{u}}=\left(N_{\mathbf{u}\mathbf{v}}/N_{\mathbf{u}}\right)_{\mathbf{v}\in\{0,1\}^p} \in\mathbb{R}^{2^p}$ with the convention $\mathbf{Q}_{\mathbf{u}}=2^{-p}\mathbf{1}_{2^p}^\top$ for $N_{\mathbf{u}}=0$ and we let $\mathbf{P}_{\mathbf{u}}= \left(P_{\mathbf{u}\mathbf{v}}\right)_{\mathbf{v}\in\{0,1\}^p} \in \mathbb{R}^{2^p}$ denote the transition distribution out of state $\mathbf{u}$, which is also a row vector in the transition matrix $\mathbf{P}$. In particular, it is well-known that the set $\{\mathbf{Q}_{\mathbf{u}}\}_{\mathbf{u}\in \{0,1\}^p}$ is the \emph{ML estimator} of the transition matrix $\mathbf{P}$, assuming no further parameterization of the transition probabilities. Consider the fact that the Kullback--Leibler divergence from $\mathbf{P}_{\mathbf{u}}(\hat{\theta}_T)$ to $\mathbf{Q}_{\mathbf{u}}$ is nonnegative, i.e.,
\begin{equation*}
D_{\rm KL}\left(\mathbf{Q}_{\mathbf{u}}\Big\| \mathbf{P}_{\mathbf{u}}(\hat{\theta}_T)\right) = -\sum_{\mathbf{v}} \frac{N_{\mathbf{u}\mathbf{v}}}{N_{\mathbf{u}}} \log \frac{ p_{\mathbf{u}\mathbf{v}}(\hat{\theta}_T)}{N_{\mathbf{u}\mathbf{v}}/N_{\mathbf{u}}} \geq 0
\end{equation*}
or equivalently,
\begin{equation*}
\sum_{\mathbf{v}} \frac{N_{\mathbf{u}\mathbf{v}}}{N_{\mathbf{u}}} \log \frac{N_{\mathbf{u}\mathbf{v}}}{N_{\mathbf{u}}} \geq \sum_{\mathbf{v}} \frac{N_{\mathbf{u}\mathbf{v}}}{N_{\mathbf{u}}} \log p_{\mathbf{u}\mathbf{v}}(\hat{\theta}_T).
\end{equation*}
Multiply both sides of the above inequality by $\frac{N_{\mathbf{u}}}{T}$ and sum over $\mathbf{u}$ to obtain
\begin{equation}\label{eq: thm1_1}
\sum_{\mathbf{u},\mathbf{v}} \frac{N_{\mathbf{u}\mathbf{v}}}{T} \log \frac{N_{\mathbf{u}\mathbf{v}}}{N_{\mathbf{u}}} \geq \sum_{\mathbf{u},\mathbf{v}} \frac{N_{\mathbf{u}\mathbf{v}}}{T} \log p_{\mathbf{u}\mathbf{v}}(\hat{\theta}_T) \geq \sum_{\mathbf{u},\mathbf{v}} \frac{N_{\mathbf{u}\mathbf{v}}}{T} \log p_{\mathbf{u}\mathbf{v}}(\theta_0)
\end{equation}
where the last inequality is due to (\ref{eq:log_likelihood_1_new}) and the definition of the ML estimator.
From (\ref{eq: thm1_1}), we can further obtain
\begin{equation}\label{eq: thm1_2}
0\geq \sum_{\mathbf{u},\mathbf{v}} \frac{N_{\mathbf{u}\mathbf{v}}}{T} \log \frac{p_{\mathbf{u}\mathbf{v}}(\hat{\theta}_T)}{N_{\mathbf{u}\mathbf{v}}/N_{\mathbf{u}}} \geq \sum_{\mathbf{u},\mathbf{v}} \frac{N_{\mathbf{u}\mathbf{v}}}{T} \log \frac{p_{\mathbf{u}\mathbf{v}}(\theta_0)}{N_{\mathbf{u}\mathbf{v}}/N_{\mathbf{u}}}.
\end{equation}
By the Ergodic Theorem for Markov chains,
\begin{equation} \label{eq: ergodic_thm}
\sum_{\mathbf{u},\,\mathbf{v}}\frac{N_{\mathbf{u}\mathbf{v}}}{T}\log p_{\mathbf{u}\mathbf{v}}(\theta_0) \xrightarrow[T\rightarrow \infty]{\rm a.s.} \sum_{\mathbf{u},\mathbf{v}} \pi_{\mathbf{u}}(\theta_0) p_{\mathbf{u}\mathbf{v}}(\theta_0) \log p_{\mathbf{u}\mathbf{v}}(\theta_0)
\end{equation}
and also
\begin{equation}\label{eq: thm1_3}
\sum_{\mathbf{u},\mathbf{v}} \frac{N_{\mathbf{u}\mathbf{v}}}{T} \log \frac{N_{\mathbf{u}\mathbf{v}}}{N_{\mathbf{u}}} \xrightarrow[T\rightarrow \infty]{\rm a.s.} \sum_{\mathbf{u},\mathbf{v}} \pi_{\mathbf{u}}(\theta_0) p_{\mathbf{u}\mathbf{v}}(\theta_0) \log p_{\mathbf{u}\mathbf{v}}(\theta_0).
\end{equation}
By (\ref{eq: ergodic_thm}) and (\ref{eq: thm1_3}) we have that
\begin{equation*}
\sum_{\mathbf{u},\mathbf{v}} \frac{N_{\mathbf{u}\mathbf{v}}}{T} \log \frac{p_{\mathbf{u}\mathbf{v}}(\theta_0)}{N_{\mathbf{u}\mathbf{v}}/N_{\mathbf{u}}}\xrightarrow[T\rightarrow \infty]{\rm a.s.} 0.
\end{equation*}
This together with (\ref{eq: thm1_2}) yields
\begin{equation}\label{eq: thm1_4}
\sum_{\mathbf{u},\mathbf{v}} \frac{N_{\mathbf{u}\mathbf{v}}}{T} \log \frac{p_{\mathbf{u}\mathbf{v}}(\hat{\theta}_T)}{N_{\mathbf{u}\mathbf{v}}/N_{\mathbf{u}}}\xrightarrow[T\rightarrow \infty]{\rm a.s.} 0.
\end{equation}
Employing Pinsker's inequality \cite{kullback1967lower} and the fact that the total variation distance between two discrete measures $q,r$ with vector forms $\mathbf{q},\mathbf{r}$, respectively, is $\|q-r\|_{\rm TV}=(1/2)\|\mathbf{q}-\mathbf{r}\|_1$, we have that for each $\mathbf{u}\in \{0,1\}^p$
{\small \begin{equation*}
\frac{1}{2}D_{\rm KL}\left(\mathbf{Q}_{\mathbf{u}}\Big\| \mathbf{P}_{\mathbf{u}}(\hat{\theta}_T)\right) \geq \left\| \mathbf{Q}_{\mathbf{u}} -\mathbf{P}_{\mathbf{u}}(\hat{\theta}_T) \right\|_{\rm TV}^2 \geq \frac{1}{4}\left\| \mathbf{Q}_{\mathbf{u}} -\mathbf{P}_{\mathbf{u}}(\hat{\theta}_T) \right\|_2^2.
\end{equation*}}
Multiplying again with $\frac{N_{\mathbf{u}}}{T}$ and summing over $\mathbf{u}$ gives
\begin{equation}\label{eq: thm1_5}
-2\sum_{\mathbf{u},\mathbf{v}} \frac{N_{\mathbf{u}\mathbf{v}}}{T} \log \frac{ p_{\mathbf{u}\mathbf{v}}(\hat{\theta}_T)}{N_{\mathbf{u}\mathbf{v}}/N_{\mathbf{u}}} \geq \sum_{\mathbf{u},\mathbf{v}}\frac{N_{\mathbf{u}}}{T} \left(p_{\mathbf{u}\mathbf{v}}(\hat{\theta}_T)-\frac{N_{\mathbf{u}\mathbf{v}}}{N_{\mathbf{u}}}\right)^2\geq0.
\end{equation}
Now employing again the Ergodic Theorem for Markov chains, i.e.,
\begin{equation*}
\frac{N_{\mathbf{u}}}{T}\xrightarrow[T\rightarrow \infty]{\rm a.s.}\pi_{\mathbf{u}}>0, \forall \mathbf{u}\in \{0,1\}^p,
\end{equation*}
and combining (\ref{eq: thm1_4}) and (\ref{eq: thm1_5}) yields
\begin{equation*}
\left|p_{\mathbf{u}\mathbf{v}}(\hat{\theta}_T)-\frac{N_{\mathbf{u}\mathbf{v}}}{N_{\mathbf{u}}}\right| \xrightarrow[T\rightarrow \infty]{\rm a.s.} 0, \ \ \forall (\mathbf{u},\mathbf{v})\in (\{0,1\}^p)^2.
\end{equation*}
We then end up with
\begin{equation}\label{eq: thm1_6}
\left|p_{\mathbf{u}\mathbf{v}}(\hat{\theta}_T)-p_{\mathbf{u}\mathbf{v}}(\theta_0)\right| \xrightarrow[T\rightarrow \infty]{\rm a.s.} 0, \ \ \forall (\mathbf{u},\mathbf{v})\in (\{0,1\}^p)^2.
\end{equation}
\noindent\textbf{\emph{Proof of identifiability}}: Due to the redundancy of $\mathbf{b}$, we reparameterize the BAR model as $\mathbf{\theta}=(\mathbf{A},\mathbf{c})$ with $\mathbf{c}=\mathrm{diag}(\mathbf{b})\mathbf{\rho}_w$. Clearly, there is a one-to-one correspondence between a given set of parameters $(\mathbf{A},\mathbf{c})$ and $\left(\mathbf{A},\mathbf{b}, \rho_w\right)$ via the relations $\mathbf{b}=\mathbf{1}_p-\mathbf{A1}_p$ and $\mathbf{\rho}_w=\left(\mathrm{diag}(\mathbf{1}_p-\mathbf{A1}_p)\right)^{-1}\mathbf{c}$, where $(\cdot)^{-1}$ denotes matrix inversion.
Suppose that two different sets of parameters $\theta = \left(\mathbf{A},\mathbf{c}\right)$ and $\theta' = \left(\mathbf{A}',\mathbf{c}'\right)$ lead to the same transition probability matrix, i.e., $\mathbf{P}(\theta)=\mathbf{P}(\theta')$ or equivalently, $\mathbf{p}(\theta)=\mathbf{p}(\theta')$.
First, consider the case of $\mathbf{c}\neq\mathbf{c}'$. The following argument is valid for both the cases of $\mathbf{A}=\mathbf{A}'$ and $\mathbf{A}\neq\mathbf{A}'$. Without loss of generality, assume that $c_1 \neq c_1'$. Let $\mathbf{u}=\mathbf{0}$ and $\mathbf{v}$ be some vector in $\{0,1\}^p$ with $v_1 = 0$. Since $p_{\mathbf{uv}}(\theta)=p_{\mathbf{uv}}(\theta')$ and $c_i,c_i'\neq 0 $ $\forall i$, (\ref{eq: tranProb}) implies that
\begin{equation}\label{eq: lemma1_1}
1-c_1' = \left(1-c_1\right) \prod_{i=2}^p \left(\frac{c_i}{c'_i} \right)^{v_i} \left(\frac{1-c_i}{1-c'_i} \right)^{1-v_i}.
\end{equation}
Consider now the transition probability from $\mathbf{u}=\mathbf{0}$ to $\mathbf{v}'$, where $v'_1=1$ and $v'_j = v_j$ for $j=2,\dots,p$. Since $p_{\mathbf{uv'}}(\theta)=p_{\mathbf{uv'}}(\theta')$,
\begin{equation}\label{eq: lemma1_2}
c'_1 = c_1 \prod_{i=2}^p \left(\frac{c_i}{c'_i} \right)^{v_i} \left(\frac{1- c_i}{1- c'_i} \right)^{1-v_i}.
\end{equation}
Combining (\ref{eq: lemma1_1}) and (\ref{eq: lemma1_2}), it is easy to see that $c_1 = c_1'$, which is a contradiction. Thus, $\mathbf{c}\neq \mathbf{c'}\implies \mathbf{P(\theta)}\neq \mathbf{P}(\theta')$ or equivalently, $\mathbf{p}(\theta)\neq \mathbf{p}(\theta')$.
Now we consider the second case where $\mathbf{c}=\mathbf{c'}$ and $\mathbf{A}\neq \mathbf{A}'$. Without loss of generality, let $a_{11}\neq a'_{11}$. Consider $\mathbf{u}'=\mathbf{e}_{p,1}$ and the same $\mathbf{v},\mathbf{v}'$ as before. By our assumption that $p_{\mathbf{u'v}}(\theta)=p_{\mathbf{u'v}}(\theta')$ and $p_{\mathbf{u'v'}}(\theta)=p_{\mathbf{u'v'}}(\theta')$, the contradiction $a_{11}= a'_{11}$ arises. Thus, $\mathbf{c}=\mathbf{c'}, \mathbf{A}\neq \mathbf{A}'\implies \mathbf{P(\theta)}\neq \mathbf{P}(\theta')$ or equivalently, $\mathbf{p}(\theta)\neq \mathbf{p}(\theta')$.
Finally, it is easy to see that $\mathbf{c}=\mathbf{c'}$ and $\mathbf{A} = \mathbf{A}'$ imply that $\mathbf{b}=\mathbf{b}'$ and $\mathbf{\rho}_w=\mathbf{\rho}_w'$ due to the aforementioned one-to-one correspondence between $(\mathbf{A},\mathbf{c})$ and $\left(\mathbf{A},\mathbf{b}, \rho_w\right)$.
\noindent\textbf{\emph{Completion of the proof}}: Let $\left(\Omega, \mathcal{F}, P_{\theta_0, P_{\mathbf{X}(0)}}\right)$ be the probability space on which the BAR process is defined, where the subscript $P_{\mathbf{X}(0)}$ indicates that the law of the BAR chain depends on the initial measure. By (\ref{eq: thm1_6}),
\begin{equation*}
P_{\theta_0, P_{\mathbf{X}(0)}}\left(\tilde{\Omega}=\left\{\omega\in \Omega: \lim_{T\rightarrow\infty} P(\hat{\theta}_T(\omega))= P(\theta_0)\right\}\right)=1,
\end{equation*}
which holds independently of $P_{\mathbf{X}(0)}$ due to the Ergodic Theorem for Markov chains.
Consider $\omega\in \tilde{\Omega}$ such that $ \lim_{T\rightarrow\infty} \hat{\theta}_T(\omega)= \theta_0$ does not hold and note that $\hat{\theta}_T(\omega)\in \Theta, \forall T\geq 1$ by (\ref{eq:MLEstimatorDef}). Then, by the Bolzano-Weierstrass Theorem, there exists a subsequence $\{\hat{\theta}_{T_{k}}(\omega)\}_{k=1}^\infty$ converging to some point $\theta^{*}(\omega)\neq \theta_0$. Here, we use the fact that if every convergent subsequence of the bounded sequence $\{\hat{\theta}_T(\omega)\}_{T=1}^{\infty}$ converges to the same limit $\theta_0$, then $ \lim_{T\rightarrow\infty} \hat{\theta}_T(\omega)= \theta_0$.
The compactness of $\Theta$ implies that $\theta^{*}(\omega)\in \Theta$. Since for every pair $(\mathbf{u},\mathbf{v})$, $p_{\mathbf{u}\mathbf{v}}(\theta)$ is continuous in $\theta$ due to (\ref{eq: tranProb}) and the definition of $\Theta$, $p_{\mathbf{u}\mathbf{v}}(\theta)$ is sequentially continuous. Therefore,
$
\lim_{k\rightarrow\infty} p_{\mathbf{u}\mathbf{v}}(\hat{\theta}_{T_k}(\omega))= p_{\mathbf{u}\mathbf{v}}(\theta^*(\omega)), \forall (\mathbf{u},\mathbf{v})$
and $p_{\tilde{\mathbf{u}}\tilde{\mathbf{v}}}(\theta^*(\omega))\neq p_{\tilde{\mathbf{u}}\tilde{\mathbf{v}}}(\theta_0)$ for at least a pair $(\tilde{\mathbf{u}},\tilde{\mathbf{v}})$. Here, the identifiability of the BAR model is invoked. Observing now that $\left\{p_{\tilde{\mathbf{u}}\tilde{\mathbf{v}}}(\hat{\theta}_{T_k}(\omega))\right\}_{k=1}^{\infty}$ is a subsequence of $\left\{p_{\tilde{\mathbf{u}}\tilde{\mathbf{v}}}(\hat{\theta}_{T}(\omega))\right\}_{T=1}^{\infty}$, a contradiction with the choice of $\omega\in \tilde{\Omega}$ for which $ \lim_{T\rightarrow\infty} p_{\mathbf{u}\mathbf{v}}(\hat{\theta}_T(\omega))= p_{\mathbf{u}\mathbf{v}}(\theta_0), \forall (\mathbf{u}, \mathbf{v})$ is established.
Therefore, $ \lim_{T\rightarrow\infty} \hat{\theta}_T(\omega)= \theta_0, \forall \omega \in \Tilde{\Omega}$, or equivalently, $\hat{\theta}_T\xrightarrow[T\rightarrow \infty]{\rm a.s.}\theta_0$.
\end{proof}
\section{A CLOSED-FORM ESTIMATOR}
\label{sec:closed form}
In this section, we provide a closed-form estimator for the BAR model parameters in (\ref{eq: BAR_def}), i.e., for $(\mathbf{A},\mathbf{c})$, since by knowing $(\mathbf{A},\mathbf{c})$ we can recover $(\mathbf{b},\mathbf{\rho}_w)$. Recall that $\mathbf{c}=\mathrm{diag}(\mathbf{b})\mathbf{\rho}_w$. Considering the log-likelihood function for $\theta=(\mathbf{A},\mathbf{c})$, we have
\begin{align} \label{eq: log-likelihood_3}
L(\theta) = & \sum_{k=0}^{T-1} \sum_{\mathbf{u}}\sum_{\mathbf{v}} \mathbb{I}\left(\mathbf{x}(k)=\mathbf{u},\,\mathbf{x}(k+1)=\mathbf{v}\right) \log p_{\mathbf{u}\mathbf{v}}(\theta)\nonumber\\
&+ \sum_{\mathbf{u}} \mathbb{I}\left(\mathbf{X}(0)=\mathbf{u}\right) \log P_{\mathbf{X}(0)}(\mathbf{u})\nonumber\\
= & \sum_{k=0}^{T-1} \sum_{\mathbf{u}}\sum_{\mathbf{v}} \mathbb{I}\left(\mathbf{x}(k)=\mathbf{u},\,\mathbf{x}(k+1)=\mathbf{v}\right) \cdot \nonumber\\
& \sum_{r=1}^p \left[v_r \log P\left(v_r=1|\mathbf{u}\right) + \left(1-v_r\right)\log P\left(v_r=0|\mathbf{u}\right)\right]\nonumber\\
&+ \sum_{\mathbf{u}} \mathbb{I}\left(\mathbf{X}(0)=\mathbf{u}\right) \log P_{\mathbf{X}(0)}(\mathbf{u}).
\end{align}
Observe that $P\left(v_r=1|\mathbf{u}\right)$ and $P\left(v_r=0|\mathbf{u}\right)$ are independent of $\mathbf{v}$. We can therefore define $\vartheta_{\mathbf{u},r,l} = P\left((\cdot)_r = l|\mathbf{u}\right)$, for $l\in\{0,1\}$. Furthermore, we define $N_{\mathbf{u},r,l}=\sum_{k=0}^{T-1} \mathbb{I}\left(\mathbf{x}(k)=\mathbf{u}, x_r(k+1)=l\right)$, which is the number of times the BAR chain transitions from state $\mathbf{u}$ to a state with $r$-th entry being equal to $l$. Moreover, $N_{\mathbf{u},r,0}+ N_{\mathbf{u},r,1} = N_{\mathbf{u}} = \sum_{\mathbf{v}}N_{\mathbf{u}\mathbf{v}}$, $\forall \mathbf{u}\in\{0,1\}^p$ and $r\in[p]$. With these introductions we have the following theorem:
\begin{theorem}\label{thm: LS_red}
Consider an observed sequence $\{\mathbf{x}(k)\}_{k=0}^{T}$. For $i=1,\ldots, p$, define the estimator $\hat{\mathbf{c}}=[\hat{c}_1,\ldots, \hat{c}_p]^T$ by the entry estimators $\hat{c}_i=\sum_{k=0}^{T-1} \mathbb{I}(\mathbf{x}(k)=\mathbf{0}_p,x_i(k+1)=1)/\sum_{k=0}^{T-1} \mathbb{I}(\mathbf{x}(k)=\mathbf{0}_p)$, assuming that the state $\mathbf{0}_p$ is visited at least once in the time span $\{0,\ldots,T-1\}$. Moreover, in the special case where $\rho_{w_i}=\rho_w, \forall i\in \{1,\ldots, p\}$, $\hat{\mathbf{c}}$ can be replaced by $\hat{\mathbf{c}}=\left(\frac{1}{p}\sum_{i=1}^p\hat{c}_i\right)\mathbf{1}_p$.
Furthermore, suppose that in $\{\mathbf{x}(k)\}_{k=0}^{T-1}$ there are $m$ distinct states $\mathbf{u}_1$, $\mathbf{u}_2$,..., $\mathbf{u}_m$ such that $p\leq m\leq 2^p$. Let $\mathbf{U}_m\in \mathbb{R}^{m\times p}$ be a matrix with $k$-th row equal to $\mathbf{u}_k^\top$ for $k\in[m]$ and $\mathbf{y}_{m,r} = \left[N_{\mathbf{u}_1,r,1}/N_{\mathbf{u}_1},\cdots, N_{\mathbf{u}_m,r,1}/N_{\mathbf{u}_m}\right]^\top$. Then, whenever $\mathbf{U}_{m}$ is full-column rank, $\hat{\mathbf{A}}$ is an estimator of $\mathbf{A}$, where
\begin{equation}\label{eq: LS_red}
\hat{\mathbf{a}}_r = \left(\mathbf{U}_m^\top\mathbf{U}_m\right)^{-1}\mathbf{U}_m^\top\left(\mathbf{y}_{m,r}-\hat{c}_r\cdot \mathbf{1}_m\right), \ \ \forall r\in [p].
\end{equation}
Finally, to obtain a valid estimate of the parameter set for any $T\geq 1$, we let
\[
\hat{\mathbf{\theta}}=\left[\left(\hat{\mathbf{A}}, \hat{\mathbf{b}}=\mathbf{1}_p-\hat{\mathbf{A}}\mathbf{1}_p,\hat{\mathbf{\rho}}_w= \left(\mathrm{diag}(\mathbf{1}_p-\hat{\mathbf{A}}\mathbf{1}_p)\right)^{-1}\hat{\mathbf{c}}\right)\right]^{+},
\]
where $[\cdot]^{+}$ corresponds to a projection onto the parameter space $\Theta$.
\end{theorem}
\begin{proof}
First, let us rewrite the log-likelihood function in (\ref{eq: log-likelihood_3}) as
\begin{align*}
\mathcal{L}=L(\theta)=& \sum_{\mathbf{u}}\sum_{r=1}^p \left(N_{\mathbf{u},r,0} \log \vartheta_{\mathbf{u},r,0} + N_{\mathbf{u},r,1} \log\vartheta_{\mathbf{u},r,1}\right)\\& + \sum_{\mathbf{u}} \mathbb{I}\left(\mathbf{X}(0)=\mathbf{u}\right) \log P_{\mathbf{X}(0)}(\mathbf{u}).
\end{align*}
Instead of maximizing this function with respect to $\theta=(\mathbf{A},\mathbf{c})$, we maximize it with respect to the choice of the marginal conditional probabilities $\{\vartheta_{\mathbf{u},r,0},\vartheta_{\mathbf{u},r,1}\}_{\mathbf{u},r}$.
Consider the constrained ML estimation problem
\begin{align}\label{eq: const_MLE}
& \quad\max_{\{\vartheta_{\mathbf{u},r,0}\geq 0, \vartheta_{\mathbf{u},r,1}\geq 0\}_{\mathbf{u},r}} \quad \mathcal{L} \nonumber\\
& {\rm s.t.} \quad \vartheta_{\mathbf{u},r,0} + \vartheta_{\mathbf{u},r,1} =1, \quad \forall \mathbf{u}\in\{0,1\}^p, \, \forall r\in[p].
\end{align}
Forming the Lagrangian and setting the gradient (with respect to $\{\vartheta_{\mathbf{u},r,0},\vartheta_{\mathbf{u},r,1}\}_{\mathbf{u},r}$) to zero, we obtain
\begin{equation*}
\hat{\vartheta}_{\mathbf{u},r,i} = \frac{N_{\mathbf{u},r,i}}{N_{\mathbf{u}}},\quad \forall \mathbf{u}\in\{0,1\}^p,\, \forall r\in[p],\, \forall i\in\{0,1\}.
\end{equation*}
Recall that $\vartheta_{\mathbf{u},r,1}$ is defined as the probability of transitioning from state $\mathbf{u}$ to some state with $r$-th component equal to $1$. We can therefore require that
\begin{align}\label{eq:invariance}
\begin{bmatrix}\frac{N_{\mathbf{u}_1,r,1}}{N_{\mathbf{u}_1}} \\ \vdots \\\frac{N_{\mathbf{u}_m,r,1}}{N_{\mathbf{u}_m}} \end{bmatrix} = \underbrace{\begin{bmatrix}\mathbf{u}_1^\top \\ \vdots \\\mathbf{u}_m^\top\end{bmatrix}}_{\mathbf{U}_m}\cdot \hat{\mathbf{a}}_r +\hat{c}_r\mathbf{1}_m
\end{align}
\begin{equation}\label{eq:LS_red_2}
\text{or}\ \ \mathbf{y}_{m,r}-\hat{c}_r\mathbf{1}_m=\mathbf{U}_m\hat{\mathbf{a}}_r,
\end{equation}
where the estimates $\hat{c}_r$ are provided in the statement of the theorem.
Under the assumption that $\mathbf{U}_m$ is full-column rank, $\mathbf{U}_m^\top \mathbf{U}_m$ is nonsingular and (\ref{eq: LS_red}) follows. The proof is then concluded by projecting the obtained estimate onto $\Theta$.
\end{proof}
\textbf{Remark}: (\ref{eq:invariance}) is reminiscent of the \emph{invariance property} for ML estimation.
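For illustration only (the experiments in Section \ref{sec:sims} were implemented in MATLAB), the estimator of Theorem \ref{thm: LS_red} can be assembled directly from the transition counts; the following Python sketch uses our own array conventions and omits the final projection onto $\Theta$.
\begin{verbatim}
import numpy as np

def closed_form_estimator(x):
    # x: (T+1, p) array with entries in {0,1}; returns (A_hat, c_hat)
    # before the projection onto the parameter space.
    T1, p = x.shape
    stats = {}                      # state u -> [N_u, vector of N_{u,r,1}]
    for k in range(T1 - 1):
        u = tuple(int(v) for v in x[k])
        if u not in stats:
            stats[u] = [0, np.zeros(p)]
        stats[u][0] += 1
        stats[u][1] += x[k + 1]
    # entry estimators c_hat_i, assuming the all-zeros state is visited
    zero = (0,) * p
    c_hat = stats[zero][1] / stats[zero][0]
    # least-squares step; requires U_m to have full column rank
    U = np.array(list(stats.keys()), dtype=float)              # m x p
    Y = np.array([stats[u][1] / stats[u][0] for u in stats])   # m x p
    G = np.linalg.solve(U.T @ U, U.T)                          # (U^T U)^{-1} U^T
    A_hat = np.zeros((p, p))
    for r in range(p):
        A_hat[r] = G @ (Y[:, r] - c_hat[r])
    return A_hat, c_hat
\end{verbatim}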
\begin{theorem}\label{thm: LS_constistency}
The closed-form estimator in Theorem \ref{thm: LS_red} is strongly consistent.
\end{theorem}
\begin{proof}
It is sufficient to show that $(\hat{\mathbf{A}},\hat{\mathbf{c}})$ given by Theorem \ref{thm: LS_red} is strongly consistent.
Since the BAR chain is finite-state, the stationary probabilities satisfy $\pi_{\mathbf{u}}>0, \forall \mathbf{u}\in \{0,1\}^p$. Moreover, for any initial measure, the Ergodic Theorem for Markov chains implies that\footnote{The following convergences can be easily justified by straightforward embeddings of $\mathbf{U}_m$ into $\mathbb{R}^{2^p\times p}$ in the first case and of $\mathbf{y}_{m,r}$ into $\mathbb{R}^{2^p}$ in the second case via zero padding.}
\begin{itemize}
\item $\frac{N_\mathbf{u}}{T}\xrightarrow[T\rightarrow \infty]{\rm a.s.}\pi_{\mathbf{u}}$. This further implies that $\mathbf{U}_m\xrightarrow[T\rightarrow \infty]{\rm a.s.} \mathbf{U}_{2^p}$ in the sense that $[\mathbf{U}_m^\top\ \ \mathbf{0}_{(2^p-m)\times p}^\top]^\top\xrightarrow[T\rightarrow \infty]{\rm a.s.} \mathbf{U}_{2^p}$.
\item $\frac{N_{\mathbf{u},r,1}}{N_{\mathbf{u}}}\xrightarrow[T\rightarrow \infty]{\rm a.s.} P\left((\cdot)_r = 1|\mathbf{u}\right), \forall \mathbf{u}\in \{0,1\}^p, \forall r\in [p]$ or equivalently, $\frac{N_{\mathbf{u},r,1}}{N_{\mathbf{u}}}\xrightarrow[T\rightarrow \infty]{\rm a.s.} \mathbf{u}^\top\mathbf{a}_r+c_r, \forall \mathbf{u}\in \{0,1\}^p, \forall r\in [p]$. This implies that $\mathbf{y}_{m,r}\xrightarrow[T\rightarrow \infty]{\rm a.s.} [P\left((\cdot)_r = 1|\mathbf{u}\right)]_{\mathbf{u}\in \{0,1\}^p}, \forall r\in [p]$, which is a $2^p\times 1$ column vector, in the sense that $[\mathbf{y}_{m,r}^\top\ \ \mathbf{0}_{2^p-m}^\top]^\top\xrightarrow[T\rightarrow \infty]{\rm a.s.} [P\left((\cdot)_r = 1|\mathbf{u}\right)]_{\mathbf{u}\in \{0,1\}^p}, \forall r\in [p]$. As a consequence, $\hat{\mathbf{c}}\xrightarrow[T\rightarrow \infty]{\rm a.s.}\mathbf{c}$.
\end{itemize}
Combining these observations with (\ref{eq:LS_red_2}) we obtain
\begin{align}
&\lim_{T\rightarrow \infty} [\mathbf{U}_m^\top\ \ \mathbf{0}_{(2^p-m)\times p}^\top]^\top\hat{\mathbf{a}}_r=\lim_{T\rightarrow \infty} \mathbf{U}_{2^p}\hat{\mathbf{a}}_r=\mathbf{U}_{2^p}\mathbf{a}_r\ \ \text{a.s.}
\end{align}
and the strong consistency of the closed-form estimator in Theorem \ref{thm: LS_red} follows if $\mathbf{U}_{2^p}$ is full-column rank or equivalently if $\mathbf{U}_{2^p}^\top\mathbf{U}_{2^p}$ is nonsingular. It is easy to see that
$\mathbf{U}_{2^p}^\top\mathbf{U}_{2^p}$ has diagonal entries equal to $2^{p-1}$ and off-diagonal entries equal to $2^{p-2}$. Thus, we can write $\mathbf{U}_{2^p}^\top\mathbf{U}_{2^p}=2^{p-2}\mathbf{1}_p\mathbf{1}_p^\top+2^{p-2}\mathbf{I}_p$. This matrix is invertible for every $p<\infty$, since $1+2^{p-2}\mathbf{1}_p^\top\left(2^{p-2}\mathbf{I}_p\right)^{-1}\mathbf{1}_p=1+p>0$ as the condition in the Sherman–Morrison formula \cite{press2007numerical} dictates.
\end{proof}
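As a quick sanity check (not part of the proof), the claimed structure of $\mathbf{U}_{2^p}^\top\mathbf{U}_{2^p}$ can be verified numerically for small $p$; a minimal Python sketch:
\begin{verbatim}
import numpy as np
from itertools import product

p = 4
U = np.array(list(product([0, 1], repeat=p)), dtype=int)  # all 2^p binary rows
G = U.T @ U
assert np.all(np.diag(G) == 2 ** (p - 1))                  # diagonal: 2^{p-1}
off = G[~np.eye(p, dtype=bool)]
assert np.all(off == 2 ** (p - 2))                         # off-diagonal: 2^{p-2}
\end{verbatim}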
\section{THE GENERIC BAR MODEL}
\label{sec: General BAR}
Motivated by modeling positive and negative influences from parental nodes, an extension of the BAR model in (\ref{eq: BAR_def}) has been introduced in \cite{katselis2018mixing}. We first reformulate this generic BAR model.
Denote by $\mathcal{S}_i=\mathcal{S}_i^+ \cup \mathcal{S}_i^- $ the parental set of node $i$, where $\mathcal{S}_i^+ \cap \mathcal{S}_i^- = \emptyset$. The nodes in $\mathcal{S}_i^+$ and $\mathcal{S}_i^-$ are said to have positive and negative influence on $i$, respectively. The generic BAR model, parameterized by $\tilde{\theta} = \left(\mathbf{A},\Tilde{\mathbf{A}},\mathbf{b},\rho_w\right)$, is defined as
\begin{equation}\label{eq: BAR_def2}
X_i(k+1)\sim \textnormal{Ber}\left(\mathbf{a}_i^\top\mathbf{X}(k)+\Tilde{\mathbf{a}}_i^\top\left(\mathbf{1}-\mathbf{X}(k)\right)+b_iW_i(k+1)\right) ,
\end{equation}
for all $i\in[p]$, where $\mathbf{a}_i^\top$ and $\Tilde{\mathbf{a}}_i^\top$ are the $i$-th rows of $\mathbf{A}\in\mathbb{R}^{p\times p}$ and $\Tilde{\mathbf{A}}\in\mathbb{R}^{p\times p}$, respectively. Furthermore, we assume that $\mathcal{S}_i^+ = \text{supp}(\mathbf{a}_i)$ and $\mathcal{S}_i^- = \text{supp}(\Tilde{\mathbf{a}}_i)$. Here, $\text{supp}(\cdot)$ denotes the support of a vector. As in the previous case, the constraints
\begin{equation}\label{eq: BAR_const2}
\sum_{j=1}^p \left(a_{ij} + \Tilde{a}_{ij} \right) + b_i = 1, \quad \forall i\in[p]
\end{equation}
are also required in this case. Similarly, we assume that $a_{ij},\Tilde{a}_{ij}\geq0$, $\forall i,j\in[p]$, $b_i\geq b_{min}$ and $\rho_{w_i}\in[\rho_{min},\,\rho_{max}], \forall i\in [p]$. Therefore, the parameter space is defined as
\begin{equation*}
\begin{split}
\tilde{\Theta} = \Bigg\{ & \left(\mathbf{A},\Tilde{\mathbf{A}},\mathbf{b}, \rho_w \right)\Big| \sum_{j=1}^p \left(a_{ij} + \Tilde{a}_{ij} \right)+ b_i = 1, \,\, b_i\geq b_{min},\\ & \rho_{w_i}\in\left[\rho_{min},\,\rho_{max}\right], \forall i\in [p], \, \, \text{and} \,\, \,a_{ij},\tilde{a}_{ij}\geq 0,\\ & a_{ij}\Tilde{a}_{ij}=0,\,\forall i,j \in [p] \Bigg\} .
\end{split}
\end{equation*}
\textbf{Remark}: The parameter space $\tilde{\Theta}$ is compact. To see this, first note that the associated constraints imposed on $\tilde{\Theta}$ are defined row-wise. This implies that $\tilde{\Theta}$ can be viewed as the Cartesian product of row spaces, i.e., $\tilde{\Theta}=\prod_{i=1}^p \tilde{\Theta}_i$, where
\begin{equation*}
\begin{split}
\tilde{\Theta}_i =& \Bigg\{
\left(\mathbf{a}_i^\top, \tilde{\mathbf{a}}_i^\top, b_i, \rho_{w_i}\right)\Big| \sum_{j=1}^p (a_{ij} + \Tilde{a}_{ij}) + b_i = 1,b_i\geq b_{min},\\ & \rho_{w_i}\in\left[\rho_{min},\,\rho_{max}\right], \text{and} \,\, a_{ij},\tilde{a}_{ij}\geq 0,\, a_{ij}\Tilde{a}_{ij}=0,\,\forall j \in [p] \Bigg\} .
\end{split}
\end{equation*}
Then $\tilde{\Theta}$ is compact if and only if $\tilde{\Theta}_i$ is compact for every $i\in[p]$. It is straightforward to see that $\tilde{\Theta}_i$ is compact without the constraints $a_{ij}\Tilde{a}_{ij}=0$, $\forall j\in[p]$. Also observe that adding a constraint $a_{ij}\Tilde{a}_{ij}=0$ leads to the coordinate projection of a compact set in $\mathbb{R}^{2p+2}$ onto two $(2p+1)$-dimensional subspaces of $\mathbb{R}^{2p+2}$ with the corresponding images of the projected compact set being also compact sets. Denote the union of these images as $\tilde{\Theta}_{i,j}$. Then $\tilde{\Theta}_i = \cap_{j=1}^p\tilde{\Theta}_{i,j}$, i.e., $\tilde{\Theta}_i$ is the intersection of $p$ compact sets and is therefore compact.
The ML estimator is a maximizer of the rescaled log-likelihood function, that is,
\begin{equation*}
\tilde{\theta}_T \in \arg\max_{\tilde{\theta}\in \tilde{\Theta}}\,\,\Tilde{L}_T(\tilde{\theta}),
\end{equation*}
where
{\small
\begin{equation*}
\begin{split}
&\Tilde{L}_T(\tilde{\theta}) = \frac{1}{T} \sum_{k=0}^{T-1} \sum_{i=1}^p \Bigg[ x_i(k+1)\log \left(\mathbf{a}_i^\top \mathbf{x}(k)+ \Tilde{\mathbf{a}}_i^\top (\mathbf{1}-\mathbf{x}(k))+\rho_{w_i} b_i\right) \\
&+ \left( 1-x_i(k+1)\right)\log \left(1-\mathbf{a}_i^\top \mathbf{x}(k)-\Tilde{\mathbf{a}}_i^\top (\mathbf{1}-\mathbf{x}(k))- \rho_{w_i} b_i\right) \Bigg] \\
&+\frac{1}{T}\log P_{\mathbf{X}(0)}(\mathbf{x}(0)).
\end{split}
\end{equation*}
}
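For concreteness, $\Tilde{L}_T(\tilde{\theta})$ can be evaluated directly from the data; the following Python sketch is illustrative only, with the optional initial-distribution term treated as a user-supplied function.
\begin{verbatim}
import numpy as np

def rescaled_loglik(x, A, A_tilde, b, rho_w, p0=None):
    # x: (T+1, p) binary array; returns the rescaled log-likelihood.
    # The initial-state term is omitted when p0 is None.
    T1, p = x.shape
    T = T1 - 1
    c = b * rho_w                                    # c_i = b_i * rho_{w_i}
    val = 0.0
    for k in range(T):
        q = A @ x[k] + A_tilde @ (1 - x[k]) + c      # P(X_i(k+1)=1 | x(k))
        val += np.sum(x[k + 1] * np.log(q) + (1 - x[k + 1]) * np.log(1 - q))
    if p0 is not None:
        val += np.log(p0(x[0]))
    return val / T
\end{verbatim}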
The ML estimator for the generic BAR model can be shown to be strongly consistent via a direct extension of the analysis in Section \ref{sec:ML Estimation}. More precisely, it is sufficient to establish identifiability.
\begin{theorem}\label{thm: identifiability}
For the generic BAR model in (\ref{eq: BAR_def2}), $\tilde{\theta} \neq \tilde{\theta}'$ implies $\mathbf{p}(\tilde{\theta}) \neq \mathbf{p}(\tilde{\theta}')$ for all $(\tilde{\theta}, \tilde{\theta}' )\in \tilde{\Theta}\times \tilde{\Theta}$.
\end{theorem}
\begin{proof}
For two different sets of parameters $(\mathbf{A},\tilde{\mathbf{A}},\mathbf{b}, \rho_w)$, $(\mathbf{A}',\tilde{\mathbf{A}}',\mathbf{b}',\rho_w')$ and by letting $c_i = b_i \rho_{w_i}, i=1,\ldots, p$, we can reparameterize the generic BAR model to obtain $\tilde{\theta}=(\mathbf{A},\tilde{\mathbf{A}},\mathbf{c})$, $\tilde{\theta}'=(\mathbf{A}',\tilde{\mathbf{A}}',\mathbf{c}')$ by recalling the one-to-one correspondence between the initial sets of parameters and the latter ones, as in the case of the BAR model with only positive correlations. The corresponding relations in this case are $\mathbf{b}=\mathbf{1}_p-(\mathbf{A}+\tilde{\mathbf{A}})\mathbf{1}_p$ and $\mathbf{\rho}_w=\left(\mathrm{diag}(\mathbf{b})\right)^{-1}\mathbf{c}$. We will examine different cases for which $\tilde{\theta}\neq \tilde{\theta}'$ and, by assuming that $\mathbf{p}(\tilde{\theta})=\mathbf{p}(\tilde{\theta}')$, we will arrive at contradictions.
\begin{itemize}
\item \emph{First Case}: Suppose that there exists some $i\in[p]$ such that $\mathcal{S}_i^{+} \neq \mathcal{S'}_i^{+}$, where $\mathcal{S}_i^{+}$ and $\mathcal{S'}_i^{+}$ correspond to the parental neighborhoods with positive influence on the $i$-th node, as these neighborhoods are encoded in $\tilde{\theta}$ and $\tilde{\theta}'$, respectively. Without loss of generality, we assume that $\mathcal{S}_1^{+} \neq \mathcal{S'}_1^{+}$. Translating this structural difference into the model parameters, suppose again without loss of generality that $1\in \mathcal{S}_1^+$, i.e., $a_{11}\neq 0$ and therefore $\Tilde{a}_{11}=0$, and that $1\notin \mathcal{S'}_1^+$, i.e., $a'_{11}=0$ and either $\tilde{a'}_{11}=0$ or $\tilde{a'}_{11}>0$.
We first consider the transition probabilities from $\mathbf{u}=\sum_{i\in\mathcal{S}_1^-} \mathbf{e}_{p,i}$ to states $\mathbf{v}$ and $\mathbf{v}'$, where $\mathbf{v}$ and $\mathbf{v}'$ only differ in the first element with $v_1=1$ and $v'_1=0$. Then by letting
$p_{\mathbf{u}\mathbf{v}}(\tilde{\theta}) = p_{\mathbf{u}\mathbf{v}}(\tilde{\theta}')$ and $p_{\mathbf{u}\mathbf{v'}}(\tilde{\theta}) = p_{\mathbf{u}\mathbf{v'}}(\tilde{\theta}')$, we obtain
\begin{align}\label{eq: thm2_1}
c_1 &= \sum_{j\in\mathcal{S}_1^-}a'_{1j} + \sum_{k\notin \mathcal{S}_1^-}\Tilde{a'}_{1k} + c'_1 \nonumber\\
&=\sum_{j\in\mathcal{S}_1^-}a'_{1j} + \tilde{a}'_{11} +\sum_{k\notin \mathcal{S}_1^-\cup\{1\}}\Tilde{a'}_{1k} + c'_1.
\end{align}
Further by considering the probabilities of transitioning from $\mathbf{u'}= \mathbf{e}_{p,1} + \sum_{i\in\mathcal{S}_1^-} \mathbf{e}_{p,i}$ to $\mathbf{v}$ and $\mathbf{v'}$, we obtain
\begin{equation}\label{eq: thm2_2}
\begin{split}
&a_{11}+c_1=\sum_{j\in\mathcal{S}_1^-}a'_{1j} + \sum_{k\notin \mathcal{S}_1^-\cup \{1\}}\Tilde{a'}_{1k} + c'_1,
\end{split}
\end{equation}
where the assumption $a'_{11}=0$ has been used.
By (\ref{eq: thm2_1}) and (\ref{eq: thm2_2}) we have that $a_{11}+\tilde{a}'_{11}=0$, which implies that $a_{11}=0$ and contradicts our assumption. The case of $\mathcal{S}_i^{-} \neq \mathcal{S'}_i^{-}$ is similar.
\item \emph{Second Case}: Suppose that $(\mathcal{S}_i^{+}, \mathcal{S}_i^{-}) =(\mathcal{S'}_i^{+},\mathcal{S'}_i^{-})$ for all $i\in[p]$ with either $\left(\mathbf{A},\mathbf{\Tilde{A}}\right)=\left(\mathbf{A}',\mathbf{\Tilde{A}}'\right)$ or $\left(\mathbf{A},\mathbf{\Tilde{A}}\right) \neq \left(\mathbf{A}',\mathbf{\Tilde{A}}'\right)$ and $\mathbf{c}\neq \mathbf{c'}$. Without loss of generality, let $c_1\neq c_1'$. Similarly to the proof of Theorem \ref{thm: consistency}, by selecting $\mathbf{u}=\sum_{i\in\mathcal{S}_1^-} \mathbf{e}_{p,i}$ and $\mathbf{v},\mathbf{v}'$ as before, we arrive at the contradiction $c_1=c'_1$.
\item \emph{Third Case}: Suppose that $(\mathcal{S}_i^{+}, \mathcal{S}_i^{-}) =(\mathcal{S'}_i^{+},\mathcal{S'}_i^{-})$ for all $i\in[p]$, $\mathbf{c} = \mathbf{c'}$ and $\left(\mathbf{A},\mathbf{\Tilde{A}}\right) \neq \left(\mathbf{A}',\mathbf{\Tilde{A}}'\right)$. Without loss of generality, we assume that $a_{11}\neq 0$, $a'_{11}\neq 0$ and $a_{11}\neq a'_{11}$. In this case, the contradiction $a_{11}= a'_{11}$ arises when selecting $\mathbf{u}= \mathbf{e}_{p,1} + \sum_{i\in\mathcal{S}_1^-} \mathbf{e}_{p,i}$ and $\mathbf{v},\mathbf{v}'$ as before.
\end{itemize}
\end{proof}
\section{A CLOSED-FORM ESTIMATOR FOR THE GENERIC BAR MODEL}
\label{sec: closed form 2}
Extending the closed-form estimator for the BAR model with only positive correlations, we now introduce a closed-form estimator for the generic BAR model. Consider the same introductions as before and also the Bernoulli argument for $X_i(k+1)$ in (\ref{eq: BAR_def2}). We can rewrite
\begin{align}\label{eq:BARargument-new}
&\mathbf{a}_i^\top\mathbf{X}(k)+\Tilde{\mathbf{a}}_i^\top\left(\mathbf{1}-\mathbf{X}(k)\right)+b_iW_i(k+1)=\nonumber\\
&(\mathbf{a}_i-\Tilde{\mathbf{a}}_i)^\top\mathbf{X}(k)+\Tilde{\mathbf{a}}_i^\top\mathbf{1}+b_iW_i(k+1)
\end{align}
and we note that due to the nonoverlapping supports of $\mathbf{a}_i$ and $\Tilde{\mathbf{a}}_i$ for every $i\in [p]$, the vector $\mathbf{a}_i-\Tilde{\mathbf{a}}_i$ contains the entries of $\mathbf{a}_i$ and the entries of $\Tilde{\mathbf{a}}_i$ with flipped signs, each at a different location. We further note that
\begin{align}
P(X_i(k+1)=1|\mathbf{X}(k)=\mathbf{x}(k))&=(\mathbf{a}_i-\Tilde{\mathbf{a}}_i)^\top\mathbf{x}(k)+\Tilde{\mathbf{a}}_i^\top\mathbf{1}+b_i\rho_{w_i}\nonumber\\&=\bar{\mathbf{a}}_i^\top\mathbf{x}(k)+\bar{c}_i,
\end{align}
where $\bar{\mathbf{a}}_i=\mathbf{a}_i-\Tilde{\mathbf{a}}_i$ and $\bar{c}_i=\Tilde{\mathbf{a}}_i^\top\mathbf{1}+b_i\rho_{w_i}$ for $i\in[p]$. With these introductions, we can reparameterize the generic BAR model as $\bar{\mathbf{\theta}}=(\bar{\mathbf{A}},\bar{\mathbf{c}})$, where
$\bar{\mathbf{A}}=[\bar{\mathbf{a}}_1,\ldots, \bar{\mathbf{a}}_p]^T$ and $\bar{\mathbf{c}}=[\bar{c}_1,\ldots, \bar{c}_p]^T$. Clearly, by knowing $\bar{\mathbf{A}}$, we can immediately separate $\mathbf{A}$ and $\Tilde{\mathbf{A}}$ based on the underlying signs of the entries. Furthermore, by also knowing $\bar{\mathbf{c}}$, we can compute the products $c_i=b_i\rho_{w_i}, i=1,\ldots, p$ and finally, we can compute $\rho_{w_i}$ via the formula
$\rho_{w_i}=c_i/(1-\sum_{j=1}^p(a_{ij}+\tilde{a}_{ij}))$ for $i\in [p]$. The parameterization $\bar{\mathbf{\theta}}=(\bar{\mathbf{A}},\bar{\mathbf{c}})$ is of the same form as the parameterization $\mathbf{\theta}=(\mathbf{A},\mathbf{c})$ of the BAR model with only positive correlations and therefore, the same unprojected closed-form estimator as before is suitable.
\begin{theorem}\label{thm: LS_red_genericBAR}
Consider an observed sequence $\{\mathbf{x}(k)\}_{k=0}^{T}$. For $i=1,\ldots, p$, define the estimator $\hat{\bar{\mathbf{c}}}=[\hat{\bar{c}}_1,\ldots, \hat{\bar{c}}_p]^T$ by the entry estimators $\hat{\bar{c}}_i=\sum_{k=0}^{T-1} \mathbb{I}(\mathbf{x}(k)=\mathbf{0}_p,x_i(k+1)=1)/\sum_{k=0}^{T-1} \mathbb{I}(\mathbf{x}(k)=\mathbf{0}_p)$, assuming that the state $\mathbf{0}_p$ is visited at least once in the time span $\{0,\ldots,T-1\}$.
Furthermore, suppose that in $\{\mathbf{x}(k)\}_{k=0}^{T-1}$ there are $m$ distinct states $\mathbf{u}_1$, $\mathbf{u}_2$,..., $\mathbf{u}_m$ such that $p\leq m\leq 2^p$. Let $\mathbf{U}_m\in \mathbb{R}^{m\times p}$ be a matrix with $k$-th row equal to $\mathbf{u}_k^\top$ for $k\in[m]$ and $\mathbf{y}_{m,r} = \left[N_{\mathbf{u}_1,r,1}/N_{\mathbf{u}_1},\cdots, N_{\mathbf{u}_m,r,1}/N_{\mathbf{u}_m}\right]^\top$. Then, whenever $\mathbf{U}_{m}$ is full-column rank, $\hat{\bar{\mathbf{A}}}$ is an estimator of $\bar{\mathbf{A}}$, where
\begin{equation}\label{eq: LS_red2}
\hat{\bar{\mathbf{a}}}_r = \left(\mathbf{U}_m^\top\mathbf{U}_m\right)^{-1}\mathbf{U}_m^\top\left(\mathbf{y}_{m,r}-\hat{\bar{c}}_r\cdot \mathbf{1}_m\right), \ \ \forall r\in [p].
\end{equation}
Based on the signs of the entries in $\hat{\bar{\mathbf{A}}}$, we can separate $\hat{\mathbf{A}}$ and $\hat{\Tilde{\mathbf{A}}}$. Moreover, $\hat{\mathbf{b}}=\mathbf{1}_p-(\hat{\mathbf{A}}+\hat{\Tilde{\mathbf{A}}})\mathbf{1}_p$, $\hat{\mathbf{c}}=\hat{\bar{\mathbf{c}}}-\hat{\Tilde{\mathbf{A}}}\mathbf{1}_p$ and $\hat{\mathbf{\rho}}_w= \left(\mathrm{diag}(\hat{\mathbf{b}})\right)^{-1}\hat{\mathbf{c}}$.
Finally, to obtain a valid estimate of the parameter set for any $T\geq 1$, we let
$
\hat{\bar{\mathbf{\theta}}}=\left[\left(\hat{\mathbf{A}},\hat{\Tilde{\mathbf{A}}}, \hat{\mathbf{b}},\hat{\mathbf{\rho}}_w\right)\right]^{+},
$
where $[\cdot]^{+}$ corresponds to a projection onto the parameter space $\tilde{\Theta}$ by preserving the supports of $\hat{\mathbf{A}}$ and $\hat{\Tilde{\mathbf{A}}}$.
\end{theorem}
The proofs of this theorem and of the strong consistency of the proposed closed-form estimator are straightforward based on the proofs of Theorems \ref{thm: LS_red}
and \ref{thm: LS_constistency}, respectively.
\section{SIMULATION RESULTS}
\label{sec:sims}
To validate our analysis, experiments on synthetic networks of various sizes are performed. Focusing only on the structure identification of the underlying BAR graphs, we compare the ML estimators with the closed-form estimators and the BAR structure observer (BARobs) proposed in \cite{katselis2018mixing}. Our simulations show that the ML estimator either for the BAR model with positive correlations only or for the generic BAR model can perfectly recover the graph structure with sufficient data and outperforms the other estimators in terms of sample complexity.
\begin{figure}
\centering
\includegraphics[scale=0.255]{p10d5_rho.pdf}
\caption{$p=10$, $d_{max}=5$: $F_1$ scores for the ML and closed-form estimators.}
\label{fig: MLEvsBARobs_a}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.24]{p20d5_new.pdf}
\caption{$p=20$, $d_{max}=5$: $F_1$ scores for the ML, closed-form and BARobs estimators.}
\label{fig: MLEvsBARobs_c}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.24]{biological.pdf}
\caption{A biological network with $43$ nodes: $F_1$ score for the ML estimator.}
\label{fig: Bio}
\end{figure}
We implement all experiments in MATLAB. In particular for the ML estimators, we use the ``fmincon'' function in the ``optimization toolbox'' to solve the maximization problems. The ground truth networks $\left(\mathbf{A},\mathbf{b},\rho_w\right)$ or $\left(\mathbf{A},\mathbf{\Tilde{A}},\mathbf{b},\rho_w\right)$ are randomly generated in such a way that all constraints specifying the parameter spaces $\Theta$ or $\Tilde{\Theta}$ are satisfied. Moreover, to facilitate the simulation, we create networks with minimum edge weight $a_{min}\in (0,1)$, i.e., $a_{ij}\geq a_{min}, \forall (j,i)\in\mathcal{E}$ in the case of the BAR model (\ref{eq: BAR_def}) and either $a_{ij}\geq a_{min}$ or $\Tilde{a}_{ij}\geq a_{min}, \forall (j,i)\in\mathcal{E}$ in the case of the generic BAR model (\ref{eq: BAR_def2}). The data $\{\mathbf{x}(k)\}_{k=0}^{T}$ are generated according to the created ground truth models and then are used for network inference. We use the $F_1$ score, defined by
\begin{equation*}
F_1 = \frac{2}{\text{recall}^{-1}+ \text{precision}^{-1}},
\end{equation*}
as the criterion to evaluate the performance of the algorithms. We note that \emph{recall} is the fraction of correctly recovered edges among the ground truth edges and \emph{precision} is the fraction of correctly recovered edges among all edges (correct or incorrect) identified by the algorithm. More specifically, an edge $(j,i)$ is viewed as being inferred if $\hat{a}_{ij}\geq ca_{min}$ or $\hat{\tilde{a}}_{ij}\geq ca_{min}$ for some empirically selected $c\in(0,1)$.
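As an aside, the thresholded edge recovery and the resulting $F_1$ score can be computed as in the following Python sketch; the array layout and the default value of $c$ are our own illustrative choices.
\begin{verbatim}
import numpy as np

def f1_score(A_true, A_hat, a_min, c=0.5):
    # an edge (j, i) exists if |A_true[i, j]| >= a_min; it is declared
    # inferred if |A_hat[i, j]| >= c * a_min
    true_edges  = np.abs(A_true) >= a_min
    found_edges = np.abs(A_hat)  >= c * a_min
    tp = np.sum(true_edges & found_edges)
    recall    = tp / max(np.sum(true_edges), 1)
    precision = tp / max(np.sum(found_edges), 1)
    if recall == 0.0 or precision == 0.0:
        return 0.0
    return 2.0 / (1.0 / recall + 1.0 / precision)
\end{verbatim}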
In Fig. \ref{fig: MLEvsBARobs_a}, we compare the performance of ML and closed-form estimators for the BAR model (\ref{eq: BAR_def}) with $p=10$ nodes and maximum in-degree $d_{max}=5$. In Fig. \ref{fig: MLEvsBARobs_c}, the performance of the ML, closed-form and BARobs estimators is demonstrated for a synthetic network with size $p=20$ and maximum in-degree $d_{max}=5$ corresponding to the generic BAR model (\ref{eq: BAR_def2}). The $x$-axis corresponds to the number of observations $T$ and the $y$-axis to the $F_1$ score. In both cases, the ML estimator can achieve $F_1$ scores equal to $1$ for a sufficient sample size, e.g., $T=1200$ in Fig. \ref{fig: MLEvsBARobs_a} and $T=4000$ in Fig. \ref{fig: MLEvsBARobs_c}. In Fig. \ref{fig: MLEvsBARobs_c}, the BARobs algorithm also achieves an $F_1$ score equal to $1$ for a sufficiently large sample size. However, compared to the ML estimator, the BARobs requires a larger sample size. Finally, in Fig. \ref{fig: MLEvsBARobs_c} both the ML and BARobs estimators outperform the closed-form estimator in (\ref{eq: LS_red2}).
Finally, focusing on the ML estimator, which outperforms the other two estimators in terms of sample complexity, we present a real network experiment in a biological application in which small sample sizes are critical \cite{SIGNET}. The underlying biological network consists of $43$ nodes. The state of each node is binary and is updated according to a Boolean rule defined by the states of some nodes in the network. We approximate the network by the generic BAR model (\ref{eq: BAR_def2}) and we generate pseudo-real data. Fig. \ref{fig: Bio} illustrates the performance of the ML estimator on this data set.
\section{CONCLUSIONS}
\label{sec:concl}
In this paper, we studied the problem of estimating the parameters of a class of Markov chains called BAR models. ML estimation for BAR chains was shown to be strongly consistent. Strong consistency was also established for closed-form estimators of these BAR models.
\section*{ACKNOWLEDGMENT}
This research has been supported in part by NSF Grants NeTS 1718203, CPS ECCS 1739189, ECCS 16-09370, CCF 1934986, NSF/USDA Grant AG 2018-67007-28379, ARO W911NF-19-1-0379, ECCS 2032321 and ONR Grant Navy N00014-19-1-2566.
\bibliographystyle{IEEEtran}
\section{Introduction}
The convergence in entropy for stochastic systems is an important topic in both probability theory and mathematical physics, and has been well studied for Markov processes by using the log-Sobolev inequality, see for instance \cite{BGL} and references therein. However, the existing results derived in the literature do not apply to McKean-Vlasov SDEs due to the nonlinearity of the associated Fokker-Planck equations.
In 2003, Carrillo, McCann and Villani \cite{CMV} proved the exponential convergence in a mean field entropy of the following granular media equation for probability density functions $(\rr_t)_{t\ge 0}$ on $\R^d$:
\beq\label{E0} \pp_t \rr_t= \DD\rr_t + {\rm div} \big\{\rr_t\nabla (V + W*\rr_t)\big\},\end{equation}
where the internal potential $V\in C^2( \R^d)$ satisfies $\Hess_V\ge \ll I_{d}$ for a constant $\ll>0$ and the $d\times d$-unit matrix $I_d$, and the interaction potential $W\in C^2(\R^d)$ satisfies
$W(-x)=W(x)$ and $\Hess_W\ge -\delta I_{d}$
for some constant $\delta\in [0, \ll/2)$. Recall that we write $M\ge \ll I_d$ for a constant $\ll$ and a $d\times d$-matrix $M$, if $\<Mv,v\>\ge \ll |v|^2$ holds for any $v\in \R^d$.
To introduce the mean field entropy, let $\mu_V(\text{\rm{d}} x):= \ff{\text{\rm{e}}^{-V(x)}\text{\rm{d}} x} {\int_{\R^d}\text{\rm{e}}^{-V(x)}\text{\rm{d}} x}$, recall the classical relative entropy
$${\rm Ent}(\nu|\mu) := \begin{cases} \mu(\rr\log\rr), &\text{if} \ \nu=\rr\mu,\\
\infty, &\text{otherwise}\end{cases}$$ for $\mu,\nu\in \scr P$, the space of all probability measures on $\R^d$, and consider the free energy functional
$$E^{V,W}(\mu):= {\rm Ent}(\mu|\mu_V)+ \ff 1 2 \int_{\R^d\times\R^d} W(x-y) \mu(\text{\rm{d}} x)\mu(\text{\rm{d}} y),\ \ \mu\in \scr P,$$
where we set $E^{V,W}(\mu)=\infty$ if either ${\rm Ent}(\mu|\mu_V)=\infty$ or the integral term is not well defined. Then the
associated mean field entropy ${\rm Ent}^{V,W}$
is defined by
\beq\label{ETP} {\rm Ent}^{V,W}(\mu):= E^{V,W}(\mu) - \inf_{\nu\in\scr P} E^{V,W}(\nu),\ \ \mu\in \scr P.\end{equation}
According to \cite{CMV}, for $V$ and $W$ satisfying the above-mentioned conditions, $E^{V,W}$ has a unique minimizer $\mu_\infty,$ and $\mu_t(\text{\rm{d}} x):= \rr_t(x)\text{\rm{d}} x$ for probability density $\rr_t$ solving \eqref{E0} converges to $\mu_\infty$ exponentially in the mean field entropy:
$${\rm Ent}^{V,W}(\mu_t)\le \text{\rm{e}}^{-(\ll-2\delta)t} {\rm Ent}^{V,W}(\mu_0),\ \ t\ge 0.$$
Recently, this result was generalized in \cite{GLW} by establishing the uniform log-Sobolev inequality for the associated mean field particle systems, such that ${\rm Ent}^{V,W}(\mu_t)$ decays exponentially for a class of non-convex $V\in C^2(\R^d)$ and $W\in C^2(\R^d\times \R^d)$, where $W(x,y)=W(y,x)$ and $\mu_t(\text{\rm{d}} x):=\rr_t(x)\text{\rm{d}} x$ for $\rr_t$ solving the nonlinear PDE
\beq\label{E00} \pp_t \rr_t= \DD\rr_t + {\rm div} \big\{\rr_t\nabla (V + W\circledast\rr_t)\big\},\end{equation}
where
\beq\label{AOO} W\circledast\rr_t:=\int_{\R^d}W(\cdot,y) \rr_t(y)\text{\rm{d}} y.\end{equation} In this case, $\Ent^{V,W}$ is defined in \eqref{ETP} for the free energy functional
$$E^{V,W}(\mu):= {\rm Ent}(\mu|\mu_V)+ \ff 1 2 \int_{\R^d\times\R^d} W(x,y) \mu(\text{\rm{d}} x)\mu(\text{\rm{d}} y),\ \ \mu\in \scr P.$$
To study \eqref{E00} using probability methods, we consider the
following McKean-Vlasov SDE with initial distribution $\mu_0$:
\beq\label{E0'} \text{\rm{d}} X_t= \ss 2 \text{\rm{d}} B_t-\nabla \big\{ V+ W\circledast\scr L_{X_t}\big\}(X_t)\text{\rm{d}} t,\end{equation}
where $B_t$ is the $d$-dimensional Brownian motion, $\scr L_{X_t}$ is the distribution of $X_t$, and
\beq\label{A01} (W\circledast\mu)(x):=\int_{\R^d} W(x,y) \mu(\text{\rm{d}} y),\ \ x\in \R^d, \mu\in \scr P\end{equation} provided the integral exists.
Let $\rr_t(x)=\ff{(\scr L_{X_t})(\text{\rm{d}} x)}{\text{\rm{d}} x},\ t\ge 0.$ By It\^o's formula and the integration by parts formula, we have
\begin{align*}& \ff{\text{\rm{d}}}{\text{\rm{d}} t} \int_{\R^d} ( \rr_t f)(x)\text{\rm{d}} x =\ff{\text{\rm{d}} }{\text{\rm{d}} t} \E [f(X_t)] =\E\big[\big(\DD-\nabla V- \nabla \{W\circledast\rr_t\}\big)f(X_t)\big]\\
&= \int_{\R^d} \rr_t(x) \big\{\DD f -\<\nabla V+ \nabla \{W\circledast\rr_t\}, \nabla f\> \big\}(x)\text{\rm{d}} x\\
&= \int_{\R^d} f(x) \big\{\DD\rr_t + {\rm div}[\rr_t\nabla V+ \rr_t \nabla (W\circledast\rr_t)]\big\}(x)\text{\rm{d}} x,\ \ t\ge 0,\ f\in C_0^\infty(\R^d).\end{align*}
Therefore, $\rr_t$ solves \eqref{E00}. On the other hand, by this fact and the uniqueness of \eqref{E00} and \eqref{E0'}, if $\rr_t$ solves \eqref{E00} with $\mu_0(\text{\rm{d}} x):=\rr_0(x)\text{\rm{d}} x$,
then $\rr_t(x)\text{\rm{d}} x=\scr L_{X_t}(\text{\rm{d}} x)$ for $X_t$ solving \eqref{E0'} with $\scr L_{X_0}=\mu_0.$
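Although not used in the analysis below, \eqref{E0'} can be approximated by the classical mean field interacting particle system, replacing $\scr L_{X_t}$ by the empirical measure of $N$ particles; the following Euler--Maruyama sketch in Python is purely illustrative, with the gradient routines, step size and particle number as assumptions.
\begin{verbatim}
import numpy as np

def simulate_particles(grad_V, grad_W_x, N=1000, d=1, dt=1e-3, steps=5000, seed=0):
    # Particle approximation of dX_t = sqrt(2) dB_t
    #   - grad{ V + W (*) L_{X_t} }(X_t) dt,
    # with L_{X_t} replaced by the empirical measure of N particles.
    # grad_V: (N, d) -> (N, d); grad_W_x(x, y): gradient of W(., y) at x,
    # assumed to broadcast over arrays of shape (N, 1, d) and (1, N, d).
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((N, d))
    for _ in range(steps):
        inter = np.mean(grad_W_x(X[:, None, :], X[None, :, :]), axis=1)  # (N, d)
        drift = -(grad_V(X) + inter)
        X = X + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal((N, d))
    return X
\end{verbatim}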
To extend the study of \cite{CMV, GLW}, in this paper we investigate the exponential convergence in entropy for the following McKean-Vlasov SDE on $\R^d$:
\beq\label{E1} \text{\rm{d}} X_t= \sigma(X_t)\text{\rm{d}} W_t+ b(X_t,\scr L_{X_t})\text{\rm{d}} t,\end{equation}
where $W_t$ is the $m$-dimensional Brownian motion on a complete filtration probability space $(\OO, \{\F_t\}_{t\ge 0}, \P)$,
$$\sigma: \R^d\rightarrow \R^{d}\otimes \R^m,\ \ b:\R^d\times \scr P_2\rightarrow \R^d$$ are measurable, and $\scr P_2$ is the class of probability measures on $\R^d$ with $\mu(|\cdot|^2)<\infty$.
Since the $``$mean field entropy" associated with the SDE \eqref{E1} is not available, and is less explicit even if it exists as in \eqref{ETP} for the special model \eqref{E0'}, we intend to study the exponential convergence of $\scr L_{X_t}$ in the classical relative entropy $ {\rm Ent}$ and the Wasserstein distance $\W_2$.
Recall that for any $p\ge 1$, the $L^p$-Wasserstein distance is defined by
$$\W_p(\mu_1,\mu_2):= \inf_{\pi\in \scr C(\mu_1,\mu_2)} \bigg(\int_{\R^d\times \R^d} |x-y|^p\pi(\text{\rm{d}} x,\text{\rm{d}} y)\bigg)^{\ff 1 p},\ \ \mu_1,\mu_2\in \scr P_p,$$
where $\scr C(\mu_1,\mu_2)$ is the set of all couplings of $\mu_1$ and $\mu_2$.
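As a side remark, for two empirical measures with the same number of atoms the infimum above is attained at a permutation, so $\W_2$ can be computed by an optimal assignment; a minimal Python sketch (illustrative only):
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def w2_empirical(xs, ys):
    # L^2-Wasserstein distance between two empirical measures with the same
    # number of atoms: the optimal coupling reduces to an optimal assignment.
    cost = np.sum((xs[:, None, :] - ys[None, :, :]) ** 2, axis=-1)  # squared distances
    row, col = linear_sum_assignment(cost)
    return np.sqrt(cost[row, col].mean())
\end{verbatim}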
Unlike in \cite{CMV, GLW} where the mean field particle systems are used to estimate the mean field entropy, in this paper we use
the log-Harnack inequality introduced in \cite{W10, RW10} and the Talagrand inequality developed in \cite{TAL, BGL, OV}, see Theorem \ref{T0} below. Thus, the key point of the present study is to establish these two types of inequalities for McKean-Vlasov SDEs.
Since the log-Harnack inequality is not yet available when $\sigma$ depends on the distribution, in \eqref{E1} we only consider distribution-free $\sigma$. In particular, for a class of granular media type equations generalizing the framework of \cite{CMV,GLW}, we prove
$$ \W_2(\mu_t,\mu_\infty)^2 +\Ent(\mu_t|\mu_\infty)\le c \text{\rm{e}}^{-\ll t} \min\big\{\W_2(\mu_0, \mu_\infty)^2,\Ent(\mu_0|\mu_\infty)\big\},\ \ t\ge 1$$
for $\mu_t(\text{\rm{d}} x):= \rr_t(x)\text{\rm{d}} x$ and some constants $c,\ll>0$, see Theorem \ref{C2.0} below for details.
\
The remainder of the paper is organized as follows. In Section 2, we state our main results for non-degenerate and degenerate models respectively, where the first case includes the granular media type equations \eqref{E00} or the corresponding Mckean-Vlasov SDE \eqref{E0'} as a special example, and the second case deals with the McKean-Vlasov stochastic Hamiltonian system referring to the degenerate granular media equation.
The main results are proved in Sections 3-5 respectively, where Section 4 establishes the log-Harnack inequality for McKean-Vlasov stochastic Hamiltonian systems.
\section{Main results and examples }
We first present a criterion on the exponential convergence for McKean-Vlasov SDEs by using the log-Harnack and Talagrand inequalities, and prove \eqref{WU} for the granular media type equations \eqref{E01} below which generalizes the framework of \cite{GLW}. Then we state our results for solutions of SDE \eqref{E1} with non-degenerate and degenerate noises respectively.
\subsection{A criterion with application to Granular media type equations}
In general, we consider the following McKean-Vlasov SDE:
\beq\label{EP} \text{\rm{d}} X_t= \sigma(X_t,\scr L_{X_t})\text{\rm{d}} W_t+ b(X_t,\scr L_{X_t})\text{\rm{d}} t,\end{equation}
where $W_t$ is the $m$-dimensional Brownian motion and
$$\sigma: \R^d\times \scr P_2\rightarrow \R^{d}\otimes \R^m,\ \ b:\R^d\times\scr P_2 \rightarrow \R^d$$ are measurable. We assume that this SDE is strongly and weakly well-posed
for square integrable initial values. It is in particular the case if $b$ is continuous on $\R^d\times\scr P_2$ and there exists a constant $K>0$ such that
\beq\label{KK}\begin{split} &\<b(x,\mu)-b(y,\nu), x-y\>^+ +\|\sigma(x,\mu)-\sigma(y,\nu)\|^2\le K \big\{|x-y|^2 +\W_2(\mu,\nu)^2\big\},\\
& |b(0,\mu)|\le c\Big(1+\ss{\mu(|\cdot|^2)}\Big),\ \ x,y\in \R^d, \mu,\nu\in \scr P_2,\end{split}\end{equation}
see for instance \cite{W18}. See also \cite{HW20,Zhao20} and references therein for the well-posedness of McKean-Vlasov SDEs with singular coefficients. For any $\mu\in \scr P_2$, let $P_t^*\mu=\scr L_{X_t}$ for the solution $X_t$ with initial distribution $\scr L_{X_0}=\mu$. Let
$$P_t f(\mu)= \E[f(X_t)]=\int_{\R^d}f\text{\rm{d}} P_t^*\mu,\ \ t\ge 0, f\in \B_b(\R^d).$$
We have the following equivalence on the exponential convergence of $P_t^*\mu$ in $\Ent$ and $\W_2$.
\begin{thm}\label{T0} Assume that $P_t^*$ has a unique invariant probability measure $\mu_\infty\in \scr P_2$ such that for some constants $t_0, c_0,C>0$ we have the log-Harnack inequality
\beq\label{LHI} P_{t_0} (\log f)(\nu)\le \log P_{t_0} f (\mu)+ c_0 \W_2(\mu,\nu)^2,\ \ \mu,\nu\in \scr P_2,\end{equation} and the Talagrand inequality
\beq\label{TLI} \W_2(\mu,\mu_\infty)^2\le C \Ent(\mu|\mu_\infty),\ \ \mu\in \scr P_2,\ f\in \B_b(\R^d).\end{equation}
\begin{enumerate} \item[$(1)$] If there exist constants $c_1,\ll, t_1\ge 0$ such that
\beq\label{EW} \W_2(P_t^* \mu,\mu_\infty)^2
\le c_1\text{\rm{e}}^{-\ll t} \W_2(\mu,\mu_\infty)^2,\ \ t\ge t_1, \mu\in \scr P_2, \end{equation}
then
\beq\label{ET}\begin{split} & \max\big\{c_0^{-1} \Ent(P_t^*\mu|\mu_\infty), \W_2(P_t^*\mu, \mu_\infty)^2\big\} \\
&\le c_1\text{\rm{e}}^{-\ll (t-t_0)} \min\big\{\W_2(\mu,\mu_\infty)^2, C\Ent(\mu|\mu_\infty)\big\},\ \ t\ge t_0+t_1, \mu\in \scr P_2.\end{split}
\end{equation}
\item[$(2)$] If for some constants $\ll, c_2,t_2>0$
\beq\label{EW'} \Ent (P_t^* \mu|\mu_\infty)\le c_2\text{\rm{e}}^{-\ll t} \Ent(\mu|\mu_\infty),\ \ t\ge t_2, \mu\in \scr P_2, \end{equation}
then
\beq\label{ET'}\begin{split} &\max\big\{\Ent(P_t^*\mu|\mu_\infty), C^{-1} \W_2(P_t^* \mu,\mu_\infty)^2\big\}\\
&\le c_2\text{\rm{e}}^{-\ll (t-t_0)} \min\big\{c_0\W_2(\mu,\mu_\infty)^2, \Ent(\mu|\mu_\infty)\big\},\ \ t\ge t_0+t_2, \mu\in \scr P_2.\end{split} \end{equation}
\end{enumerate}
\end{thm}
When $\sigma\sigma^*$ is invertible and does not depend on the distribution, the log-Harnack inequality \eqref{LHI} has been established in \cite{W18}.
The Talagrand inequality was first found in \cite{TAL} for $\mu_\infty$ being the Gaussian measure, and extended in \cite{BGL} to
$\mu_\infty$ satisfying the log-Sobolev inequality
\beq\label{LS0} \mu_\infty(f^2\log f^2)\le C \mu_\infty(|\nabla f|^2),\ \ f\in C_b^1(\R^d), \mu_\infty(f^2)=1,\end{equation}
see \cite{OV} for an earlier result under a curvature condition, and see \cite{W04} for further extensions.
To illustrate this result, we consider the granular media type equation for probability density functions $(\rr_t)_{t\ge 0}$ on $\R^d$:
\beq\label{E01} \pp_t \rr_t= {\rm div} \big\{ a\nabla\rr_t + \rr_t a\nabla (V + W\circledast\rr_t)\big\}, \end{equation}
where $W\circledast \rr_t$ is in \eqref{AOO}, and the functions
$$ a: \R^d\rightarrow \R^d\otimes \R^d,\ \ V:\R^d\rightarrow\R,\ \ W:\R^d\times\R^d\rightarrow \R$$ satisfy the following assumptions.
\begin{enumerate} \item[$(H_1)$] $a:=(a_{ij})_{1\le i,j\le d} \in C_b^2(\R^d\rightarrow \R^d\otimes\R^d)$, and
$a\ge \ll_aI_{d}$ for some constant $\ll_a>0$.
\item[$(H_2)$] $V\in C^2(\R^d), W\in C^2(\R^d\times\R^d)$ with $W(x,y)=W(y,x)$, and there exist constants $\kk_0\in\R$ and $\kk_1,\kk_2,\kk_0'>0$ such that
\beq\label{ICC1} \Hess_V\ge \kk_0I_{d},\ \ \kk_0' I_{2d}\ge \Hess_{W}\ge \kk_0 I_{2d},\end{equation}
\beq\label{ICC2} \<x, \nabla V(x)\> \ge \kk_1|x|^2-\kk_2,\ \ x\in \R^d.\end{equation}
Moreover, for any $\ll>0$,
\beq\label{ICC3} \int_{\R^d\times\R^d} \text{\rm{e}}^{-V(x)-V(y)-\ll W(x,y)} \text{\rm{d}} x\text{\rm{d}} y<\infty.\end{equation}
\item[$(H_3)$] There exists a function $b_0\in L^1_{loc}([0,\infty))$ with
$$r_0:= \ff {\|\Hess_W\|_\infty} 4 \int_0^\infty \text{\rm{e}}^{\ff 1 4 \int_0^t b_0(s)\text{\rm{d}} s } \text{\rm{d}} t < 1$$
such that for any $x,y,z\in\R^d$,
\begin{align*} \big\<y-x, \nabla V(x)-\nabla V(y) +\nabla W(\cdot,z)(x)-\nabla W(\cdot, z)(y)\big\>
\le |x-y| b_0(|x-y|).\end{align*} \end{enumerate}
\
For any $N\ge 2$, consider the Hamiltonian for the system of $N$ particles:
$$H_N(x_1,\cdots, x_N)=\sum_{i=1}^N V(x_i)+ \ff 1 {N-1} \sum_{1\le i<j\le N}^N W(x_i, x_j),$$
and the corresponding finite-dimensional Gibbs measure
$$\mu^{(N)}(\text{\rm{d}} x_1,\cdots, x_N)= \ff 1 {Z_N} \text{\rm{e}}^{-H_N(x_1,\cdots, x_N)}\text{\rm{d}} x_1\cdots\text{\rm{d}} x_N,$$
where $Z_N:=\int_{\R^{dN}} \text{\rm{e}}^{-H_N(x)}\text{\rm{d}} x <\infty$ due to \eqref{ICC3} in $(H_2)$.
For any $1\le i\le N$, the conditional marginal of $\mu^{(N)}$ given $z\in \R^{d(N-1)}$ is given by
\begin{align*} &\mu^{(N)}_z(\text{\rm{d}} x) := \ff 1 {Z_N(z)} \text{\rm{e}}^{-H_N(x|z)} \text{\rm{d}} x, \ \ Z_N(z):= \int_{\R^d} \text{\rm{e}}^{-H_N(x|z)} \text{\rm{d}} x,\\
&H_N(x|z):= V(x)-\log \int_{\R^{d(N-1)}} \text{\rm{e}}^{-\sum_{i=1}^{N-1} \{V(z_i) +\ff 1 {N-1} W(x, z_i)\}}\text{\rm{d}} z_1\cdots \text{\rm{d}} z_{N-1}.\end{align*}
We have the following result.
\begin{thm}\label{C2.0} Assume $(H_1)$-$(H_3)$. If there is a constant $\bb>0$ such that the uniform log-Sobolev inequality
\beq\label{LSS} \mu^{(N)}_z(f^2\log f^2)\le \ff 1 \bb \mu^{(N)}_z(|\nabla f|^2),\ \ f\in C_b^1(\R^d), \mu^{(N)}_z(f^2)=1, N\ge 2, z\in \R^{d(N-1)}\end{equation} holds,
then there exists a unique $\mu_\infty \in \scr P_2$ and a constant $c>0$ such that
\beq\label{WU} \W_2(\mu_t,\mu_\infty)^2 +\Ent(\mu_t|\mu_\infty)\le c \text{\rm{e}}^{-\ll_a\bb (1-r_0)^2 t} \min\big\{\W_2(\mu_0, \mu_\infty)^2, \Ent(\mu_0|\mu_\infty)\big\},\ \ t\ge 1\end{equation}
holds for any probability density functions $(\rr_t)_{t\ge 0}$ solving \eqref{E01}, where $\mu_t(\text{\rm{d}} x) := \rr_t(x)\text{\rm{d}} x, t\ge 0.$ \end{thm}
This result allows $V$ and $W$ to be non-convex. For instance, let $V=V_1+V_2\in C^2(\R^d)$ such that
$\| V_1\|_\infty\land\|\nabla V_1\|_\infty<\infty$, $\Hess_{V_2}\ge \ll I_{d}$ for some $\ll>0$, and $W\in C^2(\R^d\times\R^d)$ with $\|W\|_\infty\land \|\nabla W\|_\infty<\infty$. Then the uniform log-Sobolev inequality
\eqref{LSS} holds for some constant $\bb>0$. Indeed, by the Bakry-Emery criterion, $\mu_2(\text{\rm{d}} x):=\ff 1 {\int_{\R^d} \text{\rm{e}}^{-V_2(x)}\text{\rm{d}} x} \text{\rm{e}}^{-V_2(x)}\text{\rm{d}} x$ satisfies the log-Sobolev inequality
$$\mu_2(f^2\log f^2)\le \ff 2 \ll \mu_2(|\nabla f|^2),\ \ f\in C_b^1(\R^d), \mu_2(f^2)=1.$$
Then \eqref{LSS} with some constant $\bb>0$ follows by the stability of the log-Sobolev inequality under bounded perturbations (see \cite{DS,CW97}) as well as Lipschitz perturbations (see \cite{Aida}) for the potential $V_2$.
Moreover, assumptions $(H_1)$-$(H_3)$ hold provided $\|\Hess_W\|_\infty$ is small enough such that $r_0<1$. So, Theorem \ref{C2.0} applies.
See \cite{GLW} for more concrete examples satisfying $(H_1)$-$(H_3)$ and \eqref{LSS}.
\subsection{The non-degenerate case}
In this part, we make the following assumptions:
\begin{enumerate} \item[$(A_1)$] $b$ is continuous on $\R^d\times\scr P_2$ and there exists a constant $K>0$ such that $\eqref{KK}$ holds.
\item[$(A_2)$] $\sigma\sigma^*$ is invertible with $\ll:=\|(\sigma\sigma^*)^{-1}\|_\infty<\infty$, and there exist constants $K_2>K_1\ge 0$ such that for any $x,y\in \R^d$ and $\mu,\nu\in \scr P_2$,
$$ \|\sigma(x)-\sigma(y)\|_{HS}^2 + 2\<b(x,\mu)-b(y,\nu), x-y\>\le K_1 \W_2(\mu,\nu)^2 - K_2 |x-y|^2.$$
\end{enumerate}
According to \cite[Theorem 2.1]{W18}, if $(A_1)$ holds and $b(x,\mu)$ is continuous on $\R^d\times \scr P_2$, then for any initial value $X_0\in L^2(\OO\rightarrow\R^d,\F_0,\P)$, \eqref{E1} has a unique solution which satisfies
$$\E\Big[\sup_{t\in [0,T]} |X_t|^2 \Big]<\infty,\ \ T\in (0,\infty).$$
Let $P_t^*\mu=\scr L_{X_t}$ for the solution with $\scr L_{X_0}=\mu.$ We have the following result.
\begin{thm}\label{T1} Assume $(A_1)$ and $(A_2)$. Then $P_t^*$ has a unique invariant probability measure $\mu_\infty$ such that
\beq\label{EX1} \max\big\{\W_2(P_t^*\mu,\mu_\infty)^2, \Ent(P_t^*\mu|\mu_\infty)\big\}\le \ff{c_1}{t\land 1} \text{\rm{e}}^{-(K_2-K_1)t} \W_2(\mu,\mu_\infty)^2,\ \ t>0, \mu\in \scr P_2\end{equation} holds for some constant $c_1>0$.
If moreover $\sigma\in C_b^2(\R^d\rightarrow \R^d\otimes \R^m)$, then there exists a constant $c_2>0$ such that for any $\mu\in \scr P_2, t\ge 1$,
\beq\label{EX2} \max\big\{\W_2(P_t^*\mu,\mu_\infty)^2, \Ent(P_t^*\mu|\mu_\infty)\big\} \le c_2 \text{\rm{e}}^{-(K_2-K_1)t} \min\big\{\W_2(\mu,\mu_\infty)^2, \Ent(\mu|\mu_\infty)\big\}. \end{equation}
\end{thm}
\
To illustrate this result, we consider the granular media equation \eqref{E00}, for which we take
\beq\label{SBB1} \sigma=\ss 2 I_{d},~~~~~~ b(x,\mu)= -\nabla \big\{V+ W\circledast\mu\big\}(x),\ \ (x,\mu)\in \R^d\times \scr P_2.\end{equation}
The following example is not included by Theorem \ref{C2.0} since the function $W$ may be non-symmetric.
\paragraph{Example 2.1 (Granular media equation).} Consider \eqref{E00} with $V\in C^2(\R^d)$ and $W\in C^2(\R^d\times\R^d)$ satisfying
\beq\label{CC1} \Hess_V\ge \ll I_{d},\ \ \Hess_W \ge \delta_1 I_{d},\ \ \|\Hess_W\|\le \delta_2\end{equation}
for some constants $\ll, \delta_2>0$ and $\delta_1\in \R$.
If $\ll+\delta_1-\delta_2>0$, then there exists a unique $\mu_\infty\in \scr P_2$ and a
constant $c>0$ such that for any probability density functions $(\rr_t)_{t\ge 0}$ solving \eqref{E00}, $\mu_t(\text{\rm{d}} x):=\rr_t(x)\text{\rm{d}} x$ satisfies
\beq\label{ERR0} \max\big\{\W_2(\mu_t,\mu_\infty), \Ent(\mu_t|\mu_\infty)\big\} \le c\text{\rm{e}}^{-(\ll+\delta_1-\delta_2)t} \min\big\{\W_2(\mu_0,\mu_\infty), \Ent(\mu_0|\mu_\infty)\big\},\ \ t\ge 1.\end{equation}
\begin{proof} Let $\sigma$ and $b$ be in \eqref{SBB1}. Then \eqref{CC1} implies
$(A_1)$ and
$$\<b(x,\mu)-b(y,\nu), x-y\> \le -(\ll+\delta_1)|x-y|^2 + \delta_2 |x-y|\W_1(\mu,\nu),$$
where we have used the formula
$$\W_1(\mu,\nu)= \sup\{\mu(f)-\nu(f):\ \|\nabla f\|_\infty\le 1\}.$$ So, by taking $\aa=\ff{\delta_2}2$ and noting that $\W_1\le \W_2$, we obtain
\begin{align*} &\<b(x,\mu)-b(y,\nu), x-y\> \le -\big(\ll+\delta_1-\aa \big)|x-y|^2 + \ff{\delta_2^2}{4\aa} \W_1(\mu,\nu)^2\\
&\le - \Big(\ll+\delta_1-\ff {\delta_2}2\Big) |x-y|^2+ \ff{\delta_2} 2 \W_2(\mu,\nu)^2,\ \ x,y\in\R^d,\mu,\nu\in \scr P_2.\end{align*}
Therefore, if \eqref{CC1} holds for $\ll+\delta_1-\delta_2>0$, Theorem \ref{T1} implies that $P_t^*$ has a unique invariant probability measure $\mu_\infty\in\scr P_2$, such that
\eqref{ERR0} holds for $\mu_0\in \scr P_2$.
When $\mu_0\notin \scr P_2$, we have $\W_2(\mu_0,\mu_\infty)^2=\infty$ since $\mu_\infty\in \scr P_2$.
Combining this with the Talagrand inequality
$$\W_2(\mu_0,\mu_\infty)^2\le C \Ent(\mu_0|\mu_\infty)$$
for some constant $C>0$, see the proof of Theorem \ref{T1}, we have $\Ent(\mu_0|\mu_\infty)=\infty$ for $\mu_0\notin \scr P_2$, so that \eqref{ERR0} holds for all $\mu_0\in \scr P$.\end{proof}
\subsection{The degenerate case}
When $\R^k$ with some $k\in \mathbb N$ is considered, to emphasize the space we use $\scr P(\R^k)$ ($\scr P_2(\R^k)$) to denote the class of probability measures (with finite second moment) on $\R^k$.
Consider the following McKean-Vlasov stochastic Hamiltonian system for $(X_t,Y_t)\in \R^{d_1+d_2}:= \R^{d_1}\times \R^{d_2}:$
\beq\label{E21} \begin{cases} \text{\rm{d}} X_t= BY_t\text{\rm{d}} t,\\
\text{\rm{d}} Y_t= \ss 2 \text{\rm{d}} W_t - \Big\{ B^* \nabla V(\cdot,\scr L_{(X_t,Y_t)})(X_t) + \bb B^* (BB^*)^{-1}X_t+Y_t\Big\}\text{\rm{d}} t,\end{cases}\end{equation}
where $\bb>0$ is a constant, $B$ is a $d_1\times d_2$-matrix such that $BB^*$ is invertible, and $$V: \R^{d_1}\times \scr P_2(\R^{d_1+d_2})\rightarrow}\def\l{\ell}\def\iint{\int}\def\gg{\gamma \R^{d_2}$$ is measurable.
Let
\begin{align*}& {\psi_B}((x,y),(\bar x,\bar y)):=\ss{|x-\bar x|^2 +|B(y-\bar y)|^2},\ \ (x,y), (\bar x, \bar y) \in \R^{d_1+d_2},\\
&\W_2^{\psi_B}(\mu,\nu):=\inf_{\pi\in \scr C(\mu,\nu)} \bigg\{ \int_{\R^{d_1+d_2}\times\R^{d_1+d_2}} {\psi_B}^2 \text{\rm{d}}\pi\bigg\}^{\ff 1 2},\ \ \mu,\nu\in\scr P_2(\R^{d_1+d_2}).\end{align*}
We assume
\begin{enumerate} \item[{\bf (C)}] $V(x,\mu)$ is differentiable in $x$ such that $\nabla V(\cdot,\mu)(x)$ is Lipschitz continuous in $(x,\mu)\in \R^{d_1}\times \scr P_2(\R^{d_1+d_2}).$
Moreover, there exist constants $\theta_1, \theta_2\in \R$ with
\beq\label{AH1} \theta_1+\theta_2<\bb,\end{equation} such that for any $(x,y), (x',y')\in \R^{d_1+d_2}$ and $\mu,\mu'\in \scr P_2(\R^{d_1+d_2})$,
\beq\label{AH2}\begin{split} & \big\< BB^* \{\nabla V(\cdot,\mu)(x)-\nabla V(\cdot,\mu')(x')\}, x-x'+(1+\bb)B(y-y')\big\>\\
&\ge -\theta_1{\psi_B} ((x,y), (x',y'))^2 -\theta_2 \W_2^{\psi_B} (\mu,\mu')^2.\end{split}\end{equation}
\end{enumerate}
Obviously, {\bf (C)} implies $(A_1)$ for $d=m=d_1+d_2$, $\sigma= {\rm diag}\{0,\ss 2 I_{d_2}\}$, and
$$b((x,y),\mu)= \big(By, - B^* \nabla V(\cdot,\mu)(x) - \bb B^* (BB^*)^{-1}x-y\big).$$ So, according to \cite{W18}, \eqref{E21} is well-posed for any initial value in $L^2(\OO\rightarrow \R^{d_1+d_2}, \F_0,\P)$.
Let $P_t^*\mu=\scr L_{(X_t,Y_t)}$ for the solution with initial distribution $\mu\in \scr P_2(\R^{d_1+d_2}).$
In this case, \eqref{E21} becomes
$$\begin{cases} \text{\rm{d}} X_t= BY_t\text{\rm{d}} t,\\
\text{\rm{d}} Y_t= \ss 2 \text{\rm{d}} W_t +Z_t(X_t,Y_t) \text{\rm{d}} t,\end{cases}$$
where $Z_t(x,y):= - B^* \{\nabla V(\cdot, P_t^*\mu)\}(x) - \bb B^* (BB^*)^{-1}x-y.$ According to \cite[Theorems 2.4 and 3.1]{W14b}, when $\Hess_V(\cdot, P_t^*\mu)$ is bounded,
$$\rr_t(z):=\ff{(P_t^*\mu)(\text{\rm{d}} z) }{\text{\rm{d}} z} =\ff{(\scr L_{(X_t,Y_t)})(\text{\rm{d}} z)}{\text{\rm{d}} z} $$
exists and is differentiable in $z\in \R^{d_1+d_2}$. Moreover, since {\bf (C)} implies that
the class $$\{\pp_{y_j}, [\pp_{ y_j}, (By)_i\pp_{x_i}]: 1\le i\le d_1, 1\le j\le d_2\}$$ spans the tangent space at any point (i.e. the H\"ormander condition of rank $1$ holds),
according to the H\"ormander theorem, $\rr_t\in C^\infty(\R^{d_1+d_2})$ for $t>0$ provided $Z_t\in C^\infty(\R^{d_1+d_2})$ for $t\ge 0$.
\begin{thm}\label{T2} Assume {\bf (C)}. Then $P_t^*$ has a unique invariant probability measure $\mu_\infty$ such that for any $t>0$ and $\mu\in \scr P_2(\R^{d_1+d_2}),$
\beq\label{KK0} \max\big\{\W_2(P_t^*\mu,\mu_\infty)^2, \Ent(P_t^*\mu|\mu_\infty) \big\}\le \ff{c\text{\rm{e}}^{-2\kk t}}{(1\land t)^{3}}
\min\big\{\Ent(\mu|\mu_\infty), \W_2(\mu,\mu_\infty)^2\big\}\end{equation}
holds for some constant $c>0$ and
\beq\label{KK} \kk:=\ff{ 2(\bb-\theta_1-\theta_2)}{2+2\bb+\bb^2+\ss{\bb^4+4}}>0.\end{equation}
\end{thm}
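To illustrate the rate in \eqref{KK}: for instance, when $\bb=1$ we have $2+2\bb+\bb^2=5$ and $\ss{\bb^4+4}=\ss 5$, so that $\kk=\ff{2(1-\theta_1-\theta_2)}{5+\ss 5}$; the rate degenerates to $0$ as $\theta_1+\theta_2\uparrow\bb$, in accordance with \eqref{AH1}.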
\paragraph{Example 2.2 (Degenerate granular media equation).} Let $ m\in \mathbb N$ and $W\in C^\infty(\R^m\times\R^{2m}).$ Consider the following PDE for probability density functions $(\rr_t)_{t\ge 0}$ on $\R^{2m}$:
\beq\label{*EN2} \pp_t\rr_t(x,y)= \DD_y \rr_t(x,y) -\<\nabla_x\rr_t(x,y), y\>+ \<\nabla_y \rr_t(x,y), \nabla_x (W\circledast\rr_t)(x) + \bb x+y\>,\end{equation}
where $\bb>0$ is a constant, $\DD_y, \nabla_x,\nabla_y$ stand for the Laplacian in $y$ and the gradient operators in $x,y$ respectively, and
$$(W\circledast \rr_t)(x):=\int_{\R^{2m}} W(x,z)\rr_t(z) \text{\rm{d}} z,\ \ x\in \R^m.$$ If there exists a constant $\theta\in \big(0, \ff{2\bb}{1+3\ss{2+2\bb+\bb^2}}\big)$ such that
\beq\label{*EN3} |\nabla W(\cdot,z)(x)-\nabla W(\cdot,\bar z)(\bar x) |\le \theta \big(|x-\bar x|+|z-\bar z|\big),\ \ x,\bar x\in \R^m, z,\bar z\in \R^{2m},\end{equation}
then there exists a unique probability measure $\mu_\infty\in\scr P_2( \R^{2m})$ and a constant $c>0$ such that for any probability density functions $(\rr_t)_{t\ge 0}$ solving \eqref{*EN2}, $\mu_t(\text{\rm{d}} x):=\rr_t(x)\text{\rm{d}} x$ satisfies
\beq\label{ERR} \max\big\{\W_2(\mu_t,\mu_\infty)^2, \Ent(\mu_t|\mu_\infty) \big\} \le c\text{\rm{e}}^{-\kk t} \min\big\{\W_2(\mu_0,\mu_\infty)^2, \Ent(\mu_0|\mu_\infty)\big\},\ \ t\ge 1\end{equation} holds for
$\kk= \ff{2\bb- \theta\big(1+ 3\ss{2+2\bb+\bb^2}\big)}{2+2\bb+\bb^2+\ss{\bb^4+4}}>0.$
\begin{proof}
Let $d_1=d_2=m$ and $(X_t,Y_t)$ solve \eqref{E21} for
\beq\label{GPP} B:= I_{m},\ \ V(x, \mu):= \int_{\R^{2m}} W(x,z)\mu(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z).\end{equation}
We first observe that $\rr_t$ solves \eqref{*EN2} if and only if $\rr_t(z)=\ff{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D (P_t^*\mu)(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z)}{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z} $ for $\mu(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z)=\rr_0(z)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z$, where $P_t^*\mu:=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{(X_t, Y_t)}.$
Firstly, let $\rr_t(z) = \ff{\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{(X_t, Y_t)}(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z)} {\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z}$ which exists and is smooth as explained before Theorem \ref{T2}.
By It\^o's formula and the integration by parts formula, for any $f\in C_0^2(\R^{2m})$ we have
\begin} \def\beq{\begin{equation}} \def\F{\scr F{align*}& \ff{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D}{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t} \int_{\R^{2m}} ( \rr_t f)(z)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z =\ff{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D }{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t} \E [f(X_t,Y_t)]\\
&= \int_{\R^{2m}}\rr_t(x,y)\big\{\DD_y f(x,y)+\<\nabla} \def\pp{\partial} \def\E{\mathbb E_x f(x,y), y\> -\<\nabla} \def\pp{\partial} \def\E{\mathbb E_y f(x,y), \nabla} \def\pp{\partial} \def\E{\mathbb E_x V(x, \rr_t(z)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z)+\bb x+y\>\big\}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y\\
&= \int_{\R^{2m}} f(x,y) \big\{\DD_y\rr_t (x,y)- \<\nabla} \def\pp{\partial} \def\E{\mathbb E_x \rr_t(x,y), y\> +\<\nabla} \def\pp{\partial} \def\E{\mathbb E_y \rr_t(x,y), \nabla} \def\pp{\partial} \def\E{\mathbb E_x \mu_t(W(x,\cdot)) +\bb x+y\>\big\}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y.
\end{align*} Then $\rr_t$ solves \eqref{*EN2}.
On the other hand, let $\rr_t$ solve \eqref{*EN2} with $\mu_0(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z):= \rr_0(z)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z\in \scr P_2( \R^{2m})$. By the integration by parts formula, $\mu_t(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z):=\rr_t(z)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z$ solves the non-linear Fokker-Planck equation
$$\pp_t\mu_t=L_{\mu_t}^* \mu_t$$ in the sense that for any $f\in C_0^\infty(\R^{d_1+d_2})$ we have
$$\mu_t(f)= \mu_0(f)+ \int_0^t \mu_s(L_{\mu_s}f)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s,\ \ t\ge 0,$$
where $L_\mu:= \DD_y +y\cdot\nabla_x -\{\nabla_x \mu(W(x,\cdot)) + \bb x+y\}\cdot\nabla_y.$ By the superposition principle, see \cite[Section 2]{BR},
we have $\mu_t=P_t^*\mu$.
Now, as explained in the proof of Example 2.1, by Theorem \ref{T2} we only need to verify {\bf (C)} for $B, V$ in \eqref{GPP} and
\beq\label{TTH} \theta_1= \theta\Big(\ff 1 2 +\ss{2+2\bb+\bb^2}\Big),\ \ \theta_2= \ff \theta 2 \ss{2+2\bb+\bb^2},\end{equation}
so that the desired assertion holds for
$$ \kk:=\ff{ 2(\bb-\theta_1-\theta_2)}{2+2\bb+\bb^2+\ss{\bb^4+4}}= \ff{2\bb- \theta(1+ 3\ss{2+2\bb+\bb^2})}{2+2\bb+\bb^2+\ss{\bb^4+4}}.$$
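Note that, by \eqref{TTH}, $\theta_1+\theta_2=\ff \theta 2 \big(1+3\ss{2+2\bb+\bb^2}\big)$, so the assumed range $\theta\in\big(0,\ff{2\bb}{1+3\ss{2+2\bb+\bb^2}}\big)$ is exactly condition \eqref{AH1}, i.e. $\theta_1+\theta_2<\bb$; in particular, $\kk>0$.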
By \eqref{*EN3} and $V(x,\mu):=\mu(W(x,\cdot))$, for any constants $\aa_1,\aa_2,\aa_3>0$ we have
\begin} \def\beq{\begin{equation}} \def\F{\scr F{align*} I &:= \big\<\nabla} \def\pp{\partial} \def\E{\mathbb E V(\cdot,\mu)(x)- \nabla} \def\pp{\partial} \def\E{\mathbb E V(\cdot,\bar\mu)(\bar x), x-\bar x+(1+\bb)(y-\bar y)\big\> \\
&= \int_{\R^{2m}} \big\< \nabla} \def\pp{\partial} \def\E{\mathbb E W(\cdot, z)(x)- \nabla} \def\pp{\partial} \def\E{\mathbb E W(\cdot, z)(\bar x), x-\bar x +(1+\bb)(y-\bar y)\big\>\mu(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D z) \\
&\qquad +\big\<\mu(\nabla} \def\pp{\partial} \def\E{\mathbb E_{\bar x}W(\bar x,\cdot))-\bar\mu(\nabla} \def\pp{\partial} \def\E{\mathbb E_{\bar x} W(\bar x,\cdot)), x-\bar x +(1+\bb)(y-\bar y)\big\>\\
&\ge -\theta\big\{ |x-\bar x|+ \W_1(\mu,\bar \mu)\big\} \cdot\big(|x-\bar x|+(1+\bb)|y-\bar y|\big)\\
&\ge - \theta (\aa_2+\aa_3) \W_2(\mu,\bar\mu)^2- \theta\Big\{\Big(1+\aa_1+\ff 1 {4\aa_2}\Big)|x-\bar x|^2 +(1+\bb)^2 \Big(\ff 1 {4 \aa_1}+\ff 1 {4\aa_3}\Big)|y-\bar y|^2\Big\}.\end{align*}
Take
$$\aa_1= \ff{\ss{2+2\bb+\bb^2}-1} 2,\ \ \aa_2= \ff 1 {2\ss{2+2\bb+\bb^2 }},\ \ \aa_3=\ff {(1+\bb)^2}{2\ss{2+2\bb+\bb^2 }}.$$
We have
\begin} \def\beq{\begin{equation}} \def\F{\scr F{align*}&1+\aa_1+\ff 1 {4\aa_2} =\ff 1 2+ \ss{2+2\bb+\bb^2},\\
&(1+\bb)^2\Big(\ff 1 {4 \aa_1}+\ff 1 {4\aa_3}\Big)=\ff 1 2+ \ss{2+2\bb+\bb^2},\\
& \ \aa_2+\aa_3= \ff 1 2 \ss{2+2\bb+\bb^2}.\end{align*}
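These identities follow by direct computation: writing $s:=2+2\bb+\bb^2$, we have $1+\aa_1=\ff{1+\ss s}{2}$ and $\ff 1 {4\aa_2}=\ff{\ss s}{2}$; moreover, $(1+\bb)^2=s-1=(\ss s-1)(\ss s+1)$ gives $\ff{(1+\bb)^2}{4\aa_1}=\ff{\ss s+1}{2}$ and $\ff{(1+\bb)^2}{4\aa_3}=\ff{\ss s}{2}$, while $\aa_2+\aa_3=\ff{1+(1+\bb)^2}{2\ss s}=\ff{\ss s}{2}$.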
Therefore,
$$I\ge -\ff \theta 2 \ss{2+2\bb+\bb^2} \W_2(\mu,\bar\mu)^2 -\theta\Big(\ff 1 2 + \ss{2+2\bb+\bb^2}\Big) |(x,y)-(\bar x,\bar y)|^2,$$
i.e. {\bf (C)} holds for $B$ and $V$ in \eqref{GPP} where $B=I_m$ implies that ${\psi_B}$ is the Euclidean distance on $\R^{2m}$, and for $\theta_1,\theta_2$ in \eqref{TTH}.
\end{proof}
\section{Proofs of Theorems \ref{T0} and \ref{C2.0}}
\begin{proof}[Proof of Theorem \ref{T0}] (1) Since
$$\Ent(P_{t_0}^*\nu|P_{t_0}^*\mu)=\sup_{f\ge 0, (P_{t_0}f)(\mu)=1} P_{t_0}(\log f)(\nu),$$
\eqref{LHI} implies
$$\Ent(P_{t_0}^*\nu|P_{t_0}^*\mu)\le c_0 \W_2(\mu,\nu)^2.$$
This together with $P_{t_0}^*\mu_\infty=\mu_\infty$ gives
\beq\label{EWW} \Ent(P_{t_0}^*\mu| \mu_\infty)\le c_0 \W_2(\mu,\mu_\infty)^2,\ \ \mu\in \scr P_2.\end{equation}
Combining \eqref{EW} with \eqref{TLI} and \eqref{EWW}, we obtain
$$\W_2(P_t^*\mu, \mu_\infty)^2 \le c_1\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-\ll t} \W_2(\mu,\mu_\infty)^2\le c_1\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-\ll t} \min\big\{\W_2(\mu,\mu_\infty)^2, C \Ent(\mu|\mu_\infty)\big\},\ \ t\ge t_1$$ and
\begin} \def\beq{\begin{equation}} \def\F{\scr F{align*} &\Ent(P_t^*\mu|\mu_\infty) =\Ent(P_{t_0}^*P_{t-t_0}^*\mu|\mu_\infty)\le c_0 \W_2(P_{t-t_0}^*\mu, \mu_\infty)^2
\le c_0c_1\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-\ll (t-t_0)} \W_2(\mu, \mu_\infty)^2\\
&= \{c_0c_1\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\ll t_0} \}\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-\ll t} \min\big\{\W_2(\mu, \mu_\infty)^2, C\Ent(\mu|\mu_\infty)\big\},\ \ t\ge t_0+t_1.\end{align*}Therefore,
\eqref{ET} holds.
(2) Similarly, if \eqref{EW'} holds, then \eqref{TLI} and \eqref{EWW} imply
\begin} \def\beq{\begin{equation}} \def\F{\scr F{align*} & \Ent(P_t^*\mu|\mu_\infty)\le c_2\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-\ll(t-t_0)} \min\big\{\Ent(P_{t_0}^*\mu|\mu_\infty), \Ent(\mu|\mu_\infty)\big\}\\
& \le c_2\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-\ll(t-t_0)} \min\big\{ c_0 \W_2(\mu,\mu_\infty)^2, \Ent(\mu|\mu_\infty)\big\},\ \ t\ge t_0+t_2\end{align*} and
\begin} \def\beq{\begin{equation}} \def\F{\scr F{align*} &C^{-1} \W_2(P_t^*\mu,\mu_\infty)^2\le \Ent(P_{t-t_0}^*P_{t_0}^*\mu|\mu_\infty)\\
&\le c_2 \min\big\{\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-\ll t} \Ent( \mu|\mu_\infty), \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-\ll (t-t_0)} \Ent( P_{t_0}^*\mu|\mu_\infty) \big\}\\
&\le c_2 \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-\ll (t-t_0)} \min\big\{ \Ent( \mu|\mu_\infty), c_0 \W_2(\mu, \mu_\infty)^2\big\},\ \ t\ge t_0+t_2.\end{align*}
Then \eqref{ET'} holds, and the proof is finished. \end{proof}
\begin{proof}[Proof of Theorem \ref{C2.0}] By \cite[Theorem 10]{GLW}, there exists a unique $\mu_\infty\in\scr P_2$ such that
\beq\label{MIN} \Ent^{V,W}(\mu_\infty)=0.\end{equation} Let $\mu_0=\rr_0\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x\in \scr P_2$. We first note that $\mu_t= P_t^*\mu_0:=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t}$ for $X_t$ solving the distribution dependent SDE \eqref{EP} with
\beq\label{SBB} \sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}(x,\mu)= \ss{2a(x)},\ \ b(x,\mu)= \sum_{j=1}^d\pp_j a_{\cdot,j}(x) -a\nabla} \def\pp{\partial} \def\E{\mathbb E\{V+W\circledast \mu\}(x),\ \ x\in \R^d, \mu\in \scr P_2.\end{equation} Obviously, for this choice of $(\sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s},b)$, assumptions $(H_1)$ and $(H_2)$ imply condition \eqref{KK} for some constant $K>0$,
so that the McKean-Vlasov SDE \eqref{EP} is weakly and strongly well-posed.
For any $N\ge 2$, let $\mu_t^{(N)}=\scr L_{X_t^{(N)}}$ for the mean field particle system $X^{(N)}_t=(X_t^{N,k})_{1\le k\le N}$:
\beq\label{MF} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X_t^{N,k} = \ss 2 \sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}(X_t^{N,k}) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D B_t^k +\Big\{\sum_{j=1}^d \pp_j a_{\cdot,j}(X_t^{N,k}) -a(X_t^{N,k}) \nabla} \def\pp{\partial} \def\E{\mathbb E_k H_N(X_t^{(N)} )\Big\} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t,\ \ t\ge 0,\end{equation}
where $\nabla_k$ denotes the gradient in the $k$-th component, and $\{X_0^{N,k}\}_{1\le k\le N}$ are i.i.d. with distribution $\mu_0\in \scr P_2$. According to the propagation of chaos, see \cite{SN},
$(H_1)$-$(H_3)$ imply
\beq\label{SNN} \lim_{N\rightarrow}\def\l{\ell}\def\iint{\int}\def\gg{\gamma \infty}\W_2(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t^{N,1}}, P_t^*\mu_0)=0.\end{equation}
Next, our conditions imply (25) and (26) in \cite{GLW} for $\rr_{LS}= \bb (1-r_0)^2.$ So, by \cite[Theorem 8(2)]{GLW},
we have the log-Sobolev inequality
\beq\label{LSN} \mu^{(N)}(f^2\log f^2)\le \ff 2 {\bb (1-r_0)^2}\mu^{(N)}(|\nabla} \def\pp{\partial} \def\E{\mathbb E f|^2),\ \ f\in C_b^1(\R^{dN}), \mu^{(N)}(f^2)=1.\end{equation}
By \cite{BGL}, this implies the Talagrand inequality
\beq\label{TLN} \W_2(\nu^{(N)}, \mu^{(N)})^2\le \ff 2 {\bb (1-r_0)^2} \Ent(\nu^{(N)}|\mu^{(N)}),\ \ t\ge 0, N\ge 2, \nu^{(N)}\in \scr P(\R^{dN}).\end{equation}
On the other hand, by It\^o's formula we see that the generator of the diffusion process $X_t^{(N)} $ is
$$L^{(N)} (x^{(N)} )= \sum_{k=1}^N\sum_{i,j=1}^d \Big\{a_{ij}(x^{N,k}) \pp_{x_i^{N,k}} \pp_{x_j^{N,k}} +\pp_j a_{ij} (x^{N,k})\pp_{x_i^{N,k}} -a_{ij}(x^{N,k}) \big[\pp_{x_j^{N,k}} H_N(x^{(N)})\big]\pp_{x_i^{N,k}}\Big\},$$
for $x^{(N)}=(x^{N,1},\cdots, x^{N,N})\in \R^{dN},$ where $x_i^{N,k}$ is the $i$-th component of $x^{N,k}\in \R^d$. Using the integration by parts formula, we see that this operator is symmetric in $L^2(\mu^{(N)})$:
$$\scr E} \def\W{\mathbb W^{(N)}(f,g):=\int_{\R^{dN}} \<a^{(N)}\nabla} \def\pp{\partial} \def\E{\mathbb E f,\nabla} \def\pp{\partial} \def\E{\mathbb E g\> \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D\mu^{(N)}= -\int_{\R^{dN}} (f L^{(N)}g) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D\mu^{(N)},\ \ f,g\in C_0^\infty(\R^{dN}), $$ where
$a^{(N)}(x^{(N)}) :={\rm diag}\{a(x^{N,1}),\cdots, a(x^{N,N})\}, x^{(N)}=(x^{N,1},\cdots, x^{N,N})\in \R^{dN}.$
So, the closure of the pre-Dirichlet form $(\scr E^{(N)}, C_0^\infty(\R^{dN}))$ in $L^2(\mu^{(N)})$ is the Dirichlet form for the Markov semigroup $P_t^{(N)}$ of $X_t^{(N)}$.
By $(H_1)$ we have $a^{(N)} \ge \ll_a I_{dN}$, so that \eqref{LSN} implies
$$\mu^{(N)}(f^2\log f^2)\le \ff 2 {\bb \ll_a (1-r_0)^2}\scr E} \def\W{\mathbb W^{(N)}(f,f),\ \ f\in C_b^1(\R^{dN}), \mu^{(N)}(f^2)=1.$$
It is well known that this log-Sobolev inequality implies the exponential convergence
\beq\label{EXN} \begin{split}&\Ent(\mu_t^{(N)}| \mu^{(N)}) \le \text{\rm{e}}^{-\ll_a\bb (1-r_0)^2 t} \Ent(\mu_0^{(N)}|\mu^{(N)})\\
&= \text{\rm{e}}^{-\ll_a\bb (1-r_0)^2 t} \Ent(\mu_0^{\otimes N}|\mu^{(N)}),\ \ t\ge 0, N\ge 2,\end{split} \end{equation}
see for instance \cite[Theorem 5.2.1]{BGL0}.
Moreover, since $\Hess_V$ and $\Hess_W$ are bounded from below, $(H_1)$ implies that the Bakry-Emery curvature
of the generator of $X_t^{(N)}$ is bounded by a constant. Then according to \cite{W10},
there exists a constant $K\ge 0$ such that the Markov semigroup $P_t^{(N)}$ of
$X_t^{(N)}$ satisfies the log-Harnack inequality
\beq\label{PLM} P_t^{(N)} \log f(x)\le \log P_t^{(N)}f(y)+ \ff{K\rr^{(N)}(x,y)^2}{2(1-\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-2Kt})},\ \ 0<f\in \B_b(\R^{dN}), t>0, x,y\in \R^{dN},\end{equation}
where $\rr^{(N)}$ is the intrinsic distance induced by the Dirichlet form $\scr E} \def\W{\mathbb W^{(N)}$. Since $a^{(N)}\ge \ll_a I_{dN}$, we have $\rr^{(N)}(x,y)^2\le \ll_a^{-1} |x-y|^2$.
So, \eqref{PLM} implies \eqref{LHI} for $P_t^{(N)}$ replacing $P_{t_0}$ and $c_0= \ff{K}{2\ll_a(1-\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-2Kt})}:$
$$P_t^{(N)} (\log f)(\nu) \le \log P_t^{(N)}f(\mu)+ \ff{K\,\W_2(\mu,\nu)^2}{2\ll_a(1-\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-2Kt})},\ \ 0<f\in \B_b(\R^{dN}), t>0, \mu,\nu \in \scr P_2( \R^{dN}).$$
Thus, by Theorem \ref{T0}, \eqref{EXN} implies
\beq\label{EXN2} \W_2(\mu_t^{(N)}, \mu^{(N)})^2\le \ff{c_1 \text{\rm{e}}^{-\ll_a\bb(1-r_0)^2 t} }{1\land t} \W_2(\mu_0^{\otimes N}, \mu^{(N)})^2,\ \ t>0, N\ge 2 \end{equation}
for some constant $c_1>0$. Moreover, \eqref{TLN}, \eqref{MIN} and \cite[Lemma 17]{GLW} yield
\beq\label{EXN3} \begin{split} \lim_{N\rightarrow\infty} \ff 1 N \W_2(\mu_\infty^{\otimes N}, \mu^{(N)})^2 &\le \limsup_{N\rightarrow\infty} \ff {2 }{\bb(1-r_0)^2 N} \Ent (\mu_\infty^{\otimes N}| \mu^{(N)})\\
&=\ff{2}{\bb (1-r_0)^2} \Ent^{V,W}(\mu_\infty) =0.\end{split} \end{equation}
Combining this with \eqref{EXN2} we derive
\beq\label{EXN5} \begin} \def\beq{\begin{equation}} \def\F{\scr F{split} &\limsup_{N\rightarrow}\def\l{\ell}\def\iint{\int}\def\gg{\gamma\infty} \ff 1 N \W_2(\mu_t^{(N)}, \mu_\infty^{\otimes N})^2= \limsup_{N\rightarrow}\def\l{\ell}\def\iint{\int}\def\gg{\gamma\infty} \ff 1 N \W_2(\mu_t^{(N)}, \mu^{(N)})^2\\
&\le \ff{ c_1\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-\ll_a\bb(1-r_0)^2 t} }{1\land t} \limsup_{N\rightarrow}\def\l{\ell}\def\iint{\int}\def\gg{\gamma\infty} \ff 1 N \W_2(\mu^{\otimes N}_0, \mu^{(N)})^2\\
&= \ff{ c_1 \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-\ll_a\bb(1-r_0)^2 t}}{1\land t} \limsup_{N\rightarrow}\def\l{\ell}\def\iint{\int}\def\gg{\gamma\infty} \ff 1 N \W_2(\mu^{\otimes N}_0, \mu_\infty^{\otimes N})^2
= \ff{ c_1 \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-\ll_a\bb(1-r_0)^2 t}}{1\land t} \W_2(\mu_0, \mu_\infty)^2\ ,\ \ t>0.\end{split}\end{equation}
Now, let $\xi=(\xi_i)_{1\le i\le N}$ and $\eta=(\eta_i)_{1\le i\le N}$ be random variables on $\R^{dN}$ such that
$\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_\xi= \mu_t^{(N)}, \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_\eta=\mu_\infty^{\otimes N}$ and
$$\sum_{i=1}^N \E |\xi_i-\eta_i|^2=\E|\xi-\eta|^2= \W_2(\mu_t^{(N)},\mu_\infty^{\otimes N})^2.$$
We have $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\xi_i}= \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t^{N,1}}, \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\eta_i}=\mu_\infty$ for any $1\le i\le N$, so that
\beq\label{EXX} N \W_2(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t^{N,1}},\mu_\infty)^2 \le \sum_{i=1}^N \E |\xi_i-\eta_i|^2= \W_2(\mu_t^{(N)},\mu_\infty^{\otimes N})^2.\end{equation}
Substituting this into \eqref{EXN5}, we arrive at
$$\limsup_{N\rightarrow}\def\l{\ell}\def\iint{\int}\def\gg{\gamma\infty} \W_2(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t^{N,1}},\mu_\infty)^2 \le \ff{ c_1 \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-\ll_a\bb(1-r_0)^2 t} }{1\land t} \W_2(\mu, \mu_\infty)^2\ ,\ \ t>0. $$
This and \eqref{SNN} imply
\beq\label{EXN6} \W_2(P_t^*\mu,\mu_\infty)^2\le \ff{ c_1 \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-\ll_a\bb(1-r_0)^2 t}}{1\land t} \W_2(\mu, \mu_\infty)^2\ ,\ \ t>0. \end{equation}
By \cite[Theorem 4.1]{W18}, $(H_1)$-$(H_3)$ imply the log-Harnack inequality
\beq\label{LHK} P_t(\log f)(\nu)\le \log P_tf(\mu)+\ff {c_2}{1\land t} \W_2(\mu,\nu)^2,\ \ \mu,\nu\in \scr P_2, t>0\end{equation} for some constant $c_2>0$.
Similarly to the proof of \eqref{EXX} we have
$$N\W_2(\mu_\infty, \mu^{(N,1)})^2\le \W_2(\mu_\infty^{\otimes N},\mu^{(N)})^2,$$
where $\mu^{(N,1)}:=\mu^{(N)}(\cdot\times \R^{d(N-1)})$ is the first marginal distribution of $\mu^{(N)}$. This together with \eqref{EXN3} implies
$$\lim_{N\rightarrow}\def\l{\ell}\def\iint{\int}\def\gg{\gamma\infty} \W_2(\mu^{(N,1)}, \mu_\infty)^2=0.$$
Therefore, applying \eqref{LSN} to $f(x)$ depending only on the first component $x_1$, and letting $N\rightarrow\infty$, we derive the log-Sobolev inequality
$$\mu_\infty(f^2\log f^2)\le \ff 2 {\bb (1-r_0)^2} \mu_\infty(|\nabla} \def\pp{\partial} \def\E{\mathbb E f|^2),\ \ f\in C_b^1(\R^d), \mu_\infty(f^2)=1.$$
By \cite{BGL}, this implies \eqref{TLI} for $C=\ff 2 {\bb (1-r_0)^2}.$
Combining this with the log-Harnack inequality and \eqref{EXN6}, by Theorem \ref{T0} we prove \eqref{WU} for some constant $c>0$ and $\mu_t=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t}=P_t^*\mu_0$ for
solutions to \eqref{EP} with $b,\sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}$ in \eqref{SBB}.
Similarly to the link between \eqref{E00} and \eqref{E0'} shown in the Introduction, for any probability density functions $\rr_t$ solving \eqref{E01}, we have $\rr_t\text{\rm{d}} x= P_t^*\mu_0$ for $\mu_0=\rr_0\text{\rm{d}} x\in \scr P_2$. So, we have proved \eqref{WU} for $\rr_t$ solving \eqref{E01} with $\mu_0\in \scr P_2$. As explained in the proof of Example 2.1, $\Ent(\mu_0|\mu_\infty)=\W_2(\mu_0,\mu_\infty)=\infty$ for $\mu_0\notin \scr P_2$, so that the desired inequality \eqref{WU} holds trivially. Then the proof is finished.
\end{proof}
\section{Proof of Theorem \ref{T1}}
According to \cite[Theorem 3.1]{W18}, $(A_1)$ and $(A_2)$ imply that $P_t^*$ has a unique invariant probability measure $\mu_\infty$ and
\beq\label{001} \W_2(P_t^*\mu, \mu_\infty) \le \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-\frac{1}{2}(K_2-K_1) t} \W_2(\mu,\mu_\infty),\ \ t\ge 0, \mu \in \scr P_2,\end{equation}
while \cite[Corollary 4.3]{W18} implies
$$ \Ent(P_t^*\mu| \mu_\infty) \le \ff {c_0}{1\land t} \W_2(\mu,\mu_\infty)^2,\ \ t>0, \mu\in \scr P_2$$ for some constant $c_0>0$.
Then for any $p>1$, combining these with $P_t^*=P_{1\land t}^* P_{(t-1)^+}^*$, we obtain
\begin} \def\beq{\begin{equation}} \def\F{\scr F{align*} &\Ent(P_t^*\mu| \mu_\infty) =\Ent(P_{1\land t}^*P_{(t-1)^+}^* \mu| \mu_\infty) \le \ff{c_0} {1\land t} \W_2(P_{(t-1)^+}^*\mu,\mu_\infty)^2\\
&\le \ff{c_0\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-(K_2-K_1)(t-1)^+}} {1\land t} \W_2(\mu,\mu_\infty)^2= \ff{c_0\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{K_2-K_1}} {1\land t} \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-(K_2-K_1)t} \W_2(\mu,\mu_\infty)^2.\end{align*}
This together with \eqref{001} implies \eqref{EX1} for some constant $c_1>0$.
Now, let $\sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}\in C_b^2(\R^d\rightarrow}\def\l{\ell}\def\iint{\int}\def\gg{\gamma \R^d\otimes \R^m)$. To deduce \eqref{EX2} from \eqref{EX1}, it remains to find a constant $c>0$ such that the following Talagrand inequality holds:
$$\W_2(\mu,\mu_\infty)^2\le c\, \Ent(\mu|\mu_\infty),\ \ \mu\in \scr P_2.$$
According to \cite{BGL}, this inequality follows from the log-Sobolev inequality
\beq\label{LSII} \mu_\infty(f^2\log f^2)\le c \mu_\infty(|\nabla} \def\pp{\partial} \def\E{\mathbb E f|^2),\ \ f\in C_b^1(\R^d), \mu_\infty(f^2)=1.\end{equation}
To prove this inequality, we consider the diffusion process $\bar X_t$ on $\R^d$ generated by
$$\bar L:= \ff 1 2 \sum_{i,j=1}^d (\sigma\sigma^*)_{ij}\pp_i\pp_j +\sum_{i=1}^d b_i(\cdot,\mu_\infty)\pp_i,$$
which can be constructed by solving the SDE
\beq\label{BSDE} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D\bar X_t= \sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}(\bar X_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t+ b(\bar X_t,\mu_\infty)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t.\end{equation}
Let $\bar P_t$ be the associated Markov semigroup. Since $P_t^*\mu_\infty=\mu_\infty$, when $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_0}=\mu_\infty$ the SDE \eqref{BSDE} coincides with \eqref{E1}
so that by the uniqueness, we see that $\mu_\infty$ is an invariant probability measure of $\bar P_t$. Combining this with $(A_2)$ and It\^o's formula, we obtain
\beq\label{ECC} \W_2(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\bar X_t},\mu_\infty)^2\le \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-K_2t} \W_2(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\bar X_0},\mu_\infty)^2,\ \ t>0. \end{equation}
To prove the log-Sobolev inequality \eqref{LSII}, we first verify the hyperboundedness of $\bar P_t$, i.e. for large $t>0$ we have
\beq\label{HP} \|\bar P_t\|_{L^2(\mu_\infty)\rightarrow}\def\l{\ell}\def\iint{\int}\def\gg{\gamma L^4(\mu_\infty)}<\infty.\end{equation}
It is easy to see that conditions $(A_1)$ and $(A_2)$ in Theorem \ref{T1} imply that $\sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}$ and $b(\cdot,\mu_\infty)$ satisfy conditions $(A1)$-$(A3)$ in \cite{W11} for
$K=-(K_2-K_1), \ll_t^2=\ll$ and $\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho_t= \|\sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}\|_\infty.$ So, by \cite[Theorem 1.1(3)]{W11}, we find a constant $C>0$ such that the following Harnack inequality holds:
$$(\bar P_tf(x))^2\le \bar P_t f^2 (y) \exp\Big[\ff{C|x-y|^2}{\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{(K_2-K_1)t}-1}\Big],\ \ t>0.$$
Then for any $f$ with $\mu_\infty(f^2)\le 1$, we have
\begin} \def\beq{\begin{equation}} \def\F{\scr F{align*} &\big(\bar P_t f(x)\big)^2 \int_{\R^d} \exp\Big[-\ff{C|x-y|^2}{\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{(K_2-K_1)t}-1}\Big]\mu_\infty(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y)\\
&\le \mu_\infty(\bar P_t f^2) = \mu_\infty(f^2)\le 1.\end{align*}
So,
\beq\label{UPP} \begin} \def\beq{\begin{equation}} \def\F{\scr F{split} &\sup_{\mu_\infty(f^2)\le 1} |\bar P_t f(x)|^4\le \ff 1 {\big(\int_{\R^d} \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-\ff{C|x-y|^2}{\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{(K_2-K_1)t}-1}}\mu_\infty(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y)\big)^2} \\
&\le \ff 1 {\big(\int_{B(0,1)} \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-\ff{C|x-y|^2}{\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{(K_2-K_1)t}-1}}\mu_\infty(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y)\big)^2}\le C_1\exp\big[C_1 \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-(K_2-K_1)t}|x|^2\big],\ \ t\ge 1, x\in \R^d.\end{split} \end{equation}
Next, by
$ \|\sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}\|_\infty<\infty$, $(A_2)$ and It\^o's formula, for any $k\in (0,K_2)$ there exists a constant $c_k>0$ such that
$$ \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D |\bar X_t|^2\le 2\<\bar X_t, \sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}(\bar X_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t\> +\big\{c_k-k |\bar X_t|^2\big\}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t.$$
Then for any $\vv>0$,
\beq\label{PW} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\vv |\bar X_t|^2} \le 2\vv \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\vv|\bar X_t|^2} \<\bar X_t, \sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}(\bar X_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t\>+\vv \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\vv |\bar X_t|^2}\big\{c_k +2\vv\|\sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}\|_\infty^2|\bar X_t|^2-k|\bar X_t|^2\big\}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t.\end{equation}
When $\vv>0$ is small enough such that $2\vv\|\sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}\|_\infty^2<K_2$, there exist constants $c_1(\vv), c_2(\vv)>0$ such that
$$\vv \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\vv |\bar X_t|^2}\big\{c_k +2\vv\|\sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}\|_\infty^2|\bar X_t|^2-k|\bar X_t|^2\big\}\le c_1(\vv)-c_2(\vv)\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\vv|\bar X_t|^2}.$$Combining this with \eqref{PW} we obtain
$$\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\vv |\bar X_t|^2} \le c_1(\vv) - c_2(\vv) \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\vv |\bar X_t|^2} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t + 2 \vv \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\vv |\bar X_t|^2} \<\bar X_t, \sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}(\bar X_t) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t\>.$$
Taking for instance $\bar X_0=0$, we get
$$\ff {c_2(\vv)} t \int_0^t \E \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\vv|\bar X_s|^2}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s \le \ff{1+c_1(\vv)t} t,\ \ t>0.$$
This together with \eqref{ECC} yields
$$\mu_\infty(\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\vv(|\cdot|^2\land N)}) = \lim_{t\rightarrow}\def\l{\ell}\def\iint{\int}\def\gg{\gamma\infty} \ff 1 t\int_0^t \E \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\vv(|\bar X_s|^2\land N)}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s \le \ff{c_1(\vv)}{c_2(\vv)},\ \ N>0.$$
Here the first identity holds since, by \eqref{ECC}, $\scr L_{\bar X_s}$ converges weakly to $\mu_\infty$ as $s\rightarrow\infty$, while $\text{\rm{e}}^{\vv(|\cdot|^2\land N)}$ is bounded and continuous. By letting $N\rightarrow\infty$ we derive $\mu_\infty(\text{\rm{e}}^{\vv|\cdot|^2})<\infty$.
Obviously, this and \eqref{UPP} imply \eqref{HP} for large $t>0$.
Moreover, since $\|(\sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}\si^*)^{-1}\|_\infty<\infty$, $\sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}\in C_b^2(\R^d\rightarrow}\def\l{\ell}\def\iint{\int}\def\gg{\gamma\R^d\otimes \R^m)$ and noting that $(A_2)$ gives
$$\<v,\nabla} \def\pp{\partial} \def\E{\mathbb E_v b(\cdot,\mu_\infty)\> \le -K_2|v|^2,\ \ v\in\R^d,$$
we find a constant $K_0\in \R$ such that for any $f\in C^\infty(\R^d)$,
$$\GG_2(f):= \ff 1 2 \bar L |\sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}^*\nabla} \def\pp{\partial} \def\E{\mathbb E f|^2 -\<\sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}^*\nabla} \def\pp{\partial} \def\E{\mathbb E f, \sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}^* \nabla} \def\pp{\partial} \def\E{\mathbb E \bar L f\> \ge K_0 |\sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}^*\nabla} \def\pp{\partial} \def\E{\mathbb E f|^2,$$
i.e. the Bakry-Emery curvature of $\bar L$ is bounded below by a constant $K_0$. According to \cite[Theorem 2.1]{RW03},
this and the hyperboundedness \eqref{HP} imply the defective log-Sobolev inequality
\beq\label{DFL} \begin} \def\beq{\begin{equation}} \def\F{\scr F{split} &\mu_\infty(f^2\log f^2)\le C_1 \mu_\infty(|\sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}^*\nabla} \def\pp{\partial} \def\E{\mathbb E f|^2) +C_2\\
&\le C_1\|\sigma\|_\infty^2 \mu_\infty(|\nabla f|^2) +C_2,\ \ f\in C_b^1(\R^d), \mu_\infty(f^2)=1\end{split}\end{equation} for some constants $C_1,C_2>0$.
Since $\bar L$ is elliptic, the invariant probability measure $\mu_\infty$ is equivalent to the Lebesgue measure, see for instance \cite[Theorem 1.1(ii)]{BRW01}, so that the Dirichlet form
$$\scr E(f,g):= \mu_\infty(\<\nabla f,\nabla g\>),\ \ f,g\in W^{1,2}(\mu_\infty)$$ is irreducible, i.e. $f\in W^{1,2}(\mu_\infty)$ and $\scr E(f,f)=0$ imply that $f$ is constant. Therefore,
by \cite[Corollary 1.3]{W14}, see also \cite{Miclo}, the defective log-Sobolev inequality \eqref{DFL} implies the desired log-Sobolev inequality \eqref{LSII} for some constant $c>0$.
Hence, the proof is finished.
\section{Proof of Theorem \ref{T2} }
We first establish the log-Harnack inequality for a more general model, which extends existing results derived in \cite{GW12, BWY15} to the distribution dependent setting.
\subsection{Log-Harnack inequality}
Consider the following McKean-Vlasov stochastic Hamiltonian system for $(X_t,Y_t)\in \R^{d_1}\times \R^{d_2}:$
\beq\label{E'} \begin} \def\beq{\begin{equation}} \def\F{\scr F{cases} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X_t= \big(AX_t+BY_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t,\\
\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D Y_t= Z((X_t,Y_t), \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{(X_t,Y_t)}) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t + \sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t,\end{cases}\end{equation}
where $A$ is a $d_1\times d_1$-matrix, $B$ is a $d_1\times d_2$-matrix, $\sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}$ is a $d_2\times d_2$-matrix,
$W_t$ is the $d_2$-dimensional Brownian motion on a complete filtration probability space $(\OO,\{\F_t\}_{t\ge 0}, \P)$, and
$$Z: \R^{d_1+d_2}\times \scr P_2(\R^{d_1+d_2})\rightarrow}\def\l{\ell}\def\iint{\int}\def\gg{\gamma\R^{d_2}$$ is measurable. We assume
\begin{enumerate} \item[{\bf (C$'$)}] $\sigma$ is invertible, $Z$ is Lipschitz continuous, and $A,B$ satisfy the following Kalman rank condition for some $k\ge 1$:
$${\rm Rank}[A^0B,\cdots, A^{k-1} B]=d_1, \ \ A^0:= I_{d_1}. $$ \end{enumerate}
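In particular, for the stochastic Hamiltonian system \eqref{E21} one has $A=0$ and ${\rm Rank}[B]=d_1$ (since $BB^*$ is invertible), so that this rank condition holds with $k=1$; this is the case used in the proof of Theorem \ref{T2} below.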
Obviously, this assumption implies $(A_1)$, so that \eqref{E'} has a unique solution $(X_t,Y_t)$ for any initial value $(X_0,Y_0)$ with
$\mu:=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{(X_0,Y_0)}\in \scr P_2(\R^{d_1+d_2}).$ Let $P^*_t\mu:=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{(X_t,Y_t)}$ and
$$(P_tf)(\mu) :=\int_{\R^{d_1+d_2} } f\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D P_t^*\mu,\ \ t\ge 0, f\in \B_b(\R^{d_1+d_2}).$$ By \cite[Theorem 3.1]{W18}, the Lipschitz continuity of $Z$ implies
\beq\label{ES0} \W_2(P_t^*\mu, P_t^*\nu) \le \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{Kt} \W_2(\mu,\nu),\ \ t\ge 0, \mu,\nu\in \scr P_2(\R^{d_1+d_2})\end{equation} for some constant $K>0.$
We have the following result.
\begin{prp}\label{P1} Assume {\bf (C$'$)}. Then there exists a constant $c>0$ such that
\beq\label{LH} (P_T\log f)(\nu)\le \log (P_T f)(\mu) + \ff {c\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{cT}}{T^{4k-1}\land 1} \W_2(\mu,\nu)^2,\ \ T>0, \mu,\nu\in \scr P_2(\R^{d_1+d_2}).\end{equation}
Consequently,
\beq\label{LH'} \Ent(P_T^*\nu|P_T^*\mu) \le \ff {c\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{cT}}{T^{4k-1}\land 1} \W_2(\mu,\nu)^2,\ \ T>0, \mu,\nu\in \scr P_2(\R^{d_1+d_2}).\end{equation}
\end{prp}
\begin{proof} According to \cite[Corollary 4.3]{W18}, \eqref{LH} implies \eqref{LH'}. Below we prove \eqref{LH} by using the coupling by change of measures summarized in \cite[Section 1.1]{W13}. By the Kalman rank condition in {\bf (C$'$)},
$$Q_T:= \int_0^T t(T-t) \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{(T-t)A}BB^* \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{(T-t)A^*} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t$$
is invertible and there exists a constant $c_1>0$ such that
\beq\label{EQ} \|Q_T^{-1}\|\le \ff{c_1\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{c_1T}}{(T\land 1)^{2k+1}},\ \ T>0,\end{equation}
see for instance \cite[Theorem 4.2(1)]{WZ13}.
Let $(X_0,Y_0), (\bar X_0,\bar Y_0)\in L^2(\OO\rightarrow}\def\l{\ell}\def\iint{\int}\def\gg{\gamma\R^{d_1+d_2},\F_0,\P)$ such that $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{(X_0,Y_0)}=\mu, \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{(\bar X_0,\bar Y_0)}=\nu$ and
\beq\label{E'2} \E\big(|X_0-\bar X_0|^2 + |Y_0-\bar Y_0|^2\big) = \W_2(\mu,\nu)^2.\end{equation}
Next, let $(X_t,Y_t)$ solve \eqref{E'}. Then $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{(X_t,Y_t)}= P_t^*\mu.$
Consider the modified equation with initial value $(\bar X_0,\bar Y_0)$:
\beq\label{E'3} \begin} \def\beq{\begin{equation}} \def\F{\scr F{cases} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \bar X_t= \big(A\bar X_t+B\bar Y_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t,\\
\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \bar Y_t= \Big\{Z((X_t,Y_t), P_t^*\mu) +\ff {Y_0-\bar Y_0}T +\ff{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D}{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t} \big[t(T-t) B^* \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{(T-t)A^*}v\big]\Big\} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t + \sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t,\end{cases}\end{equation}
where
\beq\label{EV} v:= Q_T^{-1} \bigg\{\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{TA}(X_0-\bar X_0) +\int_0^T \ff{t -T}T\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{(T-t)A} B(\bar Y_0-Y_0) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\bigg\}.\end{equation}
Then
\beq\label{ES1} \begin} \def\beq{\begin{equation}} \def\F{\scr F{split} \bar Y_t-Y_t &= \bar Y_0-Y_0+\int_0^t \Big\{\ff {Y_0-\bar Y_0}T +\ff{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D}{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r} \big[r(T-r) B^* \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{(T-r)A^*}v\big]\Big\} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\
&= \ff{T-t}T ( \bar Y_0-Y_0) + t(T-t) B^* \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{(T-t)A^*}v,\ \ t\in [0,T].\end{split} \end{equation}
Consequently, $Y_T=\bar Y_T$, and combining with Duhamel's formula, we obtain
\beq\label{ES2} \begin} \def\beq{\begin{equation}} \def\F{\scr F{split} &\bar X_t-X_t= \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{tA}(\bar X_0-X_0) +\int_0^t \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{(t-r)A} B \Big\{\ff{T-r}T ( \bar Y_0-Y_0) + r(T-r) B^* \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{(T-r)A^*}v\Big\}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\end{split}\end{equation}
for $t\in [0,T].$ This and \eqref{EV} imply
$$\bar X_T-X_T= \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{TA} (\bar X_0-X_0) +\int_0^T \ff{T-r}T \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{(T-r)A} B (\bar Y_0-Y_0)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r +Q_T v=0,$$
which together with $Y_T=\bar Y_T$ observed above yields
\beq\label{CP} (X_T,Y_T)= (\bar X_T, \bar Y_T).\end{equation}
On the other hand, let
$$\xi_t=\sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s}^{-1}\Big\{\ff 1 T (Y_0-\bar Y_0)+ \ff{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D}{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t} \Big[t(T-t) B^* \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{(T-t)A^*} v\Big] + Z\big((X_t,Y_t), P_t^*\mu\big)- Z\big((\bar X_t, \bar Y_t), P_t^*\nu\big)\Big\},\ \ t\in [0,T].$$
By {\bf (C$'$)}, \eqref{ES0}, \eqref{EQ}, \eqref{EV}, \eqref{ES1}, and \eqref{ES2}, we find a constant $c_2>0$ such that
\beq\label{E'4} |\xi_t|^2 \le \ff{c_2}{(T\land 1)^{4k} } \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{c_2T}\big\{|X_0-\bar X_0|^2 + |Y_0-\bar Y_0|^2 + \W_2(\mu,\nu)^2\big\},\ \ t\in [0,T].\end{equation}
So, the Girsanov theorem implies that
$$\tilde W_t:= W_t+\int_0^t\xi_s\text{\rm{d}} s,\ \ t\in [0,T]$$ is a $d_2$-dimensional Brownian motion under the probability measure $\Q:=R\P$, where
\beq\label{RR} R:= \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-\int_0^T \<\xi_t,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t\> -\ff 1 2 \int_0^T |\xi_t|^2\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t}.\end{equation}
Reformulating \eqref{E'3} as
$$\begin} \def\beq{\begin{equation}} \def\F{\scr F{cases} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \bar X_t= \big(A\bar X_t+B\bar Y_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t,\\
\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \bar Y_t= Z((\bar X_t,\bar Y_t), P_t^*\nu) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t + \sigma} \def\ess{\text{\rm{ess}}}\def\s{{\bf s} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \tilde}\def\[{\lfloor} \def\]{\rfloor W_t, \ \ t\in [0,T],\end{cases}$$
by the weak uniqueness of \eqref{E'} and that the distribution of $(\bar X_0,\bar Y_0)$ under $\Q$ coincides with $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{(\bar X_0, \bar Y_0)}=\nu$, we obtain
$\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{(\bar X_t, \bar Y_t)|\Q}=P_t^*\nu$ for $t\in [0,T].$ Combining this with \eqref{CP} and using the Young inequality, for any $f\in \B_b^+(\R^{d_1+d_2})$ we have
\beq\label{LH3} \begin} \def\beq{\begin{equation}} \def\F{\scr F{split} & (P_T\log f)(\nu)= \E[R \log f(\bar X_T, \bar Y_T)] = \E[R \log f(X_T, Y_T)] \\
&\le \log \E[f(X_T,Y_T)] +\E [R\log R]= \log (P_Tf)(\mu) + \E_\Q [\log R].\end{split}\end{equation}
By \eqref{E'4} and \eqref{RR}, since $\tilde W_t$ is a Brownian motion under $\Q$, and noting that $\Q|_{\F_0}=\P|_{\F_0}$ and \eqref{E'2} imply
$$ \E_\Q\big(|X_0-\bar X_0|^2 + |Y_0-\bar Y_0|^2\big) = \W_2(\mu,\nu)^2, $$
we find a constant $c>0$ such that $$ \E_\Q [\log R] =\ff 1 2\E_\Q\int_0^T|\xi_t|^2\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t \le \ff{c\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{cT}}{(T\land 1)^{4k-1} } \W_2(\mu,\nu)^2.$$
Therefore, \eqref{LH} follows from \eqref{LH3}.
\end{proof}
\subsection{Proof of Theorem \ref{T2} }
We first prove the exponential convergence of $P_t^*$ in $\W_2$.
\begin} \def\beq{\begin{equation}} \def\F{\scr F{lem}\label{LN1} Assume {\bf (C)}. Then there exists a constant $c_1>0$ such that
\beq\label{AC0} \W_2(P_t^*\mu,P_t^*\nu)^2\le c_1\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-\kk t} \W_2(\mu,\nu)^2,\ \ t\ge 0, \mu,\nu\in \scr P_2(\R^{d_1+d_2}).\end{equation}
Consequently, $P_t^*$ has a unique invariant probability measure $\mu_\infty\in\scr P_2(\R^{d_1+d_2})$. \end{lem}
\begin{proof} As shown in the proof of \cite[Theorem 3.1(2)]{W18}, the second assertion follows from the first. So, it suffices to prove \eqref{AC0}.
For \beq\label{AC1} a:=\Big(\ff{1+\bb+\bb^2}{1+\bb}\Big)^{\ff 1 2} ,\ \ r:= a -\ff{\bb}{ a} =\ff 1 {\ss{(1+\bb)(1+\bb+\bb^2)}}\in (0,1),\end{equation}
we define the distance
\beq\label{AC2} \bar \psi_{B}((x,y),(\bar x,\bar y)):=\ss{a^2|x-\bar x|^2 +|B(y-\bar y)|^2+ 2 r a \<x-\bar x, B(y-\bar y)\>}
\end{equation} for $ (x,y), (\bar x, \bar y) \in \R^{d_1+d_2}. $ Then there exists a constant $C>1$ such that
\beq\label{ACC} C^{-1}|(x-\bar x, y-\bar y)|\le \bar\psi_{B}((x,y),(\bar x,\bar y))\le C|(x-\bar x, y-\bar y)|.\end{equation}
Moreover, we claim that
\beq\label{AC3} \bar\psi_{B}((x,y),(\bar x,\bar y))^2\le \ff{2+2\bb+\bb^2 +\ss{\bb^4+4}}{2(1+\bb)} {\psi_B}((x,y),(\bar x,\bar y))^2.\end{equation}
Indeed, by \eqref{AC1} and \eqref{AC2}, for any $\vv>0$ we have
\beq\label{AC} \bar\psi_{B}((x,y),(\bar x,\bar y))^2\le a^2(1+\vv) |x-\bar x|^2 + \Big(1+ \ff 1 {\vv(1+\bb)(1+\bb+\bb^2)}\Big)|B(y-\bar y)|^2.\end{equation}
Obviously, by \eqref{AC1},
$$\vv:= \ff{1-a^2+\ss{(a^2-1)^2+4 a^2(1+\bb)^{-1}(1+\bb+\bb^2)^{-1}}}{2 a^2}= \ff{\ss{\bb^4+4}-\bb^2}{2(1+\bb+\bb^2)}$$
satisfies
$$a^2(1+\vv)= 1+ \ff 1 {\vv(1+\bb)(1+\bb+\bb^2)}= \ff{2+2\bb+\bb^2 +\ss{\bb^4+4}}{2(1+\bb)}.$$
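Indeed, $\vv$ is the positive root of $a^2\vv^2+(a^2-1)\vv-\ff 1{(1+\bb)(1+\bb+\bb^2)}=0$, which is equivalent to the first equality above, and \eqref{AC1} yields $a^2-1=\ff{\bb^2}{1+\bb}$ and $\ff{4a^2}{(1+\bb)(1+\bb+\bb^2)}=\ff 4{(1+\bb)^2}$, from which the displayed formula for $\vv$ follows.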
Thus, \eqref{AC3} follows from \eqref{AC}.
Now, let $(X_t, Y_t)$ and $ (\bar X_t, \bar Y_t)$ solve \eqref{E21} with $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{(X_0,Y_0)}=\mu, \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{(\bar X_0,\bar Y_0)}=\nu$ such that
\beq\label{AC4} \W_2(\mu,\nu)^2 =\E|(X_0-\bar X_0, Y_0-\bar Y_0)|^2.\end{equation}
Simply denote $\mu_t=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{(X_t,Y_t)}, \bar \mu_t= \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{(\bar X_t, \bar Y_t)}.$ By {\bf (C)} and It\^o's formula, and noting that \eqref{AC1} implies
$$a^2-\bb-ra=0,\ \ 1-ra=ra\bb =\ff\bb {1+\bb},$$ we obtain
\begin} \def\beq{\begin{equation}} \def\F{\scr F{align*} &\ff 1 2 \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \big\{\bar\psi_{B}((X_t,Y_t), (\bar X_t,\bar Y_t))^2\big\}= \big\<a^2(X_t-\bar X_t)+ r a B(Y_t-\bar Y_t), B(Y_t-\bar Y_t)\big\>\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t \\
& + \big\<B^*B (Y_t-\bar Y_t)+ r a B^*(X_t-\bar X_t),\ \bb B^*(BB^*)^{-1} (\bar X_t-X_t) +\bar Y_t-Y_t \big\>\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t \\
&+\big\<B^*B (Y_t-\bar Y_t)+ r a B^*(X_t-\bar X_t),\ B^*\{\nabla} \def\pp{\partial} \def\E{\mathbb E V(\bar X_t, \bar\mu_t) -\nabla} \def\pp{\partial} \def\E{\mathbb E V(X_t,\mu_t)\}\big\> \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\\
&\le \Big\{-(1-ra)|B(Y_t-\bar Y_t)|^2+(a^2-\bb-ra) \<X_t-\bar X_t, B(Y_t-\bar Y_t)\> - ra\bb |X_t-\bar X_t|^2 \\
&\qquad + \big\<B^*B (Y_t-\bar Y_t)+ (1+\bb)^{-1}B^*(X_t-\bar X_t),\ B^*\{\nabla} \def\pp{\partial} \def\E{\mathbb E V(\bar X_t, \bar\mu_t) -\nabla} \def\pp{\partial} \def\E{\mathbb E V(X_t,\mu_t)\} \big\>\Big\}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\\
&\le \Big\{\ff{\theta_2}{1+\bb}\W_2^{\psi_B}(\mu_t,\bar\mu_t)^2-\ff{\bb-\theta_1}{1+\bb}{\psi_B}((X_t,Y_t), (\bar X_t,\bar Y_t))^2 \Big\}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t.\end{align*}
By \eqref{AC3} and the fact that
$$\W_2^{\psi_B}(\mu_t,\bar\mu_t)^2\le \E[{\psi_B}((X_t,Y_t), (\bar X_t,\bar Y_t))^2],$$
for $\kk>0$ in \eqref{KK}, we obtain
\begin} \def\beq{\begin{equation}} \def\F{\scr F{align*}&\ff 1 2 \big\{\E[ \bar\psi_{B}((X_t,Y_t), (\bar X_t,\bar Y_t))^2]- \E[ \bar\psi_{B}((X_s,Y_s), (\bar X_s,\bar Y_s))^2]\big\}\\
&\le -\ff{\bb-\theta_1-\theta_2}{1+\bb} \int_s^t \E [{\psi_B}((X_r,Y_r), (\bar X_r,\bar Y_r))^2]\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r \\
&\le - \kk \int_s^t \E [ \bar\psi_{B}((X_r,Y_r), (\bar X_r,\bar Y_r))^2]\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r,\ \ t\ge s\ge 0.\end{align*} Therefore, Gronwall's inequality implies
$$\E[ \bar\psi_{B}((X_t,Y_t), (\bar X_t,\bar Y_t))^2]\le \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-2\kk t} \E[ \bar\psi_{B}((X_0,Y_0), (\bar X_0,\bar Y_0))^2],\ \ t\ge 0.$$
Combining this with \eqref{ACC} and \eqref{AC4}, we prove \eqref{AC0} for some constant $c_1>0$.
\end{proof}
\begin} \def\beq{\begin{equation}} \def\F{\scr F{proof}[Proof of Theorem \ref{T2}] By Proposition \ref{P1} with $k=1$, Lemma \ref{LN1} and Theorem \ref{T0}, we only need to verify the Talagrand inequality.
As shown at the beginning of \cite[Section 3]{GW19}, $\mu_\infty$ has the representation
$$\mu_\infty(\text{\rm{d}} x,\text{\rm{d}} y)= Z^{-1} \text{\rm{e}}^{-\bar V(x,y)}\text{\rm{d}} x\text{\rm{d}} y,\ \ \bar V(x,y):= V(x,\mu_\infty)+ \ff\bb 2 |(BB^*)^{-\ff 1 2} x|^2 +\ff 1 2 |y|^2,$$
where $Z:=\int_{\R^{d_1+d_2} }\text{\rm{e}}^{ -\bar V(x,y)}\text{\rm{d}} x\text{\rm{d}} y$ is the normalization constant. Since \eqref{AH2} implies
$$BB^* \Hess_{ V(\cdot,\mu_\infty)}\ge -\theta_1I_{d_1},$$
we deduce from \eqref{AH1} that
$$\Hess_{\bar V}\ge \gg I_{d_1+d_2},\ \ \gg:=1\land \ff{\bb-\theta_1}{\|B\|^2} >0.$$
So, by the Bakry-Emery criterion \cite{BE84}, we have the log-Sobolev inequality
$$\mu_\infty(f^2\log f^2)\le \ff 2 {\gg} \mu_\infty(|\nabla} \def\pp{\partial} \def\E{\mathbb E f|^2),\ \ f\in C_b^1(\R^{d_1+d_2}), \mu_\infty(f^2)=1.$$
According to \cite{BGL}, this implies the Talagrand inequality
$$\W_2(\mu,\mu_\infty)^2\le \ff 2 {\gg} \Ent(\mu|\mu_\infty).$$
Then the proof is finished.
\end{proof}
\paragraph{Acknowledgement.} We would like to thank the referee for helpful comments and corrections.
\section{Introduction} \label{sec:introduction}
{\color{black}Consider two directed graphs $G_1$ and $G_2$ with labeled node sets $V_1$ and $V_2$, respectively. A \emph{simulation} \cite{DBLP:journals/pvldb/MaCFHW11} is a binary relation $R \subseteq V_1 \times V_2$ such that, for each node pair $(u, v)$ in $R$ (namely, $u$ is simulated by $v$), $u$ and $v$ have the same label, each out-neighbor\footnote{\color{black}A node $u'$ is an out-neighbor of $u$ if there is an outgoing edge from $u$ to $u'$ in the graph. Similarly, $u''$ is an in-neighbor of $u$ if there is an edge from $u''$ to $u$.} of $u$ is simulated by one of $v$'s out-neighbors (i.e., the corresponding pair is also in $R$), and the same applies to in-neighbors. An illustration of this concept is shown below.}
\begin{example} \label{ex:simulation_definition_examples}
As shown in \reffig{example_graphs}, node $u$ is simulated by node $v_2$: they have the same label, and each of $u$'s out-neighbors can be simulated by a same-label out-neighbor of $v_2$ ($u$ has no in-neighbors). Note that the two hexagonal nodes in $\mathcal{P}$ are simulated by the same hexagonal node in $\mathcal{G}_2$. Similarly, $u$ is simulated by $v_3$ and $v_4$. However, $u$ cannot be simulated by $v_1$, as the pentagonal neighbor of $u$ cannot be simulated by any neighbor of $v_1$.
\end{example}
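To make these semantics concrete, the (unique) maximal simulation between two labeled graphs can be computed by a standard fixpoint refinement: start from all same-label node pairs and repeatedly discard pairs that violate the out-/in-neighbor conditions. A minimal, unoptimized sketch is shown below; the identifiers (\texttt{label}, \texttt{out\_nbrs}, \texttt{in\_nbrs}) denote label and adjacency maps covering the nodes of both graphs and are illustrative only.
\begin{verbatim}
def maximal_simulation(V1, V2, label, out_nbrs, in_nbrs):
    # label / out_nbrs / in_nbrs are dicts covering the nodes of
    # both graphs (node identifiers assumed disjoint).
    # Start from all same-label pairs, then repeatedly discard
    # pairs violating the out-/in-neighbor conditions.
    R = {(u, v) for u in V1 for v in V2 if label[u] == label[v]}
    changed = True
    while changed:
        changed = False
        for (u, v) in list(R):
            ok_out = all(any((u2, v2) in R for v2 in out_nbrs[v])
                         for u2 in out_nbrs[u])
            ok_in = all(any((u2, v2) in R for v2 in in_nbrs[v])
                        for u2 in in_nbrs[u])
            if not (ok_out and ok_in):
                R.remove((u, v))
                changed = True
    return R  # (u, v) in R  iff  u is simulated by v
\end{verbatim}
More efficient algorithms exist; this sketch only serves to illustrate the semantics that we quantify in the rest of the paper.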
{\color{black}The original definition of simulation put forward by Milner in 1971 \cite{DBLP:conf/ijcai/Milner71} only considered out-neighbors. But, in 2011, Ma et al. \cite{DBLP:journals/pvldb/MaCFHW11} revised the definition to consider in-neighbors, making it capture more topological information. Additionally, different variants of simulation have emerged over the years, each with its own constraint(s). For example, on the basis that $R$ is a simulation relation, \emph{bisimulation} \cite{milner1989communication} further requires that $R^{-1}$ is also a simulation, where $R^{-1}$ denotes the converse relation of $R$ (i.e., $R^{-1} = \{(v,u) | \forall (u, v) \in R\}$); and \emph{degree-preserving simulation} \cite{DBLP:journals/pvldb/SongGCW14} requires that two neighbors of $u$ cannot be simulated by the same neighbor of $v$.}
\stitle{Applications.} Simulation and its variants are important relations among nodes, and have been adopted in a wide range of applications. For example, simulation and degree-preserving simulation are shown to be effective in graph pattern matching \cite{DBLP:journals/pvldb/FanLMTWW10,DBLP:journals/pvldb/MaCFHW11,DBLP:journals/tods/MaCFHW14,DBLP:journals/pvldb/SongGCW14}, and {\color{black}a node in the data graph is considered to be a potential match for a node in the query graph if it simulates the query node. Bisimulation has been applied to compute RDF graph alignment \cite{DBLP:journals/pvldb/BunemanS16} and {\color{black}graph partition \cite{DBLP:conf/sigmod/HellingsFH12, DBLP:conf/sigmod/SchatzleNLP13,DBLP:conf/sac/HeeswijkFP16}}.} Generally, two nodes will be aligned or be placed in the same partition if they are in a bisimulation relation. Other applications include data retrieval \cite{DBLP:conf/vldb/Ramanan03}, {\color{black}graph compression \cite{DBLP:conf/sigmod/FanLWW12} and index construction \cite{DBLP:journals/is/FletcherGWGBP09,DBLP:conf/sigmod/KaushikBNK02,DBLP:journals/corr/abs-2003-03079}}, etc.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figures/motivating_example_for_fracsim_aline_2.pdf}
\topcaption{Example graphs. A node's shape denotes its label.} \label{fig:example_graphs}
\end{figure}
\stitle{Motivations.} {\color{black}Despite their numerous valuable uses, simulation and its variants are all coarse ``yes-or-no'' indicators. That is, simulation and its variants can only answer whether node $v$ can fully simulate node $u$; they cannot tell us whether $v$ might be able to partially or even very nearly (i.e., approximately) simulate $u$. This coarseness raises two practical issues.} First, real-world graphs often contain nodes that nearly simulate $u$, which are either naturally present in the graphs or are consequences of data errors (a common issue caused by data noise and mistakes in data collection). However, simulation and its variants cannot capture these nodes, which causes a loss of potential results. Second, the coarseness makes it inappropriate to apply simulation and its variants to applications that naturally require fine-grained evaluation, such as node similarity measurement.
\refex{pattern_matching_example} provides a real-life illustration of these issues.
\begin{example} \label{ex:pattern_matching_example}
We consider the application of simulation to determine whether or not a poster $A$ is simulated by another poster $B$ in terms of their design elements (e.g., color, layout, font, and structure). For example, when compared with the poster $P_1$ in \reffig{data_poster}, the candidate poster $P$ in \reffig{query_poster} differs only slightly in the font and font style. Hence, it is strongly suspected to be a case of plagiarism \cite{solo-wiki}. {\color{black}Nevertheless, due to a minor change of design elements, there is no exact simulation relation between posters $P$ and $P_1$, and thus exact simulation cannot be used to discover such similarity.} As a result, it is more desirable to develop a mechanism that captures the similarity between two posters via the degree of approximate simulation (a fine-grained measurement), instead of simply using exact simulation.
\end{example}
\begin{figure}
\centering
\subfigure[A poster]{ \includegraphics[width=0.095 \linewidth]{figures/query_poster.pdf} \label{fig:query_poster}}
\hspace{1em}
\subfigure[A database of existing posters]{
\includegraphics[width=0.3\linewidth]{figures/data_posters.pdf} \label{fig:data_poster}
}
\hspace{-0.6em}
\subfigure[Query graph]{ \includegraphics[width=0.16\linewidth]{figures/solo_query_0617.pdf} \label{fig:query_graph}}
\hspace{0.2em}
\subfigure[Data graph]{
\includegraphics[width=0.3\linewidth]{figures/solo_data_0617.pdf} \label{fig:data_graph}
}
\topcaption{Motivating example. Figures (c) and (d) are graphs representing the posters in (a) and (b), respectively. {\color{black}Nodes are marked with their labels. An edge from node $u$ to node $v$ indicates that poster $u$ has design element $v$.}}
\label{fig:pattern_matching_example}
\vspace{-1.8em}
\end{figure}
In general, there is a practical need for a mechanism that quantifies approximate simulation so as to remedy the impacts of the ``yes-or-no'' semantics. {\color{black}Such quantification can not only open up a host of possibilities for using simulation but also make the results of simulation more effective and robust. Although the simulation variants differ in certain properties, they are all derived from a common foundation, namely the simulation relation \cite{DBLP:conf/ijcai/Milner71}. Consequently, instead of developing a quantification technique independently for each variant, it is more desirable to devise a general framework that works for all simulation variants. Aside from the obvious benefit of less redundancy, developing a unified framework requires a systematic study of the properties of the different simulation variants; not only has this not been done before, but doing so may also help to inspire new variants.}
\stitle{Our Contributions.} We propose the \emph{fractional $\chi$-simulation framework}, which quantifies the extent of simulation and its variants in the range of $[0,1]$. Our main contributions are as follows.
{\color{black}\ssstitle{(1) A unified definition of $\chi$-simulation.} From a systematic study of the properties of simulation and its variants, we distill the base definition of simulation and its variants into a unified definition called $\chi$-simulation. Further, we discover and name a new simulation variant, \emph{bijective simulation}. Theoretically, bijective simulation is akin to the well-known Weisfeiler-Lehman isomorphism test \cite{DBLP:journals/jmlr/ShervashidzeSLMB11} (\refsec{discussion}). Practically, its fractional form (contribution 2) is more effective than the existing models regarding node similarity measurement, as detailed in \refsec{case_studies}.}
{\color{black}\ssstitle{(2) A general framework $\kw{FSim}_{\chi}$ for computing fractional $\chi$-simulation.} To quantify the degree to which one node simulates another by a $\chi$-simulation, {\color{black} we propose the concept of \emph{fractional $\chi$-simulation} and identify a list of properties that a \emph{fractional $\chi$-simulation} measure should satisfy.} Then, we present a general computation framework, namely $\kw{FSim}_{\chi}$, which can be configured to compute fractional $\chi$-simulation for all $\chi$-simulations while satisfying the identified properties.
$\kw{FSim}_{\chi}$ is an iterative framework that computes the fractional $\chi$-simulation scores for all pairs of nodes over two graphs. Furthermore, we show the relations of $\kw{FSim}_{\chi}$ to several well-known concepts, including node similarity measures (i.e., \kw{SimRank} \cite{DBLP:conf/kdd/JehW02} and \kw{RoleSim} \cite{DBLP:conf/kdd/JinLH11}) and an approximate variant of bisimulation (i.e., $k$-bisimulation \cite{DBLP:books/cu/12/AcetoIS12,DBLP:conf/bncod/LuoLFBHW13,DBLP:conf/cikm/LuoFHWB13,DBLP:conf/sac/HeeswijkFP16}), in \refsec{discussion}}.
\ssstitle{(3) Extensive experiments and case studies.} We perform empirical studies to exhibit that $\kw{FSim}_\chi$ is robust to parameter tuning and data errors, and is efficient to compute on real-world graphs. We further conduct three case studies to evaluate $\kw{FSim}_\chi$'s potential for subgraph pattern matching, node similarity measurement, and RDF graph alignment. {\color{black}Based on these studies, we reach the following conclusions. First, fractional $\chi$-simulation can remedy the ``yes-or-no" semantics of $\chi$-simulation, and it significantly improves the effectiveness of $\chi$-simulation in the related applications, e.g., simulation in subgraph pattern matching. Second, fractional bijective simulation (proposed in this paper) is a highly effective way of measuring node similarity. Finally, the $\kw{FSim}_\chi$ framework provides a flexible way to study the effectiveness of different simulation variants, and thus can be used as a tool to help identify the best variant for a specific application.}
\section{Simulation and Its Variants} \label{sec:preliminary}
\stitle{Data Model.} {\color{black}
Consider a node-labeled directed graph $G=(V, E, \ell)$, where $V(G)$ and $E(G)$ denote the node set and edge set, respectively (or $V$ and $E$ when the context is clear). $\Sigma$ is a set of string labels, and $\ell: V \rightarrow \Sigma$ is a labeling function that maps each node $u$ to a label $\ell(u) \in \Sigma$. $N_G^+(u) = \{u' | (u, u') \in E(G)\}$ denotes node $u$'s out-neighbors and, likewise, $N_G^-(u) = \{u' | (u', u) \in E(G)\}$ denotes its in-neighbors. Let $d_G^+(u) = |N_G^+(u)|$ and $d_G^-(u) = |N_G^-(u)|$ be the out- and in-degrees of node $u$, and let $d_G$, $D^+_G$ and $D^-_G$ denote the average degree, maximum out-degree and maximum in-degree of $G$, respectively. A summary of the notations used throughout this paper appears in \reftab{Notations}.}
\begin{table}[h]
\centering
\small
\color{black}
\topcaption{Table of Notations} \label{tab:Notations}
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{|c|c|} \hline
\textbf{Notation}&\textbf{Description}\\ \hline
$G = (V, E, \ell)$ & a node-labeled directed graph \\ \hline
$V(G)/E(G)$ & the node/edge set of graph $G$ \\ \hline
$\ell(\cdot)$ & a labeling function \\ \hline
$N_G^+(u)/N_G^-(u)$ & the out-neighbors/in-neighbors of node $u$ in $G$\\ \hline
$d_G^+(u)/d_G^-(u)$ & the out-degree/in-degree of node $u$ in $G$ \\ \hline
$d_G$ & the average degree of $G$ \\ \hline
$D^+_G/D^-_G$ & the maximum out-degree/in-degree of $G$ \\ \hline
\end{tabular}
\end{table}
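To make the data model concrete, the following minimal Python sketch (our illustration, not the authors' implementation) stores a node-labeled directed graph and exposes the out-/in-neighbor and degree notations defined above.
\begin{verbatim}
from collections import defaultdict

class LabeledDigraph:
    """A node-labeled directed graph G = (V, E, ell)."""
    def __init__(self):
        self.label = {}              # node -> label, i.e., ell(u)
        self.out = defaultdict(set)  # node -> out-neighbors N_G^+(u)
        self.inn = defaultdict(set)  # node -> in-neighbors  N_G^-(u)

    def add_node(self, u, lab):
        self.label[u] = lab

    def add_edge(self, u, v):
        self.out[u].add(v)
        self.inn[v].add(u)

    def d_plus(self, u):             # out-degree d_G^+(u)
        return len(self.out[u])

    def d_minus(self, u):            # in-degree d_G^-(u)
        return len(self.inn[u])
\end{verbatim}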
\stitle{Simulation Variants.} {\color{black}The first step in developing a unified definition of simulation and its variants is to formally define simulation as the foundation of all its variants.}
\vspace{-0.3em}
\begin{definition} \label{def:simulation}
\textsc{(Simulation)} Given the graphs $G_1=(V_1, E_1, \ell_1)$ and $G_2=(V_2, E_2, \ell_2)$\footnote{$G_1 = G_2$ is allowed in this paper.}, a binary relation $R \subseteq V_1 \times V_2$ is a simulation if, for $\forall (u,v) \in R$, it satisfies that:
\begin{enumerate}[(1)] \setlength{\itemsep}{0cm}
\item $\ell_1(u) = \ell_2(v)$,
\item $\!\forall u' \in N_{G_1}^+(u)$, $\!\exists v' \in N_{G_2}^+(v)\!$ such that (s.t.) $(u', v') \in R$,
\item $\forall u'' \in N_{G_1}^-(u)$, $\exists v'' \in N_{G_2}^-(v)$ s.t. $(u'', v'') \in R$.
\end{enumerate}
\end{definition}
\vspace{-0.5em}
For clarity, $u$ is always a node from $V_1$, and $v$ is always a node from $V_2$ in this paper.
{\color{black}The variants of simulation are based on \refdef{simulation} but have additional constraints. \refdef{simulation_variants} below provides a summary of several common simulation variants. However, one exceptional variant, \emph{strong simulation} \cite{DBLP:journals/pvldb/MaCFHW11}, must be discussed first. Strong simulation is designed for subgraph pattern matching. In brief, strong simulation exists between the query graph $Q$ and data graph $G$ if a subgraph $G[v,\delta_Q]$ of $G$ satisfies the following criteria: (1) a simulation relation $R$ exists between $Q$ and $G[v,\delta_Q]$; and (2) $R$ contains node $v$ and all nodes in $Q$. Note that the subgraph $G[v,\delta_Q]$ is an induced subgraph that includes all nodes whose shortest distances to $v$ in $G$ are not larger than the diameter $\delta_Q$ of $Q$. In essence, strong simulation performs simulation (\refdef{simulation}) multiple times, and so does not need to be specifically defined or further discussed.
\refdef{simulation_variants}, which follows, shows how $\chi$-simulation summarizes the base definition of simulation while also covering its variants. The definition includes two notable variants: \emph{degree-preserving simulation} \cite{DBLP:journals/pvldb/SongGCW14} and \emph{bisimulation} \cite{milner1989communication}.
}
\begin{definition} \label{def:simulation_variants}
\textsc{($\chi$-simulation)} A simulation relation $R$ by \refdef{simulation} is further a $\chi$-simulation relation, where $\chi$ corresponds to one of the following:
\begin{itemize}[noitemsep]
\item \textbf{Simulation} ($\chi = \kw{s}$): no extra constraint;
{\color{black}
\item \textbf{Degree-preserving simulation} ($\chi = \kw{dp}$): if $(u, v) \in R$, (1) there exists an \textbf{injective} function $\lambda_1: N_{G_1}^+(u) \to N_{G_2}^+(v)$, s.t. $\forall u' \in N_{G_1}^+(u)$, $(u', \lambda_1(u')) \in R$; and (2) there exists an \textbf{injective} function $\lambda_2: N_{G_1}^-(u) \to N_{G_2}^-(v)$, s.t. $\forall u'' \in N_{G_1}^-(u)$, $(u'', \lambda_2(u'')) \in R$;
\item \textbf{Bisimulation} ($\chi$ = \kw{b}): if $(u, v) \in R$, (1) $\forall v' \in N^+(v)$, $\exists u' \in N^+(u)$ s.t. $(u', v') \in R$; and (2) $\forall v'' \in N^-(v)$, $\exists u'' \in N^-(u)$ s.t. $(u'', v'') \in R$.
}
\end{itemize}
{\color{black}Node $u$ is $\chi$-simulated by node $v$ (or $v$ $\chi$-simulates $u$), denoted as $u \rightsquigarrow^{\chi} v$, if there is a $\chi$-simulation relation $R$ with $(u,v) \in R$. Specifically, if $u \rightsquigarrow^{\chi} v$ implies $v \rightsquigarrow^{\chi} u$ (i.e., $\chi = \kw{b}$), we {\color{black}may use} $u \sim^{\chi} v$ directly.
}
\end{definition}
\begin{example} \label{ex:simulation_variants}
Recall that in \refex{simulation_definition_examples}, $u$ is simulated by nodes $v_2$, $v_3$ and $v_4$ in \reffig{example_graphs}. However, $u$ cannot be $\kw{dp}$-simulated by $v_2$, because $u$ has two hexagonal neighbors and $v_2$ does not, which contradicts the requirement of an ``injective function''. Analogously, $u$ cannot be $\kw{b}$-simulated by $v_3$, since $v_3$'s square neighbor fails to simulate any neighbor of $u$.
\end{example}
{\color{black}Inspired by the constraints of $\kw{dp}$- and $\kw{b}$-simulations, we find that a $\chi$-simulation may have the following properties: (1) \emph{injective neighbor mapping} (or IN-mapping for short), i.e., $\forall (u,v) \in R$, two different neighbors (either in or out) of $u$ cannot be mapped to the same neighbor of $v$; and (2) \emph{converse invariant}, i.e., $R^{-1} = \{(v,u) \mid (u, v) \in R\}$ is a $\chi$-simulation whenever $R$ is a $\chi$-simulation.
By \refdef{simulation_variants}, $\kw{dp}$-simulation has the property of IN-mapping, while $\kw{b}$-simulation has converse invariant. The properties of the existing simulation variants are listed in \reffig{properties_of_variants}(a).}
\begin{remark}
\color{black}
Given a $\chi$-simulation with the property of converse invariant, if $u\rightsquigarrow^{\chi} v$, then $v\rightsquigarrow^{\chi} u$ must hold. Therefore, in \refdef{simulation_variants}, we have $u \rightsquigarrow^{\kw{b}} v$ implies $v \rightsquigarrow^{\kw{b}} u$.
\end{remark}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figures/properties_of_simulation.pdf}
\topcaption{The summarization of all simulation variants.} \label{fig:properties_of_variants}
\end{figure}
\stitle{A New Variant: Bijective Simulation.} In compiling \reffig{properties_of_variants}(a), we realized that no existing simulation variant has both IN-mapping and converse invariance. This motivated us to define one, called \emph{bijective simulation}; the definition follows.
{\color{black}
\begin{definition} \label{def:bijectivesimulation}
\textsc{(Bijective Simulation)} A simulation relation $R \subseteq V_1 \times V_2$ is a bijective simulation ($\chi = \kw{bj}$) if $R$ is a degree-preserving simulation and the functions $\lambda_1$ and $\lambda_2$, as defined in \refdef{simulation_variants}, are additionally required to be surjective (i.e., $\lambda_1$ and $\lambda_2$ are bijective). Bijective simulation is incorporated into $\chi$-simulation (\refdef{simulation_variants}) by letting $\chi = \kw{bj}$.
\end{definition}}
Compared to $\kw{dp}$-simulation, $\kw{bj}$-simulation requires the mapping functions of the neighbors to be bijective. In other words, the neighbors in a $\kw{bj}$-simulation must be mapped one-to-one. It is not hard to verify that $\kw{bj}$-simulation has the properties of both IN-mapping and {\color{black}converse invariance.}
\reffig{properties_of_variants}(b) shows the strictness among the simulation variants, where a ``more-strict'' edge from a $\chi_1$- to a $\chi_2$-simulation means that the $\chi_1$-simulation must also be a $\chi_2$-simulation. Such strictness among the variants can also be inferred from \reffig{example_graphs}. More specifically, given $u \rightsquigarrow^{\kw{bj}} v_4$, it holds that $u \rightsquigarrow^{\chi} v_4$, $\forall \chi \in \{\kw{s}, \kw{b}, \kw{dp}\}$.
\stitle{Summary.} In this paper, we consider \emph{four simulation variants in total}: simulation ($\chi = \kw{s}$), degree-preserving simulation (\kw{dp}), bisimulation (\kw{b}), and bijective simulation (\kw{bj}). Through a systematic study of the existing simulation variants, we have discovered bijective simulation as a new variant. We believe that our work will further inspire more variants.
Hereafter, we may omit the $\chi$ in $\chi$-simulation, referring simply to simulation. To avoid ambiguity, we refer to the simulation relation in \refdef{simulation} as \emph{simple simulation}.
\section{Fractional Simulation} \label{sec:fractionalsimulation}
{\color{black}To quantify the degree to which one node simulates the other node, we now set out the properties fractional $\chi$-simulation should satisfy and the framework for its computation.}
\subsection{The Properties of Fractional Simulation}
\label{fractional_simulation_properties}
\begin{definition} \label{def:fractional_simulation_properties}
\textsc{(Fractional $\chi$-Simulation)} Given graphs $G_1=(V_1, E_1, \ell_1)$ and $G_2=(V_2, E_2, \ell_2)$, and two nodes $u \in V_1$ and $v \in V_2$, the fractional $\chi$-simulation of $u$ and $v$ quantifies the degree to which $u$ is approximately $\chi$-simulated by $v$, denoted as $\kw{FSim}_\chi(u, v)$. $\kw{FSim}_\chi(u, v)$ should satisfy:
\begin{enumerate}[P1.] \setlength{\itemsep}{0cm}
\item Range: $0 \leq \kw{FSim}_\chi(u, v) \leq 1$;
{\color{black}\item Simulation definiteness: $u$ is $\chi$-simulated by $v$, i.e., $u {\rightsquigarrow}^{\chi} v$, \textbf{if and only if} $\kw{FSim}_\chi(u, v) = 1$;
\item $\chi$-conditional symmetry: if the $\chi$-simulation has the property of converse invariant (i.e., $u \rightsquigarrow^{\chi} v$ implies $v \rightsquigarrow^{\chi} u$), then $\kw{FSim}_\chi(u, v)$ should be symmetric, i.e., $\kw{FSim}_\chi(u, v) = \kw{FSim}_\chi(v, u)$.}
\end{enumerate}
A computation scheme $\kw{FSim}_\chi$ is \textbf{well-defined} for fractional $\chi$-simulation, if for $\forall (u, v) \in V_1 \times V_2$, $\kw{FSim}_\chi(u, v)$ satisfies all three of the above properties.
\end{definition}
Property 1 is a common practice. Property 2 bridges fractional simulation and the corresponding simulation variant: the ``only if'' direction reflects the fact that $u$ being $\chi$-simulated by $v$ represents the maximum degree of simulation, while the ``if'' direction ensures that a score of 1 implies that the simulation indeed holds. {\color{black}Property 3 means that the variants with converse invariance (i.e., bisimulation and bijective simulation) can be used as similarity measures.}
\subsection{Framework to Compute Fractional Simulation} \label{sec:fsim_framework}
{\color{black}We propose the $\kw{FSim}_\chi$ framework to compute the fractional $\chi$-simulation scores for all pairs of nodes across two graphs. $\kw{FSim}_\chi$ is a non-trivial framework because it needs to account for the properties of all simulation variants as well as convergence in general. Note that hereafter, we use $\kw{FSim}_\chi$ interchangeably to denote the framework and a fractional $\chi$-simulation value.
Recall from \refdef{simulation_variants} that a node $u$ is $\chi$-simulated by node $v$ if they have the same label, and their neighbors are $\chi$-simulated accordingly. Thus, we have divided the computation of $\kw{FSim}_\chi(u, v)$ into three parts as follows: }
\begin{equation} \label{eq:fractionalsimulation}
\small
\begin{split}
{\kw{FSim}_\chi}(u,v) = \underbrace{w^+ \text{ }\kw{FSim}_\chi(N_{G_1}^+(u), N_{G_2}^+(v))}_{\textbf{score by out-neighbors}} +\underbrace{w^- \text{ }\kw{FSim}_\chi(N_{G_1}^-(u),N_{G_2}^-(v))}_{\textbf{score by in-neighbors}} + \underbrace{(1-{w^+}-{w^-}) \text{ }\mathcal{L}(u,v)}_{\textbf{score by node label}},
\end{split}
\end{equation}
where $\!\kw{FSim}_\chi(N_{G_1}^+(u), N_{G_2}^+(v))\!$ and $\kw{FSim}_\chi(N_{G_1}^-(u), N_{G_2}^-(v))$ denote the scores contributed by the out- and in-neighbors of $u$ and $v$, respectively. $w^+$ and $w^-$ are weighting factors that satisfy $0 \leq w^+ < 1$, $0 \leq w^- < 1$ and $0 < w^+ + w^- < 1$; and $\mathcal{L}(\cdot)$ is a label function that evaluates the similarity of two nodes' labels. Specifically, if there is no prior knowledge about the labels, $\mathcal{L}(\cdot)$ can be instantiated with a wide variety of string similarity functions, such as an indicator function, normalized edit distance, or Jaro-Winkler similarity. Alternatively, the user could specify or learn the similarities of the label semantics. Since the latter case is beyond the scope of this paper, in the following, we assume no prior knowledge about the labels.
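For illustration, the sketch below gives two possible instantiations of $\mathcal{L}(\cdot)$, an indicator function and a normalized edit-distance similarity; both return values in $[0,1]$, and any other string similarity with this range could be plugged in instead.
\begin{verbatim}
def indicator_sim(a, b):
    """L(u, v) = 1 iff the two labels are identical, else 0."""
    return 1.0 if a == b else 0.0

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[m][n]

def edit_sim(a, b):
    """Normalized edit-distance similarity in [0, 1]."""
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))
\end{verbatim}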
In \refeq{fractionalsimulation}, we need to compute the $\chi$-simulation score between two node sets $S_1$ and $S_2$ (the respective neighbors of each node pair). To do so, we derive:
\begin{equation} \label{eq:set_score}
\small
\kw{FSim}_\chi(S_1, S_2) = \frac{\sum_{(x, y) \in \mathcal{M}_{\chi}(S_1, S_2)} {\kw{FSim}_\chi}(x,y)}{\Omega_{\chi}(S_1, S_2)},
\end{equation}
where $\Omega_{\chi}$ denotes the normalizing operator that returns a positive integer w.r.t. $S_1$ and $S_2$. $\mathcal{M}_{\chi}$ denotes the mapping operator, which returns a set of node pairs defined as:
\begin{align*}
\small
\mathcal{M}_{\chi}(S_1, S_2; f_\chi) = \{(x, y) \;|\; x \in X, y = f_\chi(x) \in Y\},
\end{align*}
where $X \subseteq S_1 \cup S_2$ and $Y \subseteq S_1 \cup S_2$. $f_\chi: X \to Y$ is a \emph{function} that is subject to certain constraints regarding the simulation variant $\chi$. These constraints include the domain and codomain of $f_\chi$, and the properties that $f_\chi$ should satisfy (e.g., that $f_\chi$ is an injective function). Note that, for clear presentation, $f_\chi$ is always omitted from the mapping operator. How $\mathcal{M}_{\chi}$ and $\Omega_{\chi}$ are configured to deploy different simulation variants for the framework is demonstrated in \refsec{configurations_of_all_variants}.
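To make \refeq{fractionalsimulation} and \refeq{set_score} concrete, the following sketch (a simplified Python rendering under our own naming; the mapping and normalizing operators are passed in as parameters, and empty neighbor sets are treated as contributing a score of 0) evaluates one instance of $\kw{FSim}_\chi(u,v)$, reusing the graph sketch above.
\begin{verbatim}
def set_score(S1, S2, prev, mapping_op, norm_op):
    """FSim(S1, S2) as in Eq. (2): mapped-pair scores over a normalizer."""
    pairs = mapping_op(S1, S2, prev)   # M_chi(S1, S2)
    if not pairs:
        return 0.0                     # empty neighbor sets contribute 0
    total = sum(prev.get((x, y), 0.0) for (x, y) in pairs)
    return total / norm_op(S1, S2)     # Omega_chi(S1, S2)

def fsim_update(u, v, G1, G2, prev, mapping_op, norm_op,
                label_sim, w_plus=0.4, w_minus=0.4):
    """One evaluation of Eq. (1) for the node pair (u, v)."""
    out_part = set_score(G1.out[u], G2.out[v], prev, mapping_op, norm_op)
    in_part = set_score(G1.inn[u], G2.inn[v], prev, mapping_op, norm_op)
    lab_part = label_sim(G1.label[u], G2.label[v])
    return (w_plus * out_part + w_minus * in_part
            + (1.0 - w_plus - w_minus) * lab_part)
\end{verbatim}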
\begin{table}
\centering
{\color{black}
\topcaption{Results of whether $u$ is simulated by $v_i$ ($i \in \{1,2,3,4\}$) in \reffig{example_graphs} regarding each simulation variant ($\checkmark$ for yes, $\times$ for no) and the corresponding fractional scores (in bracket)} \label{tab:fracsim_scores}
\begin{tabular}{|c|c|c|c|c|} \hline
Variants & $(u,v_1)$ & $(u,v_2)$ & $(u,v_3)$ & $(u,v_4)$ \\ \hline
$\kw{s}$-simulation & $\times$ (0.85) & $\checkmark$ (1.00) & $\checkmark$ (1.00) & $\checkmark$ (1.00)\\ \hline
\makecell{$\kw{dp}$-simulation} & $\times$ (0.72) & $\times$ (0.85) & $\checkmark$ (1.00) & $\checkmark$ (1.00)\\ \hline
$\kw{b}$-simulation & $\times$ (0.78) & $\checkmark$ (1.00) & $\times$ (0.93) & $\checkmark$ (1.00) \\ \hline
$\kw{bj}$-simulation & $\times$ (0.72) & $\times$ (0.81) & $\times$ (0.94) & $\checkmark$ (1.00) \\ \hline
\end{tabular}}
\end{table}
\begin{example}
\color{black}
\reftab{fracsim_scores} shows the $\kw{FSim}_{\chi}$ scores for some of the node pairs in \reffig{example_graphs}, based on the definition of fractional $\chi$-simulation (\refdef{fractional_simulation_properties}) and the $\kw{FSim}_{\chi}$ framework (\refeq{fractionalsimulation}). We can observe that: (1) a pair $(u,v)$ where $u$ is not, but very nearly, simulated by $v$ has a high $\kw{FSim}_{\chi}$ score, e.g., $\kw{FSim}_\kw{bj}(u,v_3)$; (2) when $u$ is $\chi$-simulated by $v$, $\kw{FSim}_{\chi}(u,v)$ reaches the maximum value of 1, e.g., $\kw{FSim}_\kw{b}(u,v_4)$, which conforms with the well-definiteness of $\kw{FSim}_{\chi}$.
\end{example}
According to \refeq{fractionalsimulation}, the $\!\kw{FSim}_\chi\!$ score between two nodes depends on the $\kw{FSim}_\chi$ scores of their neighbors. This naturally leads to an iterative computation scheme. This iterative process is detailed in the next section along with how to guarantee its convergence.
\subsection{Iterative Computation} \label{sec:iterative_computation}
Let ${\kw{FSim}}_\chi^{k}(u,v)$ denote the $\chi$-simulation score of nodes $u$ and $v$ in the $k$-th iteration ($k \geq 1$), and let $\mathcal{M}_\chi^k$ and $\Omega_\chi^k$ denote the mapping and normalizing operators applied in that iteration.
\stitle{Initialization.} As all simulation variants require an equivalence of node labels (\refdef{simulation} and \refdef{simulation_variants}), the $\kw{FSim}_{\chi}$ score is initially set to ${\kw{FSim}}_\chi^{0}(u,v) = \mathcal{L}(u, v)$ by default unless otherwise specified. When using such initialization, we further require that $\mathcal{L}(u,v) = 1$ if, and only if, $\ell_1(u) = \ell_2(v)$, in order to guarantee that $\kw{FSim}_{\chi}$ is well-defined (\refdef{fractional_simulation_properties}).
\stitle{Iterative Update.} According to \refeq{fractionalsimulation} and \refeq{set_score}, the simulation score in the $k$-th iteration for a node pair $(u,v)$ regarding $\chi$ is updated via the scores of the previous iteration as:
\begin{equation} \label{eq:fractional_simulation_computation}
\small
\begin{split}
{\kw{FSim}}_{\chi}^{k}(u, v) &= \frac{w^+\sum_{(x, y) \in \mathcal{M}^k_{\chi}(N_{G_1}^+(u), N_{G_2}^+(v))} {\kw{FSim}}_{\chi}^{k-1}(x,y)}{\Omega^k_{\chi}(N_{G_1}^+(u), N_{G_2}^+(v))}
+ \frac{w^-\sum_{(x, y) \in \mathcal{M}^k_{\chi}(N_{G_1}^-(u), N_{G_2}^-(v))} {\kw{FSim}}_{\chi}^{k-1}(x,y)}{\Omega^k_{\chi}(N_{G_1}^-(u), N_{G_2}^-(v))} \\
&+ (1-{w^+}-{w^-}) \mathcal{L}(u,v)
\end{split}
\end{equation}
\stitle{Convergence.} Below we show what conditions the mapping and normalizing operators should satisfy to guarantee \refeq{fractional_simulation_computation} converges. {\color{black}Specifically, the computation is considered to converge if $|\kw{FSim}_{\chi}^{k+1}(u,v) - \kw{FSim}_{\chi}^{k}(u,v)| < \epsilon$ for $\forall (u, v) \in V_1 \times V_2$, in which $\epsilon$ is a small positive value. Note that the simulation subscript $\chi$ is omitted in the following theorem as it applies to all simulation variants.}
\begin{theorem} \label{thm:convergence}
The computation in \refeq{fractional_simulation_computation} is guaranteed to converge if in every iteration $k$, the following conditions are satisfied for any two node sets $S_1$ and $S_2$ in the mapping and normalizing operators:
\begin{enumerate}[(C1)] \setlength{\itemsep}{0cm}
\item $|\mathcal{M}^{k+1}(S_1, S_2)| = |\mathcal{M}^k(S_1, S_2)|$, and $\Omega^{k+1}(S_1, S_2) = \Omega^{k}(S_1, S_2)$.
\item $|\mathcal{M}^k(S_1, S_2)| \leq \Omega^k(S_1, S_2)$.
\item Subject to the function $f$, $\mathcal{M}^k(S_1, S_2)$ returns node pairs such that
\begin{align*}
\sum_{(x, y) \in \mathcal{M}^k(S_1, S_2)} {\kw{FSim}}^{k-1}(x,y) \text{ is maximized.}
\end{align*}
\end{enumerate}
\end{theorem}
\begin{proof}
Let $\delta^k(u,v) = |\kw{FSim}^k(u,v) - \kw{FSim}^{k-1}(u,v)|$ and $\Delta^k = \max_{(u,v)}\delta^k(u,v)$. To prove this theorem, we must show that $\Delta^k$ decreases monotonically, i.e., $\Delta^{k+1} < \Delta^{k}$.
Let $W^k(S_1, S_2) = \sum_{(x,y) \in \mathcal{M}^k(S_1,S_2)} \kw{FSim}^{k-1}(x,y)$. As the size of the mapping operator and the value of normalizing operator between $S_1$ and $S_2$ do not vary with $k$ (C1), we simply write $|\mathcal{M}(S_1, S_2)|$ and $\Omega(S_1, S_2)$ by dropping the superscript. Then, we have
\begin{equation*} \label{eq:two_eq1}
\small
\begin{split}
W^{k+1}(S_1, S_2) & \geq \sum_{(x,y)\in \mathcal{M}^{k}(S_1, S_2)} \kw{FSim}^{k}(x,y) \text{ (by C3)} \\
& \geq W^{k}(S_1, S_2) - |\mathcal{M}(S_1, S_2)|\Delta^k \text{ (by C1) }
\end{split}
\end{equation*}
Similarly, {\small $ W^{k}(S_1, S_2) \geq W^{k+1}(S_1, S_2) - |\mathcal{M}(S_1, S_2)|\Delta^k$} can be derived, and we immediately have,
\begin{equation} \small \label{eq:two_eq_conclusion}
\small
|W^{k+1}(S_1, S_2)-W^{k}(S_1, S_2)| \leq \Omega(S_1,S_2)\Delta^k \text{ (by C2) }
\end{equation}
Then,
\begin{equation} \label{eq:convergence_proof}
\small
\begin{split}
\delta^{k+1}(u,v) &\leq (w^+ + w^-)\Delta^k \text{ (by \refeq{two_eq_conclusion}) }\\
& < \Delta^k \text{ (by $w^+ + w^- < 1$) }
\end{split}
\end{equation}
Thus, $\Delta^{k+1}< \Delta^k$, and the computation converges.
\end{proof}
\begin{corollary} \label{coro:converge_speed}
The computation in \refeq{fractional_simulation_computation} converges within $\lceil\log_{(w^+ + w^-)}\epsilon\rceil$ iterations.
\end{corollary}
\begin{proof}
According to \refeq{convergence_proof}, we have $\Delta^{k+1}\leq (w^+ + w^-)\Delta^k$. As $\Delta^{0}$ cannot exceed 1, the corollary holds.
\end{proof}
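For instance, with $w^+ = w^- = 0.4$ (so $w^+ + w^- = 0.8$) and $\epsilon = 0.01$, the bound evaluates to $\lceil\log_{0.8} 0.01\rceil = 21$ iterations.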
{
{\color{black} We now discuss the need for the three conditions in \refthm{convergence}. Given node sets $S_1$ and $S_2$, C1 requires that the value of the normalizing operator $\Omega_\chi(S_1, S_2)$ and the number of node pairs in $\mathcal{M}_\chi(S_1, S_2)$ (i.e., $|\mathcal{M}_\chi(S_1, S_2)|$) remain unchanged throughout the iterations. C2 requires that $|\mathcal{M}_\chi(S_1, S_2)|$ not exceed $\Omega_\chi(S_1, S_2)$, which guarantees the range property in \refdef{fractional_simulation_properties}. C3 requires that $\mathcal{M}_\chi$ include the pairs of neighbors that maximize the sum of their $\kw{FSim}_{\chi}$ scores in the previous iteration. Intuitively, C3 maximizes the contributions of neighbors and is essential to satisfy simulation definiteness (property 2 in \refdef{fractional_simulation_properties}). Such a mapping operator is accordingly called a \emph{maximum mapping operator}, and will be applied by default in the following. }
}
\subsection{Computation Algorithm} \label{sec:computing_algorithm}
\refalg{fractional_simulation_computation} outlines the process for computing $\kw{FSim}_{\chi}$. The computation begins by initializing a hash map $H_{\kw{c}}$\xspace to maintain the initial $\kw{FSim}_{\chi}$ scores of candidate node pairs (Line~\ref{initialization}). Note that not all $|V_1|\times|V_2|$ node pairs need to be maintained, as explained in \refsec{configurations_of_all_variants}.
Then, the scores of the node pairs in $H_{\kw{c}}$\xspace are updated iteratively until convergence (Lines~\ref{update_start}-\ref{update_end}). In Line~\ref{return}, the hash map is returned with the results.
\stitle{Parallelization.} The most time-consuming part of \refalg{fractional_simulation_computation} is the iterative update in Lines~\ref{update_start} through \ref{update_end}. This motivated us to consider accelerating the computation with parallelization by using multiple threads to compute different node pairs simultaneously. In this implementation, the simulation scores of the previous iteration are maintained in $H_{\kw{p}}$, which means the computations of different node pairs in Lines~\ref{update-out-neighbor} and \ref{update-in-neighbor} are independent of each other and can be completed in parallel without any conflicts.
We simply round-robin the node pairs in $H_{\kw{c}}$ to distribute the load to all available threads, which achieves satisfactory scalability in the experiment (\reffig{time_vary_thread}).
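The scheduling pattern can be sketched as follows (an illustration in Python rather than the authors' C++ implementation; \texttt{update\_one} stands for the per-pair update of \refeq{fractional_simulation_computation}): node pairs are distributed round-robin across threads, and every thread reads only the frozen previous-iteration map, so there are no write conflicts.
\begin{verbatim}
from concurrent.futures import ThreadPoolExecutor

def parallel_iteration(pairs, H_prev, update_one, num_threads=4):
    """One iteration over a list of node pairs.

    update_one(u, v, H_prev) returns the new score of (u, v); every
    thread reads only the frozen map H_prev, so updates are conflict-free.
    """
    chunks = [pairs[i::num_threads] for i in range(num_threads)]  # round-robin

    def work(chunk):
        return {(u, v): update_one(u, v, H_prev) for (u, v) in chunk}

    H_next = {}
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        for partial in pool.map(work, chunks):
            H_next.update(partial)
    return H_next
\end{verbatim}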
\begin{algorithm}
\SetAlgoVlined
\SetFuncSty{textsf}
\SetArgSty{textsf}
\small
\caption{The algorithm of computing $\kw{FSim}_{\chi}$} \label{alg:fractional_simulation_computation}
\small
\Input {Graphs $G_1 = (V_1, E_1, \ell_1)$, $G_2 = (V_2, E_2, \ell_2)$, weighting factors ${w}^+, {w}^-$.}
\Output {$\kw{FSim}_{\chi}$ Scores.}
\State{$H_{\kw{c}} \leftarrow$ \textbf{Initializing($G_1$, $G_2$, ${w}^+$, ${w}^-$)};} \label{initialization} \\
\State{$H_{\kw{p}} \leftarrow H_{\kw{c}}$;} \\
\While{not converged}{ \label{update_start}
\ForEach{$(u,v) \in H_{\kw{c}}$} {
\State{$H_{\kw{c}}[(u,v)] \leftarrow (1-w^+-w^-)\mathcal{L}(u, v)$;} \label{update-label-sim} \\
\ForEach{$(x,y) \in \mathcal{M}_\chi (N_{G_1}^+(u), N_{G_2}^+(v))$} {
\State{$H_{\kw{c}}[(u,v)] \leftarrow H_{\kw{c}}[(u,v)] + \frac{w^+H_{\kw{p}}[(x,y)]}{\Omega_{\chi}(N_{G_1}^+(u), N_{G_2}^+(v))}$;} \label{update-out-neighbor}\\
}
\ForEach{$(x',y') \in \mathcal{M}_\chi (N_{G_1}^-(u), N_{G_2}^-(v))$} {
\State{$H_{\kw{c}}[(u,v)] \leftarrow H_{\kw{c}}[(u,v)] + \frac{w^-H_{\kw{p}}[(x',y')]}{\Omega_{\chi}(N_{G_1}^-(u), N_{G_2}^-(v))}$;} \label{update-in-neighbor}\\
}
}
\State{$H_{\kw{p}} \leftarrow H_{\kw{c}}$;} \label{update_end}\\
}
\State{\Return $H_{\kw{c}}$.} \label{return}
\end{algorithm}
\stitle{Upper-Bound Updating.} According to the range property (\refdef{fractional_simulation_properties}) and the computation in \refeq{fractional_simulation_computation}, there exists an upper-bound on the $\kw{FSim}_{\chi}$ value of each node pair, which is computed via:
\begin{equation} \label{eq:upper_bound_label_constrained_mapping}
\small
\begin{split}
{\kw{FSim}}_\chi(u,v) & \leq \overline{\kw{FSim}}_\chi(u,v) \\
& = \lambda^+(u, v) + \lambda^-(u, v) + (1-w^+-w^-) \mathcal{L}(u,v),
\end{split}
\end{equation}
where $\lambda^\kw{s} = \frac{w^\kw{s}|\mathcal{M}_\chi(N_{G_1}^\kw{s}(u), N_{G_2}^\kw{s}(v))|}{\Omega^\kw{s}_{\chi}(N_{G_1}^\kw{s}(u),N_{G_2}^\kw{s}(v))}$, for $\kw{s} \in \{+, -\}$. Accordingly, if the upper bound of a certain node pair $(u, v)$ is relatively small (smaller than a given threshold $\beta$), it is expected to make a limited contribution to the scores of others. Thus, we can skip computing (and maintaining) $\kw{FSim}_\chi(u, v)$, and use an approximated value $\alpha\overline{\kw{FSim}}_\chi(u,v)$ ($0 < \alpha < 1$ is a given small constant) instead when needed. The implementation of upper-bound updating based on \refalg{fractional_simulation_computation} is as follows: (1) in Line~\ref{initialization}, $H_{c}$ only maintains the node pairs that are guaranteed to be larger than $\beta$; (2) in Lines~\ref{update-out-neighbor} and \ref{update-in-neighbor}, if $(x, y)$ (or $(x',y')$) is not in $H_{p}$, use $\alpha\overline{\kw{FSim}}_\chi(x,y)$ (or $\alpha\overline{\kw{FSim}}_\chi(x',y')$) instead.
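A sketch of the upper-bound computation and the pruning-aware lookup is given below (our illustration; \texttt{mapping\_size} and \texttt{norm\_op} stand for $|\mathcal{M}_\chi(\cdot,\cdot)|$ and $\Omega_\chi(\cdot,\cdot)$, respectively).
\begin{verbatim}
def upper_bound(u, v, G1, G2, mapping_size, norm_op, label_sim,
                w_plus=0.4, w_minus=0.4):
    """Upper bound of FSim(u, v), following Eq. (4)."""
    def part(S1, S2, w):
        if not S1 or not S2:
            return 0.0
        return w * mapping_size(S1, S2) / norm_op(S1, S2)
    return (part(G1.out[u], G2.out[v], w_plus)
            + part(G1.inn[u], G2.inn[v], w_minus)
            + (1.0 - w_plus - w_minus) * label_sim(G1.label[u], G2.label[v]))

def lookup_score(pair, H_prev, ub_of, alpha=0.0):
    """Maintained score if present; otherwise alpha times the upper bound."""
    return H_prev[pair] if pair in H_prev else alpha * ub_of(pair)
\end{verbatim}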
\section{Configuring the Framework to Quantify Different Simulation Variants} \label{sec:configurations_of_all_variants}
In this section, we show how to configure the mapping and normalizing operators in \refeq{set_score}, such that the computation of $\kw{FSim}_\chi$ converges, and $\kw{FSim}_\chi$ remains well-defined (\refdef{fractional_simulation_properties}) for all simulation variants.
\subsection{Configurations of Simple Simulation} \label{sec:simple_simulation}
\stitle{Fractional \kw{s}-simulation.}
Given two node sets $S_1$ and $S_2$, $\mathcal{M}_\kw{s}$ and $\Omega_\kw{s}$ are the operators for implementing fractional $\kw{s}$-simulation according to \refdef{fractional_simulation_properties} as follows:
\begin{equation} \label{eq:match_simulation}
\small
\mathcal{M}_{\kw{s}}(S_1, S_2)= \{(x, y)| \forall x \in S_1, y = f_\kw{s}(x) \in S_2\},
\end{equation}
where $f_\kw{s}: S_1 \to S_2$ is a function subject to the label constraint $\mathcal{L}(x, f_\kw{s}(x)) \geq \theta$, and
\begin{equation} \label{eq:denom_simulation}
\small
\Omega_{\kw{s}}(S_1, S_2)= |S_1|.
\end{equation}
\begin{remark} \label{rem:label_constraint_mapping} \textbf{Label-constrained Mapping.}
Analogous to the initialization of $\kw{FSim}_\chi$ (\refsec{iterative_computation}), a label constraint is added when mapping neighbors. $\theta$ is a constant given by the user to control the strictness of the mapping. When $\theta = 0$, the nodes can be mapped arbitrarily. When $\theta = 1$, only nodes of the same label can be mapped. It is obvious that a larger $\theta$ leads to faster computation. As a practical guide to setting $\theta$, \refsec{sensitivity_analysis} includes a sensitivity analysis of $\theta$ and \refsec{efficiency} provides an efficiency analysis. In the following, the label constraint is applied in the mapping operator by default and is thus omitted from the notations for clear presentation.
\end{remark}
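As an illustration, the operators of \refeq{match_simulation} and \refeq{denom_simulation}, combined with the label-constrained maximum mapping, can be sketched as follows (a simplified Python rendering; a node of $S_1$ without any admissible partner is simply skipped and thus contributes 0, which is our own simplification).
\begin{verbatim}
def mapping_s(S1, S2, prev, labels1, labels2, label_sim, theta=0.0):
    """M_s(S1, S2): map every x in S1 to its best-scoring partner y in S2
    among the candidates with L(x, y) >= theta (maximum mapping)."""
    pairs = []
    for x in S1:
        candidates = [y for y in S2
                      if label_sim(labels1[x], labels2[y]) >= theta]
        if candidates:   # x without an admissible partner contributes 0
            best = max(candidates, key=lambda y: prev.get((x, y), 0.0))
            pairs.append((x, best))
    return pairs

def omega_s(S1, S2):
    """Omega_s(S1, S2) = |S1|."""
    return len(S1)
\end{verbatim}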
\comment{
\begin{figure}
\centering
\includegraphics[width=0.78\linewidth]{figures/function_of_label_constrained_mapping_aline_2.pdf}
\topcaption{When computing $\kw{FSim}_\kw{s}(u,v_3)$, the searching space of the green pentagon node (a neighbor of $u$ outlined with red) regarding the $v_3$'s neighbor set.} \label{fig:function_of_label_constrained_mapping}
\end{figure}
{\color{red}According to \refdef{simulation}, two simulated nodes must have the same node label. Therefore, we add the constraint $\mathcal{L}(x, f_\kw{s}(x)) \geq \theta$ in the mapping operator in \refeq{match_simulation}. We use the following example to illustrate the affect of $\mathcal{L}(x, f_\kw{s}(x)) \geq \theta$.}
\begin{example} \label{ex:label_constrained_mapping}
In \reffig{function_of_label_constrained_mapping}, when calculating $\kw{FSim}_\kw{s}(u, v_3)$, we need to calculate the mapped node for each node in $u's$ neighbors. Take the green pentagon neighbor (outlined with red) of $u$ for an example, if $\theta = 0$, the searching space is all the neighbors of $v_3$ (light shading). When $\theta$ is set to 0.5, the search space will shrink to the darker shading area, which contains only half of the neighbors.
\end{example}
}
{\stitle{Convergence.} \color{black}It is obvious that $|\mathcal{M}_\kw{s}(S_1, S_2)| \leq |S_1| = \Omega_\kw{s}(S_1, S_2)$, and neither quantity changes across iterations, which satisfies C1 and C2 in \refthm{convergence}. As mentioned earlier, C3 is applied by default. Therefore, the convergence of $\kw{FSim}_\kw{s}$ is guaranteed.}
\stitle{Well-Definiteness.}
\refthm{frac_S_simulation} shows that $\kw{FSim}_\kw{s}$ is well-defined for fractional $\kw{s}$-simulation according to \refdef{fractional_simulation_properties}.
\begin{theorem} \label{thm:frac_S_simulation}
$\kw{FSim}_{\kw{s}}$ is well-defined for fractional \kw{s}-simulation.
\end{theorem}
\begin{proof}
We prove that $\kw{FSim}_{\kw{s}}$ satisfies all the properties in \refdef{fractional_simulation_properties}. P1 is easy to verify. It is unnecessary to verify P3 as \kw{s}-simulation {\color{black}does not have the converse-invariant property}. Below we prove P2. For brevity, we only consider out-neighbors in the proof; the proof with in-neighbors is similar.
We first prove that if $\kw{FSim}_{\kw{s}}(u,v)=1$, then $u \rightsquigarrow^{\kw{s}} v$. Based on \refeq{fractionalsimulation}, we have $\ell_1(u) = \ell_2(v)$, and we add $(u,v)$ into $R$ (initialized as $\emptyset$). For $\forall (x, y) \in \mathcal{M}_{\kw{s}}$, $\kw{FSim}_{\kw{s}}(x,y)=1$ and $\ell_1(x) = \ell_2(y)$. Then, we add these node pairs into $R$, i.e., $R = R \cup \mathcal{M}_{\kw{s}}$.
New node pairs can be added recursively. The process will terminate as $|R|$ increases and cannot exceed $|V_1|\times|V_2|$. One can verify that $R$ is a simulation.
We next prove that, for $\forall (u,v) \in R$, $\kw{FSim}_{\kw{s}}^k(u,v) = 1$ for any $k$, where $R$ is a simulation relation. Based on \refdef{simulation}, we define the mapping $\mathcal{M}$ between $N_{G_1}^+(u)$ and $N_{G_2}^+(v)$ as $\mathcal{M} = \{(u',v') | \forall u' \in {N}_{G_1}^+(u), (u',v') \in R\}$. The case of $k=0$ is easy to verify. Assume the theorem holds at $k-1$. For a node pair $(u,v) \in R$, any $(u',v') \in \mathcal{M}$ defined above satisfies $\kw{FSim}_{\kw{s}}^{k-1}(u',v') = 1$. Clearly, $\mathcal{M}$ is a mapping operator defined in \refeq{match_simulation}. Thus, $\kw{FSim}_{\kw{s}}^{k}(u,v) = 1$.
\end{proof}
\stitle{Computation.} The mapping operator $\mathcal{M}_{\kw{s}}$ (\refeq{match_simulation}) constrains that for $\forall (x, y) \in \mathcal{M}_{\kw{s}}$, $\mathcal{L}(x,y) \geq \theta$. As a result, node pairs with $\mathcal{L}(\cdot) < \theta$ will never contribute to the computation. Thus, only the node pairs with $\mathcal{L}(\cdot) \geq \theta$ need to be maintained (Line~\ref{initialization} in \refalg{fractional_simulation_computation}), which helps to reduce both the time and space complexity.
\stitle{Cost Analysis.} The time cost to compute $\kw{FSim}_{\kw{s}}(u,v)$ is dominated by the mapping operator. According to \refeq{match_simulation}, for $\forall x \in S_1$, we simply search for the $y \in S_2$ that maximizes $\kw{FSim}^{k-1}_\chi(x, y)$, which takes $O(|S_1||S_2|)$ time. Therefore, the time complexity of computing $\kw{FSim}_{\kw{s}}$ is $O(k|H|(D^+_{G_1} D^+_{G_2} + D^-_{G_1} D^-_{G_2}))$ with $|H|\leq |V_1| \times |V_2|$ and $k$ as the number of iterations. The space cost is $O({|H|})$, as the map of $\kw{FSim}_{\kw{s}}$ scores for the previous iteration needs to be stored.
\subsection{Discussions} \label{sec:discussion}
{\color{black}$\kw{FSim}_\chi$ is closely related to several well-known concepts, including node similarity measures (i.e., \kw{SimRank} and \kw{RoleSim}), $k$-bisimulation (a variant of bisimulation) and graph isomorphism. In this subsection, we discuss their relations to $\kw{FSim}_\chi$.}
\stitle{Relations to Similarity Measures.} The $\kw{FSim}_{\chi}$ framework (\refeq{fractional_simulation_computation}) can be configured to compute \kw{SimRank} \cite{DBLP:conf/kdd/JehW02} and \kw{RoleSim} \cite{DBLP:conf/kdd/JinLH11}. As both algorithms are applied to a single unlabeled graph, we let $G_1 = G_2$ and treat the graph as label-free.
To configure $\kw{FSim}_\chi$ for \kw{SimRank}, if $u = v$, we set $\kw{FSim}^0_\chi(u, v)$ to 1 in the initialization step, and 0 otherwise. In the update step, we set $w^+ = 0$, $\mathcal{M}(S_1,S_2) = S_1 \times S_2$, $\Omega(S_1,S_2) = |S_1||S_2|$ and $\mathcal{L}(u, v) = 0$ in \refeq{fractional_simulation_computation}. It is clear that with such configurations, $\kw{FSim}_{\chi}$ computes \kw{SimRank} scores for all node pairs in a manner following \cite{DBLP:conf/kdd/JehW02}. Note that the convergence of $\kw{FSim}_{\chi}$ is guaranteed, as the mapping and normalizing operators satisfy all conditions in \refthm{convergence}.
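As a concrete reading of this configuration, the sketch below follows the standard \kw{SimRank} recurrence \cite{DBLP:conf/kdd/JehW02} (our illustration, with the decay factor $C$ playing the role of $w^-$).
\begin{verbatim}
def simrank_update(u, v, G, prev, C=0.8):
    """SimRank as a configuration of the framework: w+ = 0, L = 0,
    M(S1, S2) = S1 x S2, Omega(S1, S2) = |S1||S2|, decay C = w-."""
    if u == v:
        return 1.0
    Iu, Iv = G.inn[u], G.inn[v]
    if not Iu or not Iv:
        return 0.0
    total = sum(prev.get((x, y), 0.0) for x in Iu for y in Iv)
    return C * total / (len(Iu) * len(Iv))
\end{verbatim}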
\kw{RoleSim} \cite{DBLP:conf/kdd/JinLH11} computes structural similarity with automorphic confirmation (i.e., the similarity of two isomorphic nodes is 1) on an undirected graph. Thus, we let the out-neighbors of each node maintain its undirected neighbors, and leave the in-neighbors empty. In the initialization step, we set $\kw{FSim}^0_\chi(u, v) = \frac{\min(d^+(u), d^+(v))}{\max(d^+(u), d^+(v))}$ for all node pairs following \cite{DBLP:conf/kdd/JinLH11}. In the update step, we set $w^- = 0$ and $\mathcal{L}(u, v) = 1$ for each node pair, and follow the settings of the mapping and normalizing operators of bijective simulation in \refeq{fractional_simulation_computation}. With such configurations, one can verify, according to \cite{DBLP:conf/kdd/JinLH11}, that $\kw{FSim}_\chi$ computes the axiomatic role similarity.
{\color{black}\stitle{Relation to $k$-bisimulation.} $k$-bisimulation \cite{DBLP:books/cu/12/AcetoIS12,DBLP:conf/bncod/LuoLFBHW13,DBLP:conf/cikm/LuoFHWB13,DBLP:conf/sac/HeeswijkFP16} is a type of approximate bisimulation. Given a graph $G(V,E,\ell)$ and an integer $k\geq 0$, node $u$ is simulated by node $v$ via $k$-bisimulation \cite{DBLP:conf/bncod/LuoLFBHW13} (i.e., $u$ and $v$ are $k$-bisimilar) if, and only if, the following conditions hold: (1) $\ell(u) = \ell(v)$; (2) if $k > 0$, for $\forall u' \in N_G^+(u)$, there exists $v' \in N_G^+(v)$ s.t. $u'$ and $v'$ are $(k-1)$-bisimilar; and (3) if $k > 0$, for $\forall v' \in N_G^+(v)$, there exists $u' \in N_G^+(u)$ s.t. $v'$ and $u'$ are $(k-1)$-bisimilar. An iterative framework is proposed in \cite{DBLP:conf/bncod/LuoLFBHW13} to compute $k$-bisimulation, in which each node $u$ is assigned a signature ${sig}_k(u)$ based on its node label and its neighbors' signatures. Node $u$ is simulated by node $v$ via $k$-bisimulation if and only if ${sig}_k(u) = {sig}_k(v)$ \cite{DBLP:conf/bncod/LuoLFBHW13}. We show in \refthm{k-bisimulation} that our $\kw{FSim}_{\chi}$ can be configured to compute $k$-bisimulation. As $k$-bisimulation in \cite{DBLP:conf/bncod/LuoLFBHW13} uses one single graph and only considers out-neighbors, we set $G_1 =G_2$ and $w^- = 0$ for $\kw{FSim}_{\chi}$. Recall that $\kw{FSim}_{\kw{b}}^k(u,v)$, computed by \refeq{fractional_simulation_computation}, is the $\kw{b}$-simulation score of nodes $u$ and $v$ in the $k$-th iteration.
\begin{theorem} \label{thm:k-bisimulation}
Given a graph $G$ and an integer $k$, node $u$ is simulated by node $v$ via $k$-bisimulation if and only if $\kw{FSim}_{\kw{b}}^k(u,v) = 1$.
\end{theorem}
\begin{proof}
The case when $k=0$ is easy to verify. Assume the theorem is true at $k-1$, we show that the theorem also holds at $k$. On the one hand, if $u$ is simulated by $v$ via $k$-bisimulation, i.e., ${sig}_{k}(u) = {sig}_{k}(v)$, one can verify that $\!\mathcal{M} = \{(u',v')|{sig}_{k-1}(u') = {sig}_{k-1}(v') \wedge v' \in N_G^+(v), \forall u' \in N_G^+(u)\} \bigcup \{(v'',u'')|{sig}_{k-1}(v'') = {sig}_{k-1}(u'') \wedge u'' \in N_G^+(u), \forall v'' \in N_G^+(v)\}\!$ is a matching of $\kw{FSim}_{\kw{b}}$. Based on the assumption, we have $\kw{FSim}_{\kw{b}}^k(u,v) = 1$. On the other hand, if $\kw{FSim}_{\kw{b}}^k(u,v) = 1$, for $\forall u' \in N_G^+(u)$, there exists $v' \in N_G^+(v)$ such that $\kw{FSim}_{\kw{b}}^{k-1}(u',v') = 1$, which means ${sig}_{k-1}(u') = {sig}_{k-1}(v')$. Similarly, $\forall v'' \in N_G^+(v)$, there exists $u'' \in N_G^+(u)$ with ${sig}_{k-1}(u'') = {sig}_{k-1}(v'')$. Thus, the set of signature values in $u$'s neighborhood is the same as that in $v$'s neighborhood. Then, we have ${sig}_{k}(u) = {sig}_{k}(v)$.
\end{proof}
\stitle{Relation to isomorphism.} The graph isomorphism test asks whether two graphs are topologically identical, and node $u$ of $G_1$ is \emph{isomorphic} to node $v$ of $G_2$ if there exists an isomorphism between $G_1$ and $G_2$ mapping $u$ to $v$. {\color{black} Graph isomorphism is a challenging problem, and no polynomial-time solution is known \cite{DBLP:conf/stoc/Babai16}. The Weisfeiler-Lehman isomorphism test (the WL test) \cite{DBLP:journals/jmlr/ShervashidzeSLMB11} is a widely used procedure for testing whether two graphs are isomorphic. The WL test runs in polynomial time, but it is a necessary rather than sufficient condition for isomorphism; that is, two isomorphic graphs must pass the WL test, but graphs that pass the test are not necessarily isomorphic.
Like the WL test, bijective simulation is also necessary but not sufficient for isomorphism. We next show that it is as powerful as the WL test in theory.
}
The WL test \cite{DBLP:journals/jmlr/ShervashidzeSLMB11} is applied to undirected labeled graphs, and the graph model is accordingly adapted in the same way as for \kw{RoleSim}. We assume both graphs are connected, as otherwise each pair of connected components can be independently tested. Given graphs $G_1$ and $G_2$, the WL test iteratively labels each node $u \in V_1$ (resp. $v \in V_2$) as $s(u)$ (resp. $s(v)$). The algorithm decides that node $u$ is isomorphic to node $v$ if $s(u) = s(v)$ when the algorithm converges\footnote{The algorithm is not guaranteed to converge.}. The following theorem reveals the connection between the WL test and bijective simulation.}
\begin{theorem}\label{thm:bijection_and_wl_test}
Given graphs $G_1$ and $G_2$, and a node pair $(u, v) \in V_1 \times V_2$, and assume the WL test converges, we have $s(u) = s(v)$ \textbf{if and only if} $\kw{FSim}_\kw{bj}(u, v) = 1$, namely $u {\sim}^\kw{bj} v$.
\end{theorem}
\begin{proof}
Let $s^k(u)$ and $s^k(v)$ be the labels of $u$ and $v$ at the $k$-th iteration of the WL test. We first prove that for any $k$, if $s^k(u) = s^k(v)$, then $\kw{FSim}_{\kw{bj}}^k(u,v) = 1$. The case of $k=0$ is easy to verify. Suppose the theorem is true at $k-1$. At the $k$-th iteration, we have $s^k(u)$ = $s^{k-1}(u) \sqcup_{u'\in N(u)} {s^{k-1}(u')}$ and $s^k(v)$ = $s^{k-1}(v) \sqcup_{v'\in N(v)} {s^{k-1}(v')}$, where $\sqcup$ denotes label concatenation. If $s^k(u) = s^k(v)$, there exists a bijective function $\!\lambda_1: N_{G_1}(u) \to N_{G_2}(v)\!$ s.t. $s^{k-1}(u') = s^{k-1}(\lambda_1(u'))$. Based on the assumption, we have $\kw{FSim}_{\kw{bj}}^{k}(u,v) = 1$.
Next, we prove that if $u {\sim}^{\kw{bj}} v$, then $s(u) = s(v)$. It is easy to verify the case of $k=0$. Assume that if $\kw{FSim}_{\kw{bj}}^{k-1}(u,v) = 1$, then $\!s^{k-1}(u) = s^{k-1}(v)\!$ holds. At the $k$-th iteration, if $\kw{FSim}_{\kw{bj}}^{k}(u,v) = 1$, there exists a bijective function $\!\lambda_2: N_{G_1}(u) \to N_{G_2}(v)\!$ s.t. for $\forall u' \in N_{G_1}(u)$, $\kw{FSim}_{\kw{bj}}^{k-1}(u',\lambda_2(u')) = 1$. Thus, we can derive $s^{k-1}(u') = s^{k-1}(\lambda_2(u'))$ and $s^k(u) = s^k(v)$.
\end{proof}
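For intuition, one round of the WL relabeling used in this proof can be sketched as follows (our illustration of the standard 1-dimensional WL test on an undirected labeled graph; in practice the signatures are usually compressed by hashing).
\begin{verbatim}
def wl_iteration(neighbors, labels):
    """One WL refinement step: a node's new label is its old label plus
    the sorted multiset of its neighbors' old labels."""
    new_labels = {}
    for u, old in labels.items():
        new_labels[u] = (old, tuple(sorted(labels[w] for w in neighbors[u])))
    return new_labels
# Nodes u and v are indistinguishable by the WL test if their labels
# are still equal once the relabeling stabilizes.
\end{verbatim}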
\begin{remark}
\color{black}
Note that there is no clear relation between bijective simulation and graph homomorphism. To be specific, bijective simulation does not imply homomorphism, and homomorphism does not imply bijective simulation either.
\end{remark}
\subsection{Configurations for All Simulation Variants}
\reftab{configurations} summarizes the configurations of each simulation variant. With the given configurations, $\kw{FSim}_{\chi}$ is well-defined for $\forall \chi \in \{\kw{s},\kw{dp},\kw{b},\kw{bj}\}$. We only provide the proofs of the well-definiteness for $\kw{FSim}_\kw{s}$ (asymmetric, \refthm{frac_S_simulation}) and $\kw{FSim}_\kw{bj}$ (symmetric, \refthm{bijectivesimulation}). The proofs for the other variants are similar and thus are omitted due to space limitations.
\begin{table}[h]
\small
\centering
\topcaption{Configurations of the fractional $\chi$-simulation framework ($\kw{FSim}_{\chi}$) to quantify the studied simulation variants.}\label{tab:configurations}
\scalebox{0.86}{
\renewcommand{\arraystretch}{1.8}
\begin{tabular}{|c|c|c|c|} \hline
\textbf{$\kw{FSim}_{\chi}$} & $\Omega_{\chi}(S_1,S_2)$ & $\mathcal{M}_{\chi}(S_1,S_2)$ & \textbf{Function Constraints (label constraint implied)} \\ \hline
$\kw{FSim}_{\kw{s}}$ & {$|S_1|$} & {$\{(x, y)| \forall x \in S_1, y = f_\kw{s}(x) \in S_2\}$} & {\makecell{$f_\kw{s}(x): S_1 \to S_2$
}} \\ \hline
$\kw{FSim}_{\kw{dp}}$ & {$|S_1|$} & {\makecell{$\{(x, y)| \forall x \in S'_1, y = f_\kw{dp}(x) \in S_2\}$, \\ where $S'_1 \subseteq S_1$ with $|S'_1| = \min(|S_1|, |S_2|)$}} & {\makecell{$f_\kw{dp}: S'_1 \to S_2$ is an injective function
}} \\ \hline
$\kw{FSim}_{\kw{b}}$ & {$|S_1| + |S_2|$} & {\makecell{$\{(x, y)| \forall x \in S, y = f_\kw{b}(x) \in S\}$, where $S = S_1 \cup S_2$}} & {\makecell{$
f_\kw{b}(x) \in \begin{cases}
S_2, \text{ if } x \in S_1, \\
S_1, \text{ if } x \in S_2
\end{cases}$
}} \\ \hline
$\kw{FSim}_{\kw{bj}}$ & {$\sqrt{|S_1|\times|S_2|}$} & {\makecell{$\{(x, y)| \forall x \in S_m, y = f_\kw{bj}(x) \in S_M\}$, in which if $|S_1| \leq |S_2|$, \\ $S_{m} = S_1$ and $S_{M} = S_2$; otherwise, $S_{m} = S_2$ and $S_{M} = S_1$}} & {\makecell{$f_\kw{bj}(x): S_m \to S_M$ is an injective function
}} \\ \hline
\end{tabular}}
\end{table}
\begin{theorem} \label{thm:bijectivesimulation}
$\!\kw{FSim}_{\kw{bj}}\!$ is well-defined for fractional {\kw{bj}}-simulation.
\end{theorem}
\begin{proof}
Proofs of P1 and P2 are similar to those of $\kw{FSim}_{\kw{s}}$ (\refthm{frac_S_simulation}). Proof of P3, i.e., $\kw{FSim}_{\kw{bj}}(u,v) = \kw{FSim}_{\kw{bj}}(v,u)$, is given by mathematical induction.
As the initialization function is symmetric, we have $\kw{FSim}_{\kw{bj}}^0(u,v) =\kw{FSim}_{\kw{bj}}^0(v,u)$. Suppose that $\kw{FSim}_{\kw{bj}}^{k-1}(u,v)$ is symmetric, the symmetry of $\kw{FSim}_{\kw{bj}}^{k}(u,v)$ can be immediately proved as $\Omega_{\kw{bj}}$ is symmetric as well. As a result, we have $\kw{FSim}_{\kw{bj}}(u,v) = \kw{FSim}_{\kw{bj}}(v,u)$.
\end{proof}
\comment{
{\color{red}
\begin{remark} \label{rem:normalizing_bijective_simulation}
We actually explore three normalizing operators that can guarantee both the convergence and the well-definiteness of $\kw{FSim}_\kw{bj}$. They are:
\begin{itemize}[noitemsep]
\item \emph{Mean}: $\Omega_{\kw{bj}}(S_1, S_2) = (|S_1| + |S_2|) / 2$.
\item \emph{Max}: $\Omega_{\kw{bj}}(S_1, S_2) = \max(|S_1|, |S_2|)$.
\item \emph{Root of Product (RoP)}: $\Omega_{\kw{bj}}(S_1, S_2) = \sqrt{|S_1|\times|S_2|}$
\end{itemize}
With \emph{Mean} and \emph{Max}, the score decreases (nearly) proportionally with the increment of $|S_1|$ and $|S_2|$. For example, suppose $|S_1| \ll |S_2|$, the score drops nearly $10$ times when $|S_2|$ gets $10$ times larger, which is not very robust. Therefore, we adopt RoP for $\kw{FSim}_\kw{bj}$ in this paper.
\end{remark}
}
}
\stitle{Cost Analysis.} According to \refalg{fractional_simulation_computation}, the space complexity is $O(|H|)$, where $|H|\leq |V_1| \times |V_2|$. The time complexity of computing $\kw{FSim}_{\kw{b}}$ is the same as that of $\kw{FSim}_{\kw{s}}$. For computing $\kw{FSim}_{\kw{dp}}$ and $\kw{FSim}_{\kw{bj}}$, the Hungarian algorithm needs to be applied to implement the mapping operators due to the injectivity requirement. Using a popular greedy approximation of the Hungarian algorithm \cite{avis1983survey}, $\mathcal{M}_\kw{dp}(S_1, S_2)$ and $\mathcal{M}_\kw{bj}(S_1, S_2)$ can be computed in $O(|S_1||S_2| \log(|S_1||S_2|))$ time. {\color{black}As a whole, the time cost of computing $\kw{FSim}_{\kw{dp}}$ and $\kw{FSim}_{\kw{bj}}$ is $O(k{|H|}({D^+_{G_1}}{D^+_{G_2}}\cdot\log {D^+_{G_1}}{D^+_{G_2}}+ {D^-_{G_1}}{D^-_{G_2}}\cdot\log {D^-_{G_1}}{D^-_{G_2}}))$.}
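The greedy approximation can be sketched as follows (our simplified rendering of the greedy strategy: candidate pairs are processed in decreasing order of previous-iteration score, and a pair is kept only if neither endpoint is already matched, which enforces injectivity and matches the stated $O(|S_1||S_2|\log(|S_1||S_2|))$ bound).
\begin{verbatim}
def greedy_injective_mapping(S1, S2, prev):
    """Greedy approximation of a maximum-weight injective mapping from
    S1 to S2 (used by the dp- and bj-simulation mapping operators)."""
    candidates = sorted(((prev.get((x, y), 0.0), x, y)
                         for x in S1 for y in S2),
                        key=lambda t: t[0], reverse=True)
    used1, used2, pairs = set(), set(), []
    for score, x, y in candidates:
        if x not in used1 and y not in used2:
            pairs.append((x, y))
            used1.add(x)
            used2.add(y)
    return pairs
\end{verbatim}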
\section{Experimental Evaluation} \label{sec:experiment}
\subsection{Setup} \label{sec:setup}
\stitle{Datasets.} We used eight publicly available real-world datasets. \reftab{graph_statistics} provides their descriptive statistics, including the number of nodes $|V|$, the number of edges $|E|$, the number of labels $|\Sigma|$, the average degree $d_G$, the maximum out-degree $D^+_G$ and the maximum in-degree $D^-_G$.
\begin{table}[h]
\small
\centering
\topcaption{Dataset Statistics and Sources} \label{tab:graph_statistics}
\begin{tabular}{|c||c|c|c|c|c|c|c|} \hline
\textbf{Datasets} & $|E|$ & $|V|$ & $|\Sigma|$ & $d_G$ & $D^+_G$ & $D^-_G$ & Source\\ \hline \hline
Yeast& 7,182 & 2,361 & 13 & 3 & 60 & 47 & \cite{kunegis2013konect} \\ \hline
Cora& 91,500 & 23,166 & 70 & 4 & 104 & 376 & \cite{kunegis2013konect}\\ \hline
Wiki& 119,882 & 4,592 & 120 & 26 & 294 & 1,551 & \cite{kunegis2013konect}\\ \hline
JDK& 150,985 & 6,434 & 41 & 23 & 375 & 32,507 & \cite{kunegis2013konect}\\ \hline
NELL& 154,213 & 75,492 & 269 & 2 & 1,011 & 1,909 & \cite{xiong2017deeppath} \\ \hline
GP & 298,564 & 144,879 & 8 & 2 & 191 & 18,553 & \cite{harding2017iuphar} \\ \hline
Amazon& 1,788,725 & 554,790 & 82 & 3 & 5 & 549 & \cite{snapnets} \\ \hline
ACMCit & 9,671,895 & 1,462,947 & 72K & 7 & 809 & 938,039 & \cite{DBLP:conf/kdd/TangZYLZS08} \\ \hline
\end{tabular}
\end{table}
\stitle{Experimental Settings.} Without loss of generality, we assume that in-neighbors and out-neighbors contribute equally to the $\kw{FSim}_{\chi}$ computation. Thus, $w^+ = w^-$ in all experiments. Algorithms were terminated when the values changed by less than 0.01 of their previous values. \textit{Note that when we applied $\kw{FSim}_{\chi}$ to one single graph, we actually computed the $\kw{FSim}_{\chi}$ scores from the graph to itself.} $\kw{FSim}_{\chi} \{\theta = a\}$ and $\kw{FSim}_{\chi} \{\kw{ub}\}$ denote the computation of $\kw{FSim}_{\chi}$ with the optimizations of label-constrained mapping (setting $\theta = a$) and upper-bound updating (\refsec{computing_algorithm}), respectively. The two optimizations can also be used together, denoted as $\kw{FSim}_{\chi} \{\kw{ub}, \theta=a\}$. We use $\theta=0$ by default, which is omitted for simplicity hereafter.
We implemented $\kw{FSim}_{\chi}$ in C++. All experiments were conducted on a platform comprising two Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz (each with 20 cores) and 512GB memory.
\subsection{Sensitivity Analysis} \label{sec:sensitivity_analysis}
Our first test was a sensitivity analysis to examine $\kw{FSim}_\chi$'s robustness to parameter tuning and data errors. Following \cite{DBLP:conf/kdd/JinLH11}, we calculated Pearson's correlation coefficients. The larger the coefficient, the more correlated the evaluated subjects. Note that the patterns were similar across datasets. Hence, only the results for NELL are reported.
\stitle{Sensitivity of Framework Parameters.} We performed the sensitivity analysis against three parameter settings: (1) the initialization function $\mathcal{L}(\cdot)$ presented in \refsec{iterative_computation}; (2) the threshold $\theta$ for the label-constrained mapping outlined in \refrem{label_constraint_mapping}; and (3) the weighting factors outlined in \refsec{iterative_computation}.
\sstitle{\color{black}Varying $\mathcal{L}(\cdot)$.} In this analysis, we computed and cross-compared the $\kw{FSim}_{\chi}$ scores using the three different initialization functions: indicator function $\mathcal{L}_I(\cdot)$, normalized edit distance $\mathcal{L}_E(\cdot)$, and Jaro-Winkler similarity $\mathcal{L}_{J}(\cdot)$. The results are shown in \reftab{compare_initialization}. The Pearson's coefficients for all pairs of initialization functions are very high ($>0.92$), which indicates that $\kw{FSim}_{\chi}$ is not sensitive to initialization functions. Hence, going forward, we used $\mathcal{L}_{J}(\cdot)$ as the initialization function unless specified otherwise.
\begin{table}[h]
\centering
\small
\topcaption{Pearson's correlation coefficients when comparing initialization functions.} \label{tab:compare_initialization}
\begin{tabular}{|c|c|c|c|c|} \hline
$\kw{FSim}_{\chi}$ & $\kw{FSim}_{\kw{s}}$ & $\kw{FSim}_{\kw{dp}}$ & $\kw{FSim}_{\kw{b}}$ & $\kw{FSim}_{\kw{bj}}$ \\ \hline
$\mathcal{L}_{I}$- $\mathcal{L}_{E}$ & 0.990 & 0.982 & 0.979 & 0.969 \\ \hline
$\mathcal{L}_{I}$-$ \mathcal{L}_{J}$ & 0.967 & 0.950 & 0.937 & 0.922 \\ \hline
$\mathcal{L}_{J}$- $\mathcal{L}_{E}$ & 0.985 & 0.977 & 0.975 & 0.962 \\ \hline
\end{tabular}
\end{table}
\sstitle{\color{black}Varying $\theta$.} For this analysis, we varied $\theta$ from 0 to 1 in steps of 0.2, and calculated the Pearson's coefficient against the baseline case of $\theta = 0$ (with $w^+$ and $w^-$ set to 0.4). The results in \reffig{correlation_vary_theta} clearly show that the coefficients decrease as $\theta$ increases.
{\color{black}This is reasonable as node pairs with $\mathcal{L}(\cdot) < \theta$ will not be considered by the mapping operator. Also, more node pairs are pruned as $\theta$ grows.} However, the coefficients are still very high ($>0.8$) for all variants, even when $\theta = 1$, which indicates that $\kw{FSim}_{\chi}$ is not sensitive to $\theta$.
\sstitle{\color{black}Varying $w^*$.} To examine the influence of the weighting parameters, we varied $w^*$ from 0.1 to 1, where $w^* = 1-w^+-w^-$.
Recall that $\theta = 1$ constrains mapping only the same-label nodes (\refrem{label_constraint_mapping}). As $w^*$ is label-relevant, we computed the coefficients of $\kw{FSim}_\chi$ (vs. $\kw{FSim}_\chi\{\theta = 1\}$) by varying $w^*$. The results, reported in \reffig{vary_w}, show that the coefficients increase as $w^*$ increases and at $w^* > 0.6$, the coefficient is already almost 1. This is expected because a larger $w^*$ mitigates the impact of the label-constrained mapping. At a more reasonable setting of $w^* = 0.2$, the coefficients sit at around 0.85, which indicates that $\kw{FSim}_\chi\{\theta = 1\}$ aligns with $\kw{FSim}_\chi$ well. Hence, we set $w^* = 0.2$ by default in subsequent tests.
\begin{figure}[h]
\centering
\subfigure[varying $\theta$]{
\centering
\includegraphics[width=0.38\linewidth]{figures/NELL_jarowinkler_theta.pdf}
\label{fig:correlation_vary_theta}
}
\subfigure[varying $w^*$]{
\includegraphics[width=0.38\linewidth]{figures/NELL_wl_single.pdf} \label{fig:correlation_vary_w}
}
\topcaption{Pearson's correlation coefficients when varying $\theta$ and $w^*$}
\label{fig:vary_w}
\end{figure}
\stitle{Robustness against Data Errors.} \reffig{correlation_vary_data_errors} plots the robustness of $\kw{FSim}_{\kw{bj}}$ against data errors, i.e., structural errors (with edges added/removed) and label errors (with certain labels missing), from one extreme ($\theta = 0$) to the other ($\theta = 1$), as an example of how all simulation variants performed. As expected, the coefficients decrease as the error level increases. Yet, the coefficients remained high even at the 20\% error level ($>0.7$ for both cases). This shows that $\kw{FSim}_{\chi}$ is robust to data errors, which aligns with one of our original motivations for proposing fractional simulation.
\begin{figure}[h]
\centering
\subfigure[varying structural errors]{
\includegraphics[width=0.38\linewidth]{figures/vary_structural_noise.pdf} \label{fig:vary_structural_noise}
}
\subfigure[varying label errors]{
\includegraphics[width=0.38\linewidth]{figures/vary_label_noise.pdf} \label{fig:vary_label_noise}
}
\topcaption{Pearson's correlation coefficients when varying the ratio of data errors}
\label{fig:correlation_vary_data_errors}
\end{figure}
\stitle{Sensitivity of Upper-bound Updating.} To assess the influence of upper-bound updating (\refsec{computing_algorithm}), we varied $\alpha$ (the approximate ratio) from 0 to 1 and $\beta$ (the threshold) from 0 to 0.5 in steps of $0.1$. Again, the results for all simulation variants were similar, so only the results for $\kw{FSim}_{\kw{bj}}\{\kw{ub}\}$ (vs. $\kw{FSim}_{\kw{bj}}$) and $\!\kw{FSim}_\kw{bj}\{\kw{ub}, \theta = 1\!\}$ (vs. $\kw{FSim}_\kw{bj}\{\theta = 1\}$) are shown.
\sstitle{\color{black}Varying $\beta$.} \reffig{vary_beta} shows the coefficients while varying $\beta$ from 0 to 0.5 with $\alpha$ fixed to 0.2. It is clear that the coefficients decrease as $\beta$ increases. This is reasonable as more node pairs are pruned, and the scores become less precise as $\beta$ gets larger. Note that when $\beta \geq 0.3$, the decreasing trend becomes smoother for $\kw{FSim}_{\kw{bj}}\{\kw{ub}, \theta = 1\}$. Observe that even at $\beta = 0.5$, the coefficients are still very high ($>0.9$), which indicates that the validity of upper-bound updating is not sensitive to $\beta$. We thus set $\beta = 0.5$ going forward to utilize as much pruning power as possible.
\begin{figure}[h]
\centering
\subfigure[varying $\beta$]{
\centering
\includegraphics[width=0.38\linewidth]{figures/NELL_beta.pdf}
\label{fig:vary_beta}
}
\subfigure[varying $\alpha$]{
\centering
\includegraphics[width=0.38\linewidth]{figures/NELL_alpha.pdf}
\label{fig:vary_alpha}
}
\topcaption{Pearson's correlation coefficients when varying $\alpha$ and $\beta$} \label{fig:sensitivity_upper_bound}
\end{figure}
\sstitle{\color{black}Varying $\alpha$.} \reffig{vary_alpha} shows the coefficients when varying $\alpha$ from 0.0 to 1.0. We made two observations here. First, the coefficients of $\kw{FSim}_\kw{bj}\{\kw{ub}\}$ initially increase, then decrease as $\alpha$ gets larger. A possible reason is that $\alpha = 0$ and $\alpha = 1$ are at each extreme of the setting range, but the most appropriate setting lies somewhere in between. Second, the coefficients for $\kw{FSim}_\kw{bj}\{\kw{ub}, \theta = 1\}$ increase as $\alpha$ increases. Potentially, the true scores of pruned node pairs are larger than $1-w^+-w^-$, and thus a larger $\alpha$ is preferred. Note that when $\alpha = 0$, {\color{black}i.e., when ignoring the pruned node pairs,} the coefficients for both $\kw{FSim}_\kw{bj}\{\kw{ub}\}$ and $\kw{FSim}_\kw{bj}\{\kw{ub}, \theta = 1\}$ were above 0.9; hence, $\alpha = 0$ became our default.
\subsection{Efficiency} \label{sec:efficiency}
\stitle{{\color{black}Varying $\theta$.}} With NELL as a representative of all tests, \reffig{time_vary_theta} shows the running time of $\kw{FSim}_{\chi}$ while varying $\theta$ from 0 to 1. The experimental results show that $\kw{FSim}_{\chi}$ runs faster as $\theta$ increases, which is expected since a larger $\theta$ leaves fewer candidate pairs to compute, as shown in \reffig{node_pairs_vary_theta}. We then compared the running time of the different simulation variants at a given $\theta$ value. It is not surprising that $\kw{FSim}_\kw{dp}$ and $\kw{FSim}_\kw{bj}$ ran slower than the other two variants, as they contain a costly maximum-matching operation (cost analysis in \refsec{configurations_of_all_variants}). $\kw{FSim}_\kw{b}$ ran slower than $\kw{FSim}_\kw{s}$ because the mapping operator of $\kw{FSim}_\kw{b}$ considers the neighbors of both nodes of a pair (\reftab{configurations}). At $\theta \geq 0.6$, the difference in running time for all variants was already very small. Considering the sensitivity analysis in \reffig{correlation_vary_theta} as well as these results, $\theta = 1$ seems a reasonable setting that yields both good coefficients and good performance.
\begin{figure}[h]
\centering
\subfigure[running time]{
\centering
\includegraphics[width=0.38\linewidth]{figures/time_NELL_jarowinkler_theta.pdf}
\label{fig:time_vary_theta}
}
\subfigure[number of node pairs]{
\centering
\includegraphics[width=0.38\linewidth]{figures/NELL_num_of_node_pairs.pdf}
\label{fig:node_pairs_vary_theta}
}
\topcaption{Running time of $\kw{FSim}_{\chi}$, $\chi \in \{\kw{s},\kw{b},\kw{dp},\kw{bj}\}$, while varying $\theta$}
\end{figure}
\stitle{\color{black}Varying the Datasets.} \reffig{vary_opt} reports the running time of $\kw{FSim}_{\kw{bj}}$, the most costly simulation variant, with different optimizations on all datasets. Experiments that resulted in out-of-memory errors have been omitted. From these tests, we make two observations: (1) comparing $\kw{FSim}_{\kw{bj}}\{\kw{ub}\}$ with $\kw{FSim}_{\kw{bj}}$, upper-bound updating alone contributed a performance gain of about 5$\times$. (2) Label-constrained mapping is the most effective optimization, making $\!\kw{FSim}_{\kw{bj}}\{\theta = 1\}\!$ faster than $\!\kw{FSim}_{\kw{bj}}\!$ by up to 3 orders of magnitude. Applying both label-constrained mapping and upper-bound updating, $\kw{FSim}_{\kw{bj}}\{\kw{ub}, \theta = 1\}$ was the only algorithm that could complete the executions on all datasets within the time limit, including the two largest ones, Amazon and ACMCit.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{figures/time_vary_opt.pdf}
\topcaption{Running time of $\kw{FSim}_{\kw{bj}}$ on all datasets with different optimizations} \label{fig:vary_opt}
\end{figure}
\stitle{Parallelization and Scalability.} {\color{black}We studied the scalability of $\kw{FSim}_{\chi}$ with parallelization on two representative datasets, i.e., NELL and ACMCit (with more than 1 million nodes). The results for $\kw{FSim}_{\kw{bj}}\{\kw{ub},\theta = 1\}$ follow.}
\sstitle{\color{black}Varying the Number of Threads.} \reffig{time_vary_thread} shows the running time of $\kw{FSim}_{\kw{bj}}\{\kw{ub},\theta = 1\}$ while varying the number of threads from 1 to 32. We observe that both curves show a clear decreasing trend as the number of threads increases. The benefits from 1 to 8 threads are substantial. Beyond 8 threads, the marginal gain flattens due to the cost of thread scheduling.
{\color{black}Specifically, when setting $t=32$, parallelization speeds up the computation by a factor of 15 to 17.}
\begin{figure}
\centering
\subfigure[varying the number of threads]{
\centering
\includegraphics[width=0.46\linewidth]{figures/time_vary_thread.pdf}
\label{fig:time_vary_thread}
}
\subfigure[{varying density}]{
\centering
\includegraphics[width=0.46\linewidth]{figures/time_vary_density.pdf}
\label{fig:time_vary_density}
}
\topcaption{\color{black}Parallelization and Scalability} \label{fig:parallelization_and_scalability}
\end{figure}
\sstitle{\color{black}Varying Density.} {\color{black}\reffig{time_vary_density} reports the running time of $\kw{FSim}_{\kw{bj}}\{\kw{ub},\theta = 1\}$ (with 32 threads) while varying the density of the datasets from $\times 10$ to $\times 50$ by randomly adding edges. Unsurprisingly, the running times of both grew longer as the graphs became denser. However, although increased density means greater computational complexity in theory, it also means each node has more neighbors by expectation. Hence, the upper bound in \refeq{upper_bound_label_constrained_mapping} may become smaller, which, in turn, contributes to greater upper-bound pruning power. This may offset some of the increase in computation complexity. Note that $\kw{FSim}_{\chi}$ finished within reasonable time on the ACMCit with $50\times$ more edges, indicating that it is scalable to the graphs with hundreds of millions of edges.}
\subsection{Case Studies} \label{sec:case_studies}
In this subsection, we used three case studies to exhibit the potential of $\kw{FSim}_\chi$ in the applications of pattern matching, node similarity measurement and RDF graph alignment. We will demonstrate the following strengths of our framework.
\begin{enumerate}[S1.] \setlength{\itemsep}{0cm}
\item Our $\kw{FSim}_\chi$ framework quantifies the degree of simulation, which remedies the coarse ``yes-or-no'' semantics of simulation, significantly improves the effectiveness, and expands the scope in which simulation can be applied.
\item When multiple simulation variants are suitable for a certain application, the $\kw{FSim}_\chi$ framework provides a flexible way to experiment with all suitable variants, so as to determine the one that performs the best.
\end{enumerate}
Before diving into the case studies, the first question to answer is: which simulation variant should be used for a given application? We discuss the answer intuitively. Subgraph pattern matching is essentially asymmetric (matching the pattern graph to the data graph but not the other way around), and thus $\kw{FSim}_{\kw{s}}$ and $\kw{FSim}_{\kw{dp}}$ are appropriate choices. Node similarity measurement and graph alignment require symmetry, and hence $\kw{FSim}_{\kw{b}}$ and $\kw{FSim}_{\kw{bj}}$ are applied. The code for all the baselines was provided by the respective authors. The indicator function was used as $\mathcal{L}(\cdot)$ since the semantics of the node labels in the studied data were clear and unambiguous.
\stitle{Pattern Matching.} In this case study, we first considered strong simulation (exact simulation by nature, \cite{DBLP:journals/pvldb/MaCFHW11}) and $\kw{dp}$-simulation \cite{DBLP:journals/pvldb/SongGCW14} as two baselines, and compared them with $\kw{FSim}_{\kw{s}}$ and $\kw{FSim}_{\kw{dp}}$ to illustrate how $\kw{FSim}_{\chi}$ facilitates pattern matching. \reffig{real-life-match} shows two example matches on the Amazon graph (see \reftab{graph_statistics} for graph statistics). When answering query $Q_1$, strong simulation (and $\kw{dp}$-simulation) returns $G_1$, pictured in \reffig{pattern_matching_g1}, which is also the top-1 result of $\kw{FSim}_\kw{s}$ (and $\kw{FSim}_{\kw{dp}}$). Clearly, a simulation relation exists between $Q_1$ and $G_1$, and $\kw{FSim}_{\chi}$ captures $G_1$ with the highest score because of simulation definiteness (\refdef{fractional_simulation_properties}). $Q_2$ adds two extra nodes with new labels to $Q_1$ but, with this modification, both strong simulation and $\kw{dp}$-simulation fail to return a result while $\kw{FSim}_{\chi}$ returns $G_2$ (strength S1), which closely matches $Q_2$ by missing only an edge.
\begin{figure}
\centering
\subfigure[$Q_1$]{
\centering
\includegraphics[width=0.18\linewidth]{figures/pattern_matching_p1.pdf}
\label{fig:pattern_matching_q1}
}
\subfigure[$G_1$]{
\centering
\includegraphics[width=0.17\linewidth]{figures/pattern_matching_g1.pdf}
\label{fig:pattern_matching_g1}
}
\subfigure[$Q_2$]{
\centering
\includegraphics[width=0.17\linewidth]{figures/pattern_matching_p2.pdf}
\label{fig:pattern_matching_q2}
}
\subfigure[$G_2$]{
\centering
\includegraphics[width=0.18\linewidth]{figures/pattern_matching_g2.pdf}
\label{fig:pattern_matching_g2}
}
\topcaption{Real-life matches on the Amazon graph. $G_1$ and $G_2$ are top-1 matches of $\kw{FSim}_{\chi}$ for answering queries $Q_1$ and $Q_2$ respectively. Nodes in match $G$ are marked by their item ids, while nodes in query $Q$ are marked by their labels. Nodes with the same shape have the same label.} \label{fig:real-life-match}
\end{figure}
{\color{black}For a more complete study, we also compared the results of $\kw{FSim}_{\chi}$ with some other approximate pattern matching algorithms. The related algorithms can be summarized in two categories: (1) the edit-distance based algorithms, e.g., \kw{SAPPER} \cite{DBLP:journals/pvldb/ZhangYJ10} and \kw{TSpan} \cite{DBLP:conf/sigmod/ZhuLZZY12}, which enumerate all matches with mismatched edges up to a given threshold; and (2) the similarity-based algorithms that compute matches based on (sub)graph similarity or node similarity. To name a few, G-Ray \cite{DBLP:conf/kdd/TongFGE07} computes the ``goodness'' of a match based on node proximity. \kw{IsoRank} \cite{DBLP:journals/pnas/0001XB08}, \kw{NeMa} \cite{DBLP:journals/pvldb/KhanWAY13} and \kw{NAGA} \cite{DBLP:conf/www/DuttaN017} find matches based on node similarity. \kw{G}-\kw{Finder} \cite{DBLP:conf/bigdataconf/LiuDXT19} and \kw{SAGA} \cite{DBLP:journals/bioinformatics/TianMSSP07} design cost functions with multiple components to allow node mismatches and graph structural differences. \kw{SLQ} \cite{DBLP:journals/pvldb/YangWSY14} and $S^4$ \cite{DBLP:journals/pvldb/ZhengZPYSZ16} find matches in RDF knowledge graphs by considering the semantics of queries. More specifically, $S^4$ uses the semantic graph edit distance to integrate structure similarity and semantic similarity. Note that, in the Amazon graph, an edge from $u$ to $v$ indicates that people are highly likely to buy item $v$ after buying item $u$ \cite{DBLP:journals/pvldb/MaCFHW11}, and hence there is no complex semantics among edges. As a result, we choose \kw{TSpan}, \kw{NAGA} and \kw{G}-\kw{Finder}, the state-of-the-art algorithms in each category, as another three baselines.}
{\color{black}
We followed the state-of-the-art algorithm \kw{NAGA} \cite{DBLP:conf/www/DuttaN017} for match generation and quality evaluation. Briefly, node pairs with high $\kw{FSim}_{\chi}$ scores are considered to be ``seeds'', and matches are then generated by expanding the regions around these seeds.} The evaluated queries are generated randomly by extracting subgraphs from the data graph and introducing structural noise (randomly inserting edges, up to 33\%) or label noise (randomly modifying node labels, up to 33\%). We then evaluated the different algorithms across four query scenarios: (1) queries with no noise (\kw{Exact}); (2) queries with structural noise only (\kw{Noisy}-\kw{E}); (3) queries with label noise only (\kw{Noisy}-\kw{L}); and (4) queries with both kinds of noise (\kw{Combined}). Note that the queries are extracted from the data graph, so the extracted subgraphs naturally serve as the ``ground truth''. {\color{black}Given a query $Q$ and a returned match $\phi$ (we use the top-1 match in this case study), the $F_1$ score is calculated by $F_1 = \frac{2\cdot P\cdot R}{(P+R)}$, where $P = \frac{|\phi_t|}{|\phi|}$, $R = \frac{|\phi_t|}{|Q|}$, $\phi_t$ is the subset of $\phi$ that contains the correctly discovered node matches in $\phi$, and $|X|$ indicates the number of nodes in the match or graph, $\forall X \in \{\phi_t,\phi,Q\}$}.
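For clarity, the sketch below spells out this $F_1$ computation; \texttt{match} is a hypothetical mapping from query nodes to data-graph nodes, and \texttt{ground\_truth} holds the mapping from which the query was extracted.
\begin{verbatim}
# Sketch of the F1 computation for a returned match (hypothetical inputs).
def f1_score(match, ground_truth, query_size):
    correct = sum(1 for q, v in match.items() if ground_truth.get(q) == v)
    if not match or correct == 0:
        return 0.0
    precision = correct / len(match)     # |phi_t| / |phi|
    recall = correct / query_size        # |phi_t| / |Q|
    return 2 * precision * recall / (precision + recall)
\end{verbatim}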
\begin{table}
\centering
\small
\topcaption{Average F1 scores (\%) when answering queries in different scenarios on the Amazon dataset. \kw{TSpan}-$x$ indicates allowing up to $x$ mismatched edges in \kw{TSpan}.} \label{tab:pattern-matching-fscore}
\renewcommand{\arraystretch}{1.05}
\color{black}
\begin{tabular}{|c||ccccc|cc|} \hline
\multirow{2}{*}{\makecell{Query\\Scenario}} & \multicolumn{5}{c|}{Baselines} & \multicolumn{2}{c|}{$\kw{FSim}_{\chi}$} \\ \cline{2-8}
&\kw{NAGA} & \kw{G}-\kw{Finder} & \kw{TSpan}-1 & \kw{TSpan}-3 & \makecell{Strong \\ Simulation} & $\kw{FSim}_{\kw{s}}$ & $\kw{FSim}_{\kw{dp}}$ \\ \hline \hline
\kw{Exact} & 30.2 & \textbf{100} & \textbf{100} & \textbf{100} & \textbf{100} & \textbf{100} & \textbf{100} \\
{\kw{Noisy}-\kw{E}} & 30.5 & 49.2 & 71.0 & \textbf{95.8} & 50.0 & 84.0 & 65.7 \\
{\kw{Noisy}-\kw{L}} & 20.6 & 40.7 & - & - & 33.3 & \textbf{75.1} & 73.2 \\
\kw{Combined} & 21.2 & 40.9 & - & - & 29.2 & \textbf{76.6} & 66.7 \\ \hline
\end{tabular}
\end{table}
\reftab{pattern-matching-fscore}\footnote{The results of \kw{NAGA} were provided by the authors, and we acknowledge the assistance of Dr. Sourav Dutta and Dr. Shubhangi Agarwal.} shows the F1 scores of the different algorithms. Each result is an average over 100 random queries of sizes ranging from 3 to 13. $\kw{dp}$-simulation was not compared as it is similar to strong simulation. {\color{black}Consistent with the previous results, strong simulation performed poorly against noise. In comparison, $\kw{FSim}_{\chi}$ was more robust and performed much better (strength S1).}
{\color{black}Additionally, $\kw{FSim}_{\kw{s}}$ outperformed \kw{NAGA}, \kw{G}-\kw{Finder} and \kw{TSpan}-1 by a large margin in all query scenarios.} \kw{TSpan}-3 performed well in ``\kw{Exact}'' and ``\kw{Noisy}-\kw{E}'', achieving its highest F1 score of 95.8\% for ``\kw{Noisy}-\kw{E}''. This is because \kw{TSpan}-3 finds all matches with up to 3 mismatched edges, which is at least the number of noisy edges in most queries. However, \kw{TSpan} only tolerates mismatched edges rather than mismatched node labels. Thus, it has no results for ``\kw{Noisy}-\kw{L}'' and ``\kw{Combined}''. In summary, $\kw{FSim}_{\chi}$ is well suited to approximate pattern matching (strength S1). While both $\kw{s}$- and $\kw{dp}$-simulation can be configured for this application, $\kw{FSim}_{\kw{s}}$ is more robust to noise and performs better than $\kw{FSim}_{\kw{dp}}$ (strength S2).
\stitle{Node Similarity Measurement.} In this case study, we compared $\kw{FSim}_{\chi}$ to four state-of-the-art similarity measurement algorithms: \kw{PCRW} \cite{DBLP:journals/ml/LaoC10}, \kw{PathSim} \cite{DBLP:journals/pvldb/SunHYYW11}, \kw{JoinSim} \cite{DBLP:journals/tkde/XiongZY15} and \kw{nSimGram} \cite{DBLP:conf/kdd/ConteFGMSU18}. Following \cite{DBLP:conf/kdd/ConteFGMSU18,DBLP:journals/pvldb/SunHYYW11}, we used the DBIS dataset, which contains 60,694 authors, 72,902 papers and 464 venues. In DBIS, the venues and papers are labeled as ``V'' and ``P'', respectively. The authors are labeled by their names.
We first computed the top-5 most similar venues to WWW using all algorithms. The results are shown in \reftab{top_10_venue}. Note that WWW$_1$, WWW$_2$ and WWW$_3$ all represent the WWW venue but with different node ids in DBIS, and thus they are naturally similar to WWW. Although all algorithms gave reasonable results, $\kw{FSim}_\kw{bj}$ was the only one to return WWW$_1$, WWW$_2$ and WWW$_3$ {\color{black}in the top-5 results. In addition, if we applied exact $\kw{b}$- and $\kw{bj}$-simulation to the task, other than ``WWW'' itself (``Yes''), all the other venues had the same score (``No''). This shows that $\kw{FSim}_{\chi}$ can be applied to the scenarios that require fine-grained evaluation, such as node similarity measurement (strength S1).}
\begin{table} [h]
\centering
\small
\topcaption{The top-5 similar venues for ``WWW'' of different algorithms} \label{tab:top_10_venue}
\renewcommand{\arraystretch}{0.92}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
Rank & \kw{PCRW} & \kw{PathSim} & \kw{JoinSim} & \kw{nSimGram} & $\kw{FSim}_{\kw{b}}$ & $\kw{FSim}_{\kw{bj}}$ \\ \hline \hline
1 & WWW & WWW & WWW & WWW & WWW & WWW \\
2 & SIGIR & CIKM & WWW$_1$ & CIKM & CIKM & WWW$_1$ \\
3 & ICDE & SIGKDD & CIKM & SIGIR & ICDE & CIKM \\
4 & VLDB & WISE & WSDM & WWW$_1$ & VLDB & WWW$_2$ \\
5 & Hypertext & ICDM & WWW$_2$ & SIGKDD & SIGIR & WWW$_3$ \\ \hline
\end{tabular}
\end{table}
Following \cite{DBLP:conf/kdd/ConteFGMSU18,DBLP:journals/pvldb/SunHYYW11}, we further computed the top-15 most similar venues to 15 subject venues (same as \cite{DBLP:conf/kdd/ConteFGMSU18}) using each algorithm.
For each subject venue, we labeled each returned venue with a relevance score: 0 for non-relevant, 1 for somewhat relevant, and 2 for very relevant, considering both the research area and the venue ranking in \cite{core_ranking}. For example, the relevance score for ICDE and VLDB is 2 as both are top-tier conferences in the database area. We then evaluated the ranking quality of the algorithms using nDCG (the larger the score, the better).
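A minimal sketch of this nDCG computation is given below; it uses one standard form of the measure, and \texttt{rels} is assumed to hold the manually assigned relevance scores (0/1/2) of the returned venues in ranked order.
\begin{verbatim}
# Sketch of one common form of nDCG over a list of graded relevance scores.
import math

def ndcg(rels):
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(rels))
    ideal = sum(r / math.log2(i + 2)
                for i, r in enumerate(sorted(rels, reverse=True)))
    return dcg / ideal if ideal > 0 else 0.0

print(ndcg([2, 2, 1, 0, 2]))   # e.g. a top-5 ranking
\end{verbatim}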
\begin{table} [h]
\small
\centering
\topcaption{nDCG results of node similarity algorithms}
\label{tab:node_similarity_results}
\begin{tabular}{|c|c|c|c|c|c|} \hline
\multicolumn{4}{|c|}{Baselines} & \multicolumn{2}{c|}{Fractional $\chi$-simulation} \\ \hline
\kw{PCRW} & \kw{PathSim} & \kw{JoinSim} & \kw{nSimGram} & $\kw{FSim}_{\kw{b}}$ & $\kw{FSim}_{\kw{bj}}$ \\ \hline\hline
0.684 & 0.684 & 0.689 & 0.700 & 0.699& \textbf{0.733}\\ \hline
\end{tabular}
\end{table}
\reftab{node_similarity_results} shows the nDCG results. Accordingly, $\kw{FSim}_\chi$ outperforms the state-of-the-art algorithms by a large margin. This indicates that $\kw{FSim}_\chi$ is qualified to measure node similarity on labeled graphs (strength S1). The result that $\kw{FSim}_\kw{bj}$ outperforms $\kw{FSim}_\kw{b}$ in both the ``WWW'' case and the general evaluation suggests $\kw{FSim}_\kw{bj}$ is a better candidate for similarity measurement (strength S2).
\stitle{RDF Graph Alignment.} We investigate the potential of $\kw{FSim}_{\chi}\!$ in RDF graph alignment and briefly discuss its performance below. We followed Olap \cite{DBLP:journals/pvldb/BunemanS16} (a bisimulation-based alignment algorithm) to align three different versions of biological graphs from different times, $G_1$, $G_2$ and $G_3$ \cite{harding2017iuphar}. $G_1$ has 133,195 nodes and 273,512 edges, $G_2$ has 138,651 nodes and 285,000 edges, $G_3$ includes 144,879 nodes and 298,564 edges, and all of them have 8 node labels and 23 edge labels. Note that the original URI values in these datasets do not change over time. Hence, we can use this information to identify the ground truth alignment. In addition to Olap, we also included another four state-of-the-art algorithms, {\color{black}namely $k$-bisimulation \cite{DBLP:conf/sac/HeeswijkFP16}, \kw{\textsc{gsaNA}} \cite{DBLP:conf/kdd/YasarC18}, \kw{FINAL} \cite{DBLP:conf/kdd/ZhangT16} and \kw{EWS} \cite{DBLP:journals/pvldb/KazemiHG15}. When aligning graphs with $\kw{FSim}_{\chi}$, a node $u \in V_1$ will be aligned to a node set $A_u = \mathop{\mathrm{argmax}}_{v \in V_2}\kw{FSim}_\chi(u, v)$, while with $k$-bisimulation, $u$ will be aligned to $A_u = \{v|v \in V_2 \wedge u \text{ and } v \text{ are bisimilar}\}$. The F1 score of $\kw{FSim}_{\chi}$ and $k$-bisimulation is calculated by $F1 = \sum_{u\in V_1}\frac{2\cdot P_u \cdot R_u}{|V_1|(P_u + R_u)}$, where $P_u$ (resp. $R_u$) is $\frac{1}{|A_u|}$ (resp. 1) if $A_u$ contains the ground truth, and 0 otherwise. We follow the settings in the related papers for the other baselines. }
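The alignment-level F1 score defined above can be computed as in the following sketch, where \texttt{aligned} is a hypothetical mapping from each node $u \in V_1$ to its candidate set $A_u$, and \texttt{truth} maps $u$ to its ground-truth counterpart in $V_2$.
\begin{verbatim}
# Sketch of the per-node alignment F1 averaged over V_1 (hypothetical inputs).
def alignment_f1(aligned, truth):
    total = 0.0
    for u, candidates in aligned.items():
        if candidates and truth[u] in candidates:
            p, r = 1.0 / len(candidates), 1.0
            total += 2 * p * r / (p + r)
    return total / len(aligned)
\end{verbatim}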
\reftab{graph_alignment_results} reports the F1 scores of each algorithm. {\color{black}Note that we also tested exact bisimulation, which resulted in 0\% F1 scores in both cases since there is no exact bisimulation relation between the two graphs. $k$-bisimulation performs better than exact bisimulation as it, to some extent, approximates bisimulation.} From \reftab{graph_alignment_results}, our $\kw{FSim}_{\chi}$ had the highest F1 scores and thus outperformed all the other baselines. This shows that $\kw{FSim}_\kw{b}$ and $\kw{FSim}_\kw{bj}$ can be applied to graph alignment with high potential (strength S1). $\kw{FSim}_{\kw{b}}$ outperforms $\kw{FSim}_{\kw{bj}}$ and thus is a better candidate for graph alignment (strength S2).
\begin{table}
\centering
\topcaption{The F1 scores (\%) of each algorithm when aligning two graphs. $x$-bisim indicates setting $k=x$ in $k$-bisimulation.} \label{tab:graph_alignment_results}
\color{black}
\begin{tabular}{|c||cccccc|cc|} \hline
\multirow{2}{*}{Graphs} & \multicolumn{6}{c|}{Baselines} & \multicolumn{2}{c|}{$\kw{FSim}_{\chi}$} \\ \cline{2-9} & $2$-bisim & $4$-bisim & Olap & \kw{\textsc{gsaNA}} & \kw{FINAL} & \kw{EWS} & $\kw{FSim}_{\kw{b}}$& $\kw{FSim}_{\kw{bj}}$ \\ \hline\hline
$G_1$-$G_2$ & 19.9 & 9.1 & 37.9 & 11.8 & 55.2 & 70.8 & \textbf{97.6} & 96.5 \\
$G_1$-$G_3$ & 53.0 & 10.9 & 37.6 & 14.9 & 52.7 & 65.3 & \textbf{96.9} & 95.6 \\ \hline
\end{tabular}
\end{table}
{\color{black}\stitle{Efficiency Evaluation.} Given the superior effectiveness of $\kw{FSim}_{\chi}$ in the above case studies, one may also be interested in its efficiency. Next, we show the running time of $\kw{FSim}_{\chi}$ (with 32 threads) and the most effective baseline in each case study. We also report the running time of exact simulation (or its variant) if it is applied and effective in the case study. For pattern matching, $\kw{FSim}_{\chi}$ on average took 0.25s per query. In comparison, exact simulation took around 1.2s, and \kw{TSpan}, the most effective baseline, spent more than 70s. In similarity measurement, \kw{nSimGram} took 0.03ms to compute a single node pair, while $\kw{FSim}_{\chi}$ finished the computation within 6500s for 134060$\times$134060 pairs, or roughly 0.0004ms per pair. In graph alignment, $k$-bisimulation ($k=4$) spent 0.4s on the computation, and \kw{EWS} spent 1496s. Our $\kw{FSim}_{\chi}$ ran somewhat slower than \kw{EWS} and took 3120s, which is tolerable as it is much more effective than the other algorithms. Note that it is not straightforward, and potentially unfair, to compare with all the baselines as they either focus on per-query computation (e.g., \kw{PathSim} and \kw{JoinSim}) or have been individually implemented in different languages (e.g., Olap in Python and \kw{FINAL} in Matlab).}
\section{Related Work} \label{sec:related_work}
\stitle{Simulation and Its Variants.}
In this paper, we focused on four simulation variants: simple simulation \cite{DBLP:conf/ijcai/Milner71, DBLP:journals/pvldb/MaCFHW11}, bisimulation \cite{milner1989communication}, degree-preserving simulation \cite{DBLP:journals/pvldb/SongGCW14} and bijective simulation. The original definition of simulation \cite{DBLP:conf/ijcai/Milner71} only considered out-neighbors, but Ma et al.'s redefinition in 2011 \cite{DBLP:journals/pvldb/MaCFHW11} takes in-neighbors into account and hence is the definition we used. Reverting to the original definition is as easy as setting $w^- = 0$ in our framework. {\color{black}Additionally, we discussed a variant of approximate bisimulation, namely $k$-bisimulation \cite{DBLP:books/cu/12/AcetoIS12,DBLP:conf/bncod/LuoLFBHW13,DBLP:conf/cikm/LuoFHWB13,DBLP:conf/sac/HeeswijkFP16}, and investigated its relation to our framework} (\refsec{discussion}). There are other variants that have not yet been included in the framework, including bounded simulation \cite{DBLP:journals/pvldb/FanLMTWW10} and weak simulation \cite{milner1989communication}. These variants consider the $k$-hop neighbors ($k\geq 1$) in addition to the immediate neighbors. As interesting future work, we will study how to incorporate them into our framework. {\color{black}There are also some algorithms that aim to compute simulation (variants) efficiently and effectively, e.g., a hash-based algorithm in \cite{DBLP:conf/sac/HeeswijkFP16}, external-memory algorithms in \cite{DBLP:conf/sigmod/HellingsFH12,DBLP:conf/cikm/LuoFHWB13}, a distributed algorithm in \cite{DBLP:conf/bncod/LuoLFBHW13} and a partition refinement algorithm in \cite{DBLP:journals/acta/Ranzato14}. However, all these algorithms compute the ``yes-or-no'' simulation (or its variant) and cannot provide fractional scores as proposed in this paper.}
{\color{black} \stitle{Node Similarity Measures.}}
{\color{black} We have shown that $\kw{FSim}_{\kw{bj}}$ is qualified for node similarity measurement. Thus, we review node similarity measures on labeled graphs.} \kw{SimRank} \cite{DBLP:conf/kdd/JehW02} and \kw{RoleSim} \cite{DBLP:conf/kdd/JinLH11} are two representative measures, and their relations to our $\kw{FSim}_{\chi}$ have been discussed in \refsec{discussion}.
As these two measures are less effective in computing node similarity on labeled graphs \cite{DBLP:conf/kdd/ConteFGMSU18,DBLP:journals/pvldb/SunHYYW11}, similarity measures tailored to labeled graphs \cite{DBLP:conf/kdd/ConteFGMSU18,DBLP:conf/kdd/HuangZCSML16,DBLP:journals/pvldb/SunHYYW11,DBLP:journals/tkde/XiongZY15} have been proposed. \kw{PathSim} \cite{DBLP:journals/pvldb/SunHYYW11}, for instance, uses a ratio of meta-paths connecting two nodes as the measure. \kw{JoinSim} \cite{DBLP:journals/tkde/XiongZY15} is similar to \kw{PathSim}, but it satisfies the triangle inequality. \kw{nSimGram} \cite{DBLP:conf/kdd/ConteFGMSU18} computes node similarity based on q-grams instead of meta-paths to capture more topological information. {\color{black}Note that these measures cannot substitute for our work, as their scores are not related to simulation and thus are not suitable for quantifying the extent of simulation.}
{\color{black}
{\color{black} \stitle{Similarity-based Applications.} There are a number of works on pattern matching and graph alignment that are based on node similarity techniques. These works differ mainly in how they measure node similarity. Specifically,} \kw{IsoRank} \cite{DBLP:journals/pnas/0001XB08} computes the similarity between two nodes based on a weighted average of their neighbors' scores. \kw{NeMa} \cite{DBLP:journals/pvldb/KhanWAY13} defines a vector that encodes the neighborhood information for each node. The distance between two nodes is then computed from these vectors. \kw{NAGA} \cite{DBLP:conf/www/DuttaN017} leverages statistical significance through a chi-square measure to compute node similarity. \kw{REGAL} \cite{DBLP:conf/cikm/HeimannSSK18} measures the similarity of two nodes by taking the information of $k$-hop neighbors into account. \kw{FIRST} \cite{DBLP:conf/kdd/DuZCT17} and \kw{FINAL} \cite{DBLP:conf/kdd/ZhangT16} use a Sylvester equation to compute similarities, which encodes the structural consistency and attribute consistency of two networks. {\color{black}For similar reasons, these works are also not suitable for quantifying the degree of simulation.}}
\section{Conclusion} \label{sec:conclusion}
In this paper, we formally define fractional $\chi$-simulation to quantify the degree to which one node simulates another by a $\chi$-simulation. We then propose the $\kw{FSim}_{\chi}$ computation framework to realize the quantification for all $\chi$-simulations. We conduct extensive experiments to demonstrate the effectiveness and efficiency of the fractional $\chi$-simulation framework. {\color{black}Considering that end-users are often also interested in top-$k$ similarity search, we plan, in future work, to devise efficient techniques for processing top-$k$ queries based on $\kw{FSim}_{\chi}$.}
\section{Introduction}
\label{sec:intro}
Many theories allow the existence of higher dimensions that are non-compact. The study of the stability and structure of stars in such a framework is interesting and has been explored by different groups \cite{chav08, krama14, pc00}. All these studies have been performed for compact objects like white dwarfs, neutron stars, and black holes, all of which exhibit strong-field gravity. These studies were mostly analytical and did not give numerical values of observable properties like the mass, radius, gravitational redshift, etc. for the stars. The reason for this is that the values of the gravitational constant and the properties of matter in higher dimensions are not known. In the present work, I provide values of these observables for a few dimensions and discuss possible observational aspects.
\section{Metric, equations of hydrostatic equilibrium, and structure of static, isotropic, spherically symmetric stars in $D$ dimension}
\label{sec:formalism}
Spacetime coordinates in $D$ dimensions can be chosen as ($ct$, $x^i$, $r$, $\theta^j$), where $i= 1, 2, \ldots n_c$ represent the directions in the compact space, $j= 1,2, \ldots m$ represent the directions in the non-compact transverse space, and $c$ is the speed of light in vacuum \cite{krama14}. Hence $D=2+n_c + m$ and the most general line element is:
\begin{equation}
\label{eq:line1}
ds^2 = e^{\nu} \, (c \, dt)^2 - \displaystyle\sum_{i=1}^{n_c} e^{\mu_i} \,(dx^i)^2 - e^{\lambda} \, dr^2 - e^{\sigma} \, d\Omega_{m}^2 ~,
\end{equation} where $d\Omega_{m}^2 = d\theta_1^{\,2} + \sin^2 \theta_1 \, d\theta_2^{\,2} + \sin^2 \theta_1 \, \sin^2 \theta_2 \, d\theta_3^{\,2} + \ldots \prod_{j=1}^{m-1} \sin^2 \theta_j \, d \theta_m^{2}$ is the line element on the $m$ dimensional unit sphere.
Ignoring the compact space, i.e., by taking $n_c =0$, one gets $D=2+m=3+n$, where we have introduced a new number $n=m-1=D-3$. For a spherically symmetric metric in this situation, eq. (\ref{eq:line1}) reduces to:
\begin{equation}
\label{eq:line2}
ds^2 = e^{\nu(r)} \, (c \, dt)^2 - e^{\lambda(r)} \, dr^2 - r^2 \, d\Omega_{m}^2 ~.
\end{equation}
Einstein's fields equations can be written as \cite{pc00}:
\begin{equation}
\label{eq:einstein}
R_{\alpha \beta} = \frac{8 \pi \widetilde{G}}{c^4} \left[ T_{\alpha \beta} -\frac{1}{n+1} g_{\alpha \beta} \, T \right] ~,
\end{equation} where the Greek indices ($\alpha$, $\beta$) go from 1 to $D$, $\widetilde{G}$ is the gravitational constant in $D$ dimension in the unit of $G ~{\rm length}^{D-4}$, and $G$ is the gravitational constant for $D=4$ dimensional spacetime. $T_{\alpha \beta}$ is the energy momentum tensor in the $D$ dimension.
For the isotropic distribution, the energy-momentum tensor becomes
\begin{equation}
\label{eq:energymomentum}
T_{\alpha}^{~ \beta} = diag(\widetilde{\rho c^2 }, -\widetilde{P}, -\widetilde{P}, \ldots, -\widetilde{P}) ~,
\end{equation} where $\widetilde{\rho} c^2$ is the energy density and $\widetilde{P}$ is the pressure in $D$ dimensions, both having the unit of mass-length$^{-D+3}$-time$^{-2}$. $\widetilde{\rho}$ is the mass density in the unit of mass-length$^{-D+1}$. The relation between $\widetilde{\rho} c^2$ and $\widetilde{P}$ is the Equation of State (EoS) in $D$ dimensions. The reason for the lack of numerical analysis for $D > 4$ is that the values of $\widetilde{G}$, $\widetilde{\rho}\, c^2$, and $\widetilde{P}$ in various dimensions are not known.
Solving eqns. (\ref{eq:einstein}) with the help of eqns (\ref{eq:energymomentum}), one gets \cite{pc00}:
\begin{subequations}\label{eq:fieldall}
\begin{align}
\label{eq:field1}
e^{-\lambda(r)} \left( \frac{\lambda^{\prime} }{r} -\frac{n}{r^2} \right) + \frac{n}{r^2} & = \frac{16 \pi \, \widetilde{G} }{(n+1) c^4} \, \widetilde{\rho} c^2
= \frac{16 \pi \, G }{(n+1) c^4} \, \rho c^2
\\
\label{eq:field2}
e^{-\lambda(r)} \left( \frac{\nu^{\prime} }{r} + \frac{n}{r^2} \right) - \frac{n}{r^2} & = \frac{16 \pi \, \widetilde{G}}{(n+1) c^4} \, \widetilde{P}
= \frac{16 \pi \, G }{(n+1) c^4} \, P
\\
\label{eq:field3}
e^{-\lambda(r)} \left( \frac{ \nu^{\prime \, \prime}}{2} + \frac{{\nu^{\prime}}^2}{4} -\frac{\nu^{\prime} \lambda^{\prime} }{4} -\frac{ n \, \lambda^{\prime} + \nu^{\prime}}{2 \, r} - \frac{n}{r^2} \right) + \frac{n}{r^2} & = 0
\end{align}
\end{subequations} where a prime symbol over a parameter means the first derivative with respect to $r$ and a double prime symbol means the second derivative with respect to $r$. In eqns. (\ref{eq:fieldall}), we have used $\widetilde{G} \, \widetilde{\rho} = G \, \rho$ and $\widetilde{G} \, \widetilde{P} = G \, P$ where $\rho$ is the mass density and $P$ is the pressure for $D=4$. Integrating equations (\ref{eq:field1}), we get
\begin{equation}
\label{eq:metric1}
e^{-\lambda(r)} = 1 - \frac{2 G}{c^2}\, \cdot \, \frac{1}{r^n} \, m(r) ~,
\end{equation} where
\begin{equation}
\label{eq:massfunction}
m(r) = \frac{8 \pi}{n+1} \int_{0}^{r} \rho(r^{\prime}) \, {r^{\prime}}^{n+1} \, dr^{\prime}
\end{equation} is called the mass-function. Note that the dimension of $m(r)$ is not that of mass but of mass-length$^{n-1}$, and the density is kept inside the integral because, inside a star, the density is a function of the radial coordinate.
The conservation of energy gives:
\begin{equation}
\label{eq:energycons}
\begin{split}
\frac{d\widetilde{P}(r)}{dr} & = -\frac{1}{2} \frac{d \nu}{dr} \left( \widetilde{\rho} c^2 + \widetilde{P} \right) ~~~ {\rm or}
\\
\frac{dP(r)}{dr} & = -\frac{1}{2} \frac{d \nu}{dr} \left( \rho c^2 + P \right)
\end{split}
\end{equation}
Using eqns. (\ref{eq:field2}), (\ref{eq:metric1}), and (\ref{eq:energycons}), we get
\begin{equation}
\label{eq:dpdr}
\frac{dP(r)}{dr} = \frac{- \frac{G}{c^4} \left[ \rho(r) c^2 + P(r) \right] \, \left[ n(n+1) \, m(r) \, c^2 + 8 \pi \, P(r) \, r^{n+2} \right] }{(n+1) \, r^{n+1} \, \left[ 1 - \frac{2 G}{c^2} \cdot \frac{m(r)}{r^n} \right]} ~.
\end{equation}
Eqns. (\ref{eq:dpdr}) and (\ref{eq:massfunction}) are the equations of hydrostatic equilibrium in $D$ dimensions. For $D=4$, i.e., $n=1$, these equations reduce to the well-known Tolman-Oppenheimer-Volkoff equations \cite{tol39, op39}.
To get the structure of a star, one first needs to choose a high density $\rho_c$ at the centre of the star ($r=0$) and corresponding pressure $P_c$ from a chosen EoS, and calculate $\Delta \, m = m(r=\Delta \, r)$ by assuming that the density is constant over a small $\Delta \, r$. Using these initial values of the pressure and mass-function one integrates eqns. (\ref{eq:dpdr}) and (\ref{eq:massfunction}) until the value of the pressure becomes zero. The value of $r$ at which one gets zero pressure is the radius $R$ of the star, and $m(r=R) = \mathcal{M}$ is the total ``mass'' of the star. $\mathcal{M}$ also has the dimension of $M \, L^{n-1}$, and we can define the physical mass of the star as $M=\mathcal{M}/R^{n-1}$ which has the dimension of mass. The line element interior and exterior of such a spherically symmetric static star can be written as \cite{pc00}:
\begin{equation}
\label{eq:starmetric}
\begin{split}
ds^2 &= \left( 1 - \frac{2G}{c^2} \, \frac{m(r)}{r^n} \right) \, (c \, dt)^2 - \left( 1 - \frac{2G}{c^2} \, \frac{m(r)}{r^n} \right)^{-1} \, dr^2 - r^2 \, d\Omega_{m}^2
\qquad
r < R
\\
&= \left( 1 - \frac{2G}{c^2} \, \frac{\mathcal{M}}{R^n} \right) \, (c \, dt)^2 - \left( 1 - \frac{2G}{c^2} \, \frac{\mathcal{M}}{R^n} \right)^{-1} \, dr^2 - r^2 \, d\Omega_{m}^2 \qquad
r \geq R
\\
&= \left( 1 - \frac{2G}{c^2} \, \frac{M}{R} \right) \, (c \, dt)^2 - \left( 1 - \frac{2G}{c^2} \, \frac{M}{R} \right)^{-1} \, dr^2 - r^2 \, d\Omega_{m}^2
\qquad
r \geq R ~.
\end{split}
\end{equation}
From eqn (\ref{eq:starmetric}), it is obvious that the surface of the star would show a singularity if $\frac{2G}{c^2} \, \frac{M}{R} \geq 1$, i.e., if $ R \leq \frac{2G}{c^2} \, M$. This is independent of $n$ and is the same as the Schwarzschild limit for the case of $D=4$. Similarly, the gravitational redshift parameter can be written as:
\begin{equation}
\label{eq:redshift}
z+1 = \frac{\lambda_{\rm observed}}{\lambda_{\rm emitted}} = \left( 1 - \frac{2 G}{c^2} \cdot \frac{M}{R} \right)^{-1/2} ~,
\end{equation} which is also independent of $n$ and similar to the case of $D=4$.
The compactness limit for a spherically symmetric static star is known as \cite{pc00}:
\begin{equation}
\label{eq:buchdahl}
\frac{G}{c^2} \cdot \frac{M}{R} \leq \frac{2 \, (n+1)}{(n+2)^2} ~,
\end{equation} which reduces to the standard `Buchdahl limit' \cite{buch59} $\frac{G}{c^2} \cdot \frac{M}{R} \leq \frac{4}{9}$ for 4-dimensional spacetime ($n=1$). It is possible to numerically solve eqns. (\ref{eq:massfunction}), (\ref{eq:dpdr}), (\ref{eq:redshift}), and (\ref{eq:buchdahl}), as all of these now contain parameters from $D=4$, e.g., $G$, $\rho$, and $P$, which are known. In particular, we can choose a known EoS derived for $D=4$ to get values of $\rho$ and $P$. We report results of such numerical solutions in the next section.
\section{Static, isotropic, spherically symmetric neutron stars with an APR-like equations of state in $D$ dimension}
\label{sec:results}
There are many EoSs for dense matter, many of which have been ruled out by the gravitational wave event GW170817 detected by the LIGO-Virgo collaboration \cite{aa18}. The assumption was that the event was caused by the merger of two neutron stars in $D=4$. We choose one of the few EoSs allowed by the GW170817 event, i.e., the Akmal-Pandharipande-Ravenhall (APR) EoS \cite{apr98}.
We compute $M$ and $R$ for various central densities of stars for $n=1,2,3,4$ by numerically solving eqns. (\ref{eq:dpdr}) and (\ref{eq:massfunction}). The resulting $M-R$ relations are shown with solid lines in fig. \ref{fig:massradius}, where the radius in kilometers is plotted along the abscissa and the mass in units of solar mass ($M_{\odot} = 1.98847 \times 10^{30}$ kg) is plotted along the ordinate. Each point on an $M-R$ curve represents a particular star, and different points are obtained with different choices of the central density $\rho_c$. For any fixed $n$, a small value of $\rho_c$ gives a small mass and large radius. As the value of $\rho_c$ increases, the mass of the star also increases until it reaches a maximum value, which corresponds to $\rho_{c, max}$. No stable stellar configuration is possible for $\rho_c \geq \rho_{c, max}$.
We find that the $M-R$ curves always stay below the compactness limit given by eqn (\ref{eq:buchdahl}). This limit for each $n$ has been shown in the same figure with dashed lines. Moreover, lines for $z=$ 0.05, 0.1, 0.14, 0.2, 0.3, and 0.7 have been shown with dots, and the singular region has been shown as the hatched region in the top-left corner of the plot.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth, angle=-90]{f1.pdf}
\caption{Mass-radius plots (solid lines) for APR EoS obtained for $n=1, \,2, \,3, {\rm and} \, 4$ where $n=D-3$. The hatched region in the top-left corner is the region of singularity. The limit of compactness are shown for each $n$ with dashed lines. The lines for gravitational redshift parameter $z=$ 0.05, 0.1, 0.14, 0.2, 0.3, and 0.7 are shown with dotted lines.}
\label{fig:massradius}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth, angle=-90]{f2.pdf}
\caption{APR EoS. The part marked with small squares is the EoS available in the literature and the straight line is our fit.}
\label{fig:eos}
\end{figure}
Our most interesting findings are: (i) for each $n$, we obtain a maximum mass, (ii) the value of the maximum mass decreases with increasing $n$, (iii) the central density is higher for the maximum mass at higher dimension as shown in Table \ref{tab:maximummass}, (iv) the stars are less compact for larger $n$, e.g., for $M=1.03 \, {\rm M_{\odot}}$, we get $R=$ 11.27, 13.05, 15.14, and 17.25 km, and central densities 0.453$\times 10^3$, 0.663$\times 10^3$, 0.873$\times 10^3$, and 1.173$\times 10^3$ ${\rm~ MeV ~ fm^{-3}}$ for $n=$ 1, 2, 3, and 4 respectively.
Also, note that the APR EoS in the literature is available only up to a density of 2.0439 $\times 10^3 {\rm ~ MeV ~ fm^{-3}}$, which does not give the maximum mass for $n>2$. We have therefore extrapolated the EoS by fitting the high-density part with the straight line $P=0.732112 \times (\rho \, c^2) + 765.528$. Figure 2 shows the EoS, both the original and the fit.
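The extrapolation is a simple linear least-squares fit to the high-density tail of the tabulated EoS; a sketch of this step is shown below, with placeholder values standing in for the actual tabulated points.
\begin{verbatim}
# Sketch of the straight-line extrapolation of the high-density EoS tail;
# the arrays below are placeholders, not the actual APR table values.
import numpy as np

rho_c2 = np.array([1200.0, 1500.0, 1800.0, 2043.9])   # rho*c^2 (MeV fm^-3)
P_tab  = np.array([1640.0, 1870.0, 2090.0, 2270.0])   # P (MeV fm^-3)

slope, intercept = np.polyfit(rho_c2, P_tab, 1)        # linear fit

def P_extrapolated(x):
    # pressure beyond the tabulated range: P = slope * (rho c^2) + intercept
    return slope * x + intercept
\end{verbatim}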
\begin{table}[tbp]
\centering
\begin{tabular}{|l |c |c |c |c|}
\hline
parameter & $n=1$ & $n=2$ & $n=3$ & $n=4$ \\
\hline
maximum mass (${\rm M_{\odot}}$) & 2.19 & 1.84 & 1.57 & 1.35 \\
& & & & \\
corresponding radius (${\rm km}$) & 9.83 & 12.52 & 14.93 & 17.17 \\
& & & & \\
corresponding central density ($10^{3} \, {\rm MeV~fm^{-3}}$) & 1.553 & 1.963 & 2.573 & 3.783 \\
\hline
\end{tabular}
\caption{\label{tab:maximummass} Maximum mass and corresponding radius and central density for $n=$ 1, 2, 3, and 4 where $n=D-3$.}
\end{table}
\section{Discussion}
\label{sec:disc}
Note that there are some neutron stars with measured masses larger than 2 ${\rm M_{\odot}}$, e.g., PSR J0740+6620 with $2.14^{+0.20}_{-0.18}~ {\rm M_{\odot}}$ \cite{cfr20} and PSR J0348+0432 with $2.01 \pm 0.04 ~ {\rm M_{\odot}}$ \cite{afw13}. This implies that, at least for these two neutron stars, either we can rule out the possibility of $n \geq 2$, as the maximum mass possible for the APR EoS is $1.84~{\rm M_{\odot}}$ for $n=2$ and lower for higher $n$, or the EoS is much stiffer, which would result in a higher value of the maximum mass for each value of $n$. However, it remains an open question whether low-mass neutron stars simply have a low central density, or whether they belong to higher dimensions. In principle, if one could measure $M$, $R$, and $z$ simultaneously, it would be possible to constrain the EoS, the central density, and the dimension of the spacetime inside that object. This will be an extremely challenging task for observational astronomers, but not impossible if timing and spectral analysis of binary pulsars are combined: timing analysis would yield the mass of the star, while spectral analysis would give the radius (using the mass obtained from the timing analysis) and the gravitational redshift (if a known spectral line is detected).
\acknowledgments
The author thanks S. Kalyana Rama for many illuminating discussions.
\section{Introduction}
COVID-19 is a recently emerging infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) \cite{Lai2020}. After outbreaks of severe acute respiratory syndrome coronavirus (SARS-CoV) in 2002, and of Middle East respiratory syndrome coronavirus (MERS-CoV) in 2012, this is the third outbreak of a coronavirus since the turn of the century. Mathematical models of COVID-19 transmission at the population level have been instrumental in controlling the spread of the virus, but a detailed understanding of within-host infection dynamics is still lacking. Like SARS-CoV and MERS-CoV, SARS-CoV-2 is a betacoronavirus in Group IV of the Baltimore classification of viruses. Compared with other single-stranded (ss)RNA viruses, coronaviruses have the longest genomes. The smallest RNA viruses, for example, are only $\sim$ 1-4kb in length, and HIV has two copies of a $\sim 10$kb genome, whilst the SARS-CoV-2 genome is a positive-sense, ssRNA molecule of $\sim$ 30kb. As a consequence, the viral life cycle of coronaviruses is distinct from that of most other ssRNA viruses, and existing intracellular infection models, e.g. for hepatitis C virus \cite{Aunins2018}, cannot be applied here. Therefore, we introduce here a novel intracellular model of SARS-CoV-2 infection that incorporates essential steps specific to coronaviral life cycles. This model enables us to study in detail the viral dynamics inside an infected cell, and provides a framework to assess the impacts of antiviral treatments on the infection dynamics within such a cell. In particular, it enables us to quantify the impact of different treatment options on the viral load that is secreted from an infected cell.
Outcomes from the intracellular model are then integrated into an intercellular model, that takes the impact of the immune response on infection dynamics within an infected individual into account. The model has been parameterised with data from 12 patients from a study in Singapore \cite{Young2020}, enabling us to generate a generic profile of disease progression in patients that have recovered from the disease. Model predictions agree well with experimentally and clinically measured parameters such as the duration of the \textit{incubation period}, suggesting that this scenario is representative of disease progression seen in COVID-19 patients. We then use this model to study the infection dynamics in patients with different levels of immune responses by varying parameters associated with the immune response, such as the proliferation rates of effector cells and antibodies, and the rate by which effector cells remove infected cells. Comparison of different scenarios is based on tissue damage and viral load, highlighting the impact(s) of antibodies and adaptive cell-mediated immune response on infection dynamics.
This provides a framework in which to compare the impacts of different forms of antiviral therapy and assess their synergies. We focus here on two prominent forms of therapy against COVID-19: remdesivir, which inhibits virus production within an infected cell \cite{Beigel2020}, and convalescent plasma (CP) therapy, whereby CP derived from recently recovered donors is transfused to patients as additional support \cite{Duan2020}. \textcolor{mycolor}{Recent studies have concluded that remdesivir is an effective antiviral treatment option for COVID-19 \cite{Beigel2020}. However, the rapid spread and novel nature of the disease make the detailed evaluation of effective treatment protocols difficult. Using mathematical models enables us to study in detail the effect of the drug remdesivir on viral load in a COVID-19 infection. Gon{\c{c}}alves et al. used a ``target-cell limited'' model to evaluate the efficacy of different treatment options against SARS-CoV-2 infections \cite{Gonccalves2020}. They showed that if drugs such as remdesivir are administered very early, this may help control viral load, but may not have a major effect in severely ill patients. Iwanami et al. also introduced a mathematical model to describe the within-host viral dynamics of SARS-CoV-2 and demonstrated that late timing of treatment initiation can mask the effect of antivirals in clinical studies of COVID-19 \cite{Iwanami2020}. However, none of these models directly includes the impact of the immune response, which plays an important role in the outcome of the infection. In order to analyse different aspects of viral dynamics, studying the interactions between viruses and the immune system of the host is crucial \cite{Nowak2000,Andrew2007}. In this work we take the impact of the immune response on infection dynamics into account to perform a more robust analysis of different treatment options. Regarding CP therapy, several studies performed in various countries have shown that this treatment is effective against COVID-19 infections, and its safety has been well established in a randomized clinical trial (RCT) on a large population \cite{Duan2020,Joyner2020a,Joyner2020b,Ahn2020,Ye2020,Gharbharan2020,Khulood2020}. However, finding the optimal dose and time for CP therapy is still debated \cite{Duan2020}. Our intercellular model provides insights into the effects of different CP dosages and treatment start times on infection-related quantities, and this supports efforts to combat the COVID-19 pandemic.}
\section{Results}
\subsection{An {\emph{in silico}} model of intracellular SARS-CoV-2 infection dynamics}
Our stochastic model of viral infection dynamics within an infected host cell tracks the different viral and cellular components required for formation of progeny virus. These include the structural proteins that make up the virus: the envelope (E) protein, the membrane (M) protein, and the spike (S) protein, as well as the nucleocapsid (N) protein (Fig. \ref{qrtime}a). N protein forms a complex with the genomic RNA (gRNA), and thus aids its compaction for ease of packaging within the viral envelope. The S protein binds the \textcolor{mycolor}{Angiotensin-converting enzyme 2 (ACE-2)} receptor on human cells and is therefore essential for cell entry. The model also includes non-structural proteins that are important for the viral life cycle, such as the replicase–transcriptase complex (RTC), and keeps track of the numbers of gRNAs (and subgenomic sgRNAs) at different stages of the replication process. These include both the original plus-sense template of the RNA molecules, as well as their negative-sense variants that arise transiently during transcription. There are nine negative-sense sub-genomic RNAs (-sgRNAs), corresponding to different gene products. In particular, there is an individual one for each of the structural proteins, allowing the virus to produce these components in the different quantities required for formation of viral particles \cite{Bar2020}.
The reactions modelling viral replication within the host cell are described in detail in the Supplementary Information (SI) \textcolor{mycolor}{ and {\it Materials and Methods}}. Here we provide a brief summary of the main reactions. The SARS-CoV-2 genome encodes two polyproteins, pp1a and pp1ab. The former is translated from the open reading frame ORF1a, and the latter from the overlapping reading frames 1a and 1b \cite{Fehr2015} via a -1 ribosomal frameshift during the translational elongation step. The relative frequencies of occurrence of the ribosomal frameshift is a key mechanism of self-regulation of protein expression of the virus and hence an important parameter in our model, which is captured by the {\it frameshift probability} $q$ \cite{Nakagawa2016}.
Proteins cleaved from pp1a and pp1ab form the RTC, which is then used by the virus for genome replication and production of the nine (-)sgRNAs. (-)sgRNAs are produced through discontinuous transcription \cite{De2016}, where elongation of nascent (-)RNA continues until the first functional transcription-regulating sequence (TRS) is encountered. A fixed proportion of RTCs will disregard the TRS motif and continue to elongate the nascent strand, while the remainder will halt synthesis of the nascent minus strand and instead synthesise (-)sgRNAs \cite{Sawicki2007}. The nine TRS motifs in the SARS-CoV-2 genome correspond to the nine sgRNAs produced, hence the choice to elongate or terminate synthesis occurs up to nine times during the elongation process \cite{Kim2020}. The (-)RNA and (-)sgRNAs produce positive-sense RNAs by recruiting RTC. The sgRNAs are translated to form proteins using cellular ribosomes. In the final step, a gRNA and the structural proteins (S, M, N, and E) form a new virion according to an assembly reaction that takes the stoichiometry of the different viral components into account \cite{Bar2020}, and the virus particle is then released from the host cell.
Stochastic simulations of the reactions were implemented using the Gillespie algorithm \cite{Gillespie1977}, and the number of particles released over the course of 100 hours was computed as the average over 200 stochastic simulations. Parameter values used (SI table S1) are predominantly based on \textcolor{mycolor}{experimentally available data \cite{Dimelow2009,Bar2020,Kim2020,Te2010,Zhang2020,Gordon2020};} for parameters for which no data were available, we ensured that our main conclusions are robust against their variation. In particular, the release rate of virions, following a time lag between infection of a host cell and its first release of viral particles, is constant and virions are secreted linearly (SI Fig. S3).
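For reference, the core of such a stochastic simulation is the standard Gillespie update, sketched below for a toy two-reaction system (template replication and virion assembly/release); this illustrates the algorithm only and is not the full reaction network of our model.
\begin{verbatim}
# Minimal Gillespie (SSA) sketch with a toy two-reaction viral system;
# illustrative only, not the full intracellular reaction network.
import math
import random

def gillespie(state, propensities, updates, t_end, seed=0):
    rng, t, traj = random.Random(seed), 0.0, []
    while t < t_end:
        a = [f(state) for f in propensities]
        a0 = sum(a)
        if a0 == 0:
            break
        t += -math.log(1.0 - rng.random()) / a0   # waiting time
        r, i = rng.random() * a0, 0               # pick the firing reaction
        while r > a[i]:
            r, i = r - a[i], i + 1
        state = updates[i](state)
        traj.append((t, dict(state)))
    return traj

prop = [lambda s: 0.10 * s['gRNA'],               # gRNA -> 2 gRNA
        lambda s: 0.02 * s['gRNA']]               # gRNA -> released virion
upd  = [lambda s: {**s, 'gRNA': s['gRNA'] + 1},
        lambda s: {**s, 'gRNA': s['gRNA'] - 1, 'virion': s['virion'] + 1}]
traj = gillespie({'gRNA': 1, 'virion': 0}, prop, upd, t_end=100.0)
\end{verbatim}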
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{virusandqrandtreatment.png}
\caption{An earlier treatment start, especially during the latent period, is more effective. (a) Illustration of a SARS-CoV-2 virion; the viral genome (dark green) is in complex with nucleocapsid (N) protein (light green) and is enclosed by the viral envelope that is studded by other structural glycoproteins, the spike (S) protein (red), the membrane (M) protein (maroon), and the envelope (E) protein (yellow). (b) Time lag before the release of the first virion from an infected cell; the maximal release of virions occurs when the RTC elongation probability, $r$, is high and the frameshifting rate, $q$, is between 0.3 and 0.5. (c) and (d) Profiles of viral load from an infected cell after introducing treatment at different times post infection, for a concentration of 25 (c) and 50 (d) molecules of remdesivir, respectively. \textcolor{mycolor}{The black solid curve indicates the drug-free control. The magenta (long-dashed), green (dotted), red (dashed-dotted), and orange (dashed) curves correspond to a treatment start at 50, 30, 20, and 10 hours post infection, respectively.} Parameter values are given in (SI Table S1).}
\label{qrtime}
\end{figure}
We note, however, that the length of the time lag, and correspondingly the total number of virions released, are affected by some of these parameters and therefore warrant a more detailed investigation. For example, increasing the ribosomal protein production rate (SI Fig. S3a) or the RTC nucleotide association rate (SI Fig. S3b) decreases the time lag and therefore increases the number of virions released. By contrast, variation of the half-life of RTC or the formation rate of RTC from the constituent proteins does not have a significant effect on the time lag (SI Fig. S3c and d). Figure \ref{qrtime}b shows the time lag to the release of the first virion from an infected cell as a function of the ribosomal frame shifting probability $q$, and of the RTC elongation probability $r$.
\textcolor{mycolor}{Figure \ref{qrtime}b} indicates that decreasing $r$ results in a rapid increase in the time lag: for example, when $r=0.55$ and the virus produces many sub-genomic fragments, this time lag is longer than 200 hours. This implies that viral load is maximised for the scenario that the RTC favours continuation of the transcription process when encountering a TRS. Given the importance of frameshifting in the coronavirus life cycle to control the relative numbers of different viral components, it has been argued that the virus will have evolved to optimise this ratio. In particular, frameshift signals have been characterized experimentally previously to have efficiencies in the range of 20–45\% \cite{Baranov2005,Brierley1987,Herald1993,Plant2006}, although a recent study has suggested that the frameshift rate may be slightly higher \cite{Irigoyen2016}. Our model identifies an optimal value of $0.3<q<0.5$ as the range with the lowest time lag and hence maximal virion release (Fig. \ref{qrtime}b), in good agreement with the experimentally determined range of 20-45\%.
\subsection{The intracellular infection model in the presence and absence of antiviral therapy}
In order to assess the impact of an antiviral drug on viral load in the context of the intracellular model, we use the example of remdesivir, which is a widely used treatment option against COVID-19. Remdesivir, originally developed as a treatment for Hepatitis C virus and later trialled for efficacy against Ebola, acts as a nucleoside analogue that mimics adenosine \cite{Warren2016}. During the replication process, the RTC may insert remdesivir molecules instead of adenosine, resulting in capping of the strand and thus terminating replication \cite{Zhang2020}. To capture this, we have included additional reactions in the model that describe remdesivir binding to the RTC complexes on the gRNAs and sgRNAs (see SI for details), and track the effect of a given, fixed number of remdesivir molecules per cell on the release of viral particles from an infected host cell.
Figures \ref{qrtime}c and d show the impact on viral load released from a single infected cell for two different concentrations of remdesivir, as well as treatment starts at different times post-infection (TPIs). In Fig. \ref{qrtime}c, a free concentration equivalent to 25 remdesivir molecules per cell is considered, corresponding to a concentration of $\sim 0.06$ \textmu M (see SI) \cite{Gordon2020}. The black solid curve indicates the viral load in the absence of treatment as a control, computed as an average over 200 stochastic simulations of the model. Magenta (long-dashed), green (dotted), red (dashed-dotted), and orange (dashed) curves show the impact of treatment starts at TPIs of 50, 30, 20, and 10 hours, respectively. Figure \ref{qrtime}c demonstrates that starting treatment during the latent period reduces the total number of virions released significantly. Even for a later treatment start the rate of virion production is slowed down, but the earlier treatment is started, the stronger the reduction in the virion production rate. Our results are consistent with experiments that probed the impact of remdesivir on \textcolor{mycolor}{mouse hepatitis virus (MHV)} infection \cite{Agostini2018}, which also revealed that starting treatment earlier and during the latent period is more effective, as is also the case in other betacoronaviruses. Figure \ref{qrtime}d shows the impact of doubling the drug concentration (equivalent to a concentration of 50 molecules of remdesivir per cell). In this case, starting treatment early during infection, at 20 hours post infection, reduces the number of virions released on average by more than half. However, starting treatment later in the infection, such as 30 or 50 hours post infection, decreases the number of released virions by a smaller fraction. This suggests that although an increased drug concentration can be beneficial, starting the treatment earlier is more effective at reducing viral load than an increase in dosage.
\textcolor{mycolor}{The intracellular model provides new insights into the release of viral particles from an infected cell, both in the absence and presence of antiviral treatment. The model shows that there is a time lag between infection of a host cell and the first release of new virions. It also shows that virions are effectively released linearly in time after the time lag, which is not observed in other viral infections such as hepatitis B virus (HBV) infection \cite{Fatehi2020}. The model reveals that antiviral therapy based on remdesivir has a higher efficacy in infected cells that are still in the latent period compared with those that are already producing virions. In the next section, we incorporate these findings into an intercellular model of within-host infection dynamics. The model can be used as a platform for comparing different therapeutic strategies that may be developed in the future against COVID-19 infections \cite{Fatehi2020}.}
\subsection{Within-host model of SARS-CoV-2 infection dynamics}
We integrate results from the intracellular model into an intercellular model of within-host infection dynamics in order to probe the impact of the adaptive immune response on disease progression, both in the absence and presence of antiviral therapy. Uninfected target cells ($T$) are assumed to follow logistic growth with proliferation rate $r_T$ and carrying capacity $T_m$. Inclusion of the growth capacity of uninfected cells is important, because SARS-CoV-2 is detectable in patients over 20 days after the onset of symptoms \cite{Zou2020}, comparable to the time taken to regenerate the epithelium (up to 1 month \cite{Wright2001}). Uninfected cells are infected by free virions at rate $\beta$. Although infected cells in the latent phase probably die at a somewhat lower rate than productively infectious cells, we assume that all infected cells die at approximately the same rate, $\delta$, in order to minimise the number of free parameters that would complicate parameter estimation.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\linewidth]{diagram.png}
\caption{Diagram of the model of the immune response to a viral infection. Purple circles show host cells (uninfected cells, infected cells in the latent phase, and productively infected cells), green circles indicate the immune response (effector cells and free antibodies), and gray circles show virions. Double arrow-headed lines show natural clearance. Bar-headed lines indicate the removal of infected cells and virions by the immune response. Single arrow-headed lines show proliferation and production.}
\label{diagram}
\end{figure}
Our intracellular model shows a time lag between infection of a host cell and the first release of new virions, consistent with experimental observation \cite{Bar2020}. This effect is included into our intercellular model via a latent phase ($L$) with a lifetime defined as $1/\gamma$, where $\gamma$ denotes the average transition rate from the latent to the productively infectious ($I$) state, i.e. when the cell sheds viral particles. The intracellular model also shows that virions are effectively released linearly in time. Therefore we model infected cells as producing new virions $V$ at a constant production rate $p$, and assume that they are naturally cleared at rate $c$.
Our model of the adaptive immune response consists of antibodies $A$ (humoral immune response) that remove virions at rate $k$, and effector cells $E$ (cell-mediated immune response) that kill infected cells at rate $\mu$, assuming the same rate for cells in the latent and infectious phase in order to minimise the number of free parameters. Antibodies are produced at rate $p_A$ proportional to the viral load and are degraded at rate $d_A$. After viral clearance, the antibody level is kept at a homeostatic level, because of the long-lived plasma and memory B cells. To represent this, we add a logistic term with proliferation rate $r_A$ and carrying capacity $A_m$ to the antibody equation. \textcolor{mycolor}{A fixed basal level of effector cells is assumed ($\lambda_E/d_E$), and upon infection the population of effector cells will expand at rate $\alpha(L+I)E$ \cite{Ciupe2007,Handel2010,Chenar2018}.} $L$ and $I$ both have an impact on the immune response, because infected cells during the latent phase are producing viral proteins. Although infected cells at different stages of infection are likely to express slightly different levels of viral \textcolor{mycolor}{peptide-MHC (major histocompatibility complex)} on their surface, we assumed that the rates are the same for $L$ and $I$ in order to minimise the number of free parameters in our model.
Considering the above assumptions, the model, as illustrated in Fig. \ref{diagram}, takes on the following form:
\begin{equation}\label{inter model}
\begin{array}{l}
\vspace{0.15cm}
\displaystyle{\dfrac{dT}{dt}=r_TT(1-\dfrac{T+L+I}{T_m})-\beta TV,}\\\vspace{0.15cm}
\displaystyle{\dfrac{dL}{dt}=\beta TV-\delta L-\gamma L-\mu LE,}\\\vspace{0.15cm}
\displaystyle{\dfrac{dI}{dt}=\gamma L-\delta I-\mu IE,}\\\vspace{0.15cm}
\displaystyle{\dfrac{dV}{dt}=pI-cV-kAV,}\\\vspace{0.15cm}
\displaystyle{\dfrac{dE}{dt}=\lambda_E+\alpha(L+I)E-d_EE,}\\
\displaystyle{\dfrac{dA}{dt}=p_AV+r_AA(1-\dfrac{A}{A_m})-kAV-d_AA.}
\end{array}
\end{equation}
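For readers who wish to explore the model numerically, a minimal sketch of integrating system (\ref{inter model}) is given below; all parameter values shown are placeholders of roughly the right order of magnitude, not the fitted values of Table \ref{patient param}, and the function and variable names are our own.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters (illustration only, not the fitted values).
r_T, T_m = 0.1, 1e11
beta, delta, gamma, mu = 1e-7, 0.25, 0.9, 0.05
p, c, k = 5e3, 1.5, 3e-10
lam_E, alpha, d_E = 1.0, 7e-9, 0.5
p_A, r_A, A_m, d_A = 4e-5, 2.0, 4e12, 0.033

def rhs(t, y):
    T, L, I, V, E, A = y
    dT = r_T * T * (1 - (T + L + I) / T_m) - beta * T * V
    dL = beta * T * V - delta * L - gamma * L - mu * L * E
    dI = gamma * L - delta * I - mu * I * E
    dV = p * I - c * V - k * A * V
    dE = lam_E + alpha * (L + I) * E - d_E * E
    dA = p_A * V + r_A * A * (1 - A / A_m) - k * A * V - d_A * A
    return [dT, dL, dI, dV, dE, dA]

y0 = [T_m, 0, 0, 0.1, 0, 0]   # T(0)=T_m, V(0)=0.1 virion/ml, all else zero
sol = solve_ivp(rhs, (0, 30), y0, method="LSODA", dense_output=True)
print("viral load at 10 dpi:", sol.sol(10.0)[3])
\end{verbatim}
The stiff solver choice (LSODA) is a pragmatic one given the disparate scales of the compartments; it is not prescribed by the model.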
The model was fitted to data from 12 hospitalised patients in Singapore \cite{Young2020} using measurements of viral load (see \textit{Materials and Methods}). The parameter values derived from fitting $V$ are presented in Table \ref{patient param}. \textcolor{mycolor}{Our model captures essential features of the viral load in all patients, including the positions and heights of the first peak, and where applicable also those of the second peak (see SI Fig. S4). In all patients the viral load eventually decreases to below detectable levels, matching the clinical outcomes in these patients. We note that even details such as the slower viral decline in patients 2 and 12 are correctly represented by our model.}
Our model predicts an \textit{incubation period}, i.e. time between infection and presentation of symptoms, of 4.25 days (3.45-5.05 95\% CI), in excellent agreement with the median SARS-CoV-2 \textit{incubation period} of roughly 5 days estimated based on clinical data elsewhere \cite{Bar2020,Lauer2020}. The average time after which antibodies appear is predicted here to be 16 days (13.9-18.1 95\% CI) after infection, again in excellent agreement with the clinically reported first detection of antibodies after 10-20 days \cite{Bar2020}. Similarly, the \textit{latent period} of 27.28 hours (26.19-28.37 95\% CI) predicted by our model agrees well with the experimentally observed latent period of 12-36 hours \cite{Harcourt2020}.
\subsection{Immune response dynamics}
The within-host model enables the roles of different aspects of the adaptive immune response in viral clearance to be investigated in more detail. The adaptive immune response to a viral infection relies on both antibodies and effector T cells. In order to understand their respective contributions to viral clearance, we first generate a generic progression profile based on the data from all 12 patients, and then vary parameters pertinent to different aspects of the immune response in isolation in order to probe their impact on disease progression.
The median values from our parameter fitting were used to generate a generic progression profile from the data of the 12 patients as a control (black curves in Fig. \ref{treatment inter new}a,b,c and SI Fig. S5). This control curve reveals a characteristic two-peak behaviour for viral load (Fig. \ref{treatment inter new}b), with antibodies passing the detection limit ($0.1\mbox{ ng/ml}=4\times 10^8\mbox{ molecules/ml}$ \cite{Ciupe2014}) around 14 days post infection (SI Fig. S5d).
\begin{sidewaystable}
\footnotesize
\centering
\caption{Parameter best estimates}
\label{patient param}
\begin{tabular}{lcccccccccccc}
\hline
Patient & $\beta\times 10^{-8}$ & $\delta$ & $\gamma$ & $\mu$ & $p\times 10^{-3}$ & $k\times 10^{-10}$ & $c$ & $\alpha\times 10^{-9}$ & $p_A\times 10^{-5}$ & $r_A$ & I.P (day) & A.A (day) \\\hline
1 & 20.7 & 0.248 & 0.9 & 0.047 & 2.18 & 2.37 & 1.24 & 17.2 & 1.33 & 1.98 & 4 & 15 \\
2 & 2.54 & 0.248 & 0.9 & 0.00578 & 8.13 & 3.68 & 3.15 & 1.17 & 0.87 & 1.63 & 4 & 21 \\
3 & 9.12 & 0.344 & 0.9 & 0.099 & 2.12 & 2.69 & 1.18 & 15.5 & 1.41 & 2.05 & 6 & 16 \\
4 & 16.5 & 0.2 & 1 & 0.001 & 9.4 & 2.2 & 1.61 & 1.37 & 4.43 & 1.25 & 2 & 16 \\
5 & 5.89 & 0.17 & 1 & 0.115 & 1.96 & 5.55 & 1.14 & 19.1 & 1.94 & 1.37 & 6 & 23 \\
6 & 13 & 0.275 & 0.82 & 0.187 & 3.2 & 5 & 1.89 & 7.89 & 3.1 & 1.48 & 2 & 20 \\
7 & 13.3 & 0.2 & 0.85 & 0.2 & 9.9 & 4.28 & 1.97 & 6.27 & 6.6 & 2.83 & 4 & 10 \\
8 & 10.4 & 0.35 & 0.8 & 0.04 & 3.65 & 3.5 & 1.3 & 16.5 & 7 & 2.18 & 4 & 14 \\
9 & 10.6 & 0.51 & 0.84 & 0.0035 & 10.4 & 2.8 & 1.1 & 0.86 & 0.96 & 2.13 & 3 & 13 \\
10 & 12.2 & 0.229 & 0.8 & 0.12 & 10.2 & 2.66 & 3.91 & 3 & 5.65 & 1.57 & 6 & 16 \\
11 & 12.5 & 0.2 & 0.9 & 0.067 & 1.3 & 4 & 0.75 & 8.09 & 6.5 & 2.5 & 6 & 13 \\
12 & 6.07 & 0.2 & 0.9 & 0.001 & 9.31 & 1.26 & 1.7 & 1.12 & 7.7 & 1.98 & 4 & 14 \\\specialrule{.2em}{.1em}{.1em}
median & 11.4 & 0.239 & 0.9 & 0.057 & 5.89 & 3.15 & 1.46 & 7.08 & 3.77 & 1.98 & 4 & 15.5 \\
mean & 11.1 & 0.265 & 0.884 & 0.0739 & 5.98 & 3.33 & 1.75 & 8.17 & 3.96 & 1.91 & 4.25 & 16 \\
std & 4.9 & 0.096 & 0.067 & 0.07 & 3.8 & 1.24 & 0.92 & 7.1 & 2.64 & 0.47 & 1.4 & 3.73 \\
95\% CI & [8.33, 13.9] & [0.211, 0.319] & [0.846, 0.922] & [0.0343, 0.113] & [3.83, 8.13] & [2.63, 4.03] & [1.23, 2.27] & [4.15, 12.2] & [2.47, 5.45] & [1.64, 2.18] & [3.45, 5.05] & [13.9, 18.1] \\\hline
\end{tabular}
\footnotetext{I.P: incubation period. A.A: antibodies appearance.}
\footnotetext{The units of these parameters are day$^{-1}$.}
\end{sidewaystable}
\normalsize
The cell-mediated immune response is captured in the equations by two factors: $\mu$, the removal rate of infected cells by effector cells (T cells); and $\alpha$, the proliferation rate of the effector cells. As $\lambda_E/d_E$ is the basal level of effector cells, it is assumed to be constant for each patient \cite{Ciupe2007}. Thus, $\alpha$ and $\mu$ are parameters that are likely to vary between patients. In particular, they would be expected to be lower in immunocompromised patients than for a patient with a healthy immune system \cite{Ciupe2007,Guedj2013,Conway2015,Chenar2018}. Figure \ref{treatment inter new}d (red curve; see also SI Fig. S5) demonstrates that although a reduction in the value of $\mu$ can increase the damage to healthy cells ($T$) slightly, this effect is much stronger when reducing $\alpha$ (Fig. \ref{treatment inter new}g, blue curves). Even though decreasing either of these parameters causes a slower decline in viral load after the second peak (Fig. \ref{treatment inter new}e and h), a smaller value of $\alpha$ in addition increases the maximum of both peaks. Figures \ref{treatment inter new}j,k,l (magenta curves in SI Fig. S5) model the case where just the humoral immune response is weakened, i.e. where the proliferation rate of antibodies $r_A$ is reduced \cite{Ciupe2014}. In this case, viral load shows three peaks, and the damage to healthy cells is repaired only slowly. This demonstrates that each component of the immune response plays a distinct and crucial role in the recovery process. In particular, in immunocompromised or elderly individuals who have a weakened immune response, this could lead to significant tissue damage, with infections lasting much longer than for non-immunocompromised patients.
\subsection{The impact of different therapeutic strategies on viral dynamics in patients with different types of immune response}
In order to study the effects of antiviral therapy in the context of our intercellular model, we multiply the viral production rate $p$ by $(1-\epsilon)$, where $0\leq\epsilon\leq 1$ is the drug efficacy \cite{Dahari2009,Kim2012,Guedj2013,Chenar2018}. As our intracellular model (Figs. \ref{qrtime}c and d) suggests that starting treatment in the \textit{latent period} is most effective, and would effectively block the production of virions, we set $\gamma=0$ at the onset of treatment. This means that infected cells in the latent phase ($L$) do not transition to phase $I$; instead, following a time lag $\tau$, they begin shedding virions at a reduced rate compared with cells that were already in phase $I$ at the onset of treatment. \textcolor{mycolor}{However, as our numerical results show that the delayed model (SI Fig. S6) has the same behaviour as the model without a time delay, i.e. $\tau=0$ (Fig. \ref{treatment inter new}), we set $\tau=0$. Thus, we have the following equations for the numbers of infected cells and free virions:}
\begin{equation}\label{inter model treatment}
\begin{array}{l}
\vspace{0.13cm}
\displaystyle{\dfrac{dL}{dt}=\beta TV-\delta L-\gamma(1-\theta(t-t_R))L-\mu LE,}\\
\vspace{0.13cm}
\displaystyle{\dfrac{dI}{dt}=\gamma(1-\theta(t-t_R)) L-\delta I-\mu IE,}\\
\displaystyle{\dfrac{dV}{dt}=p(1-\epsilon)\theta(t-t_R)L+p(1-\eta\theta(t-t_R))I-cV-kAV,}\\
\end{array}
\end{equation}
where $\theta(.)$ is the Heaviside function \textcolor{mycolor}{($\theta(t)=1, \mbox{ for } t \geq 0, \mbox{ and } \theta(t)=0, \mbox{ for } t < 0 $)}, and $t_R$ is the time when the antiviral treatment (remdesivir) is introduced. \textcolor{mycolor}{Cells in phase $L$ produce virions at the reduced rate of $p(1-\epsilon)$. Our intracellular model (Fig. 1c and d) suggests that starting therapy during the \textit{latent period} can reduce the level of virions released by $\sim 99\%$ on average. We thus assume $\epsilon=0.99$ for the efficacy of remdesivir in cells in phase $L$. This is a good approximation, as all values of $\epsilon$ calculated from Fig. \ref{qrtime}c are above $98\%$, regardless of treatment start time. Cells that were already in the productively infectious phase ($I$) at the time of treatment start are assumed to produce virions at rate $p(1-\eta)$, where $\eta \leq \epsilon$ is the efficacy of remdesivir in these cells. Figure \ref{qrtime}c indicates that starting therapy during the productively infectious period can reduce the level of released virions by $\sim 90\%$ on average. Therefore we set $\eta=0.9$. We note that these values of $\epsilon$ and $\eta$ are consistent with values used in previous models \cite{Iwanami2020,Kim2020,Gonccalves2020}}.
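As a minimal illustration of how the switch in equations (\ref{inter model treatment}) acts on the rates (not a re-implementation of the full model), the sketch below evaluates the effective maturation and production rates before and after the treatment start; apart from the efficacies $\epsilon=0.99$ and $\eta=0.9$ quoted above, the rate values are placeholders.
\begin{verbatim}
# After t_R, latent cells stop maturing (gamma -> 0) and shed at rate p*(1-eps),
# while already-productive cells shed at rate p*(1-eta).
eps, eta = 0.99, 0.90      # efficacies for cells in phase L and phase I
p, gamma = 5e3, 0.9        # placeholder production and maturation rates

def treated_rates(t, t_R):
    on = 1.0 if t >= t_R else 0.0        # Heaviside theta(t - t_R)
    gamma_eff = gamma * (1.0 - on)       # L -> I transition switched off
    p_L = p * (1.0 - eps) * on           # virion production by latent cells
    p_I = p * (1.0 - eta * on)           # virion production by productive cells
    return gamma_eff, p_L, p_I

print(treated_rates(4.0, t_R=5.0))   # before treatment: (0.9, 0.0, 5000.0)
print(treated_rates(6.0, t_R=5.0))   # after treatment:  (0.0, 50.0, 500.0)
\end{verbatim}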
\begin{figure}[H]
\centering
\includegraphics[width=0.86\linewidth]{TreatmentNew.png}
\caption{Antiviral treatment prevents the second viral peak and an earlier antiviral treatment start is more effective. Solid lines indicate progression of the infection in the absence of treatment as a control. Dashed, dotted and dash-dotted curves show the result of starting treatment at 7, 6, and 5 dpi, respectively. \textcolor{mycolor}{Parameters are the median values of Table \ref{patient param} (black curves (a), (b) and (c)). Red curves ((d), (e) and (f)) correspond to the scenario of low removal rate of infected cells by effector cells ($\mu=3.5\times 10^{-4}$). Blue curves ((g), (h) and (i)) illustrate the scenario of a low proliferation rate of effector cells ($\alpha=5.4\times 10^{-10}$), and magenta curves ((j), (k) and (l)) of a low antibody proliferation rate ($r_A=1$). The green line (horizontal line in (b), (e), (h) and (k)) indicates the viral detection limit. Note that for black, red and magenta the 6 dpi scenarios have similar curves to 7 dpi, so only 5 dpi and 7 dpi are visible in the plot.}}
\label{treatment inter new}
\end{figure}
Figures \ref{treatment inter new}a,b,c illustrate the impact of treatment on viral clearance in a patient who would most likely recover without treatment, as the parameters have been chosen as the median values of the 12 patients who have recovered from COVID-19 without treatment (Table \ref{patient param}). In this generic scenario, a treatment start 7 days post infection (dpi), which is approximately the time of the first peak in viral load, prevents the second peak from occurring (black dashed lines in Fig. \ref{treatment inter new}). An earlier treatment start at 6 dpi also leads to the same result. However, starting treatment even earlier at 5 dpi (black dash-dot lines in Fig. \ref{treatment inter new}) also reduces the damage to healthy cells. Even though free virus declines more slowly in this case, the area under the viral load curve (AUC), an infection-related quantity commonly used to assess treatments against acute viral diseases \cite{Vegvari2016}, is much smaller. In respiratory infections, even after viral clearance the immune response can cause respiratory and systemic symptoms in some instances \cite{Vegvari2016,Ison2010}. A treatment start at 5 dpi results in a reduction in the peak of immune response cells, suggesting that early treatment could perhaps mitigate this.
\textcolor{mycolor}{In Fig. \ref{treatment inter new}, solid red (Figs. \ref{treatment inter new} d,e,f), blue (Figs. \ref{treatment inter new} g,h,i) and magenta (Figs. \ref{treatment inter new} j,k,l) curves indicate cases where different aspects of the immune response are weakened in isolation.} In particular, the solid red curves (Figs. \ref{treatment inter new} d,e,f) illustrate the case of a 99\% reduction in the removal rate of infected cells by effector cells, $\mu$, with respect to the generic case ($\mu=3.5\times 10^{-4}$), the solid blue curves (Figs. \ref{treatment inter new} g,h,i) that of a 92\% reduction in the proliferation rate of effector cells, $\alpha$ ($\alpha=5.4\times 10^{-10}$), and the solid magenta curves (Figs. \ref{treatment inter new} j,k,l) correspond to a low antibody proliferation rate ($r_A=1$ instead of 1.98). As in the generic case above, Figures \ref{treatment inter new}d-l indicate that in each case starting treatment 5 dpi reduces tissue damage, viral peak height and AUC significantly \textcolor{mycolor}{(Table S2)}, compared with treatment starts at 6 or 7 dpi, again emphasizing the importance of an early treatment start. However, early treatment increases the duration of infection compared with a later therapy start (Fig. \ref{treatment inter new}h). This suggests that viral load could persist for a longer time in such patients, who may still be infectious.
Our intercellular model also enables the modelling of drugs that operate at the level of the immune response, rather than virus production in the intracellular milieu. As an example of a therapy option of this type we study the impact of convalescent plasma (CP) therapy on viral dynamics \cite{Duan2020}. For this, we add a new variable to our model ($\tilde{A}$), which captures the antibodies that are administered as treatment. We assume that these antibodies remove virions at rate $kf$, where $0\leq f\leq 1$, implying that they are at most as efficient as the antibodies that are produced by the body over the course of infection. The equations for the numbers of virions and antibodies thus take the form:
\begin{equation}\label{antibody model}
\begin{array}{l}
\vspace{0.15cm}
\displaystyle{\dfrac{dV}{dt}=pI-cV-kAV-\theta(t-t_{CP})kf\tilde{A}V,}\\\vspace{0.15cm}
\displaystyle{\dfrac{d\tilde{A}}{dt}=\theta(t-t_{CP})(-kf\tilde{A}V-d_A\tilde{A}),}
\end{array}
\end{equation}
where $\tilde{A}(t)=0$ for $t<t_{CP}$ and $\tilde{A}(t_{CP})=\tilde{A}_m$, with $\tilde{A}_m$ representing the number of antibodies per ml that are administered as treatment. $t_{CP}$ denotes the time at which the treatment is started, and $\theta(.)$ is the Heaviside function.
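The two CP-specific terms in equations (\ref{antibody model}) can be appended to any numerical implementation of the model as sketched below; the rate values are placeholders, and the helper name is our own.
\begin{verbatim}
def cp_terms(t, V, A_cp, t_CP, k=3e-10, f=0.8, d_A=0.033):
    # Extra terms of the CP-therapy equations: clearance of virions by
    # therapeutic antibodies and consumption/decay of those antibodies
    # (placeholder rates).
    on = 1.0 if t >= t_CP else 0.0            # Heaviside theta(t - t_CP)
    dV_extra = -on * k * f * A_cp * V         # additional removal of V
    dA_cp = on * (-k * f * A_cp * V - d_A * A_cp)
    return dV_extra, dA_cp

# Example: 2.6e11 antibodies/ml administered at 6 dpi (values for illustration).
print(cp_terms(t=7.0, V=1e5, A_cp=2.6e11, t_CP=6.0))
\end{verbatim}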
Our model enables the impact of CP therapy on viral dynamics to be studied for different treatment starts and doses, thus addressing the bottleneck pointed out in the recent literature of finding the optimal dose and treatment start for CP therapy \cite{Duan2020}. Again using the median values from Table \ref{patient param} to generate a generic patient profile as a control, and using three immunocompromised cases that are presented in Fig. \ref{treatment inter new} (red, blue, and magenta curves, representing cases with reduced values for the immune response parameters $\mu$, $\alpha$, and $r_A$, respectively), we studied the impact of CP therapy. SI Fig. S7 shows the minimum level of therapeutic antibodies $\tilde{A}_m$ that is needed to reduce the AUC by 25\% and 50\% as a function of the start of treatment (in dpi) and the factor $f$ by which therapeutic antibodies are less efficient than those produced by the host during the infection. It indicates that the level of $\tilde{A}_m$ that is needed for an effective reduction in the AUC is at most around $3\times10^{11}\mbox{ molecules/ml}$. This is in good agreement with clinical data, reporting a 200 ml dose of CP and $A_m=4\times10^{12}\mbox{ molecules/ml}$ (see {\it Materials and Methods}) \cite{Duan2020,Sun2020,Ciupe2014}. Indeed, this implies that each dose would contain about $8\times10^{14}\mbox{ molecules}$, resulting in $\tilde{A}_m=2.6\times10^{11}\mbox{ molecules/ml}$ on the basis of an average blood volume of 3 litres \cite{Murray2015}. Hence, a reduction of the AUC by 50\% is achievable. SI Fig. S7 also shows that AUC reduction is comparable in the range $0.7\leq f\leq 0.9$, therefore we use the average value of this range ($f=0.8$) in our calculations. Our conclusions are robust for efficiencies $\geq0.15$, while for values of $f$ below 0.15, the outcomes vary for different immunocompromised cases. Using these parameters, we present a comparative analysis between antiviral and CP therapy and explore their synergistic potential.
Figure \ref{antibody and rem treatment} shows that, similar to the case of antiviral therapy in Fig. \ref{treatment inter new}, an early treatment start is more effective in reducing tissue damage and the AUC compared with a later therapy start (cf. SI Fig. S8 for the equivalent of Fig. \ref{treatment inter new} for CP therapy, \textcolor{mycolor}{cf. Table S3 for AUC values}). While Fig. \ref{treatment inter new} reveals scenarios in which an antiviral therapy does not mitigate against tissue damage (such as a later treatment start at 6 or 7 dpi), Fig. \ref{antibody and rem treatment} shows that using CP therapy can reduce tissue damage even for those delayed treatment starts. Interestingly, Fig. \ref{antibody and rem treatment} shows that starting CP therapy early can increase the duration of infection more than for antiviral therapy, implying that for an early treatment start using remdesivir is more effective. By contrast, for later treatment starts CP therapy reduces the viral load faster and decreases tissue damage compared with remdesivir therapy. Our model also enables us to probe the synergies of these treatment options. Figs. \ref{antibody and rem treatment}c, f, i, and l indicate that for an early treatment start, combination therapy mitigates against a longer duration of the infection \textcolor{mycolor}{(cf. Table S4 for AUC values)}. However, for later treatment starts any synergistic effects are minimal and combination therapy has the same outcome as CP therapy in isolation. Thus, unless infection is detected early, e.g. through an efficient track and trace system, treatment would likely start at a time when CP therapy in isolation would be as effective as combination therapy.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{RFigure.png}
\caption{An early CP therapy start increases the duration of infection more than an early antiviral therapy start. Solid lines indicate progression of the infection in the absence of treatment as a control. Dashed, dotted and dash-dotted curves show the result of starting treatment at 7, 6, and 5 dpi, respectively. Parameters are the median values of Table \ref{patient param}. \textcolor{mycolor}{Red curves ((d), (e) and (f)) correspond to the scenario of a low removal rate of infected cells by effector cells ($\mu=3.5\times 10^{-4}$). Blue curves ((g), (h) and (i)) illustrate the scenario of a low proliferation rate of effector cells ($\alpha=5.4\times 10^{-10}$), and magenta curves ((j), (k) and (l)) of a low antibody proliferation rate ($r_A=1$). The green line (horizontal line in (b), (c), (e), (f), (h), (i), (k) and (l)) indicates the viral detection limit.} The first and second columns show the impact of CP therapy alone, while the third column shows the impact of combined antiviral and CP therapy.}
\label{antibody and rem treatment}
\end{figure}
\textcolor{mycolor}{Figure \ref{treatment all patient} indicates the impact of starting treatments after the onset of symptoms, i.e. the day on which symptoms were first reported by the 12 patients in the study \cite{Young2020} used for model fitting. Both treatments reduce the duration of the infection significantly in 67\% of the patients, enabling a faster recovery, and otherwise have no significant impact on the duration of the infection. However, in some cases in which the peak in viral load and the AUC are significantly reduced (Table S5), the treatments do not decrease the duration of infection. This figure also shows that there are cases for which the duration of infection is not reduced by one treatment alone, but would be reduced by a combination therapy of both treatments. In the other cases these treatment options have more or less similar effects, although CP therapy performs slightly better, and in these cases there is no noticeable synergistic effect.}
\begin{figure}[H]
\centering
\includegraphics[width=0.96\linewidth]{PatientsAllTreatments.png}
\caption{Starting treatment after the onset of symptoms reduces the peak in viral load and leads to faster viral clearance. Solid lines indicate the best fit to patient data. Dashed and dotted curves indicate the result of starting antiviral and CP therapy after the onset of symptoms, respectively, while dash-dotted curves indicate a combination therapy start after the onset of symptoms.}
\label{treatment all patient}
\end{figure}
\section{Discussion}
The severe consequences of the COVID-19 pandemic demand a concerted interdisciplinary effort to identify novel antiviral solutions. Vaccination options against SARS-CoV-2 are actively pursued and a number of treatments are now in use in the clinic, but there are still many open questions regarding when and how best to administer these treatments, either in isolation or in combination. Whilst modelling of disease transmission has already played a key role in informing policy makers \cite{Bertozzi2020}, models of within-host dynamics have not yet had a prominent role in combating the disease. There is precedent for intracellular modelling of other viral diseases, such as hepatitis C virus \cite{Aunins2018}. However, such models cannot be readily transferred to coronaviral infection, as the viral life cycles are very different. Here we introduce a within-host model of a SARS-CoV-2 infection that contains sufficient detail specific to coronaviruses to enable antiviral strategies against SARS-CoV-2 to be compared and to analyse their synergies. We demonstrate this via a comparative analysis of an antiviral treatment (remdesivir) and CP therapy, which apart from steroid treatment are the most prominent forms of therapy currently used against COVID-19 infections. In particular, we compare disease progression for different treatment starts and dosages, and thus provide new insights into these therapeutic options.
\textcolor{mycolor}{Our analysis highlights that, as expected and previously observed \cite{Iwanami2020,Gonccalves2020}, an early treatment start before the first peak in viral load can reduce both tissue damage and the peak viral load, especially when using a combination of both therapies. However, those models do not capture the impact of early treatment on the duration of the infection. Surprisingly, our model suggests that early treatment by either therapy alone can actually increase the duration of infection compared with a later therapy start, likely because suppressing virus production results in a reduced immune response. This implies that even though early treatment accelerates the recovery process and reduces the peak in viral load, the infection may persist for a longer time than with a later treatment start, meaning that these patients may possibly still be infectious.}
Our model has provided insights into disease progression for different doses and treatment starts for CP therapy \cite{Duan2020}. In particular, it enabled us to address a question recently raised in the literature as to the impact of dose and time of treatment on disease progression under CP treatment \cite{Duan2020}. Our model also enabled us to perform a comparison between the antiviral treatment and CP therapy, and explore their potential synergistic effects. The model reveals that early into the infection an antiviral treatment using remdesivir could be more effective than CP therapy, and a combination therapy can significantly reduce the duration of infection. However, for later treatment starts, CP therapy appears to be more beneficial than antiviral therapy, and there are no longer any significant synergistic effects that would warrant combination therapy. These insights from our within-host model suggest that the time course of infection should be considered when deciding on an appropriate therapeutic response to COVID-19 infection.
\section{Materials and Methods}
\color{mycolor}
\subsection{Intracellular modelling of SARS-CoV-2 infection}
The first step in the viral lifecycle is the production of two polyproteins (pp1a and pp1ab) using the host cell ribosomes. The kinetics of ribosomes {\it in vivo} are studied using insights from a detailed stochastic model \cite{Dykeman2020}. For synthesis of the polyproteins pp1a and pp1ab, host ribosomes (denoted by R in Fig. S1b) reversibly bind to (+)RNA with binding/unbinding rates $r_{on}$ and $r_{off}$:
\begin{align*}
&\mbox{(+)RNA}+\mbox{R}\xtofrom[r_{off}]{r_{on}}\mbox{R:(+)RNA}.
\end{align*}
We model the kinetic steps involved in ribosome initiation and transition to the elongation state (Ri$_{1a}$:(+)RNA) that occur subsequent to ribosomal binding to the (+)RNA to produce pp1a as a single kinetic step with rate $r_{in}$.
\begin{align*}
&\mbox{R:(+)RNA}\xrightarrow{r_{in}}\mbox{Ri$_{1a}$:(+)RNA}.
\end{align*}
The ribosome then translates the pp1a gene (ORF1a) at rate $t_{1a}$. After translation of the pp1a gene, the ribosome can either frameshift -1 nt to the ORF1b reading frame, translating the polyprotein pp1ab, or terminate, releasing the polyprotein pp1a \cite{Nakagawa2016}. We model the -1 ribosomal frameshift as a reaction with rate $q\times t_f$ and the termination at ORF1a as a reaction with rate $(1-q)t_f$. If the ribosome successfully frameshifts, it completes translation of the ORF1b reading frame with rate $t_{1b}$, terminates, and releases the polyprotein pp1ab.
\begin{align*}
&\mbox{Ri$_{1a}$:(+)RNA}\xrightarrow{t_{1a}}\mbox{R$_{1a}$:(+)RNA},\\
&\mbox{R$_{1a}$:(+)RNA}\xrightarrow{(1-q)t_f}\mbox{pp1a}+\mbox{R}+\mbox{(+)RNA},\\
&\mbox{R$_{1a}$:(+)RNA}\xrightarrow{q\times t_f}\mbox{Ri$_{1b}$:(+)RNA},\\
&\mbox{Ri$_{1b}$:(+)RNA}\xrightarrow{t_{1b}}\mbox{pp1ab}+\mbox{R}+\mbox{(+)RNA}.
\end{align*}
The polyproteins pp1a and pp1ab form RTC at rate $f_{rt}$.
\begin{align*}
&\mbox{pp1a}+\mbox{pp1ab}\xrightarrow{f_{rt}}\mbox{RTC}.\\
\end{align*}
The transcription of gRNA, which leads to the formation of (-)gRNA and nine (-)sgRNAs, is modelled as illustrated in Fig. S2. The RTC (denoted by RTC in Fig. S2) binds to the genome with binding/unbinding rates $rt_{on}$ and $rt_{off}$.
\begin{align*}
\mbox{(+)RNA}+\mbox{RTC}\xtofrom[rt_{off}]{rt_{on}}\mbox{RT$_0$:(+)RNA},
\end{align*}
The full length genome contains functional transcription-regulating sequence (TRS) motifs which are found at the $3'$ end of the leader (leader TRS) and in front of each of the 9 ORFs (Fig. S1a) \cite{Sawicki2007}. During transcription of the full length minus strand by the RTC, the process can terminate at one of these TRS motifs, resulting in one of the 9 negative sgRNA being produced. In our model, when an RTC encounters TRS motif number $k$, it will continue the elongation of the negative strand with rate $r\times t_c$, and terminate with rate $(1-r)t_c$, resulting in the production of (-)sgRNA$_k$.
\begin{align*}
&\mbox{RT$_0$:(+)RNA}\xrightarrow{rt_{in}}\mbox{RTi$_1$:(+)RNA}, \\
&\mbox{RTi$_k$:(+)RNA}\xrightarrow{tr_k}\mbox{RT$_{k}$:(+)RNA},\\
&\mbox{RT$_k$:(+)RNA}\xrightarrow{(1-r)t_c}\mbox{(-)sgRNA$_{k}$}+\mbox{RTC}+\mbox{(+)RNA},\\
&\mbox{RT$_k$:(+)RNA}\xrightarrow{r\times t_c}\mbox{RTi$_{k+1}$:(+)RNA},\\
&\mbox{RTi$_{10}$:(+)RNA}\xrightarrow{t_{10}}\mbox{(-)RNA}+\mbox{RTC}+\mbox{(+)RNA}.
\end{align*}
Here, $tr_k$ is the rate of RTC transcription between the TRS at site $k-1$ and site $k$ (Fig. S2), and each $\mbox{(-)sgRNA}_{k}$, for $k=1,2,...,9$, corresponds to the N, 8, 7b, 7a, 6, M, E, 3a and S (-)sgRNAs, respectively, whereas the $k=10$ step with transcription rate $t_{10}$ corresponds to transcription of the remaining ${\sim}22$ kb of RNA upstream of the structural genes. This last step is responsible for the creation of full length (-)RNA. Then (-)sgRNAs and (-)RNA serve as templates for (+)sgRNA and viral genome synthesis, respectively. The negative RNAs bind to the RTC and produce positive RNAs (SI, equation S1).
(+)sgRNA$_1$, (+)sgRNA$_6$, (+)sgRNA$_7$, and (+)sgRNA$_9$ encode the structural proteins N, M, E, and S, respectively, which are involved in new virion formation \cite{Kim2020}. We assume that free ribosomes are at an equilibrium level, where (+)sgRNAs are saturated with available ribosomes and produce protein at constant rates $t_n$, $t_m$, etc.
\begin{align*}
&\mbox{(+)sgRNA$_1$}\xrightarrow{t_n}\mbox{(+)sgRNA$_1$}+\mbox{N},\\
&\mbox{(+)sgRNA$_6$}\xrightarrow{t_m}\mbox{(+)sgRNA$_6$}+\mbox{M},\\
&\mbox{(+)sgRNA$_7$}\xrightarrow{t_e}\mbox{(+)sgRNA$_7$}+\mbox{E},\\
&\mbox{(+)sgRNA$_9$}\xrightarrow{t_s}\mbox{(+)sgRNA$_9$}+\mbox{S}.
\end{align*}
The budding of a virion is modelled as a single reaction with budding rate $k_{bud}$ as follows \cite{Aunins2018,Bar2020}:
\begin{equation*}
\mbox{(+)RNA}+300\mbox{S}+2000\mbox{M}+1000\mbox{N}+100\mbox{E}\xrightarrow{k_{bud}}\mbox{virion}\,.
\end{equation*}
\subsection{Modelling of antiviral strategy}
Remdesivir acts as a nucleotide analogue that mimics the adenosine structure \cite{Warren2016}. During the replication process the RTC may insert remdesivir molecules rather than adenosine, which caps the strand and stops the replication process at rate $r_{term}$ \cite{Zhang2020}. In order to model the impact of this drug, we assume that RTC-containing complexes in our model can bind (and subsequently unbind) remdesivir molecules (Rem). Thus the reactions have the following form:
\begin{align*}
&\mbox{RTi$_k$:(+)RNA}+\mbox{Rem}\xtofrom[k_{off}]{k_{on}}\mbox{Re:RTi$_k$:(+)RNA},\\
&\mbox{Re:RTi$_k$:(+)RNA}\xrightarrow{r_{term}}\mbox{RTC},\\
&\mbox{RTi$_{10}$:(+)RNA}+\mbox{Rem}\xtofrom[k_{off}]{k_{on}}\mbox{Re:RTi$_{10}$:(+)RNA},\\
&\mbox{Re:RTi$_{10}$:(+)RNA}\xrightarrow{r_{term}}\mbox{RTC},\\
&\mbox{RT$_k$:(+)RNA}+\mbox{Rem}\xtofrom[k_{off}]{k_{on}}\mbox{Re:RT$_k$:(+)RNA},\\
&\mbox{Re:RT$_k$:(+)RNA}\xrightarrow{r_{term}}\mbox{RTC},
\end{align*}
where $k=1,2,...,9$.
\color{black}
\subsection{Patient data}
\textcolor{mycolor}{Our patient data comprise the first 18 confirmed patients who reported COVID-19 infection in Singapore \cite{Young2020}. Nasopharyngeal swabs were collected for up to 30 days after the onset of symptoms.} Five patients received lopinavir-ritonavir treatment, and in one patient viral load was detectable only twice; these six patients were therefore excluded from the analysis. \textcolor{mycolor}{The viral loads were reported as cycle threshold (Ct) values, which are linearly related to the logarithm of the viral RNA copy number, with higher Ct values corresponding to lower viral loads ($\log(V)=-0.3231\mbox{Ct}+14.11$) \cite{Zou2020}.} We converted Ct values to viral copies per ml. In model fitting, viral load values under the detection threshold were set at the detection limit (Ct=38).
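As a worked example of this conversion (assuming, as the calibration suggests, a base-10 logarithm), the detection limit of Ct $=38$ corresponds to
\begin{equation*}
\log_{10}V=-0.3231\times 38+14.11\approx 1.83,\qquad V\approx 68\mbox{ copies/ml}.
\end{equation*}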
\subsection{Intercellular model parameter estimation}
COVID-19 is a respiratory illness, so we assume that modelling insights from influenza models are applicable. In influenza, at approximately 5 to 7 dpi mitoses are detected at the basal cell layer, and regeneration of the epithelium begins. Complete resolution of the epithelium takes up to 1 month \cite{Wright2001}. We therefore assume that the maximum proliferation rate for uninfected cells is small, and that $r_T=0.1\mbox{ day}^{-1}$. The number of host cells that express ACE-2 and \textcolor{mycolor}{transmembrane serine protease (TMPRSS)} is approximately equal to $10^{11}$ ($T_m=10^{11}$) \cite{Bar2020}, and we use $T(0)=T_m$. \textcolor{mycolor}{As SARS-CoV-2 is a novel infection, we assume that $E(0)=0$ and that the basal level of effector cells is low ($\lambda_E=1$ and $d_E=0.5\mbox{ day}^{-1}$) \cite{Ciupe2006}. However, considering a higher basal level does not change model outcomes regarding the viral dynamics, as $\alpha$ and $\mu$, the proliferation rate of effector cells and the removal rate of infected cells by effector cells, respectively, are estimated by fitting to the viral load data. Note that increasing $\lambda_E$ and decreasing $\mu$ simultaneously results in the same viral dynamics, although it will change the value of the peak in effector cells. Since data are only available for the viral load, we decided to fix the basal level of effector cells before finding other parameters \cite{Ciupe2006,Ciupe2014}.} Initially there is no specific antibody, therefore $A(0)=0$ and $d_A=0.033\mbox{ day}^{-1}$ \cite{Ciupe2014}. We use 1 \textmu g/ml \textcolor{mycolor}{immunoglobulin G (IgG)} positive control as a strong positive standard \cite{Sun2020}. Thus, we assume $A_m=1\mbox{ \textmu g/ml}=4\times 10^{12}\mbox{ molecules/ml}$. \textcolor{mycolor}{Although we are setting the individual’s antibody carrying capacity to a fixed value \cite{Ciupe2014}, we also checked that variation of the parameter does not impact the qualitative results and therefore all conclusions remain valid. $d_A$ is measured for HBV infection, but it has been shown that $r_AA(1-A/A_m)-d_AA$ is equivalent to a logistic growth of antibodies with growth rate $\rho_A=r_A-d_A$ \cite{Ciupe2014}. Since we are fitting $r_A$, fixing $d_A$ does not have a significant impact on the model. The same argument is valid for $d_E$, and as we are assuming a fixed basal level ($\lambda_E/d_E$), changing $d_E$ would not have a significant effect on our results.}
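The conversion from antibody mass concentration to molecules per ml can be reconstructed as follows, assuming an IgG molecular mass of approximately 150 kDa:
\begin{equation*}
\frac{10^{-6}\mbox{ g/ml}}{1.5\times 10^{5}\mbox{ g/mol}}\times 6.02\times 10^{23}\mbox{ mol}^{-1}\approx 4\times 10^{12}\mbox{ molecules/ml}.
\end{equation*}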
The patient data used are only available from the time after the onset of symptoms, and the initial viral load at the start of the infection is not recorded. We therefore estimate the value $V(0)$ assuming the infection is transmitted via droplets. The average number of expelled droplets during talking is assumed to be 1000 \cite{Loudon1967,Xie2009}. It has also been reported that more than 50\% of droplets have a size in the range of 50-75 \textmu m \cite{Xie2009}. Thus, the average total volume of expelled droplets during talking is equal to $1.1\times 10^{-4}\mbox{ ml}$. The median level of viral load on the day of symptom onset in patients in this study is estimated as $5\times 10^{3}\mbox{ virion/ml}$ \cite{Ejima2020}. We assume that infected individuals infect others before the onset of symptoms, and we therefore assume an average level of $10^{3}\mbox{ virion/ml}$ is available for transmission. Thus, assuming $V(0)=0.1\mbox{ virion/ml}$ appears to be a reasonable choice \cite{Gonccalves2020}. This value is also comparable to those used in modelling of influenza \cite{Pawelek2012}.
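For transparency, this estimate can be reconstructed as follows, taking a representative droplet diameter of 60 \textmu m (an assumption within the reported 50-75 \textmu m range):
\begin{gather*}
\tfrac{4}{3}\pi\,(30\mbox{ \textmu m})^3\approx 1.1\times 10^{-7}\mbox{ ml},\qquad 1000\times 1.1\times 10^{-7}\mbox{ ml}\approx 1.1\times 10^{-4}\mbox{ ml},\\
1.1\times 10^{-4}\mbox{ ml}\times 10^{3}\mbox{ virion/ml}\approx 0.1\mbox{ virion}.
\end{gather*}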
Since structural identifiability is a necessary condition for model fitting, we used the method by Castro and de Boer \cite{Castro2020} to show that our model (\ref{inter model}) is structurally identifiable (see SI Section S3 for more detail). We estimate the remaining parameters and the \textit{incubation period} (the time between the beginning of the infection and the onset of symptoms) by fitting $V$ from the model (\ref{inter model}) to patient data individually in Matlab using the method in Ciupe et al. \cite{Ciupe2014}, which uses a minimum-search function for data fitting. Although we fitted patient data individually, which is suboptimal compared to population fitting using mixed effects, the outcomes of the model were in agreement with clinically measured values, such as the incubation period and the onset of appearance of antibodies in the body. The resulting parameter values are presented in Table \ref{patient param}. Decreasing or increasing $V(0)$ ($V(0)=0.01\mbox{ virion/ml}$ or $V(0)=1\mbox{ virion/ml}$) does not change the estimated values of parameters significantly and only changes the estimated \textit{incubation period} by $\pm 1$ day. Additionally, we used residual bootstrapping to provide 95\% confidence intervals (CIs) for the parameter estimates following \cite{Ciupe2007} (see also SI Table S2). For each set of patient data $\{V_1, V_2, ...,V_n\}$, we calculated the normalised residuals $\epsilon_i={V_i}/{\overline{V}_i}$, $i=1, 2, ...,n$, where $\{\overline{V}_1, \overline{V}_2, ..., \overline{V}_n\}$ denotes the viral load values predicted by the model. We then created the set $\{V^*_1, V^*_2, ..., V^*_n\}$ where $V^*_i=\overline{V}_i\times\epsilon_j$ for $\epsilon_j$ randomly chosen to be any of the normalised residuals or 1, the latter to include the option that the data remains unchanged. We created 50 such samples and fitted the model to each of them individually. We then calculated the 95\% CI for each given parameter across the 50 parameter sets (SI Table S2). We generated 500 simulations based on randomly chosen parameters from the 50 parameter sets, and then used these curves to calculate the 95\% CI for each patient (see red shaded areas in Fig. S4). As the 95\% CIs have negligible width compared with the widths of the curves, given the logarithmic scale, we also added the mean plus/minus standard deviation as shaded green areas in order to reflect the noise in the data, especially for P4 and P6. We note that the predicted two-peak behaviour is consistent with observations in W\"{o}lfel et al. \cite{Wolfel2020}, and indeed is expected in any model that includes the adaptive immune response \cite{Ke2020}.
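A minimal sketch of this residual-bootstrap procedure, with a toy stand-in for the actual model-fitting routine and array names of our own choosing, is given below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_ci(V_data, V_model, fit, n_boot=50, level=0.95):
    # Multiplicative residuals eps_i = V_i / Vbar_i are resampled (with the
    # option eps = 1, i.e. the point unchanged), synthetic data sets are
    # refitted, and percentile confidence intervals are returned.
    eps = np.concatenate([V_data / V_model, [1.0]])
    params = []
    for _ in range(n_boot):
        V_star = V_model * rng.choice(eps, size=len(V_model))
        params.append(fit(V_star))
    params = np.array(params)
    lo, hi = np.percentile(params, [100 * (1 - level) / 2,
                                    100 * (1 + level) / 2], axis=0)
    return lo, hi

# Toy example: the "fit" just returns the mean of the synthetic data.
V_data = np.array([2.0, 3.5, 5.0, 4.0])
V_model = np.array([2.2, 3.3, 4.8, 4.1])
print(bootstrap_ci(V_data, V_model, fit=lambda v: np.array([v.mean()])))
\end{verbatim}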
\vspace{-0.1cm}
\section*{Acknowledgment}
RT acknowledges funding via an EPSRC Established Career Fellowship (EP/R023204/1) and a Royal Society Wolfson Fellowship (RSWF/R1/180009). RT \& PGS acknowledge support from a Joint Wellcome Trust Investigator Award (110145 \& 110146).
\vspace{-0.1cm}
\section{A stochastic model of intracellular dynamics of SARS-CoV-2}
The intracellular model contains reactions representing the translation, transcription and assembly of SARS-CoV-2 in cells. The viral entry step is not included into the model as we are focusing on the intracellular dynamics. Here we will detail the reactions underlying each process and describe the stages which have been modelled mathematically. All parameter values are given in Table \ref{param}. Stochastic simulations of the reactions were implemented using a Gillespie algorithm \cite{Gillespie1977}, and seeded with one virion at the start of the infection.
\subsection{Model for translation in SARS-CoV-2}
Figure \ref{Rib}a illustrates the SARS-CoV-2 genome and the gene products encoded by the 29,903 nt ssRNA viral genome. The replicase gene comprises roughly 22 kb and encodes two polyproteins (pp1a and pp1ab) which undergo cotranslational proteolysis into the 16 non-structural proteins (nsp1-nsp16) required by the virus for infection of the host cell. The remaining genes 3' of the replicase gene encode the structural proteins S, M, E, and N. Structural and non-structural proteins are translated from different messenger RNAs that are produced by the viral replicase-transcriptase complex (RTC) from viral gRNA. The replicase gene is translated from full-length viral RNA whereas the structural genes are translated from sub-genomic mRNAs synthesised by the viral RTC \cite{Fehr2015}.
We model the translational process from the mRNAs produced by the RTC as follows (Fig. \ref{Rib}b). For synthesis of the polyproteins pp1a and pp1ab, host ribosomes (denoted by R in Fig. \ref{Rib}b) reversibly bind to the (+)RNA with binding/unbinding rates $r_{on}$ and $r_{off}$, $K_d=0.08$ \textmu M \cite{Dimelow2009}.
\begin{align*}
&\mbox{(+)RNA}+\mbox{R}\xtofrom[r_{off}]{r_{on}}\mbox{R:(+)RNA}.
\end{align*}
As the ribosome can bind and unbind, we assume that after binding it can either unbind or start the elongation process. Thus, we model the kinetic steps involved in ribosome initiation and transition to the elongation state (Ri$_{1a}$:(+)RNA) that occur subsequent to ribosomal binding to the (+)RNA as a single kinetic step with rate $r_{in}$.
\begin{align*}
&\mbox{R:(+)RNA}\xrightarrow{r_{in}}\mbox{Ri$_{1a}$:(+)RNA}.
\end{align*}
The ribosome then translates the pp1a gene (ORF1a) at rate $t_{1a}$. After translation of the pp1a gene, the ribosome can either frameshift -1 nt to the ORF1b reading frame, translating the polyprotein pp1ab, or terminate, releasing the polyprotein pp1a \cite{Nakagawa2016}. We model the -1 ribosomal frameshift as a reaction with rate $q\times t_f$ and the termination at ORF1a as a reaction with rate $(1-q)t_f$. These rates give a probability of frameshifting from the ORF1a to ORF1b reading frames of $q$. If the ribosome successfully frameshifts, it completes translation of the ORF1b reading frame with rate $t_{1b}$, terminates, and releases the polyprotein pp1ab. As the viral genome needs to bind to the RTC for replication, we assume that (+)RNA can have only one ribosome or one RTC bound at a time. Details of the reactions are listed below.
\begin{align*}
&\mbox{Ri$_{1a}$:(+)RNA}\xrightarrow{t_{1a}}\mbox{R$_{1a}$:(+)RNA},\\
&\mbox{R$_{1a}$:(+)RNA}\xrightarrow{(1-q)t_f}\mbox{pp1a}+\mbox{R}+\mbox{(+)RNA},\\
&\mbox{R$_{1a}$:(+)RNA}\xrightarrow{q\times t_f}\mbox{Ri$_{1b}$:(+)RNA},\\
&\mbox{Ri$_{1b}$:(+)RNA}\xrightarrow{t_{1b}}\mbox{pp1ab}+\mbox{R}+\mbox{(+)RNA},
\end{align*}
The polyproteins pp1a and pp1ab are cleaved by two proteases, the papain-like protease (PLpro; corresponding to nsp3) and a main protease, the 3C-like protease (3CLpro; corresponding to nsp5). Polyprotein pp1a contains nsps 1–11, and pp1ab contains nsps 1–10 and nsps 12-16 \cite{Fehr2015,Romano2020}. It has been proposed that nsp2-16 collectively constitute a functional RTC in infected cells \cite{Egloff2004,v2019}. These nsps have their own specific roles in the replication process, but the functions of some of them are not well understood \cite{Chen2020}. For example, nsp12 is the RNA-dependent RNA polymerase (RdRp); nsp13 is the NTPase/helicase; nsp14 is a proof-reading exonuclease \cite{Fehr2015}. As the exact stoichiometry of these nsps in an RTC molecule is unknown \cite{Hagemeijer2010} and nsp11, which is part of the RTC, is only in pp1a, we assume that pp1a and pp1ab have equal impact on RTC formation, and this step is modelled as follows:
\begin{align*}
&\mbox{pp1a}+\mbox{pp1ab}\xrightarrow{f_{rt}}\mbox{RTC},\\
\end{align*}
Finally, we add the natural clearance of these proteins via the reactions
\begin{align*}
&\mbox{RTC}\xrightarrow{\delta_r}\mbox{0},\\
&\mbox{pp1a}\xrightarrow{\delta_a}\mbox{0},\\
&\mbox{pp1ab}\xrightarrow{\delta_b}\mbox{0},
\end{align*}
(+)sgRNA$_1$, (+)sgRNA$_6$, (+)sgRNA$_7$, and (+)sgRNA$_9$ encode the structural proteins N, M, E, and S, respectively, which are involved in new virion formation \cite{Kim2020}. The other (+)sgRNAs encode accessory proteins, which are hypothesised to interfere with the host innate immune response and whose functions are poorly understood \cite{De2016}. Thus we only include synthesis of the S, M, N, and E proteins in our model. Furthermore, similar to previous work on intracellular modelling of HCV, where the number of free ribosomes is fitted to obtain the viral dynamics that was observed experimentally \cite{Aunins2018}, we assume that free ribosomes are at an equilibrium level, where sgRNAs are saturated with available ribosomes and produce protein at constant rates $t_n$, $t_m$, etc.
\begin{align*}
&\mbox{(+)sgRNA$_1$}\xrightarrow{t_n}\mbox{(+)sgRNA$_1$}+\mbox{N},\\
&\mbox{(+)sgRNA$_6$}\xrightarrow{t_m}\mbox{(+)sgRNA$_6$}+\mbox{M},\\
&\mbox{(+)sgRNA$_7$}\xrightarrow{t_e}\mbox{(+)sgRNA$_7$}+\mbox{E},\\
&\mbox{(+)sgRNA$_9$}\xrightarrow{t_s}\mbox{(+)sgRNA$_9$}+\mbox{S},\\
&\mbox{N}\xrightarrow{\delta_n}0,\\
&\mbox{M}\xrightarrow{\delta_m}0,\\
&\mbox{E}\xrightarrow{\delta_e}0,\\
&\mbox{S}\xrightarrow{\delta_s}0.
\end{align*}
The E protein is found in small quantities within the virion. We assume each SARS-CoV-2 virion has 100 copies of the E protein assembled into 20 pentameric structures, and 300 copies of the S protein assembled into 100 trimeric structures \cite{Bar2020}. The levels of the M and N proteins in SARS-CoV-2 consist of approximately 2000 and 1000 copies, respectively \cite{Bar2020}. Note that in this model we assume that SARS-CoV-2 has similar levels of the E, S, M and N proteins to those reported for SARS-CoV-1 \cite{Bar2020}. Similar to previous intracellular models of HCV infection, which have modelled the assembly and budding of virions as a single reaction \cite{Aunins2018}, we model the budding of a virion as a single reaction with budding rate $k_{bud}$ as follows:
\begin{equation*}
\mbox{(+)RNA}+300\mbox{S}+2000\mbox{M}+1000\mbox{N}+100\mbox{E}\xrightarrow{k_{bud}}\mbox{virion}\,.
\end{equation*}
\subsection{Model for transcription in SARS-CoV-2}
Transcription of the RNA genome of SARS-CoV-2 occurs via the interaction of the RTC complex with genomic (+)ssRNA, genomic (-)ssRNA, and negative sense sub-genomic fragments. During transcription of (+)ssRNA, a subset of 9 sub-genomic minus strands, which encode all structural proteins, are produced through discontinuous transcription \cite{De2016}. Each sub-genome contains a $5'$ leader sequence corresponding to the $5'$ end of the genome which allows interaction between host ribosomes and (+)sgRNAs. Each sub-genomic RNA consists of a single ORF that encodes for a structural or accessory protein \cite{Sawicki2007}. In SARS-CoV-2, the order of these ORFs (from 5' to 3') is S, 3a, E, M, 6, 7a, 7b, 8, and N \cite{Kim2020}. The full length (+)ssRNA genome contains functional transcription-regulating sequence (TRS) motifs which are found at the $3'$ end of the leader (leader TRS) and in front of each of the 9 ORFs. During transcription of the full length minus strand by the RTC, the process can terminate at one of these TRS motifs, resulting in one of the 9 negative sgRNA being produced. In our model, when an RTC encounters TRS motif number $k$, it will continue the elongation of the negative strand with rate $r\times t_c$, and terminate with rate $(1-r)t_c$, resulting in the production of (-)sgRNA$_k$ after extension by transcription of the $5'$ end of the genome \cite{Sawicki2007}. The completed minus-strand sgRNA serves as a template for (+)sgRNA synthesis. Figure \ref{RTC} illustrates the transcription reactions that we model in this work. For transcription from full length genome (+)ssRNA, RTC (denoted by RTC in Fig. \ref{RTC}) reversibly binds to 3' UTR with binding/unbinding rates $rt_{on}$ and $rt_{off}$, $K_d=0.1$\textmu M \cite{Te2010}.
\begin{align*}
\mbox{(+)RNA}+\mbox{RTC}\xtofrom[rt_{off}]{rt_{on}}\mbox{RT$_0$:(+)RNA},
\end{align*}
We model the kinetic steps involved in RTC initiation and transition to the elongation state that occur subsequent to RTC binding to the 3' UTR as a single kinetic step with rate $rt_{in}$. The formation of (-)sgRNAs can be modelled as follows:
\begin{align*}
&\mbox{RT$_0$:(+)RNA}\xrightarrow{rt_{in}}\mbox{RTi$_1$:(+)RNA}, \\
&\mbox{RTi$_k$:(+)RNA}\xrightarrow{tr_k}\mbox{RT$_{k}$:(+)RNA},\\
&\mbox{RT$_k$:(+)RNA}\xrightarrow{(1-r)t_c}\mbox{(-)sgRNA$_{k}$}+\mbox{RTC}+\mbox{(+)RNA},\\
&\mbox{RT$_k$:(+)RNA}\xrightarrow{r\times t_c}\mbox{RTi$_{k+1}$:(+)RNA},\\
&\mbox{RTi$_{10}$:(+)RNA}\xrightarrow{t_{10}}\mbox{(-)RNA}+\mbox{RTC}+\mbox{(+)RNA}.
\end{align*}
Here, $tr_k$ is the rate of RTC transcription between the TRS at site $k-1$ and site $k$ (Fig. \ref{RTC}), and each sub-genomic (-)RNA, for $k=1,2,...,9$, corresponds to the N, 8, 7b, 7a, 6, M, E, 3a and S (-)sgRNAs, respectively, whereas the $k=10$ step with transcription rate $t_{10}$ corresponds to transcription of the remaining ${\sim}22$ kb of RNA upstream of the structural genes. This last step is responsible for the creation of the full length (-) genomic RNA.
To model the synthesis of the 9 (+)sgRNAs that are required for translation by the host ribosomes, we assume that RTCs can bind to negative sense RNAs, producing a polysome arrangement with up to $n_k$ RTCs present on (-)sgRNA$_k$. RTCs bind to (-)sgRNA one at a time with binding rate $b_k$, until the (-)sgRNA$_k$ is saturated with $n_k$ RTCs. After saturation, the last RTC (i.e. the $n_k$th) at the 5' end of the (-)sgRNA can terminate transcription with rate $p_k$, resulting in the production of a (+)sgRNA and release of an RTC. The resulting (-)sgRNA with $n_k-1$ RTCs can bind another RTC at the 3' end. This reaction models the movement of the remaining RTCs down the RNA strand and movement of the 5' most RTC to the 5' end for termination and daughter strand release. The synthesis of (+)sgRNA is described via the following reactions:
\begin{align}
&\mbox{(-)sgRNA$_k$}+\mbox{RTC}\xrightarrow{b_k}\mbox{(-)sgRNA$_k$:RT},\nonumber\\
&\mbox{(-)sgRNA$_k$:jRT}+\mbox{RTC}\xrightarrow{b_k}\mbox{(-)sgRNA$_k$:(j+1)RT}, \\
&\mbox{(-)sgRNA$_k$:$n_k$RT}\xrightarrow{p_k}\mbox{(+)sgRNA$_k$}+\mbox{(-)sgRNA$_k$:$(n_k-1)$RT}+\mbox{RTC},\nonumber
\end{align}
where $k=1,2,...,9$ indexes the 9 sgRNA species and $j$ indicates the number of RTCs currently bound to the (-)sgRNA. We model the replication of the full-length (+)RNA from the (-)RNA using the same model as for the sgRNAs: we denote the full-length (-)RNA by $k=10$ and use binding rate $b_{10}$ with a maximal RTC number of $n_{10}$ and termination rate $p_{10}$. Finally, we allow decay of the viral (+) and (-)sgRNA via the reactions
\begin{align*}
&\mbox{(-)sgRNA$_k$:$n_k$RT}\xrightarrow{d_k}0,\\
&\mbox{(+)sgRNA$_k$}\xrightarrow{d_k}0,\\
&\mbox{(-)sgRNA$_k$}\xrightarrow{d_k}0.
\end{align*}
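For illustration, the reaction scheme above can be simulated with a standard Gillespie stochastic simulation algorithm. The following Python sketch implements such a simulation for a deliberately reduced subset of the network; the species list, the lumped production reaction and the initial counts are illustrative placeholders rather than the full model, while the binding, unbinding and decay constants are close to the values listed in Table \ref{param}.
\begin{verbatim}
import random, math

# Minimal Gillespie sketch for a reduced subset of the transcription network.
# Species counts; names and initial values are illustrative only.
state = {"(+)RNA": 1, "RTC": 10, "RT0:(+)RNA": 0, "(-)sgRNA1": 0}

# Each reaction: (rate constant per hour, reactants, products), mass-action kinetics.
reactions = [
    (8.64,  {"(+)RNA": 1, "RTC": 1}, {"RT0:(+)RNA": 1}),   # RTC binding (rt_on)
    (360.0, {"RT0:(+)RNA": 1}, {"(+)RNA": 1, "RTC": 1}),   # unbinding (rt_off)
    (66.0,  {"RT0:(+)RNA": 1},
            {"(+)RNA": 1, "RTC": 1, "(-)sgRNA1": 1}),      # lumped (-)sgRNA1 production (cf. tr_1)
    (0.11,  {"(-)sgRNA1": 1}, {}),                         # decay (d_1)
]

def propensity(rate, reactants):
    a = rate
    for sp, n in reactants.items():
        a *= math.comb(state[sp], n)   # mass-action combinatorics; zero if reactants missing
    return a

t, t_end = 0.0, 24.0                   # simulate 24 hours
while t < t_end:
    props = [propensity(r, re) for r, re, _ in reactions]
    a0 = sum(props)
    if a0 == 0:
        break
    t += random.expovariate(a0)        # time to the next reaction event
    pick = random.uniform(0, a0)       # choose which reaction fires
    for (r, re, pr), a in zip(reactions, props):
        if pick < a:
            for sp, n in re.items():
                state[sp] -= n
            for sp, n in pr.items():
                state[sp] = state.get(sp, 0) + n
            break
        pick -= a
print(state)
\end{verbatim}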
\subsection{Remdesivir related reactions}
Remdesivir acts as a nucleotide analogue that mimics the adenosine structure. It was originally developed as a treatment for Hepatitis C virus and later repurposed for Ebola \cite{Warren2016}. Recent studies have pointed to remdesivir as an effective antiviral treatment option for Covid-19 \cite{Beigel2020}. During the replication process the RTC may insert remdesivir molecules rather than adenosine, which caps the nascent strand and stops replication \cite{Zhang2020}. In order to model the impact of this drug, we assume that complexes containing an RTC in our model can bind (and subsequently unbind) remdesivir molecules (Rem). Thus the new reactions have the following form:
\begin{align*}
&\mbox{RTi$_k$:(+)RNA}+\mbox{Rem}\xtofrom[k_{off}]{k_{on}}\mbox{Re:RTi$_k$:(+)RNA},\\
&\mbox{Re:RTi$_k$:(+)RNA}\xrightarrow{r_{term}}\mbox{RTC},\\
&\mbox{RTi$_{10}$:(+)RNA}+\mbox{Rem}\xtofrom[k_{off}]{k_{on}}\mbox{Re:RTi$_{10}$:(+)RNA},\\
&\mbox{Re:RTi$_{10}$:(+)RNA}\xrightarrow{r_{term}}\mbox{RTC},\\
&\mbox{RT$_k$:(+)RNA}+\mbox{Rem}\xtofrom[k_{off}]{k_{on}}\mbox{Re:RT$_k$:(+)RNA},\\
&\mbox{Re:RT$_k$:(+)RNA}\xrightarrow{r_{term}}\mbox{RTC},\\
&\mbox{(-)sgRNA$_l$:jRT}+\mbox{Rem}\xtofrom[k_{off}]{k_{on}}\mbox{Re:(-)sgRNA$_l$:jRT},\\
&\mbox{Re:(-)sgRNA$_l$:jRT}\xrightarrow{r_{term}}\mbox{RTC}+\mbox{(-)sgRNA$_l$:(j-1)RT},
\end{align*}
where $k=1,2,...,9$ and $l=1,2,...,10$.
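In the simulation sketch given earlier, these drug-related reactions simply extend the reaction list. As an illustration, a chain-termination event can be encoded as an additional reaction that consumes the remdesivir-bound complex and releases only the RTC, following the reactions listed above; the species names below are the illustrative ones used in that sketch, while the rate values follow $k_{on}$, $k_{off}$ and $r_{term}$ of Table \ref{param}.
\begin{verbatim}
# Additional reactions for remdesivir (Rem), to be appended to the reaction list
# of the earlier Gillespie sketch; species names are illustrative placeholders.
remdesivir_reactions = [
    (8.64,   {"RT0:(+)RNA": 1, "Rem": 1}, {"Re:RT0:(+)RNA": 1}),  # binding (k_on)
    (3600.0, {"Re:RT0:(+)RNA": 1}, {"RT0:(+)RNA": 1, "Rem": 1}),  # unbinding (k_off)
    (360.0,  {"Re:RT0:(+)RNA": 1}, {"RTC": 1}),                   # termination (r_term); only RTC released
]
\end{verbatim}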
\subsection{Parameters for the intracellular model}
\subsubsection{RNA and protein decay rates}
Wada et al. \cite{Wada2018} have measured half-lives of the genome and the S-sgRNA for mouse hepatitis virus (MHV), a prototypic member of the CoV family belonging to the same genus (\textit{Betacoronavirus}) as SARS-CoV-2. The half-lives of the S-sgRNA and the genome were measured as 5.96 and 5.41 hours, respectively. Therefore, we assume that the half-lives of (+)RNA, (-)RNA, and all positive and negative sgRNAs are 6 hours, so their decay rate is $\ln2/6\mbox{ hour}^{-1}$. The half-life of the E protein is 1 hour \cite{Keng2011}, and we assume that the half-lives of N, M, S, pp1a and pp1ab are also 1 hour. As the half-life of the viral polymerase of HCV (which, like SARS-CoV-2, is a +ssRNA virus) has been measured as 6 hours \cite{Aunins2018}, we assume that the half-life of the RTC is also 6 hours. See Table \ref{param} for full details on these constants.
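As a quick check of this conversion, the decay rates follow from the half-lives via $d = \ln 2 / t_{1/2}$:
\begin{verbatim}
import math
half_lives = {"RNA, sgRNA, RTC": 6.0, "structural proteins, pp1a, pp1ab": 1.0}  # hours
for name, t_half in half_lives.items():
    print(name, round(math.log(2) / t_half, 3), "per hour")
# ~0.116 per hour (cf. 0.11 in Table 1) and ~0.693 per hour (cf. 0.69)
\end{verbatim}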
\subsubsection{Ribosome translation and RTC transcription rates}
Dimelow and Wilkinson \cite{Dimelow2009} have built a detailed kinetic model of translation initiation in yeast. They estimated that the forward rate constants for ribosome binding are in the range of 1–50 (\textmu M s)$^{-1}$. In this model we assume that the binding rate of the ribosome to RNA is 25 (\textmu M s)$^{-1}$ and its unbinding rate is 2 s$^{-1}$ \cite{Dimelow2009,Wilkinson2008}. The RTC binds to RNAs with dissociation constant ($K_d=\dfrac{rt_{off}}{rt_{on}}$) equal to 0.1 \textmu M \cite{Te2010}. It has been observed that increasing/decreasing the RTC binding rate does not have a significant effect on the outcome, as it also increases/decreases the RTC unbinding rate \cite{Fatehi2020}. Therefore we take the RTC binding rate equal to 1 (\textmu M s)$^{-1}$ and the unbinding rate equal to 0.1 s$^{-1}$. The translation rates are computed using the length of each protein and assuming a constant speed for the ribosome. We study the impact of changing this parameter (ribosome speed) in Fig. \ref{virion release}a and observe that the model behaves qualitatively the same when it is varied. Thus, we fix this parameter in the other simulations and assume that the speed of the ribosome is 10 amino acids per second. The replication and transcription rates of the RTC are computed using the length of each RNA and assuming a constant speed for the RTC \cite{Kim2020}. We assume that the speed of the RTC is 30 nucleotides per second \cite{Arnold2004,Arias2008}, although this parameter has also been varied. See Table \ref{param} for full details on these constants.
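The conversion from a constant elongation speed to a per-hour completion rate is simply rate $= 3600 \times$ speed$/$length. The short sketch below illustrates this; the template lengths used here are approximate and only for illustration.
\begin{verbatim}
def completion_rate(length, speed_per_second):
    """Per-hour completion rate for a template of the given length at constant speed."""
    return 3600.0 * speed_per_second / length

# Approximate lengths (illustrative): ORF1a ~ 4400 aa, ORF1b extension ~ 2700 aa.
print(round(completion_rate(4400, 10), 1))    # ~8.2 per hour, cf. t_1a in Table 1
print(round(completion_rate(2700, 10), 1))    # ~13.3 per hour, cf. t_1b in Table 1
# RTC at 30 nt/s over the remaining ~22 kb stretch:
print(round(completion_rate(22000, 30), 1))   # ~4.9 per hour, cf. the last entry of tr
\end{verbatim}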
\subsubsection{Remdesivir binding rates}
The relative free energy of binding for remdesivir is $\Delta G=-8.28 \pm 0.65 \mbox{ kcal/mol}$ \cite{Zhang2020}. The following equation relates the binding free energy ($\Delta G$) to $K_d$:
$K_d=e^{\beta\Delta G},$ where $\beta=\dfrac{1}{k_BT}$, $k_B$ is the Boltzmann constant and $T$ is the temperature in Kelvin \cite{Dykeman2013}. Therefore, for remdesivir $K_d$ is approximately 1~\textmu M, and we assume that the binding rate is equal to 1 (\textmu M s)$^{-1}$ and the unbinding rate is 1 s$^{-1}$.
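For reference, the numerical conversion from the binding free energy to the dissociation constant can be carried out as follows (using the gas constant $R$ as the per-mole analogue of $k_B$, since $\Delta G$ is quoted in kcal/mol):
\begin{verbatim}
import math
R = 1.987e-3            # kcal/(mol K), gas constant
T = 298.15              # K
dG = -8.28              # kcal/mol
Kd = math.exp(dG / (R * T))    # in mol/L (1 M standard state)
print(Kd * 1e6, "uM")   # ~0.86 uM, i.e. approximately 1 uM
\end{verbatim}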
The binding rates are measured in (M s)$^{-1}$, but as we are using a reaction-based model with the Gillespie algorithm, we use the cellular volume to convert these units to (molecule hour)$^{-1}$ \cite{Dykeman2013}. For converting the units of parameters related to the ribosome we have used the volume of a yeast cell, but for the RTC and remdesivir we have used the volume of an \textit{E. coli} cell, as these parameters were measured in \textit{E. coli}. The level of drug in each cell is constant and equal to 25 molecules; since $V=0.7~\mu\mbox{m}^3$, this is equivalent to 0.06 \textmu M \cite{Gordon2020}.
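These unit conversions can be checked directly. For example, a bimolecular rate of 1 (\textmu M s)$^{-1}$ in a volume of $0.7~\mu$m$^3$ gives roughly the value 8.64 Molecule$^{-1}$hour$^{-1}$ listed in Table \ref{param} (the small residual difference depends on the exact volume used), and 25 drug molecules in that volume correspond to about 0.06 \textmu M:
\begin{verbatim}
N_A = 6.022e23            # molecules per mol
V_L = 0.7e-15             # 0.7 cubic micrometres expressed in litres

# Bimolecular rate: 1 (uM s)^-1 = 1e6 (M s)^-1  ->  (molecule hour)^-1
rate_M_s = 1e6
rate_molec_hour = rate_M_s * 3600.0 / (N_A * V_L)
print(round(rate_molec_hour, 2))       # ~8.5, close to 8.64 in Table 1

# 25 molecules in the same volume, expressed as a concentration in uM
conc_uM = 25 / (N_A * V_L) * 1e6
print(round(conc_uM, 3))               # ~0.059 uM
\end{verbatim}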
\section{Intercellular model in the context of antiviral therapy}
In order to study the effects of antiviral therapy in the context of our intercellular model, we multiply the viral production rate $p$ by $(1-\epsilon)$, where $0\leq\epsilon\leq 1$ is the drug efficacy \cite{Dahari2009,Kim2012,Guedj2013,Chenar2018}. As our intracellular model (Fig. 1c and d) suggests that starting treatment in the \textit{latent period} is most effective and would effectively block the production of virions, we set $\gamma=0$ at the onset of treatment. This means that infected cells in the latent phase ($L$) do not transition to phase $I$ and instead begin shedding virions at a reduced rate (following a time lag $\tau$) compared with cells that were already in phase $I$ at the onset of treatment. Thus, we have the following equations for the numbers of infected cells and free virions:
\begin{equation}\label{intertreatment}
\begin{array}{l}
\vspace{0.15cm}
\displaystyle{\dfrac{dL}{dt}=\beta TV-\delta L-\gamma(1-\theta(t-t_R))L-\mu LE,}\\
\vspace{0.15cm}
\displaystyle{\dfrac{dI}{dt}=\gamma(1-\theta(t-t_R)) L-\delta I-\mu IE,}\\
\displaystyle{\dfrac{dV}{dt}=p(1-\epsilon)\theta(t-\tau-t_R)L(t-\tau)+p(1-\eta\theta(t-t_R))I-cV-kAV,}\\
\end{array}
\end{equation}
where $\theta(\cdot)$ is the Heaviside step function ($\theta(t)=1$ for $t \geq 0$ and $\theta(t)=0$ for $t < 0$), $\epsilon$ and $\eta$ are the efficacies of the drug for cells that are in phases $L$ and $I$, respectively, and $t_R$ is the time at which the antiviral treatment (remdesivir) is introduced.
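A minimal numerical sketch of this switched, delayed subsystem is given below. It integrates only the $(L,I,V)$ equations shown above with a forward-Euler scheme and, as a deliberate oversimplification of the full model (2.1), treats the target cells $T$, effector cells $E$ and antibodies $A$ as constants; all parameter values are placeholders, not the fitted patient values.
\begin{verbatim}
import numpy as np

# Placeholder parameters (not the fitted patient values)
beta, delta, gamma, mu, p, c, k = 1e-7, 0.2, 0.9, 0.05, 5e-3, 2.0, 3e-10
eps, eta, tau, t_R = 0.9, 0.9, 2.0, 7.0        # efficacies, time lag, treatment start (days)
T_cells, E_cells, A_ab = 1e7, 1e3, 1e2          # held constant in this sketch

dt, t_end = 1e-3, 20.0
n = int(t_end / dt)
L = np.zeros(n); I = np.zeros(n); V = np.zeros(n)
V[0] = 1.0
H = lambda x: 1.0 if x >= 0 else 0.0            # Heaviside function

for i in range(n - 1):
    t = i * dt
    L_lag = L[max(i - int(tau / dt), 0)]        # crude lookup of L(t - tau)
    dL = beta * T_cells * V[i] - delta * L[i] \
         - gamma * (1 - H(t - t_R)) * L[i] - mu * L[i] * E_cells
    dI = gamma * (1 - H(t - t_R)) * L[i] - delta * I[i] - mu * I[i] * E_cells
    dV = (p * (1 - eps) * H(t - tau - t_R) * L_lag
          + p * (1 - eta * H(t - t_R)) * I[i] - c * V[i] - k * A_ab * V[i])
    L[i + 1] = L[i] + dt * dL
    I[i + 1] = I[i] + dt * dI
    V[i + 1] = V[i] + dt * dV

print(V.max(), V[-1])
\end{verbatim}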
\newpage
\section{Structural identifiability of the intercellular model}
We use the method of Castro and de Boer \cite{Castro2020}, which is based on scale invariance of the equations, to establish structural identifiability of our model. Following this method, we scale parameters and variables by unknown scaling factors, denoted here by $u_j$ with indices $j$ used to distinguish them. Assuming otherwise fixed parameters, and given that the viral load $V$ is the observable, we have $u_V=1$. We substitute the rescaled parameters and variables into the model to obtain:
\begin{equation*}
\begin{array}{l}
\vspace{0.15cm}
\displaystyle{\dfrac{dT}{dt}=\dfrac{1}{u_T}(u_Tr_TT(1-\dfrac{u_TT+u_LL+u_II}{T_m})-u_{\beta}u_T\beta TV),}\\\vspace{0.15cm}
\displaystyle{\dfrac{dL}{dt}=\dfrac{1}{u_L}(u_{\beta}u_T\beta TV-u_{\delta}u_L\delta L-u_{\gamma}u_L\gamma L-u_{\mu}u_Lu_E\mu LE),}\\\vspace{0.15cm}
\displaystyle{\dfrac{dI}{dt}=\dfrac{1}{u_I}(u_{\gamma}u_L\gamma L-u_{\delta}u_I\delta I-u_{\mu}u_Iu_E\mu IE),}\\\vspace{0.15cm}
\displaystyle{\dfrac{dV}{dt}=u_pu_IpI-u_ccV-u_ku_AkAV,}\\\vspace{0.15cm}
\displaystyle{\dfrac{dE}{dt}=\dfrac{1}{u_E}(\lambda_E+u_{\alpha}u_E\alpha(u_LL+u_II)E-u_{d_E}u_Ed_EE),}\\
\displaystyle{\dfrac{dA}{dt}=\dfrac{1}{u_A}(u_{p_A}p_AV+u_{r_A}u_Ar_AA(1-\dfrac{u_AA}{A_m})-u_ku_AkAV-u_Ad_AA).}
\end{array}
\end{equation*}
We then equate the right-hand sides of these equations to the right-hand sides of the model (2.1) and solve for the values of the scaling factors. It is easy to show that for all parameters and variables the scaling factor is forced to equal 1. Thus, according to the method of Castro and de Boer, the model is structurally identifiable.
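A quick symbolic check of this computation can be performed by collecting the constraints obtained from the term-by-term matching and solving for the scaling factors. The constraint list below is our reading of the matching implied by the rescaled equations above and should be checked against the full model (2.1); solving it returns 1 for every factor.
\begin{verbatim}
import sympy as sp

u_T, u_L, u_I, u_E, u_A = sp.symbols('u_T u_L u_I u_E u_A', positive=True)
u_beta, u_delta, u_gamma, u_mu, u_p, u_c, u_k = sp.symbols(
    'u_beta u_delta u_gamma u_mu u_p u_c u_k', positive=True)
u_alpha, u_dE, u_pA, u_rA = sp.symbols('u_alpha u_dE u_pA u_rA', positive=True)

# Constraints from matching coefficients term by term (with u_V = 1, V observed)
constraints = [
    u_T - 1, u_L - 1, u_I - 1,              # logistic term in dT/dt
    u_beta * u_T - 1,                        # infection term
    u_delta - 1, u_gamma - 1,                # dL/dt, dI/dt
    u_mu * u_E - 1,                          # killing term
    u_p * u_I - 1, u_c - 1, u_k * u_A - 1,   # dV/dt with u_V = 1
    u_E - 1, u_alpha - 1, u_dE - 1,          # dE/dt (constant production lambda_E)
    u_A - 1, u_pA - 1, u_rA - 1,             # dA/dt
]
print(sp.solve(constraints, dict=True))      # every scaling factor equals 1
\end{verbatim}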
\color{black}
\newpage
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{Rib.png}
\caption{Schematic presentation of the SARS-CoV-2 genome and genome translation process. (a) The viral genome encodes ORF1a and ORF1b, which are translated, and nine subgenomic RNAs (sgRNAs), which encode the viral structural proteins (S, M, N, E) and accessory proteins. (b) The ribosome translates the pp1a gene (ORF1a). After translation of the pp1a gene, the ribosome can either frameshift $-1$ nt to the ORF1b reading frame, translating the polyprotein pp1ab, or terminate, releasing the polyprotein pp1a. The probability of frameshifting from the ORF1a to the ORF1b reading frame is $q$.}
\label{Rib}
\end{figure}
\newpage
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{RTC.png}
\caption{Transcription of the SARS-CoV-2 genome via the RTC complex. During the transcription process a subset of 9 sub-genomic minus strands is produced through discontinuous transcription. The full-length genome contains functional TRS motifs which are found at the $3'$ end of the leader (leader TRS) and in front of each of the 9 ORFs (Fig. \ref{Rib}a). When an RTC encounters a TRS motif, it will either continue the elongation of the negative strand or terminate transcription, resulting in the production of (-)sgRNAs. The probability of RTC elongation is $r$.}
\label{RTC}
\end{figure}
\newpage
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{virionreleased.png}
\caption{Total number of released virions using parameter values from Table \ref{param}. Total number of secreted virions from an infected cell as a function of (a) ribosome amino acid association rate, and (b) RTC nucleotide association rate. Total number of released viral particles as a function of (c) RTC half-life, and (d) RTC formation rate ($f_{rt}$).}
\label{virion release}
\end{figure}
\newpage
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{ForEditor.png}
\caption{Comparison of patient data with model predictions. The best fit of the viral load $V$ computed using the model in (2.1) (red line) to data for 12 patients (\textcolor{blue}{$\bullet $}) is presented. The 95\% CI (confidence interval) is shown as a red shaded area around each curve, but is indistinguishable from it by eye due to its small size. The green shaded areas indicate the mean plus/minus standard deviation, reflecting the fact that some patient data (such as P4 and P6) are more noisy, resulting in bigger fluctuations in the fitted curves.}
\label{patient data}
\end{figure}
\newpage
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{immunereducedsepa.png}
\caption{The impact of the immune response on infection dynamics. Parameter values used are the median values of Table 1 (black lines are given by $\mu=5.7\times 10^{-2}$, $\alpha=7.08\times 10^{-9}$ and $r_A=1.98$). A reduction in the removal rate of infected cells by effector cells causes a spike in the level of effector cells (red lines are given by $\mu=3.5\times 10^{-4}$). A decrease of the proliferation rate of effector cells causes significant damage to healthy cells and increases the viral peak (blue lines are given by $\alpha=5.4\times 10^{-10}$), and a reduction of the antibody proliferation rate results in a third peak in viral load (magenta lines are given by $r_A=1$).}
\label{immune}
\end{figure}
\newpage
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{TreatmentDelay.png}
\caption{Treatment model with a time lag, i.e. infected cells in group $L$ do not move to $I$ after the start of treatment and produce virions only after a time lag $\tau$ (\ref{intertreatment}). Solid lines indicate progression of the infection in the absence of treatment as a control. Dashed, dotted and dash-dotted curves show the result of starting treatment at 7, 6, and 5 dpi, respectively. Parameters are the median values of Table 1 with $\tau=2\mbox{ days}$. Red curves correspond to the scenario of a low removal rate of infected cells by effector cells ($\mu=3.5\times 10^{-4}$). Blue curves illustrate the scenario of a low proliferation rate of effector cells ($\alpha=5.4\times 10^{-10}$), and magenta curves of a low antibody proliferation rate ($r_A=1$). The green line indicates the viral detection limit. Outcomes are similar to those of the model without a time lag, which is therefore used in the main text.}
\label{treatment delay}
\end{figure}
\newpage
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{surf.png}
\caption{The impact of CP therapy on the reduction of the AUC for different treatment start times and antibody efficacies. The first and second columns illustrate the minimum level of $\tilde{A}_m$ required for a 25\% and 50\% reduction in the AUC, respectively. (a) and (b) Parameters are the median values of Table 1, i.e. represent the generic case based on the patient data; (c) and (d) correspond to the scenario of a low removal rate of infected cells by effector cells ($\mu=3.5\times 10^{-4}$); (e) and (f) illustrate the scenario of a low proliferation rate of effector cells ($\alpha=5.4\times 10^{-10}$); and (g) and (h) show a case with a low antibody proliferation rate ($r_A=1$).}
\label{surf}
\end{figure}
\newpage
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{mAbTherapy.png}
\caption{An early CP therapy extends the duration of infection more than an early antiviral therapy does. Solid lines indicate progression of the infection in the absence of treatment as a control. Dashed, dotted and dash-dotted curves show the result of starting treatment at 7, 6, and 5 dpi, respectively. Parameters are the median values of Table 1. Red curves correspond to the scenario of a low removal rate of infected cells by effector cells ($\mu=3.5\times 10^{-4}$). Blue curves illustrate the scenario of a low proliferation rate of effector cells ($\alpha=5.4\times 10^{-10}$), and magenta curves of a low antibody proliferation rate ($r_A=1$). The green line indicates the viral detection limit.}
\label{mAb treatment}
\end{figure}
\newpage
\begin{table}[H]
\centering
\caption{Table of parameter values}
\begin{tabular}{c|c|c|c}
\hline\hline
Parameter & Value & Parameter & Value \\\hline
$r_{on}$ & 8.64 Molecule$^{-1}$hour$^{-1}$ & $r_{off}$ & 7200 hour$^{-1}$ \\
$d_r$ & 0.11 hour$^{-1}$ & $r_{in}$ & 360 hour$^{-1}$ \\
$t_{1a}$ & 8.2 hour$^{-1}$ & $t_{1b}$ & 13.4 hour$^{-1}$ \\
$t_f$ & 360 hour$^{-1}$ & $q$ & 0.4 \\
$\delta_a$ & 0.69 hour$^{-1}$ & $\delta_b$ & 0.69 hour$^{-1}$ \\
$f_{rt}$ & 30 hour$^{-1}$ & $\delta_r$ & 0.11 hour$^{-1}$ \\
$rt_{on}$ & 8.64 Molecule$^{-1}$hour$^{-1}$ & $rt_{off}$ & 360 hour$^{-1}$ \\
$rt_{in}$ & 360 hour$^{-1}$ & $r$ & 0.7 \\
$t_n$ & 4.3 hour$^{-1}$ & $t_m$ & 8.1 hour$^{-1}$ \\
$t_e$ & 24 hour$^{-1}$ & $t_s$ & 1.4 hour$^{-1}$ \\
$\delta_n$ & 0.69 hour$^{-1}$ & $\delta_m$ & 0.69 hour$^{-1}$ \\
$\delta_e$ & 0.69 hour$^{-1}$ & $\delta_s$ & 0.69 hour$^{-1}$ \\
$t_c$ & 360 hour$^{-1}$ & $r_{term}$ & 360 hour$^{-1}$ \\
$k_{bud}$ & 0.0001 hour$^{-1}$ & $Rib$ & 1000 \\
$k_{on}$ & 8.64 Molecule$^{-1}$hour$^{-1}$ & $k_{off}$ & 3600 hour$^{-1}$ \\\hline
\textbf{tr} & \multicolumn{3}{l}{(66.26, 284.21, 138.0, 782.61, 562.5, 159.1, 388.49, 126.76, 28.2, 5) hour$^{-1}$}\\
\textbf{b} & \multicolumn{3}{l}{(8.64, 8.64, 8.64, 8.64, 8.64, 8.64, 8.64, 8.64, 8.64, 8.64) hour$^{-1}$}\\
\textbf{d} & \multicolumn{3}{l}{(0.11, 0.11, 0.11, 0.11, 0.11, 0.11, 0.11, 0.11, 0.11, 0.11) hour$^{-1}$}\\
\textbf{n} & \multicolumn{3}{l}{(1, 1, 1, 1, 1, 1, 1, 1, 1, 1)} \\
\textbf{p} & \multicolumn{3}{l}{(63.53, 51.92, 48.69, 41.86, 38.96, 31.29, 28.96, 23.57, 12.84, 3.61) hour$^{-1}$}\\\hline
\end{tabular}
\label{param}
\end{table}
\begin{sidewaystable}
\footnotesize
\centering
\caption{Bootstrap 95\% CIs for the parameter estimates}
\label{patient param 95CI}
\begin{tabular}{lcccccccccccc}
\hline
Patient & $\beta\times 10^{-8}$ & $\delta$ & $\gamma$ & $\mu$ & $p\times 10^{-3}$ & $k\times 10^{-10}$ & $c$ & $\alpha\times 10^{-9}$ & $p_A\times 10^{-5}$ & $r_A$ & I.P & A.A \\\hline
1 & {[}20.685, 20.715{]} & {[}0.247, 0.249{]} & {[}0.899, 0.901{]} & {[}0.0464, 0.0476{]} & {[}2.153, 2.207{]} & {[}2.338, 2.402{]} & {[}1.236, 1.244{]} & {[}17.162, 17.238{]} & {[}1.288, 1.372{]} & {[}1.976, 1.984{]} & {[}3.97, 4.03{]} & {[}14.97, 15.03{]} \\
2 & {[}2.522, 2.558{]} & {[}0.246, 0.25{]} & {[}0.899, 0.901{]} & {[}0.0052, 0.0064{]} & {[}8.073, 8.187{]} & {[}3.65, 3.71{]} & {[}3.11, 3.19{]} & {[}1.149, 1.191{]} & {[}0.842, 0.898{]} & {[}1.624, 1.636{]} & {[}3.95, 4.05{]} & {[}20.95, 21.05{]} \\
3 & {[}9.02, 9.22{]} & {[}0.341, 0.347{]} & {[}0.898, 0.902{]} & {[}0.096, 0.104{]} & {[}2.087, 2.153{]} & {[}2.609, 2.771{]} & {[}1.16, 1.2{]} & {[}15.027, 15.973{]} & {[}1.362, 1.458{]} & {[}2.045, 2.055{]} & {[}5.98, 6.02{]} & {[}15.98, 16.02{]} \\
4 & {[}16.09, 16.91{]} & {[}0.194, 0.206{]} & {[}0.99996, 1.00004{]} & {[}0.00098, 0.00102{]} & {[}9.267, 9.533{]} & {[}2.116, 2.284{]} & {[}1.57, 1.65{]} & {[}1.257, 1.483{]} & {[}4.267, 4.593{]} & {[}1.233, 1.267{]} & {[}1.999, 2.001{]} & {[}15.999, 16.001{]} \\
5 & {[}5.808, 5.972{]} & {[}0.168, 0.172{]} & {[}0.999994, 1.000006{]} & {[}0.112, 0.118{]} & {[}1.929, 1.991{]} & {[}5.52, 5.58{]} & {[}1.12, 1.16{]} & {[}19.045, 19.155{]} & {[}1.879, 2.001{]} & {[}1.368, 1.372{]} & {[}5.96, 6.04{]} & {[}22.96, 23.04{]} \\
6 & {[}12.79, 13.21{]} & {[}0.268, 0.282{]} & {[}0.818, 0.822{]} & {[}0.182, 0.192{]} & {[}3.116, 3.284{]} & {[}4.883, 5.117{]} & {[}1.82, 1.96{]} & {[}7.364, 8.416{]} & {[}2.969, 3.231{]} & {[}1.47, 1.49{]} & {[}1.998, 2.002{]} & {[}19.998, 20.002{]} \\
7 & {[}13.15, 13.45{]} & {[}0.193, 0.207{]} & {[}0.847, 0.853{]} & {[}0.19998, 0.20002{]} & {[}9.803, 9.997{]} & {[}4.156, 4.404{]} & {[}1.94, 2{]} & {[}5.927, 6.613{]} & {[}6.457, 6.743{]} & {[}2.822, 2.838{]} & {[}3.97, 4.03{]} & {[}9.97, 10.03{]} \\
8 & {[}10.29, 10.51{]} & {[}0.346, 0.354{]} & {[}0.79998, 0.80002{]} & {[}0.038, 0.042{]} & {[}3.587, 3.713{]} & {[}3.408, 3.592{]} & {[}1.28, 1.32{]} & {[}16.275, 16.725{]} & {[}6.923, 7.077{]} & {[}2.173, 2.187{]} & {[}3.96, 4.04{]} & {[}13.96, 14.04{]} \\
9 & {[}10.43, 10.77{]} & {[}0.50995, 0.51005{]} & {[}0.837, 0.843{]} & {[}0.0029, 0.0041{]} & {[}10.336, 10.464{]} & {[}2.713, 2.887{]} & {[}1.09, 1.11{]} & {[}0.829, 0.891{]} & {[}0.938, 0.982{]} & {[}2.124, 2.136{]} & {[}2.98, 3.02{]} & {[}12.98, 13.02{]} \\
10 & {[}12.05, 12.35{]} & {[}0.219, 0.239{]} & {[}0.79997, 0.80003{]} & {[}0.114, 0.126{]} & {[}10.077, 10.323{]} & {[}2.533, 2.787{]} & {[}3.89, 3.93{]} & {[}2.9, 3.1{]} & {[}5.562, 5.738{]} & {[}1.564, 1.576{]} & {[}5.96, 6.04{]} & {[}15.96, 16.04{]} \\
11 & {[}12.471, 12.529{]} & {[}0.196, 0.204{]} & {[}0.896, 0.904{]} & {[}0.0664, 0.0676{]} & {[}1.275, 1.325{]} & {[}3.952, 4.048{]} & {[}0.7496, 0.7504{]} & {[}7.911, 8.269{]} & {[}6.415, 6.585{]} & {[}2.491, 2.509{]} & {[}5.98, 6.02{]} & {[}12.98, 13.02{]} \\
12 & {[}6.036, 6.104{]} & {[}0.196, 0.204{]} & {[}0.898, 0.902{]} & {[}0.0009998, 0.0010002{]} & {[}9.267, 9.353{]} & {[}1.237, 1.283{]} & {[}1.68, 1.72{]} & {[}1.103, 1.137{]} & {[}7.615, 7.785{]} & {[}1.975, 1.985{]} & {[}3.97, 4.03{]} & {[}13.97, 14.03{]} \\\hline
\end{tabular}
\footnotetext{I.P: incubation period (day). A.A: antibodies appearance (day).}
\footnotetext{The units of these parameters are day$^{-1}$.}
\end{sidewaystable}
\newpage
\begin{table}[H]
\color{black}
\caption{AUC values for antiviral therapy}
\begin{tabular}{lllll}
& No Treatment & 5 dpi & 6 dpi & 7 dpi \\ \hline
Healthy I.S (median values, Fig. 3b) & $8.51\times 10^5$ & $1.22\times 10^5$ & $6.6\times 10^5$ & $6.73\times 10^5$ \\ \hline
Immunocompromised (low $\mu$, Fig. 3e) & $1.43\times 10^6$ & $1.95\times 10^5$ & $1.4\times 10^6$ & $1.41\times 10^6$ \\ \hline
Immunocompromised (low $\alpha$, Fig. 3h) & $9.81\times 10^6$ & $4.37\times 10^5$ & $2.5\times 10^6$ & $8.77\times 10^6$ \\ \hline
Immunocompromised (low $r_A$, Fig. 3k) & $1.23\times 10^6$ & $1.23\times 10^5$ & $6.6\times 10^5$ & $6.73\times 10^5$ \\ \hline
\end{tabular}\\
\footnotesize{I.S: immune system}
\end{table}
\begin{table}[H]
\caption{AUC values for CP therapy}
\begin{tabular}{lllll}
& No Treatment & 5 dpi & 6 dpi & 7 dpi \\ \hline
Healthy I.S (median values, Fig. 3b) & $8.51\times 10^5$ & $5.78\times 10^4$ & $3.51\times 10^5$ & $5.94\times 10^5$ \\ \hline
Immunocompromised (low $\mu$, Fig. 3e) & $1.43\times 10^6$ & $1.03\times 10^5$ & $6.29\times 10^5$ & $1.23\times 10^6$ \\ \hline
Immunocompromised (low $\alpha$, Fig. 3h) & $9.81\times 10^6$ & $6.22\times 10^5$ & $7.93\times 10^5$ & $5.2\times 10^6$ \\ \hline
Immunocompromised (low $r_A$, Fig. 3k) & $1.23\times 10^6$ & $1.22\times 10^5$ & $3.51\times 10^5$ & $5.94\times 10^5$ \\ \hline
\end{tabular}\\
\footnotesize{I.S: immune system}
\end{table}
\begin{table}[H]
\caption{AUC values for antiviral+CP therapy}
\begin{tabular}{lllll}
& No Treatment & 5 dpi & 6 dpi & 7 dpi \\ \hline
Healthy I.S (median values, Fig. 3b) & $8.51\times 10^5$ & $2.9\times 10^4$ & $3.51\times 10^5$ & $5.94\times 10^5$ \\ \hline
Immunocompromised (low $\mu$, Fig. 3e) & $1.43\times 10^6$ & $3.84\times 10^4$ & $6.29\times 10^5$ & $1.23\times 10^6$ \\ \hline
Immunocompromised (low $\alpha$, Fig. 3h) & $9.81\times 10^6$ & $3.03\times 10^4$ & $5.37\times 10^5$ & $5.2\times 10^6$ \\ \hline
Immunocompromised (low $r_A$, Fig. 3k) & $1.23\times 10^6$ & $2.9\times 10^4$ & $3.51\times 10^5$ & $5.94\times 10^5$ \\ \hline
\end{tabular}\\
\footnotesize{I.S: immune system}
\end{table}
\begin{table}[H]
\caption{AUC values for treatment starts after the reported onset of symptoms}
\begin{tabular}{lllll}
& No treatment & Antiviral & CP & Antiviral+CP \\\hline
P1 & $1.85\times 10^{5}$ & 5420 & $1.17\times 10^4$ & 365.08 \\\hline
P2 & $6.13\times 10^6$ & 132.08 & 226.72 & 51.35 \\\hline
P3 & $1.77\times 10^5$ & 1200 & 512.21 & 312.85 \\\hline
P4 & $1.11\times 10^7$ & $4.66\times 10^5$ & $1.28\times 10^6$ & 77.36 \\\hline
P5 & $2.41\times 10^5$ & 304.46 & 102.16 & 84.03 \\\hline
P6 & $3.41\times 10^5$ & 204.63 & 84.89 & 53.77 \\\hline
P7 & $5.9\times 10^5$ & $8.08\times 10^4$ & $3.76\times 10^4$ & $1.76\times 10^4$ \\\hline
P8 & $2.41\times 10^5$ & 766.43 & 363.77 & 90.23 \\\hline
P9 & $1.79\times 10^7$ & $1.25\times 10^5$ & $8.21\times 10^5$ & 445.64 \\\hline
P10 & $1.26\times 10^6$ & $6.38\times 10^5$ & $4.34\times 10^5$ & $4.28\times 10^5$ \\\hline
P11 & $3.24\times 10^5$ & 4500.1 & 1008.6 & 629.8 \\\hline
P12 & $1.19\times 10^7$ & $2.19\times 10^4$ & $1.71\times 10^6$ & 888.37 \\\specialrule{.2em}{.1em}{.1em}
median & $4.66\times 10^5$ & 4960 & 6350 & 339 \\\hline
mean & $4.2\times 10^6$ & $1.12\times 10^5$ & $3.58\times 10^5$ & $3.74\times 10^4$ \\\hline
std & $6.13\times 10^6$ & $2.1\times 10^5$ & $5.95\times 10^5$ & $1.23\times 10^5$ \\\hline
95\% CI& $[0.73, 7.7]\times 10^6$ & $[0.00001, 2.3]\times 10^5$ & $[0.21, 6.95]\times 10^5$ & $[0.000001, 1.07]\times 10^5$ \\\hline
\end{tabular}
\end{table}
\newpage
\section*{Acknowledgment}
The authors are grateful to Demosthenis Teneketzis, Peter Caines, and Dileep
Kalathil for useful discussions and feedback. The work of JS and AM was
supported in part by the Natural Sciences and Engineering Research Council of
Canada through Discovery Grant RGPIN-2016-05165. The work of AS, RS, and AM
was supported in part by the Innovation for Defence Excellence and Security
(IDEaS) Program of the Canadian Department of National Defence through grant
CFPMN2-037. AS was also supported by an FRQNT scholarship. The numerical
experiments were enabled in part by support provided by Calcul Québec and
Compute Canada.
\section{Convergence of the PORL algorithm}
\label{sec:porl-convergence}
In this section, we discuss the convergence of the PORL algorithm presented in
Sec.~\ref{sec:grad-ascent} and~\ref{sec:PORL}. The proof of convergence relies
on multi-timescale stochastic approximation~\cite{Borkar_1997} under
conditions similar to the standard conditions for convergence of policy
gradient algorithms with function approximation stated
below:
\begin{assumption} \label{ais_assmpt}
The following conditions are satisfied:
\begin{enumerate}[leftmargin=1pc,nosep]
\item All network parameters $(\bar \xi_k, \zeta_k, \theta_k)$
lie in convex and bounded subsets of Euclidean spaces.
\item The gradient of the loss function $\GRAD_{\bar \xi}\mathcal{L}(\bar \xi_k)$ of the state approximator is
Lipschitz in $\bar \xi_k$, and the gradient of the TD loss $\GRAD_{\zeta}\mathcal{L}_{\text{TD}}(\bar \xi_k, \theta_k, \zeta_k)$
and the policy gradient $\widehat \GRAD_{\theta_k} J(\bar \xi_k,
\theta_k, \zeta_k)$ are Lipschitz in $(\bar \xi_k, \theta_k, \zeta_k)$
with respect to the sup norm.
\item All the gradients---$\GRAD_{\bar \xi}\mathcal{L}(\bar \xi_k)$ at the state
approximator; $\GRAD_{\zeta}\mathcal{L}_{\text{TD}}(\bar \xi_k, \theta_k, \zeta_k)$ at the critic;
and $\widehat \GRAD_{\theta_k}J(\bar \xi_k, \theta_k, \zeta_k)$ at the actor---are unbiased with bounded variance. Furthermore, the critic and the actor function approximators are compatible as given in~\cite{Sutton2000}, i.e.,
\[
\frac{\partial Q_{\zeta_k}(\hat Z_t, A_t)}{\partial \zeta} = \frac{1}{\pi_{\theta_k}(\hat Z_t, A_t)}\frac{\partial \pi_{\theta_k}(\hat Z_t, A_t)}{\partial \theta}.
\]
\item The learning rates are sequences of positive numbers $\{a_k\}_{k \ge 0}, \{b_k\}_{k \ge 0}, \{c_k\}_{k \ge 0}$ (an example choice is sketched after this assumption) that satisfy:
\(
\sum a_k = \infty,
\)
\(
\sum b_k = \infty,
\)
\(
\sum c_k = \infty,
\)
\(
\sum a_k^2 < \infty,
\)
\(
\sum b_k^2 < \infty,
\)
\(
\sum c_k^2 < \infty,
\)
\(
\lim_{k \to \infty} c_k/a_k = 0,
\)
and
\(
\lim_{k \to \infty} b_k/c_k = 0.
\)
\end{enumerate}
\end{assumption}
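For concreteness, polynomially decaying step sizes with exponents in $(0.5, 1]$ and a suitable ordering satisfy condition 4) of Assumption~\ref{ais_assmpt}; the specific exponents below are an illustrative choice, not the ones used in our experiments.
\begin{verbatim}
# Example step-size schedules satisfying Assumption 1.4:
# exponents in (0.5, 1] give non-summable sums and summable squares,
# and the ordering 0.55 < 0.75 < 0.9 yields c_k/a_k -> 0 and b_k/c_k -> 0,
# i.e. the three-timescale separation required by the convergence argument.
def a(k): return (k + 1) ** -0.55   # decays slowest (fastest timescale)
def c(k): return (k + 1) ** -0.75   # intermediate timescale
def b(k): return (k + 1) ** -0.90   # decays fastest (slowest timescale)

for k in (10, 1000, 100000):
    print(k, c(k) / a(k), b(k) / c(k))   # both ratios tend to 0 as k grows
\end{verbatim}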
\begin{assumption}\label{ass:regularity}
The following regularity conditions hold:
\begin{enumerate}
\item The ODE
corresponding to $\theta$ in~\eqref{eq:ais_ac} is locally asymptotically
stable.
\item The ODEs corresponding to $\bar \xi$ and $\zeta$ in~\eqref{eq:ais_ac}
are globally asymptotically stable. In addition, the ODE corresponding to
$\zeta$ has a fixed point which is
Lipschitz continuous in $\theta$.
\end{enumerate}
\end{assumption}
The proposed RL framework has the following convergence guarantees.
\begin{theorem} \label{thm:ais_rl_convergence}
Under Assumptions~\ref{ais_assmpt} and~\ref{ass:regularity},
along any sample path, almost surely we have the following:
\begin{enumerate}
\item[\textup{(a)}] The iteration for $\bar \xi$ in~\eqref{eq:ais_ac}
converges to a state estimator that minimizes the loss function
$\mathcal L(\bar \xi)$;
\item[\textup{(b)}] The iteration for $\zeta$ in~\eqref{eq:ais_ac} converges
to a critic that minimizes the error with respect to the true
$Q$-function;
\item[\textup{(c)}] The iteration for $\theta$ in~\eqref{eq:ais_ac}
converges to a local maximum of the performance $J(\bar \xi^*, \zeta^*,
\theta)$, where $\bar \xi^*$ and $\zeta^*$ are the converged values of
$\bar \xi$ and $\zeta$.
\end{enumerate}
\end{theorem}
\begin{proof}
The assumptions satisfy all four conditions stated in~\cite[page 35]{Leslie_2004},
\cite[Theorem 23]{Borkar_1997}. The proof follows by combining this two-timescale argument with a third, fastest timescale for learning the state representation. Due to the specific choice of learning rates, the state-representation update sees a stationary actor and critic, while the actor and critic in turn see a converged state-approximator iteration because of its faster learning rate. The convergence of the state approximator follows from~\cite[Theorem
2.2]{Borkar:book} and the fact that the model satisfies conditions (A1)--(A4) of~\cite[pg~10--11]{Borkar:book}. The Martingale difference condition (A3) of \cite{Borkar:book} is satisfied due to the unbiasedness assumption on the state approximator.
The result then follows by combining the theorem in~\cite[page 35]{Leslie_2004},
\cite[Theorem 23]{Borkar_1997} with~\cite[Theorem
2.2]{Borkar:book}, using the third, fastest timescale for the state approximator.
\end{proof}
}
\section{Details about the network architecture, training, and hyperparameters}
\label{sec:network}
As explained in Sec.~\ref{sec:experiments}, the \acs{AIS}-generator consists
of four components: the history compression function $\ainfo$, the \acs{AIS}
update function $\aupdate$, the reward prediction function $\rewinfo$, and the
observation prediction kernel $\nextobs$. We model the first as an LSTM, where
the memory update unit of LSTM acts as $\aupdate$. We model $\rewinfo$,
$\nextobs$, and the policy $\hat{\pi}$ as feed-forward neural networks. We
describe the details for each difficulty class of environment separately. In
the description below, we use $\Linear(n,m)$ to denote a linear layer,
$\Tanh(n,m)$ to denote a tanh layer, $\Relu(n,m)$ to denote a ReLU layer, and
$\LSTM(n,m)$ to denote an LSTM layer, where $n$ denotes the number of inputs
and $m$ denotes the number of outputs of each layer. The sizes of the inputs
and outputs depend on the size of the observation and action spaces, which we
denote by $n_O$ and $n_A$, respectively, as well as on the dimension of the
\acs{AIS} and, for the minigrid environments, the dimension of the
observation vector, which we denote by $d_{\hat Z}$ and $d_O$, respectively ($d_L$ denotes the dimension of the latent observation encoding used there). We also use $\Conv(IC, OC, (FSx, FSy))$ to denote a 2D convolutional layer, where $IC$, $OC$, and $(FSx, FSy)$ represent the number of input channels, the number of output channels, and the kernel size (along $x$ and $y$), respectively. Note that the strides are the same as the kernel size in this case. $\ELU$ represents the Exponential Linear Unit and is used to model the prediction of variance. Finally, $\GMM(n_{\textup{comp}})$ represents a Gaussian Mixture Model with $n_{\textup{comp}}$ Gaussian components. Most of the details are common to both the AIS+KL and the AIS+MMD cases; we make a distinction whenever they differ by indicating KL or MMD.
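To make this notation concrete, the following PyTorch-style sketch shows how a column of the tables below translates into a module; it follows the $\ainfo$ and $\hat{\pi}$ columns of the low-dimensional case and is only illustrative (the training loop and the remaining components are omitted).
\begin{verbatim}
import torch
import torch.nn as nn

n_O, n_A, d_Z = 7, 4, 40   # example sizes from the CheeseMaze row of the tables

class HistoryCompressor(nn.Module):
    """sigma-hat: Linear -> Tanh -> LSTM, consuming (observation, action, reward) per step."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(n_O + n_A + 1, d_Z), nn.Tanh())
        self.lstm = nn.LSTM(d_Z, d_Z, batch_first=True)

    def forward(self, obs_act_rew, hidden=None):
        # obs_act_rew: (batch, time, n_O + n_A + 1); returns the AIS trajectory z_t
        z, hidden = self.lstm(self.embed(obs_act_rew), hidden)
        return z, hidden

policy = nn.Sequential(                 # pi-hat: Linear -> Tanh -> Linear -> Softmax
    nn.Linear(d_Z, d_Z), nn.Tanh(),
    nn.Linear(d_Z, n_A), nn.Softmax(dim=-1),
)

z, _ = HistoryCompressor()(torch.zeros(1, 5, n_O + n_A + 1))
print(policy(z[:, -1]).shape)           # action distribution over n_A actions
\end{verbatim}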
\subsection{Details for low dimensional environments:}
\begin{itemize}
\item \textsc{Environment Details:}
\par\nopagebreak[4]
\begin{tabular}{@{}cccc@{}}
\toprule
Environment & Discount & No.~of actions & No.~of obs. \\
& $\gamma$ & $n_A$ & $n_O$ \\
\midrule
\textsc{Voicemail} & 0.95 & 3 & 2 \\
\textsc{Tiger} & 0.95 & 3 & 2 \\
\textsc{CheeseMaze}& 0.7 & 4 & 7 \\
\bottomrule
\end{tabular}
The discount factor for \textsc{CheeseMaze} is chosen to match with
standard value used in that environment \citep{McCallum_1993}.
\item \textsc{AIS and Network details:}
\par\nopagebreak[4]
\begin{tabular}{@{}llcl}
$\bullet$ & Dimensions of AIS ($d_{\hat Z}$) & : & $40$ \\
$\bullet$ & Weight in \acs{AIS} loss ($\lambda$) (KL)& : & $0.0001$ \\
& Weight in \acs{AIS} loss ($\lambda$) (MMD)& : & $0.001$ \\
\end{tabular}
\begin{tabular}{@{}cccc@{}}
\toprule
$\ainfo$ & $\rewinfo$ & $\nextobs$ & $\hat{\pi}$ \\
\midrule
$\Linear(n_O + n_A + 1, d_{\hat Z})$ & $\Linear(n_A + d_{\hat Z}, \tfrac12 d_{\hat Z})$ &
$\Linear(n_A + d_{\hat Z}, \tfrac12 d_{\hat Z})$ & $\Linear(d_{\hat Z},d_{\hat Z})$
\\ \Arrow & \Arrow & \Arrow & \Arrow \\
$\Tanh(d_{\hat Z},d_{\hat Z})$ & $\Tanh(\tfrac12d_{\hat Z}, \tfrac12d_{\hat Z})$ &
$\Tanh(\tfrac12d_{\hat Z}, \tfrac12d_{\hat Z})$ & $\Tanh(d_{\hat Z},d_{\hat Z})$
\\ \Arrow & \Arrow & \Arrow & \Arrow \\
$\LSTM(d_{\hat Z},d_{\hat Z})$ & $\Linear(\tfrac12d_{\hat Z}, 1)$ & $\Linear(\tfrac12d_{\hat Z}, n_O)$ &
$\Linear(d_{\hat Z}, n_A)$
\\
& & \Arrow & \Arrow \\
& & $\Softmax$ & $\Softmax$ \\
\bottomrule
\end{tabular}
\item \textsc{Training details:}
As explained in Section~\ref{sec:PORL}, we update the parameters after a
rollout of $T$, which we call a \emph{training batch}. The choice of
parameters for the training batch are as follows:
\par\nopagebreak[4]
\begin{tabular}{@{}llcl}
$\bullet$ & Samples per training batch & : & $200$ \\
$\bullet$ & Number of training batches & : & $10^5$ \\
\end{tabular}
In addition, we use the following learning rates:
\par\nopagebreak[4]
\begin{tabular}{@{}llcl}
$\bullet$ & AIS learning rate & : & ADAM(0.003) \\
$\bullet$ & Policy learning rate (KL)& : & ADAM(0.0006) \\
& Policy learning rate (MMD)& : & ADAM(0.0008) \\
\end{tabular}
\par\nopagebreak[4]
In the above description, we use ADAM($\alpha$) to denote the choice of
$\alpha$ parameter of ADAM. All other parameters have their default value.
\item \textsc{Evaluation details:}
\par\nopagebreak[4]
\begin{tabular}{@{}llcl}
$\bullet$ & No.~of batches after which evaluation is done& : & $500$ \\
$\bullet$ & Number of rollouts per evaluation & : & $50$ \\
\end{tabular}
\end{itemize}
\subsection{Details for moderate dimensional environments:}
\begin{itemize}
\item \textsc{Environment Details:}
\par\nopagebreak[4]
\begin{tabular}{@{}cccc@{}}
\toprule
Environment & Discount & No.~of actions & No.~of obs. \\
& $\gamma$ & $n_A$ & $n_O$ \\
\midrule
\textsc{Drone Surveillance} & 0.99 & 5 & 10 \\
\textsc{Rock Sampling} & 0.99 & 8 & 3 \\
\bottomrule
\end{tabular}
\item \textsc{AIS and Network details:}
\par\nopagebreak[4]
\begin{tabular}{@{}llcl}
$\bullet$ & Dimensions of AIS ($d_{\hat Z}$) & : & $128$ \\
$\bullet$ & Weight in \acs{AIS} loss ($\lambda$) (KL)& : & $0.0001$ \\
& Weight in \acs{AIS} loss ($\lambda$) (MMD)& : & $0.001$ \\
\end{tabular}
\begin{tabular}{@{}cccc@{}}
\toprule
$\ainfo$ & $\rewinfo$ & $\nextobs$ & $\hat{\pi}$ \\
\midrule
$\LSTM(n_O + n_A + 1, d_{\hat Z})$ & $\Linear(n_A + d_{\hat Z}, \tfrac12 d_{\hat Z})$ &
$\Linear(n_A + d_{\hat Z}, \tfrac12 d_{\hat Z})$ & $\Linear(d_{\hat Z},n_A)$
\\ & \Arrow & \Arrow & \Arrow \\
& $\Relu(\tfrac12d_{\hat Z}, \tfrac12d_{\hat Z})$ &
$\Relu(\tfrac12d_{\hat Z}, \tfrac12d_{\hat Z})$ & $\Softmax$
\\ & \Arrow & \Arrow & \\
& $\Linear(\tfrac12d_{\hat Z}, 1)$ & $\Linear(\tfrac12d_{\hat Z}, n_O)$ & \\
& & \Arrow & \\
& & $\Softmax$ & \\
\bottomrule
\end{tabular}
\item \textsc{Training details:}
As explained in Section~\ref{sec:PORL}, we update the parameters after a
rollout of $T$, which we call a \emph{training batch}. The choice of
parameters for the training batch are as follows:
\par\nopagebreak[4]
\begin{tabular}{@{}llcl}
$\bullet$ & Samples per training batch & : & $200$ \\
$\bullet$ & Number of training batches & : & $10^5$ \\
\end{tabular}
In addition, we use the following learning rates:
\par\nopagebreak[4]
\begin{tabular}{@{}llcl}
$\bullet$ & AIS learning rate & : & ADAM(0.003) \\
$\bullet$ & Policy learning rate & : & ADAM(0.0007) \\
\end{tabular}
\par\nopagebreak[4]
In the above description, we use ADAM($\alpha$) to denote the choice of
$\alpha$ parameter of ADAM. All other parameters have their default value.
\item \textsc{Evaluation details:}
\par\nopagebreak[4]
\begin{tabular}{@{}llcl}
$\bullet$ & No.~of batches after which evaluation is done& : & $500$ \\
$\bullet$ & Number of rollouts per evaluation & : & $100$ \\
\end{tabular}
\end{itemize}
\subsection{Details for high dimensional environments:}
\begin{itemize}
\item \textsc{Environment Details:}
\par\nopagebreak[4]
Note that here $n_O$ represents the number of possible observations that a
general minigrid environment can have. With the actual rules of the
environment plugged in, this number is smaller since some combinations of the encoded observation are not possible. The actual input that we get from the environment is a vector of size $147$ ($d_O$) which is basically an observation grid of $7 \times 7$ with $3$ channels containing characteristic information about the observation.
\begin{tabular}{@{}ccccc@{}}
\toprule
Environment & Discount & No.~of actions & No.~of obs. & Obs. dimen.\\
& $\gamma$ & $n_A$ & $n_O$ & $d_O$ \\
\midrule
\textsc{Minigrid Envs} & 0.99 & 7 & $(6 \times 11 \times 3)^{7 \times 7}$ & $7 \times 7 \times 3$ \\
\bottomrule
\end{tabular}
\item \textsc{Autoencoder ($q$) details:}
\par\nopagebreak[4]
\begin{tabular}{@{}llcl}
$\bullet$ & Latent space dimensions ($d_L$) & : & $64$ \\
$\bullet$ & Type of autoencoder used & : & Basic autoencoder \\
$\bullet$ & Reconstruction Loss Criterion Used & : & Mean Square Error \\
\end{tabular}
\begin{tabular}{@{}c@{}}
\toprule
$q$ \\
\midrule
$\Linear(d_O, \tfrac32 d_L)$ \\
\Arrow \\
$\Relu(\tfrac32 d_L, \tfrac32 d_L)$ \\
\Arrow \\
$\Linear(\tfrac32 d_L, d_L)$ \\
\bottomrule
\end{tabular}
\item \textsc{AIS and Network details:}
\par\nopagebreak[4]
\begin{tabular}{@{}llcl}
$\bullet$ & Dimensions of AIS ($d_{\hat Z}$) & : & $128$ \\
$\bullet$ & Weight in \acs{AIS} loss ($\lambda$) & : & $0.1$ \\
$\bullet$ & Number of GMM components used ($n_{\textup{comp}}$) (only for KL) & : & $5$ \\
\end{tabular}
\begin{tabular}{@{}cccc@{}}
\toprule
$\ainfo$ & $\rewinfo$ & $\nextobs$ & $\hat{\pi}$ \\
\midrule
$\LSTM(d_L + n_A + 1, d_{\hat Z})$ & $\Linear(n_A + d_{\hat Z}, \tfrac12 d_{\hat Z})$ &
$\Linear(n_A + d_{\hat Z}, \tfrac12 d_{\hat Z})$ & $\Linear(d_{\hat Z},d_{\hat Z})$\\
& \Arrow & \Arrow & \Arrow \\
& $\Relu(\tfrac12d_{\hat Z}, \tfrac12d_{\hat Z})$ &
$\Relu(\tfrac12d_{\hat Z}, \tfrac12d_{\hat Z})$ & $\Relu(d_{\hat Z}, d_{\hat Z})$
\\ & \Arrow & \Arrow & \Arrow\\
& $\Linear(\tfrac12d_{\hat Z}, 1)$ & $\Linear(\tfrac12d_{\hat Z}, d_L)$ & $\Linear(d_{\hat Z}, n_A)$\\
& & & \Arrow \\
& & & $\Softmax$ \\
\bottomrule
\end{tabular}
For KL, $\nextobs$ is replaced by the following while other networks remain the same:
\par\nopagebreak[4]
\begin{tabular}{@{}ccc@{}}
\toprule
& $\nextobs$ & \\
\midrule
& $\Linear(n_A + d_{\hat Z}, \tfrac12 d_{\hat Z})$ & \\
& \Arrow & \\
&$\Relu(\tfrac12d_{\hat Z}, \tfrac12d_{\hat Z})$ &\\
\LLArrow & \Arrow & \LRArrow \\
$\Linear(\tfrac12d_{\hat Z}, d_L n_{\textup{comp}})$ &
$\ELU(\Linear(\tfrac12d_{\hat Z}, d_L n_{\textup{comp}}))$ + $1$
+ $10^{-6}$ &
$\Softmax(\Linear(\tfrac12d_{\hat Z}, n_{\textup{comp}}))$\\
\LRArrow & \Arrow & \LLArrow \\
& $\GMM(n_{\textup{comp}})$ & \\
\bottomrule
\end{tabular}
\par\nopagebreak[4]
Note that the third layer generates the mean vector of each component, the
diagonal vector for variance of each component and the mixture weights of
each component of the GMM model in the last layer.
\item \textsc{Training details:}
As explained in Section~\ref{sec:PORL}, we update the parameters after a
rollout of $T$, which we call a \emph{training batch}. The choice of
parameters for the training batch are as follows:
\par\nopagebreak[4]
\begin{tabular}{@{}llcll}
$\bullet$ & Samples per training batch & : & $200$ \\
$\bullet$ & Number of training batches & : & $2 \times 10^5$ & (MGKCS3R3,
MGOM1Dl, MGOM1Dlh) \\
& & & $10^5$ & (others) \\
\end{tabular}
In addition, we use the following learning rates:
\par\nopagebreak[4]
\begin{tabular}{@{}llcl}
$\bullet$ & AIS learning rate & : & ADAM(0.001) \\
$\bullet$ & Policy learning rate & : & ADAM(0.0007) \\
\end{tabular}
\par\nopagebreak[4]
In the above description, we use ADAM($\alpha$) to denote the choice of
$\alpha$ parameter of ADAM. All other parameters have their default value.
\item \textsc{Evaluation details:}
\par\nopagebreak[4]
\begin{tabular}{@{}llcll}
$\bullet$ & No.~of batches after which evaluation & : & $5000$
& (MGKCS3R3, MGOM1Dl, MGOM1Dlh) \\
& is done & & $1000$ & (others) \\
$\bullet$ & Number of rollouts per evaluation & : & $20$ \\
\end{tabular}
\end{itemize}
\subsection{Details for PPO with LSTM and Critic:}
\begin{itemize}
\item \textsc{Environment Details:}
\par\nopagebreak[4]
The environment details are the same as mentioned previously.
\item \textsc{Network details:}
\par\nopagebreak[4]
\begin{itemize}
\item
Low and moderate dimensionality environments:
\par\nopagebreak[4]
\begin{tabular}{@{}ccc@{}}
\toprule
Feature Extractor & Actor Head & Critic Head\\
\midrule
$\LSTM(n_O, n_O)$ & $\Linear(n_O, 64)$ & $\Linear(n_O, 64)$\\
& \Arrow & \Arrow \\
& $\Tanh(64, 64)$ & $\Tanh(64, 64)$\\
& \Arrow & \Arrow \\
& $\Linear(64, n_A)$ & $\Linear(64, 1)$\\
& \Arrow & \\
& Softmax & \\
\bottomrule
\end{tabular}
\item High dimensionality environments:
\par\nopagebreak[4]
\begin{tabular}{@{}llcll}
$\bullet$ & Observation tensor & : & $7 \times 7 \times 3$ \\
$\bullet$ & Embedding size ($d_E$) & : & $64$ \\
\end{tabular}
\begin{tabular}{@{}ccc@{}}
\toprule
Conv. Feature Extractor & Actor Head & Critic Head\\
\midrule
$\Conv(3, \tfrac14 d_E, (2,2))$ & $\Linear(d_E, d_E)$ & $\Linear(d_E, d_E)$\\
\Arrow & \Arrow & \Arrow \\
$\Relu$ & $\Tanh(d_E, d_E)$ & $\Tanh(d_E, d_E)$\\
\Arrow & \Arrow & \Arrow \\
$\MP$ & $\Linear(d_E, n_A)$ & $\Linear(d_E, 1)$\\
\Arrow & \Arrow & \\
$\Conv(\tfrac14 d_E, \tfrac12 d_E, (2,2))$ & Softmax & \\
\Arrow & & \\
$\Relu$ & & \\
\Arrow & & \\
$\Conv(\tfrac12 d_E, d_E, (2,2))$ & & \\
\Arrow & & \\
$\Relu$ & & \\
\Arrow & & \\
$\LSTM(d_E, d_E)$ & & \\
\bottomrule
\end{tabular}
\end{itemize}
\item \textsc{Training details:}
\par\nopagebreak[4]
\begin{tabular}{@{}llcll}
$\bullet$ & Number of parallel actors & : & $64$ \\
$\bullet$ & Number of training batches & : & $4\times10^{7}$ & (MGKCS3R3, MGOM1Dl, MGOM1Dlh) \\
& & & $2\times10^{7}$ & (others) \\
$\bullet$ & Epochs per training batch & : & $4$ \\
$\bullet$ & Samples per training batch & : & $1280$ \\
$\bullet$ & Frames per parallel actor & : & $40$ \\
$\bullet$ & GAE ($\lambda_{GAE}$) & : & $0.99$ \\
$\bullet$ & Trajectory recurrence length & : & $20$ \\
\end{tabular}
In addition, we use ADAM with the following details:
\par\nopagebreak[4]
\begin{tabular}{@{}llcl}
$\bullet$ & Learning rate $\alpha$ & : & 0.0001 \\
$\bullet$ & ADAM parameter $\epsilon$ & : & 0.00001 \\
\end{tabular}
\par\nopagebreak[4]
\item \textsc{Evaluation details:}
\par\nopagebreak[4]
\begin{tabular}{@{}llcll}
$\bullet$ & No.~of batches after which evaluation & & \\
& is done & : & $200$ \\
$\bullet$ & Rollouts used for evaluation & : & All recent episodes completed by all actors\\
\end{tabular}
\end{itemize}
\subsection{Details about hyperparameter tuning}
Hyperparameter tuning was carried out by searching a grid of values, but exhaustive grid search was not carried out due to the prohibitive computational cost. Instead, coarse values were used initially as starting points and finer tuning was done around promising values, which was essentially an iterative process of performing experiments, observing results and trying similar parameters to the ones generating good results. Hyperparameters observed in each previous environment class (low, moderate, high dimensionality) were used as a starting point for the search in the new environment class.
Performance was quite sensitive to different learning rates used for the AIS and policy in most environments. Performance generally improved or remained the same when a larger AIS State Size was used (values considered were 128, 256, 512 for moderate/high-dimensional environments and 5, 10, 20, 40 for low-dimensional environments), although in some cases, it was more unstable during training. $\lambda$ values considered were between 0 and 1 and generally only made a difference (in terms of performance results) when the rewards were very large. The choice of activation function between ReLU and Tanh did not seem to make a significant difference for the considered environments.
\section{An \acs{AIS}-based approximate dynamic programming for Dec-POMDPs}
\label{sec:dec}
The theory of approximation for partially observed systems presented in the previous section is fairly general and is applicable to other models of decision making as well. As an example, in this section we show how to use the same ideas to obtain approximation results for decentralized (i.e., multi-agent) partially observed models.
There is a rich history of research on these models in multiple research
disciplines. Decentralized multi-agent systems have been studied in Economics
and Organizational Behavior since the mid
1950s~\citep{Marschak1954,Radner1962,MarschakRadner_1972} under the heading
of team theory. Such models have been studied in systems and control since the
mid 1960s under the heading of decentralized stochastic
control~\citep{witsenhausen1968counterexample,Witsenhausen_1971,sandell1978survey}.
Such models have also been studied in Artificial Intelligence since the
2000s~\citep{bernstein2005bounded,
SzerCZ05,seuken2007memory,carlin2008observation} under the heading of
Dec-POMDPs. In the interest of space, we do not provide a detailed overview of
this rich area; instead we refer the reader to the comprehensive survey
articles of~\cite{MMRY:tutorial-CDC,liu2016learning} for a detailed overview
from the perspective of Systems and Control and Artificial Intelligence,
respectively.
We briefly state the facts about this literature which are pertinent to the
discussion below. The general Dec-POMDP problem is NEXP
complete~\citep{bernstein2002complexity}, so it is not possible to derive an
efficient algorithm to compute the optimal solution. Nonetheless, considerable
progress has been made in identifying special cases where a dynamic
programming decomposition is
possible~\citep{WalrandVaraiya:1983, AicardiDavoliMinciardi:1987,
OoiVerboutLudwigWornell:1997, MT:NCS, MT:real-time, MNT:tractable-allerton, Nayyar_2011,NayyarMahajanTeneketzis_2013, M:control-sharing,
ArabneydiMahajan_2014, OliehoekAmato_2015, Dibangoye2016,
boularias2008exact,KumarZ09}. A high level approach which encapsulates many of
these special cases is the common information approach
of~\cite{NayyarMahajanTeneketzis_2013} which shows that the Dec-POMDP problem
with a specific but relatively general information
structure can be converted into a single agent, partially observed problem from the
point of view of a virtual agent which knows the information commonly known to
all agents and chooses prescriptions (or partially evaluated policies) which
map the local information at each agent to their respective actions. We
summarize these results in the next subsection and then show how we can
identify an AIS for such models.
\subsection{Model of a Dec-POMDP}
A Dec-POMDP is a tuple $\langle \ALPHABET K, \StSp, (\ALPHABET{\Ob}^k)_{k \in \ALPHABET
K}, (\ALPHABET{\Act}^k)_{k \in \ALPHABET K}, P_1, P, P^y, r \rangle$ where
\begin{itemize}
\item $\ALPHABET K = \{1, \dots, K\}$ is the set of agents.
\item $\StSp$ is the state space. $\ALPHABET{\Ob}^k$, $\ALPHABET{\Act}^k$, $k
\in \ALPHABET K$, are the observation and action spaces of agent~$k$. Let
$\ALPHABET{\Ob} = \prod_{k \in \ALPHABET K} \ALPHABET{\Ob}^k$ and $\ALPHABET{\Act} =
\prod_{k \in \ALPHABET K} \ALPHABET{\Act}^k$. We use $S_t \in \StSp$,
$Y_t \coloneqq (Y^k_t)_{k \in \ALPHABET K} \in \ALPHABET{\Ob}$, and $A_t
\coloneqq (A^k_t)_{k \in \ALPHABET K} \in \ALPHABET{\Act}$, to denote the
system state, observations, and actions at time~$t$.
\item $P_1 \in \Delta(\StSp)$ is the distribution of the initial state $S_1$.
\item $P \colon \StSp \times \ALPHABET{\Act} \to \Delta(\StSp)$
denotes the transition probability of the system, i.e.,
\begin{align*}
\PR(S_{t+1} = s_{t+1} \mid S_{1:t} = s_{1:t}, A_{1:t} = a_{1:t}) &= \PR(S_{t+1} = s_{t+1} \mid S_t = s_t, A_t = a_t)\\
&= P(s_{t+1} | s_t, a_t).
\end{align*}
\item $P^y \colon \StSp \times \ALPHABET{\Act} \to \Delta(\ALPHABET{\Ob})$
denotes the observation probability of the system, i.e.,
\begin{align*}
\PR(Y_{t} = y_{t} \mid S_{1:t} = s_{1:t}, A_{1:t-1} = a_{1:t-1}) &= \PR(Y_{t} = y_{t} \mid S_t = s_t, A_{t-1} = a_{t-1}) \\
&= P^y(y_{t} | s_t, a_{t-1}).
\end{align*}
\item $r \colon \StSp \times \ALPHABET{\Act} \times \StSp \to
\reals$ denotes the per-step reward function. The team receives a reward
$R_t = r(S_t, A_t, S_{t+1})$ at time~$t$.
\end{itemize}
\paragraph{Information structure:} A critical feature of a Dec-POMDP
is the \emph{information structure} which captures the knowledge of who knows
what about the system and when. We use $I^k_t$ to denote the information
known to agent~$k$ at time~$t$. In general, $I^k_t$ is a subset of the total
information $(Y_{1:t}, A_{1:t-1}, R_{1:t-1})$ known to all agents in the system.
We use $\ALPHABET{\Istruct}^k_t$ to denote the space of
the information available to agent $k$ at time~$t$. Note that, in general, the
information available to agent~$k$ increases with time. So, $\ALPHABET{\Istruct}^k_t$
are sets that are increasing with time. Some examples of information structures
are:
\begin{itemize}
\item \textbf{Delayed sharing:}
\(
I^k_t = \{Y_{1:t-d}, A_{1:t-d}, \allowbreak Y^{k}_{t-d+1:t}, A^k_{t-d+1:t-1} \}.
\) This models systems where agents broadcast their
information and communication has delay of~$d$. Planning for models where
$d=1$ has been considered in~\cite{SandellAthans:1974, Yoshikawa:1975} and
for general~$d$ has been considered in~\cite{NMT:delay-sharing}.
\item \textbf{Periodic sharing:}
\(
I^k_t = \{Y_{1:t-\tau}, A_{1:t-\tau}, Y^{k}_{t-\tau+1:t}, A^k_{t-\tau+1:t-1} \},
\)
where
\(
\tau = p \bigl\lfloor \tfrac tp \bigr\rfloor
\). This models systems where agents periodically broadcast their
information every~$p$ steps. Planning for this model has been
considered in~\cite{OoiVerboutLudwigWornell:1997}.
\item \textbf{Control sharing:}
\(
I^k_t = \{Y^k_{1:t}, A_{1:t-1} \}.
\)
This models systems where control actions are observed by everyone
(which is the case for certain communication and economic applications).
Planning for variations of this model has been considered
in~\cite{Bismut:1972, SandellAthans:1974, M:control-sharing}.
\item \textbf{Mean-field sharing:}
\(
I^k_t = \{S^k_{1:t}, A^k_{1:t-1}, M_{1:t} \},
\)
where the state $S_t$ is $(S^1_t, \dots, S^K_t)$, the observation of agent~$k$ is
$S^k_t$, and $M_t = \bigl( \sum_{k \in \ALPHABET K} \delta_{S^k_t} \bigr)/K$
denotes the empirical distribution of the states. This models
systems where mean-field is observed by all agents (which is the case for
smart grid and other large-scale systems).
Planning for variations of this model has been considered
in~\cite{ArabneydiMahajan_2014}.
\end{itemize}
\paragraph{Policy:} The policy of agent~$k$ is a collection
$\pi^k = (\pi^k_1, \pi^k_2, \dots)$, where $\pi^k_t \colon \ALPHABET{\Istruct}^k_t \to
\Delta(\ALPHABET A^k)$. We use $\pi =
(\pi^k)_{k \in \ALPHABET K}$ to denote the policy for all agents. The
performance of a policy $\pi$ is given by
\begin{equation} \label{eq:performance}
J(\pi) = \EXP^{\pi}\bigg[ \sum_{t=1}^T R_t \bigg].
\end{equation}
The objective is to find a
(possibly time-varying) policy $\pi$ that maximizes the performance $J(\pi)$
defined in~\eqref{eq:performance}.
\subsection{Common information based planning for Dec-POMDPs}
As mentioned earlier, in general, finding the optimal plan for multi-agent teams is
NEXP-complete~\citep{bernstein2002complexity}. However, it is shown
in~\cite{NayyarMahajanTeneketzis_2013} that when the
information structure is of a particular form (known as partial history
sharing), it is possible to reduce the multi-agent planning problem to a
single agent planning problem from the point of view of a virtual
agent called the coordinator. We summarize this approach below.
\paragraph{Common and local information:} Define
\[
C_t = \bigcap_{s \ge t} \bigcap_{k \in \ALPHABET K} I^k_s
\qquad\text{and}\qquad
L^k_t = I^k_t \setminus C_t, \quad k \in \ALPHABET K.
\]
$C_t$ denotes the \emph{common information}, i.e., the information that
is common to all agents all the time in the future and $L^k_t$ denotes
the \emph{local information} at agent~$k$. By construction, $I^k_t = \{C_t,
L^k_t \}$. Let $\ALPHABET C_t$ and $\ALPHABET L^k_t$ denote the space of
realizations of $C_t$ and $L^k_t$ and let $L_t = (L^k_t)_{k \in \ALPHABET K}$
and $\ALPHABET L_t = \prod_{k \in \ALPHABET K} \ALPHABET L^k_t$. By
construction, $C_t \subseteq C_{t+1}$. Let $C^{\new}_{t+1} = C_{t+1} \setminus
C_t$ denote the new common information at time~$t$. Then, $C_t$ may be written
as $C^{\new}_{1:t}$.
\begin{definition}
The information structure is called \emph{partial history sharing} if
for any Borel subset $\ALPHABET B$ of $\ALPHABET L^k_{t+1}$ and any
realization $c_t$ of $C_t$, $\ell^k_t$ of $L^k_t$, $a^k_t$ of
$A^k_t$ and $y^k_{t+1}$ of $Y^k_{t+1}$, we have
\begin{multline*}
\PR(L^k_{t+1} \in \ALPHABET B \mid C_t = c_t, L^k_t = \ell^k_t,
A^k_t = a^k_t, Y^k_{t+1} = y^k_{t+1})
\\=
\PR(L^k_{t+1} \in \ALPHABET B \mid L^k_t = \ell^k_t,
A^k_t = a^k_t, Y^k_{t+1} = y^k_{t+1}).
\end{multline*}
\end{definition}
\blue{The main intuition behind this definition is as follows. For any system,
the information available to the agents can always be split into common and
local information such that $I^k_t = \{ C_t, L^k_t\}$. A partial history
sharing information structure satisfies the property that at any time~$t$
and for any agent~$k$, the updated value $L^k_{t+1}$ of the local
information is a function of only the current local information $L^k_t$, the
current local action $A^k_t$ and the next local observation $Y^k_{t+1}$.
Consequently, the common information $C_t$ is not needed to keep
track of the update of the local information. This ensures that compressing the
common information into an information state or an approximate information
state does not impact the update of the local information.}
\paragraph{Prescriptions:}
Given a policy $\pi = (\pi^k)_{k \in \ALPHABET K}$ and a realized
trajectory $(c_1, c_2, \dots)$ of the common information, the prescription
$\hat \xi^k_t$ is the partial application of $c_t$ to $\pi^k_t$, i.e.,
\(
\hat \xi^k_t = \pi^k_t(c_t, \cdot)
\),
\(
k \in \ALPHABET K.
\)
Note that $\hat \xi^k_t$ is a function from $\ALPHABET L^k_t$ to $\Delta(\ALPHABET
A^k_t)$. Let $\hat \xi_t$ denote $(\hat \xi^k_t)_{k \in \ALPHABET K}$ and let $\mathcal{X}_t$
denote the space of all such prescriptions for time $t$.
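In code, a prescription is simply the partial application of an agent's policy to the realized common information. The following schematic Python sketch illustrates this; the policy and the information objects here are hypothetical stand-ins rather than part of the model.
\begin{verbatim}
from functools import partial

def policy_k(common_info, local_info):
    """Hypothetical policy of agent k: a distribution over agent k's actions."""
    return {"a0": 0.5, "a1": 0.5}          # placeholder distribution

def make_prescriptions(policies, c_t):
    """Coordinator view: partially apply each policy to the realized common information c_t."""
    return [partial(pi, c_t) for pi in policies]   # each maps L^k_t -> Delta(A^k)

prescriptions = make_prescriptions([policy_k, policy_k], c_t="common history")
print(prescriptions[0]("local info of agent 1"))   # agent 1 samples A^1_t from this
\end{verbatim}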
\blue{The reason for constructing prescriptions is as follows. Prescriptions
encode the information about the policies of all agents needed to evaluate
the conditional expected per-step reward given the common information, i.e.,
$\EXP[ R_t | C_t, (\pi^k)_{k \in \ALPHABET K}]$ can be written as a function
of $C_t$ and $(\hat \xi^k_t)_{k \in \ALPHABET K}$, say $\hat r_t(C_t,
(\hat \xi^k_t)_{k \in \ALPHABET K})$. This allows us to construct a virtual single-agent
optimization problem where a decision maker (which we call the virtual
coordinator) observes the common information $C_t$ and chooses the
prescriptions $(\hat \xi^k_t)_{k \in \ALPHABET K}$ to maximize the sum of rewards
$\hat r_t(C_t, (\hat \xi^k_t)_{k \in \ALPHABET K})$. The details of this virtual
coordinated system are presented next.}
\paragraph{A virtual coordinated system:}
The key idea of \cite{NayyarMahajanTeneketzis_2013} is to construct a
virtual single agent planning problem which they call a coordinated system.
The environment of the virtual coordinated system consists of two components:
the first component is the same as the environment of the original multi-agent
system which evolves according to dynamics~$P$; the second component consists
of $K$ \emph{passive agents}, whose operation we will describe later. There is
a virtual coordinator who observes the common information $C_t$ and chooses
\emph{prescriptions} $\hat \Xi_t = (\hat \Xi^k_t)_{k \in \ALPHABET K}$, where
$\hat \Xi^k_t \colon \ALPHABET L^k \to \Delta(\ALPHABET A^k)$ using a
\emph{coordination rule}~$\psi_t$, i.e., $\hat \Xi_t \sim \psi_t(C_t)$. In general, the
coordination rule can be stochastic. Let $\hat \xi_t$ denote the realization of
$\hat \Xi_t$. Each agent in the virtual coordinated system is a passive agent and
agent~$k$ uses the prescription $\hat \Xi^k_t$ to sample an action $A^k_t \sim
\hat \Xi^k_t(L^k_t)$.
A key insight of \cite{NayyarMahajanTeneketzis_2013} is that the virtual
coordinated system is equivalent to the original multi-agent system in the
following sense.
\begin{theorem}[{\cite{NayyarMahajanTeneketzis_2013}}] \label{thm:equiv}
Consider a \textup{Dec-POMDP} with a partial history sharing information structure.
Then, for any policy $\pi = (\pi^k)_{k \in \ALPHABET K}$, where $\pi^k =
(\pi^k_1, \dots, \pi^k_T)$ for the \textup{Dec-POMDP}, define a coordination policy
$\psi = (\psi_1, \dots, \psi_T)$ for the virtual coordinated system given by
\( \psi_t(c_t) = \bigl( \pi^k_t(c_t, \cdot) \bigr)_{k \in \ALPHABET K} \).
Then, the performance of the virtual coordinated system with policy $\psi$
is the same as the performance of the \textup{Dec-POMDP} with policy $\pi$.
Conversely, for any coordination policy $\psi = (\psi_1, \dots, \psi_T)$ for
the virtual coordinated system, define a policy $\pi = (\pi^k)_{k \in
\ALPHABET K}$ with $\pi^k = (\pi^k_1, \dots, \pi^k_T)$ for the \textup{Dec-POMDP}
given by
\(
\pi^k_t(c_t, \ell^k_t) = \psi^k_t(c_t)(\ell^k_t).
\)
Then, the performance of the \textup{Dec-POMDP} with policy $\pi$ is the same as
that of the virtual coordinated system with policy $\psi$.
\end{theorem}
\paragraph{Dynamic program:}
Theorem~\ref{thm:equiv} implies that the problem of finding optimal
decentralized policies in a Dec-POMDP is equivalent to a centralized
(single-agent) problem of finding the optimal coordination policy for the
virtual coordinated system. The virtual coordinated system is a POMDP with
unobserved state $(S_t, L^1_t, \dots, L^K_t)$, observation $C^\new_t$,
and actions $\hat \Xi_t$. The corresponding history of observations and actions is
$(C^\new_{1:t}, \hat \Xi_{1:t-1})$, and therefore we can write a history-dependent
dynamic program similar to the one presented in
Proposition~\ref{prop:optimal}. \cite{NayyarMahajanTeneketzis_2013} presented
a simplified dynamic program which used the belief state as an information
state; however, it is clear from the above discussion that any other choice of
information state will also lead to a dynamic programming decomposition.
\subsection{Common-information based AIS and approximate dynamic programming}
Since the coordinated system is a POMDP, we can simply adapt the definition of
an \acs{AIS} to Dec-POMDPs and obtain an approximate dynamic program with approximation
guarantees. Let $\mathcal{X}_t$ denote the space of realizations of $\hat \Xi_t$.
Then, we have the following.
\begin{definition} \label{def:decais}
Let $\{\hat{\ALPHABET{Z}}_t\}_{t=1}^T$ be a pre-specified collection of Banach spaces,
$\mathfrak{F}$ be a function class for \textup{IPMs}, and $\epsilonDelta$ be
pre-specified positive real numbers. A collection $\{ \ainfo_t \colon (C_t,
\hat \Xi_{1:t-1}) \mapsto \hat{\ALPHABET{Z}}_t \}_{t=1}^T$ of history compression functions,
along with approximate update kernels $\{\nextinfo_t \colon \hat{\ALPHABET{Z}}_t
\times \mathcal{X}_t \to
\Delta(\hat{\ALPHABET{Z}}_{t+1})\}_{t=1}^T$ and reward approximation functions
$\{\rewinfo_t \colon \hat{\ALPHABET{Z}}_t \times \mathcal{X}_t \to \reals\}_{t=1}^T$, is
called an \emph{$\epsilonDelta$-\ac{AIS} generator} if the process
$\{\hat Z_t\}_{t=1}^T$, where $\hat Z_t = \ainfo_t(C_t, \hat \Xi_{1:t-1})$, satisfies the
following properties:
\begin{description}
\item[(DP1)] \textbf{\textup{Sufficient for approximate performance
evaluation}}, i.e., for any time~$t$, any realization $c_t$ of
$C_t$ and any choice $\hat \xi_{1:t}$ of $\hat \Xi_{1:t}$, we have
\[
\bigl\lvert \EXP[ \Rew_t \mid C_t = c_t, \hat \Xi_{1:t} = \hat \xi_{1:t} ] -
\rewinfo_t(\ainfo_t(c_t, \hat \xi_{1:t-1}), \hat \xi_t) \bigr\rvert
\le \varepsilon_t.
\]
\item[(DP2)] \textbf{\textup{Sufficient to predict itself approximately}},
i.e., for any time~$t$, any realization $c_t$ of $C_t$, any choice
$\hat \xi_{1:t}$ of $\hat \Xi_{1:t}$, and for any Borel subset $\ALPHABET B$ of
$\hat{\ALPHABET{Z}}_{t+1}$, define
\(
\mu_t(\ALPHABET B) \coloneqq \PR(\hat Z_{t+1} \in B \mid C_t = c_t,
\hat \Xi_{1:t} = \hat \xi_{1:t})
\)
and
\(
\nu_t(\ALPHABET B) \coloneqq \nextinfo_t(B \mid \ainfo_t(c_t,
\hat \xi_{1:t-1}), \hat \xi_t);
\)
then,
\[
d_\mathfrak{F}( \mu_t, \nu_t) \le \delta_t.
\]
\end{description}
\end{definition}
Similar to Proposition~\ref{prop:alt-info-state}, we can provide an
alternative characterization of an \acs{AIS} where we replace (DP2) with
approximations of (P2a) and (P2b) and we can prove a proposition similar to
Proposition~\ref{prop:alt-ais} for the virtual coordinated system.
We can now establish a result similar to Theorem~\ref{thm:ais} that any
\acs{AIS} gives rise to an approximate dynamic program. In this discussion,
$h_t$ denotes $(c_t, \hat \xi_{1:t-1})$ and $\ALPHABET{\His}_t$ denotes the space of
realizations of $h_t$.
\begin{theorem}\label{thm:dec_ais}
Suppose $\{\ainfo_t, \nextinfo_t, \rewinfo_t\}_{t=1}^T$ is an
$\epsilonDelta$-\acs{AIS} generator.
Recursively define approximate action-value functions $\{\hat Q_t \colon
\hat{\ALPHABET{Z}}_t \times \mathcal{X}_t \to \reals \}_{t=1}^T$ and value functions $\{\hat
V_t \colon \hat{\ALPHABET{Z}}_t \to \reals\}_{t=1}^T$ as follows:
$\hat V_{T+1}(\hat z_{T+1}) \coloneqq 0$ and for $t \in \{T, \dots,
1\}$:
\begin{subequations}\label{eq:decDP-ais}
\begin{align}
\hat Q_t(\hat z_t, \hat \xi_t) &\coloneqq \rewinfo_t(\hat z_t, \hat \xi_t)
+ \int_{\hat{\ALPHABET{Z}}_{t+1}} \hat V_{t+1}(\hat z_{t+1})
\nextinfo_t(d \hat z_{t+1} \mid \hat z_t, \hat \xi_t),
\\
\hat V_t(\hat z_t) &\coloneqq \max_{\hat \xi_t \in \mathcal{X}_t} \hat Q_t(\hat z_t, \hat \xi_t).
\end{align}
\end{subequations}
Then, we have the following:
\begin{enumerate}
\item \textbf{\textup{Value function approximation:}} For any time~$t$,
realization~$h_t$ of $H_t$, and choice $\hat \xi_t$ of $\hat \Xi_t$, we have
\begin{equation}\label{eq:value-approx}
\lvert Q_t(h_t, \hat \xi_t) - \hat Q_t(\ainfo_t(h_t), \hat \xi_t)\rvert
\le \alpha_t
\quad\text{and}\quad
\lvert V_t(h_t) - \hat V_t(\ainfo_t(h_t)) \rvert
\le \alpha_t,
\end{equation}
where
\[
\alpha_t = \varepsilon_t + \sum_{\tau=t+1}^{T}\bigl[
\Minkowski_\mathfrak{F}(\hat V_{\tau}) \delta_{\tau-1} + \varepsilon_\tau \bigr].
\]
\item \textbf{\textup{Approximately optimal policy:}} Let $\hat \psi = (\hat \psi_1, \dots,
\hat \psi_T)$, where $\hat \psi_t \colon \hat{\ALPHABET{Z}}_t \to \Delta(\mathcal{X}_t)$,
be a coordination rule that satisfies
\begin{equation}\label{eq:decais-opt}
\Supp(\hat \psi(\hat z_t)) \subseteq
\arg \max_{\hat \xi_t \in \mathcal{X}_t} \hat Q_t(\hat z_t, \hat \xi_t).
\end{equation}
Define coordination rule $\psi = (\psi_1, \dots, \psi_T)$, where $\psi_t
\coloneqq \hat \psi_t \circ \ainfo_t$. Then, for any time~$t$,
realization~$h_t$ of $H_t$, and choice $\hat \xi_t$ of $\hat \Xi_t$, we
have
\begin{equation}\label{eq:coordinationrule-approx}
\lvert Q_t(h_t, \hat \xi_t) - Q^\psi_t(h_t, \hat \xi_t)\rvert
\le 2\alpha_t
\quad\text{and}\quad
\lvert V_t(h_t) - V^\psi_t(h_t) \rvert
\le 2\alpha_t.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
The proof is similar to the proof of Theorem~\ref{thm:ais}.
\end{proof}
We can extend the approximation results for the virtual coordinated system
to {\color{black}the approximate policy evaluation case (as in Sec.~\ref{sec:ais-poleval})}, infinite horizon case (as in Sec.~\ref{sec:infinite}), the stochastic \ac{AIS}
case (as in Sec.~\ref{sec:stochais}), the action compression case (as in Sec.~\ref{sec:actais}),
and the observation compression case (as in Sec.~\ref{sec:obsais}) in a straightforward manner.
\section{Comparison with the results of \texorpdfstring{\cite{Abel2016}}{Abel
et al. (2016)} for state aggregation in MDPs} \label{app:Abel}
\cite{Abel2016} introduce four models of state aggregation and derive
approximation bounds for all four. In this section, we show that one of these
models, which they call \emph{approximate model similarity} may be viewed as
an \acs{AIS}. We also show that the approximation bounds of
Theorem~\ref{thm:inf-ais} for this model are stronger than those derived in
\cite{Abel2016} by a factor of $\mathcal O(1/(1-\gamma))$.
Since our notation differs slightly from that of \cite{Abel2016}, and for the
sake of completeness, we start by describing the notion of approximate model
similarity defined in \cite{Abel2016}.
Consider an infinite horizon finite-state finite-action MDP with state space
$\StSp$, action space $\ALPHABET{\Act}$, transition probability matrix $P \colon \StSp
\times \ALPHABET{\Act} \to \Delta(\StSp)$, per-step reward function $r \colon \StSp
\times \ALPHABET{\Act} \to \reals$, and discount factor $\gamma$.
Let $\hat \StSp$ be an aggregated state space. It is assumed that the
following two functions are available: a compression function $q \colon \StSp
\to \hat \StSp$ and a weight function $w \colon \StSp \to [0, 1]$ such that
for all $\hat \st \in \hat \StSp$, $\sum_{\st \in q^{-1}(\hat \st)} w(\st) =
1$. Given these functions, define an aggregated MDP with state space $\hat
\StSp$, action space $\ALPHABET{\Act}$, transition probability function $\hat P \colon
\hat \StSp \times \ALPHABET{\Act} \to \Delta(\hat \StSp)$ given by
\[
\hat P(\hat \st' | \hat \st, a) =
\sum_{\st \in q^{-1}(\hat \st)} \sum_{\st' \in q^{-1}(\hat \st')}
P(\st' | \st, a) w(\st),
\quad
\forall \hat \st, \hat \st' \in \hat \StSp, a \in \ALPHABET{\Act},
\]
and a per-step reward $\hat r \colon \hat \StSp \times \ALPHABET{\Act} \to \reals$
given by
\[
\hat r(\hat \st, a) = \sum_{\st \in q^{-1}(\hat \st)} r(\st, a) w(\st),
\quad \forall \hat \st \in \hat \StSp, a \in \ALPHABET{\Act}.
\]
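For concreteness, the following Python sketch (a minimal illustration with hypothetical variable names, assuming finite state and action spaces) constructs the aggregated kernel $\hat P$ and reward $\hat r$ from $(P, r, q, w)$ exactly as in the two displays above.
\begin{verbatim}
import numpy as np

def aggregate_mdp(P, r, q, w, n_agg):
    # P: (S, A, S) transition probabilities; r: (S, A) rewards;
    # q: (S,) aggregated-state index of each state; w: (S,) weights with
    # sum of w over each q^{-1}(s_hat) equal to 1; n_agg: number of
    # aggregated states.
    S, A = r.shape
    P_hat = np.zeros((n_agg, A, n_agg))
    r_hat = np.zeros((n_agg, A))
    for s in range(S):
        r_hat[q[s]] += w[s] * r[s]                      # aggregated reward
        for s_next in range(S):
            P_hat[q[s], :, q[s_next]] += w[s] * P[s, :, s_next]
    return P_hat, r_hat
\end{verbatim}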
\begin{definition}[$\varepsilon$-approximate model similarity \citep{Abel2016}]
The aggregated MDP is said to be $\varepsilon$-approximate model similar to
the original MDP if it satisfies the following two properties:
\begin{enumerate}
\item For all $\hat \st \in \hat \StSp$, $s_1, s_2 \in q^{-1}(\hat \st)$, and
$a \in \ALPHABET{\Act}$, we have
\[
\bigl| r(s_1, a) - r(s_2, a) \bigr| \le \varepsilon.
\]
\item For all $\hat \st, \hat \st' \in \hat \StSp$, $s_1, s_2 \in q^{-1}(\hat
\st)$, and $a \in \ALPHABET{\Act}$, we have
\[
\biggl|
\sum_{\st' \in q^{-1}(\hat \st')} P(\st' | s_1, a)
-
\sum_{\st' \in q^{-1}(\hat \st')} P(\st' | s_2, a)
\biggr| \le \varepsilon.
\]
\end{enumerate}
\end{definition}
\begin{proposition}[Lemma 2 of \cite{Abel2016}]\label{prop:Abel}
Let $\hat \pi \colon \hat \StSp \to \ALPHABET{\Act}$ be the (deterministic) optimal
policy for the aggregated MDP. Define $\pi \colon \StSp \to \ALPHABET{\Act}$ by
$\pi = \hat \pi \circ q$. Let $V \colon \StSp \to \reals$ denote the
optimal value function and let $V^\pi \colon \StSp \to \reals$ denote the
value function for policy~$\pi$. Then, for all $\st \in \StSp$
\[
\bigl| V(\st) - V^{\pi}(\st) \bigr| \le
\frac{2 \varepsilon}{(1 - \gamma)^2} +
\frac{ 2 \gamma \varepsilon |\StSp| \|r\|_\infty}
{(1-\gamma)^3}.
\]
\end{proposition}
Note that the result is presented slightly differently in \cite{Abel2016}. They
assume that $\|r\|_\infty = 1$ and simplify the above expression.
We now show that approximate model similarity is also an \acs{AIS} and that
directly using the result of Theorem~\ref{thm:inf-ais} for this model gives a
stronger bound than Proposition~\ref{prop:Abel}.
\begin{proposition}\label{prop:model-similarity}
Let $(q,w)$ be such that the aggregated model is $\varepsilon$-approximate
model similar to the true model. Then, $(q, \hat P, \hat r)$ is an
$(\varepsilon, \varepsilon |\hat \StSp|)$-\acs{AIS} with respect to the
total variation distance.
\end{proposition}
\begin{proof}
We first establish (AP1). For any $\st \in \StSp$ and $a \in \ALPHABET{\Act}$,
\begin{align*}
\bigl| r(\st, a) - \hat r(q(\st), a) \bigr| &
\stackrel{(a)}\le
\biggl|
\sum_{\tilde \st \in q^{-1}(q(\st))} w(\tilde \st)r( \st, a)
- \sum_{\tilde \st \in q^{-1}(q(\st))} w(\tilde \st)r(\tilde \st, a)
\biggr| \\
&\stackrel{(b)}\le
\sum_{\tilde \st \in q^{-1}(q(\st))} w(\tilde \st)
\bigl| r( \st, a) - r(\tilde \st, a) \bigr|
\\
&\stackrel{(c)}\le \varepsilon
\end{align*}
where $(a)$ follows from the basic property of the weight function $w$
and the definition of the aggregated reward $\hat r$; $(b)$ follows from the
triangle inequality; and $(c)$ follows from the definition of approximate
model similarity and the basic property of the weight function $w$. This
establishes property (AP1).
Now, we establish (AP2).
Let $d_\mathfrak{F}$ denote the total variation distance. Define probability measures
$\mu, \nu \in \Delta(\hat \StSp)$ as in the definition of (AP2), i.e.,
for any $\st \in \StSp$, $\hat \st' \in \hat \StSp$, and $a \in \ALPHABET{\Act}$,
\begin{align*}
\mu(\hat \st') &\coloneqq \sum_{\st' \in q^{-1}(\hat \st')} P(\st' | \st, a)
\\
\nu(\hat \st') &\coloneqq \hat P(\hat \st' | q(\st), a)
=
\sum_{\tilde \st \in q^{-1}(q(\st))} \sum_{\st' \in q^{-1}(\hat \st')}
P(\st'|\tilde \st,a) w(\tilde \st)
\end{align*}
Now consider (see footnote~\ref{fnt:TV} on page~\pageref{fnt:TV})
\begin{align*}
d_\mathfrak{F}(\mu, \nu) &= \sum_{\hat \st' \in \hat \StSp}
| \mu(\hat \st') - \nu(\hat \st') |
\\
&= \sum_{\hat \st' \in \hat \StSp} \biggl|
\sum_{\st' \in q^{-1}(\hat \st')} P(\st' | \st, a)
-
\sum_{\tilde \st \in q^{-1}(q(\st))} \sum_{\st' \in q^{-1}(\hat \st')}
P(\st'|\tilde \st,a) w(\tilde \st)
\biggr|
\\
&\stackrel{(a)}\le
\sum_{\hat \st' \in \hat \StSp}
\sum_{\tilde \st \in q^{-1}(q(\st))}
w(\tilde \st)
\biggl|
\sum_{\st' \in q^{-1}(\hat \st')} P(\st' | \st,a)
-
\sum_{\st' \in q^{-1}(\hat \st')} P(\st' | \tilde \st,a)
\biggr|
\\
&\stackrel{(b)}\le
\sum_{\hat \st' \in \hat \StSp}
\sum_{\tilde \st \in q^{-1}(q(\st))}
w(\tilde \st) \varepsilon
\stackrel{(c)}=
\sum_{\hat \st' \in \hat \StSp} \varepsilon
= |\hat \StSp| \varepsilon,
\end{align*}
where $(a)$ follows from the triangle inequality, $(b)$ follows from the definition
of approximate model similarity, and $(c)$ follows from the basic property of
the weight function. This proves (AP2).
\end{proof}
\begin{lemma}\label{lem:MS-V-bound}
For any MDP,
\[
\Span(V) \le \frac{\Span(r)}{1 - \gamma}.
\]
Therefore, when $d_\mathfrak{F}$ is the total variation distance, $\Minkowski_\mathfrak{F}(V) \le
\tfrac12 \Span(r)/(1-\gamma)$.
\end{lemma}
\begin{proof}
This result follows immediately by observing that the per-step reward
$r(\St_t, A_t) \in [ \min(r), \max(r) ]$. Therefore, $\max(V) \le
\max(r)/(1-\gamma)$ and $\min(V) \ge \min(r)/(1-\gamma)$.
\end{proof}
\begin{proposition}\label{prop:MS-ais}
Let $\hat \pi$, $\pi$, $V$, and $V^\pi$ be defined as in
Proposition~\ref{prop:model-similarity}. Then, for all $\st \in \StSp$,
\[
\bigl| V(\st) - V^\pi(\st) \bigr| \le
\frac{2 \varepsilon}{(1 - \gamma)} +
\frac{ \gamma \varepsilon |\hat \StSp| \Span(r)}
{(1-\gamma)^2}.
\]
\end{proposition}
\begin{proof}
This follows immediately from Theorem~\ref{thm:inf-ais},
Proposition~\ref{prop:model-similarity} and Lemma~\ref{lem:MS-V-bound}.
\end{proof}
Note that the error bounds of Propositions~\ref{prop:Abel}
and~\ref{prop:MS-ais} have a similar structure, but the key difference is that
the bound of Proposition~\ref{prop:MS-ais} is tighter by a factor of
$1/(1-\gamma)$ compared to Proposition~\ref{prop:Abel}.
There are other minor improvements as well ($|\hat \StSp|$ instead of
$|\StSp|$ and $\tfrac12\Span(r)$ instead of $\|r\|_\infty$).
\section{Approximate planning in partially observed systems}\label{sec:ais}
Our key insight is that information states provide a principled
approach to approximate planning and learning in partially observed systems.
To illustrate this, reconsider the machine maintenance example presented
earlier in Sec.~\ref{ex:info-state}.
Theorem~\ref{thm:info-state} implies that we can write a dynamic program for
that model using the information state $(\St_\tau, t - \tau)$, which takes
values in a countable set. This countable state dynamic program is
considerably simpler than the standard belief state dynamic program typically
used for that model. Moreover, it is possible to approximate the countable
state model by a finite-state model by truncating the state space, which
provides an approximate planning solution to the problem. Furthermore, the
information state $(\St_\tau, t-\tau)$ does not depend on the transition
probability of the state of the machine or the cost of inspection or repair.
Thus, if these model parameters are unknown, we can use a standard
reinforcement learning algorithm to find an optimal policy that maps
$(\St_\tau, t-\tau)$ to the current action.
Given these benefits of a good information state, it is natural to consider a
data-driven approach to identify an information state. An information state
identified from data will not be exact, and it is important to understand the
loss in performance incurred by using an approximate information state.
Theorem~\ref{thm:info-state} shows that a compression of the history which
satisfies properties (P1) and (P2) is sufficient to identify a dynamic
programming decomposition. Would a compression of history that approximately
satisfied properties (P1) and (P2) lead to an approximate dynamic program? In
this section, we show that the answer to this question is yes. First, we need
to precisely define what we mean by ``approximately satisfy properties (P1)
and (P2)''. For this, we need to fix a metric on the space of probability
measures. There are various such metrics, and it turns out that the
appropriate one for our purposes is the
integral probability metric (IPM) \citep{muller1997integral}.
\subsection{Integral probability metrics (IPM)}
\begin{definition}\label{def:ipm}
Let $(\ALPHABET X, \mathcal G)$ be a measurable space and $\mathfrak{F}$
denote a class of uniformly bounded measurable functions on $(\ALPHABET X,
\mathcal G)$. The integral probability metric (IPM) between two probability
distributions $\mu, \nu \in \Delta(\ALPHABET X)$ with respect to the
function class $\mathfrak{F}$ is defined as
\[
d_{\mathfrak{F}}(\mu, \nu) \coloneqq \sup_{f \in \mathfrak{F}}\biggl |
\int_{\ALPHABET{X}} fd\mu - \int_{\ALPHABET{X}} fd\nu \biggr|.
\]
\end{definition}
In the literature, IPMs are also known as probability metrics with a
$\zeta$-structure; see e.g., \cite{Zolotarev1983,Rachev1991}. They are useful
to establish weak convergence of probability measures. Methods for estimating
IPMs from samples are discussed in \cite{Sriperumbudur2012}.
\subsubsection*{Examples of integral probability metrics (IPMs)}
When $(\ALPHABET X, \mathcal G)$ is a metric space, various commonly used
distance metrics on $(\ALPHABET X, \mathcal G)$ arise as specific instances of the IPM
for a particular choice of the function class $\mathfrak{F}$. We provide some
examples below:
\begin{enumerate}
\item \textsc{Total variation distance:} If $\mathfrak{F}$ is chosen as $\{f :
\lVert f \rVert_\infty \le 1\}$, then $d_{\mathfrak{F}}$ is the total
variation distance.\footnote{\label{fnt:TV}%
In particular, if $\mu$ and $\nu$ are
absolutely continuous with respect to some measure $\lambda$ and let $p
= d\mu/d\lambda$ and $q = d\nu/d\lambda$, then
\[
\left| \int_{\ALPHABET X} f d\mu - \int_{\ALPHABET X} f d\nu \right|
=
\left| \int_{\ALPHABET X} f(x) p(x) \lambda(dx)
- \int_{\ALPHABET X} f(x) q(x) \lambda(dx)
\right|
\le
\| f \|_{\infty} \int_{\ALPHABET X}
\bigl| p(x) - q(x) \bigr| \lambda(dx).
\]
In this paper, we are defining total variation distance as
$\int_{\ALPHABET X}| p(x) - q(x)| \lambda(dx)$. Typically, it is defined
as half of that quantity. Note that it is possible to get a tighter bound
than above where $\| f\|_\infty$ is replaced by $\tfrac12\Span(f)=
\tfrac12(\max(f) - \min(f))$. }
\item \textsc{Kolmogorov distance:} If $\ALPHABET X = \reals^m$ and $\mathfrak
F$ is chosen as $\{ \IND_{(-\infty, t]} \colon t \in \reals^m \}$, then
$d_{\mathfrak{F}}$ is the Kolmogorov distance.
\item \textsc{Kantorovich metric or Wasserstein distance:} Let $\lVert f
\rVert_\mathrm{Lip}$ denote the Lipschitz semi-norm of a function. If $\mathfrak{F}$
is chosen as $\{ f : \lVert f \rVert_\mathrm{Lip} \le 1 \}$, then
$d_{\mathfrak{F}}$ is the Kantorovich metric. When $\ALPHABET X$ is
separable, the Kantorovich metric is the dual representation of the
Wasserstein distance via the Kantorovich-Rubinstein
duality~\citep{Villani_2008}.
\item \textsc{Bounded-Lipschitz metric:} If $\mathfrak{F}$ is chosen as $\{f : \lVert f
\rVert_\infty + \lVert f \rVert_\mathrm{Lip} \le 1\}$, then $d_{\mathfrak{F}}$ is the
bounded-Lipschitz (or Dudley) metric.
\item \textsc{Maximum mean discrepancy (MMD):} Let $\mathcal H$ be a reproducing
kernel Hilbert space (RKHS) of real valued functions on $\ALPHABET X$
and let $\mathfrak{F} = \{ f \in \mathcal H : \lVert f \rVert_{\mathcal H} \le 1
\}$, then $d_{\mathfrak{F}}$ is the maximum mean discrepancy\footnote{One of the
features of MMD is that the optimizing $f$ can be identified in closed
form. In particular, if $k$ is the kernel of the RKHS, then (see
\cite{Gretton2006, Sriperumbudur2012} for details)
\begin{align*}
d_\mathfrak{F}(\mu, \nu) &= \biggl\| \int_{\ALPHABET X} k(\cdot, x) d\mu(x)
- \int_{\ALPHABET X} k(\cdot, x) d\nu(x) \biggr\|_{\mathcal H}
\\
&= \biggl[
\int_{\ALPHABET X} \int_{\ALPHABET X} k(x,y) \mu(dx) \mu(dy)
+
\int_{\ALPHABET X} \int_{\ALPHABET X} k(x,y) \nu(dx) \nu(dy)
- 2
\int_{\ALPHABET X} \int_{\ALPHABET X} k(x,y) \mu(dx) \nu(dy)
\biggr]^{1/2}.
\end{align*}
We use MMD as an IPM in the PORL algorithms proposed in
Sec.~\ref{sec:RL}, where we exploit this property.}
\citep{Sriperumbudur2008}. The energy distance studied in statistics
\citep{Szekely2004} is a special case of maximum mean discrepancy;
see~\cite{Sejdinovic2013} for a discussion.
\end{enumerate}
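To make these definitions concrete, the following Python sketch (our own minimal illustration; the variable names are hypothetical) evaluates two of the IPMs listed above for a pair of discrete distributions: the total variation distance in the convention of this paper (the full $\ell_1$ distance) and the closed-form MMD from the footnote, here with a Gaussian kernel.
\begin{verbatim}
import numpy as np

def total_variation(p, q):
    # L1 distance between pmfs; this matches the paper's convention,
    # which is twice the usual "half-L1" definition.
    return np.abs(p - q).sum()

def mmd(p, q, X, kernel):
    # Closed-form MMD between pmfs p and q supported on the points X,
    # for a kernel k of an RKHS (see the MMD footnote above).
    K = kernel(X[:, None], X[None, :])
    return np.sqrt(p @ K @ p + q @ K @ q - 2 * p @ K @ q)

rbf = lambda x, y: np.exp(-0.5 * (x - y) ** 2)
X = np.array([0.0, 1.0, 2.0])
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(total_variation(p, q))   # approx. 0.2
print(mmd(p, q, X, rbf))
\end{verbatim}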
We say that $\mathfrak{F}$ is a closed set if it is closed under the topology of
pointwise convergence. We say that $\mathfrak{F}$ is a convex set if $f_1, f_2 \in \mathfrak{F}$
implies that for any $\lambda \in (0,1)$, $\lambda f_1 + (1 - \lambda)f_2 \in
\mathfrak{F}$. Note that all the above function classes are convex and all except
Kolmogorov distance are closed.
We now list some useful properties of IPMs, which follow immediately from the
definition.
\begin{enumerate}
\item Given a function class $\mathfrak{F}$ and a function $f$ (not
necessarily in $\mathfrak{F}$),
\begin{equation} \label{eq:minkowski}
\biggl| \int_{\ALPHABET X} f d\mu - \int_{\ALPHABET X} f d\nu \biggr|
\le \Minkowski_{\mathfrak{F}}(f) \cdot d_{\mathfrak{F}}(\mu, \nu),
\end{equation}
where $\Minkowski_{\mathfrak{F}}(f)$ is the Minkowski functional with respect to $\mathfrak{F}$
given by
\begin{equation}
\Minkowski_{\mathfrak{F}}(f) \coloneqq \inf\{
\Minkowski \in \reals_{> 0} : \Minkowski^{-1} f \in \mathfrak{F} \}.
\end{equation}
For the total variation distance,
\( \bigl| \int_{\ALPHABET X} f d\mu - \int_{\ALPHABET X} f d\nu \bigr|
\le \tfrac12 \Span(f) d_\mathfrak{F}(\mu, \nu)\). Thus, for total variation, $\Minkowski_\mathfrak{F}(f)
= \tfrac12 \Span(f)$. For the Kantorovich metric,
\( \bigl| \int_{\ALPHABET X} f d\mu - \int_{\ALPHABET X} f d\nu \bigr|
\le \|f\|_\mathrm{Lip} d_\mathfrak{F}(\mu,\nu)\). Thus, for Kantorovich
metric, $\Minkowski_{\mathfrak{F}}(f) = \lVert f \rVert_\mathrm{Lip}$. For the maximum mean
discrepancy,
\( \bigl| \int_{\ALPHABET X} f d\mu - \int_{\ALPHABET X} f d\nu \bigr|
\le \| f\|_{\mathcal H} d_\mathfrak{F}(\mu, \nu) \). Thus, for maximum mean
discrepancy, $\Minkowski_\mathfrak{F}(f) = \lVert f \rVert_{\mathcal H}$.
\item Let $\ALPHABET X$ and $\ALPHABET Y$ be Banach spaces and let
$\mathfrak{F}_{\ALPHABET X}$ and $\mathfrak{F}_{\ALPHABET Y}$ denote the function class for
$d_{\mathfrak{F}}$ with domain $\ALPHABET X$ and $\ALPHABET Y$, respectively. Then,
for any $\ell \colon \ALPHABET X \to \ALPHABET Y$, any real-valued
function $f \in \mathfrak{F}_{\ALPHABET Y}$, and any measures $\mu, \nu \in
\Delta(\ALPHABET X)$, we have
\begin{equation*}
\biggl| \int_{\ALPHABET X} f(\ell(x)) \mu (dx)
- \int_{\ALPHABET X} f(\ell(x)) \nu (dx) \biggr|
\le
\Minkowski_{\mathfrak{F}_{\ALPHABET X}}(f \circ \ell) d_{\mathfrak{F}_{\ALPHABET X}}(\mu, \nu).
\end{equation*}
We define the contraction factor of the function $\ell$ as
\begin{equation}\label{eq:F-contraction}
\Contraction_{\mathfrak{F}_{\ALPHABET X}, \mathfrak{F}_{\ALPHABET Y}}(\ell) =
\sup_{f \in \mathfrak{F}_{\ALPHABET Y}}
\Minkowski_{\mathfrak{F}_{\ALPHABET X}}
( f \circ \ell).
\end{equation}
Therefore, we can say that for any $f \in \mathfrak{F}_{\ALPHABET Y}$,
\begin{equation}\label{eq:contraction}
\biggl| \int_{\ALPHABET X} f(\ell(x)) \mu (dx)
- \int_{\ALPHABET X} f(\ell(x)) \nu (dx) \biggr|
\le
\Contraction_{\mathfrak{F}_{\ALPHABET X}, \mathfrak{F}_{\ALPHABET Y}}(\ell) d_{\mathfrak{F}_{\ALPHABET X}}(\mu, \nu).
\end{equation}
For the total variation distance, $\tfrac12 \Span( f \circ \ell) \le \| f \circ
\ell \|_{\infty} \le \| f \|_{\infty} \le 1$. Thus,
$\Contraction_\mathfrak{F}(\ell) \le 1$. For the Kantorovich metric, $\| f \circ
\ell \|_\mathrm{Lip} \le \| f \|_\mathrm{Lip} \| \ell \|_\mathrm{Lip}$. Thus, $\Contraction_\mathfrak{F}(\ell) \le \|
\ell \|_\mathrm{Lip}$.
\end{enumerate}
\subsection{Approximate information state (AIS) and approximate dynamic
programming}
Now we define a notion of \ac{AIS} as a compression of
the history of observations and actions which approximately satisfies
properties (P1) and (P2).
\begin{definition} \label{def:ais}
Let $\{\hat{\ALPHABET{Z}}_t\}_{t=1}^T$ be a pre-specified collection of Banach spaces,
$\mathfrak{F}$ be a function class for \textup{IPMs}, and
$\epsilonDelta$ be pre-specified positive real numbers. A collection $\{
\ainfo_t \colon \ALPHABET{\His}_t \to \hat{\ALPHABET{Z}}_t \}_{t=1}^T$ of history compression
functions, along with approximate update kernels $\{\nextinfo_t \colon
\hat{\ALPHABET{Z}}_t \times \ALPHABET{\Act} \to \Delta(\hat{\ALPHABET{Z}}_{t+1})\}_{t=1}^T$ and reward
approximation functions $\{\rewinfo_t \colon \hat{\ALPHABET{Z}}_t \times \ALPHABET{\Act} \to
\reals\}_{t=1}^T$, is called an \emph{$\epsilonDelta$-\ac{AIS}
generator} if the process $\{\hat Z_t\}_{t=1}^T$, where
$\hat Z_t = \ainfo_t(H_t)$, satisfies the following properties:
\begin{description}
\item[(AP1)] \textbf{\textup{Sufficient for approximate performance
evaluation}}, i.e., for any time~$t$, any realization $h_t$ of
$H_t$ and any choice $a_t$ of $A_t$, we have
\[
\bigl\lvert \EXP[ \Rew_t \mid H_t = h_t, A_t = a_t ] -
\rewinfo_t(\ainfo_t(h_t), a_t) \bigr\rvert
\le \varepsilon_t.
\]
\item[(AP2)] \textbf{\textup{Sufficient to predict itself approximately}},
i.e., for any time~$t$, any realization $h_t$ of $H_t$, any choice
$a_t$ of $A_t$, and for any Borel subset $\ALPHABET B$ of
$\hat{\ALPHABET{Z}}_{t+1}$, define
\(
\mu_t(\ALPHABET B) \coloneqq \PR(\hat Z_{t+1} \in B \mid H_t = h_t, A_t = a_t)
\)
and
\(
\nu_t(\ALPHABET B) \coloneqq \nextinfo_t(B \mid \ainfo_t(h_t), a_t);
\)
then,
\[
d_\mathfrak{F}( \mu_t, \nu_t) \le \delta_t.
\]
\end{description}
\end{definition}
We use the phrase ``$(\varepsilon, \delta)$-\acs{AIS}'' when $\varepsilon_t$ and
$\delta_t$ do not depend on time.
Similar to Proposition~\ref{prop:alt-info-state}, we can provide an
alternative characterization of an \acs{AIS} where we replace (AP2) with the
following approximations of (P2a) and (P2b).
\begin{description}
\item[(AP2a)] \textbf{Evolves in a state-like manner}, i.e., there exist
measurable update functions $\{\aupdate_t \colon \hat{\ALPHABET{Z}}_t \times \ALPHABET{\Ob}
\times \ALPHABET{\Act} \to \hat{\ALPHABET{Z}}_{t+1}\}_{t=1}^T$ such that for any realization $h_{t+1}$ of
$H_{t+1}$, we have
\[
\ainfo_{t+1}(h_{t+1}) = \aupdate_t(\ainfo_t(h_t), y_t, a_t).
\]
\item[(AP2b)] \textbf{Is sufficient for predicting future observations
approximately}, i.e., there exist measurable observation prediction kernels
$\{\nextobs_t \colon \hat{\ALPHABET{Z}}_t \times \ALPHABET{\Act} \to \Delta(\ALPHABET{\Ob})\}_{t=1}^T$
such that for any time~$t$, any realization $h_t$ of $H_t$,
any choice $a_t$ of $A_t$, and for any Borel subset $\ALPHABET B$ of
$\ALPHABET{\Ob}$ define,
\(
\mu^y_t(\ALPHABET B) \coloneqq \PR(Y_{t} \in \ALPHABET B \mid H_t = h_t, \allowbreak A_t = a_t)
\)
and
\(
\nu^y_t(\ALPHABET B) = \nextobs_t(\ALPHABET B | \ainfo_t(h_t), a_t)
\);
then,
\[
d_{\mathfrak{F}}( \mu^y_t, \nu^y_t) \le
\delta_t/\Contraction_{\mathfrak{F}}(\aupdate_t),
\]
where $\Contraction_{\mathfrak{F}}(\aupdate_t)$ is defined as
$\sup_{h_t \in \ALPHABET{\His}_t, a_t \in \ALPHABET{\Act}_t}
\Contraction_{\mathfrak{F}}(\aupdate_t(\ainfo_t(h_t), \cdot, a_t))$. Note
that for the total variation distance $\Contraction_{\mathfrak{F}}(\aupdate_t) = 1$; for
the Kantorovich distance, $\Contraction_{\mathfrak{F}}(\aupdate_t)$ is equal to the
uniform bound on the Lipschitz constant of $\aupdate_t$ with
respect to $y_t$.
\end{description}
\begin{proposition}\label{prop:alt-ais}
\textup{(AP2a)} and \textup{(AP2b)} imply that \textup{(AP2)} holds with
transition kernels $\{\nextinfo_t\}_{t=1}^T$ defined as follows: for any
Borel subset $\ALPHABET B$ of $\hat{\ALPHABET{Z}}_{t+1}$,
\[
\nextinfo_t(\ALPHABET B \mid \ainfo_t(h_t), a_t) =
\int_{\ALPHABET{\Ob}}
\IND_{\ALPHABET B}
(\aupdate_t(\ainfo_t(h_t), y_t, a_t))
\nextobs_t(dy_t | \ainfo_t(h_t), a_t ).
\]
Therefore, we can alternatively define an $\epsilonDelta$-\acs{AIS} generator
as a tuple $\{(\ainfo_t, \rewinfo_t, \aupdate_t, \nextobs_t)\}_{t=1}^T$
which satisfies \textup{(AP1)}, \textup{(AP2a)}, and \textup{(AP2b)}.
\end{proposition}
\begin{proof}
Note that by the law of total probability, $\mu_t$ and $\nu_t$ defined in
(AP2) are
\begin{align*}
\mu_t(\ALPHABET B) &= \int_{\ALPHABET{\Ob}}
\IND_{\ALPHABET B}
(\aupdate_t(\ainfo_t(h_t), y_t, a_t))
\mu^y_t(dy_t),
\\
\nu_t(\ALPHABET B) &= \int_{\ALPHABET{\Ob}}
\IND_{\ALPHABET B}
(\aupdate_t(\ainfo_t(h_t), y_t, a_t))
\nu^y_t(dy_t).
\end{align*}
Thus, for any function
$f \colon \hat{\ALPHABET{Z}}_{t+1} \to \reals$,
\begin{align*}
\int_{\hat{\ALPHABET{Z}}_{t+1}} f d\mu_t
&=
\int_{\ALPHABET{\Ob}_t} f( \aupdate_t(\ainfo_t(h_t), y_t, a_t))
\mu^y_t(dy_t),
\\
\int_{\hat{\ALPHABET{Z}}_{t+1}} f d\nu_t
&=
\int_{\ALPHABET{\Ob}_t} f( \aupdate_t(\ainfo_t(h_t), y_t, a_t))
\nu^y_t(dy_t).
\end{align*}
The result then follows from~\eqref{eq:contraction}.
\end{proof}
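In the finite case, the construction in Proposition~\ref{prop:alt-ais} amounts to pushing the observation prediction kernel through the deterministic update. A minimal Python sketch (our own illustration; names are hypothetical, and finite observation and AIS spaces are assumed) is:
\begin{verbatim}
import numpy as np

def induced_ais_kernel(update, obs_probs, z, a, ais_values):
    # update(z, y, a): deterministic AIS update from (AP2a)
    # obs_probs[y]   : approximate probability of observation y given (z, a),
    #                  i.e., the observation prediction kernel from (AP2b)
    # ais_values     : list enumerating the (finite) space of next AIS values
    p = np.zeros(len(ais_values))
    for y, p_y in enumerate(obs_probs):
        p[ais_values.index(update(z, y, a))] += p_y   # push-forward of (AP2b)
    return p   # the induced kernel of Proposition alt-ais, as a vector
\end{verbatim}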
Our main result is to establish that any \acs{AIS} gives rise to an approximate
dynamic program.
\begin{theorem}\label{thm:ais}
Suppose $\{\ainfo_t, \nextinfo_t, \rewinfo_t\}_{t=1}^T$ is an
$\epsilonDelta$-\acs{AIS} generator.
Recursively define approximate action-value functions $\{\hat Q_t \colon
\hat{\ALPHABET{Z}}_t \times \ALPHABET{\Act} \to \reals \}_{t=1}^T$ and value functions $\{\hat
V_t \colon \hat{\ALPHABET{Z}}_t \to \reals\}_{t=1}^T$ as follows:
$\hat V_{T+1}(\hat z_{T+1}) \coloneqq 0$ and for $t \in \{T, \dots,
1\}$:
\begin{subequations}\label{eq:DP-ais}
\begin{align}
\hat Q_t(\hat z_t, a_t) &\coloneqq \rewinfo_t(\hat z_t, a_t)
+ \int_{\hat{\ALPHABET{Z}}_{t+1}} \hat V_{t+1}(\hat z_{t+1})
\nextinfo_t(d \hat z_{t+1} \mid \hat z_t, a_t),
\\
\hat V_t(\hat z_t) &\coloneqq \max_{a_t \in \ALPHABET{\Act}} \hat Q_t(\hat z_t, a_t).
\end{align}
\end{subequations}
Then, we have the following:
\begin{enumerate}
\item \textbf{\textup{Value function approximation:}} For any time~$t$,
realization~$h_t$ of $H_t$, and choice $a_t$ of $A_t$, we have
\begin{equation}\label{eq:value-approx-finite}
\lvert Q_t(h_t, a_t) - \hat Q_t(\ainfo_t(h_t), a_t)\rvert
\le \alpha_t
\quad\text{and}\quad
\lvert V_t(h_t) - \hat V_t(\ainfo_t(h_t)) \rvert
\le \alpha_t,
\end{equation}
where $\alpha_t$ satisfies the following recursion: $\alpha_{T+1} =
0$ and for $t \in \{T, \dots, 1 \}$,
\[
\alpha_t = \varepsilon_t + \Minkowski_{\mathfrak{F}}(\hat V_{t+1})
\delta_{t} + \alpha_{t+1}.
\]
Therefore,
\[
\alpha_t = \varepsilon_t + \sum_{\tau=t+1}^{T}\bigl[
\Minkowski_\mathfrak{F}(\hat V_{\tau}) \delta_{\tau-1} + \varepsilon_\tau \bigr].
\]
\item \textbf{\textup{Approximately optimal policy:}} Let $\hat \pi = (\hat \pi_1, \dots,
\hat \pi_T)$, where $\hat \pi_t \colon \hat{\ALPHABET{Z}}_t \to \Delta(\ALPHABET{\Act})$,
be a stochastic policy that satisfies
\begin{equation}\label{eq:ais-opt}
\Supp(\hat \pi(\hat z_t)) \subseteq
\arg \max_{a_t \in \ALPHABET{\Act}} \hat Q_t(\hat z_t, a_t).
\end{equation}
Define policy $\pi = (\pi_1, \dots, \pi_T)$, where $\pi_t
\colon \ALPHABET{\His}_t \to \Delta(\ALPHABET{\Act})$ by $\pi_t \coloneqq \hat \pi_t \circ
\ainfo_t$. Then, for any time~$t$, realization~$h_t$ of $H_t$, and
choice $a_t$ of $A_t$, we
have
\begin{equation}\label{eq:policy-approx}
\lvert Q_t(h_t, a_t) - Q^\pi_t(h_t, a_t)\rvert
\le 2\alpha_t
\quad\text{and}\quad
\lvert V_t(h_t) - V^\pi_t(h_t) \rvert
\le 2\alpha_t.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
We prove both parts by backward induction. We start with value function
approximation. Eq.~\eqref{eq:value-approx-finite} holds at $T+1$ by definition.
This forms the basis of induction. Assume that~\eqref{eq:value-approx-finite}
holds at time~$t+1$ and consider the system at time~$t$. We have that
\begin{align*}
\bigl| Q_t(h_t, a_t) &-
\hat Q_t(\ainfo_t(h_t), a_t) \bigr|
\\
&\stackrel{(a)}\le
\bigl| \EXP[ R_t \mid H_t = h_t, A_t = a_t ]
- \rewinfo_t(\ainfo_t(h_t), a_t) \bigr|
\\
&\quad +
\EXP\bigl[ \bigl| V_{t+1}(H_{t+1}) - \hat V_{t+1}(\ainfo_{t+1}(H_{t+1}))
\bigr| \bigm| H_t = h_t, A_t = a_t \bigr]
\\
&\quad+
\biggl| \EXP[ \hat V_{t+1}(\ainfo_{t+1}(H_{t+1})) \mid H_t = h_t,
A_t = a_t ] -
\int_{\hat{\ALPHABET{Z}}_{t+1}} \hat V_{t+1}(\hat z_{t+1}) \nextinfo_t(d\hat z_{t+1} \mid
\ainfo_t(h_t), a_t) \biggr|
\\
&\stackrel{(b)}\le
\varepsilon_t + \alpha_{t+1} + \Minkowski_{\mathfrak{F}}(\hat V_{t+1})
\delta_{t} = \alpha_t
\end{align*}
where $(a)$ follows from triangle inequality and $(b)$ follows from (AP1),
the induction hypothesis, (AP2) and~\eqref{eq:minkowski}. This proves the
first part of~\eqref{eq:value-approx-finite}. The second part follows from
\[
\bigl| V_t(h_t) - \hat V_t(\ainfo_t(h_t)) \bigr|
\stackrel{(a)}\le
\max_{a_t \in \ALPHABET{\Act}}
\bigl| Q_t(h_t, a_t) - \hat Q_t(\ainfo_t(h_t), a_t) \bigr|
\le \alpha_t,
\]
where $(a)$ follows from the inequality $\max f(x) \le \max | f(x) - g(x) |
+ \max g(x)$.
To prove the policy approximation, we first prove an intermediate result.
For policy $\hat \pi$ recursively define $\{ \hat Q^{\hat \pi}_t \colon
\hat{\ALPHABET{Z}} \times \ALPHABET{\Act} \to \reals \}_{t=1}^T$ and $\{\hat V^{\hat \pi}_t
\colon \hat{\ALPHABET{Z}} \to \reals \}_{t=1}^{T+1}$ as follows: $\hat V^{\hat
\pi}_{T+1}(\hat z_{T+1}) \coloneqq 0$ and for $t \in \{T, \dots, 1\}$:
\begin{subequations}
\begin{align}
\hat Q^{\hat \pi}_t(\hat z_t, a_t) &\coloneqq
\rewinfo_t(\hat z_t, a_t) +
\int_{\hat{\ALPHABET{Z}}_{t+1}} \hat V^{\hat \pi}_{t+1}(\hat z_{t+1})
\nextinfo_t(d\hat z_{t+1} \mid \hat z_t, a_t),
\\
\hat V^{\hat \pi}_t(\hat z_t) &\coloneqq \sum_{a_t \in \ALPHABET{\Act}}
\hat \pi_t(a_t \mid \hat z_t)\,
\hat Q^{\hat \pi}_t(\hat z_t, a_t).
\end{align}
\end{subequations}
Note that~\eqref{eq:ais-opt} implies that
\begin{equation}\label{eq:ais-policy}
\hat Q^{\hat \pi}_t(\hat z_t, a_t) = \hat Q_t(\hat z_t, a_t)
\quad\text{and}\quad
\hat V^{\hat \pi}_t(\hat z_t) = \hat V_t(\hat z_t).
\end{equation}
Now, we prove that
\begin{equation}\label{eq:policy-approx-2}
\lvert Q^\pi_t(h_t, a_t) -
\hat Q^{\hat \pi}_t(\ainfo_t(h_t), a_t) \rvert
\le \alpha_t
\quad\text{and}\quad
\lvert
V^\pi_t(h_t)
-
\hat V^{\hat \pi}_t(\ainfo_t(h_t))
\rvert
\le \alpha_t.
\end{equation}
We prove the result by backward induction. By construction,
Eq.~\eqref{eq:policy-approx-2} holds at time~$T+1$. This forms the basis of
induction. Assume that~\eqref{eq:policy-approx-2} holds at time~$t+1$ and
consider the system at time~$t$. We have
\begin{align*}
\bigl| Q^\pi_t(h_t, a_t) &-
\hat Q^{\hat \pi}_t(\ainfo_t(h_t), a_t) \bigr|
\\
&\stackrel{(a)}\le
\bigl| \EXP[ R_t \mid H_t = h_t, A_t = a_t ]
- \rewinfo_t(\ainfo_t(h_t), a_t) \bigr|
\\
&\quad +
\EXP\bigl[ \bigl| V^\pi_{t+1}(H_{t+1}) - \hat V^{\hat
\pi}_{t+1}(\ainfo_{t+1}(H_{t+1}))
\bigr| \bigm| H_t = h_t, A_t = a_t \bigr]
\\
&\quad+
\biggl| \EXP[ \hat V^{\hat \pi}_{t+1}(\ainfo_{t+1}(H_{t+1})) \mid H_t = h_t,
A_t = a_t ] -
\int_{\hat{\ALPHABET{Z}}_{t+1}} \hat V^{\hat \pi}_{t+1}(\hat z_{t+1}) \nextinfo_t(d\hat z_{t+1} \mid
\ainfo_t(h_t), a_t) \biggr|
\\
&\stackrel{(b)}\le
\varepsilon_t + \alpha_{t+1} + \Minkowski_{\mathfrak{F}}(\hat V_{t+1})
\delta_{t} = \alpha_t
\end{align*}
where $(a)$ follows from triangle inequality and $(b)$ follows from (AP1),
the induction hypothesis, (AP2) and~\eqref{eq:minkowski}. This proves the
first part of~\eqref{eq:policy-approx-2}. The second part follows from the
triangle inequality:
\[
\bigl| V^\pi_t(h_t) - \hat V^{\hat \pi}_t(\ainfo_t(h_t)) \bigr|
\le \sum_{a_t \in \ALPHABET{\Act}} \hat \pi_t(a_t | \ainfo_t(h_t) )
\bigl| Q^\pi_t(h_t, a_t) - \hat Q^{\hat \pi}_t(\ainfo_t(h_t),
a_t) \bigr| \le \alpha_t.
\]
Now, to prove the policy approximation, we note that
\[
\bigl| Q_t(h_t, a_t) - Q^{\pi}_t(h_t, a_t) \bigr|
\le
\bigl| Q_t(h_t, a_t) - \hat Q^{\hat \pi}_t(\ainfo_t(h_t), a_t) \bigr|
+
\bigl| Q^\pi_t(h_t, a_t) - \hat Q^{\hat \pi}_t(\ainfo_t(h_t), a_t) \bigr|
\le \alpha_t + \alpha_t,
\]
where the first inequality follows from the triangle inequality, the first
part of the second inequality follows from~\eqref{eq:value-approx-finite}
and~\eqref{eq:ais-policy} and the second part follows
from~\eqref{eq:policy-approx-2}. This proves the first part
of~\eqref{eq:policy-approx}. The second part of~\eqref{eq:policy-approx}
follows from the same argument.
\end{proof}
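When the spaces $\hat{\ALPHABET{Z}}_t$ and $\ALPHABET{\Act}$ are finite, the recursion~\eqref{eq:DP-ais} and the greedy choice~\eqref{eq:ais-opt} can be implemented directly. The following Python sketch (our own minimal illustration; the array names are hypothetical) computes $\hat V_t$, $\hat Q_t$, and a greedy policy by backward induction.
\begin{verbatim}
import numpy as np

def ais_backward_induction(r_hat, P_hat):
    # r_hat[t]: (Z, A) array of approximate rewards for time t
    # P_hat[t]: (Z, A, Z) array of approximate update kernels for time t
    T = len(r_hat)
    Z, A = r_hat[0].shape
    V = np.zeros(Z)                      # value at T+1 is zero
    policy = [None] * T
    for t in reversed(range(T)):
        Q = r_hat[t] + P_hat[t] @ V      # action-value update, eq. (DP-ais)
        policy[t] = Q.argmax(axis=1)     # any maximizer satisfies eq. (ais-opt)
        V = Q.max(axis=1)                # value update
    return V, policy
\end{verbatim}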
An immediate implication of Theorems~\ref{thm:info-state} and~\ref{thm:ais} is
the following.
\begin{corollary}
Let $\{\info_t\}_{t=1}^T$ be an information state generator and
$\{(\ainfo_t, \nextinfo_t, \rewinfo_t)\}_{t=1}^T$ be an \acs{AIS} generator.
Then, for any time~$t$, realization $h_t$ of history $H_t$, and choice
$a_t$ of action $A_t$, we have
\[
\bigl| \bar Q_t(\info_t(h_t), a_t) - \hat Q_t(\ainfo_t(h_t), a_t) \bigr|
\le \alpha_t
\quad\text{and}\quad
\bigl| \bar V_t(\info_t(h_t)) - \hat V_t(\ainfo_t(h_t)) \bigr|
\le \alpha_t,
\]
\blue{where $\bar Q_t$ and $\bar V_t$ are defined as in
Theorem~\ref{thm:info-state}.}
\end{corollary}
\blue{
\begin{remark}
It is possible to derive a tighter bound in Theorem~\ref{thm:ais} and show
that
\[
\alpha_t = \varepsilon_t + \Delta^*_t(\hat V_{t+1}) + \alpha_{t+1}
\]
where
\[
\Delta^*_t(\hat V_{t+1}) =
\sup_{h_t, a_t} \biggl| \EXP[ \hat V_{t+1}(\ainfo_{t+1}(H_{t+1})) \mid H_t = h_t,
A_t = a_t ] -
\int_{\hat{\ALPHABET{Z}}_{t+1}} \hat V_{t+1}(\hat z_{t+1}) \nextinfo_t(d\hat z_{t+1} \mid
\ainfo_t(h_t), a_t) \biggr|.
\]
The bound presented in Theorem~\ref{thm:ais} can then be thought of as using the
upper bound $\Delta^*_t(\hat V_{t+1}) \le \rho_\mathfrak{F}(\hat V_{t+1})\delta_t$, which follows from~\eqref{eq:minkowski}.
\end{remark}
}
\blue{
\begin{remark}
In part~1 of Theorem~\ref{thm:ais}, it is possible to derive an alternative
bound
\[
|Q_t(h_t, a_t) - \hat Q_t(\ainfo_t(h_t), a_t)| \le \alpha'_t
\quad\text{and}\quad
|V_t(h_t) - \hat V_t(\ainfo_t(h_t))| \le \alpha'_t
\]
where $\alpha'_t$ satisfies the recursion: $\alpha'_{T+1} = 0$ and for $t
\in \{T, \dots, 1\}$,
\[
\alpha'_t = \varepsilon_t + \rho_\mathfrak{F}(V_{t+1})\delta_t + \alpha'_{t+1}.
\]
This is because while using the triangle inequality in step $(a)$ in the proof of Theorem~\ref{thm:ais}, we could have
alternatively added and subtracted the term $\EXP[ V^{\pi}_{t+1}(H_{t+1}) \mid H_t = h_t,
A_t = a_t ]$ instead of $\EXP[ \hat V^{\hat \pi}_{t+1}(\ainfo_{t+1}(H_{t+1})) \mid H_t = h_t,
A_t = a_t ]$. Using this bound, we can also derive an alternative
bound for part~2 of the Theorem and show that
\[
\lvert Q_t(h_t, a_t) - Q^\pi_t(h_t, a_t)\rvert
\le \alpha_t + \alpha'_t
\quad\text{and}\quad
\lvert V_t(h_t) - V^\pi_t(h_t) \rvert
\le \alpha_t + \alpha'_t.
\]
\end{remark}
}
\subsection{Examples of approximate information states}
\label{ex:ais}
We now present various examples of approximate information states and show that many
existing results in the literature may be viewed as special cases of
Theorem~\ref{thm:ais}. \blue{Some of these examples are for the infinite horizon
discounted reward version of Theorem~\ref{thm:ais} (with discount factor
$\gamma \in (0,1)$), which we prove later in Theorem~\ref{thm:inf-ais}.}
\begin{enumerate}
\item \textsc{Model approximation in MDPs:} Consider an
MDP with state space $\StSp$, action space $\ALPHABET{\Act}$, transition kernel
$P_t \colon \StSp \times \ALPHABET{\Act} \to \Delta(\StSp)$, and per-step reward
$r_t \colon \StSp \times \ALPHABET{\Act} \to \reals$. Consider an approximate model
defined on the same state and action spaces with transition kernel $\hat
P_t \colon \StSp \times \ALPHABET{\Act} \to \Delta(\StSp)$ and per-step reward
$\hat r_t \colon \StSp \times \ALPHABET{\Act} \to \reals$. Define
$\ainfo_t(\St_{1:t}, A_{1:t-1}) = \St_t$. Then $\{ (\ainfo_t, \hat
P_t, \hat r_t)\}_{t=1}^T$ is an \acs{AIS} with
\[
\varepsilon_t \coloneqq \sup_{\st \in \StSp, a \in \ALPHABET{\Act}}
\bigl| r_t(\st, a) - \hat r_t(\st, a) \bigr|
\quad\text{and}\quad
\delta_t = \sup_{\st \in \StSp, a \in \ALPHABET{\Act}}
d_{\mathfrak{F}}\bigl( P_t(\cdot | \st, a), \hat P_t (\cdot | \st, a) \bigr).
\]
A result similar in spirit to Theorem~\ref{thm:ais} for this setup for
general $d_{\mathfrak{F}}$ is given in Theorem~4.2 of \cite{Muller1997}. When $d_\mathfrak{F}$
is the Kantorovich metric, a bound for model approximation for infinite
horizon setup is provided in Theorem~2 of \cite{Asadi_2018}. This is
similar to our generalization of Theorem~\ref{thm:ais} to the infinite
horizon, which is given in Theorem~\ref{thm:inf-ais};
a bound on $\Minkowski_\mathfrak{F}(\hat V)$ in this case can be
obtained using results of \cite{Hinderer2005, Rachelson2010}.
\item \textsc{State abstraction in MDPs:} Consider an MDP with state space
$\StSp$, action space $\ALPHABET{\Act}$, transition kernel $P_t \colon \StSp \times
\ALPHABET{\Act} \to \Delta(\StSp)$, and per-step reward $r_t \colon \StSp \times
\ALPHABET{\Act} \to \reals$. Consider an abstract model defined over a
state space $\hat \StSp$ (which is ``smaller'' than $\StSp$) and the same
action space with transition kernel $\hat P_t \colon \hat \StSp \times
\ALPHABET{\Act} \to \Delta(\hat \StSp)$ and per-step reward $\hat r_t \colon \hat
\StSp \times \ALPHABET{\Act} \to \reals$. Suppose there is an abstraction function
$q \colon \StSp \to \hat \StSp$ and, in state $\st \in \StSp$, we
choose an action based on $q(\st)$. For such a model, define
$\ainfo_t(\St_{1:t}, A_{1:t-1}) = q(\St_t)$. Then
$\{ (\ainfo_t, \hat P_t, \hat r_t)\}_{t=1}^T$ is an \acs{AIS} with
\[
\varepsilon_t \coloneqq \sup_{\st \in \StSp, a \in \ALPHABET{\Act}}
\bigl| r_t(\st, a) - \hat r_t(q(\st), a) \bigr|
\quad\text{and}\quad
\delta_t \coloneqq \sup_{\st \in \StSp, a \in \ALPHABET{\Act}}
d_{\mathfrak{F}}\bigl(\mu_t(\cdot | \st, a), \hat P_t(\cdot | q(\st), a)\bigr),
\]
where for any Borel subset $\ALPHABET B$ of $\hat \StSp$,
\(
\mu_t(B | \st, a) \coloneqq P_t( q^{-1}(\ALPHABET B) | \st, a)
\).
There is a rich literature on state abstraction starting with
\cite{Bertsekas_1975} and \cite{Whitt_1978}, but the error bounds in those
papers are of a different nature. There are some recent papers which
derive error bounds similar to Theorem~\ref{thm:ais} for the infinite
horizon setup with state abstraction. We generalize Theorem~\ref{thm:ais}
to infinite horizon later in Theorem~\ref{thm:inf-ais}.
When $d_{\mathfrak{F}}$ is the Kantorovich metric, a bound on $\Minkowski_\mathfrak{F}(\hat V)
= \| \hat V \|_{\mathrm{Lip}}$ can be
obtained using results of \cite{Hinderer2005, Rachelson2010}. Substituting
this bound in Theorem~\ref{thm:inf-ais} gives us the following bound on the
approximation error of the \acs{AIS}-based policy.
\[
\bigl| V(\st) - V^\pi(s) \bigr| \le
\frac{2\varepsilon}{(1-\gamma)} + \frac{2\gamma \delta
\| \hat V \|_\mathrm{Lip}}{(1-\gamma)}.
\]
A similar bound was obtained in Theorem~5 of \cite{DeepMDP}. A detailed
comparison with this model is presented in Appendix~\ref{app:LipMDP}.
When $d_{\mathfrak{F}}$ is the total variation distance, a bound on $\Minkowski_\mathfrak{F}(\hat V)$
is given by $\tfrac12\Span(r)/(1-\gamma)$. Substituting this in
Theorem~\ref{thm:inf-ais}, we get that
\[
| V(s) - V^\pi(s) | \le \frac{2\varepsilon}{(1-\gamma)}
+ \frac{\gamma \delta \Span(r) }{(1-\gamma)^2}.
\]
An $\mathcal O(1/(1-\gamma)^3)$ bound on the policy approximation error in this
setup was obtained in Lemma~2 and Theorem~2 of \cite{Abel2016}. \textbf{Directly
using the \acs{AIS} bound of Theorems~\ref{thm:ais} and~\ref{thm:inf-ais} gives
a factor of $1/(1-\gamma)$ improvement in the error bound of
\cite{Abel2016}.} See
Appendix~\ref{app:Abel} for a detailed comparison.
\item \textsc{Belief approximation in POMDPs:} Consider a POMDP with state space
$\StSp$, action space $\ALPHABET{\Act}$, observation space $\ALPHABET{\Ob}$, and a per-step
reward function $r_t \colon \StSp \times \ALPHABET{\Act} \to \reals$. Let
$b_t(\cdot | H_t) \in \Delta(\StSp)$ denote the belief of the current
state given the history, i.e.,
\(
b_t(\st | H_t) = \PR(\St_t = \st \mid H_t)
\).
Suppose there are history compression functions $\{\phi_t \colon
\ALPHABET{\His}_t \to \Phi_t \}_{t=1}^T$ (where $\Phi_t$ is some arbitrary space)
along with
belief approximation functions $\{\hat b_t \colon \Phi_t \to
\Delta(\StSp)\}_{t=1}^T$, such that for any time~$t$ and any realization
$h_t$ of $H_t$, we have
\[
\| \hat b_t(\cdot \mid \phi_t(h_t)) - b_t(\cdot \mid h_t) \|_1 \le
\varepsilon.
\]
Such a $\{(\phi_t, \hat b_t)\}_{t=1}^T$ was called an
$\varepsilon$-sufficient statistic in \cite{FrancoisLavet2019}. An example
of $\varepsilon$-sufficient statistic is belief quantization, where the
belief is quantized to the nearest point in the \emph{type lattice}
(here $m = |\StSp|$)
\[
Q_n \coloneqq \bigl\{ (p_1, \dots, p_m) \in \Delta(\StSp) :
n p_i \in \integers_{\ge 0} \bigr\}.
\]
An efficient algorithm to find the nearest point in $Q_n$ for any given
belief $b_t \in \Delta(\StSp)$ is presented in \cite{Reznik2011}. Under
such a quantization, the maximum $\ell_1$ distance between a belief vector
and its quantized value is given by $2\lfloor m/2 \rfloor \lceil m/2
\rceil/ mn \approx m/2n$ (see Proposition~2 of \cite{Reznik2011}). Thus,
by taking $n > m/2\varepsilon$, we get an $\varepsilon$-sufficient
statistic; a simple quantizer in this spirit is sketched after this list.
\cite{FrancoisLavet2019} showed
that the bias of using the optimal policy based on $\hat b_t(h_t)$ in the
original model is $2\varepsilon \|r\|_{\infty}/(1-\gamma)^3$. This result
uses the same proof argument as \cite{Abel2016}, discussed in the
previous bullet point, and is not tight.
using total variation distance and using the bounded-Lipschitz metric on
the space of probability measures on beliefs, we can show that an
$\varepsilon$-sufficient statistic induces a $(\varepsilon \Span(r),
3\varepsilon)$-AIS. When $d_\mathfrak{F}$ is the bounded-Lipschitz metric, a bound
on $\Minkowski_\mathfrak{F}(\hat V)$ is given by $2\|r\|_\infty/(1-\gamma)$.
Substituting this in Theorem~\ref{thm:inf-ais}, we get that
\[
| V(s) - V^\pi(s) | \le
\frac{2 \varepsilon \| r \|_{\infty} }{(1-\gamma)}
+
\frac{6 \gamma \varepsilon \| r\|_{\infty}}{(1-\gamma)^2}.
\]
Thus, \textbf{directly using the AIS bound of Theorems~\ref{thm:ais}
and~\ref{thm:inf-ais} gives a factor of $1/(1-\gamma)$ improvement
in the error bound of \cite{FrancoisLavet2019}.}
See Appendix~\ref{app:FrancoisLavet} for details.
In a slightly different vein, belief quantization in POMDPs with finite or
Borel-valued unobserved state was investigated in~\cite{saldi2018finite},
who showed that under appropriate technical conditions the value
function and optimal policy of the quantized model converge to those of the
true model. However,~\cite{saldi2018finite} did not provide an approximation
error bound for a fixed quantization level.
\end{enumerate}
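As a concrete illustration of the belief quantization discussed in the last example, the following Python sketch projects a belief vector onto the type lattice $Q_n$ using simple largest-remainder rounding; this is a minimal sketch of a quantizer in that spirit and not necessarily the exact nearest-point algorithm of \cite{Reznik2011}.
\begin{verbatim}
import numpy as np

def quantize_belief(b, n):
    # Project a belief b onto the type lattice Q_n (entries are multiples of 1/n)
    # via largest-remainder rounding.
    scaled = n * np.asarray(b, dtype=float)
    k = np.floor(scaled).astype(int)
    # distribute the remaining mass to the entries with the largest fractional parts
    shortfall = n - k.sum()
    order = np.argsort(scaled - k)[::-1]
    k[order[:shortfall]] += 1
    return k / n

b = np.array([0.37, 0.45, 0.18])
print(quantize_belief(b, n=10))   # -> [0.4 0.4 0.2]
\end{verbatim}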
{\color{black}
\subsection{Approximate policy evaluation}\label{sec:ais-poleval}
In some settings, we are interested in comparing the performance of an
arbitrary policy in an approximate model with its performance in the real
model. The bounds of Theorem~\ref{thm:ais} can be adapted to such a setting as
well.
\begin{theorem}\label{thm:ais-poleval-fin}
Suppose $\{\ainfo_t, \nextinfo_t, \rewinfo_t\}_{t=1}^T$ is an
$\epsilonDelta$-\acs{AIS} generator. Let $\hat \pi^{\#} = (\hat \pi^{\#}_1, \dots,
\hat \pi^{\#}_T)$, where $\hat \pi^{\#}_t \colon \hat{\ALPHABET{Z}}_t \to \Delta(\ALPHABET{\Act})$,
be an arbitrary stochastic policy. Recursively define approximate policy action-value
functions $\{\hat Q^{\hat \pi^{\#}}_t \colon
\hat{\ALPHABET{Z}}_t \times \ALPHABET{\Act} \to \reals \}_{t=1}^T$ and value functions $\{\hat
V^{\hat \pi^{\#}}_t \colon \hat{\ALPHABET{Z}}_t \to \reals\}_{t=1}^T$ as follows:
$\hat V^{\hat \pi^{\#}}_{T+1}(\hat z_{T+1}) \coloneqq 0$ and for $t \in \{T, \dots,
1\}$:
\begin{subequations}\label{eq:DP-ais-poleval}
\begin{align}
\hat Q^{\hat \pi^{\#}}_t(\hat z_t, a_t) &\coloneqq
\rewinfo_t(\hat z_t, a_t) +
\int_{\hat{\ALPHABET{Z}}_{t+1}} \hat V^{\hat \pi^{\#}}_{t+1}(\hat z_{t+1})
\nextinfo_t(d\hat z_{t+1} \mid \hat z_t, a_t),
\\
\hat V^{\hat \pi^{\#}}_t(\hat z_t) &\coloneqq \sum_{a_t \in \ALPHABET{\Act}}
\hat \pi^{\#}_t(a_t \mid \hat z_t)\,
\hat Q^{\hat \pi^{\#}}_t(\hat z_t, a_t).
\end{align}
\end{subequations}
Define policy $\pi^{\#} = (\pi^{\#}_1, \dots, \pi^{\#}_T)$, where
$\pi^{\#}_t
\colon \ALPHABET{\His}_t \to \Delta(\ALPHABET{\Act})$ by $\pi^{\#}_t \coloneqq \hat \pi^{\#}_t \circ
\ainfo_t$. Then, for any time~$t$, realization~$h_t$ of $H_t$, and
choice $a_t$ of $A_t$, we have:
\begin{equation}\label{eq:policy-approx-2-poleval}
\lvert Q^{\pi^{\#}}_t(h_t, a_t) -
\hat Q^{\hat \pi^{\#}}_t(\ainfo_t(h_t), a_t) \rvert
\le \alpha^{\#}_t
\quad\text{and}\quad
\lvert
V^{\pi^{\#}}_t(h_t)
-
\hat V^{\hat \pi^{\#}}_t(\ainfo_t(h_t))
\rvert
\le \alpha^{\#}_t,
\end{equation}
where $\alpha^{\#}_t$ satisfies the following recursion: $\alpha^{\#}_{T+1} =
0$ and for $t \in \{T, \dots, 1 \}$,
\[
\alpha^{\#}_t = \varepsilon_t + \Minkowski_{\mathfrak{F}}(\hat V^{\hat \pi^{\#}}_{t+1})
\delta_{t} + \alpha^{\#}_{t+1}.
\]
Therefore,
\[
\alpha^{\#}_t = \varepsilon_t + \sum_{\tau=t+1}^{T}\bigl[
\Minkowski_\mathfrak{F}(\hat V^{\hat \pi^{\#}}_{\tau}) \delta_{\tau-1} + \varepsilon_\tau \bigr].
\]
\end{theorem}
\begin{proof}
The proof proceeds by backward induction along the same lines as the proof
of Theorem~\ref{thm:ais}.
By construction,
Eq.~\eqref{eq:policy-approx-2-poleval} holds at time~$T+1$. This forms the basis of
induction. Assume that~\eqref{eq:policy-approx-2-poleval} holds at time~$t+1$ and
consider the system at time~$t$. We have
\begin{align*}
\bigl| Q^{\pi^{\#}}_t(h_t, a_t) &-
\hat Q^{\hat \pi^{\#}}_t(\ainfo_t(h_t), a_t) \bigr|
\\
&\stackrel{(a)}\le
\bigl| \EXP[ R_t \mid H_t = h_t, A_t = a_t ]
- \rewinfo_t(\ainfo_t(h_t), a_t) \bigr|
\\
&\quad +
\EXP\bigl[ \bigl| V^{\pi^{\#}}_{t+1}(H_{t+1}) - \hat V^{\hat
\pi^{\#}}_{t+1}(\ainfo_{t+1}(H_{t+1}))
\bigr| \bigm| H_t = h_t, A_t = a_t \bigr]
\\
&\quad+
\biggl| \EXP[ \hat V^{\hat \pi^{\#}}_{t+1}(\ainfo_{t+1}(H_{t+1})) \mid H_t = h_t,
A_t = a_t ] -
\int_{\hat{\ALPHABET{Z}}_{t+1}} \hat V^{\hat \pi^{\#}}_{t+1}(\hat z_{t+1}) \nextinfo_t(d\hat z_{t+1} \mid
\ainfo_t(h_t), a_t) \biggr|
\\
&\stackrel{(b)}\le
\varepsilon_t + \alpha^{\#}_{t+1} + \Minkowski_{\mathfrak{F}}(\hat V^{\hat \pi^{\#}}_{t+1})
\delta_{t} = \alpha^{\#}_t
\end{align*}
where $(a)$ follows from triangle inequality and $(b)$ follows from (AP1),
the induction hypothesis, (AP2) and~\eqref{eq:minkowski}. This proves the
first part of~\eqref{eq:policy-approx-2-poleval}. The second part follows from
the fact that $\pi^{\#}_t(a_t | h_t) = \hat \pi^{\#}_t(a_t |
\ainfo_t(h_t))$ and the triangle inequality:
\[
\bigl| V^{\pi^{\#}}_t(h_t) - \hat V^{\hat \pi^{\#}}_t(\ainfo_t(h_t)) \bigr|
\le \sum_{a_t \in \ALPHABET{\Act}} \hat \pi^{\#}_t(a_t | \ainfo_t(h_t) )
\bigl| Q^{\pi^{\#}}(h_t, a_t) - \hat Q^{\hat \pi^{\#}}_t(\ainfo_t(h_t),
a_t) \bigr| \le \alpha^{\#}_t.
\]
\end{proof}
}
\subsection{Stochastic AIS} \label{sec:stochais}
We have so far assumed that the history compression functions $\ainfo_t
\colon \ALPHABET{\His}_t \to \hat{\ALPHABET{Z}}_t$ are deterministic functions.
When learning a
discrete-valued \acs{AIS} from data, it is helpful to consider stochastic
mappings of the history, so that the quality of the mapping may be improved via
stochastic gradient descent.
\acs{AIS} also covers the case of stochastic \acs{AIS} because a stochastic
function from $\ALPHABET{\His}_t$ to $\hat{\ALPHABET{Z}}_t$ may be viewed as a deterministic
function from $\ALPHABET{\His}_t$ to $\Delta(\hat{\ALPHABET{Z}}_t)$. However, a more explicit
characterization is also possible, which we present next.
\begin{definition} \label{def:stoc-ais}
Let $\{\hat{\ALPHABET{Z}}_t\}_{t=1}^T$ be a pre-specified collection of Banach spaces,
$\mathfrak{F}$ be a function class for \textup{IPMs}, and
$\epsilonDelta$ be pre-specified positive real numbers. A collection $\{
\ainfo^s_t \colon \ALPHABET{\His}_t \to \Delta(\hat{\ALPHABET{Z}}_t) \}_{t=1}^T$ of stochastic history compression
functions, along with approximate update kernels $\{\nextinfo_t \colon
\hat{\ALPHABET{Z}}_t \times \ALPHABET{\Act} \to \Delta(\hat{\ALPHABET{Z}}_{t+1})\}_{t=1}^T$ and reward
approximation functions $\{\rewinfo_t \colon \hat{\ALPHABET{Z}}_t \times \ALPHABET{\Act} \to
\reals\}_{t=1}^T$, is called an \emph{$\epsilonDelta$-stochastic \ac{AIS}
generator} if the process $\{\hat Z_t\}_{t=1}^T$, where
$\hat Z_t \sim \ainfo^s_t(H_t)$, satisfies the following properties:
\begin{description}
\item[(AP1)] \textbf{\textup{Sufficient for approximate performance
evaluation}}, i.e., for any time~$t$, any realization $h_t$ of
$H_t$ and any choice $a_t$ of $A_t$, we have
\[
\bigl\lvert \EXP[ \Rew_t \mid H_t = h_t, A_t = a_t ] -
\EXPsAIS[\rewinfo_t(\hat Z_t, a_t)] \bigr\rvert
\le \varepsilon_t.
\]
\item[(AP2)] \textbf{\textup{Sufficient to predict itself approximately}},
i.e., for any time~$t$, any realization $h_t$ of $H_t$, any choice
$a_t$ of $A_t$, and for any Borel subset $\ALPHABET B$ of
$\hat{\ALPHABET{Z}}_{t+1}$, define
\(
\mu_t(\ALPHABET B) \coloneqq \PR(\hat Z_{t+1} \in B \mid H_t = h_t, A_t = a_t)
\)
and
\(
\nu_t(B) \coloneqq \EXPsAIS[\nextinfo_t(B |
\hat Z_t, a_t)];
\)
then,
\[
d_\mathfrak{F}( \mu_t, \nu_t) \le \delta_t.
\]
\end{description}
\end{definition}
Similar to Theorem~\ref{thm:ais}, we then have the following result.
\begin{theorem}\label{thm:stoc-ais}
Given a stochastic \acs{AIS} generator $\{\ainfo^s_t, \nextinfo_t,
\rewinfo_t\}_{t=1}^T$, define value functions $\{ \hat V_t \colon \hat{\ALPHABET{Z}}_t
\to \reals\}_{t=1}^T$ and action-value functions $\{ \hat Q_t \colon \hat{\ALPHABET{Z}}_t
\times \ALPHABET{\Act} \to \reals \}_{t=1}^T$ as in Theorem~\ref{thm:ais}.
Then, we have the following:
\begin{enumerate}
\item \textbf{\textup{Value function approximation:}} For any time~$t$,
realization~$h_t$ of $H_t$, and choice $a_t$ of $A_t$ we have
\begin{equation}\label{eq:stoch-value-approx}
\lvert Q_t(h_t, a_t) - \EXPsAIS[ \hat Q_t(\hat Z_t, a_t)] \rvert
\le \alpha_t
\quad\text{and}\quad
\lvert V_t(h_t) - \EXPsAIS[ \hat V_t(\hat Z_t) ] \rvert
\le \alpha_t,
\end{equation}
where $\alpha_t$ is defined as in Theorem~\ref{thm:ais}.
\item \textbf{\textup{Approximately optimal policy:}} Let $\hat \pi = (\hat \pi_1, \dots,
\hat \pi_T)$, where $\hat \pi_t \colon \hat{\ALPHABET{Z}}_t \to \Delta(\ALPHABET{\Act})$,
be a stochastic policy that satisfies
\begin{equation}\label{eq:stoch-ais-opt}
\Supp(\hat \pi(\hat z_t)) \subseteq
\arg \max_{a_t \in \ALPHABET{\Act}} \hat Q_t(\hat z_t, a_t).
\end{equation}
Define policy $\pi = (\pi_1, \dots, \pi_T)$, where $\pi_t
\colon \ALPHABET{\His}_t \to \Delta(\ALPHABET{\Act})$ by $\pi_t(h_t) = \EXPsAIS[ \hat
\pi_t(\hat Z_t) ]$. Then, for any time~$t$, realization~$h_t$ of $H_t$,
and choice $a_t$ of $A_t$, we
have
\begin{equation}\label{eq:stoc-policy-approx}
\lvert Q_t(h_t, a_t) - Q^\pi_t(h_t, a_t)\rvert
\le 2\alpha_t
\quad\text{and}\quad
\lvert V_t(h_t) - V^\pi_t(h_t) \rvert
\le 2\alpha_t.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
The proof is almost the same as the proof of Theorem~\ref{thm:ais}. The main
difference is that for the value and action-value functions of the
stochastic \acs{AIS}, we take an additional expectation over the
realization of the stochastic \acs{AIS}.
the first part of the result (value approximation). The second part (policy
approximation) follows along similar lines.
Eq.~\eqref{eq:stoch-value-approx} holds at $T+1$ by definition.
This forms the basis of induction. Assume that~\eqref{eq:stoch-value-approx}
holds at time~$t+1$ and consider the system at time~$t$. We have that
\begin{align*}
\bigl| Q_t&(h_t, a_t) -
\EXPsAIS[ \hat Q_t(\hat Z_t, a_t)] \bigr|
\\
&\stackrel{(a)}\le
\bigl| \EXP[ R_t \mid H_t = h_t, A_t = a_t ]
- \EXPsAIS[ \rewinfo_t(\hat Z_t, a_t) ] \bigr|
\\
&\quad +
\EXP\bigl[ \bigl| V_{t+1}(H_{t+1}) -
\EXPsAISt[ \hat V_{t+1}(\hat Z_{t+1}) ]
\bigr| \bigm| H_t = h_t, A_t = a_t \bigr]
\\
&\quad+
\biggl| \EXP[ \EXPsAISt[ \hat V_{t+1}(\hat Z_{t+1}) ] \mid H_t = h_t,
A_t = a_t ] -
\EXPsAIS \biggl[
\int_{\hat{\ALPHABET{Z}}_{t+1}} \hat V_{t+1}(\hat z_{t+1}) \nextinfo_t(d\hat z_{t+1} \mid
\hat Z_t, a_t) \biggr] \biggr|
\\
&\stackrel{(b)}\le
\varepsilon_t + \alpha_{t+1} + \Minkowski_{\mathfrak{F}}(\hat V_{t+1})
\delta_{t} = \alpha_t
\end{align*}
where $(a)$ follows from triangle inequality and $(b)$ follows from (AP1),
the induction hypothesis, (AP2) and~\eqref{eq:minkowski}. This proves the
first part of~\eqref{eq:stoch-value-approx}. The second part follows from
\[
\bigl| V_t(h_t) - \EXPsAIS[ \hat V_t(\hat Z_t) ] \bigr|
\stackrel{(a)}\le
\max_{a_t \in \ALPHABET{\Act}}
\bigl| Q_t(h_t, a_t) - \EXPsAIS[ \hat Q_t(\hat Z_t, a_t) ] \bigr|
\le \alpha_t,
\]
where $(a)$ follows from the inequality $\max f(x) \le \max | f(x) - g(x) |
+ \max g(x)$. This completes the proof of value approximation. The proof of
policy approximation is similar to that of Theorem~\ref{thm:ais} adapted in
the same manner as above.
\end{proof}
\subsection{AIS with action compression} \label{sec:actais}
So far we have assumed that the action space for the AIS is the same as the
action space for the original model. In some instances, for example, for
continuous or large action spaces, it may be desirable to quantize or compress
the actions as well. In this section, we generalize the notion of AIS to
account for action compression.
\begin{definition} \label{def:ais-ac}
As in the definition of \acs{AIS}, suppose $\{\hat{\ALPHABET{Z}}_t\}_{t=1}^T$ is a
pre-specified collection of Banach spaces, $\mathfrak{F}$ is a function class for
\textup{IPMs}, and $\epsilonDelta$ are pre-specified positive real numbers. In
addition, suppose we have a subset $\hat {\ALPHABET{\Act}} \subset \ALPHABET{\Act}$ of quantized
actions. Then, a collection $\{\ainfo_t \colon \ALPHABET{\His}_t \to \hat{\ALPHABET{Z}}_t
\}_{t=1}^T$ of history compression functions, along with action quantization
function $\aquant \colon \ALPHABET{\Act} \to \hat {\ALPHABET{\Act}}$,
approximate update kernels $\{\nextinfo_t \colon
\hat{\ALPHABET{Z}}_t \times \hat {\ALPHABET{\Act}} \to \Delta(\hat{\ALPHABET{Z}}_{t+1})\}_{t=1}^T$ and reward
approximation functions $\{\rewinfo_t \colon \hat{\ALPHABET{Z}}_t \times \hat {\ALPHABET{\Act}} \to
\reals\}_{t=1}^T$, is called an
\emph{$\epsilonDelta$-action-quantized \acs{AIS}
generator} if the process $\{\hat Z_t\}_{t=1}^T$, where
$\hat Z_t = \ainfo_t(H_t)$, satisfies the following properties:
\begin{description}
\item[(AQ1)] \textbf{\textup{Sufficient for approximate performance
evaluation}}, i.e., for any time~$t$, any realization $h_t$ of
$H_t$ and any choice $a_t$ of $A_t$, we have
\[
\bigl\lvert \EXP[ \Rew_t \mid H_t = h_t, A_t = a_t ] -
\rewinfo_t(\ainfo_t(h_t), \aquant(a_t)) \bigr\rvert
\le \varepsilon_t.
\]
\item[(AQ2)] \textbf{\textup{Sufficient to predict itself approximately}},
i.e., for any time~$t$, any realization $h_t$ of $H_t$, any choice
$a_t$ of $A_t$, and for any Borel subset $\ALPHABET B$ of
$\hat{\ALPHABET{Z}}_{t+1}$, define
\(
\mu_t(\ALPHABET B) \coloneqq \PR(\hat Z_{t+1} \in \ALPHABET B \mid H_t = h_t, A_t = a_t)
\)
and
\(
\nu_t(\ALPHABET B) \coloneqq \nextinfo_t(\ALPHABET B \mid \ainfo_t(h_t),
\aquant(a_t));
\)
then,
\[
d_\mathfrak{F}( \mu_t, \nu_t) \le \delta_t.
\]
\end{description}
\end{definition}
Similar to Theorem~\ref{thm:ais}, we show that an action-quantized \acs{AIS}
can be used to determine an approximately optimal policy.
\begin{theorem}\label{thm:ais-ac}
Suppose $\{\ainfo_t, \aquant, \nextinfo_t, \rewinfo_t\}_{t=1}^T$ is an
action-quantized \acs{AIS} generator.
Recursively define approximate action-value functions $\{\hat Q_t \colon
\hat{\ALPHABET{Z}}_t \times \hat {\ALPHABET{\Act}} \to \reals \}_{t=1}^T$ and value functions $\{\hat
V_t \colon \hat{\ALPHABET{Z}}_t \to \reals\}_{t=1}^T$ as follows:
$\hat V_{T+1}(\hat z_{T+1}) \coloneqq 0$ and for $t \in \{T, \dots,
1\}$:
\begin{subequations}\label{eq:DP-ais-ac}
\begin{align}
\hat Q_t(\hat z_t, \hat a_t) &\coloneqq \rewinfo_t(\hat z_t, \hat a_t)
+ \int_{\hat{\ALPHABET{Z}}_{t+1}} \hat V_{t+1}(\hat z_{t+1})
\nextinfo_t(d \hat z_{t+1} \mid \hat z_t, \hat a_t),
\\
\hat V_t(\hat z_t) &\coloneqq \max_{\hat a_t \in \hat {\ALPHABET{\Act}}} \hat Q_t(\hat z_t, \hat a_t).
\end{align}
\end{subequations}
Then, we have the following:
\begin{enumerate}
\item \textbf{\textup{Value function approximation:}} For any time~$t$,
realization~$h_t$ of $H_t$, and choice $a_t$ of $A_t$, we have
\begin{equation}\label{eq:value-approx-ac}
\lvert Q_t(h_t, a_t) - \hat Q_t(\ainfo_t(h_t), \aquant(a_t))\rvert
\le \alpha_t
\quad\text{and}\quad
\lvert V_t(h_t) - \hat V_t(\ainfo_t(h_t)) \rvert
\le \alpha_t,
\end{equation}
where $\alpha_t$ is defined as in Theorem~\ref{thm:ais}.
\item \textbf{\textup{Approximately optimal policy:}} Let $\hat \pi = (\hat \pi_1, \dots,
\hat \pi_T)$, where $\hat \pi_t \colon \hat{\ALPHABET{Z}}_t \to \Delta(\hat {\ALPHABET{\Act}})$,
be a stochastic policy that satisfies
\begin{equation}\label{eq:ais-opt-ac}
\Supp(\hat \pi_t(\hat z_t)) \subseteq
\arg \max_{\hat a_t \in \hat {\ALPHABET{\Act}}} \hat Q_t(\hat z_t, \hat a_t).
\end{equation}
Define policy $\pi = (\pi_1, \dots, \pi_T)$, where $\pi_t
\colon \ALPHABET{\His}_t \to \Delta(\ALPHABET{\Act})$ by $\pi_t \coloneqq \hat \pi_t \circ
\ainfo_t$. Then, for any time~$t$, realization~$h_t$ of $H_t$, and
choice $a_t$ of $A_t$, we
have
\begin{equation}\label{eq:policy-approx-ac}
\lvert Q_t(h_t, a_t) - Q^\pi_t(h_t, \aquant(a_t))\rvert
\le 2\alpha_t
\quad\text{and}\quad
\lvert V_t(h_t) - V^\pi_t(h_t) \rvert
\le 2\alpha_t.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
The proof is similar to the proof of Theorem~\ref{thm:ais}. We only show the
details of the first part (value approximation). The second part (policy
approximation) follows along similar lines.
As before, we prove the result by backward induction.
Eq.~\eqref{eq:value-approx-ac} holds at $T+1$ by definition. This forms the
basis of induction. Assume that~\eqref{eq:value-approx-ac} holds at time~$t+1$
and consider the system at time~$t$. We have that
\begin{align*}
\bigl| Q_t(h_t, a_t) &-
\hat Q_t(\ainfo_t(h_t), \aquant(a_t)) \bigr|
\\
&\stackrel{(a)}\le
\bigl| \EXP[ R_t \mid H_t = h_t, A_t = a_t ]
- \rewinfo_t(\ainfo_t(h_t), \aquant(a_t)) \bigr|
\\
&\quad +
\EXP\bigl[ \bigl| V_{t+1}(H_{t+1}) - \hat V_{t+1}(\ainfo_{t+1}(H_{t+1}))
\bigr| \bigm| H_t = h_t, A_t = a_t \bigr]
\\
&\quad+
\biggl| \EXP[ \hat V_{t+1}(\ainfo_{t+1}(H_{t+1})) \mid H_t = h_t,
A_t = a_t ] -
\int_{\hat{\ALPHABET{Z}}_{t+1}} \hat V_{t+1}(\hat z_{t+1}) \nextinfo_t(d\hat z_{t+1} \mid
\ainfo_t(h_t), \aquant(a_t)) \biggr|
\\
&\stackrel{(b)}\le
\varepsilon_t + \alpha_{t+1} + \Minkowski_{\mathfrak{F}}(\hat V_{t+1})
\delta_{t} = \alpha_t
\end{align*}
where $(a)$ follows from triangle inequality and $(b)$ follows from (AQ1),
the induction hypothesis, (AQ2) and~\eqref{eq:minkowski}. This proves the
first part of~\eqref{eq:value-approx-ac}. The second part follows from
\[
\bigl| V_t(h_t) - \hat V_t(\ainfo_t(h_t)) \bigr|
\stackrel{(a)}\le
\max_{a_t \in \ALPHABET{\Act}}
\bigl| Q_t(h_t, a_t) - \hat Q_t(\ainfo_t(h_t), \aquant(a_t)) \bigr|
\le \alpha_t,
\]
where $(a)$ follows from the inequality $\max f(x) \le \max | f(x) - g(x) |
+ \max g(x)$. We have also used the fact that if $\aquant$ is an onto
function, then
\(
\max_{\hat a_t \in \hat {\ALPHABET{\Act}}} \hat Q_t(\hat z_t, \hat a_t) =
\max_{a_t \in \ALPHABET{\Act}} \hat Q_t(\hat z_t, \aquant(a_t))
\).
This completes the proof of value approximation. The proof of policy
approximation is similar to that of Theorem~\ref{thm:ais} adapted in the
same manner as above.
\end{proof}
Action quantization in POMDPs with finite or Borel-valued unobserved state was
investigated in~\cite{saldi2018finite}, where it was shown that under
appropriate technical conditions the value function and optimal policies of
the quantized model converge to those of the
true model. However, \cite{saldi2018finite} did not provide approximation
error bounds for a fixed quantization level.
\paragraph{Simplification for perfectly observed case:}
The approximation bounds for action compression
derived in Theorem~\ref{thm:ais-ac} can be simplified when the system is
perfectly observed. In particular, consider an MDP with state space $\StSp$,
action space $\ALPHABET{\Act}$, transition probability $P \colon \StSp \times \ALPHABET{\Act}
\to \Delta(\StSp)$, per-step reward function $r \colon \StSp \times \ALPHABET{\Act} \to
\reals$, and discount factor $\gamma$.
For MDPs, we can simplify the definition of an action-quantized
\acs{AIS} generator as follows.
\begin{definition} \label{def:ais-ac-only}
Given an MDP as defined above, let $\mathfrak{F}$ be a function class for
\textup{IPMs}, and $(\varepsilon, \delta)$ be pre-specified positive real numbers. In
addition, suppose we have a subset $\hat {\ALPHABET{\Act}} \subset \ALPHABET{\Act}$ of quantized
actions. Then, an action quantization function $\aquant \colon \ALPHABET{\Act} \to \hat {\ALPHABET{\Act}}$
is called an
\emph{$(\varepsilon, \delta)$-action-quantizer}
if the following properties are satisfied:
\begin{description}
\item[(AQM1)] \textbf{\textup{Sufficient for approximate performance
evaluation}}, i.e., for any $\st \in \StSp$ and $a \in \ALPHABET{\Act}$, we
have
\[
\bigl\lvert \RewFn(\st, a) -
\RewFn(\st, \aquant(a)) \bigr\rvert
\le \varepsilon.
\]
\item[(AQM2)] \textbf{\textup{Sufficient to predict the next state approximately}},
i.e., for any $\st \in \StSp$ and $a \in \ALPHABET{\Act}$,
\[
d_\mathfrak{F}(P(\cdot | \st, a), P(\cdot | \st, \aquant(a)))
\le \delta.
\]
\end{description}
\end{definition}
Then, the approximation in Theorem~\ref{thm:ais-ac} simplifies for an MDP as
follows.
\begin{corollary}\label{cor:ais-ac-only}
Suppose $\aquant$ is an $(\varepsilon, \delta)$-action-quantizer.
Recursively define approximate action-value functions $\{\hat Q_t \colon
\StSp \times \hat {\ALPHABET{\Act}} \to \reals \}$ and value functions $\{\hat
V_t \colon \StSp \to \reals\}$ as follows:
$\hat V_{T+1}(\st_{T+1}) \coloneqq 0$ and for $t \in \{T, \dots,
1\}$:
\begin{subequations}\label{eq:DP-ais-ac-only}
\begin{align}
\hat Q_t(\st_t, \hat a_t) &\coloneqq \RewFn(\st_t, \hat a_t)
+ \int_{\StSp} \hat V_{t+1}(\st_{t+1})
P(d \st_{t+1} \mid \st_t, \hat a_t),
\\
\hat V_t(\st_t) &\coloneqq \max_{\hat a_t \in \hat {\ALPHABET{\Act}}} \hat Q_t(\st_t, \hat a_t).
\end{align}
\end{subequations}
Then, we have the following:
\begin{enumerate}
\item \textbf{\textup{Value function approximation:}} For any time~$t$,
$\st \in \StSp$ and $a \in \ALPHABET{\Act}$, we have
\begin{equation}\label{eq:value-approx-ac-only}
\lvert Q_t(\st, a) - \hat Q_t(\st, \aquant(a))\rvert
\le \alpha_t
\quad\text{and}\quad
\lvert V_t(\st) - \hat V_t(\st) \rvert
\le \alpha_t,
\end{equation}
where $\alpha_t$ is defined as in Theorem~\ref{thm:ais}.
\item \textbf{\textup{Approximately optimal policy:}} Let $\hat \pi = (\hat \pi_1, \dots,
\hat \pi_T)$, where $\hat \pi_t \colon \StSp \to \Delta(\hat {\ALPHABET{\Act}})$,
be a stochastic policy that satisfies
\begin{equation}\label{eq:ais-opt-ac-only}
\Supp(\hat \pi_t(\st_t)) \subseteq
\arg \max_{\hat a_t \in \hat {\ALPHABET{\Act}}} \hat Q_t(\st_t, \hat a_t).
\end{equation}
Since $V^{\hat \pi}_t(\st_t) = \hat V_t(\st_t)$ and $Q^{\hat
\pi}_t(\st, \hat a_t) = \hat Q_t(\st, \hat a_t)$, we have
\begin{equation}\label{eq:policy-approx-ac-only}
\lvert Q_t(\st_t, a_t) - Q^{\hat \pi}_t(\st_t, \aquant(a_t))\rvert
\le \alpha_t
\quad\text{and}\quad
\lvert V_t(\st_t) - V^{\hat \pi}_t(\st_t) \rvert
\le \alpha_t.
\end{equation}
\end{enumerate}
\end{corollary}
\begin{proof}
The proof follows in a straightforward manner from the proof of Theorem~\ref{thm:ais-ac}.
\end{proof}
Note that, in contrast to Theorem~\ref{thm:ais-ac}, the final approximation
bounds~\eqref{eq:policy-approx-ac-only} in Corollary~\ref{cor:ais-ac-only} do
not have the additional factor of~$2$. The reason is that the approximate policy
$\hat \pi$ can be executed directly in the original MDP, since $\hat {\ALPHABET{\Act}}
\subset \ALPHABET{\Act}$.
Approximation bounds similar to Corollary~\ref{cor:ais-ac-only} are used to
derive bounds for lifelong learning in \cite{chandak2020lifelong}. We show
that similar bounds may be obtained using Corollary~\ref{cor:ais-ac-only} in
Appendix~\ref{app:ais-ac-only}.
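To make the simplified definition concrete, the following is a minimal Python
sketch (not part of the paper) that numerically estimates the constants of an
$(\varepsilon, \delta)$-action-quantizer for a toy MDP with one-dimensional
action space $[0,1]$, a small finite state space, and a uniform quantization
grid. The reward and transition models are illustrative placeholders, and
total variation is computed with the $\frac12 \ell_1$ convention.
\begin{verbatim}
# Minimal sketch (illustrative, not from the paper): estimate the
# (epsilon, delta) constants of a uniform-grid action quantizer for a
# toy MDP with action space [0, 1] and a small finite state space.
import numpy as np

n_states = 5
rng = np.random.default_rng(0)

def reward(s, a):
    # placeholder reward, Lipschitz in the action
    return np.cos(s) + 0.5 * a

def transition(s, a):
    # placeholder transition kernel P(. | s, a), smooth in the action
    logits = np.array([-(k - 2.0 * a - 0.3 * s) ** 2 for k in range(n_states)])
    p = np.exp(logits)
    return p / p.sum()

def quantize(a, n_levels=10):
    # map an action to the nearest point of a uniform grid in [0, 1]
    grid = np.linspace(0.0, 1.0, n_levels)
    return grid[np.argmin(np.abs(grid - a))]

# Estimate epsilon = max |r(s,a) - r(s,q(a))| and
# delta = max d_TV(P(.|s,a), P(.|s,q(a))) over sampled (s, a) pairs.
eps, delta = 0.0, 0.0
for s in range(n_states):
    for a in rng.uniform(0.0, 1.0, size=200):
        a_hat = quantize(a)
        eps = max(eps, abs(reward(s, a) - reward(s, a_hat)))
        delta = max(delta, 0.5 * np.abs(transition(s, a) - transition(s, a_hat)).sum())

print(f"estimated epsilon = {eps:.4f}, delta = {delta:.4f}")
\end{verbatim}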
\subsection{AIS with observation compression} \label{sec:obsais}
In applications with high-dimensional observations such as video input, it is
desirable to pre-process the video frames into a low-dimensional
representation before passing them on to a planning or learning algorithm. In
this section, we generalize the notion of \ac{AIS} to account for such
observation compression.
\begin{definition}
As in the definition of \acs{AIS}, suppose $\{\hat{\ALPHABET{Z}}_t\}_{t=1}^T$ is a
pre-specified collection of Banach spaces, $\mathfrak{F}$ is a function class for
\textup{IPMs}, and $\epsilonDelta$ are pre-specified positive real numbers. In
addition, suppose we have a set $\hat {\ALPHABET{\Ob}}$ of compressed observations and a
compression function $q \colon \ALPHABET{\Ob} \to \hat {\ALPHABET{\Ob}}$. Let $\hat H_t$ denote the
history $(\hat Y_{1:t-1}, A_{1:t-1})$ of compressed observations $\hat Y_s = q(Y_s)$ and
actions, and let $\hat {\ALPHABET{\His}}_t$ denote the space of realizations of such compressed
histories. Then, a collection $\{\ainfo_t \colon \hat {\ALPHABET{\His}}_t \to \hat{\ALPHABET{Z}}_t
\}_{t=1}^T$ of history compression functions, along with observation
compression function $q \colon \ALPHABET{\Ob} \to \hat {\ALPHABET{\Ob}}$,
approximate update kernels $\{\nextinfo_t \colon
\hat{\ALPHABET{Z}}_t \times \ALPHABET{\Act} \to \Delta(\hat{\ALPHABET{Z}}_{t+1})\}_{t=1}^T$ and reward
approximation functions $\{\rewinfo_t \colon \hat{\ALPHABET{Z}}_t \times \ALPHABET{\Act} \to
\reals\}_{t=1}^T$, is called an
\emph{$\epsilonDelta$-observation-compressed \acs{AIS}
generator} if the process $\{\hat Z_t\}_{t=1}^T$, where
$\hat Z_t = \ainfo_t(\hat H_t)$, satisfies properties \textup{(AP1)} and
\textup{(AP2)}.
\end{definition}
\begin{figure}[ht]
\centering
\resizebox{0.65\linewidth}{!}{%
\begin{mpost}[mpsettings={input boxes;}]
defaultdx := 10bp;
defaultdy := 20bp;
boxit.system(\btex System etex);
system.c = origin;
drawboxed(system);
z1 = 0.5[system.w, system.nw];
z2 = 0.5[system.w, system.sw];
z3 = z1 - (1cm,0);
z4 = z2 - (1cm,0);
drawarrow z3 -- lft z1;
drawarrow z4 -- lft z2;
label.lft(\btex Stochastic input $W_t$ etex, z3);
label.lft(\btex Controlled input $A_t$ etex, z4);
z5 = 0.5[system.e, system.ne];
z6 = 0.5[system.e, system.se];
z7 = z5 + (1cm, 0);
z8 = z6 + (1cm, 0);
boxit.quantizer(\btex \vbox{\hbox{~Obs.} \endgraf \hbox{Comp.}} etex);
quantizer.dy=5pt;
quantizer.dx=5pt;
quantizer.w = z7 ;
drawarrow z5 -- z7;
z9 = quantizer.e + (1cm, 0);
z10 = (x9, y8);
drawboxed(quantizer);
drawarrow quantizer.e -- z9;
drawarrow z6 -- z10;
z100 = (xpart system.w - 0.5cm, ypart quantizer.n + 0.5cm);
z101 = (xpart quantizer.e + 0.5cm, ypart quantizer.n + 0.5cm);
z102 = (xpart quantizer.e + 0.5cm, ypart system.s - 0.5cm);
z103 = (xpart system.w - 0.5cm, ypart system.s - 0.5cm);
draw z100 -- z101 -- z102 -- z103 -- cycle
dashed evenly;
label.bot(\btex Modified input-output system etex, 0.5[z102, z103]);
label.rt(\btex Compressed Obs. $\hat Y_t$ etex, z9);
label.rt(\btex Reward $R_t$ etex, z10);
\end{mpost}}
\caption{A stochastic input-output system with observation compression}
\label{fig:output-compression}
\end{figure}
In essence, we can view observation compression as a new input-output system
whose outputs are $(\hat Y_t, \Rew_t)$ instead of $(Y_t, \Rew_t)$ as shown in
Fig.~\ref{fig:output-compression}. A construction similar to
observation-compressed \ac{AIS} is proposed in~\cite{ha2018world}, where it is
shown that such a construction performs well empirically, but there was no
analysis of the approximation guarantees of such a construction.
An immediate implication of the above definition is the following:
\begin{corollary}
Let $\{ \ainfo_t, q, \nextinfo_t, \rewinfo_t\}_{t \ge 1}$ be an
$\epsilonDelta$-observation-compressed \ac{AIS} generator. Then, the bounds of
Theorem~\ref{thm:ais} hold.
\end{corollary}
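As an illustration of this viewpoint, the following is a minimal Python sketch
(not part of the paper) of the modified input-output system of
Fig.~\ref{fig:output-compression}: an environment wrapper that applies an
observation compression function $q$ before the observation is passed to the
agent. The \texttt{reset}/\texttt{step} interface of the wrapped environment
and the downsampling compressor are assumptions made only for this sketch.
\begin{verbatim}
# Minimal sketch (illustrative): wrap a partially observed environment so
# that the agent only ever sees compressed observations q(Y_t).
class ObservationCompressionWrapper:
    def __init__(self, env, q):
        self.env = env   # assumed to expose reset() and step(action)
        self.q = q       # observation compression function q : Y -> Y_hat

    def reset(self):
        y = self.env.reset()
        return self.q(y)

    def step(self, action):
        # assumed step() signature: returns (observation, reward, done)
        y, reward, done = self.env.step(action)
        return self.q(y), reward, done

# Example compressor: keep every fourth pixel of a 2-D observation frame.
def downsample(frame):
    return frame[::4, ::4]
\end{verbatim}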
\subsection{Discussion and related work}
\acs{AIS} may be viewed as a generalization of state discretization
\citep{Bertsekas_1975} or state aggregation \citep{Whitt_1978} in MDPs. As
illustrated by the examples in Sec.~\ref{ex:ais}, many of the recent results
on approximation bounds for state aggregation and latent state embedding in
MDPs are specific instances of \acs{AIS} and, in some instances, using the
approximation bounds of Theorem~\ref{thm:ais} or its generalization to
infinite horizon (Theorem~\ref{thm:inf-ais}) provide tighter bounds than those
in the literature. A detailed comparison with these results is presented in
the Appendices. We presented a simpler definition of AIS and the
approximation bounds in the preliminary version of this paper \citep{CDC}.
As mentioned in Sec.~\ref{discuss:info-state} while discussing the related literature
on information states, there are two other methods for identifying ``states''
for POMDPs: bisimulation-based methods and predictive state representations (PSRs). Approximation techniques for both
these methods have been proposed in the literature.
State aggregation techniques based on bisimulation metrics have been proposed
in \cite{FernsPanangadenPrecup_2004, FernsPanangadenPrecup_2011} for MDPs and
\cite{CastroPanangadenPrecup_2009} for POMDPs. The key insight of these
papers is to define a semi-metric called bisimulation metric on the state
space of an MDP or the belief space of a POMDP as the unique fixed point of an
operator on the space of semi-metrics on the state space of the MDP or the
belief space of the POMDP. It is then shown that the value function is
Lipschitz with respect to this metric. Then, they propose state aggregation
based on the bisimulation metric. Although the basic building blocks of
bisimulation metrics are the same as those of an \ac{AIS}, the approximation
philosophies are different. The bisimulation-metric based approximations are
a form of state aggregation, while \ac{AIS} need not be a state
aggregation.
Various methods for learning low dimensional approximations of PSRs have been
proposed in the literature, including approaches which use spectral learning
algorithms \citep{RosencrantzGordonThrun_2004,Boots2011, Hamilton2014,
Kulesza2015,Kulesza2015a,Jiang2016}, and stochastic gradient descent
\citep{Jiang2016}. Error bounds for using an approximate PSR were derived in
\cite{Wolfe2008, Hamilton2014}. These approximation methods for PSRs rely on the specific structure of PSRs and are conceptually different from the approximation
methods used in \ac{AIS}.
\section{Comparison with the results of
\texorpdfstring{\cite{FrancoisLavet2019}}{Francois-Lavet et al. (2019)} for belief
approximation in POMDPs}\label{app:FrancoisLavet}
\cite{FrancoisLavet2019} analyze the trade-off between asymptotic bias and
overfitting in reinforcement learning with partial observations. As part of
their analysis, they express the quality of state representation in terms of
the bounds on the $L_1$ error of the associated belief states. We show that
these approximation bounds may be viewed as an instance of AIS-based bounds of
Theorems~\ref{thm:ais} and~\ref{thm:inf-ais}. We also show that the bounds of
Theorem~\ref{thm:inf-ais} for this model are stronger than those derived in
\cite{FrancoisLavet2019} by a factor of $\mathcal O(1/(1-\gamma))$.
Since we follow a slightly different notation than \cite{FrancoisLavet2019}
and for the sake of completeness, we start by describing the notion of
$\varepsilon$-sufficient statistics defined in \cite{FrancoisLavet2019}.
Consider an infinite-horizon finite-state finite-action POMDP with state space
$\StSp$, action space $\ALPHABET{\Act}$, observation space $\ALPHABET{\Ob}$, transition
probability matrix $P \colon \StSp \times \ALPHABET{\Act} \to \Delta(\StSp)$,
observation matrix $P^y \colon \StSp \to \Delta(\ALPHABET{\Ob})$, per-step reward
$r \colon \StSp \times \ALPHABET{\Act} \to \reals$, and discount factor $\gamma$.
\begin{definition}[$\varepsilon$-sufficient statistic
\citep{FrancoisLavet2019}]
Given a family of Banach spaces $\{\Phi_t\}_{t=1}^T$, an
$\varepsilon$-sufficient statistic is a collection of history
compression functions $\{ \phi_t \colon \ALPHABET{\His}_t \to \Phi_t \}_{t=1}^T$ and
belief approximation functions $\{ \hat b_t \colon \Phi_t \to \Delta(\StSp)
\}_{t=1}^T$ such that for any time~$t$ and any realization $h_t$ of
$H_t$, we have
\[
\| \hat b_t(\cdot | \phi_t(h_t)) - b_t (\cdot | h_t) \|_1 \le
\varepsilon.
\]
\end{definition}
Given an $\varepsilon$-sufficient statistic, \cite{FrancoisLavet2019} define
an MDP with state space $\Delta(\StSp)$, action space $\ALPHABET{\Act}$, transition
probability kernel $\PR( \hat b_{t+1}(\cdot | \phi(h_{t+1})) \mid \hat
b_t(\cdot | \phi(h_t)), a_t)$ computed from the underlying POMDP,
and per-step reward given by
\[
\hat r(\hat b_t(h_t), a_t) =
\sum_{\st \in \StSp} r(\st, a_t) \hat b_t(\st | \phi(h_t)).
\]
\begin{proposition}[Theorem 1 of \cite{FrancoisLavet2019}]
\label{prop:FrancoisLavet}
Let $\{(\hat b_t, \phi_t)\}_{t=1}^T$ be an $\varepsilon$-sufficient statistic and $\hat
\pi = (\hat \pi_1, \hat \pi_2, \dots)$ be an optimal policy for the MDP
described above. Define a policy $\pi = (\pi_1, \pi_2, \dots)$ given by
$\pi_t = \hat \pi_t \circ \phi_t$. Let $V_t \colon \ALPHABET{\His}_t \to \reals$
denote the optimal value functions and $V^\pi_t \colon \ALPHABET{\His}_t \to
\reals$ denote the value functions for policy $\pi$. Then for any initial
history $h_1 \in \ALPHABET{\His}_1$,
\[
\bigl| V_1(h_1) - V_1^\pi(h_1) \bigr| \le
\frac{2 \varepsilon \|r\|_{\infty}}{(1-\gamma)^3}.
\]
\end{proposition}
We now show that an $\varepsilon$-sufficient statistic gives rise to an
\acs{AIS} and directly using the results of Theorem~\ref{thm:inf-ais} for this
model gives a stronger bound than Proposition~\ref{prop:FrancoisLavet}.
\begin{proposition}\label{prop:POMDP-ais}
Let $\{(\hat b_t, \phi_t)\}_{t=1}^T$ be an $\varepsilon$-sufficient
statistic. Let $\hat{\ALPHABET{Z}}_t = \Delta(\StSp)$ and define the different
components of an AIS as follows:
\begin{itemize}[nosep]
\item history compression functions $\ainfo_t = \hat b_t \circ \phi_t$,
\item \acs{AIS} prediction kernels $\nextinfo_t(\cdot
| \hat z_t, a_t)$, given by
\[
\nextinfo_t(\ALPHABET B | \hat z_t, a_t)
= \sum_{y_{t+1} \in \ALPHABET{\Ob}} \psi(y_{t+1} | \hat z_t, a_t)
\IND_{\ALPHABET B}\{ \aupdate(\hat z_t, y_{t+1}, a_t) \},
\]
where
\[
\psi(y_{t+1} | \hat z_t, a_t) =
\sum_{\st_{t+1} \in \StSp} \sum_{\st_t \in \StSp}
P^y(y_{t+1} | \st_{t+1}) P(\st_{t+1} | \st_t, a_t) \hat z_t(\st_t)
\]
and
\[
\aupdate(\hat z_t, y_{t+1}, a_t)(\st_{t+1}) =
\frac{ \sum_{\st_t \in \StSp} P^y(y_{t+1} | \st_{t+1}) P(\st_{t+1} | \st_t, a_t)
\hat z_t(\st_t) }
{\psi(y_{t+1} | \hat z_t, a_t)},
\]
where $\aupdate$ is the same as the Bayes'-rule based update of the belief
state,
\item reward approximation functions
$\rewinfo_t(\hat z_t, a_t) = \sum_{\st \in \StSp}\hat z_t(\st) r(\st, a_t)$.
\end{itemize}
Then, $\{(\ainfo_t, \nextinfo_t, \rewinfo_t)\}_{t=1}^T$ is an $(\varepsilon
\|r\|_\infty, 3\varepsilon)$-AIS with respect to the bounded-Lipschitz metric.
\end{proposition}
\begin{proof}
We need to equip $\hat{\ALPHABET{Z}} = \Delta(\StSp)$ with a metric in order to define a
bounded-Lipschitz metric over $\Delta(\hat{\ALPHABET{Z}})$. We use the total variation as
the metric and denote it by $d_\mathrm{TV}$. We use $\mathfrak{F}$ to denote $\{ f \colon
\hat{\ALPHABET{Z}} \to \reals : \| f \|_{\infty} + \| f \|_\mathrm{Lip} \le 1\}$ and
denote the corresponding bounded-Lipschitz metric over
$\Delta(\hat{\ALPHABET{Z}})$ by $d_\mathfrak{F}$.
We first establish (AP1). For any time~$t$, realization $h_t$ of
history $H_t$, and action $a_t \in \ALPHABET{\Act}$, we have
\begin{align*}
\bigl|
\EXP[ r(\St_t, a_t) &\mid H_t = h_t, A_t = a_t] -
\rewinfo_t(\ainfo_t(h_t), a_t)
\bigr|
\\
&= \biggl|
\sum_{\st \in \StSp} r(\st, a_t) b_t(\st | h_t)
-
\sum_{\st \in \StSp} r(\st, a_t) \hat b_t(\st | \phi(h_t))
\biggr|
\\
&\stackrel{(a)}\le \|r\|_\infty d_\mathrm{TV}(b_t, \hat b_t)
\\
&\stackrel{(b)}\le \varepsilon \|r\|_\infty
\end{align*}
where $(a)$ follows from~\eqref{eq:minkowski} and the fact that for total
variation distance $\Minkowski_\mathrm{TV}(r) \le \|r\|_\infty$; and $(b)$ follows from the
definition of the $\varepsilon$-sufficient statistic.
Before establishing (AP2), we
note that $\aupdate$ is the Bayes'-rule based update of the true belief;
therefore,
\[
b_{t+1}(\cdot | h_{t+1}) = \aupdate(b_t(\cdot | h_t), y_{t+1},
a_t).
\]
For ease of notation, we use $b_t(\cdot)$ and $\hat b_t(\cdot)$ instead of
$b_t(\cdot | h_t)$ and $\hat b_t(\cdot | \phi_t(h_t))$, when the
conditioning is clear from context.
Now consider $\mu_t$ and $\nu_t$ as defined in the definition of (AP2). In
particular, for any Borel set $\ALPHABET B$,
\begin{align*}
\mu_t(\ALPHABET B)
&= \sum_{y_{t+1} \in \ALPHABET{\Ob}} \psi(y_{t+1} | b_t, a_t)
\IND_{\ALPHABET B}\{ \hat b_{t+1}(\cdot | \phi(h_t, y_{t+1},
a_t)) \}
\\
\nu_t(\ALPHABET B) &= \nextinfo_t(\ALPHABET B | \hat z_t, a_t).
\end{align*}
We also define an additional measure $\xi_t$ given by
\[
\xi_t(\ALPHABET B)
= \sum_{y_{t+1} \in \ALPHABET{\Ob}} \psi(y_{t+1} | b_t, a_t)
\IND_{\ALPHABET B}\{ \aupdate(b_t, y_{t+1}, a_t) \}.
\]
Now, by the triangle inequality
\begin{equation} \label{eq:POMDP-diff}
d_\mathfrak{F}(\mu_t, \nu_t) \le d_\mathfrak{F}(\mu_t, \xi_t) + d_\mathfrak{F}(\xi_t, \nu_t).
\end{equation}
Now consider the first term of~\eqref{eq:POMDP-diff}:
\begin{align}
d_\mathfrak{F}(\mu_t, \xi_t) &= \sup_{f \in \mathfrak{F}} \biggl|
\int_{\hat{\ALPHABET{Z}}} f d\mu_t - \int_{\hat{\ALPHABET{Z}}} f d\xi_t \biggr|
\notag\\
&= \sup_{f \in \mathfrak{F}} \biggl|
\sum_{y_{t+1} \in \ALPHABET{\Ob}} f(\hat b_{t+1}(\cdot | \phi(h_t, y_{t+1}, a_t)))
\psi(y_{t+1} | b_t, a_t)
\notag \\
\displaybreak[1]
&\qquad \qquad
-
\sum_{y_{t+1} \in \ALPHABET{\Ob}} f(b_{t+1}(\cdot | h_t, y_{t+1}, a_t))
\psi(y_{t+1} | b_t, a_t)
\biggr|
\notag \\
&\stackrel{(a)}\le
\sum_{y_{t+1} \in \ALPHABET{\Ob}}
d_\mathrm{TV}(\hat b_{t+1}(\cdot | \phi(h_{t+1})), b_{t+1}(\cdot | h_{t+1}))
\psi(y_{t+1} | b_t, a_t)
\notag\\
&\stackrel{(b)}\le \varepsilon
\label{eq:POMDP-diff-1}
\end{align}
where $(a)$ follows from triangle inequality and the fact that slope of $f$
is bounded by $1$; and $(b)$ follows from the definition of
$\varepsilon$-sufficient statistic (see footnote~\ref{fnt:TV} on
page~\pageref{fnt:TV}).
Now consider the second term of~\eqref{eq:POMDP-diff} (for ease of notation,
we use $b_t(\cdot)$ instead of $b_t(\cdot | h_t)$):
\begin{align}
d_\mathfrak{F}(\xi_t, \nu_t) &= \sup_{f \in \mathfrak{F}} \biggl|
\int_{\hat{\ALPHABET{Z}}} f d\xi_t - \int_{\hat{\ALPHABET{Z}}} f d\nu_t \biggr|
\notag\\
\displaybreak[1]
&= \sup_{f \in \mathfrak{F}} \biggl|
\sum_{y_{t+1} \in \ALPHABET{\Ob}} f(\aupdate(b_t, y_{t+1}, a_t))
\psi(y_{t+1} | b_t, a_t)
-
\sum_{y_{t+1} \in \ALPHABET{\Ob}} f(\aupdate(\hat z_t, y_{t+1}, a_t))
\psi(y_{t+1} | \hat z_t, a_t)
\biggr|
\notag \\
&\stackrel{(c)}\le
\sup_{f \in \mathfrak{F}} \biggl|
\sum_{y_{t+1} \in \ALPHABET{\Ob}} f(\aupdate(b_t, y_{t+1}, a_t))
\psi(y_{t+1} | b_t, a_t)
-
\sum_{y_{t+1} \in \ALPHABET{\Ob}} f(\aupdate(\hat z_t, y_{t+1}, a_t))
\psi(y_{t+1} | b_t, a_t)
\biggr|
\notag \\
\displaybreak[1]
&\quad + \sup_{f \in \mathfrak{F}} \biggl|
\sum_{y_{t+1} \in \ALPHABET{\Ob}} f(\aupdate(\hat z_t, y_{t+1}, a_t))
\psi(y_{t+1} | b_t, a_t)
-
\sum_{y_{t+1} \in \ALPHABET{\Ob}} f(\aupdate(\hat z_t, y_{t+1}, a_t))
\psi(y_{t+1} | \hat z_t, a_t)
\biggr|
\notag \\
&\stackrel{(d)}\le
\sum_{y_{t+1} \in \ALPHABET{\Ob}}
d_\mathrm{TV}( \aupdate(b_t,y_{t+1}, a_t),
\aupdate(\hat z_t, y_{t+1}, a_t))
\psi(y_{t+1} | b_t, a_t)
\notag \\
&\quad +
\Contraction_{\mathfrak{F},\mathrm{TV}}(\aupdate(\hat z_t, \cdot, a_t))
\,
d_\mathrm{TV}(\psi( \cdot | b_t, a_t),
\psi( \cdot | \hat z_t, a_t)),
\label{eq:POMDP-diff-2}
\end{align}
where $(c)$ follows from the triangle inequality; the first step of
$(d)$ follows from an argument similar to step $(a)$
of~\eqref{eq:POMDP-diff-1}; and the second part of $(d)$ follows
from~\eqref{eq:contraction}.
Now, we obtain bounds for both terms of~\eqref{eq:POMDP-diff-2}.
For $y_{t+1} \in \ALPHABET{\Ob}$, define
\begin{align*}
\xi^y_t(y_{t+1} ) &\coloneqq \psi(y_{t+1} | b_t, a_t) =
\sum_{\st_{t+1} \in \StSp} \sum_{\st_t \in \StSp}
P^y(y_{t+1} | \st_{t+1}) P(\st_{t+1} | \st_t, a_t) b_t(\st_t | h_t),
\\
\nu^y_t(y_{t+1} ) &\coloneqq \psi(y_{t+1} | \hat z_t, a_t) =
\sum_{\st_{t+1} \in \StSp} \sum_{\st_t \in \StSp}
P^y(y_{t+1} | \st_{t+1}) P(\st_{t+1} | \st_t, a_t) \hat z_t(\st_t).
\end{align*}
Total variation is also an $f$-divergence\footnote{%
Let $f \colon \reals_{\ge 0} \to \reals$ be a convex function such that
$f(1) = 0$. Then the $f$-divergence between two measures $\mu$ and $\nu$
defined on a measurable space $\ALPHABET X$ is given by
\[
D_f(\mu \| \nu) = \int_{\ALPHABET X} f\Bigl( \frac{d\mu}{d\nu} \Bigr) d\nu.
\]
Total variation is an $f$-divergence with $f(x) = \lvert x - 1 \rvert$
(also see footnote~\ref{fnt:TV} on page~\pageref{fnt:TV}).
\cite{Sriperumbudur2009} showed that total variation is the only non-trivial
IPM which is also an $f$-divergence.}
and, therefore, satisfies the strong data processing inequality.\footnote{Let
$\ALPHABET X$ and $\ALPHABET Y$ be measurable spaces, $\mu$ and $\nu$
be measures on $\ALPHABET X$ and $P \colon \ALPHABET X \to
\Delta(\ALPHABET Y)$ be a stochastic kernel from $\ALPHABET X$ to
$\ALPHABET Y$. We use $\mu P$ to denote the
measure $\mu_{\ALPHABET Y}$ on $\ALPHABET Y$ given by $\mu_{\ALPHABET
Y}(dy) = \int_{\ALPHABET X} P(dy|x) \mu(dx)$. Similar interpretation holds
for $\nu P$. Then, the \emph{strong data processing inequality}
\citep{Sason2019} states that for any
$f$-divergence, \( D_f(\mu P \| \nu P) \le D_f(\mu \| \nu)\).}
Note that
both $\xi^y_t$ and $\nu^y_t$ may be viewed as outputs of a
``channel'' from $\st_t$ to $y_{t+1}$. In case of $\xi^y_t$, the
channel input is distributed according to $b_t(\cdot | h_t)$ and in case
of $\nu^y_t$, the channel input is distributed according to $\hat z_t$.
Therefore, from the data processing inequality,
\begin{equation} \label{eq:data-processing-1}
d_\mathrm{TV}(\xi^y_t, \nu^y_t) \le d_\mathrm{TV}(b_t(\cdot | h_t), \hat z_t)
\le \varepsilon
\end{equation}
where the last inequality follows from the definition of
$\varepsilon$-sufficient statistic.
A similar argument can be used to bound
\(
d_\mathrm{TV}( \aupdate(b_t,y_{t+1}, a_t),
\aupdate(\hat z_t, y_{t+1}, a_t))
\). In particular, we can think of $\aupdate(\cdot, y_{t+1}, a_t)$ as a
channel from $\st_t$ to $\st_{t+1}$. Then, by the data processing
inequality,
\begin{equation}\label{eq:data-processing-2}
d_\mathrm{TV}( \aupdate(b_t,y_{t+1}, a_t),
\aupdate(\hat z_t, y_{t+1}, a_t))
\le d_\mathrm{TV}(b_t, \hat z_t) \le \varepsilon
\end{equation}
where the last inequality follows from the definition of
$\varepsilon$-sufficient statistic.
The final part of~\eqref{eq:POMDP-diff-2} that needs to be characterized is
$\Contraction_{\mathfrak{F}, \mathrm{TV}}(\aupdate(\hat z_t, \cdot, a_t))$.
From~\eqref{eq:F-contraction}
\[
\Contraction_{\mathfrak{F}, \mathrm{TV}}(\aupdate(\hat z_t, \cdot, a_t)) =
\sup_{f \in \mathfrak{F}} \Minkowski_\mathrm{TV}(f \circ \aupdate(\hat z_t, \cdot, a_t))
\le
\sup_{f \in \mathfrak{F}} \| f \circ \aupdate(\hat z_t, \cdot, a_t) \|_{\infty}
\le 1.
\]
Substituting this bound along with~\eqref{eq:data-processing-1}
and~\eqref{eq:data-processing-2} in~\eqref{eq:POMDP-diff-2}, we get
$d_\mathfrak{F}(\xi_t, \nu_t) \le 2\varepsilon$. Substituting this along
with~\eqref{eq:POMDP-diff-1} in~\eqref{eq:POMDP-diff}, we get that
$d_\mathfrak{F}(\mu_t, \nu_t) \le 3\varepsilon$. Hence (AP2) is satisfied.
\end{proof}
\begin{lemma}\label{lem:POMDP-bound}
For any POMDP,
\[
\Minkowski_\mathfrak{F}(V) = \| V \|_\infty + \| V \|_\mathrm{Lip} \le
\frac{2 \| r \|_\infty }{1 - \gamma}.
\]
\end{lemma}
\begin{proof}
The result follows immediately from the bound on the sup norm of the value
function (Lemma~\ref{lem:MS-V-bound}) and the bound on the Lipschitz constant
of the value function (Lemma~1 of \cite{Lee2008}).
\end{proof}
\begin{proposition}\label{prop:POMDP-bound}
Let $\hat \pi$, $\pi$, $V$, and $V^\pi$ be as defined in
Proposition~\ref{prop:FrancoisLavet}. Then, for any initial history $h_1 \in
\ALPHABET{\His}_1$,
\[
\bigl| V(h_1) - V^\pi(h_1) \bigr| \le
\frac{2 \varepsilon \| r \|_{\infty} }{(1-\gamma)}
+
\frac{6 \gamma \varepsilon \| r\|_{\infty}}{(1-\gamma)^2}.
\]
\end{proposition}
\begin{proof}
This follows immediately from Theorem~\ref{thm:inf-ais},
Proposition~\ref{prop:POMDP-ais}, and Lemma~\ref{lem:POMDP-bound}.
\end{proof}
Note that the error bounds of Propositions~\ref{prop:FrancoisLavet}
and~\ref{prop:POMDP-bound} have similar structure but the key difference is
that the bound of Proposition~\ref{prop:POMDP-bound} is tighter by a factor of
$1/(1-\gamma)$.
\section{Conclusion}
In this paper, we present a theoretical framework for
approximate planning and learning in partially observed systems. Our
framework is based on the fundamental notion of information state. We
provide two equivalent definitions of information state. An information
state is a function of history which is sufficient to compute the expected
reward and predict its next value. Equivalently, an information state is a
function of the history which can be recursively updated and is sufficient
to compute the expected reward and predict the next observation. We show
that an information state always leads to a dynamic programming
decomposition and provide several examples of simplified dynamic programming
decompositions proposed in the literature which may be viewed as specific
instances of information states.
We then relax the definition of an information state to describe an
\acf{AIS}, which is a function of the history that approximately satisfies
the properties of the information state. We show that an \acs{AIS} can be
used to identify an approximately optimal policy with the approximation
error specified in terms of the ``one-step'' approximation errors in the
definition of the \acs{AIS}. We present generalizations of \acs{AIS} to
setups with observation and action compression as well as to multi-agent
systems. We show that various approximation approaches
for both fully and partially observed setups proposed in the literature may
be viewed as special cases of \acs{AIS}.
One of the salient features of the \ac{AIS} is that it is defined in terms of
properties that can be estimated from data, and hence the corresponding
\ac{AIS} generators can be learnt from data. These can then be used as
history representations in partially observed reinforcement learning (PORL)
algorithms. We build on this idea to present policy gradient algorithms
which learn an \acs{AIS} representation and an optimal policy and/or
action-value function using multi time-scale stochastic gradient descent.
We present detailed numerical experiments which compare the performance of
\acs{AIS}-based PORL algorithms with a state-of-the-art PORL algorithm for
three classes of partially observed problems---small, medium, and large scale problems---and find that \acs{AIS}-based PORL outperforms the state-of-the-art baseline in most cases.
We conclude by observing that in this paper we restricted attention to the
simplest classes of algorithms but the same idea can be extended to develop
\acs{AIS}-based PORL algorithms which use value-based approaches such as
Q-learning and its improved variants such as DQN, DDQN, distributional RL,
etc. Finally, we note that the \acs{AIS} representation includes a model of
the system, so it can be used as a component of model-based reinforcement
learning algorithms such as Dyna~\cite[Sec 8.2, page 161]{SuttonBarto_2018}.
Such an approach will provide anytime guarantees on the approximation error
which will depend on the ``one-step'' approximation error of the current
\acs{AIS}-representation. Therefore, we believe that \acs{AIS} presents a
systematic framework to reason about learning in partially observed
environments.
\endinput
\section{Comparison with the results of
\texorpdfstring{\cite{chandak2020lifelong}}{Chandak et al. (2020)} on lifelong
learning for time-varying action spaces}\label{app:ais-ac-only}
Lifelong learning refers to settings where a reinforcement learning agent
adapts to a time-varying environment. There are various models for lifelong
learning and \cite{chandak2020lifelong} recently proposed a model where the
action spaces change over time. The environment has an underlying finite state
space $\StSp$, finite action space $\ALPHABET{\Act}$, and reward $r \colon \StSp \to
\reals$. Note that the reward depends only on the current state and not the
current action.
It is assumed that there is an underlying finite dimensional representation
space $\ALPHABET E$ and for any feasible action $a \in \ALPHABET{\Act}$, there is an
underlying representation $e \in \ALPHABET E$. This relationship is captured
via an invertible map $\phi$, i.e., $a = \phi(e)$. There is a transition
kernel $P \colon \StSp \times \ALPHABET E \to \Delta(\StSp)$ with respect to
this representation space. This induces a transition kernel $P^a \colon \StSp
\times \ALPHABET{\Act} \to \Delta(\StSp)$ with respect to
the action, where $P^a(s'|s,a) = P(s'|s,\phi^{-1}(a))$. It is assumed that the
transition kernel $P$ is $\rho$-Lipschitz, i.e., for all $s \in \StSp$
and $e_i, e_j \in \ALPHABET E$,
\[
\bigl\| P(\cdot | s, e_i) - P(\cdot | s, e_j) \bigr\|_{1} \le
\rho \| e_i - e_j \|_{1}.
\]
\cite{chandak2020lifelong} consider infinite horizon discounted
setup with discount factor $\gamma$.
Initially, the RL agent is not aware of the action space and learns about the
actions in discrete stages indexed by $k \in \integers_{\ge 0}$. At stage~$k$,
the agent becomes aware of a subset $\ALPHABET U_k$ of $\ALPHABET E$, where
$\ALPHABET U_k \supseteq \ALPHABET U_{k-1}$. Thus, the environment at stage
$k$ may be modelled as an MDP $\mathcal{M}_k = \{ \StSp, \ALPHABET{\Act}_k, P^{a}_k, r \}$,
where $\ALPHABET{\Act}_k = \{ \phi(e) : e \in \ALPHABET U_k \}$ and $P^{a}_k(\st' | \st,
a) = P(\st' | \st, \phi^{-1}(a))$.
Two main results are established in \cite{chandak2020lifelong}. The first one
is the following.
\begin{proposition}[Theorem 1 of \cite{chandak2020lifelong}]\label{prop:chandak1}
Let $\pi_k$ and $V^{\pi_k}$ denote the optimal policy for MDP
$\mathcal{M}_k$ and its performance. Let $V$ denote the value function for
the hypothetical model when the agent has access to all actions. Let
\[
\eta_k = \sup_{a_i, a_j \in \ALPHABET{\Act}_k}
\| \phi^{-1}(a_i) - \phi^{-1}(a_j) \|_1.
\]
Then, for any $s \in \StSp$,
\[
V(s) - V^{\pi_k}(s) \le \frac{\gamma \rho
\eta_k \| r \|_{\infty}}{(1-\gamma)^2}.
\]
\end{proposition}
We now show that this result may be viewed as a corollary of
Corollary~\ref{cor:ais-ac-only}. In particular, we have the following.
\begin{lemma}\label{lem:chandak1}
The action set $\ALPHABET{\Act}_k$ may be viewed as a ``quantization'' of $\ALPHABET{\Act}$
using a function $\aquant \colon \ALPHABET{\Act} \to \ALPHABET{\Act}_k$, which maps any
action $a = \phi(e) \in \ALPHABET{\Act}$ to action $a' = \phi(e') \in \ALPHABET{\Act}_k$ such
that $e' = \arg \min_{e'' \in \ALPHABET U_k} \| e - e'' \|_{1}$. Then,
$\aquant$ is a $(0, \rho \eta_k)$-action-quantizer with respect to the
total variation distance.
\end{lemma}
\begin{proof}
Since the per-step reward does not depend on the action, there is no
approximation error in the reward and, therefore, $\varepsilon = 0$. Now
note that for any $s \in \StSp$ and $a \in \ALPHABET{\Act}$, we have
\[
d_{\mathrm{TV}}(P^a(\cdot | s, a), P^a(\cdot | s, \aquant(a))) \le
\sup_{e_i, e_j \in \ALPHABET U_k}
\| P(\cdot | s, e_i) - P(\cdot | s, e_j) \|_{1}
\le \rho \eta_k
\]
where the last inequality follows from the $\rho$-Lipschitz continuity of
$P$ and the definition of $\eta_k$. Thus, $\delta = \rho \eta_k$.
\end{proof}
\begin{proposition}\label{prop:chandak2}
Let $\pi_k$, $V^{\pi_k}$ and $V$ be as defined in
Proposition~\ref{prop:chandak1}.
Then, for any $s \in \StSp$,
\[
V(s) - V^{\pi_k}(s) \le \frac{\gamma \rho
\eta_k \Span(r)}{2 (1-\gamma)^2}.
\]
\end{proposition}
\begin{proof}
The result can be established from the following observations: (i)~The
result of Corollary~\ref{cor:ais-ac-only} continues to hold in the infinite-horizon
discounted reward setup with $\alpha_t$ replaced by $(\varepsilon + \gamma
\Minkowski_\mathfrak{F}(\hat V^*) \delta)/(1-\gamma)$. This can be established in a
manner similar to Theorem~\ref{thm:inf-ais}.
(ii)~From Lemma~\ref{lem:MS-V-bound}, we know that for total variation
distance $\Minkowski_\mathfrak{F}(\hat V^*) \le \frac12 \Span(r)/(1 - \gamma)$.
The result follows from substituting the values of
$(\varepsilon,\delta)$ from Lemma~\ref{lem:chandak1} and the value of
$\Minkowski_\mathfrak{F}(\hat V^*)$ from~(ii) in~(i).
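Explicitly, substituting $\varepsilon = 0$, $\delta = \rho \eta_k$, and
$\Minkowski_\mathfrak{F}(\hat V^*) \le \Span(r)/(2(1-\gamma))$ gives
\[
V(s) - V^{\pi_k}(s)
\le \frac{\varepsilon + \gamma \Minkowski_\mathfrak{F}(\hat V^*) \delta}{1-\gamma}
\le \frac{\gamma \rho \eta_k}{1-\gamma} \cdot \frac{\Span(r)}{2(1-\gamma)}
= \frac{\gamma \rho \eta_k \Span(r)}{2(1-\gamma)^2}.
\]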
\end{proof}
Note that if the reward $r(s)$ lies in a symmetric interval, say
$[-R_{\max}, R_{\max}]$, as is assumed in \cite{chandak2020lifelong}, the result
of Proposition~\ref{prop:chandak2} matches that of Proposition~\ref{prop:chandak1}.
The second result of \cite{chandak2020lifelong} is for the setting when the
mapping $\phi$ is not known. They assume that the agent selects some finite
dimensional representation space $\hat{\ALPHABET E}$ and, for every~$k$,
parameterizes the policy using two components: (i) a map $\beta \colon \StSp
\to \Delta(\hat{\ALPHABET E})$
and (ii)~an estimator $\hat{\phi}_k \colon \hat{\ALPHABET E} \to
\Delta(\ALPHABET{\Act}_k)$. Then the action at any state $s \in \StSp$ is chosen by
first sampling $\hat e \sim \beta(s)$ and then choosing the action $a \sim
\hat{\phi}_k(\hat e)$.
The second main result in \cite{chandak2020lifelong} is the
following.\footnote{This result is stated slightly differently in
\cite{chandak2020lifelong} using an \emph{inverse dynamics}
function $\varphi \colon \StSp \times \StSp \to \Delta(\hat {\ALPHABET E})$,
where $e \sim \varphi(s, s')$ is a prediction of a latent action $e$ which
might have caused the transition from $s$ to $s'$. However, the bounds hold for the
simpler form presented here as well.}
\begin{proposition}[Theorem 2 of \cite{chandak2020lifelong}]\label{prop:chandak3}
Let $\hat \pi_k$ denote the best overall policy that
can be represented using the above structure, $V^{\hat \pi_k}$ denote its
performance, and $V$ denote the value function when the agent has access to
the complete model. Suppose there exist $\zeta_k \in \reals_{\ge 0}$, $\beta \colon \StSp
\to \Delta(\hat{\ALPHABET E})$, and $\hat \phi_k \colon \hat {\ALPHABET
E} \to \Delta(\ALPHABET{\Act}_k)$ such that
\[
\sup_{s \in \StSp, a_k \in \ALPHABET{\Act}_k}
\mathrm{KL}( P^a(\cdot | s, a_k) \| P^a(\cdot | s, \hat A) ) \le \zeta_k^2/2,
\]
where $\hat A \sim \hat \phi_k(\hat E)$ and $\hat E \sim \beta(s)$.
Then, for any $s \in \StSp$,
\[
V(s) - V^{\hat \pi_k}(s) \le \frac{\gamma (\rho \eta_k + \zeta_k) \| r \|_{\infty}}{(1-\gamma)^2}.
\]
\end{proposition}
We now show that this result may be viewed as a corollary of
Corollary~\ref{cor:ais-ac-only}. In particular, we have the following.
\begin{lemma}\label{lem:chandak2}
The representation space $\hat {\ALPHABET E}$ may be viewed as a ``compression'' of
the ``quantized'' action set $\ALPHABET{\Act}_k$. In particular, let $\aquant \colon
\ALPHABET{\Act} \to \ALPHABET{\Act}_k$ be as defined in Lemma~\ref{lem:chandak1}. Then, the
function $\hat \phi_k^{-1} \circ \aquant$ is a $(0, \rho \eta_k + \zeta_k)$-action-quantizer
with respect to the total variation distance.
\begin{proof}
As argued in the proof of Lemma~\ref{lem:chandak1}, since the reward
function does not depend on action, $\varepsilon = 0$. Now, recall that from
Pinsker's inequality, for any distribution $\mu$ and $\nu$, $d_\mathrm{TV}(\mu, \nu)
\le \sqrt{ 2 D_{\mathrm{KL}}(\mu \| \nu)}$. Thus,
\[
\sup_{s \in \StSp, a_k \in \ALPHABET{\Act}_k}
d_\mathrm{TV}( P^a(\cdot | s, a_k) , P^a(\cdot | s, \hat A) ) \le \zeta_k
\]
where $\hat A \sim \hat \phi_k(\hat E)$ and $\hat E \sim \beta(s)$.
Now, by the triangle inequality, for any $s \in \StSp$ and $a \in \ALPHABET{\Act}$
\begin{align*}
d_\mathrm{TV}( P^a(\cdot | s, a) , P^a(\cdot | s, \hat A) )
&\le
d_\mathrm{TV}( P^a(\cdot | s, a) , P^a(\cdot | s, \aquant(a)) ) +
d_\mathrm{TV}( P^a(\cdot | s, \aquant(a)) , P^a(\cdot | s, \hat A) )
\\
&\le
\rho \eta_k + \zeta_k,
\end{align*}
Thus, $\delta = \rho \eta_k + \zeta_k$.
\end{proof}
\begin{proposition}\label{prop:chandak4}
Let $\hat \pi_k$, $V^{\hat \pi_k}$ and $V$ be as defined in
Proposition~\ref{prop:chandak3}.
Then, for any $s \in \StSp$,
\[
V(s) - V^{\hat \pi_k}(s) \le \frac{\gamma (\rho \eta_k + \zeta_k) \Span(r)}{2 (1-\gamma)^2}.
\]
\end{proposition}
\begin{proof}
The proof is similar to the proof of Proposition~\ref{prop:chandak2}, where
we replace the values of Lemma~\ref{lem:chandak1} with those of
Lemma~\ref{lem:chandak2}.
\end{proof}
As before, if the reward $r(s)$ lies in a symmetric interval, say
$[-R_{\max}, R_{\max}]$, as is assumed in \cite{chandak2020lifelong}, the result
of Proposition~\ref{prop:chandak4} matches that of Proposition~\ref{prop:chandak3}.
\section{Introduction}
Reinforcement learning (RL) provides a conceptual framework for designing
agents which learn to act optimally in an unknown environment. RL has been
successfully used in various applications including robotics, industrial
automation, finance, healthcare, and natural language processing. The success
of RL is based on a solid foundation of combining the theory of exact and
approximate Markov decision processes (MDPs) with iterative algorithms that
are guaranteed to learn an exact or approximate action-value function and/or
an approximately optimal policy \citep{SuttonBarto_2018,
BertsekasTsitsiklis1996}. However, for the most part, the research on RL
theory is focused primarily on systems with full state observations.
In various applications including robotics, finance, and healthcare, the agent
only gets a partial observation of the state of the environment. Such partially
observed systems are mathematically modeled as partially observable Markov
decision processes (POMDPs) and there is a fairly good understanding of how to
identify optimal or approximately optimal policies for POMDPs when the system
model is known to the agent.
Since the initial work on POMDPs \citep{Astrom_1965}, it has been known that POMDPs can be
modeled as fully observed MDPs by considering the belief state (i.e., the
posterior belief of the unobserved state given all the observations made by the
agent) as an information state. Therefore, the theory and algorithms for exact
and approximate planning for MDPs are also applicable to POMDPs. One
computational challenge is that the belief state is continuous valued.
However, the value function based on the belief state has a nice property---it
is piecewise linear and a convex function of the belief state---which can be
exploited to develop efficient algorithms to identify the optimal policy.
Building on the one-pass algorithm of \citep{SmallwoodSondik_1973}, various
such algorithms have been proposed in the literature including the linear
support algorithm \citep{Cheng1988}, the witness algorithm
\citep{CassandraKaelblingLittman_1994}, incremental pruning \citep{Zhang1996,
Cassandra1997}, the duality based approach \citep{Zhang2009}, and others. Since
POMDPs are PSPACE-complete \citep{Papadimitriou1999}, the worst case complexity of
such algorithms is exponential in the size of the unobserved state space. To
overcome the worst case complexity of finding an optimal policy, various
point-based methods have been proposed in the literature which obtain an
approximate solution by sampling from the belief space \citep{Pineau2003,
Smith2004, Spaan2005, Shani2007, Kurniawati2008, Poupart2011}; see
\cite{ShaniPineauKaplow_2013} for an overview and comparison.
However, the exact and approximate planning results are of limited value for
partially observed reinforcement learning (PORL) because they are based on the
belief state, whose construction requires knowledge of the system model. So, when an agent is operating in
an unknown environment, it cannot construct a belief state based on its
observations. An attempt to circumvent this difficulty was to use
memoryless policies (i.e., choose the action based only on the current
observation) \citep{Littman1994a,Loch1998,Jaakkola1998, Williams1999, Li2011,
Azizzadenesheli2016}. A related idea is to choose the action based on $k$
recent observations \citep{Littman1994a, Loch1998} or choose the action
based on a memory which is updated using a finite state machine
\citep{Whitehead1995,McCallum_1993,Hansen1997,Meuleau1999,Amato2010}. Such
finite memory policies are also amenable to policy search methods
\citep{Hansen1998,Baxter2001,Poupart2004}. However, there are no approximation
guarantees available for such methods.
Another approach taken in the literature is to use a Bayesian RL framework
\citep{Ross2008,Poupart2008,Ross2011,Katt2019} where a posterior distribution
over the models of the environment is maintained; at each step, a model is
sampled from the posterior and the corresponding optimal policy is executed.
Approximation error bounds in using such methods are derived in
\cite{Ross2011}.
A completely different class of model-based RL algorithms consists of methods
using predictive state representations (PSRs) \citep{LittmanSutton_2002,
SinghLittmanJongPardoeStone_2003}. PSRs are constructed only based on
observational data so they can easily be adapted to the RL setup. There have
been a number of papers which use PSRs to propose model based RL algorithms
\citep{James2004,RosencrantzGordonThrun_2004, Boots2011, Hamilton2014,
Kulesza2015, Kulesza2015a, Jiang2016}.
Inspired by the recent successes of deep reinforcement learning, there are
many recent results which suggest using RNNs (Recurrent Neural
Networks~\citep{Rumelhart1986}) or LSTMs (Long Short-Term
Memories~\citep{Hochreiter1997}) for modeling the action-value
function and/or the policy function
\citep{bakker2002reinforcement,WierstraFoersterPetersSchmidhuber_2007,
WierstraFoersterPetersSchmidhuber_2010,HausknechtStone_2015,HeessHuntLillicrapSilver_2015,Zhu2017,ha2018world,
BaiseroAmato_2018,Igl2018,Zhang2019}. It is shown that these approaches perform well on empirical benchmarks, but there are no approximation guarantees available for such methods.
Our main contribution is to present a rigorous approach for PORL which is
based on a principled theory of approximate planning for POMDPs that we
develop.
In particular:
\begin{enumerate}
\item In Sec.~\ref{sec:prelim}, we formalize the notion of information state for partially observed
systems and provide equivalent methods of identifying information
states.
\item In Secs.~\ref{sec:ais} and~\ref{sec:infinite}, we present the notion
of an \acf{AIS} as a compression of history which approximately
satisfies the properties of an information state. The two equivalent
formulations of information state lead to two equivalent formulations of
\acs{AIS}. We present bounds on the loss in performance (compared to the
optimal history dependent policy) when planning using an \acs{AIS}. We
generalize these results to cover approximation in action spaces as
well. We show that various existing approximation results for MDPs and
POMDPs in the literature may be viewed as special cases of \acs{AIS}
(and in some cases, our bounds are tighter than those in the
literature).
\item In Sec.~\ref{sec:dec}, we present a theory for approximate planning
for decentralized (i.e., multi-agent) partially observed systems using a
common-information based \acs{AIS}.
\item In Secs.~\ref{sec:RL} and~\ref{sec:experiments}, we then present
policy gradient based online RL algorithms for PORL which learn an
\acs{AIS} representation using multi-timescale stochastic gradient
descent. We provide detailed numerical experiments on several classes
of partially observed environments ranging from classical low-dimensional toy
environments to moderate-dimensional environments and high-dimensional grid-world
environments.
\end{enumerate}
\endinput
\section{Experiments}\label{sec:experiments}
We perform numerical experiments to check the effectiveness of the AIS-based PORL
algorithms proposed in the previous section. The code for all AIS experiments is available in \cite{ais_git_repo}. We consider three classes of
POMDP environments, which have increasing difficulty in terms of the dimension
of their state and observation spaces:
\begin{enumerate}
\item Low-dimensional environments (Tiger, Voicemail, and Cheese Maze)
\item Moderate-dimensional environments (Rock Sampling and Drone Surveillance)
\item High-dimensional environments (different variations of MiniGrid)
\end{enumerate}
For each environment, we use the actor-only framework and learn an AIS based
on (AP1), (AP2a), and (AP2b). There are four components
of the corresponding \acs{AIS}-generator: the history compression function
$\ainfo$, the \acs{AIS} update function $\aupdate$, the reward prediction
function $\rewinfo$, and the observation prediction kernel $\nextobs$.
We model $\ainfo$ as an LSTM, where the memory update unit of the LSTM acts as
$\aupdate$. We model $\rewinfo$, $\nextobs$, and the policy $\hat{\pi}$ as
feed-forward neural networks. A block diagram of the network architecture is
shown in Fig.~\ref{fig:network} and the details of the networks and the
hyperparameters are presented in
Appendix~\ref{sec:network}. To avoid over-fitting, we use the same network
architecture and hyperparameters for all environments in the same difficulty
class.
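To make the architecture concrete, the following is a minimal PyTorch-style
sketch of the \acs{AIS}-generator components and the policy head. The module
names, layer sizes, and the choice of a softmax-style observation predictor are
illustrative assumptions and do not correspond to the exact configuration
reported in Appendix~\ref{sec:network}.
\begin{verbatim}
import torch
import torch.nn as nn

class AISGenerator(nn.Module):
    """Sketch of an AIS generator: history compression (LSTM), AIS update
    (the LSTM memory update), reward prediction, and observation prediction."""
    def __init__(self, obs_dim, act_dim, ais_dim=128):
        super().__init__()
        # The LSTM consumes (observation, previous action, reward); its hidden
        # state plays the role of the AIS z-hat_t.
        self.rnn = nn.LSTM(obs_dim + act_dim + 1, ais_dim, batch_first=True)
        # r-hat(z, a): predicts the mean per-step reward.
        self.reward_head = nn.Sequential(
            nn.Linear(ais_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        # nu^y(. | z, a): logits of a distribution over the next observation.
        self.obs_head = nn.Sequential(
            nn.Linear(ais_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, obs_dim))

    def forward(self, obs_act_rew_seq):
        ais_seq, _ = self.rnn(obs_act_rew_seq)   # (batch, T, ais_dim)
        return ais_seq

    def predict(self, ais, action_onehot):
        za = torch.cat([ais, action_onehot], dim=-1)
        return self.reward_head(za), self.obs_head(za)

class Policy(nn.Module):
    """pi-hat(a | z-hat): feed-forward policy over the AIS."""
    def __init__(self, ais_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ais_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))

    def forward(self, ais):
        return torch.distributions.Categorical(logits=self.net(ais))
\end{verbatim}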
\begin{figure}[!tb]
\centering
\includegraphics{Figures/ais/nns}
\caption{Network architecture for PORL using AIS.}
\label{fig:network}
\end{figure}
We repeat each experiment for multiple random seeds and plot the median value
along with the uncertainty band from the first to the third quartile. For all
environments, we compare our performance with a baseline which uses an
actor-critic algorithm where both the actor and critic are modeled using LSTM
and the policy parameters are updated using PPO. This architecture was
proposed as a baseline for the Minigrid environments in
\cite{chevalier2018babyai}. The details of the baseline architecture are
presented in Appendix~\ref{sec:network}.
To evaluate the performance of the policy during training for AIS-based PORL, a
separate set of rollouts is carried out at fixed intervals of time steps and
the mean return over these rollouts is reported. For the PPO baseline, a number of
parallel actors are used during training, and once episodes are completed,
their returns are stored in a list. A fixed number (based on the number of parallel actors) of past episodes is
used to evaluate the mean performance of the current policy during
training. See Appendix~\ref{sec:network} for details.
For the low and moderate dimensional environments, we compare the performance with
the best performing planning solution obtained from the JuliaPOMDP repository
\citep{JuliaPOMDPs}. For the high-dimensional environments, finding a planning solution is intractable, so we only compare with the PPO baseline mentioned previously.
\subsection{Low-dimensional environments}
In these POMDP environments, the size of the unobserved state space is less
than about 10 and the planning solution can be easily obtained using standard
POMDP solvers.
\begin{figure}[!thb]
\centering
\begin{subfigure}[t]{0.475\textwidth}
\includegraphics[width=\textwidth]{Figures/ais/final_plots/LowDim/Tiger-v0.png}
\caption{Tiger}
\end{subfigure}%
\hfill
\begin{subfigure}[t]{0.475\textwidth}
\includegraphics[width=\textwidth]{Figures/ais/final_plots/LowDim//Voicemail-v0.png}
\caption{Voicemail}
\end{subfigure}%
\begin{subfigure}[t]{0.475\textwidth}
\includegraphics[width=\textwidth]{Figures/ais/final_plots/LowDim/CheeseMaze-v0.png}
\caption{Cheese Maze}
\end{subfigure}
\caption{Comparison of AIS-based actor only PORL algorithm with LSTM+PPO
baseline for low-dimensional environments (for 10 random seeds).}
\label{fig:low}
\end{figure}
\begin{enumerate}
\item \textsc{Tiger:} The Tiger environment is a sequential
hypothesis testing task proposed in \cite{KaelblingLittmanCassandra_1998}.
The environment consists of two doors, with a tiger behind one door and a
treasure behind the other. The agent can either perform a \textsc{listen}
action, which has a small negative reward of $-1$ and gives a noisy
observation about the location of the tiger, or the agent can open one of
the doors. Opening the door with the treasure gives a reward of $+10$
while opening the door with a tiger gives a large negative reward of
$-100$. After opening a door, the environment is reset.
\item \textsc{Voicemail:} The Voicemail environment is also a sequential
    hypothesis testing task proposed in \cite{WilliamsYoung_2007}. This
    environment models a dialog system for managing voicemails. The agent can
    either perform an \textsc{ask} action, which has a small negative reward of
$-1$ and gives a noisy observation about the intent of the user, or the
agent can execute \textsc{save} or \textsc{delete}. Choosing a
\textsc{save}/\textsc{delete} action which matches the intent of the user
gives a reward of $+5$. The agent receives a negative reward of $-20$ for
action \textsc{delete} when the user intent is
\textsc{save}, while choosing action \textsc{save} when the user intent
is \textsc{delete} gives a smaller but still significant negative reward
of $-10$. Since the user prefers \textsc{save} over \textsc{delete},
the initial belief is given by $[0.65,0.35]$ for \textsc{save} and
\textsc{delete} respectively. After taking a \textsc{save}/\textsc{delete}
action, the agent moves on to the next voicemail message.
\item
\parbox[t]{\dimexpr\textwidth-\leftmargin}{%
\vspace{-2.5mm}
\begin{wrapfigure}[7]{r}{0.25\textwidth}
\centering
\vspace{-1.5\baselineskip}
\resizebox{0.9\linewidth}{!}{%
\begin{mpost}[mpsettings={input metafun;}]
ux := 0.75cm;
uy := 0.75cm;
save box;
path box;
box := fullsquare xysized (ux, uy) ;
draw box;
label(\btex $1$ etex, origin);
draw box shifted (1ux, 0);
label(\btex $2$ etex, (1ux, 0));
draw box shifted (2ux, 0);
label(\btex $3$ etex, (2ux, 0));
draw box shifted (3ux, 0);
label(\btex $2$ etex, (3ux, 0));
draw box shifted (4ux, 0);
label(\btex $4$ etex, (4ux, 0));
draw box shifted (0ux, -1uy);
label(\btex $5$ etex, (0ux, -1uy));
draw box shifted (2ux, -1uy);
label(\btex $5$ etex, (2ux, -1uy));
draw box shifted (4ux, -1uy);
label(\btex $5$ etex, (4ux, -1uy));
draw box shifted (0ux, -2uy);
label(\btex $6$ etex, (0ux, -2uy));
draw image (draw box shifted (2ux, -2uy)) anglestriped (1, 45, 2) ;
fill fullsquare xysized (0.5ux, 0.5uy) shifted (2ux, -2uy)
withcolor white;
label(\btex $7$ etex, (2ux, -2uy));
draw box shifted (4ux, -2uy);
label(\btex $6$ etex, (4ux, -2uy));
\end{mpost}}
\end{wrapfigure}
\textsc{CheeseMaze:} The CheeseMaze environment is a POMDP with masked
states proposed in \cite{McCallum_1993}. The environment consists of
11~states and 7~observations as shown on the right. The objective is to
reach the goal state, which is indicated by observation~$7$. The agent
receives a reward of $+1$ only when the goal state is reached.
}
\end{enumerate}
For all three environments, we compare the performance of AIS-based PORL with
the LSTM+PPO baseline, described earlier. We also compare
with the best performing planning solution from the JuliaPOMDP repository
\citep{JuliaPOMDPs}. The results are
presented in Fig.~\ref{fig:low}, which shows both AIS-based PORL and LSTM+PPO
converge close to the planning solutions relatively quickly.\footnote{The
  performance of all learning algorithms for \textsc{CheeseMaze} is better
  than that of the best planning solution. We solved the \textsc{CheeseMaze} model
  with other solvers available in JuliaPOMDP \cite{JuliaPOMDPs}, and all of
  these solutions performed worse than the solution obtained by incremental
  pruning presented here.}
\subsection{Moderate-dimensional environments}
\begin{figure}[!ht]
\centering
\hfill
\begin{subfigure}[t]{0.475\textwidth}
\includegraphics[width=\textwidth]{Figures/ais/final_plots/ModDim/RockSampling-v0.png}
\caption{Rock Sampling} \label{fig:rocksampling_withoutprior_results}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.475\textwidth}
\includegraphics[width=\textwidth]{Figures/ais/final_plots/ModDim//DroneSurveillance-v0.png}
\caption{Drone Surveillance} \label{fig:dronesurveillance_results}
\end{subfigure}
\hfill
\caption{Comparison of AIS-based actor only PORL algorithm with LSTM+PPO
baseline for moderate-dimensional environments (for 10 random seeds).} \label{fig:moderate}
\end{figure}
In these environments, the size of the unobserved state space is moderately large
(of the order of $10^2$ to $10^3$ unobserved states) and the optimal planning
solution cannot be easily obtained using standard POMDP solvers. However, an
approximate planning solution can be easily obtained using standard
approximation algorithms for POMDPs.
\begin{enumerate}
\item
\parbox[t]{\dimexpr\textwidth-\leftmargin}{%
\vspace{-2.5mm}
\begin{wrapfigure}[9]{r}{0.25\textwidth}
\centering
\vspace{-1\baselineskip}
\includegraphics[width=\linewidth]{Figures/ais/env_rendered/juliaPOMDP_envs/RockSample-Julia.png}
\end{wrapfigure}
\textsc{RockSample:} RockSample is a scalable POMDP environment
introduced in \cite{Smith2004}, which models rover science exploration.
The RockSample$(n,k)$ environment consists of an $n \times n$ grid with $k$
rocks. The rocks are at known positions. Some of the rocks, labeled
\textsc{good}, have scientific value; the other rocks, labeled
\textsc{bad}, do not. Sampling a rock is expensive and the agent has a
noisy long-range sensor to help determine if a rock is \textsc{good}
before choosing to approach and sample it. At each stage, the agent can
choose from $k+5$ actions: \textsc{north}, \textsc{south}, \textsc{east},
\textsc{west}, \textsc{sample}, \textsc{check}\textsubscript{1}, \ldots,
\textsc{check}\textsubscript{$k$}. The first four are deterministic
single-step motion actions. The \textsc{sample} action samples the rock at
the current location; if the rock is \textsc{good}, there is a reward of
$+20$ and the rock becomes \textsc{bad} (so that no further reward can be
gained from sampling it); if the rock is \textsc{bad}, there is a negative
reward of $-10$. The right edge of the map is a terminal state and
reaching it gives a reward of $+10$. In our experiments, we use a
RockSample$(5,3)$ environment.
}
\item
\parbox[t]{\dimexpr\textwidth-\leftmargin}{%
\vspace{-2.5mm}
\begin{wrapfigure}[9]{r}{0.25\textwidth}
\centering
\vspace{-1\baselineskip}
\includegraphics[width=\linewidth]{Figures/ais/env_rendered/juliaPOMDP_envs/Drone-Julia.png}
\end{wrapfigure}
\textsc{DroneSurveillance:} DroneSurveillance is a POMDP model of
deploying an autonomous aerial vehicle in a partially observed, dynamic,
indoor environment introduced in \cite{Svorenova2015}. The environment is a
$5 \times 5$ grid with two agents: a ground agent which moves randomly and
an aerial agent, whose motion has to be controlled. The aerial agent
starts at the bottom-left cell and has to reach the upper-right cell (the
goal state) without being in the same location as the ground agent. The
ground agent cannot enter the start or goal states. The aerial agent has a
downward facing camera which can view a $3\times3$ grid centered at its
current location and it can perfectly see the location of the ground agent
if it is in this view. At each stage, the aerial agent may choose from $5$
actions: \textsc{north}, \textsc{south}, \textsc{east}, \textsc{west},
\textsc{hover}. The first four are deterministic single-step motion
actions and the \textsc{hover} action keeps the aerial vehicle at its
current position. Reaching the goal gives a reward of $+1$ and ends the
episode. If both agents are in the same cell, there is a negative reward
of $-1$ and the episode ends.
}
\end{enumerate}
The visualizations above are taken from the JuliaPOMDP environments
\citep{JuliaPOMDPs}. For both environments, we compare the performance of
AIS-based PORL with the LSTM+PPO baseline described earlier. We also compare
with the best performing planning solution from the JuliaPOMDP repository
\citep{JuliaPOMDPs}. The results are shown in Fig.~\ref{fig:moderate} which
shows that both AIS-based PORL algorithms converge close to the best planning
solution in both environments. The performance of LSTM+PPO is similar in
\textsc{DroneSurveillance}, but LSTM+PPO gets stuck in a local minimum in
\textsc{RockSample}.
\subsection{High-dimensional environments} \label{subsec:Minigrid_Experiments}
We use the MiniGrid environments from the BabyAI platform
\citep{gym_minigrid}, which are partially observable 2D grid environments
with tasks of increasing complexity. The environment has multiple
entities (agent, walls, lava, boxes, doors, and keys); objects can be picked up,
dropped, and moved around by the agent; doors can be unlocked via keys of the
same color (which might be hidden inside boxes). The agent can see a
$7\times7$ view in front of it but cannot see past walls and closed
doors. At each time, it can choose from the
following actions: $\{$\textsc{Move Forward}, \textsc{Turn Left}, \textsc{Turn
Right}, \textsc{Open Door/Box}, \textsc{Pick up item}, \textsc{Drop Item}, \textsc{Done}$\}$.
The agent can only hold one item at a time.
The objective is to reach a goal state as quickly as possible (which
is captured by assigning to the goal state a reward that decays over time).
Most of the environments have a certain theme, and we cluster the environments
accordingly. The visualizations below are taken from the Gym Minigrid environments
\citep{gym_minigrid}.
\begin{enumerate}
\item
\parbox[t]{\dimexpr\textwidth-\leftmargin}{%
\vspace{-2.5mm}
\begin{wrapfigure}[7]{r}{0.2\textwidth}
\centering
\vspace{-1\baselineskip}
\includegraphics[width=\linewidth]{Figures/ais/env_rendered/minigrid_envs/MiniGrid-SimpleCrossingS9N3-v0.png}
\end{wrapfigure}
\textsc{Simple Crossing:} A simple crossing environment is a 2D grid with
columns of walls with an opening (or a crossing). The agent can traverse
the wall only through the openings and needs to find a path from the start
to the goal state. There are four such environments
(MGSCS9N1, MGSCS9N2, MGSCS9N3, and MGSCS11N5) where the label S$n$N$m$
means that the size of the environment is $n \times n$ and there are $m$
columns of walls.
}
\item
\parbox[t]{\dimexpr\textwidth-\leftmargin}{%
\vspace{-2.5mm}
\begin{wrapfigure}[7]{r}{0.2\textwidth}
\centering
\vspace{-1\baselineskip}
\includegraphics[width=\linewidth]{Figures/ais/env_rendered/minigrid_envs/MiniGrid-LavaCrossingS9N2-v0.png}
\end{wrapfigure}
\textsc{Lava Crossing:} The lava crossing environments are similar to the
simple crossing environments, but the walls are replaced by lava. If the
agent steps on to the lava block then it dies and the episode ends.
Therefore, exploration is more difficult in lava crossing as compared to
simple crossing. There are two such environments (MGLCS9N1 and MGLCS9N2)
where the label S$n$N$m$ has the same interpretation as simple crossing.
}
\item
\parbox[t]{\dimexpr\textwidth-\leftmargin}{%
\vspace{-2.5mm}
\begin{wrapfigure}[7]{r}{0.2\textwidth}
\centering
\vspace{-1\baselineskip}
\includegraphics[width=\linewidth]{Figures/ais/env_rendered/minigrid_envs/MiniGrid-KeyCorridorS3R3-v0.png}
\end{wrapfigure}
\textsc{Key Corridor:} The key corridor environments consist of a central
corridor which has rooms on the left and right sides which can be accessed
through doors. When the door is locked it can be opened using a key of the
same color. The agent has to move to the location of the key, pick it up,
move to the location of the correct door, open the door, drop the key, and
pick up the colored ball. There are three such environments (MGKCS3R1,
MGKCS3R2, and MGKCS3R3), where the label S$n$R$m$ means that the size of
the grid is proportional to $n$ and the number of rooms present is
$2 m$.
}
\item
\parbox[t]{\dimexpr\textwidth-\leftmargin}{%
\vspace{-2.5mm}
\begin{wrapfigure}[6]{r}{0.3\textwidth}
\centering
\vspace{-1\baselineskip}
\includegraphics[width=\linewidth]{Figures/ais/env_rendered/minigrid_envs/MiniGrid-ObstructedMaze-1Dlhb-v0.png}
\end{wrapfigure}
\textsc{Obstructed Maze:} The obstructed maze environments are similar to
key corridor environments but the key is inside a box and the box has to be
opened to find the key. We consider two such environments (MGOM1Dl and
MGOM1Dlh). In MGOM1Dl the box is already open, while in MGOM1Dlh the box is
closed. There is an additional such environment in the BabyAI platform
(MGOM1Dlhb), which is more suitable for continual learning algorithms so
we exclude it here.
}
\end{enumerate}
The number of observations in a given Minigrid environment is discrete but is
too large to model as a one-hot encoded discrete observation, as was done in the previous environments. Instead, we compress the observations as described in Section~\ref{sec:obsais} by using an autoencoder to convert the large discrete space into a continuous space of tractable size. A separate autoencoder is trained for each environment using a dataset created by performing random rollouts. Once the autoencoder has been trained over this fixed dataset for several epochs, it is frozen and used to generate the observations for learning the AIS. This approach is very similar to \cite{ha2018world}, which also trains an autoencoder in this fashion and then fixes it before training the next-observation prediction model and the policy.
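As an illustration of this preprocessing step, the sketch below trains an
autoencoder on observations gathered by a uniformly random policy and then
freezes it. It is a minimal sketch assuming PyTorch, the classic Gym 4-tuple
\texttt{step} interface, and flattened array observations; the latent size and
optimization settings are placeholders rather than the values used in our
experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class ObsAutoencoder(nn.Module):
    """Compresses a flattened observation into a small continuous code."""
    def __init__(self, obs_dim, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, obs_dim))

    def forward(self, obs):
        code = self.encoder(obs)
        return self.decoder(code), code

def collect_random_observations(env, num_steps):
    """Dataset of observations collected under a uniformly random policy."""
    data, obs = [], env.reset()
    for _ in range(num_steps):
        data.append(torch.as_tensor(obs, dtype=torch.float32).flatten())
        obs, _, done, _ = env.step(env.action_space.sample())
        if done:
            obs = env.reset()
    return torch.stack(data)

def train_autoencoder(env, obs_dim, epochs=20, num_steps=50_000):
    data = collect_random_observations(env, num_steps)
    model = ObsAutoencoder(obs_dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for batch in data.split(256):
            recon, _ = model(batch)
            loss = nn.functional.mse_loss(recon, batch)
            opt.zero_grad()
            loss.backward()
            opt.step()
    # Freeze the autoencoder; its encoder is then used as the fixed
    # observation map while learning the AIS.
    for p in model.parameters():
        p.requires_grad_(False)
    return model
\end{verbatim}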
\begin{figure}[!p]
\centering
\begin{subfigure}[t]{0.475\textwidth}
\includegraphics[width=\textwidth]{Figures/ais/final_plots/HighDim/MiniGrid-SimpleCrossingS9N1-v0.png}
\caption{MGSCS9N1}
\end{subfigure}
\begin{subfigure}[t]{0.475\textwidth}
\includegraphics[width=\textwidth]{Figures/ais/final_plots/HighDim/MiniGrid-SimpleCrossingS9N2-v0.png}
\caption{MGSCS9N2}
\end{subfigure}
\begin{subfigure}[t]{0.475\textwidth}
\includegraphics[width=\textwidth]{Figures/ais/final_plots/HighDim/MiniGrid-SimpleCrossingS9N3-v0.png}
\caption{MGSCS9N3}
\end{subfigure}
\begin{subfigure}[t]{0.475\textwidth}
\includegraphics[width=\textwidth]{Figures/ais/final_plots/HighDim/MiniGrid-SimpleCrossingS11N5-v0.png}
\caption{MGSCS11N5}
\end{subfigure}
\begin{subfigure}[t]{0.475\textwidth}
\includegraphics[width=\textwidth]{Figures/ais/final_plots/HighDim/MiniGrid-LavaCrossingS9N1-v0.png}
\caption{MGLCS9N1} \label{fig:MGLCS9N1_results}
\end{subfigure}
\begin{subfigure}[t]{0.475\textwidth}
\includegraphics[width=\textwidth]{Figures/ais/final_plots/HighDim/MiniGrid-LavaCrossingS9N2-v0.png}
\caption{MGLCS9N2} \label{fig:MGLCS9N2_results}
\end{subfigure}
\begin{subfigure}[t]{0.475\textwidth}
\includegraphics[width=\textwidth]{Figures/ais/final_plots/HighDim/MiniGrid-KeyCorridorS3R1-v0.png}
\caption{MGKCS3R1}
\end{subfigure}
\begin{subfigure}[t]{0.475\textwidth}
\includegraphics[width=\textwidth]{Figures/ais/final_plots/HighDim/MiniGrid-KeyCorridorS3R2-v0.png}
\caption{MGKCS3R2}
\end{subfigure}
\caption{Comparison of AIS-based actor only PORL algorithm with LSTM+PPO
baseline for high-dimensional environments (for 5 random seeds).} \label{fig:high}
\end{figure}
\begin{figure}[!t]\ContinuedFloat
\centering
\begin{subfigure}[t]{0.475\textwidth}
\includegraphics[width=\textwidth]{Figures/ais/final_plots/HighDim/MiniGrid-KeyCorridorS3R3-v0.png}
\caption{MGKCS3R3} \label{fig:MGKCS3R3_results}
\end{subfigure}
\begin{subfigure}[t]{0.475\textwidth}
\includegraphics[width=\textwidth]{Figures/ais/final_plots/HighDim/MiniGrid-ObstructedMaze-1Dl-v0.png}
\caption{MGOM1Dl} \label{fig:MGOM1Dl_results}
\end{subfigure}
\begin{subfigure}[t]{0.475\textwidth}
\includegraphics[width=\textwidth]{Figures/ais/final_plots/HighDim/MiniGrid-ObstructedMaze-1Dlh-v0.png}
\caption{MGOM1Dlh} \label{fig:MGOM1Dlh_results}
\end{subfigure}
\caption*{Figure~\ref{fig:high} (continued): Comparison of AIS-based actor only PORL algorithm with LSTM+PPO
baseline for high-dimensional environments (for 5 random seeds).}
\end{figure}
Note that the output of the autoencoder is a continuous variable and we are
using MMD with $p=2$ as an IPM. As explained in Section~\ref{sec:learn-IPM},
$d_{\mathfrak{F}_2}(\mu,\nu)^2$ only depends on the mean of $\mu$ and $\nu$. So, to
simplify the computations, we assume that $\nu$ is a Dirac delta distribution
centered at its mean. Thus, effectively, we are predicting the mean of the
next observation. In general, simply predicting the mean of the observations
may not lead to a good representation, but in the Minigrid environments, the
transitions are deterministic and the only source of
stochasticity in the observations is due to the initial configuration of the
environment. So, in practice, simply predicting the mean of the next
observation works reasonably well. We emphasize that for other more general
environments with truly stochastic observations, such a choice of IPM may not
work well and it may be better to choose the MMD $d_{\mathfrak{F}_p}$ defined in
Proposition~\ref{prop:RKHS} for a different value of~$p$, say $p=1$ (which
corresponds to the energy distance \citep{Szekely2004}).
For all minigrid environments, we compare the performance of
AIS-based PORL with the LSTM+PPO baseline proposed in
\cite{chevalier2018babyai}. The results are shown in Fig.~\ref{fig:high}, which
shows that for most environments AIS-based PORL converges to better
performance values. Note that AIS-based PORL fails to learn in the
\textsc{Lava Crossing} environments (MGLCS9N1 and MGLCS9N2), while LSTM+PPO
fails to learn in the larger \textsc{Key Corridor} environments (MGKCS3R2 and
MGKCS3R3) and in the \textsc{Obstructed Maze} environments (MGOM1Dl and
MGOM1Dlh).
\blue{The results indicate that one IPM does not necessarily lead to better
performance than others in all cases. The performance of a particular IPM
depends on whether the observation and AIS spaces are discrete or
continuous, on the size of these spaces, and the stochasticity of the
environment. The fact that we are approximating the policy using non-linear
neural networks makes it difficult to quantify the impact of the choice of
IPM on the accuracy of learning. It will be important to understand this
impact in more detail and develop guidelines on how to choose an IPM based
on the features of the environment.}
\section{Reinforcement learning for partially observed systems using AIS}
\label{sec:RL}
In this section, we present a policy gradient based reinforcement learning
(RL) algorithm for infinite horizon partially observed systems. The algorithm
learns a time-homogeneous AIS generator $(\ainfo_t, \rewinfo, \nextinfo)$
which satisfies (AP1) and (AP2) or a time-homogeneous AIS generator
$(\ainfo_t, \rewinfo, \aupdate, \nextobs)$ which satisfies (AP1), (AP2a), and
(AP2b). The key idea is to represent each component of the AIS generator using
a parametric family of functions/distributions and use a multi time-scale
stochastic gradient descent algorithm \citep{Borkar_1997} which learns the AIS
generator at a faster time-scale than the policy and/or the action-value
function.
For ease of exposition, we first assume that the policy is fixed and
describe how to learn the AIS generator using stochastic gradient descent.
To specify an \acs{AIS}, we must pick an IPM $\mathfrak{F}$ as well. Although, in
principle, we can choose any IPM, in practice, we want to choose an IPM such
that the distance $d_\mathfrak{F}(\mu_t, \nu_t)$ in (AP2) or (AP2b) can be computed
efficiently. We discuss the choice of IPMs in Sec.~\ref{sec:learn-IPM} and
then discuss the stochastic gradient descent algorithm to learn the
AIS-generator for a fixed policy in Sec.~\ref{sec:grad-ascent}.
Then we describe how to simultaneously learn the AIS generator and the policy
using a multi time-scale algorithm, first for an actor-only framework and then for an actor-critic framework, in Sec.~\ref{sec:PORL}.
\subsection{The choice of an IPM} \label{sec:learn-IPM}
As we will explain in the next section in detail, our general modus operandi
is to assume that the stochastic kernel $\nextinfo$ or $\nextobs$ that we are
trying to learn belongs to a parametric family and then update the parameters
of the distribution to either minimize $d_\mathfrak{F}(\mu, \nu)$ defined in (AP2) or
minimize $d_\mathfrak{F}(\mu^y, \nu^y)$ defined in (AP2b). Just to keep the
discussion concrete, we focus on (AP2). Similar arguments apply to (AP2b)
as well. First note that for a particular choice of parameters, we know the
distribution $\nu$ in closed form, but we do not know the distribution $\mu$
in closed form and only have samples from that distribution. One way to
estimate the IPM between a distribution and samples from another distribution
is to use duality and maximize $\bigl| \int_{\hat{\ALPHABET{Z}}} f d\mu - \int_{\hat{\ALPHABET{Z}}} f
d\nu \bigr|$ over the choice of functions $f \in \mathfrak{F}$. When $d_\mathfrak{F}$
is equal to the total variation distance or the Wasserstein distance, this
optimization problem may be solved using a linear program
\citep{Sriperumbudur2012}. However, solving a linear program at each step of
the stochastic gradient descent algorithm can become a computational
bottleneck. We propose two alternatives here. The first is to use the total
variation distance or the Wasserstein distance but instead of directly working
with them, we use a KL divergence based upper bound as a surrogate loss. The
other alternative is to work with RKHS-based MMD (maximum mean
discrepancy) distance, which can be computed from samples without solving an
optimization problem \citep{Sriperumbudur2012}. It turns out that for the
\acs{AIS}-setup, a specific form of MMD known as distance-based MMD is particularly convenient as we explain below.
\paragraph{KL-divergence based upper bound for total variation or Wasserstein
distance.}
Recall that the KL-divergence between two densities $\mu$ and $\nu$ on
$\ALPHABET X$ is defined as
\[
D_{\textup{KL}}(\mu \| \nu) =
\int_{\ALPHABET X} \log \mu(x)\mu(dx) -
\int_{\ALPHABET X} \log \nu(x) \mu(dx).
\]
The total variation distance can be upper bounded by the KL-divergence using
Pinsker's inequality \citep{csiszar2011} (see footnote~\ref{fnt:TV}
for the difference in constant factor from the standard Pinsker's
inequality):
\begin{equation}\label{eq:pinsker}
d_{\textup{TV}}(\mu, \nu) \le
\sqrt{ 2 D_{\textup{KL}}(\mu \| \nu) }.
\end{equation}
As we will explain in the next section, we consider the setup where
we know the distribution $\nu$ but only obtain samples from the distribution
$\mu$. Since there are two losses, the reward prediction loss $\varepsilon$
and the \acs{AIS}/observation prediction loss $\delta$, we work with
minimizing the weighted square average $\lambda \varepsilon^2 + (1-\lambda)
\delta^2$, where $\lambda \in [0,1]$ is a hyper-parameter. Pinsker's
inequality~\eqref{eq:pinsker} suggests that instead of
$d_{\textup{TV}}(\mu,\nu)^2$, we can use the surrogate loss function
\[
  - \int_{\ALPHABET X} \log \nu(x) \mu(dx)
\]
where we have dropped the term that does not depend on $\nu$. Note that the
above expression is the cross-entropy between $\mu$ and $\nu$,
which can be efficiently computed from samples. In particular, if we get $T$
i.i.d.\@ samples $X_1, \dots, X_T$ from $\mu$, then
\begin{equation}\label{eq:surrogate-KL}
  - \frac 1T \sum_{t=1}^T \log \nu(X_t)
\end{equation}
is an unbiased estimator of $- \int_{\ALPHABET X} \log \nu(x) \mu(dx)$.
Finally, if $\ALPHABET X$ is a bounded space with diameter~$D$, then
\[
d_{\textup{Wass}}(\mu, \nu) \le D d_{\textup{TV}}(\mu,\nu).
\]
So, using cross-entropy as a surrogate loss also works for Wasserstein
distance.
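As a concrete illustration, the cross-entropy surrogate can be computed from a
batch of samples as follows. This is a minimal sketch assuming PyTorch and a
softmax-parameterized $\nu_\xi$; the function name is ours.
\begin{verbatim}
import torch
import torch.nn.functional as F

def cross_entropy_surrogate(logits, samples):
    """Empirical cross-entropy -(1/T) sum_t log nu_xi(X_t), eq. (surrogate-KL).

    logits:  parameters of a softmax nu_xi over m categories, shape (T, m)
    samples: integer indices X_1, ..., X_T drawn from mu, shape (T,)
    """
    log_nu = F.log_softmax(logits, dim=-1)
    return -log_nu.gather(-1, samples.unsqueeze(-1)).mean()
\end{verbatim}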
\paragraph{Distance-based MMD.}
The key idea behind using a distance-based MMD is the following result.
\begin{proposition}[Theorem 22 of \cite{Sejdinovic2013}]
\label{prop:RKHS}
Let $\ALPHABET X \subseteq \reals^m$ and $d_{\ALPHABET X,p} \colon \ALPHABET
X \times \ALPHABET X \to \reals_{\ge 0}$ be a metric given by $d_{\ALPHABET
X, p}(x,x') = \| x - x'\|_2^{p}$, for $p \in (0, 2]$. Let $k_p \colon
\ALPHABET X \times \ALPHABET X \to \reals$ be the kernel given by
\[
k_p(x,x') = \tfrac 12\bigl[
d_{\ALPHABET X, p}(x,x_0) +
d_{\ALPHABET X, p}(x',x_0) -
d_{\ALPHABET X, p}(x, x')
\bigr],
\]
where $x_0 \in \ALPHABET X$ is arbitrary,
and let $\mathcal H_p$ be a \textup{RKHS} with kernel $k_p$
and $\mathfrak{F}_p = \{ f \in \mathcal H_p : \| f \|_{\mathcal H_p} \le
1\}$. Then, for any distributions $\mu, \nu \in \Delta(\ALPHABET X)$, the
\textup{IPM} $d_{\mathfrak{F}_p}(\mu,\nu)$ can be expressed as follows:
\begin{equation}\label{eq:RKHS}
d_{\mathfrak{F}_p}(\mu, \nu) = \sqrt{\EXP[ d_{\ALPHABET X,p}(X,W) ] -
\tfrac{1}{2} \EXP[ d_{\ALPHABET X,p}(X,X') ] -
\tfrac{1}{2} \EXP[ d_{\ALPHABET X,p}(W,W') ]},
\end{equation}
where $X,X' \sim \mu$, $W, W' \sim \nu$ and $(X,X',W,W')$ are all
independent.
\end{proposition}
We call $d_{\mathfrak{F}_p}$ defined above a \emph{distance-based} MMD. For $p=1$
(for which $d_{\ALPHABET X, p}$ is the Euclidean distance), the
expression inside the square root in~\eqref{eq:RKHS} is called the Energy
distance in the statistics literature~\citep{Szekely2004}. In
\cite{Sejdinovic2013}, the above result is stated for a general semimetric of
a negative type. Our statement of the above result is specialized to the
semimetric $d_{\ALPHABET X, p}$. See Proposition~3 and Example~15 of
\cite{Sejdinovic2013} for details.
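For intuition, the distance-based MMD in~\eqref{eq:RKHS} can also be estimated
directly from samples of the two distributions. The following is a small
sketch assuming PyTorch; for simplicity it uses the plug-in (V-statistic)
version of the estimator, which is slightly biased.
\begin{verbatim}
import torch

def mmd_distance(x, w, p=1.0):
    """Sample-based estimate of d_{F_p}(mu, nu) from eq. (RKHS).

    x: samples from mu, shape (n, d);  w: samples from nu, shape (m, d).
    """
    dxw = torch.cdist(x, w).pow(p).mean()
    dxx = torch.cdist(x, x).pow(p).mean()
    dww = torch.cdist(w, w).pow(p).mean()
    return torch.sqrt(torch.clamp(dxw - 0.5 * dxx - 0.5 * dww, min=0.0))
\end{verbatim}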
As explained in the previous section, we work with
minimizing the weighted square average $\lambda \varepsilon^2 + (1-\lambda)
\delta^2$, where $\lambda$ is a hyper-parameter. Proposition~\ref{prop:RKHS}
suggests that instead of $d_{\mathfrak{F}_p}(\mu, \nu)^2$, we can use a surrogate loss
function
\begin{equation}\label{eq:surrogate-general-0}
\int_{\ALPHABET X} \int_{\ALPHABET X} \| x - w \|_2^p\; \mu(dx) \nu(dw)
- \frac12
\int_{\ALPHABET X} \int_{\ALPHABET X} \| w - w' \|_{2}^p\; \nu(dw) \nu(dw')
\end{equation}
for $p \in (0, 2]$, where we have dropped the term that does not depend on
$\nu$. It is possible to compute the surrogate loss efficiently from samples
as described in \cite{Sriperumbudur2012}. In particular, if we get $T$
i.i.d.\@ samples $X_1, \dots, X_T$ from $\mu$, then
\begin{equation}\label{eq:surrogate-general}
\frac 1T \sum_{t=1}^T \int_{\ALPHABET X} \| X_t - w \|_2^p\; \nu(dw)
- \frac12
\int_{\ALPHABET X} \int_{\ALPHABET X} \| w - w' \|_{2}^p\; \nu(dw) \nu(dw')
\end{equation}
is an unbiased estimator of~\eqref{eq:surrogate-general-0}.
In our numerical experiments, we use the surrogate
loss~\eqref{eq:surrogate-general} for $p=2$, which simplifies as follows.
\begin{proposition}\label{prop:surrogate}
Consider the setup of \textup{Proposition~\ref{prop:RKHS}} for $p=2$.
Suppose $\nu_\xi$ is a known parameterized distribution with mean $M_\xi$
and $X$ is a sample from $\mu$. Then, the gradient of
\begin{equation}\label{eq:surrogate}
(M_\xi - 2 X) ^\TRANS M_\xi
\end{equation}
with respect to $\xi$
is an unbiased estimator of $\GRAD_{\xi} d_{\mathfrak{F}_2}(\mu, \nu_\xi)^2$.
\end{proposition}
\begin{proof}
For $p=2$, we have that
\[
d_{\mathfrak{F}_2}(\mu, \nu_\xi)^2 = \EXP[\| X - W \|_2^2 ] -
\tfrac{1}{2} \EXP[\| X - X'\|_2^2 ] -
\tfrac{1}{2} \EXP[ \| W - W'\|_2^2 ],
\]
where $X,X' \sim \mu$ and $W,W' \sim \nu_\xi$. Simplifying the right hand
side, we get that
\[
d_{\mathfrak{F}_2}(\mu, \nu_\xi)^2 =
\|\EXP[ X ]\|_2^2 - 2 \EXP [ X ]^\TRANS \EXP [ W ] +
\| \EXP[ W ] \|_2^2.
\]
Note that the term $\|\EXP[ X ]\|_2^2 $ does not depend on the
distribution~$\nu_\xi$. Thus, the expression~\eqref{eq:surrogate} captures
all the terms which depend on $\xi$.
\end{proof}
The implication of Proposition~\ref{prop:surrogate} is that if we use MMD with the
RKHS $\mathcal H_2$ defined in Proposition~\ref{prop:RKHS}, then we can
use the expression in~\eqref{eq:surrogate} as a surrogate loss function for
$d_{\mathfrak{F}_2}(\mu, \nu_\xi)^2$.
Now we show how to compute the surrogate loss~\eqref{eq:surrogate} for two
types of parameterized distributions $\nu_\xi$.
\begin{enumerate}
\item \textsc{Surrogate loss for predicting discrete variables:}
When predicting a discrete-valued
random variable, say a discrete-valued \acs{AIS} $\hat Z_{t+1}$ in (AP2) or
a discrete-valued observation $Y_t$ in (AP2b), we view the discrete
random variable as a continuous-valued random vector by representing it as
a one-hot encoded vector. In particular, if the discrete random variable,
which we denote by $V$, takes $m$ values, then its one-hot encoded
representation, which we denote by $X$, takes values in the corner points
of the simplex on $\reals^m$. Now, suppose $\nu_\xi$ is any parameterized
distribution on the discrete set $\{1, \dots, m\}$ (e.g., the softmax
distribution). Then, in the one-hot encoded representation, the mean
$M_\xi$ is given by
\[
M_\xi = \sum_{i=1}^m \nu_\xi(i) e_i =
\begin{bmatrix}
\nu_\xi(1) \\ \vdots \\ \nu_\xi(m)
\end{bmatrix},
\]
where $e_i$ denotes the $m$-dimensional unit vector with $1$ in the
$i$-th location. Thus, when we one-hot encode a discrete \acs{AIS} or
discrete observations, the ``mean'' $M_\xi$ is the same as the probability
mass function (PMF) $\nu_\xi$. Hence, effectively, $d_{\mathfrak{F}_2}(\mu,\nu)^2$ is
equivalent to $\| \mu - \nu \|_2^2$, and~\eqref{eq:surrogate} is an
unbiased estimator of it up to terms that do not depend on~$\nu$.
\item \textsc{Surrogate loss for predicting continuous variables:}
When predicting a continuous-valued random variable, say a
continuous-valued \acs{AIS} $\hat Z_{t+1}$ in (AP2) or a continuous-valued
observation $Y_t$ in (AP2b), we can immediately use the surrogate
loss~\eqref{eq:surrogate} as long as the parameterized distribution
$\nu_\xi$ is such that its mean $M_\xi$ is given in closed form. Note that
the surrogate loss~\eqref{eq:surrogate} only depends on the mean of the
distribution and not on any other moment. So, for any two distributions $\nu$
and $\nu'$ that have the same mean, the surrogate loss between any
distribution $\mu$ and $\nu$ is the same as the surrogate loss between $\mu$
and $\nu'$. Thus, using the surrogate loss~\eqref{eq:surrogate} for
predicting continuous variables only makes sense when we expect the true
distribution to be close to a deterministic function.
\end{enumerate}
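A short sketch of how the surrogate loss~\eqref{eq:surrogate} is evaluated in
both of the above cases (a softmax PMF as the ``mean'' for discrete
predictions, and a predicted mean vector for continuous predictions) is given
below; it assumes PyTorch and the function names are ours.
\begin{verbatim}
import torch

def mmd2_surrogate_discrete(logits, target_onehot):
    """(M_xi - 2 X)^T M_xi where M_xi is the softmax PMF and X is one-hot."""
    m = torch.softmax(logits, dim=-1)
    return ((m - 2.0 * target_onehot) * m).sum(-1).mean()

def mmd2_surrogate_continuous(pred_mean, target):
    """Same surrogate when nu_xi is parameterized directly through its mean."""
    return ((pred_mean - 2.0 * target) * pred_mean).sum(-1).mean()
\end{verbatim}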
\subsection{Learning an AIS for a fixed policy} \label{sec:grad-ascent}
The definition of \acs{AIS} suggests that there are two ways to construct an
information state from data: we either learn a time-homogeneous
\acs{AIS}-generator $(\ainfo, \rewinfo, \nextinfo)$ that satisfies (AP1) and
(AP2) or we learn a time-homogeneous \acs{AIS}-generator $(\ainfo, \rewinfo,
\aupdate, \nextobs)$ that satisfies (AP1), (AP2a), and (AP2b). In either case,
there are three types of components of \acs{AIS}-generators: (i)~regular
functions such as $\rewinfo$ and $\aupdate$; (ii)~history compression
functions $\{ \ainfo_t \}_{t \ge 1}$; and (iii)~stochastic kernels $\nextinfo$
and $\nextobs$. To learn these components from data, we must choose a
parametric class of functions for each of them. In this section, we do not
make any assumption about how these components are chosen. In particular,
$\rewinfo$ and $\aupdate$ could be represented by any class of function
approximators (such as a multi-layer perceptron); $\ainfo$
could be represented by any class of time-series approximators (such as an
RNN or one of its refinements such as LSTM or GRU); and $\nextinfo$ and $\nextobs$
could be represented by any class of stochastic kernel approximators (such as
softmax distribution or mixture of Gaussians). We use $\xi_t$ to denote the
corresponding parameters.
There are two losses in the definition of an \acs{AIS}: the reward loss $|R_t
- \rewinfo(\hat z_t, a_t) |$ and the prediction loss $d_\mathfrak{F}(\mu_t, \nu_t)$ or
$d_\mathfrak{F}(\mu^y_t, \nu^y_t)$. We combine these into a single criterion and
minimize the combined loss function
\[
\frac{1}{T} \sum_{t=1}^T \Bigl[ \lambda | R_t - \rewinfo(\hat Z_t, A_t) |^2
+ (1-\lambda) d_\mathfrak{F}(\mu_t, \nu_t)^2
\Bigr]
\]
where $T$ is the length of the episode or the rollout horizon and $\lambda \in
[0, 1]$ may be viewed as a hyper-parameter.
As described in Section~\ref{sec:learn-IPM}, there are two possibilities to
efficiently compute $d_\mathfrak{F}(\mu_t, \nu_t)^2$: use total-variation distance or
Wasserstein distance as the IPM and use surrogate
loss~\eqref{eq:surrogate-KL}; or use distance-based MMD as the IPM and use
the surrogate loss~\eqref{eq:surrogate}.
In particular, to choose an AIS that satisfies (AP1) and (AP2), we either
minimize the surrogate loss
\begin{equation}\label{eq:loss-KL-1}
\frac 1T \sum_{t=1}^T \bigl[ \lambda | R_t - \hat r(\hat Z_t, A_t)|^2 -
(1-\lambda) \log(\nu_t(\hat Z_{t+1})) \bigr]
\end{equation}
or we minimize the surrogate loss (specialized for $p=2$)
\begin{equation} \label{eq:loss-1}
\frac{1}{T} \sum_{t=1}^T \Bigl[ \lambda | R_t - \rewinfo(\hat Z_t, A_t) |^2
+ (1-\lambda) (M_{t} - 2 \hat Z_{t+1})^\TRANS M_{t}
\Bigr]
\end{equation}
where $M_t$ is the mean of the distribution $\nu_t$.
Similarly, in order to choose an \acs{AIS} that satisfies (AP1), (AP2a) and
(AP2b), we minimize the surrogate loss
\begin{equation}\label{eq:loss-KL-2}
\frac 1T \sum_{t=1}^T \bigl[ \lambda | R_t - \hat r(\hat Z_t, A_t)|^2 -
(1-\lambda) \log(\nu^y_t(Y_{t})) \bigr]
\end{equation}
or we minimize the surrogate loss (specialized for $p=2$)
\begin{equation}\label{eq:loss-2}
\frac{1}{T} \sum_{t=1}^T \Bigl[ \lambda | R_t - \rewinfo(\hat Z_t, A_t) |^2
+ (1-\lambda) (M^y_{t} - 2 Y_t)^\TRANS M^y_{t}
\Bigr]
\end{equation}
where $M^y_{t}$ is the mean of the distribution $\nu^y_{t}$.
We use $\bar \xi$ to denote the parameters of the \acs{AIS}-generator, i.e.,
the parameters of $(\ainfo, \nextinfo, \rewinfo)$ when using (AP1) and (AP2)
or the parameters of $(\ainfo, \aupdate, \nextobs, \rewinfo)$ when using
(AP1), (AP2a), (AP2b). We then use $\mathcal L(\bar \xi)$ to denote the
corresponding loss~\eqref{eq:loss-KL-1}, \eqref{eq:loss-1},
\eqref{eq:loss-KL-2}, or~\eqref{eq:loss-2}. Then, we can learn the
parameters $\bar \xi$ using stochastic gradient descent:
\begin{equation}\label{sgd:ais}
\bar \xi_{k+1} = \bar \xi_k - a_k \GRAD_{\bar \xi} \mathcal L(\bar \xi_k),
\end{equation}
where the learning rates $\{a_k\}_{k \ge 0}$ satisfy the standard conditions
$\sum a_k = \infty$ and $\sum a_k^2 < \infty$.
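Putting these pieces together, one gradient step of~\eqref{sgd:ais} with the
MMD surrogate~\eqref{eq:loss-2} may be sketched as follows. This assumes
PyTorch, and the tensor names and the use of the Adam optimizer are
illustrative choices rather than prescriptions.
\begin{verbatim}
import torch

def ais_loss(reward_pred, rewards, obs_mean_pred, obs_target, lam=0.5):
    """Combined AIS loss of eq. (loss-2) over a rollout of length T.

    reward_pred:   r-hat(Z-hat_t, A_t), shape (T,)
    rewards:       observed rewards R_t, shape (T,)
    obs_mean_pred: predicted means M^y_t of nu^y_t, shape (T, obs_dim)
    obs_target:    observed (encoded) observations Y_t, shape (T, obs_dim)
    """
    reward_loss = (rewards - reward_pred).pow(2).mean()
    mmd_surrogate = ((obs_mean_pred - 2.0 * obs_target)
                     * obs_mean_pred).sum(-1).mean()
    return lam * reward_loss + (1.0 - lam) * mmd_surrogate

# One step of eq. (sgd:ais), with Adam standing in for plain SGD:
#   optimizer = torch.optim.Adam(ais_generator.parameters(), lr=a_k)
#   optimizer.zero_grad(); ais_loss(...).backward(); optimizer.step()
\end{verbatim}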
\subsection{AIS-based PORL}\label{sec:PORL}
Given the stochastic gradient descent algorithm to learn an
\acs{AIS}-generator for a fixed policy, we can simultaneously learn a policy
and \acs{AIS}-generator by following a multi time-scale stochastic gradient
descent \citep{Borkar_1997}, where we learn the \acs{AIS}-generator at a faster
learning rate than the policy.
In particular, let $\pi_\theta \colon \hat{\ALPHABET{Z}} \to \Delta(\ALPHABET{\Act})$ be a
parameterized stochastic policy with parameters~$\theta$. Let $J(\bar \xi,
\theta)$ denote the performance of policy $\pi_\theta$. From the policy
gradient theorem \citep{Sutton2000, Baxter2001}, we know that
\begin{equation}
\GRAD_{\theta} J(\bar \xi, \theta) =
\EXP \biggl[ \sum_{t=1}^\infty \biggl( \sum_{\tau = 1}^t
\GRAD_{\theta} \log \pi_\theta(A_\tau \mid \hat Z_\tau) \biggr)
\gamma^{t-1} \Rew_t \biggr]
\end{equation}
which can be estimated from a sampled trajectory with a rollout horizon of $T$ using the G(PO)MDP gradient \citep{Baxter2001}
\begin{equation} \label{eq:GPOMDP_update}
\widehat \GRAD_{\theta} J(\bar \xi, \theta) =
\sum_{t=1}^T \biggl( \sum_{\tau = 1}^t
\GRAD_{\theta} \log \pi_\theta(A_\tau \mid \hat Z_\tau) \biggr)
\gamma^{t-1} \Rew_t .
\end{equation}
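A direct way to implement~\eqref{eq:GPOMDP_update} in automatic-differentiation
frameworks is to build a scalar surrogate objective whose gradient with respect
to $\theta$ equals the estimator; the sketch below assumes PyTorch and that the
log-probabilities $\log \pi_\theta(A_\tau \mid \hat Z_\tau)$ were recorded along
the rollout.
\begin{verbatim}
import torch

def gpomdp_objective(log_probs, rewards, gamma):
    """Scalar surrogate whose gradient equals the G(PO)MDP estimate.

    log_probs: log pi_theta(A_tau | Z-hat_tau) for tau = 1..T (requires grad)
    rewards:   R_1, ..., R_T (treated as constants)
    """
    T = rewards.shape[0]
    discounts = gamma ** torch.arange(T, dtype=rewards.dtype)
    causal_logp = torch.cumsum(log_probs, dim=0)   # sum_{tau <= t} log pi
    return (causal_logp * discounts * rewards).sum()

# Usage: loss = -gpomdp_objective(log_probs, rewards, gamma); loss.backward()
# yields an ascent direction in theta when the optimizer minimizes `loss`.
\end{verbatim}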
We can iteratively update the parameters $\{ (\bar \xi_k, \theta_k) \}_{k \ge
1}$ of both the \acs{AIS}-generator and policy as follows. We start with an
initial choice $(\bar \xi_1, \theta_1)$, and update both parameters after a
rollout of $T$ steps as follows
\begin{equation}\label{eq:ais_a}
\bar \xi_{k+1} = \bar \xi_k - a_k \GRAD_{\bar \xi} \mathcal L(\bar \xi_k)
\quad\text{and}\quad
\theta_{k+1} = \theta_k + b_k \widehat \GRAD_{\theta} J(\bar \xi_k, \theta_k)
\end{equation}
where the learning rates $\{a_k\}_{k \ge 1}$ and $\{b_k\}_{k \ge 1}$ satisfy
the standard conditions on multi time-scale learning: $\sum_{k} a_k = \infty$,
$\sum_k b_k = \infty$, $\sum_{k} a_k^2 < \infty$, $\sum_{k} b_k^2 < \infty$,
and $\lim_{k \to \infty} b_k/a_k = 0$, which ensures that \ac{AIS}-generator
learns at a faster rate than the policy.
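For instance, one admissible choice is $a_k = a_0/k^{0.6}$ and
$b_k = b_0/k^{0.9}$ for positive constants $a_0, b_0$: both series diverge,
both squared series converge, and $b_k/a_k = (b_0/a_0)\, k^{-0.3} \to 0$, so
the \acs{AIS}-generator is indeed updated on the faster time-scale.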
A similar idea can be used for an actor-critic algorithm. Suppose we have a
parameterized policy $\pi_\theta \colon \hat{\ALPHABET{Z}} \to \Delta(\ALPHABET{\Act})$ and a
parameterized critic $\hat Q_{\zeta} \colon \hat{\ALPHABET{Z}} \times \ALPHABET{\Act} \to \reals$,
where $\theta$ denotes the parameters of the policy and $\zeta$ denotes the
parameters of the critic. Let $J(\bar \xi, \theta, \zeta)$ denote the
performance of the policy. From the policy gradient theorem \citep{Sutton2000,
KondaTsitsiklis2003}, we know that
\begin{equation}
\GRAD_\theta J(\bar \xi, \theta, \zeta) = \frac{1}{1-\gamma}
\EXP\bigl[ \GRAD_\theta \log \pi_\theta(A_t \mid \hat Z_t)
\hat Q_\zeta(\hat Z_t, A_t) \bigr]
\end{equation}
which can be estimated from a sampled trajectory with a rollout horizon of $T$
by
\begin{equation}
\widehat \GRAD_\theta J(\bar \xi, \theta, \zeta) = \frac{1}{(1-\gamma)T}
\sum_{t=1}^T \GRAD_\theta \log \pi_\theta(A_t \mid \hat Z_t)
\hat Q_\zeta(\hat Z_t, A_t).
\end{equation}
For the critic, we use the temporal difference loss
\begin{equation}
\mathcal L_{\textup{TD}}(\bar \xi, \theta, \zeta) = \frac{1}{T}
\sum_{t=1}^T \texttt{smoothL1}\bigl(
\hat Q_\zeta(\hat Z_t, A_t) - R_t - \gamma
\hat Q_\zeta(\hat Z_{t+1}, A_{t+1}) \bigr)
\end{equation}
where \texttt{smoothL1} is the smooth $L_1$ distance given by
\[
\texttt{smoothL1}(x) = \begin{cases}
\frac12 x^2 \quad &\text{if } |x| < 1 \\
|x| - \frac12 &\text{otherwise}.
\end{cases}
\]
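A corresponding sketch of the critic loss, assuming PyTorch (whose built-in
\texttt{smooth\_l1\_loss} coincides with the function above for its default
threshold of~$1$) and a critic network \texttt{q\_net} that takes an \acs{AIS}
and an action encoding as inputs, is given below. Stopping the gradient through
the bootstrap target is a common implementation choice, not a requirement of
the theory.
\begin{verbatim}
import torch
import torch.nn.functional as F

def critic_td_loss(q_net, ais, actions, rewards, next_ais, next_actions, gamma):
    """Temporal-difference loss L_TD for the critic over a rollout of length T."""
    q = q_net(ais, actions)
    with torch.no_grad():                     # common choice: stop gradient
        target = rewards + gamma * q_net(next_ais, next_actions)
    # smooth_l1_loss with its default threshold matches the definition above.
    return F.smooth_l1_loss(q, target)
\end{verbatim}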
We can iteratively update the parameters $\{ (\bar \xi_k, \theta_k, \zeta_k)
\}_{k \ge 1}$ of the \acs{AIS}-generator, policy, and critic as follows. We start
with an initial choice $(\bar \xi_1, \theta_1, \zeta_1)$, and update all the parameters after a rollout of $T$ steps as follows
\begin{equation}\label{eq:ais_ac}
\bar \xi_{k+1} = \bar \xi_k - a_k \GRAD_{\bar \xi} \mathcal L(\bar \xi_k)
, \quad
\theta_{k+1} = \theta_k + b_k \widehat \GRAD_{\theta} J(\bar \xi_k, \theta_k, \zeta_k)
\quad\text{and}\quad
\zeta_{k+1} = \zeta_k - c_k \GRAD_{\zeta} \mathcal L_{\textup{TD}}(\bar \xi_k,
\theta_k, \zeta_k)
\end{equation}
where the learning rates $\{a_k\}_{k \ge 1}$, $\{b_k\}_{k \ge 1}$,
$\{c_k\}_{k \ge 1}$ satisfy
the standard conditions on multi time-scale learning: $\sum_{k} a_k = \infty$,
$\sum_k b_k = \infty$, $\sum_k c_k = \infty$, $\sum_{k} a_k^2 < \infty$,
$\sum_{k} b_k^2 < \infty$, $\sum_k c_k^2 < \infty$,
$\lim_{k \to \infty} c_k/a_k = 0$, and $\lim_{k \to \infty} b_k/c_k = 0$,
which ensures that \ac{AIS}-generator learns at a faster rate than the critic,
and the critic learns at a faster rate than the policy. \blue{The complete algorithm
is shown in Algorithm~\ref{alg:PORL}.}
\begin{algorithm}[!t]
{\color{black}
\caption{AIS-based PORL algorithm}\label{alg:PORL}
\textbf{Input:} Initial AIS-Generator: $(\ainfo, \nextinfo, \rewinfo)_{\bar \xi_0}$,
Initial Policy: $\pi_{\polPars_0}$,
Discount factor: $\gamma$,
\\
\phantom{\textbf{Input:}}
Reward weight: $\lambda$,
Number of episodes: $K$,
AIS-LR: $\{a_k\}_{k=1}^K$,
Policy-LR: $\{b_k\}_{k=1}^K$.
\\
\textbf{Output:}
Learned policy: $\pi_{\polPars_K}$,
Learned AIS-generator: $(\ainfo, \nextinfo, \rewinfo)_{\bar \xi_K}$
\begin{algorithmic}[1]
\Procedure{AIS-based PORL}{}
\ForAll {$k \in \{1, \dots, K\}$}
\State Reset environment and perform an episode using
$\pi_{\polPars_{k-1}}, (\ainfo, \nextinfo, \rewinfo)_{\bar \xi_{k-1}}$.
\State $A_{1:T}, Y_{1:T}, \Rew_{1:T} \gets$ Actions, observations, and
rewards for episode~$k$.
\State Compute AIS loss using $A_{1:T}, Y_{1:T}, \Rew_{1:T},
\lambda, (\ainfo, \nextinfo, \rewinfo)_{\bar \xi_{k-1}}$
using Eq.~\eqref{eq:loss-KL-2} or~\eqref{eq:loss-2}
\State Compute policy loss using $A_{1:T}, Y_{1:T}, \Rew_{1:T},
\gamma, \pi_{\polPars_{k-1}}, (\ainfo)_{\bar \xi_{k-1}}$ using
Eq.~\eqref{eq:GPOMDP_update}
\State Update AIS parameters $\bar \xi_{k-1}$ and policy
parameters $\polPars_{k-1}$ using Eq.~\eqref{eq:ais_a}
\EndFor \EndProcedure
\end{algorithmic}
}
\end{algorithm}
Under standard technical conditions (see Theorem~23 of \cite{Borkar_1997} or
Page~35 of \cite{Leslie_2004}), we can show that
iterations~\eqref{eq:ais_a} and~\eqref{eq:ais_ac} will converge to a
stationary point of the corresponding ODE limits. At convergence, depending on
$\varepsilon$ and $\delta$ for the quality of \acs{AIS} approximation, we can
obtain approximation guarantees corresponding to Theorem~\ref{thm:inf-ais}.
\blue{For a more detailed discussion on convergence, please refer to
Appendix~\ref{sec:porl-convergence}.}
We conclude this discussion by mentioning that algorithms similar to the
\acs{AIS}-based PORL have been proposed in the literature including
\cite{bakker2002reinforcement,WierstraFoersterPetersSchmidhuber_2007,
WierstraFoersterPetersSchmidhuber_2010,HausknechtStone_2015,HeessHuntLillicrapSilver_2015,Zhu2017,ha2018world,BaiseroAmato_2018,Igl2018,Zhang2019}.
However, these papers only discuss the empirical performance of the proposed
algorithms but do not derive performance bounds.
\section{Comparison with the results of \texorpdfstring{\cite{DeepMDP}}{Gelada
et al. (2019)} for latent space models for MDPs}\label{app:LipMDP}
\cite{DeepMDP} propose a latent space model for an MDP and show that
minimizing the losses in predicting the per-step reward and repredicting the
distribution over next latent space provides a bound on the quality of the
representation. In this section, we show that latent space representation
defined in \cite{DeepMDP} may be viewed as an instance of an AIS and show that
the approximation bounds of Theorem~\ref{thm:inf-ais} are similar to those
derived in \cite{DeepMDP}.
Since we follow a slightly different notation than \cite{DeepMDP} and for the
sake of completeness, we start by describing the notion of latent space
representation used in \cite{DeepMDP}.
Consider an infinite-horizon MDP with finite state space $\StSp$, finite action
space $\ALPHABET{\Act}$, transition probability matrix $P \colon
\StSp \times \ALPHABET{\Act} \to \Delta(\StSp)$, per-step reward function $r \colon
\StSp \times \ALPHABET{\Act} \to \reals$, and discount factor $\gamma$.
Let $(\hat \StSp, d)$ be a Banach space. It is assumed that we are given an
embedding function $\phi \colon \StSp \to \hat \StSp$, along with transition
dynamics $\hat P \colon \hat \StSp \times \ALPHABET{\Act} \to \Delta(\hat \StSp)$ and reward
function $\hat r \colon \hat \StSp \times \ALPHABET{\Act} \to \reals$. The MDP
$\widehat {\mathcal M} = (\hat \StSp,
\ALPHABET{\Act}, \hat P, \hat r, \gamma)$ along with the embedding function $\phi$
is called the \emph{latent space model} of the original MDP.
\begin{definition}
The MDP $\widehat {\mathcal M}$ is said to be $(L_r, L_p)$-Lipschitz if for
any $\hat \st_1, \hat \st_2 \in \hat \StSp$ and $a \in \ALPHABET{\Act}$,
\[
\bigl| \hat r(\hat \st_1, a) - \hat r(\hat \st_2, a) \bigr|
\le L_r d(\hat \st_1, \hat \st_2)
\quad\text{and}\quad
d_\mathfrak{F}(\hat P(\cdot | \hat \st_1, a), \hat P(\cdot | \hat \st_2, a))
\le L_p d(\hat \st_1, \hat \st_2),
\]
where $d_\mathfrak{F}$ denotes the Kantorovich distance.
\end{definition}
Given a latent space embedding, define
\[
\varepsilon = \sup_{\st \in \StSp, a \in \ALPHABET{\Act}}
\bigl| r(\st, a) - \hat r(\phi(\st), a) \bigr|
\quad\text{and}\quad
\delta = \sup_{\st \in \StSp, a \in \ALPHABET{\Act}}
d_\mathfrak{F}( \mu, \hat P(\cdot | \phi(\st), a) ),
\]
where $\mu \in \Delta(\hat \StSp)$ given by $\mu(\ALPHABET B) = P(
\phi^{-1}(\ALPHABET B) | \st, a)$ for any Borel subset $\ALPHABET B$ of
$\hat \StSp$.
\begin{proposition}[Theorem 5 of \cite{DeepMDP}]\label{prop:DeepMDP}
Let $\hat \pi \colon \hat \StSp \to \ALPHABET{\Act}$ be the (deterministic)
optimal policy of the latent space MDP. Define $\pi \colon \StSp \to
\ALPHABET{\Act}$ by $\pi = \hat \pi \circ \phi$. Let $V \colon \StSp \to \reals$
denote the optimal value function and let $V^\pi \colon \StSp \to \reals$
denote the value function for policy $\pi$.
If the latent space MDP $\widehat {\mathcal M}$ is $(L_r, L_p)$-Lipschitz,
then,
\[
\bigl| V(\st) - V^\pi(\st) \bigr| \le \frac{2\varepsilon}{1-\gamma}
+ \frac{2\gamma \delta L_r}{(1-\gamma)(1 - \gamma L_p)}.
\]
\end{proposition}
We show that a latent space model is an AIS and directly using the result
of Theorem~\ref{thm:inf-ais} gives the same approximation bound.
\begin{proposition}
Let $\widehat {\mathcal M} = (\hat \StSp, \ALPHABET{\Act}, \hat P, \hat r,
\gamma)$ be a latent space model with embedding function $\phi$. Then,
$(\phi, \hat P, \hat r)$ is an $(\varepsilon, \delta)$-AIS with respect to
the Kantorovich distance.
\end{proposition}
\begin{proof}
The result is an immediate consequence of the definition of $\varepsilon$ and
$\delta$ for latent space model.
\end{proof}
\begin{lemma}\label{lem:LipMDP}
For any $(L_r, L_p)$-Lipschitz MDP, if $\gamma L_p < 1$, then
\[
\| V \|_\mathrm{Lip} \le \frac{L_r}{1 - \gamma L_p}.
\]
Therefore, when $d_\mathfrak{F}$ is the Kantorovich distance, $\Minkowski_\mathfrak{F}(V) = \| V
\|_{\mathrm{Lip}} \le L_r/(1 - \gamma L_p)$.
\end{lemma}
\begin{proof}
This result follows immediately from Theorem 4.2 of \cite{Hinderer2005}.
\end{proof}
\begin{proposition}\label{prop:DeepMDP-ais}
Let $\hat \pi$, $\pi$, $V$, and $V^\pi$ be as defined in
Proposition~\ref{prop:DeepMDP}. Then, for all $\st \in \StSp$,
\[
\bigl| V(\st) - V^\pi(\st) \bigr| \le \frac{2\varepsilon}{1-\gamma}
+ \frac{2\gamma \delta L_r}{(1-\gamma)(1 - \gamma L_p)}.
\]
\end{proposition}
\begin{proof}
This follows immediately from Theorem~\ref{thm:inf-ais},
Proposition~\ref{prop:DeepMDP}, and Lemma~\ref{lem:LipMDP}.
\end{proof}
Note that the error bounds in Propositions~\ref{prop:DeepMDP}
and~\ref{prop:DeepMDP-ais} are exactly the same.
\section{Infinite-horizon discounted reward setup}\label{sec:infinite}
So far, we have restricted attention to the finite horizon setup. In this section,
we show how to generalize the notions of information state and \acl{AIS} to the
infinite horizon discounted reward setup.
\subsection{System model and problem formulation}
We consider the same model as described in Sec.~\ref{sec:model} but assume that
the system runs for an infinite horizon. The performance of any
(history dependent and possibly stochastic) policy $\pi \coloneqq (\pi_1,
\pi_2, \dots)$, where $\pi_t \colon \ALPHABET{\His}_t \to \Delta(\ALPHABET{\Act})$, is given
by
\[
J(\pi) \coloneqq \liminf_{T \to \infty}
\EXP^\pi\biggl[ \sum_{t=1}^T \gamma^{t-1} R_t \biggr],
\]
where $\gamma \in (0,1)$ is the discount factor. As before, we assume that
the agent knows the system dynamics $\{f_t\}_{t \ge 1}$, the reward functions
$\{r_t \}_{t \ge 1}$, and the probability measure $\PR$ on the primitive
random variables $\{W_t\}_{t \ge 1}$. The objective of the agent
is to choose a policy $\pi$ that maximizes the expected discounted total reward
$J(\pi)$.
Note that we use $\liminf$ rather than $\lim$ in the above definition because
in general the limit might not exist. We later assume that the rewards are
uniformly bounded (see Assumption~\ref{ass:bounded}) which, together with the
finiteness of the action space, implies that the limit is well defined. When the
action space is uncountable, we need to impose appropriate technical conditions
on the model to ensure that an appropriate measurable selection condition holds
\citep{Hernandez-LermaLasserre_2012}.
\subsection{A dynamic programming decomposition}
In the finite-horizon setup, we started with a dynamic program to evaluate the
performance $\{V^\pi_t\}_{t=1}^T$ for any history dependent policy~$\pi$. We
then identified an upper-bound $\{V_t\}_{t=1}^T$ on $\{V^\pi_t\}_{t=1}^T$ and
showed that this upper bound is tight and achieved by any optimal policy. The
subsequent analysis of the information state and the \acl{AIS} based dynamic
programs was based on comparison with $\{V_t\}_{t=1}^T$.
One conceptual difficulty with the infinite horizon setup is that we cannot
write a general dynamic program to evaluate the performance $\{V^\pi_t\}_{t
\ge 1}$ of an arbitrary history dependent policy $\pi$ and therefore identify
a tight upper-bound $\{V_t\}_{t \ge 1}$. In traditional MDP models, this
conceptual difficulty is resolved by restricting attention to Markov
strategies and then establishing that the performance of a Markov strategy can
be evaluated by solving a fixed point equation. For partially observed MDPs, a
similar resolution works because one can view the belief state as an
information state. However, for general partially observed models as
considered in this paper, there is no general methodology to identify a
time-homogeneous information state. So, we follow a different approach and
identify a dynamic program which bounds the performance of a general history
dependent policy. We impose the following mild
assumption on the model.
\begin{assumption} \label{ass:bounded}
The reward process $\{\Rew_t\}_{t \ge 1}$ is uniformly bounded and takes
values inside a finite interval $[\Rew_{\min}, \Rew_{\max}]$.
\end{assumption}
Given any (history dependent) policy $\pi$, we define the \emph{reward-to-go}
function for any time~$t$ and any realization $h_t$ of $H_t$ as
\begin{equation}
V^\pi_t(h_t) \coloneqq
\EXP^\pi\biggl[ \sum_{s=t}^\infty \gamma^{s-t} \Rew_s \biggm| H_t = h_t
\biggr].
\end{equation}
Define the corresponding action value function as:
\begin{equation}
Q^\pi_t(h_t, a_t) \coloneqq
\EXP^\pi[ R_t + \gamma V^\pi_{t+1}(H_{t+1})
\mid H_t = h_t, A_t = a_t ] .
\end{equation}
As stated above, we cannot identify a dynamic program to recursively compute
$\{ V^\pi_t\}_{t \ge 1}$. Nonetheless, we show that under
Assumption~\ref{ass:bounded} we can identify arbitrarily precise upper and
lower bounds for $\{ V^\pi_t\}_{t \ge 1}$ which can be recursively computed.
\begin{proposition}\label{prop:eval-inf}
Arbitrarily pick a horizon $T$ and define $\{ J^\pi_{t,T} \colon \ALPHABET{\His}_t
\to \reals \}_{t=1}^{T}$ as follows: $J^\pi_{T,T}(h_{T}) = 0$ and
for $t \in \{T-1, \dots, 1\}$,
\begin{equation}\label{eq:pol-eval-inf}
J^\pi_{t,T}(h_t) \coloneqq \EXP^\pi[ \Rew_t + \gamma
J^\pi_{t+1,T}(H_{t+1}) \mid H_t = h_t ].
\end{equation}
Then, for any time $t \in \{1, \dots, T\}$ and realization $h_t$ of
$H_t$, we have
\begin{equation}\label{eq:pol-eval-bound}
J^\pi_{t,T}(h_t) + \frac{\gamma^{T-t}}{1 - \gamma} \Rew_{\min}
\le V^\pi_t(h_t) \le
J^\pi_{t,T}(h_t) + \frac{\gamma^{T-t}}{1 - \gamma} \Rew_{\max}.
\end{equation}
\end{proposition}
\begin{proof}
The proof follows from backward induction. Note that for $t=T$, $\Rew_t
\in [\Rew_{\min}, \Rew_{\max}]$ implies that
\[
\frac{\Rew_{\min}}{1-\gamma} \le V^\pi_{T}(h_T) \le
\frac{\Rew_{\max}}{1-\gamma}.
\]
This forms the basis of induction. Now assume that~\eqref{eq:pol-eval-bound}
holds for time~$t+1$ and consider time~$t$:
\begin{align*}
V^\pi_t(h_t) &= \EXP^\pi\biggl[ \sum_{s=t}^{\infty} \gamma^{s-t} \Rew_s
\biggm| H_t = h_t \biggr]
\\
&\stackrel{(a)}= \EXP^\pi\biggl[ \Rew_t +
\gamma \EXP^\pi\biggl[\sum_{s=t+1}^{\infty} \gamma^{s-(t+1)} \Rew_s
\biggm| H_{t+1} \biggr]
\biggm| H_t = h_t \biggr]
\\
&\stackrel{(b)}\le \EXP^\pi\biggl[ \Rew_t + \gamma
\EXP^\pi\biggl[ J^\pi_{t+1,T}(H_{t+1})
+ \frac{\gamma^{T-(t+1)}}{1 - \gamma} \Rew_{\max}
\biggm| H_{t+1} \biggr]
\biggm| H_t = h_t \biggr]
\\
&\stackrel{(c)}=
J^\pi_{t,T}(h_t)
+ \frac{\gamma^{T-t}}{1 - \gamma} \Rew_{\max},
\end{align*}
where $(a)$ follows from the smoothing property of conditional expectation,
$(b)$ follows from the induction hypothesis, and $(c)$ follows from the
definition of $J^\pi_{t,T}(\cdot)$. This establishes one side
of~\eqref{eq:pol-eval-bound}. The other side can be established in a similar
manner. Therefore, the result holds by the principle of induction.
\end{proof}
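The bound just established also lends itself to a simple Monte Carlo procedure. The following minimal Python sketch (illustrative only, not part of the formal development) estimates the truncated value $J^\pi_{1,T}(h_1)$ from sampled rollouts of length $T-1$ and returns the resulting bracket on $V^\pi_1(h_1)$ from~\eqref{eq:pol-eval-bound}; the episodic simulator interface (\texttt{env.reset}, \texttt{env.step}) and the history-dependent \texttt{policy} function are assumptions of the sketch, not part of the model.
\begin{verbatim}
# Illustrative sketch: Monte Carlo estimate of J^pi_{1,T} and the bracket on V^pi_1.
def evaluate_with_bounds(env, policy, gamma, T, r_min, r_max, episodes=1000):
    total = 0.0
    for _ in range(episodes):
        env.reset()                    # start a new run of the system
        history = []                   # h_1 is empty
        ret, disc = 0.0, 1.0
        for t in range(1, T):          # accumulate R_1, ..., R_{T-1}
            action = policy(history)   # history-dependent, possibly stochastic
            obs, reward = env.step(action)
            ret += disc * reward
            disc *= gamma
            history.append((obs, action))
        total += ret
    j_hat = total / episodes                             # estimate of J^pi_{1,T}(h_1)
    tail = gamma ** (T - 1) / (1.0 - gamma)
    return j_hat + tail * r_min, j_hat + tail * r_max    # bracket on V^pi_1(h_1)
\end{verbatim}
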
Note that Proposition~\ref{prop:eval-inf} gives a recursive method to
approximately evaluate the performance of any history dependent policy~$\pi$.
We can modify the recursion in~\eqref{eq:pol-eval-inf} to obtain a
policy-independent upper bound on the performance of an arbitrary policy. To that
end, define value functions $\{ V_t \colon \ALPHABET{\His}_t \to \reals\}_{t \ge 1}$ as follows:
\begin{equation}\label{eq:value-inf}
V_t(h_t) = \sup_{\pi} V^\pi_t(h_t),
\end{equation}
where the supremum is over all history dependent policies. Furthermore, define
action-value functions $\{ Q_t \colon \ALPHABET{\His}_t \times \ALPHABET{\Act} \to \reals\}_{t
\ge 1}$ as follows:
\begin{equation}\label{eq:Q-inf}
Q_t(h_t, a_t) = \EXP[ R_t + \gamma V_{t+1}(H_{t+1})
\mid H_t = h_t, A_t = a_t ].
\end{equation}
Then, we have the following.
\begin{proposition}\label{prop:optimal-inf}
Arbitrarily pick a horizon $T$ and define $\{ J_{t,T} \colon \ALPHABET{\His}_t \to
\reals\}_{t=1}^{T}$ as follows: $J_{T,T}(h_{T}) = 0$ and for $t \in \{T-1, \dots,
1\}$,
\begin{equation}\label{eq:DP-inf}
J_{t,T}(h_t) \coloneqq \max_{a_t \in \ALPHABET{\Act}} \EXP[
\Rew_t + \gamma J_{t+1,T}(H_{t+1}) \mid
H_t = h_t, A_t = a_t ].
\end{equation}
Then, for any time $t \in \{1, \dots, T\}$ and realization $h_t$ of
$H_t$,
\begin{equation}\label{eq:pol-eval-bound-2}
V^\pi_t(h_t) \le J_{t,T}(h_t) +
\frac{\gamma^{T-t}}{1 - \gamma} \Rew_{\max}.
\end{equation}
Therefore,
\begin{equation} \label{eq:opt-bound}
J_{t,T}(h_t) +
\frac{\gamma^{T-t}}{1 - \gamma} \Rew_{\min}
\le V_t(h_t) \le
J_{t,T}(h_t) +
\frac{\gamma^{T-t}}{1 - \gamma} \Rew_{\max}.
\end{equation}
\end{proposition}
Note that $J_{t,T}(h_t)$ is the optimal value function for a finite horizon
system with the discounted reward criterion that runs for horizon $T-1$.
\begin{proof}
By following almost the same argument as Proposition~\ref{prop:optimal}, we
can establish that for any history dependent policy~$\pi$, $J^\pi_{t,T}(h_t) \le
J_{t,T}(h_t)$, which immediately implies~\eqref{eq:pol-eval-bound-2}.
Maximizing the left hand side of~\eqref{eq:pol-eval-bound-2} gives us the
upper bound in~\eqref{eq:opt-bound}. For the lower bound
in~\eqref{eq:opt-bound}, observe that
\begin{align*}
V_t(h_t) &=
\sup_{\pi} \EXP^\pi\biggl[
\sum_{s=t}^\infty \gamma^{s-t} \Rew_s
\biggm| H_t = h_t \biggr]
\\
&\stackrel{(a)}\ge
\sup_{\pi} \EXP^\pi\biggl[
\sum_{s=t}^{T-1} \gamma^{s-t} \Rew_s +
\sum_{s=T}^\infty \gamma^{s-t} \Rew_{\min}
\biggm| H_t = h_t \biggr]
\\
&=
\sup_{\pi} \EXP^\pi\biggl[
\sum_{s=t}^{T-1} \gamma^{s-t} \Rew_s
\biggm| H_t = h_t \biggr]
+ \frac{\gamma^{T-t}}{1-\gamma} \Rew_{\min}
\\
&\stackrel{(b)}= J_{t,T}(h_t)
+ \frac{\gamma^{T-t}}{1-\gamma} \Rew_{\min},
\end{align*}
where $(a)$ follows from the fact that $\Rew_s \ge \Rew_{\min}$ and $(b)$
follows from the definition of $J_{t,T}(h_t)$. This completes the proof
of~\eqref{eq:opt-bound}.
\end{proof}
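As a concrete illustration of how the truncation horizon may be chosen, suppose $\gamma = 0.95$ and the per-step reward lies in $[0,1]$. The width of the bracket in~\eqref{eq:opt-bound} is $\gamma^{T-t}(\Rew_{\max}-\Rew_{\min})/(1-\gamma)$, which falls below $10^{-3}$ once $T-t \ge 200$ (since $0.95^{200}/0.05 \approx 7 \times 10^{-4}$); a moderate truncation horizon therefore already determines $V_t(h_t)$ to high accuracy.
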
\subsection{Time-homogeneous information state and simplified dynamic program}
\begin{definition}
Given a Banach space $\ALPHABET{\Is}$, an information state generator $\{\info_t \colon
\ALPHABET{\His}_t \to \ALPHABET{\Is}\}$ is said to be
\emph{time-homogeneous} if, in addition to \textup{(P1)} and \textup{(P2)},
it satisfies the following:
\begin{description}
\item[(S)] The expectation
\(
\EXP[\Rew_t | Z_t = \info_t(H_t), A_t = a_t ]
\)
and the transition kernel
\(
\PR(Z_{t+1} \in B | Z_t = \info_t(H_t), A_t = a_t)
\)
are time-homogeneous.
\end{description}
\end{definition}
Note that all except the first example of information state presented in
Sec.~\ref{ex:info-state} are time-homogeneous. However, in general, a
time-homogeneous information state may not exist for all partially observed
models, and it is important to understand conditions under which such an
information state exists; we do not, however, pursue that direction in this
paper.
For any time-homogeneous information state, define the Bellman operator
$\mathcal{B} \colon [ \ALPHABET{\Is} \to \reals] \to [\ALPHABET{\Is} \to \reals]$ as
follows: for any uniformly bounded function $\bar V \colon \ALPHABET{\Is} \to \reals$,
\begin{equation} \label{eq:bellman}
[\mathcal{B} \bar V](z) = \max_{a \in \ALPHABET{\Act}}
\EXP[ \Rew_t + \gamma \bar V(Z_{t+1}) \mid Z_t = z, A_t = a ],
\end{equation}
where $\gamma \in (0,1)$ is the discount factor. Because of (S), the expectation on the right hand side does not depend on
time. Due to discounting, the operator $\mathcal B$ is a contraction and
therefore, under Assumption~\ref{ass:bounded}, the fixed point
equation
\begin{equation} \label{eq:fixed-point}
\bar V = \mathcal{B} \bar V
\end{equation}
has a unique bounded solution (due to the Banach fixed point theorem).
Let $\bar V^*$ be the fixed point and $\pi^*$ be any policy such that $\pi^*(z)$
achieves the arg max in the right hand side of~\eqref{eq:bellman} for
$[\mathcal{B} \bar V^*](z)$. It is easy to see that $\bar V^*$ is the performance of the
time-homogeneous policy $(\pi^*, \pi^*, \dots)$. However, it is not obvious
that $\bar V^*$ equals the optimal performance $V_1$ defined
in~\eqref{eq:value-inf},
because the proof of Theorem~\ref{thm:info-state} relies on backward induction
and is not applicable to infinite horizon models. So, we present an
alternative proof below which uses the performance bounds of
Proposition~\ref{prop:optimal-inf}.
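For a finite information-state space $\ALPHABET{\Is}$ and a finite action set, the fixed point of~\eqref{eq:fixed-point} can be computed by successive approximation, i.e., by iterating the Bellman operator exactly as in the sequence $\bar V^{(n+1)} = \mathcal{B} \bar V^{(n)}$ used in the proof of Theorem~\ref{thm:inf-is} below. The following Python sketch is purely illustrative; the tabular model (\texttt{P}, \texttt{r}) encoding the time-homogeneous kernel and expected reward of property (S) is an assumed input rather than part of the formulation above.
\begin{verbatim}
# Illustrative value iteration for the fixed point of (eq:fixed-point) on a
# finite information-state space Z with finite action set A (tabular model).
# P[z][a] is a dict {z_next: probability}; r[z][a] is E[R_t | Z_t = z, A_t = a].

def solve_fixed_point(Z, A, P, r, gamma, tol=1e-8):
    V = {z: 0.0 for z in Z}                                  # V^(0) = 0
    while True:
        V_new, diff = {}, 0.0
        for z in Z:
            q = [r[z][a] + gamma * sum(p * V[zn] for zn, p in P[z][a].items())
                 for a in A]
            V_new[z] = max(q)
            diff = max(diff, abs(V_new[z] - V[z]))
        V = V_new
        if diff < tol * (1 - gamma) / (2 * gamma):           # standard stopping rule
            break
    # a greedy policy attaining the max in (eq:bellman)
    policy = {z: max(A, key=lambda a: r[z][a] +
                     gamma * sum(p * V[zn] for zn, p in P[z][a].items()))
              for z in Z}
    return V, policy
\end{verbatim}
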
\begin{theorem}\label{thm:inf-is}
Let $\{Z_t\}_{t \ge 1}$ be a time-homogeneous information state process
with generator $\{ \info_t \colon \ALPHABET{\His}_t \to \ALPHABET{\Is} \}_{t \ge 1}$.
Suppose Assumption~\ref{ass:bounded} holds and let $\bar V^*$ be the unique
bounded fixed point of~\eqref{eq:bellman}. Then, for any time~$t$ and
realization $h_t$ of $H_t$, we have
\[
V_t(h_t) = \bar V^*(\info_t(h_t)).
\]
Furthermore, let $\pi^* \colon \ALPHABET{\Is} \to \Delta(\ALPHABET{\Act})$ be a
time-homogeneous (stochastic) policy such that $\Supp(\pi^*(z))$ is a
subset of the arg max of the right hand side of~\eqref{eq:bellman}. Then,
the time-homogeneous policy $\pi^* \coloneqq (\pi^*, \pi^*, \dots)$ is
optimal.
\end{theorem}
\begin{proof}
Consider the following sequence of value functions: $\bar V^{(0)}(z) = 0$ and
for $n \ge 0$, define $\bar V^{(n+1)} = \mathcal{B} \bar V^{(n)}$. Now fix a
horizon~$T$ and consider the finite-horizon discounted reward problem of
horizon~$T-1$. As argued earlier, $J_{t,T}(h_t)$ is the optimal
value-function for this finite horizon discounted problem. Moreover, note
that $\{Z_t\}_{t=1}^T$ is an information state for this finite horizon
discounted problem. Therefore, using the result of
Theorem~\ref{thm:info-state}, we get that for
any time $t \in \{1, \dots, T\}$, and
realization $h_t$ of $H_t$,
\[
J_{t,T}(h_t) = \bar V^{(T-t)}(\info_t(h_t)).
\]
Substituting~\eqref{eq:opt-bound} from Proposition~\ref{prop:optimal-inf} in
the above, we get
\[
\bar V^{(T-t)}(\info_t(h_t)) +
\frac{\gamma^{T-t}}{1 - \gamma} \Rew_{\min}
\le V_t(h_t) \le
\bar V^{(T-t)}(\info_t(h_t)) +
\frac{\gamma^{T-t}}{1 - \gamma} \Rew_{\max}.
\]
The result follows by taking the limit $T \to \infty$ and observing that
$\bar V^{(T-t)}(z)$ converges to $\bar V^*(z)$.
\end{proof}
\subsection{Time-homogeneous AIS and approximate dynamic programming}
\begin{definition}
Given a Banach space $\hat{\ALPHABET{Z}}$, a function class $\mathfrak{F}$ for IPMs, and positive
real numbers $(\varepsilon, \delta)$, we say
that a collection $\{\ainfo_t \colon \ALPHABET{\His}_t \to \hat{\ALPHABET{Z}} \}_{t \ge 1}$ along
with a time-homogeneous update kernel $\nextinfo \colon \hat{\ALPHABET{Z}} \times \ALPHABET{\Act} \to
\Delta(\hat{\ALPHABET{Z}})$ and a time-homogeneous reward approximation function
$\rewinfo \colon \hat{\ALPHABET{Z}} \times \ALPHABET{\Act} \to \reals$ is a
\emph{$(\varepsilon,\delta)$ time
homogeneous \ac{AIS} generator} if the process $\{\hat Z_t\}_{t \ge 1}$,
where $\hat Z_t = \ainfo_t(H_t)$, satisfies \textup{(AP1)} and
\textup{(AP2)} where $\rewinfo_t$, $\nextinfo_t$, $\varepsilon_t$ and
$\delta_t$ in the definition of \textup{(AP1)} and \textup{(AP2)} are replaced by their
time-homogeneous counterparts.
\end{definition}
For any time-homogeneous \ac{AIS}, define the approximate Bellman operator
$\hat{\mathcal{B}} \colon [\hat{\ALPHABET{Z}} \to \reals] \to [\hat{\ALPHABET{Z}} \to \reals]$ as
follows: for any uniformly bounded function $\hat V \colon \hat{\ALPHABET{Z}} \to \reals$,
\begin{equation}\label{eq:bellman-operator-ais}
[\hat{\mathcal B} \hat V](\hat z) = \max_{a \in \ALPHABET{\Act}}
\biggl\{
\rewinfo(\hat z, a) + \gamma \int_{\hat{\ALPHABET{Z}}} \hat V(\hat z') \nextinfo(d\hat z' |
\hat z, a)
\biggr\}.
\end{equation}
Note that the expectation on the right hand side does not depend on
time. Due to discounting, the operator $\hat{\mathcal{B}}$ is a contraction,
and therefore, under Assumption~\ref{ass:bounded}, the fixed point equation
\begin{equation}\label{eq:bellman-ais}
\hat V = \hat{\mathcal{B}}\hat V
\end{equation}
has a unique bounded solution (due to the Banach fixed point theorem). Let $\hat
V^*$ be the fixed point and $\hat \pi^*$ be any policy such that $\hat
\pi^*(\hat z)$ achieves the arg max in the right hand side
of~\eqref{eq:bellman-operator-ais} for $[\hat{\mathcal{B}} \hat V^*](\hat z)$. It is not
immediately clear if $\hat V^*$ is close to the performance of
policy $\pi = (\pi_1, \pi_2, \dots)$, where $\pi_t = \hat \pi^* \circ
\ainfo_t$, or if $\hat V^*$ is close to the optimal performance. The proof of
Theorem~\ref{thm:ais} relies on backward induction and is not immediately
applicable to the infinite horizon setup. Nonetheless, we establish results
similar to Theorem~\ref{thm:ais} by following the proof idea of
Theorem~\ref{thm:inf-is}.
\begin{theorem}\label{thm:inf-ais}
Suppose $(\{\ainfo_t\}_{t \ge 1}, \nextinfo, \rewinfo)$ is a time-homogeneous
$(\varepsilon,\delta)$-\ac{AIS} generator. Consider the fixed point
equation~\eqref{eq:bellman-ais}, which we rewrite as follows:
\begin{subequations}\label{eq:DP-ais-inf}
\begin{align}
\hat Q(\hat z, a) &\coloneqq \rewinfo(\hat z, a)
+ \gamma \int_{\hat{\ALPHABET{Z}}} \hat V(\hat z')
\nextinfo(d \hat z' | \hat z, a),
\\
\hat V(\hat z) &\coloneqq \max_{a \in \ALPHABET{\Act}} \hat Q(\hat z, a).
\end{align}
\end{subequations}
Let $\hat V^*$ denote the fixed point of~\eqref{eq:DP-ais-inf} and $\hat
Q^*$ denote the corresponding action-value function.
Then, we have the following:
\begin{enumerate}
\item \textbf{\textup{Value function approximation:}} For any time~$t$,
realization~$h_t$ of $H_t$, and choice $a_t$ of $A_t$, we have
\begin{equation}\label{eq:value-approx-inf}
\lvert Q_t(h_t, a_t) - \hat Q^*(\ainfo_t(h_t), a_t)\rvert
\le \alpha
\quad\text{and}\quad
\lvert V_t(h_t) - \hat V^*(\ainfo_t(h_t)) \rvert
\le \alpha,
\end{equation}
where
\[
\alpha = \frac{ \varepsilon + \gamma \Minkowski_\mathfrak{F}(\hat V^*) \delta}{1 - \gamma}.
\]
\item \textbf{\textup{Approximately optimal policy:}} Let
$\hat \pi^* \colon \hat{\ALPHABET{Z}} \to \Delta(\ALPHABET{\Act})$
be a stochastic policy that satisfies
\begin{equation}\label{eq:ais-opt-inf}
\Supp(\hat \pi^*(\hat z)) \subseteq
\arg \max_{a \in \ALPHABET{\Act}} \hat Q^*(\hat z, a).
\end{equation}
Define policy $\pi = (\pi_1, \pi_2, \dots)$, where $\pi_t
\colon \ALPHABET{\His}_t \to \Delta(\ALPHABET{\Act})$ is defined by $\pi_t \coloneqq \hat \pi^* \circ
\ainfo_t$. Then, for any time~$t$, realization~$h_t$ of $H_t$, and
choice $a_t$ of $A_t$, we
have
\begin{equation}\label{eq:policy-approx-inf}
\lvert Q_t(h_t, a_t) - Q^\pi_t(h_t, a_t)\rvert
\le 2\alpha
\quad\text{and}\quad
\lvert V_t(h_t) - V^\pi_t(h_t) \rvert
\le 2\alpha.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
The proof follows by combining ideas from Theorems~\ref{thm:ais}
and~\ref{thm:inf-is}. We provide a detailed proof of the value
approximation. The proof argument for policy approximation is similar.
Consider the following sequence of value functions:
$\hat V^{(0)}(\hat z) = 0$ and for $n \ge 0$, define $\hat V^{(n+1)} =
\hat{\mathcal{B}} \hat V^{(n)}$. Now fix a
horizon~$T$ and consider the finite-horizon discounted reward problem of
horizon~$T-1$. As argued earlier, $J_{t,T}(h_t)$ is the optimal
value-function for this finite horizon discounted problem. Moreover, note
that $\{\hat Z_t\}_{t=1}^T$ is an $(\varepsilon,\delta)$-\ac{AIS} for this
finite horizon discounted problem. Therefore, using the result of
Theorem~\ref{thm:ais}, we get that for any time $t \in \{1, \dots, T\}$, and
realization $h_t$ of $H_t$,
\[
\lvert J_{t,T}(h_t) - \hat V^{(T-t)}(\ainfo_t(h_t)) \rvert
\le \alpha_t,
\]
where
\[
\alpha_t = \varepsilon + \sum_{\tau = t+1}^{T-1}
\gamma^{\tau - t}\bigl[ \Minkowski_\mathfrak{F}(\hat V^{(T-\tau)}) \delta + \varepsilon \bigr].
\]
Substituting~\eqref{eq:opt-bound} from Proposition~\ref{prop:optimal-inf} in
the above, we get that
\[
\hat V^{(T-t)}(\ainfo_t(h_t)) - \alpha_t +
\frac{\gamma^{T-t}}{1 - \gamma} \Rew_{\min}
\le V_t(h_t) \le
\hat V^{(T-t)}(\ainfo_t(h_t)) + \alpha_t +
\frac{\gamma^{T-t}}{1 - \gamma} \Rew_{\max}.
\]
Since $\hat{\mathcal{B}}$ is a contraction, from the Banach fixed point theorem
we know that $\lim_{T \to \infty} \hat V^{(T-t)} = \hat V^*$. Therefore, by
continuity of $\Minkowski_\mathfrak{F}(\cdot)$, we have
$\lim_{T \to \infty} \Minkowski_\mathfrak{F}(\hat V^{(T-t)}) = \Minkowski_\mathfrak{F}(\hat V^*)$.
Consequently, $\lim_{T \to \infty} \alpha_t = \alpha$. Therefore, taking the
limit $T \to \infty$ in the above equation, we get
\[
\hat V^*(\ainfo_t(h_t)) - \alpha
\le V_t(h_t) \le
\hat V^*(\ainfo_t(h_t)) + \alpha,
\]
which establishes the bound on the value function
in~\eqref{eq:value-approx-inf}. The bound on the action-value function
in~\eqref{eq:value-approx-inf} follows from a similar argument.
\end{proof}
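In implementation terms, the approximately optimal policy of Theorem~\ref{thm:inf-ais} is obtained by composing a policy on the \ac{AIS} space with the compression functions $\{\ainfo_t\}_{t \ge 1}$. The fragment below is a schematic Python sketch only: \texttt{ais\_map} (playing the role of $\ainfo_t$) and \texttt{q\_star} (playing the role of $\hat Q^*$) are hypothetical stand-ins for a learned AIS generator and the fixed point of~\eqref{eq:DP-ais-inf}.
\begin{verbatim}
# Schematic sketch of pi_t = pi_hat* o ainfo_t from Theorem (thm:inf-ais).
# ais_map(history) plays the role of ainfo_t; q_star(z_hat, a) plays the role of Q_hat*.

def lifted_policy(history, ais_map, q_star, actions):
    z_hat = ais_map(history)                      # compress history to AIS
    values = {a: q_star(z_hat, a) for a in actions}
    best = max(values.values())
    # any distribution supported on the argmax set satisfies (eq:ais-opt-inf)
    argmax = [a for a, v in values.items() if v == best]
    return argmax[0]                              # a deterministic choice suffices
\end{verbatim}
By~\eqref{eq:policy-approx-inf}, the resulting history-dependent policy is within $2\alpha$ of optimal.
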
\blue{Theorem~\ref{thm:inf-ais} shows how the result of Theorem~\ref{thm:ais}
generalizes to infinite horizon. We can similarly extend the results for
approximate policy evaluation (as in Sec.~\ref{sec:ais-poleval}), the stochastic AIS case (as in
Sec.~\ref{sec:stochais}), the action compression case (as in
Sec.~\ref{sec:actais}), and the observation
compression case (as in Sec.~\ref{sec:obsais}).}
\section{Preliminaries: Information state and dynamic programming
decomposition for partially observed systems}\label{sec:prelim}
\subsection{General model for a partially observed system} \label{sec:model}
Traditionally, partially observed systems are modeled as partially observable
Markov decision processes (POMDPs)~\citep{Astrom_1965,SmallwoodSondik_1973}, where
there is a controlled state and an agent which makes noise corrupted
observations of the state. However, for the purpose of understanding
approximation for partially observed systems, it is conceptually cleaner to
start with an input-output model of the system as described below.
\begin{figure}[ht]
\begin{minipage}{0.475\textwidth}
\centering
\resizebox{0.95\linewidth}{!}{%
\begin{mpost}[mpsettings={input boxes;}]
defaultdx := 10bp;
defaultdy := 20bp;
boxit.system(\btex System etex);
system.c = origin;
drawboxed(system);
z1 = 0.5[system.w, system.nw];
z2 = 0.5[system.w, system.sw];
z3 = z1 - (1cm,0);
z4 = z2 - (1cm,0);
drawarrow z3 -- lft z1;
drawarrow z4 -- lft z2;
label.lft(\btex Stochastic input $W_t$ etex, z3);
label.lft(\btex Controlled input $A_t$ etex, z4);
z5 = 0.5[system.e, system.ne];
z6 = 0.5[system.e, system.se];
z7 = z5 + (1cm, 0);
z8 = z6 + (1cm, 0);
drawarrow z5 -- z7;
drawarrow z6 -- z8;
label.rt(\btex Observation $Y_t$ etex, z7);
label.rt(\btex Reward $R_t$ etex, z8);
\end{mpost}}
\caption{A stochastic input-output system}
\label{fig:model}
\end{minipage}
\hfill
\begin{minipage}{0.475\textwidth}
\centering
\resizebox{0.95\linewidth}{!}{%
\begin{mpost}
draw (-0.5cm,0) -- (3cm, 0);
draw (3cm, 0) -- (4.5cm, 0) dashed evenly;
draw (4.5cm, 0) -- (6.5cm, 0);
drawarrow (6.5cm, 0) -- (8cm, 0) dashed evenly;
drawarrow (0, -0.75cm) -- (0, 0);
label.bot(\btex $A_1$ etex, (0, -0.75cm));
drawarrow (0.5cm, -0.75cm) -- (0.5cm, 0);
label.bot(\btex $W_1$ etex, (0.5cm, -0.75cm));
drawarrow (1cm, 0) -- (1cm, 0.75cm);
label.top(\btex $(Y_1, R_1)$ etex, (1cm, 0.75cm));
drawarrow (1.5cm, -0.75cm) -- (1.5cm, 0);
label.bot(\btex $A_2$ etex, (1.5cm, -0.75cm));
drawarrow (2cm, -0.75cm) -- (2cm, 0);
label.bot(\btex $W_2$ etex, (2cm, -0.75cm));
drawarrow (2.5cm, 0) -- (2.5cm, 0.75cm);
label.top(\btex $(Y_2, R_2)$ etex, (2.5cm, 0.75cm));
drawarrow (5cm, -0.75cm) -- (5cm, 0);
label.bot(\btex $A_t$ etex, (5cm, -0.75cm));
drawarrow (5.5cm, -0.75cm) -- (5.5cm, 0);
label.bot(\btex $W_t$ etex, (5.5cm, -0.75cm));
drawarrow (6cm, 0) -- (6cm, 0.75cm);
label.top(\btex $(Y_t,R_t)$ etex, (6cm, 0.75cm));
\end{mpost}}
\caption{The timing diagram of the input-output system.}
\label{fig:timeline}
\end{minipage}
\end{figure}
We view a partially observed system as a black-box input-output system shown
in Fig.~\ref{fig:model}. At each time~$t$, the system has two inputs and
generates two outputs. The inputs to the system are a control input (also
called an action) $A_t \in \ALPHABET{\Act}$ and a disturbance $W_t \in \ALPHABET{\W}$.
The outputs of the system are an observation $Y_t \in \ALPHABET{\Ob}$ and a reward
$\Rew_t \in \reals$. For the ease of exposition, we assume that $\ALPHABET{\Act}$,
$\ALPHABET{\W}$, and $\ALPHABET{\Ob}$ are finite sets. The analysis extends to general spaces
under appropriate technical conditions. The order in which the input and
output variables are generated is shown in Fig.~\ref{fig:timeline}.
As stated before, we do not impose a state space model on the system.
Therefore, all we can say is that the outputs $(Y_t, \Rew_t)$ at time~$t$
are some function of all the inputs $(A_{1:t}, W_{1:t})$ up to time~$t$,
i.e.,
\[
Y_{t} = f_t(A_{1:t}, W_{1:t})
\quad\text{and}\quad
\Rew_t = \RewFn_t(A_{1:t}, W_{1:t}),
\]
where $\{f_t \colon \ALPHABET{\Act}^{t} \times \ALPHABET{\W}^{t} \to \ALPHABET{\Ob} \}_{t = 1}^{T}$
are called the system output functions and $\{\RewFn_t \colon \ALPHABET{\Act}^{t} \times \ALPHABET{\W}^{t} \to \reals \}_{t=1}^T$ are called the
system reward functions.
There is an agent which observes the output $Y_t$ and generates a control
input or the action $A_t$ as a (possibly stochastic) function of the
history $H_t = (Y_{1:t-1}, A_{1:t-1})$ of the past observations and
actions, i.e.,
\[
A_t \sim \pi_t(H_t),
\]
where $\pi \DEFINED (\pi_t)_{t \ge 1}$ is a (history-dependent and possibly
stochastic) policy. We
use $\ALPHABET{\His}_t$ to denote the space of all histories up to time~$t$. Then the
policy $\pi_t$ is a mapping from $\ALPHABET{\His}_t$ to $\Delta(\ALPHABET{\Act})$ (which denotes
the space of probability measures on $\ALPHABET{\Act}$). We will use
$\pi_t(a_t|h_t)$ to denote the probability of choosing action $a_t$
at time~$t$ given history $h_t$ and use $\Supp(\pi_t(h_t))$ to denote the
support of $\pi_t$ (i.e., the set of actions chosen with positive
probability).
We assume that the disturbance $\{W_t\}_{t \ge 1}$ is a sequence of independent random
variables defined on a common probability space $(\Omega, \mathcal{F}, \PR)$.
Thus, if the control input process $\{A_{t} \}_{t \ge 1}$ is specified, then the
output processes $\{ Y_{t}, \Rew_{t}\}_{t \ge 1}$ are random variables on
$(\Omega, \mathcal{F}, \PR)$. Specifying a policy $\pi$ for the agent
induces a probability measure on the output processes $\{ Y_t, \Rew_t\}_{t \ge
1}$, which we denote by $\PR^{\pi}$.
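To make the abstract input-output description concrete, the following Python sketch simulates one run of the system under a history-dependent policy. The system functions \texttt{f} and \texttt{rew}, the disturbance sampler \texttt{sample\_w}, and the \texttt{policy} are placeholders to be supplied by the modeller; the sketch only mirrors the timing of Fig.~\ref{fig:timeline}.
\begin{verbatim}
# Illustrative rollout of the input-output system of Fig. (fig:model)/(fig:timeline).
# f(a_seq, w_seq) -> Y_t and rew(a_seq, w_seq) -> R_t are the system functions;
# sample_w() draws the independent disturbance W_t; policy(history) -> A_t.

def rollout(T, f, rew, sample_w, policy):
    history = []                      # h_t = (y_1, a_1, ..., y_{t-1}, a_{t-1})
    a_seq, w_seq, rewards = [], [], []
    for t in range(1, T + 1):
        a_t = policy(history)         # A_t ~ pi_t(H_t)
        w_t = sample_w()              # primitive random variable W_t
        a_seq.append(a_t)
        w_seq.append(w_t)
        y_t = f(a_seq, w_seq)         # Y_t = f_t(A_{1:t}, W_{1:t})
        r_t = rew(a_seq, w_seq)       # R_t = r_t(A_{1:t}, W_{1:t})
        rewards.append(r_t)
        history += [y_t, a_t]
    return rewards, history
\end{verbatim}
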
We start our discussion by looking at the planning problem faced by the agent
when the system runs for a finite horizon~$T$. We will generalize our results
to the infinite horizon discounted reward setup later. In the finite horizon
setup, the performance of any policy $\pi$ is given by
\begin{equation} \label{eq:optimal}
J(\pi) \coloneqq \EXP^{\pi}\biggl[ \sum_{t=1}^T \Rew_t \biggr],
\end{equation}
where $\EXP^{\pi}$ denotes the expectation with respect to the probability
measure $\PR^{\pi}$.
We assume
that the agent knows the system dynamics $\{f_t\}_{t \ge 1}$, the reward
functions $\{\RewFn_t\}_{t\ge 1}$, and the probability measure $\PR$ on the
primitive random variables $\{W_t\}_{t\ge1}$. The objective of the agent is
to choose a policy $\pi$ which maximizes the expected total reward
$J(\pi)$.
Since all system variables are assumed to be finite valued and the
system runs for a finite horizon, there are only a finite number of policies
$\pi$. So, an optimal policy always exists and the important question is to
determine an efficient algorithm to compute the optimal policy.
In Sec.~\ref{sec:history}, we start by presenting a trivial dynamic
programming decomposition which uses the entire history of observations as a
state. Such a history-dependent dynamic program is not an efficient method to
compute the optimal policy; rather, it serves as a reference with which we
compare the more efficient exact and approximate dynamic programs that we
derive later.
In Sec.~\ref{sec:info-state}, we present sufficient conditions to identify an
information state for dynamic programming. Our main result, presented in
Secs.~\ref{sec:ais} and~\ref{sec:infinite}, is to identify a notion of
approximate information state and derive approximation bounds when an
approximate policy is computed using an approximate information state.
\subsection{A dynamic programming decomposition}\label{sec:history}
To obtain a dynamic program to identify an optimal policy
for~\eqref{eq:optimal}, we can view the history $H_t$ as a ``state'' of a
Markov decision process (MDP) with transition probability
\[
\PR(H_{t+1} = (h'_t, a'_t, y_t) \mid H_t = h_t, A_t = a_t)
= \begin{cases}
\PR(Y_t = y_t | H_t = h_t, A_t = a_t), &
\text{if $h'_t = h_t$ \& $a'_t = a_t$} \\
0, & \text{otherwise}
\end{cases}
\]
and per-step reward $\EXP[ \Rew_t | H_t, A_t]$. Therefore, from standard
results on Markov decision processes \cite{Bellman1957}, we can recursively
compute the performance of a given policy, as well as the best possible
performance, using a ``standard'' dynamic program.
\begin{proposition}[Policy evaluation]\label{prop:eval}
For any given (history dependent) policy $\pi$, define the
\emph{reward-to-go} function for any time~$t$ and realization $h_t$ of
history $H_t$ as
\begin{equation}\label{eq:J1_defn}
V^\pi_t(h_t) \coloneqq \EXP^{\pi}\biggl[ \sum_{s=t}^T \Rew_s \biggm|
H_t = h_t \biggr].
\end{equation}
The reward-to-go functions defined above satisfy the following recursion.
Define $V^\pi_{T+1}(h_{T+1}) = 0$ and for any $t \in \{T, \dots, 1\}$,
\begin{equation} \label{eq:pol-eval}
V^\pi_t(h_t) = \EXP^\pi
\bigl[ \Rew_t + V^\pi_{t+1}(H_{t+1}) \bigm| H_t = h_t \bigr].
\end{equation}
\end{proposition}
The reward-to-go function $V^\pi_t(h_t)$ denotes the expected cumulative
rewards obtained in the future when starting from history $h_t$ at time~$t$
and following policy~$\pi$.
Note that $V^\pi_t(h_t)$ depends on the policy $\pi$ only through
the choice of the future policy $(\pi_{t}, \dots, \pi_T)$ and therefore can
be computed without the knowledge of the past policy $(\pi_1, \dots, \pi_{t-1})$.
Note that $h_1 = \emptyset$ and the performance $J(\pi)$ defined
in~\eqref{eq:optimal} equals $V^\pi_1(h_1)$. Thus, Proposition~\ref{prop:eval}
gives a recursive method to evaluate the performance of any history dependent
policy~$\pi$. Following the standard argument for Markov decision processes,
we can modify the recursion~\eqref{eq:pol-eval} to obtain a
dynamic program to identify an optimal policy as follows.
\begin{proposition}[Dynamic programming]\label{prop:optimal}
Recursively define \emph{value functions} $\{ V_t \colon \ALPHABET{\His}_t \to
\reals\}_{t=1}^{T+1}$ as follows. $V_{T+1}(h_{T+1}) \coloneqq 0$ and for $t \in
\{T, \dots, 1\}$,
\begin{equation} \label{eq:DP-hist}
V_t(h_t) \coloneqq \max_{a_t \in \ALPHABET{\Act}} \EXP
\bigl[ \Rew_t + V_{t+1}(H_{t+1}) \bigm| H_t = h_t, A_t = a_t \bigr].
\end{equation}
Then, a stochastic policy $\pi = (\pi_1, \dots, \pi_T)$ is
optimal if and only if for all $t \in \{1, \dots, T\}$ it satisfies
\begin{equation} \label{eq:opt-policy}
\Supp(\pi_t(h_t)) \subseteq \arg\max_{a_t \in \ALPHABET{\Act}} \EXP
\bigl[ \Rew_t + V_{t+1}(H_{t+1}) \bigm| H_t = h_t, A_t = a_t
\bigr].
\end{equation}
\end{proposition}
Note that the expectation in~\eqref{eq:DP-hist} is with respect to the
probability measure $\PR$ on $(\Omega, \mathcal{F})$ and can be computed
without the knowledge of the policy~$\pi$.
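For very small problems (finite $\ALPHABET{\Ob}$ and $\ALPHABET{\Act}$, short horizon $T$), the history-based dynamic program of Proposition~\ref{prop:optimal} can be carried out literally by enumerating histories. The Python sketch below is purely illustrative and exponential in $T$; the functions \texttt{exp\_r} and \texttt{p\_y}, which return $\EXP[\Rew_t \mid H_t = h_t, A_t = a_t]$ and $\PR(Y_t = y \mid H_t = h_t, A_t = a_t)$, are assumed to be computable from the model.
\begin{verbatim}
# Illustrative history-based backward induction (Prop. prop:optimal).
# Histories are tuples (y_1, a_1, ..., y_{t-1}, a_{t-1}); exponential in T.

def history_dp(T, actions, observations, p_y, exp_r):
    def V(t, h):
        if t == T + 1:
            return 0.0
        return max(Q(t, h, a) for a in actions)

    def Q(t, h, a):
        value = exp_r(t, h, a)                     # E[R_t | H_t = h, A_t = a]
        for y in observations:
            prob = p_y(t, h, a, y)                 # P(Y_t = y | H_t = h, A_t = a)
            if prob > 0.0:
                value += prob * V(t + 1, h + (a, y))
        return value

    return V(1, ())                                # optimal performance
\end{verbatim}
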
\subsection{Information state and simplified dynamic programs}
\label{sec:info-state}
The dynamic program of Proposition~\ref{prop:optimal} uses the entire history
as state and may not be efficient for identifying an optimal policy. In this
section, we present a general class of dynamic programming decompositions
which may be more efficient. This class of dynamic programs is based on the
notion of information state, which we describe next.
\begin{definition}\label{def:info_state}
Let $\{\ALPHABET{\Is}_t\}_{t=1}^T$ be a pre-specified collection of Banach spaces. A
collection $\{\info_t \colon \ALPHABET{\His}_t \to \ALPHABET{\Is}_t \}_{t=1}^T$ of history
compression functions is called an \emph{information state generator} if the
process $\{Z_t\}_{t =1}^T$, where $Z_t = \info_t(H_t)$, satisfies the
following properties:
\begin{description}
\item[(P1)] \textbf{\textup{Sufficient for performance evaluation}}, i.e.,
for any time~$t$, any realization $h_t$ of $H_t$ and any choice
$a_t$ of $A_t$, we have
\[
\EXP[ \Rew_t \mid H_t = h_t, A_t = a_t ] =
\EXP[ \Rew_t \mid Z_t = \info_t(h_t), A_t = a_t].
\]
\item[(P2)] \textbf{\textup{Sufficient to predict itself}}, i.e.,
for any time~$t$, any realization $h_t$ of $H_t$ and any choice
$a_t$ of $A_t$, we have that for any Borel subset $\ALPHABET B$ of
$\ALPHABET{\Is}_{t+1}$,
\[
\PR(Z_{t+1} \in \ALPHABET B \mid H_t = h_t, A_t = a_t ) =
\PR(Z_{t+1} \in \ALPHABET B \mid Z_t = \info_t(h_t), A_t = a_t ).
\]
\end{description}
\end{definition}
In the sequel, we will sometimes use the phrase ``let $\{Z_t\}_{t=1}^T$ be an
information state'' to specify an information state and will implicitly assume
that the corresponding information state spaces are $\{\ALPHABET{\Is}_t\}_{t=1}^T$ and
the corresponding compression functions are $\{\info_t\}_{t=1}^T$.
Note that both the probabilities in Property (P2) can be computed without the
knowledge of the policy~$\pi$. Furthermore, there are no restrictions on the spaces
$\{\ALPHABET{\Is}_t\}_{t =1}^T$ although in practice an information state is useful
only when these spaces are ``small'' in an appropriate sense.
Condition (P1) is easy to verify but condition (P2) can be a bit abstract.
For some models, instead of (P2), it is easier to verify the following
stronger conditions:
\begin{description}
\item[(P2a)] \textbf{Evolves in a state-like manner}, i.e., there exist
measurable functions $\{\update_t\}_{t=1}^T$ such that for any time~$t$ and
any realization $h_{t+1}$ of $H_{t+1}$, we have
\[
\info_{t+1}(h_{t+1}) = \update_t( \info_t(h_t), y_t, a_t).
\]
Informally, the above condition may be written as
\(
Z_{t+1} = \update_t(Z_t, Y_{t}, A_t).
\)
\item[(P2b)] \textbf{Is sufficient for predicting future observations}, i.e.,
for any time~$t$, any realization $h_t$ of $H_t$ and any choice
$a_t$ of $A_t$, we have that for any subset $\ALPHABET D$ of $\ALPHABET{\Ob}$,
\[
\PR(Y_{t} \in \ALPHABET D \mid H_t = h_t, A_t = a_t) =
\PR(Y_{t} \in \ALPHABET D \mid Z_t = \info_t(h_t), A_t = a_t).
\]
\end{description}
\begin{proposition}\label{prop:alt-info-state}
\textup{(P2a)} and \textup{(P2b)} imply \textup{(P2)}.
\end{proposition}
\begin{proof}
For any Borel subset $\ALPHABET D$ of $\ALPHABET{\Is}_{t+1}$, we have
\begin{align*}
\PR(Z_{t+1} \in \ALPHABET D &\mid H_t = h_t, A_t = a_t) \\
&\stackrel{(a)}{=}\sum_{y_{t} \in \ALPHABET{\Ob}}
\PR(Y_{t} = y_{t}, Z_{t+1} \in \ALPHABET D \mid H_t = h_t, A_t = a_t)\\
&\stackrel{(b)}{=}\sum_{y_{t} \in \ALPHABET{\Ob}}
\IND\{\update_t(\info_t(h_t),y_{t},a_t) \in \ALPHABET D \}
\PR(Y_{t} = y_{t} \mid H_t = h_t, A_t = a_t)\\
&\stackrel{(c)}{=}\sum_{y_{t} \in \ALPHABET{\Ob}}
\IND\{\update_t(\info_t(h_t),y_{t},a_t) \in \ALPHABET D \}
\PR(Y_{t} = y_{t} \mid Z_t = \info_t(h_t), A_t = a_t)\\
&\stackrel{(d)}{=}\PR(Z_{t+1} \in \ALPHABET D \mid Z_t = \info_t(h_t), A_t = a_t)
\end{align*}
where $(a)$ follows from the law of total probability, $(b)$ follows from
(P2a), $(c)$ follows from (P2b) and $(d)$ from the law of total probability.
\end{proof}
\blue{
The following example illustrates how (P2a) and (P2b) are stronger conditions than
(P2). Consider a Markov decision process (MDP) with state $(S^1_t, S^2_t) \in
\mathcal{S}^1 \times \mathcal{S}^2$ and action $A_t \in \mathcal{A}$, where the
dynamics of the two components of the state are conditionally independent
given the action, i.e.,
\begin{multline*}
\PR(S^1_{t+1} = s^1_{+}, S^2_{t+1} = s^2_{+} | S^1_t = s^1, S^2_t = s^2, A_t = a)
\\
=
\PR(S^1_{t+1} = s^1_{+} | S^1_t = s^1, A_t = a)
\PR(S^2_{t+1} = s^2_{+} | S^2_t = s^2, A_t = a).
\end{multline*}
Furthermore, suppose the reward $R_t$ at any time $t$ is given by $R_t =
r_t(S^1_t, A_t)$. Since the model is an MDP, the observation at time~$t$ is
the same as the state. For this model, the component $\{S^1_t\}_{t\ge1}$ of the state
satisfies properties (P1) and (P2). Therefore, $\{S^1_t\}_{t \ge 1}$ is an
information state process. However, $\{S^1_t\}_{t \ge 1}$ is not sufficient to
predict the next observation $(S^1_{t+1}, S^2_{t+1})$. Therefore, $\{S^1_t\}_{t
\ge 1}$ does not satisfy property (P2b). This shows that properties (P2a) and
(P2b) are stronger than property (P2).
The above example may be considered as an instance of what is called the Noisy-TV
problem~\citep{burda2018exploration}.
}
Next, we show that an information state is useful because it is
always possible to write a dynamic program based on the information state. To
explain this dynamic programming decomposition, we first write the
history-based dynamic programs of Propositions~\ref{prop:eval}
and~\ref{prop:optimal} in a more compact manner as follows: Let
$V_{T+1}(h_{T+1}) \coloneqq 0$ and for $t \in \{T, \dots, 1\}$, define
\begin{subequations}\label{eq:DP-opt}
\begin{align}
Q_t(h_t, a_t) &\coloneqq \EXP\bigl[ \Rew_t + V_{t+1}(H_{t+1}) \bigm|
H_t = h_t, A_t = a_t \bigr],
\\
V_t(h_t) &\coloneqq \max_{a_t \in \ALPHABET{\Act}} Q_t(h_t, a_t).
\end{align}
\end{subequations}
The function $Q_t(h_t, a_t)$ is called the action-value function.
Moreover, for a given stochastic policy $\pi = (\pi_1, \dots, \pi_T)$,
where $\pi_t \colon \ALPHABET{\His}_t \to \Delta(\ALPHABET{\Act})$, let
$V^\pi_{T+1}(h_{T+1}) = 0$ and for $t \in \{T, \dots, 1\}$, define
\begin{subequations} \label{eq:DP-eval}
\begin{align}
Q^\pi_t(h_t, a_t) &\coloneqq \EXP\bigl[ \Rew_t + V^\pi_{t+1}(H_{t+1}) \bigm|
H_t = h_t, A_t = a_t \bigr],
\\
V^\pi_t(h_t) &\coloneqq \sum_{a_t \in \ALPHABET{\Act}} \pi_t(a_t \mid h_t) \,
Q^\pi_t(h_t, a_t).
\end{align}
\end{subequations}
\begin{theorem}\label{thm:info-state}
Let $\{Z_t\}_{t=1}^T$ be an information state. Recursively
define value functions $\{\bar V_t \colon \ALPHABET{\Is}_t \to \reals\}_{t=1}^{T+1}$,
as follows: $\bar V_{T+1}(z_{T+1}) \coloneqq 0$ and for $t \in
\{T, \dots, 1\}$:
\begin{subequations}\label{eq:DP-info}
\begin{align}
\bar Q_t(z_t, a_t) &\coloneqq \EXP[ \Rew_t + \bar V_{t+1}(Z_{t+1}) \mid
Z_t = z_t, A_t = a_t ], \\
\bar V_t(z_t) &\coloneqq \max_{a_t \in \ALPHABET{\Act}} \bar Q_t(z_t, a_t).
\end{align}
\end{subequations}
Then, we have the following:
\begin{enumerate}
\item For any time~$t$, history $h_t$, and action $a_t$, we have
that
\begin{equation} \label{eq:equiv}
Q_t(h_t, a_t) = \bar Q_t(\info_t(h_t), a_t)
\text{ and }
V_t(h_t) = \bar V_t(\info_t(h_t)).
\end{equation}
\item Let $\bar \pi = (\bar \pi_1, \dots, \bar \pi_T)$,
where $\bar \pi_t \colon \ALPHABET{\Is}_t \to \Delta(\ALPHABET{\Act})$, be a stochastic
policy. Then, the policy $\pi = (\pi_1, \dots, \pi_T)$ given by
$\pi_t = \bar \pi_t \circ \info_t$ is optimal if and
only if for all $t$ and all realizations $z_t$ of information states
$Z_t$, $\Supp(\bar \pi_t(z_t)) \subseteq \arg \max_{a_t \in
\ALPHABET{\Act}} \bar Q_t(z_t, a_t)$.
\end{enumerate}
\end{theorem}
\begin{proof}
We prove the result by backward induction. By construction, \eqref{eq:equiv}
is true at time $T+1$. This forms the basis of induction. Assume
that~\eqref{eq:equiv} is true at time $t+1$ and consider the system at
time~$t$. Then,
\begin{align*}
Q_t(h_t, a_t) &= \EXP[ \Rew_t + V_{t+1}(H_{t+1}) \mid H_t = h_t, A_t = a_t ]\\
&\stackrel{(a)}= \EXP[ \Rew_t + \bar V_{t+1}(\info_{t+1}(H_{t+1})) \mid H_t
= h_t, A_t = a_t ]
\\
&\stackrel{(b)}= \EXP[ \Rew_t + \bar V_{t+1}(Z_{t+1}) \mid
Z_t = \info_t(h_t), A_t = a_t]\\
&\stackrel{(c)}= \bar Q_t(\info_t(h_t), a_t),
\end{align*}
where $(a)$ follows from the induction hypothesis, $(b)$ follows from the
properties (P1) and (P2) of information state, and $(c)$ follows from the definition of $\bar
Q$. This shows that the action-value functions are equal. By maximizing over
the actions, we get that the value functions are also equal. The optimality
of the policy follows immediately from~\eqref{eq:equiv}.
\end{proof}
\subsection{Examples of information state} \label{ex:info-state}
For a general model, it is not immediately evident that a non-trivial
information state exists. The question of existence will depend on the
specifics of the observation and reward functions $\{ f_t, \RewFn_t \}_{t \ge 1}$
as well as the properties of the probability measure on the primitive random
variables $\{ W_t\}_{t \ge 1}$. We do not pursue the question of existence in
this paper, but present various specific models where an information state
exists and show that the corresponding results for these models in the
literature may be viewed as
a special case of Theorem~\ref{thm:info-state}.
\begin{enumerate}
\item For any partially observed model, the history $H_t$ is always a
trivial information state. Therefore, the dynamic program of
Proposition~\ref{prop:optimal} may be viewed as a special case of
Theorem~\ref{thm:info-state}.
\item \textsc{Markov decision process (MDP):} Consider a Markov decision
process (MDP) with state $\St_t \in \StSp$ and action $A_t \in \ALPHABET{\Act}$
\citep{Bellman1957}. At each time, the state evolves in a controlled
Markovian manner with
\[
\PR(\St_{t+1} = \st_{t+1} \mid \St_{1:t} = \st_{1:t}, A_{1:t} = a_{1:t})
=
\PR(\St_{t+1} = \st_{t+1} \mid \St_t = \st_t, A_t = a_t).
\]
The observation of the agent is $Y_t = \St_{t+1}$ and the reward output
is $R_t = r(\St_t, A_t)$. An information state for an MDP is given by
the current state $\St_t$ (the corresponding compression function
is $\info_t(\St_{1:t}, A_{1:t-1}) = \St_t$). The standard dynamic
program for MDPs may be viewed as a special case of
Theorem~\ref{thm:info-state}.
\item \textsc{Even MDPs:} Consider an MDP where the state space $\StSp$ is
either $\reals$ or a symmetric subset of $\reals$ of the form $[-B, B]$,
the controlled transition matrix is even, i.e., for every $a \in
\ALPHABET{\Act}$ and $\st, \st' \in \StSp$,
\[
\PR(\St_{t+1} = \st' \mid \St_t = \st, A_t = a) =
\PR(\St_{t+1} = -\st' \mid \St_t = -\st, A_t = a),
\]
and for every $a \in \ALPHABET{\Act}$, the per-step reward function $r(\st,
a)$ is even in $\st$. Such MDPs are called \emph{even}
MDPs~\citep{Chakravorty2018} and an information state for such MDPs is
given by the absolute value state $|\St_t|$ (the corresponding compression
function is $\info_t(\St_{1:t}, A_{1:t-1}) = |\St_t|$). The dynamic
program for even MDPs derived in \cite{Chakravorty2018} may be viewed as a
special case of Theorem~\ref{thm:info-state}.
\item \textsc{MDP with irrelevant components:} Consider an MDP with state space
$\StSp = \StSp^1 \times \StSp^2$, action space $\ALPHABET{\Act}$, transition matrix
$P(\st^1_+, \st^2_+ | \st^1, \st^2, a) = P^1(\st^1_{+} | \st^1, a)
P^2(\st^2_{+} | \st^1, \st^2, a)$, and per-step reward $r(\st^1,
a)$, which does not depend on the second component of the state. As
explained in \cite{Feinberg2005}, such models arise in control of queues
and transformation of continuous time Markov decision processes to
discrete time MDPs using uniformization. An information state for such
MDPs is given by the first component $\St^1_t$ (the corresponding
compression function is $\info_t(\St^1_{1:t}, \St^2_{1:t}, A_{1:t}) =
\St^1_t$). The qualitative properties of optimal policies for such models
derived in \cite{Feinberg2005} may be viewed as a special case of
Theorem~\ref{thm:info-state}.
\item \textsc{MDP with delayed state observation:} Consider an MDP where the
observation $Y_t$ of the agent is the $\delta$-step delayed state
$\St_{t-\delta+1}$ of the system \citep{Altman1992}. An information state
for such MDPs is given by the vector $(\St_{t-\delta+1},
A_{t-\delta+1:t-1})$. The dynamic program for such models derived in
\cite{Altman1992} may be viewed as a special case of
Theorem~\ref{thm:info-state}.
\item \textsc{Partially observable Markov decision processes (POMDPs):}
Consider a partially observable Markov decision process (POMDP) where
there is a state space model as for an MDP but the observation $Y_t$ is
some function of the state and the disturbance, i.e., $Y_t =
f^{y}_t(\St_t, W_t)$ \citep{Astrom_1965, SmallwoodSondik_1973}. An
information state for the POMDP is given by the belief state $B_t \in
\Delta(\StSp)$ which is given by
\(
B_t(\st) = \PR(\St_t = \st \mid H_t = h_t)
\). The corresponding compression function may be identified via the
update functions $\{\update_t\}_{t=1}^T$ of Property~(P2a), which are the
standard belief update functions for non-linear filtering. The standard
belief state dynamic program for POMDPs \citep{Astrom_1965,
SmallwoodSondik_1973} may be viewed as a special case of
Theorem~\ref{thm:info-state}.
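(A small numerical sketch of this belief update is given after this list.)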
\item \textsc{Linear quadratic and Gaussian (LQG) models:} Consider a POMDP where
the state and action spaces are Euclidean spaces, the system dynamics
$\PR(\St_{t+1} \mid \St_t, A_t)$ and the observation $f^y_t(\St_t,
W_t)$ are linear, the disturbance $W_t$ is Gaussian, and the per-step
\emph{cost} is a quadratic function of the state and action
\citep{Astrom_1970}. For such a \emph{linear-quadratic-and-Gaussian}
POMDP, an information state is given by the state estimate $\hat \St_t =
\EXP[ \St_t \mid H_t = h_t]$. The corresponding compression function
may be identified via the update functions $\{\update_t\}_{t=1}^T$ of
Property~(P2a), which in this case are Kalman filtering update equations.
The standard conditional estimate based
dynamic program for LQG models \citep{Astrom_1970} may be viewed as a
special case of Theorem~\ref{thm:info-state}.
\item \textsc{POMDPs with delayed observations:}
Consider a POMDP where the observation is delayed by $\delta$ time steps
\citep{Bander1999}. For such a system the belief on $\delta$ step delayed
state based on the $\delta$-step delayed observations and control, as well
as the vector of last $\delta$ control actions is an information state.
The structure of the optimal policy and the dynamic program derived in
\cite{Bander1999} may be viewed as a special case of
Theorem~\ref{thm:info-state}.
\item \textsc{Machine maintenance:} Consider the following model for machine
maintenance \citep{Eckles_1968}. A machine can be in one of $n$ ordered
states where the first state is the best and the last state is the worst.
The production cost increases with the state of the machine. The state
evolves in a Markovian manner. At each time, an agent has the option to
either run the machine or stop and inspect it for a cost. After
inspection, the agent may either repair it (at a cost that depends on the
state) or replace it (at a fixed cost). The objective is to identify a
maintenance policy to minimize the cost of production, inspection, repair,
and replacement.
Let $\tau$ denote the time of last inspection and $\St_\tau$ denote the
state of the machine after inspection, repair, or replacement. Then, it can
be shown that $(\St_\tau, t-\tau)$ is an information state for the system.
This is an instance of an incrementally expanding representation for a
POMDP described in \cite{AM:rl-pomdp}.
\end{enumerate}
The above examples show that there are generic information states for certain
classes of models (e.g., MDPs, MDPs with delays, POMDPs, POMDPs with delays)
as well as specific information states tuned to the model (e.g., even MDPs,
MDPs with irrelevant components, LQG models, machine repair).
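
As an illustration of the POMDP example above, the belief update of Property~(P2a) can be written in a few lines. The Python sketch below assumes a time-homogeneous finite-state POMDP described by a transition kernel \texttt{P[s][a][s\_next]} and an observation kernel \texttt{O[s][a][y]}; these particular data structures, and the convention that the observation is generated from the pre-transition state, are assumptions of the sketch rather than requirements of the model above.
\begin{verbatim}
# Illustrative belief update b_{t+1} = update(b_t, a_t, y_t) for a finite POMDP.
# P[s][a][s_next]: transition probability;  O[s][a][y]: observation probability.

def belief_update(b, a, y, states, P, O):
    unnormalized = {}
    for s_next in states:
        unnormalized[s_next] = sum(
            b[s] * O[s][a][y] * P[s][a][s_next] for s in states
        )
    norm = sum(unnormalized.values())
    if norm == 0.0:
        raise ValueError("observation has zero probability under current belief")
    return {s_next: w / norm for s_next, w in unnormalized.items()}
\end{verbatim}
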
\subsection{Discussion and related work} \label{discuss:info-state}
Although we are not aware of a previous result which formally defines an
information state and shows that an information state always implies a dynamic
programming decomposition (Theorem~\ref{thm:info-state}), the notion of
information state is not new and has long existed in the stochastic control
literature. Information state may be viewed as a generalization of the
traditional notion of state~\citep{Nerode:1958}, which is defined as a
statistic (i.e., a function of the observations) sufficient for input-output
mapping. In contrast, we define an information state as a statistic sufficient
for performance evaluation (and, therefore, for dynamic programming). Such a
definition is hinted at in \cite{Witsenhausen:1976}. The notion of information
state is also related to sufficient statistics for optimal control defined in
\cite{Striebel:1965} for systems with state space models.
As far as we are aware, the informal definition of information state was first
proposed by~\cite{Kwakernaak:1965} for adaptive control systems. Formal
definitions for linear control systems were given by~\cite{Bohlin:1970} for
discrete time systems and by~\cite{DavisVaraiya:1972} for continuous time
systems. \cite{Kumar_1986} define an information state as a compression of
past history which satisfies property (P2a) but do not formally show that
such an information state always leads to a dynamic programming decomposition.
A formal definition of information state appears in our previous work
\citep{MM:dec-control} where the result of Theorem~\ref{thm:info-state} is
asserted without proof. Properties of information states for
multi-agent teams were asserted in \cite{Mahajan:phd}. \cite{Adlakha2012}
provide a definition which is stronger than our definition.
They require that in a POMDP with unobserved state $\St_t \in
\StSp$, $\info_t(h_t)$ should satisfy (P1) and (P2) as well as be sufficient
to predict $\St_t$, i.e., for any Borel subset $\ALPHABET B$ of $\StSp$ and
any realization $h_t$ of $H_t$,
\[
\PR(\St_t \in \ALPHABET B \mid H_t = h_t) =
\PR(\St_t \in \ALPHABET B \mid Z_t = \info_t(h_t)).
\]
A similar definition is also used in \cite{FrancoisLavet2019}. We had
presented a definition similar to Definition~\ref{def:info_state} in the
preliminary version of this paper \citep{CDC}.
The notion of information state is also related to $\Gamma$-trace equivalence
for MDPs and POMDPs defined by \cite{CastroPanangadenPrecup_2009}. For MDPs,
$\Gamma$-trace equivalence takes a partition of the state space and returns a
finer partition such that for any choice of future actions any two states in
the same cell of the finer partition have the same distribution on future
states and rewards. \cite{CastroPanangadenPrecup_2009} show that recursive
applications of $\Gamma$-trace equivalence have a fixed point, which is
equivalent to the bisimulation-based partition \citep{Givan_2003} of the state
space of the MDP. Similar results were shown for MDPs in
\cite{FernsPanangadenPrecup_2004, FernsPanangadenPrecup_2011}.
\cite{CastroPanangadenPrecup_2009} extend the notion of trace equivalence for
MDPs to belief trajectory equivalence for POMDPs. In particular, two belief
states are said to be belief trajectory equivalent if for any choice of future
actions, they generate the same distribution on future observations and
rewards. Such belief trajectory equivalence is related to predictive state
representation (PSR) \citep{LittmanSutton_2002,
SinghLittmanJongPardoeStone_2003, Izadi2003, James2004, RosencrantzGordonThrun_2004, WolfeJamesSingh_2005} and observable operator models (OOM) \citep{Jaeger2000,
Jaeger2006}, which are a compression of the past history which is
sufficient to predict the future observations (but not necessarily rewards).
Information state may be viewed as a ``Markovianized'' version of belief
trajectory equivalence and PSRs, which has the advantage that both (P1) and
(P2) are defined in terms of ``one-step'' equivalence while belief trajectory
equivalence and PSR are defined in terms of ``entire future trajectory''
equivalence. It should be noted that PSR and bisimulation based equivalences
are defined for infinite horizon models, while the information state is
defined for both finite and infinite horizon models (see
Sec.~\ref{sec:infinite}).
Another related notion is that of causal states (or
$\varepsilon$-machines) used in computational
mechanics~\citep{Crutchfield:1989,Crutchfield:2001} and forecasting in
dynamical systems~\citep{Grassberger:1986, Grassberger:1988}. These
definitions are for uncontrolled Markov chains and the emphasis is on the
minimal state representation for time-invariant infinite-horizon systems.
\section{AIS-based PORL algorithm}
\label{sec:main_algorithm}
In this section, we describe the pseudocode for the algorithm followed in the experiments. Note that we do not use a critic (as in Fig. \ref{fig:network}) for simplicity. In the following, the learning rates $a_k$ and $b_k$ can be selected using Adam or any other gradient-based method.
\begin{algorithm}
\caption{AIS-based PORL algorithm}\label{euclid}
\textbf{Input:} Initial AIS-Generator: $(\ainfo, \nextinfo, \rewinfo)_{\bar \xi_0}$,
Initial Policy: $\pi_{\polPars_0}$,
Discount factor: $\gamma$,
\\
\phantom{\textbf{Input:}}
Reward weight: $\lambda$,
Number of episodes: $K$,
AIS-LR: $\{a_k\}_{k=1}^K$,
Policy-LR: $\{b_k\}_{k=1}^K$.
\\
\textbf{Output:}
Learned policy: $\pi_{\polPars_K}$,
Learned AIS-generator: $(\ainfo, \nextinfo, \rewinfo)_{\bar \xi_K}$
\begin{algorithmic}[1]
\Procedure{AIS-based PORL}{}
\ForAll {$k \in \{1, \dots, K\}$}
\State Reset environment and perform an episode using
$\pi_{\polPars_{k-1}}, (\ainfo, \nextinfo, \rewinfo)_{\bar \xi_{k-1}}$.
\State $A_{1:T}, Y_{1:T}, \Rew_{1:T} \gets$ Actions, observations, and
rewards for episode~$k$.
\State Compute AIS loss using $A_{1:T}, Y_{1:T}, \Rew_{1:T},
\lambda, (\ainfo, \nextinfo, \rewinfo)_{\bar \xi_{k-1}}$
using Eq.~\eqref{eq:loss-KL-2} or~\eqref{eq:loss-2}
\State Compute policy loss using $A_{1:T}, Y_{1:T}, \Rew_{1:T},
\gamma, \pi_{\polPars_{k-1}}, (\ainfo)_{\bar \xi_{k-1}}$ using
Eq.~\eqref{eq:GPOMDP_update}
\State Update AIS parameters $\bar \xi_{k-1}$ and policy
parameters $\polPars_{k-1}$ using Eq.~\eqref{eq:ais_a}
\EndFor \EndProcedure
\end{algorithmic}
\end{algorithm}
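To complement the pseudocode, the fragment below sketches one possible PyTorch-style realization of a single iteration of the loop. It is schematic only: the modules \texttt{ais\_generator} and \texttt{policy}, the surrogate losses, and all interface names are placeholders, and the losses shown merely stand in for Eqs.~\eqref{eq:loss-KL-2}, \eqref{eq:loss-2}, and~\eqref{eq:GPOMDP_update} rather than reproducing them.
\begin{verbatim}
# Schematic single training iteration (PyTorch-style); illustrative only.
import torch

def train_step(ais_generator, policy, ais_opt, pol_opt, episode, lam, gamma):
    # episode: list of (observation, action, reward) tuples from one rollout
    obs  = torch.stack([torch.as_tensor(o, dtype=torch.float32) for o, _, _ in episode])
    acts = torch.tensor([a for _, a, _ in episode])
    rews = torch.tensor([r for _, _, r in episode], dtype=torch.float32)

    # Compress the history into AIS states z_hat_t (recurrent ais_generator).
    z_hat, pred_rew, pred_next = ais_generator(obs, acts)

    # AIS loss: reward-prediction error plus a distribution-matching term
    # (a stand-in for Eq. (eq:loss-KL-2) / (eq:loss-2)).
    ais_loss = lam * torch.mean((pred_rew - rews) ** 2) \
               + (1.0 - lam) * ais_generator.prediction_loss(pred_next, z_hat)
    ais_opt.zero_grad(); ais_loss.backward(); ais_opt.step()

    # Policy-gradient loss (a stand-in for Eq. (eq:GPOMDP_update)), no critic.
    returns = torch.zeros_like(rews)
    running = 0.0
    for t in reversed(range(len(rews))):
        running = rews[t] + gamma * running
        returns[t] = running
    log_probs = policy.log_prob(z_hat.detach(), acts)
    pol_loss = -(log_probs * returns).mean()
    pol_opt.zero_grad(); pol_loss.backward(); pol_opt.step()
\end{verbatim}
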
\section{Introduction}
We consider data structures supporting order-based operations such as rank, select, membership, predecessor, successor, minimum, and maximum while providing dynamic operations insert, delete, change-key, split, and merge. The classic solution is the binary search tree (BST), perhaps the most fundamental data structure in computer science. The original unbalanced structure is credited to papers by Booth and Colin~\cite{Booth60}, Douglas~\cite{Douglas59}, Windley~\cite{Windley60}, and Hibbard~\cite{Hibbard62} in the early 1960's. Since then, a plethora of \emph{balanced} binary search tree data structures have been proposed~\cite{Adelson62,Bayer72,Bayer72b,Andersson89,Galperin93,Sleator85,Seidel96,NIEVER72}, notable examples including AVL trees~\cite{Adelson62}, red-black trees~\cite{Bayer72}, and splay trees~\cite{Sleator85}. A balanced binary search tree is a staple data structure included in nearly all major programming language's standard libraries and nearly every undergraduate computer science curriculum. The data structure is the dynamic equivalent of binary search in an array, allowing searches to be performed on a changing set of keys at nearly the same cost. Extending to multiple dimensions, the binary search tree is the base data structure on which range trees~\cite{Bentley79}, segment trees~\cite{Bentley77}, interval trees~\cite{Edelsbrunner80,McCreight80}, $k$d-trees~\cite{Bentley75}, and priority search trees~\cite{McCreight85} are all built.
The theory community has long focused on developing binary search trees with efficient \emph{query} times. Although $\Omega(\log n)$ is the worst-case time complexity of a query, on non-uniform access sequences binary search trees can perform better than logarithmic time per query by, for example, storing recently accessed elements closer to the root. The splay tree was devised as a particularly powerful data structure for this purpose~\cite{Sleator85}, achieving desirable access theorems such as static optimality, working set, scanning theorem, static finger, and dynamic finger~\cite{Sleator85,Cole2000a,Cole2000b}. The most famous performance statement about the splay tree, however, is the unproven dynamic optimality conjecture, which claims that the performance of the splay tree is within a constant factor of any binary search tree on any sufficiently long access sequence, subsuming all other access theorems. Proving this conjecture is widely considered one of the most important open problems in theoretical computer science, receiving vast attention by data structure researchers~\cite{Allen78,Demaine09,Demaine07,Iacono16,Bose2020,Sleator83,Wilber89,Kozma2019,Chalermsook15,Iacono01,Badoiu06}. Despite ultimately remaining unsolved for nearly four decades, this topic continues to receive extensive treatment~\cite{Iacono16,Bose2020,Chalermsook15,Levy19,Badoiu06}. %
Although widely considered for the task in literature, the binary search tree is not the most efficient data structure for the standard dictionary abstract data type: in practice, dictionaries are almost always implemented by hash tables, which support $O(1)$ time insert, delete, and look-up in expectation~\cite{Fredman84,Pagh04}. The advantage of binary search trees, over hash tables, is that they support \emph{order-based} operations. We call dictionaries of this type \emph{sorted dictionaries}, to differentiate them from the simpler data structures supporting only membership queries.
If we limit the order-based operations required of our sorted dictionary to queries for the minimum or maximum element (or both), a number of alternative solutions to the binary search tree have been developed, known as priority queues. The first of which was the binary heap, invented in 1964 for the heapsort algorithm~\cite{Williams64}. The binary heap achieves asymptotic complexity equivalent to a binary search tree, though due to the storage of data in an array and fast average-case complexity, it is typically the most efficient priority queue in practice. Later, the invention of the binomial heap showed that the merging of two arbitrary priority queues could be supported efficiently~\cite{Vuillemin78,Brown78}, thus proving that the smaller operation set of a priority queue allows more efficient runtimes. The extent to which priority queues can outperform binary search trees was fully realized with the invention of Fibonacci heaps, which showed insertion, merge, and an additional decrease-key operation can all be supported in $O(1)$ amortized time~\cite{Fredman87}. Since then, a number of priority queues with running times close to or matching Fibonacci heaps have been developed~\cite{Fredman86,Chan09,Brodal12,Elmasry09,Haeupler11,Brodal96,Hansen15}. We refer to such priority queues with $o(\log n)$ insertion and decrease-key costs as \textit{efficient} priority queues, to distinguish them from their predecessors and typically simpler counterparts with $O(\log n)$ insertion and/or decrease-key cost.
The history of efficient priority queues contrasts that of binary search trees. Efficient priority queues have been developed for the case when the number of queries is significantly less than the number of insertions or updates. On the other hand, research on binary search trees has focused on long sequences of element \emph{access}. %
Indeed, the dynamic optimality conjecture starts with the assumption that $n$ elements are already present in the binary search tree, placing any performance improvements by considering insertion cost entirely outside of the model. However, the theory of efficient priority queues shows that on some operation sequences, the efficiency gains due to considering insertion cost can be as much as a $\Theta(\log n)$ factor, showing an as-of-yet untapped area of potential optimization for data structures supporting the operations of a binary search tree. Further, aside from the theoretically-appealing possibility of the unification of the theories of efficient priority queues and binary search trees, the practicality of improved insertion performance is arguably greater than that of improved access times.
For the purpose of maintaining keys in a database, for example, an insert-efficient data structure can provide superior runtimes when the number of insertions dominates the number of queries, a scenario that is certainly the case for some applications~\cite{ONeil96,Brodal03} and is, perhaps, more likely in general.
Yet in spite of these observations, almost no research has been conducted that seriously explores this frontier~\cite{Bose13}.
We attempt to bridge this gap. We seek a general theory of comparison-based sorted dictionaries that encompasses efficient priority queues and binary search trees, providing the operational flexibility of the latter with the efficiency of the former, when possible.
We do not restrict ourselves to any particular BST or heap model; while these models with their stronger lower bounds are theoretically informative, for the algorithm designer these lower bounds in artificially constrained models are merely indications of what \emph{not} to try. If we believe in the long-term goal of improving algorithms and data structures in practice~-- an objective we think will be shared by the theoretical computer science community at large~-- we must also seek the comparison with lower bounds in a more permissive model of computation.%
We present \emph{lazy search trees}. The lazy search tree is the first data structure to support the general operations of a binary search tree while providing superior insertion time when permitted by query distribution. We show that the theory of efficient priority queues can be generalized to support queries for any rank, via a connection with the multiple selection problem. Instead of sorting elements upon insertion, as does a binary search tree, the lazy search
tree delays sorting to be completed incrementally while queries are answered. %
A binary search tree and an efficient priority queue are special cases of our data structure that result when queries are frequent and uniformly distributed or only for the minimum or maximum element, respectively. While previous work has considered binary search trees in a ``lazy" setting (known as ``deferred data structures'')~\cite{Karp88,CHING90} and multiple selection in a dynamic setting~\cite{Barbay15,BARBAY16}, no existing attempts fully distinguish between insertion and query operations, severely limiting the generality of their approaches. The model we consider gives all existing results as corollaries, unifying several research directions and providing more efficient runtimes in many cases, all with the use of a single data structure.
Before we can precisely state our results, we must formalize the model in which they are attained.
\subsection{Model and Results}
\label{sec:model}
We consider comparison-based data structures on the pointer machine. While we suggest the use of arrays in the implementation of our data structure in practice, constant time array access is not needed for our results. Limiting operations to a pointer machine has been seen as an important property in the study of efficient priority queues, particularly in the invention of strict Fibonacci heaps~\cite{Brodal12} compared to an earlier data structure with the same worst-case time complexities~\cite{Brodal96}.
We consider data structures supporting the following operations on a dynamic multiset $S$ with
(current) size $n = |S|$. We call such data structures \emph{sorted dictionaries}:
\begin{itemize}
\item \texttt{Construction($S$)} $\ce$ Construct a sorted dictionary on the set $S$.
\item \texttt{Insert($e$)} $\ce$ Add element $e=(k,v)$ to $S$, using key $k$ for comparisons;
(this increments $n$).
\item \texttt{RankBasedQuery($r$)} $\ce$ Perform a rank-based query pertaining to rank $r$ on $S$.
\item \texttt{Delete(ptr)} $\ce$ Delete the element pointed to by \texttt{ptr} from $S$; (this decrements $n$).
\item \texttt{ChangeKey(ptr,\,$k'$)} $\ce$ Change the key of the element pointed to by \texttt{ptr} to $k'$.
\item \twopart{\texttt{Split($r$)} $\ce$ }{Split $S$ at rank $r$, returning two sorted dictionaries $T_1$ and $T_2$ of $r$ and $n-r$ elements, respectively, such that for all $x \in T_1$, $y \in T_2$, $x \leq y$.}
\item \twopart{\texttt{Merge($T_1$,\,$T_2$)} $\ce$ }{Merge sorted dictionaries $T_1$ and $T_2$ and return the result, given that for all $x \in T_1$, $y \in T_2$, $x \leq y$.}
\end{itemize}
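To make the operation set concrete, the following is a minimal Python stub of the interface above; the class and method names are ours and purely illustrative, not part of the formal model.
\begin{verbatim}
# Illustrative stub of the sorted-dictionary interface (names are ours).
class SortedDictionary:
    def __init__(self, items):             # Construction(S)
        ...
    def insert(self, key, value):           # Insert(e); returns a handle ("ptr")
        ...
    def rank_based_query(self, r):          # RankBasedQuery(r)
        ...
    def delete(self, ptr):                  # Delete(ptr)
        ...
    def change_key(self, ptr, new_key):     # ChangeKey(ptr, k')
        ...
    def split(self, r):                     # Split(r) -> (T1, T2)
        ...
    @staticmethod
    def merge(t1, t2):                      # Merge(T1, T2); requires max(T1) <= min(T2)
        ...
\end{verbatim}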
We formalize what queries are possible within the stated operation \texttt{RankBasedQuery($r$)} in \wref{sec:prelim}. For now, we informally define a rank-based query as any query computable in $O(\log n)$ time on a (possibly augmented) binary search tree and in $O(n)$ time on an unsorted array. Operations rank, select, contains, successor, predecessor, minimum, and maximum fit our definition. To each operation, we associate a rank $r$: for membership and rank queries, $r$ is the rank of the queried element (in the case of duplicate elements, an implementation can break ties arbitrarily), and for select, successor, and predecessor queries, $r$ is the rank of the element returned; minimum and maximum queries are special cases of select with $r = 1$ and $r = n$, respectively.
The idea of lazy search trees is to maintain a partition of current elements in the data structure into what we will call \emph{gaps}. We maintain a set of $m$ gaps $\{\Delta_i\}$, $1 \leq i \leq m$, where a gap $\Delta_i$ contains a bag of elements. Gaps satisfy a total order, so that for any elements $x \in \Delta_i$ and $y \in \Delta_{i+1}$, $x \leq y$. Internally, we will maintain structure within a gap, but the interface of the data structure and the complexity of the operations is based on the distribution of elements into gaps, assuming nothing about the order of elements within a gap. Intuitively, binary search trees fit into our framework by restricting $|\Delta_i| = 1$, so each element is in a gap of its own, and we will see that priority queues correspond to a single gap $\Delta_1$ which contains all elements. Multiple selection corresponds to gaps where each selected rank marks a separation between adjacent gaps.
To insert an element $e = (k, v)$, where $k$ is its key and $v$ its value, we find a gap $\Delta_i$ in which it belongs without violating the total order of gaps (if $x \leq k$ for all $x \in \Delta_i$ and $k \leq y$ for all $y \in \Delta_{i+1}$, we may place $e$ in either $\Delta_i$ or $\Delta_{i+1}$; implementations can make either choice). Deletions remove an element from a gap; if the gap is now empty we can remove the gap. When we perform a query, we first narrow the search down to the gap $\Delta_i$ in which the query rank $r$ falls (formally, $\sum_{j=1}^{i-1} |\Delta_j| < r \leq \sum_{j=1}^i |\Delta_j|$). We then answer the query using the elements of $\Delta_i$ and \emph{restructure} the gaps in the process.
We split gap $\Delta_i$ into two gaps $\Delta'_i$ and $\Delta'_{i+1}$ such that the total order on gaps is satisfied and the rank $r$ element is either the largest in gap $\Delta'_i$ or the smallest in gap $\Delta'_{i+1}$; specifically, either $|\Delta'_i| + \sum_{j=1}^{i-1} |\Delta_j| = r$ or $|\Delta'_i| + \sum_{j=1}^{i-1} |\Delta_j| = r-1$. (Again, implementations can take either choice. We will assume a maximum query to take the latter choice and all other queries the former. More on the choice of $r$ for a given query is discussed in \wref{sec:prelim}. Our analysis will assume two new gaps replace a former gap as a result of each query. Duplicate queries or queries that fall in a gap of size one follow similarly, in $O(\log n)$ time.) We allow duplicate insertions.
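The following Python sketch illustrates this gap-level behavior; it represents each gap as a plain unsorted list and uses naive linear scans and sorting, so it captures the interface and the splitting behavior but none of the efficiency of the data structure developed in \wref{sec:ds}. All names are ours.
\begin{verbatim}
# Conceptual model only: gaps are unsorted lists kept in left-to-right order.
class GapModel:
    def __init__(self, items=()):
        self.gaps = [list(items)] if items else []

    def insert(self, key):
        if not self.gaps:
            self.gaps = [[key]]
            return
        for gap in self.gaps:            # first gap whose maximum is >= key
            if key <= max(gap):          # placing key here keeps gaps ordered
                gap.append(key)
                return
        self.gaps[-1].append(key)        # key exceeds every element seen so far

    def select(self, r):
        """Return the rank-r element (1-based) and split its gap at r."""
        i, before = 0, 0
        while before + len(self.gaps[i]) < r:
            before += len(self.gaps[i])
            i += 1
        ordered = sorted(self.gaps[i])   # naive; the real structure avoids this
        k = r - before
        left, right = ordered[:k], ordered[k:]
        self.gaps[i:i+1] = [g for g in (left, right) if g]
        return left[-1]                  # rank-r element, largest in the left gap
\end{verbatim}
For example, after \texttt{GapModel(range(10)).select(1)}, the minimum sits in a gap of its own and all later insertions of larger keys fall into the single remaining large gap, mirroring the priority-queue special case.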
Our performance theorem is the following.
\begin{theorem}[Lazy search tree runtimes]
\label{thm:main}
Let $n$ be the total number of elements currently in the data structure and let $\{\Delta_i\}$ be defined as above (thus $\sum_{i=1}^m |\Delta_i| = n$). Let $q$ denote the total number of queries. Lazy search trees support the operations of a sorted dictionary on a dynamic set $S$ in the following runtimes:
\begin{itemize}
\item {\normalfont \texttt{Construction($S$)}}
in $O(n)$ worst-case time, where $|S| = n$.
\item {\normalfont \texttt{Insert($e$)}}
in $O(\min(\log(n/|\Delta_i|) + \log \log |\Delta_i|,\: \log q))$ worst-case time%
\footnote{\label{fn:note1}
To simplify formulas, we distinguish between $\log_2(x)$, the binary logarithm for any $x > 0$,
and $\log(x)$, which we define as $\max(\log_2(x),1)$.
%
}%
, where $e = (k,v)$ is such that $k \in \Delta_i$.
\item {\normalfont \texttt{RankBasedQuery($r$)}}
in $O(x \log c + \log n)$ amortized time, where the larger resulting gap from the split is of size $cx$ and the other gap is of size $x$.
\item {\normalfont \texttt{Delete(ptr)}}
in $O(\log n)$ worst-case time.
\item {\normalfont \texttt{ChangeKey(ptr,\,$k'$)}}
in $O(\min(\log q,\, \log \log |\Delta_i|))$ worst-case time, where the element pointed to by $\mathtt{ptr}$, $e=(k,v)$, has $k \in \Delta_i$ and $k'$ moves $e$ closer to its closest query rank%
\footnote{\label{fn:note2}
The closest query rank of $e$ is the closest boundary of $\Delta_i$ that was created in response to a query. For gaps $\Delta_i$ with $1\neq i \neq m$, this is the boundary of $\Delta_i$ that is closer with respect to the rank of $k$. Gaps $\Delta_1$ and $\Delta_m$ may follow similarly to $i \neq 1, m$ if a minimum or maximum has been extracted. With a single gap $\Delta_1$, increase-key is supported efficiently if maximums have been removed and decrease-key is supported efficiently if minimums have been removed. If both have been removed, the gap functions as in the general case for $i \neq 1, m$. Intuitively, this is configured to support the behavior of decrease-key/increase-key without special casing when the data structure is used as a min-heap/max-heap.
%
%
%
%
%
%
} in $\Delta_i$;
otherwise, {\normalfont \texttt{ChangeKey(ptr,\,$k'$)}} takes $O(\log n)$ worst-case time.
\item {\normalfont \texttt{Split($r$)}}
in time according to {\normalfont \texttt{RankBasedQuery($r$)}}.
\item {\normalfont \texttt{Merge($T_1$,\,$T_2$)}}
in $O(\log n)$ worst-case time.
\end{itemize}
Define $B = \sum_{i=1}^m |\Delta_i| \log_2(n/|\Delta_i|)$. Then over a series of insertions and queries with no duplicate queries, the total complexity is $O(B + \min(n \log \log n, n \log q))$.
\end{theorem}
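For intuition, consider the two extremes of the bound in \wref{thm:main} (a sanity check only, not an additional claim). If no queries are made, all elements lie in a single gap, so
\[
B = n \log_2(n/n) = 0,
\]
and the total complexity collapses to the additive term $O(\min(n \log \log n,\, n \log q))$, which is $O(n)$ for $q = O(1)$; if a distinct query is made at every rank, all gaps are singletons, so
\[
B = \sum_{i=1}^{n} 1 \cdot \log_2(n/1) = n \log_2 n,
\]
recovering the usual binary search tree bound.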
We can also bound the number of pointers needed in the data structure.
\begin{theorem}[Pointers]
\label{thm:qbounds}
An array-based lazy search tree implementation requires $O(\min(q, n))$ pointers.
\end{theorem}
By reducing multiple selection to the sorted dictionary problem, we can show the following lower bound.
\begin{theorem}[Lower bound]
\label{thm:lb}
Suppose we process a sequence of operations resulting in gaps $\{\Delta_i\}$. Again define $B = \sum_{i=1}^m |\Delta_i| \log_2(n/|\Delta_i|)$. Then this sequence of operations requires $B-O(n)$ comparisons and $\Omega(B + n)$ time in the worst case.
\end{theorem}
\wref{thm:lb} indicates that lazy search trees are at most an additive $O(\min(n \log \log n, n \log q))$ term from optimality over a series of insertions and distinct queries. This gives a lower bound on the per-operation complexity of {\normalfont \texttt{RankBasedQuery($r$)}} of $\Omega(x \log c)$; the bound can be extended to $\Omega(x \log c + \log n)$ if we amortize the total work required of splitting gaps to each individual query operation. A lower bound of $\Omega(\min(\log(n/|\Delta_i|), \log m))$ can be established on insertion complexity via information theory. We describe all lower bounds in \wref{sec:bounds}.
We give specific examples of how lazy search trees can be used and how to analyze its complexity according to \wref{thm:main} in the following subsection.
\subsection{Example Scenarios}
\label{examples}
Below, we give examples of how the performance of \wref{thm:main} is realized in different operation sequences. While tailor-made data structures for many of these applications are available, lazy search trees provide a \emph{single} data structure that seamlessly adapts to the actual usage pattern while achieving optimal or near-optimal performance for all scenarios in a uniform way.
\begin{enumerate}
\item \textbf{Few Queries:} The bound $B = \sum_{i=1}^{m} |\Delta_i| \log_2 (n/|\Delta_i|)$ satisfies $B = O(n \log q + q \log n)$. In the worst case, queries are uniformly distributed, and the lower bound $B = \Theta(n \log q + q \log n)$. Over a sequence of insertions and queries without duplicate queries, our performance is optimal $O(n \log q + q \log n)$. If $q = n^\epsilon$ for constant $\epsilon > 0$, lazy search trees improve upon binary search trees by a factor $1/\epsilon$. If $q = O(\log^c n)$ for some $c$, lazy search trees serve the operation sequence in $O(cn \log \log n)$ time and if $q = O(1)$, lazy search trees serve the operation sequence in linear time. Although it is not very difficult to modify a previous ``deferred data structure'' to answer a sequence of $n$ insertions and $q$ queries in $O(n \log q + q \log n)$ time (see \wref{sec:deferred-data-structures}),
to the best of our knowledge, such a result has not appeared in the literature.
\item \textbf{Clustered Queries:} Suppose the operation sequence consists of $q/k$ ``range queries'', each requesting $k$ consecutive keys, with interspersed insertions following a uniform distribution. Here, $B = O(n \log (q/k) + q \log n)$, where $q$ is the total number of keys requested.
If the queried ranges are uniformly distributed, $B = \Theta(n \log (q/k) + q \log n)$, with better results possible if the range queries are non-uniform. Our performance on this operation sequence is $O(B + \min(n \log \log n, n \log q))$, tight with the lower bound if $k = \Theta(1)$ or $q/k = \Omega(\log n)$. Similarly to Scenario 1, we pay $O(n \log (q/k))$ in total for the first query of each batch; however, each successive query in a batch costs only $O(\log n)$ time as the smaller resulting gap of the query contains only a single element. We will see in \wref{sec:bounds} that we must indeed pay $\Omega(\log n)$ amortized time per query in the worst case; again our advantage is to reduce insertion costs.
Note that if an element is inserted within the elements of a previously queried batch, these insertions take $O(\log n)$ time. However, assuming a uniform distribution of element insertion throughout, this occurs on only an $O(q/n)$ fraction of insertions in expectation, at total cost $O(n \cdot q/n \cdot \log n) = O(q \log n)$.
Other insertions only need an overall $O(n \log (q/k) + \min(n \log \log n, n \log q))$ time.
%
\item \textbf{Selectable Priority Queue:} If every query is for a minimum element, each query takes $O(\log n)$ time and separates the minimum element into its own gap and all other elements into another single gap. Removal of the minimum destroys the gap containing the minimum element, leaving the data structure with a single gap $\Delta_1$. All inserted elements fall into this single gap, implying insertions take $O(\log \log n)$ time. Further, the \texttt{ChangeKey(ptr,\,$k'$)} operation supports decrease-key in $O(\log \log n)$ time, since all queries (and thus the closest query) are for rank $1$. Queries for other ranks are also supported, though if queried, these ranks are introduced into the analysis, creating more gaps and potentially slowing future insertion and decrease-key operations, though speeding up future selections. The cost of a selection is $O(x \log c + \log n)$ amortized time, where $x$ is the distance from the rank selected to the nearest gap boundary (which was created at the rank of a previous selection) and $c = |\Delta_i|/x - 1$, where the selection falls in gap $\Delta_i$. If no selections have previously occurred, $x$ is the smaller of the rank or $n$ minus the rank selected and $c = n/x - 1$.
%
%
Interestingly, finding the $k$th smallest element in a binary min-heap can be done in $O(k)$ time~\cite{Frederickson93}, yet we claim our runtime is optimal! The reason is that neither runtime dominates in an amortized sense over the course of $n$ insertions. Our lower bound indicates that $\Omega(B + n)$ time must be taken over the course of multiple selections on $n$ elements in the worst case. %
In Frederickson's algorithm, the speed is achievable because a binary heap is more structured than an unprocessed set of $n$ elements and only a single selection is performed; the ability to perform further selection on the resulting pieces is not supported.
On close examination, lazy search trees can be made to answer the selection query alone without creating additional gaps in $O(x + \log n)$ amortized time or only $O(x)$ time given a pointer to the gap in which the query falls (such modification requires fulfilling \wref[Rules]{rule:merge} and~\ref{inv:credits} on category $A$ intervals in \wref{sec:queryanalysis}).
%
\item \textbf{Double-Ended Priority Queue:} If every query is for the minimum or maximum element, again each query takes $O(\log n)$ time and will separate either the minimum or maximum element into its own gap and all other elements into another single gap. The new gap is destroyed when the minimum or maximum is extracted. As there is only one gap $\Delta_1$ when insertions occur, insertions take $O(\log \log n)$ time. In this case, our data structure natively supports an $O(\log \log n)$ time decrease-key operation for keys of rank $n/2$ or less and an $O(\log \log n)$ time increase-key operation for keys of rank greater than $n/2$. Further flexibility of the change-key operation is discussed in \wref{sec:changekeyanalysis}.
\item \textbf{Online Dynamic Multiple Selection:} Suppose the data structure is first constructed on $n$ elements. (A close analysis of insert in \wref{sec:insertanalysis} shows that alternatively, we can construct the data structure on an empty set and achieve $O(1)$ time insertion before a query is performed.) After construction, a set of ranks $\{r_i\}$ are selected, specified online and in any order. Lazy search trees will support this selection in $O(B)$ time, where $B = \sum_{i=1}^m |\Delta_i| \log_2 (n/|\Delta_i|)$ is the lower bound for the multiple selection problem~\cite{KMMS}. We can further support additional insertions, deletions and queries. Data structures for online dynamic multiple selection were previously known~\cite{Barbay15,BARBAY16}, but the way we handle dynamism is more efficient, allowing for all the use cases mentioned here. We discuss this in \wref{sec:related}.
\item \textbf{Split By Rank:} Lazy search trees can function as a data structure for repeated splitting by rank, supporting construction on an initial set of $n$ elements in $O(n)$ time, insertion into a piece of size $n$ in $O(\log \log n)$ time, and all splitting within a constant factor of the information-theoretic lower bound. Here, the idea is that we would like to support the operations insert and split at rank $r$, returning two pieces of a data structure of the same form. In a sense, this is a generalization of priority queues, where instead of extracting the minimum, we may extract the $k$ smallest elements, retaining the ability to perform further extractions on either of the two pieces returned. %
As in scenario 3, the cost of splitting is $O(x \log c + \log n)$, where $x$ is the number of elements in the smaller resulting piece of the split, and we define $c$ so that the number of elements in the larger resulting piece of the split is $cx$. Again, $O(x \log c + \log n)$ is optimal. Note that we could also use an $O(\log \log n)$ time change-key operation for this application, though this time complexity only applies when elements are moved closer to the nearest split rank. If repeatedly extracting the $k$ smallest elements is desired, this corresponds to an $O(\log \log n)$ time decrease-key operation.
\item \textbf{Incremental Quicksort:} A version of our data structure can perform splits internally via selecting random pivots with expected time complexity matching the bounds given in \wref{thm:main}. (We initially describe a version using exact selection, which is conceptually simpler but less practical.) The data structure can then be used to extract the $q$ smallest elements in sorted order, online in $q$, via an incremental quicksort (a stand-alone sketch of this idea follows the list). Here, $B = \Theta(q \log n)$ and our overall time complexity is $O(n + q \log n)$, which is optimal up to constant factors\footnote{Note that $n + q \log n = \Theta(n + q \log q)$. If the $q \log n$ term dominates, $q = \Omega(n/\log n)$ and so $\log n = \Theta(\log q)$.}. Previous algorithms for incremental sorting are known~\cite{Paredes06,Navarro10,Regla15,Aydin15}; however, our algorithm is extremely flexible, progressively sorting any part of the array in optimal time $O(B + n)$ while also supporting insertion, deletion, and efficient change-key. The heap operations insert and decrease-key are performed in $O(\log \log n)$ time instead of $O(\log n)$, compared to existing heaps based on incremental sorting~\cite{Navarro08,Navarro10}; see also~\cite{Edelkamp12,Brodal09}. Our data structure also uses only $O(\min(q, n))$ pointers, providing many of the same advantages as sorting-based heaps. A more-complicated priority queue based on similar ideas to ours achieves Fibonacci heap amortized complexity with only a single extra word of space~\cite{Mortensen05}.
\end{enumerate}
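The following self-contained Python sketch illustrates the incremental-quicksort idea of scenario 7: partitioning work is deferred and performed only as far as the requested output demands. It is a stand-alone illustration with names of our choosing, not the lazy search tree itself (in particular, it supports no insertions or change-key).
\begin{verbatim}
import random

def incremental_smallest(items, q):
    """Yield the q smallest elements of items in sorted order, lazily."""
    elements = list(items)
    stack = [elements] if elements else []  # unsorted segments, smallest keys on top
    for _ in range(min(q, len(elements))):
        while len(stack[-1]) > 1:    # refine the top segment until a minimum is isolated
            seg = stack.pop()
            pivot = random.choice(seg)
            less = [x for x in seg if x < pivot]
            rest = [x for x in seg if x >= pivot]
            if less:
                stack.append(rest)   # larger part stays unsorted until needed
                stack.append(less)
            else:                    # pivot is a minimum of seg
                rest.remove(pivot)   # rest keeps the remaining elements (>= pivot)
                stack.append(rest)
                stack.append([pivot])
        yield stack.pop()[0]
\end{verbatim}
For example, \texttt{list(incremental\_smallest([5, 2, 8, 1, 9, 3], 3))} returns \texttt{[1, 2, 3]} while leaving the larger elements only partially partitioned.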
%
We discuss the advantages and disadvantages of our model and data structure in the following subsections.
\subsection{Advantages}
\label{pro}
The advantages of lazy search trees are as follows:
\begin{enumerate}
\item Superior runtimes to binary search trees can be achieved when queries are infrequent or non-uniformly distributed.
\item A larger operation set, with the notable exception of efficient general merging, is made possible when used as a priority queue, supporting operations within an additive $O(n \log \log n)$ term of optimality, in our model.
\item Lazy search trees can be implemented to use only $O(\min(q, n))$ pointers, operating mostly on arrays. This suggests smaller memory footprint, better constant factors, and better cache performance compared to many existing efficient priority queues or binary search trees. Our data structure is not built on the heap-ordered tree blueprint followed by many existing efficient priority queues~\cite{Fredman87,Fredman86,Chan09,Brodal12,Elmasry09,Haeupler11,Brodal96,Hansen15}. Instead, we develop a simple scheme based on unordered lists that may be of independent interest. In particular, we are hopeful our data structure or adaptations thereof may provide a theoretically-efficient priority queue that gets around the practical inefficiencies associated with Fibonacci heaps~\cite{Fredman87} and their derivatives.
\item While not a corollary of the model we consider, lazy search trees can be made to satisfy all the access-time performance theorems satisfied by splay trees. In this way, lazy search trees can be a powerful alternative to the splay tree. Locality of access can decrease both access and insertion times. This is discussed in \wref{sec:splay}.
%
\end{enumerate}
\subsection{Disadvantages}
\label{sec:con}
The weaknesses of lazy search trees are as follows:
\begin{enumerate}
\item Our gap-based model requires inserted elements be placed in a gap immediately instead of delaying all insertion work until deemed truly necessary by query operations. In particular, a more powerful model would ensure that the number of comparisons performed on an inserted element depends only on the queries executed \emph{after} that element is inserted. There are operation sequences where this can make a $\Theta(\log n)$ factor difference in overall time complexity, but it is not clear whether this property is important on operation sequences arising in applications.
\item We currently do not know whether the additive $O(\min( n \log q, n\log \log n))$ term in the complexity described in \wref{thm:main} over a sequence of insertions and queries is necessary. Fibonacci heaps and their variants show better performance is achievable in the priority queue setting.
%
In \wref{sec:average}, we show the (essentially) $O(\log \log |\Delta_i|)$ terms for insertion and change-key can be improved to a small constant factor if the (new) rank of the element is drawn uniformly at random from valid ranks in $\Delta_i$. As a priority queue, this corresponds with operation sequences in which binary heaps~\cite{Williams64} provide constant time insertion.
\item The \emph{worst-case} complexity of a single \texttt{RankBasedQuery($r$)} can be $O(n)$. Further, unlike amortized search trees like the splay tree~\cite{Sleator85}, the average case complexity is not necessarily $O(\log n)$. By delaying sorting, our lower bound indicates that we may need to spend $\Theta(n)$ time to answer a query that splits a gap of size $|\Delta_i| = \Theta(n)$ into pieces of size $x$ and $cx$ for $c = \Theta(1)$. Further, aside from an initial $O(\log n)$ time search, the rest of the time spent during query is on writes, so that over the course of the operation sequence the number of writes is $\Theta(B + n)$. In this sense, our algorithm functions more similarly to a lazy quicksort than a red-black tree~\cite{Bayer72}, which requires only $\Theta(n)$ writes regardless of operation sequence.
\end{enumerate}
\subsection{Paper Organization}
We organize the remainder of the paper as follows. In the following section, \wref{sec:related}, we discuss related work. In \wref{sec:technical}, we give a high-level overview of the technical challenge. In \wref{sec:prelim}, we formalize the definition of the queries we support. In \wref{sec:bounds}, we discuss lower bounds in our gap-based model. In \wref{sec:ds}, we show how lazy search trees perform insertions, queries, deletions, and change-key operations. We analyze the costs of these operations in \wref{sec:analysis}. In \wref{sec:bulk}, we explain how binary search tree bulk-update operations split and merge can be performed on lazy search trees. We show in \wref{sec:average} that the complexity of insertion and change-key can be improved with a weak average-case assumption. In \wref{sec:random}, we show that exact selection in our query algorithm can be replaced with randomized pivoting while achieving the same expected time complexity. In \wref{sec:splay}, we show how splay trees can be used with lazy search trees and show that lazy search trees can be made to support efficient access theorems. We give concluding remarks, open problems, and briefly discuss a proof-of-concept implementation in \wref{sec:conclude}.
\section{Related Work}
\label{sec:related}
Lazy search trees unify several distinct research fields. The two largest, as previously discussed, are the design of efficient priority queues and balanced binary search trees. We achieved our result by developing an efficient priority queue and lazy binary search tree simultaneously. There are no directly comparable results to our work, but research in \emph{deferred data structures} and \emph{online dynamic multiple selection} comes closest. We further discuss differences between dynamic optimality and our work.
\subsection{Deferred Data Structures}
\label{sec:deferred-data-structures}
To our knowledge, the idea of deferred data structures was first proposed by Karp, Motwani, and Raghavan in 1988~\cite{Karp88}. Similar ideas have existed in slightly different forms for different problems~\cite{Smid89,Borodin81,Brodal11,Barbay2019,Barbay17,Ar02,Gum01,Aggarwal91}.
The term ``deferred data structure'' has been used more generally for delaying processing of data until queries make it necessary, but we focus on works for one-dimensional data here, as it directly pertains to the problem we consider.
Karp, Motwani and Raghavan~\cite{Karp88} study the problem of answering membership queries on a static, unordered set of $n$ elements in the comparison model. One solution is to construct a binary search tree of the data in $O(n \log n)$ time and then answer each query in $O(\log n)$ time. This is not optimal if the number of queries is small. Alternatively, we could answer each query in $O(n)$ time, but this is clearly not optimal if the number of queries is large. Karp et al. determine the lower bound of $\Omega((n+q) \log (\min(n,q))) = \Omega(n \log q + q \log n)$ time to answer $q$ queries on a static set of $n$ elements in the worst case and develop a data structure that achieves this complexity.
This work was extended in 1990 to a dynamic model. Ching, Mehlhorn, and Smid show that $q'$ membership queries, insertions, and deletions on an initial set of $n_0$ unordered elements can be answered in $O(q' \log (n_0+q') + (n_0+q') \log q') = O(q' \log n_0 + n_0 \log q')$ time~\cite{CHING90}. When membership, insertion, and deletion are considered as the same type of operation, this bound is optimal.
It is not very difficult
(although not explicitly done in~\cite{CHING90}) to modify the result of Ching et al.\ to obtain a data structure supporting $n$ insertions and $q''$ membership or deletion operations in $O(q'' \log n + n \log q'')$ time, the runtime we achieve for uniform queries. We will see in \wref{sec:technical} that the technical difficulty of our result is to achieve the fine-grained complexity based on the query-rank distribution. For more work in one-dimensional deferred data structures, see~\cite{Smid89,Borodin81,Brodal11,Barbay2019,Barbay17,Gum01}.
\subsection{Online Dynamic Multiple Selection}
The optimality of Karp et al.~\cite{Karp88} and Ching et al.~\cite{CHING90} is in a model where the ranks requested of each query are not taken into account. In the multiple selection problem, solutions have been developed that consider this information in the analysis. Suppose we wish to select the elements of ranks $r_1 < r_2 < \cdots < r_q$ amongst a set of $n$ unordered elements. Define $r_0 = 0$, $r_{q+1} = n$, and $\Delta_i$ as the set of elements of rank greater than $r_{i-1}$ and at most $r_i$. Then $|\Delta_i| = r_i - r_{i-1}$ and as in \wref{thm:main}, $B = \sum_{i=1}^m |\Delta_i| \log_2(n/|\Delta_i|)$. The information-theoretic lower bound for multiple selection is $B-O(n)$ comparisons~\cite{Dobkin81}. Solutions have been developed that achieve $O(B + n)$ time complexity~\cite{Dobkin81} or $B + o(B) + O(n)$ comparison complexity~\cite{KMMS}.
The differences between the multiple selection problem and deferred data structuring for one-dimensional data are minor. Typically, deferred data structures are designed for online queries, whereas initial work in multiple selection considered the setting when all query ranks are given at the same time as the unsorted data. Solutions to the multiple selection problem where the ranks $r_1, \ldots, r_q$ are given online and in any order have also been studied, however~\cite{Barbay13}. Barbay et al.~\cite{Barbay15,BARBAY16} further extend this model to a dynamic setting: They consider online dynamic multiple selection where every insertion is preceded by a search for the inserted element. Deletions are ultimately performed in $O(\log n)$ time. Their data structure uses $B + o(B) + O(n + q'\log n)$ comparisons, where $q'$ is the number of search, insert, and delete operations. %
The crucial difference between our solution and that of Barbay et al.~\cite{Barbay15,BARBAY16} is how we handle insertions. Their analysis assumes every insertion is preceded by a search and therefore insertion must take $\Omega(\log n)$ time. Thus, for their result to be meaningful (\ie, allow $o(n \log n)$ performance), the algorithm must start with an initial set of $n_0 = n\pm o(n)$ elements. While Barbay et al.\ focus on online dynamic multiple selection algorithms with near-optimal comparison complexity, the focus of lazy search trees is on generality. We achieve similar complexity as a data structure for online multiple selection while also achieving near-optimal performance as a priority queue. We discuss the technical challenges in achieving this generality in \wref{sec:technical}.
\subsection{Dynamic Optimality}
As mentioned, the dynamic optimality conjecture has received vast attention in the past four decades~\cite{Allen78,Demaine09,Demaine07,Iacono16,Bose2020,Sleator83,Wilber89,Kozma2019,Chalermsook15}. The original statement conjectures that the performance of the splay tree is within a constant factor of the performance of any binary search tree on any sufficiently long access sequence~\cite{Sleator85}. To formalize this statement, in particular the notion of ``any binary search tree'', the BST model of computation has been introduced, forcing the data structure to take the form of a binary tree with access from the root and tree rotations for updates. Dynamic optimality is enticing because it conjectures splay trees~\cite{Sleator85} and a related ``greedy BST''~\cite{Demaine09} to be within a constant factor of optimality on \emph{any} sufficiently long access sequence. This \emph{per-instance} optimality~\cite{Fagin2003} is more powerful than the sense of optimality used in less restricted models, where it is often unattainable. Any sorting algorithm, for example, must take $\Omega(n \log n)$ time in the \emph{worst case}, but on any particular input permutation, an algorithm designed to first check for that specific permutation can sort it in $O(n)$ time: simply apply the inverse permutation and check if the resulting order is monotonic. %
The bounds we give in \wref{sec:bounds} are w.\,r.\,t.\ the \emph{worst case} over operation sequences based on distribution of gaps $\{\Delta_i\}$,
but hold for \emph{any} comparison-based data structure.
Hence, lazy search trees achieve a weaker notion of optimality compared to dynamic optimality,
but do so against a vastly larger class of algorithms.
Since splay trees and greedy BSTs are conjectured dynamically optimal, and since they, like lazy search trees, implement sorted dictionaries, it is insightful to contrast the access theorems of dynamically-optimal BSTs with the improvements given in \wref{thm:main}.
Superficially, the two notions are orthogonal, with dynamic optimality allowing only queries, and
our bound becoming interesting mostly when insertions and queries are mixed.
On the other hand, the form of performance improvements achievable are indeed quite similar,
as the following property shows.
\begin{definition}[Static Optimality~\cite{Knuth73,Allen78,Sleator85}]
\label{def:statico}
Let $S$ denote the set of elements in the data structure and let $q_x$ denote the number of times element $x$ is accessed in a sequence of $m$ accesses. Assume every element is accessed at least once.
A data structure is said to achieve static optimality if the cost to perform any such access sequence is
\[
O(m + \sum_{x \in S} q_x \log(m/q_x)).
\]
\end{definition}
Historically, research into optimal binary search trees started with this notion of static optimality,
and both splay trees and greedy BSTs have been shown to be statically optimal~\cite{Sleator85,Fox2011}.
Contrast the bound given in \wref{def:statico} with the bound $O(B + n)$, where again we define $B = \sum_{i=1}^m |\Delta_i| \log_2(n/|\Delta_i|)$. If we replace $q_x$ and $m$ in \wref{def:statico} with $|\Delta_i|$ and $n$, respectively, they are exactly the same: the savings for query costs arising from repeated accesses with nonuniform access probabilities equal the savings for insertion costs when query ranks are nonuniform.
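Concretely, performing this substitution in \wref{def:statico} turns
\[
O\Big(m + \sum_{x \in S} q_x \log(m/q_x)\Big) \quad\text{into}\quad O\Big(n + \sum_{i} |\Delta_i| \log(n/|\Delta_i|)\Big) = O(B + n).
\]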
\section{Technical Overview}
\label{sec:technical}
This research started with the goal of generalizing a data structure that supports $n$ insertions and $q \leq n$ rank-based queries in $O(n \log q)$ time. Via a reduction from multiple selection, $\Omega(n \log q)$ comparisons are necessary in the worst case. However, by applying the fine-grained analysis based on rank distribution previously employed in the multiple selection literature~\cite{Dobkin81}, a new theory which generalizes efficient priority queues and binary search trees is made possible.
As will be discussed in \wref{sec:bounds}, to achieve optimality on sequences of insertion and distinct queries with regards to the fine-grained multiple selection lower bound, insertion into gap $\Delta_i$ should take $O(\log(n/|\Delta_i|))$ time. A query which splits a gap $\Delta_i$ into two gaps of sizes $x$ and $cx$ ($c \geq 1$), respectively, should take $O(x \log c + \log n)$ time. These complexities are the main goals for the design of the data structure.
The high-level idea will be to maintain elements in a gap $\Delta_i$ in an auxiliary data structure (the \emph{interval data structure} of \wref{sec:ds}). All such auxiliary data structures are then stored in a biased search tree so that access to the $i$th gap $\Delta_i$ is supported in $O(\log(n/|\Delta_i|))$ time. This matches desired insertion complexity and is within the $O(\log n)$ term of query complexity. The main technical difficulty is to support efficient insertion and repeated splitting of the auxiliary data structure.
Our high-level organization is similar to the selectable sloppy heap of Dumitrescu~\cite{Dumitrescu19}. The difference is that while the selectable sloppy heap keeps fixed quantile groups in a balanced search tree and utilizes essentially a linked-list as the auxiliary data structure, in our case the sets of elements stored are dependent on previous query ranks, the search tree is biased, and we require a more sophisticated auxiliary data structure.
Indeed, in the priority queue case, the biased search tree has a single element $\Delta_1$, and all operations take place within the auxiliary data structure. Thus, we ideally would like to support $O(1)$ insertion and $O(x\log c)$ split into parts of size $x$ and $cx$ ($c \geq 1$) in the auxiliary data structure. If the number of elements in the auxiliary data structure is $|\Delta_i|$, we can imagine finding the minimum or maximum as a split with $x = 1$ and $c = |\Delta_i|-1$, taking $O(\log |\Delta_i|)$ time. However, the ability to split at any rank in optimal time complexity is not an operation typically considered for priority queues. Most efficient priority queues store elements in heap-ordered trees, providing efficient access to the minimum element but otherwise imposing intentionally little structure so that insertion, decrease-key, and merging can all be performed efficiently.
Our solution is to group elements within the auxiliary data structure in the following way. We separate elements into groups (``intervals'') of unsorted elements, but the elements between each group satisfy a total order. Our groups are of exponentially increasing size as distance to the gap boundary increases. Within a gap $\Delta_i$, we maintain $O(\log |\Delta_i|)$ such groups. Binary search then allows insertion and key change in $O(\log \log |\Delta_i|)$ time. While not $O(1)$, the structure created by separating elements in this way allows us to split the data structure in about $O(x)$ time, where $x$ is the distance from the split point to the closest query rank. Unfortunately, several complications remain.
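To illustrate the idealized grouping before turning to these complications, here is a small Python sketch; the exact powers-of-two group sizes and the function names are our choice, purely for illustration.
\begin{verbatim}
import bisect
from itertools import accumulate

def group_prefix_sums(gap_size):
    """Group sizes 1, 2, 4, ... growing away from the gap boundary, so a gap
    of size s is covered by O(log s) groups; returns cumulative sizes."""
    sizes, total = [], 0
    while total < gap_size:
        sizes.append(min(2 ** len(sizes), gap_size - total))
        total += sizes[-1]
    return list(accumulate(sizes))

def locate_group(prefix_sums, dist):
    """Group holding the element at (0-based) distance dist from the boundary:
    a binary search over O(log s) prefix sums, i.e. O(log log s) comparisons."""
    return bisect.bisect_right(prefix_sums, dist)
\end{verbatim}
For a gap of $10^6$ elements this yields roughly $20$ groups, so the binary search costs only about five comparisons.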
Consider if we enforce the exponentially-increasing group sizes in the natural way in data structure design. That is, we select constants $c_1\le c_2$ such that as we get farther from the gap boundary, the next group is at least a factor $c_1 > 1$ larger than the previous but at most a factor $c_2$. We can maintain this invariant while supporting insertion and deletion, but splitting is problematic. After splitting, we must continue to use both pieces as a data structure of the same form. However, in the larger piece, removing the $x$ elements may require restructuring not only the group now closest to the gap boundary but could trigger a cascading change on all groups. Since the elements of each group are unstructured, this cascading change could take $\Omega(|\Delta_i|)$ time.
Thus, we must use a more flexible notion of ``exponentially increasing'' that does not require significant restructuring after a split. This is complicated by the need to still guarantee fast insertions and fast splits in the future. In particular, after a split, if the larger piece is again split close to where the previous split occurred, we must support this operation quickly, despite avoiding the previous cascading change that would guarantee this performance. Further, to provide fast insertion, we must keep the number of groups at $O(\log |\Delta_i|)$, but after a split, the best way to guarantee fast future splits is to create more groups.
We will show that it is possible to resolve all these issues and support desired operations efficiently by applying amortized analysis with a careful choice of structure invariants. While we do not achieve $O(1)$ insertion and decrease-key cost, our data structure is competitive as an efficient priority queue while having to solve the more complicated issues around efficient repeated arbitrary splitting.%
%
\section{Rank-Based Queries}
\label{sec:prelim}
We formalize operation \texttt{RankBasedQuery($r$)} as follows. We first describe what we call an \emph{aggregate function}.
\begin{definition}[Aggregate function]
Let $S$ be a multiset of comparable elements and let $f(S)$ be a function\footnote{We do not actually require a strict function $f(S) = y$ for a set $S$, but rather can let the aggregate function depend on the queries that dynamically change that set. In particular, we can (initially) map $f(S) = \min S$ or $f(S) = \max S$ and change this value to decrease/increase monotonically as $S$ is updated, even if the minimum/maximum is removed.} computed on those elements. Suppose $S'$ is such that $S'$ differs from $S$ by the addition or removal of element $x$. Let $n = \max(|S|, |S'|)$. Then $f$ is an \textit{aggregate function} maintainable in $g(n)$ time if $f(S')$ can be computed from $f(S)$ and $x$ in $g(n)$ time.
\end{definition}
We focus on aggregates with $g(n) = O(1)$, though in principle any $g(n)$ can be supported with appropriate changes to overall runtime complexity. We formalize \emph{rank-based queries} as follows.
\begin{definition}[Rank-based query]
\label{def:rankbasedquery}
Call a query on a multiset of comparable elements $S$ such that $|S| = n$ a \textit{rank-based query} pertaining to rank $r$ if the following two conditions are satisfied:
\begin{enumerate}
\item Consider if $S$ is split into two sets $X$ and $Y$ such that for all $x \in X$, $y \in Y$, $x \leq y$. It must be possible, based on an aggregate function $f$ on $X$ and $Y$, to reduce the query to a query that can be answered considering only the elements of $X$ or only the elements of $Y$. The query rank $r$ should be such that if the query is reduced to $X$, $r \leq |X|$, and if the query is reduced to $Y$, $|X| < r$.
\item It must be possible to answer the query on $S$ in $O(n)$ time.
\end{enumerate}
\end{definition}
Critical to our analysis is the rank $r$ associated with each \texttt{RankBasedQuery($r$)} operation. We associate with each operation a rank $r$ which must be contained in each subproblem according to a recursion based on \wref{def:rankbasedquery}. Amongst a set of unsorted elements, $r$ can be chosen arbitrarily, but whichever rank is chosen will affect the restructuring and change the complexity of future operations. Implementation decisions may change $r$ to be $r-1$ or $r+1$; such off-by-one adjustments have no measurable effect on complexity, as long as the extract minimum or extract maximum queries result in a single gap $\Delta_1$.
The following well-studied operations fit our definition of rank-based query with ranks $r$ as described;
the aggregate function is either the cardinality of the set or a range of keys for the set.
\begin{itemize}
\item \texttt{Rank($k$)} $\ce$ Determine the rank of key $k$. Rank $r$ is the rank of $k$ in $S$.
\item \texttt{Select($r$)} $\ce$ Select the element of rank $r$ in $S$. Rank $r$ is the rank selected.
\item \twopart{\texttt{Contains($k$)} $\ce$ }{Determine if an element $(k,v)$ is represented in $S$ (and if so, returns $v$). Rank $r$ is the rank of $k$ in $S$.}
\item \twopart{\texttt{Successor($k$)} $\ce$ }{Determine the successor of key $k$ in $S$. Rank $r$ is the rank of the successor of~$k$.}
\item \twopart{\texttt{Predecessor($k$)} $\ce$ }{Determine the predecessor of key $k$ in $S$. Rank $r$ is the rank of the predecessor of $k$.}
\item \texttt{Minimum()} $\ce$ Return a minimum element of $S$. Rank $r$ is $1$.
\item \texttt{Maximum()} $\ce$ Return a maximum element of $S$. Rank $r$ is $n$.
\end{itemize}
On edge cases where the successor or predecessor does not exist, we can define $r$ to be $n$ or $1$, respectively. Similarly, in the case $(k,v)$ is represented in $S$ on a \texttt{Rank($k$)} or \texttt{Contains($k$)} query, we must pick a tie-breaking rule for rank $r$ returned consistent with the implemented recursion following \wref{def:rankbasedquery}.
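As a small illustration of condition~1 of \wref{def:rankbasedquery}, the Python sketch below answers \texttt{Select($r$)} over an ordered sequence of unsorted bags (standing in for gaps or intervals) using only the cardinality aggregate of each bag; the names are ours and the code is illustrative only.
\begin{verbatim}
def select_via_parts(parts, r):
    """Select the rank-r element (1-based) from ordered, unsorted bags."""
    for part in parts:          # parts are ordered: every earlier bag <= later bags
        if r <= len(part):      # the cardinality aggregate decides the side
            return sorted(part)[r - 1]   # answer within one bag (here by sorting it)
        r -= len(part)          # otherwise reduce the query to the remaining bags
    raise IndexError("rank out of range")
\end{verbatim}
For instance, \texttt{select\_via\_parts([[2, 1], [5, 3, 4]], 4)} skips the first bag (two elements) and returns $4$.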
\section{Lower and Upper Bounds}
\label{sec:bounds}
The balanced binary search tree is the most well-known solution to the sorted dictionary problem. It achieves $O(\log n)$ time for a rank-based query and $O(\log n)$ time for all dynamic operations. Via a reduction from sorting, for a sequence of $n$ arbitrary operations, $\Omega(n \log n)$ comparisons and thus $\Omega(n \log n)$ time is necessary in the worst case.
However, this time complexity can be improved by strengthening our model. The performance theorems of the splay tree~\cite{Sleator85} show that although $\Omega(q \log n)$ time is necessary on a sequence of $q$ arbitrary queries on $n$ elements, many access sequences can be answered in $o(q \log n)$ time. Our model treats sequences of element \emph{insertions} similarly to the splay tree's treatment of sequences of element access. Although $\Omega(n \log n)$ time is necessary on a sequence of $n$ insert or query operations, on many operation sequences, $o(n \log n)$ time complexity is possible, as the theory of efficient priority queues demonstrates.
Our complexities are based on the distribution of elements into the set of gaps $\{\Delta_i\}$. We can derive a lower bound on a sequence of operations resulting in a set of gaps $\{\Delta_i\}$ via reducing multiple selection to the sorted dictionary problem. We prove \wref{thm:lb} below.
\begin{proof}[Proof of \wref{thm:lb}{}]
We reduce multiple selection to the sorted dictionary problem. The input of multiple selection is a set of $n$ elements and ranks $r_1 < r_2 < \cdots < r_q$. We are required to report the elements of the desired ranks. We reduce this to the sorted dictionary problem by inserting all $n$ elements in any order and then querying for the desired ranks $r_1, \ldots, r_q$, again in any order.
Define $r_0 = 0$, $r_{q+1} = n$, and $\Delta_i$ as the set of elements of rank greater than $r_{i-1}$ and at most $r_i$. (This definition coincides with the gaps resulting in our data structure when query rank $r$ falls in the new gap $\Delta'_i$, described in \wref{sec:model}.) Then $|\Delta_i| = r_i - r_{i-1}$ and as in \wref{thm:main}, $B = \sum_{i=1}^m |\Delta_i| \log_2(n/|\Delta_i|)$. Note that here, $m = q+1$. The information-theoretic lower bound for multiple selection is $B-O(n)$ comparisons~\cite{Dobkin81}. Since any data structure must spend $\Omega(n)$ time to read the input, this also gives a lower bound of $\Omega(B + n)$ time. This implies the sorted dictionary problem resulting in a set of gaps $\{\Delta_i\}$ must use at least $B-O(n)$ comparisons and take $\Omega(B + n)$ time, in the worst case.
\end{proof}
\begin{remark}[Multiple selection inputs]
For the operation sequence from the proof of \wref{thm:lb},
\wref{thm:main} states our performance as $O(B + \min(n \log q, n\log\log n))$. A closer examination of our data structure in \wref{sec:insertanalysis} shows we actually achieve $O(B+n)$ complexity on such sequences, since insertions performed before any queries actually take $O(1)$ time.
\end{remark}
To achieve the performance stated in \wref{thm:lb} on any operation sequence, we will first consider how the bound $\Omega(B + n)$ changes with insertions and queries. This will dictate the allotted (amortized) time we can spend per operation to achieve an optimal complexity over the entire operation sequence.
We give the following regarding insertion time; recall our convention from \wpref{fn:note1} that
$\log(x) = \max(\log_2(x),1)$ and $\log_2$ is the binary logarithm.
\begin{lemma}[Influence of insert on lower bound]
\label{lem:inserttime}
Suppose we insert an element into gap $\Delta_i$. Then the bound $\Omega(B + n)$ increases by $\Omega(\log(n/|\Delta_i|))$.
\end{lemma}
\begin{proof}
The insertion simultaneously increases $|\Delta_i|$ and $n$, but we will consider the effect of these changes
separately. We first keep $n$ unchanged and
consider how $B$ changes in gap $\Delta_i$. Before insertion, the contribution to $B$ for gap $\Delta_i$ is $|\Delta_i| \log_2 (n/|\Delta_i|)$; after the insertion it is $(|\Delta_i|+1) \log_2 (n/(|\Delta_i|+1))$. Therefore, the change is
\begin{equation}
\label{dB}
(|\Delta_i|+1) \log_2 (n/(|\Delta_i|+1)) - |\Delta_i| \log_2 (n/|\Delta_i|).
\end{equation}
Consider the function $f(x) = x\log_2(n/x)$, where we treat $n$ as a constant. Then \eqref{dB} is at least the minimum value of the derivative $f'(x)$ with $x \in [|\Delta_i|, |\Delta_i|+1]$. The derivative of $f(x)$ is $f'(x) = -\log_2(e) + \log_2(n/x)$, minimized on this interval at $x = |\Delta_i|+1$. This gives that the change in $B$ is at least $-\log_2(e) + \log_2(n/(|\Delta_i|+1)) \geq \log_2(n/|\Delta_i|) - \log_2(e) - 1$.
Now consider the effect of making $n$ one larger. This will only increase $B$; by the bound $\Omega(B + n)$, this change is (at least) $\Omega(1)$. We may therefore arrive at an increase of $\Omega(\log_2(n/|\Delta_i|)+1) = \Omega(\log(n/|\Delta_i|))$.
\end{proof}
\wref{lem:inserttime} implies that optimal insertion complexity is $\Omega(\log(n/|\Delta_i|))$.
This bound is using the fact the change in the set of gaps $\{\Delta_i\}$ resulting from an insertion corresponds to a multiple selection problem with lower bound greater by $\Omega(\log(n/|\Delta_i|))$. Since the multiple selection problem itself has insertions preceding queries, this lower bound is in some sense artificial. However, we can alternatively consider the problem of determining in which gap an inserted element falls. Here, information theory dictates complexities of $\Omega(\log m)$ if each gap is weighted equally or $\Omega(\log(n/|\Delta_i|))$ if gap $\Delta_i$ is weighted with weight $|\Delta_i|$~\cite{Bent85}. The latter corresponds with the change in $B$ noted above.
We now give the following regarding query time.
\begin{lemma}[Influence of query on lower bound]
\label{lem:querytime}
Suppose a query splits a gap $\Delta_i$ into two gaps of size $x$ and $cx$, respectively, with $c \geq 1$. Then the bound $\Omega(B + n)$ increases by $\Omega(x\log c)$.
\end{lemma}
\begin{proof}
The change in $B$ is
\begin{equation}
\label{dB2}
x \log_2\left(\frac{n}{x}\right) + cx \log_2\left(\frac{n}{cx}\right) - (c+1)x \log_2\left(\frac{n}{(c+1)x}\right).
\end{equation}
Expanding each logarithm as a difference, the coefficients of both $\log_2 n$ and $\log_2 x$ in \eqref{dB2} sum to $x + cx - (c+1)x = 0$, so \eqref{dB2} rearranges to $x((c+1) \log_2 (c+1) - c \log_2 c) = x(\log_2(c+1) + c\log_2\frac{c+1}{c})$, which is greater than $x \log_2 (c+1)$. Thus the increase in $\Omega(B + n)$ is $\Omega(x \log c)$.
\end{proof}
\wref{lem:querytime} gives a lower bound of $\Omega(x \log c)$ per rank-based query operation. Here, the bound is not artificial in any sense: insertions precede queries in the reduction of multiple selection to the sorted dictionary problem. We must spend time $\Omega(x \log c)$ to answer the query as more queries may follow and the total complexity must be $\Omega(B + n)$ in the worst case.
We can improve the query lower bound by considering the effect on $B$ over a sequence of gap-splitting operations. Consider the overall bound $B = \sum_{i=1}^m |\Delta_i| \log_2(n/|\Delta_i|)$. It can be seen that $B = \Omega(m \log n)$. Therefore, we can afford amortized $O(\log n)$ time whenever a new gap is created, even if it is a split with, say, $x = 1$ and $c = 1$.%
Consider the lower bound given by the set of gaps $\{\Delta_i\}$ in \wref{thm:lb} combined with the above insight that queries must take $\Omega(\log n)$ time. If query distribution is not considered, the worst case is that $|\Delta_i| = \Theta(n/q)$ for all $i$. Then $B + q \log n = \Omega(n \log q + q \log n)$. This coincides with the lower bound given in~\cite{Karp88}.
It is worth noting that both \wref{lem:inserttime} and \wref{lem:querytime} can also be proven by information-theoretic arguments, without appealing to the algebraic bound $B$ given in multiple selection. The number of comparisons needed to identify the $x$ largest elements in a set of $(c+1)x$ elements is at least $\log_2 \binom{(c+1)x}{x}$, which is $\Omega(x \log c)$. A similar argument can be made that increasing $n$ by $1$ and $|\Delta_i|$ by $1$ implies the number of comparisons required of the underlying selection problem increases by $\Omega(\log(n/|\Delta_i|))$.
\section{Data Structure}
\label{sec:ds}
We are now ready to discuss the details of lazy search trees. The high-level idea was discussed in \wref{sec:technical}. The data structure as developed is relatively simple, though it requires a somewhat tricky amortized time analysis given in the following section.
We split the data structure into two levels. At the top level, we build a data structure on the set of gaps $\{\Delta_i\}$. In the second level, actual elements are organized into a set of \emph{intervals} within a gap. Given a gap $\Delta_i$, intervals within $\Delta_i$ are labeled $\mathcal{I}_{i,1}, \mathcal{I}_{i,2}, \ldots, \mathcal{I}_{i,\ell_i}$, with $\ell_i$ the number of intervals in gap $\Delta_i$. The organization of elements of a gap into intervals is similar to the organization of elements into a gap. Intervals partition a gap by rank, so that for elements $x \in \mathcal{I}_{i,j}$, $y \in \mathcal{I}_{i,j+1}$, $x \leq y$. Elements within an interval are unordered. By convention, we will consider both gaps and intervals to be ordered from left to right in increasing rank. A graphical sketch of the high-level decomposition is given in \wref{fig:decomposition}.
\begin{figure}[htbp]
\def\intgap{0.08}
\newcommand\drawint[4][1]{%
\draw[semithick,|-|] (#2,0) --
node[below=2pt,font=\scriptsize,scale=.75] {\scalebox{#1}[1]{$\mathcal{I}_{#4}$}}
++(#3-\intgap,0) ;
}
\begin{tikzpicture}[scale=.32]
\begin{scope}
\drawint[.7]{0} 1{1,1}
\drawint {1} 2{1,2}
\drawint {3} 4{1,3}
\drawint {7} 4{1,4}
\drawint {11}2{1,5}
\drawint[.7]{13}1{1,6}
\end{scope}
\begin{scope}[shift={(14+.5,0)}]
\drawint[.7]{0} 1{2,1}
\drawint {1} 2{2,2}
\drawint {3} 3{2,3}
\drawint {6} 6{2,4}
\drawint {12}5{2,5}
\drawint {17}3{2,6}
\drawint {20}2{2,7}
\drawint[.7]{22}1{2,8}
\end{scope}
\begin{scope}[shift={(14+23+1,0)}]
\drawint[.7]{0} 1{3,1}
\drawint {1} 5{3,2}
\drawint {6} 3{3,3}
\drawint[.7]{9} 1{3,4}
\end{scope}
\foreach \x/\l in {0,14.5,14+23+1,14+23+10+1.5} {
\draw[thick,gray] (\x-.25,-1) -- (\x-.25,3) ;
%
}
\foreach \f/\t/\l in {0/14/1,14.5/37.5/2,38/48/3} {
\draw[decoration={brace,amplitude=5pt},decorate]
(\f,1.5) -- node[above=3pt,font=\scriptsize] {$\Delta_{\l}$} ++(\t-\f,0) ;
}
\node[font=\scriptsize,anchor=east] at (-0.5,2.3) {Gaps:} ;
\node[overlay,font=\scriptsize,anchor=east] at (-0.5,0) {Intervals:} ;
\end{tikzpicture}
\caption{The two-level decomposition into gaps $\{\Delta_i\}$ and intervals $\{\mathcal{I}_{i,j}\}$.}
\label{fig:decomposition}
\end{figure}
\subsection{The Gap Data Structure}
We will use the following data structure for the top level.
\begin{lemma}[Gap Data Structure]
\label{lem:gapds}
There is a data structure for the set of gaps $\{\Delta_i\}$ that supports the following operations in the given worst-case time complexities. Note that $\sum_{i=1}^m |\Delta_i| = n$.
\begin{enumerate}
\item Given an element $e=(k,v)$, determine the index $i$ such that $k \in \Delta_i$, in $O(\log(n/|\Delta_i|))$ time.
\item Given a $\Delta_i$, increase or decrease the size of $\Delta_i$ by $1$, adjusting $n$ accordingly, in $O(\log(n/|\Delta_i|))$ time.
\item Remove $\Delta_i$ from the set, in $O(\log n)$ time.
\item Add a new $\Delta_i$ to the set, in $O(\log n)$ time.
\end{enumerate}
It is also possible to store aggregate functions within the data structure (on subtrees), as required by some queries that fit \wref{def:rankbasedquery}.
\end{lemma}
\begin{proof}
We can use, for example, a globally-biased $2, b$ tree~\cite{Bent85}. We assign gap $\Delta_i$ the weight $w_i = |\Delta_i|$; the sum of weights, $W$, is thus equal to $n$. Access to gap $\Delta_i$, operation~1, is handled in $O(\log(n/|\Delta_i|))$ worst-case time~\cite[Thm.\,1]{Bent85}. By~\cite[Thm.\,11]{Bent85}, operation~2 is handled via weight change in $O(\log(n/|\Delta_i|))$ worst-case time. Again by~\cite[Thm.\,11]{Bent85}, operations~3 and~4 are handled in $O(\log n)$ worst-case time or better.
\end{proof}
\begin{remark}[Alternative implementations]
A variety of biased search trees can be used as the data structure of \wref{lem:gapds}. In \wref{sec:splay}, we suggest splay trees for that purpose, which greatly simplifies implementation at the cost of making the runtimes amortized. What is more, we show that efficient access properties of the data structure of \wref{lem:gapds} can be inherited by the lazy search tree, hence the (orthogonal) efficiency gains for insertions in lazy search trees and for structured access sequences in splay trees can be had simultaneously.
\end{remark}
The top level data structure allows us to access a gap in the desired time complexity for insertion. However, we must also support efficient queries. In particular, we need to be able to split a gap $\Delta_i$ into two gaps of size $x$ and $cx$ ($c \geq 1$) in amortized time $O(x \log c)$. We must build additional structure amongst the elements in a gap to support such an operation efficiently. At the cost of this organization, in the worst case we pay an additional $O(\log \log |\Delta_i|)$ time on insertion and key-changing operations.
\subsection{The Interval Data Structure}
\label{sec:intervalds}
We now discuss the data structure for the intervals.
Given a gap $\Delta_i$, intervals $\mathcal{I}_{i,1}, \mathcal{I}_{i,2}, \ldots, \mathcal{I}_{i,\ell_i}$ are contained within it and maintained in a data structure as follows.
We maintain with each interval the two splitting keys $(k_l,k_r)$ that separate this interval from its predecessor and successor (using $-\infty$ and $+\infty$ for the outermost ones), respectively;
the interval only contains elements $e=(k,v)$ with $k_l \le k \le k_r$.
We store intervals in sorted order in an array (see \wref{rem:avoid-arrays}), sorted with respect to $(k_l,k_r)$.
We can then find an interval containing a given key~$k$, \ie, with $k_l \le k \le k_r$,
using binary search in $O(\log \ell_i)$ time.
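A minimal Python sketch of this search, assuming the left splitting keys of the $\ell_i$ intervals are kept in a sorted list (names are ours; the tie-breaking counts of the following remark are omitted):
\begin{verbatim}
import bisect

def find_interval(left_keys, key):
    """Index of the interval that can receive key: the rightmost interval
    whose left splitting key k_l is <= key.  left_keys is sorted and its
    first entry is -infinity, so some interval always qualifies."""
    return bisect.bisect_right(left_keys, key) - 1
\end{verbatim}
For example, \texttt{find\_interval([float('-inf'), 3, 10], 7)} returns $1$, the middle interval.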
\begin{remark}[Handling duplicate keys]
\label{rem:duplicate-keys-search}
Recall that we allow repeated insertions, \ie, elements with the same key $k$.
As detailed in \wref{sec:queryalg}, intervals separated by a splitting key $k$ can then both contain
elements with key $k$. To guide the binary search in these cases,
we maintain for each interval the number of elements with keys equal to the splitting keys $k_l$ and $k_r$.
\end{remark}
As we will see below, the number of intervals in one gap is always $O(\log n)$ and only changes during a query, so we can afford to rebuild this array during a query in time linear in the number of intervals.
\begin{remark}[Avoiding arrays]
\label{rem:avoid-arrays}
Note that, to stay within the pointer-machine model, we can choose to arrange the intervals within any gap in a balanced binary search tree, thus providing the binary search without array accesses.
This also allows the ability to add new intervals efficiently.
In practice, however, binary search on an array is likely to be preferred.
\end{remark}
We conceptually split the intervals into two groups: intervals on the \textit{left} side and intervals on the \textit{right} side. An interval is defined to be in one of the two groups by the following convention.
\begin{enumerate}[label=(\Alph*),font=\bfseries]
\item \label{rule:left-right}
\textbf{Left and right intervals:} An interval $\mathcal{I}_{i,j}$ in gap $\Delta_i$ is on the \emph{left side} if the closest query rank (edge of gap $\Delta_i$ if queries have occurred on both sides of $\Delta_i$) is to the left. Symmetrically, an interval $\mathcal{I}_{i,j}$ is on the \emph{right side} if the closest query rank is on the right. An interval with an equal number of elements in $\Delta_i$ on its left and right sides can be defined to be on the left or right side arbitrarily.
\end{enumerate}
Recall the definition of closest query rank stated in \wref{fn:note2}. The closest query rank is the closest boundary of gap $\Delta_i$ that was created in response to a query.
We balance the sizes of the intervals within a gap according to the following rule:
\begin{enumerate}[label=(\Alph*),start=2,font=\bfseries]
\item \label{rule:merge}
\textbf{Merging intervals:}
Let $\mathcal{I}_{i,j}$ be an interval on the left side that is not the rightmost of the left-side intervals. We merge $\mathcal{I}_{i,j}$ into the adjacent interval to the right, $\mathcal{I}_{i,j+1}$, if the number of elements left of $\mathcal{I}_{i,j}$ in $\Delta_i$ equals or exceeds $|\mathcal{I}_{i,j}| + |\mathcal{I}_{i,j+1}|$. We do the same, reflected, for intervals on the right side.
\end{enumerate}
The above rule was carefully chosen to satisfy several components of our analysis. As mentioned, we must be able to answer a query for a rank $r$ near the edges of $\Delta_i$ efficiently. This implies we need small intervals near the edges of gap $\Delta_i$, since the elements of each interval are unordered. However, we must also ensure the number of intervals within a gap does not become too large, since we must determine into which interval an inserted element falls at a time cost outside of the increase in $B$ as dictated in \wref{lem:inserttime}. We end up using the structure dictated by \wref{rule:merge} directly in our analysis of query complexity, particularly in \wref{sec:ensure}.
Note that \wref{rule:merge} causes the loss of information. Before a merge, intervals $\mathcal{I}_{i,j}$ and $\mathcal{I}_{i,j+1}$ are such that for any $x \in \mathcal{I}_{i,j}$ and $y \in \mathcal{I}_{i,j+1}$, $x \leq y$. After the merge, this information is lost. Surprisingly, this does not seem to impact our analysis. Once we pay the initial $O(\log \ell_i)$ cost to insert an element via binary search, the merging of intervals happens seldom enough that no additional cost need be incurred.
\wref{rule:merge} ensures the following.
\begin{lemma}[Few intervals]
\label{lem:numintervals}
Within a gap $\Delta_i$, there are at most $4 \log(|\Delta_i|)$ intervals.
\end{lemma}
\begin{proof}
First consider intervals on the left side. Let intervals $\mathcal{I}_{i, j+1}$ and $\mathcal{I}_{i, j+2}$ be on the left side. It must be that the number of elements in intervals $\mathcal{I}_{i, j+1}$ and $\mathcal{I}_{i, j+2}$ together is equal to or greater than the number of elements in the first $j$ intervals, by \wref{rule:merge}.
Indeed, the worst-case sequence of interval sizes is $1,1,1,2,2,4,4,8,8,16,16,\ldots$, obtained recursively as $a_1=a_2=1$ and $a_j = a_1+\cdots+a_{j-2}+1-a_{j-1}$.
It follows that with every two intervals, the total number of elements at least doubles;
indeed we can show that the first $k$ intervals contain at least $(\sqrt 2)^{k-2}$ elements,
therefore $n$ elements are spread over at most $2 \log_2 n +2$ intervals.
To count intervals on the left resp.\ right side in $\Delta_i$, we observe that the maximal
number of intervals occurs if half of the elements are on either side, so there
can be at most $2 \cdot (2\log_2 (|\Delta_i|/2)+2) \le 4\log(|\Delta_i|)$ intervals in gap $\Delta_i$.
\end{proof}
For ease of implementation, we will invoke \wref{rule:merge} only when a \emph{query} occurs in gap $\Delta_i$. In the following subsection, we will see that insertion does not increase the number of intervals in a gap, therefore \wref{lem:numintervals} will still hold at all times even though \wref{rule:merge} might temporarily be violated after insertions. We can invoke \wref{rule:merge} in $O(\log |\Delta_i|)$ time during a query, since $|\Delta_i| \leq n$ and we can afford $O(\log n)$ time per query.
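To illustrate how such an invocation might look, the Python sketch below performs one pass of \wref{rule:merge} over the left-side intervals of a gap; interval contents are plain Python lists here, so a merge is a concatenation rather than the $O(1)$ splice of the actual representation, and the right side is symmetric.
\begin{verbatim}
def apply_merge_rule(left_side):
    """One left-to-right pass of the merging rule over the left-side
    intervals of a gap (each interval is a list of elements).

    Interval I_j is merged into its right neighbour whenever the number
    of elements left of I_j in the gap is at least |I_j| + |I_{j+1}|.
    """
    out, elems_seen = [], 0
    for interval in left_side:
        if out and elems_seen - len(out[-1]) >= len(out[-1]) + len(interval):
            out[-1] = out[-1] + interval   # merge previous interval rightward
        else:
            out.append(interval)
        elems_seen += len(interval)
    return out

# the worst-case sizes 1,1,1,2,2 from the proof above trigger no merges:
assert [len(i) for i in apply_merge_rule([[1], [2], [3], [4, 5], [6, 7]])] \
       == [1, 1, 1, 2, 2]
\end{verbatim}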
\subsection{Representation of Intervals}
It remains to describe how a single interval is represented internally.
Our analysis will require that merging two intervals can be done in $O(1)$ time and further that deletion from an interval can be performed in $O(1)$ time ($O(\log n)$ time actually suffices for $O(\log n)$ time delete overall, but on many operation sequences the faster interval deletion will yield better runtimes). Therefore, the container in which elements reside in intervals should support such behavior. An ordinary linked list certainly suffices; however, we can limit the number of pointers used in our data structure by representing intervals as a linked list of arrays. Whenever an interval is constructed, it can be constructed as a single (expandable) array. As intervals merge, we perform the operation in $O(1)$ time by merging the two linked lists of arrays. Deletions can be performed lazily, shrinking the array when a constant fraction of the entries have been deleted.
We analyze the number of pointers required of this method and the resulting improved bounds on insertion and key change in \wref{sec:qbounds}. If we choose not to take advantage of this directly, we can alternatively replace standard linked lists with linked list/array hybrids such as unrolled linked lists~\cite{Shao94}, which will likely outperform standard linked lists in practice.
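The sketch below (a minimal Python rendering, not the full bookkeeping of \wref{sec:qbounds}) illustrates the linked list of arrays: a merge splices the two chunk lists in constant time, and a deletion touches only one chunk.
\begin{verbatim}
class Chunk:
    """One node of the linked list: an expandable array of elements."""
    def __init__(self, elems):
        self.elems = list(elems)
        self.next = None

class IntervalStore:
    """Elements of a single interval, kept as a linked list of arrays."""
    def __init__(self, elems):
        self.head = self.tail = Chunk(elems)

    def merge(self, other):
        """Splice `other` onto this interval in O(1) time."""
        self.tail.next = other.head
        self.tail = other.tail

    def delete(self, chunk, idx):
        """O(1) removal inside one chunk (order within an interval is
        irrelevant); the paper's variant shrinks or rebuilds a chunk
        once a constant fraction of its entries has been deleted."""
        chunk.elems[idx] = chunk.elems[-1]
        chunk.elems.pop()
\end{verbatim}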
\subsection{Insertion}
\label{sec:insertalg}
Insertion of an element $e=(k,v)$ can be succinctly described as follows. We first determine the gap $\Delta_i$ such that $k \in \Delta_i$, according to the data structure of \wref{lem:gapds}. We then binary search the $O(\log |\Delta_i|)$ intervals (by maintaining ``router'' keys separating the intervals) within $\Delta_i$ to find the interval $\mathcal{I}_{i,j}$ such that $k \in \mathcal{I}_{i,j}$, and place $e$ in $\mathcal{I}_{i,j}$. We increase the size of $\Delta_i$ by one in the gap data structure.
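As a small illustration (reusing \texttt{find\_interval} from the sketch in \wref{sec:intervalds}; \texttt{tree.find\_gap}, \texttt{gap.left\_keys}, and \texttt{gap.size} are hypothetical names standing in for the gap data structure of \wref{lem:gapds}), insertion might be coded as follows.
\begin{verbatim}
def insert(tree, key, value):
    """Sketch of insertion: locate the gap, then the interval, then append.

    find_gap stands in for the biased-search-tree lookup of the gap data
    structure; left_keys mirrors the router keys separating the
    O(log |gap|) intervals of that gap.
    """
    gap = tree.find_gap(key)                      # O(log(n/|gap|))
    j = find_interval(gap.left_keys, key)         # O(log log |gap|)
    gap.intervals[j].elems.append((key, value))   # unsorted within interval
    gap.size += 1                                 # update the gap's size
\end{verbatim}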
\begin{remark}[A single data structure]
\label{rem:work-over-intervals}
The attentive reader may wonder why we must first perform a binary search for gap $\Delta_i$ and then perform another binary search for interval $\mathcal{I}_{i,j}$ within $\Delta_i$. It seems likely these two binary searches can be compressed into one, and indeed, this intuition is correct. If preferred, we can use the data structure of \wref{lem:gapds} directly on intervals within gaps, so that weight $|\Delta_i|$ is evenly distributed over intervals $\mathcal{I}_{i,1}, \mathcal{I}_{i,2}, \ldots, \mathcal{I}_{i,\ell_i}$. (Alternatively, assigning weight $|\Delta_i|/\ell_i + |\mathcal{I}_{i,j}|$ to interval $\mathcal{I}_{i,j}$ can provide better runtimes in average case settings.) Unfortunately, doing so means only an $O(\log n)$ time change-key operation can be supported (unless the data structure is augmented further), and (small) weight changes must be performed on the full set of intervals within gap $\Delta_i$ on insertion and deletion. While such a data structure is possible, we find the current presentation more elegant and simpler to implement.
\end{remark}
\begin{remark}[Lazy insert]
One seemingly-obvious way to improve insertion complexity, improving perhaps either of the first two disadvantages listed in \wref{sec:con}, is to \emph{insert lazily}. That is, instead of performing a full insert of $e = (k,v)$ through the gap data structure and then again through the interval data structure, we keep a buffer at each node of the respective BSTs with all the elements that require processing at a later time. While this can improve overall time complexity on some simple operation sequences, it seems difficult to make this strategy efficient overall, when insertions, deletions and queries can be mixed arbitrarily.
So while improving either of the two disadvantages listed in \wref{sec:con} (and indeed, an improvement in one may imply an improvement in the other) would likely utilize aspects of lazy insertion, we do not currently see a way to achieve this by maintaining buffers on nodes of the BSTs we use.
\end{remark}
\subsection{Query}
\label{sec:queryalg}
To answer a query with associated rank $r$, we proceed as follows. We again determine the gap~$\Delta_i$ such that $r \in \Delta_i$ according to the process described in \wref{def:rankbasedquery} on the data structure of \wref{lem:gapds}. %
While we could now choose to rebalance the intervals of $\Delta_i$ via \wref{rule:merge}, our analysis will not require application of \wref{rule:merge} until the \emph{end} of the query procedure.
We recurse into the interval $\mathcal{I}_{i,j}$ such that $r \in \mathcal{I}_{i,j}$, again using the process described in \wref{def:rankbasedquery} on the intervals of~$\Delta_i$ (this may use aggregate information stored in the data structure for intervals).
We proceed to process $\mathcal{I}_{i,j}$ by answering the query on $\mathcal{I}_{i,j}$ and replacing interval $\mathcal{I}_{i,j}$ with smaller intervals.
First,
we partition $\mathcal{I}_{i,j}$ into sets $L$ and $R$, such that all elements in $L$ are less than or equal to all elements in $R$ and there are $r$ elements in the entire data structure which are either in $L$ or in an interval or gap left of $L$.
This can typically be done in $O(|\mathcal{I}_{i,j}|)$ time using the result of the query itself; otherwise, linear-time selection suffices~\cite{Blum73}.
We further partition $L$ into two sets of equal size $L_l$ and $L_r$, again using linear-time selection, such that all elements in $L_l$ are smaller than or equal to elements in $L_r$; if $|L|$ is odd, we give the extra element to $L_l$ (unsurprisingly, this is not important).
We then apply the same procedure \emph{one more time} to $L_r$, again splitting into equal-sized intervals.
Recursing further is not necessary.
We do the same, reflected, for set $R$;
after a total of 5 partitioning steps the interval splitting terminates.
An example is shown in \wref{fig:split}.
\begin{figure}[htbp]
%
\newcommand\drawint[4][1]{%
\draw[semithick,|-|] (#2,0) --
node[below=0pt,font=\scriptsize] {\scalebox{#1}[1]{$#4$}}
++(#3-0.08,0) ;
}
\scalebox{1.2}{%
\begin{tikzpicture}[scale=.4]
\foreach \x in {-.1,6,19.1-0.08} {
\draw[densely dotted,thin] (\x,0) -- (\x,-6) ;
}
\drawint{-.1}{19.2}{|\mathcal I_{i,j}|=19}
\node[font=\small] at (3*19/4,1) {interval $\mathcal I_{i,j}$} ;
\draw[thick,gray] (6,-.5) -- ++ (0,1)
node[above,black,font=\scriptsize,align=center]
{query\\rank\\$r=6$} ;
\begin{scope}[shift={(0,-5)}]
\begin{scope}[shift={(-0.1,0)}]
\drawint{0}{3}{3}
\drawint{3}{2}{2}
\drawint{5}{1}{1}
\end{scope}
\begin{scope}[shift={(0.1,0)}]
\drawint{6}{3}{3}
\drawint{9}{3}{3}
\drawint{12}{7}{7}
\end{scope}
\end{scope}
\node[scale=2,rotate=-90] at (19/2,-2.75) {$\Rightarrow$};
\foreach \f/\t/\l in {0/6/L,6/19/R} {
\draw[decoration={brace,amplitude=5pt,mirror},decorate]
(\f+.1,-6.75) -- node[below=5pt,font=\scriptsize]{$\l$} ++(\t-\f-.2,0);
}
\end{tikzpicture}%
}
\caption{An interval $\mathcal{I}_{i,j}$ is split and replaced with a set of intervals.}
\label{fig:split}
\end{figure}
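To make the five partitioning steps concrete, the following Python sketch splits the unsorted contents of $\mathcal{I}_{i,j}$ at a gap-local rank $r$; it sorts for brevity where the actual procedure uses linear-time selection, so it is illustrative rather than performance-faithful.
\begin{verbatim}
def split_interval(elems, r):
    """Split unsorted (key, value) pairs at local rank r into the six
    new intervals; both returned lists are ordered left to right.
    Sorting stands in for linear-time selection in this sketch."""
    def halves(xs, extra_left=True):
        mid = (len(xs) + 1) // 2 if extra_left else len(xs) // 2
        return xs[:mid], xs[mid:]

    s = sorted(elems, key=lambda e: e[0])
    L, R = s[:r], s[r:]
    L_l, L_r   = halves(L)          # |L_l| = ceil(|L| / 2)
    L_rl, L_rr = halves(L_r)        # split the inner half once more
    R_l, R_r   = halves(R, False)   # reflected: the big half sits right
    R_ll, R_lr = halves(R_l)
    return [L_l, L_rl, L_rr], [R_ll, R_lr, R_r]

# |I| = 19, r = 6 reproduces the sizes 3,2,1 | 3,3,7 of the figure above
left, right = split_interval([(k, None) for k in range(19)], 6)
assert [len(x) for x in left] == [3, 2, 1]
assert [len(x) for x in right] == [3, 3, 7]
\end{verbatim}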
\begin{remark}[Variants of interval replacement]
There is some flexibility in designing this interval-replacement procedure;
the critical properties needed for our result are the following
(the details will become clear in \wref{sec:queryanalysis}):
(1) It yields at most $O(\log |\Delta_i|)$ intervals in gap $\Delta_i$ (typically by application of \wref{rule:merge}), (2) it satisfies an invariant involving a credit system~-- \wtpref{inv:credits}~-- and
(3) splitting takes time~$O(|\mathcal{I}_{i,j}|)$.
In \wref{sec:random}, we show that exact median selection (when splitting $L$, $L_r$, $R$, and $R_l$)
can be replaced with pivoting on a randomly chosen element.
On a set of $n$ elements, this requires only $n$ comparisons
instead of the at least $1.5n$ required by median-finding in expectation~\cite{Cunto89},
and it is substantially faster in practice.
\end{remark}
After splitting the interval $\mathcal I_{i,j}$ as described above,
we answer the query itself and update the gap and interval data structures as follows. We create two new gaps $\Delta'_i$ and $\Delta'_{i+1}$ out of the intervals of gap $\Delta_i$ including those created from sets $L$ and $R$. Intervals that fall left of the query rank $r$ are placed in gap $\Delta'_i$, and intervals that fall right of the query rank $r$ are placed in gap $\Delta'_{i+1}$. We update the data structure of \wref{lem:gapds} with the addition of gaps $\Delta'_i$ and $\Delta'_{i+1}$ and removal of gap~$\Delta_i$. Finally, we apply \wref{rule:merge} to gaps $\Delta'_i$ and~$\Delta'_{i+1}$.
\subsection{Deletion}
To delete an element $e=(k,v)$ pointed to by a given pointer \texttt{ptr}, we first remove $e$ from the interval $\mathcal{I}_{i,j}$ such that $k \in \mathcal{I}_{i,j}$.
If $e$ was the only element in $\mathcal{I}_{i,j}$, we remove interval $\mathcal{I}_{i,j}$ from gap $\Delta_i$ (we can do so lazily, when \wref{rule:merge} is next run on gap $\Delta_i$).
Then we decrement the size of $\Delta_i$ in the gap data structure of \wref{lem:gapds};
if that leaves an empty gap, we remove $\Delta_i$ from the gap data structure.
\subsection{Change-Key}
\label{sec:changekeyalg}
The change-key operation can be performed as follows. Suppose we wish to change the key of element $e=(k,v)$, given by pointer \texttt{ptr}, to $k'$, and that $e$ currently resides in interval $\mathcal{I}_{i,j}$ in gap $\Delta_i$. We first check if $k'$ falls in $\Delta_i$ or if $e$ should be moved to a different gap. If the latter, we can do so as in deletion of $e$ and re-insertion of $(k',v)$. If the former, we first remove $e$ from $\mathcal{I}_{i,j}$. If necessary, we (lazily) delete $\mathcal{I}_{i,j}$ from $\Delta_i$ if $\mathcal{I}_{i,j}$ now has no elements. We then binary search the $O(\log |\Delta_i|)$ intervals of $\Delta_i$ and place $e$ into the new interval in which it belongs.
Note that although this operation can be performed to change the key of $e$ to anything, \wref{thm:main} only guarantees runtimes faster than $O(\log n)$ when $e$ moves closer to its nearest query rank within gap $\Delta_i$. Efficient runtimes are indeed possible in a variety of circumstances; this is explored in more detail in \wref{sec:changekeyanalysis}.
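A corresponding Python sketch follows; the helper names (\texttt{find\_gap}, \texttt{find\_interval}, \texttt{delete}, \texttt{insert}) refer to the earlier sketches, and \texttt{gap.low}, \texttt{gap.high}, and the fields of \texttt{ptr} are hypothetical, not part of the paper's interface.
\begin{verbatim}
def change_key(tree, ptr, new_key):
    """Sketch of ChangeKey(ptr, k').  ptr is assumed to record the
    element's gap, interval index, and position within that interval."""
    gap, j, pos = ptr.gap, ptr.interval, ptr.pos
    _, value = gap.intervals[j].elems[pos]
    if not (gap.low <= new_key <= gap.high):      # leaves the gap:
        tree.delete(ptr)                          # delete + re-insert,
        return tree.insert(new_key, value)        # O(log n) total
    # stays inside the gap: move it between intervals directly
    gap.intervals[j].elems[pos] = gap.intervals[j].elems[-1]
    gap.intervals[j].elems.pop()                  # drop from old interval
    t = find_interval(gap.left_keys, new_key)     # O(log log |gap|)
    gap.intervals[t].elems.append((new_key, value))
\end{verbatim}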
\section{Analysis}
\label{sec:analysis}
We use an amortized analysis~\cite{Tarjan85}. We will use a potential function with a built-in credit system.
Recall that our desired insertion complexity is about $O(\log(n/|\Delta_i|))$ time. On a query that splits a gap into two gaps of size $x$ and $cx$, we attempt to do so in (amortized) $O(\log n + x \log c)$ time.
We require several definitions before we may proceed.
We distinguish between $0$-sided, $1$-sided, and $2$-sided gaps. A $2$-sided gap is a gap $\Delta_i$ such that queries have been performed on either side of $\Delta_i$; thus, intervals in $\Delta_i$ are split into intervals on the left side and intervals on the right side. This is the typical case. A $1$-sided gap $\Delta_i$ is such that queries have only been performed on one side of the gap; thus, intervals are all on the side towards the query rank in $\Delta_i$. There can be at most two $1$-sided gaps at any point in time. In the priority queue case, there is a single $1$-sided gap. The final category is a $0$-sided gap; when the data structure has performed no queries, all elements are represented in a single interval in a single $0$-sided gap.
We now give the following functional definitions.
\begin{align*}
c(\mathcal{I}_{i,j}) &\;\ce\; \text{\# of \emph{credits} associated with interval $\mathcal{I}_{i,j}$.}\\
o(\mathcal{I}_{i,j}) &\;\ce\; \text{\# of elements \emph{outside} $\mathcal{I}_{i,j}$ in $\Delta_i$, i.\,e.,}\\
&\phantom{{}\;\ce\;{}}\text{\# of elements in $\Delta_i$ that are left (right) of $\mathcal{I}_{i,j}$ if $\mathcal{I}_{i,j}$ is on the left (right) side.}\\
M &\;\ce\; \text{total \# of elements in $0$-sided or $1$-sided gaps.}
\end{align*}
As previously mentioned, intervals are defined to be on either the left or right side according to \wpref{rule:left-right}. For an interval $\mathcal{I}_{i,j}$ in a $2$-sided gap $\Delta_i$, $o(\mathcal{I}_{i,j})$ hence is the minimum number of elements either to the left (less than) or to the right (greater than) $\mathcal{I}_{i,j}$ in gap $\Delta_i$.
The rules for assigning credits are as follows:
A newly created interval has no credits associated with it.
During a merge, the credits associated with both intervals involved in the merge may be discarded; they are not needed.
When an interval $\mathcal I_{i,j}$ is split upon a query, it is destroyed and new intervals
(with no credits) are created from it; by destroying $\mathcal I_{i,j}$, the $c(\mathcal I_{i,j})$ credits associated with it are released.
We use the following potential function:
\[
\Phi \;=\; 10M \;+\; 4\sum_{\mathclap{\substack{1 \leq i \leq m,\\1 \leq j \leq \ell_i}}} \:
c(\mathcal{I}_{i,j}).
\]
Credits accumulated when an operation is cheaper than
its amortized bound increase $\Phi$;
in this way, we use credits to pay for work that will need to be performed in the future.
We do so by maintaining the following invariant:
\begin{enumerate}[label=(\Alph*),start=3,font=\bfseries]
\item \label{inv:credits}
\textbf{Credit invariant:}
Let $\mathcal{I}_{i,j}$ be an interval.
Then $|\mathcal{I}_{i,j}| \le c(\mathcal{I}_{i,j}) + o(\mathcal{I}_{i,j})$.
\end{enumerate}
\begin{remark}[Intuition behind \wref{inv:credits}]
The intuition behind \wref{inv:credits} is that the cost of splitting $\mathcal{I}_{i,j}$ is entirely paid for by the credits associated with $\mathcal{I}_{i,j}$ and by outside elements,
\ie, either released potential or by the distance to previous queries causing a corresponding increase in $B$.
The intervals constructed from the elements of $\mathcal{I}_{i,j}$ are constructed in such a way that they satisfy \wref{inv:credits} at cost a constant fraction of the cost of splitting $\mathcal{I}_{i,j}$.
\end{remark}
\begin{remark}[Alternative potential function]
It is possible to remove the credits in our potential function and \wref{inv:credits} and instead use the potential function
\[
\Phi \;=\; 10M \;+\; 4\sum_{\mathclap{\substack{1 \leq i \leq m,\\1 \leq j \leq \ell_i}}} \:
\max(|\mathcal{I}_{i,j}| - o(\mathcal{I}_{i,j}), 0).
\]
We opt for the current method as we believe it is easier to work with.
\end{remark}
Observe that before any elements are inserted, $\Phi = 0$, and we have a single $0$-sided gap with one interval containing no elements. Thus \wref{inv:credits} is vacuously true. We proceed with an amortized analysis of the operations. For our amortization arguments, we assume the potential function to be adjusted to the largest constant in the $O(\cdot)$ notation necessary for the complexity of our operations.
In the interest of legibility, we will drop this constant and compare outside of $O(\cdot)$ notation, as is standard in amortized complexity analysis.
\subsection{Insertion}
\label{sec:insertanalysis}
Insertion of element $e=(k,v)$ can be analyzed as follows. As stated in \wref{lem:gapds}, we pay $O(\log(n/|\Delta_i|))$ time to locate the gap $\Delta_i$ that $e$ belongs into. We adjust the size of $\Delta_i$ and $n$ by one in the data structure of \wref{lem:gapds} in $O(\log(n/|\Delta_i|))$ time. By \wref{lem:numintervals}, there are $O(\log |\Delta_i|)$ intervals in gap $\Delta_i$, and so we spend $O(\log \log |\Delta_i|)$ time to do a binary search to find the interval $\mathcal{I}_{i,j}$ that $e$ belongs into. We increase the size of $\mathcal{I}_{i,j}$ by one and add one credit to $\mathcal{I}_{i,j}$ to ensure \wref{inv:credits}. Thus the total amortized cost of insertion\footnote{Note that although $\log \log |\Delta_i|$ can be $o(\log \log n)$, there is no difference between $O(\sum_{i=1}^{q+1} |\Delta_i| (\log (n/|\Delta_i| )+ \log \log n))$ and $O(\sum_{i=1}^{q+1} |\Delta_i| (\log(n/|\Delta_i|) + \log \log |\Delta_i|))$. When the $\log \log n$ term in a gap dominates, $|\Delta_i| = \Omega(n/ \log n)$, so $\log \log n = \Theta(\log \log |\Delta_i|)$.} (up to constant factors) is $\log(n/|\Delta_i|)+\log \log |\Delta_i| + 4+10 = O(\log(n/|\Delta_i|) + \log \log |\Delta_i|)$. Note that if the data structure for \wref{lem:gapds} supports operations in worst-case time, insertion complexity is also worst-case.
We show in \wref{sec:qbounds} that the bound $O(\log q)$ also holds.
We use the following lemma to show that \wref{inv:credits} holds on insertion.
\begin{lemma}[Insert maintains \wref{inv:credits}]
\label{lem:ruleAinvariant}
Updating side designations according to \wref{rule:left-right} after insertions preserves \wref{inv:credits}.
\end{lemma}
\begin{proof}
The insertion of additional elements may cause an interval $\mathcal{I}_{i,j'}$ in the middle of $\Delta_i$ to change sides. This occurs exactly when the number of elements on one side exceeds the number of elements on the other side. However, before this insertion occurred, \wref{inv:credits} held with an equal number of elements on both sides of $\mathcal{I}_{i,j'}$. Since we do not change the number of credits associated with $\mathcal{I}_{i,j'}$, in effect, $o(\mathcal{I}_{i,j'})$ just changes which side it refers to, monotonically increasing through all insertions. It follows \wref{inv:credits} holds according to redesignations via \wref{rule:left-right} after insertions.
\end{proof}
\wref{inv:credits} then holds on insertion due to \wref{lem:ruleAinvariant} and since $o(\mathcal{I}_{i,j'})$ only possibly increases for any interval $\mathcal{I}_{i,j'}$, $j' \neq j$, with $|\mathcal{I}_{i,j'}|$ remaining the same; recall that an extra credit was added to interval $\mathcal{I}_{i,j}$ to accommodate the increase in $|\mathcal{I}_{i,j}|$ by one.
Note that from an implementation standpoint, no work need be done for intervals $\mathcal{I}_{i,j'}$ on insertion, even if they change sides. Any readjustment can be delayed until the following query in gap $\Delta_i$.
\subsection{Query}
\label{sec:queryanalysis}
We now proceed with the analysis of a query. We split the analysis into several sections. We first assume the gap $\Delta_i$ in which the query falls is a $2$-sided gap. We show \wref{inv:credits} implies we can pay for the current query. We then show how to ensure \wref{inv:credits} holds after the query. Finally, we make the necessary adjustments for the analysis of queries in $0$-sided and $1$-sided gaps. Recall that our complexity goal to split a gap into gaps of size $x$ and $cx$ ($c \geq 1$) is $O(\log n + x \log c)$ amortized time.
\subsubsection{Current Query}
\label{sec:current}
For the moment, we assume the gap in which the query rank $r$ satisfies $r \in \Delta_i$ is a $2$-sided gap. Further, assume the query rank $r$ falls left of the median of gap $\Delta_i$, so that the resulting gaps are a gap $\Delta'_i$ of size $x$ and a gap $\Delta'_{i+1}$ of size $cx$ ($c \geq 1$). A picture is given in \wref{fig:query}. The case of query rank $r$ falling right of the median of $\Delta_i$ is symmetric.
\begin{figure}[htbp]
\newcommand\drawint[2]{%
\draw[intline,|-|] (#1,0) -- ++(#2-0.2,0) ;
}
\scalebox{1.4}{%
\begin{tikzpicture}[xscale=.12,yscale=.18]
\scriptsize
\tikzset{intline/.style={thin}}
\drawint{0}{1}
\drawint{1}{3}
\drawint{4}{5}
\drawint{9}{8}
\drawint{17}{8}
\drawint{25}{10}
\drawint{35}{15}
\drawint{50}{21}
\drawint{71}{13}
\drawint{84}{4}
\drawint{88}{2}
\drawint{90}{1}
\drawint{91}{1}
\node[scale=.7] at (9+4,1) {$\mathcal I_{i,j}$};
\draw[thick,draw=gray] (15,1) -- ++(0,-4) node[below,scale=.7] {query};
\foreach \f/\t/\y/\l in {%
0/92-0.2/9/{gap $\Delta_i$},%
0/15-0.2/4.5/{gap $\Delta_i'$\\$|\Delta_i'| = x$},%
15/92-0.2/4.5/{gap $\Delta_{i+1}'$\\$|\Delta_{i+1}'|=cx$}%
} {
\draw[decoration={brace,amplitude=5pt},decorate] (\f+.2,\y) --
node[above=5pt,scale=.7,align=center] {\l} ++(\t-\f-.4,0);
}
\draw[decoration={brace,mirror,amplitude=2.5pt},decorate]
(0+.2,-2) -- node[below=2pt,scale=.7,align=center] {
intervals outside\\ of $\mathcal I_{i,j}$ in $\Delta_i$
will\\ move to gap $\Delta_i'$
} ++(9-.4,0) ;
\draw[decoration={brace,mirror,amplitude=2.5pt},decorate]
(17+.2,-2) -- node[below=2pt,scale=.7,align=center] {
intervals on \\same side of $\mathcal I_{i,j}$,\\
will move to $\Delta_{i+1}'$
} ++(50-17-.4,0) ;
\draw[stealth-,very thin] (35+15/2,0.2) to[out=70,in=270] ++(60:1.8)
node[above,align=center,scale=.6] {
last interval on\\ left side of $\Delta_i$} ;
\end{tikzpicture}
}
\caption{A query that splits $\mathcal{I}_{i,j}$ in gap $\Delta_i$.}
\label{fig:query}
\end{figure}
It takes $O(\log(n/|\Delta_i|)) = O(\log n)$ time via the data structure of \wref{lem:gapds} to find the gap~$\Delta_i$. We then find the interval $\mathcal{I}_{i,j}$ such that $r \in \mathcal{I}_{i,j}$. By \wref{def:rankbasedquery}, answering the query on the set of unsorted elements $\mathcal{I}_{i,j}$ can be done in $O(|\mathcal{I}_{i,j}|)$ time. Splitting interval $\mathcal{I}_{i,j}$ as described in \wref{sec:queryalg} can also be done in $O(|\mathcal{I}_{i,j}|)$ time.
Updating the data structure of \wref{lem:gapds} with the addition of gaps $\Delta'_i$ and $\Delta'_{i+1}$ and removal of gap $\Delta_i$ can be done in $O(\log n)$ time. Similarly, the total number of intervals created from the current query is no more than 6, and no more than $O(\log |\Delta_i|)$ intervals existed in gap $\Delta_i$ prior to the query, again by \wref{lem:numintervals}. Thus, applying \wref{rule:merge} to gaps $\Delta'_i$ and $\Delta'_{i+1}$ after the query takes no more than $O(\log |\Delta_i|) = O(\log n)$ time, because merging two intervals can be done in $O(1)$ time.
We next show that merging of intervals according to \wref{rule:merge} will preserve \wref{inv:credits}.
\begin{lemma}[Merge maintains \ref{inv:credits}]
Suppose interval $\mathcal{I}_{i,j}$ is merged into interval $\mathcal{I}_{i,j'}$ (note $j' = j+1$ if $\mathcal{I}_{i,j}$ is on the left side and $j' = j-1$ if $\mathcal{I}_{i,j}$ is on the right side), according to \wref{rule:merge}. Then the interval $\mathcal{I}_{i,j'}$ after the merge satisfies \wref{inv:credits}.
\label{ruleBinvariant}
\end{lemma}
\begin{proof}
Suppose interval $\mathcal{I}_{i,j}$ is merged into interval $\mathcal{I}_{i,j'}$ according to \wref{rule:merge}. Then $o(\mathcal{I}_{i,j}) \geq |\mathcal{I}_{i,j}| + |\mathcal{I}_{i,j'}|$. This implies that after the merge, $o(\mathcal{I}_{i,j'}) \geq |\mathcal{I}_{i,j'}|$, since elements outside the merged interval $\mathcal{I}_{i,j'}$ are outside both of the original intervals. Thus $\mathcal{I}_{i,j'}$ satisfies \wref{inv:credits} without any credits.
\end{proof}
In total, we pay $O(\log n + |\mathcal{I}_{i,j}|)$ actual time. As the $O(\log n)$ component is consistent with the $O(\log n)$ term in our desired query complexity, let us focus on the $O(|\mathcal{I}_{i,j}|)$ term. We have the following.
\begin{lemma}[Amortized splitting cost]
\label{lem:cancel}
Consider a query which falls in interval $\mathcal{I}_{i,j}$ and splits gap $\Delta_i$ into gaps of size $x$ and $cx$. Then $|\mathcal{I}_{i,j}| - c(\mathcal{I}_{i,j}) \leq x$.
\end{lemma}
\begin{proof}
By \wref{inv:credits}, $|\mathcal{I}_{i,j}| - c(\mathcal{I}_{i,j}) \leq o(\mathcal{I}_{i,j})$.
Now, since $\Delta_i$ is a $2$-sided gap, $o(\mathcal{I}_{i,j})$ is the lesser of the number of elements left or right of $\mathcal{I}_{i,j}$. Since the query rank $r$ satisfies $r \in \mathcal{I}_{i,j}$, this implies $o(\mathcal{I}_{i,j}) \leq x$ (See \wref{fig:query} for a visual depiction).
\end{proof}
We can apply amortized analysis with \wref{lem:cancel} as follows. Interval $\mathcal{I}_{i,j}$ is destroyed and intervals that are built from its contents have no credits. Thus, $4c(\mathcal{I}_{i,j})$ units of potential are released. By applying \wref{lem:cancel}, we can use $c(\mathcal{I}_{i,j})$ units of this released potential to bound the cost $|\mathcal{I}_{i,j}|$ with $x$. This gives an amortized cost thus far of $\log n + x - 3c(\mathcal{I}_{i,j})$. We will use the extra $3c(\mathcal{I}_{i,j})$ units of potential in the following section, ensuring \wref{inv:credits} holds for future operations.
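Spelled out, with the potential's constant factor of $4$ made explicit, the accounting of the preceding paragraph reads
\[
\underbrace{\log n + |\mathcal{I}_{i,j}|}_{\text{actual cost}} \;-\; \underbrace{4\,c(\mathcal{I}_{i,j})}_{\text{released potential}}
\;\le\; \log n + x + c(\mathcal{I}_{i,j}) - 4\,c(\mathcal{I}_{i,j})
\;=\; \log n + x - 3\,c(\mathcal{I}_{i,j}),
\]
where the inequality applies \wref{lem:cancel} in the form $|\mathcal{I}_{i,j}| \le x + c(\mathcal{I}_{i,j})$.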
\subsubsection[Ensuring Invariant (C)]{Ensuring \wref{inv:credits}}
\label{sec:ensure}
We must ensure \wref{inv:credits} holds on all intervals in gaps $\Delta'_i$ and $\Delta'_{i+1}$. Again, we will suppose $\Delta'_i$ is the smaller gap of the two, so that $\Delta'_i$ has $x$ elements and $\Delta'_{i+1}$ has $cx$ elements; the other case is symmetric.
Let us first consider gap $\Delta'_i$. This gap contains intervals from $\Delta_i$ outside of $\mathcal{I}_{i,j}$ as well as intervals made from the elements of $\mathcal{I}_{i,j}$. Observe (cf.\ \wref{fig:query}) that gap $\Delta'_i$ has in total $x$ elements. Therefore, we can trivially ensure \wref{inv:credits} holds by adding enough credits to each interval of $\Delta'_i$ to make it so, at total amortized cost at most $4x$. Let us do this after applying \wref{rule:merge} to $\Delta'_i$, so it is balanced and satisfies \wref{inv:credits}.
We now consider gap $\Delta'_{i+1}$ after rebalancing according to \wref{rule:merge}. The application of \wref{rule:merge} after the query may cause some intervals to change sides towards the query rank $r$ and subsequently merge. Intervals created from $\mathcal{I}_{i,j}$ may also merge (this may be because \wref{rule:merge} was applied lazily or even because the largest interval created from $\mathcal{I}_{i,j}$ may be on the opposite side of the rest of the intervals created from interval $\mathcal{I}_{i,j}$).
In total, the intervals of $\Delta'_{i+1}$ fall into four distinct categories. Recall that when we apply \wref{rule:merge}, we merge an interval $\mathcal{I}_{i,j'}$ into interval $\mathcal{I}_{i,j''}$, so the merged interval assumes the identity of $\mathcal{I}_{i,j''}$, and interval $\mathcal{I}_{i,j'}$ ceases to exist.
We call the four categories $A$, $B$, $C$, and $D$, and show how to ensure \wref{inv:credits} on each of them.
Category $A$ are intervals that are created from $\mathcal{I}_{i,j}$ that fall on the side of the query rank $r$ so as to become $\Delta'_{i+1}$ intervals after the query. Category $B$ are intervals on the same side as $\mathcal{I}_{i,j}$ before the query which were located inward from $\mathcal{I}_{i,j}$ in $\Delta_i$. Category $C$ are intervals that were on the opposite side of interval $\mathcal{I}_{i,j}$ before the query, but now switch sides due to the removal of the gap $\Delta'_i$. Finally, category $D$ are intervals that lie on the opposite side of interval $\mathcal{I}_{i,j}$ both before and after the query.
A picture is given in \wref{fig:newgap}.
\begin{figure}[htbp]
\newcommand\drawint[2]{%
\draw[intline,|-|] (#1,0) -- ++(#2-0.2,0) ;
}
\scalebox{1.5}{%
\begin{tikzpicture}[xscale=.12,yscale=.18]
\scriptsize
\tikzset{intline/.style={thin}}
\drawint{17}{4}
\drawint{21}{5}
\drawint{26}{7}
\drawint{33}{10}
\drawint{43}{16}
\drawint{59}{15}
\drawint{74}{10}
\drawint{84}{4}
\drawint{88}{2}
\drawint{90}{1}
\drawint{91}{1}
\begin{scope}[shift={(0,2)},opacity=.5]
\drawint{0}{8+9}
\node[scale=.7] at (9+5,1) {old $\mathcal I_{i,j}$};
\end{scope}
\begin{scope}[opacity=.3]
\drawint{0}{5}
\drawint{5}{2}
\drawint{7}{3}
%
%
\end{scope}
\drawint{10}{2}
%
\drawint{12}{2}
\drawint{14}{3}
\draw[thick,draw=gray] (10,4) -- ++(0,-6) node[below,scale=.7] {query};
\foreach \f/\t/\y/\l in {%
10/92-0.2/5/{gap $\Delta_{i+1}'$}%
} {
\draw[decoration={brace,amplitude=5pt},decorate] (\f+.2,\y) --
node[above=5pt,scale=.7,align=center] {\l} ++(\t-\f-.4,0);
}
\foreach \f/\t/\l in {%
10/17/{\textbf{\boldmath Cat.\,$A$}:\\new intervals \\from old $\mathcal I_{i,j}$},%
17/43/{\textbf{\boldmath Category $B$}:\\old left-side intervals to the \\right of $\mathcal I_{i,j}$ that remain \\left-side intervals in $\Delta_{i+1}'$},%
43/59/{\textbf{\boldmath Category $C$}:\\intervals that transitioned \\from right side to left side},%
59/92/{\textbf{\boldmath Category $D$}:\\right-side intervals that \\remain on right side in $\Delta_{i+1}'$}%
} {
\draw[decoration={brace,mirror,amplitude=2.5pt},decorate]
(\f+.2,-4) -- node[below=2pt,scale=.65,align=center] {\l}
++(\t-\f-.4,0) ;
}
\draw[stealth-,very thin] (33+5,0.2) to[out=70,in=270] ++(60:2)
node[above,align=center,scale=.6]
{last interval on\\left side of $\Delta_i$} ;
\end{tikzpicture}
}
\caption{Gap $\Delta'_{i+1}$ after query within interval $\mathcal{I}_{i,j}$ of $\Delta_i$. The picture assumes $\mathcal{I}_{i,j}$ was a left-side interval.}
\label{fig:newgap}
\end{figure}
We proceed with ensuring \wref{inv:credits} on each category.
\begin{itemize}
\item \textbf{\boldmath Category $D$}: Category $D$ intervals are easiest. These intervals are not affected by the query and thus still satisfy \wref{inv:credits} with no additional cost.
\item \textbf{\boldmath Category $A$}: Now consider category $A$ intervals.
Three such intervals, $\mathcal I'_{i+1,1}$, $\mathcal I'_{i+1,2}$, and $\mathcal I'_{i+1,3}$, are created in the query algorithm stated in \wref{sec:queryalg}. The leftmost and middlemost intervals, $\mathcal I'_{i+1,1}$ and $\mathcal I'_{i+1,2}$, have size $\frac{1}{4}|R| \pm 1$, and the rightmost interval $\mathcal I'_{i+1,3}$ has size $\frac{1}{2}|R| \pm 1$ (the $\pm1$ addresses the case that $|R|$ is not divisible by $4$).
Up to one element, $\mathcal I'_{i+1,2}$ has at least as many elements outside of it as within it. Thus after giving it one credit, $\mathcal I'_{i+1,2}$ satisfies \wref{inv:credits}. Similarly, $\mathcal I'_{i+1,3}$ will remain on the same side in most cases, and thus will also have enough elements outside it from the other two intervals (potentially after giving it one credit, too).
But we always have to assign credits to $\mathcal I'_{i+1,1}$.
Moreover, if interval $\mathcal{I}_{i,j}$ was very large, then $\mathcal I'_{i+1,3}$ may actually switch sides in the new gap $\Delta'_{i+1}$.
In the worst case, we will require credits to satisfy \wref{inv:credits} on both $\mathcal I'_{i+1,1}$ and $\mathcal I'_{i+1,3}$. As their sizes total $\frac{3}{4}|R| + O(1)$, at $4$ units of potential per credit the amortized cost to do so is no more than $3|\mathcal{I}_{i,j}| + O(1)$. We can use the extra $3c(\mathcal{I}_{i,j})$ units of potential saved from \wref{sec:current} to pay for this operation. By applying \wref{lem:cancel} again, we can bound $3|\mathcal{I}_{i,j}| - 3c(\mathcal{I}_{i,j})$ with $3x$, bringing the amortized cost of satisfying \wref{inv:credits} on category $A$ intervals to $O(x)$.
\item \textbf{\boldmath Categories $B$ and $C$}: We'll handle category $B$ and $C$ intervals together. First observe that since $x$ elements were removed with the query, we can bound the number of credits necessary to satisfy \wref{inv:credits} on a single interval in category $B$ or $C$ with either $x$ or the size of that interval. For category $C$ intervals, this follows because they had more elements on their left side prior to the query, thus upon switching sides after the query, $x$ credits will suffice to satisfy \wref{inv:credits}, similarly to the proof of \wref{lem:ruleAinvariant}. In the new gap $\Delta'_{i+1}$, let $j'$ be the smallest index such that $|\mathcal{I}_{i+1,j'}| \geq x$. We will handle category $B$ and $C$ intervals left of $\mathcal{I}_{i+1,j'}$ and right of $\mathcal{I}_{i+1,j'}$ differently.
Let us first consider category $B$ and $C$ intervals left of interval $\mathcal{I}_{i+1,j'}$. All such intervals have size less than $x$. If there are less than two such intervals, we may apply $x$ credits to each to ensure \wref{inv:credits} at total cost $O(x)$. Otherwise, consider intervals $\mathcal{I}_{i+1,j'-2}$ and $\mathcal{I}_{i+1,j'-1}$. Due to application of \wref{rule:merge} after the query, intervals $\mathcal{I}_{i+1,j'-2}$ and $\mathcal{I}_{i+1,j'-1}$ make up more than half of the total number of elements left of interval $\mathcal{I}_{i+1,j'}$. Since $|\mathcal{I}_{i+1,j'-2}| < x$ and $|\mathcal{I}_{i+1,j'-1}| < x$, it follows there are no more than $4x$ elements located in intervals left of interval $\mathcal{I}_{i+1,j'}$ in gap $\Delta'_{i+1}$. For each such interval, we add at most the size of the interval in credits so that \wref{inv:credits} holds on all intervals left of $\mathcal{I}_{i+1,j'}$ in gap $\Delta'_{i+1}$. The total cost is $O(x)$.
Now consider intervals right of $\mathcal{I}_{i+1,j'}$. If there are less than two such intervals, we may apply $x$ credits to each to ensure \wref{inv:credits} at total cost $O(x)$. Otherwise, consider intervals $\mathcal{I}_{i+1,j'+1}$ and $\mathcal{I}_{i+1,j'+2}$. By \wref{rule:merge} after the query, $|\mathcal{I}_{i+1,j'+1}| + |\mathcal{I}_{i+1,j'+2}| > x$, since interval $\mathcal{I}_{i+1,j'}$ is outside intervals $\mathcal{I}_{i+1,j'+1}$ and $\mathcal{I}_{i+1,j'+2}$ and $|\mathcal{I}_{i+1,j'}| \geq x$ by choice of $j'$. Similarly, if such intervals are category $B$ or $C$ intervals, then $|\mathcal{I}_{i+1,j'+3}| + |\mathcal{I}_{i+1,j'+4}| > 2x$ and $|\mathcal{I}_{i+1,j'+5}| + |\mathcal{I}_{i+1,j'+6}| > 4x$. In general, $|\mathcal{I}_{i+1,j'+2k-1}| + |\mathcal{I}_{i+1,j'+2k}| > 2^{k-1}x$ for any $k$ where intervals $\mathcal{I}_{i+1,j'+2k-1}$ and $\mathcal{I}_{i+1,j'+2k}$ are category $B$ or $C$ intervals. Since there are $cx$ total elements in gap $\Delta'_{i+1}$, it follows the number of category $B$ and $C$ intervals right of $\mathcal{I}_{i+1,j'}$ is $O(\log c)$. We may then apply $x$ credits to all such intervals and interval $\mathcal{I}_{i+1,j'}$ for a total cost of $O(x \log c)$.
\end{itemize}
Altogether, we can ensure \wref{inv:credits} for future iterations at total $O(x \log c)$ amortized cost.
\subsubsection{0-Sided and 1-Sided Gaps}
\label{sec:onesided}
We proceed with a generalization of the previous two sections for when the gap $\Delta_i$ in which the query falls is a $0$-sided or $1$-sided gap. If gap $\Delta_i$ is $0$-sided, we spend $O(n)$ time to answer the query, according to \wref{def:rankbasedquery} on a set of $n$ unsorted elements. Since \wref{inv:credits} is satisfied prior to the query and no elements lie outside the single interval, that interval carries at least $n$ credits, which are released. Quantity $M$ does not change. Thus, at least $4n$ units of potential are released, giving amortized time $n - 4n = -3n$. All intervals in the data structure resulting from the query are category $A$ intervals. The analysis of the preceding section for category $A$ intervals applies. We can pay $O(x)$ to satisfy \wref{inv:credits} on the smaller gap, and the remaining $3n$ units of released potential are enough to guarantee \wref{inv:credits} holds on all intervals in the larger gap.
Now suppose $\Delta_i$ is $1$-sided. If the query rank $r$ is closer to the side of $\Delta_i$ on which queries have been performed, then the same analysis of the preceding sections suffices. Note that there will be neither category $C$ nor category $D$ intervals. The creation of $2$-sided gap $\Delta'_i$ out of elements of $1$-sided gap $\Delta_i$ will cause $10x$ additional units of potential to be released due to the decrease in $M$; these units are not used in this case.
We are left with the case $\Delta_i$ is $1$-sided and the query rank $r$ is closer to the side of $\Delta_i$ on which queries have not been performed; suppose without loss of generality that previously only the right endpoint
of $\Delta_i$ has been queried and $r$ is closer to the left endpoint.
In this case, the creation of $2$-sided gap $\Delta'_{i+1}$ out of elements of $1$-sided gap $\Delta_i$ will cause $10cx$ units of potential to be released due to the decrease in $M$. Since $c \geq 1$, this is at least $5|\Delta_i|$ units of potential. We use them as follows. Answering the query takes no more than $O(|\Delta_i|)$ time, and ensuring intervals satisfy \wref{inv:credits} in new gaps $\Delta'_i$ and $\Delta'_{i+1}$ after the query similarly takes no more than $|\Delta_i|$ credits, which costs $4|\Delta_i|$ units of potential. Thus, in total this takes no more than $|\Delta_i| + 4|\Delta_i| - 5|\Delta_i| = O(1)$ amortized time.
\bigskip\noindent
Putting the preceding three sections together, we may answer a query in $O(\log n + x \log c)$ time while ensuring \wref{inv:credits} for future operations.
\subsection{Deletion}
The analysis of deletion of $e=(k,v)$ pointed to by \texttt{ptr} is as follows. The element $e$ can be removed from the interval in which it resides in $O(1)$ time. Removing said interval lazily, if applicable, takes $O(1)$ time. If the gap in which $e$ resides also needs removal, \wref{lem:gapds} says doing so will take $O(\log n)$ time.
In any case, when element $e \in \Delta_i$ is deleted, we must ensure \wref{inv:credits} on the remaining intervals of $\Delta_i$. If $e$ was outside of an interval $\mathcal{I}_{i,j}$, $o(\mathcal{I}_{i,j})$ decreases by one. Thus, for any such intervals, we pay one credit to ensure \wref{inv:credits} remains satisfied. Thus in accordance with \wref{lem:numintervals}, this takes $O(\log |\Delta_i|)$ total credits.
The total amortized cost is thus no more than $O(\log n + \log |\Delta_i|) = O(\log n)$. If the data structure of \wref{lem:gapds} supports operations in worst-case time, this runtime is also worst-case.
\subsection{Change-Key}
\label{sec:changekeyanalysis}
We analyze the change-key operation as follows. Suppose \texttt{ptr} points to element $e=(k,v)$ and we change its key as described in \wref{sec:changekeyalg} to $k'$. If $k'$ falls outside gap $\Delta_i$, $O(\log n)$ complexity follows from deletion and re-insertion of $(k',v)$. Otherwise, the binary search in $\Delta_i$ takes $O(\log \log |\Delta_i|)$ time, again by \wref{lem:numintervals}. To ensure \wref{inv:credits} on the intervals of $\Delta_i$, as is the case for deletion, we must pay one credit per interval $e$ is no longer outside of. Thus, the key-change operation takes at most $O(\log |\Delta_i|)$ time; however, if we change the key of $e$ towards the nearest query rank, we can show \wref{inv:credits} is satisfied without spending any credits.
At any point in time, all intervals in $\Delta_i$ are classified as being on the left side or the right side according to the closest query rank, in accordance to \wref{rule:left-right}. Any element of a left-side interval can have its key decreased, while only increasing or keeping constant the number of elements outside of any other left-side interval. The same is true for key increases of elements in right side intervals.
Now consider if $e \in \mathcal{I}_{i,j}$ and $\mathcal{I}_{i,j}$ is the rightmost interval on the left side. Then we can also increase the key of $e$ while keeping the same or increasing the number of elements outside of any interval in $\Delta_i$. The same is true of decreasing the key of an element in the leftmost interval on the right side. Since the median of $\Delta_i$ falls in either the leftmost interval of the right side or the rightmost interval of the left side, it follows that we can ensure \wref{inv:credits} as long as the element whose key changes moves closer to its nearest query rank. Note that this analysis holds even as intervals change side designations due to insertions; for a refresher of this analysis see the proof of \wref{lem:ruleAinvariant}. This is despite delaying the application of \wref{rule:merge} until the following query in gap~$\Delta_i$.
This proves our statement in \wref{thm:main} about change-key.
The dichotomy displayed therein between cheap and expensive key changes can be refined as follows.
Suppose $c \geq 2$ is such that $e$ is located between (gap-local) ranks $|\Delta_i|/c$ and $|\Delta_i|-|\Delta_i|/c$ in $\Delta_i$; then we can change its key \emph{arbitrarily} in $O(\log \log |\Delta_i| + \log c)$ time.
This is because of the geometric nature of interval sizes. Intervals are highly concentrated close to the edges of gap $\Delta_i$ in order to support queries that increase $B$ very little, efficiently. Thus, we can support arbitrary key changes in $O(\log \log |\Delta_i|)$ time for the vast majority of the elements of gap $\Delta_i$, since ensuring \wref{inv:credits} will only require a constant number of credits,
and the performance smoothly degrades as the changed elements get closer to previous query ranks.
A second refinement is that we can change $e$ arbitrarily without paying any credits if an insertion closer to the endpoint of gap $\Delta_i$ has happened before said key-change, but after the query that created $\Delta_i$: such insertion increases the number of elements outside of all intervals that are potentially affected by moving $e$ closer to the middle of $\Delta_i$, thus no credits have to be supplied.
A similar argument shows that the time complexity of deletion is only $O(1)$ if an element was previously inserted closer to the gap endpoint than the deleted element.
We point out again that, from the perspective of the data structure, these savings are realized automatically and the data structure will always run as efficiently as possible;
the credits are only an aspect of the analysis, not our algorithms.
\bigskip\noindent
In the following section, we show that a bound on the number of created intervals can bound the number of pointers required of the data structure and the insertion and change-key complexities when the number of queries is small.
\subsection{Pointer Bound and Improved Insertion and Change-Key}
\label{sec:qbounds}
The preceding sections show insertion into gap $\Delta_i$ in $O(\log (n/|\Delta_i|) + \log \log |\Delta_i|)$ time and a change-key time complexity of $O(\log \log |\Delta_i|)$. A bound of $O(\log q)$ can also be made, which may be more efficient when $q$ is small. We also prove the bound stated in \wref{thm:qbounds} on the total number of pointers required of the data structure. We address the latter first.
\begin{proof}[Proof of \wref{thm:qbounds}{}]
Each query (including \texttt{Split($r$)} queries) creates at most $6$ intervals, and no other operations create intervals. The number of pointers required of all interval data structures is linear in the number of total intervals created, bounded to at most $n$. This is because elements within an interval are contiguous (in the sense an expandable array is contiguous) unless the interval is a result of merged intervals, where we assume that intervals are implemented as linked lists of arrays. Each merged interval must have been created at some point in time, thus the bound holds. The number of pointers required in the data structure of \wref{lem:gapds} is linear in the number of gaps (or intervals, if the data structure operates directly over intervals), taking no more than $O(\min(q, n))$ pointers, as the number of intervals is $O(\min(q, n))$.
\end{proof}
The above proof shows that the number of intervals and gaps in the entire data structure can be bounded by $q$. This implies the binary searches during insertion (both in the data structure of \wref{lem:gapds} and in \wref{sec:insertalg}) and change-key operations take no more than $O(\log q)$ time.
This gives a refined insertion time bound of $O(\min(\log (n/|\Delta_i|) + \log \log |\Delta_i|, \log q))$ and a change-key time bound of $O(\min(\log q, \log \log |\Delta_i|))$.
To guarantee an $O(\log q)$ time bound in the gap data structure, we can maintain
all gaps additionally in a standard balanced BST, with pointers between corresponding nodes
in both data structures. A query can alternatively advance from the root in both structures,
succeeding as soon as one search terminates.
Updates must be done on both structures, but the claimed $O(\log n)$ time bounds (for queries, delete, split, and merge) permit this behavior.
\section{Bulk Update Operations}
\label{sec:bulk}
Lazy search trees can support binary search tree bulk-update operations. We can split a lazy search tree at a rank $r$ into two lazy search trees $T_1$ and $T_2$ of $r$ and $n-r$ elements, respectively, such that for all $x \in T_1$, $y \in T_2$, $x \leq y$. We can also support a merge of two lazy search trees $T_1$ and $T_2$ given that for all $x \in T_1$, $y \in T_2$, $x \leq y$.
We state this formally in \wref{lem:bulk}.
\begin{lemma}
\label{lem:bulk}
Operation \texttt{Split($r$)} can be performed on a lazy search tree in time the same as \texttt{RankBasedQuery($r$)}. Operation \texttt{Merge($T_1$,\,$T_2$)} can be performed on lazy search trees in $O(\log n)$ worst-case time.
\end{lemma}
\begin{proof}
To perform operation \texttt{Split($r$)}, we first query for rank $r$ in the lazy search tree. We then split the data structure of \wref{lem:gapds} at the separation in gaps induced by the query for rank $r$. Two lazy search trees result, with their own future per-operation costs according to the number of elements and gaps that fall into each tree. Using a globally-biased $2, b$ tree~\cite{Bent85} with weights as in the proof of \wref{lem:gapds}, the split takes $O(\log n)$ worst-case time (Theorem 10 of~\cite{Bent85}). The overall time complexity is dominated by the query for rank $r$ in the original tree, since queries take at least $\Omega(\log n)$ time.
To perform operation \texttt{Merge($T_1$,\,$T_2$)}, we perform a merge on the data structures of \wref{lem:gapds} associated with each lazy search tree. Future per-operation costs are adjusted according to the union of all gaps and totaling of elements in the two lazy search trees that are combined. Using a globally-biased $2, b$ tree~\cite{Bent85} with weights as in the proof of \wref{lem:gapds}, the merge takes $O(\log n)$ worst-case time or better (Theorem 8 of~\cite{Bent85}).
\end{proof}
\wref{lem:bulk} completes the analysis for the final operations given in \wref{thm:main}.
\section{Average Case Insertion and Change-Key}
\label{sec:average}
Our time bounds from \wref{thm:main} are an additive $O(\log \log n)$ away from the optimal
time of insertion and change-key;
it turns out that in certain average-case scenarios, we can indeed reduce this time
to an optimal \emph{expected} amortized time.
The essential step will be to refine the binary search within a gap to an exponential search.
\subsection{Insert}
Recall that we store intervals in a sorted array.
We modify the insertion algorithm of the interval data structure in \wref{sec:insertalg}
so that we instead perform a \emph{double binary search}
(also called \emph{exponential search}~\cite{Bentley76}),
outward from the last interval on the left side and first interval on the right side.
This is enough to prove the following result.
\begin{theorem}[Average-case insert]
\label{thm:average-insert}
Suppose the intervals within a gap are balanced using \wref{rule:merge} and
further suppose insertions follow a distribution such that the gap in which an inserted element
falls can be chosen adversarially, but amongst the elements of that gap,
its rank is chosen uniformly at random.
Then insertion into gap $\Delta_i$ takes expected time $O(\log(n/|\Delta_i|))$.
\end{theorem}
\begin{proof}
First note that the double binary search during insertion finds an interval that is
$k$ intervals from the middlemost intervals in time $O(\log k)$;
apart from constant factors, this is never worse than the $O(\log \ell_i)$ of a binary search.
The assumption on insertion ranks implies that the probability to insert into interval $\mathcal I_{i,j}$
(out of the possible $\ell_i$ intervals in gap $\Delta_i$) is $|\mathcal I_{i,j}|/ |\Delta_i| \pm O(1/|\Delta_i|)$,
\ie, proportional to its size.
Recall that in a gap $\Delta_i$ satisfying \wref{lem:numintervals}, interval sizes grow at least
like $(\sqrt2)^k$; that implies the largest (middlemost) intervals contain
a constant fraction of the elements in $\Delta_i$;
for these, insertion takes $O(1)$ time.
The same applies recursively:
With every outward step taken, the insertion procedure takes $O(1)$ more time,
while the number of elements that fall in these intervals decreases by a constant factor.
The expected insertion time in the interval data structure is proportional to
\[
\sum_{k=1}^\infty \frac{\log k}{(\sqrt 2)^k} \;\leq\;
\sum_{k=1}^\infty \frac{k}{(\sqrt 2)^k} \;=\;
4 + 3 \sqrt 2,
\]
\ie, constant overall.
Adding the $O(\log(n/|\Delta_i|))$ time to find the gap yields the claim.
\end{proof}
Observe that walking from the largest intervals outward, instead of performing an exponential search~\cite{Bentley76}, is sufficient for the above analysis. However, the exponential search also satisfies the worst case $O(\log \log n)$ bound (more precisely $O(\min(\log \log |\Delta_i|, \log q))$) described in \wref[Sections]{sec:insertalg} and~\ref{sec:insertanalysis}.
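For concreteness, a Python sketch of this outward double binary search over the interval array is given below; \texttt{mid} is the index of the first right-side interval and \texttt{left\_keys} are the router keys as in the earlier sketches (illustrative only).
\begin{verbatim}
import bisect

def exp_search_interval(left_keys, mid, k):
    """Find the interval for key k, probing outward from index mid.

    Costs O(log d) comparisons if the answer lies d intervals away from
    mid; up to constants this is never worse than plain binary search."""
    if mid > 0 and k < left_keys[mid]:            # target on the left side
        step, hi = 1, mid
        while hi - step > 0 and k < left_keys[hi - step]:
            step *= 2                             # double the jump outward
        lo = max(hi - step, 0)
    else:                                         # target on the right side
        step, lo = 1, mid
        while lo + step < len(left_keys) and k >= left_keys[lo + step]:
            step *= 2
        hi = min(lo + step, len(left_keys))
    return max(bisect.bisect_right(left_keys, k, lo, hi) - 1, 0)
\end{verbatim}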
\begin{remark}[Fast insertion without arrays]
We can achieve the same effect if intervals are stored in another biased search tree so that interval $\mathcal{I}_{i,j}$ receives weight $|\Delta_i|/\ell_i + |\mathcal{I}_{i,j}|$.
\end{remark}
\wref{thm:average-insert} assumes that intervals are balanced according to \wref{rule:merge}.
In \wref{sec:ds}, we described balancing according to \wref{rule:merge} lazily.
Keeping the intervals balanced according to \wref{rule:merge} while insertions or change-key operations occur,
in the required time complexity, is nontrivial.
We show it can be done in $O(1)$ amortized time below.
\begin{lemma}[Strict merging]
\label{lem:Bcheck}
Given a gap $\Delta_i$, we can keep intervals in $\Delta_i$ balanced according
to within a constant factor of the guarantee of \wref{rule:merge} in $O(1)$
amortized time per insertion into $\Delta_i$.
\end{lemma}
\begin{proof}
We utilize the exponentially-increasing interval sizes due to \wref{lem:numintervals}. We check the outermost intervals about every operation and exponentially decrease checking frequency as we move inwards. The number of intervals checked over $k$ operations is $O(k)$. The guarantee of \wref{rule:merge} is changed so that the number of elements left of $\mathcal{I}_{i,j}$ in $\Delta_i$ is no more than a constant times $|\mathcal{I}_{i,j}| + |\mathcal{I}_{i,j+1}|$ (reflected for right side intervals), to which previous analysis holds.
\end{proof}
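One way to realize this schedule (our reading of the proof, offered as an illustration rather than a prescribed implementation) is to drive the checks off a per-gap operation counter: after the $t$-th insertion into the gap, the pair of intervals at depth $d$ from the gap edge is examined only if $2^d$ divides $t$.
\begin{verbatim}
def depths_to_check(t, num_intervals):
    """Depths (distance from the gap edge) whose intervals are tested
    against Rule (B) after the t-th insertion into the gap.

    Depth d is checked when 2^d divides t, so over k consecutive
    operations depth d is checked about k / 2^d times and the total
    number of checks is O(k)."""
    d = 0
    while d < num_intervals and t % (1 << d) == 0:
        yield d
        d += 1

assert list(depths_to_check(8, 10)) == [0, 1, 2, 3]
\end{verbatim}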
\subsection{Change-Key}
If we apply \wref{lem:Bcheck}, we can also support improved average-case change-key operations
in the following sense.
\begin{theorem}[Average-case change-key]
\label{thm:average-changekey}
If a \textup{\texttt{ChangeKey(ptr,\,$k'$)}} operation is performed such that the element pointed to by \texttt{ptr}, $e=(k,v)$, moves closer to its closest query rank within its gap and the rank of $k'$ is selected uniformly at random from valid ranks, it can be supported in $O(1)$ expected time.
\end{theorem}
\begin{proof}
We again perform a double binary search (exponential search~\cite{Bentley76}) for the new interval of $e$;
this time we start at the interval $\mathcal{I}_{i,j}$ in which $e$ currently resides and move outwards from there.
The analysis follows similarly to \wref{thm:average-insert}.
\end{proof}
When used as a priority queue, \wref{thm:average-changekey}
improves the average-case complexity of decrease-key to $O(1)$.
\section{Randomized-Selection Variant}
\label{sec:random}
We can improve the practical efficiency of lazy search trees by replacing exact median-finding in the query procedure with randomized pivoting. Specifically, after finding sets $L$ and $R$ as described in \wref{sec:queryalg}, we then partition $L$ into sets $L_l$ and $L_r$ by picking a random element $p \in L$ and pivoting so that all elements less than $p$ are placed in set $L_l$ and all elements greater than $p$ are placed in set $L_r$. To avoid bias when elements are not unique, elements equal to $p$ should be split between $L_l$ and $L_r$.
We then repeat the procedure one more time on set $L_r$. We do the same, reflected, for set $R$.
\begin{remark}[Partitioning with equal keys]
In our analysis, we assume for simplicity that the number of
elements with same key as $p$, including $p$ itself,
that are assigned to the left segment is chosen uniformly at random from the number of copies.
That implies overall a uniform distribution for the size of the segments.
Partitioning procedures as used in standard implementations of quicksort~\cite{Sedgewick1978}
actually lead to slightly more balanced splits~\cite{Sedgewick1977a}; they will only perform better.
For practical implementations of lazy search trees, choosing the partitioning element $p$
as the median of a small sample is likely to improve overall performance.
\end{remark}
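A Python sketch of the randomized replacement for the two selections on side $L$ follows (reflected for $R$); ties with the pivot are split at a uniformly random cut, one possible reading of the simplifying assumption in the remark above.
\begin{verbatim}
import random

def random_split(xs):
    """One randomized partitioning step around a uniformly random pivot.
    Elements equal to the pivot are split at a uniformly random cut."""
    if not xs:
        return [], []
    p = random.choice(xs)
    smaller = [x for x in xs if x[0] < p[0]]
    larger  = [x for x in xs if x[0] > p[0]]
    equal   = [x for x in xs if x[0] == p[0]]
    cut = random.randint(0, len(equal))
    return smaller + equal[:cut], equal[cut:] + larger

def randomized_left_side(L):
    """Replace the two exact median selections on side L of the query
    algorithm by two random pivoting steps; side R is symmetric."""
    L_l, L_r = random_split(L)
    L_rl, L_rr = random_split(L_r)
    return [L_l, L_rl, L_rr]
\end{verbatim}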
Changing the query algorithm in this way requires a few changes in our analysis. The analysis given in \wref{sec:analysis} is amenable to changes in constant factors in several locations. Let us generalize the potential function as follows, where $\alpha$ is a set constant, such as $\alpha = 4$ in \wref{sec:analysis}. One can see from \wref{sec:onesided} that this will imply the constant in front of $M$ must be at least $2(\alpha+1)$.
\[
\Phi \;\;=\;\; 2(\alpha+1)M \;+\; \alpha\sum_{\mathclap{\substack{1 \leq i \leq m,\\1 \leq j \leq \ell_i}}} \: c(\mathcal{I}_{i,j}).
\]
Insertion still takes $O(\min(\log(n/|\Delta_i|) + \log \log |\Delta_i|, \log q))$ time. As before, splitting into sets $L$ and $R$ can typically be done in $O(|\mathcal{I}_{i,j}|)$ deterministic time via the result of the query, but if not, quickselect can be used for $O(|\mathcal{I}_{i,j}|)$ expected (indeed with high probability) time performance~\cite{Hoare61,FloydRivest1975,Kiwiel2005}. The modified pivoting procedure described above for $L_l$ and $L_r$ is repeated in total 4 times. We can thus bound the complexity of these selections at $O(|\mathcal{I}_{i,j}|)$, regardless of the randomization used.
Then by application of \wref{lem:cancel}, we reduce the current amortized time to split $\mathcal{I}_{i,j}$ to $O(x)$, leaving $(\alpha-1)c(\mathcal{I}_{i,j})$ units of potential to handle ensuring \wref{inv:credits} on category A intervals in \wref{sec:ensure}.
The number of credits necessary to satisfy \wref{inv:credits} on category $A$ intervals is now a random variable. %
Recall the arguments given in \wref{sec:current} and \wref{sec:ensure} regarding category $A$ intervals. As long as the (expected) number of credits to satisfy \wref{inv:credits} on category $A$ intervals is at most a constant fraction $\gamma$ of $|\mathcal{I}_{i,j}|$, we can set $\alpha = \frac{1}{1-\gamma}$ and the amortized analysis carries through.
We have the following regarding the expected number of credits to satisfy \wref{inv:credits} on category $A$ intervals using the randomized splitting algorithm.
\begin{lemma}
\label{lem:expectedcost}
Suppose a query falls in interval $\mathcal{I}_{i,j}$ and the intervals built from the elements of $\mathcal{I}_{i,j}$ are constructed using the randomized splitting algorithm. The expected number of credits necessary to satisfy \wref{inv:credits} on category $A$ intervals after a query is no more than $\frac{143}{144}|\mathcal{I}_{i,j}| + O(1)$.
\end{lemma}
\begin{proof}
We prove the loose bound by considering only a single random event, occurring with constant probability, in which at most a constant fraction of $|\mathcal{I}_{i,j}|$ credits is necessary.
We orient as in \wref{sec:ensure}, assuming the larger new gap, $\Delta'_{i+1}$, is right of the smaller new gap, $\Delta'_i$. We must consider the number of credits necessary to satisfy \wref{inv:credits} on the three category $A$ intervals $\mathcal{I'}_{i+1,1}$, $\mathcal{I'}_{i+1,2}$, and $\mathcal{I'}_{i+1,3}$ of new gap $\Delta'_{i+1}$. The rightmost interval $\mathcal{I'}_{i+1,3}$ has size drawn uniformly at random in $1, \ldots, |R|$, the leftmost, $\mathcal{I'}_{i+1,1}$, takes size uniformly at random from the remaining elements, and the middlemost interval $\mathcal{I'}_{i+1,2}$ takes whatever elements remain.
Suppose the rightmost interval $\mathcal{I'}_{i+1,3}$ comprises a fraction
$x = |\mathcal I'_{i+1,3}| / |R| \in \bigl[\frac{1}{3},\,\frac{2}{3}\bigr]$ of all elements in $R$, and further suppose the leftmost interval $\mathcal{I'}_{i+1,1}$ takes between $1/2$ and $3/4$ of the remaining elements, \ie,
a fraction $y = |\mathcal I'_{i+1,1}|/|R| \in \bigl[\frac{1}{2}(1-x),\,\frac{3}{4}(1-x)\bigr]$ of the overall elements in $R$. In this case, it is guaranteed we require no credits to satisfy \wref{inv:credits} on the middlemost interval. The number of credits to satisfy \wref{inv:credits} on the rightmost and leftmost intervals is $(x+y)|R|$, which is maximized at $\frac{11}{12}|R|$. This event happens with probability $\frac{1}{3} \cdot \frac{1}{4} - O\left({1}/{|R|}\right) = \frac{1}{12} - O\left({1}/{|R|}\right) $, where we include the $O\left({1}/{|R|}\right)$ term to handle rounding issues with integer values of $|R|$. As we never require more than $|R|$ credits in any situation and $|R|\le |\mathcal I_{i,j}|$, we can then bound the expected number of necessary credits at $\frac{11}{12}\cdot |\mathcal{I}_{i,j}| + \frac{1}{12} \cdot \frac{11}{12}|\mathcal{I}_{i,j}| + O(1) = \frac{143}{144}|\mathcal{I}_{i,j}| + O(1)$.
\end{proof}
With \wref{lem:expectedcost}, we can set $\alpha = 144$ and use the remaining $143c(\mathcal{I}_{i,j})$ credits from destroying $\mathcal{I}_{i,j}$ and bound $143|\mathcal{I}_{i,j}|-143c(\mathcal{I}_{i,j})$ with $143x$ via \wref{lem:cancel}. All other query analysis in \wref{sec:ensure} is exactly as before. This gives total expected amortized query time $O(\log n + x \log c)$ on $2$-sided gaps. With a constant of $2(\alpha+1)$ in front of $M$ in the generalized potential function, the analysis for $0$ and $1$-sided gaps in \wref{sec:onesided} carries through.
Putting it all together, we get the following result.
\begin{theorem}[Randomized splitting]
\label{thm:expected}
If partitioning by median in the query algorithm is replaced with splitting on random pivots, lazy search trees satisfy the same time bounds, in worst-case time, as in \wref{thm:main}, except that \texttt{RankBasedQuery($r$)} and \texttt{Split($r$)} now take $O(\log n + x \log c)$ expected amortized time.
\end{theorem}
Note that another possible approach is to change \wref{inv:credits} to something like $c(\mathcal{I}_{i,j}) + 2o(\mathcal{I}_{i,j}) \geq |\mathcal{I}_{i,j}|$, which gives further flexibility in the rest of the analysis. This is, however, not necessary to prove \wref{thm:expected}.
\section{Lazy Splay Trees}
\label{sec:splay}
Splay trees~\cite{Sleator85} are arguably the most suitable choice of a biased search tree in practice;
we therefore explore their use within lazy search trees in this section.
We show that an amortized-runtime version of \wref{lem:gapds} can indeed be obtained using splay trees.
We also show that by using a splay tree, the efficient access theorems of the splay tree are achieved automatically
by the lazy search tree. This generalizes to any biased search tree that is used as the data structure of \wref{lem:gapds}.
\subsection{Splay Trees For The Gap Data Structure}
We show that splay trees can be used as the gap data structure.
\begin{lemma}[Splay for Gaps]
\label{lem:gapds-amortized}
Using \textit{splay trees} as the data structure for the set of gaps $\{\Delta_i\}$ allows support of all operations listed in \wref{lem:gapds}, where the time bounds are satisfied as \textit{amortized} runtimes over the whole sequence of operations.
\end{lemma}
\begin{proof}
We use a splay tree~\cite{Sleator85} and weigh gap $\Delta_i$ with $w_i = |\Delta_i|$.
The sum of weights, $W$, is thus equal to $n$.
Operation 1 can be supported by searching for $e=(k,v)$ in the tree
until the gap $\Delta_i$ containing it is found; this gap is then splayed.
According to the Access Lemma~\cite{Sleator85}, this is supported in
$O(\log(n/|\Delta_i|))$ amortized time.
Operation 2 requires a weight change on gap $\Delta_i$. By first accessing gap $\Delta_i$,
so that it is at the root, and then applying a weight change,
this operation can be completed in time proportional to the access.
According to the Access Lemma~\cite{Sleator85} and the Update Lemma~\cite{Sleator85},
this will then take $O(\log(n/|\Delta_i|))$ amortized time.
Note that for our use of operation 2, the element will already have just been accessed,
so the additional access is redundant.
Operations 3 and 4 are supported in $O(\log n)$ time by the Update Lemma~\cite{Sleator85}.
Note that when the gap data structure is used in a lazy search tree, it always starts empty
and more gaps are added one by one when answering queries. Hence any sequence of operations
arising in our application will access every element in the splay tree at least once.
%
\end{proof}
Note that a bound of $O(\log q)$ amortized cost for all operations also holds by using
equal weights in the analysis above (recall that in splay trees, the node weights are solely a means
for analysis and do not change the data structure itself).
\subsection{Efficient Access Theorems}
\label{sec:efficient-access}
We now specify a few implementation details to show how lazy search trees can perform accesses as fast as the data structure of \wref{lem:gapds} (resp.\ \wref{lem:gapds-amortized}).%
If an element $e$ is the result of a query for a second time, then during that second access, $e$ is the largest element in its gap.
Instead of destroying that gap, we can take the gap into which $e$ falls after the query to be the same gap in which $e$ previously resided (depending on implementation, this may require a key change in the data structure of \wref{lem:gapds}, but the relative ordering of keys does not change). In this way, repeated accesses to elements directly correspond to repeated accesses to nodes in the data structure of \wref{lem:gapds}.
Further, implementation details should ensure that no restructuring occurs in the interval data structure when an element previously accessed is accessed again. This is implied by the algorithms in \wref{sec:ds}, but care must be taken in the case of duplicate elements. This will ensure accessing a previously-accessed element will take $O(1)$ time in the interval data structure.
With these modifications, the lazy search tree assumes the efficient access properties of the data structure of \wref{lem:gapds}. We can state this formally as follows.
\begin{theorem}[Access Theorem]
\label{thm:efficientaccess}
Given a sequence of element accesses, lazy search trees perform the access sequence in time no more than an additive linear term from the data structure of \wref{lem:gapds}, disregarding the time to access each element for the first time.%
\end{theorem}
\begin{proof}
Once every item has been accessed at least once, the data structures are the same, save for an extra $O(1)$ time per access in the interval data structure. The cost of the first access may be larger in lazy search trees due to necessary restructuring.%
\end{proof}
While we would ideally like to say that lazy search trees perform within a constant factor of splay trees on any operation sequence, this is not necessarily achieved with the data structure as described here. The work of ordering elements on insertion is delayed until queries, implying that on most operation sequences, and certainly in the worst case, lazy search trees perform within a constant factor of splay trees, often outperforming them by more than a constant factor. However, if, say, elements $1, 2, \ldots, n$ are inserted in order in a splay tree, then accessed in order $n, n-1, \ldots, 1$, splay trees perform the operation sequence in $O(n)$ time, whereas lazy search trees as currently described will perform the operation sequence in $O(n \log n)$ time.
\wref{thm:efficientaccess} shows that using a splay tree for the gap data structure (\wref{lem:gapds-amortized}) allows lazy search trees to achieve the splay tree's efficient access theorems. Observing that the initial costs of first access to elements total $O(n \log n)$, we achieve \wref{cor:splayaccess} below.
\begin{corollary}
\label{cor:splayaccess}
Suppose a splay tree is used as the gap data structure. Then lazy search trees achieve the efficient access theorems of the splay tree, including static optimality, static finger, dynamic finger, working set, scanning theorem, and the dynamic optimality conjecture~\cite{Sleator85,Cole2000a,Cole2000b,Elmasry04}.
\end{corollary}
\section{Conclusion and Open Problems}
\label{sec:conclude}
We have discussed a data structure that improves the insertion time of binary search trees, when possible. Our data structure generalizes the theories of efficient priority queues and binary search trees, providing powerful operations from both classes of data structures. As either a binary search tree or a priority queue, lazy search trees are competitive. From a theoretical perspective, our work opens the door to a new theory of insert-efficient order-based data structures.
This theory is not complete. Our runtime can be as much as an additive $O(n \log \log n)$ term from optimality in the model we study, providing $O(\log \log n)$ time insert and decrease-key operations as a priority queue when $O(1)$ has been shown to be possible~\cite{Fredman87}. Further room for improvement is seen in our model itself, where
delaying insertion work further can yield improved runtimes on some operation sequences. We see several enticing research directions around improving these shortcomings and extending our work. We list them as follows:
\begin{enumerate}
\item Extend our model and provide a data structure so that the order of operations performed is significant. A stronger model would ensure that the number of comparisons performed on an inserted element depends only on the queries performed after that element is inserted.
\item Within the model we study, improve the additive $O(n \log \log n)$ term in our analysis to worst-case $O(n)$, or give a lower bound that shows this is not possible while supporting all the operations we consider.
\item Explore and evaluate competitive implementations of lazy search trees. In the priority queue setting, evaluations should be completed against practically efficient priority queues such as binary heaps~\cite{Williams64}, Fibonacci heaps~\cite{Fredman87}, and pairing heaps~\cite{Fredman86}. On binary search tree workloads with infrequent or non-uniformly distributed queries, evaluations should be completed against red-black trees~\cite{Bayer72}, AVL trees~\cite{Adelson62}, and splay trees~\cite{Sleator85}.
\item Support efficient general merging of unordered data. Specifically, it may be possible to support $O(1)$ or $O(\log n)$ time merge of two lazy search trees when both are used as either min or max heaps.
\item Although the complexity of a rank-based query must be $\Omega(n)$ when the query falls in a gap of size $|\Delta_i| = \Omega(n)$, the per-operation complexity of \texttt{RankBasedQuery($r$)} could potentially be improved to $O(x \log c + \log n)$ worst-case time instead of amortized time, with $x$ and $c$ defined as in \wref{thm:main}.
\item Develop an external memory version of lazy search trees for the application of replacing B-trees~\cite{Bayer72}, $B^\epsilon$ trees~\cite{Brodal03}, or log-structured merge trees~\cite{ONeil96} in a database system.
\item Investigate multidimensional geometric data structures based on lazy search trees. Range trees~\cite{Bentley79}, segment trees~\cite{Bentley77}, interval trees~\cite{Edelsbrunner80,McCreight80}, kd-trees~\cite{Bentley75}, and priority search trees~\cite{McCreight85} are all based on binary search trees. By building them on lazy search trees, more efficient runtimes as well as space complexity may be possible.
\end{enumerate}
Regarding point 3, we have implemented a proof-of-concept version of a lazy search tree in C$++$, making no effort to optimize the data structure. Our implementation is roughly 400 lines of code not including the gap data structure, for which we use a splay tree~\cite{Sleator85}. Intervals are split via randomized pivoting, as described in \wref{sec:random}. The optimization to support $O(1)$ time average case insertion into the interval data structure is implemented, and the data structure also satisfies the $O(\min(q, n))$ pointer bound by representing data within intervals in a linked list of C$++$ vectors.
Our implementation has high constant factors for both insertion and query operations. For insertion, this is likely due to several levels of indirection, going from a gap, to an interval, to a linked list node, to a dynamically-sized vector. For query, this is likely due to poor memory management. Instead of utilizing swaps, as in competitive quicksort routines, our implementation currently emplaces into the back of C$++$ vectors, a much slower operation. The current method of merging also suggests some query work may be repeated, which, although we have shown it does not affect the theoretical analysis, may have an effect in practice.
Still, initial experiments are promising. Our implementation outperforms both the splay tree which our implementation uses internally as well as C$++$ \texttt{set}, for both low query load scenarios and clustered queries. To give a couple data points, on our hardware, with $n = 1\,000\,000$, our implementation shaves about 30\% off the runtime of the splay tree when no queries are performed and remains faster for anything less than about $2\,500$ uniformly distributed queries. When $n=10\,000\,000$, our implementation shaves about 60\% off the runtime of the splay tree when no queries are performed and remains faster for anything less than about $20\,000$ uniformly distributed queries. The C$++$ \texttt{set} has runtime about 30\% less than our splay tree on uniformly distributed query scenarios. Our experiments against C$++$ STL \texttt{priority\_queue} show that our current implementation is not competitive.
Finally, regarding points 2 and 4, since this article was written we have succeeded in devising a solution using very different techniques that removes the $O(\log \log n)$ terms and supports constant time priority queue merge. The new solution requires sophisticated data structures and is not based on arrays, so the approach discussed herein is likely to be more practical.
\subsection*{Acknowledgements}
The authors of this paper would like to thank Ian Munro, Kevin Wu, Lingyi Zhang, Yakov Nekrich, and Meng He for useful discussions on the topics of this paper. We would also like to thank the anonymous reviewers for their helpful suggestions.
\bibliographystyle{alphaurl}
\let\oldthebibliography\thebibliography
\renewcommand\thebibliography[1]{%
\oldthebibliography{#1}%
\pdfbookmark[1]{References}{}%
}
\section{Introduction}
The notion of positive definiteness is an important and basic notion in mathematics, which occurs in a variety of algebraic settings. At present there exists a rather satisfactory theory of complex-valued positive definite functions on abelian semigroups. The further development splits mainly into two directions: the theory of positive definite functions on non-abelian semigroups, and that of positive definite functions taking values in non-commutative algebras. This paper is concerned with the latter. Our interest lies especially in the spectral properties of quaternionic positive definite functions.
There are two major difficulties that we encounter:
Firstly, the conceptual framework in the quaternionic case is incomplete. Many vital concepts in the classical theory of positive definite functions have no proper counterparts in the quaternionic setting. For instance, in the complex case, for every locally compact group $G$, its Pontryagin dual, which is composed of continuous group homomorphisms from $G$ to the unit circle in the complex plane, has a natural group structure given by ordinary function multiplication, thus it is also called the dual group of $G$; whereas the analog, composed of continuous group homomorphisms from $G$ to the unit sphere in the real quaternion algebra, possesses no natural group structure due to the non-commutative nature of quaternions. As of now, a proper definition for the dual group, along with other vital concepts, still has not been found in the quaternionic case.
Secondly, as is well known, several branches of the functional analysis play crucial roles in the theory of positive definite functions. In contrast to the complex case, the functional analysis in quaternionic vector spaces still remains to be perfected; even how to define the spectrum of a quaternionic linear operator is a controversial issue. It causes the absence of some important analytical tools.
It seems impossible to overcome all the mentioned difficulties and establish a complete theory for quaternionic positive definite functions in a short period of time. So we start with the quaternionic positive definite functions on $\mathbb R$, one of the simplest locally compact abelian groups. Based on the recent contributions (see, e.g., \cite{Alpay-2016, Ghiloni-2017, Ghiloni-2013, Ludkovsky-2013}) on the normal operators in quaternionic Hilbert spaces, we are able to give a spectral characterization of quaternionic positive definite functions on the real line.
Our strategy is as follows:
First we establish two generalized Stone theorems for right quaternionic linear one-parameter unitary groups via two different types of functional calculus in quaternionic Hilbert spaces. Explicitly speaking, the generalized Stone theorems state that
every strongly continuous right linear one-parameter unitary group $U(t)$ can be expressed as
$$U(t)=e^{tA}|_S,$$
and
$$U(t)=e^{tA}|_L,$$
for all $t\in\mathbb R$ with $A$ being a normal operator. Here $e^{tA}|_S$ is defined as the functional calculus for the function $e^{tx}$ on the S-spectrum of $A$, and $e^{tA}|_L$ is defined as the functional calculus on the left spectrum.
Based on the first generalized Stone theorem we construct a correspondence between continuous quaternionic positive definite functions and spectral systems, namely, unions of a spectral measure and a unitary anti-self-adjoint operator that commute with each other (for more details, one may refer to Theorem \ref{Thm-Spectral}). It leads to the conclusion that the Fourier transform of a continuous quaternionic positive definite function is a slice-condensed measure, which is an unusual type of quaternionic valued measure related to spectral systems (see Definition \ref{Def-PNM-2}). More precisely, if $\varphi$ is a continuous quaternionic positive definite function on $\mathbb R$, then there exists a unique slice-condensed measure $\mu$ such that
$$\varphi(t)=\int_{\mathbb{R}^+}\cos(tx)d\mathbf{Re}\mu(x)+\int_{\mathbb{R}^+}\sin(tx)d(\mu-\mathbf{Re}\mu)(x), $$
and vice versa. After that we apply the second generalized Stone theorem to show the concept of slice-condensed measure can be defined equivalently (as Definition \ref{Def-PNM-1}) in a more concrete way: A quaternionic regular Borel measure $\mu$ on $\mathbb{R}^+$ is slice-condensed if and only if there exists a non-negative finite regular Borel measure $\Gamma$ on $\mathbb{H}_I$, namely the 3-dimensional real vector space of pure imaginary quaternions, s.t.,
$$\mu=\rho_*(\Gamma+\frac{x}{|x|}\Gamma),$$
where $\rho_*$ is the push-forward mapping induced by the function $\rho:x\mapsto |x|, \ x\in\mathbb{H}_I$.
The present paper is organized as follows: Some preliminaries are given in Section \ref{Sec-Pre}. Two generalized Stone's theorems for right quaternionic linear unitary groups are established in Section \ref{Sec-Stone}. We devote Section \ref{Sec-Bochner} to the spectral characteristics of quaternionic positive definite functions on the real line; especially a generalized Bochner's theorem is established in this section. An application to weakly stationary quaternionic random processes is presented in Section \ref{Sec-App}. Section \ref{Sec-Final} is the final remark.
\section{Preliminaries}\label{Sec-Pre}
We would like to introduce some basic information about two types of functional calculus in quaternionic Hilbert spaces.
Let $\mathbb{H}$ denote the real quaternion algebra
$$\{q=q_0i_0+q_1i_1+q_2i_2+q_3i_3,\ q_i\in\mathbb{R}\ (i=0,1,2,3)\},
$$
where $i_0=1$, $i_3=i_1i_2$, and $i_1,i_2$ are the generators of $\mathbb H$, subject to the following identities:
$$i_1^2=i_2^2=-1, \qquad i_1i_2=-i_2i_1. $$
For all $q\in \mathbb{H}$,
its conjugate is defined as $\overline{q}:=q_0i_0-q_1i_1-q_2i_2-q_3i_3$, and its norm given by $|q|:=\sqrt{q_0^2+q_1^2+q_2^2+q_3^2}$.
$\mathbb{S}$ will denote the set of all imaginary units, namely,
$$\mathbb{S}:=\{q=q_0i_0+q_1i_1+q_2i_2+q_3i_3\in\mathbb{H}:q_0=0\text{ and }|q|=1\}. $$
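For readers who wish to experiment numerically, the following self-contained C$++$ sketch (our own illustration, not part of any of the cited works) implements the quaternion product determined by the relations above and checks the identities $i_1i_2=-i_2i_1$ and $q\overline{q}=|q|^2$.
\begin{verbatim}
#include <cassert>
#include <cmath>

// Minimal quaternion arithmetic; q0,...,q3 are the coefficients of
// i_0 = 1, i_1, i_2 and i_3 = i_1 i_2.
struct Quaternion {
    double q0, q1, q2, q3;
    Quaternion operator*(const Quaternion& b) const {
        return { q0*b.q0 - q1*b.q1 - q2*b.q2 - q3*b.q3,
                 q0*b.q1 + q1*b.q0 + q2*b.q3 - q3*b.q2,
                 q0*b.q2 - q1*b.q3 + q2*b.q0 + q3*b.q1,
                 q0*b.q3 + q1*b.q2 - q2*b.q1 + q3*b.q0 };
    }
    Quaternion conj() const { return { q0, -q1, -q2, -q3 }; }
    double norm() const { return std::sqrt(q0*q0+q1*q1+q2*q2+q3*q3); }
};

int main() {
    Quaternion i1{0,1,0,0}, i2{0,0,1,0};
    assert((i1*i2).q3 == 1 && (i2*i1).q3 == -1);     // i_1 i_2 = i_3 = -i_2 i_1
    Quaternion q{1,2,3,4}, p = q * q.conj();
    assert(std::abs(p.q0 - q.norm()*q.norm()) < 1e-12
           && p.q1 == 0 && p.q2 == 0 && p.q3 == 0);  // q conj(q) = |q|^2
    return 0;
}
\end{verbatim}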
Consider the subalgebra $\mathbb{C}_j$ generated by an imaginary unit $j\in\mathbb{S}$. It can be easily seen that $\mathbb{C}_j$ is in fact a complex field since $j^2=-1$. Let $\mathbb{C}_j^+$ denote the set of all $p\in\mathbb{C}_j$ with $\mathbf{Im}p\geq 0$, i.e., $$\mathbb{C}_j^+:=\{q=q_0i_0+q_jj\in \mathbb{C}_j: q_0\in\mathbb R, q_j\geq 0 \}. $$
Let $V$ be a right vector space over $\mathbb{H}$. An inner product on $V$ is a map $\langle\cdot, \cdot\rangle: V\times V \mapsto \mathbb{H}$ with the following properties:
\begin{eqnarray*}
\langle x, y\rangle&=& \overline{\langle y, x\rangle}, \\
\langle x+y,z\rangle&=&\langle x,z\rangle+\langle y,z\rangle, \\
\langle xp, y\rangle&=&\langle x, y\rangle p, \\
\langle x, yp\rangle&=&\overline{p}\langle x, y\rangle, \\
\end{eqnarray*}
and
$$\langle x, x\rangle \geq 0, \ =0 \text{ if and only if }x=0,
$$
for all $x, y, z\in V$ and $p\in \mathbb{H}$.
If $\langle\cdot, \cdot\rangle$ is an inner product, then $\|x\|=\sqrt{\langle x, x\rangle}$ is a norm on $V$. A right vector space $V$ over $\mathbb{H}$ endowed with an inner product which makes $V$ a complete normed space is called a quaternionic Hilbert space (see, e.g., \cite{Alpay-2016, Ghiloni-2013}).
Let $\mathcal{H}$ be a quaternionic Hilbert space. The set of all right linear bounded operators on $\mathcal{H}$ will be denoted by $\mathcal{B}(\mathcal{H})$, and the set of all right linear operators on the subspaces of $\mathcal{H}$ by $\mathcal{L}(\mathcal{H})$. For any operator $T$, the definition domain, the range and the kernel will be denoted by $\mathrm{D}(T)$, $\mathrm{R}(T)$ and $\mathrm{Ker}(T)$ respectively. The concepts of unitary, normal, self-adjoint and anti-self-adjoint operators are defined in the same way as the case that $\mathcal{H}$ is a real or complex Hilbert space.
\subsection{Functional calculus based on the S-spectrum}
For a densely defined operator $T\in \mathcal{L}(\mathcal{H})$, its S-spectrum (see Definition 2.12. of \cite{Alpay-2016}) is defined as
$$\sigma_S(T):=\mathbb{H}\setminus \rho_S(T), $$
where $\rho_S(T)$ is the S-resolvent set of $T$ given by
\begin{eqnarray*}
\rho_S(T)&:=& \big\{q\in\mathbb{H}:\mathrm{Ker}(\mathcal{R}_q(T))=0,\ \mathrm{R}(\mathcal{R}_q(T)) \text{ is
dense in } \mathcal{H} \\
& &\text{ and } \mathcal{R}_q(T)^{-1}\in \mathcal{B}(\mathcal{H})\big\}
\end{eqnarray*}
with $\mathcal{R}_q(T):=T^2-2\mathbf{Re}(q)T+|q|^2I$.
A resolution of the identity in a quaternionic Hilbert space is defined as follows.
\begin{definition}\label{Def-right-linear-ROI}
Let $\mathcal{M}$ be the $\sigma$-algebra of all Borel sets on a locally compact Hausdorff space $\Omega$, and $\mathcal H$ be a quaternionic Hilbert space. A resolution of the identity on $\Omega$ is a mapping
$$E: \mathcal{M}\mapsto\mathcal{B}(\mathcal{H})$$
with the following properties:
1. $E(\emptyset)=0$, $E(\Omega)=I$.
2. $E(\omega)$ is a right linear self-adjoint projection for all $\omega\in\mathcal{M}$.
3. $E(\omega'\cap \omega'')=E(\omega')E(\omega'')$ holds for all $\omega',\ \omega''\in\mathcal{M}$.
4. If $\omega'\cap \omega''=\emptyset$, then $E(\omega'\cup \omega'')=E(\omega')+E(\omega'')$.
5. For every $x, y\in\mathcal{H}$, the set function $E_{x,y}$ defined by
$$E_{x,y}(\omega)=\langle E(\omega)(x),y\rangle$$
is a quaternion-valued regular Borel measure on $\Omega$.
\end{definition}
\begin{theorem}\label{Thm-Spetral-Normal}\cite{Alpay-2016}
Let $T$ be a right linear normal operator on a quaternionic Hilbert space $\mathcal{H}$ and $j$ be an imaginary unit in $\mathbb{S}$. There exists a uniquely determined resolution of the identity $E_j$ on $\sigma_S(T)\cap \mathbb{C}_j^+$, such that
$$\langle Tx,y\rangle
=\int_{\sigma_S(T)\cap \mathbb{C}_j^+}
\mathbf{Re}(p)d \langle E_jx,y\rangle (p)+
\int_{\sigma_S(T)\cap \mathbb{C}_j^+}
\mathbf{Im}(p)d\langle JE_jx,y\rangle(p),
$$
for all $x\in \mathrm{D}(T)$ and all $y\in\mathcal{H}$,
where $J\in\mathcal{B}(\mathcal{H})$ is a unitary anti-self-adjoint operator associated with $T$.
\end{theorem}
$E_j$ is called the spectral measure of $T$. Note that $E_j$ and $J$ commute with each other.
\begin{definition}\label{Def-FCS}
Let $\mathcal{H}$ be a quaternionic Hilbert space. If a resolution of the identity $E$ on a locally compact Hausdorff space $\Omega$ commutes with a unitary anti-self-adjoint operator $J\in\mathcal{B}(\mathcal{H})$, then $(E,J)$ is called a spectral system on $\Omega$.
\end{definition}
From this point of view, $(E_j,J)$ in Theorem \ref{Thm-Spetral-Normal} (Theorem 6.2 of \cite{Alpay-2016}) is a spectral system on $\sigma_S(T)\cap \mathbb{C}_j^+$.
\begin{definition}\cite{Alpay-2016}
A subset $\Omega$ of $\mathbb{H}$ is said to be axially symmetric if it satisfies the following property:
For an arbitrary element $p_0+p_1j$ $(p_0, p_1\in \mathbb{R},j\in\mathbb{S})$ in $\Omega$,
$p_0+p_1j'$ also belongs to $\Omega$ for all $j'\in\mathbb{S}$.
\end{definition}
\begin{definition}\cite{Alpay-2016}
Let $\Omega$ be an axially symmetric subset of $\mathbb{H}$. Set
$$D:=\{(u,v)\in\mathbb{R}^2: u+vj\in\Omega \text{ for some }j\in\mathbb{S}\}. $$ A
function $f:\Omega\mapsto\mathbb{H}$ is called an intrinsic slice function if it can be composed as
$$f(u+vj)=f_0(u,v)+f_1(u,v)j, \quad \forall \ (u,v)\in D, \ \forall\ j\in\mathbb{S}
$$
where $f_0$ and $f_1$ are both real-valued functions defined on $D$.
\end{definition}
The functional calculus based on the S-spectrum, also called S-functional calculus, is defined as follows.
\begin{definition}\cite{Alpay-2016}\label{Def-FC-S-Spec}
Let $T$ be a right linear normal operator on a quaternionic Hilbert space $\mathcal{H}$, and $(E_j,J)$ be the spectral system that arises in Theorem \ref{Thm-Spetral-Normal}. For any intrinsic slice function $f: \sigma_S(T)\mapsto\mathbb{H}$ with the real component $\mathbf{Re}(f)$ and the imaginary component $\mathbf{Im}(f)$ both bounded and Borel measurable, the S-functional calculus for $f$ is defined by
\begin{equation*}\begin{split}
\langle f(T)x,y\rangle = & \int_{\sigma_S(T)\cap \mathbb{C}_j^+}
\mathbf{Re}(f(p))d \langle E_jx,y\rangle(p) + \\
& \int_{\sigma_S(T)\cap \mathbb{C}_j^+}
\mathbf{Im}(f(p))d\langle JE_jx,y\rangle(p)
\end{split}
\end{equation*}
for all $x, y\in\mathcal{H}$.
\end{definition}
\begin{remark}
The S-functional calculus retains almost all the important properties of the classical functional calculus in complex Hilbert spaces. There are two types of functional calculus in quaternionic Hilbert space (see \cite{Ghiloni-2018, Viswanath-1971}) quite similar to the S-functional calculus \cite{Alpay-2016}. The functional calculus via intertwining quaternionic PVMs \cite{Ghiloni-2018} and the S-functional calculus \cite{Alpay-2016} are both based on the continuous functional calculus introduced by R. Ghiloni, V. Moretti, and A. Perotti \cite{Ghiloni-2013}. The functional calculus via spectral systems \cite{Viswanath-1971} was established decades earlier than the other two, but no proper notion of quaternionic spectrum appears in \cite{Viswanath-1971}. So from our point of view, it may remain to be perfected further.
\end{remark}
\subsection{Functional calculus based on the left spectrum}
In order to give a spectral characterization for quaternionic positive definite functions, we need apply two types of functional calculus, namely, the functional calculus based on the S-spectrum \cite{Alpay-2016} and the functional calculus based on the left spectrum \cite{Ludkovsky-2013,Ludkovsky-2012}, to certain quaternionic Hilbert spaces.
Let $\mathcal H$ be a quaternionic Hilbert space, i.e., a right $\mathbb{H}$-vector space with an inner product $\langle\cdot,\cdot\rangle$. Then the functional calculus based on the S-spectrum can be established on $\mathcal H$ as shown in \cite{Alpay-2016}. However, we cannot apply the functional calculus based on the left spectrum directly to $\mathcal H$, since the basic setting in \cite{Alpay-2016} is different from that in \cite{Ludkovsky-2013,Ludkovsky-2012}. So we must make some adjustments as follows:
1. Introduce a new inner product $(\cdot|\cdot)$, given by
$$(x|y)=\overline{\langle x,y\rangle}, \quad \forall x, y\in\mathcal{H}.$$
2. Construct a left $\mathbb{H}$-linear structure on $\mathcal{H}$: Let $\{x_a\}_{a\in\Sigma}$ be an orthonormal basis, the left scalar multiplication is defined as
$$qx=\sum_{a\in\Sigma}x_aq\langle x,x_a\rangle, \quad \forall q\in \mathbb{H}, \ x\in \mathcal{H}$$
Then the $\mathbb{H}$-vector space $\mathcal{H}$ endowed with the inner product $(\cdot|\cdot)$ can be treated as a quaternionic Hilbert space defined in \cite{Ludkovsky-2013,Ludkovsky-2012}. We would like to emphasize that this very type of quaternionic Hilbert space in \cite{Ludkovsky-2013,Ludkovsky-2012} will be called a bilateral quaternionic Hilbert space in our article to distinguish it from the other type of quaternionic Hilbert space introduced in the preceding subsection.
For convenience, let $\tilde{\mathcal{H}}$ denote the bilateral quaternionic Hilbert space transformed from a quaternionic Hilbert space $\mathcal{H}$. An operator $T$ on $\tilde{\mathcal{H}}$ is said to be quasi-linear if it is additive and $\mathbb{R}$-homogeneous, i.e.,
$$T(x+y)=T(x)+T(y), \quad \forall \ x, y\in \mathrm{D}(T),$$
$$T(qx)=T(xq)=qT(x),\quad \forall \ x\in \mathrm{D}(T), \ q\in\mathbb{R}. $$
The Banach space consisting of all bounded quasi-linear operators is denoted by $\mathcal{B}_q(\tilde{\mathcal{H}})$. Note that every bounded right linear operator on $\mathcal{H}$
is a bounded quasi-linear operator on $\tilde{\mathcal{H}}$, which is to say
$$\mathcal{B}(\mathcal{H})\subset\mathcal{B}_q(\tilde{\mathcal{H}}). $$
\begin{definition}\cite{Ludkovsky-2013}
The left spectrum, denoted by $\sigma_L(T)$, of a closed densely defined quasi-linear operator $T$ is the set of all $q\in\mathbb{H}$ such that $T-qI$ is not bijective from the definition domain $\mathrm{D}(T)$ onto the whole space $\tilde{\mathcal{H}}$.
\end{definition}
As shown in \cite{Ludkovsky-2013}, for any (not necessarily bounded) normal operator $T$, there exists a unique smallest quasi-commutative von Neumann algebra $\mathbf{A}\subset \mathcal{B}_q(\tilde{\mathcal{H}})$ that $T$ is affiliated with; moreover, any quasi-commutative von Neumann algebra is $*$-isomorphic to $C(\Lambda,\mathbb{H})$ for some compact Hausdorff space $\Lambda$ via a generalized Gelfand transform. Then the $*$-isomorphism from $C(\Lambda,\mathbb{H})$ to $\mathbf{A}$ induces an involution-preserving bounded $\mathbb{H}$-algebra homomorphism $$\phi: \mathcal{B}(\sigma_L(T),\mathbb{H})\mapsto\mathcal{B}_q(\tilde{\mathcal{H}}), $$ where $\mathcal{B}(\sigma_L(T),\mathbb{H})$ denotes the algebra consisting of all $\mathbb{H}$-valued bounded Borel measurable functions defined on $\sigma_L(T)$. Then the functional calculus on the left spectrum is naturally defined as follows.
\begin{definition}\label{Def-FC-L-Spec}\cite{Ludkovsky-2013}
The functional calculus on the left spectrum is given by
$$f(T)=\phi(f), \quad \forall \ f\in\mathcal{B}(\sigma_L(T),\mathbb{H}).$$
Moreover, there exist regular $\mathbb{R}$-valued Borel measures $\mu_{i_v,i_l}[x,y]$ $(v,l=0,1,2,3;\ x,y\in\tilde{\mathcal{H}})$ such that
\begin{equation*}
(f(T)x|y)=\sum_{v,l=0}^{3}\int_{\sigma_L(T)}f_vi_l\ d\mu_{i_v,i_l}[x,y],
\end{equation*}
where $f=\sum_{v=0}^{3}f_vi_v$ and $f_v$ is $\mathbb{R}$-valued.
\end{definition}
\begin{remark}
Another generalized Gelfand transform in quaternionic Hilbert spaces has been investigated by S. H. Kulkarni \cite{Kulkarni-1994,Kulkarni-1992} quite earlier. By contrast, the theory established by S. V. Ludkovsky has a higher degree of completion $($one may refer to \cite{Ludkovsky-2013,Ludkovsky-2012} for more details$)$.
\end{remark}
To avoid misunderstanding, we would like to mention the following facts:
1. Whether $V$ is a quaternionic Hilbert space or a bilateral quaternionic Hilbert space, the real part of its inner product $(\cdot|\cdot)$ is a real inner product. For a densely defined operator $T$ on $V$, its adjoint $T^*$ in $(V,(\cdot|\cdot))$ is identical with its adjoint in the real Hilbert space $(V,\mathbf{Re}(\cdot|\cdot))$.
Furthermore, $$(Tx|y)=(x|T^*y),\qquad x\in\mathrm{D}(T),y\in\mathrm{D}(T^*)$$ holds when $T$ is right linear. But this equality may fail when $T$ is quasi-linear.
For example, let us consider the operators $L_q$ and $R_q$, i.e., the left and right scalar multiplication by $q\in\mathbb{H}$, on a bilateral quaternionic Hilbert space $V$. It can be easily verified that $L_q^*=L_{\overline{q}}$, $R_q^*=R_{\overline{q}}$, and $(L_qx|y)=(x|L_{\overline{q}}y)$ holds for all $x,y\in V$. In contrast, $(R_qx|y)=(x|R_{\overline{q}}y)$ is generally not valid. The key difference between $L_q$ and $R_q$ is that the former is right linear, while the latter is not.
2. The left scalar multiplication in a (bilateral) quaternionic Hilbert space is often uncertain. It may cause some problems since the left spectrum depends on the left scalar multiplication.
For instance, assume that we are discussing two closed densely defined operator $A$ and $B$; in one situation we may discover that there exists $q\in\mathbb{H}$ so that
$A=qB$, then it follows naturally that $\sigma_L(A)$ is identical with $q\sigma_L(B)$; however, in another situation, if the left scalar multiplication changes, the equality
$$\sigma_L(A)=q\sigma_L(B)$$
may no longer hold true.
The uncertainty of left scalar multiplication has been mentioned in Section 1 of \cite{Ghiloni-2017}, and also can be observed in Lemma 3.5 and Theorem 3.6 in \cite{Alpay-2016}.
\section{Stone's theorems in quaternionic Hilbert spaces}\label{Sec-Stone}
In this section, we are going to apply both types of functional calculus, based on the S-spectrum and on the left spectrum, to establish two generalized Stone's theorems for one-parameter unitary groups in quaternionic Hilbert spaces.
For precision, if $f(T)$ is given by the functional calculus based on the S-spectrum, we denote it by $f(T)|_S$; if it is given by the functional calculus based on the left spectrum, then denote it by $f(T)|_L$.
Only when there is no ambiguity, we will just write it as $f(T)$.
\begin{definition}\label{Def-OPUG}
A one-parameter unitary group on a quaternionic Hilbert space $\mathcal{H}$ is a family $U(t)$, $t\in\mathbb{R}$, of right linear unitary operators on $\mathcal{H}$ with the following properties:
$$U(0)=I,\quad U(s+t)=U(s)U(t)\ \text{for all } s,t\in \mathbb{R}.$$
A one-parameter unitary group is said to be strongly continuous if
\begin{equation}\label{Eq-Def-OPUG-Continuity}
\lim_{s\to t}\|U(s)(x)-U(t)(x)\|=0
\end{equation}
for all $t\in \mathbb{R}$ and all $x\in\mathcal{H}$.
\end{definition}
\begin{definition}\label{Def-OPUG-GENERATOR}
Let $U(t)$ be a strongly continuous one-parameter unitary group on $\mathcal{H}$. The infinitesimal generator of $U(t)$ is the operator $A$ defined by
\begin{equation}\label{Eq-Def-OPUG-genarator}
A(x):=\lim_{t\to 0}\frac{U(t)(x)-x}{t},
\end{equation}
with its domain $\mathrm{D}(A)$ consisting of all $x\in \mathcal{H}$ for which the limit exists in the norm topology on $\mathcal{H}$.
\end{definition}
This type of infinitesimal generator has been investigated in the study of semigroups over real algebras (see, e.g., \cite{Colombo-2011,Ghiloni-2018}). Such generators are anti-self-adjoint, in contrast to the classical infinitesimal generators.
\begin{theorem}\label{Thm-Stone-I}
Suppose $U(t)$ is a strongly continuous one-parameter unitary group on $\mathcal{H}$. Then the infinitesimal generator $A$ is a right linear anti-self-adjoint operator, and
$$U(t)=e^{tA}|_S \ \text{for all} \ t\in \mathbb{R}.$$
\end{theorem}
Here $e^{tA}|_S$ is defined as the functional calculus for the intrinsic slice function $e^{tx}$ on the S-spectrum of $A$. To be precise, Definition \ref{Def-FC-S-Spec} says
$$\langle e^{tA}|_Sx,y\rangle
=\int_{\sigma_S(A)\cap \mathbb{C}_j^+}
\mathbf{Re}(e^{tp})d \langle E_jx,y\rangle (p)+
\int_{\sigma_S(A)\cap \mathbb{C}_j^+}
\mathbf{Im}(e^{tp})d\langle JE_jx,y\rangle (p),
$$
where $(E_j,J)$ is the spectral system associated with $A$. Moreover, we have
$$\mathbf{Re}(e^{tp})=\cos(t|p|)
\ \text{ and }\
\mathbf{Im}(e^{tp})=\sin(t|p|),
$$
since $\sigma_S(A)$ is a subset of $\mathbb H_I$, namely, the 3-dimensional real vector space of pure imaginary quaternions.
\begin{theorem}\label{Thm-Stone-II}
Suppose $U(t)$ is a strongly continuous one-parameter unitary group on $\mathcal{H}$. Then
$$U(t)=e^{tA}|_L \ \text{for all} \ t\in \mathbb{R},$$
where $A$ is the infinitesimal generator of $U(t)$.
\end{theorem}
Note that $e^{tA}|_L$ is given as the functional calculus for the function $e^{tx}$ on the left spectrum of $A$ in the bilateral Hilbert space $\tilde{\mathcal{H}}$ transformed from $\mathcal{H}$.
Moreover, according to Definition \ref{Def-FC-L-Spec}, we have
\begin{equation}\label{Eq-Stone-II-integral}
\begin{split}
(e^{tA}|_Lx|y)= & \sum_{l=0}^{3}\int_{\sigma_L(A)}\cos(t|p|)i_l\ d\mu_{i_0,i_l}[x,y](p)+\\
& \sum_{v=1}^{3}\sum_{l=0}^{3}\int_{\sigma_L(A)}\frac{p_v}{|p|}\sin(t|p|)i_l\ d\mu_{i_v,i_l}[x,y](p),
\end{split}
\end{equation}
where $p\in\sigma_L(A)$ is composed as $p=p_1i_1+p_2i_2+p_3i_3$ with $p_v\in\mathbb{R}$, and $\mu_{i_v,i_l}[x,y]$ are the regular Borel measures on $\sigma_L(A)$ uniquely determined by $A$ and $x,y\in\tilde{\mathcal{H}}$.
A generalization of Stone's theorem to the case of a one-parameter unitary group in a quaternionic Hilbert space has been established by S. V. Ludkovsky (see Theorem 2.33 in \cite{Ludkovsky-2007}). However, this version of Stone's theorem does not fit in with our aims, because the infinitesimal generator defined in \cite{Ludkovsky-2007} is self-adjoint.
We would like to emphasize that the major difference between the earlier version of Stone's theorem and our versions is that the spectrum of a self-adjoint generator is included in the real line, while that of an anti-self-adjoint generator is included in the 3-dimensional real vector space consisting of all pure imaginary quaternions.
\begin{remark}
The generalized Stone's theorems $($Theorems \ref{Thm-Stone-I} and \ref{Thm-Stone-II}$)$ can be proved in almost the same way as the case when $\mathcal{H}$ is a complex Hilbert space $($see, e.g., Chap. 10 in \cite{Hall-2013}$)$.
In fact, the following relation between a semigroup $U(t)$ on a quaternionic Hilbert space and its infinitesimal generator $A$:
$$U(t)=e^{tA}$$
has been verified in several settings
$($see, e.g., Theorem 3.2 in \cite{Colombo-2011} and Theorem 6.3 in \cite{Ghiloni-2018}$)$.
Compared with the previous contributions made by others, we have to admit Theorems \ref{Thm-Stone-I} and \ref{Thm-Stone-II} do not count as great progress. What really matters is that they are vital for us to achieve our purpose. Based on these two generalized Stone's Theorems, we are able to reveal the spectral characteristics of quaternionic positive definite functions.
\end{remark}
\begin{lemma}\label{Lemma-exp-anti-self-adjoint}
Suppose $A$ is a right linear anti-self-adjoint operator on $\mathcal{H}$, and $U(t)$ is a family of operators defined as
$$U(t)=e^{tA}|_S,\ t\in\mathbb R,$$
then the following results hold true:
1. $U(t)$ is a strongly continuous one-parameter unitary group.
2. For any $x\in \mathrm{D}(A)$,
$$
A(x)=\lim_{t\to 0}\frac{U(t)(x)-x}{t},
$$
where the limit is in the norm topology on $\mathcal{H}$.
3. For any $x\in \mathcal{H}$, if
$$\lim_{t\to 0}\frac{U(t)(x)-x}{t}
$$
exists in the norm topology on $\mathcal{H}$, then $x\in \mathrm{D}(A)$.
\end{lemma}
\begin{proof}
Since $\sigma_S(A)$ contains only pure imaginary quaternions, the function $f_t(p):=e^{tp}$ is a bounded continuous intrinsic slice function on $\sigma_S(A)$. More precisely,
$$f_t(jv)=\cos(tv)+j\sin(tv),$$
holds for any $j\in\mathbb{S}$ and any $v\in\mathbb{R}$ with $jv\in\sigma_S(A)$.
Hence, for different $j,j'\in\mathbb{S}$, $f_t(jv)$ and $f_t(j'v)$ share the same real and imaginary components that are both bounded and continuous. Therefore the functional calculus for $f_t$ on the S-spectrum of $A$ is well defined as shown in Definition \ref{Def-FC-S-Spec}:
\begin{equation*}
\begin{split}
\langle f_t(A)x,y\rangle
= & \int_{\sigma_S(A)\cap \mathbb{C}_j^+}
\cos(t|p|)d \langle E_j(p)x,y\rangle + \\
& \int_{\sigma_S(A)\cap \mathbb{C}_j^+}
\sin(t|p|)d\langle JE_j(p)x,y\rangle,
\end{split}
\end{equation*}
where $(E_j,J)$ is the spectral system associated with $A$.
The functional calculus on the S-spectrum is also a $*$-homomorphism of real (not quaternionic) Banach $C^*$-algebras like the classical functional calculus \cite{Alpay-2016}. It indicates that
$$U(t)U(t)^*=f_t(A)\overline{f_t}(A)=(f_t\overline{f_t})(A)=1(A)=I,$$
$$U(t)^*U(t)=\overline{f_t}(A)f_t(A)=(\overline{f_t}f_t)(A)=1(A)=I,$$
$$U(s)U(t)=f_s(A)f_t(A)=(f_sf_t)(A)=f_{s+t}(A)=U(s+t); $$
which is to say $U(t)$ is a one-parameter unitary group. Moreover, for any $x\in \mathcal{H}$ and $s, t\in \mathbb{R}$, we have
\begin{equation*}
\begin{split}
\|U(s)(x)-U(t)(x)\|^2
=&\langle (f_s(A)-f_t(A))^*(f_s(A)-f_t(A))x, x\rangle\\
=&\langle |f_s-f_t|^2(A)x, x\rangle.
\end{split}
\end{equation*}
Definition \ref{Def-FC-S-Spec} yields
$$
\langle |f_s-f_t|^2(A)x, x\rangle
=\int_{\sigma_S(A)\cap \mathbb{C}_j^+}
\big(\cos((s-t)|p|)-1\big)^2+\big(\sin((s-t)|p|)\big)^2 d\mu_{j,x}(p)
$$
with
$\mu_{j,x}:=\langle E_j(\cdot)x,x\rangle$ being a finite Borel measure.
The integral on the right side tends to zero as $s$ approaches
$t$, by dominated convergence. Thus we reach the first conclusion:
1. $U(t)$ is a strongly continuous one-parameter unitary group.
To see the second conclusion, first notice that Corollary 6.5 in \cite{Alpay-2016} indicates:
\begin{eqnarray*}
& &\Big\|\frac{U(t)(x)-x}{t}-A(x)\Big\|^2\\
&=& \int_{\sigma_S(A)\cap \mathbb{C}_j^+}
\Big|\frac{e^{tp}-1}{t}-p\Big|^2 d\mu_{j,x}(p) \\
&=& \int_{\sigma_S(A)\cap \mathbb{C}_j^+}
\Big|\frac{\cos(t|p|)-1}{t}\Big|^2 + \Big|\frac{\sin(t|p|)}{t}-|p|\Big|^2 d\mu_{j,x}(p)
\end{eqnarray*}
is valid
for all $x\in \mathrm{D}(A)$ and all $t\in\mathbb R$.
Then we apply the dominated convergence theorem again with $5|p|^2$ as the dominating function to achieve the desired result:
2. For any $x\in \mathrm{D}(A)$,
$$
A(x)=\lim_{t\to 0}\frac{U(t)(x)-x}{t},
$$
where the limit is in the norm topology of $\mathcal{H}$.
For the third conclusion, let $A'$ be the infinitesimal generator of $U(t)$. For any $x,y\in\mathrm{D}(A')$, one can easily see
\begin{eqnarray*}
\langle A'(x),y\rangle
&=&\lim_{t\to 0}\langle \frac{U(t)(x)-x}{t},y\rangle \\
&=& \lim_{t\to 0}\langle x,\frac{U(-t)(y)-y}{t}\rangle \\
&=& \langle x,-A'(y)\rangle
\end{eqnarray*}
Hence, $A'$ is anti-symmetric. Combining with the second conclusion we have:
(1) $A'$ is an extension of the anti-self-adjoint operator $A$,
(2) $\mathrm{D}(A')\subset\mathrm{D}((A')^*)$,\\
and consequently, since (1) gives $(A')^*\subset A^*=-A$ and hence $\mathrm{D}((A')^*)\subset\mathrm{D}(A)$, combining with (2) yields $\mathrm{D}(A')\subset\mathrm{D}(A)\subset\mathrm{D}(A')$; thus $\mathrm{D}(A)$ is identical with $\mathrm{D}(A')$, which indicates the third conclusion:
3. For any $x\in \mathcal{H}$, if
$$\lim_{t\to 0}\frac{U(t)(x)-x}{t}
$$
exists in the norm topology of $\mathcal{H}$, then $x\in \mathrm{D}(A)$.
\end{proof}
\begin{lemma}\label{Lemma-anti-self-adjoint}
For any strongly continuous one-parameter unitary group $U(t)$ on $\mathcal{H}$, its infinitesimal generator $A$ is anti-self-adjoint.
\end{lemma}
\begin{proof}
Set $\displaystyle{\langle x,y\rangle_j:=\frac{1}{2}\big(\langle x,y\rangle-j\langle x,y\rangle j\big)}$ with $j$ being an arbitrary imaginary unit. It can be easily seen that
$\langle \cdot ,\cdot\rangle_j$ is a $\mathbb{C}_j$-linear inner product,
and the norm induced by $\langle \cdot ,\cdot\rangle_j$ is identical with the one induced by $\langle
\cdot ,\cdot\rangle$, which implies that $\mathcal{H}$, as a $\mathbb{C}_j$-linear vector space, endowed with the inner product $\langle \cdot ,\cdot\rangle_j$ is a complex Hilbert space.
Furthermore,
the adjoint of any densely defined right quaternion-linear operator with respect to the quaternionic inner product $\langle \cdot ,\cdot\rangle$ is identical with
the adjoint with respect to the complex inner product $\langle \cdot ,\cdot\rangle_j$.
From this point of view, $U(t)$ can be treated as a strongly continuous one-parameter unitary group in the complex Hilbert space $(\mathcal{H},\langle \cdot ,\cdot\rangle_j)$, and $AR_j(=R_jA)$ is exactly the classical infinitesimal generator of $U(t)$ (see, e.g., Definition 10.13 in \cite{Hall-2013}), where $R_j$ is the right scalar multiplication by $j$, i.e.,
$$R_j(x)=xj, \quad \forall \ x\in \mathcal{H}.
$$
We thus have $AR_j$ is self-adjoint in the complex Hilbert space $(\mathcal{H},\langle \cdot ,\cdot\rangle_j)$ according to the original Stone's theorem (see, e.g., Theorem 10.15 in \cite{Hall-2013}), which indicates $A$ is anti-self-adjoint in $(\mathcal{H},\langle \cdot ,\cdot\rangle_j)$. Since the adjoint of $A$ with respect to $\langle \cdot ,\cdot\rangle$ is identical with
the adjoint with respect to $\langle \cdot ,\cdot\rangle_j$, we conclude that
$A$ is anti-self-adjoint in the quaternionic Hilbert space $(\mathcal{H},\langle \cdot ,\cdot\rangle)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm-Stone-I}]
Suppose $U(t)$ is a strongly continuous one-parameter unitary group on $\mathcal{H}$, and $A$ is its infinitesimal generator. By Lemma \ref{Lemma-anti-self-adjoint}, $A$ is anti-self-adjoint. Then
by Lemma \ref{Lemma-exp-anti-self-adjoint}, $e^{tA}|_S$ is a strongly continuous one-parameter unitary group with the infinitesimal generator identical with $A$.
Take $x\in \mathrm{D}(A)$, and consider the function $g_x(t)=U(t)(x)-e^{tA}|_S(x)$. From the definition of the infinitesimal generator, it follows immediately that
$$\frac{d}{dt}U(t)(x)=AU(t)(x)=U(t)A(x),$$
and
$$\frac{d}{dt}e^{tA}|_S(x)=Ae^{tA}|_S(x)=e^{tA}|_SA(x),$$
hold in the norm topology of $\mathcal{H}$, which means
$U(t)(x)$ and $e^{tA}|_S(x)$ both belong to $\mathrm{D}(A)$, and
$$\frac{d}{dt} g_x(t)=A\big(U(t)(x)-e^{tA}|_S(x)\big)=Ag_x(t).
$$
Hence,
$$
\begin{array}{rcl}
\displaystyle{\frac{d}{dt} \langle g_x(t),g_x(t)\rangle}
& = &
\displaystyle{\langle \frac{d}{dt}g_x(t),g_x(t)\rangle + \langle g_x(t),\frac{d}{dt}g_x(t)\rangle}\\
& = & \langle Ag_x(t),g_x(t)\rangle + \langle g_x(t),Ag_x(t)\rangle\\
& = & \langle g_x(t),-Ag_x(t)\rangle + \langle g_x(t),Ag_x(t)\rangle \\
& = & 0.
\end{array}
$$
Since $g_x(0)=0$, we deduce that $g_x(t)=0$ holds for all $x\in\mathrm{D}(A)$ and all $t\in\mathbb R$, or equivalently
$$U(t)(x)=e^{tA}|_S(x), \quad \forall \ x\in\mathrm{D}(A),\ \forall \ t\in\mathbb R.
$$
In conclusion, $U(t)$ and $e^{tA}|_S$ agree on a dense subspace of $\mathcal{H}$, and thus on the whole space.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm-Stone-II}]
An analog of Lemma \ref{Lemma-exp-anti-self-adjoint} can be obtained by replacing $e^{tA}|_S$ with $e^{tA}|_L$. Then we can adopt the same procedure as in the proof of Theorem \ref{Thm-Stone-I} to carry out this one. To avoid repetition, the details of this proof are omitted.
\end{proof}
\section{Bochner's theorem for quaternionic positive definite functions}\label{Sec-Bochner}
In this section we are going to show that every continuous quaternionic positive definite function is related to a spectral system on $\mathbb{R}^+$ and also to a non-negative finite Borel measure on $\mathbb R^3$ in certain ways. Moreover, the spectral system and the Borel measure induce two identical quaternion-valued measures of an unusual type that we name slice-condensed measures. Finally, a one-to-one correspondence between continuous quaternionic positive definite functions and slice-condensed measures will be established.
\begin{definition}
A quaternion-valued function $\varphi$ on $\mathbb{R}$ is said to be positive definite if for any $t_1,t_2,\cdots,t_k\in\mathbb{R}$ and any $p_1,p_2,\cdots,p_k\in\mathbb{H}$, the following inequality \begin{equation}\label{Eq-Def-PDF}
\sum_{1\leq i,j\leq k} \overline{p_i}\varphi(t_i-t_j)p_j\geq0
\end{equation}
is satisfied.
\end{definition}
Before giving a formal definition for slice-condensed measures, we introduce some notations. $\mathbb{H}_I$ denotes the set of all pure imaginary quaternions, and $\mathbb{R}^+$ the set of non-negative real numbers.
$\mathscr{B}(X)$ stands for the Borel $\sigma$-algebra on a topological space $X$. The function $\rho: \mathbb{H}_I\mapsto\mathbb{R}^+$ is defined as
$$\rho(x):=|x|. $$
Evidently, $\rho$ is Borel measurable and thus induces a push-forward mapping:
$$\rho_*(\Gamma)(\Omega):=\Gamma(\rho^{-1}(\Omega)),\ \forall \text{ Borel measure }\Gamma \text{ on }\mathbb{H}_I, \ \forall\ \Omega \in\mathscr{B}(\mathbb{R}^+).$$
\begin{definition}\label{Def-PNM-1}
A quaternion-valued regular Borel measure $\mu$ on $\mathbb{R}^+$ is said to be slice-condensed if there exists a non-negative finite regular Borel measure $\Gamma$ on $\mathbb{H}_I$ such that the following equality holds:
$$\mu=\rho_*(\Gamma+\frac{x}{|x|}\Gamma).$$
\end{definition}
Here we stipulate that $\frac{x}{|x|}=0$ when $x=0$, and consider this function as a Radon-Nikodym derivative, which means $\frac{x}{|x|}\Gamma$ is a regular Borel measure defined by
$$\frac{x}{|x|}\Gamma(\Omega):=\int_{x\in\Omega}\frac{x}{|x|}d\Gamma(x)$$
for any Borel set $\Omega\in\mathscr{B}(\mathbb{H}_I)$.
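For concreteness, here is a minimal example of Definition \ref{Def-PNM-1} (an illustration of our own, not drawn from the cited literature). Take $\Gamma=\delta_{i_1}+\delta_{2i_2}$, the sum of two Dirac masses on $\mathbb{H}_I$. Then $\frac{x}{|x|}\Gamma=i_1\delta_{i_1}+i_2\delta_{2i_2}$, and hence
$$\mu=\rho_*\Big(\Gamma+\frac{x}{|x|}\Gamma\Big)=(1+i_1)\delta_{1}+(1+i_2)\delta_{2},$$
a quaternion-valued regular Borel measure on $\mathbb{R}^+$ carrying mass $1+i_1$ at the point $1$ and mass $1+i_2$ at the point $2$.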
This concept can also be defined equivalently as follows.
\begin{definition}\label{Def-PNM-2}
A quaternion-valued regular Borel measure $\mu$ on $\mathbb{R}^+$ is said to be slice-condensed if there exists a spectral system $(E:\mathscr{B}(\mathbb{R}^+)\mapsto\mathcal{B}(\mathcal{H}),J)$ and a point $\alpha$ in a quaternionic Hilbert space $\mathcal{H}$ such that the following equality holds:
$$\mu=\langle E\alpha,\alpha\rangle + \langle J_0E\alpha,\alpha\rangle,$$
where $J_0=J-JE(\{0\})$ and $\langle\cdot,\cdot\rangle$ stands for the inner product on $\mathcal{H}$.
\end{definition}
More precisely, this equality $\mu=\langle E\alpha,\alpha\rangle + \langle J_0E\alpha,\alpha\rangle$ means
$$\mu(\omega)=\langle E(\omega)\alpha,\alpha\rangle + \langle J_0E(\omega)\alpha,\alpha\rangle$$
is satisfied for any $\omega\in\mathscr{B}(\mathbb{R}^+)$.
\begin{remark}
We would like to emphasize that Definitions \ref{Def-PNM-1} and \ref{Def-PNM-2} are equivalent, and this assertion will be justified in Subsection \ref{Subs-Equa-Def-PNM}. To prevent confusion, one may ignore Definition \ref{Def-PNM-1} temporarily until reaching Subsection \ref{Subs-Equa-Def-PNM}.
\end{remark}
Let $\mathcal{M}_{S}(\mathbb{R}^+)$ denote the set of all slice-condensed regular Borel measures on $\mathbb{R}^+$.
The next theorem will show that there exists a one-to-one correspondence between the continuous quaternionic positive definite functions and the slice-condensed regular Borel measures.
\begin{theorem}[Generalized Bochner's theorem]\label{Bochner-theorem}
If a quaternion-valued function $\varphi$ on $\mathbb{R}$ is continuous and positive definite, then there exists a unique $\mu\in\mathcal{M}_{S}(\mathbb{R}^+)$ such that
$$\varphi(t)=\int_{\mathbb{R}^+}\cos(tx)d\mathbf{Re}\mu(x)+\int_{\mathbb{R}^+}\sin(tx)d(\mu-\mathbf{Re}\mu)(x), $$
and vice versa.
\end{theorem}
\subsection{The quaternionic Hilbert space associated with a positive definite function}\label{H-space-PDF}
Let $\varphi$ be a quaternionic positive definite function, and $F_0(\mathbb{R},\mathbb{H})$ be the family of quaternion-valued functions on $\mathbb{R}$ with finite support. Evidently, $F_0(\mathbb{R},\mathbb{H})$ has a natural right $\mathbb{H}$-linear structure that makes it a right $\mathbb{H}$-vector space. The positive definite function $\varphi$ will induce a (possibly degenerate) inner product:
$$\langle f,g\rangle:=\sum_{s,t\in\mathbb{R}}\overline{g(s)}\varphi(s-t)f(t), $$
for all $f,g\in F_0(\mathbb{R},\mathbb{H})$.
Quotienting $F_0(\mathbb{R},\mathbb{H})$ by the subspace of functions with zero norm eliminates the degeneracy. Then taking the completion gives a quaternionic Hilbert space $(\mathcal{H}_\varphi,\langle \cdot,\cdot\rangle)$.
Note that if $\varphi$ vanishes at the origin, it can be easily seen that $\mathcal{H}_\varphi$ is a 0-dimensional space, and $\varphi\equiv 0$; then all the main results are trivially true. So without loss of generality, we can always assume $$\varphi(0)\neq 0. $$
Recall that every quaternionic Hilbert space can be transformed into a bilateral Hilbert space as follows:
1. Introduce a new inner product $(\cdot|\cdot)$, given by
$$(x|y)=\overline{\langle x,y\rangle}, \quad \forall x, y\in\mathcal{H}_\varphi.$$
2. Construct a left $\mathbb{H}$-linear structure on $\mathcal{H}_\varphi$: Let $\{x_a\}$ be an orthonormal basis, the left scalar multiplication is defined as
$$qx=\sum_{a}x_aq\langle x,x_a\rangle, \quad \forall q\in \mathbb{H}, \ x\in \mathcal{H}_\varphi. $$
Then the $\mathbb{H}$-vector space $\mathcal{H}_\varphi$ endowed with the inner product $(\cdot|\cdot)$ can be treated as a bilateral quaternionic Hilbert space.
For convenience, we denote the bilateral quaternionic Hilbert space $(\mathcal{H}_\varphi,(\cdot|\cdot))$ by $\tilde{\mathcal{H}}_\varphi$.
In this very case, we choose the orthonormal basis specifically given as
$$\{x_a\}:=\{\delta / \|\delta\|\}\cup\{x_\beta\},$$
where $\delta$ is the finite delta function given by
$$\delta(x)=\left\{\begin{array}{ccc}
1,& & x=0; \\
0,& & x\neq 0.
\end{array}\right.$$ and $\{x_\beta\}$ is an arbitrary orthonormal basis of $\{\delta / \|\delta\|\}^{\perp}$.
Such choice ensures the following commutativity:
\begin{equation}\label{Eq-Commute-delta}
q\delta=\delta q, \quad \forall \ q\in\mathbb{H}.
\end{equation}
\begin{remark}
Note that the left scalar multiplication on $\tilde{\mathcal{H}}_\varphi$ is different from the conventional left scalar multiplication. In other words, for a quaternion-valued function $f$ on $\mathbb{R}$ with finite support and an arbitrary element $q\in\mathbb{H}$, the following equality
$$(qf)(x)=qf(x), \qquad x\in\mathbb{R},$$
is generally not valid in $\tilde{\mathcal{H}}_\varphi$. So the commutativity in \eqref{Eq-Commute-delta} is not trivial.
\end{remark}
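In fact, with the specific basis chosen above, the commutativity \eqref{Eq-Commute-delta} can be verified directly: since $\langle\delta,x_\beta\rangle=0$ for every $x_\beta\in\{\delta/\|\delta\|\}^{\perp}$ and $\langle\delta,\delta/\|\delta\|\rangle=\|\delta\|$, the definition of the left scalar multiplication gives
$$q\delta=\frac{\delta}{\|\delta\|}\,q\,\|\delta\|=\delta q,\qquad q\in\mathbb{H},$$
where the last equality uses that $\|\delta\|$ is a positive real number and hence commutes with $q$.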
\subsection{Spectral theorems for quaternionic positive definite functions}\label{Subs-Spectral-Thm}
\begin{theorem}\label{Thm-Spectral}
A quaternion-valued function $\varphi$ defined on $\mathbb{R}$ is continuous and positive definite if and only if there exist a spectral system $(E:\mathscr{B}(\mathbb{R}^+)\mapsto\mathcal{B}(\mathcal{H}),J)$ and a point $\alpha$ in a quaternionic Hilbert space $\mathcal{H}$ such that
$$\varphi(t)=\int_{\mathbb{R}^+}\cos(tx)\ d\langle E\alpha,\alpha\rangle(x)+\int_{\mathbb{R}^+}\sin(tx)\ d\langle JE\alpha,\alpha\rangle(x). $$
\end{theorem}
\begin{proof}
First, we shall show the sufficiency. Assume that for a function $\varphi$, there exist a spectral system $(E:\mathscr{B}(\mathbb{R}^+)\mapsto\mathcal{B}(\mathcal{H}),J)$ and a point $\alpha$ in a quaternionic Hilbert space $\mathcal{H}$ such that
$$\varphi(t)=\int_{\mathbb{R}^+}\cos(tx)\ d\langle E\alpha,\alpha\rangle(x)+\int_{\mathbb{R}^+}\sin(tx)\ d\langle JE\alpha,\alpha\rangle(x). $$
Obviously, $\varphi$ is continuous in view of the dominated convergence theorem.
We only need to check that $\varphi$ is positive definite.
By Lemma 5.3 of \cite{Alpay-2016}, $\varphi(t)$ is identical with $\langle\mathbb{I}(f_t)\alpha,\alpha\rangle$ where $f_t(x):=e^{tx}$ and $\mathbb{I}$ is a $*$-homomorphism induced by the spectral system $(E,J)$.
For arbitrary $t_1,t_2,\cdots,t_k\in\mathbb{R}$ and $p_1,p_2,\cdots,p_k\in\mathbb{H}$, we have
\begin{eqnarray*}
\sum_{1\leq i,j\leq k} \overline{p_i}\varphi(t_i-t_j)p_j
&=&\sum_{1\leq i,j\leq k} \overline{p_i}\langle\mathbb{I}(f_{t_i-t_j})\alpha,\alpha\rangle p_j \\
&=& \sum_{1\leq i,j\leq k} \overline{p_i}\langle\mathbb{I}(f_{t_i})\mathbb{I}( f_{-t_j})\alpha,\alpha\rangle p_j\\
&=& \sum_{1\leq i,j\leq k} \overline{p_i}\langle\mathbb{I}( f_{-t_j})\alpha,\mathbb{I}(f_{-t_i})\alpha\rangle p_j
\end{eqnarray*}
Subsequently, since $\langle xp,yq\rangle=\overline q \langle x,y\rangle p$ for all $p,q\in\mathbb{H}$ and all $x,y\in\mathcal{H}$, the following equality holds true:
\begin{eqnarray*}
\sum_{1\leq i,j\leq k} \overline{p_i}\varphi(t_i-t_j)p_j
&=& \sum_{1\leq i,j\leq k} \langle\mathbb{I}( f_{-t_j})\alpha p_j,\mathbb{I}(f_{-t_i})\alpha p_i\rangle \\
&=& \| \sum_{1\leq j\leq k}\mathbb{I}( f_{-t_j})\alpha p_j\|^2 \\
&\geq&0
\end{eqnarray*}
Hence $\varphi$ is positive definite.
Next, we shall verify the necessity. The quaternionic Hilbert space $\mathcal{H}_\varphi$ constructed in Subsection \ref{H-space-PDF} will come into immediate use.
Assume that $\varphi:\mathbb{R}\mapsto\mathbb{H}$ is a continuous positive definite function.
Consider a family of shift operators $U_t$ $(t\in\mathbb{R})$ on $F_0(\mathbb{R},\mathbb{H})$ given by
\begin{equation}\label{Eq-Def-U_t}
U_t(f)=f(\cdot+t).
\end{equation}
The following facts hold:
1. $U_t$ is a right $\mathbb{H}$-linear bijection preserving the (possibly degenerate) inner product $\langle\cdot,\cdot
\rangle$ induced by the positive definite function $\varphi$ for all $t\in\mathbb{R}$.
2. $U_0=I$ and $U_tU_s=U_{t+s}$ for all $t,s\in\mathbb{R}$.
3. The continuity of $\varphi$ implies that $\displaystyle{\lim_{s\to t}}\| U_s(f)-U_t(f)\|=0$ holds for all $f\in F_0(\mathbb{R},\mathbb{H})$ and all $t\in\mathbb{R}$.
Thus $U_t$ $(t\in\mathbb{R})$ can be uniquely extended as a strongly continuous one-parameter unitary group on $\mathcal{H}_\varphi$. It follows directly from Theorem \ref{Thm-Stone-I} that
the infinitesimal generator, denoted by $A$, of $U_t$ is a right linear anti-self-adjoint operator, and
\begin{equation}\label{Eq-Thm-Spectral-1}
U_t=e^{tA}|_S \ \text{for all} \ t\in \mathbb{R}.
\end{equation}
Here $e^{tA}|_S$ is defined by the functional calculus for the intrinsic slice function $e^{tx}$ on the S-spectrum of $A$. More precisely,
\begin{equation}\label{Eq-Thm-Spectral-2}
\begin{split}
\langle e^{tA}|_Sx,y\rangle
= & \int_{\sigma_S(A)\cap \mathbb{C}_j^+}
\mathbf{Re}(e^{tp})d \langle E_jx,y\rangle (p)+\\
& \int_{\sigma_S(A)\cap \mathbb{C}_j^+}
\mathbf{Im}(e^{tp})d\langle JE_jx,y\rangle (p),
\end{split}
\end{equation}
where $(E_j:\mathscr{B}(\sigma_S(A)\cap \mathbb{C}_j^+)\mapsto\mathcal{B}(\mathcal{H}_\varphi),J)$ is the spectral system associated with $A$. Moreover,
$$\mathbf{Re}(e^{tp})=\cos(t|p|)
\ \text{ and }\
\mathbf{Im}(e^{tp})=\sin(t|p|),
$$
since $\sigma_S(A)$ contains only pure imaginary quaternions.
Define a resolution of identity $E:\mathscr{B}(\mathbb{R}^+)\mapsto\mathcal{B}(\mathcal{H}_\varphi)$ as
$$E(\omega)=E_j(\sigma_S(A)\cap \mathbb{C}_j^+ \cap j\omega), \qquad \omega\in\mathscr{B}(\mathbb{R}^+). $$
One may notice that $\sigma_S(A)\cap \mathbb{C}_j^+$ is in fact a subset of $j\mathbb{R}^+$. So $E$ is essentially a zero extension of $E_j$. Furthermore, \eqref{Eq-Thm-Spectral-1} and \eqref{Eq-Thm-Spectral-2} indicate
\begin{equation}\label{Eq-Thm-Spectral-3}
\langle U_t\delta,\delta \rangle =\int_{\mathbb{R}^+}
\cos (tx) d \langle E\delta,\delta \rangle (x)+
\int_{\mathbb{R}^+}
\sin (tx) d\langle JE\delta,\delta\rangle (x),
\end{equation}
where $\delta$ is the finite delta function given by
$$\delta(x)=\left\{\begin{array}{ccc}
1,& & x=0; \\
0,& & x\neq 0.
\end{array}\right.$$
A direct calculation shows that the left side of \eqref{Eq-Thm-Spectral-3} is also equal to $\varphi(t)$.
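Indeed, $U_t\delta=\delta(\cdot+t)$ is the characteristic function of the single point $-t$, so the inner product induced by $\varphi$ gives
$$\langle U_t\delta,\delta\rangle=\overline{\delta(0)}\,\varphi\big(0-(-t)\big)\,(U_t\delta)(-t)=\varphi(t).$$
This completes the proof.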
\end{proof}
\begin{corollary}\label{Cor-Spectral}
A quaternion-valued function $\varphi$ defined on $\mathbb{R}$ is continuous and positive definite if and only if there exist a spectral system $(E:\mathscr{B}(\mathbb{R}^+)\mapsto\mathcal{B}(\mathcal{H}),J)$ and a point $\alpha$ in a quaternionic Hilbert space $\mathcal{H}$ such that
$$\varphi(t)=\int_{\mathbb{R}^+}\cos(tx)\ d\langle E\alpha,\alpha\rangle(x)+\int_{\mathbb{R}^+}\sin(tx)\ d\langle J_0E\alpha,\alpha\rangle(x), $$
where $J_0=J-JE(\{0\})$.
\end{corollary}
\begin{proof}
For any $\omega\in\mathscr{B}(\mathbb{R}^+)$ with $0\not\in\omega$, we have
$$E(\{0\})E(\omega)=E(\{0\}\cap\omega)=E(\emptyset)=0. $$
Hence $JE(\omega)=J_0E(\omega)$, which means $\langle JE(\omega)\alpha,\alpha\rangle=\langle J_0E(\omega)\alpha,\alpha\rangle$.
Then we obtain
$$\int_{\mathbb{R}^+}\sin(tx)\ d\langle JE\alpha,\alpha\rangle(x)=\int_{\mathbb{R}^+}\sin(tx)\ d\langle J_0E\alpha,\alpha\rangle(x),$$
since the integrand $\sin(tx)$ vanishes at $x=0$ and these two measures $\langle JE\alpha,\alpha\rangle$ and $\langle J_0E\alpha,\alpha\rangle$ are identical on $(0,+\infty)$.
Therefore, this corollary follows directly from Theorem \ref{Thm-Spectral}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Bochner-theorem}]
We only verify the necessity; the sufficiency can be proved by reversing the argument.
Assume $\varphi:\mathbb{R}\mapsto\mathbb{H}$ is a continuous positive definite function. By Corollary \ref{Cor-Spectral}, there exist a spectral system $(E,J)$ and a point $\alpha$ in a quaternionic Hilbert space $\mathcal{H}$ such that
\begin{equation}\label{Eq-Thm-Bochner-1}
\varphi(t)=\int_{\mathbb{R}^+}\cos(tx)\ d\langle E\alpha,\alpha\rangle(x)+\int_{\mathbb{R}^+}\sin(tx)\ d\langle J_0E\alpha,\alpha\rangle(x).
\end{equation}
Consider a regular Borel measure given by
$$\mu=\langle E\alpha,\alpha\rangle+\langle J_0E\alpha,\alpha\rangle. $$
By Definition \ref{Def-PNM-2}, i.e., the second definition of a slice-condensed measure, we know $\mu$ is slice-condensed.
Notice two facts:
1. $E(\omega)$ is self-adjoint for all $\omega\in\mathscr{B}(\mathbb{R}^+)$.
2. $J$ is anti-self-adjoint, and commutes with $E$.
Thus, $\langle E\alpha,\alpha\rangle$ is real-valued and $\langle J_0E\alpha,\alpha\rangle$ is purely imaginary-valued. This implies
$$\mathbf{Re}\mu=\langle E\alpha,\alpha\rangle.$$
Hence, \eqref{Eq-Thm-Bochner-1} indicates
$$\varphi(t)=\int_{\mathbb{R}^+}\cos(tx)d\mathbf{Re}\mu(x)+\int_{\mathbb{R}^+}\sin(tx)d(\mu-\mathbf{Re}\mu)(x). $$
In addition, the uniqueness of the slice-condensed measure $\mu$ is a direct result of the Stone-Weierstrass theorem.
\end{proof}
\subsection{Equivalence between the two definitions of slice-condensed measures}\label{Subs-Equa-Def-PNM}
We claimed that Definitions \ref{Def-PNM-1} and \ref{Def-PNM-2} are equivalent. Now this assertion will be verified.
First we recall some notation.
$\mathbb{H}_I$ denotes the set of pure imaginary quaternions, and $\mathbb{R}^+$ the set of non-negative real numbers. $\mathscr{B}(X)$ stands for the Borel $\sigma$-algebra on a topological space $X$. The function $\rho: \mathbb{H}_I\mapsto\mathbb{R}^+$ is defined as
$$\rho(x):=|x|. $$
It induces a push-forward mapping $\rho_*$ given by
\begin{equation}\label{Eq-Def-push-forward}
\rho_*(\Gamma)(\Omega):=\Gamma(\rho^{-1}(\Omega)),
\end{equation}
for any Borel measure $\Gamma$ on $\mathbb{H}_I$, and any $\Omega\in\mathscr{B}(\mathbb{R}^+)$.
Recall Definition \ref{Def-PNM-1}:
a quaternion-valued regular Borel measure $\mu$ on $\mathbb{R}^+$ is said to be slice-condensed if there exists a non-negative finite regular Borel measure $\Gamma$ on $\mathbb{H}_I$ such that the following equality holds:
$$\mu=\rho_*(\Gamma+\frac{x}{|x|}\Gamma).$$
Here we stipulate that $\frac{x}{|x|}=0$ when $x=0$, and consider this function as a Radon-Nikodym derivative, which means $\frac{x}{|x|}\Gamma$ is a regular Borel measure defined by
$$\frac{x}{|x|}\Gamma(\Omega):=\int_{x\in\Omega}\frac{x}{|x|}d\Gamma(x)$$
for any Borel set $\Omega\in\mathscr{B}(\mathbb{H}_I)$.
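As a simple illustration of this definition (an elementary example, not taken from the references): let $\Gamma$ be the unit point mass at a point $x_0\in\mathbb{H}_I\setminus\{0\}$. Then $\rho_*\Gamma$ is the unit point mass at $|x_0|\in\mathbb{R}^+$, and
$$\mu=\rho_*\Big(\Gamma+\frac{x}{|x|}\Gamma\Big)$$
is the quaternion-valued measure assigning the mass $1+\frac{x_0}{|x_0|}$ to the point $|x_0|$ and vanishing elsewhere; thus $\mathbf{Re}\mu$ is the unit point mass at $|x_0|$ and $\mu-\mathbf{Re}\mu$ assigns the mass $\frac{x_0}{|x_0|}$ to $|x_0|$. By Theorem \ref{Bochner-theorem}, the continuous positive definite function corresponding to this $\mu$ is
$$\varphi(t)=\cos(t|x_0|)+\frac{x_0}{|x_0|}\sin(t|x_0|)=e^{tx_0}.$$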
Recall Definition \ref{Def-PNM-2}:
a quaternion-valued regular Borel measure $\mu$ on $\mathbb{R}^+$ is said to be slice-condensed if
there exists a spectral system $(E:\mathscr{B}(\mathbb{R}^+)\mapsto\mathcal{B}(\mathcal{H}),J)$ and a point $\alpha$ in a quaternionic Hilbert space $\mathcal{H}$ such that the following equality holds:
$$\mu=\langle E\alpha,\alpha\rangle + \langle J_0E\alpha,\alpha\rangle,$$
where $J_0$ is given by $J_0=J-JE(\{0\})$ and $\langle\cdot,\cdot\rangle$ stands for the inner product on $\mathcal{H}$.
For ease of explanation, we call any measure that satisfies Definition \ref{Def-PNM-1} a slice-condensed measure of type I, and any measure that satisfies Definition \ref{Def-PNM-2} a slice-condensed measure of type II.
\begin{lemma}\label{Lemma-Eq-PNM-1}
Any slice-condensed measure of type I is a slice-condensed measure of type II.
\end{lemma}
\begin{proof}
Assume $\mu$ is a slice-condensed measure of type I. Then there exists a non-negative finite regular Borel measure $\Gamma$ on $\mathbb{H}_I$ such that the following equality holds:
\begin{equation}\label{Eq-mu}\mu=\rho_*(\Gamma+\frac{x}{|x|}\Gamma)
\end{equation}
Let $L^2(\Gamma,\mathbb{H})$ denote the quaternionic Hilbert space consisting of Borel measurable quaternion-valued functions on $\mathbb{H}_I$ which are square-integrable with respect to the measure $\Gamma$. The inner product on $L^2(\Gamma,\mathbb{H})$ is naturally given by
$$
\langle f,g\rangle:=\int_{\mathbb{H}_I}\overline{g(x)} f(x) d\Gamma(x).
$$
Consider a resolution of identity $E'$ on $\mathbb{H}_I$ and an anti-self-adjoint unitary operator $J$ defined as follows:
$$E'(\omega)f(x):=\chi_\omega(x) f(x),$$
for all $\omega\in \mathscr{B}(\mathbb{H}_I)$, and all $f\in L^2(\Gamma,\mathbb{H})$. Here $\chi_\omega$ is the characteristic function of the set $\omega$.
$$Jf(x):=\left\{\begin{array}{lll}
\frac{x}{|x|}f(x),& &x\neq 0; \\
jf(0),& & x=0;
\end{array}\right. $$
where $j$ is an arbitrary imaginary unit.
It can be easily seen that $E'$ commutes with $J$.
Applying the push-forward mapping $\rho_*$ given by \eqref{Eq-Def-push-forward} to $E'$ produces a resolution of identity $E$ on $\mathbb{R}^+$ as
$$\langle E f,g\rangle:=\rho_*(\langle E'f,g\rangle),\qquad \forall f,\ g\in L^2(\Gamma,\mathbb{H}). $$
Equivalently, the resolution of identity $E$ is defined by
$$E(\omega):=E'(\rho^{-1}(\omega)), \qquad \forall \omega\in \mathscr{B}(\mathbb{R}^+),$$
where $\rho(x)=|x|$, $x\in\mathbb{H}_I$.
We notice that $E$ also commutes with $J$. Thus $(E,J)$ is a spectral system according to Definition \ref{Def-FCS}. Furthermore, direct calculations yield
$$\langle E(\omega)\alpha,\alpha\rangle=\rho_*\Gamma(\omega),$$
and
$$\langle J_0E(\omega)\alpha,\alpha\rangle=\rho_*(\frac{x}{|x|}\Gamma)(\omega),$$
for all $\omega\in \mathscr{B}(\mathbb{R}^+)$,
where $\alpha$ is given as the characteristic function of $\mathbb{H}_I$, and $J_0=J-JE(\{0\})$.
Substituting the two equalities above into \eqref{Eq-mu}, we thus obtain
$$\mu=\langle E\alpha,\alpha\rangle + \langle J_0E\alpha,\alpha\rangle. $$
Therefore, $\mu$ is a slice-condensed measure of type II.
\end{proof}
\begin{lemma}\label{Lemma-Eq-PNM-2}
Any slice-condensed measure of type II is a slice-condensed measure of type I.
\end{lemma}
\begin{proof}
This proof is lengthy, so we first outline it and then give the details.
Let $\mu$ be a slice-condensed measure of type II. Corollary \ref{Cor-Spectral} indicates that the following function
\begin{equation}\label{EQ-OULINE-varphi}
\varphi(t):=\int_{\mathbb{R}^+}\cos(tx)d\mathbf{Re}\mu(x)+\int_{\mathbb{R}^+}\sin(tx)d(\mu-\mathbf{Re}\mu)(x),
\end{equation}
is continuous and positive definite. As shown in Subsection \ref{H-space-PDF}, there is a quaternionic Hilbert space $\mathcal{H}_\varphi$ associated with $\varphi$.
Applying Theorem \ref{Thm-Stone-II} to the unitary group $U_t$ on $\mathcal{H}_\varphi$ given by \eqref{Eq-Def-U_t}, we obtain
$$U_t=e^{tA}|_L $$
with $A$ being the infinitesimal generator of $U_t$.
Definition \ref{Def-FC-L-Spec}, along with the commutativity shown in \eqref{Eq-Commute-delta}, yields two facts:\\
1.
\begin{equation*}
\begin{split}
(U_t \delta |\delta )=& \sum_{l=0}^{3}\int_{\sigma_L(A)}\cos t|x| i_ld\mu_{i_0,i_l}[\delta,\delta](x) - \\
& \sum_{k=1}^{3}\sum_{l=0}^{3}\int_{\sigma_L(A)}\frac{x_k}{|x|}\sin t|x|i_li_kd\mu_{i_0,i_l}[\delta,\delta](x),
\end{split}
\end{equation*}
where $\delta$ is the finite delta function given by
$$\delta(x)=\left\{\begin{array}{ccc}
1,& & x=0; \\
0,& & x\neq 0;
\end{array}\right.$$
and $\mu_{i_0,i_l}[\delta,\delta]$ $(l=0,1,2,3)$ are regular Borel measures on $\sigma_L(A)$ determined by $A$ and $\delta$.\\
2. $$\mu_{i_0,i_l}[\delta,\delta]\ \left\{\begin{array}{lcl}
\text{is finite and non-negative,} & & l=0, \\
& & \\
=0 & & l=1,2,3.\end{array} \right. $$
We thus have
\begin{equation}\label{Eq-OUTLINE-1}
(U_t\delta|\delta)= \int_{\mathbb{H}_I}\cos t|x|d\Gamma(x)+ \int_{\mathbb{H}_I}\frac{x}{|x|}\sin t|x|d\Gamma(x),
\end{equation}
where $\Gamma$ is a non-negative finite regular Borel measure on $\mathbb{H}_I$, the set of pure imaginary quaternions, defined by
$$\Gamma(\omega):=\mu_{i_0,i_0}[\delta,\delta]\big(\sigma_L(A)\cap(-\omega)\big), \quad \forall\ \omega\in\mathscr{B}(\mathbb{H}_I). $$
By the definitions of $U_t$ and $\delta$, it is easy to see that the left side of \eqref{Eq-OUTLINE-1} equals $\varphi(t)$.
On the other hand,
a direct calculation shows that the right side of \eqref{Eq-OUTLINE-1} equals $$\int_{\mathbb{R}^+}\cos tx \ d\mathbf{Re}\mu'(x)+\int_{\mathbb{R}^+}\sin tx\ d(\mu'-\mathbf{Re}\mu')(x),$$
where $\mu'$ is a slice-condensed measure of type I given by $$\mu':=\rho_*(\Gamma+\frac{x}{|x|}\Gamma). $$
Hence, $$\varphi(t)=\int_{\mathbb{R}^+}\cos tx \ d\mathbf{Re}\mu'(x)+\int_{\mathbb{R}^+}\sin tx\ d(\mu'-\mathbf{Re}\mu')(x).$$
Comparing this equality with \eqref{EQ-OULINE-varphi}, we conclude that the slice-condensed measure $\mu$ of type II is identical to the slice-condensed measure $\mu'$ of type I. This completes the outline.
The details are as follows:
{\bf Step 1:}
Assume $\mu$ is a slice-condensed measure of type II. Then there exists a spectral system $(E:\mathscr{B}(\mathbb{R}^+)\mapsto\mathcal{B}(\mathcal{H}),J)$ and a point $\alpha$ in a quaternionic Hilbert space $\mathcal{H}$ such that the following equality holds:
$$\mu=\langle E\alpha,\alpha\rangle + \langle J_0E\alpha,\alpha\rangle,$$
where $J_0=J-JE(\{0\})$. It can be seen easily that $\mathbf{Re}\mu(x)=\langle E\alpha,\alpha\rangle$, since $E$ is self-adjoint and $J_0$ is anti-self-adjoint.
Corollary \ref{Cor-Spectral} yields that the function $\varphi$ given by \begin{equation}\label{Eq-Def-varphi}
\varphi(t):=\int_{\mathbb{R}^+}\cos(tx)d\mathbf{Re}\mu(x)+\int_{\mathbb{R}^+}\sin(tx)d(\mu-\mathbf{Re}\mu)(x),
\end{equation}
is continuous and positive definite.
As shown in Subsection \ref{H-space-PDF}, there is a quaternionic Hilbert space $\mathcal{H}_\varphi$ associated with $\varphi$; moreover, $\mathcal{H}_\varphi$ can be transformed into a bilateral quaternionic Hilbert space $\tilde{\mathcal{H}}_\varphi$.
In the proof of Theorem \ref{Thm-Spectral}, it was shown that the family of operators $U_t$ $(t\in\mathbb{R})$ defined by \eqref{Eq-Def-U_t} is a strongly continuous one-parameter unitary group on $\mathcal{H}_\varphi$. It follows immediately from Theorem \ref{Thm-Stone-II} that
$$U_t=e^{tA}|_L \ \text{for all} \ t\in \mathbb{R}. $$
Here, $A$ is the infinitesimal generator of $U_t$, and $e^{tA}|_L$ is defined by the functional calculus for the function $e^{tx}$ on the left spectrum of $A$ in the bilateral Hilbert space $\tilde{\mathcal{H}}_\varphi$.
{\bf Step 2:}
Because $A$ is anti-self-adjoint, the left spectrum $\sigma_L(A)$ must be a subset of the pure imaginary space $\mathbb{H}_I:= i_1\mathbb{R}+ i_2\mathbb{R} + i_3\mathbb{R}$. Due to this fact, we know the function $e^{tx}$ can be decomposed as $$e^{tx}=\cos t|x|+i_1\frac{x_1}{|x|}\sin t|x|+i_2\frac{x_2}{|x|}\sin t|x|+i_3\frac{x_3}{|x|}\sin t|x|,$$
for all $x=x_1i_1+x_2i_2+x_3i_3$ in the left spectrum. Definition \ref{Def-FC-L-Spec} thus leads to the following equality:
\begin{equation}\label{Eq-exp-L-1-1}
(e^{tA}|_L \delta |\delta )=\big(\phi(\cos t|x|)\delta|\delta\big)+\sum_{k=1}^{3}\big(i_k\phi(\frac{x_k}{|x|}\sin t|x|)\delta|\delta\big),
\end{equation}
where $\delta$ is the finite delta function given by
$$\delta(x)=\left\{\begin{array}{ccc}
1,& & x=0; \\
0,& & x\neq 0;
\end{array}\right.$$
since $\phi$ is a $\mathbb{H}$-algebra homomorphism from $\mathcal{B}(\sigma_L(A),\mathbb{H})$ to $\mathcal{B}_q(\tilde{\mathcal{H}}_\varphi)$.
Moreover, the definition of left scalar multiplication (one may refer to Subsection \ref{H-space-PDF} for more details) indicates that $L_{i_k}$, i.e., the left scalar multiplication by $i_k$ ($k=1,2,3$), is an anti-self-adjoint right $\mathbb{H}$-linear bounded operator. Hence, by shifting the position of $i_k$ in \eqref{Eq-exp-L-1-1} we have
$$(e^{tA}|_L \delta |\delta )=\big(\phi(\cos t|x|)\delta|\delta\big)-\sum_{k=1}^{3}\big(\phi(\frac{x_k}{|x|}\sin t|x|)\delta|i_k\delta\big),$$
which, along with \eqref{Eq-Commute-delta}, implies
\begin{equation}\label{Eq-equivalence-PNM-exp-tA-1}
(e^{tA}|_L \delta |\delta )=\big(\phi(\cos t|x|)\delta|\delta\big)-\sum_{k=1}^{3}\big(\phi(\frac{x_k}{|x|}\sin t|x|)\delta|\delta\big)i_k.
\end{equation}
By Definition \ref{Def-FC-L-Spec}, there exist regular $\mathbb{R}$-valued Borel measures $\mu_{i_v,i_l}[\alpha,\beta]$ $(v,l=0,1,2,3; \alpha,\beta\in\tilde{\mathcal{H}}_\varphi)$ such that
\begin{equation}\label{Eq-mu-expression}
(\phi(f)\alpha|\beta)=\sum_{v,l=0}^{3}\int_{\sigma_L(A)}f_vi_l\ d\mu_{i_v,i_l}[\alpha,\beta],
\end{equation}
where $f=\sum_{v=0}^{3}f_vi_v\in\mathcal{B}(\sigma_L(A),\mathbb{H})$ and $f_v$ is $\mathbb{R}$-valued. Applying \eqref{Eq-mu-expression} to \eqref{Eq-equivalence-PNM-exp-tA-1} yields
\begin{equation}\label{Eq-equivalence-PNM-exp-tA-2}
\begin{split}
(e^{tA}|_L \delta |\delta )=& \sum_{l=0}^{3}\int_{\sigma_L(A)}\cos t|x| i_ld\mu_{i_0,i_l}[\delta,\delta](x) - \\
& \sum_{k=1}^{3}\sum_{l=0}^{3}\int_{\sigma_L(A)}\frac{x_k}{|x|}\sin t|x|i_li_kd\mu_{i_0,i_l}[\delta,\delta](x).
\end{split}
\end{equation}
{\bf Step 3:}
Since $\phi$ is an involution preserving $\mathbb{H}$-algebra homomorphism, any $\mathbb{R}$-valued bounded measurable function $f$ on $\sigma_L(A)$ satisfies the following equations:
\begin{equation*}
\begin{split}
(\phi(f)\delta|-i_k\delta)&=(i_k\phi(f)\delta|\delta) \\
&=(\phi(i_kf)\delta|\delta)\\
\\
(\phi(f)\delta|i_k\delta)&=(-i_k\phi(f)\delta|\delta)\\
&=(\phi(\overline{i_kf})\delta|\delta)\\
\\
\mathbf{Re}(\phi(i_kf)\delta|\delta)&=\mathbf{Re}(\delta|\phi(\overline{i_kf})\delta)\\
&=\mathbf{Re}(\phi(\overline{i_kf})\delta|\delta)
\end{split}\qquad k=1,2,3.
\end{equation*}
It thus follows that
\begin{equation*}
\mathbf{Re}(\phi(f)\delta|-i_k\delta)=\mathbf{Re}(\phi(f)\delta|i_k\delta),\qquad k=1,2,3.
\end{equation*}
Applying \eqref{Eq-Commute-delta} to the equality above leads to
\begin{equation}\label{Eq-mu-v-l-vanish-1}
\mathbf{Re}\Big((\phi(f)\delta|\delta)(-i_k)\Big)=\mathbf{Re}\Big((\phi(f)\delta|\delta)i_k\Big),\qquad k=1,2,3.
\end{equation}
We combine \eqref{Eq-mu-expression} and \eqref{Eq-mu-v-l-vanish-1} to deduce that
$$\int_{\sigma_L(A)}f\ d\mu_{i_0,i_k}[\delta,\delta]=-\int_{\sigma_L(A)}f\ d\mu_{i_0,i_k}[\delta,\delta], \qquad k=1,2,3,$$
is valid for any $\mathbb{R}$-valued bounded measurable function $f$ on $\sigma_L(A)$.
Since $f$ is arbitrary, we arrive at the key conclusion that
$$\mu_{i_0,i_k}[\delta,\delta]=0, \qquad k=1,2,3.$$
Moreover, the following equality $$\int_{\sigma_L(A)}f^2d\mu_{i_0,i_0}[\delta,\delta]=\mathbf{Re}(\phi(f^2)\delta|\delta)=\mathbf{Re}(\phi(f)\delta|\phi(f)\delta)\geq 0\ (\neq+\infty), $$
implies that $\mu_{i_0,i_0}[\delta,\delta]$ is non-negative and finite.
Substituting the equalities above into \eqref{Eq-equivalence-PNM-exp-tA-2} yields \begin{equation}\label{Eq-equivalence-PNM-exp-tA-3}
(e^{tA}|_L \delta |\delta )= \int_{\sigma_L(A)}\cos t|x| d\mu_{i_0,i_0}[\delta,\delta](x) - \int_{\sigma_L(A)}\frac{x}{|x|}\sin t|x|d\mu_{i_0,i_0}[\delta,\delta](x).
\end{equation}
Set $\gamma$ to be the zero extension of $\mu_{i_0,i_0}[\delta,\delta]$ to the pure imaginary space $\mathbb{H}_I=\mathbb{R}i_1+\mathbb{R}i_2+\mathbb{R}i_3$. Let $\Gamma$ be a non-negative finite regular Borel measure on $\mathbb{H}_I$ defined by
$$\Gamma(\omega):=\gamma(-\omega), \quad \forall\ \omega\in\mathscr{B}(\mathbb{H}_I). $$
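Note that, by the change of variables $x\mapsto -x$,
$$\int_{\mathbb{H}_I}\cos t|x|\,d\Gamma(x)=\int_{\mathbb{H}_I}\cos t|x|\,d\gamma(x),\qquad
\int_{\mathbb{H}_I}\frac{x}{|x|}\sin t|x|\,d\Gamma(x)=-\int_{\mathbb{H}_I}\frac{x}{|x|}\sin t|x|\,d\gamma(x);$$
this accounts for the change of sign in the following formula.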
Then \eqref{Eq-equivalence-PNM-exp-tA-3} can be rewritten as
\begin{equation*}
(U_t\delta|\delta)=(e^{tA}|_L \delta |\delta )= \int_{\mathbb{H}_I}\cos t|x|d\Gamma(x)+ \int_{\mathbb{H}_I}\frac{x}{|x|}\sin t|x|d\Gamma(x).
\end{equation*}
By the definition of $U_t$, we obtain
\begin{equation}\label{Eq-equivalence-PNM-exp-tA-4}
\varphi(t)=(U_t\delta |\delta )= \int_{\mathbb{H}_I}\cos t|x|d\Gamma(x)+\int_{\mathbb{H}_I}\frac{x}{|x|}\sin t|x|d\Gamma(x).
\end{equation}
{\bf Step 4: }
Consider the slice-condensed measure $\mu'$ of type I given by $$\mu':=\rho_*(\Gamma+\frac{x}{|x|}\Gamma). $$
A direct calculation shows that the right side of \eqref{Eq-equivalence-PNM-exp-tA-4} is equal to $$\int_{\mathbb{R}^+}\cos tx \ d\mathbf{Re}\mu'(x)+\int_{\mathbb{R}^+}\sin tx\ d(\mu'-\mathbf{Re}\mu')(x).$$
Then according to the expression of $\varphi(t)$ in \eqref{Eq-Def-varphi}, we obtain
$$\int_{\mathbb{R}^+}\cos tx \ d\mathbf{Re}\mu(x)=\int_{\mathbb{R}^+}\cos tx \ d\mathbf{Re}\mu'(x),$$
and
$$\int_{\mathbb{R}^+}\sin tx \ d(\mu-\mathbf{Re}\mu)(x)=\int_{\mathbb{R}^+}\sin tx\ d(\mu'-\mathbf{Re}\mu')(x), $$
for all $t\in\mathbb{R}$.
Hence, by the Stone-Weierstrass theorem,
$$\int_{\mathbb{R}^+}f \ d\mathbf{Re}\mu=\int_{\mathbb{R}^+} f\ d\mathbf{Re}\mu'$$
holds for any $f\in C_0(\mathbb{R}^+)$,
and
$$\int_{\mathbb{R}^+}f \ d(\mu-\mathbf{Re}\mu)=\int_{\mathbb{R}^+} f\ d(\mu'-\mathbf{Re}\mu')$$
holds for any $f\in C_0(\mathbb{R}^+)$ with $f(0)=0$.
Thus we know
$$\mathbf{Re}\mu=\mathbf{Re}\mu' \ \text{ on } \mathbb{R}^+,$$
and
$$\mu-\mathbf{Re}\mu=\mu'-\mathbf{Re}\mu' \ \text{ on } \mathbb{R}^+\setminus\{0\}.$$
Since $\mu$ and $\mu'$ are slice-condensed of type II and type I respectively, by definition $\mu-\mathbf{Re}\mu$ and $\mu'-\mathbf{Re}\mu'$ both vanish at the origin, namely,
$$\mu(\{0\})-\mathbf{Re}\mu(\{0\})=\mu'(\{0\})-\mathbf{Re}\mu'(\{0\})=0. $$
Therefore, we come to the final conclusion: $\mu$ is identical with $\mu'$ on $\mathbb{R}^+$.
In other words, any slice-condensed measure of type II is a slice-condensed measure of type I.
\end{proof}
Then the equivalence of Definitions \ref{Def-PNM-1} and \ref{Def-PNM-2} follows immediately from Lemmas \ref{Lemma-Eq-PNM-1} and \ref{Lemma-Eq-PNM-2}.
\begin{theorem}
Any slice-condensed measure of type I is a slice-condensed measure of type II, and vice versa.
\end{theorem}
\section{An application to quaternionic random processes}\label{Sec-App}
On the application side, quaternionic random processes have become increasingly popular in the field of signal processing (see, e.g., \cite{Buchholz-2008, Navarro-Moreno-2013, Took-2011}). In this section, we reveal some mathematical properties of a special family of quaternionic random processes via spectral analysis.
Let $\mathbb{E}(Y)$ denote the mean of a quaternionic random variable $Y$. The covariance $cov(Y_1,Y_2)$ of arbitrary quaternionic random variables $Y_1,Y_2$ is given as
$$cov(Y_1,Y_2):=\mathbb{E}(Y_1\overline{Y_2});$$
and the variance of $Y$ is defined as
$$var(Y):=cov(Y,Y). $$
One may refer to \cite{Navarro-Moreno-2013, Took-2011} for more basic notations.
\begin{definition}
A quaternionic process $X=\{X_t:t\geq 0\}$ is said to be weakly stationary if for all $t,s\geq 0$ and $h>0$ the following equalities hold:
$$\mathbb{E}(X_t)=\mathbb{E}(X_s),$$
and
$$cov(X_t,X_s)=cov(X_{t+h},X_{s+h}). $$
\end{definition}
Thus, $X$ is weakly stationary if and only if it has a constant mean and its auto-covariance $cov(X_t,X_s)$ is a
function of $s-t$ only. For a weakly stationary process $X$, such a function is called the auto-covariance function of $X$, and denoted by $c_X$. More precisely,
$$c_X(t):=\left\{\begin{array}{lcl}
cov(X_0,X_t), & & t\geq 0; \\
&&\\
cov(X_{-t},X_0), & & t<0.
\end{array} \right.
$$
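A simple illustrative example (an elementary construction, not taken from the cited references): fix $x_0\in\mathbb{H}_I$ and a quaternionic random variable $Y$ with $\mathbb{E}(Y)=0$ and $\mathbb{E}(|Y|^2)<\infty$, and set $X_t:=e^{tx_0}Y$ for $t\geq 0$. Then $\mathbb{E}(X_t)=e^{tx_0}\mathbb{E}(Y)=0$ and
$$cov(X_t,X_s)=\mathbb{E}\big(e^{tx_0}Y\overline{Y}e^{-sx_0}\big)=\mathbb{E}(|Y|^2)\,e^{(t-s)x_0},$$
which depends only on $t-s$; hence $X$ is weakly stationary, with auto-covariance function $c_X(t)=\mathbb{E}(|Y|^2)e^{-tx_0}$ for $t\geq 0$.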
Auto-covariance functions have the following property.
\begin{theorem}\label{Thm-PD-WSP}
Assume that $X$ is a weakly stationary quaternionic process. Then its auto-covariance function $c_X(t)$ is positive definite.
\end{theorem}
\begin{proof}
For all $t_1,t_2,\cdots,t_k\in\mathbb{R}$ and all $p_1,p_2,\cdots,p_k\in\mathbb{H}$,
\begin{equation*}
\begin{split}
\sum_{1\leq i,j\leq k} \overline{p_i}c_X(t_i-t_j)p_j&= \sum_{1\leq i,j\leq k} \overline{p_i}c_X(t'_j-t'_i)p_j \\
& =\sum_{1\leq i,j\leq k} \overline{p_i}cov(X_{t'_i},X_{t'_j})p_j \\
& =var(Y)\\
& \geq 0,
\end{split}
\end{equation*}
where, $t'_i=-t_i+\displaystyle{\max_{1\leq l\leq k} \{t_l\}}$, and $Y:=\displaystyle{\sum_{i=1}^{k}\overline{p_i}X_{t'_i}}$.
\end{proof}
\begin{theorem}[Spectral theorem for auto-covariance functions]\label{Thm-Spec-cov}
Assume $X$ is a weakly stationary quaternionic process. Then there exists a unique slice-condensed measure $\mu$ on $\mathbb{R}^+$ such that
\begin{equation*}
c_X(t)=\int_{\mathbb{R}^+}\cos(tx)d\mathbf{Re}\mu(x)+\int_{\mathbb{R}^+}\sin(tx)d(\mu-\mathbf{Re}\mu)(x),
\end{equation*}
whenever the auto-covariance function $c_X$ is continuous at the origin.
\end{theorem}
\begin{proof}
This follows immediately from Theorems \ref{Thm-PD-WSP} and \ref{Bochner-theorem}. We need only demonstrate that $c_X$ is continuous on $\mathbb{R}$.
Without loss of generality, we assume $\mathbb{E}(X_t)=0$ for all $t\geq 0$.
Then a simple application of the Cauchy-Schwarz inequality yields
\begin{equation*}
\begin{split}
|c_X(t+\Delta t)-c_X(t)| & =\big|\mathbb{E}\big(X_0\overline{(X_{t+\Delta t}-X_t)}\big)\big| \\
& \leq \sqrt{\mathbb{E}(|X_0|^2)\mathbb{E}(| X_{t+\Delta t}-X_t|^2) }\\
& = \sqrt{c_X(0)[2c_X(0)-c_X(\Delta t)-c_X(-\Delta t)]}
\end{split}
\end{equation*}
for all $t,t+\Delta t\geq 0$, which indicates that $c_X$ is continuous on $\mathbb{R}^+$.
Moreover, it can easily be seen that
$$c_X(-t)=\mathbb{E}(X_t\overline{X_0})=\overline{\mathbb{E}(X_0\overline{X_t})}=\overline{c_X(t)}$$
holds for all $t\geq 0$. Hence, $c_X$ is continuous on $\mathbb{R}$.
\end{proof}
\begin{corollary}\label{Cor-Spec-cov}
Assume $X$ is a weakly stationary quaternionic process. Then there exists a $($not necessarily unique$)$ non-negative finite regular Borel measure $\Gamma$ on the set of pure imaginary quaternions $\mathbb{H}_I$ such that
\begin{equation*}
c_X(t)=\int_{\mathbb{H}_I}e^{tx}d\Gamma(x),
\end{equation*}
whenever the auto-covariance function $c_X$ is continuous at the origin.
\end{corollary}
\begin{proof}
It follows from Theorem \ref{Thm-Spec-cov} and Definition \ref{Def-PNM-1} that
there exists a non-negative regular finite Borel measure $\Gamma$ on $\mathbb{H}_I$ such that
\begin{equation}\label{Eq-Cor-Spec-cov-1}
c_X(t)=\int_{\mathbb{R}^+}\cos(ty)d\mu_1(y)+\int_{\mathbb{R}^+}\sin(ty)d\mu_2(y),
\end{equation}
where $\mu_1=\rho_*\Gamma$ and $\displaystyle{\mu_2=\rho_*\Big(\frac{x}{|x|}\Gamma\Big)}$.
Direct calculations yield:
\begin{equation*}
\begin{split}
\int_{\mathbb{R}^+}\cos(ty)d\mu_1(y)=& \int_{\mathbb{H}_I}\cos(t|x|)d\Gamma(x), \\
\int_{\mathbb{R}^+}\sin(ty)d\mu_2(y)=& \int_{\mathbb{H}_I}\sin(t|x|)\frac{x}{|x|}d\Gamma(x).
\end{split}
\end{equation*}
Then we substitute these two equalities into \eqref{Eq-Cor-Spec-cov-1} to obtain
\begin{equation*}
\begin{split}
c_X(t)=& \int_{\mathbb{H}_I}\Big(\cos(t|x|)+\sin(t|x|)\frac{x}{|x|}\Big)d\Gamma(x)\\
=& \int_{\mathbb{H}_I}e^{tx}d\Gamma(x).
\end{split}
\end{equation*}
\end{proof}
\begin{remark}
The spectral theorems for auto-covariance functions $($Theorem \ref{Thm-Spec-cov} and Corollary \ref{Cor-Spec-cov}$)$ are also valid for the class of wide-sense stationary quaternion random signals introduced by C. C. Took and D. P. Mandic \cite{Took-2011}, because wide-sense stationarity is stronger than weak stationarity.
\end{remark}
\section{Final remark}\label{Sec-Final}
What we achieve in this paper:
By the generalized Stone theorems, we reveal some vital spectral characteristics of quaternionic positive definite functions on the real line. We also give an application to weakly stationary quaternionic random processes.
What we want to achieve in the future:
Our work may shed some light on the general theory of quaternionic positive definite functions on topological groups. Based on what is already known about the simplest case, we can make some reasonable speculations about the general case. One particular speculation is as follows:
Assume $G$ is a locally compact abelian group with Pontryagin dual $\hat{G}$. Then for any continuous quaternionic positive definite function $\varphi$ on $G$, there exists a (not necessarily unique) non-negative finite regular Borel measure $\Gamma$ on $\hat{G}\times\mathbb{S}$ such that
$$\varphi(x)=\int_{\hat{G}\times\mathbb{S}}\Big(\mathbf{Re}(\xi(x))+s\,\mathbf{Im}(\xi(x))\Big)\,d\Gamma(\xi,s). $$
Here $\mathbb S$ denotes the set of quaternionic imaginary units, and $\mathbf{Re}$ and $\mathbf{Im}$ denote the real part and the imaginary part, respectively.
This speculation, along with others, will be discussed in our future work.
\bigskip
{\bf References}
\bibliographystyle{amsplain}
\section{Introduction}
The topological type of a normal surface singularity is determined by its resolution graph (\cite{neumann.plumbing}).
For a given resolution graph of a normal surface singularity, there are various types of complex structures which realize it.
We are interested in finding the upper (resp. lower) bound of basic invariants (e.g., the geometric genus), and in understanding the complex structures which attain their maximum (resp. minimum).
Let $\V$ be a normal complex surface singularity with minimal good resolution $X\to V$ and let $\Gamma$ be the resolution graph of $\V$.
As noticed above, the topological invariants of $\V$ are precisely the invariants of $\Gamma$.
In this paper, we consider the geometric genus $p_g\V=\dim H^1(\cO_X)$ and the maximal ideal cycle $M_X$ on $X$.
In general, these invariants cannot be determined by $\Gamma$ and it is difficult to compute them.
By the definition (\defref{d:cycles}), the fundamental cycle $Z_X$ on $X$ is determined by $\Gamma$ and the inequality $M_X\ge Z_X$ holds.
The fundamental problem we wish to explore is the following.
\begin{prob}
Let $p_g(\Gamma)$ denote the maximum of the geometric genus over the normal surface singularities with resolution graph $\Gamma$.
\begin{enumerate}
\item Find the value $p_g(\Gamma)$ and conditions for $M_X=Z_X$.
\item Describe the properties and invariants of a singularity $\V$ with $p_g\V=p_g(\Gamma)$ or $M_X=Z_X$.
\end{enumerate}
\end{prob}
It is known that in a complex analytic family of the resolution space $X$ preserving $\Gamma$ (cf. \cite{la.lift}), the dimension of the cohomology of the structure sheaf is upper semicontinuous.
So, we expect the singularities $\V$ with $p_g\V=p_g(\Gamma)$ may have some kind of nice structure.
The equality $M_X=Z_X$ holds for rational singularities (\cite{artin.rat}), minimally elliptic singularities (\cite{la.me}), and hypersurfaces $z^n=f(x,y)$ with certain conditions (\cite{dixon}, \cite{tomaru-Kodaira}).
We have an explicit condition for the equality $M_X=Z_X$ for Brieskorn complete intersection singularities (\cite{K-N}, \cite{MO}); the result is generalized to Kummer coverings over weighted homogeneous normal surface singularities in \cite{TT}.
The upper bound of $p_g$ has also been studied by several authors (e.g., \cite{yau.max}, \cite{tomari.ell}, \cite{tomari.max}, \cite{nem.lattice}, \cite{N-Sig}); the ``rational trees'' $\Gamma$ for which $p_g(\Gamma)$ can be obtained from $\Gamma$ are listed in \cite[1.7]{no-2cusp}.
In \exref{e:mpg} of the present paper, we shall introduce the weighted homogeneous singularities of {\em hyperelliptic type} for which $p_g(\Gamma)$ is easily computed.
Since $p_g\V=\dim H^0(\cO_X)/H^0(\cO_X(-Z_{K_X}))$ for a numerically Gorenstein singularity,
where $Z_{K_X}$ is the canonical cycle (\defref{d:cycles}), it might be natural to expect that there is a correlation between the properties $p_g\V=p_g(\Gamma)$ and $M_X=Z_X$.
In fact, when $\V$ is a numerically Gorenstein elliptic singularity (this is characterized by $\Gamma$), we have that $p_g\V=p_g(\Gamma)$ if and only if $\V$ is a Gorenstein singularity with $M_X=Z_X$ (\cite[5.10]{o.numGell}, \cite{yau.max}, \cite{nem.ellip}); in this case, $p_g(\Gamma)$ coincides with the length of the elliptic sequence.
However, in \cite{no-2cusp}, we found an example such that the equality $p_g=p_g(\Gamma)$ is realized by both a Gorenstein singularity with $M_X>Z_X$ and a non-Gorenstein singularity with $M_X=Z_X$.
In \sref{s:exBCI}, we give an example which shows that the condition $M_X=Z_X$ cannot control $p_g$.
In this paper, we study normal surface singularities homeomorphic to Brieskorn complete intersection singularities from the perspective of our problem above.
First suppose that $V$ is a complete intersection given as follows:
\[
V=\defset{(x_i)\in \C^m}
{q_{i1}x_1^{a_1}+\cdots +q_{im}x_{m}^{a_{m}} =0,
\quad i=3,\dots , m}
\quad (q_{ij}\in \C).
\]
The resolution graph of the singularity $\V$ is determined by the integers $a_1, \dots, a_m$ (\thmref{t:BCImain}).
We denote it by $\Gamma(a_1, \dots, a_m)$.
Using the Pinkham-Demazure divisor $D$ on the central curve $E_0$ of the exceptional set $E\subset X$, the homogeneous coordinate ring $R$ of $V$ is represented as $R=\bigoplus_{k\ge 0}H^0(\cO_{E_0}(D_k))T^k$ (see \sref{ss:WH}).
We study arithmetic properties of the numerical invariants arising from the topological type in terms of the divisors $D_k$ on $E_0$.
For this purpose, we employ the monomial cycles (cf. \cite{o.pg-splice}) to connect the numerical information of the divisors $D_k$ and the complex analytic functions on $X$; note that monomial cycles play an important role in the study of invariants of splice quotients (\cite{o.pg-splice}, \cite{nem.coh-sq}).
For example, we show that $H^0(\cO_{E_0}(D_k))\ne 0$ if and only if $\deg D_k$ is a member of a certain semigroup, and that $D_k\sim D_{k'}$ if and only if $\deg D_k = \deg D_{k'}$ (see \proref{p:monomials}, \thmref{t:ehg}).
Applying these results, we obtain the following (see \thmref{t:g1p_gmax}, \thmref{t:g=1MZ}).
\begin{thm}\label{t:simple}
If $\V$ is a Brieskorn complete intersection such that the central curve $E_0$ is a rational or an elliptic curve, then $p_g\V=p_g(\Gamma)$ and $M_X=Z_X$.
\end{thm}
Even if the singularity is not a Brieskorn complete intersection, we can apply a part of the argument on the divisors $D_k$ and prove the following (\thmref{t:Z=M}).
\begin{thm}\label{t:int-exist}
There exists a weighted homogeneous singularity with resolution graph $\Gamma(a_1, \dots, a_m)$ such that
the maximal ideal cycle coincides with the fundamental cycle on the minimal good resolution.
\end{thm}
We shall describe the property of the Pinkham-Demazure divisor corresponding to the singularity in \thmref{t:int-exist}.
If the central curve $E_0$ has genus $g\ge 2$, we cannot expect a result similar to \thmref{t:simple}.
In fact, there may be various types of complex structures even when $g=2$.
To show this, in \sref{s:exBCI}, we fix a resolution graph $\Gamma=\Gamma(2,3,3,4)$, which is the simplest one in a sense, and investigate the singularities having this graph.
Any Brieskorn complete intersection singularity with this graph satisfies neither $p_g\V=p_g(\Gamma)$ nor $M_X=Z_X$.
Assume that $\V$ is a weighted homogeneous surface singularity with resolution graph $\Gamma$.
We prove that $\V$ satisfies $p_g\V=p_g(\Gamma)$ if and only if it is hyperelliptic type, and show that such a singularity is a complete intersection, which is a double cover of a rational double point of type $A_1$.
For the geometric genus, the multiplicity, and the embedding dimension of these singularities, see Table \ref{tab:special}, where the rightmost column indicates the subsections which include the details.
\begin{table}[htb]
\renewcommand{\arraystretch}{1.2}
\[
\begin{array}{ccccl}
\hline\hline
\text{type} & p_g & \mult & \emb & \text{Section} \\
\hline
\text{Brieskorn CI} & 8 & 6 & 4 & \text{\sref{ss:BCI2334}} \\
\hline
\text{maximal $p_g$} & 10 & 4 & 4 & \text{\sref{ss:maxpg}} \\
\hline\hline
\end{array}
\]
\caption{\label{tab:special}
Special types}
\end{table}
Next, in \sref{ss:M=Z}, we give a complete classification of the weighted homogeneous normal surface singularities $\V$ with resolution graph $\Gamma=\Gamma(2,3,3,4)$ such that $M_X=Z_X$.
We can see the fundamental invariants of those singularities in Table \ref{tab:M=Z}.
For each class, we prove the existence of the singularities by showing the explicit description of the Pinkham-Demazure divisor (cf. \sref{ss:M=Z}).
\begin{table}[h]
\renewcommand{\arraystretch}{1.2}
\[
\begin{array}{ccccl}
\hline\hline
p_g & \mult & \emb & \text{ring} & \text{Proposition} \\
\hline
8 & 3 & 4 & \ \ \text{non Gorenstein} \ \ & \ref{p:h3=1} \\
8 & 4 & 4 & \text{non Gorenstein} & \ref{p:D4=2} (1) \\
7 & 4 & 5 & \text{non Gorenstein} & \ref{p:D4=2} (2) \\
8 & 5 & 5 & \text{Gorenstein} & \ref{p:011} (1) \\
7 & 5 & 5 & \text{non Gorenstein} & \ref{p:011} (2) \\
6 & 6 & 7 & \text{non Gorenstein} & \ref{p:0101} \\
\hline\hline
\end{array}
\]
\caption{\label{tab:M=Z}
Singularities with $M_X=Z_X$}
\end{table}
Note that for any two singularities in Table \ref{tab:M=Z}, they have the same thick-thin decomposition if and only if they have the same multiplicity; see \cite{thick-thin} and the proof of \proref{p:M=Z2334} (2).
This paper is organized as follows.
In \sref{s:Pre}, we review basic facts on weighted homogeneous surface singularities and introduce the singularity of hyperelliptic type.
In \sref{s:BCI}, first we summarize the results on Brieskorn complete intersection surface singularities, and prove \thmref{t:simple} and \thmref{t:int-exist}.
In \sref{s:exBCI}, we study weighted homogeneous singularities with resolution graph $\Gamma=\Gamma(2,3,3,4)$ such that $p_g=p_g(\Gamma)$ and those with $M_X=Z_X$.
\begin{acknowledgement}
The author would like to thank the referee for reading the paper carefully and providing several thoughtful comments which helped improving the paper, especially, \lemref{l:Pi} and \proref{p:maxpg3}.
\end{acknowledgement}
\section{Preliminaries}\label{s:Pre}
Let $(V,o)$ be a normal complex surface singularity, namely, the germ of a normal complex surface $V$ at $o\in V$.
We denote by $\m$ the maximal ideal of the local ring $\cO_{V,o}$.
Let $\pi\: X \to V$ denote the minimal good resolution of the singularity $(V,o)$ with exceptional set $E= \pi^{-1}(o)$,
and let $\{E_i\}_{i\in \cal I}$ denote the set of irreducible components of $E$.
We denote by $\Gamma$ the {\em resolution graph} of $\V$, namely, the weighted dual graph of $E$.
A divisor on $X$ supported in $E$ is called a {\em cycle}.
We denote the group of cycles by $\Z E$.
An element of $\Q E:=\Z E\otimes \Q$ is called a {\em $\Q$-cycle}.
Since the intersection matrix
$(E_i E_j)$ is negative definite, for every $j\in
\cal I$ there exists an effective $\Q$-cycle $E_j^*$ such that $E_j^* E_i=-\delta_{ji}$,
where $\delta_{ji}$ denotes the Kronecker delta.
Let $\Z E^*\subset \Q E$ denote the subgroup generated by $\{E_i^*\}_{i\in \cal I}$.
For any $\Q$-divisor $F=\sum c_iF_i$ with distinct irreducible components $F_i$, we denote by $\cf_{F_i}(F)$ the coefficient of $F_i$ in $F$, i.e., $\cf_{F_i}(F)=c_i$.
For a function $h\in H^0(\cO_X)\setminus \{0\}$, we denote by $(h)_E\in \Z E$ the {\em exceptional part} of the divisor $\di_X(h)$;
this means that $\di_X(h)-(h)_E$ is an effective divisor containing no components of $E$.
We call $\di_X(h)-(h)_E$ the {\em non-exceptional part} of $\di_X(h)$.
We simply write $(h)_E$ instead of $(h\circ \pi)_E$ for $h\in \m\setminus\{0\}$.
A $\Q$-cycle $D$ is said to be {\em nef} (resp. {\em anti-nef}) if $DE_i\ge 0$ (resp. $DE_i\le 0$) for all $i\in \cal I$.
Note that if a cycle $D\ne 0$ is anti-nef, then $D\ge E$.
\begin{defn}\label{d:cycles}
The {\em fundamental cycle} is by definition the smallest non-zero anti-nef cycle and denoted by $Z_X$.
The {\em maximal ideal cycle} on $X$ is the minimum of $\defset{(h)_E}{h \in \m\setminus\{0\}}$ and denoted by $M_X$.
Clearly, $Z_X\le M_X$.
There exists a $\Q$-cycle $Z_{K_X}$ such that $(K_X+Z_{K_X})E_i=0$ for every $i\in \cI$, where $K_X$ is a canonical divisor on $X$.
We call $Z_{K_X}$ the {\em canonical cycle} on $X$.
\end{defn}
\subsection{Cyclic quotient singularities}\label{ss:cyc}
Let $n$ and $\mu$ be positive integers with $\mu<n$
and $\gcd(n,\mu)=1$.
Let $\epsilon_{n}\in \C$ denote the primitive $n$-th root of unity
and let $G$ denote the cyclic group $\left\langle \begin{pmatrix}
\epsilon_{n} & 0 \\ 0 & \epsilon_{n}^{\mu}
\end{pmatrix} \right \rangle \subset GL(2,\C)$.
Suppose that
$V=\C^2/G$. Then $\V$ is called the cyclic quotient singularity of type $C_{n,\mu}$.
For integers $c_i\ge 2$, $i=1, \dots, r$, we put
\[
[[c_{1}, \dots ,c_{r} ]]:=c_{1}-
\cfrac{1}{c_{2}- \cfrac{1}{\ddots -\cfrac{1}
{c_{r}}}}
\]
If $n/\mu=[[c_{1}, \dots ,c_{r} ]]$, the resolution graph $\Gamma$
is a chain as in \figref{fig:HJ}, where all components $E_i$ are rational.
\begin{figure}[htb]
$
\xy
(15,0)*+{-c_{1}}*\cir<10pt>{}="B";
(55,0)*+{-c_{r}} *\cir<10pt>{}="C";
(30,0)*+{\cdot};
(35,0)*+{\cdot};
(40,0)*+{\cdot};
"B" *++!D(-2.0){E_1};
"C" *++!D(-2.0){E_r};
\ar @{-}"B";(25,0)
\ar @{-} (45,0); "C"
\endxy
$
\caption{The resolution graph of $C_{n,\mu}$ \label{fig:HJ}}
\end{figure}
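For instance, since $7/3=[[3,2,2]]$, the cyclic quotient singularity of type $C_{7,3}$ has resolution graph a chain of three rational curves with self-intersection numbers $-3$, $-2$, $-2$.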
It is known that the local class group $\Cl\V$
is isomorphic to the finite abelian group
\[
\Z E^* /\Z E =\gen{[E_1^*]}=\gen{[E_r^*]}
\]
of order $n$, where $[E_i^*]=E_i^*+\Z E$ (cf. \cite[II (a)]{mum.top}, \cite[III. 5]{CCS}).
Suppose that $E_0$ is a prime divisor on $X$ such that $E_0E_i=\delta_{1 i}$ for $1\le i \le r$; so $E_0+E_1+\cdots+E_r$ looks like a chain of curves.
For any positive integer $m_0$, let
\[
\cal L(m_0 )=\defset{m_0 E_0+\sum_{i=1}^r m_iE_i}{m_1, \dots, m_r\in \Z_{> 0}}.
\]
Then we define a set $\cal D(m_0)$ as follows:
\[
\cal D(m_0 ):=\defset{D\in \cal L(m_0)}{DE_i\le 0, \; i=1,\dots, r}.
\]
It is easy to see that
$\cal D(m_0 )$ is not empty and has a unique smallest element.
Let $\Ce{x}$ denote the ceiling of a real number $x$.
\begin{lem}\label{l:La}
Let $D\in \cal D(m_0 )$.
We have the following:
\begin{enumerate}
\item There exists an effective cycle $F$ such that $(D+F)E_i= 0$ for $1\le i<r$ and $\supp(F)\subset \bigcup_{i>1}E_i$.
\item If $DE_i= 0$ for $1\le i<r$ and $DE_r\ge -1$, then $D$ is the smallest element of $\cal D(m_0 )$.
\item Assume that $D, D'\in \cal D(m_0 )$
and $DE_i=D'E_i$ for $1\le i<r$.
If $D>D'$, then $\cf_{E_1}(D)>\cf_{E_1}(D')$.
\item Assume that $D$ and $D'$ are the smallest elements of $\cal D(m_0)$ and $\cal D(m_0')$, respectively,
and that $D'E_i=0$ for $1\le i \le r$.
Then $D+D'$ is the smallest element of $\cal D(m_0+m_0')$.
\end{enumerate}
\end{lem}
\begin{proof}
We write as $D=\sum_{i=0}^r m_iE_i$ and
$D'=\sum_{i=0}^r m_i'E_i$.
(1) For any $1\le k < r$, there exists a cycle $F'$ supported on $E_{k+1}+\cdots+E_r$ such that
\[
\cf_{E_{k+1}}(F')=1,\ \ F'E_{k+1}=\cdots=F'E_{r-1}=0, \quad F'E_r < 0
\]
(cf. \cite[III.5]{CCS}).
If $a:=DE_k<0$, then $D-aF'\in \cal D(m_0)$ and $(D-aF')E_{k}=0$. By repeating this process,
we obtain the assertion.
(2) It follows from \cite[Lemma 2.2]{la.TanCon} (cf. \cite[2.1]{MO}).
(3) If $m_1=m_1'$, we can take $1\le k<r$ so that $m_i=m_i'$ for $i\le k$ and $m_{k+1}>m_{k+1}'$. Then $(D-D')E_k=m_{k+1}-m_{k+1}'>0$; it contradicts that $DE_k=D'E_k$.
(4) Let $d_i=[[c_i, \dots, c_r]]$.
By \cite[Lemma 1.1]{K-N}, the minimality of $D$ is characterized by the condition that $m_i=\Ce{m_{i-1}/d_i}$ for $1\le i \le r$.
By the assumption, it follows from Lemma 1.2 (1) and (2) of \cite{K-N} that
$m_i'=m_{i-1}'/d_i$.
Hence we have $m_i+m_i'=\Ce{m_{i-1}/d_i}+m_{i-1}'/d_i=\Ce{(m_{i-1}+m_{i-1}')/d_i}$.
\end{proof}
\subsection{Weighted homogeneous surface singularities}
\label{ss:WH}
Let us recall some fundamental facts on weighted homogeneous surface singularities (cf. \cite{p.qh}).
Assume that $\V$ is a weighted homogeneous singularity.
Then the resolution graph $\Gamma$ of $\V$ is a star-shaped graph as in \figref{fig:star}, where the $E_{i,j}$ are rational curves, $g$ is the genus of the curve $E_0$, and $-c_{i,j}$ and $-c_{0}$ are the self-intersection numbers.
The component $E_0$ is called the {\em central curve}.
\begin{figure}[htb]
\begin{center}
$
\xy
(-9,0)*+{E_0}; (0,0)*+{-c_0}*\cir<10pt>{}="E"*++!D(-2.0){[g]};
(20,12)*+{-c_{1,1} }*\cir<14pt>{}="B_1"*++!D(-2.0){E_{1,1}};
(65,12)*+{-c_{1,s_1} }*\cir<16pt>{}="B_3"*++!D(-2.0){E_{1,s_1}};
(20,-12)*+{-c_{m,1} }*\cir<14pt>{}="D_1"*++!D(-2.0){E_{m,1}};
(65,-12)*+{-c_{m,s_{m}} }*\cir<16pt>{}="D_3"*++!D(-2.0){E_{m,s_{m}}};
(37.5,12)*{\cdot },(42.5,12)*{\cdot },(47.5,12)*{\cdot },
(37.5,-12)*{\cdot},(42.5,-12)*{\cdot},(47.5,-12)*{\cdot},
(55.5,2.5)*{\cdot},(55.5,-1)*{\cdot},(55.5,-4.5)*{\cdot},
\ar @{-} "E" ;"B_1"
\ar @{-} "E" ;"D_1"
\ar @{-}"B_1";(35,12) \ar @{-} (50,12);"B_3"
\ar @{-} "D_1";(35,-12) \ar @{-} (50,-12);"D_3"
\endxy
$
\end{center}
\caption{\label{fig:star} A star-shaped resolution graph}
\end{figure}
For $1\le i \le m$, we define positive integers $\alpha_i$ and $\beta_i$ with $\gcd(\alpha_i, \beta_i)=1$ by $\alpha_i/\beta_i=[[c_{i,1}, \dots ,c_{i,s_i} ]]$.
The data
\[
(g, c_0, (\alpha_1, \beta_1), \dots, (\alpha_m,\beta_m))
\]
is called the {\em Seifert invariant}.
Note that the graph $\Gamma$ can be recovered from the Seifert invariant.
Let $P_i\in E_0$ denote the point $E_0\cap E_{i,1}$ and $Q$ a divisor on $E_0$ such that $\cO_{E_0}(-E_0)\cong \cO_{E_0}(Q)$.
We define a $\Q$-divisor $D$ and divisors $D_k$ ($k\in \Z_{\ge 0}$) on $E_0$ as follows:
\[
D:=Q-\sum_{i=1}^m\frac{\beta_i}{\alpha_i}P_i, \qquad
D_k:=kQ-\sum_{i=1}^m \Ce{\frac{k\beta_i}{\alpha_i}} P_i.
\]
We call $D$ the {\em Pinkham-Demazure divisor}.
It is known that $\deg D >0$.
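Concretely, since $\deg Q=-E_0^2=c_0$, we have
$$\deg D=c_0-\sum_{i=1}^m\frac{\beta_i}{\alpha_i},\qquad \deg D_k=kc_0-\sum_{i=1}^m\Ce{\frac{k\beta_i}{\alpha_i}},$$
so these degrees can be read off directly from the Seifert invariant.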
For any divisor $F$ on $E_0$, we write as
\[
H^i(F)=H^i(\cO_{E_0}(F)), \quad h^i(F)=\dim_{\C}H^i(F).
\]
Let $R:=R\V$ denote the homogeneous coordinate ring of the singularity $(V,o)$.
Then we have the expression $R=\bigoplus_{k\ge 0}H^0(D_k)T^k\subset \C(E_0)[T]$, where $\C(E_0)$ is the field of rational functions on $E_0$ and $T$ an indeterminate (cf. \cite{p.qh}, \cite{tki-w}).
We have the following.
\begin{thm}[Pinkham \cite{p.qh}]
\label{t:Pin}
$p_g\V=\sum_{k\ge 0}h^1(D_k)$.
\end{thm}
Let $H(V,t)$ denote the Hilbert series of the graded ring $R$, i.e., $H(V,t)=\sum_{k\ge 0}h^0(D_k)t^k$.
\begin{prop}\label{p:Hpg}
We have the following.
\begin{enumerate}
\item If we write as $H(V,t)=p(t)/q(t)+r(t)$, where $p,\, q, \, r\in\C[t]$ and $\deg p<\deg q$, then $p_g\V=r(1)$.
\item Let $(V_1, o_1)$ and $(V_2,o_2)$ be weighted homogeneous singularities with the same resolution graph. Then $p_g(V_1,o_1)-p_g(V_2,o_2)=(H(V_1,t)-H(V_2,t))|_{t=1}$.
\end{enumerate}
\end{prop}
\begin{proof}
(1) follows from \cite[3.1.3]{no-edwh}.
(2)
It follows from \thmref{t:Pin} and the Riemann-Roch theorem $h^0(D_n)-h^1(D_n)= \deg D_n+1-g$ (the right-hand side is determined by $\Gamma$).
\end{proof}
The next theorem follows from \cite[2.9]{KeiWat-D}.
\begin{thm}\label{t:WatD}
Let $D'=\sum ((\alpha_i-1)/\alpha_i)P_i$.
Then $R$ is Gorenstein if and only if there exists an integer $a$ such that $K_C\sim aD-D'$; the integer $a$ coincides with the $a$-invariant $a(R)$ of Goto--Watanabe (\cite{G-W}).
\end{thm}
\subsection{Surface singularities with star-shaped graph}
\label{ss:star}
First, we briefly review some important facts in \cite[\S 6]{tki-w}.
Assume that $\V$ is a normal surface singularity with star-shaped resolution graph $\Gamma$ as in \figref{fig:star}.
Even if $\V$ is not weighted homogeneous, in the same manner as in \sref{ss:WH}, we obtain the Pinkham-Demazure divisor
\[
D=Q-\sum_{i=1}^m\frac{\beta_i}{\alpha_i}P_i
\]
on the central curve $E_0\subset E$ on the minimal good resolution $X$.
We define the graded ring $R$ by
\[
R=R(E_0, D):=\bigoplus_{k\ge 0}H^0(D_k)T^k\subset \C(E_0)[T].
\]
Let $\bV=\spec R$ and $o\in \bV$ the point defined by the maximal ideal $\bigoplus_{k\ge 1}H^0(D_k)T^k$.
Then $(\bV,o)$ is a weighted homogeneous normal surface singularity with resolution graph $\Gamma$.
\begin{thm}[Tomari-Watanabe {\cite[\S 6]{tki-w}}]\label{t:TW}
For every $n\in\Z_{\ge 0}$, there exists the minimal cycle $L_n\in \Z E$ such that $L_n$ is anti-nef on $E-E_0$ and $\cf_{E_0}(L_n)=n$.\footnote{Our symbol $L_n$ is equal to $-L_{-n}$ in \cite[\S 6]{tki-w}.}
Then we have a natural isomorphism $\cO_{E_0}(-L_n)\cong \cO_{E_0}(D_n)$ for $n\in\Z_{\ge 0}$; in fact,
\[
\sum_{i=1}^m \Ce{\frac{n\beta_i}{\alpha_i}} P_i=(L_n-nE_0)|_{E_0}.
\]
In general, we have $p_g\V \le p_g(\bV,o)$.
If the equality $p_g\V =p_g(\bV,o)$ holds, the following sequence is exact for $n\ge 0$:
\[
0\to H^0(\cO_X(-L_n-E_0)) \to H^0(\cO_X(-L_n)) \to H^0(\cO_{E_0}(D_n)) \to 0.
\]
\end{thm}
\begin{rem} \label{r:E0coeff}
From the definitions of $Z_X$ and $M_X$,
we have the following:
\begin{align*}
\cf_{E_0}(Z_X)& =\min\defset{m\in \Z_{>0}}{\deg D_m\ge 0}, \\
\cf_{E_0}(M_X)&=\min\defset{m\in \Z_{>0}}{H^0(D_m)\ne 0}.
\end{align*}
Clearly, $z_0:=\cf_{E_0}(Z_X) \le m_0:=\cf_{E_0}(M_X)$.
One of fundamental problems is to find a characterization for the equality $z_0=m_0$.
We have $Z_X=L_{z_0}$ by the definition of the cycles $L_n$.
It might be natural to ask whether the condition $m_0=z_0$ implies the equality $M_X=Z_X$.
For Brieskorn complete intersection singularities, we have a criterion for $z_0=m_0$ and we always have $M_X=L_{m_0}$ (see \cite{K-N}, \cite{MO}).
However, in general, this is not true even for weighted homogeneous singularities (see \cite{TT}).
We will see later (\proref{p:maxpg3}) an example of a weighted homogeneous singularity homeomorphic to a Brieskorn complete intersection singularity which does not satisfy $M_X=L_{m_0}$ though $z_0=m_0$ and has the ``maximal geometric genus'' in the following sense.
\end{rem}
\begin{defn}\label{d:pgG}
Let $\cal X(\Gamma)$ denote the set of normal surface singularities with resolution graph $\Gamma$ and let
\[
p_g(\Gamma):=\max \defset{p_g(W,o)}{(W,o)\in \cal X(\Gamma)}.
\]
\end{defn}
Obviously, $p_g(\Gamma)$ is an invariant of $\Gamma$.
From \thmref{t:TW}, $p_g(\Gamma)$
is attained by a weighted homogeneous singularity.
However, the inequality $p_g(\bV,o)<p_g(\Gamma)$ may happen in general, namely, $p_g(\bV,o)$ is not topological, even if $\Gamma$ is a resolution graph of a Brieskorn complete intersection singularity (see \sref{s:exBCI}).
Let $\Fl{x}$ denote the floor (or, integer part) of a real number $x$.
\begin{ex}\label{e:mpg}
Assume that a resolution graph $\Gamma$ has the Seifert invariant
\[
(g, c_0, k_1(\alpha_1, \beta_1), \dots, k_m(\alpha_m,\beta_m)),
\]
where $k_i(\alpha_i,\beta_i)$ means that $(\alpha_i,\beta_i)$ is repeated $k_i$ times, and $(\alpha_i,\beta_i)\ne (\alpha_j,\beta_j)$ for $i\ne j$.
Moreover, assume that $k_2, \dots, k_m\in 2\Z$; in this case, we call $\Gamma$ a {\em hyperelliptic type}.
Let $C$ be a hyperelliptic or elliptic curve of genus $g$ and let $\cal R(C)$ be the set of ramification points of the double cover $C\to \P^1$ with
involution $\sigma\: C\to C$.
Let $P\in \cal R(C)$ and $Q=c_0P$. Take $P_{i,j}\in C\setminus\cal R(C)$ ($1\le i \le m$, $1\le j \le \Fl{k_i/2}$) so that $P_{1,1}, \sigma(P_{1,1}), \dots, P_{m, \Fl{k_m/2}}, \sigma(P_{m, \Fl{k_m/2}})$ are different from each other.
Let $Q_{i,j}=P_{i,j}+\sigma(P_{i,j})$.
Then we define the Pinkham-Demazure divisor $D$ on $C$ by
\[
D=\begin{cases}
\dis Q-\sum_{i=1}^m \frac{\beta_i}{\alpha_i} \sum_{j=1}^{k_i/2}Q_{i,j}
& \text{ if } k_1 \in 2\Z, \\
\dis Q-\frac{\beta_1}{\alpha_1}P
-\frac{\beta_1}{\alpha_1}\sum_{j=1}^{(k_1-1)/2}Q_{1,j}-\sum_{i=2}^m \frac{\beta_i}{\alpha_i} \sum_{j=1}^{k_i/2}Q_{i,j} & \text{ if } k_1 \not\in 2\Z.
\end{cases}
\]
Since $Q_{i,j}\sim 2P$, we have $D_n\sim (\deg D_n)P$.
Let $R=\bigoplus_{k\ge 0}H^0(D_k)T^k$ and $\bV=\spec R$.
We say that the weighted homogeneous normal surface singularity $(\bV,o)$ is a {\em hyperelliptic type}, too.
Then the singularity $(\bV,o)$ has the resolution graph $\Gamma$ and
$p_g(\bV,o)=p_g(\Gamma)$, because it follows from Clifford's theorem that $h^1(D_n)$ is the maximum of $h^1(D'_n)$, where $C'$ is any nonsingular curve of genus $g$ and
$D'$ is any Pinkham-Demazure divisor on $C'$ corresponding to the resolution graph $\Gamma$.
\end{ex}
The following problems are open even for Brieskorn complete intersections.
\begin{prob}
Give an explicit way to compute $p_g(\Gamma)$ from $\Gamma$.
\end{prob}
\begin{prob}
Classify complex structures which attain $p_g(\Gamma)$.
Is $E_0$ always hyperelliptic if $p_g\V=p_g(\Gamma)$?
\end{prob}
\begin{prob}
How can we generalize the notion of ``hyperelliptic type'' to non-star-shaped cases?
\end{prob}
\section{Brieskorn complete intersection singularities}
\label{s:BCI}
In this section, we review some basic facts on the Brieskorn complete intersection (BCI for short) surface singularities and study arithmetic properties of invariants of those singularities.
Then we show that a BCI singularity with $g\le 1$ always has the maximal geometric genus and its maximal ideal cycle coincides with the fundamental cycle on the minimal good resolution.
We basically use the notation of \sref{s:Pre}.
Recall that $\pi\: X\to V$ denotes the minimal good resolution of a normal surface singularity $\V$ with exceptional set $E$.
\subsection{The cycles and the Seifert invariants}\label{ss:BCISf}
We summarize the results in \cite{MO} which will be used in this section; those are a natural extension of the results on the hypersurface case obtained by Konno and Nagashima \cite{K-N}.
We assume that $\V$ is a BCI normal surface singularity, namely,
$V\subset \C^m$ can be defined as
\begin{equation}\label{eq:BCIeq}
V=\defset{(x_i)\in \C^m}
{q_{i1}x_1^{a_1}+\cdots +q_{im}x_{m}^{a_{m}} =0,
\quad i=3,\dots , m},
\end{equation}
where $a_i$ are integers such that $2\le a_1\le \dots \le a_m$
and $q_{ij}\in \C$.
We define positive integers
$\ell$, $\ell_i$, $\alpha$, $\alpha_i$, $\beta_i$,
$\hat g$, $\hat g_i$, and $e_i$ as follows:\footnote{Using the notation of \cite[\S 3]{MO}, we have $\ell=d_m$, $\ell_i=d_{im}$, $\alpha_i=n_{im}$, $\beta_i=\mu_{im}$, $e_i=e_{im}$. }
\begin{gather*}
\ell:=\lcm\{a_1, \dots, a_m\}, \ \
\ell_i:=\lcm(\{a_1, \dots, a_m\}\setminus\{a_i\}), \\
\alpha_i:=\ell/\ell_i, \ \
\alpha:=\alpha_1\cdots \alpha_m, \ \
\hat g:=a_1\cdots a_{m}/\ell, \ \
\hat g_i:=\hat g \alpha_i/a_i, \ \
e_i:=\ell/a_i, \\
e_i\beta_i+1 \equiv 0 \pmod{\alpha_i} \; \text{ and } \; 0\le \beta_i<\alpha_i.
\end{gather*}
We easily see that the polynomials appearing in \eqref{eq:BCIeq} are weighted homogeneous polynomials of degree $\ell$ with respect to the weights $(e_1, \dots, e_m)$ and that $\gcd(\alpha_i ,\alpha_j)=1$ for $i\ne j$.
\begin{defn}
Let $Z^{(i)}=(x_i)_E$, the exceptional part of the divisor $\di_X(x_i)$.
\end{defn}
The next result follows from Theorems 4.4, 5.1, and 6.1 of \cite{MO}.
\begin{thm}\label{t:BCImain}
We have the following.
\begin{enumerate}
\item The resolution graph of $\V$ is as in \figref{fig:BCIG} ($s_i=0$ if $\alpha_i=1$), where
\[
E=E_0+\sum_{i=1}^{m}\sum_{\nu=1}^{s_{i}}
\sum_{\xi=1}^{\hat g_i}E_{i,\nu,\xi},
\]
and the Seifert invariant is given by the following:
\begin{gather*}
2g-2=(m-2)\hat g -\sum_{i=1}^{m}\hat g_i,
\\
c_0=\sum _{i=1}^{m}
\frac{\hat g_i\beta_i}{\alpha_i}
+\frac{a_1\cdots a_{m}}{\ell^2}, \ \
\beta_i/\alpha_i=
\begin{cases}
[[c_{i,1}, \dots, c_{i,s_i}]]^{-1} & \text{if} \ \ \alpha_i\ge 2 \\
0 & \text{if} \ \ \alpha_i=1.
\end{cases}
\end{gather*}
\item For $1\le i \le m$, we have
\[
\cf_{E_0}(Z^{(i)})=e_i=\deg(x_i), \quad
Z^{(i)}=\begin{cases}
\sum_{\xi=1}^{\hat g_i}E_{i,s_i,\xi}^*
& \text{if} \ \ \alpha_i\ge 2 \\
\hat g_i E_0^* & \text{if} \ \ \alpha_i=1.
\end{cases}
\]
Hence $Z^{(i)}=L_{e_i}$ for $1\le i \le m$,
and
$M_X=Z^{(m)}$ since $e_1 \ge \cdots \ge e_m$.
\item We have $\cf_{E_0}(Z_X)=\min\{e_m, \alpha\}$ (cf. \remref{r:E0coeff})
and
\[
Z_X=\begin{cases} M_X & \text{if} \ \ e_m\le \alpha \\
\deg (\alpha D) E_0^* & \text{if} \ \ e_m> \alpha.
\end{cases}
\]
In particular, $Z_X=M_X$ if and only if $e_{m}\le \alpha$.
\end{enumerate}
\end{thm}
\begin{figure}[htb]
\begin{center}
$
\xy
(-9,0)*+{E_0}; (0,0)*+{-c_0}*\cir<10pt>{}="E"*++!D(-2.0){[g]};
(20,31)*+{-c_{1,1} }*\cir<14pt>{}="A_1"*++!D(-2.0){E_{1,1,1}};
(40,31)*+{-c_{1,2} }*\cir<14pt>{}="A_2"*++!D(-2.0){E_{1,2,1}};
(85,31)*+{-c_{1,s_1} }*\cir<16pt>{}="A_3"*++!D(-2.0){E_{1,s_1,1}};
(20,14)*+{-c_{1,1} }*\cir<14pt>{}="B_1"*++!D(-2.0){E_{1,1,\hat g_1}}; (40,14)*+{-c_{1,2} }*\cir<14pt>{}="B_2"*++!D(-2.0){E_{1,2,\hat g_1}}; (85,14)*+{-c_{1,s_1} }*\cir<16pt>{}="B_3"*++!D(-2.0){E_{1,s_1,\hat g_1}};
(20,-14)*+{-c_{m,1} }*\cir<14pt>{}="D_1"*++!D(-2.0){E_{m,1,1}};
(40,-14)*+{-c_{m,2} }*\cir<14pt>{}="D_2"*++!D(-2.0){E_{m,2,1}};
(85,-14)*+{-c_{m,s_{m}} }*\cir<16pt>{}="D_3"*++!D(-2.0){E_{m,s_{m},1}};
(20,-31)*+{-c_{m,1} }*\cir<14pt>{}="E_1"*++!D(-2.0){E_{m,1,\hat g_{m}}};
(40,-31)*+{-c_{m,2} }*\cir<14pt>{}="E_2"*++!D(-2.0){E_{m,2,\hat g_{m}}};
(85,-31)*+{-c_{m,s_{m}} }*\cir<16pt>{}="E_3"*++!D(-2.0){E_{m,s_{m},\hat g_{m}}};
(57.5,31)*{\cdot },(62.5,31)*{\cdot },(67.5,31)*{\cdot },
(57.5,14)*{\cdot },(62.5,14)*{\cdot },(67.5,14)*{\cdot },
(57.5,-14)*{\cdot},(62.5,-14)*{\cdot},(67.5,-14)*{\cdot},
(57.5,-31)*{\cdot},(62.5,-31)*{\cdot},(67.5,-31)*{\cdot},
(62.5,25)*{\cdot},(62.5,22.5)*{\cdot},(62.5,20)*{\cdot},
(55.5,2.5)*{\cdot},(55.5,-1)*{\cdot},(55.5,-4.5)*{\cdot},
(62.5,-20)*{\cdot},(62.5,-22.5)*{\cdot},(62.5,-25)*{\cdot},
\ar @{-} "E" ;"A_1"
\ar @{-} "E" ;"B_1"
\ar @{-} "E" ;"D_1"
\ar @{-} "E" ;"E_1"
\ar @{-} "A_1"; "A_2"
\ar @{-} "A_2"; (55,31) \ar @{-} (70,31);"A_3"
\ar @{-} "B_1"; "B_2"
\ar @{-}"B_2";(55,14) \ar @{-} (70,14);"B_3"
\ar @/^2mm/@{-}^{\hat g_1} (92,31);(92,14)
\ar @{-} "D_1" ; "D_2"
\ar @{-} "D_2";(55,-14) \ar @{-} (70,-14);"D_3"
\ar @{-} "E_1" ; "E_2"
\ar @{-} "E_2";(55,-31) \ar @{-} (70,-31);"E_3"
\ar @/^2mm/@{-}^{\hat g_{m}} (92,-14);(92,-31)
\endxy
$
\end{center}
\caption{\label{fig:BCIG} The graph $\Gamma(a_1, \dots, a_m)$}
\end{figure}
\begin{defn}
We denote the weighted dual graph of \figref{fig:BCIG} by $\Gamma(a_1, \dots, a_m)$.
\end{defn}
\begin{rem}\label{r:ZiHi}
We describe more precisely the situation of \thmref{t:BCImain} (2).
Let $H_i:=\di_X(x_i)-Z^{(i)}$. Then we have the decomposition $H_i=\bigcup_{\xi=1}^{\hat g_i} H_{i,\xi}$ into irreducible components such that
\begin{itemize}
\item $H_{i,\xi} E=H_{i,\xi} E_{i,s_i,\xi}=1$ if $\alpha_i\ne 1$,
\item $H_{i,\xi} E=H_{i,\xi} E_{0}=1$ and $H_{i,\xi}\cap H_{i,\xi'}=\emptyset$ ($\xi\ne \xi'$) if $\alpha_i=1$.
\end{itemize}
In any case, $H_i\cap H_j=\emptyset$ for $i\ne j$.
For $1\le i \le m$, let
$\defset{P_{i\xi}}{\xi=1, \dots, \hg_i}\subset E_0$
denote the set of points determined by $x_i=0$ in the weighted projective space $\P(e_1, \dots, e_m)$.
Then
\[
\{P_{i\xi}\}=
\begin{cases}
E_0\cap E_{i,1,\xi} & \text{if $\alpha_i\ne 1$,} \\
E_0\cap H_{i,\xi} & \text{if $\alpha_i= 1$}.
\end{cases}
\]
\end{rem}
Let us recall that $\cO_{E_0}(-L_n)\cong \cO_{E_0}(D_n)$ (see \thmref{t:TW}) and $D_{\alpha}=\alpha D$.
\begin{lem}\label{l:E_0*}
We have the following.
\begin{enumerate}
\item For $n\in \Z_{> 0}$, $\alpha \mid n$ if and only if
$L_n=(\deg D_n)E_0^*$.
In particular, if $\deg D_{e_i}>0$, then $\alpha \mid e_i$.
\item If $d\in \Z_{>0}$ and $dE_0^*\in \Z E$, then $dE_0^*=L_n$, where $n=d\alpha/\deg D_{\alpha}$.
\end{enumerate}
\end{lem}
\begin{proof}
(1)
Let $\phi\: X\to X'$ be the blowing-down of the divisor $E-E_0$.
Then, at each point $\phi(P_{i\xi})\in X'$ ($1\le i\le m$, $1\le \xi\le \hat g _i$), the reduced divisor $\phi(E_0)$ is a $\Q$-Cartier divisor and the order of $[\phi(E_0)]\in \Cl(X', \phi(P_{i\xi}))$ is $\alpha_i$ (see \sref{ss:cyc}).
As in \cite[II (b)]{mum.top}, we have the pull-back $\phi^*\phi(E_0)$. Then $E_0^*=\cf_{E_0}(E_0^*)(\phi^*\phi(E_0))$.
Since $\alpha_i$'s are pairwise relatively prime,
$\alpha$ is the minimal positive integer such that $\alpha\phi(E_0)$ is a Cartier divisor on $X'$,
or equivalently, $\phi^*(\alpha\phi(E_0))\in \Z E$.
Hence $\alpha \mid n$ if and only if $\phi^*(n\phi(E_0))\in \Z E$.
If this is the case, $\phi^*(n\phi(E_0))=L_{n}$ by \lemref{l:La} (2), and moreover, $L_{n}=(-L_nE_0)E_0^*=(\deg D_{n})E_0^*$.
By \thmref{t:BCImain} (2), $L_{e_i}=(\deg D_{e_i})E_0^*$ if $\deg D_{e_i}>0$.
(2)
As seen above, $dE_0^*=L_n$ by \lemref{l:La} (2).
Then $n=d\cf_{E_0}(E_0^*)$.
From (1), we have $\alpha=\deg D_{\alpha}\cf_{E_0}(E_0^*)$.
\end{proof}
\subsection{The coordinate ring and the semigroups}
By virtue of \thmref{t:BCImain}, we can write down the Pinkham-Demazure divisor as follows:
\[
D=Q-\Delta, \quad \Delta=\sum_{i=1}^m \frac{\beta_i}{\alpha_i}\bar P_i, \quad \bar P_i=\sum_{\xi=1}^{\hg_i}P_{i\xi}
\quad \text{($\beta_i=0$ if $\alpha_i=1$)}.
\]
\begin{defn}
We call a cycle $C\ge 0$ a {\em monomial cycle} if
$C=\sum_{i=1}^mm_iZ^{(i)}$ with $m_i\in \Z_{\ge 0}$, and write $x(C)=\prod_{i=1}^mx_i^{m_i}$.
Clearly, $(x(C))_E=C$.
\end{defn}
\begin{rem}\label{r:monom}
Let $C>0$ be an anti-nef $\Q$-cycle.
Suppose that $\alpha_i>1$ for $i\le s$ and $\alpha_i=1$ for $i>s$.
If, for each $i\le s$, $c_i:=CE_{i,s_i,\xi}$ is a non-negative integer independent of $1\le \xi \le \hat g_i$, and if the intersection numbers of $C$ and the exceptional components other than $E_{i,s_i,\xi}$ ($i\le s$, $1\le \xi\le \hat g_i$) are zero, then $C$ is a monomial cycle since $C=\sum_{i=1}^sc_iZ^{(i)}$.
On the other hand, even if $C\in \Z E$ and $C=cE_0^*$ for some $c\in \Z_{>0}$, $C$ is not necessarily a monomial cycle.
For example, if $\alpha<e_m$, then $L_{\alpha}=(\deg D_{\alpha})E_0^*$ is not a monomial cycle (see \lemref{l:E_0*}, \thmref{t:BCImain} (2)).
\end{rem}
Let $\gen{m_1,\dots, m_k}\subset \Z_{\ge 0}$ denote the numerical semigroup generated by integers $m_1, \dots, m_k\in \Z_{\ge 0}$.
For $n\in \Z_{\ge 0}$, let $R_n=H^0(D_n)T^n\subset R:=R(V,o)$, the vector space of homogeneous functions of degree $n$ (see \sref{ss:WH}).
\begin{prop}\label{p:monomials}
Let $n\in \Z_{\ge 0}$.
We have the following.
\begin{enumerate}
\item If $\deg D_n\in \gen{\hg_1, \dots, \hg_m}$, then
there exists a monomial cycle $W$ such that $\cf_{E_0}(W)=n$, and hence $h^0(D_n)\ne 0$.
\item If $\deg D_n=\deg D_k \in \gen{\hg_1, \dots, \hg_m}$ for some $k\ge 0$,
then $D_n\sim D_k$.
In particular, if $\deg D_n=0$, then $D_n\sim 0$.
\item If $d:=\deg D_n>0$, then $dE_0^*\in \Z E$ and $\deg D_{\alpha}\mid d$.
\end{enumerate}
\end{prop}
\begin{proof}
(1)
We first assume that $\deg D_n=0$.
If $\alpha_i>1$, then $\cf_{E_{i,j,\xi}}(L_n)$ is independent of $1\le \xi \le \hat g_i$ for each $1\le j \le s_i$ (see \figref{fig:BCIG}).
Therefore, by \lemref{l:La} (1), there exists a cycle $F>0$ such that $L:=L_n+F$ is a monomial cycle with $\cf_{E_0}(L)=\cf_{E_0}(L_n)=n$ and $LE_0=0$ (cf. \remref{r:monom}).
Then $x(L)\in R_n$.
Next assume that $\deg D_n=c_1\hg_1+\cdots+c_m\hg_m>0$ ($c_i\in \Z_{\ge 0}$).
We may assume that $\alpha_i>1$ for $i\le s$ and $\alpha_i=1$ for $i>s$. For $i\le s$, let $F_i=\sum_{\xi=1}^{\hat g_i}\sum_{j=1}^{s_i} E_{i,j,\xi}$.
Since $F_i$ is anti-nef on its support and $\deg D_n = -L_n E_0$,
it follows from \thmref{t:BCImain} (2) that the cycle
\[
W'=L_n+\sum_{i=1}^s c_i F_i-\sum_{i=s+1}^mc_iZ^{(i)}
\]
is anti-nef and $W'E_0=0$.
Applying the argument above to the cycle $W'$, there exists a cycle $F'>0$ such that $W'+F'$ is a monomial cycle with $\cf_{E_0}(W')=\cf_{E_0}(W'+F')$ and $(W'+F')E_0=0$.
Hence
\[
W:=W'+F'+\sum_{i=s+1}^mc_iZ^{(i)}
\]
is also a monomial cycle and $\cf_{E_0}(W)=\cf_{E_0}(W'+\sum_{i=s+1}^mc_iZ^{(i)})=n$.
Thus, we obtain that $x(W)\in R_n$.
(2)
We denote by $C_n$ the monomial cycle $W'+F'$ above, and also by $C_k$ the monomial cycle obtained from $L_k$ in the same manner as above.
Since $C_n-C_k=L_n-L_k$, on a suitably small neighborhood of $E_0\subset X$, we have
\[
L_n-L_k=\di_X(x(C_n)/x(C_k))\sim 0.
\]
Hence $D_n-D_k\sim (-L_n+L_k)|_{E_0}\sim 0$.
(3) Since $\deg D_n=-L_nE_0$, $L_n-dE_0^*$ is an anti-nef $\Q$-cycle with $(L_n-dE_0^*)E_0=0$.
By the argument above, there exists a cycle $F>0$ such that $L_n-dE_0^*+F$ is a monomial cycle.
Hence $dE_0^*$ is also a cycle (cf. \remref{r:monom}).
We have $\deg D_{\alpha} \mid d$ by \lemref{l:E_0*}.
\end{proof}
\begin{thm}\label{t:g1p_gmax}
If $g\le 1$, then $p_g(V,o)=p_g(\Gamma(a_1, \dots, a_m))$ (see \defref{d:pgG}).
\end{thm}
\begin{proof}
By Pinkham's formula, $p_g(V,o)=\sum_{n\ge 0}h^1(D_n)$.
If $g=0$, then this is topological, and the assertion is clear.
Suppose that $g=1$. If $\deg D_n\ne 0$, then $h^1(D_n)$ is topological by the Riemann-Roch theorem and Serre duality, namely, it is independent of the complex structure of $\V$.
If $\deg D_n=0$, then $h^1(D_n)=h^0(D_n)=1$ by \proref{p:monomials}.
Hence $p_g(V,o)= p_g(\Gamma(a_1, \dots, a_m))$.
\end{proof}
\begin{thm}\label{t:ehg}
We have the following.
\begin{enumerate}
\item $\gen{e_1, \dots, e_m}=\defset{n\in \Z_{\ge 0}}{h^0(D_n)\ne 0}$.
\item For $n\in \Z_{\ge 0}$, $n\in \gen{e_1, \dots, e_m}$ if and only if $\deg D_n\in \gen{\hat g_1, \dots, \hat g_m}$.
\end{enumerate}
\end{thm}
\begin{proof}
(1) follows from the fact that $R=\bigoplus_{k\ge 0}H^0(D_k)T^k$ is generated by the elements $x_1, \dots, x_m$ with $\deg x_i=e_i$.
(2) The ``if'' part follows from \proref{p:monomials} (1).
Assume that $n=\sum_{i=1}^mm_ie_i$ with $m_i \ge 0$.
Then the monomial cycle $M:=\sum_{i=1}^mm_iZ^{(i)}$ satisfies $\cf_{E_0}(M)=n$.
We proceed in a similar way as in the proof of \proref{p:monomials}.
We may assume that $\alpha_i>1$ for $i\le s$ and $\alpha_i=1$ for $i>s$. Then $-ME_0 =\sum_{i>s}m_i\hat g_i\in \gen{\hat g_1, \dots, \hat g_m}$ by \thmref{t:BCImain} (2).
Let $W=M-\sum_{i>s}m_iZ^{(i)}$ and $n'=\cf_{E_0}(W)$.
Clearly, $W$ is also a monomial cycle.
By the definition of $L_{n'}$, we have $\cf_{E_0}(W-L_{n'})=0$ and $W-L_{n'}\ge 0$.
Since $\cf_{E_{i,j,\xi}}(L_{n'})$ and $\cf_{E_{i,j,\xi}}(W)$ are independent of $1\le \xi \le \hat g_i$ for each $1\le j \le s_i$,
we obtain that $(W-L_{n'})E_0\in \gen{\hat g_1, \dots, \hat g_m}$.
On the other hand,
$L_n=L_{n'}+(M-W)$ by \lemref{l:La} (4).
Therefore,
\[
\deg D_n=-L_nE_0=(W-L_{n'}-M)E_0\in \gen{\hat g_1, \dots, \hat g_m}.
\qedhere
\]
\end{proof}
\begin{cor}
If $g>0$, then $a(R)\in \gen{e_1, \dots, e_m}$ and $2g-2\in \gen{\hat g_1, \dots, \hat g_m}$.
Note that $a(R)=(m-2)\ell-\sum_{i=1}^me_i$ by \cite[3.1.6]{G-W}.
\end{cor}
\begin{proof}
By \thmref{t:WatD}, $K_{E_0}\sim D_{a(R)}$. Since $h^0(K_{E_0})=g>0$, the assertion follows from \thmref{t:ehg}.
\end{proof}
\begin{thm}\label{t:g=1MZ}
If $H^0(D_{\alpha})\ne 0$, then $M_X=Z_X$.
In particular, if $g\le 1$, then $M_X=Z_X$.
\end{thm}
\begin{proof}
If $H^0(D_{\alpha})\ne 0$, then $\alpha\in \gen{e_1, \dots, e_m}$ by \thmref{t:ehg}.
Hence $e_m \le \alpha$, and $M_X=Z_X$ by \thmref{t:BCImain}.
If $g\le 1$, we have $H^0(D)\ne 0$ for any divisor $D$ on $E_0$ with $\deg D>0$.
\end{proof}
\begin{ex}
We have seen that if $\alpha<e_m$, then $H^0(D_{\alpha})=0$ even though $D_{\alpha}>0$.
We show that the condition $e_m < \alpha$ does not imply $H^0(D_{\alpha})\ne 0$; thus, the converse of \thmref{t:g=1MZ} does not hold.
Suppose that $(a_1,a_2,a_3)=(6,10,45)$.
Then we have
\[
\{e_{1}, e_2, e_{3}\}=\{15, 9, 2\}, \quad
\{\hat g_1, \hat g_2, \hat g_3\}=\{5, 3, 2\}, \quad
\alpha = 3, \quad \deg D_{\alpha}=1,
\]
and $H^0(D_{\alpha})=0$ by \thmref{t:ehg}.
Note that the Seifert invariant is $(11, 1, 2(3,1))$.
This is of hyperelliptic type (see \exref{e:mpg}).
Hence $p_g\V=p_g(\Gamma(6,10,45))$.
\end{ex}
\subsection{Non-BCI singularities}
\label{s:NBCI}
In the rest of this section, we assume that $\V$ is an arbitrary weighted homogeneous singularity with resolution graph $\Gamma(a_1, \dots, a_m)$.
We use the same notation as above.
Recall that the Pinkham-Demazure divisor is expressed as $D=Q-\Delta$.
\begin{lem}\label{l:M=Z}
Assume that $\alpha\le e_m$.
Then $M_X= Z_X$ if and only if there exists an effective divisor $F$ on $E_0$ such that $\alpha D =D_{\alpha}\sim F$ and $\supp F \cap \supp \Delta=\emptyset$.
\end{lem}
\begin{proof}
Let $c=\deg D_{\alpha}$.
Since $\alpha\le e_m$, it follows from \thmref{t:BCImain} and \lemref{l:E_0*} that $Z_X=L_{\alpha}=cE_0^*$ (note that the fundamental cycle is determined by the resolution graph).
On the other hand, $M_X= Z_X$ if and only if there exists a function $h\in H^0(\cO_X(- Z_X))$ such that $\di_X(h)=Z_X+H$, where $H$ is the non-exceptional part.
In this case, we have $EH=E_0H$ since $H\sim -cE_0^*$.
Thus $(E-E_0)H=0$.
Let $F=H|_{E_0}$. Then $\supp F \cap \supp \Delta=\emptyset$ and
$D_{\alpha}\sim -L_{\alpha}|_{E_0}\sim F$.
Conversely, suppose that $D_{\alpha}\sim F>0$ and $\supp F \cap \supp \Delta=\emptyset$.
Since $H^0(D_{\alpha})\ne 0$, there exists $h\in H^0(\cO_X)$ such that $\di_X(h)=cE_0^*+E'+H$ where $E'$ is a cycle supported in $E-E_0$ and $H$ is the non-exceptional part.
By assumption, $(E'+H)|_{E_0}\sim -L_{\alpha}|_{E_0} \sim F$.
In fact, we may assume that $(E'+H)|_{E_0}=F$, since the restriction map $H^0(\cO_X(-L_n)) \to H^0(\cO_{E_0}(D_n))$ is surjective by \thmref{t:TW}.
Then $H|_{ E_0}=F$ by the assumption on the supports, and $E'=0$ since $E'^2=\di_X(h)E'=0$.
\end{proof}
\begin{lem}\label{l:QF}
For any effective divisor $F\in \Di(E_0)$ such that $\deg F=\deg \alpha D$, there exists a divisor $\t Q\in \Di(E_0)$ such that
\[
F\sim \alpha \t Q-\alpha \Delta.
\]
Let $\t D= \t Q- \Delta$ and $\t R=R(E_0, \t D)$ (see \sref{ss:star}).
If $R=R(E_0,D)$ is a Gorenstein ring, then $\t R$ is also Gorenstein if and only if $a(\t Q-Q)\sim 0$, where $a=a(R)$.
\end{lem}
\begin{proof}
Since $\deg(F-\alpha D)= 0$,
there exists a divisor $Q_F$ with $\deg Q_F=0$ such that $\alpha Q_F\sim F-\alpha D$. Let $\t Q=Q_F+Q$.
Then
\[
\alpha \t Q-\alpha \Delta\sim \alpha Q_F+\alpha Q-\alpha \Delta
\sim F.
\]
Let $D'$ be the $\Q$-divisor as in \thmref{t:WatD}, and assume that $R$ is Gorenstein.
Then $K_{E_0}\sim aD-D'$, and $\t R$ is Gorenstein if and only if
$(aD-D')\sim (a\t D-D')$.
\end{proof}
\begin{thm}\label{t:Z=M}
There exists a weighted homogeneous singularity with resolution graph $\Gamma(a_1, \dots, a_m)$ such that
the maximal ideal cycle coincides with the fundamental cycle on the minimal good resolution.
\end{thm}
\begin{proof}
Let $\V$ be a BCI singularity. If $e_m \le \alpha$, we have $M_X=Z_X$ by \thmref{t:BCImain}.
If $e_m>\alpha$, by \lemref{l:M=Z} and \ref{l:QF}, we can take a Pinkham-Demazure divisor $\t D$ on $E_0$ so that $\spec R(E_0,\t D)$ satisfies the assertion.
\end{proof}
\section{Examples of singularities in $\cal X(\Gamma(2,3,3,4))$}\label{s:exBCI}
We study some special structures of weighted homogeneous singularities with resolution graph $\Gamma(2,3,3,4)$.
The tuple of integers $(a_1, a_2, a_3, a_4)=(2,3,3,4)$ is characterized by the property that $a_1+\cdots+a_m$ ($a_i\ge 2$) is minimal among tuples for which the corresponding BCI singularity satisfies $E\ne E_0$ and $g=2$.
Let $\Gamma=\Gamma(2,3,3,4)$ and let $\overline{\cal X}(\Gamma)$ denote the set of weighted homogeneous singularities with resolution graph $\Gamma$.
We shall show that the singularities in $\overline{\cal X}(\Gamma)$ which attain the maximal geometric genus $p_g(\Gamma)$ (see \defref{d:pgG}) are of hyperelliptic type, and obtain the equations for them.
Moreover, we classify the singularities in $\overline{\cal X}(\Gamma)$ with the property that the maximal ideal cycle coincides with the fundamental cycle.
In the following, we use the notation of \sref{s:BCI}.
Notice that the coefficients of the cycles $Z_X$, $L_n$, and $Z_{K_X}$ are determined by $\Gamma$.
First, we give the fundamental invariants of BCI singularities with resolution graph $\Gamma$ (cf. \sref{ss:BCISf}); these data and the following theorem are used in other subsections.
\begin{nota}
Let $\mult\V$ (resp. $\emb\V$) denote the multiplicity (resp. embedding dimension) of the singularity $\V$, namely, that of the local ring $\cO_{V,o}$.\end{nota}
\begin{thm}\label{t:localring}
Let $A:=\cO_{W,p}$ be the local ring of a $d$-dimensional Cohen-Macaulay complex space $W$ at $p\in W$.
Then we have the following.
\begin{enumerate}
\item {\em (Abhyankar \cite{AbhIneq})} $\emb A \le \mult A + d -1$.
\item {\em (Sally \cite{sally.tangent})} If $A$ is Gorenstein and $\mult A\ge 3$, then $\emb A \le \mult A + d -2$.
\item {\em (Serre \cite{Serre-projmod})} If $A$ is Gorenstein and $\emb A =d+2$, then $A$ is a complete intersection.
\end{enumerate}
\end{thm}
\subsection{The BCI singularities}
\label{ss:BCI2334}
Assume that $(V,o)$ is a BCI surface singularity with $(a_1, \dots, a_4)=(2,3,3,4)$. Then $V$ can be defined by polynomials
\[
f_1:=x_1^2+x_2^3+p x_3^3, \quad f_2:=x_2^3 +x_3^3+x_4^4 \quad
(p \ne 0, 1).
\]
These are weighted homogeneous of $\deg f_i=\ell=12$ with respect to the weights
\[
(\deg x_1, \dots, \deg x_4)=(e_{1}, \dots, e_{4})=(6,4,4,3).
\]
We also have $(\alpha_1, \dots, \alpha_4)=(1,1,1,2)$.
By \cite[6.3]{MO}, $\mult(V,o)=a_1a_2=6$.
Let $R=\C[x_1, \dots, x_4]/(f_1, f_2)$. It follows from \cite[3.1.6]{G-W} that
\[
a(R) =12+12-(6+4+4+3)=7.
\]
The Hilbert series of $R$ is
\begin{equation}\label{eq:BCIH}
H(V,t)=\frac{(1-t^{12})^{2}}{(1-t^{3})(1-t^{4})^2(1-t^{6})}
=1+t^3+2 t^4+2 t^6+2 t^7+3 t^8+\cdots.
\end{equation}
By \proref{p:Hpg} (1), we have
\[
p_g(V,o)=(2+2 t+2 t^3+t^4+t^7)|_{t=1}=8.
\]
From the result of \sref{ss:BCISf}, we have the resolution graph $\Gamma$ as \figref{fig:2334}.
\begin{figure}[htb]
\begin{center}
$
\xy
(-9,0)*+{E_0};
(0,0)*+{-2}*\cir<10pt>{}="E"*++!D(-2.0){[2]};
(20,8)*+{-2}*\cir<10pt>{}="A_1";
(27,8)*+{E_{1}};
(35,0)*+{-2 }*\cir<10pt>{}="B_1";
(42,0)*+{E_{2}};
(20,-8)*+{-2 }*\cir<10pt>{}="E_1";
(27,-8)*+{E_{3}};
\ar @{-} "E" ;"A_1"
\ar @{-} "E" ;"B_1"
\ar @{-} "E" ; "E_1"
\endxy
$
\end{center}
\caption{\label{fig:2334} $\Gamma=\Gamma(2,3,3,4)$}
\end{figure}
Since $\alpha=2<e_4$, we have $Z_X\ne M_X$ by \thmref{t:BCImain}.
In fact, we have that
\[
Z_X=L_2=E+E_0=E_0^*, \quad M_X=Z^{(4)}=L_3=Z_X+E, \quad
Z_{K_X}=4Z_X.
\]
The {\em fundamental genus} is $p_a(Z_X)=h^1(\cO_{Z_X})=1+Z_X(Z_X+K_X)/2=4$.
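Indeed, this value can be checked directly from the graph: writing $Z_X=2E_0+E_1+E_2+E_3$, the adjunction formula gives $K_XE_0=2g(E_0)-2-E_0^2=4$ and $K_XE_i=0$ for $i=1,2,3$, hence $Z_X^2=-2$, $Z_XK_X=8$, and
\[
p_a(Z_X)=1+\frac{Z_X(Z_X+K_X)}{2}=1+\frac{-2+8}{2}=4.
\]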
The {\em arithmetic genus} of $(V,o)$ is defined by
$p_a(V,o)=\max\defset{p_a(D)}{\text{$D>0$ is a cycle}}$.
It is known that $p_a(Z_X)\le p_a(V,o) \le p_g(V,o)$ (see \cite{wag.ell}).
By Koyama's inequality (see \cite[Proposition 1.6]{Konno-pf2}), we have $p_a(V,o)=p_a(2Z_X)=5$.
The Pinkham-Demazure divisor $D$ and $D_n$ are as follows:
\begin{equation}\label{eq:PD}
D=Q-\sum _{i=1}^3 \frac{1}{2}P_i, \quad
D_n=nQ-\sum _{i=1}^3 \Ce{\frac{n}{2}}P_i,
\end{equation}
where $\cO_{E_0}(Q)=\cO_{E_0}(-E_0)$ and $\{P_i\}=E_0\cap E_i$.
Since $\deg Q=2$,
we obtain the following table; these values are topological invariants and are also used in \sref{ss:maxpg}--\ref{ss:M=Z}.
\begin{center}
$
\begin{array}{c|c|c|c|c|c|c|c}
\hline
n & 1 & 2& 3 & 4 & 5 & 6 & 7 \\
\hline
\deg D_n & -1 & 1 & 0 & 2 & 1 & 3 & 2 \\
\hline
\end{array}
$
\end{center}
The divisor $D$ satisfies the following analytic condition.
\begin{lem}\label{l:Pi}
$Q\sim 2P_i\sim K_{E_0}$ for $i=1,2,3$.
\end{lem}
\begin{proof}
Since $a(R)=7$, by \thmref{t:WatD} and \proref{p:monomials} (2),
\[
K_{E_0}\sim D_7 \sim D_7-2D_3=Q.
\]
Note that $E_0$ is a hyperelliptic curve with $g=2$.
From \remref{r:ZiHi}, we see that $\{P_1, P_2, P_3\}=\{f_1=f_2=x_4=0\}\subset \P(6,4,4,3)$. Thus, a double cover $E_0\to \P^1$ is given by $(x_1: x_2: x_3: x_4)\mapsto (x_2:x_3)$ and $P_i$ are its ramification points.
Hence $2P_i\sim K_{E_0}$.
\end{proof}
Later, we shall see the variation of the Pinkham-Demazure divisor $D$ and corresponding singularities with $\Gamma=\Gamma(2,3,3,4)$.
\subsection{Singularities with $p_g=p_g(\Gamma)$}
\label{ss:maxpg}
Let $C$ be a nonsingular curve of genus two and $\{P_1, P_2, P_3\}\subset C$ a set of three distinct points.
Let $Q$ be a divisor on $C$ with $\deg Q=2$.
We define $D$ and $D_n$ ($n\in \Z_{\ge 0}$) as in \eqref{eq:PD}.
Suppose that $\V\in \overline{\cal X}(\Gamma)$ and the homogeneous coordinate ring $R$ of $(V,o)$ is expressed as
$R=\bigoplus_{n\ge 0}H^0(D_n)T^n$, where $H^0(D_n)=H^0(C,\cO_C(D_n))$ (see \sref{ss:star}).
For $n\in \Z_{\ge 0}$, let $R_n=H^0(D_n)T^n$.
We identify $C$ with the central curve $E_0\subset E$.
\begin{lem}\label{l:gor}
The following are equivalent.
\begin{enumerate}
\item $(V,o)$ is Gorenstein.
\item $K_{C}$ is linearly equivalent to $D_7$.
\item $h^0(D_7)=2$.
\end{enumerate}
In this case, we have $a(R)=7$.
\end{lem}
\begin{proof}
Since $g=g(C)=2$, for a divisor $F$ of degree $2$ on $C$, $h^0(F)=2$ if and only if $F\sim K_{C}$.
The assertion follows from \thmref{t:WatD}.
\end{proof}
\begin{nota}\label{n:rs}
Let $\cal R(C) \subset C$ be the set of ramification points of the double cover $C\to \P^1$ and $\sigma\: C\to C$ the hyperelliptic involution; we have $\cal R(C)=\defset{P\in C}{\sigma(P)=P}$.
\end{nota}
From \exref{e:mpg}, we have the following.
\begin{prop}\label{p:maxpg1}
Assume that $P_1\in \cR(C)$, $P_2\in C\setminus \cR(C)$, $P_3=\sigma (P_2)$ and $Q=2P_1$.
Then
\begin{equation}\label{eq:Dn}
D_n\sim \begin{cases}
\frac{n}{2}P_1 & (\text{$n$ is even}) \\
\frac{n-3}{2}P_1 & (\text{$n$ is odd})
\end{cases}
\end{equation}
and $p_g\V=p_g(\Gamma)$.
\end{prop}
We can prove the converse of the above result.
\begin{prop}\label{p:maxpg2}
Assume that $p_g\V=p_g(\Gamma)$.
Then $D$ can be taken as in \proref{p:maxpg1}, namely, by suitable permutation of $P_i$'s, we have $P_1\in \cR(C)$, $P_2\in C\setminus \cR(C)$, $P_3=\sigma (P_2)$, and $Q\sim 2P_1$.
\end{prop}
\begin{proof}
By \proref{p:Hpg} (2) and Clifford's theorem (cf. \exref{e:mpg}), we have
\begin{equation}\label{eq:degDn}
h^0(D_n)=\Fl{\deg D_n /2}+1 \quad \text{if} \quad \deg D_n \le 2.
\end{equation}
Since $\deg D_2=1$ and $h^0(D_2)=1$, there exists a point $P_4\in C$ such that
\begin{equation}\label{eq:D2}
D_2=2Q-(P_1+P_2+P_3)\sim P_4.
\end{equation}
Since $\deg D_3=0$ and $h^0(D_3)=1$, it follows that
\begin{equation}
\label{eq:D3}
D_3=3Q-2(P_1+P_2+P_3)\sim 0.
\end{equation}
From \eqref{eq:D2} and \eqref{eq:D3}, we have $D_4\sim 2P_4\sim Q$.
Since $h^0(D_4)=2$, we have $P_4\in \cR(C)$.
Therefore, $P+\sigma(P)\sim Q$ for any $P\in C$.
It follows from \eqref{eq:D2} that
\[
P_1+P_2+P_3 \sim Q+P_4\sim P_1+\sigma(P_1)+P_4.
\]
Hence $P_2+P_3 \sim \sigma (P_1)+P_4$.
If $P_2+P_3 = \sigma (P_1)+P_4$, we are done (e.g., if $P_2=P_4$, then $P_2\in \cR(C)$, $\sigma(P_1)=P_3\not\in \cR(C)$).
If $P_2+P_3 \not= \sigma (P_1)+P_4$, then $h^0(\sigma (P_1)+P_4)=2$,
and this implies that $P_1=P_4$ and $P_3=\sigma (P_2)$.
\end{proof}
We shall give the fundamental invariants of these singularities.
For an invertible sheaf $\cal L$ on $X$, we say that $P\in X$ is a {\em base point} of $\cal L$ if $\cal L$ is not generated by its global sections at $P$.
\begin{lem}[cf. {\cite[2.7]{wag.ell}, \cite[4.6]{chap}}]
\label{l:multM2}
If $\cO_X(-M_X)$ has no base points, then $\mult\V=-M_X^2$.
\end{lem}
\begin{prop}\label{p:maxpg3}
Assume that $p_g\V=p_g(\Gamma)$. Then we have the following.
\begin{enumerate}
\item $M_X=Z_X+E_1$, where $P_1$ is taken as in \proref{p:maxpg1}.
Furthermore, $\cO_X(-M_X)$ has no base points and $\mult(V,o)=4$.
\item $p_g\V=10$.
\item $(V,o)$ is a complete intersection singularity defined as
\[
V=\defset{(x,y,z,w)\in\C^4}{y^2-xz=w^2-h_5(x^2,z)=0},
\]
where $h_5$ is a homogeneous polynomial of degree $5$.
This is a weighted homogeneous singularity of weight type $(2,3,4,10; 6,20)$.
\end{enumerate}
\end{prop}
\begin{proof}
Assume that $D$ is as in \proref{p:maxpg1}.
It follows from \lemref{l:gor} that $\V$ is Gorenstein, because $K_{C}\sim 2P_1\sim D_7$.
(1) Since $h^0(D_2)>0$, there exists a homogeneous function $h\in R_2$ such that $\di_X(h)=Z_X+F+H$, where $F$ is a cycle satisfying $0\le F \le E_1+E_2+E_3$ and $H$ is the non-exceptional part.
Note that any point of $H\cap E$ is in $E_0\setminus \{P_1, P_2, P_3\}$ or $(E_1\cup E_2\cup E_3)\setminus E_0$, because $h$ is homogeneous.
Since
\[
0\sim \di_X(h)|_{E_0} \sim -D_2+(F+H)|_{E_0}\sim -P_1+(F+H)|_{E_0},
\]
we have $F\cap E_0=\{P_1\}$ and $H\cap E_0=\emptyset$;
thus $F=E_1$ and $E\cap H\subset E_1\setminus E_0$.
Since $\cf_{E_1}(L_n)\ge 2$ for all $n\ge 3$, we have that $M_X=Z_X+E_1$ and $\cO_X(-M_X)$ is generated by global sections outside $E_1\cap H$.
Since $L_4=2E_0^*$ and $D_4\sim 2P_0$ for any $P_0\in \cR(C) \setminus \{P_1\}$, there exists $g\in R$ such that $\di_X(g)=L_4+H'$ where $H'$ intersects $E_0$ only at $P_0$ (cf. the proof of \lemref{l:M=Z}).
Since $\cf_{E_1}(M_X)=\cf_{E_1}(L_4)=2$ and $L_4E_1=0$,
$\cO_X(-M_X)$ has no base points.
Hence $\mult(V,o)=-(M_X)^2=4$ by \lemref{l:multM2}.
(2) Let $(V_0,o)\in \overline{\cal X}(\Gamma)$ be a BCI singularity.
Since $\deg D_n\ge 3$ for $n\ge 8$, $h^0(D_n)$ with $n\ge 8$ is independent of the complex structure of the pair $(C,D)$.
By \eqref{eq:BCIH} and \eqref{eq:degDn}, we have the Hilbert series $H(V,t)$ of $R=R\V$:
\begin{align}
\begin{split}
\label{eq:maxH}
H(V,t)&=H(V_0,t)+t^2+t^5
=\frac{\left(1-t^6\right) \left(1-t^{20}\right)}{\left(1-t^2\right) \left(1-t^3\right)
\left(1-t^4\right) \left(1-t^{10}\right)} \\
&=1+t^2+t^3+2 t^4+t^5+2 t^6+2 t^7+3 t^8+2 t^9+4 t^{10}+\cdots.\end{split}
\end{align}
By \proref{p:Hpg} (2), $p_g\V=p_g(V_0,o)+2=10$.
(3) From \eqref{eq:maxH}, we see that the following functions belong to a minimal set of homogeneous generators of the $\C$-algebra $R$:
\[
x=f_2T^2\in R_2, \ \ y=f_3T^3\in R_3, \ \ z=f_4T^4\in R_4
\]
such that $\di_{E_0}(f_i)\ge D_i$.
Since $x^3,y^2,xz\in H^0(D_6)T^6$ and $h^0(D_6)=2$,
we have a relation $r_6(x,y,z)=0$ at degree $6$.
Let $\C[X,Y,Z]$ be the polynomial ring with $(\deg X, \deg Y, \deg Z)=(2,3,4)$.
The difference between the Hilbert series of $R$ and that of the quotient ring $\C[X,Y,Z]/(r_6(X,Y,Z))$ is
\[
H(V,t)-\frac{(1-t^6)}{(1-t^2)(1-t^3)(1-t^4)}
=t^{10}+\cdots.
\]
Hence we have an element $w\in R_{10}$ such that
$\{x,y,z,w\}$ is a subset of a minimal set of homogeneous generators of $R$.
However, since $\V$ is Gorenstein and $\mult\V=4$, it follows from \thmref{t:localring} that $R$ is a complete intersection generated by just $x,y,z,w$ as a $\C$-algebra.
Let $F(t)$ be the Hilbert series of $\C[X,Y,Z,W]/(r_6(X,Y,Z))$, where $\deg W=10$.
Then
\[
H(V,t)-F(t) =-t^{20}+\cdots.
\]
Hence we have a relation $r_{20}(x,y,z,w)=0$ at degree $20$.
Then the natural $\C$-homomorphism
\[
S:=\C[X,Y,Z,W]/(r_6(X,Y,Z), r_{20}(X,Y, Z,W)) \to R
\]
induced by $(X,Y,Z,W)\mapsto (x,y,z,w)$ is surjective and the Hilbert series of $S$ coincides with $H(V,t)$.
Hence $S\cong R$.
Next we consider the equations.
Suppose that $\phi\: E_0\to \P^1$ is a double cover such that $\phi(P_1)=\{x_0=0\}$ and $\phi(P_i)=\{x_1=0\}$ ($i=2,3$), where $x_0$ and $x_1$ are the homogeneous coordinates of $\P^1$.
Then $E_0$ can be defined by the equation $x_2^2=x_0h_5(x_0,x_1)$, where $h_5(x_0,x_1)$ is a homogeneous polynomial of degree $5$ such that $h_5(1,0)h_5(0,1)\ne 0$; the branch locus of the covering is $\{x_0h_5(x_0,x_1)=0\}\subset \P^1$.
Now, we can put $x=x_0x_1$, $y=x_0x_1^2$, $z=x_0x_1^3$, $w=x_0^2x_1^5x_2$. Then we have the relations
\[
y^2=x_0^2x_1^4=xz, \ \
w^2=h_5(x_0,x_1)(x_0 x_1^2)^5=h_5(x^2,z).
\qedhere
\]
\end{proof}
\subsection{Singularities with $M_X=Z_X$}
\label{ss:M=Z}
We classify the singularities $(V,o)\in \overline{\cal X}(\Gamma)$ with the property that $M_X=Z_X$.
We use the notation of the preceding subsection.
\begin{prop}\label{p:M=Z2334}
We have the following.
\begin{enumerate}
\item $M_X=Z_X$ if and only if there exists a point $P_4\in C\setminus \{P_1, P_2, P_3\}$ such that
$D_2 \sim P_4$;
if this is the case, $D_7\sim 4P_4-Q$.
\item Assume that $M_X=Z_X$ and that $x\in R_2$ and $y\in R_m$ belong to a minimal set of homogeneous generators of the $\C$-algebra $R$, where $m$ is the minimum of the degrees of those generators except for $x$.
If $P_4$ is not a base point of $H^0(D_m)$, then $\mult\V=m$.
\end{enumerate}
\end{prop}
\begin{proof}
(1)
The equivalence follows from \lemref{l:M=Z}.
(2) We have $\di_X(x)=Z_X+H$, where $H$ is the non-exceptional part.
Since $H\cap E=\{P_4\}$, $\cO_X(-Z_X)$ has just a base point $P_4$.
Assume that $u,v$ are the local coordinates at $P_4\in X$ such that $E_0=\{u=0\}$ and $H=\{v=0\}$.
By the assumption, we may also assume that $x=u^2v$ and $y=u^m$.
Note that $m\ge 3$ since $h^0(D_2)=1$.
Then, at $P_4\in X$, $\m\cO_X=(u^2v,u^m)\cO_X=(v,u^{m-2})\cO_X(-Z_X)$, where $\m\subset \cO_{V,o}$ is the maximal ideal.
Therefore, the base point of $\cO_X(-Z_X)$ is resolved by the composition $Y\to X$ of $m-2$ blowing-ups at the intersection of the exceptional set and the proper transform of $H$.
Then the maximal ideal cycle $M_Y$ on $Y$ is the exceptional part of $\di_Y(x)$ and by \lemref{l:multM2}, $\mult (V,o)=-M_Y^2=-Z_X^2+(m-2)=m$.
\end{proof}
\begin{rem}\label{r:minmult}
The proof of \proref{p:M=Z2334} shows that $\mult (W,o)\ge -Z_X^2+1=3$ for any normal surface singularity $(W,o)$ with resolution graph $\Gamma$.
\end{rem}
\begin{lem}\label{l:4pts}
Let $P\in C$.
\begin{enumerate}
\item $P\not\in \cR(C)$ if and only if the linear system $|3P|$ is free.
\item There exist three distinct points $A_1, A_2, A_3\in C$ such that $3P\sim \sum_{i=1}^3A_i$. For such points, $P\in \cR(C)$ if and only if $P\in \{A_1, A_2, A_3\}$.
\end{enumerate}
\end{lem}
\begin{proof}
(1) Since $h^0(3P)=2$ by the Riemann-Roch theorem, $|3P|$ is free if and only if $h^0(2P)=1$.
(2) If the linear system $|3P|$ is free, then the first assertion follows from Bertini's theorem.
If $|3P|$ is not free, then $|2P|=|K_{C}|$ is free and thus we can take three distinct points $A_1:=P, A_2, A_3 \in C$ such that $2P\sim A_2+A_3$.
Suppose that $3P\sim \sum_{i=1}^3A_i$.
If $P\in \cR(C)$, we have $P\in \{ A_1, A_2, A_3\}$ since $|3P|$ has a base point $P$.
If $P\in \{A_1, A_2, A_3\}$, then $h^0(2P)=2$.
\end{proof}
We always assume that $M_X=Z_X$ in the rest of this section and use the notation above: notice that $h^0(D_2)=1$ and $D_2\sim P_4\in C\setminus\{P_1, P_2, P_3\}$, and that $h^0(D)\ge \deg D-1$ for any divisor $D$ on $C$ by the Riemann-Roch theorem.
Let $H(\Gamma,t)=\sum_{n\ge 0}c_nt^n$ denote the Hilbert series associated with a singularity $(V',o)\in \overline{\cal X}(\Gamma)$ with $p_g(V',o)=p_g(\Gamma)$. As we have seen in \eqref{eq:maxH},
\[
\sum_{n\ge 0}c_nt^n=1+t^2 +t^3 +2 t^4 +t^5 +2 t^6+2 t^7+\cdots.
\]
We have the following:
\begin{gather*}
h^0(D_n) = c_n
\text{ for $n=0,1,2,6$ and $n\ge 8$}, \\
h^0(D_3), h^0(D_5) \in \{0,1\}, \quad h^0(D_4), h^0(D_7)\in \{1,2\}.
\end{gather*}
We classify those singularities; they are divided into the following cases:
\begin{enumerate}
\item[I.] $h^0(D_3)=1$.
\item[II.] $h^0(D_3)=0$ and $h^0(D_4)=2$.
\item[III.] $h^0(D_3)=0$ and $h^0(D_4)=1$.
\end{enumerate}
We shall eventually have six cases as seen in Table \ref{tab:M=ZP-D}.
\begin{prop}\label{p:h3=1}
Assume that $M_X=Z_X$.
If $h^0(D_3)=1$, then $\V$ is not Gorenstein,
$ p_g(V,o)= 8$, $\mult\V=3$, $\emb\V=4$, and
\[
H(V,t)=1+t^2+t^3+t^4+t^5+2 t^6+t^7+\cdots
=\frac{1+t^8+t^{10}}{\left(1-t^2\right) \left(1-t^3\right)}.
\]
Furthermore, the $\C$-algebra $R$ is generated by homogeneous elements of degree $2,3,8,10$.
Note that $\V$ has the minimal multiplicity among the singularities in $\cal X(\Gamma)$ (see \remref{r:minmult}).
\end{prop}
\begin{proof}
We have $h^0(D_5)=1$, since $h^0(D_2)=h^0(D_3)=1$.
Since $D_2\sim P_4$ and $D_3\sim 0$,
by a similar argument as in the proof of \proref{p:maxpg2}
we have that
\[
3Q\sim 2\sum _{i=1}^3 P_i, \quad
Q\sim 2P_4\sim D_4\sim D_7, \quad 3P_4\sim \sum _{i=1}^3 P_i.
\]
In particular, $h^0(D_4)=h^0(D_7)$.
By \proref{p:M=Z2334} (2), $\mult\V=3$.
Suppose that $h^0(D_4)=2$.
Then $(V,o)$ is Gorenstein by \lemref{l:gor}.
Therefore, $\emb(V,o)\le \mult(V,o)=3$ by \thmref{t:localring}.
Then $R$ is generated by $x\in R_2$, $y\in R_3$, and $z\in R_4$ as a $\C$-algebra, with the equation $y^2+xz=0$ (cf. the proof of \proref{p:maxpg3} (3)); however, this implies that $(V,o)$ is rational.
Hence $h^0(D_4)=1$. Then $\V$ is not Gorenstein by \lemref{l:gor}, and therefore $\V$ is not hypersurface.
Thus, $\emb(V,o)= 4$ by \thmref{t:localring}.
Since $H(\Gamma,t)-H(V,t)=t^4+t^7$, we have $p_g(\Gamma)-p_g\V=2$ by \proref{p:Hpg}.
Since $x,y$ form a regular sequence of $R$, the Hilbert series of $R/(x,y)$ is $H(V,t)(1-t^2)(1-t^3)=1+t^8+t^{10}$.
Then we easily see the degrees of generators.
\end{proof}
\begin{rem}
By \lemref{l:4pts}, we can take distinct points $P_1, \dots, P_4\in C$ such that $3P_4\sim \sum_{i=1}^3P_i$ and $2P_4\not\sim K_{C}$.
Let $Q=2P_4$.
Then we have
\[
D_2 \sim P_4, \quad D_3 \sim 2(3P_4-\sum_{i=1}^3P_i)\sim 0, \quad
h^0(D_4)=h^0(D_7)=h^0(2P_4)=1,
\]
and $M_X=Z_X$ by \proref{p:M=Z2334}.
Hence we have a singularity $(V,o) \in \overline{\cal X}(\Gamma)$ satisfying all the conditions in \proref{p:h3=1}.
\end{rem}
Next we consider the case $h^0(D_3)=0$.
Since $D_2\sim P_4$, the following three conditions are equivalent
(cf. the proof of \proref{p:maxpg2}):
\begin{quotation}
(1) $h^0(D_3)=0$, \qquad (2) $3Q\not\sim 2\sum _{i=1}^3 P_i$,
\qquad (3) $Q\not\sim 2P_4$.
\end{quotation}
Let $x\in R_2\setminus \{0\}$.
We will compute the embedding dimension of $\V$ via the curve singularity $(V(x), o)$, where $V(x)=\{x=0\}\subset V$.
Let $H(V(x),t)=\sum_{i\ge 0}d_it^i$ denote the Hilbert series of $R/(x)$.
\begin{lem}\label{l:edC}
The curve $V(x)$ is irreducible and the set $\Gamma_x:=\defset{n\in \Z_{\ge 0}}{d_n\ne 0}$ is a numerical semigroup.
If $\Gamma_x= \gen{m_1, \dots, m_e}$, then
\[
\emb\V-1=\emb(V(x),o)\le e.
\]
\end{lem}
\begin{proof}
Let $H\subset X$ be as in the proof of \proref{p:M=Z2334}.
Then $H$ is irreducible and nonsingular
since $EH=1$, and hence the induced map $H\to V(x)$ is the normalization. If $h\in R\setminus (x)$ is a homogeneous element, then the order of $h|_{V(x)}$ at $o\in V(x)$ coincides with the order of vanishing of $h$ along $E_0$, that is, $\deg h$.
Hence $\Gamma_x$ coincides with the so-called {\em semigroup of values} of the curve singularity $(V(x),o)$.
Then the inequality is well-known.
\end{proof}
In the following, it will be useful to notice that the Frobenius number of $\gen{a,b}$, for coprime positive integers $a$ and $b$, is $(a-1)(b-1)-1$.
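For example, the Frobenius number of $\gen{4,5}$ is $3\cdot 4-1=11$, so that
\[
\{0,4,5\}\cup\Z_{\ge 8}=\gen{4,5,11};
\]
this is how the semigroup $\Gamma_x$ will be read off from the Hilbert series of $R/(x)$ in the proof of \proref{p:D4=2} (1).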
\begin{prop}\label{p:D4=2}
Assume that $M_X=Z_X$. If $h^0(D_3)=0$ and $h^0(D_4)=2$, then $\V$ is not Gorenstein and $\mult(V,o)=4$.
\begin{enumerate}
\item If $h^0(D_5)=1$, then $p_g(V,o)= 8$, $\emb(V,o)=4$,
\[
H(V,t)=1+t^2+2 t^4+t^5+2 t^6+t^7+\cdots
=\frac{1+t^5+t^{10}+t^{11}}{\left(1-t^2\right)
\left(1-t^4\right)},
\]
and $\C$-algebra $R$ is generated by homogeneous elements of degree $2,4,5,11$.
\item If $h^0(D_5)=0$, then $p_g(V,o)= 7$, $\emb(V,o)=5$,
\[
H(V,t)=1+t^2+2 t^4+2 t^6+t^7+\cdots
=\frac{1+t^7+t^9+t^{10}}{\left(1-t^2\right)
\left(1-t^4\right)},
\]
and $\C$-algebra $R$ is generated by homogeneous elements of degree $2,4,7,9,10$.
\end{enumerate}
\end{prop}
\begin{proof}
We have that $D_4\sim 2P_4\sim K_{C}$ and $D_4\not \sim D_7$.
Hence $h^0(D_7)= 1$ and $\V$ is not Gorenstein by \lemref{l:gor}.
Therefore, $\emb\V\ge 4$.
Since $H^0(D_4)$ has no base points, we have $\mult\V=4$ by \proref{p:M=Z2334}, and $\emb(V,o)\le 5$ by \thmref{t:localring}.
Take a homogeneous element $y\in R_4$ such that $x$ and $y$ belong to a minimal set of homogeneous generators of $\C$-algebra $R$.
Then $x,y$ form a regular sequence of $R$ and the Hilbert series of $R/(x,y)$ is $H'(t):=H(V,t)(1-t^2)(1-t^4)$.
(1) Assume that $h^0(D_5)=1$.
We have $H(V,t)=H(\Gamma,t)-(t^3+t^7)$ and $p_g\V=p_g(\Gamma)-2$ by \proref{p:Hpg} (2).
Since
\[
H(V(x),t)=H(V,t)(1-t^2)=1+t^4+t^5+t^8\sum_{i\ge 0}t^i,
\]
we have $\Gamma_x= \gen{4,5,11}$.
It follows from \lemref{l:edC} that $\emb\V=4$.
Since $H'(t)=1+t^5+t^{10}+t^{11}$, we obtain the degrees of homogeneous generators of $R$.
(2) Assume that $h^0(D_5)=0$.
Then $H(V,t)=H(\Gamma,t)-(t^3+t^5+t^7)$,
$H(V(x),t)=1+t^4+t^7\sum_{i\ge 0}t^i$, and $H'(t)=1+t^7+t^9+t^{10}$.
Thus, we obtain the assertion by a similar argument as above.
\end{proof}
\begin{rem}
Let $\cR(C)$ and $\sigma$ be as in \notref{n:rs}.
Suppose that $P_4\in \cR(C)$ and $P_5\in C\setminus\cR(C)$.
(1)
Let $Q=P_4+P_5$. Then $|2Q-P_4|$ is free since $h^0(P_4+2P_5)=2>h^0(2P_5)=h^0(P_4+P_5)$.
Thus, there exist distinct points $P_1, P_2, P_3 \in C\setminus \{P_4\}$ such that $2Q-P_4\sim P_1+P_2+P_3$.
We set $D=Q-\frac{1}{2}\sum_{i=1}^{3}P_i$. Then
\begin{gather*}
D_2\sim P_4, \quad D_3\sim 2D_2-Q\sim P_4-P_5\not\sim 0, \quad
D_4\sim K_C, \quad \\
D_5\sim 3D_2-Q\sim 2P_4-P_5\sim (P_5+\sigma(P_5))-P_5=\sigma(P_5).
\end{gather*}
Therefore, we have a singularity satisfying the condition of \proref{p:D4=2} (1).
(2) Let $Q=4P_4-2P_5$. If $|2Q-P_4|$ has a base point $P_0$, then $K_C\sim 2Q-P_4-P_0\sim 7P_4-4P_5-P_0$, and thus $5P_4\sim 4P_5+P_0$.
However, since $|5P_4|$ has a base point $P_4$, we have $4P_4\sim 4P_5$; this is impossible.
Hence $|2Q-P_4|$ is free and there exist distinct points $P_1, P_2, P_3 \in C\setminus \{P_4\}$ such that $2Q-P_4\sim P_1+P_2+P_3$.
Then \begin{gather*}
D_2\sim P_4, \quad D_3\sim 2P_5-2P_4\not\sim 0, \quad
D_4\sim K_C, \quad \\
D_5\sim 2P_5-P_4, \quad h^0(2P_5-P_4)=0.
\end{gather*}
Hence we have a singularity satisfying the condition of \proref{p:D4=2} (2).
\end{rem}
\begin{prop}\label{p:011}
Assume that $M_X=Z_X$.
If $h^0(D_3)=0$ and $h^0(D_4)=h^0(D_5)=1$, then
$\mult(V,o)=\emb(V,o)=5$.
\begin{enumerate}
\item If $h^0(D_7)=2$, then $(V,o)$ is Gorenstein, $p_g(V,o)= 8$,
\[
H(V,t)=1+t^2+t^4+t^5+2 t^6+2
t^7+\cdots
=\frac{1+t^6+t^7+t^8+t^{14}}{\left(1-t^2\right)
\left(1-t^5\right)},
\]
and $\C$-algebra $R$ is generated by homogeneous elements of degree
$2,5,6,7,8$.
\item If $h^0(D_7)=1$, then $(V,o)$ is not Gorenstein, $p_g(V,o)= 7$,
\[
H(V,t)=1+t^2+t^4+t^5+2
t^6+t^7+\cdots
=\frac{1+t^6+t^8+t^9+t^{12}}{\left(1-t^2\right)
\left(1-t^5\right)},
\]
and $\C$-algebra $R$ is generated by homogeneous elements of degree $2,5,6,8,9$.
\end{enumerate}
\end{prop}
\begin{proof}
The proof is similar to that of \proref{p:D4=2}.
We have $R_4=R_2^2$ and $D_4\sim 2P_4\not\sim K_{C}$.
Since $D_3\not\sim 0$ and $h^0(D_5)=1$,
there exists a point $P_5\in C$ such that $D_5\sim P_5\ne P_4$ (note that $D_2\not \sim D_2+D_3=D_5$).
Therefore, $\mult(V,o)=5$ by \proref{p:M=Z2334} (2).
Let $y\in R_5\setminus\{0\}$.
Then the Hilbert series of $R/(x,y)$ is $H'(t):=H(V,t)(1-t^2)(1-t^5)$.
From \lemref{l:gor}, $\V$ is Gorenstein if and only if $h^0(D_7)=2$.
(1) Assume that $h^0(D_7)=2$.
We have $H(V,t)=H(\Gamma,t)-(t^3+t^4)$ and
$H'(t)=1+t^6+t^7+t^8+t^{14}$.
Hence $p_g(V,o)=p_g(\Gamma)-2$ by \proref{p:Hpg} and $\emb\V= 5$ by \thmref{t:localring} (2).
Therefore, $R$ is generated by homogeneous elements of degree $2,5,6,7,8$.
(2) Assume that $h^0(D_7)=1$.
We have $H(V,t)=H(\Gamma,t)-(t^3+t^4+t^7)$, $H'(t)=1+t^6+t^8+t^9+t^{12}$,
$
H(V,t)(1-t^2)=1+t^5+t^6+t^8\sum_{i\ge 0}t^i,
$
and $\Gamma_x= \gen{5,6,8,9}$.
Hence we obtain the assertion by similar arguments as above.
\end{proof}
The following proposition shows the existence and the property of $D$ corresponding to the singularities in \proref{p:011} (1).
\begin{prop}\label{p:h3=0h7=2}
We have the following.
\begin{enumerate}
\item There exist points $P_1, \dots, P_4\in C$ and an effective divisor $Q$ of degree two on $C$ which satisfy the condition
\begin{enumerate}
\item[(C1)]
$P_1, \dots, P_4$ are distinct, $2Q\sim \sum _{i=1}^4 P_i$, $2P_4\not\sim K_{C}$, $4P_4\sim Q+K_{C}$.
\end{enumerate}
\item Let $P_1, \dots, P_4$ and $Q$ be as above, and let $D=Q-\frac{1}{2}\sum _{i=1}^3 P_i$.
Then the condition {\rm (C1)} is satisfied if and only if $M_X=Z_X$ and $h^0(D_3)=0$, $h^0(D_4)=h^0(D_5)=1$, $h^0(D_7)=2$.
\end{enumerate}
\end{prop}
\begin{proof}
(1)
Assume that $\cR(C)$ and $\sigma$ are as in \notref{n:rs}.
Let $P_4\in C$ be a point satisfying $3(P_4-\sigma(P_4))\not\sim 0$.
Then $2P_4\not \sim K_{C}$, because $P_4\not\in \cR(C)$.
Since $\deg(4P_4-K_C)\ge 2$, there exists an effective divisor $Q$ on $C$ such that $4P_4-K_{C}\sim Q$.
Since $\deg(2Q-P_4)=3$, we have $h^0(2Q-P_4)=2$.
If the linear system $|2Q-P_4|$ is free, then we have three distinct points $P_1, P_2, P_3\in C\setminus \{P_4\}$ such that $2Q\sim \sum _{i=1}^4 P_i$.
If $|2Q-P_4|$ has a base point $G\in C$, then $2Q-P_4-G\sim K_{C}$.
If $G=P_4$, we have $2Q\sim 2P_4+K_{C}$.
Since $4P_4\sim Q+K_{C}$, we have $Q+2P_4\sim 2K_{C} \sim Q+\sigma(Q)$, and hence $2P_4\sim \sigma (Q)$.
However, $4P_4\sim Q+K_{C} \sim \sigma(2P_4)+\sigma(P_4)+P_4$; it contradicts that $3(P_4-\sigma(P_4))\not\sim 0$.
Therefore, $G\ne P_4$. We can take $P_1\in C$ so that $P_1, P_2:=\sigma(P_1), P_3:=G, P_4$ are distinct.
Then $2Q-P_4\sim K_{C}+P_3\sim P_1+P_2+P_3$.
(2)
Assume that (C1) is satisfied.
By \proref{p:M=Z2334} (1), we have $M_X=Z_X$ since $D_2=2Q-\sum _{i=1}^3 P_i\sim P_4$.
We also have
\begin{gather*}
D_3\sim 2P_4-Q \not \sim 0, \ \
D_4\sim 2P_4 \not \sim K_{C}, \\
D_5 \sim 3P_4-Q\sim K_{C}-P_4\sim P_4+\sigma(P_4)-P_4=\sigma(P_4), \\
D_7\sim 4P_4-Q\sim K_{C}.
\end{gather*}
Thus, we obtain that $(h^0(D_3), h^0(D_4), h^0(D_5), h^0(D_7))=(0,1,1,2)$.
The converse follows from the arguments above.
\end{proof}
\begin{rem}
We take distinct points $P_4, P_5\in C\setminus \cal R(C)$ such that $P_4+P_5\not\sim K_C$ and $2(2P_4-P_5)\not\sim K_C$, and let $Q=3P_4-P_5$.
Then $P_4$ is not a base point of $|2Q-P_4|$.
As in the proof of \proref{p:h3=0h7=2}, we obtain distinct points $P_1, P_2, P_3\in C\setminus \{P_4\}$ such that $2Q-P_4\sim P_1+P_2+P_3$.
Then we have
\begin{gather*}
D_2\sim P_4, \ \
h^0(D_3)=h^0(P_5-P_4) =0, \ \
h^0(D_4)=h^0(2P_4)=1, \\
h^0(D_5)=h^0(P_5)=1, \ \
h^0(D_7)=h^0(P_4+P_5)=1.
\end{gather*}
Hence there exists a singularity satisfying the conditions of \proref{p:011} (2).
\end{rem}
\begin{prop}\label{p:0101}
Assume that $M_X=Z_X$ and that $h^0(D_3)=0$, $h^0(D_4)=1$, and $h^0(D_5)=0$.
Then $\V$ is not Gorenstein, $h^0(D_7)=1$, $p_g(V,o)= 6$, $\mult(V,o)=6$, $\emb(V,o)= 7$,
\[
H(V,t)=1+t^2+t^4+2
t^6+t^7+\cdots
=\frac{1+t^7+t^8+t^9+t^{10}+t^{11}}{\left(1-t^2\right)
\left(1-t^6\right)}
\] and $\C$-algebra $R$ is generated by homogeneous elements of degree $2,6,7,8,9,10,11$.
\end{prop}
\begin{proof}
Since $D_4\sim 2P_4\not\sim K_C$ and $D_6\sim 3P_4$, $H^0(D_6)$ is free (cf. \lemref{l:4pts}).
Hence we have $\mult\V=6$ by \proref{p:M=Z2334} (2) and $\emb\V\le 7$ by \thmref{t:localring}.
Take a homogeneous element $y\in R_6$ such that $x$ and $y$ belong to a minimal set of homogeneous generators of $\C$-algebra $R$. Then $x,y$ form a regular sequence of $R$ and the Hilbert series of $R/(x,y)$ is $H'(t):=H(V,t)(1-t^2)(1-t^6)$.
If $h^0(D_7)=2$, then $H'(t)=1+2 t^7+t^8+t^{10}+t^{11}-t^{13}+t^{15}$
has a negative coefficient, a contradiction.
Hence we have $h^0(D_7)=1$.
Then $H(V,t)=H(\Gamma,t)-(t^3+t^4+t^5+t^7)$,
$H'(t)=1+t^7+t^8+t^9+t^{10}+t^{11}$.
Hence $p_g\V=p_g(\Gamma)-4$, $\emb\V=7$ and $\C$-algebra $R$ is generated by homogeneous elements of degree $2,6,7,8,9,10,11$.
\end{proof}
\begin{rem}
Let $P_4, P_5\in C\setminus \cR(C)$ be distinct points such that $P_4+P_5\not\sim K_C$.
Let $Q=P_4+P_5$.
Then $|2Q-P_4|$ is free because $h^0(P_4+P_5)=h^0(2P_5)=1$.
Hence there exist three distinct points $P_1, P_2, P_3\in C\setminus\{P_4\}$ such that $2Q-P_4 \sim P_1+P_2+P_3$.
Then we have
\begin{gather*}
h^0(D_3)=h^0(P_4-P_5)=0, \quad h^0(D_4)=h^0(2P_4)=1, \\
h^0(D_5)=h^0(2P_4-P_5)<h^0(2P_4)=1, \\
h^0(D_7)=h^0(3P_4-P_5)<h^0(3P_4)=2.
\end{gather*}
Therefore, we have a singularity of \proref{p:0101}.
\end{rem}
For the reader's convenience,
we provide in Table \ref{tab:M=ZP-D} the conditions on the Pinkham-Demazure divisors $D=Q-\sum _{i=1}^3 \frac{1}{2}P_i$ which induce the singularities discussed in this subsection;
in each case, $\cR=\cR(C)$, the four points $P_1, \dots, P_4\in C$ are distinct, and $P_1+P_2+P_3\sim 2Q-P_4$.
\begin{table}[h]
\renewcommand{\arraystretch}{1.2}
\[
\begin{array}{cccl}
\hline\hline
p_g & \mult & \emb & \text{Pinkham-Demazure divisor} \\
\hline
8 & 3 & 4 & Q=2P_4, P_4\not\in \cR \\
8 & 4 & 4 & Q=P_4+P_5, P_4\in \cR, P_5\not\in \cR \\
7 & 4 & 5 & Q=4P_4-2P_5, P_4\in \cR, P_5\not\in \cR \\
8 & 5 & 5 & Q=4P_4-K_C, P_4\not\in \cR \\
7 & 5 & 5 & Q=3P_4-P_5, P_4\not\in \cR, P_5\not\in \cR, P_4\ne P_5, \\
&&& P_4+P_5\not\sim K_C, 2(2P_4-P_5)\not\sim K_C \\
6 & 6 & 7 & Q=P_4+P_5, P_4\not\in \cR, P_5\not\in \cR, P_4\ne P_5, P_4+P_5\not\sim K_C \\
\hline\hline
\end{array}
\]
\caption{\label{tab:M=ZP-D}
Singularities with $M_X=Z_X$ and Pinkham-Demazure divisors}
\end{table}
\begin{rem}
Taking a general Pinkham-Demazure divisor $D=Q-\sum _{i=1}^3 \frac{1}{2}P_i$, we obtain a singularity $\V\in \overline{\cal X}(\Gamma)$ with $H(V,t)=1+ t^4 +2 t^6+ t^7+\cdots$ and $p_g\V=5$.
Recall that $p_a\V=5$ (see \sref{ss:BCI2334}).
Therefore, we have the equality $p_a(V,o)=\min \defset{p_g(W,o)}{(W,o)\in \cal X(\Gamma)}$, and this is realized by a weighted homogeneous singularity (cf. \thmref{t:TW}).
\end{rem}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\section{Introduction}
Action Recognition has gained increasing importance in recent years, due to applications in several fields of research such as surveillance, human-computer interaction, healthcare and automotive applications. Despite the significant steps forward made since the advent of deep learning, there are still challenges yet to be solved. Certain applications, for instance, have extremely tight time constraints. This is the case when recognition must be performed from fast-moving vehicles (e.g. drones or cars), or when the pattern to be recognized is extremely fast (e.g. eye glimpses).
Indeed, standard RGB cameras might even fail to capture a rich enough signal to enable recognition due to low frame-rates and motion blur.
These limitations of RGB cameras have been addressed with event cameras. Event cameras, also known as neuromorphic cameras, are sensors that capture illumination changes, producing asynchronous events independently for each pixel. These sensors have several desirable properties such as high dynamic range, low latency, low power consumption, absence of motion blur and, last but not least, they operate at extremely high frequencies, generating events at a $\mu s$ temporal scale. The output of an event camera is therefore substantially different from that of a regular RGB camera, making the direct application of standard computer vision algorithms far from straightforward. In particular, Deep Learning methods such as Convolutional Neural Networks (CNNs) work with frames of synchronous data. Asynchronous events need to be aggregated into synchronous frames to be fed to a CNN.
Several event aggregation strategies have been proposed in the literature, allowing the usage of frame-based algorithms \cite{nguyen2019real, miao2019neuromorphic, ghosh2019spatiotemporal, cannici2020differentiable, cannici2019asynchronous}. These techniques, however, approximate the signal by quantizing time into aggregation intervals, leading to a loss of information. The aggregation time can be lowered to limit this phenomenon, but this results in an extremely high number of frames to be processed, making real-time analysis prohibitive.
In this paper we present an event aggregation strategy named Temporal Binary Representation (TBR). Compared to existing strategies, TBR generates compact representations without losing information up to an arbitrarily small quantization time. In fact, we first aggregate events to generate intermediate binary representations with small quantization times and then losslessly combine sequences of intermediate representations into a single frame. This allows us to lower the amount of data to be processed while retaining information at fine temporal scales. TBR is specifically tailored for fast-moving actions or gestures and can be directly used for training and evaluating standard CNNs. Indeed, we exploit two models based on AlexNet+LSTM and Inception 3D for action recognition, reporting state-of-the-art results on the IBM DVS128 Gesture Dataset \cite{amir2017low}. Furthermore, we highlight the benefits of the proposed strategy by collecting an extension of the dataset in more challenging scenarios, namely higher execution speed, multiple scales, different camera poses and background clutter.
To summarize, the main contributions of this paper are the following:
\begin{itemize}
\item We propose a compact representation of event data dubbed Temporal Binary Representation, exploiting a conversion of binary event sequences into frames that encode both spatial and temporal information.
\item Our formulation allows us to tune the trade-off between information loss and memory footprint, making it suitable for real-time applications.
\item We collected an extension of the popular DVS128 Gesture Dataset under challenging conditions, which we plan to release upon publication.
\end{itemize}
The paper is organized as follows: in Sec.~\ref{sec:related} a literature review is reported to frame the work in the current state of the art; in Sec.~\ref{sec:method} our Temporal Binary Representation is presented; in Sec.~\ref{sec:model} we provide an overview of the models used for classifying gestures; in Sec.~\ref{sec:dataset} we present the dataset used for evaluating our approach and introduce the additional benchmark that we have collected; in Sec.~\ref{sec:training} we discuss the training details; in Sec.~\ref{sec:experiments} and~\ref{sec:ablation} we report the results of our approach; finally in Sec.~\ref{sec:conclusions} we draw the conclusions.
\section{Related Work}
\label{sec:related}
\subsection{Action and Gesture Recognition}
Several formulations have been adopted in the literature for the task of action recognition. Early works~\cite{csur2011, poppe2010} have treated it as a classification task, while more recent works have provided a finer level of detail by adding a temporal dimension (action detection)~\cite{actoms,serena2016fgv,jcn2016fast,shou2017cvpr,escorcia2020guess,liu2019completeness,nguyen2019weakly} or spatial information (action localization)~\cite{ggjm2015tubes,peng2016multi,saha2016deep, singh2017online, becattini2020progress}.
Action detection aims at recognizing actions and determining their starting and ending points in untrimmed videos.
These approaches are often based on temporal proposals \cite{jcn2016fast}, i.e. a set of frame intervals that are likely to contain a generic action, which are then classified or refined \cite{shou2017cvpr, serena2016fgv}.
This concept has been extended in the spatio-temporal action localization formulation, where the temporal boundaries of the action still need to be determined, but at the same time the actor needs to be precisely localized in each frame, as in an object detection task. The output of such systems is a spatio-temporal tube \cite{ggjm2015tubes, saha2016deep, cuffaro2016segmentation}, i.e. a list of temporally adjacent bounding boxes enclosing the action.
Several works have been focusing on a specific subset of actions, referred to as gestures. Gestures can be divided into the three categories of body, hand and head gestures~\cite{mitra2007gesture}. The interest in gestures often stems from the need to establish some form of interaction between humans and machines, which can indeed be achieved by interpreting human behaviors~\cite{liu2018gesture}. To reduce the reaction time to observed gestures, sensors with high frame rates have been exploited~\cite{sato2006ohajiki}. Of particular interest is the usage of event cameras, which have been largely used for gesture recognition in recent years~\cite{maro2020event, kaiser2019embodied, shrestha2018slayer, amir2017low, wang2019space, kaiser2020synaptic, ghosh2019spatiotemporal, bi2019graph}.
Some approaches rely on architectures specifically tailored to handle event data, such as spiking neural networks, which however require specialized hardware to be implemented~\cite{o2013real, kaiser2020synaptic, shrestha2018slayer}.
Most approaches, however, in order to exploit traditional computer vision algorithms, adopt an event aggregation strategy that allows the conversion of streams of asynchronous events into a set of synchronous frames. Most of these approaches, though, perform a temporal quantization in the form of histograms~\cite{ghosh2019spatiotemporal} or event subsampling~\cite{kaiser2019embodied}. To avoid information loss, the bins into which events are quantized can be shrunk, with the side effect of generating a large amount of data that has to be processed. Differently from these works, we propose an aggregation strategy that is lossless up to an arbitrarily small time interval. Our proposed approach in fact compacts several representations into a single frame, allowing us to generate less data without discarding information.
\section{Event Representation}
\label{sec:method}
Events generated by an event camera are temporally and spatially localized respectively by a timestamp $t$ and pixel coordinates $x$ and $y$. Each event is also associated to a polarity $p\in \{-1, +1\}$, indicating the sign of the pixel illumination change. The output of an event camera is therefore a stream of tuples $E=(t, x, y, p)$.
To make events interpretable by standard Computer Vision algorithms, they must be aggregated into frames. In general, an aggregation algorithm is a function that maps asynchronous events into a stream of synchronous frames.
Each generated frame $f^i$ aggregates all the events in the interval $[t^i; t^i + \Delta t ]$ spanning from an initial timestamp $t^i$ and covering a temporal extent $\Delta t$, known as accumulation time.
\subsection{Temporal Binary Representation}
Given a fixed $\Delta t$, we build an intermediate binary representation $b^i$ by simply checking the presence or absence of an event for each pixel during the accumulation time. The value in position $(x,y)$ is obtained as $b^i_{x,y} = \mathds{1}(x,y)$, where $\mathds{1}(x,y)$ is an indicator function returning 1 if at least one event is present in position $(x,y)$ during the $i$-th accumulation interval and 0 otherwise.
We then consider $N$ temporally consecutive binary representations by stacking them together into a tensor $B \in \mathbb{R}^{H \times W \times N}$. Each pixel can be considered as a binary string of $N$ digits $[b^0_{x,y}~ b^1_{x,y}~ ...~ b^{N-1}_{x,y}]$ with the most significant digit corresponding to the most recent event. We then convert each binary string into a decimal number, as shown in Fig. \ref{fig:binaryevent}. This procedure allows us to compact the representation of $N$ consecutive accumulation times into a single frame without any loss of information. The frame is then normalized in $[0, 1]$ by dividing its values by the maximum attainable value $2^N-1$.
We refer to this event representation as Temporal Binary Representation (TBR).
Compared to standard event aggregation strategies that generate a single frame for each $\Delta t$, TBR reduces the memory footprint by a factor of $N$. This also leads to less data to be processed by Computer Vision algorithms, enabling time-constrained applications. At the same time, the accumulation time can be significantly reduced to capture events at finer temporal scales, without increasing the total number of frames.
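A short NumPy sketch of this construction is given below; the function name and argument layout are purely illustrative, events are assumed to be provided as arrays of timestamps and integer pixel coordinates, polarity is ignored (only the presence of an event matters, as in the indicator function above), and the decimal values are normalized by their maximum $2^N-1$.
\begin{verbatim}
import numpy as np

def temporal_binary_representation(t, x, y, H, W,
                                   delta_t, N):
    # t, x, y: 1-D arrays of event timestamps and
    # integer pixel coordinates (polarity ignored)
    t = t - t.min()
    # index of the intermediate binary representation
    # in which each event falls
    idx = (t // delta_t).astype(np.int64)
    n_bins = int(idx.max()) + 1
    n_frames = int(np.ceil(n_bins / N))
    b = np.zeros((n_frames * N, H, W), dtype=np.uint8)
    b[idx, y, x] = 1      # presence/absence of events
    b = b.reshape(n_frames, N, H, W)
    # binary-to-decimal conversion: the most recent
    # interval is the most significant digit
    w = (2 ** np.arange(N)).reshape(1, N, 1, 1)
    frames = (b * w).sum(axis=1).astype(np.float32)
    return frames / (2 ** N - 1)  # values in [0, 1]
\end{verbatim}
For instance, with an accumulation time of $1$\,ms and $N=8$, a single frame spans $8$\,ms of the stream while still resolving the presence of events at a $1$\,ms granularity.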
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{img/binary_event}
\caption{Temporal Binary Representation. Events are first stacked together into intermediate binary representations which are then grouped into a single frame thanks to a binary to decimal conversion.}
\label{fig:binaryevent}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\columnwidth]{img/slow_concat}
\includegraphics[width=0.9\columnwidth]{img/fast_concat}\\
\includegraphics[width=0.9\columnwidth]{img/left_armroll_concat}
\includegraphics[width=0.9\columnwidth]{img/right_armroll_concat}\\
\caption{Samples from the MICC-Event Gesture Dataset. Slow and fast execution of the action \textit{air drum} (first row) and different scale and orientation of the action \textit{arm roll} (second row). A 1 second snippet is shown for each sample, where events are color-coded according to the timestamps from blue (start - 0s) to yellow (end - 1s). The actors are shown both frontal (left) and sideways (right).}
\label{fig:recordings}
\end{figure*}
\section{Model}
\label{sec:model}
We apply our Temporal Binary Representation for event camera data to the task of Action Recognition. To process frames, we use two different architectures.
First, we combine a Convolutional Neural Network to extract spatial features with a Recurrent Neural Network to process sequences of frames. As the feature extractor, we train an AlexNet~\cite{krizhevsky2012imagenet}, replacing the final fully connected layers with a single layer of 512 neurons. The output for each frame in the sequence is directly fed to a Long Short-Term Memory (LSTM) network with 2 layers, each with hidden dimension 256. Finally, a fully connected layer with softmax activation performs the classification.
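A possible PyTorch sketch of this architecture is given below; the layer sizes follow the description above, while the remaining details (single-channel event frames replicated to three channels, recent torchvision API) are our own assumptions rather than the authors' code.
\begin{verbatim}
import torch.nn as nn
import torchvision

class AlexNetLSTM(nn.Module):
    def __init__(self, num_classes=11, hidden_dim=256):
        super().__init__()
        backbone = torchvision.models.alexnet(weights=None)
        self.features, self.avgpool = backbone.features, backbone.avgpool
        self.fc = nn.Linear(256 * 6 * 6, 512)        # single 512-neuron layer
        self.lstm = nn.LSTM(512, hidden_dim, num_layers=2, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, frames):                       # (batch, seq, 1, 227, 227)
        b, t = frames.shape[:2]
        x = frames.flatten(0, 1).repeat(1, 3, 1, 1)  # replicate the event channel
        x = self.avgpool(self.features(x)).flatten(1)
        x = self.fc(x).view(b, t, -1)
        out, _ = self.lstm(x)
        return self.classifier(out[:, -1])           # logits; softmax in the loss
\end{verbatim}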
The second model that we adopt is the Inception 3D model \cite{carreira2017quo}, a state-of-the-art architecture widely adopted with RGB data for action recognition. Based on Inception-V1 \cite{szegedy2015going}, the model relies on inflated 3D convolutions, adding a third dimension to filters and pooling kernels to learn spatio-temporal feature extractors. The model originally has two separate streams for RGB and Optical Flow data. Here we simply remove one branch and retrain the model with event camera data aggregated with TBR.
To process videos, we follow two different approaches, depending on the network. For the AlexNet+LSTM model we simply feed the whole sequence of frames to the model and collect the final output. With Inception 3D instead, we use as input non-overlapping blocks of $F$ frames stacked together, which are independently evaluated. To provide a final classification for the whole video, we adopt a majority-voting strategy among predictions for each block.
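The block-wise evaluation can be summarized by the following sketch; the \texttt{model} callable returning per-class scores is a placeholder, not the actual inference code.
\begin{verbatim}
import numpy as np

def classify_video(model, frames, block_size):
    """Classify non-overlapping blocks of `block_size` frames independently
    and majority-vote the per-block predictions."""
    votes = []
    for start in range(0, len(frames) - block_size + 1, block_size):
        scores = model(frames[start:start + block_size])
        votes.append(int(np.argmax(scores)))
    return max(set(votes), key=votes.count)   # majority vote
\end{verbatim}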
\section{Datasets}
\label{sec:dataset}
We train our model on the IBM DVS128 Gesture Dataset~\cite{amir2017low}. The dataset contains a total of 1342 hand gestures with a variable duration spanning from approximately 2 to 18 seconds (6 seconds on average). Gestures are divided into 10 classes plus an additional random class for unknown gestures. Each of these actions is performed by 29 subjects under different illumination conditions (natural, fluorescent and LED lights). The data is acquired using a DVS128 camera, i.e. an event camera with a sensor size of $128 \times 128$ pixels~\cite{lichtsteiner2006128}. We follow the split proposed by the authors, comprising 23 subjects for training and 6 for validation.
To increase the variability of the DVS128 Gesture Dataset, we recorded an additional test benchmark using a Prophesee GEN 3S VGA-CD event camera\footnote{https://www.prophesee.ai/event-based-evk/}. The camera has a sensor with a higher resolution of $640 \times 480$ pixels (VGA). The recorded actions still belong to the 11 classes of the DVS128 dataset but are performed under more challenging conditions. In particular, the actors were asked to perform the actions at different speeds, in order to demonstrate the capacity of event cameras to capture high-speed movements. In addition, the actions have been recorded at different scales and camera orientations, and also under uneven illumination, which is likely to cast shadows on the body and the surroundings, generating spurious events.
The dataset was recorded by 7 actors of different ages, heights and genders, for a total of 231 videos. All the videos are used for testing, with the DVS128 Gesture Dataset still serving as the training set.
We refer to the newly collected data as the MICC-Event Gesture Dataset, which will be released upon publication.
In Fig.~\ref{fig:recordings} a few samples from the dataset are shown, highlighting the different execution speeds, scales and orientations at which actions are recorded.
\section{Training}
\label{sec:training}
We train the models using the SGD optimizer with momentum. We use a learning rate equal to 0.01, which is then decreased to 0.001 after 25 epochs. As loss we adopt the Binary Cross-Entropy Loss, regularized with weight decay. Overall, the training of Inception 3D took 13 hours on an NVIDIA Titan Xp, while AlexNet+LSTM required approximately 30 hours.
For the DVS128 Gesture Dataset, to make the frames compatible with the input layer of the models, we apply zero-padding up to $227\times227$ for AlexNet+LSTM and $224 \times224$ for Inception 3D. For the MICC-Event Gesture Dataset instead, which is recorded at the higher resolution of $640 \times 480$, we perform a central crop of $350 \times 350$ pixels and then reshape it to $128 \times 128$ to match the size of DVS128. The reshaping is done with Nearest Neighbor interpolation to avoid unwanted artifacts that may introduce noise in the event representation.
Frame values are normalized in $[-1; 1]$ before being fed to the models.
During training we also perform data augmentation applying random scaling, translation and rotation.
\section{Experiments}
\label{sec:experiments}
In Tab.~\ref{tab:dvs128} we report the results on the DVS128 Gesture Dataset for the two models AlexNet+LSTM and Inception 3D, trained with frames generated by our Temporal Binary Representation. The results are compared with state of the art approaches. Following prior work, we report the classification accuracy both excluding and including the \textit{Other Gesture} class, respectively referred to as "10 classes" and "11 classes".
In our models, events are aggregated with the proposed Temporal Binary Representation, stacking $N=8$ binary representations with an accumulation time $\Delta t=2.5 ms$. Therefore, we use an 8 bit representation for each pixel, covering 20 ms with each frame. It is important to notice that this allows the model to observe events without any loss of information up to a precision of 2.5 ms, even though a single frame stores data covering a time interval 8 times larger.
Since the Inception 3D model takes as input chunks of videos as a tensor of temporally stacked frames, we feed the model chunks of 500 ms, i.e. chunks of 25 frames encoded with TBR. With classic event aggregation strategies using the same $\Delta t$ of 2.5 ms, this would lead to 200 frames per chunk, considerably increasing the computational burden.
Overall, the Inception 3D model achieves the best results, with an improvement of approximately 2\% with respect to AlexNet+LSTM. Interestingly, both our architectures are capable of obtaining a perfect classification of the \textit{Other Gesture} class, making the accuracy in the 11 classes setting higher than in the 10 classes one. This behavior is the opposite of the baselines, for which including the \textit{Other Gesture} class consistently lowers the accuracy.
\begin{table}[]
\caption{Results on the DVS128 Gesture Dataset.}
\label{tab:dvs128}
\centering
\begin{tabular}{l|c|c}
& 10 classes & 11 classes \\ \hline
Time-surfaces~\cite{maro2020event} & 96.59 & 90.62 \\
SNN eRBP\cite{kaiser2019embodied} & - & 92.70 \\
Slayer~\cite{shrestha2018slayer} & - & 93.64 \\
CNN~\cite{amir2017low} & 96.49 & 94.59 \\
Space-time clouds~\cite{wang2019space} & 97.08 & 95.32 \\
DECOLLE~\cite{kaiser2020synaptic} & - & 95.54 \\
Spatiotemporal filt.~\cite{ghosh2019spatiotemporal} & - & 97.75 \\
RG-CNN~\cite{bi2019graph} & - & 97.20 \\ \hline
Ours - AlexNet+LSTM & 97.50 & 97.73 \\
Ours - Inception3D & \textbf{99.58} & \textbf{99.62}
\end{tabular}
\end{table}
To better assess the benefits of adopting our Temporal Binary Representation, we report results on the MICC-Event Gesture Dataset. We use the whole dataset for testing the Inception 3D model, which is trained on DVS128. To provide a comparison with other approaches, we have trained 2 baseline variants using event aggregation strategies from the literature: \textit{Polarity}~\cite{nguyen2019real} and \textit{Surface of Active Events}~\cite{mueggler2017fast}.
The \textit{Polarity}~\cite{nguyen2019real} approach simply assigns a different value to events with different polarities. Therefore, the final representation is an image $I_p$, where each pixel $(x,y)$ is given by:
\begin{equation}
I_{p}(x,y) =
\begin{cases}
0, \hspace{15px} \text{if event polarity is negative}\\
0.5, \hspace{7px} \text{if no events happen in $\Delta$t}\\
1, \hspace{15px} \text{if event polarity is positive}
\end{cases}
\end{equation}
If multiple events are detected in the accumulation time, the most recent one is considered.
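In code, this baseline can be sketched as follows; events are assumed to be sorted by timestamp and given as $(t, x, y, p)$ rows, and the function is only an illustration of the aggregation rule.
\begin{verbatim}
import numpy as np

def polarity_frame(events, t_start, delta_t, height, width):
    frame = np.full((height, width), 0.5, dtype=np.float32)  # 0.5: no events
    mask = (events[:, 0] >= t_start) & (events[:, 0] < t_start + delta_t)
    for _, x, y, p in events[mask]:     # later events overwrite earlier ones
        frame[int(y), int(x)] = 1.0 if p > 0 else 0.0
    return frame
\end{verbatim}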
The \textit{Surface of Active Events} (SAE)~\cite{mueggler2017fast} instead measures, for each pixel, the time between the last observed event and the beginning of the accumulation time $t_0$. The values are then normalized between 0 and 255, similarly to TBR with 8 bits. Polarity is discarded. The representation $I_{SAE}$ is obtained as:
\begin{equation}
I_{SAE}(x, y) = 255 \times \frac{t_p - t_0}{\Delta t}
\end{equation}
where $t_p$ is the timestamp of the most recent event observed at pixel $(x, y)$ within the accumulation time.
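A corresponding sketch, again assuming $(t, x, y, p)$ rows and not taken from the original implementations, is:
\begin{verbatim}
import numpy as np

def sae_frame(events, t_start, delta_t, height, width):
    frame = np.zeros((height, width), dtype=np.float32)
    mask = (events[:, 0] >= t_start) & (events[:, 0] < t_start + delta_t)
    for t, x, y, _ in events[mask]:     # polarity is discarded
        frame[int(y), int(x)] = max(frame[int(y), int(x)], t - t_start)
    return 255.0 * frame / delta_t      # normalize to [0, 255]
\end{verbatim}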
Samples using TBR, Polarity and SAE are shown in Fig.~\ref{fig:accumulation_strategies}.
\begin{figure}[!t]
\centering
\includegraphics[width=0.3\columnwidth]{img/jet_n=8/D-T=2.5ms_jet/02/000.png}
\includegraphics[width=0.3\columnwidth]{img/Others/Polarity/D-T=2.5ms_jet/02/000.png}
\includegraphics[width=0.3\columnwidth]{img/Others/SAE/D-T=2.5ms_jet/02/000.png} \\ \medskip
\includegraphics[width=0.3\columnwidth]{img/jet_n=8/D-T=2.5ms_jet/08/000.png}
\includegraphics[width=0.3\columnwidth]{img/Others/Polarity/D-T=2.5ms_jet/08/000.png}
\includegraphics[width=0.3\columnwidth]{img/Others/SAE/D-T=2.5ms_jet/08/000.png}
\caption{Events aggregated with our Temporal Binary Representation (left), Polarity~\cite{nguyen2019real} (middle) and Surface of Active Events~\cite{mueggler2017fast} (right). All three representations are made using an accumulation time $\Delta t=2.5 ms$.}
\label{fig:accumulation_strategies}
\end{figure}
\begin{table}[t]
\caption{Results on the DVS128 Gesture Dataset and the MICC-Event Gesture Dataset for Inception 3D trained with three different aggregation strategies: TBR (ours), Polarity~\cite{nguyen2019real} and SAE~\cite{mueggler2017fast}.}
\label{tab:micc-event}
\begin{tabular}{l|c|c|c}
& TBR (ours) & Polarity & SAE \\ \hline
DVS128 Gesture Dataset & \textbf{99.62} & 98.86 & 98.11 \\
MICC-Event Gesture Dataset & \textbf{73.16} & 68.40 & 70.13 \\
\end{tabular}
\end{table}
\begin{figure*}[t]
\centering
\begin{tabular}{ccccc}
$\Delta t=1 ms$ & $\Delta t=2.5 ms$ & $\Delta t=5 ms$ & $\Delta t=10 ms$ & $\Delta t=20 ms$ \\
\includegraphics[width=0.15\textwidth]{img/jet_n=8/D-T=1ms_jet/03/000.png} &
\includegraphics[width=0.15\textwidth]{img/jet_n=8/D-T=2.5ms_jet/03/000.png} &
\includegraphics[width=0.15\textwidth]{img/jet_n=8/D-T=5ms_jet/03/000.png} &
\includegraphics[width=0.15\textwidth]{img/jet_n=8/D-T=10ms_jet/03/000.png} &
\includegraphics[width=0.15\textwidth]{img/jet_n=8/D-T=20ms_jet/03/000.png} \\
\includegraphics[width=0.15\textwidth]{img/jet_n=8/D-T=1ms_jet/07/000.png} &
\includegraphics[width=0.15\textwidth]{img/jet_n=8/D-T=2.5ms_jet/07/000.png} &
\includegraphics[width=0.15\textwidth]{img/jet_n=8/D-T=5ms_jet/07/000.png} &
\includegraphics[width=0.15\textwidth]{img/jet_n=8/D-T=10ms_jet/07/000.png} &
\includegraphics[width=0.15\textwidth]{img/jet_n=8/D-T=20ms_jet/07/000.png} \\
\includegraphics[width=0.15\textwidth]{img/jet_n=8/D-T=1ms_jet/10/000.png} &
\includegraphics[width=0.15\textwidth]{img/jet_n=8/D-T=2.5ms_jet/10/000.png} &
\includegraphics[width=0.15\textwidth]{img/jet_n=8/D-T=5ms_jet/10/000.png} &
\includegraphics[width=0.15\textwidth]{img/jet_n=8/D-T=10ms_jet/10/000.png} &
\includegraphics[width=0.15\textwidth]{img/jet_n=8/D-T=20ms_jet/10/000.png} \\
\end{tabular}
\caption{Temporal Binary Representations with different accumulation times $\Delta t$ with a number of bits $n=8$. Each frame represents all the events in the interval $[ 0; n \times \Delta t ]$. Three different gestures are shown: \textit{Right Hand Clockwise} (top); \textit{Arm Roll} (middle); \textit{Other Gesture} (bottom). Pixels are color-coded according to intensity, from 0 (blue - no events) to 255 (red - an event registered for each bit of the representation).}
\label{fig:deltat_fig}
\end{figure*}
In Tab.~\ref{tab:micc-event} we show the results obtained by Inception 3D trained with the three different aggregation strategies. All three strategies are used with an accumulation time $\Delta t=2.5 ms$. We also report the results on the original DVS128 Gesture Dataset test set obtained by our model with the baseline aggregation strategies. Interestingly, on DVS128 the three variants still obtain higher performance than the existing methods from the literature reported in Tab.~\ref{tab:dvs128}. This confirms the choice of Inception 3D, which proves to be suitable for the task of action/gesture recognition using event data.
The results on the MICC-Event Gesture Dataset are overall much lower due to the challenging scenarios that we have collected. However, the gap between the proposed aggregation strategy and the baselines increases considerably, suggesting that the Temporal Binary Representation represents event data more effectively. At the same time, since we are using $N=8$ bits, TBR generates 8 times less data to be processed, as $N$ frames are losslessly condensed into a single representation.
\section{Ablation Studies}
\label{sec:ablation}
We perform a series of ablation studies, showing the performance of the proposed method varying the parameters of the Temporal Binary Representation strategy.
In particular, we observe how the accuracy of the system is affected when varying the accumulation time $\Delta t$, the number of bits used for the binary representation and the length of the video chunk fed to the Inception 3D model.
\subsection{Accumulation time}
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{img/D-T}
\caption{Accuracy of Inception 3D on the DVS 128 Gesture Dataset, varying the accumulation time $\Delta t$. The best result from the state of the art~\cite{ghosh2019spatiotemporal} is also shown as a reference.}
\label{fig:deltat}
\end{figure}
Varying the accumulation time $\Delta t$, we can adjust the temporal quantization performed by TBR. Higher accumulation times lead to more compact representations, which however carry less information. It can be seen from Fig.~\ref{fig:deltat} that this information loss comes with a drop in accuracy for accumulation times larger than 2.5 ms. Interestingly, lowering $\Delta t$ below this threshold does not bring any improvement for the task at hand. In the plot, the best result from the state of the art~\cite{ghosh2019spatiotemporal} is shown as a reference.
Fig.~\ref{fig:deltat_fig} shows samples of Temporal Binary Representations for different accumulation times. Especially for sufficiently high $\Delta t$, both the spatial and temporal nature of the encoding is clearly visible.
\subsection{Number of bits}
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{img/n.pdf}
\caption{Accuracy of Inception 3D on the DVS 128 Gesture Dataset, varying the number of bits for the Temporal Binary Representation. The best result from the state of the art~\cite{ghosh2019spatiotemporal} is also shown as a reference.}
\label{fig:n}
\end{figure}
Along with $\Delta t$, the number of bits $N$ used for the proposed Temporal Binary Representation defines how much information gets condensed into a single frame. Fig.~\ref{fig:n} shows the accuracy of Inception 3D on the DVS128 Gesture Dataset using $N \in \{4, 8, 16\}$. Similarly to $\Delta t$, when $N$ becomes too small, the accuracy of the model saturates. Throughout the paper we have taken $N=8$ bits as the reference for building our representations, since it offers a good trade-off between accuracy and data compactness. Furthermore, the choice of $N=8$ simplifies data storage, since frames can be saved as 8-bit unsigned integer grayscale images with lossless compression.
\subsection{Chunk length}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{img/length}
\caption{Accuracy of Inception 3D on the DVS 128 Gesture Dataset, varying the chunk size fed to Inception 3D. The best result from the state of the art~\cite{ghosh2019spatiotemporal} is also shown as a reference.}
\label{fig:chunk}
\end{figure}
Here we vary the length of the chunks fed to Inception 3D. Since the model exploits 3D inflated convolutions, it can process multiple frames concatenated together, therefore taking into account the temporal dimension. In the case of TBR, the temporal dimension is already encoded covering a timespan of $N\times\Delta t$.
By stacking frames together we extend the observation timespan by a factor equal to the number of frames. This setting is equivalent to the one adopted in \cite{ghosh2019spatiotemporal}, where the classifier performs majority voting after having observed several chunks of various dimensions.
In Fig.~\ref{fig:chunk}, we report the results for both methods, varying the chunk length from 100 ms to 500 ms. For our Temporal Binary Encoding we use $\Delta t=2.5 ms$ and $N=8$, hence covering with each frame a temporal interval of 20 ms.
The accuracy of the system improves when the chunk length increases, up to 500 ms. We did not observe significant improvements when increasing it further by adding more frames. It has to be noted however that increasing the chunk length will also increase the latency of the model, since a longer part of the gesture needs to be observed before emitting the first classification.
\section{Conclusions}
\label{sec:conclusions}
In this paper we have presented an accumulation strategy called Temporal Binary Representation for converting the output of event cameras from raw events to frames, making them processable by Computer Vision algorithms. The proposed approach generates highly compact data, thanks to a lossless conversion of intermediate binary representations into a single decimal one. The effectiveness of the proposed approach has been validated on the commonly used DVS128 Gesture Dataset, reporting state-of-the-art results. In addition, a new test benchmark for event-based gesture recognition has been collected and will be publicly released.
\section*{Acknowledgments}
This work was partially supported by the Italian MIUR within PRIN 2017, Project Grant 20172BH297: I-MALL - improving the customer experience in stores by intelligent computer vision.
\bibliographystyle{IEEEtran}
\section{}\label{app:alpha}
\begin{theorem}
Partitioning graph $G(V, E)$ into $p$ partitions by our EBV algorithm, the upper bound of the edge imbalance factor is $1 + \frac{p-1}{|E|} \left(1 + \left\lfloor \frac{2|E|}{\alpha p} + \frac{\beta}{\alpha} |E| \right\rfloor\right)$.
\end{theorem}
\begin{proof}
For the sake of simplicity, we denote $Eva^m_{(u, v)}(i)$, $e_{count}^m[i]$, $v_{count}^m[i]$ and $keep^m[i]$ as the values of $Eva_{(u, v)}(i)$, $e_{count}[i]$, $v_{count}[i]$ and $keep[i]$ before assigning the $m^{th}$ edge, where $m \in [1, |E|]$.
Specifically, $e_{count}^{|E| + 1}[i]$ is denoted as $|E_i|$.
Let the $m^{th}$ edge $(u, v)$ be assigned to subgraph $i$ ($1 \le i \le p$) by algorithm~\ref{alg:greedy}.
By line $15$ of algorithm~\ref{alg:greedy},
\begin{equation}
\label{equ:inequality_0}
Eva^m_{(u,v)}(i) - Eva^m_{(u,v)}(j) \le 0
\end{equation}
for any $j \in [1, p], i \ne j$.
Substituting equation (\ref{equ:eva}) into inequality (\ref{equ:inequality_0}), we obtain
\begin{equation}
\label{equ:inequality}
\begin{aligned}
\alpha \frac{e_{count}^m[i] - e_{count}^m[j]}{|E| / p} \le& \beta \frac{v_{count}^m[j] - v_{count}^m[i]}{|V| / p} \\
+ &\mathbb{I}(u \notin keep^m[j]) +\mathbb{I}(v \notin keep^m[j]) \\
- &\mathbb{I}(u \notin keep^m[i]) - \mathbb{I}(v \notin keep^m[i]).
\end{aligned}
\end{equation}
Indicator function $\mathbb{I}(State)$ is a boolean function. Therefore, the upper bound of $\mathbb{I}(u \notin keep^m[j]) + \mathbb{I}(v \notin keep^m[j]) - \mathbb{I}(u \notin keep^m[i]) - \mathbb{I}(v \notin keep^m[i])$ is $2$.
Meanwhile, since $0 \le v_{count}^m[i], v_{count}^m[j] \le |V|$, $|v_{count}^m[i] - v_{count}^m[j]| \le |V|$.
Applying these inequalities to equation (\ref{equ:inequality}), the following equation
\begin{equation}
\label{equ:inequality_2}
\begin{aligned}
e_{count}^m[i] - e_{count}^m[j] \le \frac{2|E|}{\alpha p} + \frac{\beta}{\alpha} |E|
\end{aligned}
\end{equation}
holds.
Since $e_{count}^m[i]$ and $e_{count}^m[j]$ are integers, equation (\ref{equ:inequality_2}) can be rewritten as
\begin{equation}
\label{equ:inequality_3}
\begin{aligned}
e_{count}^m[i] - e_{count}^m[j] \le \lfloor \frac{2|E|}{\alpha p} + \frac{\beta}{\alpha} |E| \rfloor .
\end{aligned}
\end{equation}
\begin{lemma}
For any $i, j$ satisfy $1 \le i \ne j \le p$, $e_{count}^m[i] - e_{count}^m[j] \le 1 + \lfloor \frac{2|E|}{\alpha p} + \frac{\beta}{\alpha} |E| \rfloor$ holds for any $m \in [1, |E| + 1]$.
\end{lemma}
\begin{proof}
If there exists any $i$, $j$ such that $e_{count}^m[i] - e_{count}^m[j] \ge 1 + \lfloor \frac{2|E|}{\alpha p} + \frac{\beta}{\alpha} |E| \rfloor$, equation (\ref{equ:inequality_3}) indicates that the new edge will not be assigned to subgraph $i$.
Therefore, for any $m \in [1, |E|]$
\begin{equation}
e_{count}^{m+1}[i] - e_{count}^{m+1}[j] \le e_{count}^m[i] - e_{count}^m[j]
\end{equation}
when $e_{count}^m[i] - e_{count}^m[j] \ge 1 + \lfloor \frac{2|E|}{\alpha p} + \frac{\beta}{\alpha} |E| \rfloor$.
Besides, for any $i \in [1, p]$, $e_{count}^1[i] = 0$.
Thus, the lemma can be proved by mathematical induction. For the sake of brevity, we omit the details.
\end{proof}
Since $e_{count}^{|E| + 1}[i] = |E_i|$ and $\sum_{i = 1}^p |E_i| = |E|$,
\begin{equation}
\label{equ:prove_1}
\begin{aligned}
\sum_{j = 1}^p (|E_i| - |E_j|) = & p \times |E_i| - |E|
\end{aligned}
\end{equation}
for any $i$.
By lemma 1,
\begin{equation}
\label{equ:prove_2}
\begin{aligned}
\sum_{j = 1}^p (|E_i| - |E_j|)& = \sum_{j = 1, j \ne i}^p (|E_i| - |E_j|) \\
& \le (p -1) \times (1 + \lfloor \frac{2|E|}{\alpha p} + \frac{\beta}{\alpha} |E| \rfloor).
\end{aligned}
\end{equation}
Substituting equation (\ref{equ:prove_1}) into (\ref{equ:prove_2}),
\begin{equation}
\label{equ:prove_3}
\begin{aligned}
\frac{|E_i|}{|E|/p} \le 1 + \frac{p-1}{|E|} (1 + \lfloor \frac{2|E|}{\alpha p} + \frac{\beta}{\alpha} |E| \rfloor)
\end{aligned}
\end{equation}
for any $i \in [1, p]$.
Thus we have $\frac{\max_{i=1,...,p} |E_{i}|}{|E|/p} \le 1 + \frac{p-1}{|E|} (1 + \lfloor \frac{2|E|}{\alpha p} + \frac{\beta}{\alpha} |E| \rfloor)$.
\end{proof}
\begin{theorem}
Partitioning graph $G(V, E)$ into $p$ partitions by our EBV algorithm, the upper bound of the vertex imbalance factor is $1 + \frac{p-1}{\sum_{j = 1}^p|V_j|} \left(1 + \left\lfloor \frac{2|V|}{\beta p} + \frac{\alpha}{\beta} |V| \right\rfloor\right)$.
\end{theorem}
The proof of theorem 2 adopts the same method used in the proof of theorem 1 with minor modifications.
For the sake of simplicity, we do not present it in this paper.
\section{}\label{app:cal}
Figure~\ref{fig:part} demonstrates the partition results of EBV.
We will show the detailed calculation steps of edge assignments here.
In this case, we set $\alpha$ and $\beta$ as the default value $1$.
First, we calculate the degree of each vertex and sort edges in ascending order by the sum of their two end-vertices' degrees.
\begin{table}[htb]
\caption{Edge order calculation}
\label{tab:sum_degree}
\centering
\begin{tabular}{@{}ccc@{}}
Edge & Sorting index & Sum of end-vertices' degrees \\ \midrule
$(B, C)$ & $1$ & $degree[B] + degree[C] = 2 + 2 = 4$ \\
$(A, E)$ & $2$ & $degree[A] + degree[E] = 5 + 1 = 6$ \\
$(A, F)$ & $3$ & $degree[A] + degree[F] = 5 + 1 = 6$ \\
$(A, D)$ & $4$ & $degree[A] + degree[D] = 5 + 1 = 6$ \\
$(A, B)$ & $5$ & $degree[A] + degree[B] = 5 + 2 = 7$ \\
$(A, C)$ & $6$ & $degree[A] + degree[C] = 5 + 2 = 7$ \\
\bottomrule
\end{tabular}
\end{table}
Table~\ref{tab:sum_degree} shows the order of edges by the sum of end-vertices' degrees.
Next, we calculate the edge assignments in the order given in Table~\ref{tab:sum_degree}.
The evaluation function $Eva_{(u, v)}(i)$ is defined in equation (\ref{equ:eva}).
Detailed calculation steps for EBG are shown in table~\ref{tab:ebg}.
\begin{table}[htb]
\caption{EBV edge assignments \protect\\
Table~\ref{tab:ebg} shows the calculation steps of EBV. The column ``Evaluation function'' is written in the format $(\mathbb{I}(u \notin keep[i]) + \mathbb{I}(v \notin keep[i]) ) + \alpha \frac{e_{count}[i]}{|E| / p} + \beta \frac{v_{count}[i]}{|V|/p}$}
\label{tab:ebg}
\centering
\begin{tabular}{@{}ccccc@{}}\toprule
& & \multicolumn{2}{c}{\emph{Evaluation function}} & \\
\cmidrule(lr){3-4}
Edge & Index & Subgraph $0$ & Subgraph $1$ & Assignment \\ \midrule
$(B, C)$ & $1$ & $2 + \frac{0}{3} + \frac{0}{3} = 2$ & $2 + \frac{0}{3} + \frac{0}{3} = 2$ & $1$ \\
$(A, E)$ & $2$ & $2 + \frac{0}{3} + \frac{0}{3} = 2$ & $2 + \frac{1}{3} + \frac{2}{3} = 3$ & $0$ \\
$(A, F)$ & $3$ & $1 + \frac{1}{3} + \frac{2}{3} = 2$ & $2 + \frac{1}{3} + \frac{2}{3} = 3$ & $0$ \\
$(A, D)$ & $4$ & $1 + \frac{2}{3} + \frac{3}{3} = \frac{8}{3}$ & $2 + \frac{1}{3} + \frac{2}{3} = 3$ & $0$ \\
$(A, B)$ & $5$ & $1 + \frac{3}{3} + \frac{4}{3} = \frac{10}{3}$ & $1 + \frac{1}{3} + \frac{2}{3} = 2$ & $1$ \\
$(A, C)$ & $6$ & $1 + \frac{3}{3} + \frac{4}{3} = \frac{10}{3}$ & $0 + \frac{2}{3} + \frac{3}{3} = \frac{5}{3}$ & $1$ \\
\bottomrule
\end{tabular}
\end{table}
\section{Background and Preliminaries}\label{sec:back}
In this section, we introduce the properties of large-scale power-law graphs and contemporary graph partition algorithms. We also introduce some notations and metrics for the rest of this paper.
\subsection{Large-Scale Power-law Graph}\label{sec:powerlaw}
Processing large-scale real-world graphs is one of the key topics in the parallel graph computation. One of the most notable properties for real-world graphs is the power-law degree distribution~\cite{gonzalez2012powergraph}.
The power-law graph is also called the scale-free graph.
The power-law degree distribution is common for natural graphs such as social networks (Twitter and collaboration networks) and the graph of the World Wide Web.
In the power-law graph, the probability of degree $d$ for a randomly sampled vertex is given by:
\begin{equation}\label{equ:power} \mathbb{P}(degree = d) \propto d^{-\eta} \end{equation}
where the exponent $\eta$ is a positive constant.
The lower $\eta$ is, the more skewed the graph will be.
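As a small numerical illustration of how $\eta$ controls the skew, the tail mass of the normalized distribution can be computed as in the snippet below; the $\eta$ values are those of some of the graphs used later (Table~\ref{tab:dataset}), and the snippet is only illustrative.
\begin{verbatim}
import numpy as np

def powerlaw_pmf(eta, d_max=10**6):
    d = np.arange(1, d_max + 1, dtype=np.float64)
    pmf = d ** (-eta)
    return pmf / pmf.sum()

for eta in (1.87, 2.64, 6.30):          # Twitter, LiveJournal, USARoad
    print(eta, "P(degree > 100) =", powerlaw_pmf(eta)[100:].sum())
\end{verbatim}
The lower $\eta$ is, the heavier the tail, i.e. the larger the probability of sampling a very high-degree vertex.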
\subsection{Graph Partitioners}
Existing graph partition algorithms can be divided into two main categories: edge-cut (vertex partitioning) and vertex-cut (edge partitioning).
Edge-cut method cuts the cross-partition edges and balances the number of vertices in all subgraphs.
Each subgraph needs to maintain the routing messages and the end-vertices on the other side (ghost vertices) of those replicated edges.
On the contrary, vertex-cut method cuts the vertices and balances the number of edges.
The cut vertices (replicated vertices) are also maintained in multiple subgraphs.
The majority of distributed graph computing systems (e.g.~\cite{low2012distributed, malewicz2010pregel, tian2013think, yan2014blogel}) use edge-cut for graph partitioning.
However, vertex-cut has been proposed in~\cite{gonzalez2012powergraph} as a better approach to process graphs with power-law degree distributions.
Thus many current graph partition algorithms are based on the vertex-cut method, such as CVC~\cite{boman2013scalable}, Ginger~\cite{chen2019powerlyra}, NE~\cite{ne} and ADWISE~\cite{ADWISE}.
We also adopt the vertex-cut method for our EBV algorithm.
\subsection{Preliminaries}\label{sec:pre}
Let $G = (V, E)$ be a directed graph, where $V$ denotes the set of vertices and $E$ denotes the set of edges.
An edge is represented as a pair of vertices $(u, v)$ where $u$ is the source vertex and $v$ is the target vertex.
For the undirected graph, we replace each undirected edge with two edges with opposite directions.
Suppose we partition the graph $G$ into $p$ subgraphs, we denote $G_i(V_i, E_i)$, $i \in [1, p]$ as the $i^{th}$ subgraph.
The vertex-cut (edge partitioning) algorithms partition the edge set $E$ into $p$ subsets.
Let $E_1 \cup E_2 \cup \cdots \cup E_p = E$ be the $p$ partitions of $E$, i.e. $E_i \cap E_j = \emptyset$, $\forall i \ne j$.
Here we define $V_i = \{u \mid (u, v) \in E_i \lor (v, u) \in E_i \}$ as the set of vertices covered by $E_i$.
Due to the definition of $V_i$, there may exist some replicated vertices which have the same vertex ID but belong to different subgraphs.
For the edge-cut (vertex partitioning) algorithms, they partition the vertex set $V$.
Let $V_1 \cup V_2 \cup \cdots \cup V_p = V$ and $V_i \cap V_j = \emptyset$ for all $i \ne j$, and define $E_i = \{(u, v) \mid u \in V_i \lor v \in V_i \}$. The definition of $E_i$ here means that there exist some replicated edges such that $E_i \cap E_j \ne \emptyset$.
Further, we introduce three metrics: edge imbalance factor, vertex imbalance factor and replication factor.
These metrics are also widely used in~\cite{boman2013scalable,chen2019powerlyra,xie2014distributed, ne}.
The edge imbalance factor is defined as
\begin{math}
\frac{\max_{i=1,...,p} |E_{i}|}{|E|/p}
\end{math}
, while the vertex imbalance factor is defined as
\begin{math}
\frac{\max_{i=1,...,p} |V_{i}|}{\sum_{i=1}^{p} |V_{i}|/p}
\end{math}
. Both of them are used to measure the balance of partition results.
The replication factor for the vertex-cut algorithm is defined as
\begin{math}
\frac{\sum_{i=1}^{p} |V_{i}|}{|V|}
\end{math}.
However, we have $\sum_{i=1}^{p} |V_{i}| = |V|$ for the edge-cut algorithm.
Thus the definition of the replication factor cannot be directly adapted and we define
\begin{math}
\frac{\sum_{i=1}^{p} |E_{i}|}{|E|}
\end{math} as the replication factor for the edge-cut algorithm.
The replication factor represents the average number of replicas for a vertex or edge.
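For the vertex-cut case, the three metrics can be computed directly from the partitioned edge sets, as in the following sketch (our own illustration, where \texttt{parts[i]} holds the edge set $E_i$ of subgraph $i$ as a collection of $(u, v)$ pairs).
\begin{verbatim}
def partition_metrics(parts):
    p = len(parts)
    E = sum(len(e) for e in parts)
    vertex_sets = [{v for edge in e for v in edge} for e in parts]
    V_sum = sum(len(vs) for vs in vertex_sets)
    V = len(set().union(*vertex_sets))
    edge_imbalance = max(len(e) for e in parts) / (E / p)
    vertex_imbalance = max(len(vs) for vs in vertex_sets) / (V_sum / p)
    replication_factor = V_sum / V
    return edge_imbalance, vertex_imbalance, replication_factor
\end{verbatim}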
\section{Conclusion and Future Work}\label{sec:conclusion}
In this paper, we present an efficient and balanced vertex-cut graph partition algorithm for the subgraph-centric model. It is devised to improve the performance of frameworks with subgraph-centric model for large-scale power-law graphs.
Our results show that graphs partitioned by EBV have the same level of communication imbalance as Ginger, DBH and CVC, while the total communication volume is on the same scale as NE and METIS for power-law graphs. As a result, the EBV algorithm outperforms Ginger, DBH, CVC, NE and METIS.
Experiments indicate that the EBV algorithm achieves outstanding performance for large-scale power-law graphs.
Based on these experiments and analysis, we conclude that the EBV algorithm has very good potential for processing large-scale power-law graphs far beyond the test examples we presented in this paper.
Our algorithm can be further improved in several aspects.
Firstly, EBV is a sequential and offline partition algorithm. We might need to extend it to the distributed and streaming environment to handle larger graphs.
Secondly, a more complex sorting mechanism can be tried for further improving its performance.
In the future, we plan to apply EBV to distributed graph neural networks (GNN) for processing large graphs.
We also plan to explore other potential optimization strategies to form a partition algorithm which could reduce the total communication volume and the communication imbalance further, while it maintains a reasonable partition overhead.
\section{Efficient and Balanced Vertex-Cut Partition Algorithm}\label{sec:implementation}
\subsection{Overview}
In order to solve the large-scale power-law graph computation problem in subgraph-centric frameworks, we devise a highly scalable, efficient and balanced vertex-cut partition algorithm (EBV).
First, we demonstrate the subgraph-centric, bulk synchronous parallel programming model and workflow as our main test case.
Second, we propose and analyze our EBV algorithm.
Finally, we prove the upper bound of the edge imbalance factor and vertex imbalance factor of the EBV algorithm for general graphs.
\subsection{Subgraph-Centric Bulk Synchronous Parallel Model and Workflow}\label{sec:api}
The subgraph-centric, bulk synchronous parallel (BSP) model~\cite{valiant1990bridging} is one of the most popular programming models for the subgraph-centric approach.
All of the partition algorithms we test are based on this model.
In this model, the whole graph is divided into several subgraphs.
Each subgraph is bound to one worker (process), and each worker handles only one subgraph.
For a fair comparison, we do not use multi-threading technology for acceleration here.
The entire graph processing of subgraph-centric, BSP model is iterative and can be divided into supersteps with three stages: the computation stage (update graph), the communication stage (exchange messages) and the synchronization stage (wait for other workers to complete message exchanging).
In the computation stage, the sequential algorithm takes the current subgraph and receiving messages as input.
Each subgraph is updated by the sequential algorithm.
In the communication stage, only the message sending/receiving operations among the replicated vertices are allowed.
Usually, the messages contain sufficient information for updating the receiving vertices' states.
The synchronization stage is designed as a barrier for separating supersteps.
In this stage, each worker waits for other workers to finish their computation and communication stages.
The term ``bulk synchronous'' is derived from such a process.
\subsection{Efficient and Balanced Vertex-Cut Partition}\label{sec:greedy}
\begin{algorithm}[htb]
\caption{Efficient and Balanced Vertex-Cut Partition Algorithm}
\label{alg:greedy}
\DontPrintSemicolon
\KwIn{Graph $G(V,E)$, the number of subgraphs $p$}
\KwOut{Partition result $part$, $part_{(u,v)}$ is the part assignment for edge $(u,v)$}
\For{$i \in [1, p]$} {
$//$ $keep[i]$ saves the vertex set that the $i^{th}$ subgraph should keep.\;
$keep[i] \gets \emptyset$\;
$//$ $e_{count}[i]$ and $v_{count}[i]$ are the number of edges and vertices of the $i^{th}$ subgraph.\;
$e_{count}[i] \gets 0$, $v_{count}[i] \gets 0$ \;
}
\For{$(u, v) \in E$} {
$//$ Calculate the evaluation function\;
\For{$i \in [1, p]$} {
$Eva[i] \gets 0$\;
\If{$u \notin keep[i]$} {
$Eva[i] \gets Eva[i] + 1$\;
}
\If{$v \notin keep[i]$} {
$Eva[i] \gets Eva[i] + 1$\;
}
$Eva[i] \gets Eva[i] + \alpha \frac{e_{count}[i]}{|E| / p} + \beta \frac{v_{count}[i]}{|V|/p}$
}
$part_{(u,v)} \gets \mathop{\arg\min}\limits_{ i} Eva[i]$\;
$//$ Update variables for further partition\;
$e_{count}[part_{(u,v)}] \gets e_{count}[part_{(u,v)}] + 1$\;
\If{$u \notin keep[part_{(u,v)}]$}{
$v_{count}[part_{(u,v)}] \gets v_{count}[part_{(u,v)}] + 1$\;
}
\If{$v \notin keep[part_{(u,v)}]$}{
$v_{count}[part_{(u,v)}] \gets v_{count}[part_{(u,v)}] + 1$\;
}
$keep[part_{(u,v)}] \gets keep[part_{(u,v)}] \cup \{u, v\}$\;
}
\end{algorithm}
Algorithm~\ref{alg:greedy} describes in details how our EBV partitions the whole graph $G(V, E)$ into $p$ subgraphs.
The EBV algorithm takes the graph $G(V, E)$ and the number of subgraphs $p$ as input and outputs the partition result.
The $keep$, $e_{count}$ and $v_{count}$ are auxiliary variables and updated dynamically.
$keep[i]$ saves the vertex set of the $i^{th}$ subgraph, while $e_{count}[i]$ and $v_{count}[i]$ represent the number of edges and vertices that have been assigned to the $i^{th}$ subgraph.
We abstract an evaluation function
\begin{equation}
\label{equ:eva}
\begin{aligned}
Eva_{(u, v)}(i) =& \mathbb{I}(u \notin keep[i]) + \mathbb{I}(v \notin keep[i]) \\
+& \alpha \frac{e_{count}[i]}{|E| / p} + \beta \frac{v_{count}[i]}{|V|/p}
\end{aligned}
\end{equation}
from this algorithm.
The evaluation function $Eva_{(u, v)}(i)$ is used to measure the benefit of assigning edge $(u, v)$ to subgraph $i$.
The smaller $Eva_{(u, v)}(i)$ is, the more suitable this assignment is.
Our algorithm will assign edge $(u, v)$ to worker $i$ which has the smallest $Eva_{(u, v)}(i)$.
The $\mathbb{I}(State)$ in (\ref{equ:eva}) is an indicator function, which returns $1$ if $State$ is true and $0$ otherwise.
The hyperparameters $\alpha$ and $\beta$ are used to adjust sensitivity for the balance of edges and vertices in the evaluation function.
The larger they are, the more our algorithm focuses on balancing edges and vertices.
For the experiments in Section~\ref{sec:experiments}, we set $1$ as the default value of $\alpha$ and $\beta$.
The evaluation function can be divided into three parts: $\mathbb{I}(u \notin keep[i]) + \mathbb{I}(v \notin keep[i])$, $\alpha \frac{e_{count}[i]}{|E| / p}$ and $\beta \frac{v_{count}[i]}{|V|/p}$.
$\mathbb{I}(u \notin keep[i]) + \mathbb{I}(v \notin keep[i])$ is related to the replication factor, while $\alpha \frac{e_{count}[i]}{|E| / p}$ and $\beta \frac{v_{count}[i]}{|V|/p}$ restrict the edge and vertex imbalance factor respectively.
Thus, EBV can balance the overall communication/computation overhead and workload imbalance, and outperforms traditional graph partition algorithms.
Moreover, as a sequential graph partition algorithm, the quality of results for EBV is naturally affected by the edge processing order.
For offline partition jobs, we sort edges in ascending order by the sum of end-vertices' degrees before the execution of EBV.
Intuitively, this sorting mechanism prevents edges with high-degree end-vertices from being assigned at the early stage.
Since edges with two low-degree vertices are unlikely to share the same end-vertex, they are mainly assigned based on the imbalance constraints: $\alpha \frac{e_{count}[i]}{|E| / p}$ and $\beta \frac{v_{count}[i]}{|V|/p}$.
Thus, they tend to be evenly assigned to each subgraph at the beginning of EBV.
Generally, each subgraph keeps low-degree vertices as seeds within themselves in the early stage and cut few high-degree vertices later.
We verify this hypothesis through experiments in Section~\ref{sec:sorting}.
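For reference, Algorithm~\ref{alg:greedy} together with the degree-sum sorting preprocessing can be sketched as follows; this is a compact Python rendition of the pseudocode, not the implementation used in our experiments.
\begin{verbatim}
from collections import defaultdict

def ebv_partition(edges, num_vertices, p, alpha=1.0, beta=1.0):
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    # sorting preprocessing: ascending sum of end-vertices' degrees
    edges = sorted(edges, key=lambda e: degree[e[0]] + degree[e[1]])

    keep = [set() for _ in range(p)]
    e_count, v_count = [0] * p, [0] * p
    part, num_edges = {}, len(edges)
    for u, v in edges:
        def eva(i):                      # evaluation function Eva_{(u,v)}(i)
            return ((u not in keep[i]) + (v not in keep[i])
                    + alpha * e_count[i] / (num_edges / p)
                    + beta * v_count[i] / (num_vertices / p))
        i = min(range(p), key=eva)
        part[(u, v)] = i
        e_count[i] += 1
        v_count[i] += (u not in keep[i]) + (v not in keep[i])
        keep[i].update((u, v))
    return part
\end{verbatim}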
\subsection{Upper Bound of Edge and Vertex Imbalance Factors}\label{sec:alpha}
In this section, we show that the worst-case upper bound of the edge imbalance factor and the vertex imbalance factor of EBV for general graphs are $1 + \frac{p-1}{|E|} \left(1 + \left\lfloor \frac{2|E|}{\alpha p} + \frac{\beta}{\alpha} |E| \right\rfloor \right)$ and $1 + \frac{p-1}{\sum_{j = 1}^p|V_j|} \left(1 + \left\lfloor \frac{2|V|}{\beta p} + \frac{\alpha}{\beta} |V| \right\rfloor \right)$ respectively. This result means that we can restrict the upper bound of edge and vertex imbalance factors by tuning the hyperparameters $\alpha$ and $\beta$.
\begin{theorem}
Given any graph $G(V, E)$ and any positive integer $p$, partitioning $G$ into $p$ subgraphs by Algorithm~\ref{alg:greedy}, the upper bound of the edge imbalance factor for the partition result is $1 + \frac{p-1}{|E|} \left(1 + \left\lfloor \frac{2|E|}{\alpha p} + \frac{\beta}{\alpha} |E| \right\rfloor \right)$.
\end{theorem}
\begin{proof}
For the sake of simplicity, we denote $Eva^m_{(u, v)}(i)$, $e_{count}^m[i]$, $v_{count}^m[i]$ and $keep^m[i]$ as the values of $Eva_{(u, v)}(i)$, $e_{count}[i]$, $v_{count}[i]$ and $keep[i]$ before assigning the $m^{th}$ edge, where $m \in [1, |E|]$.
Specifically, we denote $e_{count}^{|E| + 1}[i]$ as $|E_i|$, the number of edges of the final subgraph $i$.
Let the $m^{th}$ edge $(u, v)$ be assigned to subgraph $i$ ($1 \le i \le p$) by Algorithm~\ref{alg:greedy}.
By line $15$ of Algorithm~\ref{alg:greedy}, we have
\begin{equation}
\label{equ:inequality_0}
Eva^m_{(u,v)}(i) - Eva^m_{(u,v)}(j) \le 0
\end{equation}
for any $j \in [1, p], i \ne j$.
Substituting (\ref{equ:eva}) into (\ref{equ:inequality_0}), we obtain
\begin{equation}
\label{equ:inequality}
\begin{aligned}
\alpha \frac{e_{count}^m[i] - e_{count}^m[j]}{|E| / p} \le& \beta \frac{v_{count}^m[j] - v_{count}^m[i]}{|V| / p} \\
+ &\mathbb{I}(u \notin keep^m[j]) +\mathbb{I}(v \notin keep^m[j]) \\
- &\mathbb{I}(u \notin keep^m[i]) - \mathbb{I}(v \notin keep^m[i]).
\end{aligned}
\end{equation}
Indicator function $\mathbb{I}(State)$ is a boolean function. Therefore, the upper bound of $\mathbb{I}(u \notin keep^m[j]) + \mathbb{I}(v \notin keep^m[j]) - \mathbb{I}(u \notin keep^m[i]) - \mathbb{I}(v \notin keep^m[i])$ is $2$.
As we have $0 \le v_{count}^m[i], v_{count}^m[j] \le |V|$, $|v_{count}^m[i] - v_{count}^m[j]| \le |V|$, we can apply these inequalities to (\ref{equ:inequality}) to derive
\begin{equation}
\label{equ:inequality_2}
\begin{aligned}
e_{count}^m[i] - e_{count}^m[j] \le \frac{2|E|}{\alpha p} + \frac{\beta}{\alpha} |E| \quad.
\end{aligned}
\end{equation}
Since $e_{count}^m[i]$ and $e_{count}^m[j]$ are integers, (\ref{equ:inequality_2}) can be rewritten as
\begin{equation}
\label{equ:inequality_3}
\begin{aligned}
e_{count}^m[i] - e_{count}^m[j] \le \left\lfloor \frac{2|E|}{\alpha p} + \frac{\beta}{\alpha} |E| \right\rfloor \quad.
\end{aligned}
\end{equation}
To complete the proof, we rely on the following technical Lemma~\ref{lemma}.
\begin{lemma}\label{lemma}
For any graph $G(V, E)$ partitioned into $p$ subgraphs by Algorithm~\ref{alg:greedy}, $e_{count}^m[i] - e_{count}^m[j] \le 1 + \left\lfloor \frac{2|E|}{\alpha p} + \frac{\beta}{\alpha} |E| \right\rfloor$ holds for any integer $i$, $j$ and $m$ satisfying $1 \le i \ne j \le p$ and $m \in [1, |E| + 1]$.
\end{lemma}
\begin{proof}
Consider any $i, j \in [1, p]$ with $i \ne j$ and $m \in [1, |E|]$ such that $e_{count}^m[i] - e_{count}^m[j] > \left\lfloor \frac{2|E|}{\alpha p} + \frac{\beta}{\alpha} |E| \right\rfloor$.
Inequality (\ref{equ:inequality_3}) indicates that the $m^{th}$ edge will not be assigned to subgraph $i$.
Therefore,
\begin{equation}
e_{count}^{m+1}[i] - e_{count}^{m+1}[j] \le e_{count}^m[i] - e_{count}^m[j]
\end{equation}
when $e_{count}^m[i] - e_{count}^m[j] = 1 + \left\lfloor \frac{2|E|}{\alpha p} + \frac{\beta}{\alpha} |E| \right\rfloor$.
Besides, for any $i \in [1, p]$, $e_{count}^1[i] = 0$.
Thus the lemma can be proved by mathematical induction. For the sake of brevity, we omit the details.
\end{proof}
Since $e_{count}^{|E| + 1}[i] = |E_i|$ and $\sum_{i = 1}^p |E_i| = |E|$,
\begin{equation}
\label{equ:prove_1}
\begin{aligned}
\sum_{j = 1}^p (|E_i| - |E_j|) = & p \times |E_i| - |E|
\end{aligned}
\end{equation}
for any $i$.
By Lemma 1,
\begin{equation}
\label{equ:prove_2}
\begin{aligned}
\sum_{j = 1}^p (|E_i| - |E_j|)& = \sum_{j = 1, j \ne i}^p (|E_i| - |E_j|) \\
& \le (p -1) \times \left(1 + \left\lfloor \frac{2|E|}{\alpha p} + \frac{\beta}{\alpha} |E| \right\rfloor \right).
\end{aligned}
\end{equation}
Substitute (\ref{equ:prove_1}) to (\ref{equ:prove_2}),
\begin{equation}
\label{equ:prove_3}
\begin{aligned}
\frac{|E_i|}{|E|/p} \le 1 + \frac{p-1}{|E|} \left(1 + \left\lfloor \frac{2|E|}{\alpha p} + \frac{\beta}{\alpha} |E| \right\rfloor\right)
\end{aligned}
\end{equation}
for any $i \in [1, p]$.
Thus we have $\frac{\max_{i=1,...,p} |E_{i}|}{|E|/p} \le 1 + \frac{p-1}{|E|} \left(1 + \left\lfloor \frac{2|E|}{\alpha p} + \frac{\beta}{\alpha} |E| \right\rfloor \right) $.
\end{proof}
\begin{theorem}
Given any graph $G(V, E)$ and any positive integer $p$, partitioning $G$ into $p$ subgraphs by Algorithm~\ref{alg:greedy}, the upper bound of the vertex imbalance factor for the partition result is $1 + \frac{p-1}{\sum_{j = 1}^p|V_j|} \left(1 + \left\lfloor \frac{2|V|}{\beta p} + \frac{\alpha}{\beta} |V| \right\rfloor \right)$.
\end{theorem}
The proof of Theorem 2 adopts the same method used in the proof of Theorem 1 with minor modifications.
For the sake of simplicity, we do not present it in this paper.
\section{Experiments and Analysis}\label{sec:experiments}
In this section, we compare and analyze the performance of EBV with the Ginger, Degree-Based Hash (DBH), Cartesian Vertex-Cut (CVC), Neighbor Expansion (NE) and METIS in large-scale power-law and non-power-law graphs.
Specifically, we address three major research problems:
\begin{enumerate}[(1)]
\item What is the influence of the total message size and the message imbalance on the parallel graph computation?
\item Are there any graph partition metrics that can reflect the total message size and the message imbalance? If we design a graph partition algorithm that aims to optimize both at the same time, is it better than existing algorithms?
\item Does our sorting preprocessing benefit the EBV algorithm?
\end{enumerate}
\subsection{Experiment Setup and Data}
To answer these questions above, we choose three of the most famous graph algorithms as examples: Single Source Shortest Path (SSSP)~\cite{fredman1987fibonacci}, PageRank (PR)~\cite{page1999pagerank} and Connected Component (CC)~\cite{samet1979connected}.
We also choose four large-scale graphs: USARoad~\cite{USARoad}, LiveJournal~\cite{LiveJournal}, Twitter~\cite{Twitter} and Friendster~\cite{Friend}.
The statistics of these graphs are listed in Table~\ref{tab:dataset}.
Note that LiveJournal, Twitter and Friendster are power-law graphs, while USARoad is not.
The $\eta$ value (Section~\ref{sec:powerlaw}) quantifies how skewed a power-law graph is.
Since we want to analyze the influence of $\eta$, we also provide the $\eta$ value for USARoad according to the definition, although it is not a power-law graph.
\begin{table*}[htbp]
\caption{Statistics of tested graphs}
\label{tab:dataset}
\centering
\begin{tabular}{cccccc}
Graphs &Type & V & E & Average Degree & $\eta$ \\ \midrule
USARoad & Undirected & $23,947,347$ & $\quad\,\,\,58,333,344$ & $\,\,\,2.44$ & $6.30$ \\
LiveJournal & Directed & $\,\,\,4,847,571$ & $\quad\,\,\,68,993,773$ & $14.23$ & $2.64$ \\
Friendster & Undirected & $65,608,366$ & $1,806,067,135$ & $27.53$ & $2.43$ \\
Twitter & Directed & $41,652,230$ & $1,468,365,182$ & $35.25$ & $1.87$ \\
\bottomrule
\end{tabular}
\end{table*}
We test six partition algorithms: the EBV we proposed, Ginger, Degree-Based Hash (DBH), Cartesian Vertex-Cut (CVC), Neighbor Expansion (NE) and METIS.
We refer to their implementation in the subgraph-centric BSP framework DRONE~\cite{wen2018drone} as EBV, Ginger, DBH, CVC, NE and METIS respectively.
Our experiment platform is a 4-node cluster, with each node consisting of 8 Intel Xeon E7-8830 2.13GHz CPUs and 1TB memory.
\subsection{Graph Partition Comparison}\label{sec:partitionercompare}
Here we present the execution time comparison of EBV, Ginger, DBH, CVC, NE and METIS.
We also compare these results with the state-of-the-art frameworks Galois~\cite{Galois} and Blogel~\cite{yan2014blogel} for diversity.
In this comparison, the partition overhead is not included.
Figure~\ref{fig:systemcompare} shows the performance comparison.
We should remark that Blogel employs a multi-source based Graph Voronoi Diagram partitioner to detect the connected components in the partitioning phase, ensuring that each block is ``connected''. In the subsequent CC computing phase, it only merges small connected components into a bigger one without performing the vertex-level computation. For a fair comparison, we add the pre-computing time in the partition phase to the total time of CC for Blogel. Blogel is also excluded from the PR comparison because its PR implementation is not standard and its result is not directly comparable. The interested reader is referred to~\cite{kamvar2003exploiting} for more details.
\begin{figure*}
\centering
\begin{minipage}{0.32\linewidth}
\centerline{\includegraphics[width=1\textwidth]{cc_lj}}
\centerline{CC - LiveJournal}
\end{minipage}
\hfill
\begin{minipage}{0.32\linewidth}
\centerline{\includegraphics[width=1\textwidth]{cc_twitter}}
\centerline{CC - Twitter}
\end{minipage}
\hfill
\begin{minipage}{0.32\linewidth}
\centerline{\includegraphics[width=1\textwidth]{cc_friend}}
\centerline{CC - Friendster}
\end{minipage}\\[0.5mm]
\begin{minipage}{0.32\linewidth}
\centerline{\includegraphics[width=1\textwidth]{pr_lj}}
\centerline{PR - LiveJournal}
\end{minipage}
\hfill
\begin{minipage}{0.32\linewidth}
\centerline{\includegraphics[width=1\textwidth]{pr_twitter}}
\centerline{PR - Twitter}
\end{minipage}
\hfill
\begin{minipage}{0.32\linewidth}
\centerline{\includegraphics[width=1\textwidth]{pr_friend}}
\centerline{PR - Friendster}
\end{minipage}\\[0.5mm]
\begin{minipage}{0.32\linewidth}
\centerline{\includegraphics[width=1\textwidth]{sssp_lj}}
\centerline{SSSP - LiveJournal}
\end{minipage}
\hfill
\begin{minipage}{0.32\linewidth}
\centerline{\includegraphics[width=1\textwidth]{sssp_twitter}}
\centerline{SSSP - Twitter}
\end{minipage}
\hfill
\begin{minipage}{0.32\linewidth}
\centerline{\includegraphics[width=1\textwidth]{sssp_friend}}
\centerline{SSSP - Friendster}
\end{minipage}
\caption{
Cross-system performance comparison of CC, PR and SSSP on various graphs. \protect\\
Figure~\ref{fig:systemcompare} shows the execution time of CC, PR and SSSP on three power-law graphs with different numbers of workers.
The X-axis is the number of workers. While the Y-axis is the execution time.
}
\label{fig:systemcompare}
\end{figure*}
From Figure~\ref{fig:systemcompare}, we can find that EBV performs best in most cases.
More specifically, compared with Ginger, DBH, CVC, NE and METIS, its execution time is reduced by an average of $16.8\%$, $37.3\%$, $25.4\%$, $31.5\%$ and $63.0\%$ respectively.
Although Galois performs well and even outperforms EBV in the PR-LiveJournal, it is limited for larger graphs.
For Blogel, it takes a longer time in our experiments than Galois.
\begin{figure*}
\centering
\begin{minipage}{0.48\linewidth}
\centerline{\includegraphics[width=1\textwidth]{cc_usa}}
\centerline{CC - USARoad}
\end{minipage}
\hfill
\begin{minipage}{0.48\linewidth}
\centerline{\includegraphics[width=1\textwidth]{sssp_usa}}
\centerline{SSSP - USARoad}
\end{minipage}
\caption{Comparison of CC and SSSP over USARoad.\protect\\
Figure~\ref{fig:cc_sssp_usa} shows the execution time of CC and SSSP on USARoad with different workers.
The X-axis is the number of workers. While the Y-axis is the execution time.
}
\label{fig:cc_sssp_usa}
\end{figure*}
We also compare the performance of different frameworks with CC and SSSP on the non-power-law graph (USARoad).
Figure~\ref{fig:cc_sssp_usa} shows the performance comparison of CC and SSSP.
In this case, the performance of METIS is comparable to EBV, Ginger and CVC, which is different from Figure~\ref{fig:systemcompare}.
Moreover, NE achieves the best performance among all partition algorithms.
Based on these experiments, we can conclude that EBV outperforms other partition algorithms in the power-law graphs, and ranks closely to METIS in the non-power-law graph.
In order to understand the mechanisms behind different partition algorithms' performance, we study the computation and communication pattern of individual subgraphs.
Here we choose CC with $4$ workers on LiveJournal as a representative example.
Based on the subgraph-centric BSP programming model and its workflow in Section~\ref{sec:api}, a superstep can be separated into three stages: the computation stage, the communication stage and the synchronization stage.
For the sake of convenience, we record the computation time and the communication time as $comp_i^k$ and $comm_i^k$ respectively, where $i$ indicates the subgraph ID and $k$ indicates the $k^{th}$ superstep.
The $comp$ and $comm$ are the average computation time and communication time over all workers. Thus they are computed as $\sum\limits_{i=1}^{p}\sum\limits_{k=1}^{s}comp_i^k / p $ and $\sum\limits_{i=1}^{p}\sum\limits_{k=1}^{s}comm_i^k / p $, while $p$ and $s$ refer to the total number of workers and supersteps.
We remark that $comp + comm$ is not equal to the total execution time, but it is proportional to the later. Moreover, we define $\Delta C^k$ as $\max\limits_i(comp^k_i + comm^k_i) - \min\limits_i(comp^k_i + comm^k_i)$. $\Delta C^k$ represents the longest synchronization (waiting) time in the $k^{th}$ superstep. To demonstrate the accumulated synchronization time and its relative scale to the total execution time, we use $\Delta C = \sum\limits_{k=1}^{s} \Delta C^k$ as the metrics for the workload balance.
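These quantities can be computed directly from the per-superstep, per-worker timings, as in the sketch below; \texttt{comp[k][i]} and \texttt{comm[k][i]} denote the timings of worker $i$ in superstep $k$, and this data layout is our own assumption rather than part of the framework.
\begin{verbatim}
def breakdown(comp, comm):
    s, p = len(comp), len(comp[0])
    avg_comp = sum(comp[k][i] for k in range(s) for i in range(p)) / p
    avg_comm = sum(comm[k][i] for k in range(s) for i in range(p)) / p
    delta_c = sum(max(comp[k][i] + comm[k][i] for i in range(p))
                  - min(comp[k][i] + comm[k][i] for i in range(p))
                  for k in range(s))
    return avg_comp, avg_comm, delta_c
\end{verbatim}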
\begin{table}
\centering
\caption{Breakdown (seconds) of CC with $4$ workers over LiveJournal.}
\label{tab:balance}
\begin{tabular}{@{}ccccc@{}}\toprule
& $comp$ & $comm$ & $\Delta C$ & Execution time \\ \midrule
EBV & 20.90 & 1.00 & 3.05 & 23.41 \\
Ginger & 22.23 & 1.12 & 3.38 & 25.65 \\
DBH & 23.31 & 1.31 & 7.59 & 27.85 \\
CVC & 24.15 & 1.47 & 8.63 & 30.99 \\
NE & 17.81 & 0.52 & 28.02 & 32.66\\
METIS & 23.28 & 0.49 & 22.70 & 34.66 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[htbp]
\centering
\begin{minipage}{0.48\linewidth}
\centerline{\includegraphics[width=1\textwidth]{sort}}
\centerline{EBV}
\end{minipage}
\hfill
\begin{minipage}{0.48\linewidth}
\centerline{\includegraphics[width=1\textwidth]{ginger}}
\centerline{Ginger}
\end{minipage}
\\
\begin{minipage}{0.48\linewidth}
\centerline{\includegraphics[width=1\textwidth]{DBH}}
\centerline{DBH}
\end{minipage}
\hfill
\begin{minipage}{0.48\linewidth}
\centerline{\includegraphics[width=1\textwidth]{CVC}}
\centerline{CVC}
\end{minipage}
\\
\begin{minipage}{0.48\linewidth}
\centerline{\includegraphics[width=1\textwidth]{NE}}
\centerline{NE}
\end{minipage}
\hfill
\begin{minipage}{0.48\linewidth}
\centerline{\includegraphics[width=1\textwidth]{Metis}}
\centerline{METIS}
\end{minipage}
\caption{The breakdown of CC with $4$ workers over LiveJournal}
\label{fig:balance}
\end{figure}
Table~\ref{tab:balance} shows the detailed breakdown for CC with $4$ subgraphs; EBV has the shortest execution time.
Although the $comp$ of METIS is similar to that of DBH and CVC, and the $comp$ of NE is the shortest of all, the execution times of NE and METIS are much longer.
This is mainly because the $\Delta C$ of NE and METIS is larger than that of the others, which indicates a strong workload imbalance.
To further illustrate this phenomenon, we present the runtime comparison in Figure~\ref{fig:balance}.
From these experiments, we conclude that both the workload balance and the overall computation/communication time play extremely important roles in reducing the execution time.
\subsection{Communication Pattern Analysis}\label{sec:messages}
Here we discuss the communication characteristics of the six graph partition algorithms: EBV, Ginger, DBH, CVC, NE and METIS.
We first focus on the statistical characteristics of the partitioned subgraphs.
Further, we use the number of communication messages of each subgraph as a platform-independent indicator for quantifying the workload balance and the overall communication volume.
We present the characteristics of different graph partition algorithms with three metrics: edge imbalance factor, vertex imbalance factor and replication factor as shown in Section~\ref{sec:pre}.
The edge and vertex imbalance factors imply the computational complexity differences between subgraphs, while the replication factor is proportional to the total number of communication messages between subgraphs.
\begin{table*}[htbp]
\centering
\caption{Partitioning metrics comparison of EBV, Ginger, DBH, CVC, NE and METIS. \protect\\
USARoad, LiveJournal, Friendster and Twitter are partitioned into $12$, $12$, $32$ and $32$ subgraphs respectively.
}
\label{tab:partitionercompare}
\centering
\begin{tabular}{cC{15 pt} C{30 pt}C{30 pt}C{30 pt}C{30 pt}C{30 pt}C{30 pt} C{17 pt}C{19 pt}C{17 pt}C{17 pt}C{17 pt}C{24 pt}}\toprule
& & \multicolumn{6}{c}{Edge Imbalance Factor / Vertex Imbalance Factor} & \multicolumn{6}{c}{Replication Factor} \\
\cmidrule(lr){3-8} \cmidrule(lr){9-14}
Graphs & $\eta$ & EBV & Ginger & DBH & CVC & NE & METIS & EBV & Ginger & DBH & CVC & NE & METIS \\ \midrule
USARoad & 6.30 & 1.00/1.00 & 1.00/1.00 & 1.00/1.00 & 1.00/1.00 & 1.00/1.05 & 1.10/1.01 & 1.29 & 1.68 & 1.92 & 2.27 & 1.01 & 1.00 \\
LiveJournal & 2.64 & 1.00/1.01 & 1.00/1.01 & 1.00/1.00 & 1.00/1.00 & 1.00/2.14 & 2.07/1.03 & 1.80 &2.23 & 3.34 & 3.96 & 1.89 & 1.20 \\
Friendster & 2.43 &1.00/1.00 & 1.04/1.10 & 1.00/1.00 & 1.00/1.00 & 1.00/2.46 & 2.43/1.03 & 4.63 & 5.64 &6.55 & 5.95 & 4.43 & 1.36 \\
Twitter & 1.87 &1.00/1.01 & 1.02/1.03 & 1.00/1.00 & 1.03/1.04 & 1.00/3.64 & 6.44/1.05 & 3.59 & 4.51 & 4.99 & 6.75 & 2.42 & 1.56 \\
\bottomrule
\end{tabular}
\end{table*}
Table~\ref{tab:partitionercompare} shows the comparison of the edge and vertex imbalance factors and the replication factor among all algorithms.
For convenience of analysis, we also provide the $\eta$ value and order the table by $\eta$ in descending order.
Combining this with the $\eta$ value, we find that as $\eta$ decreases (i.e., the graph becomes more skewed), the partition results of NE and METIS become more imbalanced.
For the non-power-law graph (with the largest $\eta$), NE and METIS achieve roughly balanced results.
This phenomenon explains why they perform better on the non-power-law graph, as shown in Figure~\ref{fig:cc_sssp_usa}.
Benefiting from the evaluation function described in Section~\ref{sec:greedy}, EBV outperforms Ginger, DBH and CVC on the replication factor.
Both the edge and vertex imbalance factors of EBV, Ginger, DBH and CVC are almost $1$.
The good performance of EBV on the edge and vertex imbalance factors and the replication factor conforms to its performance in Figure~\ref{fig:systemcompare}.
We further discuss the metrics shown above by combining them with the number of communication messages.
For the sake of simplicity, we focus on the experimental data of CC here.
The numbers of workers for USARoad, LiveJournal, Friendster and Twitter are $12$, $12$, $32$ and $32$, the same as in Table~\ref{tab:partitionercompare}.
Table~\ref{tab:message_sum} shows the total number of communication messages during the computation for CC.
According to these data, NE and METIS produce a small number of messages and lead by a large margin on the non-power-law graph (USARoad).
The total number of EBV's communication messages is always smaller than that of Ginger, DBH and CVC.
This is in accordance with the replication factors of EBV, Ginger, DBH and CVC in Table~\ref{tab:partitionercompare}.
More specifically, EBV reduces the number of messages by $23.7\%$, $23.8\%$, $35.4\%$ and $26.0\%$ relative to Ginger on USARoad, LiveJournal, Friendster and Twitter, respectively.
Overall, the data for the total number of messages in the parallel graph computations confirms the tendency of the replication factor.
\begin{table*}[htbp]
\centering
\caption{
The total number of communication messages on the CC algorithm. \protect\\
This table shows the total number of communication messages between all workers for EBV, Ginger, DBH, CVC, NE and METIS on four graphs.
The numbers in parentheses are the corresponding replication factors in Table~\ref{tab:partitionercompare}.
}
\label{tab:message_sum}
\begin{tabular}{@{}ccccccc@{}}\toprule
Graphs & EBV & Ginger &DBH & CVC & NE & METIS \\ \midrule
USARoad & $4.05 \times 10 ^7$ (1.29) & $5.01 \times 10 ^7$ (1.68) & $5.41 \times 10 ^7$ (1.92) & $1.26 \times 10 ^8$ (2.27) & $3.14 \times 10 ^5$ (1.01) & $1.63 \times 10 ^4$ (1.00) \\
LiveJournal & $1.43 \times 10 ^7$ (1.80) & $1.77 \times 10 ^7$ (2.23) & $1.85 \times 10 ^7$ (3.34) & $2.18 \times 10 ^7$ (3.96) & $8.36 \times 10 ^6$ (1.89) & $8.97 \times 10 ^6$ (1.20) \\
Friendster & $3.45 \times 10 ^8$ (4.63) & $4.67 \times 10 ^8$ (5.64)& $6.97 \times 10 ^8$ (6.55) & $4.51 \times 10 ^8$ (5.95) & $3.31 \times 10 ^8$ (4.43) & $2.64 \times 10 ^8$ (1.36) \\
Twitter & $1.81 \times 10 ^8$ (3.59) & $ 2.28 \times 10 ^8$ (4.51) & $2.39 \times 10 ^8$ (4.99) & $3.11 \times 10 ^8$ (6.75) & $9.98 \times 10 ^7$ (2.42) & $1.52 \times 10 ^8$ (1.56) \\
\bottomrule
\end{tabular}
\end{table*}
Although NE and METIS achieve a lower total communication volume than EBV, EBV still outperforms them on the power-law graphs.
The better performance of EBV relative to Ginger, DBH and CVC can be attributed to its reduced communication.
However, the same argument does not apply to NE and METIS.
Thus, we further analyze the workload balance in terms of communication messages.
To quantify the imbalance of communication messages across workers, we introduce the $max/mean$ metric.
The $max/mean$ metric is the ratio of the maximum number of messages sent by any worker to the average number of messages per worker.
Since the overall execution time is determined by the slowest worker, this metric is more meaningful than the common variance metric.
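To make the metric concrete, the following minimal Python sketch (our own illustration, not part of any of the compared frameworks; the input list is assumed to hold one message count per worker) computes the $max/mean$ ratio:
\begin{verbatim}
def max_mean_ratio(messages_per_worker):
    # messages_per_worker: number of messages sent by each worker.
    mean = sum(messages_per_worker) / len(messages_per_worker)
    return max(messages_per_worker) / mean

# Example: four workers sending 10, 11, 9 and 30 messages give a
# ratio of 2.0, signalling a strong imbalance.
print(max_mean_ratio([10, 11, 9, 30]))
\end{verbatim}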
Table~\ref{tab:message_var} presents the $max/mean$ ratio of the number of messages on the CC algorithm.
The corresponding edge and vertex imbalance factors are also presented in parentheses.
Combining the $max/mean$ metric with the imbalance factors, we see that the $max/mean$ values of EBV, Ginger, DBH and CVC are almost $1$ on all graphs, just like their imbalance factors.
Meanwhile, the $max/mean$ values of NE and METIS vary considerably and are strongly affected by the vertex and edge imbalance factors.
In most cases, the $max/mean$ value increases as the edge or vertex imbalance factor gets larger.
This reveals the reason for the inefficiency of NE and METIS on power-law graphs.
\begin{table*}[htb]
\centering
\caption{The $max/mean$ ratio of the number of messages on the CC algorithm \protect\\
This table shows the $max/mean$ ratio of the number of messages among all workers for EBV, Ginger, DBH, CVC, NE and METIS.
We also provide the corresponding edge and vertex imbalance factors (Table~\ref{tab:partitionercompare}) in parentheses.
}
\label{tab:message_var}
\begin{tabular}{@{}ccccccc@{}}\toprule
Graphs & EBV & Ginger & DBH & CVC & NE & METIS \\ \midrule
USARoad & 1.002 (1.00/1.00) & 1.001 (1.00/1.00) & 1.001 (1.00/1.00) &1.019 (1.00/1.00) & 2.670 (1.00/1.05) & 1.75 (1.10/1.01)\\
LiveJournal & 1.002 (1.00/1.01) & 1.005 (1.00/1.01) & 1.002 (1.00/1.00) &1.008 (1.00/1.00) & 1.697 (1.00/2.14) & 1.93 (2.07/1.03)\\
Friendster & 1.000 (1.00/1.00) & 1.005 (1.04/1.10) & 1.001 (1.00/1.00) &1.018 (1.00/1.00) & 1.623 (1.00/2.46) & 2.12 (2.43/1.03)\\
Twitter & 1.001 (1.00/1.01) & 1.013 (1.02/1.03) & 1.001 (1.00/1.00) &1.067 (1.03/1.04) & 2.451 (1.00/3.64) & 3.29 (6.44/1.05)\\
\bottomrule
\end{tabular}
\end{table*}
\subsection{Sorting Analysis}\label{sec:sorting}
Finally, we investigate the effect of the edge sorting preprocessing.
We refer to the EBV algorithm with and without the sorting preprocessing as EBV-sort and EBV-unsort, respectively.
We test EBV-sort and EBV-unsort over three power-law graphs (LiveJournal, Twitter and Friendster) while partitioning them into $4$, $8$, $16$ and $32$ subgraphs.
From Figure~\ref{fig:sort-compare}, we find that on all three power-law graphs, EBV-sort outperforms EBV-unsort for the final replication factor.
Besides, as the number of subgraphs grows, the margin between EBV-sort and EBV-unsort gets larger.
It shows that the sorting preprocessing improves the performance of EBV, especially when the number of subgraphs is large.
Further, we want to study the shapes of these curves.
The replication factor curves of EBV-sort increase sharply at the beginning, and tend to a fixed value later.
According to the definition of replication factor in Section~\ref{sec:pre}, it is proportional to the total number of vertices in all subgraphs.
Therefore, we can conclude that the EBV-sort algorithm assigns edges whose end-vertices have low degrees at the beginning and thereby creates many vertices early on.
When processing the edges with high end-vertices' degrees, almost no new vertices are created.
This property implies that the EBV-sort algorithm has great potential in many dense power-law graphs.
\begin{figure*}
\centering
\begin{minipage}{0.32\linewidth}
\centerline{\includegraphics[width=1\textwidth]{rep_lj}}
\centerline{LiveJournal}
\end{minipage}
\hfill
\begin{minipage}{0.32\linewidth}
\centerline{\includegraphics[width=1\textwidth]{rep_twitter}}
\centerline{Twitter}
\end{minipage}
\hfill
\begin{minipage}{0.32\linewidth}
\centerline{\includegraphics[width=1\textwidth]{rep_friend}}
\centerline{Friendster}
\end{minipage}
\caption{Replication Factor Growth Curve.\protect\\
Figure~\ref{fig:sort-compare} shows the growth curve of replication factor over LiveJournal, Twitter and Friendster with $4$, $8$, $16$ and $32$ subgraphs.
The X-axis is the number of edges that have been assigned, and the Y-axis is the replication factor of the current partial result.
}
\label{fig:sort-compare}
\end{figure*}
\subsection{Summary}
With these experiments, we can answer the questions we proposed:
\begin{enumerate}[(1)]
\item We select five representative partition algorithms: Ginger, DBH, CVC, NE and METIS to compare with EBV.
NE and METIS only minimize the total message size, while Ginger, DBH and CVC focus more on message imbalance and partly ignore the total message size. EBV optimizes both factors.
Experiments in Figure~\ref{fig:systemcompare} show that EBV outperforms other partition algorithms, which indicates that both the total message size and the message imbalance are essential for partitioning power-law graphs.
\item Table~\ref{tab:message_sum} and Table~\ref{tab:message_var} reveal that the edge and vertex imbalance factors and replication factor are closely related to the message balance and the total size of messages. The experiment in Figure~\ref{fig:systemcompare} further proves that EBV performs better than Ginger, DBH, CVC, NE, METIS and other representative frameworks in large-scale power-law graphs, which is attributed to the well-designed evaluation function that considers these two factors.
\item We compare the replication factor of EBV-sort and EBV-unsort in power-law graphs with different numbers of subgraphs. Experiments in Figure~\ref{fig:sort-compare} show that the sorting preprocessing can reduce the replication factor of EBV in power-law graphs significantly.
\end{enumerate}
\subsection{Limitation}
Many subgraph-centric frameworks, such as Giraph++~\cite{tian2013think} and GRAPE~\cite{fan2017parallelizing}, do not publish their source code, so they are not included in our cross-platform comparison. However, this does not affect our conclusion, since the partition methods they employ are covered in our experiments.
\section{Introduction}
In recent decades, the need for processing large-scale graphs is increasing in both research and industry communities.
Moreover, the parallel processing of graphs is one of the biggest challenges in the field of graph computation.
Graph computation has become vital to a wide range of data analysis tasks such as link prediction and graph pattern matching.
Mining ``insights" from large-scale graphs is a challenging and engaging research area.
However, the parallel graph computation poses great difficulties for graphs with billions of vertices and edges on modern high-performance computing clusters with thousands of computing nodes. The basic approaches for large-scale parallel graph computation can be divided into two programming models: the \emph{vertex-centric} model~\cite{malewicz2010pregel} and the \emph{subgraph-centric} model~\cite{tian2013think}.
The vertex-centric model, labeled as ``think like a vertex'', is an engineering approach similar in concept to MapReduce~\cite{dean2008mapreduce}.
Although the vertex-centric model has been successfully used in many parallel graph computation applications, researchers often report that it has significant bottlenecks in the communication for exchanging excessive messages through the network~\cite{tian2013think}.
On the other hand, the subgraph-centric model (a.k.a. block-centric, partition-centric)~\cite{fan2017parallelizing,tian2013think,yan2014blogel} provides another approach for the parallel graph computation, and has been applied in some state-of-the-art platforms~\cite{fan2017parallelizing}.
Compared to the vertex-centric model, the subgraph-centric model focuses on the entire subgraph.
This model is labeled as ``think like a graph''.
Since subgraphs are more ``coarsely grained'' than a single vertex, they retain many inner edges.
For this reason, many messages do not need to be transferred through the network~\cite{fan2017parallelizing,yan2014blogel}.
Thus, the subgraph-centric model usually has less communication and converges faster.
Although the subgraph-centric model has many advantages, it does not reach its full potential in processing large-scale real-world graphs such as Twitter, Facebook and citation graphs~\cite{gonzalez2012powergraph}.
These real-world graphs share a common property: their vertex degree distributions are long-tailed. Therefore, they are categorized as power-law graphs~\cite{albert2002statistical}.
Existing graph partition algorithms have difficulties in partitioning power-law graphs.
For example, METIS~\cite{karypis1997parmetis} and NE~\cite{ne} aim to minimize the number of edges and vertices cut by partitioning, hence reducing the cost of communication and improving the efficiency of graph partition results.
However, METIS only considers the number of vertices and NE only considers the number of edges when balancing their results.
Since the degree distribution of power-law graphs is skewed, the number of edges incident on each vertex varies greatly.
Thus, for power-law graphs, they cannot balance both vertices and edges by evenly distributing only one of them.
To improve the performance of subgraph-centric frameworks on large-scale power-law graphs, we analyze the communication pattern of several different partition algorithms.
Based on our analysis, we propose an efficient and balanced vertex-cut graph partition algorithm (EBV).
EBV assigns each edge based on the current value of a well-designed evaluation function, which considers both the total number of cutting vertices and the balance of graph partition results.
For handling the skewed degree distribution of power-law graphs, we adopt appropriate weights to balance the number of edges and vertices when partitioning.
Moreover, we design an edge sorting preprocessing step, which sorts edges in ascending order by the sum of their end-vertices' degrees before partitioning.
For power-law graphs, the edges with two low-degree end-vertices are assigned at the beginning.
Since the degrees of their end-vertices are low, they are less likely to share the same end-vertex.
Thus the balance of the graph partition results is the major factor in the early stage, and these low-degree vertices are evenly assigned to each subgraph as seeds.
We analyze the effects of the evaluation function and the edge sorting preprocessing theoretically and experimentally.
Our experiments show that EBV outperforms the state-of-the-art partition algorithms for large-scale power-law graphs.
This paper makes the following contributions:
\begin{itemize}
\item We propose an efficient and balanced vertex-cut graph partition algorithm.
\item We compare EBV with the state-of-the-art graph partition algorithms and several parallel graph computation frameworks in detail.
\item We use the number of communication messages as a platform-independent metric to compare partition algorithms.
\item We study the influence of the sorting preprocessing for the EBV algorithm.
\end{itemize}
This paper is organized as follows:
In Section~\ref{sec:motivation} we discuss the challenges arising from large-scale power-law graphs and explain our motivation.
In Section~\ref{sec:back} we review the current graph partition algorithms and introduce the basic notations and metrics for this paper.
In Section~\ref{sec:implementation} we propose an efficient and balanced vertex-cut graph partition algorithm on the subgraph-centric model.
In Section~\ref{sec:experiments} we present and analyze several experiments, which demonstrate the characteristics and capabilities of our algorithm.
Finally, we discuss the related work, conclude our work and preview some future projects in Section~\ref{sec:relatedwork} and Section~\ref{sec:conclusion}.
\section*{Acknowledgment}
This work was partially supported by Natural Science Foundation
of China Grants 41930110, 61872272 and 61640221.
\bibliographystyle{abbrv}
\section{Motivation}\label{sec:motivation}
Considering the scope of information used when partitioning, current graph partition algorithms can be classified into two major categories: local-based and self-based.
When partitioning graphs, the local-based algorithms assign edges and vertices according to the local information (part of the graph)~\cite{local}.
In contrast, the self-based algorithms only consider the attributes of the edge or vertex itself (e.g., vertex degree or ID) when assigning edges and vertices.
Subgraph-centric frameworks usually employ some local-based frameworks such as METIS~\cite{karypis1997parmetis,wen2018drone} and its variants~\cite{tian2013think}.
Neighbor Expand (NE)~\cite{ne} is also a local-based partition algorithm, which expands the new ``core vertex" by searching in the boundary set.
The local-based partition algorithms (METIS and NE) are more concerned about reducing the number of replicated edges and vertices while saving the local structure.
However, the skewed degree distribution of power-law graphs means that the ratios of edges to vertices in the local structures are not uniform, which brings difficulties in balancing both edges and vertices while keeping the local structures.
The imbalanced assignment further results in the imbalanced communication and computation of graph algorithms.
On the other hand, vertex-centric frameworks often use self-based partition algorithms, such as Giraph~\cite{Giraph}, Powergraph~\cite{gonzalez2012powergraph} and Galois~\cite{Galois}.
Recently, many self-based graph partition algorithms have also been proposed for power-law graphs.
The Degree-Based Hashing~\cite{xie2014distributed} (DBH) makes effective use of the skewed degree distribution of power-law graphs.
It assigns each edge by hashing the ID of its end-vertex with a lower degree.
PowerLyra~\cite{chen2019powerlyra} adopts the hybrid-cut that distinguishes the low-degree and high-degree vertices during the partition process.
Inspired by Fennel~\cite{Fennel}, which is a greedy streaming edge-cut framework, they further propose Ginger by improving the hybrid-cut.
These partition algorithms are simple, handle large-scale power-law graphs efficiently, and naturally produce well-balanced results.
However, they ignore and destroy the local structure of partitioned graphs.
Therefore, their total communication volume is much larger than that of the local-based algorithms.
Both the total communication volume and the message imbalance need to be considered to find an optimal partition result in power-law graphs.
We demonstrate this principle with a comparison of METIS, NE, Ginger, DBH and CVC.
We also seek to devise a novel partition algorithm following the above principle.
However, the optimal algorithm considering both the total communication volume and the message imbalance has been proved to be NP-hard~\cite{bui1992finding}.
Thus we follow the self-based approach and consider these two factors in our algorithm.
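As a rough illustration of this idea only, the following sketch shows a generic greedy vertex-cut loop that scores every candidate subgraph by the number of newly replicated vertices plus normalized balance penalties and assigns each edge to the subgraph with the lowest score. The weights \texttt{alpha} and \texttt{beta} are placeholders; the exact evaluation function used by EBV is the one defined in Section~\ref{sec:greedy} and may differ in detail.
\begin{verbatim}
def greedy_vertex_cut(edges, p, alpha=1.0, beta=1.0):
    # Hypothetical illustration only, not the exact EBV function.
    verts = [set() for _ in range(p)]   # vertices seen by subgraph i
    parts = [[] for _ in range(p)]      # edges assigned to subgraph i
    m = len(edges)
    n = len({u for e in edges for u in e})
    for u, v in edges:
        def score(i):
            new_replicas = (u not in verts[i]) + (v not in verts[i])
            edge_balance = alpha * len(parts[i]) / (m / p)
            vert_balance = beta * len(verts[i]) / (n / p)
            return new_replicas + edge_balance + vert_balance
        best = min(range(p), key=score)
        parts[best].append((u, v))
        verts[best].update((u, v))
    return parts
\end{verbatim}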
For self-based graph partition algorithms, the edge processing (assigning) order greatly affects the performance of the partition results.
We propose a sorting preprocessing mechanism, which sorts edges with the sum of their end-vertices' degrees in ascending order.
We demonstrate the effect of our sorting preprocessing with a comparison of the alphabetical order in Figure~\ref{fig:order_comp}.
In Figure~\ref{fig:order_comp}, the raw graph is an undirected graph with uneven degree distribution.
We partition the raw graph with our EBV algorithm in different edge processing orders.
The result of sorting preprocessing is more balanced than that of the alphabetical order.
In alphabetical order, edge $(B, C)$ is assigned last.
Considering the balance of partition results, $(B, C)$ should be assigned to subgraph $0$.
However, this would cut two more vertices, $B$ and $C$.
Thus $(B, C)$ is assigned to subgraph $1$ by EBV.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{order_comp}
\caption{Edge processing order comparison \protect\\
A comparison of partitioning the raw graph in sorting preprocessing order and alphabetical order.
The cut vertices are represented by dotted lines, and the number on each edge indicates the processing order.
}
\label{fig:order_comp}
\end{figure}
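The sorting preprocessing itself is straightforward; a minimal Python sketch of it (our own illustration, with vertex degrees counted directly from the input edge list) is:
\begin{verbatim}
from collections import Counter

def sort_edges_by_degree_sum(edges):
    # Degree of each vertex, counted from the input edge list.
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # Ascending order by the sum of the two end-vertex degrees, so
    # edges between low-degree vertices are assigned first.
    return sorted(edges, key=lambda e: deg[e[0]] + deg[e[1]])
\end{verbatim}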
\section{Preliminaries}\label{sec:pre}
In this section, we introduce some notations and metrics for the rest of this paper.
Let $G = (V, E)$ be a directed graph, where $V$ denotes the set of vertices and $E$ denotes the set of edges.
An edge is represented as a pair of vertices $(u, v)$ where $u$ is the source vertex and $v$ is the target vertex.
For the undirected graph, we replace each undirected edge with two edges with opposite directions.
Suppose we partition the graph $G$ into $p$ subgraphs, we denote $G_i(V_i, E_i)$, $i \in [1, p]$ as the $i^{th}$ subgraph.
For the vertex-cut (edge partitioning) algorithms, they partition the edge set $E$ into $p$ subsets.
Let $E_1 \cup E_2 \cup \cdots \cup E_p = E$ be the $p$ partitions of $E$, i.e. $E_i \cap E_j = \emptyset$, $\forall i \ne j$.
Here we define $V_i = \{u |(u, v) \in E_i \lor (v, u) \in E_i \}$ as the set of vertices covered by $E_i$.
Due to the definition of $V_i$, there may exist some replicated vertices which have the same vertex ID but belong to different subgraphs.
For the edge-cut (vertex partitioning) algorithms, they partition the vertex set $V$.
Let $V_1 \cup V_2 \cup \cdots \cup V_p = V$ and $V_i \cap V_j = \emptyset$ for all $i \ne j$, we define $E_i = \{(u, v) | u \in V_i \lor v \in V_i \}$. The definition of $E_i$ here means that there exist some replicated edges such that $E_i \cap E_j \ne \emptyset$.
Further, we introduce three metrics: edge imbalance factor, vertex imbalance factor and replication factor.
The edge imbalance factor is defined as
\begin{math}
\frac{\max_{i=1,...,p} |E_{i}|}{|E|/p}
\end{math}
, while the vertex imbalance factor is defined as
\begin{math}
\frac{\max_{i=1,...,p} |V_{i}|}{\sum_{i=1}^{p} |V_{i}|/p}
\end{math}
. Both of them are used to measure the balance of partition results.
For the vertex-cut algorithm, the replication factor is defined as
\begin{math}
\frac{\sum_{i=1}^{p} |V_{i}|}{|V|}
\end{math}.
However, we have $\sum_{i=1}^{p} |V_{i}| = |V|$ for the edge-cut algorithm.
Thus the definition of the replication factor cannot be directly adapted and we define
\begin{math}
\frac{\sum_{i=1}^{p} |E_{i}|}{|E|}
\end{math} as the replication factor for the edge-cut algorithm.
The replication factor represents the average number of replicas for a vertex or edge.
For the three metrics above, the closer they are to 1, the better the performance they represent.
These metrics are also widely used in~\cite{boman2013scalable,chen2019powerlyra,xie2014distributed, ne}.
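For concreteness, the three metrics can be computed for a vertex-cut partition as in the following Python sketch (our own illustration; the argument \texttt{subgraphs} is assumed to be the list of edge sets $E_1,\dots,E_p$):
\begin{verbatim}
def vertex_cut_metrics(subgraphs):
    # subgraphs: list of p edge lists E_1, ..., E_p forming a
    # vertex-cut partition (the E_i are disjoint and cover E).
    p = len(subgraphs)
    edge_counts = [len(E_i) for E_i in subgraphs]
    vertex_sets = [{u for e in E_i for u in e} for E_i in subgraphs]
    total_edges = sum(edge_counts)
    total_replicas = sum(len(V_i) for V_i in vertex_sets)
    num_vertices = len(set().union(*vertex_sets))
    edge_imbalance = max(edge_counts) / (total_edges / p)
    vertex_imbalance = (max(len(V_i) for V_i in vertex_sets)
                        / (total_replicas / p))
    replication_factor = total_replicas / num_vertices
    return edge_imbalance, vertex_imbalance, replication_factor
\end{verbatim}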
\section{Related Work}\label{sec:relatedwork}
Here we review the graph partition algorithms that can handle large-scale graphs. Some spectral algorithms that utilize the global graph information such as \cite{alpert1999spectral,mcsherry2001spectral} are omitted, because of their heavy partition overhead.
\underline{Local-Based:}
The most famous local-based graph partition algorithm is METIS~\cite{karypis1997parmetis}. METIS applies several stages to coarsen and uncoarsen the vertices and edges. It also uses the K-L algorithm~\cite{kernighan1970efficient} to refine its results during the uncoarsening phase.
Andersen et al.~\cite{andersen2006local} present a local-based partitioning algorithm using a variation of pagerank with a specified starting distribution.
They show that the ordering of the vertices produced by a pagerank vector reveals a cut with small conductance.
A recent local-based graph partition algorithm is NE~\cite{ne}.
It expands the core vertex from the boundary vertex set round by round to partition graphs.
Moreover, they propose a distributed version of NE for trillion-edge graphs~\cite{NEdistributed}.
Shengwei Ji proposes a two-stage heuristic method~\cite{local}, which selects high-degree vertices as core vertices and then expands other vertices.
These local-based algorithms can produce partition results with small replication factors.
However, they have difficulty balancing both edges and vertices on power-law graphs.
\underline{Self-Based:}
The major approach for self-based graph partition algorithms is based on random hash.
For the edge-cut method, each vertex is simply assigned by hashing its ID.
For the vertex-cut method, one straightforward approach is to hash each edge by its end-vertices' IDs into a one-dimensional value.
Another approach is splitting the adjacency matrix into several blocks (2D partitioning), such as CVC~\cite{boman2013scalable}.
To better handle the skewed degree distribution of power-law graphs, vertex degrees are widely used.
DBH~\cite{xie2014distributed} cuts vertices with higher degrees to obtain better performance.
The hybrid-cut~\cite{chen2019powerlyra} is also proposed with similar ideas.
They assign the edges with the same low-degree target vertex to the same subgraph.
For edges with high-degree target vertices, they assign them according to the source vertices.
Those algorithms are lightweight and can produce roughly balanced results naturally.
However, their partition results need to be further improved for reducing the replication factor.
Besides, they ignore the influence of the processing order on greedy algorithms.
Recently, streaming graph partition algorithms have become popular for partitioning large-scale graphs, and most of them are self-based, such as Fennel~\cite{Fennel}, HDRF~\cite{HDRF} and CuSP~\cite{hoang2019cusp}.
They view the input graph as a sequence of edges and process them in a single pass without extra information.
ADWISE~\cite{mayer2018adwise} is a compromise between streaming and offline partitioning; it improves the partitioning quality by caching the currently processed edges with an adaptive window size.
However, their performance is limited due to the lack of information on the entire graph.
\section{Introduction}
The goal of this paper is to compute the cyclic homology, negative cyclic homology, and periodic cyclic homology of the ring $A =k[x_1,x_2,\ldots,x_d]/(x_1,x_2,\ldots,x_d)^2$ over $k$ for $k$ an arbitrary commutative ring and to give explicit formulas for the cases $k = \mathbb{Q}$ and $k = \mathbb{Z}$. Since the complexes calculating all these homologies for $k[x_1,x_2,\ldots,x_d]/(x_1,x_2,\ldots,x_d)^2$ over $k$ can be obtained from the complexes calculating these homologies for $\mathbb{Z}[x_1,x_2,\ldots,x_d]/(x_1,x_2,\ldots,x_d)^2$ over $\mathbb{Z}$, the explicit results for general $k$ can be obtained from those for $\mathbb{Z}$ by using the Universal Coefficient Theorem.
An important motivation for studying Hochschild-type invariants is given by the Dennis trace map from algebraic $K$-theory to Hochschild homology. It factors through negative cyclic homology,
\[\begin{tikzcd}[column sep=small]
K_{*}(R) \arrow{dr}{} \arrow{rr}{\text{Dennis trace}}& & HH_{*}(R) \\
& HC_{*}^{-}(R) \arrow{ur}{}
\end{tikzcd}
\]
$\\*$and it has been proven by Goodwillie \cite{GW} that $HC_{*}^{-}(R)$ is a much better approximation of $K_{*}(R)$ than $HH_{*}(R)$.
Section 2 introduces the needed definitions and Section 3 contains the calculations. It starts by showing how the Hochschild complex breaks down by weights and cyclic words. We extend this idea to the Tsygan double complex and then compare the result to a specific Tor computation to get the general cyclic homology result.
Section 4 computes the cyclic homology for $k=\mathbb{Q}$, which follows easily from Section 3 since $\mathbb{Q}$ is a projective $\mathbb{Q}[C_{w}]$-module. Section 5 requires more careful analysis for $k=\mathbb{Z}$. As explained above, for a general ring $k$, since
$$
\big(k[x_1,x_2,\ldots,x_d]/(x_1,x_2,\ldots,x_d)^2\big)^{\otimes_{k} \ell} \cong k\otimes_{\mathbb{Z}}\big(\mathbb{Z}[x_1,x_2,\ldots,x_d]/(x_1,x_2,\ldots,x_d)^2\big)^{\otimes_{\mathbb{Z}} \ell}
$$
by the Universal Coefficient Theorem
$$
HH_{\ell}^{k}\big(k[x_1,x_2,\ldots,x_d]/(x_1,x_2,\ldots,x_d)^2\big) \cong
k\otimes_{\mathbb{Z}} HH_{\ell}^{\mathbb{Z}}\big(\mathbb{Z}[x_1,\ldots,x_d]/(x_1,\ldots,x_d)^2\big) \bigoplus \Tor \big(k, HH_{\ell - 1}^{\mathbb{Z}}\big(\mathbb{Z}[x_1,\ldots,x_d]/(x_1,\ldots,x_d)^2\big) \big),
$$
$\\*$and similarly for cyclic, negative cyclic, and periodic cyclic homology.
After the cyclic homology computations, in Section 6 and Section 7 the negative cyclic homology and periodic cyclic homology computations are similarly given for $k$, $k=\mathbb{Q}$, and $k=\mathbb{Z}$.
\section{Definitions}
\begin{definition}\label{HB}
Let $k$ be a commutative ring with unit and $A$ be a $k$-algebra with unit. The \textbf{Hochschild complex} of $A$ over $k$ consists in degree $n$ of
$$
C_{n}(A) = A^{\otimes n+1} \coloneqq \underbrace{A \otimes_{k} A \otimes_{k} \cdots \otimes_{k} A \otimes_{k} A }_{n+1 \textrm{ times}}$$
$\\*$with respect to the Hochschild boundary
\begin{center}
$b_{n}(a_{0}\otimes a_{1}\otimes\cdots\otimes a_{n}) = \sum\limits_{i=0}^{n} (-1)^{i} d_{i}(a_{0}\otimes a_{1}\otimes\cdots\otimes a_{n}),$
\end{center}
where $d_{i}: A^{\otimes n+1} \rightarrow A^{\otimes n}$, $0 \leq i \leq n$ are defined as
\begin{center}
$d_{i}(a_{0}\otimes\cdots\otimes a_{n}) = a_{0}\otimes \cdots\otimes a_{i}a_{i+1}\otimes\cdots\otimes a_{n}$
$d_{n}(a_{0}\otimes a_{1}\otimes\cdots\otimes a_{n}) = a_{n}a_{0}\otimes a_{1} \otimes \cdots\otimes a_{n-1}$.
\end{center}
\end{definition}
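For example (a small illustration we include for concreteness, with $t_{1}$ as in Equation (\ref{tn}) below), in $A = k[x_1,\ldots,x_d]/(x_1,\ldots,x_d)^2$ one has $x_{i}x_{j}=0$, so
$$
b_{2}(1\otimes x_{i}\otimes x_{j}) = x_{i}\otimes x_{j} - 1\otimes x_{i}x_{j} + x_{j}\otimes x_{i} = x_{i}\otimes x_{j} + x_{j}\otimes x_{i} = (1-t_{1})(x_{i}\otimes x_{j}),
$$
an identity that is used repeatedly in Section 3.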
\begin{definition}We also define $b'_{n}:A^{\otimes n+1} \rightarrow A^{\otimes n}$ by
\begin{center}
$b'_{n}(a_{0}\otimes a_{1}\otimes\cdots\otimes a_{n}) = \sum\limits_{i=0}^{n-1} (-1)^{i} d_{i}(a_{0}\otimes a_{1}\otimes\cdots\otimes a_{n}).$
\end{center}
The following chain complex $C'_{*}(A)$ is called the $\textbf{Bar}$ $\textbf{Resolution}$ $\textbf{of}$ $\textbf{A}$:
\medskip
\begin{center}
$C'_{*}(A) = \cdots\xrightarrow{b'_{n+1}} A^{\otimes n+1} \xrightarrow{\mathrel{\phantom{=}}b'_{n }\mathrel{\phantom{=}}} A^{\otimes n} \xrightarrow {b'_{n-1}} A^{\otimes n-1} \xrightarrow {b'_{n-2}}\cdots \xrightarrow{\mathrel{\phantom{=}}b'_{2}\mathrel{\phantom{=}}} A^{\otimes 2} \xrightarrow{\mathrel{\phantom{=}}b'_{1}\mathrel{\phantom{=}}} A \xrightarrow{\mathrel{\phantom{=}}\mathrel{\phantom{=}}\mathrel{\phantom{=}}} 0 $ .
\end{center}
In particular, it is acyclic.
\end{definition}
\begin{definition}\label{NHC}
The \textbf{normalized or reduced Hochschild complex}, denoted by $\bar{C_{*}}({A})$, is the quotient of $C_{*}({A})$ by all sums of degenerate elements (elements $a_{0}\otimes a_{1}\otimes\cdots\otimes a_{n}$ where at least one of $a_{1}, \hdots, a_{n}$ is in $k$). The reduction map is a quasi-isomorphism from the Hochschild complex to the reduced Hochschild complex by Lemma 1.1.15 \cite{CH} with the maps $b_{n}$ induced by those of the complex $C_{*}(A)$. $\bar{C_{*}}({A})$ is written as
\begin{center}
$\bar{C_{*}}({A})=\cdots\xrightarrow{b_{n+1}} A\otimes \bar{A}^{\otimes n} \xrightarrow{\mathrel{\phantom{=}}b_{n}\mathrel{\phantom{=}}} A\otimes\bar{A}^{\otimes n-1} \xrightarrow {b_{n-1}}\cdots \xrightarrow{\mathrel{\phantom{=}}b_{2}\mathrel{\phantom{=}}} A\otimes\bar{A} \xrightarrow{\mathrel{\phantom{=}}b_{1}\mathrel{\phantom{=}}} A \xrightarrow{\mathrel{\phantom{=}}\mathrel{\phantom{=}}\mathrel{\phantom{=}}}0$.
\end{center}
\end{definition}
\begin{definition}\label{CH} The \textbf{cyclic homology of A}, denoted by $HC_{*}(A)$, is the total homology of the first quadrant double complex:
\begin{center}
\begin{tikzcd}
\vdots\arrow{d} & \vdots\arrow{d} & \vdots\arrow{d} & \vdots\arrow{d} \\
A^{\otimes 3}\arrow{d}{b_{2}} & A^{\otimes 3}\arrow{l}{1-t_{2}}\arrow{d}{b'_{2}} & A^{\otimes 3}\arrow{l}{N_{2}}\arrow{d}{b_{2}} & A^{\otimes 3}\arrow{l}{1-t_{2}}\arrow{d}{b'_{2}} &{\cdots} \arrow{l}{N_{2}} \\
A^{\otimes 2}\arrow{d}{b_{1}} & A^{\otimes 2}\arrow{l}{1-t_{1}}\arrow{d}{b'_{1}} & A^{\otimes 2}\arrow{l}{N_{1}}\arrow{d}{b_{1}} & A^{\otimes 2}\arrow{l}{1-t_{1}}\arrow{d}{b'_{1}} &{\cdots}\arrow{l}{N_{1}} \\
A & A\arrow{l}{1-t_{0}} & A \arrow{l}{N_{0}} & A\arrow{l}{1-t_{0}} &{\cdots}\arrow{l}{N_{0}}
\end{tikzcd}
\end{center}
where the maps $t_{n}$ and $N_{n}$ are defined as
\begin{equation}\label{tn}
t_{n}(a_{0}\otimes a_{1}\otimes \cdots\otimes a_{n}) = (-1)^{n}(a_{n}\otimes a_{0}\otimes a_{1} \otimes \cdots \otimes a_{n-1})
\end{equation}
\begin{equation}\label{Nn}
N_{n} = 1 + t_{n} + t_{n}^{2} + \cdots + t_{n}^{n}.
\end{equation}
\end{definition}
This first quadrant double complex is called the Tsygan complex or the cyclic bicomplex. We can compute the total homology of the Tsygan complex using the spectral sequence associated to filtration by columns. The $E^{2}$ page consists of $HH_{*}(A)$ in even columns and $0$ in odd ones.
\begin{center}
\begin{tikzcd}
\vdots & \vdots& \vdots & \vdots& \vdots\\
HH_{2}(A) &0 & HH_{2}(A) & 0 & HH_{2}(A)&{\cdots} \\
HH_{1}(A) & 0 & HH_{1}(A)\arrow{llu}{\partial^{2}}& 0 & HH_{1}(A)\arrow{llu}{\partial^{2}}&{\cdots} \\
HH_{0}(A) & 0 & HH_{0}(A)\arrow{llu}{\partial^{2}} & 0 & HH_{0}(A)\arrow{llu}{\partial^{2}}&{\cdots}
\end{tikzcd}
\end{center}
$\\*$The map $s:A^{\otimes n} \rightarrow A^{\otimes n+1}$, given by
$s(a_{0}\otimes a_{1} \otimes \cdots \otimes a_{n}) = 1\otimes a_{0}\otimes a_{1} \otimes \cdots \otimes a_{n}$,
is a lift of the map $b'_{n}$ on $\ker(b'_{n-1})$. In other words, it is a chain homotopy between the identity map and the zero map on $(A^{\otimes n+1}, b')$. So $\partial^{2}: HH_{n-1}(A) \rightarrow HH_{n}(A)$ can be induced on the chain level by $(1-t_{n}) s N_{n-1}$. This leads to another bicomplex that can be used to compute $HC_{*}(A)$.
\begin{definition} The \textbf{Connes boundary map $B$} is the map $B: A^{\otimes n} \xrightarrow{} A^{\otimes (n+1)}$ given by:
\begin{center}
$B= (1-t_{n}) s N_{n-1}$.
\end{center}
One can equivalently look at the bicomplex $B(A)$
\begin{center}
\begin{tikzcd}
\vdots\arrow{d} & \vdots\arrow{d} & \vdots\arrow{d}\\
A^{\otimes 3}\arrow{d}{b_{2}} & A^{\otimes 2}\arrow{d}{b_{1}}\arrow{l}{B} &A\arrow{l}{B} \\
A^{\otimes 2}\arrow{d}{b_{1}} & A\arrow{l}{B} &\\
A & &
\end{tikzcd}
\end{center}
which essentially eliminates the odd-numbered columns. Once the even-numbered columns are shifted to the left, they need to be raised to preserve the total degree. By \cite{CH} (Theorem 2.1.8), the total homology of $B(A)$ is also $HC_{*}(A)$.
\end{definition}
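As a small check of the formula (an illustration we add for concreteness), take an element $a_{0}\in A$ in degree $0$: then $N_{0}=\mathrm{id}$ and $s(a_{0})=1\otimes a_{0}$, so
$$
B(a_{0}) = (1-t_{1})(1\otimes a_{0}) = 1\otimes a_{0} + a_{0}\otimes 1,
$$
which reduces to $1\otimes a_{0}$ in the normalized complex since $a_{0}\otimes 1$ is degenerate.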
\begin{definition}\label{negative} The \textbf{negative cyclic homology of A}, denoted by $HC^{-}_{*}(A)$, is the total homology of the second quadrant double complex:
\begin{center}
\begin{tikzcd}
& \vdots\arrow{d} & \vdots\arrow{d} & \vdots\arrow{d} & \vdots\arrow{d} \\
{\cdots}& A^{\otimes 3}\arrow{l}{1-t_{2}}\arrow{d}{b'_{2}} & A^{\otimes 3}\arrow{l}{N_{2}}\arrow{d}{b_{2}} & A^{\otimes 3}\arrow{l}{1-t_{2}}\arrow{d}{b'_{2}} & A^{\otimes 3}\arrow{l}{N_{2}}\arrow{d}{b_{2}} & \\
{\cdots}& A^{\otimes 2}\arrow{l}{1-t_{1}}\arrow{d}{b'_{1}} & A^{\otimes 2}\arrow{l}{N_{1}}\arrow{d}{b_{1}} & A^{\otimes 2}\arrow{l}{1-t_{1}}\arrow{d}{b'_{1}} & A^{\otimes 2}\arrow{l}{N_{1}}\arrow{d}{b_{1}} & \\
{\cdots}& A\arrow{l}{1-t_{0}} & A\arrow{l}{N_{0}} & A \arrow{l}{1-t_{0}} & A\arrow{l}{N_{0}} &
\end{tikzcd}
\end{center}
\end{definition}
\begin{definition}\label{periodic} The \textbf{periodic cyclic homology of A}, denoted by $HP_{*}(A)$, is the total homology of the first and second quadrant double complex:
\begin{center}
\begin{tikzcd}
& \vdots\arrow{d} & \vdots\arrow{d} & \vdots\arrow{d} & \\
{\cdots}& A^{\otimes 3}\arrow{l}{1-t_{2}}\arrow{d}{b'_{2}} & A^{\otimes 3}\arrow{l}{N_{2}}\arrow{d}{b_{2}} & A^{\otimes 3}\arrow{l}{1-t_{2}}\arrow{d}{b'_{2}} & {\cdots}\arrow{l}{N_{2}} & \\
{\cdots}& A^{\otimes 2}\arrow{l}{1-t_{1}}\arrow{d}{b'_{1}} & A^{\otimes 2}\arrow{l}{N_{1}}\arrow{d}{b_{1}} & A^{\otimes 2}\arrow{l}{1-t_{1}}\arrow{d}{b'_{1}} & {\cdots}\arrow{l}{N_{1}} & \\
{\cdots}& A\arrow{l}{1-t_{0}} & A\arrow{l}{N_{0}} & A \arrow{l}{1-t_{0}} & {\cdots}\arrow{l}{N_{0}} &
\end{tikzcd}
\end{center}
$\\*$where the even columns are the Hochschild complex and the odd columns are the bar complex.
\end{definition}
Cyclic homology is closely connected to taking a homotopy quotient of a circle action on Hochschild homology, whereas negative cyclic homology is closely connected to taking homotopy fixed points of that action.
\section{Cyclic Homology of $k[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$}
\subsection{Grading HH by weight} \hfill
Let $A = k[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$, where $k$ is any commutative unital ring and $\mathfrak{m}$ is the ideal $(x_1,x_2,\hdots,x_d)$. Computing $HH_{n}(A)$ is easier if we break the Hochschild complex down by weights, where the weight of a tensor monomial is the total number of $x_{i}$'s in it. We can do this because the Hochschild boundary maps preserve weight. Thus, $C_{*}(A) \cong \displaystyle\bigoplus_{w=0}^{\infty}C_{*}^{(w)}(A)$ where $C_{*}^{(w)}(A)$ is the subcomplex consisting of all elements in $C_{*}(A)$ of weight $w$ and
$HH_{*}(A) \cong \displaystyle\bigoplus_{w=0}^{\infty}HH_{*}^{(w)}(A).$
When computing $HH_{n}^{(w)}(A)$, we can again look at the normalized Hochschild complex. The only nondegenerate elements of weight $w \neq 0$ are in the $(w)^{th}$ and $(w-1)^{th}$ levels of the normalized complex. The only nondegenerate elements of weight $w=0$ are elements from $k$ in the $0^{th}$ level of the complex. The elements of weight $w$ (for $w > 0$) that are in the ($w$)$^{th}$ level of the normalized complex are spanned by the nondegenerate tensor monomials
\begin{center}
$ 1\otimes x_{j_{1}}\otimes\cdots\otimes x_{j_{w}} $ where $j_{i} \in \{1,2,\hdots,d\}$.
\end{center}
The elements of weight $w$ (for $w > 0$) that are in the ($w-1$)$^{th}$ level of the reduced complex are spanned by the nondegenerate tensor monomials
\begin{center}
$x_{j_{1}}\otimes\cdots\otimes x_{j_{w}} $ where $j_{i} \in \{1,2,\hdots,d\}$.
\end{center}
Let $\bar{\mathfrak{m}} = \mathfrak{m} / \mathfrak{m}^{2}$. Note that $\bar{\mathfrak{m}}$ is the free $k$-module on $[x_{1}]$, $[x_{2}], \hdots,[x_{d}]$ which by abuse of notation we call $x_{1}$, $x_{2},\hdots,x_{d}$. Therefore, the normalized Hochschild complex for weight $w$ (for $w > 0$) is
\begin{center}
\begin{tikzcd}
{\cdots}\arrow{r}& 0\arrow{r} & 1\otimes \bar{\mathfrak{m}}^{\otimes w}\arrow{r}{b_{w}} & \bar{\mathfrak{m}}^{\otimes w}\arrow{r}& 0\arrow{r} & {\cdots}
\end{tikzcd}.
\end{center}
Consider the following chain map:
\begin{center}
\begin{tikzcd}
{\cdots}\arrow{r}& 0\arrow{r}\arrow{d} & 1\otimes \bar{\mathfrak{m}}^{\otimes w}\arrow{r}{b_{w}}\arrow{d}{f} & \bar{\mathfrak{m}}^{\otimes w}\arrow{r}\arrow{d}{id} & 0\arrow{r}\arrow{d} & {\cdots}\\
{\cdots}\arrow{r}& 0\arrow{r} & \bar{\mathfrak{m}}^{\otimes w}\arrow{r}{1-t_{w-1}} & \bar{\mathfrak{m}}^{\otimes w} \arrow{r} & 0\arrow{r} & {\cdots}\> ,
\end{tikzcd}
\end{center}
where $f(1\otimes x_{j_{1}}\otimes\cdots\otimes x_{j_{w}})=x_{j_{1}}\otimes\cdots\otimes x_{j_{w}}$. Note
\begin{center}
$
b_{w}(1\otimes x_{j_{1}}\otimes\cdots\otimes x_{j_{w}}) = (1-t_{w-1})(x_{j_{1}}\otimes\cdots\otimes x_{j_{w}}).
$
\end{center}
This chain map is an isomorphism so it induces an isomorphism on homology, yielding
\begin{equation}\label{HHofRing1} HH_{n}^{(0)}(A) \cong
\begin{dcases}
k & n = 0 \\
0 & \textrm{else}
\end{dcases}
\end{equation}
and for $w > 0$
\begin{equation}\label{HHofRing2} HH_{n}^{(w)}(A) \cong
\begin{dcases}
\ker\big((1-t_{w-1}):\bar{\mathfrak{m}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}^{\otimes w}\big) & n=w \\
\operatorname{coker}\big((1-t_{w-1}):\bar{\mathfrak{m}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}^{\otimes w}\big) & n= w-1 \\
0 & \textrm{else} \> .
\end{dcases}
\end{equation}
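For instance, for $w=1$ the map $1-t_{0}$ is the zero map since $t_{0}(x_{i})=x_{i}$, so Equation (\ref{HHofRing2}) gives $HH_{1}^{(1)}(A)\cong HH_{0}^{(1)}(A)\cong\bar{\mathfrak{m}}\cong k^{d}$, and $HH_{n}^{(1)}(A)=0$ for all other $n$.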
\subsection{Relating the reduced Hochschild complex to the Tsygan complex}\hfill
We now want to calculate $HC_{*}(A)$ for $A = k[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$. Since $b'_{n}$, $t_{n}$, and $N_{n}$ as well as $b_{n}$ preserve weight, the Tsygan complex and all pages of the spectral sequence and its homology split by weight as well, so $HC_{*}(A) \cong \displaystyle\bigoplus_{w=0}^{\infty}HC_{*}^{(w)}(A).$ The calculation of $HC_{*}^{(w)}(A)$ breaks down into two cases, one where the weight $w=0$ and the other where $w > 0$.
$\\*$\textbf{Case 1: $w = 0$}
$\\*$On the $E^{1}$-page, we have $E^{1}_{p,q}=0$ unless $q=0$ and $p\geq0$ is even, so $E^{1} \cong E^{\infty}$. Therefore
\begin{equation}\label{HCW0}
HC_{n}^{(0)}(A) \cong
\begin{dcases}
k & n \textrm{ even and } n \geq 0\\
0 & \textrm{else}. \\
\end{dcases}
\end{equation}
\textbf{Case 2: $w > 0$}
$\\*$ In the $E^{1}$-page, the even columns consist of $HH_{*}(A)$ and the odd columns are zero. Since the odd columns are zero, $\partial^{1}=0$ and $E^{1} \cong E^{2}$. By Equation (\ref{HHofRing2}), $E^{2}$ is given by:
\begin{equation}\label{double}
\begin{tikzcd}
& \vdots & \vdots & \vdots & \vdots & \\
(w+1)^{th} & 0 & 0 & 0 & 0 & \cdots \\
(w)^{th} & \ker(1-t_{w-1}) & 0 & \ker(1-t_{w-1}) & 0 & \cdots \\
(w-1)^{th} & \textrm{coker}(1-t_{w-1}) & 0 & \textrm{coker}(1-t_{w-1})\arrow{ull}{\partial^{2}} & 0 &\arrow{ull}{\partial^{2}} \cdots\\
(w-2)^{nd} & 0 & 0 & 0 & 0 & \cdots \\
& 0^{th} &1^{st} &2^{nd} &3^{rd} &
\end{tikzcd}
\end{equation}
All $\partial^{r}$ for $r\geq 3$ are trivial for dimension reasons, so the $E^{3}$ page is same as the $E^{\infty}$ page. Therefore,
\begin{equation}\label{HCNW}
HC_{n}^{(w)}(A) \cong
\begin{dcases}
0 & n \leq w-2 \\
\textrm{coker}\big((1-t_{w-1}):\bar{\mathfrak{m}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}^{\otimes w}\big) & n = w-1 \\
\textrm{coker}(\partial^{2}) & n = w + 2i,\hspace{11.5mm} i\geq 0\\
\ker(\partial^{2}) & n = w + 1 + 2i, \hspace{5mm} i\geq 0\> . \\
\end{dcases}
\end{equation}
As discussed above, $\partial^{2}$ is the map induced on the homology classes by $B = (1-t_{w})sN_{w-1}$.
\begin{center}
\begin{tikzcd}
& (w)^{th} & A^{\otimes w+1} & A^{\otimes w+1} \arrow{l}{1-t_{w}} & \\
& (w-1)^{th} & & A^{\otimes w} \arrow{u}{s} & A^{\otimes w} \arrow{l}{N_{w-1}} \> \\
\end{tikzcd}
\end{center}
$HH_{w-1}^{(w)}(A)$ is spanned by the homology classes of all cycles $x_{k_{1}}\otimes \cdots \otimes x_{k_{w}}$ for $k_{1},\hdots, k_{w} \in \{1, \hdots , d\}$, and on such a cycle, $(1-t_{w})sN_{w-1}(x_{k_{1}}\otimes \cdots \otimes x_{k_{w}})=sN_{w-1}(x_{k_{1}}\otimes \cdots \otimes x_{k_{w}})+$ degenerate elements. So if we reduce the range $A^{\otimes w+1} \rightarrow A\otimes \bar{A}^{\otimes w}$, we can regard the map as $sN_{w-1}$. If we further think of the reduced Hochschild complex of weight $w$ as $\bar{\mathfrak{m}}^{\otimes w}\xrightarrow{1-t_{w-1}}\bar{\mathfrak{m}}^{\otimes w}$ as described in the previous section, $\partial^{2}$ can be viewed as
\begin{center}
$N_{w-1}:\textrm{coker}(1-t_{w-1}) \xrightarrow{} \ker(1-t_{w-1})$
\end{center}
and we can rewrite Equation (\ref{HCNW}) as
\begin{equation}\label{HCNW2}
HC_{n}^{(w)}(A) \cong
\begin{dcases}
0 & n \leq w-2 \\
\textrm{coker}(1-t_{w-1}) & n = w-1 \\
\textrm{coker}\big(\textrm{coker}(1-t_{w-1}) \xrightarrow{N_{w-1}} \ker(1-t_{w-1})\big) & n = w + 2i, \hspace{11.5mm} i\geq 0 \\
\ker\big(\textrm{coker}(1-t_{w-1}) \xrightarrow{N_{w-1}} \ker(1-t_{w-1})\big) & n = w + 1 + 2i, \hspace{5mm} i\geq 0 \> .\\
\end{dcases}
\end{equation}
\subsection{Comparison to Tor}\hfill
The calculation of the pieces of Equation (\ref{HCNW2}) is exactly what we get computing $Tor_{n}^{k[C_{w}]}(k,\bar{\mathfrak{m}}^{\otimes w})$ where $C_{w}= \langle \alpha : \alpha^{(w)}=1 \rangle$ is the cyclic group of order $w$ and $\bar{\mathfrak{m}}^{\otimes w}$ is a $k[C_{w}]$-module with the action $\alpha(x_{n_{1}}\otimes\cdots\otimes x_{n_{w}}) = t_{w-1}(x_{n_{1}}\otimes\cdots\otimes x_{n_{w}}) = (-1)^{w-1}(x_{n_{w}}\otimes x_{n_{1}} \otimes\cdots\otimes x_{n_{w-1}}).
$ For any commutative ring $k$, the following is a projective $k[C_{w}]$-resolution of $k$:
\begin{equation}\label{r}
{\cdots}\xrightarrow{1+\alpha + {\cdots} +\alpha^{w-1}}k[C_{w}]\xrightarrow{\mathrel{\phantom{=}}1-\alpha\mathrel{\phantom{=}}}k[C_{w}]\xrightarrow{1+\alpha + {\cdots} + \alpha^{w-1}}k[C_{w}]\xrightarrow{\mathrel{\phantom{=}}1-\alpha\mathrel{\phantom{=}}}k[C_{w}]\xrightarrow{\mathrel{\phantom{=}}r\mathrel{\phantom{=}}} k\xrightarrow{}0 \> .
\end{equation}
Here, $C_{w}= \langle \alpha : \alpha^{(w)}=1 \rangle$ and $k[C_{w}]\xrightarrow{\mathrel{\phantom{=}}r\mathrel{\phantom{=}}} k$ is the augmentation $r\Bigg(\sum\limits_{j=0}^{w-1} k_{j}\cdot \alpha^{j} \Bigg) =\sum\limits_{j=0}^{w-1}k_{j} $ .
After deleting $k$, tensoring over $k[C_{w}]$ with $\bar{\mathfrak{m}}^{\otimes w}$, and setting $N = (1+\alpha + {\cdots} + \alpha^{w-1})$ we get
\begin{equation}\label{complexmbar}
\cdots\xrightarrow{1-t_{w-1}} \bar{\mathfrak{m}}^{\otimes w}\xrightarrow{N_{w-1}} \bar{\mathfrak{m}}^{\otimes w}\xrightarrow{1-t_{w-1}} \bar{\mathfrak{m}}^{\otimes w}\xrightarrow{N_{w-1}} \bar{\mathfrak{m}}^{\otimes w}\xrightarrow{1-t_{w-1}}\bar{\mathfrak{m}}^{\otimes w}\xrightarrow{} 0
\end{equation}
yielding
\begin{equation}\label{tor0}
Tor_{0}^{k[C_{w}]}(k,\bar{\mathfrak{m}}^{\otimes w})\cong \textrm{coker}(1-t_{w-1})
\end{equation}
\begin{equation}\label{tor1}
Tor_{1 + 2i}^{k[C_{w}]}(k,\bar{\mathfrak{m}}^{\otimes w})\cong\textrm{coker}\big(\textrm{coker}(1-t_{w-1}) \xrightarrow{N_{w-1}} \ker(1-t_{w-1})\big)
\end{equation}
\begin{equation}\label{tor2}
Tor_{2+ 2i}^{k[C_{w}]}(k,\bar{\mathfrak{m}}^{\otimes w})\cong\ker\big(\textrm{coker}(1-t_{w-1}) \xrightarrow{N_{w-1}} \ker(1-t_{w-1})\big) .
\end{equation}
So we can rewrite Equation (\ref{HCNW2}) as
\begin{theorem}\label{HCNW3}
Let $A = k[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$, where $k$ is any commutative unital ring, $\mathfrak{m}$ is the ideal $(x_1,x_2,\hdots,x_d)$, $\bar{\mathfrak{m}} = \mathfrak{m} / \mathfrak{m}^{2}$, $C_{w}= \langle \alpha : \alpha^{(w)}=1 \rangle$, and $\bar{\mathfrak{m}}^{\otimes w}$ is a $k[C_{w}]$-module with the action:
$$
\alpha(x_{n_{1}}\otimes\cdots\otimes x_{n_{w}}) = (-1)^{w-1}(x_{n_{w}}\otimes x_{n_{1}} \otimes\cdots\otimes x_{n_{w-1}}) \> .
$$
Then for $w = 0$
\begin{center}
$
HC_{n}^{(0)}(A) \cong
\begin{dcases}
k & n \textrm{ even and } n \geq 0 \\
0 & \textrm{else} \\
\end{dcases}
$
\end{center}
and for $w > 0$
\begin{center}
$
HC_{n}^{(w)}(A) \cong
\begin{dcases}
0 & n \leq w-2 \\
Tor_{n-w+1}^{k[C_{w}]}(k,\bar{\mathfrak{m}}^{\otimes w}) & n \geq w-1\> \\
\end{dcases}
$
\end{center}
$\\*$ which can be rewritten as
\begin{center}
$
HC_{n}^{(w)}(A) \cong
\begin{dcases}
0 & n \leq w-2 \\
H_{n-w+1}(C_{w};\bar{\mathfrak{m}}^{\otimes w}) & n \geq w-1\>. \\
\end{dcases}
$
\end{center}
\end{theorem}
\section{Cyclic Homology of $\mathbb{Q}[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$}
Since $\mathbb{Q}$ is projective over $\mathbb{Q}[C_{w}]$, $Tor_{n}^{\mathbb{Q}[C_{w}]}(\mathbb{Q},\bar{\mathfrak{m}}^{\otimes w}) = 0$ for $n >0$. Therefore, we get that for $w = 0$
\begin{center}
$
HC_{n}^{(0)}(A) \cong
\begin{dcases}
\mathbb{Q} & n \textrm{ even and } n \geq 0 \\
0 & \textrm{else} \\
\end{dcases}
$
\end{center}
$\\*$ and for $w > 0$
\begin{center}
$
HC_{n}^{(w)}(A) \cong
\begin{dcases}
\bar{\mathfrak{m}}^{\otimes w} / (x_{n_{1}}\otimes {\cdots} \otimes x_{n_{w}} \sim (-1)^{w-1}x_{n_{w}}\otimes x_{n_{1}}\otimes {\cdots} \otimes x_{n_{w-1}}) & n = w-1 \\
\quad \quad \quad \forall n_{1}, \hdots, n_{w} \in \{1,\hdots, d\} \\
0 & \textrm{else}.\>
\end{dcases}
$
\end{center}
We later define (Definition~\ref{cl}) the cycle length of a tensor monomial to be the smallest $m>0$ such that rotating its last $m$ coordinates to the front leaves the monomial unchanged. We can rewrite the tensor monomial as $x_{n_{1}}\otimes {\cdots} \otimes x_{n_{w}} = (x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}$. Notice that when $w$ is odd, $(x_{n_{1}}\otimes {\cdots} \otimes x_{n_{w}} \sim (-1)^{w-1}x_{n_{w}}\otimes x_{n_{1}}\otimes {\cdots} \otimes x_{n_{w-1}})$ gives $$
(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}
\sim
(x_{k_{m}}\otimes\cdots\otimes x_{k_{m-1}})^{\otimes \ell}
\sim
\cdots
\sim
(x_{k_{3}}\otimes\cdots\otimes x_{k_{2}})^{\otimes \ell}
\sim
(x_{k_{2}}\otimes\cdots\otimes x_{k_{1}})^{\otimes \ell},
$$
when $w$ is even and $m$ is even, $(x_{n_{1}}\otimes {\cdots} \otimes x_{n_{w}} \sim (-1)^{w-1}x_{n_{w}}\otimes x_{n_{1}}\otimes {\cdots} \otimes x_{n_{w-1}})$ gives $$
(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}
\sim
-(x_{k_{m}}\otimes\cdots\otimes x_{k_{m-1}})^{\otimes \ell}
\sim
\cdots
\sim
-(x_{k_{2}}\otimes\cdots\otimes x_{k_{1}})^{\otimes \ell}
\sim
(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell},
$$
but when $w$ is even and $m$ is odd, $(x_{n_{1}}\otimes {\cdots} \otimes x_{n_{w}} \sim (-1)^{w-1}x_{n_{w}}\otimes x_{n_{1}}\otimes {\cdots} \otimes x_{n_{w-1}})$ gives
$$
(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}
\sim
-(x_{k_{m}}\otimes\cdots\otimes x_{k_{m-1}})^{\otimes \ell}
\sim
\cdots
\sim
(x_{k_{2}}\otimes\cdots\otimes x_{k_{1}})^{\otimes \ell}
\sim
-(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}.
$$
This tells us that if $\omega_{m,d}$ is the set of all cycle families of words of length $m$ and cycle length $m$ in $x_1,\hdots,x_d$, $$\bar{\mathfrak{m}}^{\otimes w} / (x_{n_{1}}\otimes {\cdots} \otimes x_{n_{w}} \sim (-1)^{w-1}x_{n_{w}}\otimes x_{n_{1}}\otimes {\cdots} \otimes x_{n_{w-1}}) \cong
$$
$$
\Bigg(\displaystyle\bigoplus_{\substack{m \mid w \\ m \equiv w \Mod{2}}}\displaystyle\bigoplus_{\omega_{m,d}} \mathbb{Q}\Bigg) \bigoplus \Bigg( \displaystyle\bigoplus_{\substack{m \mid w \\ m \not\equiv w \Mod{2}}}\displaystyle\bigoplus_{\omega_{m,d}} \mathbb{Q}[x]/[x\sim-x]\Bigg) \cong \displaystyle\bigoplus_{\substack{m \mid w \\ m \equiv w \Mod{2}}}\displaystyle\bigoplus_{\omega_{m,d}} \mathbb{Q}. $$
The number of elements in each $\omega_{m,d}$ is
$$
\hspace{5mm} \frac{\sum_{i \vert m}\mu(m/i) d^i}{m}, $$
where $\mu$ is the M\"obius function defined as
$$\mu(n) = \begin{dcases}
1 & \textrm{ when $n$ is a square-free positive integer with an even number of prime factors} \\
-1 & \textrm{ when $n$ is a square-free positive integer with an odd number of prime factors} \\
0 & \textrm{ when $n$ has a squared prime factor} \\
\end{dcases}$$
since we need to count all $d^{m}$ words of length $m$ in $x_{1}, \ldots, x_{d}$, but subtract $d^{m/p}$ for words which are repeats of words of length $m/p$ for primes $p \mid m$, then correct by adding back $d^{m/pq}$ for words which are repeats of words of length $m/pq$ for distinct primes $p,q$ dividing $m$, etc.
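For instance (a small check of the count), for $d=2$ and $m=4$ the formula gives
$$
\frac{\mu(4)\cdot 2 + \mu(2)\cdot 2^{2} + \mu(1)\cdot 2^{4}}{4} = \frac{0 - 4 + 16}{4} = 3,
$$
corresponding to the three cycle families with representatives $x_{1}\otimes x_{1}\otimes x_{1}\otimes x_{2}$, $x_{1}\otimes x_{1}\otimes x_{2}\otimes x_{2}$, and $x_{1}\otimes x_{2}\otimes x_{2}\otimes x_{2}$; a word such as $x_{1}\otimes x_{2}\otimes x_{1}\otimes x_{2}$ is excluded because its cycle length is $2$, not $4$.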
\begin{corollary}
Let $A = \mathbb{Q}[x_1,...,x_d]/\mathfrak{m}^2$ where $\mathfrak{m}$ is the ideal $(x_1,\hdots,x_d)$. Then for $w = 0$
\begin{center}
$
HC_{n}^{(0)}(A) \cong
\begin{dcases}
\mathbb{Q} & n \textrm{ even and } n \geq 0 \\
0 & \textrm{else} \\
\end{dcases}
$
\end{center}
$\\*$ and for $w > 0$
\begin{center}
$
HC_{n}^{(w)}(A) \cong
\begin{dcases}
\displaystyle\bigoplus_{\substack{m \mid w \\ m \equiv w \Mod{2}}}\displaystyle\bigoplus_{\omega_{m,d}} \mathbb{Q}& n = w-1 \\
0 & \textrm{else}.\>
\end{dcases}
$
\end{center}
\end{corollary}
\section{Cyclic Homology of $\mathbb{Z}[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$}
Now we use Equation (\ref{HCNW2}) to compute $HC_{n}^{(w)}(A)$ for $k = \mathbb{Z}$. Let $\bar{\mathfrak{m}}_{\mathbb{Q}} = \mathfrak{m} / \mathfrak{m}^{2}$ when $A = \mathbb{Q}[x_1,...,x_d]/\mathfrak{m}^2$ and $\mathfrak{m}$ is the ideal $(x_1,\hdots,x_d)$. Let $\bar{\mathfrak{m}}_{\mathbb{Z}} = \mathfrak{m} / \mathfrak{m}^{2}$ when $A = \mathbb{Z}[x_1,\hdots,x_d]/\mathfrak{m}^2$ and $\mathfrak{m}$ is the ideal $(x_1,\hdots,x_d)$. So $\bar{\mathfrak{m}}_{\mathbb{Q}}$ is the free $\mathbb{Q}$-vector space on $[x_{1}]$, $[x_{2}],\hdots,[x_{d}]$ which by abuse of notation we call $x_{1}$, $x_{2},\hdots , x_{d}$ and $\bar{\mathfrak{m}}_{\mathbb{Z}}$ is the free $\mathbb{Z}$-module on these generators. Note that because $\alpha$ acts as $t$, the fact that $\mathbb{Q}$ is projective over $\mathbb{Q}[C_{w}]$ and thus has no higher Tor gives us
\begin{equation}\label{hadtoremake1}
\ker\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\big) \cong \operatorname{im}(N:\bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w})
\end{equation}
\begin{equation}\label{hadtoremake2}
\operatorname{im}\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\big) \cong \ker(N:\bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}) \> .
\end{equation}
By Equation (\ref{complexmbar}), $Tor_{n}^{\mathbb{Z}[C_{w}]}(\mathbb{Z},\bar{\mathfrak{m}}^{\otimes w})$ is the homology of the complex
\begin{equation}\label{TORHomComplex}
\cdots\xrightarrow{1-t_{w-1}} \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\xrightarrow{N_{w-1}} \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\xrightarrow{1-t_{w-1}} \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\xrightarrow{N_{w-1}} \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\xrightarrow{1-t_{w-1}}\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\xrightarrow{} 0 \>.
\end{equation}
Therefore, from Theorem \ref{HCNW3}
\begin{equation}\label{ccz1}
HC_{n}^{(0)}(\mathbb{Z}[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2) \cong
\begin{dcases}
\mathbb{Z} & n \textrm{ even and } n \geq 0 \\
0 & n \textrm{ else} \\
\end{dcases}
\end{equation}
and for $w > 0$
\begin{equation}\label{ccz2}
HC_{n}^{(w)}(\mathbb{Z}[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2)
\cong
\begin{dcases}
\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\> 0 & n \leq w-2 \\
\\
\frac{\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)} & n = w-1 \\
\\
\frac{\ker\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)}{\operatorname{im}(N_{w-1}:\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w})} & n = w + 2i, \hspace{11.5mm} i\geq0 \\
\\
\frac{\ker{(N_{w-1}:\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}})}{\operatorname{im}\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)} & n = w + 1 + 2i, \hspace{5mm} i\geq0 \> . \\
\end{dcases}
\end{equation}
Equations (\ref{hadtoremake1}) and (\ref{hadtoremake2}) gives us
\begin{center}
$\ker\big((1-t_{w-1}): \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big) \cong \big(\ker\big((1-t_{w-1}): \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\big)\big) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}, $
\medskip
$\ker(N_{w-1}: \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}) \cong \big(\ker(N_{w-1}: \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w})\big) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}. $
\end{center}
$\\*$So we can deduce
\begin{proposition}\label{applykertozandq}
Let $A = \mathbb{Z}[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$ where $\mathfrak{m}$ is the ideal $(x_1,x_2,\hdots,x_d)$ and $\bar{\mathfrak{m}} = \mathfrak{m} / \mathfrak{m}^{2}$.
Then
\begin{center}
$
HC_{n}^{(0)}(A) \cong
\begin{dcases}
\mathbb{Z} & n \textrm{ even and } n \geq 0 \\
0 & \textrm{else}\\
\end{dcases}
$
\end{center}
and for $w > 0$
\begin{center}
$
HC_{n}^{(w)}(A) \cong
\begin{dcases}
\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\> 0 & n \leq w-2 \\
\\
\frac{\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)} & n = w-1 \\
\\
\frac{\big(\operatorname{im}(N_{w-1}: \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w})\big) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}(N_{w-1}:\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w})} & n = w + 2i, \hspace{11.5mm}i\geq0 \\
\\
\frac{\big(\operatorname{im}\big((1-t_{w-1}): \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\big)\big) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)} & n = w + 1 + 2i, \hspace{5mm}i\geq0\> . \\
\end{dcases}
$
\end{center}
\end{proposition}
$\\*$The next three subsections will explicitly compute the pieces of Proposition \ref{applykertozandq}.
\subsection{Calculating $HC_{w+2i}^{(w)}$ for $i\geq0$}\hfill
From Proposition \ref{applykertozandq}, we know
$$
HC_{w+2i}^{(w)}(\mathbb{Z}[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2)\cong \frac{\big(\operatorname{im}(N_{w-1}: \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w})\big) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}(N_{w-1}:\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w})}.
$$
The module $\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}$ is freely spanned over $\mathbb{Z}$ by elements $x_{j_{1}}\otimes\cdots\otimes x_{j_{w}}$ where $j_{i} \in \{1,\hdots,d\}$ for $i\in \{1,\hdots,w\}$. Define a new map
$T_{w-1}(x_{j_{1}}\otimes x_{j_{2}}\otimes\cdots\otimes x_{j_{w}}) = x_{j_{w}}\otimes x_{j_{1}}\otimes\cdots\otimes x_{j_{w-1}}.$
$\\*$Note that
\begin{equation}\label{tnandTn}
t_{n}(a_{0}\otimes a_{1}\otimes \cdots\otimes a_{n}) = (-1)^{n}(a_{n}\otimes a_{0}\otimes a_{1} \otimes \cdots \otimes a_{n-1}) = (-1)^{n}T_{n}(a_{0}\otimes a_{1}\otimes \cdots\otimes a_{n})
\end{equation}
\begin{definition}\label{cl}
Define the \textbf{cycle length} of a tensor monomial $x_{j_{1}}\otimes\cdots\otimes x_{j_{w}}$ to be the smallest integer $m>0$ such that $T_{w-1}^{m}(x_{j_{1}}\otimes\cdots\otimes x_{j_{w}}) = x_{j_{1}}\otimes\cdots\otimes x_{j_{w}}$.
\end{definition}
\begin{definition}
Define the \textbf{cycle family} of a tensor monomial $x_{j_{1}}\otimes\cdots\otimes x_{j_{w}}$ to be the set of all the $T_{w-1}^{n}(x_{j_{1}}\otimes\cdots\otimes x_{j_{w}})$ for $n \in \mathbb{N}$.
\end{definition}
All tensor monomials in a cycle family have the same cycle length. Also, $t_{w-1}$ and $N_{w-1}$ send elements of a cycle family to sums of elements of the same cycle family. Therefore, we can break the calculation down by cycle families.
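For instance, for $w = 4$ and $d \geq 2$, the tensor monomial $x_{1}\otimes x_{2}\otimes x_{1}\otimes x_{2}$ has cycle length $2$, and its cycle family is $\{x_{1}\otimes x_{2}\otimes x_{1}\otimes x_{2},\ x_{2}\otimes x_{1}\otimes x_{2}\otimes x_{1}\}$, whereas $x_{1}\otimes x_{1}\otimes x_{2}\otimes x_{2}$ has cycle length $4$ and a cycle family with four elements.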
\begin{lemma}\label{paritydifferent}
Consider a tensor monomial $x_{j_{1}}\otimes\cdots\otimes x_{j_{w}}$ of cycle length $m$ such that $m$ is not of the same parity as $w$. Then $N_{w-1}(x_{j_{1}}\otimes\cdots\otimes x_{j_{w}}) = 0$.
\end{lemma}
\begin{proof}
First, note that since $T_{w-1}^{w}$ is the identity map, $m$ must divide $w$. Odd numbers only have odd divisors, so the only case we need to consider is when $w$ is even and $m$ is odd. Write $w = 2\ell m$ for some positive $\ell\in \mathbb{Z}$. A tensor monomial $x_{j_{1}}\otimes\cdots\otimes x_{j_{w}}$ in $\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}$ of cycle length $m$ must be of the form $(x_{k_{1}}\otimes \cdots \otimes x_{k_{m}})^{\otimes 2\ell}$ where $k_{1},\ldots, k_{m} \in \{1,2, \ldots, d\}$ are such that $x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}$ has cycle length $m$. Then
$$
N_{w-1}(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}\otimes \cdots \otimes x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}) = \sum\limits_{i=0}^{w-1} t_{w-1}^{i}(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}\otimes \cdots \otimes x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})
$$
$$
= \sum\limits_{i=0}^{w-1} (-1)^{i \cdot (w-1)} T_{w-1}^{i}(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}\otimes \cdots \otimes x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}).
$$
Since the tensor monomial has cycle length $m$, if $ a \equiv b \Mod{m}$ then
$$
T_{w-1}^{a}(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}\otimes \cdots \otimes x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}) = T_{w-1}^{b}(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}\otimes \cdots \otimes x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}).
$$
The $i^{th}$ summand, where $i = jm + a$ with $0 \leq a < m$, has sign $(-1)^{j}(-1)^{a}$, so this sum becomes
$$
\sum\limits_{j=0}^{2\ell-1} (-1)^{j} \Bigg( \sum\limits_{i=0}^{m-1} (-1)^{i} T_{w-1}^{i}(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}\otimes \cdots \otimes x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})\Bigg) = 0.
$$
\end{proof}
\begin{lemma}\label{paritysame}
Consider a tensor monomial $x_{j_{1}}\otimes\cdots\otimes x_{j_{w}}$ of cycle length $m$ such that $m$ has the same parity as $w$ and $w=\ell \cdot m$. Then $N_{w-1}(x_{j_{1}}\otimes\cdots\otimes x_{j_{w}}) = \ell \cdot ( 1 + t_{w-1} + t_{w-1}^{2} + \cdots + t_{w-1}^{m-1}) (x_{j_{1}}\otimes\cdots\otimes x_{j_{w}})$.\end{lemma}\begin{proof}
If $w = \ell\cdot m$ for $\ell$, $m$ odd, then $t_{w-1} = T_{w-1}$ so
$$
N_{w-1}(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}\otimes \cdots \otimes x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})
= \sum\limits_{i=0}^{w-1} t_{w-1}^{i}(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}\otimes \cdots \otimes x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})
$$
$$= \ell \sum\limits_{i=0}^{m-1} t_{w-1}^{i}(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}\otimes \cdots \otimes x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}) .
$$
If $w = \ell\cdot m$ for $m$ even,
$$
N_{w-1}(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}\otimes \cdots \otimes x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})
= \sum\limits_{i=0}^{w-1} t_{w-1}^{i}(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}\otimes \cdots \otimes x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})
$$
$$
= \sum\limits_{i=0}^{w-1} (-1)^{i \cdot (w-1)} T_{w-1}^{i}(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}\otimes \cdots \otimes x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})
$$
$$
= \sum\limits_{i=0}^{w-1} (-1)^{i} T_{w-1}^{i}(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}\otimes \cdots \otimes x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})
$$
$$
= \ell \sum\limits_{i=0}^{m-1} (-1)^{i} T_{w-1}^{i}(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}\otimes \cdots \otimes x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})
$$
$$
= \ell \sum\limits_{i=0}^{m-1} t_{w-1}^{i}(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}\otimes \cdots \otimes x_{k_{1}}\otimes\cdots\otimes x_{k_{m}}) .
$$ \end{proof}
\begin{lemma}\label{1}
Consider the family of rings $k[x_1,\hdots,x_d]/\mathfrak{m}^2$, where $\mathfrak{m}$ is the ideal $(x_1,\hdots,x_d)$ and $\bar{\mathfrak{m}} = \mathfrak{m} / \mathfrak{m}^{2}$. For any positive integer $w$,
$$
\dfrac{\big(\operatorname{im}(N_{w-1}: \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w})\big) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}(N_{w-1}:\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w})} \cong \displaystyle\bigoplus_{\substack{m \mid w \\ m \equiv w \Mod{2}}}\displaystyle\bigoplus_{\omega_{m,d}} \mathbb{Z}/\big(\tfrac{w}{m}\big)
$$
where $\omega_{m,d} = \{\textrm{all cycle families of words of length $m$ and cycle length $m$ in $x_1,\hdots,x_d$}\}$.\end{lemma}
$\\*$Recall the number of elements in $\omega_{m,d}$ is $\frac{1}{m}\sum_{i \vert m}\mu(m/i) d^{i}$.
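For instance, for $d=2$ this count gives $2$ cycle families when $m=1$ (those of $x_{1}$ and $x_{2}$), $\tfrac{1}{2}(2^{2}-2)=1$ when $m=2$ (that of $x_{1}\otimes x_{2}$), and $\tfrac{1}{3}(2^{3}-2)=2$ when $m=3$.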
\begin{proof}
For cycle families of cycle length $m$ where $w = \ell \cdot m$, we only need to consider the case where $m \equiv w \Mod{2}$ by Lemma \ref{paritydifferent}. Each such cycle family contains $m$ tensor monomials. By Lemma \ref{paritysame},
$$
N_{w-1}\big((x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}\big)=\ell \sum\limits_{j=0}^{m-1} t_{w-1}^{j}\big((x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}\big) .
$$
For any choice of coefficients $a_{i} \in \mathbb{Q}$ for $i \in \{1,2,\hdots,m\}$,
$$
N_{w-1}\big(a_{1}(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}+a_{2}(x_{k_{m}}\otimes\cdots\otimes x_{k_{m-1}})^{\otimes \ell} +\cdots+ a_{m-1}(x_{k_{3}}\otimes\cdots\otimes x_{k_{2}})^{\otimes \ell}+a_{m}(x_{k_{2}}\otimes\cdots\otimes x_{k_{1}})^{\otimes \ell}\big)
$$
$$
= \ell \cdot a_{1} \sum\limits_{j=0}^{m-1} t_{w-1}^{j}\big((x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}\big) + \ldots + \ell \cdot a_{m} \sum\limits_{j=0}^{m-1} t_{w-1}^{j}\big((x_{k_{2}}\otimes\cdots\otimes x_{k_{1}})^{\otimes \ell}\big).
$$
$\\*$\textbf{Case 1:} If $w$ and $m$ are odd,
$$
\ell \cdot a_{1} \sum\limits_{j=0}^{m-1} t_{w-1}^{j}\big((x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}\big) + \ldots + \ell \cdot a_{m} \sum\limits_{j=0}^{m-1} t_{w-1}^{j}\big((x_{k_{2}}\otimes\cdots\otimes x_{k_{1}})^{\otimes \ell}\big)
$$
$$
=\ell \cdot \Bigg(\sum\limits_{i=1}^{m} a_{i}\Bigg) \cdot \sum\limits_{j=0}^{m-1} t_{w-1}^{j}\big((x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}\big).
$$
Note that $\ell \cdot \bigg(\sum\limits_{i=1}^{m} a_{i}\bigg) \cdot \sum\limits_{j=0}^{m-1} t_{w-1}^{j}\big((x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}\big) \in \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}$ if and only if $\sum\limits_{i=1}^{m} a_{i} \in \dfrac{1}{\ell} \mathbb{Z}$. Therefore, $\big(\operatorname{im}(N_{w-1}: \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w})\big) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w} = \mathbb{Z} \cdot \sum\limits_{j=0}^{m-1} t_{w-1}^{j}\big((x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}\big) $. But
$\operatorname{im}(N_{w-1}:\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}) = \ell \cdot \mathbb{Z} \cdot \sum\limits_{j=0}^{m-1} t_{w-1}^{j}\big((x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}\big) $. So
$$
\dfrac{\big(\operatorname{im}(N_{w-1}: \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w})\big) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}(N_{w-1}:\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w})} \cong \mathbb{Z} / \ell \mathbb{Z}.
$$
$\\*$ \textbf{Case 2:} If $w$ and $m$ are even,
$$
\ell \cdot a_{1} \sum\limits_{j=0}^{m-1} t_{w-1}^{j}\big((x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}\big) + \ldots + \ell \cdot a_{m} \sum\limits_{j=0}^{m-1} t_{w-1}^{j}\big((x_{k_{2}}\otimes\cdots\otimes x_{k_{1}})^{\otimes \ell}\big)
$$
$$
=\ell \cdot \Bigg(\sum\limits_{i=1}^{m} (-1)^{i+1}a_{i}\Bigg) \cdot \sum\limits_{j=0}^{m-1} t_{w-1}^{j}\big((x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}\big).
$$
From here, the proof is the same as Case $1$ if we replace $\sum\limits_{i=1}^{m} a_{i}$ with $\sum\limits_{i=1}^{m} (-1)^{i+1}a_{i}$.
\end{proof}
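$\\*$For example, when $w = 4$ and $d = 2$, the divisors of $w$ of the same parity as $w$ are $m=2$ and $m=4$: the single cycle family in $\omega_{2,2}$ contributes a copy of $\mathbb{Z}/(\tfrac{4}{2}) = \mathbb{Z}/2$, while the three cycle families in $\omega_{4,2}$ contribute copies of $\mathbb{Z}/(\tfrac{4}{4}) = 0$, so the quotient in Lemma \ref{1} is $\mathbb{Z}/2$.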
\subsection{Calculating $HC_{w+1+2i}^{(w)}$ for $i\geq0$}\label{im1-tim1-t} \hfill
In Proposition \ref{applykertozandq}, we saw that:
$$
HC_{w+1+2i}^{(w)}(\mathbb{Z}[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2)\cong \frac{(\operatorname{im}((1-t_{w-1}): \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w})) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w})}.
$$
This section looks at images of tensor monomials of length $w$ and cycle length $m$ under the map $(1-t_{w-1})$.
\begin{lemma}\label{wecl1}
For $w$ even, each cycle family of words of cycle length $m=1$ will contribute a copy of $\mathbb{Z}/2$ to $\dfrac{(\operatorname{im}((1-t_{w-1}): \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w})) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w})}$.
\end{lemma}
\begin{proof}
For $w$ even and $i \in \{1,2,\hdots,d\}$,
$$
(1-t_{w-1})(x_{i}^{\otimes w})= x_{i}^{\otimes w} + x_{i}^{\otimes w} = 2(x_{i}^{\otimes w}).
$$
Hence $x_{i}^{\otimes w}$ lies in $\big(\operatorname{im}\big((1-t_{w-1}): \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\big)\big) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}$, while the part of $\operatorname{im}\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)$ coming from this cycle family is $2\mathbb{Z}\cdot x_{i}^{\otimes w}$, so the quotient picks up a copy of $\mathbb{Z}/2$.
\end{proof}
\begin{lemma}\label{wocl1}
For $w$ odd, each cycle family of words of cycle length $m=1$ will contribute nothing to $\dfrac{(\operatorname{im}((1-t_{w-1}): \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w})) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w})}$.
\end{lemma}
\begin{proof}
For $w$ odd and $i \in \{1,2,\hdots,d\}$,
$$
(1-t_{w-1})(x_{i}^{\otimes w})= x_{i}^{\otimes w} - x_{i}^{\otimes w} = 0.
$$
\end{proof}
\begin{lemma}\label{woclo}
For any $w>1$ odd, each cycle family of words of cycle length $m>1$ will contribute nothing to $\dfrac{\big(\operatorname{im}\big((1-t_{w-1}): \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\big)\big) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)}$.
\end{lemma}
\begin{proof} Let $w >1$ be odd, and write $w = m \cdot \ell$. Then $m$ must also be odd. If the word is $(x_{k_{1}}\otimes \cdots \otimes x_{k_{m}})^{\otimes \ell}$, then there are $m$ tensor monomials in its cycle family.
$\\*$Consider the image of the span over $\mathbb{Q}$ of these tensor monomials under $\big( 1-t_{w-1} \big): \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}$. Taking the $m$ words in the cycle family as a basis for their span over $\mathbb{Q}$, we can express the restriction of the map $(1-t_{w-1})$ to this span by the matrix
\[
\begin{bmatrix}
1 & 0 & \cdots & 0 & -1\\
-1 & 1 & \cdots& 0 & 0\\
0 & -1 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0& \cdots & 1&0 \\
0 & 0& \cdots & -1& 1 \\
\end{bmatrix}_{m\times m}.
\]
$\\*$If we row reduce (which can be done over $\mathbb{Z}$), then we get the matrix
\[
\begin{bmatrix}
1 & 0 & \cdots & 0 & -1\\
0& 1 & \cdots& 0 & -1\\
0 & 0& \cdots & 0 & -1\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0& \cdots & 1&-1 \\
0 & 0& \cdots & 0& 0 \\
\end{bmatrix}_{m\times m}.
\]
$\\*$So the first $(m-1)$ columns of the original $m \times m $ matrix are a basis for $\operatorname{im}(1-t_{w-1})$, and the question is: what can be said about $\alpha_{1}, \alpha_{2},\hdots,\alpha_{m-1}$ in the following equation if $b_{1},b_{2},\hdots,b_{m}$ are all integers?
\begin{align}\label{oddspan2}
\alpha_{1}\begin{bmatrix}
1 \\
-1\\
0\\
\vdots \\
0\\
0\\
\end{bmatrix}_{m\times 1} +
\alpha_{2}\begin{bmatrix}
0 \\
1\\
-1\\
\vdots \\
0\\
0\\
\end{bmatrix}_{m\times 1}+ \cdots +
\alpha_{m-1}\begin{bmatrix}
0 \\
0\\
0\\
\vdots \\
1\\
-1\\
\end{bmatrix}_{m\times 1} =
\begin{bmatrix}
b_{1} \\
b_{2} \\
b_{3}\\
\vdots \\
b_{m-1} \\
b_{m} \\
\end{bmatrix}_{m\times 1}
\end{align}
If $b_{1},b_{2},\hdots,b_{m}$ are all integers, then $\alpha_{1}, \alpha_{2},\hdots,\alpha_{m-1}$ are also all integers. Therefore, a cycle family of tensor monomials of length $w>1$ with $w$ odd and cycle length $m>1$ does not generate anything in $\frac{(\operatorname{im}((1-t_{w-1}): \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w})) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w})}$.
\end{proof}
\begin{lemma}\label{wecle}
For any $w>1$ even, each cycle family of words of cycle length $m$ also even will contribute nothing to $\dfrac{\big(\operatorname{im}\big((1-t_{w-1}): \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\big)\big) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)}$.
\end{lemma}
\begin{proof} Write $w=m\cdot \ell$. Then a monomial
$(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}$ with cycle length $m$ has $m$ monomials in its cycle family. Consider the image of the span over $\mathbb{Q}$ of these words under
$(1-t_{w-1}): \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}$. Taking the $m$ words in the cycle family as a basis for their span over $\mathbb{Q}$, we can express the restriction of the map $(1-t_{w-1})$ to this span by the matrix
\[
\begin{bmatrix}
1 & 0 & \cdots & 0 & 1\\
1 & 1 & \cdots& 0 & 0\\
0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0& \cdots & 1&0 \\
0 & 0& \cdots & 1& 1 \\
\end{bmatrix}_{m\times m}.
\]
$\\*$We get the following row-reduced version of the matrix (since $m$ is even)
\[
\begin{bmatrix}
1 & 0 &0& 0&\cdots & 0& 0& 0 & 1\\
0& 1 & 0&0&\cdots& 0 & 0& 0& -1\\
0 & 0& 1&0&\cdots & 0& 0& 0 & 1\\
0 & 0& 0&1&\cdots & 0& 0& 0 & -1\\
\vdots & \vdots & \vdots & \vdots & \ddots& \vdots & \vdots& \vdots &\vdots \\
0 & 0& 0&0&\cdots & 1&0& 0& 1\\
0 & 0& 0&0&\cdots & 0& 1& 0 & -1\\
0 & 0& 0&0&\cdots & 0& 0& 1&1 \\
0 & 0& 0&0&\cdots & 0& 0& 0& 0 \\
\end{bmatrix}_{m\times m}.
\]
So the first $m-1$ columns of the original matrix span the image of $(1-t_{w-1})$, and the question is: what can be said about $\alpha_{1}, \alpha_{2},\hdots,\alpha_{m-1}$ in the following equation if $b_{1},b_{2},\hdots,b_{m}$ are all integers?
\begin{align}\label{oddspan4}
\alpha_{1}\begin{bmatrix}
1 \\
1\\
0\\
\vdots \\
0\\
0\\
\end{bmatrix}_{m\times 1} +
\alpha_{2}\begin{bmatrix}
0 \\
1\\
1\\
\vdots \\
0\\
0\\
\end{bmatrix}_{m\times 1}+ \cdots +
\alpha_{m-1}\begin{bmatrix}
0 \\
0\\
0\\
\vdots \\
1\\
1\\
\end{bmatrix}_{m\times 1} =
\begin{bmatrix}
b_{1} \\
b_{2} \\
b_{3}\\
\vdots \\
b_{m-1} \\
b_{m} \\
\end{bmatrix}_{m\times 1}
\end{align}
Again, if $b_{1},b_{2},\hdots,b_{m}$ are all integers, then $\alpha_{1}, \alpha_{2},\hdots,\alpha_{m-1}$ are also all integers. Therefore, a cycle family of monomials of length $w>1$ with $w$ even and cycle length $m$ also even does not generate anything in $\frac{(\operatorname{im}((1-t_{w-1}): \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w})) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w})}$.
\end{proof}
\begin{lemma}\label{weclo}
For any $w>1$ even, each cycle family of words of cycle length $m>1$ where $m$ is odd will contribute a copy of $\mathbb{Z}/2$ to $\dfrac{(\operatorname{im}((1-t_{w-1}): \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w})) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w})}$.
\end{lemma}
\begin{proof}
Here, the matrix of $(1-t_{w-1})$ is
\[ M=
\begin{bmatrix}
1 & 0 & \cdots & 0 & 1\\
1 & 1 & \cdots& 0 & 0\\
0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0& \cdots & 1&0 \\
0 & 0& \cdots & 1& 1 \\
\end{bmatrix}_{m\times m.}
\]
$\\*$This matrix has full rank. So the question is: what can be said about $\alpha_{1}, \alpha_{2},\hdots,\alpha_{m-1}, \alpha_{m}$ in the following equation if $b_{1},b_{2},\hdots,b_{m}$ are all integers?
\begin{align}\label{evenoddspan}
\alpha_{1}\begin{bmatrix}
1 \\
1\\
0\\
\vdots \\
0\\
0\\
\end{bmatrix}_{m\times 1} +
\alpha_{2}\begin{bmatrix}
0 \\
1\\
1\\
\vdots \\
0\\
0\\
\end{bmatrix}_{m\times 1}+ \cdots +
\alpha_{m-1}\begin{bmatrix}
0 \\
0\\
0\\
\vdots \\
1\\
1\\
\end{bmatrix}_{m\times 1} +
\alpha_{m}\begin{bmatrix}
1 \\
0\\
0\\
\vdots \\
0\\
1\\
\end{bmatrix}_{m\times 1} =
\begin{bmatrix}
b_{1} \\
b_{2} \\
b_{3}\\
\vdots \\
b_{m-1} \\
b_{m} \\
\end{bmatrix}_{m\times 1}
\end{align}
$\\*$If we row reduce the associated matrix, we get the following matrix. All of the signs flip on the terms from line to line except one sign that stays the same. The underlining emphasizes the pattern.
\[
\left[
\begin{array}{ccccccccc|c}
1 & 0 &0& 0&\cdots & 0& 0& 0 & 0 & \frac{1}{2}(-b_{m}+b_{m-1} - b_{m-2} +b_{m-3} \cdots -b_{5}+b_{4}-b_{3}\underline{+\space b_{2}}+b_{1})\\
0& 1 & 0&0&\cdots& 0 & 0& 0& 0& \frac{1}{2}(b_{m}-b_{m-1} + b_{m-2} -b_{m-3} \cdots +b_{5}-b_{4} \underline{+ b_{3}}+b_{2}-b_{1})\\
0 & 0& 1&0&\cdots & 0& 0& 0 & 0& \frac{1}{2}(-b_{m}+b_{m-1} - b_{m-2} +b_{m-3} \cdots -b_{5}\underline{+ b_{4}}+b_{3}-b_{2}+b_{1})\\
0 & 0& 0&1&\cdots & 0& 0& 0 & 0& \frac{1}{2}(b_{m}-b_{m-1} + b_{m-2} -b_{m-3} \cdots \underline{+ b_{5}}+b_{4}-b_{3}+b_{2}-b_{1})\\
\vdots & \vdots & \vdots & \vdots & \ddots& \vdots & \vdots& \vdots &\vdots &\vdots\\
0 & 0& 0&0&\cdots & 1&0& 0& 0& \frac{1}{2}(b_{m}-b_{m-1} \underline{+ b_{m-2}} +b_{m-3} \cdots -b_{5}+b_{4}-b_{3}+b_{2}-b_{1})\\
0 & 0& 0&0&\cdots & 0& 1& 0 &0 &\frac{1}{2}(-b_{m}\underline{+ b_{m-1}} + b_{m-2} -b_{m-3} \cdots +b_{5}-b_{4}+b_{3}-b_{2}+b_{1})\\
0 & 0& 0&0&\cdots & 0& 0& 1&0& \frac{1}{2}(\underline{+ b_{m}}+b_{m-1} - b_{m-2} +b_{m-3} \cdots -b_{5}+b_{4}-b_{3}+b_{2}-b_{1})\\
0 & 0& 0&0&\cdots & 0& 0& 0& 1 & \frac{1}{2}(b_{m}-b_{m-1} + b_{m-2} -b_{m-3} \cdots +b_{5}-b_{4}+b_{3}-b_{2}+b_{1})\\
\end{array}
\right]
\]
$\\*$This tells us that the solution of Equation (\ref{evenoddspan}) is
$$
\alpha_{1} =\frac{1}{2}(-b_{m}+b_{m-1} - b_{m-2} +b_{m-3} \hdots -b_{3}+b_{2}+b_{1})
$$
$$
\alpha_{2} = \frac{1}{2}(b_{m}-b_{m-1} + b_{m-2} -b_{m-3} \hdots +b_{3}+b_{2}-b_{1})
$$
$$
\alpha_{3} = \frac{1}{2}(-b_{m}+b_{m-1} - b_{m-2} +b_{m-3} \hdots +b_{3}-b_{2}+b_{1})
$$
$$
\alpha_{4} = \frac{1}{2}(b_{m}-b_{m-1} + b_{m-2} -b_{m-3} \hdots -b_{3}+b_{2}-b_{1})
$$
$$
\vdots
$$
$$
\alpha_{m-3} =\frac{1}{2}(b_{m}-b_{m-1} + b_{m-2} +b_{m-3} \hdots -b_{3}+b_{2}-b_{1})
$$
$$
\alpha_{m-2} = \frac{1}{2}(-b_{m}+b_{m-1} + b_{m-2} -b_{m-3} \hdots +b_{3}-b_{2}+b_{1})
$$
$$
\alpha_{m-1} = \frac{1}{2}(b_{m}+b_{m-1} - b_{m-2} +b_{m-3} \hdots -b_{3}+b_{2}-b_{1})
$$
$$
\alpha_{m} = \frac{1}{2}(b_{m}-b_{m-1} + b_{m-2} -b_{m-3} \hdots +b_{3}-b_{2}+b_{1}).
$$
When $b_{1},\hdots,b_{m}$ are all integers, $(\pm b_{m}\pm b_{m-1} \pm b_{m-2} \pm b_{m-3} \cdots \pm b_{3} \pm b_{2} \pm b_{1}) \equiv (b_{m}+b_{m-1} + b_{m-2} +b_{m-3} \cdots +b_{3}+b_{2}+b_{1}) \Mod{2}$ for any choice of $\pm$. Therefore, when $b_{1},\hdots,b_{m}$ are all integers and $(b_{m}+b_{m-1} + b_{m-2} +b_{m-3} \cdots +b_{3}+b_{2}+b_{1}) \equiv 1 \Mod{2}$ we know ${\alpha_{1}, \hdots , \alpha_{m} }\in \mathbb{Q} \smallsetminus \mathbb{Z} $ and in fact are in $(\frac{1}{2} \mathbb{Z}) \smallsetminus \mathbb{Z}$. Also, when $b_{1},\hdots,b_{m}$ are all integers and $(b_{m}+b_{m-1} + b_{m-2} +b_{m-3} \cdots +b_{3}+b_{2}+b_{1}) \equiv 0 \Mod{2}$ we know ${\alpha_{1}, \hdots , \alpha_{m} }\in \mathbb{Z} $.
Now consider the map $f: \mathbb{Z}^{m} \xrightarrow{} \mathbb{Z}/2$ defined as $f(\alpha_{1},\alpha_{2},\hdots,\alpha_{m-1},\alpha_{m})= [\alpha_{1}+\alpha_{2}+\cdots+\alpha_{m-1}+\alpha_{m}].$ Note that $\ker(f) = M \cdot \mathbb{Z}^{m}$. Now, $(\operatorname{im}((1-t_{w-1}): \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w})) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}$ is exactly the span over $\mathbb{Z}$ of
$(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}, (x_{k_{m}}\otimes\cdots\otimes x_{k_{m-1}})^{\otimes \ell}, \hdots ,(x_{k_{3}}\otimes\cdots\otimes x_{k_{2}})^{\otimes \ell},$ and $(x_{k_{2}}\otimes\cdots\otimes x_{k_{1}})^{\otimes \ell}$, which is isomorphic to $\mathbb{Z}^{m}$. Under this isomorphism, $\operatorname{im}((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}) \cong M\cdot \mathbb{Z}^{m}$. Therefore,
$$
\dfrac{\big(\operatorname{im}\big((1-t_{w-1}): \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\big)\big) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)} \cong \dfrac{\mathbb{Z}^{m}}{(M \cdot \mathbb{Z}^{m})} \cong \mathbb{Z}/2.
$$\end{proof}
$\\*$Lemmas \ref{wecl1}, \ref{wocl1}, \ref{woclo}, \ref{wecle}, and \ref{weclo} combine to give:
\begin{lemma}\label{2}
Consider the family of rings $k[x_1,\hdots,x_d]/\mathfrak{m}^2$, where $\mathfrak{m}$ is the ideal $(x_1,\hdots,x_d)$ and $\bar{\mathfrak{m}} = \mathfrak{m} / \mathfrak{m}^{2}$. For all $w \in \mathbb{Z}^{+}$,
$$
\dfrac{\big(\operatorname{im}\big((1-t_{w-1}): \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\big)\big) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)}\cong \displaystyle\bigoplus_{\substack{m \mid w \\ m \not\equiv w \Mod{2}}}\displaystyle\bigoplus_{\omega_{m,d}} \mathbb{Z}/2
$$
$\\*$where $\omega_{m,d} = \{\textrm{all cycle families of words of length $m$ of cycle length $m$ in $x_1,\hdots,x_d$}\}$. (Note that if $w$ is odd, then there are no $m$ with $m \mid w$ such that $m \not\equiv w \Mod{2}$.)
\end{lemma}
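$\\*$For example, when $w = 4$ and $d = 2$, the only divisor of $w$ of the opposite parity is $m=1$, and the two cycle families in $\omega_{1,2}$ give $\mathbb{Z}/2 \oplus \mathbb{Z}/2$; for odd $w$ the quotient is trivial, as noted above.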
\subsection{Calculating $HC_{w-1}^{(w)}$}\hfill
In Proposition \ref{applykertozandq}, we saw that
$$
HC_{w-1}^{(w)}(\mathbb{Z}[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2)\cong \frac{ \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)}. $$
\begin{lemma}\label{wecl1.m}
For $w$ even, each word of cycle length $m=1$ will contribute a copy of $\mathbb{Z}/2$ to $\dfrac{ \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)}$.
\end{lemma}
\begin{proof}
For $w$ even and $i \in \{1,2,\hdots,d\}$,
$$
(1-t_{w-1})(x_{i}^{\otimes w})= x_{i}^{\otimes w} + x_{i}^{\otimes w} = 2(x_{i}^{\otimes w}).
$$
\end{proof}
\begin{lemma}\label{wocl1.m}
For $w$ odd, each word of cycle length $m=1$ will contribute a copy of $\mathbb{Z}$ to \newline $\dfrac{ \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)}$.
\end{lemma}
\begin{proof}
For $w$ odd and $i \in \{1,2,\hdots,d\}$,
$$
(1-t_{w-1})(x_{i}^{\otimes w})= x_{i}^{\otimes w} - x_{i}^{\otimes w} = 0.
$$
\end{proof}
\begin{lemma}\label{woclo.m}
For any $w>1$ odd, each cycle family of words of cycle length $m>1$ will contribute a copy of $\mathbb{Z}$ to $\dfrac{\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)}$.
\end{lemma}
\begin{proof} Write $w=m\cdot \ell$. As before, every tensor monomial $(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}$ of cycle length $m$ has $m$ words of length $w$ in its cycle family. Quotienting out by the image of $(1-t_{w-1})$ identifies these $m$ generators with each other, to give a single generator of
$\frac{\mathbb{Z}^{\oplus m}}{\operatorname{im}(1-t_{w-1})} \cong \mathbb{Z}$.
\end{proof}
\begin{lemma}\label{wecle.m}
For any $w>1 $ even, each cycle family of words of cycle length $m$ also even will contribute a copy of $\mathbb{Z}$ to $\dfrac{\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)}$.
\end{lemma}
\begin{proof} Again, there are $m$ tensor monomials of length $w$ in the cycle family of this word. Quotienting out by the image of $(1-t_{w-1})$ identifies the following generators
$$
(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}
\sim
-(x_{k_{m}}\otimes\cdots\otimes x_{k_{m-1}})^{\otimes \ell}
\sim
\cdots
\sim
(x_{k_{3}}\otimes\cdots\otimes x_{k_{2}})^{\otimes \ell}
\sim
-(x_{k_{2}}\otimes\cdots\otimes x_{k_{1}})^{\otimes \ell}
$$
so again we are just left with a single copy of $\mathbb{Z}$.
\end{proof}
\begin{lemma}\label{weclo.m}
For any $w>2$ even, each cycle family of words of cycle length $m>1$ with $m$ odd will contribute a copy of $\mathbb{Z}/2$ to $\dfrac{\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)}$.
\end{lemma}
\begin{proof} Again, there are $m$ monomials of length $w$ in the cycle family of this word. When we quotient the $\mathbb{Z}^{\oplus m}$ generated by the tensor monomials in the cycle family of $(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}$ by the image of $(1-t_{w-1})$, we get
$$
(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}
\sim
-(x_{k_{m}}\otimes\cdots\otimes x_{k_{m-1}})^{\otimes \ell}
\sim
\cdots
\sim
(x_{k_{2}}\otimes\cdots\otimes x_{k_{1}})^{\otimes \ell}
\sim
-(x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}
$$
so we are just left with $\mathbb{Z}/2$ generated by any one of the elements.
\end{proof}
Gathering the results in Lemmas \ref{wecl1.m}, \ref{wocl1.m}, \ref{woclo.m}, \ref{wecle.m}, and \ref{weclo.m} gives us the following lemma.
\begin{lemma}\label{3}
Consider the family of rings $k[x_1,\hdots,x_d]/\mathfrak{m}^2$, where $\mathfrak{m}$ is the ideal $(x_1,\hdots,x_d)$ and $\bar{\mathfrak{m}} = \mathfrak{m} / \mathfrak{m}^{2}$. For all $w \in \mathbb{Z}^{+}$,
$$\dfrac{\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w})} \cong \Bigg(\displaystyle\bigoplus_{\substack{m \mid w \\ m \equiv w \Mod{2}}}\displaystyle\bigoplus_{\omega_{m,d}} \mathbb{Z}\Bigg) \bigoplus \Bigg( \displaystyle\bigoplus_{\substack{m \mid w \\ m \not\equiv w \Mod{2}}}\displaystyle\bigoplus_{\omega_{m,d}} \mathbb{Z}/2\Bigg)$$
where $\omega_{m,d} = \{\textrm{all cycle families of words of length $m$ of cycle length $m$ in $x_1,\hdots,x_d$}\}$. (Note that if $w$ is odd, then there are no $m$ with $m \mid w$ such that $m \not\equiv w \Mod{2}$.)
\end{lemma}
Proposition \ref{applykertozandq}, Lemma \ref{1}, Lemma \ref{2}, and Lemma \ref{3} give
\begin{theorem}\label{Zcyclic}
Let $A = \mathbb{Z}[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$ where $\mathfrak{m}$ is the ideal $(x_1,x_2,\hdots,x_d)$ and $\bar{\mathfrak{m}} = \mathfrak{m} / \mathfrak{m}^{2}$. Then
\begin{center}
$
HC_{n}^{(0)}(A) \cong
\begin{dcases}
\mathbb{Z} & n \textrm{ even and } n \geq 0 \\
0 & \textrm{else} \\
\end{dcases}
$
\end{center}
and for $w > 0$
\begin{center}
$
HC_{n}^{(w)}(A) \cong
\begin{dcases}
\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\> 0 & n \leq w-2 \\
\\
\Bigg(\displaystyle\bigoplus_{\substack{m \mid w \\ m \equiv w \Mod{2}}}\displaystyle\bigoplus_{\omega_{m,d}} \mathbb{Z}\Bigg) \bigoplus \Bigg( \displaystyle\bigoplus_{\substack{m \mid w \\ m \not\equiv w \Mod{2}}}\displaystyle\bigoplus_{\omega_{m,d}} \mathbb{Z}/2\Bigg) & n = w-1 \\
\\
\displaystyle\bigoplus_{\substack{m \mid w \\ m \equiv w \Mod{2}}}\displaystyle\bigoplus_{\omega_{m,d}} \mathbb{Z}/(\tfrac{w}{m})
& n = w + 2i,\hspace{11.5mm} i\geq 0 \\
\\
\displaystyle\bigoplus_{\substack{m \mid w \\ m \not\equiv w \Mod{2}}}\displaystyle\bigoplus_{\omega_{m,d}} \mathbb{Z}/2 & n = w + 1 + 2i, \hspace{5mm}i\geq0 \> \\
\end{dcases}
$
\end{center}
where $\omega_{m,d} = \{\textrm{all cycle families of words of length $m$ and cycle length $m$ in $x_1,\hdots,x_d$}\}$ has order \newline $\frac{1}{m}\sum_{i \vert m}\mu(m/i) d^{i}$.
\end{theorem}
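$\\*$As an illustration of Theorem \ref{Zcyclic}, take $d = 2$ and $w = 2$, so that $\omega_{1,2}$ has two elements and $\omega_{2,2}$ has one. The theorem then gives
\begin{center}
$
HC_{n}^{(2)}(\mathbb{Z}[x_1,x_2]/\mathfrak{m}^2) \cong
\begin{dcases}
0 & n \leq 0 \\
\mathbb{Z}\oplus\mathbb{Z}/2\oplus\mathbb{Z}/2 & n = 1 \\
0 & n = 2 + 2i, \hspace{5mm} i\geq0 \\
\mathbb{Z}/2\oplus\mathbb{Z}/2 & n = 3 + 2i, \hspace{5mm} i\geq0 \\
\end{dcases}
$
\end{center}
since $m=2$ contributes $\mathbb{Z}/(\tfrac{2}{2}) = 0$ in the degrees $n = 2+2i$ and the two cycle families with $m=1$ contribute the $\mathbb{Z}/2$ summands.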
\section{Negative Cyclic Homology of $k[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$}\hfill
We now want to calculate $HC_{*}^{-}(A)$ for $A = k[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$; to do this, we will compute the total homology of the double complex in Definition \ref{negative}. Again, we can study the resulting calculation of $HC_{*}^{-}(A)$ by splitting it into $(HC^{-})_{*}^{(w)}(A)$ for all $w \geq 0$, as we did for $HC_{*}^{(w)}(A)$.
\begin{center}
$HC_{*}^{-}(A) \cong \displaystyle\bigoplus_{w=0}^{\infty}(HC^{-})_{*}^{(w)}(A)$
\end{center}
As in Equation (\ref{HCW0}), if $w=0$ then $E^{1}_{p,q} = 0$ unless $p\leq 0$ is even and $q=0$, so $E^{1} \cong E^{\infty}$ and
\begin{equation}\label{HCNW0}
(HC^{-})_{n}^{(0)}(A) \cong
\begin{dcases}
k & n \textrm{ even and } n \leq 0 \\
0 & \textrm{else} \\
\end{dcases}.
\end{equation}
Again for $w>0$, in the $E^{1}$-page the even columns (numbered less than or equal to 0) consist of $HH_{*}(A)$ and the odd columns are zero. So we again have $\partial^{1}=0$ and $E^{1} \cong E^{2}$. Again, all $\partial^{r}$ for $r\geq 3$ are $0$ for dimension reasons. Therefore, the $E^{3}$ page is the same as the $E^{\infty}$ page, and we get
\begin{equation}\label{HNCNW}
(HC^{-})_{n}^{(w)}(A) \cong
\begin{dcases}
0 & n > w \\
\ker(1-t_{w-1}) & n = w \\
\ker(\partial^{2}) & n = w -1-2i,\hspace{5mm} i\geq 0 \\
\operatorname{coker}(\partial^{2}) & n = w - 2i,\hspace{11.5mm} i> 0.\\
\end{dcases}
\end{equation}
Following the same reasoning that appears after Equation (\ref{HCNW}), we can rewrite this as
\begin{equation}\label{HNCNW2}
(HC^{-})_{n}^{(w)}(A) \cong
\begin{dcases}
0 & n > w \\
\ker(1-t_{w-1}) & n = w \\
\ker(\operatorname{coker}(1-t_{w-1}) \xrightarrow{N_{w-1}} \ker(1-t_{w-1})) & n = w -1-2i,\hspace{5mm} i\geq 0 \\
\operatorname{coker}(\operatorname{coker}(1-t_{w-1}) \xrightarrow{N_{w-1}} \ker(1-t_{w-1})) & n = w - 2i,\hspace{11.5mm} i> 0. \\
\end{dcases}
\end{equation}
\medskip
\subsection{Comparison to Ext}\hfill
The calculation of the pieces of Equation (\ref{HNCNW2}) is exactly what we get when computing $Ext_{n}^{k[C_{w}]}(k,\bar{\mathfrak{m}}^{\otimes w})$ where $C_{w}= \langle \alpha : \alpha^{w}=1 \rangle$ is the cyclic group of order $w$ and $\bar{\mathfrak{m}}^{\otimes w}$ is a $k[C_{w}]$-module with the action
$$
\alpha(x_{n_{1}}\otimes\cdots\otimes x_{n_{w}}) = t_{w-1}(x_{n_{1}}\otimes\cdots\otimes x_{n_{w}}) = (-1)^{w-1}(x_{n_{w}}\otimes x_{n_{1}} \otimes\cdots\otimes x_{n_{w-1}}).
$$
To compute $Ext_{n}^{k[C_{w}]}(k,\bar{\mathfrak{m}}^{\otimes w})$, we apply ${\rm{Hom}}_{k[C_{w}]}(-,\bar{\mathfrak{m}}^{\otimes w})$ to the resolution in Equation (\ref{r}) to get
\begin{center}
\begin{tikzcd}[
ar symbol/.style = {draw=none,"#1" description,sloped},
isomorphic/.style = {ar symbol={\cong}},
equals/.style = {ar symbol={=}},
]
\cdots& {\rm{Hom}}(k[C_{w}],\bar{\mathfrak{m}}^{\otimes w}) \arrow{l}{(1-\alpha) \circ \_}& {\rm{Hom}}(k[C_{w}],\bar{\mathfrak{m}}^{\otimes w})\arrow{l}{N \circ \_}&{\rm{Hom}}(k[C_{w}],\bar{\mathfrak{m}}^{\otimes w})\arrow{l}{(1-\alpha) \circ \_}& 0\arrow{l} \>
\end{tikzcd}
\end{center}
so $Ext_{n}^{k[C_{w}]}(k,\bar{\mathfrak{m}}^{\otimes w})$ is the cohomology of the complex
\begin{equation}\label{cohomExt}
\cdots\xleftarrow{1-t_{w-1}} \bar{\mathfrak{m}}^{\otimes w}\xleftarrow{N_{w-1}} \bar{\mathfrak{m}}^{\otimes w}\xleftarrow{1-t_{w-1}} \bar{\mathfrak{m}}^{\otimes w}\xleftarrow{N_{w-1}} \bar{\mathfrak{m}}^{\otimes w}\xleftarrow{1-t_{w-1}}\bar{\mathfrak{m}}^{\otimes w}\xleftarrow{} 0
\end{equation}
$\\*$yielding
\begin{equation}\label{ext0}
Ext_{0}^{k[C_{w}]}(k,\bar{\mathfrak{m}}^{\otimes w})\cong \textrm{ker}(1-t_{w-1})
\end{equation}
\begin{equation}\label{ext1}
Ext_{1+ 2i}^{k[C_{w}]}(k,\bar{\mathfrak{m}}^{\otimes w})\cong\ker(\textrm{coker}(1-t_{w-1}) \xrightarrow{N_{w-1}} \ker(1-t_{w-1})) \textrm{ for } i\geq 0 \>
\end{equation}
\begin{equation}\label{ext2}
Ext_{2 + 2i}^{k[C_{w}]}(k,\bar{\mathfrak{m}}^{\otimes w})\cong\textrm{coker}(\textrm{coker}(1-t_{w-1}) \xrightarrow{N_{w-1}} \ker(1-t_{w-1})) \textrm{ for } i\geq 0.\>
\end{equation}
We deduce
\begin{theorem}\label{HNCNW3}
Let $A = k[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$, where $k$ is any commutative unital ring, $\mathfrak{m}$ is the ideal $(x_1,x_2,\hdots,x_d)$, $\bar{\mathfrak{m}} = \mathfrak{m} / \mathfrak{m}^{2}$, $C_{w}= \langle \alpha : \alpha^{w}=1 \rangle$, and $\bar{\mathfrak{m}}^{\otimes w}$ is a $k[C_{w}]$-module with the action
$$
\alpha(x_{n_{1}}\otimes\cdots\otimes x_{n_{w}}) = (-1)^{w-1}(x_{n_{w}}\otimes x_{n_{1}} \otimes\cdots\otimes x_{n_{w-1}}) \> .
$$
$\\*$ Then for $w = 0$
\begin{center}
$
(HC^{-})_{n}^{(0)}(A) \cong
\begin{dcases}
k & n \textrm{ even and } n \leq 0 \\
0 & \textrm{else} \\
\end{dcases}
$
\end{center}
\medskip
$\\*$ and for $w > 0$
\begin{center}
$
(HC^{-})_{n}^{(w)}(A) \cong
\begin{dcases}
0 & n > w \\
Ext_{w-n}^{k[C_{w}]}(k,\bar{\mathfrak{m}}^{\otimes w}) & n \leq w\> \\
\end{dcases}
$
\end{center}
$\\*$ which can be rewritten as
\begin{center}
$
(HC^{-})_{n}^{(w)}(A) \cong
\begin{dcases}
0 & n > w \\
H^{w-n}(C_{w};\bar{\mathfrak{m}}^{\otimes w}) & n \leq w\> \\
\end{dcases}
$
\end{center}
$\\*$for the $C_{w}$ action on $\bar{\mathfrak{m}}^{\otimes w}$ explained above.
\end{theorem}
\subsection{Negative Cyclic Homology of $\mathbb{Q}[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$}\hfill
Since $\mathbb{Q}$ is a projective $\mathbb{Q}[C_{w}]$-module, we get that for $A = \mathbb{Q}[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$ and for $w = 0$
\begin{equation}
(HC^{-})_{n}^{(0)}(A) \cong
\begin{dcases}
\mathbb{Q} & n \textrm{ even and } n \leq 0 \\
0 & \textrm{else} \\
\end{dcases}
\end{equation}
\medskip
$\\*$ and for $w > 0$
\begin{equation}
(HC^{-})_{n}^{(w)}(A) \cong
\begin{dcases}
Hom_{\mathbb{Q}[C_{w}]}(\mathbb{Q},\bar{\mathfrak{m}}^{\otimes w}) & n = w \\
0 & \textrm{else} \>.
\end{dcases}
\end{equation}
Since $Hom_{\mathbb{Q}[C_{w}]}(\mathbb{Q},\bar{\mathfrak{m}}^{\otimes w})$ is the set of $\mathbb{Q}[C_{w}]$-module homomorphisms $f:\mathbb{Q} \rightarrow \bar{\mathfrak{m}}^{\otimes w}$, we know any $f$ must satisfy
$f(\alpha q ) = \alpha f(q)$ for all $q \in \mathbb{Q}$, where $\alpha$ acts trivially on $q$ and
$\alpha(x_{n_{1}}\otimes\cdots\otimes x_{n_{w}}) = (-1)^{w-1}(x_{n_{w}}\otimes x_{n_{1}} \otimes\cdots\otimes x_{n_{w-1}})$ on $f(q)$. We can rewrite the action $\alpha f(q)$ as $t_{w-1}f(q)$. We then get $f(q) = t_{w-1} f(q)$, which can be rewritten as $(1-t_{w-1})f(q) = 0$. Therefore the set of possible $\mathbb{Q}[C_{w}]$-module homomorphisms $f:\mathbb{Q} \rightarrow \bar{\mathfrak{m}}^{\otimes w}$ is in one-to-one correspondence with the choices of $f(1) \in \ker(1-t_{w-1})$, and
$$\ker\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\big) \cong \displaystyle\bigoplus_{\substack{m \mid w \\ m \equiv w \Mod{2}}}\displaystyle\bigoplus_{\omega_{m,d}} \mathbb{Q}$$
by the same reasoning that will be used to prove Lemma \ref{ker1minustwforz}. We deduce
\begin{corollary}
Let $A = \mathbb{Q}[x_1,...,x_d]/\mathfrak{m}^2$ where $\mathfrak{m}$ is the ideal $(x_1,\hdots,x_d)$. Then for $w = 0$
\begin{center}
$
(HC^{-})_{n}^{(0)}(A) \cong
\begin{dcases}
\mathbb{Q} & n \textrm{ even and } n \leq 0 \\
0 & \textrm{else} \\
\end{dcases}
$
\end{center}
$\\*$ and for $w > 0$
\begin{center}
$
(HC^{-})_{n}^{(w)}(A) \cong
\begin{dcases}
\displaystyle\bigoplus_{\substack{m \mid w \\ m \equiv w \Mod{2}}}\displaystyle\bigoplus_{\omega_{m,d}} \mathbb{Q} & n = w \\
0 & \textrm{else} \>
\end{dcases}
$
\end{center}
where $\omega_{m,d} = \{\textrm{all cycle families of words of length $m$ and cycle length $m$ in $x_1,\hdots,x_d$}\}$ has order \newline $\frac{1}{m}\sum_{i \vert m}\mu(m/i) d^{i}$.
\end{corollary}
\subsection{Negative Cyclic Homology of $\mathbb{Z}[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$}\hfill
To compute the negative cyclic homology of $\mathbb{Z}[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$, note the similarity between the complex in Equation (\ref{cohomExt}) and the complex in Equation (\ref{TORHomComplex}). In Section 5, we showed that
\begin{equation}\label{rel1}
\ker\big(\textrm{coker}(1-t_{w-1}) \xrightarrow{N_{w-1}} \ker(1-t_{w-1})\big) \cong \frac{\big(\operatorname{im}\big((1-t_{w-1}): \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\big)\big) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)}
\end{equation}
\begin{equation}\label{rel2}
\textrm{coker}\big(\textrm{coker}(1-t_{w-1}) \xrightarrow{N_{w-1}} \ker(1-t_{w-1})\big)\cong \frac{\big(\operatorname{im}(N_{w-1}: \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w})\big) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}(N_{w-1}:\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w})}.
\end{equation}
Applying Theorem \ref{HNCNW3} we get
\begin{theorem}\label{applykertozandq2}
Let $A = \mathbb{Z}[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$, where $\mathfrak{m}$ is the ideal $(x_1,x_2,\hdots,x_d)$ and $\bar{\mathfrak{m}} = \mathfrak{m} / \mathfrak{m}^{2}$.
Then
\begin{center}
$
(HC^{-})_{n}^{(0)}(A) \cong
\begin{dcases}
\mathbb{Z} & n \textrm{ even and } n \leq 0 \\
0 & \textrm{else}\\
\end{dcases}
$
\end{center}
and for $w > 0$
\begin{center}
$
(HC^{-})_{n}^{(w)}(A) \cong
\begin{dcases}
\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\> 0 & n > w \\
\\
\ker\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big) & n = w \\
\\
\frac{\big(\operatorname{im}(N_{w-1}: \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w})\big) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}(N_{w-1}:\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w})} & n = w - 1 - 2i, \hspace{5mm} i\geq 0 \\
\\
\frac{\big(\operatorname{im}\big((1-t_{w-1}): \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Q}}^{\otimes w}\big)\big) \cap \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}}{\operatorname{im}\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)} & n = w - 2i, \hspace{11.5mm} i> 0 \> . \\
\end{dcases}
$
\end{center}
\end{theorem}
$\\*$We can use Lemma \ref{1} and Lemma \ref{2} to understand the last two parts of the above equation for $(HC^{-})_{n}^{(w)}(A)$. However, we still need to compute $\ker\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)$.
\begin{lemma}\label{wel1nc}
For $w$ even, each cycle family of words of cycle length $m=1$ will contribute nothing to $\ker\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)$.
\end{lemma}
\begin{proof}
For $w$ even and $i \in \{1,2,\hdots,d\}$,
$$
(1-t_{w-1})(x_{i}^{\otimes w})= x_{i}^{\otimes w} + x_{i}^{\otimes w} = 2(x_{i}^{\otimes w}).
$$
\end{proof}
\begin{lemma}\label{wol1nc}
For $w$ odd, each cycle family of words of cycle length $m=1$ will contribute a copy of $\mathbb{Z}$ to $\ker\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)$.
\end{lemma}
\begin{proof}
For $w$ odd and $i \in \{1,2,\hdots,d\}$,
$$
(1-t_{w-1})(x_{i}^{\otimes w})= x_{i}^{\otimes w} - x_{i}^{\otimes w} = 0.
$$
\end{proof}
\begin{lemma}\label{wolonc}
For $w>1$ odd, each cycle family of words of cycle length $m>1$ will contribute a copy of $\mathbb{Z}$ to $\ker\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)$.
\end{lemma}
\begin{proof}
First note that if $m|w$ and $w$ is odd, then $m$ is also odd. There exists an $\ell$ such that $w=m\cdot \ell$. Again, there are $m$ monomials of length $w$ in such a cycle family. Consider the image of these words under the map $( 1-t_{w-1}): \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}$. The only linear combinations of words in this cycle family that go to $0$ under the map $1-t_{w-1}$ are the multiples of the sum of all the words in this cycle family.
\end{proof}
\begin{lemma}\label{welenc}
For $w>1$ even, each cycle family of words of cycle length $m$ also even will contribute a copy of $\mathbb{Z}$ to $\ker\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)$.
\end{lemma}
\begin{proof}
Again, there are $m$ monomials of length $w$ in the cycle family of this word. Consider the image of these words under the map: $\big( 1-t_{w-1} \big): \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}$. Note the following:
$$(1-t_{w-1})\big((x_{k_{1}}\otimes\cdots\otimes x_{k_{m}})^{\otimes \ell}-(x_{k_{m}}\otimes\cdots\otimes x_{k_{m-1}})^{\otimes \ell} + (x_{k_{m-1}}\otimes\cdots\otimes x_{k_{m-2}})^{\otimes \ell}- \ldots
$$
$$+(x_{k_{3}}\otimes\cdots\otimes x_{k_{2}})^{\otimes \ell} -(x_{k_{2}}\otimes\cdots\otimes x_{k_{1}})^{\otimes \ell}\big)=0
$$
$\\*$The only linear combinations of words in this cycle family that go to $0$ under the map $1-t_{w-1}$ are the multiples of the alternating sum (in the above order) of all the words in this cycle family; such an alternating sum is killed by $1-t_{w-1}$ precisely because $m$ is even and there are $m$ words in the cycle family. Therefore, any element of $\ker(1-t_{w-1})$ that comes from this cycle family is generated by the alternating sum (in the above order) of all members of this cycle family.
\end{proof}
\begin{lemma}\label{welonc}
For $w>1$ even, each cycle family of words of cycle length $m>1$ with $m$ odd will contribute nothing to $\ker\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big)$.
\end{lemma}
\begin{proof}
This is almost identical to the proof above of Lemma \ref{welenc}, except here the alternating sum of all the words of this cycle family will not go to $0$ because there are $m$ (which is odd) words in this cycle family. In fact, only the trivial linear combination of the above words lies in $\ker(1-t_{w-1})$. So the words in this cycle family will contribute nothing to $\ker(1-t_{w-1})$.
\end{proof}
Lemmas \ref{wel1nc}, \ref{wol1nc}, \ref{wolonc}, \ref{welenc}, and \ref{welonc} give the following lemma.
\begin{lemma}\label{ker1minustwforz}
$$\ker\big((1-t_{w-1}):\bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\rightarrow \bar{\mathfrak{m}}_{\mathbb{Z}}^{\otimes w}\big) \cong \displaystyle\bigoplus_{\substack{m \mid w \\ m \equiv w \Mod{2}}}\displaystyle\bigoplus_{\omega_{m,d}} \mathbb{Z}$$
\end{lemma}
Summarizing Lemma \ref{1}, Lemma \ref{2}, Lemma \ref{ker1minustwforz}, and Theorem \ref{applykertozandq2}, we get
\begin{theorem}\label{NegCycZ}
Let $A = \mathbb{Z}[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$, where $\mathfrak{m}$ is the ideal $(x_1,x_2,\hdots,x_d)$ and $\bar{\mathfrak{m}} = \mathfrak{m} / \mathfrak{m}^{2}$.
Then
\begin{center}
$
(HC^{-})_{n}^{(0)}(A) \cong
\begin{dcases}
\mathbb{Z} & n \textrm{ even and } n \leq 0 \\
0 & \textrm{else}\\
\end{dcases}
$
\end{center}
and for $w > 0$
\begin{center}
$
(HC^{-})_{n}^{(w)}(A) \cong
\begin{dcases}
\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\> 0 & n > w \\
\\
\displaystyle\bigoplus_{\substack{m \mid w \\ m \equiv w \Mod{2}}}\displaystyle\bigoplus_{\omega_{m,d}} \mathbb{Z} & n = w \\
\\
\displaystyle\bigoplus_{\substack{m \mid w \\ m \not\equiv w \Mod{2}}}\displaystyle\bigoplus_{\omega_{m,d}} \mathbb{Z}/2
& n = w -1 - 2i,\hspace{5mm} i\geq 0 \\
\\
\displaystyle\bigoplus_{\substack{m \mid w \\ m \equiv w \Mod{2}}}\displaystyle\bigoplus_{\omega_{m,d}} \mathbb{Z}/\big(\tfrac{w}{m}\big)
& n = w -2i,\hspace{11.5mm} i> 0 \\
\end{dcases}
$
\end{center}
$\\*$where $\omega_{m,d} = \{\textrm{all cycle families of words of length $m$ and cycle length $m$ in $x_1,\hdots,x_d$}\}$ has order \newline $\frac{1}{m}\sum_{i \vert m}\mu(m/i) d^{i}$.
\end{theorem}
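$\\*$For instance, with $d = 2$ and $w = 2$ (so that $\omega_{1,2}$ has two elements and $\omega_{2,2}$ has one), Theorem \ref{NegCycZ} gives $(HC^{-})_{2}^{(2)}(A) \cong \mathbb{Z}$, $(HC^{-})_{n}^{(2)}(A) \cong \mathbb{Z}/2 \oplus \mathbb{Z}/2$ for $n = 1 - 2i$ with $i\geq 0$, and $(HC^{-})_{n}^{(2)}(A) \cong 0$ for all other $n$.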
\section{Periodic Cyclic Homology of $k[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$}
We now want to calculate $HP_{*}(A)$ for $A = k[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$ by computing the total homology of the double complex in Definition \ref{periodic}. As before, we break the calculation down by weight,
\begin{center}
$HP_{*}(A) \cong \displaystyle\bigoplus_{w=0}^{\infty}HP_{*}^{(w)}(A)$
\end{center}
$\\*$and as before, the case $w=0$ is easy, yielding
\begin{equation}
HP_{n}^{(0)}(A) \cong
\begin{dcases}
k & n \textrm{ even} \\
0 & \textrm{else} \\
\end{dcases}.
\end{equation}
$\\*$For $w>0$, we will again get $E^{1}\cong E^{2}$ and $E^{3}\cong E^{\infty}$ so
\begin{equation}\label{HPNW}
HP_{n}^{(w)}(A) \cong
\begin{dcases}
\textrm{coker}(\partial^{2}) & n = w + 2i \textrm{ for } i\in\mathbb{Z}\\
\ker(\partial^{2}) & n = w + 2i +1 \textrm{ for } i\in\mathbb{Z} \> . \\
\end{dcases}
\end{equation}
$\\*$Following the same reasoning that appears after Equation (\ref{HCNW}), we can rewrite this as
\begin{equation}\label{HPNW2}
HP_{n}^{(w)}(A) \cong
\begin{dcases}
\textrm{coker}\big(\textrm{coker}(1-t_{w-1}) \xrightarrow{N_{w-1}} \ker(1-t_{w-1})\big) & n = w + 2i \textrm{ for } i\in\mathbb{Z}\\
\ker\big(\textrm{coker}(1-t_{w-1}) \xrightarrow{N_{w-1}} \ker(1-t_{w-1})\big) & n = w + 2i +1 \textrm{ for } i\in\mathbb{Z} \> . \\
\end{dcases}
\end{equation}
\begin{definition}
The \textbf{Tate cohomology groups} $\hat{H}^{n}(G,A)$ of a discrete group $G$ with coefficients in a $\mathbb{Z}[G]$-module $A$ are defined to be
$$
\hat{H}^{n}(G,A) =
\begin{dcases}
H^{n}(G,A) & n \geq 1\\
\operatorname{coker} N & n = 0\\
\ker N & n = -1\\
H_{-(n+1)}(G,A) & n \leq -2\\
\end{dcases}
$$
\end{definition}
Comparing Equation (\ref{HPNW2}) with Equations (\ref{HCNW2}) and (\ref{HNCNW2}), and then comparing the group homology and cohomology results from Theorems \ref{HCNW3} and \ref{HNCNW3} with the above definition of Tate cohomology, we get the following corollary.
\medskip
\begin{corollary}\label{HPNW3}
Let $A = k[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$, where $k$ is any commutative unital ring, $\mathfrak{m}$ is the ideal $(x_1,x_2,\hdots,x_d)$, and $\bar{\mathfrak{m}} = \mathfrak{m} / \mathfrak{m}^{2}$. Write $HP_{*}(A) \cong \displaystyle\bigoplus_{w=0}^{\infty}HP_{*}^{(w)}(A)$, where $HP_{*}^{(w)}(A)$ is the homology of the subcomplex of weight $w$ terms, and let $C_{w}= \langle \alpha : \alpha^{w}=1 \rangle$ act on $\bar{\mathfrak{m}}^{\otimes w}$ by
$$
\alpha(x_{n_{1}}\otimes\cdots\otimes x_{n_{w}}) = (-1)^{w-1}(x_{n_{w}}\otimes x_{n_{1}} \otimes\cdots\otimes x_{n_{w-1}}) \> .
$$
Then for $w = 0$
\begin{center}
$
HP_{n}^{(0)}(A) \cong
\begin{dcases}
k & n \textrm{ even} \\
0 & n \textrm{ odd} \\
\end{dcases}
$
\end{center}
\medskip
$\\*$ and for $w > 0$
\begin{center}
$
HP_{n}^{(w)}(A) =
\begin{dcases}
\operatorname{coker}\big(\operatorname{coker}(1-t_{w-1}) \xrightarrow{N_{w-1}} \ker(1-t_{w-1})\big) & n = w + 2i \textrm{ for } i\in\mathbb{Z}\\
\ker\big(\operatorname{coker}(1-t_{w-1}) \xrightarrow{N_{w-1}} \ker(1-t_{w-1})\big) & n = w + 2i +1 \textrm{ for } i\in\mathbb{Z} \> \\
\end{dcases}
$
\end{center}
$\\*$which can be rewritten as
\begin{center}
$
HP_{n}^{(w)}(A) \cong \hat{H}^{w-n}(C_{w},\bar{\mathfrak{m}}^{\otimes w})
$
\end{center}
$\\*$for the $C_{w}$ action on $\bar{\mathfrak{m}}^{\otimes w}$ explained above.
\end{corollary}
Since $\mathbb{Q}$ is a projective $\mathbb{Q}[C_{w}]$-module, applying Equations (\ref{tor1}), (\ref{tor2}), (\ref{ext1}), and (\ref{ext2}), we get
\begin{corollary}\label{PQ}
Let $A = \mathbb{Q}[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$ where $\mathfrak{m}$ is the ideal $(x_1,x_2,\hdots,x_d)$.
\medskip
$\\*$ Then for $w = 0$
\begin{center}
$
HP_{n}^{(0)}(A) \cong
\begin{dcases}
\mathbb{Q} & n \textrm{ even} \\
0 & \textrm{else} \\
\end{dcases}
$
\end{center}
\medskip
$\\*$ and for $w > 0$
\begin{center}
$
HP_{n}^{(w)}(A) \cong 0.
$
\end{center}
\end{corollary}
Applying Lemma \ref{1}, Lemma \ref{2}, Equation (\ref{rel1}), and Equation (\ref{rel2}) to Corollary \ref{HPNW3}, we get
\begin{corollary}\label{Zperiodic}
Let $A = \mathbb{Z}[x_1,x_2,\hdots,x_d]/\mathfrak{m}^2$ where $\mathfrak{m}$ is the ideal $(x_1,x_2,\hdots,x_d)$. Then
\begin{center}
$
HP_{n}^{(0)}(A) \cong
\begin{dcases}
\mathbb{Z} & n \textrm{ even} \\
0 & n \textrm{ odd} \\
\end{dcases}
$
\end{center}
\medskip
$\\*$ and for $w > 0$
\begin{center}
$
HP_{n}^{(w)}(A) \cong
\begin{dcases}
\displaystyle\bigoplus_{\substack{m \mid w \\ m \equiv w \Mod{2}}}\displaystyle\bigoplus_{\omega_{m,d}} \mathbb{Z}/\big(\tfrac{w}{m}\big)
& n = w + 2i \textrm{ for } i \in \mathbb{Z} \\
\\
\displaystyle\bigoplus_{\substack{m \mid w \\ m \not\equiv w \Mod{2}}}\displaystyle\bigoplus_{\omega_{m,d}} \mathbb{Z}/2 & n = w + 1 + 2i\textrm{ for } i \in \mathbb{Z} \\
\end{dcases}
$
\end{center}
$\\*$where $\omega_{m,d} = \{\textrm{all cycle families of words of length $m$ and cycle length $m$ in $x_1,\hdots,x_d$}\}$ has order \newline $\frac{1}{m}\sum_{i \vert m}\mu(m/i) d^{i}$.
\end{corollary}
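$\\*$For instance, with $d = 2$ and $w = 2$, Corollary \ref{Zperiodic} gives $HP_{n}^{(2)}(A) \cong 0$ for $n$ even and $HP_{n}^{(2)}(A) \cong \mathbb{Z}/2 \oplus \mathbb{Z}/2$ for $n$ odd; the even-degree part vanishes because $\mathbb{Z}/(\tfrac{w}{m})$ is trivial for $m = w = 2$.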
\section{Acknowledgements}
The author would like to thank her advisor Ayelet Lindenstrauss for bringing this problem to her attention and for the endless amount of useful conversations on the road to solving it. The author would also like to thank Michael Mandell for many helpful discussions and to thank Michael Larsen for showing her a better way to define $\omega_{m,d}$ using the M\"obius function. Some of this work was supported under the Air Force Office of Scientific Research Grant FA9550-16-1-0212.
\section{Introduction}
LoRaWAN \cite{lorawan,bankov2016limits} is a low power wide area network technology that is gaining much popularity and is actively studied and deployed thanks to its wide coverage and low energy consumption.
LoRaWAN is used in many scenarios: environmental monitoring, e-health, smart cities, smart farming, etc.
As the popularity of LoRaWAN grows, a heterogeneous LoRaWAN infrastructure arises with multiple networks used for different applications.
A step forward would be to unite multiple LoRaWAN networks into one network, but we need solutions to satisfy different quality of service (QoS) requirements posed by various applications within a single network.
LoRaWAN uses a proprietary LoRa protocol, which provides a set of modulation and coding schemes (MCSs) with different reliability and data transmission rate.
At the MAC layer, LoRaWAN uses an open protocol \cite{lorawan}, which describes three classes of devices, the most widespread of which is Class A, an ALOHA-like channel access protocol.
With such a protocol, the network does not have many ways to influence the operation of end-devices, also called motes. The most prominent approach to satisfying the QoS requirements of motes is to control the MCSs that they use, because different LoRa MCSs are almost orthogonal to each other, and the usage of different MCSs allows reducing the interference between the motes.
Most existing works on the MCS assignment in LoRaWAN networks \cite{reynders2017power, bankov2017pimrc, cuomo2017explora, qin2017resource, zorbas2018improving} focus on providing the QoS requirements, such as the packet loss rate (PLR) or throughput, for one set of motes, and optimize a single network-wide performance indicator.
Such an approach is insufficient for heterogeneous networks that consist of different groups of motes because the network-wide optimization of a parameter does not mean that the requirements of all groups are satisfied.
At the same time, the existing works on MCS assignment for heterogeneous networks \cite{bankov2019lorawan} optimize only the average values of performance indicators, but it is wrong to assign MCSs only based on the average PLR or delay since they can be distributed in a vast range within a network.
In this paper, we consider the problem of MCS assignment in a heterogeneous network to satisfy the requirements on the maximal PLR for many groups of motes.
\textit{The contribution of this paper} is threefold.
First, we develop a new model to find the PLR distribution in a LoRaWAN network. Second, we show that in a LoRaWAN network, the PLR can vary over a wide range among the motes, so many motes may have a PLR value significantly higher than the average one.
Thus, it is important to consider the maximal PLR or the PLR percentile during the MCS assignment.
Third, we develop a new MCS assignment algorithm for heterogeneous LoRaWAN networks that assigns the MCSs to the motes in such a way that for each group, the maximal value or given percentile of PLR is below the required one.
The rest of the paper is organized as follows.
Section \ref{sec:description} contains the basics of LoRaWAN channel access. Section \ref{sec:scenario} describes the scenario and the considered problem.
In Section \ref{sec:model}, we find the PLR distribution.
In Section \ref{sec:algorithm}, we construct an algorithm of MCS allocation, which ensures the limits of maximal PLR for a group of motes.
Numerical results are provided in Section \ref{sec:results}.
Conclusion is given in Section \ref{sec:conclusion}.
\section{LoRaWAN Networks}
\label{sec:description}
A LoRaWAN network typically consists of a server, gateways (GWs), and motes.
The GWs serve as relays between the server and the motes, being connected with the server via an IP network, and with motes via LoRa wireless links.
Three classes of LoRaWAN devices use different channel access rules.
We study Class A operation as the default and the most widespread one.
The network uses several wireless channels, one of which is used as a service channel, and the remaining ones are referred to as main channels.
When a mote has a data frame for transmission, it randomly selects one of the main channels and transmits the frame in it.
By default, LoRaWAN networks work in the acknowledged mode, so having received a data frame from the mote, the server responds with two acknowledgments (ACKs).
The first ACK is sent $T_1 = \SI{1}{\s}$ after the data frame reception in the main channel where the data was transmitted.
The second ACK is sent $T_2 = T_1 + \SI{1}{\s}$ after the data frame ending in the service channel.
If the mote does not receive any ACK, it makes a retransmission.
The retransmission policy recommended by the standard \cite{lorawan_parameters} is to send the frame again after a random delay of one to three seconds.
For its transmission, a mote uses an MCS $m$ assigned by the server.
The corresponding ACKs are transmitted in the main channel with MCS $m - \Delta m$, where $\Delta m$ is a configurable parameter (0 by default).
In the service channel, the ACKs are transmitted with the slowest and the most reliable MCS.
The MCSs are determined by a parameter called spreading factor (SF), and the essential feature of LoRa PHY is that the signals with different SFs are almost orthogonal to each other, i.e., a LoRa receiver can recognize two simultaneously transmitted signals with different SFs.
The orthogonality is not perfect \cite{croce2017impact}, and if the difference between the signal powers is too high, the signals can interfere with each other.
However, according to \cite{mahmood2018scalability}, in a small network with low traffic intensity, the impact of inter-SF interference is negligible.
Another essential aspect of LoRa signals is that a LoRa device can receive a signal even if it overlaps with a sufficiently weaker signal at the same MCS.
The LoRa device vendor specifies a co-channel rejection parameter of 3 dB, which is the minimum power difference required for the successful reception.
Moreover, when a device has started to receive a signal and detects another signal, it can switch to the new signal and receive it correctly, provided that its power is sufficiently higher than the power of the previous signal \cite{haxhibeqiri2017lora}.
Such behavior is called the capture effect.
The MCS assignment algorithm is not specified in the standard, which only mentions the adaptive data rate (ADR): an MCS and transmission power assignment algorithm that should make the motes use the fastest MCSs.
However, the usage of the fastest possible MCS is not the best solution in some scenarios because of the interference between devices.
A widespread approach to MCS assignment \cite{reynders2017power, cuomo2017explora} uses the fact that the frame durations are proportional to $2^{SF}$, so the number of motes allocated to this MCS should be inversely proportional to $2^{SF}$.
With such MCS allocation, the airtime and the collision rate are almost equalized among all MCSs.
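As a minimal illustration of this proportional-allocation rule (our own sketch, not taken from the cited works), the share of motes assigned to spreading factor $s$ can be computed as $2^{-s}$ normalized over the available SFs:
\begin{verbatim}
# Sketch: airtime-equalizing allocation of motes to SFs.
# Assumes frame duration is proportional to 2^SF, so the share of motes
# on SF s is proportional to 2^(-s), normalized over the available SFs.
def airtime_equal_shares(sfs):
    weights = {s: 2.0 ** (-s) for s in sfs}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

def allocate_motes(n_motes, sfs):
    shares = airtime_equal_shares(sfs)
    # Rounding may make the counts differ from n_motes by a few motes.
    return {s: round(n_motes * share) for s, share in shares.items()}

# Example: 1000 motes over SF7..SF12.
print(allocate_motes(1000, range(7, 13)))
\end{verbatim}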
Other MCS allocation approaches involve the solution of optimization tasks.
In \cite{qin2017resource} the MCSs are allocated to maximize the minimal rate, but the network performance is calculated according to a PHY level model, which does not consider the LoRaWAN channel access rules.
In \cite{zorbas2018improving} the MCSs are assigned to maximize the number of motes for which the average success probability is higher than the given value.
This solution is based on a rather accurate mathematical model of LoRaWAN but does not take into account ACKs and retries.
The airtime equalization or optimization of a single performance indicator cannot satisfy heterogeneous requirements, and to our best knowledge, most papers about LoRaWAN do not consider the problem of MCS assignment with heterogeneous QoS requirements.
We have studied this problem in \cite{bankov2019lorawan}, where we develop a mathematical model of LoRaWAN channel access that can find the PLR and show how to use this model in order to provide the required PLR level to several groups of motes, each group demanding different PLR.
However, the developed model provides only the average PLR among all the motes in the network, while for the QoS provision the maximal PLR or the PLR percentile is much more critical, and as we show in Section \ref{sec:model}, the maximal PLR can significantly differ from the average value.
\section{Scenario and Problem Statement}
\label{sec:scenario}
We consider a LoRaWAN network with one GW and $N$ motes.
We divide the motes into $G$ groups, group $g$ having $N_g$ motes, each generating a Poisson flow of frames with rate $\lambda_g$.
The motes transmit the frames to the server via the GW, and the server acknowledges the frames with ACKs that have no payload.
As in \cite{bankov2019lorawan}, we assume that the motes have a limited buffer capacity, and if a mote generates a new frame during the transmission of another frame, the mote discards the old one after the end of the second receive window.
We also introduce a retry limit $RL$, which is a maximal number of retransmissions that a mote makes before discarding a frame.
The network operates in $F$ main channels and one downlink channel.
The motes transmit their data using the MCSs from $0$ to $M - 1$ allocated by the server.
The motes are spread uniformly around the GW within a circle with a radius $R$ that is small enough to let all the motes transmit their frames reliably with any MCS provided that no collisions occur.
We also assume that the MCSs are orthogonal.
Although this assumption is not absolutely correct, we study a small network with low network utilization, where the impact of inter-SF interference is negligible \cite{mahmood2018scalability, markkula2019simulating}.
Besides the MCSs being orthogonal, simultaneous transmissions with the same MCS do not always lead to a collision.
We consider that a frame is received correctly if its power is at least by $Q$ dB higher than the noise and the interference at the same MCS, where $Q$ is the co-channel rejection.
The motes are divided into groups based on their requirements: for the group $g$, PLR must not exceed $PLR^{QoS}_g$.
A promising way to satisfy different QoS requirements is an appropriate allocation of MCSs to the motes because the MCSs are almost orthogonal, and the mote's PLR depends on the total load of the motes that use the same MCS.
In this paper, we design an MCS allocation algorithm guaranteeing that for each mote, the PLR does not exceed the corresponding limit. For that, we first analyze the drawbacks of the model from \cite{bankov2019lorawan}.
Then we develop a new model that can find the PLR distribution and the maximal PLR of the motes that use a given MCS. Based on this model, we design an MCS allocation algorithm.
\section{PLR Distribution}
\label{sec:model}
\subsection{Old Approach}
The model from \cite{bankov2019lorawan} describes a single group of motes, and the PLR is calculated as follows:
\vspace{-0.3em}
\begin{equation}
\label{eq:plr_old}
\begin{split}
PLR_{old} &= 1 - \sum \limits_{i = 0}^{M - 1} p_i \Big[P_{i}^{S, 1} + \left(1 - P_{i}^{S, 1}\right) P_i^{G} \times\\
&\times \sum \limits_{r = 0}^{RL - 1} \left(P_i^{G}\left(1 - P_{i}^{S, Re}\right)\right)^{r} P_{i}^{S, Re}\Big].
\end{split}\vspace{-0.3em}
\end{equation}
Here $p_i$ is the probability of the considered mote using MCS $i$,
$P_{i}^{S, 1}$ is the probability of the first transmission attempt of the frame being successful (including the data frame and the acknowledgements transmission) when MCS $i$ is used,
$P_i^{S, Re}$ is the probability of the retransmission attempt being successful when MCS $i$ is used,
and $P_i^G$ is the probability of no new frames being generated during the transmission attempt since it is supposed that the old frame is discarded if the new one is generated.
The $P_{i}^{S, 1}$ is found as $P^{S,1}_i = P^{Data}_i P^{Ack}_{i}$, where $P^{Ack}_{i}$ is the probability of at least one ACK being delivered successfully at MCS $i$, and $P^{Data}_i$ is the successful data frame transmission probability found as
\begin{equation}
\label{eq:data1}
\begin{split}
P^{Data}_i =& e^{-(2 T^{Data}_{i} + P^{Data}_i T^{Ack}_{i}) \frac{p_i \lambda}{F}} +\\
+& \sum_{k = 1}^{N - 1} \frac{\left(2 \frac{p_i \lambda}{F} T^{Data}_{i}\right)^k}{k!} e^{-2 \frac{p_i \lambda}{F} T^{Data}_{i}} \mathbb{V}^{GW}_{i, k},
\end{split}
\end{equation}
where $T^{Data}_{i}$ and $T^{Ack}_{i}$ are the durations of data frame and ACK at MCS $i$, respectively,
$\lambda$ is the total traffic intensity for all motes in the network,
and $\mathbb{V}^{GW}_{i, k}$ is the probability of the mote's signal to exceed the power of $k$ interfering signals at least by $Q$.
Due to the space limitation, we do not provide the equations for the other successful transmission probabilities, but they are calculated similarly, and the clarification of all the equations can be found in an open-access paper \cite{bankov2019lorawan}.
\subsection{New Approach}
Note that in \cite{bankov2019lorawan}, $\mathbb{V}^{GW}_{i, k}$ and other probabilities that describe the relationship between the signal power of motes are calculated as the average values for all the motes in the network.
As a consequence, the $PLR_{old}$ is also an average value of PLR for all the motes in the network and does not show the variability of PLR for motes located at a different distance from the GW.
Let us find how the PLR of a mote depends on its distance $x$ from the GW.
To solve this problem, just as in \cite{bankov2019lorawan} we consider a log-distance path-loss model, with which the signal transmitted with the power $w_{tx}$ dBm arrives at the receiver with the power \vspace{-0.5em}
\begin{equation}
w_{rx}\left(d\right) = C_1 - C_2 \log_{10}\left(d\right),\vspace{-0.5em}%
\end{equation}where $C_1$ and $C_2$ are the constant values ($C_1$ contains $w_{tx}$ as a summand), and $d$ is the distance between the receiver and the transmitter.
As an example of such models, one can consider an empiric model that describes the LoRa links \cite{jorke2017urban}, or the Okumura-Hata model \cite{hata1980empirical}.
Let Mote 0 be at a distance $x$ from the GW and transmit a frame at MCS $i$ simultaneously with some Mote~1.
With uniform distribution of motes around the GW, the distance from Mote 1 to the GW has the following probability density function \vspace{-0.5em}
\begin{equation}
\label{eq:pdf_dist}
\rho\left(r_1\right) = \frac{2 r_1}{R^2}.\vspace{-0.5em}%
\end{equation}
In the studied scenario, there are three possible outcomes of the frame intersection.
The first one is that the GW successfully receives the frame from Mote 0 if its signal is at least by $Q$ dB more powerful than the signal from Mote 1.
Taking into account the distance distribution and the considered path-loss model, we find the probability of such an event as follows:
\begin{equation}
\begin{split}
&\mathbb{V}^{GW}_{i, 1}\left(x\right) = \mathbb{P} \left(w^{G}(r_1) < w^{G}(x) - Q\right) = \\
&= \mathbb{P} \left(C_1 - C_2 \lg(r_1) < C_1 - C_2 \lg(x) - Q\right) = \\
&= \mathbb{P} \left(r_1 > x 10^{\frac{Q}{C_2}}\right) = \int\limits_{min(R, x \cdot 10^{\frac{Q}{C_2}})}^R \frac{2 r_1}{R^2} d r_1 = \\
&= \begin{cases}
0, & x > R \cdot 10^{-\frac{Q}{C_2}}, \\
1 - \frac{x^2 \cdot 10^{\frac{2 Q}{C_2}}}{R^2}, & x \leq R \cdot 10^{-\frac{Q}{C_2}}.
\end{cases}
\end{split}
\end{equation}
Here $w^G(x)$ is the power of a Mote's signal at the GW if the Mote's distance from the GW is $x$.
The second possible outcome is that both the intersecting frames are damaged, which happens when the difference of the frames' powers is less than $Q$.
The probability of such an outcome is found in a similar way:
\begin{equation}
\begin{split}
&\mathbb{V}^{Both}_i\left(x\right) = \int\limits_{x \cdot 10^{-\frac{Q}{C_2}}}^{min(R, x \cdot 10^{\frac{Q}{C_2}})} \frac{2 r_1}{R^2} d r_1 = \\
=& \begin{cases}
1 - \frac{x^2 \cdot 10^{-\frac{2 Q}{C_2}}}{R^2}, & x > R \cdot 10^{-\frac{Q}{C_2}}, \\
\frac{x^2}{R^2} \left(10^{\frac{2Q}{C_2}} - 10^{-\frac{2Q}{C_2}}\right), & x \leq R \cdot 10^{-\frac{Q}{C_2}}.
\end{cases}
\end{split}
\end{equation}
The third possible outcome is that the frame of Mote 0 is not received, while the frame of Mote 1 is received successfully.
The probability of this outcome is
\begin{equation}
\mathbb{V}^{One}_i(x) = 1 - \mathbb{V}^{GW}_{i, 1}(x) - \mathbb{V}^{Both}_i(x) = \frac{x^2 \cdot 10^{-2\frac{Q}{C_2}}}{R^2}.
\end{equation}
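For concreteness, the three outcome probabilities above can be evaluated directly from their closed forms; the following Python sketch (our own illustration, not part of the derivation) does so for given $x$, $R$, $Q$, and $C_2$:
\begin{verbatim}
# Sketch: outcome probabilities for two overlapping frames at the GW,
# for motes uniform in a disc of radius R, log-distance path loss with
# slope C2, and capture margin Q dB.
def v_gw(x, R, Q, C2):          # Mote 0's frame captured by the GW
    if x > R * 10 ** (-Q / C2):
        return 0.0
    return 1.0 - (x ** 2) * 10 ** (2 * Q / C2) / R ** 2

def v_one(x, R, Q, C2):         # only Mote 1's frame is received
    return (x ** 2) * 10 ** (-2 * Q / C2) / R ** 2

def v_both(x, R, Q, C2):        # both frames are damaged
    return 1.0 - v_gw(x, R, Q, C2) - v_one(x, R, Q, C2)

# Example with the parameters used later in Section VI:
# R = 600 m, Q = 6 dB, C2 = 44.9.
print(v_gw(300, 600, 6, 44.9), v_both(300, 600, 6, 44.9),
      v_one(300, 600, 6, 44.9))
\end{verbatim}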
It is essential to differentiate between the three possibilities mentioned above, because in the first case Mote 0 makes a successful transmission, in the second case there is a collision and both Mote 0 and Mote 1 retransmit, while in the third case only Mote 0 retransmits; the probability of a successful retransmission differs between these cases.
Let us now consider that Mote 0 has made a successful data frame transmission; the GW responds with an ACK at MCS $i$, and this transmission interferes with a frame of Mote 1.
The ACK is delivered successfully if at Mote 0 the signal $w^{M}_0(x)$ from the GW is stronger than the signal $w^{M}_1$ from Mote 1 by at least $Q$ dB.
Such an event happens with probability
\begin{equation}
\begin{split}
&\mathbb{V}^{Mote}_{i, 1}(x) = \mathbb{P} \left(w^{M}_1 < w^{M}_0(x) - Q\right)= \\
&= \mathbb{P} \left(d_1 > x 10^{\frac{Q}{C_2}}\right) = \iint\limits_{\mathcal{A}(x)} \frac{2 r_1}{R^2} \frac{d\phi} {2 \pi} d r_1 =\\
=& \int\limits_{0}^{R}\frac{r_1}{R^2} \left(2\arccos\left(\frac{x^2 + r_1^2 - x^2 10^{\frac{2Q}{C_2}}}{2 x r_1}\right) - \pi\right) d r_1,
\end{split}
\end{equation}
where $d_1$ is the distance from Mote 1 to Mote 0, and $\mathcal{A}(x)$ is the domain of such polar coordinates $(r_1, \phi)$ of Mote 1 that the power condition holds:
\[
\resizebox{\linewidth}{!}{
$\mathcal{A}(x) = \left\{r_1, \phi: 0 < r_1 < R \wedge \cos \phi \leq \frac{x^2 + r_1^2 - x^2 10^{\frac{2Q}{C_2}}}{2 x r_1}\right\}$.
}
\]
The probabilities $\mathbb{V}^{*}_{*}(x)$ for a given distance $x$ of the transmitting mote describe the possible outcomes of its transmission in the presence of an interfering frame.
The values $P_{i}^{S, 1}$ and $P_i^{S, Re}$ in the equation \eqref{eq:plr_old} contain similar quantities, but in \cite{bankov2019lorawan} these quantities are found as an average over all the possible Mote 0 locations within the circle and do not depend on $x$.
We propose to replace these values with $\mathbb{V}^{*}_{*}(x)$ because they contain the dependency on $x$.
We also take into account multiple groups of devices which generate different network load.
Let $\vec{a}_i = \left[a_{i, 0}, a_{i, 1}, ..., a_{i, G - 1}\right]$ be the allocation vector which contains the numbers of motes from different groups assigned to the MCS $i$.
They generate $l_i = \sum_g a_{i, g} \lambda_g$ frames per second in total.
With such a new notation, we change the $P^{Data}_i$ as follows
\begin{equation}
\label{eq:data_new}
\begin{split}
P^{Data}_{i, g}(x) =& e^{-(2 T^{Data}_{i} + P^{Data}_{i, g}(x) T^{Ack}_{i}) \frac{l_i - \lambda_g}{F}} +\\
+& 2 \frac{l_i - \lambda_g}{F} T^{Data}_{i} e^{-2 \frac{l_i - \lambda_g}{F} T^{Data}_{i}} \mathbb{V}^{GW}_{i, 1}(x).
\end{split}
\end{equation}
Here we added the dependency on the mote distance $x$ and the group number $g$, and changed the network load at MCS $i$ from $p_i \lambda$ to $l_i - \lambda_g$, i.e., the traffic generation rate taken for all the motes except the considered one.
Note that we leave only two summands from \eqref{eq:data1} and do not consider the interference between more than two motes.
The same simplification has been used in \cite{bankov2019lorawan}, where it is shown that it does not significantly affect the accuracy of the model.
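Note that \eqref{eq:data_new} defines $P^{Data}_{i, g}(x)$ implicitly, as it appears on both sides. A simple way to evaluate it numerically is fixed-point iteration; the sketch below is an illustration only, with hypothetical parameter values, and assumes $\mathbb{V}^{GW}_{i, 1}(x)$ has already been computed:
\begin{verbatim}
import math

# Sketch: solve the implicit equation for P^Data by fixed-point iteration.
# t_data, t_ack: frame/ACK durations at MCS i [s];
# load: (l_i - lambda_g) / F [frames/s per channel];
# v_gw: capture probability at the mote's distance x.
def p_data(t_data, t_ack, load, v_gw, iters=50):
    p = 1.0  # initial guess
    for _ in range(iters):
        p = (math.exp(-(2 * t_data + p * t_ack) * load)
             + 2 * load * t_data * math.exp(-2 * load * t_data) * v_gw)
    return p

# Hypothetical example values (not taken from the paper).
print(p_data(t_data=0.1, t_ack=0.05, load=0.01, v_gw=0.8))
\end{verbatim}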
The PLR for a mote from group $g$ that transmits with MCS $i$ and is located at a distance $x$ from the GW is found as
\begin{equation}
\label{eq:plr}
\begin{split}
PLR_{i, g}(x) &= 1 - P_{i, g}^{S, 1}(x) - \left(1 - P_{i, g}^{S, 1}(x)\right) P_{i, g}^{G} \times \\
\times & \sum \limits_{r = 0}^{RL - 1} \left[P_{i, g}^{G}\left(1 - P_{i, g}^{S, Re}(x)\right)\right]^{r} P_{i, g}^{S, Re}(x),
\end{split}
\end{equation}
where $P_{i, g}^{S, 1}$ and $P_{i, g}^{S, Re}$ now depend on $x$, because the probabilities of power relations now depend on $x$, and all the probabilities in the equation depend on the group.
The meaning of this equation is that the mote loses the frame if it has a transmission failure after the first transmission attempt, and during the subsequent retransmissions, it either discards the frame due to a new arriving frame or makes $RL$ unsuccessful retransmission attempts.
With $\rho(r)$ and $PLR_{i, g}(x)$ we find the PLR distribution:
\begin{equation}
\mathbb{P}(PLR_{i, g} < y) = \int\limits_{0}^{R} \mathbb{I}\left(PLR_{i, g}(r) < y\right)\rho(r) d r,
\end{equation}
where $\mathbb{I}(x)$ equals 1 if $x$ is true and 0 otherwise.
In Section~\ref{sec:results}, we show that the PLR distribution is concentrated around the maximal PLR value, and it is important to consider the maximal value or the percentile of the PLR while assigning the MCSs to the motes.
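As an illustration of how this distribution can be evaluated numerically (a sketch that assumes $PLR_{i, g}(x)$ is available as a callable function built from the quantities above), the indicator can be integrated against the density $\rho(r)$ with a simple Riemann sum:
\begin{verbatim}
# Sketch: CDF of PLR over a uniform disc of radius R, given plr(x).
def plr_cdf(plr, R, y, steps=10000):
    dr = R / steps
    total = 0.0
    for k in range(steps):
        r = (k + 0.5) * dr
        if plr(r) < y:
            total += 2 * r / R ** 2 * dr   # density rho(r) = 2r / R^2
    return total

# Hypothetical PLR profile for illustration only: grows with distance
# and saturates near the maximal value, cf. the dependency discussed
# in the results section.
example_plr = lambda r: min(1e-3, 1e-3 * (r / 400.0) ** 2)
print(plr_cdf(example_plr, R=600.0, y=5e-4))
\end{verbatim}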
\section{MCS Allocation Algorithm}
\label{sec:algorithm}
Let us now design a heuristic MCS allocation algorithm that uses a greedy approach to solve the problem stated in Section \ref{sec:scenario}.
Our algorithm uses the developed model to find the maximal PLR for motes that generate a given traffic intensity at MCS $i$:
\begin{equation}
\pi_{i, g}(\lambda, \lambda_g, l_i) \triangleq \max_{x \in [0, R]} PLR_{i, g}(x),
\end{equation}
where we explicitly indicate that this value depends on the total network load, the per-mote load of group $g$, and the total load of the motes assigned to MCS $i$.
If motes from group $g$ are allocated MCS $i$, then the algorithm should guarantee that $\pi_{i, g}(\lambda, \lambda_g, l_i)$ is less than $PLR^{QoS}_g$.
Moreover, if motes from several groups use the same MCS, this inequality should hold for all the groups.
To solve this problem, we define an auxiliary value $\nu^g_i$ as the maximal network load from group $g$ motes which can use MCS $i$ to yield the PLR less than $PLR^{QoS}_g$, provided that other motes do not use this MCS:
\begin{equation}
\nu^g_i \triangleq \max\left\{l: \pi_{i, g}\left(\lambda, \lambda_g, l\right) \leq PLR^{QoS}_g\right\}.
\end{equation}
This value can be interpreted as the network capacity for a given group at a given MCS.
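Since $\pi_{i, g}$ grows with the load $l$, the capacity $\nu^g_i$ can be computed numerically, for example by bisection. A minimal sketch under this monotonicity assumption, with $\pi_{i, g}$ abstracted as a callable, is:
\begin{verbatim}
import math

# Sketch: capacity nu^g_i = max load l such that pi(l) <= plr_qos.
# pi: callable returning the maximal PLR at this MCS for total load l.
def capacity(pi, plr_qos, l_max=10.0, tol=1e-6):
    lo, hi = 0.0, l_max
    if pi(hi) <= plr_qos:       # requirement met even at the search bound
        return hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if pi(mid) <= plr_qos:
            lo = mid
        else:
            hi = mid
    return lo

# Hypothetical monotone PLR model, for illustration only.
print(capacity(lambda l: 1.0 - math.exp(-l), 0.01))
\end{verbatim}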
With this value, we construct the following MCS allocation algorithm.
Its scheme is shown in Fig. \ref{fig:scheme}, where ``$\leftarrow$'' is the assignment operator.
Firstly, the server sorts the groups by $\nu^g_i$ in the ascending order.
Then it starts with MCS 0 and group~0: the slowest MCS with the lowest capacity and the group with the strictest PLR requirement.
The server allocates MCS 0 to the minimum between $N_0$ and $\lfloor\frac{\nu^0_0}{\lambda_0}\rfloor$ motes from group 0: it cannot allocate MCS 0 to more motes, because either the PLR requirement would not be satisfied, or there are no more motes from group 0 without an assigned MCS.
If $\lfloor\frac{\nu^0_0}{\lambda_0}\rfloor$ is less than the number $N_0$ of motes in group 0, the algorithm transits to MCS 1 and assigns MCS 1 to the minimum between $N_0 - \lfloor\frac{\nu^0_0}{\lambda_0}\rfloor$ and $\lfloor\frac{\nu^0_1}{\lambda_0}\rfloor$ motes from group 0.
Such a procedure is done until all the motes from group 0 have an assigned MCS.
Let the procedure stop at MCS $m$, which was allocated to $n$ motes from group 0.
The server transits to group 1 and tries to allocate MCS $m$ to motes from this group.
However, it should guarantee that the PLR requirement is fulfilled both for group 0 and group 1.
So the server assigns MCS $m$ to a number of motes from group 1, equal to the minimum between $\lfloor\frac{\nu^1_m}{\lambda_1}\rfloor$ and $\nu^0_m - n$.
If not all the motes from group 1 have an MCS, then the server transits to the next MCS and continues the previously described procedure.
In the same way, the server assigns MCSs to motes from the remaining groups.
If the algorithm exhausts the capacity of all MCSs while some motes still have no assigned MCS, it reports a failure.
Otherwise, the algorithm is completed successfully.
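A compact Python sketch of our reading of this greedy procedure is given below (illustration only; the authoritative description is the scheme in Fig. \ref{fig:scheme}). It assumes that the groups are already sorted by capacity in the ascending order, that all motes of a group have the same rate, and that the capacities $\nu^g_i$ have been precomputed:
\begin{verbatim}
import math

# nu[i][g]: capacity (max total load, frames/s) of MCS i for group g;
# lam[g]: per-mote rate of group g; n[g]: number of motes in group g.
def allocate(nu, lam, n):
    M, G = len(nu), len(lam)
    assign = [[0] * G for _ in range(M)]   # motes of group g on MCS i
    used = [0.0] * M                       # load already placed on MCS i
    strictest = [None] * M                 # strictest group using MCS i
    i = 0
    for g in range(G):
        left = n[g]
        while left > 0 and i < M:
            cap_group = strictest[i] if strictest[i] is not None else g
            room = min(nu[i][cap_group], nu[i][g]) - used[i]
            k = min(left, int(math.floor(room / lam[g] + 1e-9)))
            if k > 0:
                assign[i][g] = k
                used[i] += k * lam[g]
                left -= k
                if strictest[i] is None:
                    strictest[i] = g
            if left > 0:
                i += 1                     # this MCS is exhausted
    ok = all(sum(row[g] for row in assign) == n[g] for g in range(G))
    return assign if ok else None          # None signals a failure
\end{verbatim}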
\begin{figure}[tb]
\center{\includegraphics[width=0.8\linewidth]{scheme.pdf}}
\vspace{-1em}
\caption{Scheme of the MCS Allocation Algorithm.}
\label{fig:scheme}
\vspace{-1.5em}
\end{figure}
\let\times\cdot
\section{Numerical Results}
\label{sec:results}
\subsection{Packet Loss Rate}
Let us now study the distribution of PLR within the circle.
To illustrate the inequality of PLR for motes at a different distance from the GW, we consider a scenario with $1000$ motes that generate a total flow of $0.5$ frames per second, use only MCS $5$ and are spread within a circle with radius \SI{600}{\m}.
The co-channel rejection parameter $Q$ equals $6$ dB, the retry limit $RL$ is $7$, and the path-loss parameters $C_1$ and $C_2$ equal $-133.7$ dBm, and $44.9$ dB, respectively.
Fig. \ref{fig:plr_dist} shows the dependency of PLR on the distance of a mote to the GW obtained with \eqref{eq:plr} and with simulation.
We note that our model provides very accurate results.
As one can see, the capture effect creates inequality in PLR: motes located close to the GW have lower PLR because they have a high probability of delivering their frames regardless of the interference.
The PLR grows with $x$, and at $x = R\cdot 10^{-\frac{Q}{C_2}}$ reaches a peak, after which the PLR slightly decreases.
The motes which are far from the GW have almost the same PLR, close to its maximal value.
The peak location depends on the co-channel rejection parameter $Q$, while without capture effect, i.e., with $Q \rightarrow \infty$, there is no peak at all, and the PLR is constant for all motes.
It is important to note that the average PLR, calculated according to \eqref{eq:plr_old}, differs from the maximal PLR by almost 30\%.
\begin{figure}[tb]
\vspace{-1em}
\center{\includegraphics[width=0.85\linewidth]{plr_dist.pdf}}
\vspace{-1em}
\caption{Dependency of PLR on the mote's distance from the GW.}
\label{fig:plr_dist}
\vspace{-1em}
\end{figure}
Let us now consider the PLR distribution.
The motes are distributed uniformly within the circle, and their distance from the GW has the pdf $\rho(r)$ given in \eqref{eq:pdf_dist}, i.e., the number of motes at a distance $x$ grows linearly with the distance.
The linear growth of both the mote density and the PLR with the distance results in almost 50\% of the motes having the maximal PLR, as shown in Fig. \ref{fig:plr_cdf}, so a majority of the motes have a PLR that is almost 30\% higher than the average value.
\begin{figure}[tb]
\center{\includegraphics[width=0.85\linewidth]{plr_cdf.pdf}}
\vspace{-1em}
\caption{Cumulative distribution function of PLR.}
\label{fig:plr_cdf}
\vspace{-1em}
\end{figure}
Thus, the MCS allocation for motes with PLR requirements should be made taking into account the maximal PLR, not the average value.
\subsection{Algorithm Operation}
To illustrate the algorithm operation, we consider a network with three groups of motes:
\begin{itemize}
\item Group $0$ consists of 10 motes, each generating 0.0001 frames per second and requiring PLR less than $10^{-7}$,
\item Group $1$ consists of 100 motes, each generating 0.0001 frames per second and requiring PLR less than $10^{-6}$,
\item Group $2$ consists of 1000 motes, each generating 0.0001 frames per second and requiring PLR less than $10^{-5}$.
\end{itemize}
The PLR requirements of the groups define their $\nu^g_i$ capacity values for all MCSs, shown in Table~\ref{tab:capacities}.
Given these values, the server uses the algorithm to find an MCS allocation that satisfies the PLR requirements.
The server firstly considers Group~0 and assigns MCSs 0, 1, and 2 to the maximal possible number of motes, i.e., to one, two, and four motes, respectively.
The remaining three motes from Group~0 use MCS~3, but the capacity of this MCS is not exhausted, so the server transits to Group~1 and assigns MCS~3 to $7 - 3 = 4$ additional motes.
At this step, the number of motes which can use MCS~3 is limited by the capacity $\nu^0_3$ of Group~0, so even though the capacity $\nu^1_3$ of Group~1 is higher, the server cannot assign MCS~3 to more motes from Group~1, because otherwise the PLR requirement for motes from Group~0 would not be satisfied.
MCS~4 has enough capacity for the remaining 96 motes from Group~1, and it also has the capacity for $132 - 96 = 36$ motes from Group~2.
The remaining motes from Group~2 use MCS~5.
The resulting MCS assignment is shown in Table \ref{tab:assignment}.
As one can see, with such an MCS assignment, for every MCS the maximal PLR of the motes using it is below the strictest $PLR^{QoS}$ requirement among these motes.
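For reference, feeding the capacities from Table \ref{tab:capacities} into the greedy-allocation sketch given in Section \ref{sec:algorithm} should reproduce this assignment (hypothetical usage example):
\begin{verbatim}
nu = [[0.0001, 0.0006, 0.006],
      [0.0002, 0.0014, 0.014],
      [0.0004, 0.0034, 0.034],
      [0.0007, 0.0069, 0.069],
      [0.0014, 0.0132, 0.133],
      [0.0026, 0.0255, 0.263]]
lam = [0.0001, 0.0001, 0.0001]
n = [10, 100, 1000]
print(allocate(nu, lam, n))
# [[1, 0, 0], [2, 0, 0], [4, 0, 0], [3, 4, 0], [0, 96, 36], [0, 0, 964]]
\end{verbatim}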
\begin{table}[tb]
\vspace{-1em}
\begin{center}
\caption{\label{tab:capacities} Network capacity for various mote groups and MCSs.}
\vspace{-1em}
\begin{tabular}{|c|ccc|} \hline
& \multicolumn{3}{c|}{\textbf{Capacity, s$^{-1}$}} \\ \hline
\textbf{MCS} & \textbf{Group 0} & \textbf{Group 1} & \textbf{Group 2} \\ \hline
0 & 0.0001 & 0.0006 & 0.006 \\
1 & 0.0002 & 0.0014 & 0.014 \\
2 & 0.0004 & 0.0034 & 0.034 \\
3 & 0.0007 & 0.0069 & 0.069 \\
4 & 0.0014 & 0.0132 & 0.133 \\
5 & 0.0026 & 0.0255 & 0.263 \\ \hline
\end{tabular}
\end{center}
\vspace{-1em}
\end{table}
\begin{table}[tb]
\begin{center}
\vspace{-1em}
\caption{\label{tab:assignment} MCS assignment that satisfies the QoS requirements.}
\vspace{-1em}
\begin{tabular}{|c|ccc|c|} \hline
\textbf{MCS} & \textbf{Group 0} & \textbf{Group 1} & \textbf{Group 2} & \textbf{Max. PLR} \\ \hline
0 & 1 & 0 & 0 & 0 \\
1 & 2 & 0 & 0 & $7.2 \times 10^{-8}$ \\
2 & 4 & 0 & 0 & $8.9 \times 10^{-8}$ \\
3 & 3 & 4 & 0 & $8.8 \times 10^{-8}$ \\
4 & 0 & 96 & 36 & $9.9 \times 10^{-7}$ \\
5 & 0 & 0 & 964 & $3.8 \times 10^{-6}$ \\ \hline
\textbf{Req. PLR} & $10^{-7}$ & $10^{-6}$ & $10^{-5}$ & \\
\hline
{\textbf{Max. PLR}} & $8.9 \times 10^{-8}$ & $9.9 \times 10^{-7}$ & $3.8 \times 10^{-6}$ & \\
\hline
\end{tabular}
\end{center}
\vspace{-2em}
\end{table}
If we change the $\pi_{i, g}$ in the algorithm to the average PLR given in \eqref{eq:plr_old}, the algorithm will assign MCSs as shown in Table~\ref{tab:assignment_average}.
Although the average PLR does not exceed $PLR^{QoS}_g$, for some motes the PLR can still exceed $PLR^{QoS}_g$: e.g., for MCS 2 the maximal PLR equals $1.2 \times 10^{-7}$, while devices from Group 0 require PLR less than $10^{-7}$.
The PLR requirement is also not met for the Group 2 devices using MCS 4.
Thus we confirm that the MCS allocation should be made taking into account the maximal PLR, not the average one.
\begin{table}[tb]\vspace{-1em}
\begin{center}
\caption{\label{tab:assignment_average} MCS assignment that considers only the average PLR.}
\vspace{-1em}
\begin{tabular}{|c|ccc|c|} \hline
\textbf{MCS} & \textbf{Group 0} & \textbf{Group 1} & \textbf{Group 2} & \textbf{Max. PLR} \\ \hline
0 & 1 & 0 & 0 & 0 \\
1 & 2 & 0 & 0 & $7.2 \times 10^{-8}$ \\
2 & 5 & 0 & 0 & $1.2 \times 10^{-7}$ \\
3 & 2 & 7 & 0 & $1.2 \times 10^{-7}$ \\
4 & 0 & 93 & 76 & $1.3 \times 10^{-6}$ \\
5 & 0 & 0 & 924 & $3.6 \times 10^{-6}$ \\ \hline
\textbf{Req. PLR} \ & $10^{-7}$ & $10^{-6}$ & $10^{-5}$ &\\
\hline
\textbf{Max. PLR}\ & $1.2 \times 10^{-7}$ & $1.3 \times 10^{-6}$ & $3.6 \times 10^{-6}$ &\\
\hline
\end{tabular}
\vspace{-1em}
\end{center}
\vspace{-1em}
\end{table}
Let us also show an example when the algorithm cannot allocate the MCSs to satisfy the PLR requirements for all motes.
Consider the same three groups, but Group 0 has 20 motes, each generating 0.0001 frames per second.
In such a case, the total capacity of MCS 0, ..., MCS 3 is not sufficient, and the server has to allocate MCS 4 to the remaining six motes from Group 0.
When the server further considers Group 1, the remaining capacity of MCS 4 is only $14 - 6 = 8$ motes, so the server assigns MCS 4 to these motes, while MCS 5 is allocated to the remaining 94 motes from Group 1.
The remaining capacity of MCS 5 is $255 - 94 = 161$, which is not enough for Group 2, so the algorithm signals an allocation failure.
The allocation failure indicates that admission control is needed to limit the number of motes in the network, or that more GWs are needed to extend the network capacity.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we have studied the problem of MCS assignment in a LoRaWAN network, where devices have different QoS requirements, e.g., set different limits on the PLR.
We have developed a new model to find the PLR distribution and have shown that in a LoRaWAN network the PLR of a majority of motes can exceed the average value by almost 30\%.
Such a PLR distribution means that the provision of the required PLR should be done, taking into account the maximal PLR or the PLR percentile, while the previous approaches considering only the average PLR are erroneous.
We have designed an MCS allocation algorithm which considers several groups of motes with different PLR requirements and traffic intensities and assigns MCSs to them in such a way that the maximal PLR or the PLR percentile does not exceed the given value for all the motes.
\bibliographystyle{IEEEtran}
\section{A detailed example}
Here we include some equations and theorem-like environments to show
how these are labeled in a supplement and can be referenced from the
main text.
Consider the following equation:
\begin{equation}
\label{eq:suppa}
a^2 + b^2 = c^2.
\end{equation}
You can also reference equations such as \cref{eq:matrices,eq:bb}
from the main article in this supplement.
\lipsum[100-101]
\begin{theorem}
An example theorem.
\end{theorem}
\lipsum[102]
\begin{lemma}
An example lemma.
\end{lemma}
\lipsum[103-105]
Here is an example citation: \cite{KoMa14}.
\section[Proof of Thm]{Proof of \cref{thm:bigthm}}
\label{sec:proof}
\lipsum[106-112]
\section{Additional experimental results}
\Cref{tab:foo} shows additional
supporting evidence.
\begin{table}[htbp]
{\footnotesize
\caption{Example table} \label{tab:foo}
\begin{center}
\begin{tabular}{|c|c|c|} \hline
Species & \bf Mean & \bf Std.~Dev. \\ \hline
1 & 3.4 & 1.2 \\
2 & 5.4 & 0.6 \\ \hline
\end{tabular}
\end{center}
}
\end{table}
\bibliographystyle{siamplain}
\section{Introduction}
\label{sec1a}\vspace{-4mm}
Stochastic volatility is one of the main concepts widely used in mathematical finance to deal with the endemic time-varying volatility and co-dependence found in financial markets.
Since their invention, stochastic volatility models have been widely used to evaluate derivative securities such as options; their characteristic feature is that the variance of a stochastic process is itself randomly distributed. Various extensions of stochastic volatility models for different purposes have been proposed in recent years, along with the fast growth of quantitative financial modeling in the past decade.
However, sufficient theoretical support in terms of the existence and uniqueness of a (strong) solution of the proposed models is often lacking.
To cope with that, in this paper, we consider two classes of generalized stochastic volatility models, establish their well-posedness, and conduct stability analysis. The first class is the multi-dimensional path-dependent system \eqref{eqn:OriginalsystemofFBSDEs}, where a $d_2$-dimensional path-dependent $Y$ process is driven by a $d_1$-dimensional path-dependent $X$ process. The second class is a generalized one-dimensional stochastic volatility model with H\"older continuous coefficients \eqref{eqn:holder}. What greatly differentiates these two classes of models is that both the $X$ and $Y$ processes have their own subdifferential operators, a special case of which is the general reflection operator for multi-sided barriers; because of these operators, the models under investigation are called stochastic variational inequalities (SVI).
For illustrative purpose, we consider a simplified one-dimensional path-dependent version of \eqref{eqn:OriginalsystemofFBSDEs} without control as follows
\begin{equation}\label{sec:OriginalsystemofFBSDEs1d}
\left\{\begin{array}{lll}
X_t\in x_{0}+\int_{0}^tb(s,X(s))ds+\int_{0}^t\sigma(s,X(s))d\widehat{W}_s-\int_{0}^t\partial\psi_1(X_s)ds, \\
\\
Y_t\in y_{0}+\int_{0}^t \alpha(s,X(s),Y(s))ds+\int_{0}^t\beta(s,X(s),Y(s))dB_s-\int_{0}^t\partial\psi_2(Y_s)ds,
\end{array}\right.
\end{equation}
where the path $X(t):=X_{t\wedge\cdot}$ up to time $t$, $\widehat{W}:=\sqrt{1-\rho^2}W+\rho B$ for $W$ and $B$ being two independent one-dimensional Brownian motions with $d\langle\widehat{W},B\rangle_t=\rho dt$ for $|\rho|\leq 1$. Clearly, \eqref{sec:OriginalsystemofFBSDEs1d} covers all the classical types of
stochastic volatility models and path-dependent models; it also covers the Heston-type stochastic path-dependent volatility model proposed in \cite{cozma2018strong} (as well as the local maximum stochastic volatility model proposed in \cite{bain2019calibration}), whose well-posedness is unknown,
\begin{align*}
dS_t&=\mu(t,S_t,M_t)S_tdt+\sqrt{V_t}\sigma(t,S_t,M_t)S_tdW_t,\\
dV_t&=\kappa(\theta-V_t)dt+\xi\sqrt{V_t}dW_t^{V},
\end{align*}
where $\sigma$ is a local volatility function depending on the running maximum $M_t:=\sup_{0\leq u\leq t}S_u$, and $d\langle W ,W^{V}\rangle_t=\rho dt$ for $|\rho|\leq 1$.
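For intuition, a minimal Euler--Maruyama sketch of this path-dependent model is given below (our own illustration with hypothetical parameter values and a hypothetical choice of $\sigma(t,S,M)$; it truncates $V_t$ at zero for simplicity and makes no claim about well-posedness or convergence):
\begin{verbatim}
import math, random

# Sketch: Euler-Maruyama simulation of a Heston-type model whose local
# volatility depends on the running maximum M_t = sup_{u <= t} S_u.
def simulate(S0=100.0, V0=0.04, T=1.0, steps=1000, rho=-0.5,
             kappa=2.0, theta=0.04, xi=0.3, mu=0.02):
    dt = T / steps
    S, V, M = S0, V0, S0
    for _ in range(steps):
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        dW = math.sqrt(dt) * z1
        dWV = math.sqrt(dt) * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
        sigma_loc = 1.0 + 0.1 * (M - S) / M   # hypothetical sigma(t, S, M)
        S += mu * S * dt + math.sqrt(max(V, 0.0)) * sigma_loc * S * dW
        V += kappa * (theta - V) * dt + xi * math.sqrt(max(V, 0.0)) * dWV
        M = max(M, S)
    return S, V, M

print(simulate())
\end{verbatim}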
Reflection factors on stochastic differential equations (SDEs) have wide application and a long history in financial mathematics, with great contributions from the pioneering works of N. El Karoui since the $1970$s; see \cite{elkaroui1975processus}.
For economic dynamics, reflected SDEs were used for the target zone models of the currency exchange rate (see, for example, \cite{krugman1991target, bertola1993stochastic}). In a regulated financial market, government regulations constrain the spot foreign exchange (FX) rate processes, the domestic interest rate processes, and the prices of goods or services (for instance, grain, water, gas, electricity supply and other important materials or services for a country), because of which reflected SDEs can be applied realistically and appropriately (see, for example, \cite{bo2011conditional,bo2011some,bo2013conditional}). \eqref{sec:OriginalsystemofFBSDEs1d}
not only extends all the classical reflected SDEs to handle multi-sided barriers, but also covers new models whose well-posedness is unknown, such as the generalized skew stochastic local volatility model proposed in \cite{ding2020markov} (as well as the reflected stochastic volatility model proposed therein), by taking the special form $\psi_1(X_t)=(2p-1)\mathbbm{1}_{\{X_t\geq a\}}$,
\begin{align*}
dS_t&=\gamma(S_t,X_t)dt+m(X_t)\gamma(S_t)dW_t^{(1)},\\
dX_t&=\mu(X_t)dt+\sigma(X_t)dW_t^{(2)}+(2p-1)dL_t^X(a),
\end{align*}
where $d\langle W^{(1)},W^{(2)}\rangle_t=\rho dt$ for $|\rho|\leq 1$, and $L_t^X(a)$ is the symmetric local time of $X$ at the point $a$, and $p=0$ or $1$ for the $X$ process being the reflected diffusion at the value $a$.
Following the new trend in financial mathematics, a control process, belonging to the set of predictable processes and taking values in a compact separable metric space, is embedded in both the drift function and the diffusion function of the $Y$ process in the two classes of models under investigation. This control process equips the proposed models with applicability to stochastic control problems, such as the super-replication valuation problem using the uncertain volatility models with stochastic bounds in \cite{fouque2018uncertain}.
We further follow \cite{fouque2018uncertain} in conducting the stability analysis of the SVI systems \eqref{eqn:OriginalsystemofFBSDEs} and \eqref{eqn:holder} by perturbing the systems with a small positive parameter $\epsilon$. Asymptotic analyses are conducted on the perturbed systems to explore their limiting behaviors as $\epsilon$ goes to zero. In financial mathematics, stochastic volatility models with a small parameter are a typical setup (see, for example, \cite{fouque2000derivatives,fouque2011multiscale}), in which the small parameter may act on the driving volatility process (the $X$ process in the current setting), resulting in slowly varying effects.
Well-posedness for the two classes of models has to be established by different methods due to their very different setups. To prove the well-posedness of the multidimensional SVI system \eqref{eqn:OriginalsystemofFBSDEs}, we use an Euler scheme over an arbitrary horizon $T$. To handle the path-dependent effects, we extensively apply the functional It\^o formula introduced by \cite{dupire2019functional}.
When it comes to the one-dimensional model with H\"older continuous coefficients \eqref{eqn:holder}, we establish its well-posedness by means of the Moreau-Yosida regularization approximation method, which was used in \cite{asiminoaei1997approximation} with Lipschitz continuous coefficients. Analogous techniques can be used to handle other problems; see, for example, \cite{ren2016approximate} on approximate continuity and the support of reflected SDEs, \cite{ren2013optimal} on reflected SDEs with jumps and the associated optimal control problems, and \cite{wu2018limit} on limit theorems and the support of SDEs with oblique reflections on nonsmooth domains.
The rest of the paper is organized as follows.
In Section \ref{sec:general_model}, we analyze the multi-dimensional path-dependent SVI system \eqref{eqn:OriginalsystemofFBSDEs}, where the well-posedness of the $X$ and $Y$ processes is established in Section \ref{sec:general_model_wellposedness_X}
and Section \ref{sec:general_model_wellposedness_Y}, respectively. Next, we consider a perturbed version of \eqref{eqn:OriginalsystemofFBSDEs} with a small positive parameter $\epsilon$ and show that the perturbed $X^\epsilon$ and $Y^\epsilon$ processes converge to the $X$ and $Y$ processes in Sections \ref{sec:general_model_asymptotic_X} and \ref{sec:general_model_asymptotic_Y}, respectively.
In Section \ref{sec:holder}, we investigate the one-dimensional model with H\"older continuous coefficients \eqref{eqn:holder}, whose well-posedness is established in Section \ref{sec:holder_wellposedness} and whose stability analysis is conducted in Section \ref{sec:holder_asymptotic}. In the sequel, $C$ stands for a constant which may change from line to line.
\section{Multi-dimensional Path-dependent SVI System}\label{sec:general_model}
In this section, our investigation is based on the following general multi-dimensional path-dependent system of stochastic variational inequalities (SVI):
\begin{equation}\label{eqn:OriginalsystemofFBSDEs}
\left\{\begin{array}{lll}
X_t\in & x_{0}+\int_{0}^tb(s,X(s))ds+\int_{0}^t\sigma_1(s,X(s))dW_s+\int_{0}^t\sigma_2(s, X(s)) dB_s\\
&-\int_{0}^t\partial\psi_1(X_s)ds, \\
\\
Y_t\in & y_{0}+\int_{0}^t \alpha(s,X(s),Y(s), q_s)ds+\int_{0}^t\beta(s,X(s),Y(s), q_s)dB_s\\
&-\int_{0}^t\partial\psi_2(Y_s)ds.
\end{array}\right.
\end{equation}
Here, $X_t\in \mathbb R^{d_1}$ denotes the status of $X$ at time $t\in[0,T]$; $b$, $\sigma_1$, and $\sigma_2$ are measurable functions on $\mathbb R^+\times{\mathcal C}(\mathbb R^+;\mathbb R^{d_1})$ depending on the path $X(t):=X_{t\wedge\cdot}$ up to time $t$, valued in $\mathbb R^{d_1}$, $\mathbb R^{{d_1}\times d_W}$, and $\mathbb R^{{d_1}\times d_B}$, respectively; $W$ and $B$ are two independent $d_W$-dimensional and $d_B$-dimensional standard Brownian motions on a complete filtered probability space $(\Omega,\mathcal F,\{\mathcal F_t;t\geq0\},\mathbb P)$.
We call $\nu:=(\Omega,\mathcal F,\{\mathcal F_t;t\geq0\},\mathbb P, W, B)$ a reference system, based on which, denote $\mathcal{A}_{\nu}$ as the set of admissible controls that is the set of $(\mathcal F_t)$-predictable and $\mathbb U$-valued processes. $Y_t\in \mathbb R^{d_2}$ denotes the status of $Y$ at time $t\in[0,T]$; $q$ is the control process belonging to the set of predictable processes and taking values in a compact separable metric space $\mathbb U$; $\alpha$ and $\beta$ are measurable functions on $\mathbb R^+\times{\mathcal C}(\mathbb R^+;\mathbb R^{d_1})\times{\mathcal C}(\mathbb R^+;\mathbb R^{d_2}) \times\mathbb U$, valued in $\mathbb R^{d_2}$ and $\mathbb R^{{d_2}\times d_B}$ respectively, depending on both paths $X(t)$ and $Y(t)$ as well as the control process $q$.
For $i=1,2$, $\psi_i$ is a proper, convex, and lower-semicontinuous function on $\mathbb R^{d_i}$, with its effective domain
$$
D_i:=\{x\in\mathbb R^{d_i}: \psi_i(x)<\infty\},
$$
and its subdifferential operator
$$\partial\psi_i(x):=\{z\in\mathbb R^{d_i}; \langle x'-x,z\rangle\leq\psi_i(x')-\psi_i(x), \forall x'\in\mathbb R^{d_i}\},$$
where $\langle\cdot,\cdot\rangle$ denotes the inner product.
Theories on subdifferential operators (see, \cite{rockafellar1970maximal}) indicate that $\partial\psi_i(x)$ is closed and convex for every $x\in\mathbb R^{d_i}$, satisfying that
$$
\langle x-x',z-z'\rangle\geq0
$$
for any $x, ~x'\in{\mathbb R}^{d_i}$, $z\in\partial\psi_i(x)$, and $z'\in\partial\psi_i(x')$;
$\partial\psi_i$ is maximal monotone, that is, if $x, ~z\in{\mathbb R}^{d_i}$ satisfying
$$\langle x-x',z-z'\rangle\geq0
$$
for any $x'\in\mathbb R^{d_i}$ and $z'\in\partial\psi_i(x')$, then $z\in\partial\psi_i(x)$.
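For instance (a standard example recalled here for concreteness), if $\psi_i$ is the convex indicator function of an interval $[l,u]$ with $l<0<u$, that is, $\psi_i(x)=0$ for $x\in[l,u]$ and $\psi_i(x)=+\infty$ otherwise, then
\[
\partial\psi_i(x)=\begin{cases}
(-\infty,0], & x=l,\\
\{0\}, & x\in(l,u),\\
[0,+\infty), & x=u,
\end{cases}
\]
and $\partial\psi_i(x)=\emptyset$ for $x\notin[l,u]$; the corresponding term $-\int_0^t\partial\psi_i\,ds$ acts as a two-sided reflection keeping the process inside $[l,u]$, which is the sense in which the subdifferential terms in \eqref{eqn:OriginalsystemofFBSDEs} generalize reflection at multi-sided barriers.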
\begin{condition}\label{X}
For the $X$ process in the SVI system \eqref{eqn:OriginalsystemofFBSDEs}, we impose the following conditions:
\begin{itemize}
\item $b(t, x)$ and $\sigma_i(t, x)$ are continuous in $t$, and satisfy
\begin{align}
\label{eqn:modelX}
&\langle b(t, x(t))-b(t, x'(t)),x_t-x'_t\rangle\leq 0,\quad &\forall x, x'\in{\mathcal C}(\mathbb R^+;\mathbb R^{d_1}),\nonumber\\
&|b(t, x(t))-b(t, x'(t))|\leq l_0(t) \|x-x'\|_t^{\frac12+\alpha}, \quad & \text{for some } \alpha\in[0,1/2],\\
&\|\sigma_i(t, x(t))-\sigma_i(t, x'(t))\|\leq l_i(t)\|x-x'\|_t, \quad & i=1, 2,\nonumber
\end{align}
where $l_i(\cdot)\in L^2([0,T])$ for $i=0,1,2$ and $\|z\|_t:=\sup_{s\leq t}|z_s|$.
\item $0\in\mathrm{Int}(D_1)$ and $\psi_1\geq\psi_1(0)\equiv0$.
\end{itemize}
\end{condition}
\begin{condition}
\label{Y}
For the $Y$ process in the SVI system \eqref{eqn:OriginalsystemofFBSDEs}, we impose the following conditions:
\begin{itemize}
\item $\lambda_1\leq q_t\leq \lambda_2$.
\item For $\|x\|_t\leq R$ and $L_R(t)$ being locally square integrable,
\begin{align*}
\left|\alpha(t, x(t), y(t),q_t)-\alpha(t, x(t), y'(t),q_t)\right|\leq & L_R(t) \|y-y'\|_t,\\
\left\|\beta(t, x(t), y(t),q_t)-\beta(t, x(t), y'(t),q_t)\right\|\leq & L_R(t) \|y-y'\|_t.
\end{align*}
\item $\alpha(\cdot,\cdot,\eta,\cdot)$ and $\beta(\cdot,\cdot,\eta,\cdot)$ are continuous in ${\mathbb R}^+\times{\mathcal C}({\mathbb R}^+;{\mathbb R}^{d_1})\times\mathbb{U}$, for $\eta\in{\mathcal C}({\mathbb R}^+;{\mathbb R}^{d_2})$.
\item $0\in\mathrm{Int}(D_2)$ and $\psi_2\geq\psi_2(0)\equiv0$.
\end{itemize}
\end{condition}
\subsection{Well-posedness}
\label{sec:general_model_wellposedness}
\subsubsection{Well-posedness of the $X$-system}
\label{sec:general_model_wellposedness_X}
The following theorem gives the well-posedness of the $X$ process in the above system.
\begin{theorem}\label{solutionX}
Under Condition \ref{X}, there exists a unique strong solution to the $X$ process in the SVI system \eqref{eqn:OriginalsystemofFBSDEs} in the following sense:
\begin{itemize}
\item For every $t\geq0$, $X_t\in\bar{D}_1$.
\item For any $\varrho\in{\mathcal C}(\mathbb{R}^+;\mathbb{R}^{d_1})$ and $t\geq s\geq 0$,
\begin{equation}
\label{solutionkey}
\int_s^t\langle\varrho_u-X_u,d\phi^{(1)}_u\rangle+\int_s^t\psi_1(X_u)du\leq \int_s^t\psi_1(\varrho_u)du,~~~~~a.e.,
\end{equation}
where $\phi^{(1)}$ is a continuous process of locally bounded variation, $\phi^{(1)}_0=0$.
\item For $t\in {\mathbb R}_+$,
\begin{equation}
\label{eqn:Xdynamic}
X_t=x_0+\int_0^tb(s,X(s))ds+\int_{0}^t\sigma_1(s,X(s))dW_s+\int_{0}^t\sigma_2(s, X(s)) dB_s-\phi^{(1)}_t.
\end{equation}
\end{itemize}
\end{theorem}
\begin{remark}\label{remark2.1}
\begin{enumerate}[(i)]
\item Note that when $\varrho=0$ in (\ref{solutionkey}), one has
$$
\int_s^t\langle X_s,d\phi^{(1)}_s\rangle\geq\int_s^t\psi_1(X_u)du.
$$
\item $\psi_1$ is locally bounded in $D_1$. Set $$
M:=\sup_{|x|\leq a}|\psi_1(x)|, \quad \varrho_u=a\frac{d\phi^{(1)}_u}{d|\phi^{(1)}|_u^r},
$$
where $|\phi^{(1)}|_u^r$ stands for the total variation of $\phi^{(1)}$ defined on an interval $[r,u]$.
Then according to equation \eqref{solutionkey},
$$
a|\phi^{(1)}|_t^s\leq\int_s^t\langle X_u,d\phi^{(1)}_u\rangle+M(t-s).
$$
\item If $(\tilde{X},\tilde{\phi}^{(1)})$ is also a solution, for any $t\geq s\geq 0$,
\begin{equation*}
\int_s^t\langle X_u-\tilde{X}_u,d\phi^{(1)}_u-d\tilde{\phi}^{(1)}_u\rangle\geq0.
\end{equation*}
\end{enumerate}
\end{remark}
We have the following lemma taken from \cite{cepa1998problame}.
\begin{lemma}\label{helly}
Suppose $\{k_n;n\geq1\}$ is a sequence of continuous functions $k_n: [0,T]\to{\mathbb R}^{d_1}$ satisfying
$\sup_n|k_n|_0^T<\infty$ and $\|k_n-k\|_T\to0$.
Then $k$ has finite variation on $[0,T]$ and for a sequence of continuous functions $\{f_n; n\geq1\}$ satisfying $\|f_n-f\|_T\to0$ as $n\to\infty$, the following holds:
\begin{equation*}
\int_s^t \langle f_n(r),dk_n(r)\rangle\to \int_s^t \langle f(r),dk(r)\rangle, \quad\mbox{as}~~n\to\infty, ~~\forall s, t\in[0,T].
\end{equation*}
\end{lemma}
\begin{proof} [Proof of Theorem \ref{solutionX}]
Suppose for every $T>0$ and every $n$ we are given a division of $[0,T]$:
$$
0=T_0^n<T_1^n<\cdots<T_{k_n}^n=T,
$$
with the mesh
$$\Delta_n:=\max_{1\leq k\leq k_n}|T^n_k-T^n_{k-1}|\to0 \quad\text{as } n\to\infty.$$
For $t\in
(T_{k-1}^n, T_k^n]$, denote $T^n_t:=T_{k-1}^n$. Consider the following equation:
\begin{equation}\label{eq02}\left\{\begin{array}{lll}
dX^n_t\in b(t,X^n(T^n_t))dt+\sigma_1(t,X^n(T^n_t))dW_t+\sigma_2(t,X^n(T^n_t)) dB_t-\partial\psi_1(X^n_t)dt, \\
X^n(0)=X^n(T_0^n)=x_0\in\bar{D}_1.
\end{array}\right.\end{equation}
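For intuition only, the following Python sketch illustrates a projected variant of such an Euler step in one dimension, in the special case where $\psi_1$ is the convex indicator of an interval $[l,u]$, so that the subdifferential term reduces to a projection onto $[l,u]$; this is a simplified illustration with hypothetical coefficients, not the exact scheme analyzed below, which keeps the subdifferential term implicit:
\begin{verbatim}
import math, random

# Sketch: projected Euler scheme for a 1-d reflected, path-dependent SDE
# on [l, u]; the projection plays the role of the term -d(phi^{(1)}).
def projected_euler(b, sigma, x0, T=1.0, steps=1000, l=-1.0, u=1.0):
    dt = T / steps
    path = [x0]
    for k in range(steps):
        t, x = k * dt, path[-1]
        dw = math.sqrt(dt) * random.gauss(0, 1)
        x_new = x + b(t, path) * dt + sigma(t, path) * dw
        path.append(min(max(x_new, l), u))   # project back onto [l, u]
    return path

# Hypothetical path-dependent coefficients, for illustration only.
b = lambda t, p: -p[-1]                  # mean reversion towards 0
sigma = lambda t, p: 0.3 + 0.1 * max(p)  # depends on the running maximum
print(projected_euler(b, sigma, x0=0.0)[-1])
\end{verbatim}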
Note that for $t\in[0,T_1^n]$, according to \cite{cepa1998problame}, there exists a unique solution to (\ref{eq02}), and we denote it by $(X^n,\phi^{(1),n})$. Applying It\^o's formula and Remark \ref{remark2.1},
\begin{align*}
|X^n_t|^2=&|x_0|^2+2\int_{0}^t\langle X^n_s, b(s,X^n(T^n_s))\rangle ds+2\int_{0}^t\langle X^n_s,\sigma_1(s,X^n(T^n_s))dW_s\rangle\\
&+2\int_{0}^t\langle X^n_s,\sigma_2(s,X^n(T^n_s))dB_s\rangle-2\int_{0}^t\langle X^n_s,d\phi^{(1),n}_s\rangle\\
&+\sum_{i=1}^2\int_{0}^t\|\sigma_i(s,X^n(T^n_s))\|^2ds\\
\leq &|x_0|^2+\int_{0}^t|X^n_s|^2ds+\int_{0}^t|b(s,X^n(T^n_s))|^2ds\\
&+\sum_{i=1}^2\int_{0}^t\|\sigma_i(s,X^n(T^n_s))\|^2ds+2M t-2a|\phi^{(1),n}|_t^{0}\\
&+2\int_{0}^t\langle X^n_s,\sigma_1(t,X^n(T^n_s))dW_s\rangle+2\int_{0}^t\langle X^n_s,\sigma_2(t,X^n(T^n_s))dB_s\rangle\\
\leq&|x_0|^2+C\int_{0}^t(1+|X^n_s|^2)ds\\
&+\int_{0}^t\bigg[b^2(s,0)+\sum_{j=1}^2\sigma_j^2(s,0)+\sum_{i=0}^2l_i^2(s)\bigg](1+\|X^n\|^2_{T^n_s})ds\\
&+2\int_{0}^t\langle X^n_s,\sigma_1(s,X^n(T^n_s))dW_s\rangle+2\int_{0}^t\langle X^n_s,\sigma_2(s,X^n(T^n_s))dB_s\rangle,
\end{align*}
from which and by using the Burkholder-Davis-Gundy (BDG) inequality and the Gr{\"o}nwall's lemma, we have
\begin{equation*}
\mathbb{E}\|X^n\|_t^2\leq C(1+\mathbb{E}|x_0|^2)\left(\int_0^T \bigg[b^2(s,0)+\sum_{j=1}^2\sigma_j^2(s,0)+\sum_{i=0}^2l_i^2(s) \bigg]ds\right),
\end{equation*}
and
\begin{equation*}\begin{split}
\sup_n\mathbb{E}\sup_{t\leq T_1^n}|X^n_t|^4\leq& C(1+\mathbb{E}|x_0|^2)^2\left(\int_0^T \bigg[b^2(s,0)+\sum_{j=1}^2\sigma_j^2(s,0)+\sum_{i=0}^2l_i^2(s)\bigg]ds\right)^2.\\
\end{split}\end{equation*}
Assuming \[\sup_n\mathbb{E}\sup_{t\leq T_k^n}|X^n_t|^4<\infty,\]
then with the same arguments as above, we have
\begin{equation*}
\begin{split}
&\sup_n\mathbb{E}\sup_{t\leq T_{k+1}^n}|X^n_t|^4\\
\leq& C(1+\sup_n\mathbb{E}\|X^n\|_{T_k^n}^4)\left(\int_0^T \bigg[b^2(s,0)+\sum_{j=1}^2\sigma_j^2(s,0)+\sum_{i=0}^2l_i^2(s)\bigg]ds\right)^2\\
<&\infty.
\end{split}\end{equation*}
Summing up,
\begin{equation}\label{momentXn}
\sup_n\mathbb{E}\sup_{t\leq T}|X^n_t|^4<\infty.
\end{equation}
Applying It\^o's formula again, for $t\in
(T_{k-1}^n, T_k^n]$, we have
\begin{align}
\label{eqn:square_difference}
&|X^n_t-X^n_{T^n_t}|^2\nonumber\\
=&2\int_{T^n_t}^t\langle X^n_s-X^n_{T^n_t}, b(s,X^n(T^n_t))\rangle ds+2\int_{T^n_t}^t\langle X^n_s-X^n_{T^n_t}, \sigma_1(s,X^n(T^n_s))dW_s\rangle\nonumber\\
&+2\int_{T^n_t}^t\langle X^n_s-X^n_{T^n_t}, \sigma_2(s,X^n(T^n_t))dB_s\rangle-2\int_{T^n_t}^t\langle X^n_s-X^n_{T^n_t}, d\phi^{(1),n}_s\rangle\nonumber\\
&+\sum_{i=1}^2\int_{T^n_t}^t\|\sigma_i(s,X^n(T^n_s))\|^2ds\nonumber\\
\leq &\int_{T^n_t}^t|X^n_s-X^n_{T^n_t}|^2ds+\int_{T^n_t}^t|b(s,X^n(T^n_t))|^2ds+\sum_{i=1}^2\int_{T^n_t}^t\|\sigma_i(s,X^n(T^n_t))\|^2ds\nonumber\\
&+2\int_{T^n_t}^t\langle X^n_s-X^n_{T^n_t},\sigma_1(s,X^n(T^n_s))dW_s\rangle+2\int_{T^n_t}^t\langle X^n_s-X^n_{T^n_t},\sigma_2(s,X^n(T^n_s))dB_s\rangle\\
&-2\int_{T^n_t}^t\langle X^n_s-X^n_{T^n_t}, d\phi^{(1),n}_s\rangle.\nonumber
\end{align}
For $\epsilon>0$ and $R>0$, set
\begin{equation}\label{Ae}\begin{split}
A_{\epsilon,R}:=&\{x\in{\mathbb R}^{d_1}:\forall x'\notin\bar{D}_1,|x-x'|\geq\epsilon~~\mbox{and}~~|x-a_0|\leq R\},
\end{split}
\end{equation}
where $a_0\in\mathrm{Int}(D_1)$
such that $A_{\epsilon,R}\neq\emptyset$~ for every $R>0$ and $\epsilon<\epsilon_0$ for some $\epsilon_0>0$.
Then $A_{\epsilon,R}$ is a convex compact subset of $\mathrm{Int}(D_1)$.
Set $$f_R(\epsilon):=\sup\{|x'|:x'\in \partial\psi_1(x), x\in A_{\epsilon,R}\},$$
and according to the local boundedness of $\partial\psi_1$ on $\mathrm{Int}(D_1)$, $|f_R(\epsilon)|<+\infty$.
Let \begin{equation*}
g_R(\delta):=\inf\{\epsilon\in(0,\epsilon_0):f_R(\epsilon)\leq\delta^{-1/2}\},\quad
\delta>0.\end{equation*}
Let
$\delta_R>0$ such that $\delta_R+g_R(\delta_R)<\epsilon_0$. Fix
$R>0$ and $\delta\in(0,\delta_R\wedge 1]$. Since
$$\delta_R+g_R(\delta_R)<\epsilon_0, \quad A_{\delta+g_R(\delta),R}\neq\emptyset,$$
we have
\begin{equation} \label{eqn:f_R_bound} f_R(\delta+g_R(\delta))\leq
\delta^{-1/2}.\end{equation}
For $0\leq t-s\leq\delta$, denote
$\xi^{n,\delta,R}$ as the projection of $X^n_s$ on
$A_{\delta+g_R(\delta),R}$.
Then on the set $\{\|X^n\|_T\leq R\}$, we have
\begin{equation*}
|X^n_{s}-\xi^{n,\delta,R}|\leq \delta+g_R(\delta),
\end{equation*}
which yields
$$\int_{s}^t\langle X^n_{s}-\xi^{n,\delta,R}, d\phi^{(1),n}_r\rangle\leq (\delta+g_R(\delta))|\phi^{(1),n}|_T^0,$$
and
\begin{align*}
\int_{s}^t\langle\xi^{n,\delta,R}-X^n_r,d\phi^{(1),n}_r\rangle
\leq &\int_{s}^t\langle\xi^{n,\delta,R}-X^n_r,\eta^{n,\delta,R}\rangle dr\\
\leq &2R(t-s)f_R(\delta+g_R(\delta))\\
\leq &2\delta^{1/2}R,
\end{align*}
where the first inequality follows by equation \eqref{solutionkey} with $\eta^{n,\delta,R}\in \partial\psi_1(\xi^{n,\delta,R})$, the second inequality follows by the boundedness of $\xi^{n,\delta,R}$ and the definitions of $\xi^{n,\delta,R}$ and $f_R(\delta+g_R(\delta))$, and the third inequality follows by equation \eqref{eqn:f_R_bound}.
Therefore, on the set $\{\|X^n\|_T\leq R\}$,
\begin{equation}\begin{split}
\label{eqn:inner_product_X_phi}
-\int_{s}^t\langle X^n_r-X^n_{s}, d\phi^{(1),n}_r\rangle
=&\int_{s}^t\langle X^n_{s}-\xi^{n,\delta,R}, d\phi^{(1),n}_r\rangle+\int_{s}^t\langle\xi^{n,\delta,R}-X^n_r,d\phi^{(1),n}_r\rangle\\
\leq & (\delta+g_R(\delta))|\phi^{(1),n}|_T^0+2\delta^{1/2}R.
\end{split}\end{equation}
Define the stopping time
$$\tau_n(R):=\inf\{s;|X^n_s|>R\}.$$ For $t\leq \tau_n(R)\wedge T$,
plugging the result of \eqref{eqn:inner_product_X_phi} in \eqref{eqn:square_difference}, we have
\begin{equation*}\begin{split}
&|X^n_t-X^n_{T^n_t}|^2\\
\leq &\int_{T^n_t}^t|X^n_s-X^n_{T^n_t}|^2ds+\int_{T^n_t}^t|b(s,X^n(T^n_t))|^2ds+\sum_{i=1}^2\int_{T^n_t}^t\|\sigma_i(s,X^n(T^n_t))\|^2ds\\
&+2\int_{T^n_t}^t\langle X^n_s-X^n_{T^n_t},\sigma_1(s,X^n(T^n_s))dW_s\rangle+2\int_{T^n_t}^t\langle X^n_s-X^n_{T^n_t},\sigma_2(s,X^n(T^n_s))dB_s\rangle\\
&+2(\Delta_n+g_R(\Delta_n))|\phi^{(1),n}|_T^0+4R\Delta_n^{1/2}.
\end{split}\end{equation*}
Taking supremum and then expectation, we have
\begin{equation}\label{Deltan}\begin{split}
\mathbb{E}\sup_{t\leq T\wedge \tau_n(R)}|X^n_t-X^n_{T^n_t}|^2\leq& C\Delta_n^{1/2}(1+\mathbb{E}\|X^n\|_T^2)+\mathbb{E}|\phi^{(1),n}|_T^0(\Delta_n+g_R(\Delta_n))\\
&+\max_k\int_{T_k^n}^{T_{k+1}^n}\bigg[b^2(s,0)+\sum_{j=1}^2\sigma_j^2(s,0)+\sum_{i=0}^2l_i^2(s)\bigg]ds,
\end{split}\end{equation}
which together with (\ref{momentXn}) implies that
\begin{equation*}\begin{split}
\mathbb{E}\sup_{t\leq T}|X^n_t-X^n_{T^n_t}|^2\leq&\mathbb{E}\sup_{t\leq T}|X^n_t-X^n_{T^n_t}|^2(\mathbbm{1}_{\{T<\tau_n(R)\}}+\mathbbm{1}_{\{T\geq\tau_n(R)\}})\\
\leq&\mathbb{E}\sup_{t\leq T\wedge \tau_n(R)}|X^n_t-X^n_{T^n_t}|^2+\mathbb{E}\|X^n\|_T^2 \mathbbm{1}_{\{T\geq\tau_n(R)\}}\\
\to &0, \quad \mbox{by letting} ~~~n\to\infty ~~~~\mbox{and then} ~~~~R\to\infty.
\end{split}\end{equation*}
Furthermore, by Condition \ref{X} which implies that
$$\int_{0}^t \big\langle X^n_s-X^m_s,b(s,X^n(s))-b(s,X^m(s))\big\rangle ds\leq 0,$$
and by Remark \ref{remark2.1} which implies that
$$\int_{0}^t\big\langle X^n_s-X^m_s,d(\phi^{(1),n}_s-\phi^{(1),m}_s)\big\rangle\geq 0,$$
we have
\begin{align*}
&|X^n_t-X^m_t|^2\\
=&2\int_{0}^t\big\langle X^n_s-X^m_s, b(s,X^n(T^n_s))-b(s,X^m(T^m_s))\big\rangle ds\\
&+2\int_{0}^t \big\langle X^n_s-X^m_s, \sigma_1(s,X^n(T^n_s))-\sigma_1(s,X^m(T^m_s))\big\rangle dW_s\\
&+2\int_{0}^t \big\langle X^n_s-X^m_s,\sigma_2(s,X^n(T^n_s))-\sigma_2(s,X^m(T^m_s))dB_s\big\rangle\\
&-2\int_{0}^t \big\langle X^n_s-X^m_s ,d(\phi^{(1),n}_s-\phi^{(1),m}_s)\big\rangle\\
&+\sum_{i=1}^2\int_{0}^t \|\sigma_i(s,X^n(T^n_s))-\sigma_i(s,X^m(T^m_s))\|^2ds\\
\leq&2\int_{0}^t \big\langle X^n_s-X^m_s,b(s,X^n(T^n_s))-b(s,X^n(s))\big\rangle ds\\
&+2\int_{0}^t \big\langle X^n_s-X^m_s,b(s,X^m(T^m_s))-b(s,X^m(s))\big\rangle ds\\
&+2\int_{0}^t \big\langle X^n_s-X^m_s,\sigma_1(s,X^n(T^n_s))-\sigma_1(s,X^m(T^m_s))\big\rangle dW_s\\
&+2\int_{0}^t \big\langle X^n_s-X^m_s,\sigma_2(s,X^n(T^n_s))-\sigma_2(s,X^m(T^m_s))\big\rangle dB_s\\
&+\sum_{i=1}^2\int_{0}^tl_i^2(s)\|X^m(T^m_\cdot)-X^n(T^n_\cdot)\|_s^2ds\\
\leq&2\int_{0}^tl_0(s)|X^n_s-X^m_s|\big(|X^n(T^n_s)-X^n(s)|^{\frac12+\alpha}+|X^m(T^m_s)-X^m(s)|^{\frac12+\alpha}\big) ds\\
&+2\int_{0}^t \big\langle X^n_s-X^m_s, \sigma_1(s,X^n(T^n_s))-\sigma_1(s,X^m(T^m_s))\big\rangle dW_s\\
&+2\int_{0}^t \big\langle X^n_s-X^m_s, \sigma_2(s,X^n(T^n_s))-\sigma_2(s,X^m(T^m_s))\big\rangle dB_s\\
&+C\sum_{i=1}^2\int_{0}^tl_i^2(s)\big(\|X^m(T^m_\cdot)-X^m(\cdot)\|_s^2+\|X^n(T^n_\cdot)-X^n(\cdot)\|_s^2\big)ds\\
&+C\sum_{i=1}^2\int_{0}^tl_i^2(s)\|X^m-X^n\|_s^2 ds.
\end{align*}
Define the stopping time $$\tau_m(R):=\inf\{s;|X^m_s|>R\}.$$ On one hand, by the BDG inequality and equation \eqref{Deltan}, we get
\begin{equation*}\begin{split}
&\mathbb{E}\sup_{t\leq T}|X^n_t-X^m_t|^2\mathbbm{1}_{\{T<\tau_m(R)\wedge\tau_n(R)\}}\\
=&\mathbb{E}\sup_{t\leq T\wedge\tau_m(R)\wedge\tau_n(R)}|X^n_t-X^m_t|^2\\
\leq&C_T\big(h_R(\Delta_m)+h_R(\Delta_n)\big)+C\sum_{i=1}^2\int_{0}^tl_i^2(s)\mathbb{E}\|X^m-X^n\|_s^2ds,
\end{split}\end{equation*}
where $h_R(\Delta_k)\to0$ as $k\to\infty$.
On the other hand, by H\"older's inequality and equation \eqref{momentXn},
\begin{equation*}\begin{split}
\mathbb{E}\sup_{t\leq T}|X^n_t-X^m_t|^2\mathbbm{1}_{\{T\geq\tau_m(R)\wedge\tau_n(R)\}}
\leq & \left[ \mathbb{E}\sup_{t\leq T}|X^n_t-X^m_t|^4 \cdot \mathbb{E}\mathbbm{1}_{\{T\geq\tau_m(R)\wedge\tau_n(R)\}} \right]^{\frac{1}{2}}\\
\leq & \left[2\sup_n \mathbb{E}\sup_{t\leq T}|X^n_t|^4 \cdot \mathbb{P}(T\geq\tau_m(R)\wedge\tau_n(R)) \right]^{\frac{1}{2}}\\
\leq & \left[C \cdot \mathbb{P} \left(\sup_{t\leq T}|X^n_t| \vee \sup_{t\leq T}|X^m_t| >R\right) \right]^{\frac{1}{2}}\\
\leq & \left[\frac{C}{R^2} \sup_n \mathbb{E}\sup_{t\leq T}|X^n_t|^2 \right]^{\frac{1}{2}}\\
\leq & \frac{C}{R^2}.
\end{split}\end{equation*}
Hence,
\begin{equation*}\begin{split}
\mathbb{E}\sup_{t\leq T}|X^n_t-X^m_t|^2=&\mathbb{E}\bigg[\sup_{t\leq T}|X^n_t-X^m_t|^2(\mathbbm{1}_{\{T<\tau_m(R)\wedge\tau_n(R)\}}+\mathbbm{1}_{\{T\geq\tau_m(R)\wedge\tau_n(R)\}})\bigg]\\
\leq& C_T\big(h_R(\Delta_m)+h_R(\Delta_n)\big)+\frac{C}{R^2}\\
\to& 0, ~~~~\mbox{as}~~~m, n\to\infty~~~~\mbox{and then}~~~~R\to\infty,
\end{split}\end{equation*}
and moreover by equation \eqref{eqn:Xdynamic},
$$
\lim_{m,n\to\infty}\mathbb{E}\|\phi^{(1),m}-\phi^{(1),n}\|_T\to0.
$$
Hence, $\{X_n,\phi^{(1),n}\}_n$ is a Cauchy sequence and by the completeness of the space of processes with respect to the uniform convergence, there exists a pair of continuous processes $(X,\phi^{(1)})$ satisfying that for any $\epsilon>0$,
\begin{equation*}
\mathbb{E}\sup_{t\leq T}|X^n_t-X_t|^2\rightarrow0,~~~~\mathbb{E}\sup_{t\leq T}\big|\phi^{(1),n}_t-\phi_t^{(1)}\big|^2\rightarrow0.
\end{equation*}
Then by Lemma \ref{helly}, we have that $\phi^{(1)}$ is of locally finite variation and equation (\ref{solutionkey}) holds.
Furthermore, by the continuity of $b$ and $\sigma$, we have
\begin{align*}
&\mathbb{E}\sup_{t\leq T}\left|\int_0^{t}\sigma_1(s,X^n(T^n_s))d{W}_s-\int_0^{t}\sigma_1(s,{X}(s))d{W}_s\right|^2\to0,\\
&\mathbb{E}\sup_{t\leq T}\left|\int_0^{t}\sigma_2(s,X^n(T^n_s))d{B}_s-\int_0^{t}\sigma_2(s,{X}(s))d{B}_s\right|^2\to0,\\
&\mathbb{E}\sup_{t\leq T}\left|\int_0^{t}b(s,{X}^n(T^n_s))ds-\int_0^{t}b(s,{X}(s))ds\right|^2\to0.
\end{align*}
Suppose $(\bar{X},\bar{\phi}^{(1)})$ is also a solution. It\^o's formula with Remark \ref{remark2.1} yields
\begin{align*}
&|X_t-\bar{X}_t|^2\\
=&2\int_0^t\big\langle X_s-\bar{X}_s, b(s,X(s))-b(s,\bar{X}(s))\big\rangle ds\\
&+2\int_0^t\big\langle X_s-\bar{X}_s,\sigma_1(s,X(s))-\sigma_1(s,\bar{X}(s))\big\rangle dW_s\\
&+2\int_0^t\big\langle X_s-\bar{X}_s, \sigma_2(s,X(s))-\sigma_2(s,\bar{X}(s))\big\rangle dB_s\\
&-2\int_0^t\big\langle X_s-\bar{X}_s,\big(d\phi^{(1)}_s-d\bar{\phi}^{(1)}_s\big)\big\rangle+\sum_{j=1}^2\int_0^t\|\sigma_j(s,X(s))-\sigma_j(s,\bar{X}(s))\|^2ds\\
\leq&2\int_0^t\big\langle X_s-\bar{X}_s, \sigma_1(s,X(s))-\sigma_1(s,\bar{X}(s))\big\rangle dW_s\\
&+2\int_0^t\big\langle X_s-\bar{X}_s,\sigma_2(s,X(s))-\sigma_2(s,\bar{X}(s))\big\rangle dB_s+\sum_{j=1}^2\int_0^tl_j^2(s)\|X-\bar{X}\|_s^2ds,
\end{align*}
from which we obtain
\begin{equation*}
\mathbb{E}\|X-\bar{X}\|_T^2\leq C\sum_{j=1}^2\int_0^Tl_j^2(s)\mathbb{E}\|X-\bar{X}\|_s^2ds,
\end{equation*}
and the uniqueness follows by Gr{\"o}nwall's inequality.
\end{proof}
\subsubsection{Well-posedness of the $Y$-system}
\label{sec:general_model_wellposedness_Y}
\begin{remark}\label{remark2.1Y}
Analogous to Theorem \ref{solutionX} and Remark \ref{remark2.1}, one can show that
\begin{itemize}
\item For any $\varrho\in\mathcal{C}(\mathbb{R}^+;\mathbb{R}^{d_2})$ and $t\geq s\geq 0$,
\begin{equation}
\label{solutionkeyY}
\int_s^t \big\langle\varrho_u-Y_u,d\phi^{(2)}_u\big\rangle+\int_s^t\psi_2(Y_u)du\leq \int_s^t\psi_2(\varrho_u)du,~~~~~a.e.,
\end{equation}
where $\phi^{(2)}$ is a continuous process of locally bounded variation satisfying that $\phi^{(2)}_0=0$.
\item If $(Y,\phi^{(2)})$ and $(\tilde{Y},\tilde{\phi}^{(2)})$ are two solutions, then for any $t\geq s\geq 0$,
\begin{equation*}
\int_s^t\big\langle Y_u-\tilde{Y}_u,d\phi^{(2)}_u-d\tilde{\phi}^{(2)}_u\big\rangle\geq 0.
\end{equation*}
\end{itemize}
\end{remark}
\begin{proposition}\label{wellY}
Under Conditions \ref{X} and \ref{Y}, there exists a unique strong solution to the $Y$ process in the SVI system \eqref{eqn:OriginalsystemofFBSDEs}.
\end{proposition}
\begin{proof}
Suppose $Z$ is an adapted process satisfying
\begin{equation*}
\mathbb{E}\|Z\|_T^4<\infty.
\end{equation*}
Then according to the deterministic result (see \cite{cepa1998problame}), there exists a unique solution $(Y,\phi^{(2)})$ to the following SVI:
\begin{equation}\label{YZ}
Y_t\in y_0+\int_0^t \alpha(s,X(s),Z(s),q_s)ds+\int_0^t\beta(s,X(s),Z(s),q_s) dB_s-\int_0^t\partial\psi_2(Y_s)ds.
\end{equation}
Note that similar to (\ref{momentXn}), we have
\begin{equation}\label{Xmoment}
\mathbb{E}\|X\|_T^4<\infty.
\end{equation}
Denote $$\tau_R^1:=\inf\{s;|X_s|\vee|Z_s|>R\}.$$
Then for all $R>0$, $\tau_R^1$ is a stopping time and $\tau_R^1\uparrow\infty$ as $R\uparrow\infty$. By It\^o's formula and with arguments similar to the previous section, for any $t<\tau_R^1$,
\begin{align*}
|Y_t|^2\leq&|y_0|^2+2\int_0^t \big\langle Y_s, \alpha(s,X(s),Z(s),q_s)\big\rangle ds-2\int_0^t \big\langle Y_s, d\phi^{(2)}_s \big\rangle\\
&+\int_0^t\|\beta(s,X(s),Z(s),q_s)\|^2 ds+2\int_0^t \big\langle Y_s,\beta(s,X(s),Z(s),q_s)dB_s\big\rangle\\
\leq&|y_0|^2+\int_0^t|Y_s|^2ds+\int_0^t \left(L_R^2(s)\|Z\|_s^2+|\alpha(s,X(s),0,q_s)|^2\right)ds\\
&+\lambda_2^2\int_0^t(L_R^2(s)\|Z\|_s^2+\|\beta(s,X(s),0,q_s)\|^2)ds
+2M t-2a|\phi^{(2)}|_t^{0}\\&+2\int_0^t \big\langle Y_s, \beta(s,X(s),Z(s),q_s) dB_s\big\rangle,
\end{align*}
where in the last inequality we used equation \eqref{solutionkeyY} and the mean value theorem.
Hence,
\begin{align*}
&\mathbb{E}\sup_{t\leq T\wedge\tau_R^1}|Y_t|^4\\
\leq&C\mathbb E(1+|y_0|^4)+C\mathbb E\int_0^{T\wedge\tau_R^1}|Y_s|^4ds\\
&+C\mathbb{E}\left[\int_0^{T\wedge\tau_R^1}\left(L_R^2(s)\|Z\|_s^2+|\alpha(s,X(s),0,q_s)|^2\right)ds\right]^2\\
&+C\mathbb{E}\lambda_2^4\left[\int_0^{T\wedge\tau_R^1}\left(L_R^2(s)\|Z\|_s^2+\|\beta(s,X(s),0,q_s)\|^2\right)ds\right]^2\\
&+C\mathbb{E}\int_0^{T\wedge\tau_R^1}|Y_s|^2\cdot \|\beta(s,X(s),Z(s),q_s)\|^2 ds\\
\leq&C\mathbb E(1+|y_0|^4)+C\mathbb E\int_0^{T\wedge\tau_R^1}|Y_s|^4ds
+\frac12\mathbb{E}\sup_{t\leq T\wedge\tau_R^1}|Y_t|^4\\
&+C\mathbb{E}\|Z\|_T^4\bigg[\int_0^{T\wedge\tau_R^1}\bigg(L_R^2(s)\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad+\sup_{\|x\|_T\leq R,\lambda_1\leq\|y\|\leq\lambda_2}
(|\alpha(s,x,0,y)|^2+\|\beta(s,x,0,y)\|^2)\bigg)ds\bigg]^2,
\end{align*}
and thus by Gr{\"o}nwall's lemma
$$\mathbb{E}\sup_{t\leq T\wedge\tau_R^1}|Y_t|^4\leq C (1+\mathbb{E}\|Z\|_T^4).$$
Furthermore,
\begin{equation}
\begin{split}
\label{eqn:supY_tail}
\mathbb{P}(\|Y\|_T>M)
=&\mathbb P(\|Y\|_T>M,T<\tau_R^1)+\mathbb{P}(\|Y\|_T>M,T\geq\tau_R^1)\\
\leq &\mathbb P(\|Y\|_{T\wedge\tau_R^1}>M)+\mathbb{P}(T\geq\tau_R^1)\\
\leq&\frac{\mathbb{E}\|Y\|_{T\wedge\tau_R^1}^4}{M^4}+\mathbb{P}(T\geq\tau_R^1)\\
\to&0, \quad\mbox{by letting} ~~~M\to\infty ~~\mbox{and then} ~~~R\to\infty.
\end{split}
\end{equation}
Now we are going to show that the map $Z\to (Y_{\cdot\wedge\tau_R^1},\phi_{\cdot\wedge\tau_R^1}^{(2)})$ is a contraction. Suppose $\bar{Z}$ is also an adapted process such that
$$
\mathbb{E}\|\bar{Z}\|^4_T<\infty,
$$
and $(\bar{Y},\bar{\phi}^{(2)})$ is the unique solution to equation \eqref{YZ} with $\bar{Z}$ in place of $Z$. Redefine the stopping time
$$\tau_R^1:=\inf\{s;|X_s|\vee|Z_s|\vee|\bar{Z}_s|>R\}.$$
Remark \ref{remark2.1Y} implies that
$$\int_{0}^t \big\langle Y_s-\bar{Y}_s, d(\phi^{(2)}_s-\bar{\phi}^{(2)}_s)\big\rangle \geq 0,$$
and then by It\^o's formula and Condition \ref{Y},
\begin{equation*}
\begin{split}
&|Y_{t\wedge\tau_R^1}-\bar{Y}_{t\wedge\tau_R^1}|^2\\
\leq&\int_0^{t\wedge\tau_R^1}|Y_s-\bar{Y}_s|^2ds+\int_0^{t\wedge\tau_R^1}|\alpha(s,X(s),Z(s),q_s)-\alpha(s,X(s),\bar{Z}(s),q_s)|^2ds\\
&+\int_0^{t\wedge\tau_R^1}\|\beta(s,X(s),Z(s),q_s)-\beta(s,X(s),\bar{Z}(s),q_s)\|^2ds\\
&+2\int_0^{t\wedge\tau_R^1} \big\langle(Y_s-\bar{Y}_s), \big(\beta(s,X(s),Z(s),q_s)-\beta(s,X(s),\bar{Z}(s),q_s)\big) dB_s\big\rangle\\
\leq&\int_0^{t\wedge\tau_R^1}|Y_s-\bar{Y}_s|^2ds+(1+\lambda_2^2)\int_0^{t\wedge\tau_R^1}L_R^2(s)\|Z-\bar{Z}\|_s^2ds\\
&+2\int_0^{t\wedge\tau_R^1}\big\langle(Y_s-\bar{Y}_s), \big(\beta(s,X(s),Z(s),q_s)-\beta(s,X(s),\bar{Z}(s),q_s)\big) dB_s\big\rangle.
\end{split}\end{equation*}
Set $l_t:=\int_0^t L_R^2(s)ds$. Taking supremum and expectation of the above equation yields
\begin{equation*}
\begin{split}
\mathbb{E}\sup_{t\leq T}|Y_{t\wedge\tau_R^1}-\bar{Y}_{t\wedge\tau_R^1}|^2
\leq&C(\lambda_2,T)\mathbb{E}\int_0^{T\wedge\tau_R^1}L_R^2(s)\|Z-\bar{Z}\|_s^2ds\\
\leq&C(\lambda_2,T)\left(\int_0^{T}L_R^2(s)e^{rl_s}ds\right) \cdot \left(\sup_{t\leq T}e^{-rl_t}\mathbb E\|Z-\bar{Z}\|_t^2\right)\\
\leq&\frac{C(\lambda_2,T)}{r}e^{rl_T}\sup_{t\leq T}e^{-rl_t}\mathbb{E}\|Z-\bar{Z}\|_t^2.
\end{split}\end{equation*}
Taking $r=2C(\lambda_2,T)$ gives
\begin{equation*}
\sup_{t\leq T}e^{-rl_t}\mathbb E\|Y-\bar{Y}\|_t^2\leq \frac12\sup_{t\leq T}e^{-rl_t}\mathbb{E}\|Z-\bar{Z}\|_t^2.
\end{equation*}
Let $Y^{0}\equiv y_0$ and, for $n\geq1$, let $(Y^{n},\phi^{(2),n})$ denote the solution to equation (\ref{YZ}) with $Z$ replaced by $Y^{n-1}$. Then for any $\delta>0$,
\begin{equation*}\begin{split}
\mathbb{P}(\|Y^{n}-Y^{n-1}\|_T>\delta)
\leq&\mathbb{P}(\|Y^{n}-Y^{n-1}\|_T>\delta,T<\tau_R^1)+\mathbb{P}(T\geq \tau_R^1)\\
\leq&\frac{e^{2rl_T}}{\delta^2}e^{-rl_T}\mathbb{E}\|Y^{n}-Y^{n-1}\|_{T\wedge\tau_R^1}^2+\mathbb{P}(T\geq \tau_R^1)\\
\leq&\frac{e^{2rl_T}}{\delta^2}\left(\frac12 \right)^{n-1}\mathbb{E}\|Y^{1}\|_{T\wedge\tau_R^1}^2+\mathbb{P}(T\geq \tau_R^1)\\
\to&0, \quad\quad\mbox{by letting}~n\to\infty ~\mbox{and then}~R\to\infty,\\
\end{split}
\end{equation*}
which, by the $Y$ dynamic, yields
$$\mathbb P(\|\phi^{(2),n}-\phi^{(2),n-1}\|_T>\delta)\to 0, \quad\quad\mbox{by letting}~n\to\infty.$$
Thus, by completeness there exists a unique pair of processes $(Y,\phi^{(2)})$ such that
\begin{align*}
&\mathbb P(\|Y^{n}-Y\|_T>\delta)\to0,\quad \mathbb P(\|\phi^{(2),n}-\phi^{(2)}\|_T>\delta)\to0, &\quad\mbox{by letting}~n\to\infty.
\end{align*}
By equation \eqref{eqn:supY_tail} and the dynamics of $Y^{n}$, we have that
$$\mathbb{P}(\|Y^{n}\|_T>M)\to0, \quad \mathbb{P}(|\phi^{(2),n}|_T^0>M)\to0,\quad\mbox{as}~M\to\infty,$$ from which we get
$$\mathbb P(\|Y\|_T>M)\to0, \quad\mathbb P(|\phi^{(2)}|_T^0>M)\to0, \quad \mbox{as}~M\to\infty.$$
Applying Lemma \ref{helly}, for any $a\in\bar{D}_2$ and $t\geq s\geq 0$,
$$
\int_s^t(a-Y_u)d\phi^{(2)}_u+\int_s^t\psi_2(Y_u)du\leq (t-s)\psi_2(a),~~~~~a.e.
$$
Hence we have proved that $(Y,\phi^{(2)})$ is a solution of the $Y$ process in the SVI system \eqref{eqn:OriginalsystemofFBSDEs}.
To prove the uniqueness, we first suppose $(\tilde{Y},\tilde{\phi}^{(2)})$ is also a solution. Denote
$$\tau_R:=\inf\{s;|X_s|\vee|Y_s|\vee|\tilde{Y}_s|>R\}.$$ Applying It\^o's formula, for $t<\tau_R$, yields
\begin{equation*}
\begin{split}
|Y_t-\tilde{Y}_t|^2\leq&2\int_0^t \big\langle Y_s-\tilde{Y}_s,\big[\alpha(s,X(s),Y(s),q_s)-\alpha(s,X(s),\tilde{Y}(s),q_s)\big]\big\rangle ds\\
&+2\int_0^t \big\langle Y_s-\tilde{Y}_s,\big[\beta(s,X(s),Y(s),q_s)-\beta(s,X(s),\tilde{Y}(s),q_s)\big] dB_s\big\rangle\\
&+\int_0^t\|\beta(s,X(s),Y(s),q_s)-\beta(s,X(s),\tilde{Y}(s),q_s)\|^2ds.
\end{split}\end{equation*}
Then taking the supremum and expectation, together with the BDG inequality, yields
\begin{equation*}
\begin{split}
\mathbb{E}\sup_{t\leq T\wedge\tau_R}|Y_t-\tilde{Y}_t|^2\leq C\mathbb{E}\int_0^{T\wedge\tau_R}|Y_s-\tilde{Y}_s|^2ds+C\mathbb{E}\int_0^{T\wedge\tau_R}\|Y-\tilde{Y}\|_s^2ds,
\end{split}\end{equation*}
from which, by Gr\"onwall's lemma, we have
\begin{equation*}
\mathbb{E}\sup_{t\leq T\wedge\tau_R}|Y_t-\tilde{Y}_t|^2=0,
\end{equation*}
and furthermore
\begin{equation*}
\mathbb{P}\left(\sup_{t\leq T\wedge\tau_R}|Y_t-\tilde{Y}_t|>0\right)=0.
\end{equation*}
Letting $R\to\infty$ gives $Y=\tilde{Y}$ on $[0,T]$ almost surely, which proves the uniqueness.
\end{proof}
\subsection{Asymptotic Analysis}
\label{sec:general_model_asymptotic}
We now study the stability of the SVI system \eqref{eqn:OriginalsystemofFBSDEs} by investigating its perturbed version with a small positive parameter $\varepsilon$:
\begin{equation}
\label{eqn:SVI_P}
\left\{\begin{array}{lll}
X_t^{\varepsilon}\in & x_0+\int_0^tb^{\varepsilon}(s,X^{\varepsilon}(s),\varepsilon)ds+\int_0^t\sigma_1^{\varepsilon}(s,X^{\varepsilon}(s),\varepsilon)dW_s+\int_0^t\sigma_2^{\varepsilon}(s, X^{\varepsilon}(s), \varepsilon) dB_s\\
&-\int_0^t\partial\psi_1(X^{\varepsilon}_s)ds,\\
\\
Y_t^{\varepsilon}\in & y_0+\int_0^t \alpha(s,X^{\varepsilon}(s),Y^{\varepsilon}(s), q_s)ds+\int_0^t\beta(s,X^{\varepsilon}(s),Y^{\varepsilon}(s), q_s)dB_s\\
&-\int_0^t\partial\psi_2(Y^{\varepsilon}_s)ds,
\end{array}\right.
\end{equation}
where
\begin{equation}
\label{eqn:barX}
\lim_{\varepsilon \rightarrow 0} b^{\varepsilon}(t, x, \varepsilon)=b(t, x), \quad \lim_{\varepsilon \rightarrow 0} \sigma_i^{\varepsilon}(t, x, \varepsilon)=\sigma_i(t, x), \quad i=1, 2.
\end{equation}
\begin{condition}\label{Cond:asymptotic_X}
Suppose that $b^{\varepsilon}(t, x, \varepsilon)$ and $\sigma_j^{\varepsilon}(t, x, \varepsilon)$ for $j=1,2$ are continuous in $t$ uniformly in $\varepsilon$, and satisfy
\begin{align*}
&\big\langle b^{\varepsilon}(t, x(t), \varepsilon)-b^{\varepsilon}(t, x'(t),\varepsilon), x(t)-x'(t)\big\rangle\leq 0,\quad &\forall x, x'\in{\mathcal C}(\mathbb R^+;\mathbb R^{d_1}),\\
&|b^{\varepsilon}(t, x(t), \varepsilon)-b^{\varepsilon}(t, x'(t), \varepsilon)|\leq l_0(t) \|x-x'\|_t^{1/2+\alpha}, \quad &\text{for some } \alpha\in[0,1/2],\\
&\|\sigma_i^{\varepsilon}(t, x(t), \varepsilon)-\sigma_i^{\varepsilon}(t, x'(t),\varepsilon)\|\leq l_i(t)\|x-x'\|_t, \quad & i=1, 2,
\end{align*}
where $l_i(t)$ for $i=0,1,2$ are functions of $t$ satisfying that $l_i(\cdot)\in L^2([0,T])$.
\end{condition}
\subsubsection{Asymptotic analysis of the $X$ system}
\label{sec:general_model_asymptotic_X}
In the following, we give the convergence result regarding the $X_t^{\varepsilon}$ process in the perturbed system \eqref{eqn:SVI_P} as $\varepsilon$ goes to $0$.
\begin{theorem}\label{Xpathasymp}
As $\varepsilon \rightarrow 0$, under Conditions \ref{X} and \ref{Cond:asymptotic_X}, we have
\begin{equation}
\mathbb{E}\sup_{t\in [0,T]}|X_t^{\varepsilon}-X_t|^2\rightarrow 0.
\end{equation}
\end{theorem}
\begin{proof}
By applying It\^o's formula,
\begin{align*}
|X_t^{\varepsilon}-X_t|^2=&2\int_0^t\big\langle X_s^{\varepsilon}-X_s,b^{\varepsilon}(s,X^{\varepsilon}(s),\varepsilon)-b(s,X(s))\big\rangle ds\\
&+\sum_{i=1}^2\int_0^t\|\sigma_i^{\varepsilon}(s,X^{\varepsilon}(s),\varepsilon)-\sigma_i(s,X(s))\|^2ds\\
&+2\int_0^t \big\langle X_s^{\varepsilon}-X_s,\big(\sigma_1^{\varepsilon}(s,X^{\varepsilon}(s),\varepsilon)-\sigma_1(s,X(s))\big)dW_s\big\rangle\\
&+2\int_0^t \big\langle X_s^{\varepsilon}-X_s,\big(\sigma_2^{\varepsilon}(s,X^{\varepsilon}(s),\varepsilon)-\sigma_2(s,X(s))\big)dB_s\big\rangle\\
&-2\int_0^t \big\langle X_s^{\varepsilon}-X_s ,d\phi^{(1),\varepsilon}_s-d\phi^{(1)}_s\big\rangle\\
\leq&C\int_0^t\big(1+l_1^2(s)+l_2^2(s)\big)\|X^{\varepsilon}-X\|_s^2ds\\
&+\int_0^t|b^{\varepsilon}(s,X(s),\varepsilon)-b(s,X(s))|^2ds\\
&+\sum_{i=1}^2\int_0^t\|\sigma_i^{\varepsilon}(s,X(s),\varepsilon)-\sigma_i(s,X(s))\|^2ds\\
&+2\int_0^t \big\langle X_s^{\varepsilon}-X_s,\big(\sigma_1^{\varepsilon}(s,X^{\varepsilon}(s),\varepsilon)-\sigma_1(s,X(s))\big)dW_s\big\rangle\\
&+2\int_0^t \big\langle X_s^{\varepsilon}-X_s,\big(\sigma_2^{\varepsilon}(s,X^{\varepsilon}(s),\varepsilon)-\sigma_2(s,X(s))\big)dB_s\big\rangle,
\end{align*}
which implies that
\begin{equation*}
\begin{split}
\mathbb{E}\|X^{\varepsilon}-X\|_T^2\leq&C\mathbb{E}\int_0^T\big(1+l_1^2(s)+l_2^2(s)\big)\|X^{\varepsilon}-X\|_s^2ds\\
&+C\mathbb{E}\int_0^T|b^{\varepsilon}(s,X(s),\varepsilon)-b(s,X(s))|^2ds\\
&+\sum_{i=1}^2\mathbb{E}\int_0^T\|\sigma_i^{\varepsilon}(s,X(s),\varepsilon)-\sigma_i(s,X(s))\|^2ds.
\end{split}\end{equation*}
The Gr{\"o}nwall's lemma yields that
\begin{equation*}
\begin{split}
\mathbb{E}\|X^{\varepsilon}-X\|_T^2\leq&C\mathbb{E}\int_0^T|b^{\varepsilon}(s,X(s),\varepsilon)-b(s,X(s))|^2ds\\
&+C\sum_{i=1}^2\mathbb{E}\int_0^T\|\sigma_i^{\varepsilon}(s,X(s),\varepsilon)-\sigma_i(s,X(s))\|^2ds.\end{split}\end{equation*}
Now it follows from (\ref{Xmoment}) and (\ref{eqn:barX}) that
\begin{equation*}\begin{split}
\mathbb{E}\|X^{\varepsilon}-X\|_T^2\to 0,\quad\quad\quad\mbox{as}\;\; \varepsilon\to0.
\end{split}\end{equation*}
\end{proof}
\subsubsection{Asymptotic analysis of the $Y$ system}
\label{sec:general_model_asymptotic_Y}
In the following, we give the convergence result regarding the $Y_t^{\varepsilon}$ process in the perturbed system \eqref{eqn:SVI_P} as $\varepsilon$ goes to $0$.
\begin{theorem} \label{thm:Ycvg}
Under Conditions \ref{X}, \ref{Y}, and \ref{Cond:asymptotic_X}, as $\varepsilon \rightarrow 0$, for any $\eta>0$, we have
\begin{equation}
\mathbb{P}\left(\sup_{t\in [0,T]}|Y_t^{\varepsilon}-Y_t|>\eta\right)\rightarrow 0.
\end{equation}
\end{theorem}
\begin{proof}
We firstly define stopping time $\tau$ as
\begin{equation}
\label{eqn:tau}
\tau=\inf\{s: |X_s^{\varepsilon}|>R\}.
\end{equation}
Then, with an analysis analogous to that of Proposition \ref{wellY}, we have
\begin{equation*}
\begin{split}
&\mathbb{E}\sup_{t\in [0,T]}|Y_{t \wedge \tau}^{\varepsilon}|^2
\\
\leq & |y_0|^2+C\,\mathbb{E}\int_0^{T \wedge \tau}\left(L_R^2(s)|Y_s^{\varepsilon}|^2+|\alpha(s,X^{\varepsilon}(s),0,q_s)|^2+\|\beta(s,X^{\varepsilon}(s),0,q_s)\|^2\right)ds\\
<& \infty.
\end{split}
\end{equation*}
By the proof of Theorem \ref{solutionX} we have that $\mathbb{E}\sup_{t\in [0,T]}|X_{t}^{\varepsilon}|<\infty$, and then
\begin{equation}
\label{eqn:Yassmprob}
\begin{split}
\mathbb{P}\left(\sup_{t\in [0,T]}|Y_{t}^{\varepsilon}|
> M \right)&=\mathbb{P}\left(\sup_{t\in [0,T]}|Y_{t}^{\varepsilon}|
> M, T\leq \tau\right)+\mathbb{P}\left(\sup_{t\in [0,T]}|Y_{t}^{\varepsilon}|
> M, T>\tau\right)\\
&\leq \mathbb{P}\left(\sup_{t\in [0,T]}|Y_{t\wedge \tau}^{\varepsilon}|
> M\right)+\mathbb{P}(T>\tau)\\
&\leq \frac{\mathbb{E}\left(\sup_{t\in [0,T]}|Y_{t\wedge \tau}^{\varepsilon}|^2\right)}{M^2}+\mathbb{P}\left(\sup_{t\in [0,T]}|X_{t}^{\varepsilon}|> R\right)\\
&\xrightarrow[]{M\to\infty ~\mbox{and then} ~R\rightarrow \infty}0.
\end{split}
\end{equation}
We further define another stopping time $\bar{\tau}$ as
\begin{equation}
\label{eqn:bartau}
\bar{\tau}=\tau \wedge \inf\{s: |Y_s^{\varepsilon}|>M\}.
\end{equation}
Then by It\^o's formula and Gr{\"o}nwall's lemma, we have
\begin{align*}
&\mathbb{E}\sup_{t\in [0,T\wedge \bar \tau]}|Y_{t}^{\varepsilon}-Y_{t}|^2\\
\leq & C\mathbb{E}\int_0^{t \wedge \bar \tau} \left|\alpha(s,X_s^{\varepsilon},Y_s^{\varepsilon},q_s)-\alpha(s,X_s,Y_s,q_s)\right|^2ds\\
&+ C\mathbb{E}\int_0^{t \wedge \bar \tau} \left\|\beta(s,X_s^{\varepsilon},Y_s^{\varepsilon},q_s)-\beta(s,X_s,Y_s,q_s)\right\|^2ds\\
\leq & C \mathbb{E} \int_0^{t \wedge \bar \tau} \left(L_R^2(s)|Y_s^{\varepsilon}-Y_s|^2+|\alpha(s,X_s^{\varepsilon},Y_s,q_s)-\alpha(s,X_s,Y_s,q_s)|^2\right)ds\\
&+C \mathbb{E} \int_0^{t \wedge \bar \tau} \|\beta(s,X_s^{\varepsilon},Y_s,q_s)-\beta(s, X_s,Y_s,q_s)\|^2 ds\\
\leq & C \mathbb{E} \int_0^{t \wedge \bar \tau} |\alpha(s,X_s^{\varepsilon},Y_s,q_s)-\alpha(s,X_s,Y_s,q_s)|^2 ds\\
& + C \mathbb{E} \int_0^{t \wedge \bar \tau} \|\beta(s,X_s^{\varepsilon},Y_s,q_s)-\beta(s,X_s, Y_s,q_s)\|^2 ds.
\end{align*}
Then by the continuity of the functions $\alpha$ and $\beta$ enforced in Condition \ref{Y}, as well as the convergence result in Theorem \ref{Xpathasymp}, we have
$$\mathbb{E}\sup_{t\in [0,T\wedge \bar \tau]}|Y_{t}^{\varepsilon}-Y_{t}|^2\to 0,\quad\quad\quad\mbox{as}\;\; \varepsilon\to0.$$
By equation \eqref{eqn:Yassmprob} and an argument similar to its derivation, we obtain that for any $\eta>0$,
$$\lim_{\varepsilon\to0}\mathbb{P}\left(\sup_{t\in [0,T]}|Y_t^{\varepsilon}-Y_t|>\eta\right)= 0.$$
\end{proof}
\section{One-dimensional SVI system with H\"older continuous coefficients}
\label{sec:holder}
In this section, we consider the following one-dimensional SVI system with H\"older continuous coefficients:
\begin{equation}
\label{eqn:holder}
\left\{\begin{array}{lll}
X_t\in & x_0+\int_0^tb(s,X_s)ds+\int_0^t\sigma_1(s,X_s)dW_s+\int_0^t\sigma_2(s, X_s) dB_s\\
&-\int_0^t\partial\psi_1(X_s)ds,\\
\\
Y_t\in & y_0+\int_0^t \alpha(s,X_s,Y_s, q_s)ds+\int_0^t\beta(s,X_s,Y_s, q_s)dB_s\\
&-\int_0^t\partial\psi_2(Y_s)ds,\\
\end{array}\right.
\end{equation}
where $b, ~\sigma_1, ~\sigma_2$ are measurable functions mapping from ${\mathbb R}^+\times{\mathbb R}$ to ${\mathbb R}$; $\alpha$ and $\beta$ are measurable functions mapping from ${\mathbb R}^+\times{\mathbb R}\times{\mathbb R}\times \mathbb{U}$ to ${\mathbb R}$; and $W$ and $B$ are two independent standard one-dimensional Brownian motions on a complete filtered probability space $(\Omega, \mathcal {F}, \mathcal {F}_t, \mathbb{P})$.
\begin{condition}\label{1dX} For the $X$ process in the SVI system \eqref{eqn:holder}, we impose the following conditions: Assume that $b(t,x), ~\sigma_1(t,x), ~\sigma_2(t,x)$ are continuous in $(t,x)$, and
\begin{align}
&\left(b(t, x)-b(t, x')\right)(x-x')\leq 0,\nonumber\\
&\left(b(t, x)-b(t, x')\right)^2\leq l_0(t) |x-x'|^{1+2\alpha}, \quad&\text{for some } \alpha\in[0,1/2],\nonumber\\
&\left(\sigma_i(t, x)-\sigma_i(t, x')\right)^2\leq l_i(t)|x-x'|^{1+2\alpha}, \quad &i=1, 2,\nonumber\\
& \psi_1\geq\psi_1(0)=0, \quad & 0\in\mathrm{Int}(D_1),\nonumber
\end{align}
where $l_i(t)$ for $i=0,1,2$ are functions of $t$ only and satisfy $l_i(\cdot)\in L^1([0,T])$.
\end{condition}
\begin{condition}\label{1dY} For the $Y$ process in the SVI system \eqref{eqn:holder}, we impose the following conditions:
\begin{itemize}
\item $\lambda_1\leq q_t\leq \lambda_2$,
\item $\alpha, ~\beta$ are continuous in $(t,x,y,q)$ satisfying
$$(y-y')\big(\alpha(t,x,y,q)-\alpha(t,x,y',q)\big)\leq 0,$$
and, for some $\gamma\in[0,1/2]$,
\begin{equation*}\begin{split}
&|\alpha(t,x,y,q)-\alpha(t,x',y',q)|^2\vee|\beta(t,x,y,q)-\beta(t,x',y',q)|^2\\
\leq&c(t)(|x-x'|^{1+2\gamma}+|y-y'|^{1+2\gamma}),
\end{split}\end{equation*}
where $c(t)$ is locally integrable in $t\geq0$.
\item $0\in\mathrm{Int}(D_2)$, $\psi_2\geq\psi_2(0)\equiv0$.
\end{itemize}
\end{condition}
\subsection{Well-posedness}
\label{sec:holder_wellposedness}
First of all we solve the well-posedness problem under the above conditions. An estimate for the solution process is given in the following proposition.
\begin{proposition}\label{1dproperty1}
Suppose $(X,\phi^{(1)})$ is a solution of the $X$ process in the SVI system \eqref{eqn:holder}. Then, under Condition \ref{1dX}, one has
$$\mathbb{E}\|X\|_T^2+\mathbb{E}\int_0^T\psi_1(X_s)ds\leq C(1+|x_0|^2),$$
and then
$$\mathbb{E}|\phi^{(1)}|_T^0\leq C(1+|x_0|^2).$$
\end{proposition}
\begin{proof}
Note that by Condition \ref{1dX},
\begin{equation}\label{bsigma}
\begin{split}
&|b(t,x)|^2\leq 2|b(t,x)-b(t,0)|^2+2|b(t,0)|^2\leq 2l_0(t)\big(1+|x|^2\big)+2|b(t,0)|^2,\\
&|\sigma_i(t,x)|^2\leq 2l_i(t)\big(1+|x|^2\big)+2|\sigma_i(t,0)|^2, ~~~i=1,2.
\end{split}
\end{equation}
Then applying It\^o's formula and by Remark \ref{remark2.1}, we have
\begin{align*}
|X_t|^2=&|x_0|^2+2\int_0^tX_sb(s,X_s)ds+2\int_0^tX_s\sigma_1(s,X_s)dW_s+2\int_0^tX_s\sigma_2(s,X_s)dB_s\\
&+\sum_{i=1}^2\int_0^t|\sigma_i(s,X_s)|^2ds-2\int_0^tX_sd\phi^{(1)}_s\\
\leq&|x_0|^2+C\int_0^t\big(1+l_0(s)+l_1(s)+l_2(s)\big)\big(1+|X_s|^2\big)ds\\
&+C\int_0^t\big(|b(s,0)|^2+|\sigma_1(s,0)|^2+|\sigma_2(s,0)|^2\big)ds\\
&+2\int_0^tX_s\sigma_1(s,X_s)dW_s+2\int_0^tX_s\sigma_2(s,X_s)dB_s-2\int_0^t\psi_1(X_s)ds.
\end{align*}
By the BDG inequality and H\"older's inequality,
\begin{align*}
&\mathbb{E}\sup_{t\leq T}\left|2\int_0^tX_s\sigma_1(s,X_s)dW_s+2\int_0^tX_s\sigma_2(s,X_s)dB_s\right|\\
\leq&C\mathbb{E}\left(\int_0^T|X_s|^2|\sigma_1(s,X_s)|^2ds\right)^{1/2}+C\mathbb{E}\left(\int_0^T|X_s|^2|\sigma_2(s,X_s)|^2ds\right)^{1/2}\\
\leq&\frac12\mathbb{E}\|X\|_T^2+C\mathbb{E}\int_0^T\big(1+l_1(s)+l_2(s)\big)\big(1+|X_s|^2\big)ds+C\mathbb{E}\int_0^T\big(|\sigma_1(s,0)|^2+|\sigma_2(s,0)|^2\big)ds.
\end{align*}
Therefore, Gr{\"o}nwall's lemma yields that
\begin{equation*}
\mathbb{E}\|X\|_T^2+\mathbb{E}\int_0^T\psi_1(X_s)ds\leq C(1+|x_0|^2).
\end{equation*}
Moreover, by using this estimate and Remark \ref{remark2.1}, we also have
\begin{equation*}
\mathbb{E}|\phi^{(1)}|_T^0\leq C(1+|x_0|^2).
\end{equation*}
\end{proof}
The well-posedness of the $X$ process in the SVI system \eqref{eqn:holder} is established in the following proposition.
\begin{proposition}\label{well1d}
Under Condition \ref{1dX}, there is a unique strong solution of the $X$ process in the SVI system \eqref{eqn:holder}.
\end{proposition}
\begin{proof}
We apply a regularization approximation method here. Define the Moreau-Yosida regularization of $\psi_1$ as
\begin{equation}\label{MY}
\psi_1^n(x):=\inf \left\{\frac n{2}|x'-x|^2+\psi_1(x'); x'\in{\mathbb R} \right\}, ~~~n\geq1, ~~\forall x\in{\mathbb R}.
\end{equation}
Then $\psi_1^n$ is a $\mathcal{C}^1$ convex function whose gradient $\nabla\psi_1^n$ is monotone and Lipschitz continuous with Lipschitz constant $n$, even though $\psi_1$ itself need not be differentiable.
Moreover, according to \cite{asiminoaei1997approximation}, $\nabla\psi_1^n$ has the following properties
\begin{align}
&(x-x')(\nabla\psi_1^n(x)-\nabla\psi_1^m(x'))\geq-\left(\frac1n+\frac1m\right)\nabla\psi_1^n(x)\nabla\psi_1^m(x'),~~~\forall x, x'\in\mathbb R, \label{psinmproperty1}\\
&\nabla\psi_1^n(x)\in\partial\psi_1(J_nx),\quad \psi_1(J_nx)\leq \psi_1^n(x)\leq\psi_1(x),\label{psinmproperty2}\\
&\psi_1^n(x)=\psi_1^n(J_nx)+\frac1{2n}|\nabla\psi_1^n(x)|^2, \label{psinmproperty3}
\end{align}
where $J_nx:=x-\frac1n\nabla\psi_1^n(x)$.
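For instance, in the prototypical case where $\psi_1$ is the convex indicator of the half line, i.e.\ $\psi_1(x)=0$ for $x\geq0$ and $\psi_1(x)=+\infty$ otherwise, a direct computation gives
$$\psi_1^n(x)=\frac n2\big(x^-\big)^2,\qquad \nabla\psi_1^n(x)=n\min(x,0),\qquad J_nx=\max(x,0),$$
so that $J_n$ is the projection onto $[0,\infty)$ and $\nabla\psi_1^n$ is indeed monotone and Lipschitz with constant $n$; we record this example only as an illustration of the regularization.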
It is known that the following stochastic differential equation has a unique strong solution
\begin{equation}
\label{eqn:holder_perturbed_system}
dX_t^n= b(t,X_t^n)dt+\sigma_1(t,X_t^n)dW_t+\sigma_2(t, X_t^n) dB_t-\nabla\psi_1^n(X_t^n)dt,~~~
X_0^n=x_0\in \bar{D}_1,
\end{equation}
where $\nabla\psi_1^n$ is the gradient of $\psi_1^n$.
Moreover, with arguments similar to those in Proposition \ref{1dproperty1},
\begin{align*}
\mathbb{E}\|X^n\|_T^4\leq&C\mathbb{E}|x_0|^4+C\mathbb{E}\Big(\int_0^T\big(1+l_0(s)+l_1(s)+l_2(s)\big)|X^n_s|^2 ds\Big)^2\\
&+C\mathbb{E}\Big(\int_0^T\big(|b(s,0)|^2+|\sigma_1(s,0)|^2+|\sigma_2(s,0)|^2\big)ds\Big)^2\\
&+\mathbb{E}\Big(\int_0^TX^n_s\sigma_1(s,X^n_s)dW_s+\int_0^TX^n_s\sigma_2(s,X^n_s)dB_s\Big)^2\\
\leq&C\mathbb{E}|x_0|^4+\frac12\mathbb{E}\|X^n\|_T^4+C\mathbb{E}\left(\int_0^T\big(1+l_0(s)+l_1(s)+l_2(s)\big)|X^n_s|^2 ds\right)^2\\
&+C\mathbb{E}\big(\int_0^T\big(|b(s,0)|^2+|\sigma_1(s,0)|^2+|\sigma_2(s,0)|^2\big)ds\big)^2\\
\leq&C\mathbb{E}|x_0|^4+\frac12\mathbb{E}\|X^n\|_T^4+C_T\mathbb{E}\left(\int_0^T\big(1+l_0(s)+l_1(s)+l_2(s)\big)^2|X^n_s|^4ds\right)\\
&+C\mathbb{E}\big(\int_0^T\big(|b(s,0)|^2+|\sigma_1(s,0)|^2+|\sigma_2(s,0)|^2\big)ds\big)^2,
\end{align*}
where in the last inequality we used the Cauchy-Schwarz inequality in the integral form.
Then Gr\"onwall's lemma yields
\begin{equation}
\label{4moment}
\sup_n\mathbb{E}\|X^n\|_T^4\leq C(1+\mathbb{E}|x_0|^4),
\end{equation}
and by the dynamic \eqref{eqn:holder_perturbed_system} we further have
\begin{equation}\label{psin}
\sup_n\mathbb{E}\left(\int_0^T|\nabla\psi_1^n(X_s^n)|ds\right)^2<\infty.
\end{equation}
Note that by It\^o's formula, the fact that $\nabla\psi_1^n$ is Lipschitz with Lipschitz constant $n$, and equation \eqref{psinmproperty3}, we have
\begin{align*}
&|\psi_1^n(X^n_t)|^2\\
=&|\psi_1^n(x_0)|^2+2\int_0^t\psi_1^n(X^n_s)\nabla\psi_1^n(X^n_s)b(s,X^n_s)ds-2\int_0^t\psi_1^n(X^n_s)|\nabla\psi_1^n(X^n_s)|^2ds\\
&+\sum_{i=1}^2\int_0^t|\nabla\psi_1^n(X^n_s)|^2|\sigma_i(s,X^n_s)|^2ds+n\sum_{i=1}^2\int_0^t\psi_1^n(X^n_s)|\sigma_i(s,X^n_s)|^2ds\\
&+2\int_0^t\psi_1^n(X^n_s)\nabla\psi_1^n(X^n_s)\sigma_1(s,X^n_s)dW_s+2\int_0^t\psi_1^n(X^n_s)\nabla\psi_1^n(X^n_s)\sigma_2(s,X^n_s)dB_s\\
\leq&|\psi_1^n(x_0)|^2+2n\int_0^t\psi_1^n(X^n_s)|X^n_s b(s,X^n_s)|ds-2\int_0^t\psi_1^n(X^n_s)|\nabla\psi_1^n(X^n_s)|^2ds\\
&+3n\sum_{i=1}^2\int_0^t\psi_1^n(X^n_s)|\sigma_i(s,X^n_s)|^2ds+2\int_0^t\psi_1^n(X^n_s)\nabla\psi_1^n(X^n_s)\sigma_1(s,X^n_s)dW_s\\
&+2\int_0^t\psi_1^n(X^n_s)\nabla\psi_1^n(X^n_s)\sigma_2(s,X^n_s)dB_s.
\end{align*}
By the BDG inequality, Condition \ref{1dX}, equation \eqref{psinmproperty3}, and Young's inequality for products, we obtain
\begin{equation*}\begin{split}
&\mathbb{E}\sup_{t\leq T}\left|2\int_0^t\psi_1^n(X^n_s)\nabla\psi_1^n(X^n_s)\sigma_1(s,X^n_s)dW_s+2\int_0^t\psi_1^n(X^n_s)\nabla\psi_1^n(X^n_s)\sigma_2(s,X^n_s)dB_s\right|\\
\leq&C\mathbb{E}\Big(\int_0^T\left|\psi_1^n(X^n_s)\nabla\psi_1^n(X^n_s)\sigma_1(s,X^n_s)\right|^2ds\Big)^{1/2}\\
&+C\mathbb{E}\Big(\int_0^T|\psi_1^n(X^n_s)\nabla\psi_1^n(X^n_s)\sigma_2(s,X^n_s)|^2ds\Big)^{1/2}\\
\leq&\frac12\mathbb{E}\sup_{t\leq T}|\psi_1^n(X^n_t)|^2+Cn\mathbb{E}\int_0^T|\psi_1^n(X^n_s)|\cdot \big(|\sigma_1(s,X^n_s)|^2+|\sigma_2(s,X^n_s)|^2\big)ds.
\end{split}\end{equation*}
Using the fact that $$|\psi_1^n(X^n_s)|\leq |\nabla\psi_1^n(X^n_s)|\cdot |X^n_s|,$$ which holds since $\psi_1^n$ is a convex function with $\psi_1^n(0)=0$, together with Young's inequality for products, we have
\begin{align*}
&\frac12\mathbb{E}\sup_{t\leq T}|\psi_1^n(X^n_t)|^2+2\mathbb{E}\int_0^T\psi_1^n(X^n_s)|\nabla\psi_1^n(X^n_s)|^2ds\\
\leq&C\mathbb{E}|\psi_1^n(x_0)|^2+Cn\mathbb{E}\int_0^T|\psi_1^n(X^n_s)|\bigg(|X^n_s||b(s,X^n_s)|+|\sigma_1(s,X^n_s)|^2+|\sigma_2(s,X^n_s)|^2\bigg)ds\\
\leq&C\mathbb{E}|\psi_1^n(x_0)|^2+Cn\mathbb{E}\int_0^T|\psi_1^n(X^n_s)|^{1/3}|\nabla\psi_1^n(X^n_s)|^{2/3}|X^n_s|^{2/3}\\
&\hspace{120pt minus 1fil}\times\bigg(|X^n_s||b(s,X^n_s)|+|\sigma_1(s,X^n_s)|^2+|\sigma_2(s,X^n_s)|^2\bigg)ds\hfilneg\\
\leq&C\mathbb{E}|\psi_1^n(x_0)|^2+\mathbb{E}\int_0^T|\psi_1^n(X^n_s)||\nabla\psi_1^n(X^n_s)|^2ds\\
&+Cn^{3/2}\mathbb{E}\int_0^T|X^n_s|\left(|X^n_s|^2+\sum_{i=0}^2l_i(s)|X^n_s|^2+|b(s,0)|^2+|\sigma_1(s,0)|^2+|\sigma_2(s,0)|^2\right)ds\\
\leq&C\mathbb{E}|\psi_1^n(x_0)|^2+\mathbb{E}\int_0^T|\psi_1^n(X^n_s)||\nabla\psi_1^n(X^n_s)|^2ds\\
&+Cn^{3/2}\mathbb{E}\int_0^T(1+|X^n_s|^4)\left(1+\sum_{i=0}^2l_i(s)+|b(s,0)|^2+|\sigma_1(s,0)|^2+|\sigma_2(s,0)|^2\right)ds,
\end{align*}
which together with equation \eqref{4moment} yields that
\begin{equation}
\mathbb{E}\sup_{t\leq T}|\psi_1^n(X^n_t)|^2\leq Cn^{3/2}.
\end{equation}
By equation \eqref{psinmproperty3}, we further have
\begin{equation}\label{psin4}
\mathbb{E}\sup_{t\leq T}|\nabla\psi_1^n(X^n_t)|^4\leq 4n^2\mathbb{E}\sup_{t\leq T}|\psi_1^n(X^n_t)|^2\leq Cn^{7/2}.
\end{equation}
Now take any $\delta \in (0,1)$, any $h>0$, and set
$$g_{\delta,h}(x)=\int_0^x\int_0^y f_{\delta,h}(\gamma)d\gamma dy$$
where $f_{\delta,h}\geq 0$ and vanishes outside $[h\delta,h]$, and
$$f_{\delta,h}(x)\leq \frac{2}{x\ln \delta^{-1}}, \quad \int f_{\delta,h}(x) dx=1.$$
Then we have
\begin{equation}
\label{eqn:delta_h_inequality}
|x|\leq g_{\delta,h}(|x|)+h,
\end{equation}
and
\begin{equation}
\label{eqn:delta_h_inequality_2}
0\leq {g}_{\delta,h}'\leq 1, \quad {g}_{\delta,h}''(|x|)\leq \frac{2}{|x|\ln \delta^{-1}}\mathbbm{1}_{(|x|\in [h\delta,h])}.
\end{equation}
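For instance, one admissible choice is
$$f_{\delta,h}(x)=\frac{1}{x\ln\delta^{-1}}\,\mathbbm{1}_{[h\delta,h]}(x),$$
for which $\int f_{\delta,h}(x)dx=1$, $g''_{\delta,h}=f_{\delta,h}$, and $g'_{\delta,h}$ increases from $0$ to $1$ across $[h\delta,h]$; a choice of this form underlies the classical Yamada--Watanabe uniqueness argument.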
By applying equation \eqref{eqn:delta_h_inequality} and then It\^o's formula,
\begin{align*}
|X^{m}_t-X^n_t|\leq&g_{\delta,h}(|X^{m}_t-X^n_t|)+h\\
\leq&\int_0^tg'_{\delta,h}(|X^{m}_s-X^n_s|)\frac{X^{m}_s-X^n_s}{|X^{m}_s-X^n_s|}\big[b(s,X^{m}_s)-b(s,X^n_s)\big]ds\\
&+\frac12\sum_{i=1}^2\int_0^tg''_{\delta,h}(|X^{m}_s-X^n_s|)\big[\sigma_i(s,X^{m}_s)-\sigma_i(s,X^n_s)\big]^2ds\\
&+\int_0^tg'_{\delta,h}(|X^{m}_s-X^n_s|)\frac{X^{m}_s-X^n_s}{|X^{m}_s-X^n_s|}\big[\sigma_1(s,X^{m}_s)-\sigma_1(s,X^n_s)\big]dW_s\\
&+\int_0^tg'_{\delta,h}(|X^{m}_s-X^n_s|)\frac{X^{m}_s-X^n_s}{|X^{m}_s-X^n_s|}\big[\sigma_2(s,X^{m}_s)-\sigma_2(s,X^n_s)\big]dB_s\\
&-\int_0^tg'_{\delta,h}(|X^{m}_s-X^n_s|)\frac{X^{m}_s-X^n_s}{|X^{m}_s-X^n_s|}\big[\nabla\psi_1^m(X^{m}_s)-\nabla\psi_1^n(X^n_s)\big]ds+h.
\end{align*}
Then by Condition \ref{1dX}, equation \eqref{psinmproperty1}, and equation \eqref{eqn:delta_h_inequality_2},
$$|X^{m}_t-X^n_t|\leq I(t)+M(t)+J(t)+h,$$
where
\begin{align*}
I(t):=&\frac{1}{\ln \delta^{-1}}\sum_{i=1}^2\int_0^tl_i(s)|X^{m}_s-X^n_s|^{2\alpha}\mathbbm{1}_{\{|X^{m}_s-X^n_s|\in[h\delta,h]\}}ds,\\
M(t):=&\int_0^tg'_{\delta,h}(|X^{m}_s-X^n_s|)\frac{X^{m}_s-X^n_s}{|X^{m}_s-X^n_s|}\big[\sigma_1(s,X^{m}_s)-\sigma_1(s,X^n_s)\big]dW_s\\
&+\int_0^tg'_{\delta,h}(|X^{m}_s-X^n_s|)\frac{X^{m}_s-X^n_s}{|X^{m}_s-X^n_s|}\big[\sigma_2(s,X^{m}_s)-\sigma_2(s,X^n_s)\big]dB_s,\\
J(t):=&\int_0^tg'_{\delta,h}(|X^{m}_s-X^n_s|)\left(\frac1n+\frac1m\right)|X^{m}_s-X^n_s|^{-1}\nabla\psi_1^m(X^{m}_s)\nabla\psi_1^n(X^n_s)ds.
\end{align*}
Clearly we have
$$\mathbb{E}\sup_{t\leq T}|I(t)|\leq \frac{2h^{2\alpha}}{\ln \delta^{-1}}\sum_{i=1}^2\int_0^Tl_i(s)ds\leq C\frac{h^{2\alpha}}{\ln \delta^{-1}},$$
and
\begin{equation*}\begin{split}
\mathbb{E}\sup_{t\leq T}|M(t)|\leq& C\sum_{i=1}^2\mathbb{E}\left(\int_0^Tl_i(s)|X^{m}_s-X^n_s|^{1+2\alpha}ds\right)^{1/2}\\
\leq&C\sum_{i=1}^2\mathbb{E}\int_0^Tl_i(s)|X^{m}_s-X^n_s|^{2\alpha}ds+\frac12\mathbb{E}\sup_{t\leq T}|X^{m}_t-X^n_t|.
\end{split}\end{equation*}
By equations \eqref{psin}, \eqref{psin4}, and \eqref{eqn:delta_h_inequality_2},
\begin{align*}
\mathbb{E}\sup_{t\leq T}|J(t)|\leq& \frac{1}{h\delta}\mathbb{E}\int_0^T \left(\frac1n+\frac1{m}\right)\nabla\psi_1^{m}(X^{m}_s)\nabla\psi_1^n(X^n_s)ds\\
\leq&\frac{1}{h\delta}\Big[\frac1n\big(\mathbb{E}\sup_{t\leq T}|\nabla\psi_1^n(X^n_t)|^2\big)^{1/2}\Big(\mathbb{E}\big(\int_0^T|\nabla\psi_1^{m}(X^{m}_t)|dt\big)^{2}\Big)^{1/2}\\
&+\frac1m\big(\mathbb{E}\sup_{t\leq T}|\nabla\psi_1^m(X^{m}_t)|^2\big)^{1/2}\Big(\mathbb{E}\big(\int_0^T|\nabla\psi_1^n(X^n_t)|dt\big)^{2}\Big)^{1/2}\Big]\\
\leq&C\frac1{h\delta}(n^{-1/8}+m^{-1/8}).
\end{align*}
Summing up these estimates, by Gr\"onwall's lemma, we have
\begin{equation*}\begin{split}
\mathbb{E}\sup_{t\leq T}|X^{m}_t-X^n_t|\leq& C\frac1{h\delta}(n^{-1/8}+m^{-1/8})+C\frac{h^{2\alpha}}{\ln \delta^{-1}}+h.
\end{split}\end{equation*}
Considering $\alpha\in[0,1/2]$, we further have
\begin{equation*}
\mathbb{E}\sup_{t\leq T}|X^{m}_t-X^n_t| \leq C(h\delta)^{-2\alpha}(n^{-\alpha/4}+m^{-\alpha/4})+C\frac{h^{2\alpha}}{\ln \delta^{-1}}+h.
\end{equation*}
Taking $\delta=\frac12$ and $h=\min\{m,n\}^{-1/16}$ yields
\begin{equation*}
\mathbb{E}\sup_{t\leq T}|X^{m}_t-X^n_t|\leq C\min\{m,n\}^{-\frac{\alpha}8}\to0, \quad\mbox{as}\quad m,n\to\infty.
\end{equation*}
Moreover, by setting $$\phi^{(1),n}_t:=\int_0^t\nabla\psi_1^n(X^n_s)ds,$$ we have
\begin{equation*}
\mathbb{E}\sup_{t\leq T}|\phi^{(1),m}_t-\phi^{(1),n}_t|\to0, \quad\mbox{as}\quad m,n\to\infty.
\end{equation*}
Hence $(X^n, \phi^{(1),n})$ is Cauchy in the complete metric space $$L^1(\Omega;\mathcal{C}([0,T];{\mathbb R}))\times L^1(\Omega;\mathcal{C}([0,T];{\mathbb R}))$$ and thus there exists $(X,\phi^{(1)})$ in the space satisfying that
\begin{equation}
\label{eqn:Xn_cvg}
\mathbb{E}\sup_{t\leq T}|X^{n}_t-X_t|\to0 \quad\text{and}\quad \mathbb{E}\sup_{t\leq T}|\phi^{(1),n}_t-\phi^{(1)}_t|\to0,\quad\quad\mbox{as}\quad n\to\infty.
\end{equation}
Now it remains to prove that $(X,\phi^{(1)})$ is a solution. Since by equation \eqref{psin} we have
$$
\sup_n\mathbb{E}\|\phi^{(1),n}\|_T<\infty,
$$
it follows that
$$
\mathbb{E}\|\phi^{(1)}\|_T<\infty.
$$
Recall that $\psi_1^n$ is convex and that
$$
\psi_1(J_nx)\leq\psi_1^n(x)\leq \psi_1(x)
$$
as given in equation \eqref{psinmproperty2}. Then, for any $\varrho\in \mathcal{C}([0,T];{\mathbb R})$ and any $t\in[0,T]$,
\begin{equation*}\begin{split}
\int_0^t(\varrho_s-X^n_s)d\phi^{(1),n}_s=&\int_0^t(\varrho_s-X^n_s)\nabla\psi_1^n(X^n_s)ds\\
\leq& \int_0^t\psi_1^n(\varrho_s)ds-\int_0^t\psi_1^n(X^n_s)ds\\
\leq&\int_0^t\psi_1^n(\varrho_s)ds-\int_0^t\psi_1(J_nX^n_s)ds.
\end{split}\end{equation*}
By equation \eqref{eqn:Xn_cvg} and the fact that a monotone increasing sequence of random variables converging in probability also converges almost surely, sending $n\to\infty$ gives
$$
\int_0^t(\varrho_s-X_s)d\phi^{(1)}_s\leq \int_0^t\psi_1(\varrho_s)ds-\int_0^t\psi_1(X_s)ds.
$$
Hence $(X,\phi^{(1)})$ is a solution.
\end{proof}
With analogous arguments, one can show that there exists a unique strong solution for the $Y$ process in the SVI system \eqref{eqn:holder}; the proof is omitted.
\subsection{Asymptotic analysis}
\label{sec:holder_asymptotic}
In this section, we perform an asymptotic analysis of the perturbed one-dimensional SVI system \eqref{eqn:holder_perturbed_stystem} with H\"older continuous coefficients described in Condition \ref{1dX_asymptotic}, with respect to its limiting system \eqref{eqn:holder} satisfying Condition \ref{1dX}.
The perturbed version of the one-dimensional SVI system \eqref{eqn:holder} with a small positive parameter $\varepsilon$ is given by
\begin{equation}
\label{eqn:holder_perturbed_stystem}
\left\{\begin{array}{lll}
X_t^{\varepsilon}\in & x_0+\int_0^tb^{\varepsilon}(s,X_s^{\varepsilon},\varepsilon)ds+\int_0^t\sigma_1^{\varepsilon}(s,X_s^{\varepsilon},\varepsilon)dW_s+\int_0^t\sigma_2^{\varepsilon}(s, X_s^{\varepsilon}, \varepsilon) dB_s\\
&-\int_0^t\partial\psi_1(X^{\varepsilon}_s)ds,\\
\\
Y_t^{\varepsilon}\in & y_0+\int_0^t \alpha(s,X_s^{\varepsilon},Y_s^{\varepsilon}, q_s)ds+\int_0^t\beta(s,X_s^{\varepsilon},Y_s^{\varepsilon}, q_s)dB_s\\
&-\int_0^t\partial\psi_2(Y^{\varepsilon}_s)ds,
\end{array}\right.
\end{equation}
where
\begin{equation}
\label{eqn:barX_functions}
\begin{split}
\lim_{\varepsilon \rightarrow 0} b^{\varepsilon}(t, x, \varepsilon)=b(t, x), \quad \lim_{\varepsilon \rightarrow 0} \sigma_i^{\varepsilon}(t, x, \varepsilon)=\sigma_i(t, x), \quad i=1, 2.
\end{split}
\end{equation}
\begin{condition}\label{1dX_asymptotic} Assume that $b^{\varepsilon}(t,x,\varepsilon), ~\sigma_1^{\varepsilon}(t,x,\varepsilon), ~\sigma_2^{\varepsilon}(t,x,\varepsilon)$ are continuous in $(t,x)$, uniformly in $\varepsilon$, and
\begin{align}
&\left(b^{\varepsilon}(t, x, \varepsilon)-b^{\varepsilon}(t, x', \varepsilon)\right)(x-x')\leq 0,\nonumber\\
&\left(b^{\varepsilon}(t, x, \varepsilon)-b^{\varepsilon}(t, x', \varepsilon)\right)^2\leq l_0(t) |x-x'|^{1+2\alpha}, \quad&\text{for some } \alpha\in[0,1/2],\nonumber\\
&\left(\sigma_i^{\varepsilon}(t, x, \varepsilon)-\sigma_i^{\varepsilon}(t, x', \varepsilon)\right)^2\leq l_i(t)|x-x'|^{1+2\alpha}, \quad &i=1, 2,\nonumber\\
& \psi_1\geq\psi_1(0)=0, \quad & 0\in\mathrm{Int}(D_1),\nonumber
\end{align}
where $l_i(t)$ for $i=0,1,2$ are functions of $t$ only and satisfy $l_i(\cdot)\in L^1([0,T])$.
\end{condition}
With arguments similar to those of Proposition \ref{1dproperty1}, one can obtain the following proposition.
\begin{proposition}
\label{prop:sup_2ndmoment}
Under Conditions \ref{1dX} and \ref{1dX_asymptotic}, one has
\begin{equation}
\mathbb{E} \sup_{t\in [0,T]}|X_t|^2<\infty\quad\text{and}\quad \sup_{\varepsilon}\mathbb{E} \sup_{t\in [0,T]}|X_t^{\varepsilon}|^2<\infty.
\end{equation}
\end{proposition}
In the following, we give the convergence result regarding the $X_t^{\varepsilon}$ process as $\varepsilon$ goes to $0$.
\begin{proposition}
Under Conditions \ref{1dX} and \ref{1dX_asymptotic}, as $\varepsilon \rightarrow 0$, we have
\begin{equation}
\begin{split}
\mathbb{E}\sup_{t\in [0,T]}|X_t^{\varepsilon}-X_t|\rightarrow 0.
\end{split}
\end{equation}
\end{proposition}
\begin{proof}
By equation \eqref{eqn:delta_h_inequality} and It\^o's formula, one has
\begin{align}
|X_t^{\varepsilon}-X_t|\leq & g_{\delta,h}(|X_t^{\varepsilon}-X_t|)+h\nonumber\\
=&\int_0^t {g}_{\delta,h}'(|X_s^{\varepsilon}-X_s|)\frac{X_s^{\varepsilon}-X_s}{|X_s^{\varepsilon}-X_s|}\big[b^{\varepsilon}(s, X_s^{\varepsilon}, \varepsilon)-b(s, X_s)\big]ds\nonumber\\
&+\frac12\int_0^t {g}_{\delta,h}''(|X_s^{\varepsilon}-X_s|)\sum_{i=1}^2\big[\sigma_i^{\varepsilon}(s, X_s^{\varepsilon}, \varepsilon)-\sigma_i(s, X_s)\big]^2ds\nonumber\\
&+\int_0^t {g}_{\delta,h}'(|X_s^{\varepsilon}-X_s|)\frac{X_s^{\varepsilon}-X_s}{|X_s^{\varepsilon}-X_s|}\big[\sigma_1^{\varepsilon}(s, X_s^{\varepsilon}, \varepsilon)-\sigma_1(s, X_s)\big]dW_s\label{eqn:Xcvg_expansion}\\
&+\int_0^t {g}_{\delta,h}'(|X_s^{\varepsilon}-X_s|)\frac{X_s^{\varepsilon}-X_s}{|X_s^{\varepsilon}-X_s|}\big[\sigma_2^{\varepsilon}(s, X_s^{\varepsilon}, \varepsilon)-\sigma_2(s, X_s)\big]dB_s+h.\nonumber
\end{align}
Note that by Condition \ref{1dX_asymptotic} and that $g'_{\delta,h}\in[0,1]$,
\begin{equation}
\begin{split}
&\int_0^t {g}_{\delta,h}'(|X_s^{\varepsilon}-X_s|)\frac{X_s^{\varepsilon}-X_s}{|X_s^{\varepsilon}-X_s|}\big[b^{\varepsilon}(s, X_s^{\varepsilon}, \varepsilon)-b(s, X_s)\big]ds\\
= & \int_0^t {g}_{\delta,h}'(|X_s^{\varepsilon}-X_s|)\frac{X_s^{\varepsilon}-X_s}{|X_s^{\varepsilon}-X_s|}\big[b^{\varepsilon}(s, X_s^{\varepsilon}, \varepsilon)-b^{\varepsilon}(s, X_s, \varepsilon)+b^{\varepsilon}(s, X_s, \varepsilon)
-b(s, X_s)\big]ds\\
\leq & \int_0^t {g}_{\delta,h}'(|X_s^{\varepsilon}-X_s|)\,|b^{\varepsilon}(s, X_s, \varepsilon)-b(s, X_s)|ds\\
\leq & \int_0^t |b^{\varepsilon}(s, X_s, \varepsilon)
-b(s, X_s)|ds,
\end{split}\end{equation}
and by Condition \ref{1dX} and Proposition \ref{1dproperty1} one has
\begin{align*}
&\sup_{\varepsilon}\mathbb{E}\left(\int_0^T |b^{\varepsilon}(s, X_s^{\varepsilon}, \varepsilon)-b(s,X_s)|ds\right)^2\\
\leq &C \sup_{\varepsilon}\mathbb{E}\int_0^T |b^{\varepsilon}(s, X_s^{\varepsilon}, \varepsilon)|^2ds+C\sup_{\varepsilon}\mathbb{E}\int_0^T|b(s,X_s)|^2ds\\
\leq& C \sup_{\varepsilon}\mathbb{E}\int_0^T\big(l_0(s)|X_s^{\varepsilon}|^{1+2\alpha}+l_0(s)|X_s|^{1+2\alpha}+|b^{\varepsilon}(s,0,\varepsilon)|^2+|b(s,0)|^2\big)ds\\
<&\infty.
\end{align*}
Hence, by equation \eqref{eqn:barX_functions}, as $\varepsilon\rightarrow 0$,
\begin{align*}
\mathbb{E}\int_0^t {g}_{\delta,h}'(|X_s^{\varepsilon}-X_s|)\frac{X_s^{\varepsilon}-X_s}{|X_s^{\varepsilon}-X_s|}\big[b^{\varepsilon}(s, X_s^{\varepsilon}, \varepsilon)-b(s, X_s)\big]ds\rightarrow 0.
\end{align*}
Similarly, by Propositions \ref{1dproperty1} and \ref{prop:sup_2ndmoment}, as well as the regularity conditions for $\sigma_i^{\varepsilon}$ and $\sigma_i$ respectively, one has
\begin{align*}
&\sup_{\varepsilon}\mathbb{E}\int_0^T\big[\sigma_i^{\varepsilon}(s, X_s, \varepsilon)-\sigma_i(s, X_s)\big]^2ds\\
\leq &C \sup_{\varepsilon}\mathbb{E}\int_0^T\big[\sigma_i^{\varepsilon}(s, X_s, \varepsilon)-\sigma_i^{\varepsilon}(s, 0, \varepsilon)]^2ds+C\sup_{\varepsilon}\mathbb{E}\int_0^T[\sigma_i^{\varepsilon}(s, 0, \varepsilon)-\sigma_i(s, 0)\big]^2ds\\
&+C\sup_{\varepsilon}\mathbb{E}\int_0^T\big[\sigma_i(s, X_s)-\sigma_i(s, 0)\big]^2ds\\
<&\infty.
\end{align*}
Then, by equation \eqref{eqn:barX_functions}, as $\varepsilon\to0$,
\begin{equation}
\mathbb{E}\int_0^T\big[\sigma_i^{\varepsilon}(s, X_s, \varepsilon)-\sigma_i(s, X_s)\big]^2ds\to0,~~~~i=1,2,
\end{equation}
and for sufficiently small $\varepsilon$ satisfying that
$$
\mathbb{E}\int_0^T\big[\sigma_i^{\varepsilon}(s, X_s, \varepsilon)-\sigma_i(s, X_s)\big]^2ds<\delta h^{1+2\alpha},
$$
by equation \eqref{eqn:delta_h_inequality_2} one has
\begin{align*}
&\mathbb{E}\int_0^t {g}_{\delta,h}''(|X_s^{\varepsilon}-X_s|)\big[\sigma_i^{\varepsilon}(s, X_s^{\varepsilon}, \varepsilon)-\sigma_i(s, X_s)\big]^2ds\\
\leq & C\mathbb{E}\int_0^t \frac{l_i(s)}{\ln \delta^{-1}|X_s^{\varepsilon}-X_s|}|X_s^{\varepsilon}-X_s|^{1+2\alpha}\mathbbm{1}_{\{|X_s^{\varepsilon}-X_s|\in[h\delta,h]\}}ds\\
&+C\mathbb{E}\int_0^t \frac{1}{\ln \delta^{-1}|X_s^{\varepsilon}-X_s|}\big[\sigma_i^{\varepsilon}(s, X_s, \varepsilon)-\sigma_i(s, X_s)\big]^2\mathbbm{1}_{\{|X_s^{\varepsilon}-X_s|\in[h\delta,h]\}}ds\\
\leq & C\mathbb{E}\int_0^t \frac{l_i(s)}{\ln \delta^{-1}}h^{2\alpha}ds+C\mathbb{E}\int_0^t \frac{1}{\delta h \ln \delta^{-1}}\big[\sigma_i^{\varepsilon}(s, X_s, \varepsilon)-\sigma_i(s, X_s)\big]^2ds\\
\leq & \frac{C h^{2\alpha}}{\ln \delta^{-1}}+\frac{C}{\delta h \ln \delta^{-1}}\mathbb{E}\int_0^t\big[\sigma_i^{\varepsilon}(s, X_s, \varepsilon)-\sigma_i(s, X_s)\big]^2ds\\
\leq & \frac{C h^{2\alpha}}{\ln \delta^{-1}}.
\end{align*}
Plugging the above estimates into equation \eqref{eqn:Xcvg_expansion} and taking expectations, one gets, for sufficiently small $\varepsilon$,
$$\mathbb{E}|X_t^{\varepsilon}-X_t|\leq \frac{C h^{2\alpha}}{\ln \delta^{-1}}+h.$$
Taking the supremum over $t\in[0,T]$ and then the expectation in equation \eqref{eqn:Xcvg_expansion}, we obtain
\begin{align*}
&\mathbb{E}\sup_{t\in [0,T]}|X_t^{\varepsilon}-X_t|\\
\leq & \frac{C h^{2\alpha}}{\ln \delta^{-1}}+h+C\sum_{i=1}^2\mathbb{E}\left(\int_0^T \big[\sigma_i^{\varepsilon}(s, X_s^{\varepsilon}, \varepsilon)-\sigma_i(s, X_s)\big]^2ds\right)^{1/2}\\
\leq & \frac{C h^{2\alpha}}{\ln \delta^{-1}}+h+C\sum_{i=1}^2\mathbb{E}\left(\int_0^T l_i(s) |X_s^{\varepsilon}- X_s|^{1+2\alpha}ds\right)^{1/2}\\
&+C\sum_{i=1}^2\mathbb{E}\left(\int_0^T \big[\sigma_i^{\varepsilon}(s, X_s, \varepsilon)-\sigma_i(s, X_s)\big]^2ds\right)^{1/2}\\
\leq & \frac{C h^{2\alpha}}{\ln \delta^{-1}}+h+\frac{1}{2}\mathbb{E}\sup_{t\in [0,T]}|X_t^{\varepsilon}-X_t|+C\sum_{i=1}^2\mathbb{E}\int_0^T l_i(s) |X_s^{\varepsilon}- X_s|^{2\alpha}ds\\
&+C\sum_{i=1}^2\mathbb{E}\left(\int_0^T \big[\sigma_i^{\varepsilon}(s, X_s, \varepsilon)-\sigma_i(s, X_s)\big]^2ds\right)^{1/2},
\end{align*}
where we used the H\"older's inequality in the last equality to reduce the order of $|X_s^{\varepsilon}- X_s|$ on the right hand side. By reorganizing the terms and noticing that $2\alpha<1$, Gr\"onwall's lemma yields
$$\mathbb{E}\sup_{t\in [0,T]}|X_t^{\varepsilon}-X_t|\leq \frac{C h^{2\alpha}}{\ln \delta^{-1}}+h.$$
Taking $h=\delta$, we have
$$\mathbb{E}\sup_{t\in [0,T]}|X_t^{\varepsilon}-X_t|\leq \frac{C \delta^{2\alpha}}{\ln \delta^{-1}}+\delta.$$
Since $\delta$ can be taken arbitrarily small, we conclude the proof as desired.
\end{proof}
With arguments analogous to those of Theorem \ref{thm:Ycvg}, we also have the following convergence result for the $Y$-system, whose proof is omitted.
\begin{theorem} Under Conditions \ref{1dY} and \ref{1dX_asymptotic}, one has
$$
\mathbb E\sup_{t\leq T}|Y^{\varepsilon}_t-Y_t|^2\to 0, ~~~\mbox{as}~~\varepsilon\to0.
$$
\end{theorem}
\section{Introduction}
Text summarization can facilitate the propagation of information by providing an abridged version of long articles and documents. Meanwhile, globalization has prompted a high demand for information dissemination across language barriers. Thus, the cross-lingual summarization (CLS) task has emerged to provide an accurate gist of articles written in a foreign language.
Traditionally, most CLS methods follow the two-step pipeline approach: either translate the article into the target language and then summarize it \citep{mtsum}, or summarize the article in the source language and then translate it \citep{summt}. Although this approach can leverage off-the-shelf summarization and MT models, it suffers from error accumulation across the two independent subtasks. Therefore, several end-to-end approaches have been proposed recently \citep{ncls,naacl2019,acl2019}, which conduct both translation and summarization simultaneously. Easy to optimize as these methods are, they typically require a large amount of cross-lingual summarization data, which may not be available, especially for low-resource languages. For instance, NCLS \citep{ncls} proposes to co-train on monolingual summarization (MS) and machine translation (MT) tasks, both of which require tremendous labeling efforts.
On the other hand, the pre-training strategy has proved to be very effective for language understanding \citep{bert,gpt} and cross-lingual learning \citep{xlm, aaai2020}. One of the advantages of pre-training is that many associated tasks are self-supervised by nature, which means no labeled data is required. This greatly increases the amount of training data exposed to the model, thereby enhancing its performance on downstream tasks.
Therefore, we leverage large-scale pre-training to improve the quality of cross-lingual summarization. Built upon a transformer-based encoder-decoder architecture \citep{transformer}, our model is pre-trained on both monolingual tasks including masked language model (MLM), denoising autoencoder (DAE) and monolingual summarization (MS), and cross-lingual tasks such as cross-lingual masked language model (CMLM) and machine translation (MT). This mixed-lingual pre-training scheme can take advantage of massive unlabeled monolingual data to improve the model's language modeling capability, and leverage cross-lingual tasks to improve the model's cross-lingual representation. We then finetune the model on the downstream cross-lingual summarization task.
Furthermore, based on a shared multi-lingual vocabulary, our model has a shared encoder-decoder architecture for all pre-training and finetuning tasks, whereas NCLS \citep{ncls} sets aside task-specific decoders for machine translation, monolingual summarization, and cross-lingual summarization.
In the experiments, our model outperforms various baseline systems on the benchmark NCLS dataset \citep{ncls}. For example, our model achieves a 3.27 higher ROUGE-1 score in English to Chinese summarization than the state-of-the-art result and a 1.28 higher ROUGE-1 score in Chinese to English summarization. We further conduct an ablation study to show that each pretraining task contributes to the performance, especially our proposed unsupervised pretraining tasks.
\section{Related Work}
\subsection{Pre-training}
Pre-trained language models \citep{bert, unilm} have been widely used in NLP applications such as question answering \citep{sdnet}, sentiment analysis \citep{elmo}, and summarization \citep{leadbias,yang2020ted}.
In multi-lingual scenarios, recent works take input from multiple languages and show great improvements on cross-lingual classification \citep{xlm, pires2019multilingual, huang2019unicoder} and unsupervised machine translation \citep{liu2020multilingual}.
\citet{artetxe2019massively} employs the sequence encoder from a machine translation model to produce
cross-lingual sentence embeddings. \citet{aaai2020} uses multi-lingual pre-training to improve cross-lingual question generation and zero-shot cross-lingual summarization. Their model trained on articles and summaries in one language is directly used to produce summaries for articles in another language, which is different from our task of producing summaries of one language for an article from a foreign language.
\subsection{Cross-lingual Summarization}
Early literature on cross-lingual summarization focuses on the two-step approach involving machine translation and summarization \citep{mtsum,summt}, which often suffers from error propagation issues due to imperfect modular systems. Recent end-to-end deep learning models have greatly enhanced the performance. \citet{IEEE2018} presents a solution to zero-shot cross-lingual headline generation by using machine translation and summarization datasets. \citet{acl2019} leverages monolingual abstractive summarization to achieve zero-shot cross-lingual abstractive sentence summarization. NCLS \citep{ncls} proposes a cross-lingual summarization system for large-scale datasets for the first time. It uses multi-task supervised learning and shares the encoder for monolingual summarization, cross-lingual summarization, and machine translation. However, each of these tasks requires a separate decoder. In comparison, our model shares the entire encoder-decoder architecture among all pre-training and finetuning tasks, and leverages unlabeled data for monolingual masked language model training. A concurrent work by \citet{zhu2020attend} improves the performance by combining the neural model with an external probabilistic bilingual lexicon.
\section{Method}
\begin{CJK*}{UTF8}{gbsn}
\begin{table*}[ht]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{lcccc}
\toprule
\textbf{Objective} & \textbf{Supervised} & \textbf{Multi-lingual} & \textbf{Inputs} & \textbf{Targets} \\ \midrule
Masked Language Model & & & France \textless{}X\textgreater{} Morocco in \textless{}Y\textgreater{} exhibition match. & \textless{}X\textgreater{} beats \textless{}Y\textgreater{} an \\ \midrule
Denoising Auto-Encoder & & & France beats \textless{}M\textgreater{} in \textless{}M\textgreater{} exhibition . & France beats Morocco in an exhibition match. \\ \midrule
Monolingual Summarization & \checkmark & & \begin{tabular}[c]{@{}c@{}}World champion France overcame a stuttering \\ start to beat Morocco 1-0 in a scrappy exhibition \\ match on Wednesday night.\end{tabular} & France beats Morocco in an exhibition match. \\ \midrule
Cross-lingual MLM & \checkmark & \checkmark & \begin{tabular}[c]{@{}c@{}}France \textless{}X\textgreater{} Morocco in \textless{}Y\textgreater{} exhibition match. \\ 法国队在一场表演赛中击败摩洛哥队。\end{tabular} & \textless{}X\textgreater{} beats \textless{}Y\textgreater{} an \\ \midrule
Cross-lingual MLM & \checkmark & \checkmark & \begin{tabular}[c]{@{}c@{}}France beats Morocco in an exhibition match. \\ \textless{}X\textgreater{}队在一场表演赛中\textless{}Y\textgreater{}摩洛哥队。\end{tabular} & \textless{}X\textgreater法国 \textless{}Y\textgreater击败 \\ \midrule
Machine Translation & \checkmark & \checkmark & France beats Morocco in an exhibition match. & 法国队在一场表演赛中击败摩洛哥队。 \\ \bottomrule
\end{tabular}%
}
\caption{Examples of inputs and targets used by different objectives for the sentence ``France beats Morocco in an exhibition match'' with its Chinese translation. We use \textless{}X\textgreater{} and \textless{}Y\textgreater{} to denote sentinel tokens and \textless{}M\textgreater{} to denote shared mask tokens.}
\label{tab:obj-examples}
\end{table*}
\end{CJK*}
\subsection{Pre-training Objectives}
We propose a set of multi-task pre-training objectives on both monolingual and cross-lingual corpora. For monolingual corpora, we use the masked language model (MLM) from \citet{t5}. The input is the original sentence masked by sentinel tokens, and the target is the sequence consisting of each sentinel token followed by the corresponding masked tokens. The other monolingual task is the denoising auto-encoder (DAE), where the corrupted input is constructed by randomly dropping, masking, and shuffling the tokens of a sentence, and the target is the original sentence. Since our final task is summarization, we also include monolingual summarization (MS) as a pre-training task.
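For concreteness, the following minimal Python sketch illustrates how MLM and DAE training pairs can be built from a tokenized sentence; the sentinel inventory, the mask and drop rates, and the use of single-token (rather than span) masking are simplifications for illustration only.
\begin{verbatim}
import random

def mlm_example(tokens, mask_prob=0.15):
    # T5-style masking: replace chosen tokens with sentinels <X>, <Y>, ...
    # and emit "<sentinel> original-token" pairs as the decoder target.
    sentinels = ["<X>", "<Y>", "<Z>"]
    inputs, targets, k = [], [], 0
    for tok in tokens:
        if k < len(sentinels) and random.random() < mask_prob:
            inputs.append(sentinels[k])
            targets += [sentinels[k], tok]
            k += 1
        else:
            inputs.append(tok)
    return inputs, targets

def dae_example(tokens, drop_prob=0.1, mask_prob=0.1):
    # DAE corruption: randomly drop or mask tokens (shared <M> symbol),
    # then shuffle; the target is simply the original sentence.
    corrupted = []
    for tok in tokens:
        r = random.random()
        if r < drop_prob:
            continue
        corrupted.append("<M>" if r < drop_prob + mask_prob else tok)
    random.shuffle(corrupted)
    return corrupted, list(tokens)
\end{verbatim}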
To leverage a cross-lingual parallel corpus, we introduce the cross-lingual masked language model (CMLM). CMLM is an extension of MLM to the parallel corpus. The input is the concatenation of a sentence in language A and its translation in language B. We then randomly select one sentence and mask some of its tokens by sentinels. The target is to predict the masked tokens in the same way as MLM. Different from MLM, the masked tokens in CMLM are predicted not only from the context within the same language but also from their translation in the other language, which encourages the model to learn language-invariant representations. Note that CMLM is similar to the Translation Language Model (TLM) loss proposed in \citet{xlm}. The key differences are: 1) TLM randomly masks tokens in sentences from both languages, while CMLM only masks tokens from one language; 2) TLM is applied on encoder-only networks while we employ CMLM on the encoder-decoder network. In addition to CMLM, we also include the standard machine translation (MT) objective, in which the input and output are the unchanged source and target sentences, respectively.
The examples of inputs and targets used by our pre-training objectives are shown in Table \ref{tab:obj-examples}.
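Continuing the sketch above (and reusing its \texttt{mlm\_example} helper), a CMLM training pair can be built from a parallel sentence pair by masking exactly one of the two sides, chosen at random; the concatenation order and the absence of an explicit separator token are illustrative assumptions.
\begin{verbatim}
def cmlm_example(src_tokens, tgt_tokens, mask_prob=0.15):
    # Concatenate a sentence with its translation and mask sentinel
    # positions in only one of the two sides.
    sides = [list(src_tokens), list(tgt_tokens)]
    side = random.choice([0, 1])
    sides[side], targets = mlm_example(sides[side], mask_prob)
    return sides[0] + sides[1], targets
\end{verbatim}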
\subsection{Unified Model for Pre-training and Finetuning}
While NCLS \citep{ncls} uses different decoders for various pre-training objectives, we employ a unified Transformer \citep{transformer} encoder-decoder model for all pre-training and finetuning tasks. This makes our model learn a cross-lingual representation efficiently. A shared dictionary across all languages is used. To accommodate multi-task and multilingual objectives, we introduce language id symbols to indicate the target language, and task symbols to indicate the target task. For instance, for the CMLM objective where the target language is Chinese, the decoder takes $<$cmlm$>$ and $<$zh$>$ as the first two input tokens.
We empirically find that our model does not suffer from the loss of target-language controllability observed in \citet{aaai2020}, which requires manually freezing the encoder or decoder during finetuning. After pretraining, we conduct finetuning on cross-lingual summarization data.
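As a small illustration of how a single decoder can serve all objectives, the sketch below builds the decoder prefix by concatenating a task symbol and a target-language symbol; apart from \texttt{<cmlm>} and \texttt{<zh>}, the symbol names are hypothetical placeholders rather than the exact tokens used in our vocabulary.
\begin{verbatim}
TASK_SYMBOLS = {"mlm": "<mlm>", "dae": "<dae>", "ms": "<ms>",
                "cmlm": "<cmlm>", "mt": "<mt>", "cls": "<cls>"}
LANG_SYMBOLS = {"en": "<en>", "zh": "<zh>"}

def decoder_prefix(task, target_lang):
    # The first token tells the shared decoder which task to perform,
    # the second token tells it which language to generate in.
    return [TASK_SYMBOLS[task], LANG_SYMBOLS[target_lang]]

# e.g. decoder_prefix("cmlm", "zh") -> ["<cmlm>", "<zh>"]
\end{verbatim}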
\section{Experiments}
\begin{table*}[htbp]
\centering
\begin{tabular}{l|lll|lll}
\hline
& \multicolumn{3}{l|}{English$\rightarrow$Chinese} & \multicolumn{3}{l}{Chinese$\rightarrow$English} \\ \hline
& ROUGE-1 & ROUGE-2 & ROUGE-L & ROUGE-1 & ROUGE-2 & ROUGE-L \\ \hline
TETran & 26.15 & 10.60 & 23.24 & 23.09 & 7.33 & 18.74 \\
GETran & 28.19 & 11.40 & 25.77 & 24.34 & 9.14 & 20.13 \\
TLTran & 30.22 & 12.20 & 27.04 & 33.92 & 15.81 & 29.86 \\
GLTran & 32.17 & 13.85 & 29.43 & 35.45 & 16.86 & 31.28 \\
NCLS & 36.82 & 18.72 & 33.20 & 38.85 & 21.93 & 35.05 \\
NCLS-MS & 38.25 & 20.20 & 34.76 & 40.34 & 22.65 & 36.39 \\
NCLS-MT & 40.23 & 22.32 & 36.59 & 40.25 & 22.58 & 36.21 \\
XNLG & 39.85 & 24.47 & 28.28 & 38.34 & 19.65 & 33.66 \\
ATS & 40.68 & 24.12 & \textbf{36.97} & 40.47 & 22.21 & 36.89 \\ \hline
Ours & \textbf{43.50} & \textbf{25.41} & 29.66 & \textbf{41.62} & \textbf{23.35} & \textbf{37.26} \\ \hline
\end{tabular}%
\caption{ROUGE-1, ROUGE-2, ROUGE-L for English to Chinese and Chinese to English summarization on NCLS dataset.} \label{table:mainresult}
\end{table*}
\subsection{Dataset}
We conduct our experiments on the NCLS dataset \citep{ncls}, which contains paired data of English articles with Chinese summaries, and Chinese articles with English summaries. The cross-lingual training data is automatically generated by a machine translation model. For finetuning and testing, we follow the same train/valid/test split as the original dataset. We refer readers to Table 1 in \citet{ncls} for detailed statistics of the dataset.
For pre-training, we obtain monolingual data for English and Chinese from the corresponding Wikipedia dump. There are 83 million sentences for English monolingual corpus and 20 million sentences for Chinese corpus. For parallel data between English and Chinese, we use the parallel corpus from \citet{xlm}, which contains 9.6 million paired sentences.
For monolingual summarization objective, we use CNN/DailyMail dataset \citep{dailymail} for English summarization and LCSTS dataset \citep{hu2015lcsts} for Chinese summarization.
\subsection{Implementation Details}
Our Transformer model has 6 layers and 8 attention heads. The input and output dimensions $d_{model}$ for all Transformer blocks are 512 and the inner dimension $d_{ff}$ is 2048.
We use a dropout probability of 0.1 on all layers. We build a shared {S}entence{P}iece \citep{sentencepiece} vocabulary of size $33,000$ from a balanced mix of the monolingual Wikipedia corpus. The model has approximately 61M parameters.
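A joint subword vocabulary of this kind can be trained, for example, with the SentencePiece flags below; the corpus file name and the character-coverage value are placeholders rather than the exact settings used here.
\begin{verbatim}
import sentencepiece as spm

# Train one shared English-Chinese subword model on a balanced mix of
# the two Wikipedia corpora written to a single text file.
spm.SentencePieceTrainer.Train(
    "--input=mixed_en_zh_wiki.txt --model_prefix=mixed_en_zh "
    "--vocab_size=33000 --character_coverage=0.9995"
)

sp = spm.SentencePieceProcessor()
sp.Load("mixed_en_zh.model")
print(sp.EncodeAsPieces("France beats Morocco in an exhibition match."))
\end{verbatim}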
For MLM we use a mask probability of 0.15. For DAE, we set both the mask and drop out rate to 0.1.
For all pre-training and finetuning we use the RAdam optimizer \cite{radam} with $\beta_1=0.9$, $\beta_2=0.999$. The initial learning rate is set to $10^{-9}$ for pre-training and $10^{-4}$ for finetuning. The learning rate is linearly increased to $0.001$ over $16,000$ warmup steps, followed by an exponential decay.
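The schedule can be written as the small helper below; the exponential-decay rate and decay interval are not specified above and are illustrative guesses only.
\begin{verbatim}
def learning_rate(step, peak_lr=1e-3, warmup_steps=16000,
                  init_lr=1e-9, decay_rate=0.99, decay_every=10000):
    # Linear warmup from init_lr to peak_lr, then exponential decay.
    if step < warmup_steps:
        return init_lr + (peak_lr - init_lr) * step / warmup_steps
    return peak_lr * decay_rate ** ((step - warmup_steps) / decay_every)
\end{verbatim}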
For decoding, we use a beam size of 6 and a maximum generation length of 200 tokens for all experiments.
\begin{table}[htbp]
\centering
\resizebox{0.47\textwidth}{!}{%
\begin{tabular}{@{}l|lll@{}}
\thickhline
& \multicolumn{3}{c}{English$\rightarrow$Chinese} \\ \hline
& ROUGE-1 & ROUGE-2 & ROUGE-L \\ \hline
Ours & 43.50 & 25.41 & 29.66 \\ \hline
- MS & 42.48 & 24.45 & 28.49 \\
- MT & 42.12 & 23.97 & 28.74 \\
- MLM, DAE & 41.82 & 23.85 & 28.40 \\
- All Pretraining & 41.12 & 23.67 & 28.53 \\ \thickhline
\end{tabular}%
}
\caption{Finetuning performance on English$\rightarrow$Chinese summarization starting with various ablated pre-trained models.
}
\label{tab:ablation_study}
\end{table}
\subsection{Baselines}
We first include a set of pipeline methods from \citet{ncls} that combine monolingual summarization and machine translation.
\textbf{TETran} first translates the source document and then uses LexRank \citep{lexrank} to summarize the translated document.
\textbf{TLTran} first summarizes the source document and then translates the summary.
\textbf{GETran} and \textbf{GLTran} replace the translation model in TETran and TLTran with Google Translator\footnote{https://translate.google.com/} respectively.
We also include three strong baselines from \citet{ncls}: \textbf{NCLS}, \textbf{NCLS-MS} and \textbf{NCLS-MT}. NCLS trains a standard Transformer model on the cross-lingual summarization dataset. NCLS-MS and NCLS-MT both use one encoder and multiple decoders for multi-task scenarios. NCLS-MS combines the cross-lingual summarization task with monolingual summarization while NCLS-MT combines it with machine translation.
We finetune the \textbf{XNLG} model from \citet{aaai2020} on the same cross-lingual summarization data. We finetune all layers of XNLG in the same way as our pretrained model.
Finally, we include the result of \textbf{ATS} from the concurrent work of \citet{zhu2020attend}.
\begin{figure*}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/en2zh_v4.pdf}
\caption{English$\rightarrow$Chinese ROUGE-1}
\label{fig:low-resource-sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/zh2en_v4.pdf}
\caption{Chinese$\rightarrow$English ROUGE-1}
\label{fig:low-resource-sub2}
\end{subfigure}
\caption{ROUGE-1 performance on the NCLS dataset when the cross-lingual summarization training data is sub-sampled to sizes of 1K and 10K. The result on the full dataset is also shown.}
\label{fig:low-resource}
\end{figure*}
\subsection{Results}
Table \ref{table:mainresult} shows the ROUGE
scores of the generated summaries for English-to-Chinese and Chinese-to-English summarization. As shown, the pipeline models, although incorporating state-of-the-art machine translation systems, achieve sub-optimal performance in both directions, demonstrating the advantage of end-to-end models.
Our model outperforms all baseline models on all metrics except for ROUGE-L in English-to-Chinese. For instance, our model achieves a 2.82-point higher ROUGE-1 score in English-to-Chinese summarization than the previous best result and a 1.15-point higher ROUGE-1 score in Chinese-to-English summarization, which shows the effectiveness of utilizing multilingual and multi-task data to improve cross-lingual summarization.
\subsection{Ablation Study}
Table \ref{tab:ablation_study} shows the ablation study of our model on English to Chinese summarization. We remove from the pre-training objectives i) all monolingual unsupervised tasks (MLM, DAE), ii) machine translation (MT), iii) monolingual summarization (MS), and iv) all the objectives. Note that "- All Pretraining" and NCLS both only train on the cross-lingual summarization data. The performance difference between the two is most likely due to the difference in model size, vocabulary, and other hyper-parameters.
As shown, the pre-training improves ROUGE-1, ROUGE-2, and ROUGE-L by 2.38, 1.74, and 1.13 points, respectively, on English-to-Chinese summarization. Moreover, all pre-training objectives contribute to the results to varying degrees, and the monolingual unsupervised objectives (MLM and DAE) are relatively the most important. This verifies the effectiveness of leveraging unsupervised data in the pre-training.
\textbf{Low-resource scenario.} We sample subsets of size 1K and 10K from the training data of cross-lingual summarization and finetune our pre-trained model on those subsets. Figure~\ref{fig:low-resource} shows the performance of the pre-trained model and the model trained from scratch on the same subsets. As shown, the gain from pre-training is larger when the size of the training data is relatively small. This demonstrates the effectiveness of our approach for low-resource languages in cross-lingual summarization.
\section{Conclusion}
We present a mix-lingual pre-training model for cross-lingual summarization. We optimize a shared encoder-decoder architecture for multi-lingual and multi-task objectives. Experiments on a benchmark dataset show that our model outperforms pipeline-based and other end-to-end baselines. Through an ablation study, we show that all pretraining objectives contribute to the model's performance.
\section{Introduction} \label{Chap4SecIntro}
Measles is a vaccine-preventable, highly contagious respiratory infection that affects mostly young children \citep{who_measles}. Accelerated immunization activities in recent decades have had a major impact on reducing global measles deaths \citep{patel2019progress}, but the disease burden is still high in many low- and middle-income countries (LMICs), where vaccination remains a challenge due to poor health care infrastructure and access \citep{who_measles1}. In high-burden settings, a key strategy for increasing measles-containing vaccine (MCV) coverage is to conduct supplementary immunization activities (SIAs) in the form of vaccination campaigns, in addition to delivering scheduled vaccination through routine immunization (RI) programs \citep{who_measles1, mri}. During these campaigns, health workers run fixed-post vaccination sites and provide MCV to all children in a pre-specified target age group, regardless of whether they have been vaccinated previously. The aim is to immunize children missed by RI and reduce the susceptible population.
A common metric used for evaluating SIA effectiveness is the \textit{administrative campaign coverage}. It is calculated as the ratio of the number of MCV doses administered during a campaign to the size of the campaign's target population. While SIAs usually have high reported campaign coverage, it is unclear how many people are effectively removed from the susceptible population \citep{mbabazi2009achieving}. Another popular indicator of SIA effectiveness is the \textit{SIA coverage among MCV zero-dose children}, which is defined as the proportion of children in a specific age group with no history of receipt of MCV before the SIA who received a dose of MCV during the SIA. \citet{portnoy2018impact} analyzed Demographic and Health Survey (DHS) data from several LMICs to investigate how many children who had not previously received a measles vaccine dose were reached by SIAs. However, their methods require surveys conducted within 2 years of SIA implementation that include a measles SIA indicator variable, which left only 14 of 111 countries with SIAs and DHS surveys from 2000 onward eligible for the analysis. More recently, \citet{utazi2020geospatial} analyzed the Nigeria 2017--18 post-campaign coverage survey (PCCS) data to estimate the \textit{SIA coverage among MCV zero-dose children} of the campaign. However, high-quality PCCS are rare in most LMICs, and the resultant estimates cannot be easily applied to plan future campaigns.
An attractive measure of SIA effectiveness is the \textit{SIA efficacy}, defined as the fraction of the susceptible population effectively removed via vaccination after a measles SIA. However, estimation of \textit{SIA efficacy} has been a programmatic challenge due to difficulties in estimating the underlying susceptible population. \citet{thakkar2019decreasing} proposed a linear regression approach that incorporates \textit{SIA efficacy} to analyze measles incidence data in Pakistan to optimize campaign timing. While their method demonstrates effective use of \textit{SIA efficacy} in campaign planning, it does not properly account for the uncertainties due to the under-reporting of disease incidence. Specifically, their approach assumes a constant reporting ratio at every time point and ignores any binomial variation in the observed incidence data. The underlying true incidence time series are calculated by simply dividing the observed incidence by a single point estimate of the reporting ratio. As such, the method tends to under-estimate the uncertainties associated with the model parameters and predicted incidence.
In this paper, we extend the time-series susceptible-infected-recovered (TSIR) framework \citep{finkenstadt2000time} and propose a discrete-time hidden Markov model for estimating SIA efficacy using a hybrid of ordinary least squares (OLS) regression with robust standard errors \citep{huber1967behavior, white1980heteroskedasticity} and Markov chain Monte Carlo (MCMC) procedures. The TSIR model was first introduced by \citet{finkenstadt2000time} to provide an approximation for measles time series in large communities where the disease is endemic. It was then extended in a series of subsequent papers \citep{bjornstad2002dynamics, grenfell2002dynamics, glass2003interpreting, morton2005discrete} and has been used to understand measles and rubella transmission in a variety of settings \citep{ferrari2008dynamics, metcalf2011epidemiology, mahmud2017comparative, metcalf2013implications}. A recent review by \citet{wakefield2019spatio} details the theoretical motivation of the framework and compares it with other popular epidemic modelling approaches such as the Epidemic/Endemic (EE) models introduced by \citet{held2005statistical}. More details regarding the TSIR framework are provided in Web Appendix A in the Supporting Information. Our approach extends the existing framework by adding a model component to capture the impact of SIAs on the susceptible population. It also accounts for under-reporting and its associated uncertainty via a two-stage estimation procedure with uncertainty propagation. In addition, our model allows for seasonality in measles transmission and accommodates monthly reported incidence data that are publicly available from the WHO Measles and Rubella Surveillance Database for most member countries \citep{who_measles3}. The proposed model can be used to estimate the underlying susceptible population dynamics, assess how many susceptible people were immunized by past SIAs, and forecast incidence trends in the future under various hypothetical SIA scenarios.
This paper is structured as follows. In Section \ref{Chap4SecMethod}, we describe the model and the inference details. A simulation study in Section \ref{Chap4SecSim} considers various levels of under-reporting, where the observed incidence time series are obtained from extremely low to high reporting rates. In Section \ref{Chap4SecBenin}, we illustrate our model by using it to analyze the reported measles incidence in Benin from 2012 to 2018. Section \ref{Chap4SecDisc} contains concluding remarks.
\section{Methods} \label{Chap4SecMethod}
\subsection{Model specification} \label{Chap4SecMod}
In general, disease incidence (i.e., the number of new infections in a time period) is commonly modeled at time steps equal to the average generation time of the disease, which is about 14 days for measles \citep{who_measles2}. To better accommodate the monthly reported incidence data, we model the underlying true measles incidence in semi-monthly intervals. We let $m$ and $t$ index the monthly and the semi-monthly time points, so that $t = 2m-1$ and $t = 2m$ represent the first and second semi-monthly time points in month $m$. In addition, we use $C$ and $I$ to denote the reported and the underlying true measles incidence counts respectively. We account for under-reporting by letting the monthly reported (observed) incidence count $C_m$ follow a Binomial distribution with a sample size equal to the sum of the true semi-monthly incidence in that month, $I_{2m-1} + I_{2m}$, and a constant reporting probability $\rho$,
\begin{gather}
C_{m} | I_{2m-1}, I_{2m} \sim \mbox{Bin}\left(I_{2m-1}+ I_{2m}, \rho \right). \label{Chap4Eqn1}
\end{gather}
For the underlying (unobserved) true incidence, we let the number of new infections in the current semi-month $I_t$ follow a negative binomial distribution with a mean that depends on the incidence $I_{t-1}$, the susceptibles $S_{t-1}$ and the known total population $N_{t-1}$ from the previous semi-month,
\begin{gather}
I_{t} | I_{t-1}, S_{t-1} \sim \mbox{NegBin}(\lambda_{t}, \phi) \quad \mbox{for } t \ge 2, \label{Chap4Eqn1.25}\\ \label{Chap4Eqn1.5}
\lambda_{t} = \exp(\beta^{\mbox{\tiny{AR}}}_t) \frac{I_{t-1}^{\alpha} S_{t-1}}{N_{t-1}} + \exp(\beta^{\mbox{\tiny{EN}}}) N_{t-1}.
\end{gather}
We assume a constant dispersion parameter $\phi$ and endemic parameter $\beta^{\mbox{\tiny{EN}}}$ over time. The variance of $I_{t}$ is equal to $\lambda_{t} \left(1 + \frac{\lambda_{t}}{\phi} \right)$, so that smaller value of $\phi$ means more overdispersion. The mixing parameter $\alpha$ is set to be 0.975, which has been justified as accounting for the discrete-time approximation to the continuous time model \citep{glass2003interpreting}. As for the auto-regressive parameter $\beta^{\mbox{\tiny{AR}}}_t$, we use the following formulation to capture the yearly seasonality of measles transmission,
\begin{gather}
\beta^{\mbox{\tiny{AR}}}_t = \gamma_1 + \gamma_2 t + \gamma_3 \mbox{sin} \left(\frac{2 \pi }{\omega} t \right) + \gamma_4 \mbox{cos} \left(\frac{2 \pi}{\omega} t \right),
\end{gather}
where $\omega = 24$, for semi-monthly incidence time series.
For the underlying dynamic of the susceptibles, we propose a balancing equation that incorporates the information from past SIA campaigns. Specifically, we model the number of susceptibles in the first semi-month $S_1$ as the product of the proportion of susceptibles in the total population $\theta$ (to be estimated) and the known size of the total population at that time point $N_1$. For semi-month $t \ge 2$, we let the number of susceptibles $S_t$ be equal to the number of susceptibles in the previous semi-month $S_{t-1}$, plus the known number of births entering the susceptible pool $B_{t}$, minus the number of new infections $I_{t}$, and finally minus the number of susceptibles immunized in the most recent SIA campaign $S^*_t$,
\begin{align}
S_1 &= \theta \times N_1, \\
S_{t} &= S_{t-1} + B_{t} - I_{t} - S^*_{t}\quad \mbox{for } t \ge 2. \label{Chap4Eqn2}
\end{align}
The number of births entering the susceptible pool $B_{t}$ can be calculated as the number of 9-month-old children who are losing maternal immunity and are not effectively vaccinated via RI. We call this quantity the \textit{adjusted births}. Let $L_t$ and $R_t$ be the number of 9-month-old children in the population and the RI-specific coverage of the first dose of measles-containing-vaccine (MCV1) among the 9-month-olds at semi-month $t$. The adjusted births $B_{t}$ is obtained as
\begin{gather}
B_t = L_t \times (1 - R_t \times 0.87), \label{Chap4Eqn3}
\end{gather}
assuming the efficacy of MCV1 in LMIC settings is 87\% \citep{who_measles1}.
We calculate $S^*_t$, the number of susceptibles immunized in semi-month $t$ in the most recent SIA campaign, as
\begin{gather}
S^*_{t} = p \times S_{k(t)} \times \delta_t \label{Chap4Eqn3.5}
\end{gather}
where $p$ denotes the overall SIA efficacy parameter, representing the fraction of susceptible population immunized after a complete SIA campaign, $S_{k(t)}$ denotes the number of susceptibles right before the most recent campaign started, and $\delta_t$ denotes the fraction of the total target population to be covered in semi-month $t$ in the most recent campaign. For any semi-month $t$ in which no SIA activity is carried out, we set $\delta_t = 0$; otherwise, an estimate of $\delta_t$ can be obtained using the target population information from the SIA calendar on the World Health Organization (WHO) website \citep{who_measles3}. This formulation of $S^*_t$ allows for flexible modeling of susceptible dynamics when an SIA campaign is implemented in several phases with part of the total target population covered in each phase (e.g., by geographic regions).
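To make the generative model concrete, the following sketch (Python/NumPy; all names are ours, the bookkeeping of $S_{k(t)}$ is simplified to resetting at the start of each campaign block, and negative susceptible counts are simply clamped to zero) forward-simulates Equations (\ref{Chap4Eqn1.25})--(\ref{Chap4Eqn3.5}) together with the observation model of Equation (\ref{Chap4Eqn1}):
\begin{verbatim}
import numpy as np

def simulate_tsir_sia(I0, S0, N, B, delta, p_sia, gam, beta_en, phi, rho,
                      alpha=0.975, omega=24, seed=0):
    # N, B, delta: length-T arrays (population, adjusted births, fraction of
    # the SIA target covered at each semi-month); gam = (gamma_1,...,gamma_4)
    rng = np.random.default_rng(seed)
    T = len(N)
    I = np.zeros(T, dtype=int)
    S = np.zeros(T, dtype=int)
    I[0], S[0] = I0, S0
    S_pre_sia = S0                        # susceptibles before the campaign
    for t in range(1, T):
        b_ar = (gam[0] + gam[1] * t
                + gam[2] * np.sin(2 * np.pi * t / omega)
                + gam[3] * np.cos(2 * np.pi * t / omega))
        lam = (np.exp(b_ar) * I[t-1] ** alpha * S[t-1] / N[t-1]
               + np.exp(beta_en) * N[t-1])
        # negative binomial with mean lam and dispersion phi
        I[t] = rng.negative_binomial(phi, phi / (phi + lam))
        if delta[t] > 0 and delta[t-1] == 0:   # a new campaign block starts
            S_pre_sia = S[t-1]
        S_star = p_sia * S_pre_sia * delta[t]
        S[t] = max(int(S[t-1] + B[t] - I[t] - round(S_star)), 0)
    # monthly observed counts via binomial thinning (reporting rate rho)
    n_months = T // 2
    I_month = I[:2 * n_months:2] + I[1:2 * n_months:2]
    C = rng.binomial(I_month, rho)
    return I, S, C
\end{verbatim}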
\subsection{Computation} \label{Chap4SecComp}
\citet{morton2005discrete} described a Markov chain Monte Carlo (MCMC) algorithm for making inference about parameters in a TSIR model, but acknowledged that this approach can perform poorly in estimating the reporting rate parameter $\rho$. In preliminary work we also encounter serious unidentifibility issues when we estimate $\rho$ jointly with all other model parameters. Therefore, we propose a two-stage hybrid approach that estimates the reporting rate parameter separately from the rest of the model parameters. Specifically, we estimate the reporting rate parameter in the first stage via an ordinary least squares (OLS) regression model with robust standard errors. The results are then used to estimate the rest of the model parameters in the second stage via MCMC. We now describe each stage in detail.
\subsubsection{Stage 1: reporting rate estimation via OLS regression with robust standard errors} \label{Chap4SecCompStage1}
The reporting rate estimation step is inspired by the susceptible reconstruction method developed by \citet{finkenstadt2000time}. We consider a time period during which \textbf{no SIA is implemented}. Let the first month of this time period be $m = 1$ and the last month be $m = M$. Let $C_m$, $I_m$, $S_m$ and $B_m$ be the reported incidence, the underlying true incidence, the susceptible population and the adjusted births entering the susceptible pool in month $m$. Given a realization of the underlying monthly true incidence $\{I_m\}$ and the observed monthly reported incidence $\{C_m\}$, for $m = 1, \dots, M$, we let $\rho_m$ be the reporting ratio
\begin{gather*}
\rho_m = \frac{C_m}{I_m} \quad \text{so that} \quad I_m = \frac{1}{\rho_m} C_m = \kappa_m C_m
\end{gather*}
so that $\kappa_m$ is the inverse of $\rho_m$.
Now, we consider the underlying susceptible dynamics during this time period. Since no SIA is implemented, Equation (\ref{Chap4Eqn2}) implies that the monthly susceptible time series is
\begin{align*}
S_{m} &= S_{m-1} + B_{m} - I_{m} \\
&= S_{m-1} + B_{m} - \kappa_m C_m \\
&= (S_{m-2} + B_{m-1} - \kappa_{m-1} C_{m-1}) + B_{m} - \kappa_m C_m \\
&\hspace{0.25cm}\vdots \\
&= S_{0} + \sum_{i=1}^m B_{i} - \sum_{i=1}^m \kappa_{i} C_{i} \numberthis{}\label{Chap4Eqn4}
\end{align*}
where $S_0$ is the number of susceptibles right before month 1. Assuming the ratio between the reported incidence and the underlying true incidence $\rho_m$ is roughly constant and equal to $\rho$ during the entire time period, that is, $\rho_m \approx \rho$ and $\kappa_m = \frac{1}{\rho_m} \approx \frac{1}{\rho} = \kappa$, we can write Equation (\ref{Chap4Eqn4}) as
\begin{align}
S_{m} &\approx S_{0} + \sum_{i=1}^m B_{i} - \kappa \sum_{i=1}^m C_{i} \label{Chap4Eqn5}
\end{align}
Let $\bar{S}$ be the average of $\{S_m\}$ during the time period $m = 1, \dots, M$. We rewrite Equation (\ref{Chap4Eqn5}) as
\begin{align*}
\sum_{i=1}^m B_{i} &= (S_m - S_0) + \kappa \sum_{i=1}^m C_{i} \\
&= (\bar{S} - S_0) + \kappa \sum_{i=1}^m C_{i} + (S_m - \bar{S}) \numberthis{} \label{Chap4Eqn6}
\end{align*}
Let $Y_m = \sum_{i=1}^m B_{i}$ be the cumulative adjusted births, $X_m = \sum_{i=1}^m C_{i}$ be the cumulative reported incidence, $\beta_0 = \bar{S} - S_0$ be the difference between $S_0$ and $\bar{S}$, and $U_m = S_m - \bar{S}$ be the deviation of $S_m$ from the average $\bar{S}$. Then, we can rewrite Equation (\ref{Chap4Eqn6}) as
\begin{gather}
Y_m = \beta_0 + \kappa X_m + U_m \label{Chap4Eqn7}
\end{gather}
Assuming the deviations/errors $U_m$ follow a zero-mean distribution with variance $\sigma^2_m$, Equation (\ref{Chap4Eqn7}) motivates an OLS regression model with robust standard errors \citep{huber1967behavior, white1980heteroskedasticity} for estimating $\kappa$. We can obtain a point estimate $\hat{\kappa}$ and a robust standard error $\hat{\sigma}_{\kappa}$ for the inverse of the reporting rate $\kappa = \frac{1}{\rho}$ by fitting an OLS regression model with robust standard errors using the cumulative adjusted births $Y_m = \sum_{i=1}^m B_{i}$ as the outcome variable and the cumulative reported incidence $X_m = \sum_{i=1}^m C_{i}$ as the predictor variable. It should be noted that this method shows bias in estimating reporting rate $\rho$ because of the positive dependence between $X_m$ and $U_m$ induced by their relationship through the basic accounting equation. More details regarding this bias can be found in Web Appendix B in the Supporting Information.
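A sketch of this first stage (using the \texttt{statsmodels} package; the variable names are ours, and \texttt{HC1} is one common robust-variance choice) is:
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

def estimate_kappa(C_monthly, B_monthly):
    # Y_m: cumulative adjusted births; X_m: cumulative reported incidence
    Y = np.cumsum(B_monthly)
    X = sm.add_constant(np.cumsum(C_monthly))
    fit = sm.OLS(Y, X).fit(cov_type="HC1")   # robust (sandwich) SEs
    kappa_hat, kappa_se = fit.params[1], fit.bse[1]
    return kappa_hat, kappa_se   # point estimate of rho is 1 / kappa_hat
\end{verbatim}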
\subsubsection{Stage 2: MCMC for Bayesian hierarchical model} \label{Chap4SecCompStage2}
In the second stage, we estimate the other model parameters and the latent variables $I_t$ using an MCMC algorithm that is similar to the one outlined by \citet{morton2005discrete}. Note that the posterior conditional distribution of any $I_t$ involves all future values of $I$ and the other parameters, because $I_t$ influences all $S_j$, $j > t$, from Equation (\ref{Chap4Eqn2}). To propagate the uncertainty associated with the estimation of $\rho$ from the first stage, we plug in a random value for $\rho$ in each MCMC iteration by taking the inverse of a random sample drawn from the distribution $\mbox{N}\left( \hat{\kappa}, \hat{\sigma}^2_{\kappa} \right)$.
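A minimal sketch of this plug-in step (names are ours; the rejection of non-positive draws is an added safeguard not described in the text) is:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2021)

def draw_rho(kappa_hat, kappa_se):
    # one reporting-rate value per MCMC iteration
    kappa = rng.normal(kappa_hat, kappa_se)
    while kappa <= 0:                    # guard against non-physical draws
        kappa = rng.normal(kappa_hat, kappa_se)
    return 1.0 / kappa
\end{verbatim}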
We use the Metropolis-Hastings algorithm to draw posterior samples and adopt the following guidelines:
\begin{enumerate}
\item We use independent proper diffuse priors, such as the Normal, Uniform or Beta distribution, depending on the range of the parameters.
\item The starting values for $I_t$ and $\theta$ must be realizable,
i.e., $C_m \le I_{2m-1}+ I_{2m}$ and $I_{t+1} \le S_t \le N_t$ for each $m$ and $t$. Any proposed move that violates these constraints is rejected.
\item We update each $I_t$ individually using a discrete proposal distribution as outlined by \citet{wakefield2011bayes}. After updating an $I_t$, all subsequent values in the susceptible series $S_t$ are updated as well.
\item In preliminary work we saw that the posterior samples of $\theta$, $\gamma_1$ and $\gamma_2$ tend to be highly correlated. We use the blocked Metropolis sampling technique \citep{wakefield2013bayesian} to update them together for more efficient convergence.
\end{enumerate}
\section{Simulation study} \label{Chap4SecSim}
In this section, we illustrate our method and investigate the effect of uncertainty propagation via a simulation study considering a range of reporting rates, from extremely low to high ($\rho = 0.01, 0.1, 0.3, 0.5 \text{ and }0.7$).
\subsection{Set up}
We simulated semi-monthly population and live-births time series assuming a yearly population growth rate of 2.5\% and a yearly birth rate of 4\%, values that are common among LMICs \citep{UNpop}. To simulate the time series of adjusted births entering the susceptible pool, we assumed an RI-specific MCV1 coverage of 70\% and a first-dose efficacy of 87\% \citep{who_measles1}. We set the starting values of the incidence and susceptible time series based on a simulation using the Epidemiological MODeling (EMOD) software developed by the Institute for Disease Modeling \citep{idm_emod}, and generated the subsequent $I_t$ and $S_t$ values based on the model described in Section \ref{Chap4SecMethod} using the following true parameter values:
\begin{gather*}
\gamma_1 = 3, \quad \gamma_2 = 0, \quad \gamma_3 = 0.2, \quad \gamma_4 = 0.5, \quad \beta^{\mbox{\tiny{EN}}} = -12, \quad \phi = 10
\end{gather*}
We allowed the number of infected and susceptible individuals to stochastically change for one year before collecting eight years of semi-monthly time series data. This resulted in 5.6\% of the total population being susceptible to measles at the first time point, i.e., $\theta = 0.056$. We let there be one SIA with an overall efficacy of $p = 40\%$ implemented in two phases at the beginning of the fourth year, with 50\% of the total target population covered in each phase. We simulated a semi-monthly time series of underlying incidence $I_t$ based on Equations (\ref{Chap4Eqn1.25}) to (\ref{Chap4Eqn3.5}). Then, for each reporting rate, we simulated a monthly observed incidence time series $C_m$ based on Equation (\ref{Chap4Eqn1}).
We used the simulated data from the first six years as the training set for model fitting and the data from the last two years (i.e., the forecast period) for forecast validation. To evaluate the model's ability to estimate the potential effect of a ``planned" SIA campaign, we simulated an additional set of validation data assuming a second SIA with the same overall efficacy of $p = 40\%$ was implemented at the beginning of the forecast period.
The simulated total population and adjusted births entering the susceptible population are shown in Web Figure 1 in the Supporting Information. The simulated time series of the underlying incidence and susceptible population, with and without a ``planned" SIA at the beginning of the forecast period, are shown in Web Figures 2 and 3 in the Supporting Information.
\subsection{Computation and results} \label{Chap4SecSimResult}
In each simulation corresponding to a different reporting probability, we estimated the reporting rate $\rho$ in the first stage using the training data from the time period before the first SIA. Figure \ref{Chap4FigSimPar1} shows the point estimates and 95\% confidence intervals of $\rho$ obtained by fitting OLS regression models with robust standard errors to the cumulative monthly adjusted births and the cumulative monthly reported incidence. The estimates are calculated by drawing 10000 random samples from the distribution $\mbox{N}\left( \hat{\kappa}, \hat{\sigma}^2_{\kappa} \right)$ and taking the inverse of the samples. Under all reporting rate scenarios, the point estimate is higher than the true parameter value. There are two issues that we believe are relevant to the bias in the reporting rate: one is finite-sample bias, and the other is the covariance between the error terms and the explanatory variable. The positive bias we are seeing here is likely due to the relatively short length of the monthly time series data used to fit the OLS (36 monthly time points). We conducted an additional simulation study to investigate the behavior of the OLS estimates of the reporting rate using time series of different shapes and lengths (Web Appendix B) and found that the OLS procedure tends to over-estimate the reporting rate $\rho$ when shorter time series are used to fit the regression model. As the length of the time series increases, the magnitude of over-estimation generally decreases and reaches 0 at some point. As the length increases further, the procedure produces under-estimated results and the magnitude of under-estimation eventually approaches a constant value, which is due to the dependence between the errors and the explanatory variable. More details regarding the bias in the OLS estimation of the reporting rate $\rho$ can be found in Web Appendix B in the Supporting Information.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{chap4_figures/sim_4_random_vs_fixed/rho_est.jpeg}
\caption{The point estimates and 95\% confidence intervals of the reporting rate parameter $\rho$ obtained by fitting OLS regression models with robust standard errors to the cumulative monthly adjusted births and the cumulative monthly reported incidence before the first SIA. The estimates are calculated by drawing 10000 random samples from the distribution $\mbox{N}\left( \hat{\kappa}, \hat{\sigma}^2_{\kappa} \right)$ and taking the inverse of the samples. When $\rho = 0.01$, the point estimate is $0.0102$ and the 95\% confidence interval is $(0.0095, 0.0111)$. When $\rho = 0.1$, the point estimate is $0.107$ and the 95\% confidence interval is $(0.099, 0.117)$.}
\label{Chap4FigSimPar1}
\end{figure}
In the second stage of the estimation, we used all data from the first six years (i.e., the training set) to estimate the other model parameters and the underlying incidence and susceptible dynamics. We drew 20000 posterior samples using the MCMC algorithm and the uncertainty propagation procedure described in Section \ref{Chap4SecCompStage2}, discarded the first 10000 samples as burn-in, and obtained the posterior medians and the 95\% posterior credible intervals (CIs) of parameters. The traceplots of the posterior samples are shown in Web Figures 4 to 8 in the Supporting Information.
To investigate the impact of uncertainty propagation associated with the reporting rate estimation in the first stage, we carried out an additional round of model fitting using the same training data without the uncertainty propagation step. Specifically, instead of using random samples of $\rho$ drawn from the distribution estimated from the first stage, we plugged in the true reporting rate in each MCMC iteration in the second stage. This would allow us to see how much uncertainty in the model estimation and prediction can be attributed to the uncertainty in reporting rate estimation. The traceplots of the posterior samples obtained from this round of computation are shown in Web Figures 9 to 13 in the Supporting Information.
Figure \ref{Chap4FigSimPar2} shows the posterior medians and the 95\% posterior CIs of the model parameters computed with and without uncertainty propagation under various reporting rate scenarios. When we estimate the model parameters with the true $\rho$ values plugged in (blue triangles and lines), most of the resultant point estimates are close to the true parameter values and all 95\% posterior CIs cover the truth. The model tends to overestimate the SIA efficacy parameter $p$. Within each parameter, the uncertainties associated with the estimates are quite similar across the various levels of reporting rate, except for when the reporting rate is extremely low ($\rho = 0.01$), in which case the uncertainties tend to be considerably larger than the other scenarios.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{chap4_figures/sim_4_random_vs_fixed/gamma_1_est.jpeg}
\includegraphics[width=0.4\textwidth]{chap4_figures/sim_4_random_vs_fixed/gamma_2_est.jpeg}
\includegraphics[width=0.4\textwidth]{chap4_figures/sim_4_random_vs_fixed/gamma_3_est.jpeg}
\includegraphics[width=0.4\textwidth]{chap4_figures/sim_4_random_vs_fixed/gamma_4_est.jpeg}
\includegraphics[width=0.4\textwidth]{chap4_figures/sim_4_random_vs_fixed/beta_EN_est.jpeg}
\includegraphics[width=0.4\textwidth]{chap4_figures/sim_4_random_vs_fixed/phi_est.jpeg}
\includegraphics[width=0.4\textwidth]{chap4_figures/sim_4_random_vs_fixed/theta_est.jpeg}
\includegraphics[width=0.4\textwidth]{chap4_figures/sim_4_random_vs_fixed/q_SIA_est.jpeg}
\caption{The posterior medians and the 95\% posterior credible intervals (CIs) of $\gamma_1$, $\gamma_2$, $\gamma_3$, $\gamma_4$, $\beta^{\mbox{\tiny{EN}}}$, $\phi$, $\theta$ and $p$ for when the reporting rate is 0.01, 0.1, 0.3, 0.5 and 0.7 (along the x-axis), computed with (black) and without (blue) uncertainty propagation from the reporting rate estimation in the first stage.}
\label{Chap4FigSimPar2}
\end{figure}
When we estimate the model parameters with the first-stage uncertainty propagated (black triangles and lines), the resultant estimates tend to have wider 95\% posterior CIs than the plugged-in results. The increases in CI widths tend to be larger when the reporting rate is higher. The impact of uncertainty propagation is especially considerable when $\rho = 0.7$, in which case the uncertainty associated with the first-stage estimation of $\rho$ is the greatest compared to the other scenarios. One exception is the estimation of the dispersion parameter $\phi$, for which the uncertainty propagation resulted in considerable decreases in both the point estimates and the posterior CI widths. One possible reason is that some of the variability in the underlying incidence induced by the uncertainty of the $\rho$ estimates was attributed to overdispersion, which led to lower estimated values of the dispersion parameter $\phi$ (corresponding to greater overdispersion).
To predict the underlying incidence and susceptible population dynamics in the forecast period, we used the posterior samples of the model parameters from each MCMC iteration and generated the potential realizations of the underlying dynamics according to Equations (\ref{Chap4Eqn1.5}) through (\ref{Chap4Eqn3.5}). For the simulations where a second SIA was implemented at the beginning of the forecast period, we carried out the prediction assuming the ``planned" SIA has the same (estimated) efficacy of the previous campaign.
Web Figures 14 and 15 in the Supporting Information show the simulated true values, posterior medians and 95\% CIs/predictive intervals, and 200 randomly selected posterior samples of the underlying measles incidence under various reporting rate scenarios, computed with and without uncertainty propagation and/or a ``planned" SIA at the beginning of the forecast period. The corresponding results for the underlying susceptible population dynamics are shown in Web Figures 16 and 17 in the Supporting Information. Under all reporting rate scenarios, the model successfully predicted the general timing and the relative magnitudes of the two measles outbreaks in the forecast period when no SIA is implemented, and captured the effect of a ``planned" SIA if it were implemented at the beginning of the forecast period. However, the model was not able to predict the peaks of the two outbreaks accurately. The results shown in Web Figures 14 and 15 in the Supporting Information also demonstrate the impact of uncertainty propagation in incidence forecasting. Under all reporting rate scenarios, we see that the predictive intervals became wider after the first-stage estimation uncertainty is accounted for in the second stage, and the magnitude of the change increases with the first-stage uncertainty.
Finally, we computed the root mean squared error (RMSE) of the predicted underlying incidence during the forecast period, $\mbox{RMSE}_f$, to assess the model's forecast accuracy:
\begin{gather*}
\mbox{RMSE}_f = \sqrt{ \frac{1}{T_f} \sum_{t=1}^{T_f}\left(\hat{I}_t - I_t \right)^2},
\end{gather*}
where $I_t$ and $\hat{I}_t$ are the ``true" simulated incidence and the posterior median of the predicted incidence at time $t$, and $T_f = 48$ is the total number of semi-monthly time steps in the forecast period.
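A short sketch of this metric (NumPy; names are ours) is:
\begin{verbatim}
import numpy as np

def rmse_forecast(I_true, I_pred_median):
    # I_pred_median: posterior medians of the predicted semi-monthly incidence
    d = np.asarray(I_pred_median, float) - np.asarray(I_true, float)
    return np.sqrt(np.mean(d ** 2))
\end{verbatim}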
Figure \ref{Chap4FigSimRMSE} shows the results computed with (black) and without (blue) uncertainty propagation from the reporting rate estimation in the first stage. When the true $\rho$ is plugged in in the second-stage estimation, the $\mbox{RMSE}_f$ is the highest when the reporting rate is extremely low ($\rho = 0.01$). As the reporting rate increased, the forecast accuracy measured by the $\mbox{RMSE}_f$ first improved sharply, and then remained relatively stable. The impact of uncertainty propagation on $\mbox{RMSE}_f$ is generally small across the reporting rate scenarios, except for when the reporting rate is high ($\rho = 0.7$) and the first-stage uncertainty and bias are large. In this case, the $\mbox{RMSE}_f$ value computed with uncertainty propagation is much higher than the plugged-in value.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{chap4_figures/sim_4_random_vs_fixed/RMSE_med.jpeg}
\caption{The root mean squared error of the underlying incidence in the forecast period, $\mbox{RMSE}_f$, calculated using the posterior medians of the semi-monthly incidence estimated with (black) and without (blue) uncertainty propagation from the reporting rate estimation in the first stage. }
\label{Chap4FigSimRMSE}
\end{figure}
\section{Analysis of the measles incidence data in Benin} \label{Chap4SecBenin}
In this section, we illustrate our method by analyzing the monthly reported measles incidence in Benin between 2012 and 2018. Specifically, we fit our model using data collected between January 2012 and December 2016, estimate the efficacy of the one SIA campaign carried out in this time period, and forecast the reported incidence between January 2017 and December 2018 for model validation.
\subsection{Data}
We obtained the yearly national population estimates and crude birth rates of Benin from 2011 to 2018 from the World Bank database \citep{WorldBank}. The estimates were linearly interpolated over time to calculate the semi-monthly total population and live-births time series on the national level. To obtain the adjusted births entering the susceptible pool, we first calculated the department-level (administrative level 1 in Benin) live births using a set of proportions derived from the WorldPop estimates \citep{WorldPopBenin}. We used the infant mortality rate estimates from the Institute for Health Metrics and Evaluation (IHME) \citep{IHMEBenin} to approximate the under-9-month mortality rate in each department and calculated the number of 9-month-old children at each time point. Using the space-time smoothing model developed by \citet{dong2020space}, we also obtained the department-level RI-specific MCV1 coverage with data from the Multiple Indicator Cluster Surveys (MICS) conducted in 2006, 2012 and 2017 \citep{MICS} and the Demographic and Health Survey (DHS) conducted in 2014 \citep{DHS}. Finally, we calculated the adjusted births time series for each of the 12 departments in Benin, assuming the efficacy of the first dose of the measles vaccine is 87\% \citep{who_measles1}, based on Equation (\ref{Chap4Eqn3}), and aggregated them to obtain the adjusted births on the national level.
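A sketch of the interpolation step (NumPy; names are ours, and each yearly estimate is simply anchored at the start of its year) is:
\begin{verbatim}
import numpy as np

def to_semimonthly(years, yearly_values, steps_per_year=24):
    # linearly interpolate yearly national estimates onto a semi-monthly grid
    t_year = np.asarray(years, dtype=float)
    t_fine = np.linspace(t_year[0], t_year[-1],
                         (len(t_year) - 1) * steps_per_year + 1)
    return t_fine, np.interp(t_fine, t_year,
                             np.asarray(yearly_values, dtype=float))
\end{verbatim}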
The monthly reported measles incidence and the SIA calendar were downloaded from the WHO database \citep{who_measles3}. There was one national SIA campaign implemented in two phases during the time period of interest: 2.6 million children aged between 9 months to 9 years were first targeted at the beginning of December 2014, then another 0.4 million children of the same age group were targeted at the end of January 2015. The reported incidence time series is shown in Web Figure 18 in the Supporting Information.
\subsection{Computation and results}
For reporting rate estimation, we fitted an OLS regression model with robust standard errors using the cumulative adjusted births and cumulative reported incidence data collected between January 2012 and December 2014. We drew 10000 random samples from the distribution $\mbox{N}\left( \hat{\kappa}, \hat{\sigma}^2_{\kappa} \right)$ and took the inverse of the samples to obtain estimates for the reporting rate parameter $\rho$. The resultant estimates for $\rho$ are extremely low, with a point estimate of 0.0074 and a 95\% confidence interval of (0.0069, 0.0081). This suggested that there was severe under-reporting in Benin's measles surveillance data during this time period.
We carried out the second stage of estimation described in Section \ref{Chap4SecCompStage2} with uncertainty propagation using data collected between January 2012 and December 2016. We drew a total of 20000 posterior samples, discarded the first 10000 samples as burn-in, and obtained the posterior medians and the 95\% posterior CIs of the model parameters and the underlying incidence time series. The traceplots of the posterior samples are shown in Web Figure 12 in the Supporting Information.
Table \ref{Chap4TabBeninSumm} shows the summary results for the parameter estimation. In particular, the estimates for $\theta$ implied that an estimated 2.8\% (95\% CI: 0.5\%, 14.9\%) of the total population was susceptible to measles at the beginning of year 2012. As for SIA efficacy, the point estimate of $p$ suggested that an estimated 49.9\% of the susceptible population were immunized after the SIA implemented in 2014--15. However, the 95\% posterior CI of $p$ is very wide (14.5\%, 85.9\%), indicating that the estimated efficacy was associated with substantial uncertainty. This is expected considering the extremely low reporting rate estimates from the first stage and the relatively short time series used for this analysis.
Finally, we predicted the underlying incidence and susceptible population dynamics in the forecast period by generating the potential realizations of the underlying dynamics using the posterior samples of the model parameters from all MCMC iterations. In Figure \ref{Chap4FigBeninPred}, we compare the posterior medians, 95\% CIs/predictive intervals, and 200 randomly selected posterior samples of the underlying monthly measles incidence, scaled by the estimated reporting rate, to the reported incidence time series in Benin between 2012 and 2018. The model was able to predict the timing of the first two outbreaks and their relative magnitudes in the forecast period, but failed to predict the precise timing or magnitude of the third outbreak. The estimated underlying incidence and susceptible population dynamics are also shown in Figure \ref{Chap4FigBeninPred} for reference.
\begin{table}
\centering
\caption{The posterior medians and the 95\% posterior CIs of model parameters from the Benin analysis. The estimates for $\rho$ were obtained by fitting an OLS regression model with robust standard errors to the cumulative monthly adjusted births and the cumulative monthly reported incidence before December 2014. The estimates are calculated by drawing 10000 random samples from the distribution $\mbox{N}\left( \hat{\kappa}, \hat{\sigma}^2_{\kappa} \right)$ and taking the inverse of the samples. }
\label{Chap4TabBeninSumm}
\begin{tabular}{ccc}
\hline \hline
Parameter & Estimate & 95\% CI \\
\hline
$\rho$ & 0.0074 & (0.0069, 0.0081) \\
$\gamma_1$ & 3.08 & (2.10, 5.34) \\
$\gamma_2$ & -0.005 & (-0.016, 0.003) \\
$\gamma_3$ & 0.173 & (-0.002, 0.376) \\
$\gamma_4$ & 0.248 & (0.080, 0.438) \\
$\beta^{\mbox{\tiny{EN}}}$ & -11.2 & (-12.5, -10.5) \\
$\theta$ & 0.028 & (0.005, 0.149) \\
$p$ & 0.499 & (0.145, 0.859) \\
$\phi$ & 4.89 & (2.48, 9.87) \\
\hline \hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{chap4_figures/ben_analysis_3_random/block_C_pred.jpeg}\\
\includegraphics[width=0.7\textwidth]{chap4_figures/ben_analysis_3_random/block_I_pred.jpeg} \\
\includegraphics[width=0.7\textwidth]{chap4_figures/ben_analysis_3_random/block_S_pred.jpeg} \\
\caption{The posterior medians, the 95\% posterior CIs/predictive intervals and 200 randomly selected posterior samples of (1) Top: the underlying monthly measles incidence scaled by the estimated reporting rate (with the true reported incidence time series); (2) Middle: the underlying semi-monthly true incidence; and (3) Bottom: the underlying semi-monthly susceptible population in Benin between 2012 and 2018.}
\label{Chap4FigBeninPred}
\end{figure}
\section{Discussion} \label{Chap4SecDisc}
In this paper, we developed a discrete-time hidden Markov model under the TSIR framework to estimate SIA efficacy using measles incidence time series data. Our approach accounts for under-reporting and seasonality of measles transmission, and accommodates the monthly reported incidence data that are publicly available from the WHO database. We proposed a two-stage estimation procedure that first estimates the reporting rate through an OLS regression model with robust standard errors and then estimates the rest of the model parameters and the underlying incidence dynamics via an MCMC algorithm with uncertainty propagation. The proposed method can be used to estimate the fraction of susceptible people immunized by past SIAs and to forecast incidence trends in the future under various hypothetical SIA scenarios.
We illustrated our method and investigated the impact of uncertainty propagation on parameter estimation and incidence prediction via a simulation study considering a range of reporting rates. In addition, we also applied our method to analyze the reported measles incidence data in Benin from 2012 to 2018. While both the simulation study and the Benin example demonstrate our model's ability to predict the general timing and relative magnitudes of measles outbreaks in the near future, they also revealed some of the limitations of our approach and the TSIR framework.
As mentioned in Sections \ref{Chap4SecCompStage1} and \ref{Chap4SecSimResult}, the OLS procedure produces biased results in reporting rate estimation. We investigated the sources and properties of the bias in an additional simulation study and presented the results in Web Appendix B of the Supporting Information. We found that the OLS procedure generally over-estimates the reporting rate parameter $\rho$ when short time series are used to fit the regression model. This is likely a result of the small sample size of the data. As the length of the time series increases, the magnitude of over-estimation generally decreases and vanishes once a certain length is reached. However, as the length increases further, the procedure starts to under-estimate $\rho$ and the magnitude of under-estimation eventually approaches a constant value. This negative bias is likely a result of the positive correlation between the cumulative reported incidence and the deviation of the susceptible time series from its mean. Our investigation is the first to uncover and derive the sources of the bias in this popular regression-based method for reporting rate estimation \citep{ferrari2008dynamics, metcalf2011epidemiology, metcalf2013implications, mahmud2017comparative, thakkar2019decreasing}. We are now actively working on a method for correcting this bias.
Another aspect of our model that could be improved is to allow temporal fluctuations in the reporting rate that may be influenced by time-varying factors such as the state of the epidemic and reports in the media. \citet{finkenstadt2000time} described a spline-based method for estimating time-varying reporting rates, but the approach uses an ad-hoc algorithm for choosing the bandwidth and gave poor performance in some of our preliminary work. As \citet{noufaily2019under} has pointed out, under-reporting corrections should be geography- and disease-specific, as well as age- and gender-dependent. We are currently exploring alternative model specifications that can potentially incorporate additional information, such as measures of the availability and reliability of the surveillance system, to detect and account for temporal trends in reporting rates.
In our model, the semi-monthly time series of total population and adjusted births are input data that are treated as the truth --- we consider them to be deterministic quantities with fixed values. A potential extension of our current method is to account for uncertainties in the population and births data and propagate these uncertainties in parameter estimation and incidence prediction. In addition, the balancing equation characterizing the susceptible dynamics currently assumes negligible non-measles deaths among the susceptible people. To relax this assumption, one can potentially add another component to the balancing equation to explicitly account for the people exiting the susceptible pool due to non-measles deaths. The viability and data requirements of this extension require further investigation.
It should be noted that our model was derived based on some key underlying assumptions that the TSIR models require. These include homogeneous mixing of the human host population and \textit{frequency-dependent} contact rate (see Web Appendix A in the Supporting Information for details). Therefore, the model is not suitable for analyzing data from extremely large geographical areas in which there is considerable heterogeneity in measles transmission dynamics. An interesting future research topic is to extend the current method to jointly model incidence from multiple areas while allowing for spatially and temporally correlated transmission patterns.
\backmatter
\section*{Acknowledgements}
Jon Wakefield was supported by grant R01 AI029168 from the National Institutes of Health, Tracy Qi Dong by grant
U54 GM111274 from the National Institutes of Health. The authors are grateful to Dr. Kevin McCarthy, Dr. Niket Thakkar, and Dr. Kurt Frey from Institute for Disease Modeling for their informative discussions and data preparations.
\section*{Supporting Information}
Additional supporting information may be found online in the Supporting Information section at the end of the article.
\section*{Data Availability Statement}
The data used in this paper to support our findings are available from the corresponding author upon reasonable request.
\bibliographystyle{biom}
\section{Introduction}
Relativistic heavy ion collisions, such as those taking place at the Relativistic Heavy Ion Collider (RHIC) or at the Large Hadron Collider (LHC), routinely create a new phase of matter called the quark-gluon plasma (QGP). The QGP is formed only at extremely large temperatures and particle/antiparticle densities and it permeated the early Universe just a few microseconds after the Big Bang. The phase transition from hadronic matter to the QGP is a broad crossover when matter and antimatter are present in approximately the same amount \cite{Aoki:2006we}, i.e. at zero baryon chemical potential $\mu_B=0$, but the transition is expected to become first-order at high baryonic densities \cite{Stephanov:2007fk,Nahrgang:2016ayr,Ratti:2018ksb,Bzdak:2019pkr}. This implies the existence of a high-temperature $T$ critical point (CP) \cite{Stephanov:1998dy} on the phase diagram of quantum chromodynamics (QCD). In the next couple of years, questions concerning the potential existence and location of the QCD critical point may finally be answered. The STAR experiment at RHIC is running a second beam energy scan (BESII) until 2022 to find out whether a hot and dense system of quarks and gluons displays critical phenomena when doped with more quarks than antiquarks (finite net density). STAR has also developed a fixed target program to further extend their reach to larger densities \cite{STARnote,Cebra:2014sxa} and HADES at GSI \cite{Galatyuk:2014vha} covers even larger densities with the potential of a beam energy scan as well. The search for this putative critical point can also shed new light on whether quark matter exists in dense stellar objects, given that the conditions achieved in low energy heavy ion collisions \cite{Adamczewski-Musch:2019byl} can overlap with those in neutron star mergers \cite{Most:2018eaw}.
The information provided by low-energy heavy-ion collisions, which will continue after RHIC with the FAIR (GSI) \cite{Friese:2006dj,Tahir:2005zz,Lutz:2009ff,Durante:2019hzd} and NICA (Dubna) \cite{Kekelidze:2017tgp,Kekelidze:2016wkp} programs, can be complemented for the first time by very precise astrophysical observations that allow one to constrain the notoriously difficult high-density/low-temperature corner of the QCD phase diagram (see Fig.\ \ref{Fig:QCDphasediagram}). In fact, the unprecedented accuracy of recent astrophysical experiments, such as NASA's Neutron Star Interior Composition ExploreR (NICER) and NSF's Laser Interferometer Gravitational-Wave Observatory (LIGO), provides a new possibility to test and rule out hypotheses for the neutron star inner core composition, thus constraining the high-energy QCD equation of state (EoS) and its phase diagram. From a theoretical point of view, since QCD cannot be directly solved at high (net) baryon densities because of the sign problem \cite{Troyer:2004ge} (for a review, see \cite{Ratti:2018ksb}), the existence and location of the critical point and the high-density phases of matter described by their EoSs are at the moment not well constrained. These EoSs are not directly comparable to heavy-ion experimental data, but serve as a vital input in event-by-event relativistic viscous hydrodynamic simulations of heavy ion collisions \cite{Alba:2017hhe}. Furthermore, the inclusion of viscous effects in hydrodynamic simulations of the QGP is required to describe heavy ion data \cite{Heinz:2013th}, which in turn provide key information about the transport properties (such as shear and bulk viscosities) of the QGP \cite{Bernhard:2019bmu}. In fact, most theoretical studies indicate that viscous and diffusive effects are even more relevant at large densities \cite{Monnai:2016kud,Auvinen:2017fjw,Fotakis:2019nbq,Dore:2020jye}, as we will discuss in more detail in Sec.\ \ref{sec:dyn}.
\begin{figure*}
\centering
\includegraphics[width=0.6\textwidth]{QGPphase.pdf}
\caption{Schematic representation of the QCD phase diagram. Thick lines mark the liquid-gas and quark deconfinement phase transitions. The black dots mark critical points with predictions from \cite{Critelli:2017oub} and \cite{Elliott:2012nr}.}
\label{Fig:QCDphasediagram}
\end{figure*}
Moving towards the high baryon density regime, the EoS of strongly interacting matter dictates stellar masses and radii as determined by the Tolman-Oppenheimer-Volkoff equations \cite{Tolman:1939jz,Oppenheimer:1939ne}. Other stellar observables such as the spindown of fast pulsars \cite{Alford:2013pma}, the cooling rate of a large set of stable neutron stars \cite{Grigorian:2004jq,Page:2005fq,Negreiros:2018cho,Tolos:2016hhl}, the compactness and ellipticity of rotating neutron stars with hot spots on their surface, and tidal deformability measurements that determine the star's quadrupole moment in response to a strong tidal field during the inspiral phase of a neutron star merger \cite{Yagi:2013bca}, also generate important constraints for theoretical calculations \cite{Oertel:2016bki}. In neutron star mergers, knowledge about the EoS at high baryon density and finite temperatures is needed when performing general-relativistic hydrodynamic simulations \cite{Most:2018eaw}. The inclusion of viscous effects in the latter may also be important to determine the evolution of the hypermassive remnant formed after the merger, as stressed in \cite{Duez:2004nf,Shibata:2017jyf,Shibata:2017xht,Alford:2017rxf,Radice:2018ghv}.
The authors believe that this is the right time to bring together the scientific heavy-ion and neutron-star communities to discuss common goals involving the EoS of strongly interacting matter and define a strategy to achieve them. For this reason, they organized a 3-day virtual workshop in August 2020, supported by NSF, which brought together lattice and perturbative QCD (pQCD) theorists, heavy-ion phenomenologists and experimentalists, nuclear astrophysicists and gravitational wave physicists to discuss the state-of-the-art in their fields and define a path forward to address and solve the most urgent overlapping issues of these communities. Given the diversity of the participants' background, long and lively panel discussion sessions took place in the end of each day of the workshop, during which a moderator and panelists listed the main open questions in each community and discussed ways of addressing them with the other participants. In this document, we review the main topics covered in the workshop and some of the problems and common goals that the communities identified as needing to be urgently addressed.
\section{Equation of state at low-to-moderate densities: status and perspectives}
The EoS of QCD at zero net-baryonic density has been available for a few years in the continuum limit and for physical values of the quark masses, for a system of $N_f=2+1$ \cite{Borsanyi:2010cj,Borsanyi:2013bia,Bazavov:2014pvz} and $N_f=2+1+1$ \cite{Borsanyi:2016ksw} quark flavors. These results agree with the thermodynamics generated by a gas of non-interacting hadrons and resonances at low temperatures \cite{Borsanyi:2012cr,Borsanyi:2014ewa,Alba:2017mqu}, and with a gas of weakly interacting quarks and gluons at high temperatures \cite{Laine:2006cp,Andersen:2011sf,Haque:2014rua}.
In the field of heavy-ion collisions, these results are crucial to describe the fireball created in the collision, as the EoS is one of the key inputs in a hydrodynamic description of the system \cite{Romatschke:2017ejr}. Indeed, an important independent validation of lattice QCD results was obtained in \cite{Pratt:2015zsa}, where state-of-the-art statistical techniques have been applied to the combined analysis of a large number of experimental observables, while changing the input parameters in a controlled way. It was found that the posterior distribution over possible parametrizations of the EoS was consistent with current lattice QCD calculations.
First principles lattice QCD results of the EoS at nonzero baryon chemical potential are currently limited due to the sign problem, a fundamental technical obstacle of exponential complexity present in path integral approaches solved via importance sampling \cite{Troyer:2004ge}. The most common methods to extend the lattice QCD results beyond $\mu_B=0$ are the Taylor expansion of the thermodynamic observables around $\mu_B=0$ \cite{Allton:2002zi,Allton:2005gk,Gavai:2008zr,Basak:2009uv,Kaczmarek:2011zz} and the analytical continuation of simulations performed at imaginary chemical potential \cite{deForcrand:2002hgr,DElia:2002tig,Wu:2006su,DElia:2007bkz,Conradi:2007be,deForcrand:2008vr,DElia:2009pdy,Moscicki:2009id}. Exploratory studies in simplified fermionic models where the sign problem could be solved using other techniques, such as Lefschetz thimbles, can be found for instance in \cite{Alexandru:2015xva,Alexandru:2015sua}.
In general, one can write the pressure $p$ of QCD as a Taylor series in powers of the baryon chemical potential over temperature, $\mu_B/T$, around $\mu_B=0$:
\begin{eqnarray}
\frac{p(T,\mu_B)}{T^4}&=&\frac{p(T,0)}{T^4} +\sum_{n=1}^\infty\left.\frac{1}{n!}\frac{\partial^{n}(p/T^4)}{\partial(\frac{\mu_B}{T})^{n}}\right|_{\mu_B=0}\left(\frac{\mu_B}{T}\right)^{n}
\nonumber\\
&=&\sum_{n=0}^{\infty}c_{n}(T)\left(\frac{\mu_B}{T}\right)^{n},
\end{eqnarray}
with Taylor coefficients given by
\begin{eqnarray}
c_n(T)=\left.\frac{1}{n!}\frac{\partial^n(p/T^4)}{\partial(\mu_B/T)^n}\right|_{\mu_B=0},
\end{eqnarray}
which determine the baryon susceptibilities \cite{Ratti:2018ksb}. We note that in previous works other types of expansions have been explored (for instance, the Pad\'e approximation), but it was generally found that the Taylor series worked best for our current data set \cite{Critelli:2017oub}.
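As a concrete illustration of how the expansion is used in practice, the short sketch below reconstructs the pressure and the net-baryon density from a truncated set of coefficients; the numerical values of the $c_n$ are placeholders and do not correspond to actual lattice results.
\begin{verbatim}
# Schematic reconstruction of p/T^4 and n_B/T^3 from truncated Taylor
# coefficients c_n(T); the coefficient values below are placeholders.
def pressure_over_T4(c, x):
    # sum_n c_n (mu_B/T)^n, with x = mu_B/T
    return sum(cn * x**n for n, cn in enumerate(c))

def baryon_density_over_T3(c, x):
    # n_B/T^3 = d(p/T^4)/d(mu_B/T) = sum_{n>=1} n c_n (mu_B/T)^(n-1)
    return sum(n * cn * x**(n - 1) for n, cn in enumerate(c) if n >= 1)

c_example = [1.0, 0.0, 0.25, 0.0, 0.01, 0.0, 0.001]   # hypothetical c_0..c_6
print(pressure_over_T4(c_example, 1.5),
      baryon_density_over_T3(c_example, 1.5))
\end{verbatim}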
It is important to mention that the QCD phase diagram relevant for heavy ion collisions is, in fact, at least a four-dimensional space in the variables $(T,~\mu_B,~\mu_S,~\mu_Q)$, where $\mu_S$ and $\mu_Q$ stand for the strangeness and electric charge chemical potentials, respectively.
Therefore, when performing a Taylor expansion in $\mu_B$, one also has to make a choice about the values of $\mu_S$ and $\mu_Q$ when studying global quantities. Two common choices, useful for heavy ion collision physics, are to consider either $\mu_S=\mu_Q=0$ or a situation in which $\mu_S$ and $\mu_Q$ are functions of $T$ and $\mu_B$, such that they satisfy the following phenomenological relations for the strangeness, charge, and baryon densities
\begin{eqnarray}
\langle \rho_S\rangle=0~~~~~~\langle \rho_Q\rangle=0.4\,\langle\rho_B\rangle,
\label{phenoconstraints}
\end{eqnarray}
which reflect the initial conditions in heavy-ion collisions and the fact that strangeness and electric charge are conserved in strong interactions.
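In practice, imposing Eq.\ (\ref{phenoconstraints}) amounts to solving two coupled equations for $\mu_S(T,\mu_B)$ and $\mu_Q(T,\mu_B)$ at each point of the phase diagram. The sketch below illustrates this with a standard root finder; the density functions are hypothetical linear placeholders standing in for the actual susceptibility expansions.
\begin{verbatim}
# Schematic determination of mu_S and mu_Q from <rho_S> = 0 and
# <rho_Q> = 0.4 <rho_B>; the density functions are placeholders.
from scipy.optimize import root

def rho_B(T, muB, muS, muQ):   # placeholder net-baryon density
    return (0.30 * muB - 0.10 * muS + 0.05 * muQ) / T

def rho_S(T, muB, muS, muQ):   # placeholder net-strangeness density
    return (-0.10 * muB + 0.40 * muS - 0.02 * muQ) / T

def rho_Q(T, muB, muS, muQ):   # placeholder net-charge density
    return (0.20 * muB - 0.02 * muS + 0.25 * muQ) / T

def constraints(x, T, muB):
    muS, muQ = x
    return [rho_S(T, muB, muS, muQ),
            rho_Q(T, muB, muS, muQ) - 0.4 * rho_B(T, muB, muS, muQ)]

T, muB = 160.0, 200.0          # MeV, illustrative values
sol = root(constraints, x0=[0.2 * muB, -0.02 * muB], args=(T, muB))
print(sol.x)                   # (mu_S, mu_Q) satisfying both conditions
\end{verbatim}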
Results are currently available for the Taylor coefficients up to sixth-order for both conditions \cite{Gunther:2016vcp,Bazavov:2017dus,Borsanyi:2018grb}, together with estimates for $c_8$ at finite lattice spacing in the case where $\mu_S=\mu_Q=0$ \cite{Borsanyi:2018grb}. We point out that, for dynamical simulations, the entire 4D phase diagram is needed due to local fluctuations of conserved charges (e.g. see \cite{Shen:2017ruz}).
More recently, a full four-dimensional EoS has been reconstructed in \cite{Noronha-Hostler:2019ayj}, based on the Taylor expansion coefficients calculated in \cite{Borsanyi:2018grb} up to fourth-order and defined as
\begin{eqnarray}
\frac{p(T,\mu_B,\mu_S,\mu_Q)}{T^4}=\sum_{i,j,k}\frac{1}{i!j!k!}\chi^{BSQ}_{ijk}\left(\frac{\mu_B}{T}\right)^i\left(\frac{\mu_S}{T}\right)^j\left(\frac{\mu_Q}{T}\right)^k,
\label{fullT}
\end{eqnarray}
with baryon, strangeness, and electric charge (BSQ) susceptibilities given by
\begin{eqnarray}
\chi^{BSQ}_{ijk}=\left.\frac{\partial^{i+j+k}(p/T^4)}{\partial(\frac{\mu_B}{T})^i\partial(\frac{\mu_S}{T})^j\partial(\frac{\mu_Q}{T})^k}\right|_{\mu_B=\mu_S=\mu_Q=0}.
\end{eqnarray}
Effects from the inclusion of a partial list of sixth-order coefficients were investigated in \cite{Monnai:2019hkn}.
Additionally, with the currently available Taylor expansion coefficients, a reliable EoS from first principles can be reconstructed up to $\mu_B/T\leq2$ \cite{Gunther:2016vcp}. It was emphasized in the workshop that any realistic EoS should reproduce the lattice QCD one (or smoothly merge with it) in the density regime where the latter is available. We note that, while we have focused on the EoS here, other lattice QCD quantities should also be used to constrain models, such as partial pressures \cite{Alba:2017mqu}, cross-correlators \cite{Bellwied:2019pxh}, and high-order susceptibilities \cite{Borsanyi:2018grb}.
One of the relevant goals for the lattice QCD community is to extend the range of the reconstructed lattice QCD EoS by generating reliable continuum extrapolated higher order coefficients. This needs to be done both for the extrapolation in $\mu_B$ only, and for the full four-dimensional EoS presented in Eq.\ (\ref{fullT}).
Finally, we note that at low temperatures, eventually one must match lattice QCD results to a hadron resonance gas model because lattice QCD results can only be calculated down to approximately $T\sim 100-130$ MeV. While a non-interacting gas of hadrons and resonances works well at small $\mu_B$ (with the exception of higher-order derivatives \cite{Huovinen:2017ogf}), large density calculations require the inclusion of interactions. These interactions are commonly modeled using a van der Waals description, which can in addition describe the nuclear liquid-gas phase transition \cite{Vovchenko:2015vxa,Vovchenko:2017zpj}. This points to the potential interplay between the high $T$ critical point and the liquid-gas phase transition, as already shown in \cite{Steinheimer:2010ib}.
As mentioned before, one of the most relevant still open questions is whether the phase transition of QCD becomes first-order as the baryon chemical potential/density increases.
Experimental efforts to find the location of the critical point/first order phase transition coexistence line include the STAR Fixed-Target program ($\sqrt{s_{NN}}=3-7.7$ GeV) \cite{STARnote,Cebra:2014sxa}, HADES at GSI ($\sqrt{s_{NN}}=1-3$ GeV) \cite{Galatyuk:2014vha}, FAIR at GSI ($\sqrt{s_{NN}}=4.5-9.3$ GeV) \cite{Friese:2006dj,Tahir:2005zz,Lutz:2009ff,Durante:2019hzd}, and NICA ($\sqrt{s_{NN}}=3-5$ GeV) \cite{Kekelidze:2017tgp,Kekelidze:2016wkp}.
The results of these experiments have consequences for the physics of neutron stars and their mergers \cite{TheLIGOScientific:2017qsa,Hanauske:2019vsz}. First-order phase transitions are signaled by a region where the speed of sound is equal to zero. If a strong first order phase transition between hadronic matter and quark matter was experienced within a neutron star merger, this would have consequences for the ringdown and the remnants \cite{Most:2018eaw,Bauswein:2018bma,Weih:2019xvw,Gieg:2019yzq,Tsokaros:2019anx,Ecker:2019xrw,Blacker:2020nlq,Pang:2020ilf}. In addition, if a strong first-order line occurs at zero temperature, it is possible that mass twins could be produced wherein two neutron stars have the same mass but very different radii \cite{Alford:2013aca,Dexheimer:2014pea,Benic:2014jia,Montana:2018bkb,Christian:2017jni}. Mass twins could be detected with NICER and LIGO observations of the radius and tidal deformability of neutron stars, respectively.
The effect of a potential critical point on heavy-ion collision observables was recently reviewed in \cite{Bzdak:2019pkr}. One of the most prominent experimental signatures is the expected divergence of higher order baryon number fluctuations in the vicinity of the critical point \cite{Stephanov:2008qz}. While critical slowing down phenomena are expected to reduce this effect \cite{Berdnikov:1999ph}, an increase of the kurtosis as the critical point is approached is still expected. A change in monotonicity for the kurtosis as a function of the collision energy was also suggested as a possible critical point signature \cite{Stephanov:2011pb}, although it was pointed out more recently that this non-monotonic behavior is washed out at temperatures immediately below the phase transition and, therefore, it is not likely to be observed in experiments \cite{Mroczek:2020rpm}.
To help answer the question about the existence and location of the QCD critical point, a family of EoSs was recently constructed in \cite{Parotto:2018pwx}. It reproduces the lattice QCD EoS where it is available and contains a critical point belonging to the universality class of the 3D-Ising model. The uniqueness of this family of EoSs is that the position and strength of the critical point can be changed and its effect on the data can be explored \cite{Mroczek:2020rpm}. The corresponding entropy density and baryonic density are shown in Fig.\ \ref{Fig1} for one possible choice of parameters. Both quantities are discontinuous for chemical potentials larger than the critical one. Currently, this family of EoSs is available only for the case where $\mu_S=\mu_Q=0$. Extensions to the strangeness-neutral case with fixed charge density, one of the identified priorities for both the heavy-ion and neutron-star communities, are the next obvious steps. This can be achieved using the Taylor series in Eq.\ (\ref{fullT}).
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Fig1.pdf}
\caption{Figure from \cite{Parotto:2018pwx} showing entropy density (left) and baryonic density (right) as functions of the temperature $T$ and chemical potential $\mu_B$. The critical point is marked as a red dot.}
\label{Fig1}
\end{figure*}
During the workshop panel discussions, the fate of the critical point in the four-dimensional phase diagram was thoroughly discussed. Given the lack of information from first principle simulations, guidance can be provided by models that reproduce lattice QCD results and contain a critical point such as, for instance, the holographic models in \cite{DeWolfe:2010he,DeWolfe:2011ts}. In particular, the holographic model of Ref.\ \cite{Critelli:2017oub} not only naturally incorporates the nearly perfect fluidity of the QGP \cite{Kovtun:2004de} and allows for the realistic determination of a number of characteristic temperatures \cite{Rougemont:2017tlu} and transport coefficients \cite{Finazzo:2014cna,Rougemont:2015wca,Rougemont:2015ona} as a function of $T$ and $\mu_B$ up to the critical point, but it also successfully predicts the temperature dependence of higher order baryon susceptibilities later computed on the lattice \cite{Borsanyi:2018grb}. The success of the holographic model \cite{Critelli:2017oub} in comparison to lattice QCD establishes benchmarks that other effective models must pass in order to be consistent with QCD results at finite temperature and density.
It was pointed out in the workshop discussion that the different high-energy communities explore different regions of the phase diagram, not only in terms of net-baryonic density, but also in terms of the conditions on the other conserved charges (besides baryon number). In particular, while isospin asymmetry is small in heavy ion collisions due to the fact that the initial nuclei contain almost the same number of protons and neutrons, it is large in neutron stars as a result of electron capture on protons during supernovae. In cold neutron stars, $\mu_Q$ is determined in order to ensure chemical equilibrium and charge neutrality.
On the other hand, while strangeness is a conserved quantity on the timescales of a heavy ion collision, weak decays are significant on astrophysical timescales ($\mu_S=0$) and strangeness is expected to be significant in cold neutron stars. This gives rise to well-defined patterns in the four-dimensional phase diagram, which should be addressed as a first step to obtain a more complete understanding of it. In fact, different regimes of the phase diagram can experience the transition from quarks to hadrons at different chemical potentials depending on the underlying assumptions \cite{Fu:2018qsk,Rennecke:2019dxt,Aryal:2020ocm}. Nevertheless, we remark that in non-equilibrium situations, such as those involving neutron star merger simulations, a higher dimensional EoS would also be needed.
Additional questions were raised in the workshop about the influence of multiple conserved charges on the location of the critical point. For instance, in 3 dimensions, does it become a critical line? Does it turn into a critical plane in 4 dimensions? Or could it be a crossover in certain regions of the phase diagram and a real phase transition in other regions? While effective models provide some guidance in this regard, the answer currently depends on the underlying assumptions of the models and, therefore, further experimental studies and efforts to extend lattice QCD to large densities are needed.
\section{Equation of state at high densities and comparisons with observations}
Cold neutron stars can be well-approximated by an EoS at $T\sim 0$ MeV, covering all scales relevant to nuclear and particle physics, from nuclei at very low densities (crust) to potentially deconfined quarks in the inner core. A review of the different models used to describe the EoS of neutron stars can be found in \cite{Baym:2017whm}. Fundamental questions remain about N-body interactions between baryons \cite{Tolos:2007bh,Hebeler:2013nza,Tews:2018kmu,Lim:2018bkq}, the existence of hyperons (and their interactions) \cite{Weissenborn:2011kb,Weissenborn:2011ut,Lopes:2020rqn,Gerstung:2020ktv}, and the possibility of deconfinement to quark matter in the core of a neutron star.
At zero temperature, protons and neutrons begin to overlap at densities around 3 times saturation density ($\sim$ 0.5 baryons per fm$^3$), at which point a simple hadronic description of dense matter is not enough. Although the exact position and manner in which such a transition takes place are still not certain, there are indications from a matching of low energy constraints with pQCD at asymptotically large energies that it is a steep first-order phase transition \cite{Annala:2019puf}. This feature is also hinted at by a necessary bump in the dense-matter speed of sound that surpasses the conformal limit and then returns to $c_s^2\rightarrow \frac{1}{3}$ at large densities \cite{Bedaque:2014sqa,Alford:2015dpa,Tews:2018kmu,McLerran:2018hbz,Baym:2019iky,Jakobus:2020nxw}, although this effect could also be caused by a sudden appearance of hyperons (see Fig.~2 of \cite{Stone:2019abq} and \cite{Gulminelli:2013qr}). At sufficiently large densities and low temperatures, quark matter in neutron stars is expected to be a color superconductor, for which many kinds of pairing patterns exist \cite{Alford:2007xm}, including the color-flavor-locked phase \cite{Alford:1998mk}.
Although the crust of neutron stars is reasonably understood \cite{Chamel:2008ca}, the number of constraints for bulk hadronic matter decreases as the density increases in the core, especially beyond a couple of times saturation density. Up to this point, there are reliable ab-initio calculations (such as CEFT \cite{Hebeler:2013nza,Tews:2018kmu,Lim:2018bkq}), which solve a many-body problem starting from 2- and 3-body interactions, together with laboratory data of saturation properties (see a summary in Fig.~1 of \cite{Li:2013ola}).
Beyond that point, uncertainties increase, especially due to the appearance of new degrees of freedom such as hyperons and meson condensates \cite{Glendenning:1982nc,Kaplan:1986yq,Baym:1978sz}. In this case, a natural solution is to rely on QCD-inspired relativistic phenomenological models that include both deconfinement and chiral symmetry restoration \cite{Dexheimer:2009hi,Steinheimer:2011ea,Turko:2014jta,Marczenko:2020jma}.
For the quark phase, this is especially problematic, as the large uncertainty in modelling and parameters allows the EoS to be very similar to the hadronic one, which has been referred to as the ``masquerade effect" \cite{Alford:2004pf,Wei:2018mxy}. Of course, at this point one could once more invoke comparisons with pQCD results in the relevant regime \cite{Andersen:2002jz,Fraga:2013qra,Annala:2019puf}, but one must keep in mind that this regime is not achieved inside neutron stars. pQCD is expected to be applicable at approximately an order of magnitude larger densities compared to the cores of neutron stars, which may only reach 4-10 times nuclear saturation density.
This is exactly where multi-messenger astrophysical observations can provide guidance to nuclear physics. Measurements of neutron star masses larger than 2 solar masses, together with small radii, and small tidal deformability, imply that the cold matter EoS is soft at intermediate densities but stiff at large densities, which allows us to extract information about nuclear interactions \cite{Dexheimer:2018dhb,Hornick:2018kfi,Dexheimer:2020rlp,Otto:2020hoz,Kubis:2020ysv,Ferreira:2020evu}. In particular, a neutron star with mass larger than 2.5 solar masses (for instance, the secondary compact object from GW190814 \cite{Abbott:2020khf}) would imply that a sharp rise in $c_s^2$ occurs between 2-3 times nuclear saturation density \cite{Tan:2020ics}, which may be attributed to exotic degrees of freedom in the core of a neutron star. However, we point out that many alternatives have been suggested for the secondary from GW190814: it may be a black hole or a fast spinning neutron star \cite{Most:2020bba,Dexheimer:2020rlp,Zhang:2020zsc, Tsokaros:2020hli}, or even a primordial black hole \cite{Vattis:2020iuz}. Nevertheless, non-monotonic behavior of $c_s^2$ has been found in several different works \cite{Dexheimer:2007mt,Bedaque:2014sqa,Alford:2015dpa,Dutra:2015hxa,Ranea-Sandoval:2015ldr,Dexheimer:2017nse,Tews:2018kmu,Tews:2018iwm,McLerran:2018hbz,Jakobus:2020nxw,Zhao:2020dvu,Ferreira:2020kvu}.
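Since much of this discussion hinges on the density dependence of the speed of sound, we note that $c_s^2=dp/d\varepsilon$ can be estimated directly from any tabulated zero-temperature EoS by finite differences, as in the sketch below; the table entries are invented placeholders rather than a realistic EoS.
\begin{verbatim}
# Schematic finite-difference estimate of c_s^2 = dp/d(eps) from a
# tabulated zero-temperature EoS; the table is a made-up placeholder.
import numpy as np

eps = np.array([100., 200., 400., 800., 1600.])   # MeV/fm^3, placeholder
p   = np.array([  2.,  10.,  60., 300., 1000.])   # MeV/fm^3, placeholder

cs2 = np.gradient(p, eps)      # dp/deps along the table
print(cs2)                     # should remain within (0, 1] for causality
\end{verbatim}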
At finite (but still not large) temperatures, the uncertainty in the EoS increases. In this case, information about deconfinement can be obtained from supernovae \cite{Fischer:2017lag,Zha:2020gjw} and compact star merger simulations \cite{Most:2018eaw,Bauswein:2018bma,Weih:2019xvw}. Unfortunately, not many finite temperature EoSs allowing for deconfinement to quark matter are available for testing. Additionally, some numerical relativity simulations may run into numerical difficulties when experiencing extremely sharp phase transitions, which could also limit current studies.
\begin{figure*}
\centering
\includegraphics[width=0.5\textwidth]{crust_plot.png}
\caption{Realistic simulation for the neutron-star crust-core liquid-gas phase transition at zero temperature from \cite{Steiner:2012bq}. The baryon number density and stellar radius are shown. The purple dots represent bulk hadronic matter and the empty white space marks the space occupied by atomic nuclei.}
\label{crust}
\end{figure*}
Ultimately, in order for EoSs to be able to describe the entire QCD phase diagram and reproduce the smooth crossover predicted by lattice QCD, models must contain both hadronic and deconfined degrees of freedom. At least, this is the prescription that has been used to describe the liquid-gas phase transition expected to take place at the crust-core interface of neutron stars. In the latter, the competition (of the strong force) with the electromagnetic force turns the first-order phase transition into a gradual one and the so-called ``pasta phases" appear, as shown in Fig.~\ref{crust}. Comparisons with low-energy heavy-ion collisions are also important to constrain the finite temperature dense regime, but they require modifications to describe zero-net strangeness matter without chemical equilibrium and charge neutrality. As shown in \cite{Aryal:2020ocm,Costa:2020dgc}, these changes in environmental conditions can modify the deconfinement position by several tens of MeV for a given temperature. Additionally, there are other considerations when comparing heavy-ion collisions to neutron star mergers that caution against direct comparisons with current theoretical frameworks, as discussed in Sec.\ \ref{sec:dyn}.
\section{Model agnostic approaches to extracting the EoS from gravitational wave data}
Due to the lack of first-principles results that can fill the QCD phase diagram in the regime relevant for neutron stars, an alternative approach is to extract a band of possible EoSs from experimental data. Such an approach attempts to explore the relevant part of the EoS phase space that is causal (the speed of sound is bounded by zero and 1) by fitting observables that depend on the EoS to a set of observations. More specifically, NICER observations of the radiation emitted by hot spots on the surface of rotating neutron stars can be fitted to a pulse profile model \cite{Miller:2019cac,Riley:2019yda,Silva:2020acr}. This model depends on several parameters, including the neutron star compactness (the mass divided by the radius of the neutron star), which in turn depends on the EoS. Therefore, measurements of the compactness place a constraint in the mass-radius plane, which can be converted into a constraint on the EoS. Similarly, LIGO/Virgo observations of the gravitational waves emitted by inspiraling and merging neutron stars can also be used to constrain the EoS \cite{Miller:2019cac,Riley:2019yda,Silva:2020acr}. The LIGO/Virgo data are fitted to a gravitational wave model that depends on several parameters, including the so-called tidal deformabilities, which measure how much a star deforms in the presence of an external tidal field. A measurement of the tidal deformabilities can then be converted into a constraint on the mass-radius plane through the use of certain quasi-universal relations~\cite{Yagi:2013awa,Yagi:2013bca,Yagi:2016bkt,Yagi:2016ejg,Yagi:2015pkc}, which in turn place constraints on the (so far) low and intermediate density portion of the neutron star EoS.
When fitting gravitational wave observations to a model, another approach is to use a parameterization of the EoS. Examples of this parameterization include piecewise polytropes that are patched together at different transition densities~\cite{Read:2008iy}, or a parametric spectral representation \cite{Lindblom:2010bb,Lindblom:2012zi,Lindblom:2013kra}. In both cases, the EoS is assumed to be given by either a known function or a set of differential equations, which depend on a set of parameters. Given a choice of these parameters and a choice of the central density, the Einstein equations can be solved numerically to find the mass, radius and tidal deformability of a neutron star. The waveform model parameter list -- i.e.~the parameters that characterize the waveform model -- is then enhanced to include these EoS parameters (while the tidal deformability is removed from the list), and the EoS parameters are then varied when fitting the model to the data. From the posterior probability distribution obtained for these EoS parameters, one can then reconstruct an ``allowed'' band in the pressure-density plane for the EoS \cite{Abbott:2018exr,Abbott:2020khf}.
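For illustration, a minimal version of the piecewise-polytropic construction is sketched below; the dividing densities, adiabatic indices, and normalization are arbitrary placeholders rather than the values adopted in the analyses cited above.
\begin{verbatim}
# Minimal sketch of a piecewise-polytropic EoS; all numbers are
# placeholders, not a published parametrization.
rho_divs = [1.0e14, 5.0e14, 1.0e15]    # g/cm^3, placeholder
Gammas   = [1.6, 2.8, 3.0, 2.5]        # one index per density segment
K0       = 3.0e12                      # normalization, placeholder

def piecewise_polytrope(rho):
    # Match the constants K_i so that p is continuous at each dividing
    # density, then evaluate p = K_i * rho**Gamma_i in the right segment.
    Ks = [K0]
    for i, rd in enumerate(rho_divs):
        Ks.append(Ks[-1] * rd ** (Gammas[i] - Gammas[i + 1]))
    edges = [0.0] + rho_divs
    for i in reversed(range(len(edges))):
        if rho >= edges[i]:
            return Ks[i] * rho ** Gammas[i]

print(piecewise_polytrope(7.0e14))
\end{verbatim}
Once the EoS parameters are specified, the resulting $p(\rho)$ can be passed to a stellar-structure solver to obtain the masses, radii, and tidal deformabilities that enter the waveform model.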
These parametric approaches, however, suffer from the problem of restricting the EoS to a particular functional form, which may or may not be faithful to the EoS of nature inside neutron star cores. In fact, it has been recently pointed out that piecewise polytropes and the spectral functions both fail to accurately capture bumps, kinks, and jumps in the EoS that may occur due to rapid changes in the degrees of freedom or phase transitions \cite{Tan:2020ics}. In view of this, a different approach has been recently proposed: a non-parametric model. In this approach, one does not restrict the functional form of the EoS, but rather, one uses Gaussian processes conditioned on nuclear theory models \cite{Essick:2019ldf,Landry:2020vaw} to generate a wide variety of realizations of the EoS. For each of these, the Einstein equations are then solved to compute the mass, radius and tidal deformability, given a choice of central density, which are then used in a waveform model to fit the LIGO/Virgo data. In this way, one circumvents some of the earlier issues mentioned above and provides different confidence regions for the extracted EoS.
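Schematically, the non-parametric approach replaces a fixed functional form with draws from a stochastic process. The sketch below generates a few EoS realizations from a Gaussian-process prior over the logarithm of the pressure; the kernel, mean function, and grid are arbitrary placeholders and not those of the published analyses.
\begin{verbatim}
# Schematic draws of EoS realizations from a Gaussian-process prior;
# all hyperparameters below are placeholders.
import numpy as np

log_eps = np.linspace(0.0, 1.5, 50)       # log10(energy density) grid
mean    = 2.0 * log_eps                   # placeholder mean: p ~ eps^2

def rbf(x, length=0.3, sigma=0.25):       # squared-exponential kernel
    d = x[:, None] - x[None, :]
    return sigma**2 * np.exp(-0.5 * (d / length)**2)

cov = rbf(log_eps) + 1e-10 * np.eye(log_eps.size)
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean, cov, size=5)  # five realizations
print(samples.shape)
\end{verbatim}
In the published analyses, such draws are further conditioned on nuclear-theory models and on causality and stability requirements before being confronted with the data.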
One should consider these approaches (the parametric and the non-parametric ones) as attempts to understand the EoS from an ``experimental'' perspective. They can provide important information about the allowable phase space but cannot provide detailed information about the correct microscopic degrees of freedom. Thus, such approaches cannot be directly tested against neutron-star cooling data. Temperature evolution is very sensitive to the proton and hyperon content \cite{Beloin:2018fyp,Negreiros:2018cho,Tolos:2016hhl} and, therefore, it is also an important part of EoS modeling at high densities. Additionally, the calculations of transport coefficients \cite{Alford:2019qtm} also require detailed microscopic information. This emphasizes that effective models are needed to understand the microscopic degrees of freedom, while model-agnostic approaches are needed in order to determine the allowed phase space and help to further constrain microscopic models.
The use of non-parametric models together with more informed models will become ever more pressing in the coming years, as gravitational wave detectors are upgraded and their sensitivities are increased. The Advanced LIGO Plus (A+) upgrade is expected to conclude by 2025, bringing a plethora of sensitivity improvements to both the Hanford and Livingston LIGO detectors. Of particular relevance are upgrades related to quantum squeezing, which will bring improvements to the high-frequency part of the sensitivity curve. Lowering the noise at high frequencies would allow us to observe the merger of neutron stars, which would in turn allow for more stringent constraints on the EoS and a glimpse of its low temperature dependence during an out-of-equilibrium process. The extraction of information from the merger phase, however, will require its characterization in terms of numerical relativity models, which must include all of the complexity of the EoS at finite temperature and out-of-equilibrium phenomena. Much work is therefore still needed both in the development of such numerical relativity codes and in the construction of finite-temperature EoSs in the high-density portion of the QCD phase diagram.
\section{Out of equilibrium effects at large baryon density}\label{sec:dyn}
\subsection{Heavy ion Collisions}
While heavy ion collisions at low beam energies have comparable baryon densities to those reached within neutron stars, we now offer a word of caution about direct comparisons between the two fields. Since the first collisions at RHIC in the early 2000s, numerous discoveries have been made that have dramatically changed our approach to simulating heavy ion collisions. The field of heavy ion collisions was the first in the world to numerically solve relativistic viscous fluid dynamic equations of motion in complex situations \cite{Romatschke:2007mq} and, in recent years, the necessity of the inclusion of both shear and bulk viscosities \cite{Denicol:2010tr,Noronha-Hostler:2013gga,Noronha-Hostler:2014dqa,Denicol:2014mca,Ryu:2015vwa,Bernhard:2016tnd,Bernhard:2019bmu} (and at large densities BSQ diffusion \cite{Denicol:2012cn,Greif:2017byw,Denicol:2018wdp,Fotakis:2019nbq}) has become clear. Additionally, since the seminal works nearly a decade ago \cite{Takahashi:2009na,Alver:2010gr}, a deeper understanding has emerged concerning the initial conditions of heavy ion collisions and the role played by quantum fluctuations in the wave function of the colliding nuclei (and potential effects from proton substructure \cite{Mantysaari:2016ykx,Albacete:2017ajt}). These radically changed how experimentalists measure flow harmonics and, subsequently, invalidated earlier attempts at describing flow data at low beam energies when theoretical models did not incorporate event-by-event fluctuations. Finally, our understanding of the influence of critical phenomena has progressed enormously since the first discussions on the search for the QCD critical point \cite{Stephanov:1998dy}.
Because the heavy-ion field has been primarily focused on high beam energies, a number of upgrades must be implemented in dynamical models before one can attempt to extract information about the EoS from heavy ion data. An outline of such upgrades is made in Fig.\ \ref{fig:scheme} and is discussed in detail below. Current models that are applicable at these beam energies only consist of ideal hydrodynamics \cite{Rischke:1995ir,Rischke:1995mt,Aguiar:2000hw,deSouza:2015ena} and, yet, it is known that out-of-equilibrium effects can play a significant role in the search for the QCD critical point \cite{Berdnikov:1999ph,Nahrgang:2011mg,Mukherjee:2015swa,Mukherjee:2016kyu,Nahrgang:2018afz,Dore:2020jye} and the first-order phase transition line \cite{Feng:2018anl}. Thus, the upgrades are a necessary step forward in order to allow for direct theory to experimental data comparisons. To do this, the initial conditions will need to be adapted to incorporate all 3 conserved charges and be initialized with a full energy-momentum tensor $T^{\mu\nu}$ and $q^{\mu}$'s (out-of-equilibrium effects in the diffusion currents). See \cite{Steinheimer:2008hr,Shen:2017bsr,Martinez:2019rlp,Martinez:2019jbu,Mohs:2019iee,Akamatsu:2018olk} for recent efforts along those lines, although no single initial condition model can accomplish all of that at the time being.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{skematic.pdf}
\caption{Schematic diagram of the needed changes within heavy ion simulations in order to study the QCD critical point and the first-order phase transition line. The example EoS is taken from \cite{Critelli:2017oub} and the diffusion matrix is taken from \cite{Greif:2017byw,Fotakis:2019nbq}.}
\label{fig:scheme}
\end{figure*}
The effects of critical fluctuations were discussed extensively in the workshop. One approach is to use Hydro+ \cite{Stephanov:2017ghc}, which provides a generic extension of hydrodynamics by the inclusion of a parametrically slow
mode and fluctuations out of equilibrium. Other theoretical approaches have been explored as well \cite{Akamatsu:2018vjr,An:2019csj}. The inclusion of such fluctuations in realistic relativistic viscous hydrodynamics models in 2+1 or 3+1 dimensions is a very complex task and steps in this direction can be found in \cite{Young:2014pka,Sakai:2017rfi,Singh:2018dpk,Du:2019obx}.
In fact, event-by-event relativistic viscous hydrodynamic models have been constructed for large beam energies where boost invariance is a reasonable approximation, such that hyperbolic coordinates are used and boost invariant 2+1 hydrodynamics is employed \cite{Romatschke:2017ejr}. At low beam energies, Cartesian coordinates are more appropriate to describe the fully 3 dimensional evolution of the system. Unfortunately, in this regime only ideal hydrodynamic codes exist written in Cartesian coordinates, since most viscous hydrodynamics models were written with hyperbolic coordinates \cite{Luzum:2008cw,Schenke:2010nt,Schenke:2010rr,Noronha-Hostler:2013gga,Karpenko:2013wva,Noronha-Hostler:2014dqa,Shen:2014vra}. Thus, this has been identified as an important (and achievable) goal for the future.
Because the conserved currents of baryon number, strangeness, and electric charge are all relevant in heavy ion collisions, especially at finite densities, hydrodynamics must incorporate all 3 conserved charges and their corresponding diffusion transport coefficients. However, since individual quarks can carry two or three charges (for example, a strange quark carries electric charge, strangeness, and baryon number), there are cross-correlations between the diffusion transport coefficients, which actually become a diffusion matrix \cite{Greif:2017byw}. A number of groups are already addressing this issue and we will likely have working BSQ hydrodynamic codes available in the years to come. However, this also requires special attention at the point of freeze-out such that all BSQ charges are conserved \cite{Oliinychenko:2019zfk} and the appropriate out-of-equilibrium corrections are taken into account (this is known in the field as $\delta f$ corrections).
Finally, at large densities one requires an accurate description of the hadron gas phase, for which one must use transport codes that incorporate the known hadrons and their interactions. SMASH \cite{Weil:2016zrk} has become the state-of-the-art in the field for hadronic transport, and it also incorporates mean field potential effects. One should note that there are other hadronic transport codes on the market and each of their underlying assumptions produces different results \cite{Zhang:2017esm}. Once a hadronic model is established, one can use it to calculate transport coefficients as well \cite{Rose:2017bjz}.
In principle, other considerations would also go into dynamical models such as spin \cite{Weickgenannt:2019dks,Florkowski:2019voj,Weickgenannt:2020aaf,Bhadury:2020puc,Montenegro:2020paq} and magnetic field effects \cite{Skokov:2009qp,Fukushima:2008xe,Jiang:2016wve,Finazzo:2016mhm,Shi:2017cpu,Sheng:2018jwf,Oliva:2019kin,Denicol:2019iyh}. While there are intriguing experimental data indicating that the quark-gluon plasma may be the most vortical fluid known to humanity \cite{STAR:2017ckg}, further studies are needed in this regard to draw strong conclusions at this time. Finally, once a full viscous hydrodynamic model that includes transport is established, there will be a number of interesting observables to compute, such as directed flow, which is expected to be sensitive to the EoS \cite{Nara:2016phs}.
\subsection{``Heavy-ion Data" to constrain Neutron Stars }
One topic that was brought up at the workshop was the use of ``heavy-ion data", extracted from the insightful phenomenological study done in \cite{Danielewicz:2002pu}, as a constraint on the neutron-star EoS. We use quotation marks because this study was not performed by an experimental collaboration. Rather, the results in \cite{Danielewicz:2002pu} constitute a theoretical framework to extract EoS constraints within specific model assumptions and, therefore, cannot be considered bona fide experimental data. In fact, there are a number of assumptions (reasonable at that time) that went into this extraction of constraints that are not universally agreed upon by the entire heavy-ion community after two decades. Currently, the standard way to extract information from experimental data within heavy ions is to use event-by-event relativistic viscous hydrodynamics and make direct comparisons to experimental data using, for example, a Bayesian analysis (e.g. \cite{Bernhard:2019bmu}). Such a framework does not yet exist for data at low beam energies due to the issues outlined above and laid out in Fig.\ \ref{fig:scheme}. While initial attempts have been made for comparing ideal hydrodynamics \cite{Spieles:2020zaa} to experimental data at low beam energies, one must incorporate in this case transport coefficients, as there are many indications that the influence of out-of-equilibrium effects only grows with lower beam energies. Thus, we emphasize that the problem is not Ref.\ \cite{Danielewicz:2002pu} but rather the misuse of their results as a de facto experimental heavy ion constraint that must be used to eliminate certain physical scenarios concerning the high density regime of the EoS in neutron stars.
Instead, what has usually been done is to either compare hadronic models (transport) to experimental data and attempt to extract the EoS from such models (this was done in \cite{Danielewicz:2002pu,Hillmann:2019wlt}) or to compare ideal hydrodynamics to data with different EoS assumptions, as was done in \cite{Spieles:2020zaa}. At the moment, neither is a perfect fit to experimental data, and both fail to capture a number of experimental observables. However, we point out that the EoS that best fits the results from hydrodynamics studies is consistent with the presence of quark degrees of freedom at the low heavy ion beam energies. Additionally, we stress that HADES is an ongoing experiment running at $E_{lab}=1-2$ AGeV, right in the middle of the beam energies from \cite{Danielewicz:2002pu}, and it has found evidence that the temperatures reached in their reactions are much higher than initially anticipated, which makes the previous indication of reaching quark degrees of freedom much more likely \cite{Adamczewski-Musch:2019byl}. Furthermore, STAR and HADES find long-range correlations in multi-particle cumulants of proton number, which may be providing hints of a real phase transition \cite{Adam:2020unf,Adamczewski-Musch:2020slf}.
Thus, our current understanding of heavy ion collisions, which has evolved immensely in the last two decades, strongly indicates that the use of the ``heavy-ion data" extracted from the influential Ref.\ \cite{Danielewicz:2002pu} as a constraint for the neutron star EoS can be misleading, as there are many known discrepancies in the field that will likely be attributed to a variety of needed upgrades to dynamical models, as outlined in Fig.\ \ref{fig:scheme}. Additionally, experimental data from HADES indicate that even at very low beam energies there are signs of deconfined quarks and gluons. However, many more theoretical studies are still required for that to be a solid statement. In summary, one should not find this to be a disappointing conclusion but rather we emphasize that more cross-talk is needed between communities in order to understand the state-of-the-art in the respective fields, so that better and more meaningful comparisons can be made.
\subsection{Neutron Star Mergers}
It was pointed out in the workshop that multi-messenger observations of neutron star mergers \cite{TheLIGOScientific:2017qsa,Monitor:2017mdv,GBM:2017lvd} give not only key information on the thermodynamical behavior of dense matter, but they also provide fundamental insight into the complex out of equilibrium processes that take place on millisecond timescales. In fact, transport properties are a better discriminator of different phases than the EoS \cite{Alford:2007xm}. For neutron star mergers, the important dissipation mechanisms are those whose equilibration times are $\leq 20$ ms, as they can affect the post-merger gravitational wave signal.
For many years it was assumed that the evolution of the hot and ultradense matter formed after the merger could be reasonably described as an ideal fluid (dynamically coupled to Einstein’s equations) since the time scales for viscous transport to set in were previously estimated \cite{Bildsten:1992my} to be outside
the millisecond range. These estimates were recently revisited in \cite{Alford:2017rxf} using state-of-the-art merger simulations and it was concluded that damping of high-amplitude oscillations due to bulk viscosity is likely to be relevant if direct Urca processes remain suppressed. Thus, bulk viscosity effects should be investigated in neutron star merger simulations \cite{Alford:2017rxf}. Further studies about the bulk viscosity in the context of mergers were performed in \cite{Alford:2019qtm,Alford:2019kdw,Alford:2020lla,Alford:2020pld}. Ref.\ \cite{Alford:2017rxf} also concluded that neutrino-driven thermal transport and shear dissipation were unlikely to affect the post-merger gravitational wave
signal unless turbulent motion occurs.
In this context, as pointed out in \cite{Duez:2004nf}, viscosity and magnetic fields drive differentially rotating stars toward uniform rotation, which has important consequences. For instance, a differentially rotating hypermassive remnant can momentarily support a mass greater than would be possible for a uniformly rotating star, which would imply the observation of a delayed collapse to a black hole and a delayed burst of gravitational radiation. However, the magnetic field in the differentially rotating remnant may be amplified through magnetorotational instabilities \cite{Balbus:1998ja}, which can generate magnetohydrodynamic (MHD) turbulence whose description requires extremely high resolution simulations \cite{Kiuchi:2014hja}. General-relativistic shear viscous hydrodynamics becomes an interesting phenomenological alternative for studying how angular momentum transport occurs in such a system \cite{Duez:2004nf,Shibata:2017xht}, with the effective viscosity being induced by local MHD turbulent processes.
As stressed in \cite{Alford:2017rxf}, the effects of bulk, shear, and thermal conductivity have not yet been included in merger simulations because that requires a formulation of general-relativistic viscous fluid dynamics that is compatible with causality in the strong nonlinear regime probed by the mergers. Fortunately, a fully consistent physical and mathematical description of viscous fluids dynamically coupled to Einstein's equations is now possible as recently proven in \cite{Bemfica:2020zjp}. The latter employed the basic ideas behind the new formalism originally proposed in \cite{Bemfica:2017wps} to obtain a causal and stable first-order generalization of the relativistic Navier-Stokes equations for the case of a conformal fluid. This new approach to relativistic viscous hydrodynamics was later further developed and generalized in \cite{Kovtun:2019hdm,Bemfica:2019knx,Hoult:2020eho} to include the non-conformal regime and also finite baryon density effects.
Among many other results, it was also discussed in the workshop how prompt black holes formed in neutron star mergers can source bright electromagnetic counterparts for high-mass-ratio binaries, which can be constrained from multimessenger observations \cite{Bernuzzi:2020txg}. Nevertheless, it was thoroughly emphasized in the meeting that, while the inspiral~\cite{Dietrich:2018uni} and early postmerger phases (in the case of a black hole remnant) are now better understood, there is still a vast parameter space to explore. In fact, no current simulation of neutron star mergers includes all the relevant physics, as the problem becomes increasingly complex on longer postmerger timescales, and higher-resolution, more sophisticated simulations are needed.
\section{Outlook}
After three days of a very intense (virtual) workshop, it became clear that there is a very large number of challenges and fundamental questions that must be addressed when it comes to determining the properties of ultradense matter in heavy ion collisions and also in neutron stars. The following questions were singled out and discussed during the meeting:
\begin{itemize}
\item What is the nature of matter at high baryon densities/chemical potentials? How can we measure QCD critical phenomena? Are there new exotic phases in neutron stars? Can we probe these with gravitational waves from neutron star inspirals and mergers?
\item What are the prospects of making a 10\% measurement of the tidal deformability of a neutron star with advanced LIGO? What are the challenges posed by detector upgrades and waveform systematics? And what nuclear physics information, other than the pressure versus energy density (EoS), would nuclear physicists be interested in extracting from the tidal deformabilities or from the merger phase?
\item How far can we push our understanding of hadronic matter and its interactions at high densities? How does one model and realistically constrain the EoS at finite temperatures and densities? What is the interplay between the EoS probed in heavy ions and that probed in neutron stars and their mergers? For instance, if mergers reveal that there is a first order phase transition at finite temperatures, this proves that the QCD critical point exists.
\item What is the best approach to extract an EoS from neutron star inspirals and NICER data? What do we miss by just focusing on the EoS? For instance, for cooling and transport one needs to know the correct phase of matter, including degrees of freedom and interactions.
\item How can hydrodynamics and transport be used to determine the correct degrees of freedom in low energy heavy-ion collisions and neutron star mergers? What upgrades are needed for current codes to be able to extract the EoS from low beam energies? What role do out-of-equilibrium effects play at larger densities and at a critical point/phase transition? What role does isospin/strangeness play in their respective dynamics? What signals from nuclear experiments are most promising to extract the EoS and information about phase transitions?
\item What are the challenges that must be faced when performing realistic simulations of ultradense matter in heavy ion collisions and in neutron star mergers? What is the synergy between them? What are the tools needed to systematically investigate out-of-equilibrium viscous effects under extreme temperatures, densities, and gravitational/electromagnetic fields?
\item What are the consequences beyond QCD, i.e., what follows if we understand the QCD EoS across the entire phase diagram? Can one then make quantitative statements about dark matter?
\end{itemize}
Clearly, these questions are far too complex to be tackled by one community alone – only truly interdisciplinary collaboration will be the herald of progress. In fact, the authors hope new insights and collaborations will emerge from this and future workshops to help solve the challenging questions mentioned above.
\section*{Acknowledgements} We thank all the speakers, panelists, and participants of the virtual workshop ``From heavy-ion collisions to neutron stars" for illuminating discussions and the Illinois Center for Advanced Studies of the Universe (ICASU) for support. V.D. acknowledges support from the National Science Foundation under grant PHY-1748621. J.N. is partially supported by the U.S. Department of Energy, Office of Science, Office for Nuclear Physics under Award No. DE-SC0021301. J.N.H. acknowledges support from the US-DOE Nuclear Science Grant No. DE-SC0019175 and the Alfred P Sloan Foundation. N.Y. thanks the Illinois Center for Advanced Studies of the Universe from the Department of Physics at the University of Illinois at Urbana-Champaign for support. This material is based upon work supported by the National Science Foundation under
grant No. PHY-1654219 and by the U.S. Department of Energy, Office of
Science, Office of Nuclear Physics, within the framework of the Beam Energy Scan Theory
(BEST) Topical Collaboration.
\section{Introduction}
\label{intro}
Betelgeuse ($\alpha$ Orionis) is a nearby, massive red supergiant
(RSG) that provides clues to a broad range of issues in the evolution and explosion of massive stars. It has been difficult to obtain tight constraints on the evolutionary state of Betelgeuse, and hence on when it might explode, or on its internal rotational state and associated mixing. It is thus important to understand Betelgeuse in greater depth. The recent extreme dimming episode has only added impetus to this quest \citep{Guinan20,Levesque20,Harper20,dharma20}.
The distance to Betelgeuse has been known to only 20\% ($D \approx 197\pm45$ pc; Harper et al. 2008, 2017), a situation that is not improved by {\sl Gaia}, which saturates on such a bright star. Key properties such as radius and luminosity are thus somewhat uncertain. Within this uncertainty, models of Betelgeuse might be brought into agreement with observations of $L$, $R$, and $T_{eff}$ at either the minimum-luminosity base of the giant branch or at the tip of the red supergiant branch (RSB). By invoking the constraint of the 412 d pulsation, \citet{Joyce20} show that this corresponds to the fundamental mode and derive new constraints on the radius and hence the distance and parallax of Betelgeuse, reducing the uncertainty to about 10\% ($D \approx 165^{+16}_{-8}$ pc). This restricts Betelgeuse to be near the tip of the RSB and in core helium burning. The proximity of Betelgeuse allows other key measurements since its image can be resolved \citep{Habois09}.
A particularly interesting potential constraint on Betelgeuse obtained with spatially-resolved spectra is the equatorial rotational velocity ($\sim 15$~km~s$^{-1}$) measured with $HST$ \citep{Dupree87} and ALMA \citep{Kervella18}. \blue{Our models of Betelgeuse give a critical Keplerian velocity of $\sim 65$~km~s$^{-1}$; the observed rotational velocity is thus a substantial fraction of the escape velocity.} In the first paper of {\sl The Betelgeuse Project} \citep{Wheeler17}, we showed that single star models have difficulty accounting for the rapid equatorial rotation and suggested that Betelgeuse might have merged with a companion of about 1 M$_\odot$\ to provide the requisite angular momentum to the envelope. In paper II of the series \citep{Nance18}, we explored the possibility of gleaning an understanding of the interior structure of Betelgeuse in particular and RSG in general with some of the techniques of asteroseismology. In this work, we return to the question of whether Betelgeuse might have merged with a companion, as many O and B stars are argued to do \citep{Sana12, deMink14, Dunstall15, Costa15, Renzo19, Zapartas19}. Of primary interest is how and under what circumstances, the merged system ends up rotating at $\sim 23$\% of the critical velocity.
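As a rough consistency check, using representative (not fitted) values of the stellar parameters, the critical equatorial velocity of an RSG with $M \approx 19$~M$_\odot$\ and $R \approx 900$~R$_\odot$\ is
\begin{equation}
v_{\rm crit} \simeq \left(\frac{GM}{R}\right)^{1/2} \approx 65~{\rm km~s^{-1}},
\end{equation}
so the observed equatorial velocity of $\sim 15$~km~s$^{-1}$ corresponds to $v_{\rm rot}/v_{\rm crit} \approx 0.23$, the fraction quoted above.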
While the hypothesis that Betelgeuse might have merged with a companion is credible and consistent with the {\sl a priori} estimate that Betelgeuse has a probability of $\sim 20$\% of being born in a binary system \citep{deMink14}, it raises a number of interesting issues involving common envelope evolution, the fate of the companion and its angular momentum, and effect on the structure
of the primary. A main sequence companion of about a solar mass would have a mean density of about 1 g~cm$^{-3}$. That density is characteristic of the base of the hydrogen envelope in the RSG models we consider here, implying that a companion might not be dissolved until it reached the edge of the helium core. If the
companion merged with the core, the evolution of the primary might be severely altered by anomalous burning and mixing effects, and surface abundances might be affected. The luminosity of an evolved massive star is typically a function of the mass of the helium core and rather independent of the mass of the envelope. If a companion merged with the core of Betelgeuse, then the current luminosity may be a measure of the core mass ($\sim$ 5 to 6~M$_\odot$), but the mass of the envelope would be rather unconstrained and probably smaller than the estimates given based on single--star models that attempt to reproduce the luminosity, radius and effective temperature. If there were a coalescence, there would be some mass ejected. \citet{Wheeler17} discussed the possible ramification of such a mass ejection and the possible connection to various structures in the immediate environment of Betelgeuse.
In \S2 we present the calculations we have done and discuss the
strengths and weaknesses of attempting to simulate a stellar merger
with a spherical code. Our results are presented in \S3 and a
discussion is given in \S4.
As this work was nearing completion, \citet{chatz20} presented a somewhat similar analysis and conclusions. Based on their pulsational analysis, \citet{Joyce20} also conclude that Betelgeuse underwent a merger.
\section{Computations}
\label{comp}
We used the stellar evolution code Modules for Experiments in
Stellar Astrophysics (\textsc{mesa}; \citealt{Paxton11, Paxton13, Paxton15}). The models were run using \textsc{mesa} version 10398, with the {\it pre-ccsn} test-suite model inlist. To keep the variables to a minimum, we adopted test-suite prescriptions in \textsc{mesa}, including Schwarzschild convection and an overshoot parameter of $\alpha = 0.2$. For the rotating models, we again chose the \textsc{mesa} test-suite treatment of angular momentum transport and mixing, following the prescriptions of \citet{Heger03}, with the efficiency parameters of the individual viscosity and diffusion coefficients equal to unity. We included magnetic effects as treated by the Spruit/Tayler algorithm \citep{spruit02, tayler73} in some cases, but did not include magnetic effects of the magnetorotational
instability \citep{WKC15}.
\green{We employed the ``Dutch" and ``de Jager'' mass--loss prescriptions with test-suite wind factors for ``hot'' and ``cool'' winds, respectively and the prescription for rotationally-induced mass loss of \citet{Heger2000}}. We used nuclear reaction network {\sl approx21}.
The inlist we employed is available upon request from the authors.
While recent versions of \textsc{mesa} have the capacity to treat binary evolution and common-envelope evolution \citep{Paxton15}, we reserve such studies for the future and here treat the problem in a rudimentary way that nevertheless gives some insights to the relevant physical processes. We have not attempted to treat the companion as a corporeal entity, but allow for its effects by adding the relevant mass and associated angular momentum to the outer envelope of the primary. We refer to our computational process as an ``accretion" throughout this work to distinguish it from the more complex behavior of a true merger, while recognizing its limitations that we discuss below.
We computed a range of models to explore the parameter space for a possible merger of a primary Betelgeuse-like star and a lower-mass companion. We considered a range of primaries with limited variation in companion mass to explore the effect of rotation and primary mass on the merger scenario. In this work, we focus on
two primary masses, 15 M$_\odot$ \ and 20 M$_\odot$,
but a wider range of secondary masses. The goal is to explore the effect of the epoch of accretion and companion mass on the merger scenario. In subsequent discussion we will refer to these models by the mass of the primary and the secondary, for instance to the 20 + 1 model for the 20 M$_\odot$\ primary accreting a 1 M$_\odot$\ secondary.
We considered various states of rotation of the initial ZAMS models: non-rotating, ``barely" rotating, and rotating at 200 km~s$^{-1}$, a modest fraction of the ZAMS Keplerian speed. The barely rotating models were invoked because our prescription for adding angular momentum failed in truly non-rotating initial models. These barely-rotating models functioned computationally, but had so little angular momentum that they were basically equivalent to non-rotating in terms of structure. A significant rotation near the ZAMS can affect the later core rotation structure, but has very little role to play otherwise after the model evolves up the RSG or has a merger. In this work, we only present the barely-rotating
models.
These models are initialized with an angular speed set to 0.1\% of breakup. They do not attain substantial rotation until they undergo accretion.
In the \textsc{mesa} models presented here, we have explicitly explored possible circumstances under which a RSG like Betelgeuse might rotate at a substantial fraction of current breakup velocity. We anticipated that the results might be asymptotically independent of the accreted mass since the merger product must eject sufficient mass and angular momentum to be stable. We have thus computed models of \red{15 M$_\odot$\ and} 20 M$_\odot$\ accreting companions of 1, 2, 3, 5, 7, and 10 M$_\odot$.
In the spirit of covering parameter space, we have explored some models in which the accretion occurs during the epoch of early core He burning (hereafter CHeB) as the models cross the Hertzsprung gap. While not precluded, this possibility seems less likely than encountering the secondary during the transition up the RSB. We return to this issue in \S\ref{discussion}. Other sets of models invoked accretion during the epochs of core Carbon burning (CCB) and shell Carbon burning (SCB). We thus evolved the primary star from the pre-main sequence up to a cutoff at one of three epochs: CHeB, CCB, or SCB. As a practical matter, consistent epochs of accretion were enforced for CHeB, CCB, and SCB by specifying a fixed central mass fraction threshold in the \textsc{mesa} inlist before adding mass to the model with the {\it inlist\_accreted\_material\_j} inlist. In practice, accretion was triggered for a central helium mass fraction less than about 0.95 for CHeB, for a central carbon mass fraction of less than 0.1 for CCB and for a central mass fraction of oxygen exceeding 0.7 for SCB. At the chosen epoch, we added mass and angular momentum to simulate a merger and then continued the evolution to near core collapse.
An important aspect of the problem is the deposition of the mass and orbital angular momentum of the secondary. In 3D simulations most of the initial secondary angular momentum is deposited in the outer layers of the primary envelope. That feature is captured in our calculations in at least a rudimentary way. At the chosen accretion epoch, we add to the envelope of the primary an amount of mass corresponding to the chosen secondary and an amount of angular momentum corresponding to the orbital angular momentum of the secondary with an orbit corresponding to the radius of the primary at the epoch when we begin accretion. The addition of the mass and angular momentum is done on a timescale that is long compared to the dynamical time, but short compared to the thermal or evolution times of the envelope as an approximation to the spin-up due to the plunge-in of the companion. The mass and angular momentum are added in the outer few zones by engineering the accretion rate. In \textsc{mesa}, accretion is just a variation on mass loss, invoking a change of sign. Our treatment of accretion is thus the same as the test-suite prescription for mass loss in terms of how it is handled numerically in the code. At the chosen epoch, we used the {\it accreted\_material\_j} test-suite functionality to add the companion mass and its orbital angular momentum to the primary star over a very short fraction of the model lifetime. Subsequently, we evolved the post-accretion model to near core-collapse, using an upper limit of silicon mass fraction of 0.1 as the final stopping point. The addition of mass and angular momentum in our treatment takes place at maximum over about 100 y in the models to avoid numerical convergence issues. This timescale has no physical import, but is longer than the expected plunge-in time. Clearly, we are not capturing that process in any quantitative detail. The excess mass is assimilated as \textsc{mesa} readjusts the density profile on the dynamic timescale of the envelope. The spin up induced by the addition of angular momentum enhances the normal wind mass loss (see \S\ref{mass}). Accretion during CHeB deposits the same mass as for accretion in the RSG phase, but less angular momentum due to the smaller assumed orbit at the presumed onset of merging.
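A minimal sketch of the corresponding mass and angular momentum budget is given below; the primary radius at the onset of accretion and the accretion timescale are placeholder values used only for illustration, the actual numbers being taken from the \textsc{mesa} model at the chosen epoch.
\begin{verbatim}
# Rough budget of the mass and angular momentum added to the primary
# envelope in the accretion prescription described in the text; the
# primary radius at accretion is a placeholder value.
import numpy as np

G, Msun, Rsun = 6.674e-8, 1.989e33, 6.957e10     # cgs
M1, M2 = 20.0 * Msun, 1.0 * Msun                 # primary, secondary
R1 = 800.0 * Rsun                                # placeholder radius

# Orbital angular momentum of a circular orbit with separation a = R1,
# using the reduced mass of the pair.
mu = M1 * M2 / (M1 + M2)
J_orb = mu * np.sqrt(G * (M1 + M2) * R1)

t_acc = 100.0 * 3.156e7                          # ~100 yr, as in the text
print(J_orb, M2 / t_acc, J_orb / t_acc)          # J, Mdot, Jdot
\end{verbatim}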
Our treatment is clearly a simple approximation to the complex
three-dimensional reality of the process of common envelope
evolution (CEE) and merger. \citet{Ivanova16} (see also
\citealt{MorrisPod07, Taam10, Ivanova13, IvanovaJP, Ivanova_rev16, MacLeod18, chatz20}) describe the basic phases of CEE and the mechanisms for treating it in 3D and 1D. There are three stages to the process, each with associated loss
of mass and angular momentum: 1) a precursor phase when the stars
begin to interact and co-rotation is lost; 2) a plunge-in phase with a
large rate of change of orbital separation and a timescale close
to dynamical, at the end of which most of the mass of the common
envelope (CE) is beyond the orbit of the companion; and 3) a
self-regulated slow inspiral of the companion. There are two basic
endpoints to CEE: formation of a compact binary system and merger. For mergers, \citet{IvanovaPod03} differentiate three outcomes: a quiet merger, a moderate merger, and an explosive merger. Only the first of these leaves behind an RSG and hence is pertinent to Betelgeuse.
Mass and angular momentum are lost by dynamical interaction,
outflow driven by recombination, and shrinking of the orbit.
The slow inspiral often begins with an envelope that is significantly
reduced in mass and angular momentum. In some cases, recombination
outflow can eject nearly all the envelope during the slow inspiral.
The exception to these cases of extreme mass loss is when the primary is substantially more massive than the secondary, the case we consider here for many, but not all models. For small secondary masses, the percentage of mass lost in the precursor phase and the plunge-in phase is of order $q$, the mass ratio of secondary to primary.
In their treatment of a red giant of modest mass (1.8 M$_\odot$),
\citet{Ivanova16} find that companions of mass less than 0.10 M$_\odot$,
corresponding to about 5\% of the primary mass, undergo merger. The
time to merger is about 1000 d, long compared to the dynamical
time of the CE but short compared to the thermal or evolutionary
time of the primary. While these results do not necessarily scale with
mass, this suggests that for many cases of interest here, a companion of about 1 M$_\odot$\ undergoing CEE with a primary of about 20 M$_\odot$\
is likely to quickly undergo merger while sustaining a substantial
envelope, as Betelgeuse is observed to have. As we will show below, our models retain the RSG envelope even for much more substantial secondary mass.
The plunge-in phase is expected to induce very asymmetric structures
and the slow inspiral to yield appreciable departures from spherical symmetry that can be simulated in 3D but are beyond the capacity of 1D models. In 3D there is a significant density inversion in the vicinity of the companion and rather little material near the center of mass of the binary. On the other hand, the 3D simulations often treat the companion star and the red giant core as point sources. In 1D, the primary core, at least, can be modeled in more detail. A 1D code like \textsc{mesa} conserves energy and angular momentum within expected numerical accuracy. \textsc{mesa} also automatically handles energy released by recombination as the envelope expands and the angular momentum lost in winds. In some 1D simulations of CEE, the companion is
treated in a ``thin shell" approximation. In the current work, we have
neglected even that attempt to account explicitly for the presence of
the companion. We thus avoid such complications as the orbit of
the primary core and companion about a center of mass and the
motion of that center of mass during the inspiral.
\citet{Ivanova16} argue that energy conservation during the plunge
phase cannot be properly treated in 1D codes. They recommend that
the CE should be constructed by assuming adiabatic expansion
of the envelope due to the plunge. While not capturing the full
richness of the problem, our procedure of adding mass and angular
momentum to the envelope does result in an adjustment of the
envelope structure that may be some approximation to a more
accurate treatment.
\section{Results}
\label{results}
\subsection{Evolution in the Hertzsprung-Russell Diagram}
\label{HRD}
The HRDs of all the models are qualitatively similar. Figure \ref{HRD20+1} shows the evolution in the late Hertzsprung gap and RSB for the default model with no accretion and the perturbations on the RSB as mass is added in the three accretion epochs, CHeB, CCB, and SCB, of the 20+1 model.
The accretion events result in irregular transient loci before settling down to a rather normal evolution up the RSB to the point of collapse of the models. After the transient phase, the models that accreted during CHeB and CCB are nearly indistinguishable from the default model. The model with accretion during SCB ends up in a similar region of the HRD after its post-accretion transient. The CHeB and SCB models are, respectively, slightly brighter and slightly cooler than the default and CCB models at the point of collapse. Figure \ref{HRD20+10} shows similar evolution for the 20+10 model. The larger accreted mass causes more substantial transient perturbations, but only a modest difference in the final locations of the models at the point of collapse, either from one another or from the 20+1 models. The 20+10 CHeB model ends up essentially in the same position as the 20+1 model. The 20+10 CCB model has a more complicated path, but ends up only very slightly brighter and hotter than the 20+1 model. It ends slightly brighter and cooler than the 20+10 CHeB model. The 20+10 SCB model ends up somewhat dimmer and cooler than the 20+10 CCB model. On the scale plotted, the 20+10 SCB model ends perceptibly hotter than the 20+1 SCB model.
\begin{figure}
\center
\includegraphics[width=3 in, angle=0]{Figure1.png}
\caption{Evolution in L and T$_{eff}$ in the late Hertzsprung gap and the RSB for the 20+1 model. The dashed line represents the default model that did not undergo accretion. Stars represent the three epochs of accretion. The blue (uppermost) star and line (nearly indistinguishable from the default model by the end of the evolution) correspond to accretion during core He burning (CHeB) at the base of the RSB. The orange (lowest) star and line (middle track) represent accretion during core C burning (CCB). The green star and line (rightmost star and track) represent accretion during shell C burning (SCB).
\label{HRD20+1}}
\end{figure}
\begin{figure}
\center
\includegraphics[width=3 in, angle=0]{Figure2.png}
\caption{Similar to Figure \ref{HRD20+1} but for evolution in the late Hertzsprung gap and the RSB of the 20+10 model.
\label{HRD20+10}}
\end{figure}
\subsection{Evolution of Mass}
\label{mass}
In 3D models, the surface layers are ``shock heated" and quickly ejected prior to the plunge-in. In our models, the associated spin-up of the outer envelope leads to a certain degree of mass loss that, while perhaps not quantitatively equivalent to a full 3D simulation, captures some of the essence of the interaction \citep{zhaofuller20}. Figure \ref{Mdot20+1} gives the mass-loss history, beginning near the end of the main sequence phase, of the 20+1 model corresponding to the three principal epochs, CHeB, CCB, and SCB. Figure \ref{Mdot20+10} gives similar information for the 20+10 model. At the epoch of accretion, the mass loss rate jumps by about 4 orders of magnitude, abetted by the attempt of the models to restore hydrostatic and thermal equilibrium and by the rotationally-induced mass loss.
Despite the significant perturbation to the structure of the models, the mass loss rates by the epoch of collapse are very similar for all three accretion epochs and for both the 20+1 and 20+10 models.
\begin{figure}
\center
\includegraphics[width=3 in, angle=0]{Figure3.png}
\caption{The evolution of the mass loss rate from late on the
main sequence to the point of collapse for the 20+1 model. The red dashed line beginning on the left is the default model (this line is identical to and obscured by the line for the SCB model until the very end of the evolution). The blue star and spike (leftmost) correspond to the CHeB model. The orange star and spike (middle features) correspond to the CCB model. The green star and spike (rightmost) correspond to the SCB model. The excursion at $t = 0.86\times10^7$ yr corresponds to the contraction at the Terminal Age Main Sequence.
\label{Mdot20+1}}
\end{figure}
\begin{figure}
\center
\includegraphics[width=3 in, angle=0]{Figure4.png}
\caption{Similar to Figure \ref{Mdot20+1}, but for the evolution of the mass loss rate for the 20+10 model.
\label{Mdot20+10}}
\end{figure}
Figure \ref{RSGM+1} and Figure \ref{RSGM+10} give the mass history for the default, 20+1 and 20+10 models. The excess mass accreted is rapidly expelled in a transient phase of rapid loss of mass and angular momentum as qualitatively expected for the plunge-in phase. The 20+1 model rapidly returns to the same mass and subsequent mass loss rate as the default model for the CHeB and CCB models and the mass of these models is virtually identical at the point of core collapse. The 20+1 SCB model does not have time to relax to the original track before core collapse, but nevertheless ends up with a very similar final mass, $\sim 15$ M$_\odot$, as the default, CHeB, and CCB models. The 20+10 models undergo much stronger perturbations. The final masses remain somewhat larger than the 20+1 models for all three epochs of accretion, by about 2 M$_\odot$, but nevertheless end up very similar to one another with a mass at core collapse of $\sim 17 - 18$ M$_\odot$, nearly the mass of the original ZAMS model.
\green{Even though the SCB models undergo accretion almost a million years after the CHeB models, they reach nearly the same mass due to more rapid mass shedding between accretion and core collapse for the SCB models.}
\begin{figure}
\center
\includegraphics[width=3 in, angle=0]{Figure5.png}
\caption{The evolution of the mass from late on the main sequence to the point of collapse for the 20+1 model. The red line beginning on the left is the default model. The blue star and spike (leftmost) correspond to the CHeB model. The orange star and spike (middle features) correspond to the CCB model. The green star and spike (rightmost) correspond to the SCB model.
\label{RSGM+1}}
\end{figure}
\begin{figure}
\center
\includegraphics[width=3 in, angle=0]{Figure6.png}
\caption{Similar to Figure \ref{RSGM+1}, but for the evolution of the mass for the 20+10 model. A color version is available online.
\label{RSGM+10}}
\end{figure}
Table \ref{massloss} gives the final mass and the total mass ejected from the system during the accretion for ``mergers" with a primary of 20 M$_\odot$\ occurring at the three principal epochs, CHeB, CCB, and SCB, and for a range of accreted masses. Note that the final masses, ranging from roughly 14 to 17 M$_\odot$, are remarkably independent of the mass of the secondary and the epoch at which accretion occurs.
The mass lost from the system, ranging from roughly \red{6} to \red{13} M$_\odot$\ in Table \ref{massloss} is a combination of the loss of mass accreted plus loss of mass from the primary itself. The latter is due to winds prior to the accretion event and then the rotationally-induced mass loss after the accretion. While it is difficult to isolate the loss of accreted mass from the mass lost directly from the primary, Table \ref{massloss} shows that the net mass loss exceeds the accreted mass so that some mass must be lost from the primary. With the assumption that none of the accreted mass is retained by the primary, the mass lost from the primary is 20 M$_\odot$\ minus $M_f$, or 3 - 5 M$_\odot$.
\begin{table}
\caption{\green{Final Mass ($M_f$) and Total Mass Ejected ($M_{ej}$) from the System} in M$_\odot$\ for Models with ZAMS Mass 20 M$_\odot$\ as a function of accreted mass ($M_2$) in M$_\odot$. }
\label{massloss}
\begin{tabular}{lcccccc}
\hline
 & \multicolumn{2}{c}{He core (CHeB)} & \multicolumn{2}{c}{C core (CCB)} & \multicolumn{2}{c}{C shell (SCB)} \\
Secondary & $M_f$ & $M_{ej}$ & $M_f$ & $M_{ej}$ & $M_f$ & $M_{ej}$ \\
\hline
1 & 15.09 & 5.91 & 14.76 & 6.24 & 15.04 & 5.96 \\
\\
2 & 14.98 & 7.02 & 14.56 & 7.44 & 14.37 & 7.63 \\
\\
3 & 14.70 & 8.30 & 14.41 & 8.59 & 14.25 & 8.75 \\
\\
5 & 14.98 & 10.02 & 14.67 & 10.33 & 14.53 & 10.47 \\
\\
7 & 15.66 & 11.34 & 15.36 & 11.64 & 15.45 & 11.55 \\
\\
10 & 17.20 & 12.80 & 16.85 & 13.15 & 16.95 & 13.05 \\
\hline
\end{tabular}
\end{table}
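As a simple consistency check on the mass budget implied by Table \ref{massloss}, the sketch below recomputes $M_{ej}$ and the upper limit on the mass lost from the primary itself under the assumption stated above that none of the accreted mass is retained; the numbers are taken from the CCB column of the table.
\begin{verbatim}
# Mass budget implied by Table 1 (CCB column), assuming, as in the text,
# that none of the accreted mass is retained by the primary.
M_ZAMS = 20.0                                           # Msun
secondaries = [1, 2, 3, 5, 7, 10]                       # Msun
M_f_ccb = [14.76, 14.56, 14.41, 14.67, 15.36, 16.85]    # final masses from Table 1

for m2, mf in zip(secondaries, M_f_ccb):
    m_ej = M_ZAMS + m2 - mf          # total mass ejected from the system
    m_primary_lost = M_ZAMS - mf     # upper limit on mass lost from the primary itself
    print(f"M2={m2:2d}: M_ej={m_ej:5.2f}, primary loss <= {m_primary_lost:4.2f} Msun")
\end{verbatim}
The primary's own loss of roughly 3--6 M$_\odot$\ recovered this way is consistent with the estimate quoted in the text.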
\subsection{Evolution of Angular Momentum and Angular Velocity}
\label{angmom}
During the accretion and redistribution process, some angular momentum
is lost to the surroundings in the rotation-enhanced wind, and some is
retained to diffuse inward toward the primary core.
The angular momentum that is retained is redistributed by an inward diffusive wave of angular momentum. The profiles of the specific angular momentum and angular velocity quickly evolve to stable forms delineated by an inward propagating front with the specific angular momentum increasing outward beyond the front and the angular velocity being nearly constant.
\begin{figure}
\center
\includegraphics[width=3 in, angle=0]{Figure7.png}
\caption{The distribution of $j_{rot}$, $s$, and composition just prior to accretion for the 20 M$_\odot$\ model in the core carbon burning (CCB) phase.
\label{20jsomegacomppreCCB}}
\end{figure}
\begin{figure}
\center
\includegraphics[width=3 in, angle=0]{Figure8.png}
\caption{The distribution of $j_{rot}$, $s$, and composition
near the point of core collapse for the default 20 M$_\odot$\ model that did not undergo accretion.
\label{20jsomegacompdefault}}
\end{figure}
Figure \ref{20jsomegacomppreCCB} gives the distribution with mass of the specific angular momentum, $j$, the specific entropy, $s$, and the composition distribution for the 20 M$_\odot$\ model just before accretion in the carbon core-burning phase. Figures \ref{20jsomegacompdefault}, \ref{20+1jsomegacompCCB} and \ref{20+10jsomegacompCCB} give the corresponding distributions near the epoch of collapse for the default model, the 20+1 model and the 20+10 model, respectively. Prior to the accretion phase, the model has an inner homogeneous 5 M$_\odot$\ core composed primarily of oxygen, a helium shell extending from 5 to about 7 M$_\odot$, a shell composed of roughly 50\% H and He, and the outer RSG envelope. After accretion, the angular momentum per unit mass and the angular velocity in the outer envelope jump substantially.
A few years after accretion (arbitrarily set by our numerics) the composition distribution is virtually unchanged (there are some changes in detail), but the ingoing wave of angular momentum has propagated to the boundary between the outer envelope and the H/He shell. By the epoch of collapse, the angular momentum distribution in the outer envelope has scarcely changed. The wave of angular momentum has swept through the H/He shell, but is halted at the outer boundary of the He shell at 7 M$_\odot$\ for both the 20+1 and the 20+10 models.
\begin{figure}
\center
\includegraphics[width=3 in, angle=0]{Figure9.png}
\caption{The distribution of $j_{rot}$, $s$, and composition
near the point of core collapse for the 20 M$_\odot$\ model that accreted 1 M$_\odot$\ during core carbon burning (CCB).
\label{20+1jsomegacompCCB}}
\end{figure}
\begin{figure}
\center
\includegraphics[width=3 in, angle=0]{Figure10.png}
\caption{The distribution of $j_{rot}$, $s$, and composition
near the point of core collapse for the 20 M$_\odot$\ model that accreted \red{10} M$_\odot$\ during core carbon burning (CCB).
\label{20+10jsomegacompCCB}}
\end{figure}
All the models have inner regions of negative gradient in $j$ in regions of sharp composition gradients. These must be stabilized against the Rayleigh instability by the associated composition gradients. We have not investigated this condition in detail.
The net effect is that both the total mass of the model and the mass of the inner core are scarcely changed whether 1 or 10 M$_\odot$\ is accreted. All the models, CHeB, CCB, and SCB, end up with an oxygen core of about 5 M$_\odot$\ and about 7 M$_\odot$\ of helium and heavier elements beneath the envelope. The final angular momentum distribution of the outer envelope is very similar for the 20+1 and the 20+10 models. \green{Both accreting models have a substantially larger envelope angular momentum than the default model.}
Figures \ref{20+1jsomegacompCCB} and \ref{20+10jsomegacompCCB} show, however, that the inner core composition structure \green{near core collapse} is somewhat different. Inspection of the models shows that there is little difference among the final models for the default model and the 20+1 and 20+10 CHeB models. \green{The structure of the helium-rich shell is virtually identical. There are small quantitative differences in the distribution of oxygen in the inner oxygen core. There are also small differences in the extent of the carbon-rich outer layer of the oxygen core that lies immediately beneath the helium shell, with slightly thicker shells for the accreting models. The CCB and SCB models show similar behavior. Most of the models show a small carbon abundance but virtually no oxygen in the helium shell. The exceptions are the SCB models. The 20+1 SCB model shows a small oxygen abundance equivalent to that of carbon in the helium shell. The 20+10 SCB model reveals an enhanced abundance of both carbon and oxygen in the helium shell compared to the default model, with 36\% carbon and 28\% oxygen by mass at the expense of helium, which has decreased to 49\%.}
The implication is that the inner composition structure of Betelgeuse could be rather different depending on the mass accreted with basically no indication reflected in the outer, directly observable structure.
\citet{Ivanova16} presented a model of a primary of 1.8 M$_\odot$\ and secondary of 0.1 M$_\odot$ (model M10; their figure 7). While the mass scale is smaller than we consider here, the mass ratio for the 20+1 models, $\sim 0.05$, is about the same.
The angular velocity as a function of mass 50 days after the plunge-in is basically flat throughout the model. The value of the angular velocity, $\sim 3\times 10^{-7}$ rad s$^{-1}$, is interestingly, if fortuitously, close to the value we find. The peak value of the angular momentum per unit mass is about a factor of 30 less than we find. The flat angular velocity profile found in the 3D simulations seems to arise naturally in our \textsc{mesa} simulations as well.
The significant departures in behavior between model M10 and
the results we present in Figures \ref{20+1jsomegacompCCB} and \ref{20+10jsomegacompCCB} are found in the innermost and the outermost regions. \citet{Ivanova16} do not consider the inner core, so they do not explore the distribution of angular momentum we depict in the core. On the other hand, \citet{Ivanova16} find a distinct decrease in both the specific angular momentum and the angular velocity in the outer 0.1 M$_\odot$\ of their models that our models do not reveal. This difference probably arises in the loss of mass and angular momentum in the dynamical plunge-in phase that we do not treat accurately.
While we are clearly not reproducing the interaction and plunge-in and associated angular momentum ejection properly, we do seem to capture many of the major qualitative aspects of the acquisition and redistribution of angular momentum due to merger.
\subsection{Evolution of Entropy}
\label{entropy}
\citet{Ivanova16} give an extensive discussion of the treatment of entropy in CEE.
They argue that 1D stellar codes should add the energy as mechanical energy rather than ``heat" that moves the material to a higher adiabat.
The entropy determines the location at which the recombination energy is able to overcome the binding energy. For this reason, the entropy generation computed in 1D codes is likely to predict different outcomes for 1D rather than 3D CE evolution. \citet{chatz20} find relatively little heating effects in their 3D merger simulation of Betelgeuse. We note, however, that heating during merger can lead to non-linear envelope pulsations and to potentially large mass loss \citep{Clayton17}. This aspect is beyond the scope of the current paper, but deserves closer attention.
Since we do not explicitly treat the secondary, we cannot address many of these issues directly, but we can examine the behavior of the entropy in our models to see where our models agree or disagree with other treatments. We neglect the generation of entropy in the merger and plunge-in phase, but our simulations can in principle produce some shear dissipation and entropy in the outer layers by treating the effective diffusion constant and viscosity associated with the Kelvin-Helmholtz instability. We may also capture the generation of some entropy from the flattening of the rotational profile.
Inspection of our models (see Figures \ref{20jsomegacompdefault}, \ref{20+1jsomegacompCCB} and \ref{20+10jsomegacompCCB}) shows that, in the way we have treated the problem, there is very little perturbation to the entropy of the outer layers. The specific entropy has almost the same value before and after the accretion phase, and up to the epoch of core collapse.
\subsection{Recombination}
\label{recombination}
Hydrogen and helium recombination
can help to trigger envelope instability depending on where and when the energy is released. The time-scale of recombination runaway can be up to several hundred days and gets longer as the mass of the companion decreases. In such cases, radiative losses can become important so that 3D simulations that lack that feature are no longer appropriate. For all their limitations, 1D codes like \textsc{mesa} can handle this aspect of the physics.
In our calculations, we just add angular momentum, not heat. This results in a change in the distribution of kinetic energy in the envelope that is redistributed as the calculation proceeds. If this process leads to expansion of part of the envelope, triggering recombination, then \textsc{mesa} should compute the recombination self-consistently. It is not clear that this method properly captures the reality that would accompany the full 3D situation with radiative losses and recombination. We consider cases where the accreted mass is modest, which might correspond to long recombination timescales, but also cases in which considerable mass is accreted. For stability, our numerical process requires that the mass be added on timescales that may be long compared to expected recombination timescales. Given our somewhat artificial means of adding mass and angular momentum, and the transient large perturbation to the envelope structure when large masses are accreted, \textsc{mesa} should again compute the recombination self-consistently during the transient adjustment phase as hydrostatic equilibrium is maintained and thermal equilibrium is restored.
\subsection{Magnetic Fields}
\label{mag}
As noted in \S\ref{comp}, we included magnetic effects as treated by the \textsc{mesa} Spruit/Tayler algorithm in some cases, but did not include magnetic effects of the magnetorotational instability (MRI) \citep{WKC15}. The omission of the latter will undoubtedly alter the quantitative, if not qualitative results. The Spruit/Tayler mechanism gives results that typically weight the radial component, $B_r$, orders of magnitude less than the toroidal component, $B_\phi$. The magnetorotational instability tends to give the radial component about 20 per cent of the toroidal component. Another important caveat is that \textsc{mesa} computes the magnetic field structure based on the instantaneous structure of the model. In reality, the field only decays on a dissipation timescale that might in some circumstances be long compared to the evolutionary timescales. This would lead to a fossil magnetic field in a region that made a transition from being unstable to stable to the Spruit/Tayler instability. \textsc{mesa} has no means to treat the existence and decay of such fossil fields. The magnetic structure we compute is thus interesting, but should not be given any quantitative weight.
\green{Figures \ref{Bdefault}, \ref{B1}, and \ref{B10} show the final field configuration for the default model and the 20+1 and 20+10 models that accreted during core carbon burning, respectively. }
By the end of the calculation, the accreting models showed spikes of modest field, $\sim 1$ G in both $B_r$ and $B_\phi$, in the very outer layers where the models show a generic negative gradient in specific entropy. \green{The default model showed similar spikes, but of considerably smaller amplitude.} Of more interest is the substantial and complex field distribution in the inner core generated as various components burn out and the core contracts and spins up generating shear and magnetic fields.
\green{These effects would be} amplified if the initial rotation were larger on the ZAMS than we assume here.
\green{All three models show a more substantial field in the outer part of the helium shell, reaching up to the base of the hydrogen envelope. The peak fields are of order 1 G and 1000 G for the radial and toroidal fields, respectively, with considerable variation with radius that is likely to be affected by issues of numerical resolution. All three models then have an inward gap where the fields are very small. The fields are then large, but variable, in the innermost layers of the oxygen core. The radial fields peak at $\sim$ 1000 G and the toroidal fields at $\sim 10^6$ to $10^7$ G. In these models the fields peak off center and the toroidal field declines to about 1 G in the center. The accretion appears to have a quantitative, but not qualitative effect on the field strength and distribution just prior to collapse. Subsequent core collapse by a factor of $\sim 100$ in radius would amplify the field by compression alone by a factor of $\sim 10^4$. The resulting field of $\sim 10^{11}$ G would not be dynamically significant, but would give ample seed field for growth of the field in the proto-neutron star by the MRI \citep{Akiyama03, Obergaulinger09, Moesta18}.}
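For reference, the compressional amplification quoted above is simply flux freezing, $B \propto r^{-2}$, applied to the quoted radial compression factor and peak pre-collapse toroidal field:
\[
B_{\rm compressed} \sim B_{\rm seed}\left(\frac{r_{\rm i}}{r_{\rm f}}\right)^{2} \sim 10^{7}~{\rm G} \times (100)^{2} \sim 10^{11}~{\rm G}.
\]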
\begin{figure}
\center
\includegraphics[width=3 in, angle=0]{Figure11.png}
\caption{The distribution of radial (top panel) and toroidal (middle panel) magnetic field, and composition (bottom panel) near the point of core collapse for the model with 20 M$_\odot$\ that did not undergo accretion.
\label{Bdefault}}
\end{figure}
\begin{figure}
\center
\includegraphics[width=3 in, angle=0]{Figure12.png}
\caption{\green{Similar to Figure \ref{Bdefault} but} for the model with 20 M$_\odot$\ accreting 1 M$_\odot$\ of mass during central carbon burning.
\label{B1}}
\end{figure}
\begin{figure}
\center
\includegraphics[width=3 in, angle=0]{Figure13.png}
\caption{\green{Similar to Figure \ref{Bdefault} but} for the model with 20 M$_\odot$\ accreting 10 M$_\odot$\ of mass during central carbon burning.
\label{B10}}
\end{figure}
\subsection{\blue{Insensitivity of Final Equatorial Velocity to Accreted Mass }}
\label{finalv}
The original motivation of \citet{Wheeler17} for hypothesizing that Betelgeuse might have merged with a companion was the difficulty of accounting for the nominal currently-observed equatorial rotation velocity, $\sim15$ km~s$^{-1}$, allowing for inclination. A companion mass of $\sim 1$ M$_\odot$\ was estimated from simple arguments based on conservation of angular momentum.
If a merger occurred in Betelgeuse, the product must have settled into a state for which the rotation is sub-Keplerian. This global criterion is independent of the masses of the primary and secondary involved in the merger. The implication is that the loss of mass and angular momentum must adjust to meet this criterion rather independently of the masses involved and the epoch of accretion. To explore this notion, we investigated the mergers of a range of primary and secondary masses. Here we concentrate on primaries of 15 and 20 M$_\odot$, but consider a range of secondaries up to a rather extreme 10 M$_\odot$.
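To set the scale of ``sub-Keplerian" here, the following sketch computes a simple Keplerian breakup velocity, ignoring radiation-pressure corrections; the mass and radius used are illustrative assumptions, not model output.
\begin{verbatim}
import numpy as np

G     = 6.674e-11   # m^3 kg^-1 s^-2
M_sun = 1.989e30    # kg
R_sun = 6.957e8     # m

def v_crit_kms(m_msun, r_rsun):
    """Keplerian (breakup) equatorial velocity at the stellar surface, in km/s."""
    return np.sqrt(G * m_msun * M_sun / (r_rsun * R_sun)) / 1e3

# Illustrative numbers only: a ~16 Msun RSG with R ~ 900 Rsun has
# v_crit ~ 60 km/s, so an equatorial velocity of ~15 km/s is roughly a
# quarter of breakup, comparable to the ~0.23 quoted for Betelgeuse.
v_c = v_crit_kms(16.0, 900.0)
print(f"v_crit ~ {v_c:.0f} km/s, v_rot/v_crit ~ {15.0 / v_c:.2f}")
\end{verbatim}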
\blue{In Figures \ref{vrotbare} and \ref{vrotcrit}, we explicitly compare the final equatorial rotation velocity and its ratio with the critical rotation velocity, respectively. We find that, broadly, the final rotational velocities of the models were \green{rather} independent of the companion mass accreted. \green{For} a given accretion epoch, the final rotational velocities for the 15 M$_\odot$\ primary models were typically higher than those of the 20 M$_\odot$\ primary models. \green{The results for the CHeB and CCB models were very similar. The final velocities for the SCB models were substantially higher, presumably due to the smaller time from accretion to collapse that prevented more loss of mass and angular momentum (see Figures \ref{RSGM+1} and \ref{RSGM+10}). } }
\green{Taking the results of our models at face value and interpolating in Figure \ref{vrotbare}, the rotational velocity attributed to Betelgeuse, $\sim 15$ km~s$^{-1}$, could be reproduced by a model with a primary of somewhat less than 15 M$_\odot$\ accreting between 1 and 10 M$_\odot$\ in the CHeB and CCB epochs. This velocity might also be attained by accreting any of a broad range of masses onto a primary of somewhat more than 20 M$_\odot$\ in the later SCB epoch. Accretion of as much as 10 M$_\odot$\ at the SCB epoch would require an even more massive primary. Similar conclusions are reached by examination of Figure \ref{vrotcrit} where the ``observed" ratio of equatorial rotational velocity to the critical equatorial rotational velocity is $\sim 0.23$.
}
\begin{figure}
\center
\includegraphics[width=3.5 in, angle=0]{Figure14.png}
\caption{The equatorial rotational velocity \green{as a function of} the accreted companion mass for the set of models containing 15 and 20 M$_\odot$\ primaries accreting at core Helium burning (red), core Carbon burning (green), and shell Carbon burning (blue).}
\label{vrotbare}
\end{figure}
\begin{figure}
\center
\includegraphics[width=3.5 in, angle=0]{Figure15.png}
\caption{The ratio of equatorial rotational velocity to the critical equatorial rotational velocity \green{as a function of} the accreted companion mass for the set of models containing 15 and 20 M$_\odot$\ primaries accreting at core Helium burning (red), core Carbon burning (green), and shell Carbon burning (blue).}
\label{vrotcrit}
\end{figure}
\section{Discussion and Conclusions}
\label{discussion}
We have used \textsc{mesa} to approximately simulate the merger of a massive primary with secondaries of a range of masses in an attempt to better understand the apparent equatorial rotation velocity of Betelgeuse, $\sim 15$ km~s$^{-1}$, $\approx 0.23$ of the Keplerian velocity. We simulate a merger by adding to the envelope of the primary the mass of the secondary and angular momentum corresponding to the orbital angular momentum of the secondary at the radius of the primary at the epoch when we begin accretion. We consider accretion during core helium burning, core carbon burning and shell carbon burning and compute the resulting models to near core collapse. \citet{Joyce20} conclude that Betelgeuse merged prior to the later carbon-burning phases. Our core helium burning models might thus be most pertinent to Betelgeuse, but our other models might pertain to other cases of RSG merger. We discuss the limitation of tackling a manifestly 3D problem with a 1D code, including treatment of the entropy distribution in the envelope and the effects of recombination on the energetics of envelope loss.
While the final mass of the primary and the equatorial velocity depend somewhat on circumstances, we find that with the assumptions we have made they are remarkably insensitive to the mass of the secondary and the epoch of accretion. For a 20 M$_\odot$\ primary, the final mass is \green{ 14 - 17} M$_\odot$, nearly independent of the mass of the secondary or the epoch of accretion. The equatorial velocity is consistent with the observed value within a factor of a few. The results for accreting 1 M$_\odot$\ are not drastically different from those for accreting 10 M$_\odot$. \green{ Our models suggest that the rotation of Betelgeuse could be consistent with a primary of somewhat less than 15 M$_\odot$\ accreting between 1 and 10 M$_\odot$\ in the CHeB and CCB epochs. The observed equatorial velocity might also be attained by accreting a broad range of masses onto a primary of somewhat more than 20 M$_\odot$\ in the later SCB epoch. } Although our treatment of the post-merger problem with \textsc{mesa} is rather different from that of \citet{chatz20}, we note that the results for the final equatorial rotational velocity (their Table 1) are very similar to ours. This gives us some confidence that this quantity is somewhat robust against the details of the merger process and depends primarily on a global quantity such as pre-merger orbital angular momentum.
For our study to have any relevance to Betelgeuse, it is important that the structure remain that of an RSG after the proposed merger. As mentioned in \S \ref{comp}, a ``quiet merger" can leave behind an RSG, depending on pre-merger conditions. \citet{IvanovaPod03} suggest that this condition favors secondary masses $>2$ M$_\odot$\ and a primary close to carbon ignition so that strong gradients inhibit core/envelope mixing.
To account for the circumstellar nebular rings, many studies of the mergers of massive stars have focused on the prospect that the progenitor of SN~1987A may have undergone a merger \citep{MorrisPod07}. Merger models can also account for why the progenitor was a blue rather than red supergiant by invoking mixing of helium from the core into the outer envelope \citep{MenonHeger17}. Mixing can happen when the secondary nears the helium core, fills its Roche lobe, and produces a mass transfer stream that can penetrate the core \citep{IPS02}. The depth of penetration of the stream into the core depends on the stream direction, entropy, width, and angular momentum, the rotation orientation and amplitude of the secondary, on the density structure and relative rotation of the core, and on fluid instabilities. In the case of Betelgeuse, a contrasting conclusion applies. Betelgeuse is still a red supergiant. If one accepts our basic {\it ansatz} that a merger is required to account for the observed rotational velocity of Betelgeuse, then it follows that a merger did not produce a compact blue envelope and thus, by the arguments of \citet{IPS02} and \citet{MenonHeger17}, little to no helium could have been mixed outward from the core, consistent with our particular simulations. The modeling of a putative Betelgeuse merger by \citet{chatz20} concluded that the plume from the disrupted secondary would not penetrate the helium core and induce substantial helium mixing according to the prescription of \citet{IPS02}. Mixing may be more likely for more massive secondaries, so our results may be less reliable for larger mass secondaries. Plume mixing is a complex hydrodynamical problem that deserves more study if we are to understand both Betelgeuse and SN~1987A as products of massive star mergers.
For our accreting models, a wave of angular momentum is halted at the composition boundary at the edge of the helium core leaving behind an envelope of constant angular velocity and a monotonically rising angular momentum per unit mass. \green{The composition distribution of the inner core can be slightly affected by the accretion of a companion of large mass. Accretion has little effect on the production of magnetic fields by the Spruit/Tayler mechanism.}
Thus while the inner structure might be somewhat perturbed by accretion of substantial mass, there may be little on the outside to indicate that the accretion occurred. \green{Our models provide a reasonable ``natural" explanation for why Betelgeuse has a large, but sub-Keplerian equatorial velocity.} Our results do not prove, but do allow that Betelgeuse might have merged with a moderately massive companion. Betelgeuse might look substantially the same whether it merged with a 1 or 10 M$_\odot$\ companion.
While we have run all of our models to near core collapse and examined the conditions there, the pertinent question is the structure and condition of Betelgeuse as we see it today, gracing Orion. While uncertainties in the distance remain troubling, Betelgeuse is most likely near the tip of the RSB. Since core helium burning is far longer than subsequent burning phases, Betelgeuse is most likely in core helium burning, a point reinforced by \citet{Joyce20}. For our models, once the transient phase of accretion has settled down and substantial mass and angular momentum have been ejected, there is rather little external difference in models in late core helium burning and subsequent phases.
In \citet{Wheeler17}, we noted that a merger event might have some relation with the interstellar shells of higher density in the vicinity of Betelgeuse. The strangely linear feature about 9' away might be related to the square axisymmetric circumstellar nebula recently discovered around the B9 Ia star HD93795 by \citet{Gvar20}. The prominent bow shock at $\sim 7'$ in the same direction indicates a peculiar velocity with respect to the local standard of rest of $v \approx 25$ km~s$^{-1}$\ \citep{2008AJ....135.1430H} or perhaps as much as 35 km~s$^{-1}$\ \citep{vanloon2013}. This number is of relevance because some hypothesize that this rather high peculiar velocity arises because Betelgeuse is a runaway star, having been ejected when a binary companion underwent a supernova explosion \citep{blaauw61,vanloon2013}. If a previous binary companion exploded, then it clearly could not have merged with the current Betelgeuse. Work on the kinematic effects of supernovae in massive star binary systems tends to discourage this conjecture. \citet{Renzo19} confirm that of order 20 - 50\% of massive star binaries merge, as we explore here. They also find that by far the largest fraction of binaries disrupted by the collapse and explosion of the primary result in ``walkaway" rather than ``runaway" stars. The velocity distribution of the ejected companion peaks at about 6 km~s$^{-1}$. For secondaries more massive than 15 M$_\odot$, as likely applies to Betelgeuse, only $\sim 0.5\%$ have velocities of 30 km~s$^{-1}$\ and above, as appropriate to Betelgeuse. These results suggest that, while non-zero,
the likelihood that the space motion of Betelgeuse resulted from the previous explosion of a companion is small. An alternative is that the proper motion of Betelgeuse arises from stellar dynamics in its natal cluster \citep{poveda67,ohkroupa16,schoettler19}. The results depend on assumptions about primordial binaries, among other things, but the general results are roughly the same. It is easier to generate walkaway stars than runaway stars. A runaway binary is likely to be rare, but not precluded.
The origin of the space motion of Betelgeuse is thus one more fascinating open question about this tantalizing star. Whether Betelgeuse attained its proper motion from the explosion of a companion or from cluster dynamics, if it emerged as a single star then the observed equatorial velocity remains an issue. Even if spun up on the ZAMS, its rotation on the RSB would be slow \citep{Wheeler17}. A possible way to account for both the equatorial velocity and the space motion would be to invoke cluster dynamics, ejection of a binary of which the star we currently observe as Betelgeuse was the primary, and a subsequent merger to account for the equatorial velocity. This is, admittedly, an improbable string of events. \citet{ohkroupa16} find that a majority of ejected massive binaries have a period shorter than
$10^5$ days. Our merger models have a typical presumed orbital period of about 30 years or $10^4$ days. Having a rather massive companion might increase the likelihood that the binary remains intact upon ejection from the natal cluster. Our current results allow for that possibility. We note that while Betelgeuse may have moved hundreds of pc during its main sequence lifetime, it is expected to have moved only $\sim 2$ pc during the 100,000 years or so it has been in core helium burning as a RSG.
While the overall goal of the Betelgeuse Project is to determine the current evolutionary phase and ultimate fate of Betelgeuse, this work has brought us no closer to a practical observational test of those important aspects. The notion that Betelgeuse may have undergone a merger remains viable.
\section*{Acknowledgments}
We are grateful to Natasha Ivanova for discussions of common envelope evolution and to the Aspen Center for Physics for providing the environment to do so. We also thank Manos Chatzopoulos, Juhan Frank, Meridith Joyce, and Andrea Dupree and the Month of Betelgeuse (MOB) team for discussions of Betelgeuse and mergers. We are especially thankful for the ample support of Bill Paxton and the MESA team. This research was supported in part by the Samuel T. and Fern Yanagisawa Regents Professorship in Astronomy and by NSF AST-1109801 and NSF AST-1813825.
\software{MESA (Paxton et al. 2011, 2013, 2015, 2018)}.
\section{Introduction}
Cardiac magnetic resonance (CMR) imaging provides high quality images of the heart and is therefore frequently used to assess cardiac condition. Clinical measures of interest include left ventricular (LV) volume and myocardial thickness, which can be calculated from a prior segmentation of LV and myocardium. In recent years, convolutional neural networks (CNNs) have been shown to outperform traditional model-based segmentation techniques and quickly became the method of choice for this task \cite{Bernard2018}. However, since CNNs are trained to predict a class probability (i.e. LV or background) for each voxel, they lack explicit shape constraints, occasionally resulting in unrealistic segmentations with missing or disconnected regions and hence requiring postprocessing. In this respect, several authors have proposed to integrate a shape prior in their CNN. Examples are atlases \cite{Duan2019,Zotti2019} or hidden representations of anatomy \cite{Oktay2018,Painchaud2019,Yue2019}. In contrast to CNNs, Active Shape Models (ASM) \cite{Cootes1995} construct a landmark-based statistical shape model from a training dataset and fit this model to a new image using learned local intensity models for each landmark, yielding patient-specific global shape coefficients. In this paper, we combine the advantages of both methods: (1) a CNN is used to extract complex appearance features from the images and (2) shape constraints are imposed by regressing the shape coefficients of the statistical model. Compared to Attar et al. \cite{Attar2019}, who used both CMR images and patient metadata to directly predict the coefficients of a 3D cardiac shape, we enforce robustness of coefficient prediction by simultaneously performing semantic segmentation. A similar approach combining segmentation with regression was used by Vigneault et al. \cite{Vigneault2018} to perform pose estimation of LV, by Gessert and Schlaefer \cite{Gessert2019} and by Tilborghs and Maes \cite{Tilborghs2019} to perform direct quantification of LV parameters and by Cao et al. \cite{Cao} for simultaneous hippocampus segmentation and clinical score regression from brain MR images. In our approach, the semantic segmentation is performed by regression of signed distance maps, trained using a loss function incorporating both distance and overlap measures. Previous methods to incorporate distance losses include the boundary loss of Kervadec et al. \cite{Kervadec2019}, the Hausdorff distance loss of Karimi and Salcudean \cite{Karimi2020} and the method of Dangi et al. \cite{Dangi}, who used separate decoders for the prediction of distance maps and segmentation maps. In contrast to Dangi et al., our CNN only generates a distance map, while the segmentation map is directly calculated from this distance map, guaranteeing full correspondence between the two representations.
\section{Methods}
\subsection{Shape model}
The myocardium in a short-axis (SA) cross-section is approximated by a set of $N$ endo- and epicardial landmarks radially sampled over uniform angular offsets of $2\pi/N$ rad, relative to an anatomical reference orientation $\theta$. From a training set of images, a statistical shape model representing the mean shape and the modes of variation is calculated using principal component analysis. For each image $i$, the myocardial shape $\textbf{p}_i$ is first normalized by subtracting the LV center position $\textbf{c}_i$ and by rotating around $\theta_i$, resulting in the pose-normalized shape $\textbf{s}_i$:
\begin{equation}
\label{eq:pointnorm}
\begin{bmatrix}
\textbf{s}_{i,x}\\
\textbf{s}_{i,y}
\end{bmatrix}
=
\begin{bmatrix}
\cos(\theta_i) & \sin(\theta_i)\\
-\sin(\theta_i) & \cos(\theta_i)
\end{bmatrix}
\begin{bmatrix}
\textbf{p}_{i,x}-\textbf{c}_{i,x}\\
\textbf{p}_{i,y}-\textbf{c}_{i,y}
\end{bmatrix}
\end{equation}
Representing the shapes as vectors $\textbf{s}_i = (x_{1},...,x_{2N},y_{1},...,y_{2N})$, the mean shape $\overline{\textbf{s}}$ is calculated as $\overline{\textbf{s}} = \frac{1}{I}\sum_{i=1}^{I} \textbf{s}_i$ with $I$ the number of training images. The normalized eigenvectors $\textbf{V} = \{\textbf{v}_1,...,\textbf{v}_m,...,\textbf{v}_{4N}\}$ and corresponding eigenvalues $\lambda_m$ are obtained from the singular value decomposition of the centered shapes $\textbf{s}_i-\overline{\textbf{s}}$. The shape of the myocardium is approximated by the $M$ first eigenmodes:
\begin{equation}
\label{eq:pca}
\textbf{s}_i \approx \overline{\textbf{s}} + \sum_{m=1}^{M} b_{i,m} \cdot \sqrt{\lambda_m}\cdot \textbf{v}_m
\end{equation}
Using this definition, the variance of the distribution of shape coefficients $b_m$ is the same for every mode $m$.
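The following minimal sketch (ours, not the authors' implementation) illustrates this shape model construction with NumPy, assuming the pose-normalized training shapes are stacked as rows; the exact eigenvalue normalization convention is an assumption.
\begin{verbatim}
import numpy as np

def build_shape_model(S):                      # S: (I, 4N) array of normalized shapes
    s_mean = S.mean(axis=0)
    X = S - s_mean                             # centered shapes
    # SVD of the centered data; rows of Vt are the unit-norm eigenvectors v_m
    U, sigma, Vt = np.linalg.svd(X, full_matrices=False)
    eigvals = sigma**2 / S.shape[0]            # eigenvalues lambda_m (convention: divide by I)
    return s_mean, Vt, eigvals

def reconstruct(s_mean, Vt, eigvals, b, M=12):
    # s ~ mean + sum_m b_m * sqrt(lambda_m) * v_m  (Eq. 2)
    return s_mean + (b[:M] * np.sqrt(eigvals[:M])) @ Vt[:M]

def coefficients(s, s_mean, Vt, eigvals, M=12):
    # project a pose-normalized shape onto the first M modes
    return (Vt[:M] @ (s - s_mean)) / np.sqrt(eigvals[:M])
\end{verbatim}
With this scaling, the coefficients $b_m$ have unit variance over the training set for every mode, which is what makes a single coefficient-wise MSE loss well balanced across modes.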
\subsection{CNN}
A schematic representation of the CNN architecture is shown in Fig. \ref{fig:network}. It has three outputs: (1) $M$ predicted shape coefficients $\{b_{1,p},...,b_{M,p}\}$, (2) pose parameters $\{\theta_p,c_{x,p},c_{y,p}\}$ and (3) distance maps $D_p$, from which the segmentation is derived. Semantic segmentation is performed by the regression of distance maps $D$. $D$ is an image representing the Euclidean distance $d$ between pixel position and contour. The sign is negative for pixels inside structure $S$:
\begin{equation}
D(x)=
\begin{cases}
-d(x), & \text{if } x \in S\\
d(x), & \text{if } x \notin S
\end{cases}
\label{eq:distmap}
\end{equation}
For both endo- and epicardium, separate distance maps $D_{endo}$ and $D_{epi}$ are created.
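A minimal sketch of how such signed distance maps can be computed from binary masks is given below; it uses the Euclidean distance transform of SciPy and is accurate up to the pixel-center convention for where the contour lies.
\begin{verbatim}
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """Signed Euclidean distance to the contour of a binary mask (Eq. 3):
    negative inside the structure, positive outside."""
    inside  = distance_transform_edt(mask)          # distance to background, inside S
    outside = distance_transform_edt(1 - mask)      # distance to S, outside S
    return outside - inside

# D_endo and D_epi are then built from the binary endo- and epicardial masks:
# D_endo = signed_distance_map(endo_mask); D_epi = signed_distance_map(epi_mask)
\end{verbatim}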
\begin{figure*}[tb]
\centering
\includegraphics[width =0.99\textwidth]{ArchitectureCNNestimateShapeModel.pdf}
\caption{Proposed CNN architecture with three outputs: shape $\{b_{1,p},...,b_{M,p}\}$, pose $\{\theta_p,c_{x,p},c_{y,p}\}$ and distance maps ($D_p$). The details of residual (Res), downsampling Res (ResD) and upsampling Res (ResU) blocks are given on the right. Every convolutional (Conv) layer is followed by batch normalization and a parameterized rectified linear unit, except for the final layer in every output. The number of feature maps ($\#FM$) is the same for all Conv layers in one Res block. The filter size $A$ in a Conv layer is equal to the dimensions of that layer's input. Same padding is used. }
\label{fig:network}
\end{figure*}
The loss function is a weighted sum of the shape loss $L_1$, pose loss $L_2$ and segmentation loss $L_3$:
\begin{equation}
L = \gamma_1 L_1 + \gamma_2 L_2 + \gamma_3 L_3
\label{eq:totalloss}
\end{equation}
with $L_1$ the mean squared error (MSE) between true and predicted coefficients $b_{m}$: $L_1 = \frac{1}{M} \sum_{m=1}^{M} (b_{m,t}-b_{m,p})^2 $, $L_2$ the MSE for pose parameters $O = \{\theta,c_x,c_y\}$: $L_2 = \frac{1}{3}\sum_{j=1}^3{(o_{j,t}-o_{j,p})^2}$, and $L_3$ a weighted sum of categorical Dice loss and MSE:
\begin{equation}
L_3 = \left(1 - \frac{1}{K} \sum_{k} {\frac{2 \cdot \sum_{x}{S_{k,t}(x)\cdot S_{k,p}(x)}} {\sum_{x}{S_{k,t}(x)} + \sum_{x}S_{k,p}(x)}}\right) +\mu\frac{1}{K\cdot X} \sum_{k,x} (D_{k,t}(x)-D_{k,p}(x))^2
\label{eq:segloss}
\end{equation}
where $X$ is the number of pixels in the image, $K$ is the number of classes and $S_k$ is the binarized distance map using a sigmoid as conversion function:
\begin{equation}
\label{eq:sigmoidconversion}
S_k = \frac{e^{-\alpha \cdot D_k}}{1+e^{-\alpha \cdot D_k}}
\end{equation}
where $\alpha$ affects the steepness of the sigmoid function.
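A minimal NumPy sketch of this segmentation loss is shown below; it is our illustration rather than the training code, and the small constant added to the Dice denominator is an implementation choice for numerical stability.
\begin{verbatim}
import numpy as np
from scipy.special import expit   # numerically stable sigmoid

ALPHA, MU = 5.0, 0.1

def soft_binarize(D):
    # Eq. 6: sigmoid conversion of a signed distance map to a soft segmentation
    return expit(-ALPHA * D)

def segmentation_loss(D_true, D_pred):
    """Eq. 5: categorical Dice loss on the binarized maps plus a weighted MSE
    on the distance maps. D_* have shape (K, H, W), one channel per class."""
    S_true, S_pred = soft_binarize(D_true), soft_binarize(D_pred)
    K = D_true.shape[0]
    dice = 0.0
    for k in range(K):
        inter = np.sum(S_true[k] * S_pred[k])
        dice += 2.0 * inter / (np.sum(S_true[k]) + np.sum(S_pred[k]) + 1e-7)
    mse = np.mean((D_true - D_pred) ** 2)
    return (1.0 - dice / K) + MU * mse

# With alpha = 5, a pixel one pixel outside the contour maps to
# 1/(1 + e^5) ~ 6.7e-3, i.e. the soft map is already close to binary.
\end{verbatim}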
\subsection{Implementation details}
Endo- and epicardium are both represented by $N=18$ landmarks and $\theta$ is defined as the orientation of the line connecting the center of LV with the middle of the septum. The network predicts the first $M=12$ shape coefficients, representing over 99$\%$ of shape variation. Pose parameters $\theta$, $c_x$ and $c_y$ are normalized to the range [-1,1]. Given the notable difference in magnitude of the different losses, they are weighted with $\gamma_1 = 1$, $\gamma_2 = 10$, $\gamma_3 = 100$ and $\mu = 0.1$. These weights were heuristically defined and assure significant contribution of each of the losses. Parameter $\alpha$ in Eq. \ref{eq:sigmoidconversion} is set to 5 to approximate a binary map with an error of only $6.7\times10^{-3}$ for a distance of one pixel from the contour. The network is trained end-to-end over 5000 epochs with the Adam optimizer, learning rate $2\times10^{-3}$ and batch size 32.
Online data augmentation is applied by adapting pose and shape parameters. Position and orientation offsets are sampled from uniform distributions between [-40,40]$mm$ and [-$\pi$,$\pi$]rad, respectively. Additionally, shape coefficients are adapted as $b_{m,aug} = b_m + a$, where $a$ is sampled from a uniform distribution between -1 and 1. The input images and distance maps are modified accordingly. For the input image, a thin-plate-spline point-to-point registration is performed using the $2N$ original and augmented landmarks, while the distance maps are recreated from the augmented landmarks, connected using cubic spline interpolation, according to Eq. \ref{eq:distmap}. Furthermore, Gaussian noise with standard deviation between 0 and 0.1 is added online to the MR images during training.
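The parameter sampling of this augmentation scheme can be sketched as follows; the thin-plate-spline warping of the image and the regeneration of the distance maps are not shown.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def augment_parameters(b, theta, c):
    """Sample augmented shape/pose parameters as described in the text:
    coefficient offsets in [-1, 1], rotations in [-pi, pi], shifts in [-40, 40] mm."""
    b_aug = b + rng.uniform(-1.0, 1.0, size=b.shape)
    theta_aug = theta + rng.uniform(-np.pi, np.pi)
    c_aug = c + rng.uniform(-40.0, 40.0, size=2)
    return b_aug, theta_aug, c_aug

# The augmented landmarks follow from Eqs. (1)-(2); the input image is then
# warped to them with a thin-plate-spline point-to-point registration and the
# distance maps are recreated from the augmented, spline-connected contours.
\end{verbatim}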
\section{Experiments}
The models were constructed and validated in a fivefold cross validation on a clinical dataset ('D1') containing images of 75 subjects (M=51, age = 48.2$\pm$15.6 years) suffering from a wide range of pathologies including hypertrophic cardiomyopathy, dilated cardiomyopathy, myocardial infarction, myocarditis, pericarditis, LV aneurysm... The subjects were scanned on a 1.5T MR scanner (Ingenia, Philips Healthcare, Best, The Netherlands), with a 32-channel phased array receiver coil setup. The endo- and epicardium in end-diastole and end-systole in the SA cine images were manually delineated by a clinical expert. To allow calculation of $\theta$, the RV attachment points were additionally indicated. This resulted in a total of 1539 delineated SA images, covering LV from apex to base. All images of a patient were assigned to the same fold. For each fold, a separate shape model was constructed using the four remaining folds. The images were resampled to a pixel size of 2$mm$x2$mm$ and image size of 128x128, which results in a value of 8 for parameter $A$ in Fig. \ref{fig:network}.
We validated the performance of our method and the added value of each choice with five different setups: (1) semantic segmentation using categorical Dice loss ('S$_{\mu=0}$'), (2) semantic segmentation using combined loss ('S'): $L = \gamma_3L_3$, (3) regression of shape and pose parameters ('R'): $L = \gamma_1L_1+\gamma_2L_2$, (4) regression and segmentation losses ('RS') as in Eq. \ref{eq:totalloss}, (5) loss as in Eq. \ref{eq:totalloss} and with pose and shape data augmentation ('RS-A$_{ps}$'). For setups 1-4, data augmentation only consisted of the addition of Gaussian noise. Due to faster convergence of training without pose and shape data augmentation, setups 1-4 were only trained for 1000 epochs. For each setup, Dice similarity coefficient (DSC), mean boundary error (MBE) and Hausdorff distance (HD) were calculated from the binarized distance maps ('Map'), as well as from the predicted shape and pose parameters by converting the parameters to landmarks using Eq. \ref{eq:pointnorm} and \ref{eq:pca} and interpolating with cubic splines ('Contour'). The position and orientation errors were respectively defined as $\Delta d = \sqrt{(c_{x,t}-c_{x,p})^2+(c_{y,t}-c_{y,p})^2}$ and $\Delta\theta=|\theta_{t}-\theta_{p}|$. The influence of every shape coefficient was validated by calculating the Euclidean distance between ground truth landmarks and landmarks reconstructed using an increasing number of predicted coefficients. To only capture the impact of shape coefficients, ground truth pose parameters were used for reconstruction.
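For completeness, a minimal sketch of how such overlap and boundary metrics can be computed is given below; the binary masks are obtained from the binarized distance maps, the contours as point sets, and the exact averaging convention used for MBE is our assumption.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    return 2.0 * np.sum(a * b) / (np.sum(a) + np.sum(b))

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two contours given as (P, 2) point sets."""
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

def mean_boundary_error(pts_a, pts_b):
    """Average of the mean closest-point distances in both directions
    (one possible definition of MBE)."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
\end{verbatim}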
Furthermore, LV area, myocardial area, LV dimensions in three different orientations and regional wall thickness (RWT) for six cardiac segments were calculated from the predicted landmarks. LV dimensions and RWT were directly obtained by calculating the distance between two corresponding landmarks and averaging the different values in one segment. For these four physical measures, mean absolute error (MAE) and Pearson correlation coefficient ($\rho$) were calculated. Statistically significant improvement of every choice was assessed by the two-sided Wilcoxon signed rank test with a significance level of 5$\%$.
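The derivation of these physical measures from the landmarks can be sketched as follows; the grouping of landmarks into the three dimensions and six segments depends on the landmark ordering and is an assumption in this sketch.
\begin{verbatim}
import numpy as np

def physical_measures(endo, epi):
    """Derive LV area, myocardial area, LV dimensions and RWT from N=18
    radially ordered endo- and epicardial landmarks, each of shape (N, 2),
    in the same units as the landmark coordinates."""
    def polygon_area(P):
        x, y = P[:, 0], P[:, 1]
        return 0.5 * np.abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    area_lv  = polygon_area(endo)
    area_myo = polygon_area(epi) - area_lv
    N = endo.shape[0]
    # cavity dimensions: distances between diametrically opposite endocardial points
    dims = np.linalg.norm(endo - np.roll(endo, N // 2, axis=0), axis=1)[: N // 2]
    # wall thickness: distance between corresponding endo- and epicardial points
    rwt  = np.linalg.norm(epi - endo, axis=1)
    # average over three landmark pairs per dimension and three landmarks per segment
    return area_lv, area_myo, dims.reshape(3, 3).mean(axis=1), rwt.reshape(6, 3).mean(axis=1)
\end{verbatim}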
Additionally, we applied the proposed method to two different public datasets: LVQuan18 \cite{LVQuan18} and LVQuan19 \cite{LVQuan19}. Both datasets contain mid-cavity SA slices for 20 time frames spanning the complete cardiac cycle and provide ground truth values for LV and myocardial area, three LV dimensions and six RWT. In LVQuan18 (145 patients, 2879 images), the 80x80 images were normalized for pose and size while in LVQuan19 (56 patients, 1120 images), no preprocessing was applied. LVQuan19 was identically processed as D1, including prior resampling. Since LVQuan18 contained small, centered images, these images were not resampled, no pose regression was applied, the number of epochs was decreased to 1000 and parameter $A$ in Fig. \ref{fig:network} equals 5. For both datasets, a fivefold cross validation was performed and LV area, myocardial area, LV dimensions and RWT were calculated.
\section{Results}
Table \ref{tab:Seg} shows the results of DSC, MBE, HD, $\Delta d$ and $\Delta\theta$ for the different setups. The combined MSE and Dice loss (S) significantly improved DSC, MBE and HD compared to the setup with only Dice loss (S$_{\mu=0}$), most notably for HD. S$_{\mu=0}$ resulted in 10.2$\%$ unrealistic shapes and S in 0$\%$. While adding $L_1$ and $L_2$ (RS) did not alter the performance of distance map regression, shape and pose data augmentation (RS-A$_{ps}$) did significantly improve all metrics. For the 'Contour' experiments, the addition of semantic segmentation and data augmentation both significantly improved the results, except for $\Delta \theta$. However, DSC, MBE and HD remain worse than for the 'Map' experiments. The distance errors on the landmarks are visualized in Fig. \ref{fig:shapePoints}, which indicates again that both modifications to a standard regression CNN contribute to significant improvement. Furthermore, whereas the first coefficients, accounting for the largest variation, are relatively well predicted, the later coefficients were not accurately estimated. The average landmark error for setup RS-A$_{ps}$ using 12 shape coefficients is 1.44$mm$, which is lower than the MBE, indicating that the inferior segmentation results are partially due to errors in pose estimation.
\begin{table}[tb]
\centering
\caption{Results for D1 obtained from the binarized distance maps ('Map') or shape and pose parameters ('Contour'). Mean and standard deviation for DSC, MBE, HD, position error ($\Delta d$) and orientation error ($\Delta \theta$) are reported. Best values are indicated in bold. Statistically significant improvement with respect to the previous row is indicated with $^*$.}
\label{tab:Seg}
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
& DSC LV [$\%$] & DSC myo [$\%$] & MBE [$mm$]& HD [$mm$] & $\Delta d$ [$mm$] & $\Delta\theta$ [$^\circ$]\\
\hline
\underline{Map}& &&&& &\\
S$_{\mu=0}$&90.5$\pm$13.9& 81.2$\pm$14.0&1.99$\pm$3.47&18.38$\pm$42.39&/&/ \\
S &91.7$\pm$12.3$^*$&83.1$\pm$12.6$^*$&1.34$\pm$0.90$^*$&4.32$\pm$6.19$^*$&/&/\\
RS& 91.8 $\pm$ 11.6& 83.1 $\pm$ 12.4& 1.35$\pm$0.92 & 4.23$\pm$4.29&/&/\\
RS-A$_{ps}$&\textbf{92.8$\pm$10.1}$^*$&\textbf{85.3$\pm$10.6}$^*$&\textbf{1.18$\pm$0.69}$^*$ & \textbf{3.64$\pm$3.00}$^*$&/&/\\
\underline{Contour}& &&&&&\\
R&65.1$\pm$25.5&38.1$\pm$21.9&7.15$\pm$5.29 & 15.41$\pm$10.70 & 10.1$\pm$9.1&10.4$\pm$10.9\\
RS&82.6$\pm$18.9$^*$&64.3$\pm$21.5$^*$&3.29$\pm$3.29$^*$&7.70$\pm$7.39$^*$&4.1$\pm$5.4$^*$&11.7$\pm$12.4\\
RS-A$_{ps}$&\textbf{88.1$\pm$11.9}$^*$&\textbf{72.7$\pm$14.1}$^*$&\textbf{2.16$\pm$1.03}$^*$&\textbf{5.37$\pm$3.49}$^*$&\textbf{2.5$\pm$1.8}$^*$&\textbf{9.5$\pm$7.5}$^*$\\
\hline
\end{tabular}
\end{table}
\begin{figure}[tb]
\centering
\includegraphics[width =0.5\linewidth]{CoefGraph.pdf}
\caption{Average distance between ground truth landmarks and landmarks reconstructed using a limited number of coefficients. The results are given for predicted as well as ground truth (gt) coefficients.}
\label{fig:shapePoints}
\end{figure}
Table \ref{tab:physical} reports MAE and $\rho$ of LV area, myocardial area, LV dimensions and RWT, averaged over all segments. The results on D1 show that these metrics can be more accurately estimated by simultaneous semantic segmentation and by addition of data augmentation. For LV and myocardial area and LV dimensions, RS-A$_{ps}$ obtained better results compared to the winner of the LVQuan18 challenge \cite{winnerLVquan18}, who used a parameter regression approach, while the estimation of RWT was slightly worse. For LVQuan19, the results of RS-A$_{ps}$ are compared to the top three entries of the challenge. While the results of \cite{winnerLVquan19} and \cite{Gessert2019} are superior, our error on LV and myocardial area and LV dimensions is lower compared to the errors reported in \cite{Tilborghs2019}, and the correlation is higher for all metrics.
Fig. \ref{fig:Seg} depicts representative segmentation examples.
\begin{table}[tb]
\centering
\caption{MAE and $\rho$ for LV area, myocardial area, LV dimensions and RWT. Best values are indicated in bold. For D1, statistically significant improvement with respect to the previous column is indicated with $^*$. $^{(1)}$In \cite{winnerLVquan19}, a threefold cross-validation was used. $^{(2)}$In \cite{Gessert2019}, the average MAE of LV and myocardial area was reported to be 122$mm^2$.}
\label{tab:physical}
\begin{tabular}{|l|l|c|c|c||c|c||c|c|c|c|}
\hline
\multicolumn{2}{|c|}{}&\multicolumn{3}{c||}{D1 [$\%$]} & \multicolumn{2}{c||}{LVQuan18}& \multicolumn{4}{c|}{LVQuan19}\\
\multicolumn{2}{|c|}{}& R & RS & RS-A$_{ps}$ & \cite{winnerLVquan18} & RS-A$_{ps}$ & \cite{winnerLVquan19}$^1$& \cite{Gessert2019} & \cite{Tilborghs2019} & RS-A$_{ps}$\\
\hline
MAE&Area LV [$mm^2$] & 472 & 256$^*$ &\textbf{139}$^*$ & 135&\textbf{117} & \textbf{92} &122$^2$&186& 134\\
&Area Myo [$mm^2$] & 299 & 192$^*$ & \textbf{145}$^*$ &177&\textbf{162} & \textbf{121} &122$^2$&222& 201\\
&Dim [$mm$] & 7.06 & 3.58$^*$& \textbf{2.37}$^*$& 2.03&\textbf{1.50} & \textbf{1.52} &1.84&3.03&2.10\\
&RWT [$mm$]& 1.86 & 1.38$^*$ & \textbf{1.18}$^*$&\textbf{1.38}&1.52 &\textbf{1.01} &1.22&1.67&1.78\\
\hline
$\rho$ $[\%]$&Area LV& 81 & 95 &\textbf{99} & / & 99 & / &/&97& \textbf{98} \\
&Area Myo& 77 & 90 & \textbf{94} & / & 93 & / &/&88& \textbf{93} \\
&Dim& 84 & 96 & \textbf{98} & / & 98 & / &/&95& \textbf{97} \\
&RWT& 69 & 83 & \textbf{88} & / & 84 & / &/&73& \textbf{83} \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\centering
\begin{minipage}[t]{0.3\linewidth}
\includegraphics[width = \linewidth]{UZ_558.png}
\end{minipage}
\begin{minipage}[t]{0.3\linewidth}
\includegraphics[width = \linewidth]{LVQuan18_384_realOrig.png}
\end{minipage}
\begin{minipage}[t]{0.3\linewidth}
\includegraphics[width = \linewidth]{LVQuan19_164_realOrig.png}
\end{minipage}
\begin{minipage}[t]{0.3\linewidth}
\includegraphics[width = \linewidth]{UZ_13.png}
\end{minipage}
\begin{minipage}[t]{0.3\linewidth}
\includegraphics[width = \linewidth]{LVQuan18_500_realOrig.png}
\end{minipage}
\begin{minipage}[t]{0.3\linewidth}
\includegraphics[width = \linewidth]{LVQuan19_345_realOrig.png}
\end{minipage}
\caption{Representative segmentation examples for datasets D1, LVQuan18 and LVQuan19 (left to right). Ground truth (red), semantic segmentation output (yellow) and segmentation reconstructed from shape and pose output (cyan) are shown.}
\label{fig:Seg}
\end{figure}
\section{Discussion}
In contrast to semantic segmentation, the predicted shape coefficients are directly linked to an oriented landmark-based representation and as such allow straightforward calculation of regional metrics including myocardial thickness or strain. Furthermore, contrary to conventional semantic segmentation using Dice loss (S$_{\mu=0}$), our approach did not result in any missing or disconnected regions since the shape model is inherently unable to predict such unrealistic shapes. While some initial experiments showed that pose and shape data augmentation was able to significantly improve the segmentation for setup S$_{\mu=0}$, the results remained significantly worse compared to the proposed approach RS-A$_{ps}$.
For the LVQuan19 challenge data, we obtained higher MAE compared to the leading results of \cite{winnerLVquan19}.
There are multiple possible explanations for this. First, the two methods use significantly different approaches: Acero et al. \cite{winnerLVquan19} calculated the LV parameters from a segmentation obtained with a semantic segmentation CNN, while we calculated the LV parameters from the 12 predicted shape coefficients. When calculating the LV parameters from the predicted distance maps and position instead, slightly lower MAEs of 109$mm^2$ for LV area, 188$mm^2$ for myocardial area, 1.69$mm$ for LV dimensions and 1.74$mm$ for RWT were achieved. This is in accordance with the lower performance of the 'Contour' experiments compared to the 'Map' experiments in Table \ref{tab:Seg}. Second, preprocessing steps such as resampling strategy and intensity windowing, data augmentation and training approach all have an impact on CNN performance. In the LVQuan18 challenge, the images were preprocessed by the challenge organizers, eliminating some of these sources of variability.
Third, contrary to the challenge entries \cite{winnerLVquan19,Gessert2019,Tilborghs2019}, our method was not specifically developed and tuned for this task. It should be noted that all three challenge entries reported substantially worse results on the LVQuan19 test set, which is not publicly available.
We found that the regression of shape coefficients is a more difficult task compared to semantic segmentation. In semantic segmentation using distance maps, $128\times128$ correlated values are estimated for every image, whereas shape coefficient regression requires the estimation of 12 uncorrelated values from relatively little training data. However, combining the regression with semantic segmentation and adding data augmentation significantly improved the shape coefficient estimates. In future work, we want to investigate if an extra loss term enforcing consistency between semantic segmentation and pose and shape parameters can further improve these results.
\section{Conclusion}
In this paper, we presented a proof-of-concept of our shape-constrained CNN on 2D cardiac MR images for segmentation of the LV cavity and myocardium. In the future, this can be expanded to 3D segmentation and to other applications.
\subsubsection{Acknowledgement}
Sofie Tilborghs is supported by a Ph.D.\ fellowship of the Research Foundation - Flanders (FWO). The computational resources and services used in this work were provided in part by the VSC (Flemish Supercomputer Center), funded by the Research Foundation - Flanders (FWO) and the Flemish Government - department EWI. This research also received funding from the Flemish Government under the “Onderzoeksprogramma Artificiële intelligentie (AI) Vlaanderen" programme and is also partially funded by KU Leuven Internal Funds C24/19/047 (promotor F. Maes).
\section{Introduction}
\IEEEPARstart{I}{n} recent years, backscatter communication (BackCom) has emerged as a promising solution to improve device lifetime. BackCom devices (or \textit{tags}) communicate by performing passive modulation onto existing radiofrequency (RF) signals, rendering active RF transmission chains unnecessary. The bistatic BackCom architecture, consisting of a carrier emitter (CE) transmitting a sinusoidal signal and a separately located reader, allows extended communication ranges for tags placed close to the CE. Thus, on top of reducing tag power consumption, bistatic BackCom is also ideally suited to applications requiring extended ranges. This aligns with, and is beneficial to aspects of the Internet of Things (IoT) paradigm such as environmental sensing, pervasive monitoring and smart cities, which aim to provide ubiquitous connectivity throughout society.
Various theoretical and experimental studies have been conducted to characterize the achievable range of tags under bistatic BackCom setups. The work in \cite{Kim14} demonstrated extended ranges achievable with a single tag placed close to a CE with only $20$ mW transmit power. The works in \cite{Fas15, Alev17} studied channel coding schemes at the tag and coherent and noncoherent detectors at the reader; while \cite{Shen16} addressed the phase cancellation problem arising from the use of unmodulated carrier signals, resulting in more effective signal reception. However, these works have mostly considered tag-level modifications, and their effects on communication range extension have been mostly incremental, evidenced by the achieved range over time: $130$ m in \cite{Kim14}, and $145$ m and $148$ m in the follow-up works \cite{Fas15, Alev17}, respectively. Moreover, these ranges were achievable only if tags were within $10$ m of the CE, severely limiting the areas over which tags may be placed. Works which consider changes to the system architecture, which could increase both the communication range and CE-tag separation, are currently lacking. To this end, this paper proposes the introduction of an intelligent reflecting surface (IRS) to the bistatic BackCom system for use in the mentioned applications, and examines how such a change on the system architecture level could benefit the range and flexibility of bistatic BackCom.
\subsection{Literature Review}
Recently, IRSs have been highlighted as key enablers for next-generation communication systems, due to their ability to modify the wireless propagation medium to benefit nearby communication links \cite{R2}. An IRS consists of many reflecting elements, each imposing a controllable phase shift on impinging signals. The phase shifts can be jointly optimized to achieve a constructive (or destructive) effect at a receiver \cite{Lia18}. The studies in \cite{Tang19, Ell19} examined the precise path loss scaling in detail, where the signal strength at a receiver was shown to scale proportionally with the IRS surface area \cite{Ozd19}. Thus, with a reasonably-sized IRS, considerable improvement in communication system performance can be realized.
Due to this favorable property, IRSs hold the potential to aid a range of communication systems in both indoor (e.g., buildings and industrial environments) and outdoor (e.g., cellular networks and smart cities) settings \cite{MDRsurvey}. A number of works have studied the optimization of the IRS phase shifts, often jointly with other variables such as transmit beamforming vectors. The work in \cite{Wu19} considered an IRS-aided downlink multiuser network, and proposed semidefinite relaxation (SDR) and alternating optimization (AO) approaches to handle the intractable phase shift optimization problem while jointly performing beamforming optimization. The work in \cite{R3} extended the system model in \cite{Wu19} to an IRS with discrete phase shifts. To reduce the complexity of channel estimation and optimization over many IRS reflectors, a grouping scheme for adjacent IRS elements was proposed in \cite{Yang19}; while \cite{Pan19} extended the joint beamforming and reflection optimization to a multiuser multiple-input multiple output (MIMO) system.
The potential benefits brought by IRSs to conventional communication systems naturally make them suitable candidates to also assist low-power, IoT-type networks. More recently, works on the use of IRSs to facilitate wireless power transfer and cognitive radio have also appeared. The work in \cite{Wu19b} studied the joint active and passive beamforming optimization problem in IRS-assisted simultaneous wireless information and energy transfer (SWIPT) systems; while \cite{Lyu20} considered a wireless-powered communication network (WPCN) where both the IRS and downlink users harvest energy to achieve self-sustainable operation. For improved energy and spectral efficiency, cognitive radio systems assisted by IRSs have also received attention, where the coexistence of primary and secondary systems creates more complex signal paths. The work in \cite{R1} addressed the weighted sum rate maximization problem in an IRS-assisted MIMO cognitive radio system, while \cite{Xu20, Yuan20} considered full-duplex and multi-IRS variants of the cognitive setup, resulting in the IRS needing to balance over the performance of many separate transmissions.
While IRSs are primarily used to assist actively transmitting devices, as seen in the mentioned works, their ubiquitous applicability creates opportunities for them to assist passive transmitters as well. As a technology reliant on external powering signals, BackCom stands to reap significant benefits from potential IRS assistance. Motivated by this, the authors of \cite{Nem20} pioneered the IRS-assisted BackCom system by studying throughput maximization under orthogonal frequency division multiplexing (OFDM) modulated signals. The work in \cite{Zhao20} examined the error performance of an IRS-aided monostatic-like BackCom system without a direct link between tag and reader. The work in \cite{Abe20} proposed channel estimation schemes for a monostatic IRS-assisted BackCom system; while \cite{Far21} experimentally demonstrated the feasibility of an IRS-aided ambient BackCom setup.
\subsection{Motivation and Contributions}
It is noted that most existing works on IRS-aided BackCom systems have focused on the monostatic or ambient architectures, both of which possess short communication ranges. Bistatic BackCom, on the other hand, has a range comparable to some conventional systems. The introduction of an IRS, as a change on the network infrastructure level, would not only realize extended ranges and device lifetimes, but also reduce tags' reliance on CEs in terms of their separation, which translates to more flexible tag deployment and coverage. Given the expected widespread deployment of IRSs on buildings and other structures, and even the appearance of mobile IRSs \cite{Zhang19}, one may reasonably expect the use-cases of IRS to cover the environmental monitoring application (under the scope of smart agriculture \cite{MDRsurvey}) typical of bistatic BackCom systems. The work in \cite{Chen21} explored the IRS phase shift optimization problem under a bistatic-like BackCom setting. However, as we highlight in the sequel, it has not considered the complete signal model, which includes the unique phenomenon of multiple signal reflections at the IRS arising from the reflecting nature of the BackCom device.
In this paper, we present the complete signal model for an IRS-aided bistatic BackCom system for the first time, which accounts for the presence of additional signal paths under line-of-sight scenarios. We present the first results into the extent of CE power consumption reduction through a transmit power minimization problem involving the IRS phase shifts, and quantify the potential backscattering range improvements. The contributions of this paper are as follows:
\begin{itemize}
\item The IRS-aided bistatic BackCom system is introduced, where a separate IRS assists the backscatter communication from the tags to the reader. To our best knowledge, this is the first work to incorporate IRS into bistatic BackCom systems. Different from prior studies on IRS where the signal traveling from the CE to the reader was assumed to undergo only one reflection at the IRS, we highlight a unique feature in that the co-existence of IRS and BackCom tags leads to two significant reflections at the IRS and thus a new signal model.
\item We study the CE transmit power minimization problem for the bistatic BackCom system, subject to the tags' signal-to-noise ratio (SNR) requirements. Due to the nonconvex nature of the problem, we present approximate solutions for the IRS phase shifts in the single-tag case using the minorization-maximization (MM) algorithm. Though suboptimal, we demonstrate considerable transmit power reductions using the MM algorithm, and reveal the IRS phase shift characteristics when balancing between multiple reflections of the same signal.
\item We extend the problem to a multi-tag scenario, and propose an alternating optimization method with a novel successive refinement (SR) approach for individual IRS phase shifts with lower complexity. The scaling behavior of the transmit power is shown to favor scenarios where more tags are present in the system, in terms of effective power per tag.
\end{itemize}
The rest of this paper is organized as follows. Section II introduces the IRS-aided BackCom system and the signal model. Section III presents the general form of the transmit power minimization problem. Section IV proposes solutions to a base case of the problem with one semi-passive tag and a single-antenna reader. Section V presents the algorithms required to solve the general problem involving multiple passive or semi-passive tags. Numerical results are presented in Section VI and Section VII concludes the paper.
\textit{Notations:} $j = \sqrt{-1}$ denotes the complex unit, and $\mathbb{R}$ and $\mathbb{C}$ denote the set of real and complex numbers, respectively. $\left| \cdot \right|$ and $\mathrm{Re}\{\cdot\}$ denote the magnitude and the real part of a complex number, respectively. $\mathcal{CN}(\mu, \sigma^{2})$ represents a complex Gaussian distribution with mean $\mu$ and variance $\sigma^{2}$. Vector and matrix quantities are denoted using lowercase and uppercase boldface letters, respectively. $\mathbf{I}$ denotes the identity matrix of variable size. $\left\lVert \mathbf{a} \right\rVert$ denotes the Euclidean norm of a vector; $\mathrm{tr}(\mathbf{A})$, $\mathbf{A}^{T}$ and $\mathbf{A}^{H}$ denote the trace, transpose and the Hermitian transpose of $\mathbf{A}$, respectively; and $\odot$ denotes the elementwise product of two vectors or matrices.
\section{System Model}
\subsection{System Setup}
We consider an IRS-aided bistatic BackCom system in Fig.~\ref{fig_1} with one $L$-antenna CE, $K \geq 1$ single-antenna tags, a reader with $M \geq 1$ antennas, and an IRS with $N$ reflecting elements. Hereafter, the CE, tags, IRS and reader are referred to in subscripts by $C$, $T$, $I$ and $R$, respectively.
We base the system model on a scenario where tags are deployed in the field for an environmental monitoring application. The IRS is either portable or fixed on a building, wall or other structure, and enhances the backscatter communication of the tags. Such a setup can occur in the greenhouse environment considered in \cite{Kam14}, where the IRS may be wall-mounted; or in open environments, where the IRS may be located on the walls of nearby structures such as houses or fences, or even be deployed on-board a hovering UAV to provide localized coverage at a specific time \cite{Zhang19}. The analysis in this paper considers non-negligible channels between all pairs of nodes, in line with experimental works on bistatic BackCom \cite{Kim14, Fas15, Alev17}. For environmental monitoring and related applications, it is typical to install tags at height such that the channels between tags and all other nodes exist. We adopt Rician fading in the channel model, although the choice of fading model does not affect the generality of the analysis in Sections II-V.
\begin{figure}[!t]
\centering
\includegraphics[width=3.5in]{bistatic_diagram}
\caption{The IRS-aided bistatic BackCom system.}
\label{fig_1}
\end{figure}
The CE transmits a continuous-wave signal with carrier frequency $f_{c}$ to power the tags' communication. Each tag is equipped with two load impedances connected to its antenna, one of which represents an off-state, where the load and antenna impedances are perfectly matched such that the signal is completely absorbed. We consider a generalized tag power supply configuration with a circuit power consumption constraint $\xi$ (in Watts). When $\xi = 0$, the tag is semi-passive, where an on-board battery is the sole energy source for the circuit, and all of the incident signal is used for communication (i.e., backscattered). In the case of $\xi > 0$, a nonzero portion of energy from the incoming signal is used to power the tag.
Tags transmit using generalized binary frequency-shift keying (FSK) modulation, following the assumption in \cite{Kim14, Alev18} for bistatic BackCom systems. Under this scenario, two distinct subcarrier frequencies known by the reader are assigned to tag $k$, and are denoted by $f_{k, 0}$ and $f_{k, 1}$ for bits $0$ and $1$, respectively. That is, the tag switches between its impedances with frequency $f_{k, 0}$ when transmitting bit $0$, and with frequency $f_{k, 1}$ for bit $1$. The frequency-domain representation of the received signal consists of four peaks at $f_{c} \pm f_{k, 0}$ and $f_{c} \pm f_{k, 1}$, which can be resolved using a bank of correlator demodulators at the four frequencies \cite{Kim14}. Due to the small number of tags considered in our setup, we assume that the tags' subcarrier frequencies are distinct enough such that interference between tags' transmissions is negligible. We note that this study aims to conduct initial exploration into the IRS phase~shift design to handle multiple reflections of the same signal, rather than to examine the mitigation of inter-user interference.
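To make the FSK convention concrete, the sketch below decides one bit by comparing the energies collected by correlators at the two assigned subcarrier frequencies, mirroring the correlator-bank receiver described above; the sampling rate, symbol duration and complex-exponential reference waveforms are illustrative assumptions.
\begin{verbatim}
import numpy as np

def detect_fsk_bit(y, fs, f0, f1, T):
    # y : complex baseband samples at the reader (DC terms removed)
    # fs: sampling rate; f0, f1: subcarriers for bits 0 and 1; T: symbol time
    t = np.arange(int(T * fs)) / fs
    energies = []
    for f in (f0, f1):
        # Correlators at +f and -f capture both spectral peaks of the subcarrier.
        c_pos = np.vdot(np.exp(1j * 2 * np.pi * f * t), y[:t.size])
        c_neg = np.vdot(np.exp(-1j * 2 * np.pi * f * t), y[:t.size])
        energies.append(abs(c_pos) ** 2 + abs(c_neg) ** 2)
    return int(energies[1] > energies[0])
\end{verbatim}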
Each tag is modeled as a diffuse reflector, which incurs significant signal strength losses. Thus, we ignore signal paths which undergo two or more reflections at tags.\footnote{A reflected signal from an RFID-type tag incurs a near-field path loss of approximately $30$ dB \cite{Dob08}; hence, a signal having undergone an additional reflection is generally weaker than the directly received signal by several orders of magnitude.} However, this assumption does not apply to the reflections at the IRS, particularly for the $C$-$I$-$T$-$I$-$R$ link, for two reasons. First, the strength of the information-bearing signal depends on the continuous-wave signal, both of which are reflected by the IRS. Second,
the IRS is able to tune its individual elements to balance the signal strengths between the combined CE-tag and tag-reader links\footnote{That is, $C$-$T$ plus $C$-$I$-$T$ links, and $T$-$R$ plus $T$-$I$-$R$ links, respectively, here and elsewhere in the paper.} to maximize the end-to-end signal strength. Therefore, the twice-reflected signals are a unique feature that must be included in the considered system model.
The IRS is assumed to have its own power supply in order to configure its phase shifts. The reader utilizes linear combining vectors, denoted by $\mathbf{g}_{k} \in \mathbb{C}^{M \times 1}, \ k \in \{1, \ldots, K\}$, to separately decode the information signals from each of the $K$ tags. The set of all combining vectors is denoted by $\mathbf{G} = [\mathbf{g}_{1}, \ldots, \mathbf{g}_{K}] \in \mathbb{C}^{M \times K}$. We extend the reader receiver architecture in \cite{Kim14} to perform combining after demodulation at each antenna, where the aggregate received signal is split into separate components for each tag, such that each tag is assigned its own combiner.
We assume that the reader has perfect knowledge of all channels in the system. For this exploratory study, using this assumption allows us to characterize the upper bound to system performance. The mathematical treatment and evaluation of channel estimation methods for an IRS-aided bistatic BackCom system are outside the scope of this work. However, any channel estimation method may be intuitively split into three phases. In the first phase, the direct $C$-$R$ link can be determined using methods available in the bistatic BackCom literature (e.g., \cite{Fas15}) with the IRS set to the non-reflecting state. Then, the tag is switched on to observe both the $C$-$T$ and $C$-$T$-$R$ channels and to infer the $T$-$R$ channel. In the second phase, the cascaded $C$-$I$-$R$ channel may be resolved into its $C$-$I$ and $I$-$R$ components on the basis of methods from the recent IRS literature \cite{Wei21}. In the third and final phase, the effects of the remaining $I$-$T$ channels can be observed using the information obtained previously on all other signal paths in the system.
\subsection{Signal Model}
Tag $k$ modulates its data symbols onto an incident signal by switching between its impedances to change the power of the reflected signal. The tag's baseband signal is given by $b_{k}(t) = A_{k} - \Gamma_{k}(t)$, where $A_{k} \in \mathbb{C}$ is the antenna structural mode and determines the default amount of signal reflection in the non-reflecting state, and $\Gamma_{k}(t)$ is the reflection coefficient function over time, taking on two possible values $\Gamma_{k, 0}$ and $\Gamma_{k, 1}$, both of unit magnitude or less. The term $A_{k}$ is a constant and can be subtracted from the overall received signal in post-processing. Therefore, as in \cite{Yang18}, we do not take into account the effect of $A_{k}$, and consider the $\Gamma_{k}(t)$ term only.
The power $\xi_{k}$ required to operate the circuit of tag $k$ can be supplied by a portion of the incoming signal. Denote the splitting coefficient at tag $k$ by $\alpha_{k} \in [0, 1]$, which represents the fraction of the incoming signal to be reflected. As such, the remaining $1 - \alpha_{k}$ fraction of the signal energy is used to power the circuit. When $\xi_{k} = 0$, we set $\alpha_{k} = 1$.
Denote the channels from the CE to tag $k$, CE to IRS, CE to reader, IRS to tag $k$, reader to IRS and reader to tag $k$ by $\mathbf{h}_{CT_{k}} \in \mathbb{C}^{1 \times L}$, $\mathbf{H}_{CI} \in \mathbb{C}^{N \times L}$, $\mathbf{H}_{CR} \in \mathbb{C}^{M \times L}$, $\mathbf{h}_{T_{k}I}^{H} \in \mathbb{C}^{1 \times N}$, $\mathbf{H}_{RI}^{H} \in \mathbb{C}^{M \times N}$ and $\mathbf{h}_{T_{k}R} \in \mathbb{C}^{M \times 1}$, respectively. Each IRS element $n \in \{1, \ldots, N\}$ reflects the sum of all incident signal paths with unit amplitude gain and a phase shift denoted by $\theta_{n} \in [0, 2\pi)$. The vector containing the phase shifts of all elements is given by $\bm{\theta} = \left[ \theta_{1}, \ldots, \theta_{N} \right]^{T}$. The matrix of reflection coefficient values at the IRS can thus be written as $\mathbf{\Theta} = \mathrm{diag} \left( e^{j \theta_{1}}, \ldots, e^{j \theta_{N}} \right)$.
Linear transmit precoding is assumed at the CE, which transmits signal $s(t)$ to all tags with a single beamforming vector $\mathbf{w}$, such that the transmitted signal can be written as $\mathbf{x}_{C} = \mathbf{w} s(t)$. The transmit power is thus $P = \left\lVert \mathbf{w} \right\rVert^{2}$. The signal received at tag $k$ consists of the combined CE-tag link and is given by
\begin{equation}
y_{T_{k}} = \left( \mathbf{h}_{T_{k}I}^{H} \mathbf{\Theta} \mathbf{H}_{CI} + \mathbf{h}_{CT_{k}} \right) \mathbf{w} s(t). \label{tagReceivedSignal}
\end{equation}
Note that no noise term exists at the tag, as no signal processing is performed, consistent with e.g., \cite{Wang16}. The part of the signal reflected from the tag is given by $x_{T_{k}} = \sqrt{\alpha_{k}} b_{k}(t) y_{T_{k}}$. The remainder of the signal is used to power the circuit, and the harvested power is given by $(1 - \alpha_{k}) \eta |y_{T_{k}}|^{2}$, where $\eta \in [0, 1]$ is the energy conversion efficiency. For simplicity, $\eta$ is a constant and equal across all tags. Thus, we have the following expression for the circuit constraint:
\begin{equation}
(1 - \alpha_{k}) \eta \left| \left( \mathbf{h}_{T_{k}I}^{H} \mathbf{\Theta} \mathbf{H}_{CI} + \mathbf{h}_{CT_{k}} \right) \mathbf{w} \right|^{2} \geq \xi_{k}. \label{circuitConstraint}
\end{equation}
The signal received from tag $k$ at the reader consists of the combined tag-reader link:
\begin{multline}
y_{R, k} = \sqrt{\alpha_{k}} b_{k}(t) \mathbf{g}_{k}^{H} \left( \left( \mathbf{H}_{RI}^{H} \mathbf{\Theta} \mathbf{h}_{T_{k}I} + \mathbf{h}_{T_{k}R} \right) \right. \\ \left. \times \left( \mathbf{h}_{T_{k}I}^{H} \mathbf{\Theta} \mathbf{H}_{CI} + \mathbf{h}_{CT_{k}} \right) \mathbf{w} s(t) + \mathbf{n}_{R} \right), \label{readerReceivedSignalFull}
\end{multline}
where $\mathbf{n}_{R} = [n_{R,1}, \ldots, n_{R,M}]^{T}$ is the noise vector at the reader, following $\mathcal{CN}(0, \sigma_{R}^{2} \mathbf{I})$. We assume synchronization errors to be negligible \cite{Wang16}. Note that the full received signal at the reader also contains the direct link terms $\mathbf{H}_{CR} \mathbf{x}_{C}$ and $\mathbf{H}_{RI}^{H} \mathbf{\Theta} \mathbf{H}_{CI} \mathbf{x}_{C}$, which are standalone signal components separate from the tags' signals (i.e., independent of $k$). However, as both are DC terms, they can be removed at the reader before processing, and are hence omitted here.
The instantaneous SNR for tag $k$ at the reader is given by
\begin{multline}
\gamma_{k} = \frac{1}{\sigma_{R}^{2} \left\lVert \mathbf{g}_{k} \right\rVert^{2}} \alpha_{k} |b_{k}(t)|^{2} \left| \mathbf{g}_{k}^{H} \left( \mathbf{H}_{RI}^{H} \mathbf{\Theta} \mathbf{h}_{T_{k}I} + \mathbf{h}_{T_{k}R} \right) \right. \\ \left. \times \left( \mathbf{h}_{T_{k}I}^{H} \mathbf{\Theta} \mathbf{H}_{CI} + \mathbf{h}_{CT_{k}} \right) \mathbf{w} \right|^{2}. \label{SNR}
\end{multline}
With fixed subcarrier frequencies $f_{k, 0}$ and $f_{k, 1}$ that are sufficiently distinct, the decoding performance can be improved by maximizing the SNR \cite{Kim14}. Thus, we adopt the SNR as the primary BackCom performance metric for the remainder of this paper.
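As a numerical illustration of the signal model, the sketch below evaluates (\ref{SNR}) for a given beamformer, IRS configuration and combiner; array shapes follow the definitions above and all inputs are placeholders.
\begin{verbatim}
import numpy as np

def snr_tag_k(w, theta, g_k, H_CI, H_RI, h_CTk, h_TkI, h_TkR,
              alpha_k=1.0, b_k=1.0, sigma2=1e-12):
    # Instantaneous SNR of tag k at the reader for the twice-reflected model.
    # w: (L,1), theta: (N,), g_k: (M,1), H_CI: (N,L), H_RI: (N,M),
    # h_CTk: (1,L), h_TkI: (N,1), h_TkR: (M,1).
    Theta = np.diag(np.exp(1j * theta))
    h1 = h_TkI.conj().T @ Theta @ H_CI + h_CTk    # combined CE-tag link (1 x L)
    h2 = H_RI.conj().T @ Theta @ h_TkI + h_TkR    # combined tag-reader link (M x 1)
    s = (g_k.conj().T @ h2).item() * (h1 @ w).item()
    return alpha_k * abs(b_k) ** 2 * abs(s) ** 2 / (sigma2 * np.linalg.norm(g_k) ** 2)
\end{verbatim}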
\section{Problem Formulation}
We study the transmit power minimization problem at the CE subject to each tag's SNR requirement. To do so, we jointly optimize the energy beamforming vector at the CE, the phase shifts at the IRS, the splitting coefficients at each tag and the combining vectors at the reader. We begin by presenting the general IRS-aided multi-tag problem with a multiantenna reader.
In Section IV, we solve a simplified base case of the general problem with a single semi-passive tag ($\xi = 0$) and a single-antenna reader, which is the most common system setup for bistatic BackCom \cite{Kim14, Fas15, Alev17, Shen16}, and for which closed-form solutions can be derived after problem transformations. The algorithms therein and their associated insights provide the basis for further algorithm development in Section V, where the general problem (whose solutions lack closed-forms and are more difficult to visualize) is studied in greater detail.
Letting $\bm{\alpha} = [\alpha_{1}, \ldots, \alpha_{K}]^{T}$, the general problem can be written as follows:
\begin{subequations}
\begin{align}
\text{(M)}: ~~\min_{\mathbf{w}, \mathbf{\Theta}, \bm{\alpha}, \mathbf{G}} ~~~&\left\lVert \mathbf{w} \right\rVert^{2} \label{MA} \\
\mathrm{s.t.}~~~~~&\gamma_{k} \geq \gamma_{k, th}, \ \forall k, \label{MB} \\
&\left| \left( \mathbf{h}_{T_{k}I}^{H} \mathbf{\Theta} \mathbf{H}_{CI} + \mathbf{h}_{CT_{k}} \right) \mathbf{w} \right|^{2} \nonumber \\
& \qquad \qquad \qquad \geq \frac{\xi_{k}}{(1 - \alpha_{k}) \eta}, \ \forall k, \label{MC} \\
&0 \leq \alpha_{k} \leq 1, \ \forall k, \label{MD} \\
&0 \leq \theta_{n} < 2 \pi, \ \forall n \in \{1, \ldots, N\}, \label{ME} \\
&\left\lVert \mathbf{g}_{k} \right\rVert^{2} \leq 1, \ \forall k, \label{MF}
\end{align}
\end{subequations}
where (\ref{MB}) is each tag's SNR constraint, (\ref{MC}) is each tag's circuit power constraint, (\ref{MD}) is the tag splitting coefficient constraint, (\ref{ME}) is the range of phase shifts achievable by each IRS element, and (\ref{MF}) is the constraint on the combining vector for each tag at the reader. To simplify the analysis hereafter, we set $\eta = 1$, $\gamma_{k, th} = \gamma_{th}$ and $\xi_{k} = \xi, \ \forall k$.
The transmit power minimization problem is appealing, as it allows comparisons to be made to the transmit power in a non-IRS-aided system. By fixing a target SNR, one may translate the power reduction to a range increase using the original transmit power. However, Problem (M) is highly nonconvex due to the SNR constraint being a fourth-order function of $\mathbf{\Theta}$, and an optimal solution in terms of all design variables cannot be obtained in a tractable manner. Compared to similar transmit power minimization problems in IRS-aided communication systems, such as that in \cite{Wu19}, Problem (M) has two major differences: 1) the presence of a BackCom device, which is also a passive reflector like the IRS, and poses an additional variable to be optimized (i.e., $\bm{\alpha}$); 2) the two-reflection signal path, which renders the problem vastly more complex to solve compared to a single-reflection signal model. In the following sections, we present methods to reduce the complexity of this problem. We consider the non-IRS-assisted system as a benchmark in Section VI to characterize the performance improvements brought about by an IRS.
\section{Base Case: Single Semi-Passive Tag and Single-Antenna Reader}
We begin our study of the general problem with one semi-passive tag and a single-antenna reader. The single-tag setup, as studied in experimental works such as \cite{Fas15, Alev17}, allows us to highlight the characteristics of the solution to the IRS phase shift optimization problem with two reflections of the same signal, both in closed-form (as shown in this section) and visually (as in Section VI) as opposed to the general case, which merits its status as a special case of the general problem. In the following, we propose algorithms to solve the base case problem, to provide the foundation for the general problem in the next section.
For a single-antenna reader, the channels are changed to $\mathbf{h}_{CR}\!\in\!\mathbb{C}^{1 \times L}$, $\mathbf{h}_{RI}^{H}\!\in\!\mathbb{C}^{1 \times N}$ and $h_{TR}\!\in\!\mathbb{C}^{1 \times 1}$. With one semi-passive tag in the system, we can drop the tag indexing and combiners in Problem~(M), and set the tag splitting coefficient $\alpha$ to $1$. We rewrite the problem as follows:
\begin{subequations}
\begin{align}
\text{(S)}: ~\min_{\mathbf{w}, \mathbf{\Theta}} ~~~&\left\lVert \mathbf{w} \right\rVert^{2} \label{SA} \\
\mathrm{s.t.}~~~&|b(t)|^{2} \big|\!\left( \mathbf{h}_{RI}^{H} \mathbf{\Theta} \mathbf{h}_{TI} + h_{TR} \right) \nonumber \\ &\quad \times \left( \mathbf{h}_{TI}^{H} \mathbf{\Theta} \mathbf{H}_{CI} + \mathbf{h}_{CT} \right) \mathbf{w} \big|^{2} \geq \gamma_{th} \sigma_{R}^{2}, \label{SB} \\
&0 \leq \theta_{n} < 2 \pi, \ \forall n \in \{1, \ldots, N\}. \label{SC}
\end{align}
\end{subequations}
Here, the phase shift matrix $\mathbf{\Theta}$ needs to be designed to balance the channel gains between the combined CE-tag and combined tag-reader links. For the single-tag case, it is well-known that optimal beamforming can be achieved by using maximum ratio transmission (MRT), given by\footnote{Note that when $\xi > 0$, MRT is not necessarily optimal, as constraint (\ref{MC}) also becomes active. A solution for $\mathbf{w}^{*}$ can then be found using the approaches proposed in Section V.}
\begin{equation}
\mathbf{w}^{*} = \sqrt{P} \frac{\left[ \left( \mathbf{h}_{RI}^{H} \mathbf{\Theta} \mathbf{h}_{TI} + h_{TR} \right) \left( \mathbf{h}_{TI}^{H} \mathbf{\Theta} \mathbf{H}_{CI} + \mathbf{h}_{CT} \right) \right]^{H}}{\left\lVert \left( \mathbf{h}_{RI}^{H} \mathbf{\Theta} \mathbf{h}_{TI} + h_{TR} \right) \left( \mathbf{h}_{TI}^{H} \mathbf{\Theta} \mathbf{H}_{CI} + \mathbf{h}_{CT} \right) \right\rVert}. \label{optimalBV}
\end{equation}
Substituting $\mathbf{w}^{*}$ into Problem (S), we rewrite it in power minimization form:
\begin{subequations}
\begin{align}
\text{(S1)}: ~~\min_{P, \mathbf{\Theta}} ~~~&P \label{S1A} \\
\mathrm{s.t.}~~~~&P |b(t)|^{2} \big\lVert \left( \mathbf{h}_{RI}^{H} \mathbf{\Theta} \mathbf{h}_{TI} + h_{TR} \right) \nonumber \\ &\quad \times \left( \mathbf{h}_{TI}^{H} \mathbf{\Theta} \mathbf{H}_{CI} + \mathbf{h}_{CT} \right) \big\rVert^{2} \geq \gamma_{th} \sigma_{R}^{2}, \label{S1B} \\
& \text{(\ref{SC})}. \nonumber
\end{align}
\end{subequations}
By inspection, the optimal transmit power is the minimum value that satisfies (\ref{S1B}):
\begin{equation}
P^{*} = \frac{\gamma_{th} \sigma_{R}^{2}}{|b(t)|^{2} \left\lVert \left( \mathbf{h}_{RI}^{H} \mathbf{\Theta} \mathbf{h}_{TI} + h_{TR} \right) \left( \mathbf{h}_{TI}^{H} \mathbf{\Theta} \mathbf{H}_{CI} + \mathbf{h}_{CT} \right) \right\rVert^{2}}. \label{optimalPower}
\end{equation}
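For a fixed phase-shift matrix, (\ref{optimalBV}) and (\ref{optimalPower}) translate directly into code; the sketch below covers the single-tag, single-antenna-reader case with placeholder channel shapes.
\begin{verbatim}
import numpy as np

def mrt_and_min_power(theta, H_CI, h_CT, h_TI, h_RI, h_TR,
                      gamma_th, sigma2, b_mag2=1.0, P=1.0):
    # MRT beamformer and minimum transmit power for a fixed IRS configuration.
    # H_CI: (N,L), h_CT: (1,L), h_TI: (N,1), h_RI: (N,1), h_TR: scalar.
    Theta = np.diag(np.exp(1j * theta))
    scal = (h_RI.conj().T @ Theta @ h_TI).item() + h_TR   # tag-reader composite
    row = h_TI.conj().T @ Theta @ H_CI + h_CT             # CE-tag composite (1 x L)
    comp = scal * row                                     # effective end-to-end channel
    w_star = np.sqrt(P) * comp.conj().T / np.linalg.norm(comp)
    P_star = gamma_th * sigma2 / (b_mag2 * np.linalg.norm(comp) ** 2)
    return w_star, P_star
\end{verbatim}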
To obtain $P^{*}$, we can directly maximize the denominator over $\mathbf{\Theta}$, with the problem as follows:
\begin{align}
\text{(S2)}: ~~\max_{\mathbf{\Theta}} ~~~&\left\lVert \left( \mathbf{h}_{RI}^{H} \mathbf{\Theta} \mathbf{h}_{TI} + h_{TR} \right) \left( \mathbf{h}_{TI}^{H} \mathbf{\Theta} \mathbf{H}_{CI} + \mathbf{h}_{CT} \right) \right\rVert^{2}, \label{S2} \\
\mathrm{s.t.}~~~~&\text{(\ref{SC})}. \nonumber
\end{align}
\subsection{Minorization Maximization Algorithm}
To solve Problem (S2), we may split (\ref{S2}) into a squared norm and a scalar term, corresponding to the two bracketed terms. The two terms can be expanded as
\begin{multline}
\left\lVert \mathbf{h}_{TI}^{H} \mathbf{\Theta} \mathbf{H}_{CI} + \mathbf{h}_{CT} \right\rVert^{2} = \mathbf{v}^{H} \mathbf{\Phi}_{CIT} \mathbf{\Phi}_{CIT}^{H} \mathbf{v} \\ + \mathbf{v}^{H} \mathbf{\Phi}_{CIT} \mathbf{h}_{CT}^{H} + \mathbf{h}_{CT} \mathbf{\Phi}_{CIT}^{H} \mathbf{v} + \left\lVert \mathbf{h}_{CT} \right\rVert^{2}, \label{QCQPStandard}
\end{multline}
\begin{multline}
|\mathbf{h}_{RI}^{H} \mathbf{\Theta} \mathbf{h}_{TI} + h_{TR}|^{2} = \mathbf{v}^{H} \mathbf{\Phi}_{TIR} \mathbf{\Phi}_{TIR}^{H} \mathbf{v} \\ + \mathbf{v}^{H} \mathbf{\Phi}_{TIR} h_{TR} + h_{TR}^{H} \mathbf{\Phi}_{TIR}^{H} \mathbf{v} + |h_{TR}|^{2}, \label{QCQP2}
\end{multline}
where $\mathbf{\Phi}_{CIT} = \mathrm{diag}(\mathbf{h}_{TI}^{H}) \mathbf{H}_{CI}$, $\mathbf{\Phi}_{TIR} = \mathrm{diag}(\mathbf{h}_{RI}^{H}) \mathbf{h}_{TI}$, and $\mathbf{v} = \left[ e^{j \theta_{1}}, \ldots, e^{j \theta_{N}} \right]^{H}$, where $|v_{n}|^{2} = 1, \ \forall n$. These two equations can be rewritten in matrix form as
\begin{align}
\left\lVert \mathbf{h}_{TI}^{H} \mathbf{\Theta} \mathbf{H}_{CI} + \mathbf{h}_{CT} \right\rVert^{2} &= \bar{\mathbf{v}}^{H} \mathbf{R} \bar{\mathbf{v}} + \left\lVert \mathbf{h}_{CT} \right\rVert^{2}, \label{expandedQCQP} \\
|\mathbf{h}_{RI}^{H} \mathbf{\Theta} \mathbf{h}_{TI} + h_{TR}|^{2} &= \mathbf{\bar{v}}^{H} \mathbf{S} \mathbf{\bar{v}} + |h_{TR}|^{2},
\end{align}
with
\begin{align}
\mathbf{R} &=
\begin{bmatrix}
\mathbf{\Phi}_{CIT} \mathbf{\Phi}_{CIT}^{H} & \mathbf{\Phi}_{CIT} \mathbf{h}_{CT}^{H} \\
\mathbf{h}_{CT} \mathbf{\Phi}_{CIT}^{H} & 0
\end{bmatrix}, \nonumber \\
\mathbf{S} &=
\begin{bmatrix}
\mathbf{\Phi}_{TIR} \mathbf{\Phi}_{TIR}^{H} & \mathbf{\Phi}_{TIR} h_{TR}^{H} \\
h_{TR} \mathbf{\Phi}_{TIR}^{H} & 0
\end{bmatrix},
\hspace{10mm} \mathbf{\bar{v}} =
\begin{bmatrix}
\mathbf{v} \\
1
\end{bmatrix}. \label{R}
\end{align}
As a result, the product of (\ref{expandedQCQP}) and (\ref{QCQP2}) can be written as
\begin{equation}
F(\bar{\mathbf{v}}) = \bar{\mathbf{v}}^{H} \mathbf{S} \bar{\mathbf{v}} \bar{\mathbf{v}}^{H} \mathbf{R} \bar{\mathbf{v}} + c_{1} \bar{\mathbf{v}}^{H} \mathbf{S} \bar{\mathbf{v}} + c_{2} \bar{\mathbf{v}}^{H} \mathbf{R} \bar{\mathbf{v}} + c_{1} c_{2}, \label{revisedOF}
\end{equation}
with $c_{1} = \lVert \mathbf{h}_{CT} \rVert^{2}$ and $c_{2} = |h_{TR}|^{2}$. Equation (\ref{revisedOF}) is a quartic polynomial in $\bar{\mathbf{v}}$. Normally, IRS phase shift optimization problems involve optimization over a quadratic polynomial such as (\ref{expandedQCQP}), which permits the use of the identity $\bar{\mathbf{v}}^{H} \mathbf{R} \bar{\mathbf{v}} = \mathrm{tr}(\mathbf{R} \bar{\mathbf{v}} \bar{\mathbf{v}}^{H})$. If we let $\mathbf{V} = \mathbf{\bar{v}} \mathbf{\bar{v}}^{H}$, which is rank-one, then the objective function becomes linear in $\mathbf{V}$ (i.e., $\mathrm{tr}(\mathbf{R} \mathbf{V})$). Here, we cannot invoke this identity, as the first resulting trace term, $\mathrm{tr}(\mathbf{S} \mathbf{V} \mathbf{R} \mathbf{V})$, is generally a nonconvex function of $\mathbf{V}$, since $\mathbf{R}$ and $\mathbf{S}$ are generally not positive semidefinite. It has also been noted in the literature that the optimization (minimization, in the case of \cite{Luo10}) of multivariate polynomials of degree $4$ and above is NP-hard, meaning that a closed-form, optimal solution is generally not available. We address this challenging issue by using the MM algorithm.
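For reference, the matrices in (\ref{R}) and the quartic objective (\ref{revisedOF}) can be assembled and evaluated numerically as sketched below; channel shapes are placeholders and the functions only restate the definitions above.
\begin{verbatim}
import numpy as np

def build_R_S(H_CI, h_CT, h_TI, h_RI, h_TR):
    # Matrices R and S for the quartic objective F; shapes as in the text.
    Phi_CIT = np.diag(h_TI.conj().flatten()) @ H_CI          # N x L
    Phi_TIR = np.diag(h_RI.conj().flatten()) @ h_TI          # N x 1
    R = np.block([[Phi_CIT @ Phi_CIT.conj().T, Phi_CIT @ h_CT.conj().T],
                  [h_CT @ Phi_CIT.conj().T,    np.zeros((1, 1))]])
    S = np.block([[Phi_TIR @ Phi_TIR.conj().T, np.conj(h_TR) * Phi_TIR],
                  [h_TR * Phi_TIR.conj().T,    np.zeros((1, 1))]])
    c1, c2 = np.linalg.norm(h_CT) ** 2, abs(h_TR) ** 2
    return R, S, c1, c2

def F_quartic(vbar, R, S, c1, c2):
    # Quartic objective with vbar = [v; 1] and |v_n| = 1 for all n.
    qR = (vbar.conj() @ R @ vbar).real
    qS = (vbar.conj() @ S @ vbar).real
    return qS * qR + c1 * qS + c2 * qR + c1 * c2
\end{verbatim}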
To solve Problem (S2), in each MM iteration, we first find a minorizer to $F(\bar{\mathbf{v}})$, and solve the maximization problem with the minorizer as the objective. We note that in \cite[Lemma 12]{Sun17}, a convex \textit{majorizing} function was derived for functions $f(\mathbf{x}): \mathbb{R}^{N} \rightarrow \mathbb{R}$. As our objective is $f(\mathbf{x}): \mathbb{C}^{N} \rightarrow \mathbb{R}$, we can use similar logic and complex calculus to obtain a \textit{minorizer} with bounded curvature by taking the first-order Taylor expansion plus a \textit{negative} squared error term:
\begin{equation}
f(\mathbf{x}) \geq f(\mathbf{x}_{0}) + \mathrm{Re} \left\{ \nabla f(\mathbf{x}_{0})^{H} (\mathbf{x} - \mathbf{x}_{0}) \right\} - \frac{\ell}{2} \left\lVert \mathbf{x} - \mathbf{x}_{0} \right\rVert^{2}, \label{minoriser}
\end{equation}
where $\mathbf{x}_{0} \in \mathbb{C}^{N}$ is the point of intersection between $f(\mathbf{x})$ and the minorizer, $\nabla$ is the gradient operator, and $\ell$ is the Lipschitz constant (i.e., maximum curvature of $f(\mathbf{x})$). Following (\ref{minoriser}), the minorizer to (\ref{revisedOF}) is
\begin{align}
F(\bar{\mathbf{v}}) &\geq \bar{\mathbf{v}}_{0}^{H} \mathbf{S} \bar{\mathbf{v}}_{0} \bar{\mathbf{v}}_{0}^{H} \mathbf{R} \bar{\mathbf{v}}_{0} + c_{1} \bar{\mathbf{v}}_{0}^{H} \mathbf{S} \bar{\mathbf{v}}_{0} + c_{2} \bar{\mathbf{v}}_{0}^{H} \mathbf{R} \bar{\mathbf{v}}_{0} \nonumber \\
& \qquad + c_{1} c_{2} + \bar{\mathbf{v}}_{0}^{H} \mathbf{T} (\bar{\mathbf{v}} - \bar{\mathbf{v}}_{0}) + (\bar{\mathbf{v}} - \bar{\mathbf{v}}_{0})^{H} \mathbf{T} \bar{\mathbf{v}}_{0} \nonumber \\
& \qquad - \frac{\ell}{2}(\bar{\mathbf{v}}^{H} \bar{\mathbf{v}} - \bar{\mathbf{v}}^{H} \bar{\mathbf{v}}_{0} - \bar{\mathbf{v}}_{0}^{H} \bar{\mathbf{v}} + \left\lVert \bar{\mathbf{v}}_{0} \right\rVert^{2}) \nonumber \\
&= -\frac{\ell}{2}(\bar{\mathbf{v}}^{H} \bar{\mathbf{v}} - \bar{\mathbf{v}}^{H} \bar{\mathbf{v}}_{0} - \bar{\mathbf{v}}_{0}^{H} \bar{\mathbf{v}} + \left\lVert \bar{\mathbf{v}}_{0} \right\rVert^{2}) \nonumber \\
& \qquad + \bar{\mathbf{v}}_{0}^{H} \mathbf{T} \bar{\mathbf{v}} + \bar{\mathbf{v}}^{H} \mathbf{T} \bar{\mathbf{v}}_{0} + c \nonumber \\
&= -\frac{\ell}{2} \left( \bar{\mathbf{v}}^{H} \mathbf{I} \bar{\mathbf{v}} + \bar{\mathbf{v}}^{H} \left( -\frac{2}{\ell} \mathbf{T} \bar{\mathbf{v}}_{0} - \mathbf{I} \bar{\mathbf{v}}_{0} \right) \right. \nonumber \\
& \left. \qquad + \left( -\frac{2}{\ell} \mathbf{T} \bar{\mathbf{v}}_{0} - \mathbf{I} \bar{\mathbf{v}}_{0} \right)^{H} \mathbf{\bar{v}} \right) + c, \label{quadraticApprox}
\end{align}
with $\mathbf{T} = \mathbf{R} \bar{\mathbf{v}}_{0} \bar{\mathbf{v}}_{0}^{H} \mathbf{S} + \mathbf{S} \bar{\mathbf{v}}_{0} \bar{\mathbf{v}}_{0}^{H} \mathbf{R} + c_{2} \mathbf{R} + c_{1} \mathbf{S}$ being a Hermitian matrix obtained from the derivative of $F(\bar{\mathbf{v}}_{0})$, and $c$ denoting the cumulative sum of all constant terms and terms involving only $\bar{\mathbf{v}}_{0}$. Equation (\ref{quadraticApprox}) is also of quadratic form, and can be rewritten as $\bar{\bar{\mathbf{v}}}^{H} \mathbf{U} \bar{\bar{\mathbf{v}}}$, where
\begin{equation}
\mathbf{U} =
-\begin{bmatrix}
\mathbf{I} & -\frac{2}{\ell} \mathbf{T} \bar{\mathbf{v}}_{0} - \mathbf{I} \bar{\mathbf{v}}_{0} \\
(-\frac{2}{\ell} \mathbf{T} \bar{\mathbf{v}}_{0} - \mathbf{I} \bar{\mathbf{v}}_{0})^{H} & 0
\end{bmatrix},
\hspace{2mm} \bar{\bar{\mathbf{v}}} =
\begin{bmatrix}
\bar{\mathbf{v}} \\
1
\end{bmatrix}. \label{U}
\end{equation}
At this point, one approach is to let $\bar{\bar{\mathbf{V}}} = \bar{\bar{\mathbf{v}}} \bar{\bar{\mathbf{v}}}^{H}$, where $\bar{\bar{\mathbf{V}}} \succeq 0$ and $\mathrm{rank}(\bar{\bar{\mathbf{V}}}) = 1$. This gives us the minorizer $\mathrm{tr}(\mathbf{U} \bar{\bar{\mathbf{V}}}) + c$ to be maximized with respect to $\bar{\bar{\mathbf{V}}}$ in each MM iteration. Relaxing the rank constraint, we obtain a semidefinite program (SDP) that can be optimally solved using CVX \cite{cvx}. The best resulting $\bar{\bar{\mathbf{v}}}$ from Gaussian randomization \cite{gaussrand} can then be converted to a rank-one $\bar{\bar{\mathbf{V}}}$. Nonetheless, the SDR approach by itself incurs a worst-case complexity on the order of $O(I(N+2)^{4.5}) \simeq O(IN^{4.5})$ \cite{Luo10a}, where $I$ is the number of MM iterations. This complexity may be avoided by taking a further minorizer to $\bar{\bar{\mathbf{v}}}^{H} \mathbf{U} \bar{\bar{\mathbf{v}}}$ as in \cite{Pan19}:
\begin{equation}
\bar{\bar{\mathbf{v}}}^{H} \mathbf{U} \bar{\bar{\mathbf{v}}} \geq \bar{\bar{\mathbf{v}}}^{H} \mathbf{X} \bar{\bar{\mathbf{v}}} + 2\mathrm{Re}\{\bar{\bar{\mathbf{v}}}^{H} (\mathbf{U} - \mathbf{X}) \bar{\bar{\mathbf{v}}}_{0}\} + \bar{\bar{\mathbf{v}}}_{0}^{H} (\mathbf{X} - \mathbf{U}) \bar{\bar{\mathbf{v}}}_{0}, \label{secondMinoriser}
\end{equation}
with $\mathbf{X} = \lambda^{-} \mathbf{I}$, $\lambda^{-}$ being the minimum eigenvalue of $\mathbf{U}$, and $\bar{\bar{\mathbf{v}}}_{0}$ as before. The new objective is to maximize the second minorizer in every MM iteration, with respect to $\bar{\bar{\mathbf{v}}}$. As the first and last terms of (\ref{secondMinoriser}) are both constants, each MM subproblem reduces to
\begin{subequations}
\begin{align}
\text{(S3)}: ~~\max_{\bar{\bar{\mathbf{v}}}} ~~~&2\mathrm{Re}\{\bar{\bar{\mathbf{v}}}^{H} (\mathbf{U} - \mathbf{X}) \bar{\bar{\mathbf{v}}}_{0}\}, \label{S3MA} \\
\mathrm{s.t.}~~~~&|\bar{\bar{\mathbf{v}}}_{n, n}| = 1, \ \forall n \in \{1, \ldots, N+2\}, \label{S3MB} \\
&\bar{\bar{\mathbf{v}}}_{j, j} = 1, \ j = \{N+1, N+2\}. \label{S3MC}
\end{align}
\end{subequations}
Problem (S3) has a closed-form solution, given by $\bar{\bar{\mathbf{v}}}^{*} = e^{j \ \mathrm{arg}((\mathbf{U} - \mathbf{X}) \bar{\bar{\mathbf{v}}}_{0})}$.
Note that $F(\bar{\mathbf{v}})$ is bounded above, as both $\mathbf{R}$ and $\mathbf{S}$ are constant matrices, and $\left\lVert \bar{\mathbf{v}} \right\rVert^{2}\!=\!N$ is a finite constant. Both minorizers can be readily shown to satisfy the conditions required for MM convergence outlined in \cite{Pan19, Sun17} with respect to their objective functions. In addition, the second minorizer also acts as minorizer to the original objective function in (\ref{revisedOF}), and returns an optimal solution in terms of $\bar{\bar{\mathbf{v}}}$ in each MM iteration. Thus, the sequence of solutions from each iteration of Problem (S3) will monotonically increase and converge to at least a local optimum of (\ref{revisedOF}). \textbf{Algorithm 1} summarizes the process.
\begin{algorithm}
\caption{MM Algorithm with Nested SDR}
\begin{algorithmic}[1]
\STATE \textbf{Initialize:} Random IRS phase shifts $\bm{\theta}$; set iteration number $i = 1$.
\STATE Obtain $\bar{\mathbf{v}}$ from $\bm{\theta}$ and set $\bar{\mathbf{v}}_{0}^{(i)} = \bar{\mathbf{v}}$.
\WHILE{the rate of change in objective function (\ref{revisedOF}) is above a threshold $\varepsilon > 0$}
\STATE Construct $\mathbf{U}$ from $\bar{\mathbf{v}}_{0}^{(i)}$ and $\mathbf{T}$.
\STATE Set $\bar{\mathbf{v}}_{0}^{(i+1)} \leftarrow \bar{\bar{\mathbf{v}}}^{*} = e^{j \ \mathrm{arg}((\mathbf{U} - \mathbf{X}) \bar{\bar{\mathbf{v}}}_{0})}$ for next iteration after removing the last element.
\STATE Update iteration number $i \leftarrow i + 1$.
\ENDWHILE
\STATE \textbf{Return:} Optimized phase shift vector $\mathbf{v}^{*}$ by dropping the last element of $\bar{\mathbf{v}}_{0}$ at convergence.
\end{algorithmic}
\end{algorithm}
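A compact numerical sketch of \textbf{Algorithm 1} (using the second minorizer only) is given below, reusing the matrices $\mathbf{R}$ and $\mathbf{S}$ constructed above; the curvature bound $\ell$ and the stopping rule are heuristic choices made only for illustration.
\begin{verbatim}
import numpy as np

def mm_phase_shifts(R, S, c1, c2, N, ell=None, eps=1e-6, max_iter=500):
    # MM algorithm for Problem (S2): maximize the quartic objective over
    # unit-modulus phase shifts. ell is a curvature bound; the value below
    # is a heuristic upper estimate (assumption, not from the derivation).
    if ell is None:
        ell = 8.0 * np.linalg.norm(R, 2) * np.linalg.norm(S, 2) * (N + 1)
    def F(vb):
        qR = (vb.conj() @ R @ vb).real
        qS = (vb.conj() @ S @ vb).real
        return qS * qR + c1 * qS + c2 * qR + c1 * c2
    v0 = np.append(np.exp(1j * 2 * np.pi * np.random.rand(N)), 1.0)  # [v; 1]
    prev = F(v0)
    for _ in range(max_iter):
        V0 = np.outer(v0, v0.conj())
        T = R @ V0 @ S + S @ V0 @ R + c2 * R + c1 * S
        y = -(2.0 / ell) * (T @ v0) - v0
        U = -np.block([[np.eye(N + 1), y[:, None]],
                       [y[:, None].conj().T, np.zeros((1, 1))]])
        lam_min = np.linalg.eigvalsh(U).min()
        u = (U - lam_min * np.eye(N + 2)) @ np.append(v0, 1.0)
        v_new = np.exp(1j * np.angle(u))
        v_new[N:] = 1.0        # auxiliary entries are fixed to 1 in Problem (S3)
        v0 = v_new[:N + 1]     # drop the last element, as in Algorithm 1
        cur = F(v0)
        if abs(cur - prev) <= eps * max(abs(prev), 1.0):
            break
        prev = cur
    return np.angle(v0[:N]) % (2 * np.pi)   # optimized IRS phase shifts
\end{verbatim}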
\subsection{Successive Refinement Algorithm}
We present a successive refinement (SR)-based phase shift optimization algorithm, where in each iteration, the $N$ phase shifts are optimized sequentially starting from the first, while holding the others constant. Denoting the current reflection coefficient to be optimized as $s_{n} = e^{j \theta_{n}}$, the objective function to be maximized with respect to each phase shift is given by
\begin{align}
F(s_{n}) &= \left| \mathbf{h}_{RI}^{H} \mathbf{\Theta} \mathbf{h}_{TI} \mathbf{h}_{TI}^{H} \mathbf{\Theta} \mathbf{H}_{CI} \mathbf{w} + \mathbf{h}_{RI}^{H} \mathbf{\Theta} \mathbf{h}_{TI} \mathbf{h}_{CT} \mathbf{w} \right. \nonumber \\
& \left. + h_{TR} \mathbf{h}_{TI}^{H} \mathbf{\Theta} \mathbf{H}_{CI} \mathbf{w} + h_{TR} \mathbf{h}_{CT} \mathbf{w} \right|^{2} \nonumber \\
&=\!\left| \left[ [\mathbf{h}_{RI}^{H}]_{n} s_{n} [\mathbf{h}_{TI}]_{n}\!+\!\sum_{j \neq n}^{N} [\mathbf{h}_{RI}^{H}]_{j} s_{j} [\mathbf{h}_{TI}]_{j} \right] \right. \nonumber \\
& \left.\!\times \left[ [\mathbf{h}_{TI}^{H}]_{n} s_{n} [\mathbf{H}_{CI} \mathbf{w}]_{n}\!+\!\sum_{j \neq n}^{N} [\mathbf{h}_{TI}^{H}]_{j} s_{j} [\mathbf{H}_{CI} \mathbf{w}]_{j} \right] \right. \nonumber \\
& \left.\!+ \left[ [\mathbf{h}_{RI}^{H}]_{n} s_{n} [\mathbf{h}_{TI}]_{n} \mathbf{h}_{CT} \mathbf{w}\!+\!\sum_{j \neq n}^{N} [\mathbf{h}_{RI}^{H}]_{j} s_{j} [\mathbf{h}_{TI}]_{j} \mathbf{h}_{CT} \mathbf{w} \right] \right. \nonumber \\
& \left.\!+ \left[ [h_{TR} \mathbf{h}_{TI}^{H}]_{n} s_{n} [\mathbf{H}_{CI} \mathbf{w}]_{n}\!+\!\sum_{j \neq n}^{N} [h_{TR} \mathbf{h}_{TI}^{H}]_{j} s_{j} [\mathbf{H}_{CI} \mathbf{w}]_{j} \right] \right. \nonumber \\
& \left. + h_{TR} \mathbf{h}_{CT} \mathbf{w} \right|^{2}. \label{expand}
\end{align}
The exact solution $\theta_{n}^{*}$ may be found by linear search. Alternatively, we can also operate further on (\ref{expand}), which can be rewritten as
\begin{equation}
F(s_{n}) = \left| h_{1} s_{n}^{2} + (h_{2} + h_{3}) s_{n} + h_{4} \right|^{2}, \label{expand2}
\end{equation}
where $h_{1}$ is the product of the $s_{n}$ terms in the first two brackets of (\ref{expand}); $h_{2}$ and $h_{3}$ are the non-summation terms in the second and third brackets, respectively; and $h_{4}$ is the sum of all remaining constants. Let $h_{\Sigma} = h_{2} + h_{3}$. When expanded, (\ref{expand2}) can be rewritten as
\begin{multline}
F(\theta_{n}) = 2 \mathrm{Re}\left\{ a_{1 \Sigma} e^{j \theta_{n}} e^{j \theta_{1 \Sigma}} \right\} + 2 \mathrm{Re}\left\{ a_{\Sigma 4} e^{j \theta_{n}} e^{j \theta_{\Sigma 4}} \right\} \\ + 2 \mathrm{Re}\left\{ a_{1 4} e^{j 2 \theta_{n}} e^{j \theta_{14}} \right\} + \kappa, \label{Fsj}
\end{multline}
where $a_{pq}$ and $\theta_{pq}$ denote the magnitude and phase of the composite term $h_{p} h_{q}^{H}, p, q \in \{1, \Sigma, 4\}$, and $\kappa = |h_{1}|^{2} + |h_{\Sigma}|^{2} + |h_{4}|^{2}$. With the reference path loss magnitude incorporated into the channels, one may note that $h_{4}$ is larger than both $h_{\Sigma}$ and $h_{1}$ by several orders of magnitude, since $h_{\Sigma}$ and $h_{1}$ consist of three and four channel terms, respectively, compared to two for $h_{4}$. Hence, the product magnitude of $h_{\Sigma}$ and $h_{4}$ dominates the objective function (out of the terms involving $\theta_{n}$). As such, the approximate derivative of $F(\theta_{n})$ can be given by
\begin{equation}
F_{apx}^{'}(\theta_{n}) = -2 \sin \left( \theta_{n} + \theta_{\Sigma 4} \right), \label{apx}
\end{equation}
which leads to an approximate closed-form solution of $\theta_{n}^{*} \approx \mathrm{mod}(2 \pi - \theta_{\Sigma 4}, 2 \pi)$.
The successive refinement of individual phase shifts is guaranteed to converge, as $F(s_{n})$ is nondecreasing due to the coupled nature of all phase shifts. While the converged solution may not be a local optimum, our numerical results in Section VI-A suggest close agreement between this algorithm and the MM-based algorithm in the previous section, whose solution has the optimality guarantee. Upon convergence of $F(s_{n})$, $\mathbf{w}$ is computed using (\ref{optimalBV}). This alternating process repeats until convergence of the objective function $\left\lVert \mathbf{w} \right\rVert^{2}$.
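One sweep of this successive refinement procedure can be sketched as follows; the per-element coefficients $h_{1}$, $h_{\Sigma}$ and $h_{4}$ are formed directly from the channel entries and the approximate closed-form update is applied. Channel shapes are placeholders, and the exact linear-search variant could be substituted for the last update line.
\begin{verbatim}
import numpy as np

def sr_sweep(theta, H_CI, h_CT, h_TI, h_RI, h_TR, w):
    # One successive-refinement pass over the N IRS phase shifts using the
    # approximate closed-form update theta_n = mod(2*pi - theta_{Sigma4}, 2*pi).
    a = h_RI.conj().flatten() * h_TI.flatten()          # [h_RI^H]_n [h_TI]_n
    b = h_TI.conj().flatten() * (H_CI @ w).flatten()    # [h_TI^H]_n [H_CI w]_n
    d_ct = (h_CT @ w).item()                            # h_CT w
    for n in range(theta.size):
        s = np.exp(1j * theta)
        rest1 = np.sum(a * s) - a[n] * s[n]             # first bracket without n
        rest2 = np.sum(b * s) - b[n] * s[n]             # second bracket without n
        h_sig = a[n] * (rest2 + d_ct) + b[n] * (rest1 + h_TR)
        h_4 = (rest1 + h_TR) * (rest2 + d_ct)
        # Maximize the dominant cross term 2*Re{h_sig * conj(h_4) * e^{j theta_n}}.
        theta[n] = np.mod(-np.angle(h_sig * np.conj(h_4)), 2 * np.pi)
    return theta
\end{verbatim}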
The complexity of \textbf{Algorithm 1} in the previous subsection is on the order of $O(I_{MM} N^{2})$, and that of the SR approach in this subsection is $O(I_{SR} N^{2})$, where $I_{MM}$ and $I_{SR}$ are the number of MM sub-iterations and SR cycles through all phase shifts, respectively.
\section{General Case: Multiple Tags and Multiantenna Reader}
The bistatic BackCom system with a multiantenna reader and multiple (possibly passive) tags encompasses all practical system configurations. However, as the IRS needs to be optimized to cater to multiple tags each with their own twice-reflected signal, the optimization problem becomes noticeably more complex. Given the coupling of $\mathbf{w}$, $\mathbf{\Theta}$, $\bm{\alpha}$ and $\mathbf{g}_{k}$ in (\ref{MB}), we use an AO approach to solve Problem (M), which iteratively optimizes one variable while holding the others constant. To optimize $\mathbf{\Theta}$ over multiple tags, we directly draw upon, and upscale the MM algorithm in Section IV-A as an intuitive approach. We also propose a low-complexity algorithm based on the successive refinement algorithm in Section IV-B.
\subsection{Transmit Beamforming Vector Optimization}
First, we minimize the squared norm of the energy beamforming vector $\mathbf{w}$:
\begin{subequations}
\begin{align}
\text{(M1)}: ~~\min_{\mathbf{w}} ~&\left\lVert \mathbf{w} \right\rVert^{2} \label{M1A}\\
\mathrm{s.t.}~~&\big| \mathbf{g}_{k}^{H}\!\left( \mathbf{H}_{RI}^{H} \mathbf{\Theta} \mathbf{h}_{T_{k}I}\!+\!\mathbf{h}_{T_{k}R} \right) \nonumber \\
& \quad \times \left( \mathbf{h}_{T_{k}I}^{H} \mathbf{\Theta} \mathbf{H}_{CI}\!+\!\mathbf{h}_{CT_{k}} \right)\!\mathbf{w} \big|^{2} \geq \frac{\gamma_{th} \sigma_{R}^{2}}{\alpha_{k} |b_{k}(t)|^{2}}, \ \forall k, \label{M1B} \\
&\left| \left( \mathbf{h}_{T_{k}I}^{H} \mathbf{\Theta} \mathbf{H}_{CI} + \mathbf{h}_{CT_{k}} \right) \mathbf{w} \right|^{2} \geq \frac{\xi}{1 - \alpha_{k}}, \ \forall k. \label{M1C}
\end{align}
\end{subequations}
We let $\mathbf{W} = \mathbf{w} \mathbf{w}^{H}$, where $\mathbf{W} \succeq 0$ and $\mathrm{rank}(\mathbf{W}) = 1$. In addition, where appropriate, we make the substitutions $\mathbf{h}_{k, 1}(\mathbf{\Theta}) = \mathbf{h}_{T_{k}I}^{H} \mathbf{\Theta} \mathbf{H}_{CI} + \mathbf{h}_{CT_{k}}$ and $\mathbf{h}_{k, 2}(\mathbf{\Theta}) = \mathbf{H}_{RI}^{H} \mathbf{\Theta} \mathbf{h}_{T_{k}I} + \mathbf{h}_{T_{k}R}$. As a result, the objective function (\ref{M1A}) is modified to $\mathrm{tr}(\mathbf{W})$; and the LHS of (\ref{M1B}) can be rewritten as $|\mathbf{g}_{k}^{H} \mathbf{h}_{k, 2}(\mathbf{\Theta})|^{2} \mathrm{tr}(\mathbf{W} \mathbf{H}_{k, 1}(\mathbf{\Theta}))$, with $\mathbf{H}_{k, 1}(\mathbf{\Theta}) = \mathbf{h}_{k, 1}(\mathbf{\Theta})^{H} \mathbf{h}_{k, 1}(\mathbf{\Theta}) \in \mathbb{C}^{L \times L}$. Similarly, the LHS of (\ref{M1C}) can be rewritten as $\mathrm{tr}(\mathbf{W} \mathbf{H}_{k, 1}(\mathbf{\Theta}))$. Thus, dropping the nonconvex rank constraint on $\mathbf{W}$, Problem (M1) can be transformed to the following problem:
\begin{subequations}
\begin{align}
\text{(M1.1)}: ~~\min_{\mathbf{W}} ~&\mathrm{tr}(\mathbf{W}) \label{M11A} \\
\mathrm{s.t.}~~&\big| \mathbf{g}_{k}^{H} \mathbf{h}_{k, 2}(\mathbf{\Theta}) \big|^{2} \mathrm{tr}(\mathbf{W} \mathbf{H}_{k, 1}(\mathbf{\Theta})) \nonumber \\
& \qquad \qquad \qquad \geq \frac{\gamma_{th} \sigma_{R}^{2}}{\alpha_{k} |b_{k}(t)|^{2}}, \ \forall k, \label{M11B} \\
&\mathrm{tr}(\mathbf{W} \mathbf{H}_{k, 1}(\mathbf{\Theta})) \geq \frac{\xi}{1 - \alpha_{k}}, \ \forall k, \label{M11C} \\
&\mathbf{W} \succeq 0. \label{M11D}
\end{align}
\end{subequations}
Problem (M1.1) is an SDP, and can be solved optimally using CVX. After obtaining a candidate solution $\mathbf{W}_{SDR}$, Gaussian randomization can be performed to extract a rank-one solution and the corresponding beamforming vector $\mathbf{w}_{SDR}^{*}$. Each beamforming vector generated by randomization is appropriately scaled so that the most violated constraint among (\ref{M11B})-(\ref{M11C}) over all tags holds with equality \cite{gaussrand}. The best solution is thus the post-scaling vector with the smallest norm.
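A sketch of this step is given below using the CVXPY modeling package as a stand-in for CVX, followed by Gaussian randomization; the solver defaults, the number of randomization samples and the input format are illustrative assumptions.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def solve_M11(H1, g2, alpha, b2, xi, gamma_th, sigma2, n_rand=200):
    # SDR for Problem (M1.1) plus Gaussian randomization.
    # H1[k]: L x L matrix H_{k,1}(Theta); g2[k]: |g_k^H h_{k,2}(Theta)|^2;
    # b2[k]: |b_k(t)|^2; alpha[k]: splitting coefficient of tag k.
    K, L = len(H1), H1[0].shape[0]
    W = cp.Variable((L, L), hermitian=True)
    cons = [W >> 0]
    for k in range(K):
        lhs = cp.real(cp.trace(W @ H1[k]))
        cons += [g2[k] * lhs >= gamma_th * sigma2 / (alpha[k] * b2[k]),
                 lhs >= xi / (1.0 - alpha[k] + 1e-12)]
    cp.Problem(cp.Minimize(cp.real(cp.trace(W))), cons).solve()
    # Gaussian randomization: draw w ~ CN(0, W), scale each candidate so that
    # its most violated constraint over all tags holds with equality.
    evals, evecs = np.linalg.eigh(W.value)
    root = evecs @ np.diag(np.sqrt(np.maximum(evals, 0.0)))
    best_w, best_p = None, np.inf
    for _ in range(n_rand):
        w = root @ (np.random.randn(L) + 1j * np.random.randn(L)) / np.sqrt(2)
        scale = max(max(gamma_th * sigma2
                        / (alpha[k] * b2[k] * g2[k] * np.real(w.conj() @ H1[k] @ w)),
                        xi / ((1.0 - alpha[k] + 1e-12)
                              * np.real(w.conj() @ H1[k] @ w)))
                    for k in range(K))
        w_s = np.sqrt(scale) * w
        if np.linalg.norm(w_s) ** 2 < best_p:
            best_w, best_p = w_s, np.linalg.norm(w_s) ** 2
    return best_w, best_p
\end{verbatim}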
\subsection{IRS Phase Shift Optimization}
As the objective function (\ref{MA}) in Problem (M) only depends on $\mathbf{w}$, the optimization over $\mathbf{\Theta}$ takes the form of a feasibility problem, which can be written as
\begin{subequations}
\begin{align}
\text{(M2)}: ~~\mathrm{find} ~~&\mathbf{\Theta} \label{M2A} \\
\mathrm{s.t.}~~&\left| \mathbf{g}_{k}^{H} \mathbf{h}_{k, 2}(\mathbf{\Theta}) \mathbf{h}_{k, 1}(\mathbf{\Theta}) \mathbf{w} \right|^{2} \geq \frac{\gamma_{th} \sigma_{R}^{2}}{\alpha_{k} |b_{k}(t)|^{2}}, \ \forall k, \label{M2B} \\
&\left| \mathbf{h}_{k, 1}(\mathbf{\Theta}) \mathbf{w} \right|^{2} \geq \frac{\xi}{1 - \alpha_{k}}, \ \forall k, \label{M2C} \\
&0 \leq \theta_{n} < 2 \pi, \ \forall n. \label{M2D}
\end{align}
\end{subequations}
The quartic function of $\mathbf{\Theta}$ in (\ref{M2B}) makes this problem highly nonconvex. Similar to the base case, we present two methods for finding a feasible $\mathbf{\Theta}$. The first involves a series of transformations on Problem (M2) to a more tractable approximate problem, on which we can apply an MM-based approach to reach a desirable solution to the overall problem. The second is an extension of the successive phase shift optimization algorithm presented in Section IV-B.
\textit{1) MM-based Algorithm:} In this first method, we approximate each quartic constraint with a simpler minorizing constraint in each iteration. First, we recast the squared magnitude expression in (\ref{M2B}) using the same $\mathbf{v}$ and $\bar{\mathbf{v}}$ introduced in (\ref{QCQPStandard}) and (\ref{R}), respectively, resulting in
\begin{equation}
\bar{\mathbf{v}}^{H} \mathbf{A}_{k} \bar{\mathbf{v}} \bar{\mathbf{v}}^{H} \mathbf{C}_{k} \bar{\mathbf{v}} + |d_{k}|^{2} \bar{\mathbf{v}}^{H} \mathbf{A}_{k} \bar{\mathbf{v}} + |b_{k}|^{2} \bar{\mathbf{v}}^{H} \mathbf{C}_{k} \bar{\mathbf{v}} + |b_{k}|^{2} |d_{k}|^{2}, \label{modifiedM2B}
\end{equation}
where $\mathbf{a}_{k} = \text{diag}(\mathbf{g}_{k}^{H} \mathbf{H}_{RI}^{H}) \mathbf{h}_{T_{k}I}$, $b_{k} = \mathbf{g}_{k}^{H} \mathbf{h}_{T_{k}R}$, $\mathbf{c}_{k} = \text{diag}(\mathbf{h}_{T_{k}I}^{H}) \mathbf{H}_{CI} \mathbf{w}$, $d_{k} = \mathbf{h}_{CT_{k}} \mathbf{w}$, and
\begin{equation}
\mathbf{A}_{k} =
\begin{bmatrix}
\mathbf{a}_{k} \mathbf{a}_{k}^{H} & b_{k}^{H} \mathbf{a}_{k} \\
b_{k} \mathbf{a}_{k}^{H} & 0
\end{bmatrix},
\hspace{10mm} \mathbf{C}_{k} =
\begin{bmatrix}
\mathbf{c}_{k} \mathbf{c}_{k}^{H} & d_{k}^{H} \mathbf{c}_{k} \\
d_{k} \mathbf{c}_{k}^{H} & 0
\end{bmatrix}. \label{AC}
\end{equation}
The circuit constraint in (\ref{M2C}) can be similarly recast as
\begin{equation}
(1 - \alpha_{k}) (\bar{\mathbf{v}}^{H} \mathbf{C}_{k} \bar{\mathbf{v}} + |d_{k}|^{2}) \geq \xi. \label{modifiedM2C}
\end{equation}
Then, we formulate convex versions of both (\ref{modifiedM2B}) and (\ref{modifiedM2C}). For each tag, the SNR constraint is approximated by a convex minorizer in each iteration similar to the MM procedure from Section IV, which is a stricter version of the original constraint. Holding $\alpha_{k}$ and $b_{k}(t)$ constant, (\ref{modifiedM2B}) can be lower-bounded by $\mathrm{tr}( \mathbf{U}_{k} \bar{\bar{\mathbf{V}}} )$, with
\begin{equation}
\mathbf{U}_{k} =
-\begin{bmatrix}
\mathbf{I} & -\frac{2}{\ell} \mathbf{T}_{k} \bar{\mathbf{v}}_{0} - \mathbf{I} \bar{\mathbf{v}}_{0} \\
(-\frac{2}{\ell} \mathbf{T}_{k} \bar{\mathbf{v}}_{0} - \mathbf{I} \bar{\mathbf{v}}_{0})^{H} & 0
\end{bmatrix}, \label{U_k}
\end{equation}
where $\bar{\bar{\mathbf{V}}}$ was defined in Problem (S3),
$\mathbf{T}_{k} = \mathbf{A}_{k} \bar{\mathbf{v}}_{0} \bar{\mathbf{v}}_{0}^{H} \mathbf{C}_{k} + \mathbf{C}_{k} \bar{\mathbf{v}}_{0} \bar{\mathbf{v}}_{0}^{H} \mathbf{A}_{k} + |b_{k}|^{2} \mathbf{C}_{k} + |d_{k}|^{2} \mathbf{A}_{k}$, and $\bar{\mathbf{v}}_{0} \in \mathbb{C}^{N+1}$ is an initialization vector with $1$ as its last element. The new constraint is
\begin{equation}
\alpha_{k} |b_{k}(t)|^{2} \left( \mathrm{tr}(\mathbf{U}_{k} \bar{\bar{\mathbf{V}}}) + |b_{k}|^{2} |d_{k}|^{2} \right) \geq \gamma_{th} \sigma_{R}^{2}. \label{modifiedQ3B}
\end{equation}
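For illustration, the minorizer construction can be sketched in Python as follows; the function and variable names are placeholders, \texttt{v0} is the $(N+1)$-dimensional initialization vector $\bar{\mathbf{v}}_{0}$, and \texttt{ell} is the Lipschitz-type constant $\ell$ used later in the simulations.
\begin{verbatim}
import numpy as np

def build_minorizer(A_k, C_k, v0, b_k, d_k, ell):
    # T_k and U_k as defined above; v0 has 1 as its last element.
    V0 = np.outer(v0, v0.conj())                     # v0 v0^H
    T_k = (A_k @ V0 @ C_k + C_k @ V0 @ A_k
           + abs(b_k) ** 2 * C_k + abs(d_k) ** 2 * A_k)
    t = -(2.0 / ell) * (T_k @ v0) - v0               # top-right block of -U_k
    n1 = v0.size                                     # N + 1
    U_k = np.zeros((n1 + 1, n1 + 1), dtype=complex)
    U_k[:n1, :n1] = np.eye(n1)
    U_k[:n1, n1] = t
    U_k[n1, :n1] = t.conj()
    return T_k, -U_k                                 # overall minus sign
\end{verbatim}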
The circuit constraint in (\ref{modifiedM2C}) can be recast into a convex form by defining a new matrix $\bar{\mathbf{C}}_{k}$, whose dimensions are matched to those of $\mathbf{U}_{k}$ by adding an extra zero row and column:
\begin{equation}
(1 - \alpha_{k}) \left( \mathrm{tr}(\bar{\mathbf{C}}_{k} \bar{\bar{\mathbf{V}}}) + |d_{k}|^{2}\right) \geq \xi.
\label{modifiedQ3C}
\end{equation}
With these transformations, we obtain a feasibility problem in $\bar{\bar{\mathbf{V}}}$, where $\bar{\bar{\mathbf{V}}} \succeq 0$ and $\mathrm{rank}(\bar{\bar{\mathbf{V}}}) = 1$. The problem can then be transformed into an SDP by relaxing the rank constraint. Nonetheless, more favorable solutions can be obtained by further transforming the objective into an explicit optimization form \cite{Wu19}. To this end, we introduce the slack variables $\{\delta_{k}\}$ to denote the difference between the achievable values of the circuit constraints and their requirements:
\begin{subequations}
\begin{align}
\text{(M2.1)}: ~~\max_{\bar{\bar{\mathbf{V}}}, \{\delta_{k}\}} ~& \sum_{k=1}^{K} \delta_{k} \label{M21A} \\
\mathrm{s.t.}~~&\mathrm{tr}(\mathbf{U}_{k} \bar{\bar{\mathbf{V}}}) + |b_{k}|^{2} |d_{k}|^{2} \geq \frac{\gamma_{th} \sigma_{R}^{2}}{\alpha_{k} |b_{k}(t)|^{2}}, \ \forall k, \label{M21B} \\
&\mathrm{tr}(\bar{\mathbf{C}}_{k} \bar{\bar{\mathbf{V}}}) + |d_{k}|^{2} \geq \frac{\xi + \delta_{k}}{1 - \alpha_{k}}, \ \forall k, \label{M21C} \\
&\delta_{k} \geq 0, \ \forall k, \label{M21D} \\
&\bar{\bar{\mathbf{V}}}_{n, n} = 1, \ \forall n, \label{M21E} \\
&\bar{\bar{\mathbf{V}}} \succeq 0. \label{M21F}
\end{align}
\end{subequations}
Intuitively, this prioritizes the more limiting circuit constraint with the $\delta_{k}$ terms while ensuring the SNR constraints are also met. Problem (M2.1) can be readily solved using CVX. Again, Gaussian randomization is performed in each iteration to obtain a rank-one $\bar{\bar{\mathbf{V}}}$.\footnote{In choosing the randomization method, we compared the performance of all three methods (randA, randB, randC) in \cite{gaussrand}, and determined that while randA and randC performed near-identically, the quality of the candidate phase shift vectors from both exceeded that from randB where the SNR constraint in (\ref{M21B}) is concerned.} Taking advantage of the slackness of the circuit constraint (\ref{M21C}), the randomization process selects the best candidate vector $\bar{\mathbf{v}}_{SDR}^{*}$ that results in the largest total SNR surplus of all tags over their requirements, which is translated into a reduction in $\left\lVert \mathbf{w} \right\rVert^{2}$ in the next iteration of Problem (M1.1). Each candidate vector $\bar{\mathbf{v}}_{SDR}$ is checked against constraints (\ref{M21B})-(\ref{M21C}), and the best $\bar{\mathbf{v}}_{SDR}^{*}$ is selected only from the candidates which meet both (\ref{M21B}) and (\ref{M21C}). As (\ref{M21A}) converges over the iterations of Problem (M2.1), the constraints in (\ref{M21B}) and (\ref{M21C}) are updated to narrow the solution space. \textbf{Algorithm 2} summarizes the procedure.
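The SDP in Problem (M2.1) can be prototyped along the following lines; all identifiers are illustrative, and the inputs are assumed to hold the per-tag quantities $\mathbf{U}_{k}$, $\bar{\mathbf{C}}_{k}$, $b_{k}$, $d_{k}$, $\alpha_{k}$ and $|b_{k}(t)|^{2}$.
\begin{verbatim}
import cvxpy as cp

def solve_M21(U_list, Cbar_list, b, d, alpha, bt2, gamma_th, sigma2, xi):
    # One MM iteration of Problem (M2.1) with slack variables delta_k.
    K = len(U_list)
    n = U_list[0].shape[0]                           # N + 2
    V = cp.Variable((n, n), hermitian=True)
    delta = cp.Variable(K, nonneg=True)
    cons = [V >> 0, cp.diag(V) == 1]
    for k in range(K):
        snr_lhs = cp.real(cp.trace(U_list[k] @ V)) + abs(b[k])**2 * abs(d[k])**2
        cons.append(snr_lhs >= gamma_th * sigma2 / (alpha[k] * bt2[k]))
        circ_lhs = cp.real(cp.trace(Cbar_list[k] @ V)) + abs(d[k])**2
        cons.append(circ_lhs >= (xi + delta[k]) / (1.0 - alpha[k]))
    cp.Problem(cp.Maximize(cp.sum(delta)), cons).solve()
    return V.value, delta.value
\end{verbatim}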
\begin{algorithm}
\caption{MM-Based IRS Phase Shift Optimization with Multiple Tags}
\begin{algorithmic}[1]
\STATE \textbf{Initialize:} Beamforming vector $\mathbf{w}^{(i+1)}$, tag splitting coefficients $\bm{\alpha}^{(i)}$, combining vectors $\mathbf{G}^{(i)}$ where $i$ is the outer iteration number; a random starting point $\bar{\mathbf{v}}_{0}^{(j)}$; iteration number $j = 1$.
\WHILE{the rate of change in objective function (\ref{M21A}) is above a threshold $\varepsilon > 0$}
\STATE For each tag, construct $\mathbf{T}_{k}$ from $\bar{\mathbf{v}}_{0}^{(j)}$ and $\mathbf{U}_{k}$ from (\ref{U_k}).
\STATE Solve Problem (M2.1) and denote the solution as $\bar{\bar{\mathbf{V}}}_{SDR}^{(j)}$.
\STATE Perform eigenvalue decomposition $\bar{\bar{\mathbf{V}}}_{SDR}^{(j)} = \mathbf{\Lambda} \mathbf{D} \mathbf{\Lambda}^{H}$.
\FOR{$q = 1$ to a required number of randomizations}
\STATE Generate vector $\mathbf{r}_{q}$ according to the randA method in \cite{gaussrand}.
\STATE Compute $\bar{\bar{\mathbf{v}}}_{q}\!\!=\!\!\bm{\Lambda} \mathbf{D}^{1/2} \mathbf{r}_{q}$, drop the last element to obtain $\bar{\mathbf{v}}_{SDR, q}$, and compute $F(\bar{\mathbf{v}}_{SDR, q})$.
\ENDFOR
\STATE Select the candidate vector resulting in the LHS of constraint (\ref{M21B}) exceeding the RHS by the largest amount over all $q$, denote it by $\bar{\mathbf{v}}_{SDR}^{*(j)}$, and set $\bar{\mathbf{v}}_{0}^{(j+1)} \leftarrow \bar{\mathbf{v}}_{SDR}^{*(j)}$ for the next iteration.
\STATE Update inner iteration number $j \leftarrow j + 1$.
\ENDWHILE
\STATE \textbf{Return:} Optimized phase shift matrix $\mathbf{\Theta}^{*}$.
\end{algorithmic}
\end{algorithm}
\textit{2) Successive Refinement Algorithm:} Each iteration of Problem (M2.1) incurs high complexity due to the SDP and randomization. Thus, we propose a novel low-complexity algorithm based on the SR approach in Section IV-B, with several modifications to cater for the multiple SNR and circuit constraints. The solution to Problem (M2.1) has the advantage that the residuals (i.e., the difference between the LHS and RHS) to both (\ref{M21B}) and (\ref{M21C}) are maximized: the latter through maximizing the slack variables $\{\delta_{k}\}$ in the objective, and the former through randomization, on the basis that the circuit constraints can already be met with high probability after maximizing $\sum_{k=1}^{K} \delta_{k}$. The proposed successive refinement scheme is designed to mimic the process of maximizing both SNR and circuit constraint residuals at once via a single objective.
Similar to the single-tag case, in each SR iteration, the $N$ phase shifts are optimized sequentially starting from the first, while holding all others constant. Optimizing the $n$-th phase shift involves computing both the SNR and circuit constraint residuals for all $K$ tags over a vector of phase shift values $\theta_{n} = 0$:$T$:$2 \pi$, where $T$ is the phase shift precision. The resulting vectors are denoted by $S_{k}$ and $C_{k}$ for the $k$-th tag, respectively. For a given $\theta_{n}$, a negative element of $S_{k}$ or $C_{k}$ indicates that the corresponding constraint in Problem (M2.1) is violated. To construct the objective, we take the elementwise minima of the set of $\{S_{k}\}$ and $\{C_{k}\}$, to give us vectors of worst-case SNR and circuit constraint residuals over all tags. Then, both vectors are scaled such that their positive elements (if any) are normalized to the range $[0, 1]$. The post-scaling vectors are denoted by $S^{+}$ and $C^{+}$, respectively. Finally, the objective is defined as the elementwise product of $S^{+}$ and $C^{+}$. Intuitively, the value of $\theta_{n}$ that maximizes this objective helps to maximize the gap between the SNR and/or the available backscattering energy of the worst-performing tag and their requirements, while balancing over the better-performing tags. Where either $S^{+}$ or $C^{+}$ is all-zero, indicating that at least one tag does not meet the SNR or circuit constraints regardless of the phase shift used for the current reflector, the maximization over $\theta_{n}$ is performed with respect to the non-zero parts of the other vector. However, despite zero elements possibly being present in the objective, if Problem (M) was feasible before entering this step, this step will also provide a feasible solution.
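A compact sketch of this selection rule for a single reflector is given below; \texttt{S\_list} and \texttt{C\_list} are assumed to hold the per-tag residual vectors $S_{k}$ and $C_{k}$ evaluated over the candidate grid, and the function name is illustrative.
\begin{verbatim}
import numpy as np

def pick_phase(theta_grid, S_list, C_list, xi):
    # Selection of theta_n from the worst-case SNR/circuit residuals.
    def normalize(x):
        y = np.clip(x, 0.0, None)                    # keep positive part
        return y / y.max() if y.max() > 0 else y     # scale to [0, 1]
    S_plus = normalize(np.min(np.vstack(S_list), axis=0))
    if xi <= 0:                                      # semi-passive tags
        return theta_grid[np.argmax(S_plus)]
    C_plus = normalize(np.min(np.vstack(C_list), axis=0))
    obj = S_plus * C_plus                            # elementwise product
    if np.any(obj > 0):
        return theta_grid[np.argmax(obj)]
    if np.any(S_plus > 0):                           # product all-zero
        return theta_grid[np.argmax(C_plus)]
    return theta_grid[np.argmax(S_plus)]
\end{verbatim}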
The detailed procedure is described in \textbf{Algorithm 3}. To facilitate successive refinement with the circuit constraint, the LHS of the circuit constraint for tag $k$ can be written as a function of the $n$-th phase shift term $s_{n}$ in (\ref{Xisj}), in a similar manner as the LHS of the SNR constraint in (\ref{expand}).
\begin{figure*}
\normalsize
\begin{equation}
\Xi_{k}(s_{n}) = (1 - \alpha_{k}) \Bigg| [\mathbf{h}_{T_{k}I}^{H}]_{n} s_{n} [\mathbf{H}_{CI} \mathbf{w}]_{n} + \sum_{j \neq n}^{N} [\mathbf{h}_{T_{k}I}^{H}]_{j} s_{j} [\mathbf{H}_{CI} \mathbf{w}]_{j} + \mathbf{h}_{CT_{k}} \mathbf{w} \Bigg|^{2}. \label{Xisj}
\end{equation}
\hrulefill
\end{figure*}
\begin{algorithm}
\caption{Successive Refinement-Based Multi-Tag IRS Phase Shift Optimization}
\begin{algorithmic}[1]
\STATE \textbf{Initialize:} Beamforming vector $\mathbf{w}^{(i+1)}$, phase shift matrix $\mathbf{\Theta}^{(i)}$, splitting coefficient vector $\bm{\alpha}^{(i)}$, combining vectors $\mathbf{G}^{(i)}$, vector of test phase shift values $\theta_{n} = 0$:$T$:$2 \pi$.
\WHILE{the rate of change in the minimum SNR of all tags is above a threshold $\varepsilon > 0$}
\FOR{$n$ = $1$:$N$}
\FOR{$k$ = $1$:$K$}
\STATE Compute $\alpha_{k} F(s_{n})$ in (\ref{expand2}) or (\ref{Fsj}) for all $\theta_{n}$ and subtract $\gamma_{th} \sigma_{R}^{2}$ to obtain $S_{k}$.
\STATE \textbf{if} $\xi > 0$: Compute $\Xi_{k}(s_{n})$ in (\ref{Xisj}) for all $\theta_{n}$ and subtract $\xi$ to obtain $C_{k}$.
\ENDFOR
\STATE Compute $\min \{S_{k}\}$ and scale its positive portion to $[0, 1]$ to obtain $S^{+}$.
\STATE \textbf{if} $\xi > 0$: Compute $\min \{C_{k}\}$ and scale its positive portion to $[0, 1]$ to obtain $C^{+}$.
\STATE \textbf{if} $\xi = 0$: Set $\theta_{n}^{*} = \mathrm{arg} \max_{\theta} S^{+}$;
\STATE \textbf{else if} $S^{+} \odot C^{+}$ has nonzero elements: Set $\theta_{n}^{*} = \mathrm{arg} \max_{\theta} S^{+} \odot C^{+}$;
\STATE \textbf{else if} $S^{+} \odot C^{+} = \mathbf{0}^{T}$ but $S^{+}$ is nonzero: Set $\theta_{n}^{*} = \mathrm{arg} \max_{\theta} C^{+}$;
\STATE \textbf{else:} Set $\theta_{n}^{*} = \mathrm{arg} \max_{\theta} S^{+}$.
\ENDFOR
\ENDWHILE
\STATE \textbf{Return:} Optimized phase shift matrix $\mathbf{\Theta}^{*}$.
\end{algorithmic}
\end{algorithm}
\subsection{Tag Splitting Coefficient and Receive Beamforming Optimization}
Next, the feasibility problem for the splitting coefficients of all tags can be written as
\begin{subequations}
\begin{align}
\text{(M3)}: ~~\mathrm{find} ~~&\bm{\alpha} \label{M3A} \\
\mathrm{s.t.}~~&\left| \mathbf{g}_{k}^{H} \mathbf{h}_{k, 2}(\mathbf{\Theta}) \right|^{2} \mathrm{tr}(\mathbf{W} \mathbf{H}_{k, 1}(\mathbf{\Theta})) \nonumber \\
& \qquad \qquad \qquad \quad \geq \frac{\gamma_{th} \sigma_{R}^{2}}{\alpha_{k} |b_{k}(t)|^{2}}, \ \forall k, \label{M3B} \\
&\mathrm{tr}(\mathbf{W} \mathbf{H}_{k, 1}(\mathbf{\Theta})) \geq \frac{\xi}{1 - \alpha_{k}}, \ \forall k, \label{M3C} \\
&0 \leq \alpha_{k} \leq 1, \ \forall k. \label{M3D}
\end{align}
\end{subequations}
Noting that (\ref{M3B}) and (\ref{M3C}) can be rearranged so that $\alpha_{k}$ is isolated, constraints (\ref{M3B})-(\ref{M3D}) are equivalent to
\begin{multline}
\frac{\gamma_{th} \sigma_{R}^{2}}{|b_{k}(t)|^{2} \mathrm{tr}(\mathbf{W} \mathbf{H}_{k, 1}(\mathbf{\Theta})) \left| \mathbf{g}_{k}^{H} \mathbf{h}_{k, 2}(\mathbf{\Theta}) \right|^{2}} \leq \alpha_{k} \\ \leq 1 - \frac{\xi}{\mathrm{tr}(\mathbf{W} \mathbf{H}_{k, 1}(\mathbf{\Theta}))}. \label{modifiedQ4D}
\end{multline}
Constraint (\ref{modifiedQ4D}) denotes the range of feasible splitting coefficients, where any value in this range suffices. As the residual in either the SNR or the circuit constraint is reduced over the $\mathbf{w}$ and $\mathbf{\Theta}$ sub-problems, the range of feasible $\alpha_{k}$ converges to a single feasible point over the iterations.
Finally, given orthogonal tag transmissions, the combiner which maximizes the strength of the received signal over all antennas for each tag is given by $\mathbf{g}_{k}^{*} = \frac{\mathbf{h}_{k, 2}(\mathbf{\Theta}) \mathbf{h}_{k, 1}(\mathbf{\Theta}) \mathbf{w}}{\left\lVert \mathbf{h}_{k, 2}(\mathbf{\Theta}) \mathbf{h}_{k, 1}(\mathbf{\Theta}) \mathbf{w} \right\rVert}$.
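As a sketch, the closed-form updates of this subsection could be computed as follows; the midpoint of the feasible interval is an arbitrary (but valid) choice, and the SNR term is evaluated with the MRC combiner derived here, for which $|\mathbf{g}_{k}^{H}\mathbf{h}_{k,2}(\mathbf{\Theta})|^{2} = \lVert \mathbf{h}_{k,2}(\mathbf{\Theta}) \rVert^{2}$. All identifiers are illustrative.
\begin{verbatim}
import numpy as np

def splitting_and_combiner(W, H1_k, h1_k, h2_k, w, bt2_k, gamma_th, sigma2, xi):
    # Feasible splitting-coefficient interval from (modifiedQ4D) and the
    # MRC combiner g_k^* for tag k.
    t = np.real(np.trace(W @ H1_k))                  # tr(W H_{k,1}(Theta))
    g2 = np.linalg.norm(h2_k) ** 2                   # |g_k^H h_{k,2}|^2 under MRC
    alpha_lo = gamma_th * sigma2 / (bt2_k * t * g2)
    alpha_hi = 1.0 - xi / t
    assert alpha_lo <= alpha_hi, "subproblem infeasible for this (w, Theta)"
    alpha_k = 0.5 * (alpha_lo + alpha_hi)            # any point in the range works

    x = h2_k * (h1_k @ w)                            # h_{k,2} h_{k,1} w
    g_k = x / np.linalg.norm(x)                      # MRC combiner
    return alpha_k, g_k
\end{verbatim}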
\subsection{Overall Algorithm and Complexity Analysis}
The procedure to solve Problem (M) is summarized in \textbf{Algorithm 4}.
\begin{algorithm}
\caption{Alternating Optimization (AO) Algorithm for Multi-Tag Systems}
\begin{algorithmic}[1]
\STATE \textbf{Initialize:} Random IRS phase shifts $\mathbf{\Theta}^{(1)}$; random splitting coefficients $\bm{\alpha}^{(1)}$; random combining vectors $\mathbf{G}^{(1)}$; set iteration number $i = 1$.
\WHILE{the rate of change of objective function (\ref{M1A}) is above a threshold $\varepsilon > 0$}
\STATE Solve Problem (M1.1) using $\mathbf{\Theta}^{(i)}$, $\bm{\alpha}^{(i)}$ and $\mathbf{G}^{(i)}$ and denote the solution as $\mathbf{W}_{SDR}^{(i)}$.
\STATE Perform Gaussian randomization as per \cite{gaussrand} to obtain the best solution $\mathbf{w}^{(i+1)}$.
\STATE Solve Problem (M2.1) using either \textbf{Algorithm 2} or \textbf{Algorithm 3} to obtain $\mathbf{\Theta}^{(i+1)}$.
\STATE Solve Problem (M3) using $\mathbf{W}^{(i+1)}$, $\bm{\Theta}^{(i+1)}$ and $\mathbf{G}^{(i)}$ by selecting $\bm{\alpha}^{(i+1)}$ to be within the bounds in (\ref{modifiedQ4D}).
\STATE Obtain $\mathbf{G}^{(i+1)}$ by computing the optimal combiner $\mathbf{g}_{k}^{*}$ for all $k$.
\STATE Update iteration number $i \leftarrow i + 1$.
\ENDWHILE
\STATE \textbf{Return:} Minimized transmit power, $P^{*} = \left\lVert \mathbf{w}^{*} \right\rVert^{2}$.
\end{algorithmic}
\end{algorithm}
While the relaxed SDPs in Problems (M1.1) and (M2.1) mean that the optimality of \textbf{Algorithm 4} cannot be guaranteed, the objective value is nonincreasing in each iteration. This is because any feasibility surplus obtained in subproblems (M2.1) and (M3) is converted into a reduction in the transmit power in the subsequent iteration of Problem (M1.1). Thus, whenever feasible solutions are found in both Problems (M2.1) and (M3), $\left\lVert \mathbf{w} \right\rVert^{2}$ does not increase in the subsequent iteration.
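The overall flow and stopping rule of \textbf{Algorithm 4} can be summarized by the following skeleton; the three \texttt{update\_*} callables and \texttt{power()} are placeholders for the sub-problem solvers described above.
\begin{verbatim}
def alternating_optimization(update_w, update_theta, update_alpha_G, power,
                             eps=1e-4, max_iter=50):
    # AO outer loop: each sub-problem keeps the other variables fixed, and
    # the loop stops once the relative change in ||w||^2 falls below eps.
    prev = float("inf")
    for _ in range(max_iter):
        update_w()           # (M1.1): SDR + Gaussian randomization
        update_theta()       # (M2.1): Algorithm 2 or Algorithm 3
        update_alpha_G()     # (M3) + MRC combiners
        cur = power()        # current ||w||^2
        if abs(prev - cur) / max(cur, 1e-12) < eps:
            return cur
        prev = cur
    return cur
\end{verbatim}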
\begin{table}
\caption{Computational complexities of the components of the multi-tag algorithms.}
\label{complexities}
\setlength{\tabcolsep}{3pt}
\centering
\begin{tabular}{|c|c|}
\hline
\textbf{Step} & \textbf{Complexity} \\
\hline
Tx. beamforming ($\mathbf{w}$) & \begin{tabular}{@{}c@{}} $O(\max\{K, L\}^{4} L^{1/2})$ for SDP \cite{Luo10a}, \\ $O(RN^{2})$ for randomization \end{tabular} \\
\hline
Phase shifts ($\mathbf{\Theta}$) & \begin{tabular}{@{}c@{}}MM-based (\textbf{Algorithm 2}): \\ $O(I_{MM} (\max\{2K,\!N\}^{4} N^{1/2}\!+\!N^{3}\!+\!KRN^{2}))$ \\ SR-based (\textbf{Algorithm 3}): \\ $O(I_{SR} KN^{2}/T)$ \end{tabular} \\
\hline
Splitting coeff. ($\bm{\alpha}$) & $O(KL^{3} + KN^{2})$ \\
\hline
Rx. beamforming ($\mathbf{G}$) & $O(K N^{2})$ \\
\hline
\end{tabular}
\end{table}
The complexities of the transmit beamforming optimization, phase shift optimization, tag splitting coefficient computation and the receive beamforming optimization per AO iteration are shown in Table~\ref{complexities}; the typical convergence behavior is discussed in Section VI-C. For the general case, $I_{MM}$ and $I_{SR}$ refer to the number of MM sub-iterations in Problem (M2.1) and the number of SR cycles through all IRS phase shifts in lines 3-14 of \textbf{Algorithm 3}, respectively; $R$ is the number of randomization iterations. For the MM-based phase shift optimization, the three terms are due to the SDP in Problem (M2.1) (where $\bar{\bar{\mathbf{V}}}$ is a square matrix of size $N+2 \approx N$), the eigenvalue decomposition after solving the SDP, and randomization, respectively. For the SR-based phase shift optimization, $T$ is the precision of the linear search over each phase shift, where $\left\lceil \frac{2 \pi}{T} \right\rceil$ is the number of discrete phase shift values in the vector $\theta_{n}$ in \textbf{Algorithm 3}. These are asymptotic complexities based on the assumptions that $L, M \ll N$ and $R \gg N$. Moreover, when tags are semi-passive, Problems (M1.1) and (M2.1) are solved in the absence of constraints (\ref{M11C}) and (\ref{M21C}), respectively, and Problem (M3) and its associated computational cost are omitted.
\section{Numerical Results}
In this section, we numerically quantify the transmit power reduction at the CE in the IRS-aided bistatic BackCom system. We begin with the performance of the base case algorithms in Section IV in a system with a single semi-passive tag, and extend the results to a single passive tag in a special case of the general algorithm in Section V. Then, we quantify the performance of the general algorithm in a multi-tag system where tags may be either semi-passive or passive, and examine the convergence characteristics of our proposed algorithms.
The CE is located at the origin and the reader is located at $[100, 0]$ unless otherwise noted, with all coordinates in meters hereafter. For the single-tag system, the CE has $L = 4$ antennas, and the tag is located on a straight line between the CE and the reader. For the multi-tag system, $L = 4$ and the reader also has $M = 4$ antennas, and $K$ tags are centered around $[20, 0]$ at a distance of $5$ m, with equal angular spacing to simulate a cluster of tags. The default location of the IRS is $[20, 20]$. Fig.~\ref{fig:simulationSetup} shows the system setup for both the single- and multi-tag systems.
\begin{figure*}[h!]
\centering
\includegraphics[width=0.95\textwidth]{simulation_setup}
\caption{Simulation setup for (left) the single-tag setup and (right) the multi-tag setup.}
\label{fig:simulationSetup}
\end{figure*}
The CE transmits at the frequency of $915$ MHz, typical of the RFID band \cite{GD09}. We assume a square IRS and consider an outdoor scenario, where all links undergo Rician fading with $K$-factor $3$ dB and path loss exponent $2.1$. The Rician $K$-factor is drawn from the typical range in \cite{Alev18} after conversion from the Nakagami parameter; while the path loss exponent is chosen to be higher than the free-space assumption in \cite{Kim14} to account for sparse scatterers in the environment. The distances between the CE, tag and the reader are typical of bistatic BackCom systems \cite{Kim14}, with the tag being slightly further from the reader in our case to demonstrate the improved tag deployment flexibility as a result of the IRS. Unless otherwise noted, the number of channel realizations in all simulations is $1000$; the default number of IRS elements is $N = 64$; the tag's baseband signal magnitude is $|b(t)|^{2} = 1$; the SNR requirement for all tags is $\gamma_{th} = 8$ dB; the noise power at the reader is $\sigma_{R}^{2} = -110$ dBm; the Lipschitz constant is set to $\ell = 2.5 \times 10^{-16}$ and the convergence threshold for the AO-based algorithms is set to $\varepsilon = 10^{-4}$. The default IRS size is a representative value in the range used in \cite{Wu19}. The tag's SNR requirement of $8$ dB is based on \cite{Kim14}, which resulted in a BER of $0.01$ in the bistatic system therein. The Lipschitz constant is chosen via extensive simulations to cater to the objective function $F(\bar{\mathbf{v}})$ based on a range of tag locations and associated channel gains, and is used throughout this section; while the convergence threshold is a stricter version of the $10^{-3}$ used in \cite{Wu19}.
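For reference, the default settings above can be collected in a single configuration block (a sketch; the dictionary keys are illustrative, and dB/dBm quantities are converted to linear scale where indicated).
\begin{verbatim}
SIM = dict(
    freq_hz=915e6,                        # carrier frequency (RFID band)
    rician_k=10 ** (3 / 10),              # Rician K-factor of 3 dB
    path_loss_exp=2.1,                    # all links
    num_realizations=1000,
    N=64,                                 # IRS elements
    L=4, M=4,                             # CE / reader antennas
    b_t_sq=1.0,                           # |b(t)|^2
    gamma_th=10 ** (8 / 10),              # 8 dB SNR requirement
    sigma_R_sq=10 ** ((-110 - 30) / 10),  # -110 dBm noise power, in watts
    lipschitz=2.5e-16,                    # Lipschitz constant
    eps=1e-4,                             # AO convergence threshold
)
\end{verbatim}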
We adopt the path loss model in \cite[Eq. (23)]{Ell19}, which is applicable to both near- and far-field transmissions involving an IRS reflection. The path losses for the $C$-$I$, $T$-$I$ and $R$-$I$ links are split into their respective components based on \cite[Eq. (21)-(22)]{Ell19}, and are directly absorbed into $\mathbf{H}_{CI}$, $\mathbf{h}_{TI}^{H}$ and $\mathbf{H}_{RI}^{H}$, respectively. The path losses for the $C$-$T$ and $T$-$R$ links are directly absorbed into $\mathbf{h}_{CT}$ and $\mathbf{h}_{TR}$, respectively, consistent with \cite[Eq. (22)]{Ozd19}.
\subsection{Single Semi-Passive Tag with Single-Antenna Reader}
In this subsection, the minimized CE transmit powers are obtained by solving Problem (S) using both \textbf{Algorithm 1} (i.e., the MM algorithm) in Section IV-A and the successive refinement scheme in Section IV-B (i.e., the SR algorithm).
\begin{figure}[h!]
\centering
\includegraphics[width=3.5in]{fig_3}
\caption{Effect of tag location on the transmit power at the CE.}
\label{fig:singleUserTagLocation}
\end{figure}
\textit{1) Effect of Tag Location:} Fig. \ref{fig:singleUserTagLocation} shows the minimum CE transmit power as the tag is moved along a straight line between $[5, 0]$ and $[95, 0]$. In addition to our proposed algorithms, several suboptimal baseline schemes are adopted as benchmarks. These include: (a) no-IRS (where MRT is utilized for transmit beamforming); (b) random IRS phase shifts; (c) using a set of suboptimal phase shifts for the combined CE-tag link only (i.e., $C$-$T$ plus $C$-$I$-$T$); (d) using the optimal phase shifts for the combined tag-reader link only (i.e., $T$-$R$ plus $T$-$I$-$R$)\footnote{The phase shifts used in benchmark (c) (i.e., combined CE-tag link) are suboptimal, as they are obtained in an alternating manner over $\mathbf{w}$ and $\mathbf{\Theta}$ similar to the single-user case in \cite{Wu19}. On the other hand, the phase shifts used in benchmark (d) (i.e., combined tag-reader link) are optimal, and are given by $\theta_{n}^{*} = \theta_{TR} - \theta_{RI_{n}} - \theta_{TI_{n}}$, where $\theta_{TR}$, $\theta_{RI_{n}}$ and $\theta_{TI_{n}}$ are the phases of the tag-reader, individual IRS element-reader and IRS element-tag channels, respectively. The method used to obtain benchmark (c) was shown in \cite{Wu19} to perform identically to an SDR-based algorithm, and hence suffices as a near-optimal benchmark.}, and (e), (f) as single-reflection variants of (c) and (d). It is clear that notable power reductions are realized at all locations compared to the no-IRS system --- up to $6$ dB when the tag is closer to the CE. Moreover, the optimized transmit power is around $27$ dBm or less for all tag locations, which enables much-improved tag placement flexibility using reasonable transmit powers. The MM and SR algorithms perform similarly, confirming their convergence behavior. We also note that scheme (c), which phase-aligns to the combined CE-tag link, performs close to \textbf{Algorithm~1} when the tag is near the reader, while scheme (d), which phase-aligns to the combined tag-reader link, does so when the tag is near the CE. The extent of power reduction is not symmetric with respect to tag location, suggesting that the IRS location is also of influence. Contrary to the observations of works on conventional systems (e.g., \cite{Wu19}), implementing random phase shifts at the IRS appears to hinder the balance between the two links, due to the presence of deterministic components in the channels. As the random phase scheme does not outperform the no-IRS benchmark over the range of tag locations in Fig.~\ref{fig:singleUserTagLocation}, it is unlikely to be comparable to any of the algorithms proposed in this paper, and is thus excluded from subsequent results.
\textit{2) Importance of Twice-Reflection in the Signal Model:} Noting that \cite{Chen21} considered only one reflection at the IRS for a signal from the transmitter to receiver, we also evaluated the effect of the single-reflection signal model on the transmit power, through benchmarks (e) and (f). Therein, the IRS is optimized with respect to the combined CE-tag link only and the combined tag-reader only, respectively, where the other combined link is not included in the signal model. One may observe that benchmarks (e) and (f) perform almost identically compared to benchmarks (c) and (d), respectively, but still substantially worse than the solutions from the MM/SR algorithms. This highlights that both the combined CE-tag and tag-reader links contribute significantly to the overall performance of the system when the IRS is optimized with respect to both combined links; thus, it is important to include the twice-reflection path in the signal model. Moreover, the consideration of only one reflection at the IRS, or optimizing the IRS with respect to only one of the two combined links, leads to considerable performance degradation.
\begin{figure}[h!]
\centering
\includegraphics[width=3.5in]{fig_4}
\caption{Normalized gains of the combined CE-tag and tag-reader links.}
\label{fig:singleUserChannelSums}
\end{figure}
\textit{3) Behavior of IRS Phase Shifts:} To gain insight into the phase shift configuration of the IRS when a signal is reflected twice, Fig. \ref{fig:singleUserChannelSums} plots the normalized channel gains of the combined CE-tag link and the combined tag-reader link, where the IRS phase shifts are tuned according to the solutions from the MM and SR algorithms. Each normalized channel gain is a path loss-independent measure of the extent to which the phase shifts of the IRS are aligned with a given channel. They are given by $\hat{h}_{CIT} = \left| \left( \hat{\mathbf{h}}_{TI}^{H} \mathbf{\Theta} \hat{\mathbf{H}}_{CI} + \hat{\mathbf{h}}_{CT} \right) \mathbf{w} \right|$ and $\hat{h}_{TIR} = \left| \hat{\mathbf{h}}_{RI}^{H} \mathbf{\Theta} \hat{\mathbf{h}}_{TI} + \hat{h}_{TR} \right|$, respectively, for the combined CE-tag and tag-reader links, where the $\hat{\cdot}$ quantities denote their respective channel matrices with all elements set to unit magnitude. From Fig.~\ref{fig:singleUserChannelSums}, we observe that the phase shifts obtained with our algorithms generally boost the weaker link. For example, when the tag is closer to the CE, the IRS phase shifts are configured to favor the combined tag-reader link more than the combined CE-tag link. The intersection of the curves occurs when neither combined link is more favorable than the other, in this case when the tag is located $30$ m from the CE. At this point, the IRS achieves a balance between the two combined links.
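These alignment measures can be evaluated as in the short sketch below, where the channels are replaced by their unit-magnitude counterparts before combining; the function and argument names are illustrative assumptions.
\begin{verbatim}
import numpy as np

def normalized_gains(h_TI, H_CI, h_CT, h_RI, h_TR, theta, w):
    # Path-loss-independent gains of the combined CE-tag and tag-reader links.
    unit = lambda x: np.exp(1j * np.angle(x))        # unit-magnitude channels
    Theta = np.diag(np.exp(1j * theta))
    h_cit = abs((unit(h_TI).conj() @ Theta @ unit(H_CI) + unit(h_CT)) @ w)
    h_tir = abs(unit(h_RI).conj() @ Theta @ unit(h_TI) + unit(h_TR))
    return h_cit, h_tir
\end{verbatim}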
\begin{figure}[h!]
\centering
\begin{subfigure}{0.495\textwidth}
\centering
\includegraphics[width=3.5in]{fig_5a}
\caption{Minimum transmit power vs. $N$}
\end{subfigure} \hfill
\begin{subfigure}{0.495\textwidth}
\centering
\includegraphics[width=3.5in]{fig_5b}
\caption{Range improvement using baseline transmit power}
\end{subfigure}
\caption{Effect of the number of IRS elements on transmit power and range.}
\label{fig:singleUserN}
\end{figure}
\textit{4) Effect of the Number of IRS Elements:} Fig. \ref{fig:singleUserN}(a) compares the transmit power at the CE against the number of IRS elements, with the tag located at $[25, 0]$. The reduction in transmit power is roughly proportional to the number of IRS elements for both MM and SR algorithms and the suboptimal benchmarks compared to the no-IRS system. For a medium-sized IRS with $N = 64$, an average reduction of $6$ dB is achieved, increasing to over $8$ dB when $N = 100$.
The addition of the IRS allows considerable improvements in the link budget. Suppose the tag-reader distance is increased by moving the reader further. As the path loss of the combined CE-tag link is unchanged, the overall path loss from CE to reader is given in (\ref{pathlossEqnNo})-(\ref{pathlossEqn}),
\begin{figure*}[!t]
\normalsize
\begin{subequations}
\begin{align}
\text{No-IRS: } \mathrm{PL} &= \left( \frac{\lambda}{4 \pi} \right)^{4} \frac{1}{d_{CT}^{\delta} (d_{TR} + \Delta)^{\delta}} , \label{pathlossEqnNo} \\
\text{IRS-aided: } \mathrm{PL} &= \left( \frac{\lambda}{4 \pi} \right)^{4} \Biggl[\frac{1}{d_{CT}^{\delta}} + \left( \frac{\lambda}{4 \pi} \right)^{2} \left| \sum_{n=1}^{N} e^{j \theta_{n}} \sqrt{\frac{\pi^{2} (-\hat{\mathbf{r}}_{CI,n} \cdot \hat{\mathbf{n}})^{2q} (\hat{\mathbf{r}}_{TI,n} \cdot \hat{\mathbf{n}})^{2q}}{d_{CI, n}^{\delta} d_{TI, n}^{\delta}}} \right|^{2} \Biggr] \nonumber \\
& \qquad \times \Biggl[\frac{1}{(d_{TR} + \Delta)^{\delta}} + \left( \frac{\lambda}{4 \pi} \right)^{2} \left| \sum_{n=1}^{N} e^{j \theta_{n}} \sqrt{\frac{\pi^{2} (-\hat{\mathbf{r}}_{TI,n} \cdot \hat{\mathbf{n}})^{2q} (\hat{\mathbf{r}}_{RI,n} \cdot \hat{\mathbf{n}})^{2q}}{d_{TI, n}^{\delta} (d_{RI, n} + \Delta_{n})^{\delta}}} \right|^{2} \Biggr], \label{pathlossEqn}
\end{align}
\end{subequations}
\hrulefill
\end{figure*}
where $\Delta$ and $\Delta_{n}$ are the changes in the distances between tag and reader, and the $n$-th IRS element and the reader, respectively; $q$ is a constant defined in \cite{Ell19}; $\hat{\mathbf{r}}_{\cdot, n}$ represents the unit vector between two nodes; and $\hat{\mathbf{n}}$ is the vector normal to the IRS surface. While it is difficult to solve for the range increase $\Delta$ relative to the no-IRS system directly from the above equations, it can be easily computed numerically. The achievable ranges are visualized in Fig. \ref{fig:singleUserN}(b), where the tag is located at $[25, 0]$ and the transmit power is held constant for both the IRS-aided and non-IRS-aided systems (where $\mathbf{w}$ is MRT) at the minimized transmit power in a non-IRS-aided system. The approximately linear relationship between $N$ and the CE-reader distance is due to the square scaling behavior of the SNR in an IRS-assisted system and inverse-square scaling of the large-scale path loss. In any case, the range increases enabled by the IRS are significant, ranging from $12$ m for a small IRS with $N = 16$, to around $70$ m when $N = 100$.
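A simple way to compute the range increase numerically is a bisection on $\Delta$, as sketched below; \texttt{pl\_no\_irs} and \texttt{pl\_irs} are assumed to implement (\ref{pathlossEqnNo}) and (\ref{pathlossEqn}) as functions of the extra distance, and to be monotonically decreasing in that distance.
\begin{verbatim}
def range_increase(pl_no_irs, pl_irs, delta_max=200.0, tol=1e-3):
    # Find the extra tag-reader distance Delta at which the IRS-aided path
    # loss falls to the no-IRS baseline evaluated at Delta = 0.
    target = pl_no_irs(0.0)
    lo, hi = 0.0, delta_max
    if pl_irs(hi) > target:                  # gain exceeds the search window
        return hi
    while hi - lo > tol:                     # bisection on a monotone function
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if pl_irs(mid) > target else (lo, mid)
    return 0.5 * (lo + hi)
\end{verbatim}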
\subsection{Single Passive Tag with Circuit Power Constraint}
\begin{figure}[h!]
\centering
\begin{subfigure}{0.495\textwidth}
\centering
\includegraphics[width=3.5in]{fig_6a}
\caption{Varying $\xi$}
\end{subfigure} \hfill
\begin{subfigure}{0.495\textwidth}
\centering
\includegraphics[width=3.5in]{fig_6b}
\caption{Varying CE-tag distance}
\end{subfigure}
\caption{Effect of circuit power constraint on the transmit power at the CE.}
\label{fig:singleUserCPS}
\end{figure}
Next, we extend the numerical analysis to a passive tag, and examine the effects of a nonzero circuit power constraint on the CE's transmit power. As a passive tag falls under the scope of the general problem, the CE's transmit power is obtained using both the MM and SR variants of the AO algorithm (\textbf{Algorithm 4}) with the number of tags set to $1$. Fig.~\ref{fig:singleUserCPS}(a) highlights the effects of various tag circuit power constraint values on the transmit power. Compared to the results from Fig. \ref{fig:singleUserTagLocation}, a nonzero tag power requirement significantly increases the transmit power, as the tag cannot communicate, let alone achieve its SNR requirement, if it does not power on. The extent of power reduction relative to the no-IRS system is less than in the semi-passive tag scenario, at around $2.5$ dB. However, in absolute terms, several watts are conserved at the maximum $\xi$, which is of significant benefit if the CE is subject to transmit power regulations. Also, for practical values of $\xi$ shown in Fig. \ref{fig:singleUserCPS}(a), an order-of-magnitude increase in $\xi$ results in a $7$-$8$ dB increase in the transmit power, indicating a nonlinear relationship. The optimal tag splitting coefficient decreases with increased $\xi$, such that a smaller proportion of the incoming signal is backscattered bearing the tag's data symbols. This requires the CE transmit power to be increased to meet both the SNR and circuit constraints.
Fig. \ref{fig:singleUserCPS}(b) highlights the effect of the circuit power constraint, with $\xi = -22$ dBm, as the tag location is varied between $[5, 0]$ and $[30, 0]$, which is further than the typical tag-reader distances in bistatic BackCom systems even with semi-passive tags. Compared to Fig. \ref{fig:singleUserTagLocation}, although the power reduction is diminished at all distances, the effect is still non-trivial: around $3$ dB reduction when the CE-tag distance is $30$ m, corresponding to more than $3$ W in real terms. The variations in the tag splitting coefficients can be attributed to the small-scale fading in the CE-tag link and the alternating optimization algorithm in dealing with variations in channel coefficients, but a diminishing trend is nonetheless observed as the tag is further removed from the CE.
\subsection{Multiple Tags with Multiantenna Reader}
In this subsection, we present the results from the general problem based on \textbf{Algorithm 4} in a system with multiple tags, where the tags are either semi-passive or passive.
\begin{figure}[h!]
\centering
\begin{subfigure}{0.495\textwidth}
\centering
\includegraphics[width=3.5in]{fig_7a}
\caption{Convergence behavior of Algorithms 2 and 3}
\end{subfigure} \hfill
\begin{subfigure}{0.495\textwidth}
\centering
\includegraphics[width=3.5in]{fig_7b}
\caption{Comparison of converged objective values}
\end{subfigure}
\caption{Convergence comparison of the MM-based and successive refinement-based algorithms.}
\label{fig:multiUserConvergence}
\end{figure}
\textit{1) Convergence Comparison of Algorithms:} Fig. \ref{fig:multiUserConvergence}(a) shows the typical convergence behavior of the multi-tag transmit power minimization algorithm using AO and either \textbf{Algorithm 2} or \textbf{Algorithm 3} for the IRS phase shift optimization, with $K = 6$. Both the MM-based algorithm and the SR scheme are capable of converging to similar solutions. While the latter algorithm generally requires more iterations to converge, it is capable of doing so with far lower computational cost than the MM-based algorithm, as discussed in Section V-D.
Due to varying channel coefficients, the converged value of $\left\lVert \mathbf{w} \right\rVert^{2}$ exhibits considerable variation between channel realizations. Fig. \ref{fig:multiUserConvergence}(b) plots the converged objective values of \textbf{Algorithm 4} with both the MM-based approach using \textbf{Algorithm 2} and the SR scheme using \textbf{Algorithm 3}, over $10$ representative channel realizations, where $\xi = -22$ dBm. While the SR scheme does not outperform the MM-based algorithm all the time, the typical difference in the converged objective values is reasonably small in most cases.
\begin{figure}[h!]
\centering
\begin{subfigure}{0.495\textwidth}
\centering
\includegraphics[width=3.5in]{fig_8a}
\caption{No circuit power constraint}
\end{subfigure} \hfill
\begin{subfigure}{0.495\textwidth}
\centering
\includegraphics[width=3.5in]{fig_8b}
\caption{Nonzero circuit power constraint ($\xi = -22$ dBm)}
\end{subfigure}
\caption{Effect of multiple tags on the transmit power at the CE.}
\label{fig:multiUserNumTags}
\end{figure}
\textit{2) Effect of Number of Tags:} Fig. \ref{fig:multiUserNumTags} shows the effects of increasing the number of tags served on the transmit power, where tags are either semi-passive (Fig. \ref{fig:multiUserNumTags}(a)) or passive (Fig. \ref{fig:multiUserNumTags}(b)). One may observe that the transmit power reduction decreases with more tags, from roughly $3.5$ dB in a single-tag scenario in Fig. \ref{fig:multiUserNumTags}(a) to around $2$ dB for $K = 10$. Nonetheless, this reduction is still significant, given that the algorithms attempt to obtain the most favorable phase shift matrix to balance the performance of all tags, not all of whose channels may be favorably aligned. Moreover, the MM-based approach is outperformed by the SR scheme for $K \geq 2$ (albeit by a smaller margin in the circuit power constrained case), highlighting its declining performance as more constraints (proportional to the number of tags) are imposed and approximated with quadratic forms. The SR scheme achieves a similar effect to the randomization in \textbf{Algorithm 2} without performing the randomization or needing to factor in the structure of the SNR and circuit constraints. As a result, the SR algorithm achieves more favorable objective values for large $K$. Overall, the effective transmit power per tag decreases with larger $K$.
Fig. \ref{fig:multiUserNumTags}(b) presents the case where a circuit power consumption of $\xi = -22$ dBm is required. A significant increase in the base transmit power is observed for both algorithms, similar to the single-tag system with circuit constraint. In addition, the relative reduction in transmit power compared to the semi-passive tag case is slightly diminished. This is in part due to the circuit constraint in the optimization problem causing the phase shifts to favor the combined CE-tag link, which may not be favorably aligned with the combined tag-reader link. For both semi-passive and passive tag scenarios, we observe that lower transmit power is incurred on average per tag as the number of tags increases.
\section{Conclusion}
In this paper, we studied the novel integration of IRS into a bistatic BackCom system, and presented its corresponding signal model. A first attempt at obtaining solutions for the transmit power minimization problem at the CE was undertaken using the AO and MM techniques, to address the highly nonconvex nature of the problem brought about by the twice-reflection of the signal traveling from the CE to the reader; in addition, low-complexity successive refinement algorithms were also proposed for the phase shift optimization. Our findings suggest that the introduction of an IRS, even if moderately sized, has a considerable effect in reducing the CE transmit power compared to a non-IRS-aided system. Moreover, for multi-tag systems, the transmit power was found to scale favorably in terms of power consumption per tag, as the number of tags is increased. As an exploratory work into IRS-aided BackCom systems, there are many interesting avenues for future research. The transmit power minimization problem studied in this paper could be reformulated as an energy efficiency maximization problem, to investigate the energy usage-throughput trade-off. Joint optimization could be considered for systems with multiple PBs; and the case with multiantenna tags also warrants further study.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{ieeetran}